Title: John Schulman
Summary: John Schulman, OpenAI cofounder and researcher, inventor of PPO/TRPO talks RL from human feedback, tuning GPT-3 to follow instructions (InstructGPT) and answer long-fo...
Link: https://media.transistor…c00.mp3?src=site
The answer was affirmative. We can get an agent to basically use a set of tools that we give it, in this case the browsing commands, like searching. I would say I expect AI to be able to do a better job than humans at most jobs that humans do now. Five years or so.

TalkRL podcast is all reinforcement learning, all the time, featuring brilliant guests, both research and applied. Join the conversation on Twitter at TalkRL podcast. I'm your host, Robin Chauhan.

John Schulman is a co-founder of OpenAI and a researcher and engineer at OpenAI. He is well known for major contributions to the field of reinforcement learning, including the TRPO algorithm (trust region policy optimization) and GAE (generalized advantage estimation), both from his UC Berkeley dissertation, as well as TRPO's descendant, proximal policy optimization, or PPO. His current focus at OpenAI is on RL from human feedback. John, welcome to the show and thanks so much for being here.

Thanks a lot for having me.

You were literally one of the first people I thought of when I started the show three years back.

Thanks, I'm honored.

It means a lot to me to have you here today. I definitely remember your Nuts and Bolts of Deep RL video back in the day, watching that multiple times and gaining a lot from it. You helped a generation of RL practitioners back then.

By the way, there's going to be a reboot of the Nuts and Bolts presentation. I got invited to give a talk at NeurIPS this year on it. I'll have to revamp the guidelines and everything. That'll be fun.

Oh, that's awesome. Can't wait for that. You were clearly one of the earlier pioneers in deep RL. How did you choose to move your focus to RL from human feedback? Why is that an important problem? Why is that important to you?

After GPT-3 was trained, I was blown away by how smart it was, and I realized the next frontier was figuring out how to make language models actually useful. I'm still really interested in RL, but solving RL benchmarks isn't the end of the story. To use your RL algorithm you need a reward function. Where does the reward function come from? In RL benchmarks, you usually just code up the reward function. But if you're not in a simulated environment, that doesn't work. What we have to do in any kind of real-world use case is have humans look at what the AI did and decide if it was good or bad. How exactly you define this reward becomes a really challenging and important problem, especially as the tasks get harder to evaluate.

Another angle on this is that language models are very smart, but it's hard to get them to do anything useful. A big part of that is they're not necessarily trying to do what you want. They're just trying to imitate the training corpus. That means there's a big opportunity to improve them a lot by just giving them the right objective. That's what we can do by applying RL to these language models, using human feedback to define the reward.

Is human feedback harder or very different in some way than using a synthetic reward?

There are a lot of new complications. You have to collect a data set dynamically. You're always in the business of building data sets of human preferences. Often the data quality there matters more than various algorithmic details. You also have to think a lot about exactly how you're giving the task to the human trainers, and various other things that you wouldn't have thought about if you just had a programmatic reward function.
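A minimal sketch of the kind of preference-based reward modeling described above: a pairwise (Bradley-Terry style) loss that pushes a preferred answer's score above a rejected one's. The tiny model, vocabulary size, and tensor shapes are illustrative assumptions, not OpenAI's implementation:

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy scalar reward model: embeds a response and maps it to one score.
    A real system would reuse a pretrained language model backbone."""
    def __init__(self, vocab_size=50257, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.score = nn.Linear(dim, 1)

    def forward(self, tokens):              # tokens: (batch, seq_len) of token ids
        h = self.embed(tokens).mean(dim=1)  # crude pooling over the sequence
        return self.score(h).squeeze(-1)    # (batch,) scalar rewards

def preference_loss(rm, preferred, rejected):
    """Pairwise loss: maximize the log-probability that the human-preferred
    answer scores higher than the rejected one."""
    return -torch.nn.functional.logsigmoid(rm(preferred) - rm(rejected)).mean()

# Hypothetical usage with random token ids standing in for a batch of comparisons:
rm = RewardModel()
preferred = torch.randint(0, 50257, (4, 32))   # 4 comparisons, 32 tokens each
rejected = torch.randint(0, 50257, (4, 32))
loss = preference_loss(rm, preferred, rejected)
loss.backward()
```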
Does the difference between human raters, or the noisiness of the reward signal, cause any problems?

I would say with noise, you definitely need to be below some threshold of noise to learn anything. I think in general, a large noisy data set can be as good as a smaller clean data set. Actually, noise isn't the thing that worries me the most. It's more that there are sometimes consistent biases that people have. For example, in settings like question answering, or settings where you have a model writing some text, often people prefer longer answers. You end up with these very verbose answers, if you're not careful with the instructions, that is. You can also instruct the raters to reward brevity. But if you're not careful, you can incentivize the wrong kinds of behaviors.

So let's move to some of your recent work. First up is WebGPT: Browser-assisted question answering with human feedback. That's Nakano et al., with yourself as a co-author, in 2021. Can you tell us what is the main idea of this paper? What is WebGPT?

In WebGPT, we basically took our language models and we hooked them up to a web browser so they could retrieve information from the web. They can write an answer by summarizing the relevant pages from the web. That way, if you're asking a question about current events, or a question that requires some detailed scientific or technical knowledge, this AI can go out and look up the answer, with detailed citations to its sources.

I would say there are two interesting points to this. One is we were exploring whether you could turn language models into a kind of agent. There's a lot of data on the web of different texts that people have written, but there's not a lot of data that shows how to actually do some multi-step process. So it's not that clear, a priori, whether you can get a language model to actually carry out some iterative process. We just have a lot of data like writing essays and having chats and so forth. So that was one thing we were exploring here, and I think the answer was affirmative. We can get an agent to basically use a set of tools that we give it, in this case the browsing commands, like searching, scrolling, clicking on links.

The second theme of this paper was around truthfulness. A big issue with language models is that they're not very reliable at giving you true information. They know a vastly superhuman amount, but if you prompt them in the wrong way, they'll just output lots of plausible-sounding nonsense. So how to fix that is a big research question, one of the biggest research questions in the world of language models. I think it's going to be challenging to fully fix it, but I think a big part of the story involves retrieval, and having models write answers that contain citations to trusted sources. So a person who's checking over the answer doesn't have to go and try to figure out where the model might have gotten this idea. They can go and directly look at the source and see if it supports the AI's statement.

With WebGPT we just wanted to see: if we give the language model a really flexible interface to the web, can we have it answer hard questions truthfully, with the help of all these citations? And it's actually really non-trivial, because if you look at the data that we used, the Reddit "Explain Like I'm Five" questions, the questions are really varied. Some of them are about science, history, current events.
Our raters didn't necessarily know anything about these topics, but still they had to judge the answers, which were written, detailed answers. So it would have been really hard to do without the supporting citations. So we kind of validated that we could get good feedback in a hard domain like this with the help of citations.

Can you talk about where the idea for WebGPT came from? Is that an idea you'd had kicking around for a while, or was it something that came up recently before the paper? How did that play out?

Some of the ideas had been floating around. We actually had a project at OpenAI very early on called World of Bits. We were looking at controlling web browsers, doing tasks on the internet with a web browser, but it was way too early at the time, so we kind of abandoned it for a few years. Actually, back then we were trying to do it with full visual input. So we thought we could give some instructions to the agent, like go and figure out the address of this building or something, and the agent would go and search the web, or use Google Maps or whatever, to figure out the answer. And we were trying to do this all in pixels, and that obviously didn't work very well. But now we have these great language models that work on text data. We can also extract the text out of web pages to get most of the information. We can't really interact with a lot of dynamic websites, where there's a lot of JavaScript and images and so forth, but as long as it's just browsing and reading text, we're fine. So we had good enough models, and that made it feasible to revisit this idea of using the internet as an environment. So I would say that was one of the sources of inspiration, that long-standing thread about using the internet as an environment.

Another motivation was that after we started playing with GPT-3, we noticed it had all these problems with factual accuracy and the reliability of the information it was giving us. So that motivated doing more research on how to make language models more truthful. We were brainstorming what to do there, and we went through some docs and eventually decided that we wanted to try some question answering using the web, looking up knowledge on the web to help answer questions. So actually the original version of the project used trivia questions. There's this well-known data set, TriviaQA, that has some basic trivia questions. So we first worked a little bit on that data set and tried to see if we could boost the model's accuracy by giving it web search, and that worked pretty easily. So then we decided to move on to long-form question answering, and that was the project we ended up working on for a while.

It seems like you used a few different data sets here and a number of different training methods. I'll just mention the list: behavior cloning, reward modeling, reinforcement learning, and rejection sampling.

We were using a fairly standard methodology, which was actually adapted from previous work on RL from human preferences. The pipeline is: you first train a model with supervised learning, where you have human demonstrators show how to do the task, how to map from observations to actions. That's the supervised learning or behavior cloning step. Then we train a reward model, or preference model.
It looks at two actions, or two trajectories, and decides which one is better. In this case, in a question answering setting, you're looking at two answers and deciding which answer is better, and we use that to train a reward model that assigns a higher score to the good answers than to the bad ones. Then you do reinforcement learning against that reward function, and of course you can iterate these last two steps. After you do a little RL, you've sort of exploited some of the flaws of the reward model, or some of the noise in the reward model, and it's not necessarily accurate on your new distribution of data. So you collect more pairs of samples, refit the preference model, and then do another iteration of RL. So that's the whole RL from human feedback pipeline.

There's this other idea called rejection sampling, or best-of-n sampling, and in general you can do other kinds of search too, where instead of doing RL, once you have your reward model you can just search against that reward model. So you can collect a bunch of samples, re-rank them with the reward model, and take the best one as your action.

Kind of like MPC.

Yeah, exactly. It kind of depends exactly what setting you're in and what you can do. If you're in a setting where there's some environment you're interacting with, then you would have to simulate the dynamics of your environment, so that would look kind of like MPC. In our case, the only thing we had to learn a model of was the human preference. It's a question answering setting, so it's really a contextual bandit problem, so it's kind of straightforward to sample a bunch of actions, where each action is a full answer, and re-rank them, or search over answers.

So in terms of the action space, was the action space just a list of commands, or is it still generating tokens like a regular generative model?

We were generating tokens. We had two phases in each episode of the RL task. There is first a browsing phase, where the model goes and issues searches and clicks on things and quotes relevant information. If it sees something useful on the page, it'll quote it using this quote command. And then once it's done browsing, it'll issue another command called end browsing, and it'll write its answer, which is also expressed in tokens. But really we rolled this all into one big RL task, where your episode involves browsing and writing out the answer, and it's all one big RL episode.

Did you think this was going to work well, or were you kind of surprised?

At the very beginning of the project we didn't know if it was going to work or not. After we did the initial experiments with TriviaQA, which actually didn't take that long to get running, it became pretty clear that it would work, that the browsing part worked at least. And we already knew that we can get these models to write pretty good long-form text if you give them a bunch of snippets of text that they can cite.

So I noticed the human raters' task was quite complicated, as it was a long guide and there were many types of feedback that they were giving, but in the end the paper said that only the final rating was used. So I was just curious if you had any comment on that: why do you think the model couldn't use that extra feedback? Was it maybe just too much, or not enough samples?
Yeah, that's been one frustrating finding so far in that project, and we've had the same finding in some other projects. You have your raters go through this long process for each comparison they do, where they're comparing a pair of answers, and then you only use one bit of information from this whole process, which might have taken half an hour. It seems like it would be better if we were able to extract more information about the process they went through in arriving at the answer. So we did collect all sorts of other information. We had them provide ratings along several different axes, like coherence and factual accuracy and so forth, but in the end we didn't really get much of a boost out of using any of this other information. So I'd say it seems like it should be possible to do better, but unfortunately this methodology, which seems kind of dumb, has so far been hard to beat. People have tried various other ideas for how to use human feedback instead of getting these preference scores. There are various other things you can do, like having them write critiques, or maybe edit the responses. I think some of these things are also promising, but this methodology of collecting preference data works well. I think it's still an open area of research.

Oh yeah, regarding the really long instructions. I think for any of these tasks there is a lot of subtlety in how to do the task properly, and so we ended up adding more and more details, like what do you do in this situation and what do you do in that situation. I think it's starting to get pretty unwieldy with these really long instruction manuals. There are some promising ideas for how to address this. There's a paper from DeepMind recently, Sparrow, that basically broke down the task: they had people look at one aspect of the response at a time, and then they had a way of combining these. They would train a bunch of rule-specific reward models and then combine them at the end. I think there are some other interesting ideas for how to make this process better.

So I gather from your answer about WebGPT that the whole idea of WebGPT is that you want the language model to have access to external knowledge. But I wonder where you think the line should really be, in terms of what a language model should know, what the language model should look up, and maybe what the language model should not know or not purport to know. Do you have opinions about that?

Let's see. Some people are advocating for very small language models that have no external knowledge aside from language, I guess, which would be the extreme position, and then other people have talked about language models that just know everything, as opposed to having an external knowledge source. There are some interesting questions there. I think it is a little hard to separate factual knowledge from understanding. As humans, we get by without memorizing all sorts of facts, just knowing that we can look them up if needed. But for working in a specific domain, it is useful to have a lot of facts internalized, so that you can recall them very quickly and combine them in your head.
So I wouldn't take an extreme position on either side. I think retrieval is going to be really useful, at the very least for current events, but also I don't think we want to try to pack all human knowledge into the weights of a neural net. On the other hand, people have had a lot of luck just scaling up models, and as they soak up more factual knowledge, they also get better at reasoning and other things. I haven't seen any demonstrations of tiny models that just do lots of retrieval and save all their weights for reasoning. I just haven't seen any evidence of this, or any successful attempts at making it.

Let's move on to Training language models to follow instructions with human feedback. That was Ouyang et al., 2022, with yourself as a co-author. Can you tell us the main idea with this paper? This is the InstructGPT paper. What is InstructGPT and what's going on here?

InstructGPT is a language model that's fine-tuned to follow instructions, and it's in fact the one that you can play with if you go to the OpenAI website. You get a big text box and you can write some text and then press the button to generate a completion. So the idea here was: language models are pretty useful, and you can sometimes get them to do what you want by prompting them just right. This idea of few-shot prompting has become pretty popular, where you give a few examples, like a few question-answer examples, and then if you ask another question, it'll hopefully provide an answer in the same style. So you can get language models to do great things with prompting, but prompting is itself an art, and it's tricky to get right, and it's also not necessarily getting the best possible performance out of the model. If you just take a raw language model and you try to talk to it, like you ask it a question, it probably doesn't know that it should actually answer that question as well as possible. For all it knows, you want it to give a joke answer, or a riddle, or something.

So the idea of InstructGPT was: let's make a kind of small change to our language models so that they're much easier to use. In particular, we're going to train them so that if you have a piece of text where there's an instruction, the model will try to follow that instruction to the best of its abilities. And pretty much anything can be an instruction. The instruction can be to continue a chat, or it can be to summarize this text, or give me a list of names for my company that sells widgets. Instructions can be anything, and that makes this kind of model very powerful. So that's the idea of an instruction-following model: it's a model that can do anything that you specify with an instruction.

By the way, I wasn't a core contributor to this work. I was more involved with the RL infrastructure and some of the RL training details, helping out with that stuff. But anyway, what we did in this project was run the whole methodology I just described, RL from human preferences, in this instruction-following setting. So we did supervised fine-tuning, collected preference data, trained a reward model, and then did RL against that reward model. And one interesting detail: the original initial data was just collected using contractors.
At a certain point, we had the API, and we have this playground on the website, the big text box where you can use the model. So we took prompts that users had put into the playground and used those for training, both to collect preference data and to do RL. And this is disclosed to users pretty prominently. When people are using the playground, you get notified that your prompts might be used for training, and we're also careful to train in such a way that we don't memorize any information that was in the prompts. We have a pretty elaborate process for making sure there's no private information being leaked into the model.

But anyway, that's basically the experimental setup, and the result was that this methodology works quite well. You get a model that's vastly preferred to the base model on this distribution of realistic prompts that people are giving the model, which often contain instructions. The raw language models generally do a really bad job following instructions, but this RL-trained instruction-following model is a lot better. If you just calculate how much better, it's something like as good as a model that's a hundred times bigger.

That's a lot.

Yeah.

You wanted the model to be truthful, is that one of the criteria you wanted?

Oh yeah, truthfulness was one of the criteria.

That seems amazing to me, that truthfulness is something it could learn by example. Does that mean that truthfulness is somehow represented inside the network? Because there's no external way for the model to confirm whether something is true or false. So how might it know what is true without any external reference?

I think to some extent there is some internal representation of truthfulness. One way to think about what language models do is that they're trained to imitate the whole internet, and the internet is written by lots of different people and has lots of different types of content, from fiction to nonfiction, to detailed technical literature, to jokes and forum posts, whatever. So the raw pre-trained model is basically an ensemble of all these people who wrote stuff on the internet. When you feed it a prompt, what it's doing internally has to be something like figuring out who wrote this prompt and then trying to continue in that style. So if it thinks it's reading something on the WallStreetBets subreddit, it's going to continue in that style, but if it thinks it's in the New York Times, it's going to write in a very different way. So effectively the model must be calculating somewhere: what style is this, or what narrower ensemble of styles am I trying to imitate now?

At the very least, when you do some kind of training, either supervised fine-tuning or RL from human feedback, you can narrow down the set of styles the model is producing and try to imitate the best person in the training set, or the best style in the training set, and obviously "best" will differ a lot. So what we'll end up with will depend on our instructions.
So, depending on what our instructions are, we'll end up with something that's kind of safe, not too controversial, but a bit corporate, something like that. So at the very least, we can narrow in on one style, instead of having the whole distribution of styles on the internet. I think probably there's more to it than that. We're not just learning about style; the model probably is internally trying to determine whether statements are true or not, like whether the prompt contains incorrect information, because that would be useful for determining a likely completion. I'm just talking about the raw pre-trained model here, so I think the objective of predicting next tokens probably gives you a lot: it forces the model to determine if things are true or not. With RL fine-tuning, there's a lot more potential for the model to actually try to output something truthful, as opposed to trying to imitate a certain style, though I guess it would be hard to determine if that's what the model is actually trying to do.

So it's almost like the prompt is guiding the model, like, what corner of the internet do we want to imitate here? And maybe InstructGPT wants to focus more on the more truthful corners of the internet, something similar to that.

Yeah, I would hope so. At least I think that's a pretty good, though maybe a little simplistic, picture of what's going on. At the very least, we should be able to imitate the most truthful corner of the internet.

So can you talk about generalization, and how does this type of model perform out of distribution? I guess, if it sees questions that are a bit different than what it was trained on. What happens if we get a little bit away from the training data with the reward models?

Language models in general generalize surprisingly well. I would say overall, these pre-trained models that are trained on super diverse data sets from the internet tend to generalize quite well, or surprisingly well, at least to those of us who were around for the earlier days of machine learning, when everything was trained from scratch and very fragile. For example, if you provide an instruction in some other language, even a fairly rare language, it'll often do a decent job following the instruction, even if there's zero data in the whole instruction-following training process that's in that language, and that's just carryover from the pre-training. So I think language models generalize quite well.

You asked about reward models. I think one of the tricky pieces about RL from human feedback is that you have this reward model and you're actually training against it, meaning you're training your policy to have high reward, and it's going to exploit the errors in the reward model. It's going to eventually find adversarial examples to the reward model. This is worse than normal out-of-distribution behavior. It's like targeted out-of-distribution examples. So there are definitely some challenges around getting reward models to generalize well, or generalize as far as possible from the training set.

Can these types of agents tell us when they don't know something, or is that a hard problem?
I'd say, sort of. If you ask a question that's in the core of the model's knowledge, it will know the answer, and it'll know that it knows. By the way, I'm talking about models like the instruct model. If you ask it about something very simple, at the core of its knowledge, it'll know. There are certain things that it knows that it doesn't know, like current events, where it's been trained to know that it doesn't know certain things in real time. But if you ask it about something that's on the edge of its knowledge, it's going to have a hard time, and it's necessarily going to be inaccurate.

There have been a couple of papers about this question. There's a recent paper from Anthropic called "Language Models (Mostly) Know What They Know", and there's also a paper from FHI and OpenAI about getting language models to express their uncertainty in words. These language models, as well as a lot of other models in machine learning, are trained to maximize likelihood, to maximize the log-prob of the data. You're already training them to always predict a distribution over outputs. So for language models, given a prefix, the model predicts a distribution over the next token. These predictions for the next token are generally pretty well calibrated: if it puts 80% probability on something, and you look at all the times it puts 80% probability on something, it's right 80% of the time. That's just a result of the training objective. The training objective strongly incentivizes the model to be calibrated, meaning it has a reasonable estimate of its uncertainty. So at the single-token level, models definitely are calibrated. The question is whether this calibration extends to settings where they are generating multi-token outputs, or whether they can judge the correctness of some multi-token statement. I would say, since models are calibrated at the single-token level, they definitely have the information to be calibrated in these other settings. So that's why I think the problem of models knowing what they know isn't actually that hard, or at least getting a model to express its uncertainty pretty much as well as a human does doesn't feel like an insurmountable problem, but there are some practical difficulties to getting there.

People use the phrase AI alignment in different ways. Can you talk about how you see alignment in your work on RL from human feedback?

I think of alignment mostly as the problem of getting the model to try to do the right thing. We can make a distinction between what the model is capable of doing and what it's trying to do. If you just take a raw language model and you ask it a question, like I said before, it doesn't know that you actually want it to give the correct answer. It might think someone who is not very knowledgeable is answering. By doing some extra training, we can get the model to actually try to do the right thing, and I would say that's the main goal of alignment.

So there was an OpenAI blog post recently that talked about a sequence in alignment: one was training AI systems using human feedback, two was training AI systems to assist human evaluation, and three was training AI systems to do alignment research. So is your current work mostly about this first item, and when and how do you see us getting to these other stages?

I'm doing some work now on number two, training AI systems to assist human evaluation.
I think that becomes increasingly necessary as you start trying to get the systems to solve harder and harder problems. When you have models that are well below human level, or maybe at human level at a certain task, it's pretty straightforward to supervise them. But once they're doing things that are very hard, or that require a lot of diverse technical knowledge, it becomes pretty hard to provide a useful supervision signal. So we have to start doing things like: one model writes an answer to a question, and then another model provides a critique of that answer and points out some flaws, and then the human only has to judge the first answer after looking at the critique, meaning the critique helps the human assess the answer. So I think that kind of idea is starting to become pretty relevant, and colleagues and I are exploring it now. As for assisting alignment research, there's some other work at OpenAI that's starting to explore this. That's sort of further down the road.

So I saw Stuart Russell was on your PhD committee, and I really enjoyed his book Human Compatible. I wonder if you share the idea mentioned in the book, that the standard RL framing with a fixed reward signal is problematic, that powerful agents should try to do what we want and maintain some uncertainty about what it is we want, and that agents that are too certain will be problematic. Do you have any thoughts on that idea?

I totally agree with that idea. First, it's really hard to write down a simple reward function that actually captures what we want, or what any particular person wants. I can say I want a little more of this or a little more of that, but you wouldn't want to take that to the extreme. If we build agents that try to cater to our wishes, we should make sure they have a lot of uncertainty about what we want or what we value, and that will also cause them to be a little more cautious and, say, not disturb anything that might be important to us. So yeah, I agree with that. Stuart Russell gave a very good problem definition of what we want AI to do: we want to jointly play this game where the AI is trying to figure out what we want and then trying to do that, while simultaneously maintaining some uncertainty about what we want. I would say if you start to look at how to get that in practice, it actually looks quite a bit like the kind of RL from human feedback that we're working on at OpenAI and that others are working on in other places. I see what we're doing as a practical implementation of getting towards this behavior that Russell described.

Do you think of AGI as an abstract goal, or are we going to see a model come out one day and people are going to say, oh, that's the first AGI model? What does it have to do for people to say that?

I think people will say that many times, and then realize that it doesn't quite do everything that you want. I think we're going to have a long series of models that are superhuman at most things, or at a certain class of things, but that also have some failure modes and weaknesses. I expect us to see multiple models that are proclaimed as AGI, and then only after interacting with them a while do you realize they're not quite there.

What would you say is the relationship between AGI and RL, and AGI and these large language models? How do those concepts fit together?
I would say that RL is a useful component of training AGI, an almost essential component. The thing RL lets you do is optimize any objective for the agent, any objective that is a function of the agent's behavior. With pre-training, like what we do for language models, you're choosing an objective that lets us do something with all the training data we have, which is all this internet text. So we choose this maximum likelihood objective, which is, not the only thing, but a sensible way to absorb all this knowledge. But then if we really want to optimize the agent's behavior for a specific objective, RL is kind of the only framework that lets you do that.

Okay John, we have a few questions from the audience, and I'm just going to pick the two that have the highest score in terms of Twitter likes. The first is from Eric Jang, VP of AI at Halodi Robotics. He asked: RL distributions are non-stationary, making it hard to reason about PPO losses and how they relate to return or generalization. Are there any intermediate plots and visualizations you like to generate to debug or incrementally build up a large-scale RL system?

Yeah, there are definitely some stats that I look at, and I'll talk about this in the Nuts and Bolts reboot. I'd say things like looking at the explained variance of the value function, looking at how many samples are getting clipped in PPO, and what the KL divergence is between the policy before and after the update. Things like that.

And then Ethan Caballero from Mila asks: what is your median estimate for the arrival date of AGI?

I think not too far away, but like I said, I expect there to be a lot of false starts. I would say I expect AI to be able to do a better job than humans at most jobs that humans do now in five years or so. That's not all jobs, but most jobs. For a while, we're going to keep discovering things that AI isn't very good at, and where we want to keep humans in control. So I think there'll be some kind of gradual process over the next 10 or 15 years.

I've been curious about this. I see that some RL work is patented, but I could not find patents on TRPO or PPO. Are those patent-protected at all, or how do you think of intellectual property protection for that kind of work?
I haven't ever looked into patenting anything, and OpenAI hasn't either, as far as I know. I think the trend over time has been for people to take patents on machine learning algorithms less seriously. There's this algorithm in computer vision called SIFT, which is a keypoint detector, and it was patented. I think the guy who patented it probably made his university some money from the patent, but in the end all it did was cause people a lot of annoyance, because people had to come up with alternative algorithms that had a different acronym and weren't patented. The OpenCV open source library had to be careful about putting this algorithm in their library because of the patent risks. So I think these patent rights aren't exercised that much. Big companies like Google will patent a lot of stuff for defensive reasons, so if they get into some big legal dispute with another company, it can be used as one of the bargaining chips. But I don't think anyone's going to get sued for not paying royalties for the use of some algorithm.

Okay. There's been a ton of work in RL, of course, since you first published TRPO and PPO. From your point of view, if you had to pick a few highlights, a few important milestones in RL algorithms since PPO came out. And by the way, it's amazing that in 2022 we're still using PPO, I think in quite similar form to its original form, is that right?

Yeah, pretty much.

So what would you say are the biggest highlights for you, in terms of RL algorithms, since you did PPO?

There's definitely been some interesting stuff. A little after PPO there were TD3 and SAC, and those seem like pretty solid value-based methods, so that was one development that was interesting. I thought MuZero and its elaborations, like EfficientZero, were also pretty impressive, that you can get that good sample efficiency. Both of the things I just mentioned were, I don't want to say mostly on toy tasks or benchmarks, because I'm sure people are doing some real things with these algorithms, but that stuff was interesting. I think the whole recent surge of interest in offline RL was also notable. I would say the stuff we're doing with RL from human feedback is a kind of offline RL, because we have a fixed reward modeling dataset and we're training against that.

This is like offline RL, but you're doing it in a different way. You're using an on-policy algorithm with a reward model, as opposed to the maybe more typical way to do offline RL, which would be to use an off-policy algorithm. Would that work here, or would that not work here?

What we're doing here is kind of like model-based RL, because the reward model is a model of the unknown part of the system. The unknown part of the system here is the human rater, not the process of appending to your list of tokens. So this is kind of like the work that takes a dynamics model of the environment and just runs a policy gradient algorithm against it. The idea of running an online algorithm against a model is a well-established idea. I would say the papers that previously did this were in a pretty different regime. We're in this regime of
doing fairly small updates to the policy, because we have these awesome pre-trained models and we don't need to actually change them that much. So we use these online algorithms. I'd say part of the reason why we can get away with using just an online algorithm is that we've just been looking at a contextual bandit problem, because we only have one time step: you get a query and you output a response, and then that response gets a reward. If we had a multi-step process, such as a conversation where you can't assign a reward until the very end of the conversation, or some interaction with a real-world system that's hard to simulate, then it wouldn't be straightforward. You wouldn't be able to use exactly the same methodology. You would probably have to train a Q function or something like that; if you want your method to be sample efficient, you would probably have to do something slightly different. I think we'll have to start exploring this at some point soon, but so far I haven't seen any cases in the domain I'm looking at that require this, though I expect it to be relevant at some point.

So we had Aravind Srinivas talking about Decision Transformer on the show recently, that was a great episode, and I see that you were also a co-author on the 2016 RL² paper. I want to ask what your thoughts are about meta-RL. Aravind had some interesting things to say about the idea that a transformer could kind of supersede the need for an RL algorithm altogether. What do you expect from meta-RL? Do you expect we'll still be using human-authored RL algorithms in the future?

That's a pretty bold statement, that we won't need any RL algorithms anymore. Since the RL² paper, people have been talking less about meta-learning, as far as I can tell, because sequence modeling has gotten so good, transformers and sequence models, that it's become kind of clear that meta-learning is just a special case of learning. It's just a certain kind of learning involving long contexts and long episodes, and maybe it shouldn't be treated that differently or addressed with special algorithms. I would say ideas like Decision Transformer are pretty interesting, where you try to reduce RL to supervised learning. It's still not certain exactly how these compare in performance to RL. People have started to analyze that empirically and theoretically, and I would say in practice sometimes it's better, sometimes it's worse. In my experience it's been worse on the problems that my colleagues and I have tested it on, but it's definitely an interesting direction.

Dr. John Schulman, thank you so much for sharing your time and your insight with the TalkRL audience today. Thanks so much.

Thank you.
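A minimal sketch of the PPO training diagnostics mentioned earlier in the interview (explained variance of the value function, fraction of clipped samples, approximate KL between the policy before and after an update). The function name, array shapes, and random placeholder data are assumptions for illustration, not code from the podcast:

```python
import numpy as np

def ppo_diagnostics(values, returns, ratios, logp_old, logp_new, clip_eps=0.2):
    """values/returns: value-function predictions and empirical returns;
    ratios: pi_new(a|s) / pi_old(a|s) per sample; logp_*: per-sample log-probs."""
    # Explained variance of the value function: 1.0 is a perfect fit,
    # 0.0 is no better than predicting the mean return, negative is worse.
    explained_var = 1.0 - np.var(returns - values) / (np.var(returns) + 1e-8)

    # Fraction of samples whose probability ratio falls outside the PPO clip range.
    clip_frac = np.mean(np.abs(ratios - 1.0) > clip_eps)

    # Rough estimate of the KL divergence between the old and updated policy.
    approx_kl = np.mean(logp_old - logp_new)

    return {"explained_variance": explained_var,
            "clip_fraction": clip_frac,
            "approx_kl": approx_kl}

# Hypothetical usage with arrays logged during one PPO update:
stats = ppo_diagnostics(
    values=np.random.randn(256),
    returns=np.random.randn(256),
    ratios=np.exp(np.random.randn(256) * 0.05),
    logp_old=np.random.randn(256),
    logp_new=np.random.randn(256),
)
print(stats)
```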
[ { "end": 6.24, "start": 0, "text": " The answer was affirmative. We can get an agent to basically use a set of tools that we give it." }, { "end": 12.48, "start": 6.24, "text": " In this case, the browsing commands like searchings. I would say I expect AI to be able to do better," }, { "end": 17.84, "start": 12.48, "text": " a better job than humans at most jobs that humans do now. Five years or so." }, { "end": 27.92, "start": 22.56, "text": " TalkAulRO podcast is all reinforcing learning all the time, featuring brilliant guests," }, { "end": 34.08, "start": 27.92, "text": " both research and applied. Join the conversation on Twitter at TalkRL podcast. I'm your host," }, { "end": 44.32, "start": 34.08, "text": " Robin Chohan. John Schulman is a co-founder of OpenAI and a researcher and engineer at OpenAI." }, { "end": 48.32000000000001, "start": 44.32, "text": " He is well known for major contributions to the field of reinforcement learning," }, { "end": 54.400000000000006, "start": 48.32000000000001, "text": " including the TRPO algorithm that's trust region policy optimization, GAE, generalized" }, { "end": 59.12, "start": 54.4, "text": " advanced estimation. Those are from his UC Berkeley dissertation and TRPO's" }, { "end": 65.03999999999999, "start": 59.12, "text": " descendant proximal policy optimization, or PPO. His current focus at OpenAI is on RL from" }, { "end": 68.16, "start": 65.03999999999999, "text": " human feedback. John, welcome to the show and thanks so much for being here." }, { "end": 71.75999999999999, "start": 68.16, "text": " Thanks a lot for having me. You were literally one of the first people I thought of when I started" }, { "end": 77.6, "start": 71.75999999999999, "text": " the show three years back. Thanks, I'm honored. It means a lot to me to have you here today. I definitely" }, { "end": 83.12, "start": 77.6, "text": " remember you were nuts and bolts of deep RL video back in the day and watching that multiple times" }, { "end": 88.88000000000001, "start": 83.12, "text": " and gaining a lot from that. You helped a generation of RL practitioners back then. By the way," }, { "end": 95.52000000000001, "start": 88.88000000000001, "text": " there's going to be a reboot of the nuts and bolts presentation. I got invited to give a talk" }, { "end": 101.92, "start": 95.52000000000001, "text": " at NERPS this year on it. I'll have to revamp the guidelines and everything. That'll be fun." }, { "end": 107.12, "start": 101.92, "text": " Oh, that's awesome. Can't wait for that. You were clearly one of the earlier pioneers in deep RL." }, { "end": 112.4, "start": 107.12, "text": " How did you choose to move your focus to RL from human feedback? Why is that an important problem?" }, { "end": 117.84, "start": 112.4, "text": " Why is that important to you? After GB3 was trained, I was blown away by how smart it was and I" }, { "end": 122.32000000000001, "start": 117.84, "text": " realized the next frontier was figuring out how to make language models actually useful. I'm still" }, { "end": 128.4, "start": 122.32000000000001, "text": " really interested in RL but solving RL benchmarks isn't the end of the story. To use your RL" }, { "end": 134.08, "start": 128.4, "text": " algorithm you need a reward function. Whereas the reward function come from in RL benchmarks," }, { "end": 138.16, "start": 134.08, "text": " you usually just code up the reward function. 
But if you're not in a simulator environment," }, { "end": 144.07999999999998, "start": 138.16, "text": " that doesn't work. What we have to do in any kind of real-world use case is have humans look at" }, { "end": 149.04, "start": 144.07999999999998, "text": " what the AI did and decide if it was good or bad. How exactly do you define this reward" }, { "end": 154, "start": 149.04, "text": " becomes a really challenging and important problem, especially as the tasks get harder to evaluate?" }, { "end": 159.2, "start": 154, "text": " Another angle on this is that language models are very smart but it's hard to get them to do" }, { "end": 164.24, "start": 159.2, "text": " anything useful. A big part of that is they're not necessarily trying to do what you want. They're" }, { "end": 168.88, "start": 164.24, "text": " just trying to imitate the training corpus. That means there's a big opportunity to improve" }, { "end": 173.84, "start": 168.88, "text": " them a lot by just giving them the right objective. That's what we can do by applying RL to these" }, { "end": 181.12, "start": 174.64000000000001, "text": " language models using human feedback to define the reward. Is human feedback harder or" }, { "end": 185.92000000000002, "start": 181.12, "text": " very different in some way than using a synthetic reward? There are a lot of new complications." }, { "end": 192.56, "start": 187.36, "text": " You have to collect a data set dynamically. You're always in the business of building data sets of" }, { "end": 199.12, "start": 192.56, "text": " human preferences. Often the data quality there matters more than various algorithmic details." }, { "end": 204.32, "start": 199.12, "text": " You also have to think a lot about exactly how you're giving the task to the human trainers" }, { "end": 208.32, "start": 204.32, "text": " and various other things that you wouldn't have thought about if you just had a programmatic reward" }, { "end": 213.44, "start": 208.32, "text": " function. Does the difference between human-raders or the noisiness of the reward signal cost any" }, { "end": 220.56, "start": 213.44, "text": " problems? I would say the noise definitely you need to be below some threshold of noise to learn" }, { "end": 226.64000000000001, "start": 220.56, "text": " anything. I think in general if you have a large noisy data set that can be as good as a smaller" }, { "end": 231.6, "start": 226.64000000000001, "text": " clean data set. Actually, noise isn't the thing that worries me the most. It's more that there are" }, { "end": 238, "start": 231.6, "text": " sometimes consistent biases that people have. For example, in settings like question answering" }, { "end": 244.4, "start": 238, "text": " or settings where you have a model writing some text, often people prefer longer answers. You end" }, { "end": 249.36, "start": 244.4, "text": " up with these very verbose answers. If you're not careful with the instructions that is. You can" }, { "end": 256.40000000000003, "start": 249.36, "text": " also instruct people the raiders to reward brevity. But without yet, if you're not careful you can" }, { "end": 262, "start": 257.04, "text": " incentivize the wrong kinds of behaviors. So let's move to some of your recent work. First up is" }, { "end": 268.40000000000003, "start": 262, "text": " WebGPT. Browser assisted question answering with human feedback. That's a Nekano at all with yourself" }, { "end": 273.84000000000003, "start": 268.40000000000003, "text": " as a co-author in 2021. 
Can you tell us what is the main idea of this paper? What is WebGPT?" }, { "end": 280.23999999999995, "start": 273.84, "text": " In WebGPT, we basically took our language models and we hooked them up to a web browser so they" }, { "end": 285.35999999999996, "start": 280.23999999999995, "text": " could retrieve information from the web. They can write an answer by summarizing the relevant pages" }, { "end": 290.08, "start": 285.35999999999996, "text": " from the web. That way if you're asking a question about current events or a question that requires" }, { "end": 295.35999999999996, "start": 290.08, "text": " some detailed scientific or technical knowledge, this AI can go out and look up the answer and" }, { "end": 301.67999999999995, "start": 295.35999999999996, "text": " with detailed citations to its sources. I would say there's two interesting points to this. One is" }, { "end": 306.24, "start": 301.68, "text": " we were exploring whether you could turn language models into a kind of agent. There's a lot of data" }, { "end": 310.32, "start": 306.24, "text": " on the web of different texts that people have written. But there's not a lot of data that shows" }, { "end": 316.24, "start": 310.32, "text": " how to actually do some multi-step process. So it's not that clear, uprearry whether you can get a" }, { "end": 321.68, "start": 316.24, "text": " language model to actually carry out some iterative process. We just have a lot of data like writing" }, { "end": 326.16, "start": 321.68, "text": " essays and having chats and so forth. So that was one thing we were exploring here and I think" }, { "end": 332.8, "start": 326.16, "text": " the answer was affirmative. We can get an agent to basically use a set of tools that we give it." }, { "end": 338.16, "start": 332.8, "text": " In this case the browsing commands like searchings, scroll link, click on links. The second" }, { "end": 344.24, "start": 338.16, "text": " theme of this paper was around truthfulness. I mean a big issue with language models is I mean" }, { "end": 349.76000000000005, "start": 344.24, "text": " they're not very reliable at giving you true information. They know a vastly superhuman amount. But" }, { "end": 354.64000000000004, "start": 349.76000000000005, "text": " if you prompt them in the wrong way they'll just output lots of plausible sounding nonsense. So" }, { "end": 359.84, "start": 354.64, "text": " how to fix that is a big research question or one of the biggest research questions in the" }, { "end": 364.32, "start": 359.84, "text": " world of language models. I think it's going to be challenging to fully fix it but I think a big" }, { "end": 370.32, "start": 364.32, "text": " part of the story involves retrieval and having models write answers that contain citations." }, { "end": 375.28, "start": 370.32, "text": " Citations to try trusted sources. So a person who's checking over the answer doesn't have to go and" }, { "end": 379.91999999999996, "start": 375.28, "text": " try to figure out where the model might have gotten this idea. They can go and directly look at" }, { "end": 387.6, "start": 379.92, "text": " the source and see if it supports the AI statement. With WebGBT we just wanted to see if we do give" }, { "end": 392.40000000000003, "start": 387.6, "text": " the language model a really flexible interface to the web. Can we have it answer hard questions" }, { "end": 398.32, "start": 392.40000000000003, "text": " truthfully using like with the help of all these citations. 
And it's actually really non-trivial" }, { "end": 403.76, "start": 398.32, "text": " because if you look at the data that we use the Reddit explain it like on five. The questions" }, { "end": 408.08000000000004, "start": 403.76, "text": " are really varied like some of them are about science, history, current events. Like our" }, { "end": 413.84, "start": 408.08, "text": " Raiders didn't necessarily know anything about these topics but still they had to judge the answers" }, { "end": 418.88, "start": 413.84, "text": " written detailed answers. So it would have been really hard to do it without the supporting" }, { "end": 425.12, "start": 418.88, "text": " citations. So we kind of validated that we could get good feedback in a hard domain like this" }, { "end": 431.12, "start": 425.12, "text": " with the help of citations. Can you talk about where the idea for WebGBT came from? Is that an idea" }, { "end": 435.12, "start": 431.12, "text": " you've had kicking around for a while or was it something that came up recently before the" }, { "end": 441.36, "start": 435.12, "text": " paper? How did that play out? Some of the ideas had been floating around like we thought that we" }, { "end": 447.12, "start": 441.36, "text": " actually had a project at OpenAI very early on a world called World of Bits. We were looking at" }, { "end": 452.16, "start": 447.12, "text": " controlling web browsers or doing tasks that involve tasks on the internet with the web browser" }, { "end": 458.4, "start": 452.16, "text": " but it was way too early at the time. So we kind of abandoned it for a few years. Actually we" }, { "end": 462.8, "start": 458.4, "text": " were trying to back then we were trying to do it with full visual input. So we thought yeah we could" }, { "end": 469.12, "start": 462.8, "text": " give some instructions to the agent like go and figure out figure out the address of this" }, { "end": 475.68, "start": 469.84000000000003, "text": " building or something. The agent would go and search the web or use Google Maps or whatever" }, { "end": 479.92, "start": 475.68, "text": " to figure out the answer. And we were trying to do this all in pixels that obviously didn't work" }, { "end": 486.16, "start": 479.92, "text": " very well. But now we have these great language models on the work on text data. We can also" }, { "end": 493.12, "start": 486.16, "text": " extract the text out of web pages to get most of the information. We can't really interact with" }, { "end": 498.16, "start": 493.12, "text": " a lot of dynamic websites. Yeah, where there's a lot of JavaScript and images and so forth. But" }, { "end": 504.64000000000004, "start": 498.16, "text": " as long as it's just browsing and reading text we're fine. So yeah we had good enough models and" }, { "end": 510.8, "start": 504.64000000000004, "text": " that made it kind of feasible to revisit this idea of using the internet as an environment." }, { "end": 516.32, "start": 510.8, "text": " So I would say that was one of the sources of inspiration that long-stinted, that long kind of" }, { "end": 522.4, "start": 516.32, "text": " thread about like using the internet as an environment. Another motivation was just after we got" }, { "end": 529.12, "start": 523.2, "text": " after we started playing with GPD3 we noticed that it had all these problems with factual" }, { "end": 535.52, "start": 529.12, "text": " accuracy and the reliability of the information it was giving us. 
So that motivated doing more research on how to make language models more truthful. We were brainstorming what to do there, went through some docs, and eventually decided we wanted to try question answering using the web, looking up knowledge on the web to help answer questions. The original version of the project actually used trivia questions. There's this well-known dataset, TriviaQA, that has some basic trivia questions, so we first worked a little on that dataset and tried to see if we could boost the model's accuracy by giving it web search, and that worked pretty easily. Then we decided to move on to long-form question answering, and that was the project we ended up working on for a while. It seems like you used a few different datasets here and a number of different training methods. I'll just mention the list: behavior cloning, reward modeling, reinforcement learning, and rejection sampling. We were using a fairly standard methodology, which was adapted from previous work on RL from human preferences. The pipeline is: you first train a model with supervised learning, where human demonstrators show how to do the task, how to map from observations to actions. That's the supervised learning or behavior cloning step. Then we train a reward model, or preference model, which looks at two actions or two trajectories and decides which one is better. In this case, in a question answering setting, you're looking at two answers and deciding which answer is better, and we use that to train a reward model that assigns a higher score to the good answers than to the bad ones. Then you do reinforcement learning against that reward function, and of course you can iterate these last two steps. After you do a little RL you've exploited some of the flaws of the reward model, or some of the noise in the reward model, and it's not necessarily accurate on your new distribution of data. So you collect more pairs of samples, refit the preference model, and then do another iteration of RL.
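As a rough illustration of the reward-modeling step described here, the usual recipe is to train a scalar scoring model so that the preferred answer in each pair gets the higher score. This is a minimal sketch assuming a PyTorch-style reward_model callable that returns one scalar per (prompt, answer); it is not the actual WebGPT training code.

import torch.nn.functional as F

def preference_loss(reward_model, prompt, preferred, rejected):
    # reward_model(prompt, answer) is assumed to return a scalar score per example
    r_good = reward_model(prompt, preferred)
    r_bad = reward_model(prompt, rejected)
    # the preferred answer should get the higher score (Bradley-Terry style objective)
    return -F.logsigmoid(r_good - r_bad).mean()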
So that's the whole RL from human feedback pipeline. Then there's this other idea called rejection sampling, or best-of-n sampling, and in general you can do other kinds of search too, where instead of doing RL, once you have your reward model you can just search against it: you collect a bunch of samples, re-rank them with the reward model, and take the best one as your action. Kind of like MPC. Yeah, exactly. It kind of depends exactly what setting you're in. If you're in a setting where there's some environment you're interacting with, then you would have to simulate the dynamics of your environment, so that would look kind of like MPC. In our case, the only thing we had to learn a model of was the human preference. It's a question answering setting, so it's really a contextual bandit problem, and it's straightforward to sample a bunch of actions, where each action is a full answer, and re-rank them, or search over answers. In terms of the action space, was the action space just a list of commands, or is it still generating tokens like a regular generative model? We were generating tokens. We had two phases in each episode of the RL task. First there's a browsing phase, where the model issues searches, clicks on things, and quotes relevant information: if it sees something useful on the page, it will quote it using a quote command. Then once it's done browsing, it issues another command called end browsing and writes its answer, which is also expressed in tokens. But really we rolled this all into one big RL task, where an episode involves browsing and then writing out the answer, and it's all one big RL episode.
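A minimal sketch of the rejection-sampling (best-of-n) idea mentioned above: sample several full answers, score each with the reward model, and keep the highest-scoring one. The policy.sample and reward_model.score methods are hypothetical names, not a real API.

def best_of_n(prompt, policy, reward_model, n=16):
    candidates = [policy.sample(prompt) for _ in range(n)]        # n complete answers
    scores = [reward_model.score(prompt, c) for c in candidates]  # one scalar per answer
    return candidates[scores.index(max(scores))]                  # re-rank and keep the best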
Did you think this was going to work well, or were you kind of surprised? At the very beginning of the project we didn't know if it was going to work or not. After we did the initial experiments with TriviaQA, which actually didn't take that long to get running, it became pretty clear that it would work, that the browsing part worked at least, and we already knew that we can get these models to write pretty good long-form text if you give them a bunch of snippets of text that they can cite. I noticed the human raters' task was quite complicated: it was a long guide, and there were many types of feedback they were giving, but in the end the paper said that only the final rating was used. I was curious if you had a comment on that: why do you think the model couldn't use that extra feedback? Was it maybe just too much, or not enough samples? Yeah, that's been one frustrating finding so far in that project, and we've had the same finding in some other projects. You have your raters go through this long process for each comparison they do, where they're comparing a pair of answers, and then you only use one bit of information from this whole process, which might have taken half an hour. It seems like it would be better if we were able to extract more information about the process they went through in arriving at the answer. We did collect all sorts of other information, like having them provide ratings along several different axes such as coherence and factual accuracy, but in the end we didn't really get much of a boost out of using any of this other information. So it seems like it should be possible to do better, but unfortunately this methodology, which seems kind of dumb, has so far been hard to beat. People have tried various other ideas for how to use human feedback instead of getting these preference scores. There are various other things you can do, like having them write critiques, or maybe edit the responses. I think some of these things are also promising, but this methodology of collecting preference data works well. It's still an open area of research. Oh yeah, regarding the really long instructions.
Yeah, for any of these tasks there's a lot of subtlety in how to do the task properly, so we ended up adding more and more details about what to do in this situation and what to do in that situation. I think it's starting to get pretty unwieldy with these really long instruction manuals, but there are some promising ideas for how to address this. There's a recent paper from DeepMind, Sparrow, that basically broke down the task: they had people look at one aspect of the response at a time, trained a bunch of rule-specific reward models, and then combined them at the end. I think there are some other interesting ideas for how to make this process better. So I gather from your answer about WebGPT that the whole idea is that you want the language model to have access to external knowledge, but I wonder where you think the line should really be in terms of what a language model should know, what it should look up, and maybe what it should not know or not purport to know. Do you have opinions about that? Let's see, some people are advocating for very small language models that have no external knowledge aside from language, which I guess would be the extreme position, and then other people talk about language models that just know everything, as opposed to having an external knowledge source. There are some interesting questions there. I think it's a little hard to separate factual knowledge from understanding. As humans, we get by without memorizing all sorts of facts, just knowing that we can look them up if needed. For working in a specific domain, it is useful to have a lot of facts internalized so that you can recall them very quickly and combine them in your head. So I wouldn't take an extreme position on either side. I think retrieval is going to be really useful, at the very least for current events, but I also don't think we want to try to pack all human knowledge into the weights of a neural net.
On the other hand, people have had a lot of luck just scaling up models, and as they soak up more factual knowledge they also get better at reasoning and other things. I haven't seen any demonstrations of tiny models that just do lots of retrieval and save all their weights for reasoning; I haven't seen any successful attempts at making that work. Let's move on to Training Language Models to Follow Instructions with Human Feedback, that was Ouyang et al., 2022, with yourself as a co-author. Can you tell us the main idea of this paper? This is the InstructGPT paper. What is InstructGPT and what's going on here? InstructGPT is a language model that's fine-tuned to follow instructions, and it's in fact the one you can play with if you go to the OpenAI website: you get a big text box, you can write some text, and then press the button to generate a completion. The idea here was that language models are pretty useful, and you can sometimes get them to do what you want by prompting them just right. This idea of few-shot prompting has become pretty popular, where you give a few examples, like a few question-answer pairs, and then if you ask another question it'll hopefully provide an answer in the same style. So you can get language models to do great things with prompting, but prompting is itself an art, it's tricky to get right, and it's also not necessarily getting the best possible performance out of the model. If you just take a raw language model and try to talk to it, like you ask it a question, it doesn't know that it should actually answer that question as well as possible; for all it knows, you want it to give a joke answer or a riddle or something. So the idea of InstructGPT was: let's make a kind of small change to our language models so that they're much easier to use.
In particular, we're going to train them so that if you have a piece of text with an instruction in it, the model will try to follow that instruction to the best of its abilities. Pretty much anything can be an instruction: the instruction can be to continue a chat, or to summarize this text, or to give me a list of names for my company that sells widgets. Instructions can be anything, and that makes this kind of model very powerful. That's the idea of an instruction-following model: a model that can do anything you specify with an instruction. By the way, I wasn't a core contributor to this work; I was more involved with the RL infrastructure and some of the RL training details, helping out with that stuff. But what we did in this project was run the whole methodology I just described, RL from human preferences, in this instruction-following setting. We did supervised fine-tuning, collected preference data, trained a reward model, and then did RL against that reward model. One interesting detail is that whereas the initial data was collected using contractors, at a certain point we had the API, and we have this playground on the website, the big text box where you can use the model. So we took prompts that users had put into the playground and used those for training, both to collect preference data and to do RL. This is disclosed to users pretty prominently: when people are using the playground, you get notified that your prompts might be used for training. We're also careful to train in such a way that we don't memorize any information that was in the prompts; we have a pretty elaborate process for making sure there's no private information being leaked into the model.
But anyway, that's basically the experimental setup, and the result was that this methodology works quite well: you get a model that's vastly preferred to the base model on this distribution of realistic prompts that people are giving the model, which often contain instructions. The raw language models generally do a really bad job of following instructions, but this RL-trained instruction-following model is a lot better. If you calculate how much better, it's something like as good as a model that's a hundred times bigger. That's a lot. Yeah. You wanted the model to be truthful, is that one of the criteria you wanted? Oh yeah, truthfulness was one of the criteria. It seems amazing to me that truthfulness is something it could learn by example. Does that mean that truthfulness is somehow represented inside the network? Because there's no external way for the model to confirm whether something is true or false, how might it know what is true without any external reference? I think to some extent there is some internal representation of truthfulness. One way to think about what language models do is that they're trained to imitate the whole internet, and the internet is written by lots of different people and has lots of different types of content, from fiction to nonfiction to detailed technical literature to jokes and forum posts and whatever. The raw pre-trained model is basically an ensemble of all these people who wrote stuff on the internet. When you feed it a prompt, what it's doing internally has to be something like figuring out who wrote this prompt and then trying to continue in that style. If it thinks it's reading something on the WallStreetBets subreddit, it's going to continue in that style, but if it thinks it's in the New York Times, it's going to write in a very different way. So effectively the model must be calculating somewhere what style this is, or what narrower ensemble of styles it's trying to imitate now.
At the very least, when you do training, either supervised fine-tuning or RL from human feedback, you can narrow down the set of styles the model is producing and try to imitate the best person, or the best style, in the training set, and obviously "best" will differ a lot, so what we end up with will depend on our instructions. Depending on what our instructions are, we might end up with something that's kind of safe, not too controversial, but a bit corporate. So at the very least we can narrow in on one style instead of having the whole distribution of styles on the internet. I think probably there's more to it than that: we're not just learning about style, the model probably is internally trying to determine whether statements are true or not, whether the prompt contains incorrect information, because that would be useful for determining a likely completion. I'm just talking about the raw pre-trained model there; I think the objective of predicting next tokens already forces the model to determine whether things are true or not. For RL fine-tuning, I think there's a lot more potential for the model to actually try to output something truthful, as opposed to trying to imitate a certain style, though I guess it would be hard to determine if that's what the model is actually trying to do. So it's almost like the prompt is guiding the model as to what corner of the internet we want to imitate here, and maybe InstructGPT wants to focus more on the most truthful corners of the internet, something like that. Yeah, I would hope so at least. I think that's a pretty good, though maybe a little simplistic, picture of what's going on. At the very least we should be able to imitate the most truthful corner of the internet. Can you talk about generalization, and how does this type of model perform out of distribution, like if it sees questions that are a bit different than what it was trained on?
What happens if we get a little bit away from the training data with the reward models? Language models in general generalize surprisingly well. These pre-trained models that are trained on super diverse datasets from the internet tend to generalize quite well, or surprisingly well at least; it's surprising to those of us who were around for the earlier days of machine learning, when everything was trained from scratch and very fragile. For example, if you provide an instruction in some other language, even a fairly rare language, it'll often do a decent job of following the instruction, even if there's zero data in the whole instruction-following training process in that language; that just carries over from the pre-training. So I think language models generalize quite well. You asked about reward models. One of the tricky pieces of RL from human feedback is that you have this reward model and you're actually training against it, meaning you're training your policy to have high reward, and it's going to exploit the errors in the reward model; it's going to eventually find adversarial examples to the reward model. This is worse than normal out-of-distribution behavior; it's like targeted out-of-distribution examples. So there are definitely some challenges around getting reward models to generalize well, or generalize as far as possible from the training set. Can these types of agents tell us when they don't know something, or is that a hard problem? I'd say sort of. If you ask a question that's in the core of the model's knowledge, it will know the answer and it'll know that it knows. By the way, I'm talking about models like the instruct model. If you ask it about something that's very simple, at the core of its knowledge, it'll know. There are certain things that it knows it doesn't know, like current events, where it's been trained to know that it doesn't know certain things in real time. But if you ask it about something that's on the edge of its knowledge, it's going to have a hard time; it's necessarily going to be somewhat inaccurate.
There have been a couple of papers about this question. There's a recent paper from Anthropic called Language Models (Mostly) Know What They Know, and there's also a paper from FHI and OpenAI about getting language models to express their uncertainty in words. These language models, like a lot of other models in machine learning, are trained to maximize likelihood, to maximize the log-prob of the data, so you're already training them to predict a distribution over outputs. For language models, given a prefix, the model predicts a distribution over the next token. These predictions for the next token are generally pretty well calibrated: if it puts 80% probability on something, and you look at all the times when it puts 80% probability on something, it's right 80% of the time. That's just a result of the training objective; the training objective strongly incentivizes the model to be calibrated, meaning it has a reasonable estimate of its uncertainty. So at the single-token level, models definitely are calibrated. The question is whether this calibration extends to settings where they're generating multi-token outputs, or whether they can judge the correctness of some multi-token statement. Since models are calibrated at the single-token level, I think they definitely have the information to be calibrated in these other settings. That's why I think the problem of models knowing what they know isn't actually that hard, or at least getting a model to express its uncertainty pretty much as well as a human does doesn't feel like an insurmountable problem, but there are some practical difficulties to getting there.
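A quick way to check the single-token calibration described here is to bucket the model's next-token probabilities and compare average confidence with empirical accuracy in each bucket; for a calibrated model the two roughly match (right about 80% of the time where it puts about 80% probability). A small sketch with hypothetical inputs:

import numpy as np

def calibration_table(probs, correct, n_bins=10):
    # probs: probability the model assigned to its predicted token; correct: 1 if that token was right
    probs, correct = np.asarray(probs, dtype=float), np.asarray(correct, dtype=float)
    bins = np.clip((probs * n_bins).astype(int), 0, n_bins - 1)
    rows = []
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            rows.append((probs[mask].mean(), correct[mask].mean(), int(mask.sum())))
    return rows  # (mean confidence, empirical accuracy, count) per bucket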
People use the phrase AI alignment in different ways. Can you talk about how you see alignment in your work on RL from human feedback? I think of alignment mostly as the problem of getting the model to try to do the right thing, so we can make a distinction between what the model is capable of doing and what it's trying to do. If you just take a raw language model and ask it a question, like I said before, it doesn't know that you actually want it to give the correct answer; it might think someone who is not very knowledgeable is answering. By doing some extra training we can get the model to actually try to do the right thing, and I would say that's the main goal of alignment. There was an OpenAI blog post recently that talked about a sequence in alignment: one was training AI systems using human feedback, two was training AI systems to assist human evaluation, and three was training AI systems to do alignment research. Is your current work mostly about the first item, and when and how do you see us getting to these other stages? I'm doing some work now on number two, training AI systems to assist human evaluation. I think that becomes increasingly necessary as you start trying to get the systems to solve harder and harder problems. When you have models that are well below human level, or maybe at human level at a certain task, it's pretty straightforward to supervise them. But once they're doing things that are very hard, or that require a lot of diverse technical knowledge, it becomes pretty hard to provide a useful supervision signal. So we have to start doing things like having one model write an answer to a question and then another model provide a critique of that answer, pointing out some flaws, and then the human only has to judge the first answer after looking at the critique; basically the critique helps the human assess the answer. That kind of idea is starting to become pretty relevant, and some colleagues and I are exploring it now. As for assisting alignment research, there's some other work at OpenAI that's starting to explore this; that's also a bit further down the road. I saw Stuart Russell was on your PhD committee, and I really enjoyed his book Human Compatible. I wonder if you share the idea mentioned in the book that the standard RL framing with a fixed reward signal is problematic, that powerful agents should try to do what we want while maintaining some uncertainty about what it is we want, and that agents that are too certain will be problematic. Do you have any thoughts on that idea? I totally agree with that idea.
First, I think it's really hard to write down a simple reward function that actually captures what we want, or what any particular person wants. I can say I want a little more of this or a little more of that, but you wouldn't want to take that to the extreme. If we build agents that try to cater to our wishes, we should make sure they have uncertainty about what we want or what we value, and that'll also cause them to be a little more cautious and, say, not disturb anything that might be important to us. So yeah, I agree with that. Stuart Russell gave a very good problem definition of what we want AI to do: we want to jointly play this game where the AI is trying to figure out what we want and then trying to do that, while simultaneously maintaining some uncertainty about what we want. I would say that if you start to look at how to get that in practice, it actually looks quite a bit like the kind of RL from human feedback that we're working on at OpenAI and that others are working on elsewhere. I see what we're doing as a practical implementation of getting towards this behavior that Russell described. Do you think of AGI as an abstract goal, or are we going to see a model come out one day and people are going to say, oh, that's the first AGI model? What does it have to do for people to say that? I think people will say that many times, then realize that it doesn't quite do everything they want. I think we're going to have a long series of models that are superhuman at most things, or at a certain class of things, but also have some failure modes and weaknesses. I expect us to see multiple models that are proclaimed as AGI, and then only after interacting with them for a while do you realize they're not quite there. What would you say is the relationship between AGI and RL, and AGI and these large language models? How do those concepts fit together? I would say that RL is a useful component of training AGI, or an almost essential component. The thing RL lets you do is optimize any objective for the agent, any objective that is a function of the agent's behavior.
With pre-training, like what we do for language models, you're choosing an objective that lets us do something with all the training data we have, which is all this internet text. So we choose this maximum likelihood objective, which is, not the only option, but a sensible way to absorb all this knowledge. But if we really want to optimize the agent's behavior for a specific objective, RL is kind of the only framework that lets you do that. Okay John, we have a few questions from the audience, and I'm just going to pick the two that have the highest score in terms of Twitter likes. The first is from Eric Jang, VP of AI at Halodi Robotics. He asked: RL distributions are non-stationary, making it hard to reason about PPO losses and how that relates to return or generalization. Are there any intermediate plots and visualizations you like to generate to debug or incrementally build up a large-scale RL system? Yeah, there are definitely some stats that I look at. I'll talk about this in the nuts and bolts reboot, so wait for that, but I'd say things like looking at the explained variance of the value function, looking at how many samples are getting clipped in PPO, and what the KL divergence is between the policy before and after the update, things like that.
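For reference, the stats mentioned here are cheap to compute from a rollout batch. This is a sketch of how they are typically computed in common PPO implementations; the names and shapes are assumptions, not any specific codebase.

import numpy as np

def ppo_diagnostics(value_pred, returns, logp_new, logp_old, clip_eps=0.2):
    # explained variance of the value function: 1.0 is perfect, <= 0 means the value head is useless
    var_ret = np.var(returns)
    explained_var = float("nan") if var_ret == 0 else 1.0 - np.var(returns - value_pred) / var_ret
    ratio = np.exp(logp_new - logp_old)
    clip_fraction = np.mean(np.abs(ratio - 1.0) > clip_eps)  # fraction of samples hitting the PPO clip
    approx_kl = np.mean(logp_old - logp_new)                 # crude estimate of the policy KL before vs. after the update
    return {"explained_variance": explained_var,
            "clip_fraction": clip_fraction,
            "approx_kl": approx_kl}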
And then Ethan Caballero from Mila asks: what is your median estimate for the arrival date of AGI? I think not too far away, but like I said, I expect there to be a lot of false starts. I would say I expect AI to be able to do a better job than humans at most jobs that humans do now in five years or so. That's not all jobs, but most jobs; for a while we're going to keep discovering things that AI isn't very good at, and places where we want to keep humans in control, so I think there'll be some kind of gradual process over the next 10 or 15 years. I've been curious about this: I see that some RL work is patented, but I could not find patents on TRPO or PPO. Are those patent protected at all, or how do you think of intellectual property protection for that kind of work? I haven't ever looked into patenting anything, and OpenAI hasn't either as far as I know. I think the trend over time has been for people to take patents on machine learning algorithms less seriously. There's this algorithm in computer vision called SIFT, which is a keypoint detector, and it was patented. The guy who patented it probably made his university some money from the patent, but in the end all it did was cause people a lot of annoyance, because people had to come up with alternative algorithms that had a different acronym and weren't patented, and the OpenCV open source library had to be careful about putting this algorithm in their library because of the patent risks. So I think these patent rights aren't exercised that much, and big companies like Google will patent a lot of stuff for defensive reasons, so if they get into some big legal dispute with another company it can be used as one of the bargaining chips, but I don't think anyone's going to get sued for not providing royalties for the use of some algorithm. Okay, there's been a ton of work in RL of course since you first published TRPO and PPO, but from your point of view, if you had to pick a few highlights in terms of important milestones in RL algorithms since PPO came out, and by the way it's amazing that in 2022 we're still using PPO in quite similar form to its original form, is that right? Yeah, pretty much. So what would you say are the biggest highlights for you in terms of RL algorithms since you did PPO? There's definitely been some interesting stuff. A little after PPO there were TD3 and SAC, and those seem like pretty solid value-based methods; that was one development that was interesting. I thought MuZero and its elaborations, like EfficientZero, were also pretty impressive, that you can get that good sample efficiency. Both of the things I just mentioned were, well, I don't want to say mostly on toy tasks or benchmarks, because I'm sure people are doing some real things with these algorithms, but I think that stuff was interesting.
I think the whole recent interest in offline RL was also notable. I would say the stuff we're doing with RL from human feedback is a kind of offline RL, because we have a fixed reward modeling dataset and we're training against that. It's like offline RL, but you're doing it in a different way: you're using an on-policy algorithm with a reward model, as opposed to the maybe more typical way to do offline RL, which would be to use an off-policy algorithm. Would that work here, or would that not work here? What we're doing here is kind of like model-based RL, because the reward model is a model of the unknown part of the system. The unknown part of the system here is the human rater, not the environment that appends to your list of tokens. So this is kind of like the work that takes a dynamics model of the environment and just runs a policy gradient algorithm against it. The idea of running an online algorithm against a model is a well-established idea, but the papers that previously did this were in a pretty different regime. We're in this regime of doing fairly small updates to the policy, because we have these awesome pre-trained models and we don't need to actually change them that much, so we use these online algorithms. I'd say part of the reason we can get away with using just an online algorithm is that we've been looking at a contextual bandit problem, because we only have one time step: you get a query, you output a response, and then that response gets a reward. If we had a multi-step process, such as a conversation where you can't assign a reward until the very end of the conversation, or some interaction with a real-world system that's hard to simulate, then it wouldn't be straightforward to use exactly the same methodology. You would probably have to train a Q function or something like that if you want your method to be sample efficient; you would probably have to do something slightly different.
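To tie the pieces together, here is a sketch of the one-step, contextual-bandit style loop being described: sample a complete answer, score it with the learned reward model, and take a small policy update. The KL penalty against the pretrained reference model is my assumption of one common way to keep the updates small; all method names here are hypothetical.

def rlhf_bandit_step(prompts, policy, reference, reward_model, beta=0.02):
    batch = []
    for prompt in prompts:
        answer = policy.sample(prompt)                     # one episode = one full answer (a single time step)
        score = reward_model.score(prompt, answer)         # the reward model stands in for the human rater
        kl = policy.logprob(prompt, answer) - reference.logprob(prompt, answer)
        batch.append((prompt, answer, score - beta * kl))  # penalize drifting far from the pretrained model
    policy.update(batch)                                   # e.g. a PPO-style policy-gradient update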
we'll we'll" }, { "end": 2542.88, "start": 2536.4, "text": " have to we'll have to start exploring this at some point soon but so far we haven't at least" }, { "end": 2550.48, "start": 2542.88, "text": " I haven't seen any cases in like in the domain I'm looking at that require this but I expect it to" }, { "end": 2556.96, "start": 2551.44, "text": " to be relevant at some point so we had Arvind Shrinivas talking about decision transformer" }, { "end": 2561.76, "start": 2556.96, "text": " on the show recently that was a great episode and I see that you were also a co-author on the" }, { "end": 2565.92, "start": 2561.76, "text": " the 2016 RL squared paper I want to ask you what your thoughts about meta RL" }, { "end": 2571.28, "start": 2566.6400000000003, "text": " Arvind had some interesting things to say about maybe the idea that a transformer could kind of" }, { "end": 2575.92, "start": 2571.28, "text": " supersede the need for an RL algorithm altogether what do you expect from meta RL" }, { "end": 2581.36, "start": 2575.92, "text": " do expect will will still be using human authored RL algorithms in the future yeah that's a pretty" }, { "end": 2586.6400000000003, "start": 2581.36, "text": " bold statement that we don't need we won't need any RL algorithms anymore yeah since the RL squared" }, { "end": 2593.0400000000004, "start": 2586.6400000000003, "text": " paper people have been talking less about meta learning as far as I can tell actually because" }, { "end": 2599.28, "start": 2593.0400000000004, "text": " of sequence modeling has gotten so good like transformer let sequence models so that it's kind" }, { "end": 2604.2400000000002, "start": 2599.28, "text": " of queer the meta learning is just a special case of learning like it's it's just it's just like" }, { "end": 2610.0800000000004, "start": 2604.2400000000002, "text": " a certain kind of long context learning learning involving long episodes and maybe it shouldn't be" }, { "end": 2615.36, "start": 2610.0800000000004, "text": " treated that differently or are addressed with special algorithms I would say yeah the ideas like" }, { "end": 2620.6400000000003, "start": 2615.36, "text": " decision transformer are pretty interesting where you try to reduce RL to supervise learning it's" }, { "end": 2626.0800000000004, "start": 2620.6400000000003, "text": " still not like certain exactly how these compare and performance to RL like people have started to" }, { "end": 2633.04, "start": 2626.08, "text": " analyze that empirically and theoretically and I would say in practice sometimes sometimes it's" }, { "end": 2638.48, "start": 2633.04, "text": " better sometimes it's worse in my experience like it's been worse on the problems that I've" }, { "end": 2644.56, "start": 2638.48, "text": " that I've my colleagues and I have where we've tested it but yeah it's definitely an interesting" }, { "end": 2649.12, "start": 2644.56, "text": " direction Dr. John Schillman thank you so much for sharing your time in your insight with the" }, { "end": 2660.08, "start": 2649.12, "text": " talk our audience today thanks so much thank you" } ]
Sven Mika
Sven Mika of Anyscale on RLlib present and future, Ray and Ray Summit 2022, applied RL in Games / Finance / RecSys, and more!
https://media.transistor…14b.mp3?src=site
There's a rise in interest in RL in finance. We have JPM, for example, as well as other companies that we're seeing moving into the space and trying RL on financial decision-making. Ray was actually developed because of the need to write a reinforcement learning library. TalkRL podcast is all reinforcement learning, all the time, featuring brilliant guests, both research and applied. Join the conversation on Twitter at TalkRL podcast. I'm your host, Robin Chohan. A brief message from Anyscale, our sponsor for this episode. Reinforcement learning is gaining traction as a complementary approach to supervised learning, with applications ranging from recommender systems to games to production planning. So don't miss Ray Summit, the annual user conference for the Ray open source project, where you can hear how teams at Dow, Verizon, Riot Games and more are solving their RL challenges with RLlib, the Ray ecosystem's open source library for RL. Ray Summit is happening August 23rd and 24th in San Francisco. You can register at raysummit.org and use the code RaySummit22RL for a further 25% off the already reduced prices of 100 bucks for keynotes only, or 150 to add a tutorial from Sven. These prices are for the first 25 people to register. Now, I can say from personal experience that I've used Ray's RLlib and I have recommended it for consulting clients. It's easy to get started with, but it's also highly scalable and supports a variety of advanced algorithms and settings. Now on to our episode. Sven Mika is the reinforcement learning team lead at Anyscale and lead committer of RLlib. He holds a PhD in biomathematics, bioinformatics and computational biology from Witten/Herdecke University. Thank you Sven for joining us today. Hey Robin, nice to meet you. Nice to be here. Thanks for having me. Great to have you. So can we start with Anyscale? What does Anyscale do? Yeah, so Anyscale is the startup behind the Ray open source library, which is a Python package that is supposed to make distributed computing very, very easy. The Ray package comes with several of what we call libraries, mostly related to machine learning: for example, RLlib for reinforcement learning, and we have Ray Serve for model serving, and so on. The bet that we are making at Anyscale, and our philosophy, is that distributed computing is really hard. It's normally something that, as a software developer, you would like to outsource somehow. If you want to write, for example, a distributed machine learning application, you would probably not want to worry about that aspect of your work. So the idea is to have a platform, the Anyscale platform, where you can very easily bring up a cluster, either on Amazon or GCP right now, and then run your applications, preferably Ray applications of course, but not restricted to Ray, any distributed application, on the platform. The idea is to have both this OSS, the open source Ray system, which will draw users into becoming customers for the Anyscale platform. We're roughly a hundred people right now. We've collected more than a hundred million in investor money so far, and we've been around for roughly three years, I believe. I joined Anyscale two and a half years ago, first as the RL person, and since roughly a year ago we've grown into a larger team of five full-time RL engineers, and my team is responsible for developing and maintaining this RLlib library within the Ray open source system.
We're here to talk about RLlib mostly, but RLlib is based on Ray, so can you tell us a bit about Ray to get started? Yeah, so with Ray you can specify what we call tasks. These are functions that you tag with a Python decorator, and then you can call that function, let's say a thousand times, and the function gets executed on different nodes, based on the resources that you have in the cluster, in parallel, and you can collect the results in parallel. This works locally, for example with multiprocessing, but also on a cluster with different machines. That's the easy case, where you have a function. The harder case is where you have a class, and then we call this an actor. The class has state, and you tag it the same way, with the @ray.remote decorator that you put on your class, and then you have these actors running in the cloud on different machines, using different types of resources that you can specify. By default it's just a CPU, but you can of course also think about actors that utilize GPUs. Then you can ping these actors by calling their methods, kind of like a microservice, or an array of microservices, that you would like to request data from. RLlib utilizes Ray in such a way that the most common case it taps into is environment parallelization. Instead of just having a single environment that you step through, collecting some data and then learning from that data, RLlib is distributed by default. For example, if you take the PPO algorithm, our default configuration for that algorithm uses two of these actors, or two of what we call rollout workers. That's a class, and we make it a Ray actor by decorating it, and each of these rollout workers has environment copies, either one or more, for batched forward passes. So PPO can collect samples much faster than a single-environment PPO would be able to. That's a very simple example. We have other algorithms, like A2C, A3C, and then newer ones, that work in a similar way and have sometimes very complex execution patterns, where you not only collect samples from the environment in parallel, but you also already calculate gradients on the workers and send the results back to a central place for updating. For serving the really complex execution patterns that RL requires, Ray is the perfect tool. As a matter of fact, Ray was actually developed because of the need to write a reinforcement learning library: the RISELab at Berkeley wanted to build a reinforcement learning library, and then they figured out that they needed a nice tool to unify things and take the difficulty out of distributed computing.
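To make the tasks-and-actors model described above a bit more concrete, here is a minimal sketch in Python, assuming a recent Ray version; the rollout function and Counter actor are made-up examples for illustration, not anything from RLlib itself.

```python
import ray

ray.init()  # start Ray locally; on a cluster this would connect to the head node

# A task: a plain function tagged with the @ray.remote decorator.
@ray.remote
def rollout(seed: int) -> float:
    # Placeholder for stepping an environment and returning a score.
    return float(seed) * 0.1

# An actor: a class with state, tagged the same way.
@ray.remote
class Counter:
    def __init__(self):
        self.n = 0

    def increment(self) -> int:
        self.n += 1
        return self.n

# Launch 1000 task invocations in parallel; each call returns an object reference.
futures = [rollout.remote(i) for i in range(1000)]
results = ray.get(futures)  # collect the results

# Create an actor and "ping" it by calling its methods remotely.
counter = Counter.remote()
print(ray.get(counter.increment.remote()))  # -> 1
print(sum(results))
```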
There's such a wide variety of settings. What are the settings that are best suited for RLlib, in terms of off-policy, on-policy, model-based, model-free, multi-agent, offline, and so on? Yeah, RLlib really has no particular restriction on any of these, except for limited support for model-based RL. What we also see a lot is that the traction of the whole Ray ecosystem comes in for users because Ray has not just the RL library, but also the other machine learning libraries, for example Ray Tune for hyperparameter tuning, Ray Serve for model serving, or Ray Train for supervised learning. There's a lot of interest right now in combining all of these. For example, if you want to do what's called online learning, you train some model, either with supervised learning or with reinforcement learning, you deploy it into production, and you see what happens and evaluate it there, because you cannot do that in a simulator: you need to see what it's doing in production. You collect more data in production, and then you use that new data to retrain and repeat the whole process. So that's one of the other strengths of RLlib, because it's so well integrated with the other machine learning libraries that Ray comes with. We've featured authors of other major RL libraries on the show, Antonin Raffin and Ashley Hill who wrote Stable Baselines, and Pablo Samuel Castro who wrote Dopamine. How do you situate RLlib in a landscape with these other types of RL libraries? Yeah, that's a very good question. We've thought about this ourselves a lot, and we did some surveys trying to find out what people use, why they use other libraries over RLlib, and where RLlib stands in this ecosystem of RL libraries. Stable Baselines probably still is the go-to tool when you start with RL and when you just want to understand some algorithm, because the implementation is a little simpler: you have only one environment, kind of a single-batch setup. RLlib, yes, is the heavier version of an RL library because of the scalability and the other really nice features that we believe make it stand out from the crowd, which are, for example, multi-agent support, strong offline RL support, support for both TensorFlow and PyTorch, and all types of models: you can plug in an LSTM, we have those off the shelf that you can just use, attention nets, regular CNN networks and MLPs. So that's where we see RLlib: the place for larger workloads. We have customers that use 100 workers, so they step through 100 environments in parallel, where the environment maybe sits on some server that they have to connect to. These really complex, large-scale, distributed workloads are pretty standard for RLlib to support. We are trying to tackle the problem of complexity and the steep learning curve that people tell us we have, and that we realize as well of course, through different projects that we're working on right now. We've had a huge push over the last half year on simplifying APIs, and that's one topic I can go into in a bit more detail if you like: we simplify the APIs, make the code more structured, more transparent, more self-explanatory. The other larger item on our list is better documentation, more examples, maybe creating some YouTube videos on how to do cool stuff with RLlib, like how to set up typical experiments, and so on. Well, I have no doubt that you'll get that and more done. But to be fair to Stable Baselines, they do support vectorized environments where you can run many environments, but I believe they are limited to a single machine, which Ray of course doesn't have as a limitation, right?
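As a rough illustration of the off-the-shelf models and rollout-worker scaling mentioned above, here is a minimal sketch assuming the Ray 2.x-era RLlib API; the environment and the specific numbers are made up for the example.

```python
from ray.rllib.algorithms.ppo import PPOConfig

# Scale sample collection out to several rollout workers and wrap the default
# network with RLlib's off-the-shelf LSTM, purely via configuration.
config = (
    PPOConfig()
    .environment(env="CartPole-v1")
    .rollouts(num_rollout_workers=8)           # 8 parallel environment workers
    .training(model={"use_lstm": True,         # off-the-shelf recurrent wrapper
                     "max_seq_len": 20,
                     "fcnet_hiddens": [128, 128]})
)

algo = config.build()
print(algo.train()["episode_reward_mean"])
```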
How big can these runs get with RLlib? Yeah, so again, as I mentioned before, we have seen users use hundreds and more workers. We have run experiments with 250, I believe, for example on some IMPALA benchmarks. These run on really large clusters, with one head node that has a couple of GPUs and then dozens of smaller CPU machines that these environment workers can run on, and we've seen these workloads used by our users and customers in a meaningful way. The other axis that comes in here for scaling is the hyperparameter tuning axis. What I just described could be a single job, right, where you say: I have 100 environment workers, and on the head node, for learning, for updating my model, I use a couple of GPUs. But you can also scale this further on another axis and say: I have eight different hyperparameter sets that I would like to try, or different model architectures. So again, by combining RLlib with other Ray libraries, Ray Tune in this case, this becomes an even larger job. Sure, you could run the hyperparameter trials in sequence, but you would also like to parallelize those. Can you tell us about some of the use cases you've seen for RLlib, what the customers are doing with it? Yeah, I can talk about that; that's actually one of the most exciting parts of working on RLlib. Our rough idea is that we have two major bets that we're taking right now for the next couple of months, which are the gaming industry and the recsys sector. Let me talk about the gaming industry. We have two customers that have already presented publicly about what they're doing, so I can talk about them here, which are Wildlife Studios and Riot Games, and the interesting thing is that they use RLlib for very different setups and use cases. Wildlife has built an in-game item sales recommender system that basically figures out prices that players would probably pay for items they can buy in the games, and they have used RLlib for that, also in an online fashion: training with RLlib offline, then deploying into production, using different OPE methods to figure out which could be the best model, using Ray Serve for the price serving, and then collecting more data, bringing it all back, and repeating the cycle. Riot Games does this classic thing where they have these multiplayer adversarial games where different teams play against each other, and one of the main challenges for these kinds of game studios is that they have to make sure the games are fair and there's no imbalance in the game, maybe because you're picking a different character or different weapons, so that the game doesn't get boring. That's the big challenge. Normally they would use testers who play the game a couple of times, and this is very expensive and very tedious, so it would be much nicer to have a bot that can play the game against itself using self-play, and then learn to automatically figure out where the exploits or imbalances could be located. For example, they figured out that one card in one of their card games was very powerful, and they had to reduce the value of that card by one, and that completely fixed the imbalance.
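Going back to the hyperparameter-tuning axis described a moment ago, here is a minimal sketch of running an RLlib algorithm under Ray Tune, assuming the Ray 2.x-era APIs and using the classic dict-style config for brevity; the search space and stopping criterion are made up for the example.

```python
from ray import tune

# One PPO trial per learning rate; Tune schedules the trials in parallel
# across whatever resources the cluster has available.
analysis = tune.run(
    "PPO",
    config={
        "env": "CartPole-v1",
        "num_workers": 2,                         # rollout workers per trial
        "lr": tune.grid_search([1e-4, 5e-4, 1e-3]),
    },
    stop={"episode_reward_mean": 150},            # stop each trial at this return
)
print(analysis.get_best_config(metric="episode_reward_mean", mode="max"))
```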
When I think of recommender systems, I often think of the one-step case, the bandit case. Is that what you're talking about here, or also the full RL setting, the multi-step case? Yeah, I'm actually talking about both. We still see a lot of companies trying bandits, a single-step setup where you try to optimize the very next reward. But you also have companies that think about the long-term effects of the recommendations on engaging the user: maybe it has a negative effect if you always make certain recommendations, and the user clicks on them, engages with them, and then maybe gets tired of the content. These considerations slowly creep into how they think about the problems they want to solve, so the session-based, long-range, delayed-reward settings that you can only handle with classic RL. And yeah, there's a lot of movement right now: a lot of companies want to try RL for recsys, where before they used either non-machine-learning methods or supervised learning. I think they've figured out that this end-to-end setup of RL is really nice: it just gives you an action, and you can use that directly without having to program more logic into the system. But it's very hard. One of the challenges in recsys, maybe to explain this, is when you have to recommend several items. Think about YouTube: you go to YouTube and there are a couple of slots that are filled with recommended videos. It's quite crazy what this means for the action space. It explodes easily if you think about the number of videos that YouTube has, billions I think, and you have to pick twelve of those. That makes your action space explode quickly if you don't have a nice pre-selection method in front of it, which has nothing to do with RL itself, so you have to be careful. It's really a big challenge; I find it really difficult. That's just one problem. The other problem is the user model: how do you do all of this without a simulator? Maybe you have a simulator, but then how do you program the user into the simulator, the user behavior, especially the long-term behavior, the long-term effects of what you recommend to the user? I find that an extremely challenging problem. So, other use cases that we're seeing: there's a rise in interest in RL for finance. We have JPM, for example, and I can say that because they also spoke publicly about using RLlib, as well as other companies that we're seeing moving into the space to try RL on financial decision-making, buy and sell decisions, and so on. And then we have seen some attempts in self-driving cars and robotics. It does feel like some verticals are further ahead. Another one is logistics.
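To put a rough number on the slate action-space explosion described above, here is a quick back-of-the-envelope calculation; the catalog size is just an assumed figure for illustration.

```python
import math

# Picking a slate of 12 videos out of a catalog of (say) one billion candidates,
# without any pre-selection step in front of the RL agent.
catalog_size = 1_000_000_000   # assumed catalog size, for illustration only
slots = 12

print(math.comb(catalog_size, slots))  # unordered slates: on the order of 10**99
print(math.perm(catalog_size, slots))  # ordered slates: on the order of 10**108
```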
Logistics is also quite far ahead, as is this whole process optimization sector, where you have some factory, maybe a chemical factory, and you would like to optimize different processes there through RL; that's also very far ahead already. But yeah, the different verticals have made different amounts of progress in moving into RL, and the only problem that we see right now is that it's not at large scale yet. We see single companies starting to think about it, but I don't think we are quite at the point where really everyone wants to do it right now. But I think we're close, so maybe it's another year. It's hard to tell, and this is one of the difficulties for us at Anyscale: predicting when this point will happen where everything really goes exponential. It's quite a challenge. Can you say a bit about the roadmap for RLlib? What do you see in the future for RLlib? As I already mentioned, one important project that we're currently working on, I would say we're maybe 50 or 60 percent done with it, is API simplification. We have realized that Stable Baselines, for example, is a much nicer library to use and easier to learn, and we really respect that, and we would like RLlib to have that feel as well. So we're trying to get rid of old, complicated, unintuitive APIs. I can give you an example: our algorithm configurations used to be a Python dictionary, so you didn't really know what the different keys were, and we had some users tell us that one of the main locations where they would hang out on the internet was the RLlib documentation page where all the config options were listed. So instead, we have now changed this to config objects: you create an instance of a config class, and then you have type-safe properties that you can set inside this class. The class comes with different helper methods, for example to set training-related parameters, rollout-related parameters, resource-related parameters, and so on. It's much more structured, it's much more IDE-friendly: you can see the different docstrings in your IDE and immediately know which setting you need to adjust to, hopefully, make the algorithm learn a little faster. That's one change we did. We are currently also exploring making our models easier to customize. Before, every algorithm kind of had its own model API: a DQN would have this Q-model thing, and it would have certain methods that it calls, for example for handling the dueling head. Now we're trying to unify all that, so that across the different algorithms you can use the same subclasses, for example for a Q network, a policy network, a value function network, or a transition dynamics network. That will all be unified, and this will make it easier to plug and play the different PyTorch or TensorFlow models that you have flying around anyway if you do research on these things, and the algorithms will just recognize any of those. So it will be much more pluggable and much more intuitive to set up arbitrarily complex models for the algorithms.
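As a rough sketch of the dictionary-to-config-object change described above, assuming the Ray 2.x-era PPOConfig API; the specific values are made up for the example.

```python
from ray.rllib.algorithms.ppo import PPOConfig

# Old style: one big, untyped dictionary of config keys, e.g.
#   config = {"env": "CartPole-v1", "lr": 5e-5, "num_workers": 4, "num_gpus": 0}

# New style: a typed config object with grouped helper methods.
config = (
    PPOConfig()
    .environment(env="CartPole-v1")
    .training(lr=5e-5, train_batch_size=4000)   # training-related parameters
    .rollouts(num_rollout_workers=4)            # rollout-related parameters
    .resources(num_gpus=0)                      # set to 1 if a GPU is available
    .framework("torch")
)

algo = config.build()        # build the PPO algorithm from the config
for _ in range(3):
    results = algo.train()
    print(results["episode_reward_mean"])
```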
So I understand RLlib supports both PyTorch and TensorFlow. How does that work? Yeah, great question. It makes things more complicated, yes. We don't have an internal RLlib-specific deep learning framework; we basically do everything twice. But it's simpler than that: the top concept in RLlib is the Algorithm, which is completely framework-agnostic. It determines when things should happen: when to sample, when to put the samples into the replay buffer, when to sample from the replay buffer, when to update the policy network from that data. It's completely framework-agnostic; you just pass around abstract objects and concepts. One level below that we have what we call the Policy, and that one is framework-specific, so we have a TensorFlow policy superclass and a Torch policy superclass. The different algorithms, for example PPO and DQN, have their own loss functions, which are part of this policy and are written in two ways, in TensorFlow and in PyTorch. The problem of TensorFlow 1 with sessions versus TensorFlow 2 with eager mode, without sessions and placeholders, we solve by automating it away, so you really only have to write one TensorFlow loss function to support both versions. But yes, for each algorithm we have to write both loss functions, and that's mostly it. The other thing you have to take care of, of course, is the default models. We have a set of default models, MLPs, a simple CNN setup, as well as an LSTM setup, and those we also provide in both TensorFlow and PyTorch. But the main work is really the loss functions. For our users who want to implement a new algorithm, it doesn't matter: they just implement one of these, and then of course their algorithm only exists in that one world. But for the built-in algorithms that come with RLlib, we went through the work and implemented the loss functions in both frameworks. It's not as much work as people would fear, I think, so that's a good thing. Does the TensorFlow side still get a lot of use? It seems on the research side PyTorch is more common. I think so. A lot of industry users are still on TensorFlow; they believe in the speed, in the performance. We have seen weird stuff with Torch, where sometimes it runs super fast depending on the machine you are on, the processor, and also the GPU and CUDA versions play a big role. But a lot of industry still uses the TensorFlow versions of the algorithms. Also, sometimes they don't really care: they use everything out of the box, they don't have their own models, they just use everything that comes with RLlib anyway, and in that situation they can easily switch and compare, which is also very nice. Traditionally we have seen that TensorFlow still has some edge over PyTorch performance-wise, but from time to time we look into PyTorch and see why, or how we can make it faster, and there are these JIT tricks that you can apply to make it behave similarly to how you would use TensorFlow 2 with eager tracing. We're still working on that one. But yeah, we still see a lot of people on TensorFlow, definitely. I think in the research area PyTorch probably has the edge now, but in industry it's still pretty undecided at this point.
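To show what the framework switch looks like from the user's side, here is a minimal sketch assuming the Ray 2.x-era AlgorithmConfig API and that both PyTorch and TensorFlow 2 are installed; everything else about the config stays the same.

```python
from ray.rllib.algorithms.ppo import PPOConfig

base = PPOConfig().environment(env="CartPole-v1").rollouts(num_rollout_workers=2)

# Same algorithm, same config; only the deep learning framework changes.
torch_algo = base.copy().framework("torch").build()
tf2_algo = base.copy().framework("tf2", eager_tracing=True).build()  # traced eager mode

print(torch_algo.train()["episode_reward_mean"])
print(tf2_algo.train()["episode_reward_mean"])
```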
We had Jordan Terry on recently, who maintains Gym, and they were talking about RL systems that have both the agent and the environment on the GPU, so the entire RL loop happens within the GPU. Are there any plans to do anything like that with RLlib? Yeah, we've seen this come up in discussions with Jordan himself, but also with our users and customers, or potential customers: the need to do exactly that, to have the environment on the GPU, because maybe the environment has a complex image observation space, and then to not have to copy all the data from the environment's GPU back to the CPU, send it through the Ray object store, and then basically move it back to the GPU for the learning updates. We have seen this question a lot, and we've started thinking about it and experimenting with it. Another possible setup is to even think about having different GPUs, so you have the environment on one GPU, or on several, and then a central GPU for learning: how can you realize direct GPU-to-GPU communication to speed this up, to at least avoid the CPU copy? We have come to the conclusion that this is more of a problem we should solve in Ray core, via the object store. The object store is the thing Ray works with that is basically available from all the different nodes in a cluster: things go into the object store as read-only, they get serialized and put there, and then you can pass the reference around, and with that reference, on the other side of the cluster, you can pull the object out of the object store. But right now, before things go into the object store, everything gets copied to the CPU, and that's the problem we're trying to solve: being able to say, this particular data should go into the object store, but please leave it on the GPU, or send it directly to this other GPU in the cluster. We're currently working on this, it's on our roadmap, but we still have to figure out a lot of details related to RLlib, for example what this means for the environments. We may then need to add JAX support, because of the nice JAX-based environment APIs that you could then use; we currently don't support JAX, but this is on our roadmap and it may happen pretty soon. Can you tell us a bit about the team behind RLlib? Yeah, great question. As I mentioned before, our team size is roughly five full-time engineers. We had a couple of interns who already finished their fantastic projects: one finished yesterday, Rohan, who worked on off-policy evaluation, a nice new API for it, and the new off-policy estimators we now have in RLlib, and another intern who was working on the decision transformer implementation. It's quite a challenge: I'm working remotely from Germany, and most of the other people are in San Francisco, but we have a pretty solid pipeline for planning, and we work in sprints, so every two weeks we plan what everyone should work on for the next two weeks, and then we do pretty solid quarterly planning.
In the quarterly planning we come up with lots of thoughts about what direction RLlib should go in, what's needed by the users and by the important customers, and also what we're predicting will happen next, like: is gaming going to be the next big thing in the next six months or so? All of this goes into the planning, and then we come up with quite detailed, roughly engineering-week-level plans of what everyone should work on, distribute this among the engineers, and then during the quarter make sure that we help each other out if there are roadblocks or if someone gets stuck somewhere; the whole team helps out. It's working really quite well. We have only been together for a couple of months now, I have to say. For the last two and a half years since I joined Anyscale, most of the time I was more or less working alone on RLlib, maybe with some help from interns in between; also, in the beginning Eric was still there, who then moved into the Ray core team. But this RLlib team, this larger team that's working professionally full-time on RLlib, has only been around for a couple of months now, since the beginning of this year. And I feel like it's working really well: we're getting a lot of stuff done, RLlib is changing quite a lot right now as we go towards Ray 2.0, and I'm really happy about the intermediate results so far. I really look forward to all the nice changes that are to come. Can I ask you about testing? How do you test RLlib? Yeah, testing, that's actually one of the pain points we discussed recently: how can we be more efficient with testing? Right now we have a CI; we use Buildkite for our CI tests. When you branch off from master and push an update to your PR, it takes more than an hour to fully run, because we have all the unit tests and we have what we call the CI learning tests, which are smaller learning tests on CartPole and Pendulum environments, for all the different algorithms, for all the different frameworks, sometimes even different settings and different model types. So that's roughly an hour, which it shouldn't take that long; there are a lot of things we can do better to maybe cut it in half. One of these things is that RLlib doesn't depend on building anything: RLlib is really just source files, it's a pure Python library, so we should be able to just pull a ready-made container and run on that. That's something we can optimize. And then we have daily release tests that we run, and those are the heavier, harder-task learning tests on Atari and MuJoCo, also for the different algorithms and the different frameworks, TensorFlow and PyTorch, and those take several hours to run on expensive GPU machines. But also there we've made a lot of progress lately, we added a lot of algorithms, so we have much more trust in the code right now, which is very good and very important. But yeah, it's a huge pain point for us as a team of RLlib developers. So I understand RLlib is not your first RL library. Can you tell us about how you got here? Yeah, sure. My journey to RL actually started with games.
I was looking at the Unreal Engine, I think it was 2016, and I had this crazy idea of writing a system where you have a game world and some characters in it, and the characters would learn how to interact and play the game. I didn't even know much about RL back then, but this idea of creating a kind of background story by just making the characters become smarter and smarter and act in this world got me into RL, and then I figured this is the perfect method to solve this kind of problem. That's when I started learning RL and also writing my own libraries and algorithms. I started by joining the TensorForce group, the TensorForce open source project; that was in 2017. TensorForce is another open source RL library. And then, with some of the people from the TensorForce GitHub repo, we started RLgraph. That was 2018 to 2019, and we published a paper comparing RLgraph with RLlib, and that paper got attention from the Anyscale team. At the end of 2019, I believe, Anyscale reached out to me, and also to other people from the RLgraph team, and asked us whether we would like to work for them. And then there's also surreal, a really small library that I came up with. I was always obsessed with the idea of making it really, really simple to implement an algorithm: it shouldn't be harder than reading the paper, seeing the pseudocode, understanding the pseudocode, and then using as many lines as are in the pseudocode to code the algorithm in a library. It's a tough goal to achieve, but in my opinion that should be the ideal, the ultimate goal, for any RL library, and that was surreal. I tried to implement the algorithms in a very dense but easy-to-read fashion, and I had some algorithms in there, I think PPO, SAC, DQN. It was just a small side project, and it was also related to games: I had a module in there where you could plug the Unreal Engine into this RL library and then learn some smaller problems, very similar to ML-Agents for Unity; I wanted to do the same with the Unreal Engine, and that was surreal. Sven, is there anything else you want to share with our audience while you're here? Yeah, sure. We have Ray Summit coming up at the end of August, August 22nd and 23rd, in San Francisco. This is the third Ray Summit and the first one that's actually in person. I'm super excited to be there; I'll fly in at the end of August to give a tutorial on RLlib, and there are other cool talks about how people use Ray and the Ray libraries in industry, and of course also a lot of talks on RLlib. If you're interested, sign up and join us for the summit. That'll be awesome. Sven Mika, it's been great chatting with you and learning about RLlib. Looking forward to Ray Summit, and thanks so much for sharing your time and your insight with the TalkRL audience today. Thank you, Sven. Thanks a lot for having me, it was a pleasure to be here.
[ { "end": 5.72, "start": 0, "text": " There's a rise in interest in our finance. We have JPM for example, as well as other" }, { "end": 11.32, "start": 5.72, "text": " companies that we're seeing moving into the space and trying our own financial decision-making." }, { "end": 16.6, "start": 11.32, "text": " Ray was actually developed because of the need to write a reinforcement learning library." }, { "end": 26, "start": 20.6, "text": " Talk our rail podcast is all reinforcement learning all the time, featuring brilliant guests" }, { "end": 32.04, "start": 26, "text": " both research and applied. Join the conversation on Twitter at TalkRL podcast. I'm your host," }, { "end": 40, "start": 32.04, "text": " Robin Chohan. A brief message from any scale are sponsored for this episode." }, { "end": 44.36, "start": 40, "text": " Reinforcement learning is gaining traction as a complimentary approach to supervised learning," }, { "end": 49.2, "start": 44.36, "text": " with applications ranging from recommended systems to games to production planning. So don't" }, { "end": 54.24, "start": 49.2, "text": " miss Ray Summit, the annual user conference for the Ray open source project, where you can hear" }, { "end": 60.6, "start": 54.24, "text": " how teams at Dow, Verizon, Riot Games and more are solving their RL challenges with RL lib." }, { "end": 66.88, "start": 60.6, "text": " That's the Ray ecosystem's open source library for RL. Ray Summit is happening August 23rd" }, { "end": 75.04, "start": 66.88, "text": " and 24th in San Francisco. You can register at raysemmet.org and use the code RaySummit22RL for a" }, { "end": 82.4, "start": 75.04, "text": " further 25% off the already reduced prices of 100 bucks for Keynotes only or 150 to add a" }, { "end": 86.88000000000001, "start": 82.4, "text": " tutorial from Sven. These prices are for the first 25 people to register. Now I can see from" }, { "end": 91.68, "start": 86.88000000000001, "text": " personal experience I've used Ray's RL lib and I have recommended it for consulting clients." }, { "end": 96.32000000000001, "start": 91.68, "text": " It's easy to get started with, but it's also highly scalable and supports a variety of advanced" }, { "end": 101.92, "start": 96.32000000000001, "text": " algorithms and settings. Now on to our episode. Sven Mika is the reinforcement learning team lead" }, { "end": 108.56, "start": 101.92, "text": " at any scale and lead commitor of RL lib. He holds a PhD in bio mathematics, bioinformatics and" }, { "end": 113.44, "start": 108.56, "text": " computational biology from Wittenhaireddecker University. Thank you Sven for joining us today." }, { "end": 116.88, "start": 113.44, "text": " Hey Robin, nice to meet you. Nice to be here. Thanks for having me." }, { "end": 120.8, "start": 116.88, "text": " Great to have you. So can we start with any scale? What does any scale do?" }, { "end": 128.88, "start": 120.8, "text": " Yeah, so any scale is the startup behind the Ray open source library, which is a Python package" }, { "end": 138.4, "start": 130.32, "text": " that is supposed to make distributed computing very, very easy. The Ray package comes with" }, { "end": 145.04000000000002, "start": 138.4, "text": " several what we call libraries. Mostly related to machine learning, for example, our lib for" }, { "end": 152.08, "start": 145.04000000000002, "text": " reinforcement learning, we have Ray surf for like model serving and so on. 
The idea of or the" }, { "end": 157.44, "start": 152.08, "text": " bets that we are making at any scale and in our philosophies that distributed computing is" }, { "end": 164.4, "start": 157.44, "text": " really hard. It's normally something that as a software developer you would like to outsource" }, { "end": 168.8, "start": 164.4, "text": " somehow when you work. If you want to write a for example a machine learning distributed" }, { "end": 174.56, "start": 168.8, "text": " application, you would probably not want to worry about this aspect of your work. So the idea is" }, { "end": 182.08, "start": 174.56, "text": " to have a platform, the end-scale platform where you can very easily bring up a cluster either on" }, { "end": 190.16, "start": 182.08, "text": " right now we support Amazon or GCP. Then run your preferably of course Ray applications but not" }, { "end": 195.76, "start": 190.16, "text": " just a restricted to Ray but any distributed application on the platform. The idea is to have both" }, { "end": 203.68, "start": 195.76, "text": " this OSS or the open source Ray system that will draw five users into becoming customers for any" }, { "end": 210.48, "start": 203.68, "text": " scale for this any scale platform. So we're roughly a hundred people right now. We collected" }, { "end": 218.16, "start": 211.68, "text": " more than a hundred million investor money so far and we have been around for roughly three years" }, { "end": 225.44, "start": 218.16, "text": " I believe. I joined any scale two and a half years ago as the RL person first or now we kind" }, { "end": 231.35999999999999, "start": 225.44, "text": " of since roughly a year ago we grew into a larger team of five full-time RL engineers" }, { "end": 236.8, "start": 232.16, "text": " and my team is responsible for developing and maintaining this RLIP library within the Ray Open" }, { "end": 241.51999999999998, "start": 236.8, "text": " Source system. We're here to talk about RLIP mostly but RLIP is based on a Ray so can you tell us" }, { "end": 247.84, "start": 241.51999999999998, "text": " a bit about Ray to get started? 
Yeah so with Ray you can either specify or recall these tasks" }, { "end": 253.84, "start": 247.84, "text": " so these are functions that you can tack with a Python decorator and then the function you can" }, { "end": 260.8, "start": 253.84, "text": " you can call it like say let's say thousand times and the function then gets executed on different" }, { "end": 266.88, "start": 260.8, "text": " notes based on your your resources that you have in the cluster in parallel and they can collect" }, { "end": 272.48, "start": 266.88, "text": " the results in parallel and this works locally for example with multi processing but also on a" }, { "end": 277.68, "start": 272.48, "text": " cluster with different machines and this is the easy case you have a function the harder cases" }, { "end": 283.84000000000003, "start": 277.68, "text": " you have a class and then we call this an actor so the class has a state and you can you can tag it" }, { "end": 290.48, "start": 283.84000000000003, "text": " the same way this is the the at ray dot remote tag that you put on your class and then you have" }, { "end": 295.28000000000003, "start": 290.48, "text": " these actors run on the on the cloud on the different machines using different types of resources" }, { "end": 300.8, "start": 295.28000000000003, "text": " that you can specify for example by default it's just a CPU but you can also of course think about" }, { "end": 307.36, "start": 300.8, "text": " actors that utilize GPUs and then you can kind of like sort of ping these actors by calling them" }, { "end": 313.44, "start": 307.36, "text": " methods kind of like think about a microservice that's that you would like to utilize on an array" }, { "end": 321.04, "start": 313.44, "text": " of microservice that you would like to to request data from and our lip utilizes Ray in such a way" }, { "end": 328.56, "start": 321.04, "text": " that the most common case that our lip taps into using Ray is the the environment parallelization" }, { "end": 334, "start": 328.56, "text": " so you have instead of just having a single environment that you step through collecting some data" }, { "end": 340.96, "start": 334, "text": " and then you you you learn of that data our lip by default is already distributed so so for" }, { "end": 346.32, "start": 340.96, "text": " example if you take the ppu algorithm our default setting our different configuration for that" }, { "end": 352.24, "start": 346.32, "text": " algorithm uses two of these actors or two of these we call them rollout workers that's the class" }, { "end": 358.08, "start": 352.24, "text": " and then we make it a ray actor by by decorating it and each of these rollout workers has has" }, { "end": 366.08, "start": 358.08, "text": " environment copies either one or more for batch four passing and so you can so ppu can can collect" }, { "end": 372.32, "start": 366.08, "text": " samples much faster than than a single environment ppu would be able to this is like a very simple" }, { "end": 379.68, "start": 372.32, "text": " example we have other algorithms like a two c a three c and then the newer ones that works in" }, { "end": 385.52, "start": 379.68, "text": " a similar way and have different very complex sometimes very complex execution patterns where you" }, { "end": 391.2, "start": 386.24, "text": " not just collect samples from the environment in parallel but you also call already calculate" }, { "end": 398, "start": 391.2, "text": " radiance on on the workers and central the results back for updating for this to 
to to to serve these" }, { "end": 402.88, "start": 398, "text": " really complex execution patterns that are all requires and raise the perfect tool and as a matter" }, { "end": 408, "start": 402.88, "text": " of fact Ray was actually developed because of the need to write a reinforcement very library so" }, { "end": 412.4, "start": 408, "text": " they wanted to the rice lab at Berkeley wanted to build a reinforcement library and then they" }, { "end": 417.84, "start": 412.4, "text": " figured out we need we need some nice tool that helps us with the unifying and and taking the" }, { "end": 423.28, "start": 417.84, "text": " difficulty away from from this to the computing there's such a wide variety of settings what are the" }, { "end": 429.68, "start": 423.28, "text": " settings that are best suited for rllib in terms of off policy on policy model based model free" }, { "end": 436.56, "start": 429.68, "text": " multi agent offline etc yeah so our lib is really there's no no particular for this on any of these" }, { "end": 442.24, "start": 436.56, "text": " again except for the like limited support for models for really model based rl also what's" }, { "end": 447.04, "start": 442.24, "text": " what we see a lot is where where the traction of the whole ray ecosystem comes comes in for the" }, { "end": 451.52, "start": 447.04, "text": " users is that ray has this not just the the rl library but also the other machine learning libraries" }, { "end": 457.76, "start": 451.52, "text": " for example ray tune for hyperbrandet tuning ray surf for models serving or ray ray train for" }, { "end": 463.44, "start": 457.76, "text": " supervised learning and there's a lot of interest right now in in combining all of these like" }, { "end": 469.2, "start": 463.44, "text": " for example to if you want to do what's called online learning you train some model either with" }, { "end": 473.2, "start": 469.2, "text": " supervised learning or with reinforcement learning you deploy into production you see what happens" }, { "end": 477.92, "start": 473.2, "text": " kind of evaluated there because because you don't have you cannot do that in the simulator you need" }, { "end": 481.68, "start": 477.92, "text": " to see what what it's doing production you collect more data in production and then you use that" }, { "end": 487.04, "start": 481.68, "text": " that new data to to retrain and to kind of like repeat the whole process so that's that's one of" }, { "end": 491.84, "start": 487.04, "text": " the other strength of rllib because it's so well integrated with these with these other machine" }, { "end": 497.35999999999996, "start": 491.84, "text": " learning libraries that that where it comes with so we featured authors of other major rl libraries" }, { "end": 504.08, "start": 497.35999999999996, "text": " on the show uh Anton and Reffa and Ashley Hill who wrote stable baselines and Pablo Samuel Castro who" }, { "end": 510.15999999999997, "start": 504.08, "text": " wrote dopamine um how do you situate rllib in a landscape with these other types of rl libraries" }, { "end": 516.16, "start": 510.71999999999997, "text": " yeah that's a very question we've we've thought about this ourselves for for like a lot and we did" }, { "end": 522.64, "start": 516.16, "text": " some surveys trying to find out what what people use and and why they use other libraries of an rllib" }, { "end": 529.92, "start": 522.64, "text": " and where our lip stands in this in this ecosystem of our libraries or um and yes stable baselines" }, { 
"end": 537.04, "start": 529.92, "text": " definitely is probably the um still is the go-to tool for when you when you start with rl and when" }, { "end": 542.64, "start": 537.04, "text": " you want to just understand maybe some algorithm um because it's the implementation is a little" }, { "end": 551.36, "start": 542.64, "text": " simpler you have only one environment um kind of a single single batch uh setup um rllib yes it's" }, { "end": 558.08, "start": 551.36, "text": " it's the the heavier version of of an rl library because of the scalability and the other really" }, { "end": 563.84, "start": 558.08, "text": " nice features that's that's uh we believe uh it's it stands out from from from the other from the" }, { "end": 569.4399999999999, "start": 563.84, "text": " crowd uh which are for example multi-agent support uh strong offline rl support uh we support both" }, { "end": 576.08, "start": 569.44, "text": " tens of flow and pie torch all types of models you can you can plug in an LSTM we have those of the" }, { "end": 583.36, "start": 576.08, "text": " shelf that you can just use uh attention that's a regular CNN networks and MLPs um so that's that's" }, { "end": 588.6400000000001, "start": 583.36, "text": " where we see our llib in in the place where if you have larger workloads uh we have we have" }, { "end": 594.4000000000001, "start": 588.6400000000001, "text": " customers that use uh 100 workers um so they they step through 100 environments in parallel" }, { "end": 599.1999999999999, "start": 594.4, "text": " uh the environment sits maybe on some server that they have to connect through uh these really complex" }, { "end": 606.3199999999999, "start": 599.1999999999999, "text": " large scale uh distributed workloads um are are yeah pretty standard for for using all the uh" }, { "end": 611.36, "start": 606.3199999999999, "text": " other around our llib is pretty standard for for supporting these uh we are trying to tackle this" }, { "end": 615.68, "start": 611.36, "text": " the problem of complexity and the problem of this deep learning curve that people that will tell" }, { "end": 621.12, "start": 615.68, "text": " us we have and we realize that as well of course um by different different uh projects that we're" }, { "end": 626.64, "start": 621.12, "text": " working on right now so we have a we have a huge push for the over last half year in simplifying" }, { "end": 632.8, "start": 626.64, "text": " APIs uh and also uh so that's that's one topic I can go a little bit more in detail if it like uh" }, { "end": 638.32, "start": 632.8, "text": " that's that's one thing we simplify the APIs make the code more structure more transparent" }, { "end": 643.76, "start": 638.32, "text": " uh more self-explicatory and the other the other larger um item that we have on our list" }, { "end": 649.44, "start": 643.76, "text": " is better documentation um more examples uh maybe create some youtube videos on on how to do cool" }, { "end": 653.9200000000001, "start": 649.44, "text": " stuff uh with our llib like very typical like how to set up typical things typical experiments with" }, { "end": 659.7600000000001, "start": 653.9200000000001, "text": " all of uh and so on well I have no doubt that you'll get that and more done um but to be fair" }, { "end": 665.2, "start": 659.7600000000001, "text": " to stable baselines they do support vector environments where you can run uh many environments but I" }, { "end": 670.5600000000001, "start": 665.2, "text": " believe that they are limited to a single 
machine which array of course uh doesn't have" }, { "end": 677.9200000000001, "start": 670.5600000000001, "text": " the limitation right how big can these runs get with our llib uh yeah so again I mentioned before" }, { "end": 684.0799999999999, "start": 677.92, "text": " we had so we we have seen users use uh hundreds and and more workers uh we we have run experiments with" }, { "end": 691.52, "start": 684.0799999999999, "text": " 250 as they believe on on for example on ePala some benchmarks use these so these run on" }, { "end": 697.76, "start": 693.1999999999999, "text": " yeah really large classes with like one head note that has a couple GPUs and then you have" }, { "end": 704.48, "start": 699.1999999999999, "text": " dozens of small or CPU machines so that these these environment workers can run on those" }, { "end": 709.6, "start": 704.48, "text": " uh and we've seen these these workloads used also by our by our users slash customers um in a" }, { "end": 715.04, "start": 709.6, "text": " meaningful way the the other access that comes in here for scaling is the the hyperparameter tuning" }, { "end": 720.4, "start": 715.04, "text": " access so uh this is this this this could be like a single job right where you say I have 100" }, { "end": 724.88, "start": 720.4, "text": " environment workers uh and and on the head note for for learning for updating my model I use a" }, { "end": 730.8000000000001, "start": 724.88, "text": " couple of GPUs but you can also then scale this further on another access and say I have uh" }, { "end": 735.3599999999999, "start": 730.8, "text": " eight different hyperparameter sets that I would like to try uh or different model architecture so" }, { "end": 740.8, "start": 736.16, "text": " so again by combining our llib with with other ray libraries uh ray tune in this case" }, { "end": 746.8, "start": 742.3199999999999, "text": " uh you can uh yeah you can you can think that that this becomes even even larger uh" }, { "end": 751.76, "start": 746.8, "text": " job and then sure you can you can run hyperparameter in sequence but you would also like to to" }, { "end": 756.7199999999999, "start": 751.76, "text": " paralyze you can you tell us about some of the use cases you've seen for rllib uh what the customers" }, { "end": 761.0400000000001, "start": 756.72, "text": " are doing with it yeah I can talk about that that's actually well actually one of the most exciting parts" }, { "end": 768.32, "start": 761.0400000000001, "text": " of working with all that um so we are currently our rough idea is that we have uh two major" }, { "end": 772.48, "start": 769.0400000000001, "text": " bets that we're taking right now for the for the next couple of months to work on which is the" }, { "end": 780.72, "start": 772.48, "text": " gaming industry as well as uh the rex's uh sector um and uh for let me talk about the gaming" }, { "end": 786.32, "start": 780.72, "text": " industry we have uh two customers that have already presented publicly about what they're doing so" }, { "end": 792.08, "start": 786.32, "text": " I can I can talk about this here uh which is wildlife studios as well as white games and uh the" }, { "end": 796.88, "start": 792.08, "text": " interesting thing is that they use our lip for very different setups or use cases wildlife is" }, { "end": 803.2800000000001, "start": 796.88, "text": " building a um or has built an in-game items sales recommend a system that basically figures out" }, { "end": 808.6400000000001, "start": 803.2800000000001, "text": " prices 
for the for the players um that they would probably pay for some some items that they can" }, { "end": 813.36, "start": 808.6400000000001, "text": " buy for the games uh and they have used our lip for for that um also like in an online fashion kind of" }, { "end": 817.92, "start": 813.36, "text": " like training with our lip offline we use an offline or a lip then deploying into production uh" }, { "end": 822.24, "start": 817.92, "text": " using different OPE methods to figure out what's what's what could be the best model uh using" }, { "end": 826.72, "start": 822.24, "text": " ray surf for for the price serving and then collecting more data bring it all back and kind of" }, { "end": 831.44, "start": 826.72, "text": " repeating the cycle um and then right games uh does does this classic thing where they have" }, { "end": 835.84, "start": 831.44, "text": " they have these these multiplayer adversarial games where different teams play against each other" }, { "end": 841.6, "start": 836.48, "text": " and one of the main challenges for for these kind of games studios is that they have to make sure" }, { "end": 847.0400000000001, "start": 841.6, "text": " the games are fair and and uh there's uh there's no imbalance in the game maybe because you're" }, { "end": 851.6, "start": 847.0400000000001, "text": " picking a different character or different weapons and all these things um so that the game doesn't" }, { "end": 857.2, "start": 851.6, "text": " get boring um so that's that's the big challenge uh where they can normally they would use uh" }, { "end": 862.32, "start": 857.2, "text": " testers that would play the games a couple times and this is very expensive and very tedious uh so" }, { "end": 866.48, "start": 862.32, "text": " we've been much nicer to have a bot that can play the game against itself using using self-play" }, { "end": 871.84, "start": 866.48, "text": " uh and then learn how to uh or like kind of like figure out automatically where this um where" }, { "end": 876.5600000000001, "start": 871.84, "text": " this exploits could be where this imbalances could could be located for example they've they figured" }, { "end": 881.76, "start": 876.5600000000001, "text": " out that one one card and one of their uh cards like games uh was very powerful and they had to" }, { "end": 886.72, "start": 881.76, "text": " reduce the value of the cards by one and that that completely fixed like this imbalance" }, { "end": 891.36, "start": 886.72, "text": " when i think of recommender systems i often think of the one step case the banded case" }, { "end": 896.24, "start": 891.36, "text": " is that what you're talking about here or also the full rl setting the multi-step uh case" }, { "end": 901.04, "start": 896.24, "text": " yeah correct i'm actually talking about both so we we still see a lot of companies trying banded" }, { "end": 908, "start": 901.04, "text": " uh as a single step kind of like try to optimize the very next reward kind of setup um but also uh" }, { "end": 913.92, "start": 908, "text": " but also you have these these companies that always think in long-term effects on the recommendations" }, { "end": 920.16, "start": 913.92, "text": " on engaging the user uh maybe it has some some negative effect that you uh that you always make" }, { "end": 924.5600000000001, "start": 920.16, "text": " some recommendations and the user clicks on it engages with it uh and kind of gets gets tired maybe" }, { "end": 930.16, "start": 924.56, "text": " of the content so these these considerations uh 
kind of slowly creep into their um yeah into" }, { "end": 936.0799999999999, "start": 930.16, "text": " their considerations of of the problems that they want to solve so this the session-based uh long" }, { "end": 942.88, "start": 936.0799999999999, "text": " long-range um um yeah kind of the delayed reward settings that you can only use with classic rl" }, { "end": 948.7199999999999, "start": 944.3199999999999, "text": " and yeah there's a lot of movement right now a lot of uh companies uh want to want to try our" }, { "end": 953.92, "start": 948.7199999999999, "text": " l for xs where before they used either either some non-machine learning methods or like supervised" }, { "end": 959.36, "start": 953.92, "text": " learning uh now they i think they figured out that this this end-to-end setup of rl is really nice" }, { "end": 964, "start": 959.36, "text": " and you can uh it just gives you an action and you can just use that without having to program more" }, { "end": 969.68, "start": 964, "text": " logic into the system uh but it's it's very hard i feel like one one of the challenges here in" }, { "end": 975.5999999999999, "start": 969.68, "text": " rex's maybe to explain this uh is is the if you have to recommend several items so like um" }, { "end": 979.8399999999999, "start": 975.5999999999999, "text": " think about youtube and you have you go to youtube and you have these eight or or there's a couple of" }, { "end": 986.5600000000001, "start": 979.84, "text": " slots that are filled with with recommended videos uh it's it's quite uh crazy for the action space" }, { "end": 990.96, "start": 986.5600000000001, "text": " with what this means if you have to fill uh it kind of explodes easily if you think about the" }, { "end": 997.44, "start": 991.76, "text": " number of videos that youtube has several dozen billions i think um and you have to pick" }, { "end": 1002.88, "start": 997.44, "text": " 12 of those that that makes your action space uh quickly explode if you don't have like a nice" }, { "end": 1008.24, "start": 1002.88, "text": " pre-selection method that you probably want to put there uh which has nothing to do with our" }, { "end": 1012.24, "start": 1008.24, "text": " stuff so you have to be careful that it's it's really like a big challenge i find it really really" }, { "end": 1017.28, "start": 1012.24, "text": " difficult uh that's just one problem the other problem is the user model like how do you uh how do" }, { "end": 1021.6800000000001, "start": 1017.28, "text": " you do this all without without a simulator maybe maybe you have a simulator but maybe like how" }, { "end": 1026.16, "start": 1021.6800000000001, "text": " do you program the user into the simulator the user behavior uh especially the long-term" }, { "end": 1030.24, "start": 1026.16, "text": " behavior the long-term effects on on what you do with the user what you recommend to the user" }, { "end": 1035.04, "start": 1030.96, "text": " into this model i find it extremely challenging uh an extremely challenging problem uh so other" }, { "end": 1040.96, "start": 1035.04, "text": " use cases uh that we're seeing uh there's there's a rise in interest in our our finance we have" }, { "end": 1046.08, "start": 1040.96, "text": " jpm for example i can i can say that because also they uh publicly spoke about this using our" }, { "end": 1050.72, "start": 1046.08, "text": " lip as well as other companies that we're seeing uh moving into the space uh and and to try our" }, { "end": 1056.8799999999999, "start": 1050.72, 
"text": " L on financial decision making by sell decisions uh and and so on um and then another one is" }, { "end": 1062.1599999999999, "start": 1056.8799999999999, "text": " uh we have we have seen some some attempts and then self-driving cars for robotics it does feel like" }, { "end": 1066.8000000000002, "start": 1062.16, "text": " some some some verticals are further ahead uh another one is logistics which is which is also" }, { "end": 1072.24, "start": 1066.8000000000002, "text": " very further ahead or like this this whole process optimization um uh sector where you have some" }, { "end": 1078.16, "start": 1072.24, "text": " maybe you have some factory uh like a chemical factory and and you would like to optimize different" }, { "end": 1084.88, "start": 1078.16, "text": " different processes there uh through through RL uh that's also very very far ahead um already and uh" }, { "end": 1090.5600000000002, "start": 1084.88, "text": " but but yeah the different verticals have have made a different amounts of progress into into moving" }, { "end": 1096.1599999999999, "start": 1090.56, "text": " into RL and the the only problem that we see right now still is that it's not at large scale yet so" }, { "end": 1102, "start": 1096.1599999999999, "text": " we see single companies starting to think about it but i don't think we are quite at the point where" }, { "end": 1107.04, "start": 1102, "text": " where really everyone wants to wants to do it right now um but i think we are we're close uh so" }, { "end": 1111.28, "start": 1107.04, "text": " so maybe it's another year uh it's hard to tell and this is the one of the difficulties uh for us" }, { "end": 1116.96, "start": 1111.28, "text": " at any scale uh to predict uh when this really when this point will happen where where everything" }, { "end": 1123.04, "start": 1116.96, "text": " really goes goes exponential um it's it's quite a challenge can you say a bit about the roadmap for RLib" }, { "end": 1128, "start": 1123.52, "text": " what do you see in the future for RLib? 
As i already mentioned before like one important" }, { "end": 1131.76, "start": 1128, "text": " product that we're currently working on i would say where maybe 50 60 percent on with that is" }, { "end": 1137.3600000000001, "start": 1131.76, "text": " API simplification so we are uh we have realized that stable baselines for example is a" }, { "end": 1141.68, "start": 1137.3600000000001, "text": " much nicer library to use and to learn and easier to learn and and we really respect that and" }, { "end": 1148, "start": 1141.68, "text": " we would like to uh for RLib um to become or or to to have that feature or have that feel as well um" }, { "end": 1155.44, "start": 1148, "text": " so we're trying to get rid of old complicated unintuitive APIs um uh can give you an example for example" }, { "end": 1161.44, "start": 1155.44, "text": " our our algorithms uh the configurations they used to be a python dictionary so we had we didn't" }, { "end": 1167.68, "start": 1161.44, "text": " really know what the different keys were and we had some users tell us um that one of the main" }, { "end": 1172.4, "start": 1167.68, "text": " uh locations where they would hang out on the internet would be the the RLib documentation page" }, { "end": 1177.8400000000001, "start": 1172.4, "text": " where all the the config options were listed um and so so instead now we have changed this to a" }, { "end": 1183.28, "start": 1177.8400000000001, "text": " to config objects so you can create a uh an instance of a config class and then you have" }, { "end": 1188.24, "start": 1184, "text": " type-safe properties that you can set inside this class uh the class comes with different" }, { "end": 1193.92, "start": 1188.24, "text": " helper methods for example to set training related parameters uh or or rollout related parameters" }, { "end": 1198.8000000000002, "start": 1193.92, "text": " uh resource related parameters so on uh so it's much more structured it's much more IDE friendly" }, { "end": 1203.68, "start": 1198.8000000000002, "text": " you can see the different uh dog strings in your IDE uh and immediately know what's what setting" }, { "end": 1209.28, "start": 1203.68, "text": " you need to um adjust to make the algorithm um yeah hopefully learn learn a little faster" }, { "end": 1216.48, "start": 1210.0800000000002, "text": " that's one change we did we are currently also exploring uh making our models easier to to uh" }, { "end": 1225.2, "start": 1216.48, "text": " customize um before every algorithm kind of had its own yeah model API like a dqn would have" }, { "end": 1230.64, "start": 1225.2, "text": " this this uh q model thing and it would have certain methods that you recall for for handling the" }, { "end": 1236.72, "start": 1230.64, "text": " the uh dueling uh head for example um now we're trying to unify all that so that in between the" }, { "end": 1241.6, "start": 1236.72, "text": " different algorithms you can use the same uh subclasses for example for for q network or for" }, { "end": 1248.08, "start": 1241.6, "text": " policy network or for a value function network or for a transition dynamics network um and that" }, { "end": 1252.3999999999999, "start": 1248.08, "text": " will over unified and you can you can this will make it easier to to plug and play these different" }, { "end": 1257.9199999999998, "start": 1252.9599999999998, "text": " maybe pytorch or tensaflow models that you have uh flying around anyway if you if you do research" }, { "end": 1262.56, "start": 1257.9199999999998, "text": " on these 
things um and then the algorithms will just recognize any of those so it's uh it will be" }, { "end": 1268.7199999999998, "start": 1262.56, "text": " much more pluggable and much more intuitive um to set up uh different arbitrarily complex models for" }, { "end": 1274.8, "start": 1268.72, "text": " the other ones so i understand our ellipse supports both pytorch and tenserflow how does that work" }, { "end": 1278.96, "start": 1274.8, "text": " yeah great question uh it makes things much more complicated yes uh we don't have an internal" }, { "end": 1287.28, "start": 1280, "text": " rl-lip specific deep learning framework no we just uh basically do everything um twice so so" }, { "end": 1292.96, "start": 1287.28, "text": " but but it's it's simpler than that so the the top um concept in our lip is the algorithm which is" }, { "end": 1298.8, "start": 1292.96, "text": " completely framework agnostic it determines when you when you sample or if it determines when things" }, { "end": 1303.68, "start": 1298.8, "text": " should happen so when should the sample when should i put the samples into my bibliopeliver" }, { "end": 1308.88, "start": 1303.68, "text": " when should the sample from the repeller for when should i update my uh policy network from from" }, { "end": 1315.76, "start": 1308.88, "text": " that data completely from agnostic uh just you just pass around abstract uh extract uh objects" }, { "end": 1321.28, "start": 1315.76, "text": " and concepts and then on the one level below you have the we have what we call the policy" }, { "end": 1326.96, "start": 1321.28, "text": " uh and that one is framework specific so we have a tenserflow policy superclass and the" }, { "end": 1332.8, "start": 1326.96, "text": " the torched policy superclass um and the different algorithms for example ppo and dqn they have" }, { "end": 1337.68, "start": 1332.8, "text": " their own loss functions which are part of this policy written uh in in two in two ways in" }, { "end": 1344.16, "start": 1337.68, "text": " in tenserflow and in pytorch uh the the problem of tenserflow one with the sessions and tenserflow" }, { "end": 1350.16, "start": 1344.16, "text": " two with with eager and and uh not using sessions and placeholders we solve by kind of automating this" }, { "end": 1354.48, "start": 1350.16, "text": " away so you really only have to write one tenserflow loss function to support both these versions" }, { "end": 1359.92, "start": 1355.2, "text": " but yes for each algorithm we have to write both both loss functions but that's mostly it" }, { "end": 1364.72, "start": 1360.72, "text": " and then the other thing you have to be careful of course is the default models so we have we have" }, { "end": 1373.6000000000001, "start": 1364.72, "text": " a set of different default models nlp's some some simple cnn setup as well as an LSTM setup" }, { "end": 1379.8400000000001, "start": 1373.6000000000001, "text": " of course those also we provide in both tenserflow and pytorch but the main work is really for" }, { "end": 1384.1599999999999, "start": 1379.84, "text": " loss functions if you want to implement a new algorithm for our users it doesn't matter they they" }, { "end": 1389.1999999999998, "start": 1384.1599999999999, "text": " just implement one of these uh and then of course their algorithm only exists in that one world" }, { "end": 1394.56, "start": 1389.1999999999998, "text": " but for the the built-in algorithms that come with our lip uh we went through the work and" }, { "end": 1399.84, "start": 1394.56, 
"text": " implemented the loss functions in both uh in both frameworks but it's it's not as work it's not" }, { "end": 1405.52, "start": 1399.84, "text": " as bad as it sounds it's not as much work i think as people would fear it is so that's that's a good" }, { "end": 1412.48, "start": 1405.52, "text": " thing does the tenserflow side still get a lot of use seems in the research side pytorch is more" }, { "end": 1418.32, "start": 1412.48, "text": " common i think so so a lot of industry users still on tenserflow they believe in the in the speed" }, { "end": 1423.68, "start": 1418.32, "text": " in the performance we we have seen weird stuff with torsially that sometimes it runs super fast" }, { "end": 1430.08, "start": 1423.68, "text": " depending on the machine you are on whether you're on an mprocessor or uh the also the GPU" }, { "end": 1435.4399999999998, "start": 1430.08, "text": " CUDA versions play a big role but a lot of industry still use tenserflow the tenserflow versions of" }, { "end": 1441.1999999999998, "start": 1435.4399999999998, "text": " the algorithms um also sometimes they don't really care they use everything out of the out of the box" }, { "end": 1445.6799999999998, "start": 1441.1999999999998, "text": " so they don't really have their own models they just use everything that comes with our lip anyways" }, { "end": 1450.32, "start": 1446.48, "text": " so then in this situation they can easily switch and compare which is also very nice" }, { "end": 1456.72, "start": 1450.32, "text": " but traditionally we have seen that tenserflow still has some some edge over pytorch performance wise" }, { "end": 1462.8, "start": 1456.72, "text": " but we also we from time to time we we look into pytorch and see why um or like how we can make" }, { "end": 1469.76, "start": 1462.8, "text": " it faster and which which they're these jat tricks that you can apply to to make it like similar to" }, { "end": 1475.1200000000001, "start": 1469.76, "text": " to how you would um yeah use tenserflow 2 with with the with the egotracing to make it faster" }, { "end": 1480.4, "start": 1475.92, "text": " we're still kind of like working on that one um but yeah we see we still see a lot of people on" }, { "end": 1487.3600000000001, "start": 1480.4, "text": " tenserflow definitely um i think in the research era the probably pytorch has has the uh the" }, { "end": 1493.52, "start": 1487.3600000000001, "text": " edge now but i think in industry it's still pretty undecided to this point we had jordan terrion" }, { "end": 1500.24, "start": 1493.52, "text": " recently who maintains gym and they were talking about our all systems that have both the agent" }, { "end": 1506.64, "start": 1500.24, "text": " and the environment in the GPU and so the entire oral clock cycle is happening within the GPU" }, { "end": 1512.3200000000002, "start": 1506.64, "text": " um is there any plans to do anything like that with our alib? 
yeah we've seen this come up in" }, { "end": 1517.3600000000001, "start": 1512.3200000000002, "text": " in discussions with with with jordan himself but also with our users and customers or potential" }, { "end": 1522.8000000000002, "start": 1517.3600000000001, "text": " customers the need to to do exactly that so to be on the GPU in your environment because maybe" }, { "end": 1529.0400000000002, "start": 1522.8000000000002, "text": " the environment has a complex image space observation space um and then to to not have to copy all the" }, { "end": 1534.16, "start": 1529.0400000000002, "text": " data from the GPU on the environment back to the CPU send it through the ray object store" }, { "end": 1538.96, "start": 1534.16, "text": " uh and then and then basically move it back to the GPU for for learning updates we have seen" }, { "end": 1543.92, "start": 1538.96, "text": " this this question a lot uh and we started thinking about we start experimenting with it um another" }, { "end": 1548.96, "start": 1543.92, "text": " possible setup is to even think about having um different GPUs so you have the environment on one" }, { "end": 1556.4, "start": 1548.96, "text": " GPU uh on on several and then the the central GPU for learning um how can you realize direct GPU" }, { "end": 1564.8000000000002, "start": 1556.4, "text": " to GPU communication to to also speed this up to at least avoid the CPU copy um and we we have come" }, { "end": 1569.8400000000001, "start": 1564.8000000000002, "text": " to the conclusion that this is more like a problem that we should solve in in ray core um via the object" }, { "end": 1575.52, "start": 1569.8400000000001, "text": " store so the object store the the thing that ray works with that uh basically is available from" }, { "end": 1581.44, "start": 1575.52, "text": " all the different nodes in a cluster uh things go into the object store as read only they get serialized" }, { "end": 1585.92, "start": 1581.44, "text": " put there and then uh you can class the reference around then with the reference on the other side" }, { "end": 1590.24, "start": 1585.92, "text": " uh of the cluster you can you can pull it out of the object store but this is all before stuff goes" }, { "end": 1594.0800000000002, "start": 1590.24, "text": " to the object store this is all being copied to the CPU so that's currently the problem uh we're" }, { "end": 1599.1200000000001, "start": 1594.0800000000002, "text": " trying to solve that um if we can say uh this this particular data should yes it should go to the" }, { "end": 1605.2, "start": 1599.1200000000001, "text": " object store but please uh leave it on uh on the GPU or or send it directly to this other GPU in the" }, { "end": 1610.4, "start": 1605.2, "text": " cluster uh we are we're currently working on this it's on our roadmap and uh but we have to still" }, { "end": 1616.5600000000002, "start": 1610.4, "text": " figure out a lot of details uh related to our lip um for example yeah what's what's what does" }, { "end": 1622.5600000000002, "start": 1616.5600000000002, "text": " this mean for the environments um then we may need to add jack support uh because you have the" }, { "end": 1628.8000000000002, "start": 1622.5600000000002, "text": " the nice um pi jacks API that you can then use um and yeah we currently don't don't support jacks" }, { "end": 1634, "start": 1628.8000000000002, "text": " but this is this is on our roadmap and this may may happen pretty soon yes can you tell us a bit" }, { "end": 1639.8400000000001, "start": 
1634, "text": " about the team behind uh our live yeah great question so we have a as i was i mentioned before" }, { "end": 1644.8799999999999, "start": 1639.84, "text": " the our team sizes roughly we have five full-time engineers uh we had a couple interns that finished" }, { "end": 1651.52, "start": 1644.8799999999999, "text": " their uh fantastic projects uh already uh one one finished yesterday um Rohan who worked on" }, { "end": 1658.32, "start": 1651.52, "text": " of policy evaluation uh and nice new API for this uh and as well as the uh wrobos implementation" }, { "end": 1662.6399999999999, "start": 1658.32, "text": " that we have right now in our lip um and the other one child who was working on the decision" }, { "end": 1669.04, "start": 1662.6399999999999, "text": " transformer notation um it's quite a challenge uh for myself i'm i'm working remotely from from" }, { "end": 1676.24, "start": 1669.04, "text": " germany um and most of the other people are in san francisco um but we we have a pretty solid" }, { "end": 1682.3999999999999, "start": 1676.8799999999999, "text": " pipeline for like planning and and we work in sprints so we plan ahead every every two weeks" }, { "end": 1687.12, "start": 1682.3999999999999, "text": " on what everyone should should work on for the next for the next two weeks uh and then we do pretty" }, { "end": 1692.8799999999999, "start": 1687.12, "text": " solid quarterly planning where we um come up with uh with with lots of thoughts of what what" }, { "end": 1699.1200000000001, "start": 1692.88, "text": " we're direction that our lips should go into what's what's needed by the um by the users by the important" }, { "end": 1706.48, "start": 1699.1200000000001, "text": " customers um and what also what what things are we predicting to to happen next like is gaming" }, { "end": 1711.0400000000002, "start": 1706.48, "text": " going to be the next big thing in the next six months or so um so all this goes into the planning" }, { "end": 1716.72, "start": 1711.0400000000002, "text": " and then we come up with a uh they're quite detailed kind of like by the by the engineering week" }, { "end": 1721.92, "start": 1716.72, "text": " uh sort of planning um what everyone should work on uh distribute this among the engineers uh and then" }, { "end": 1725.8400000000001, "start": 1721.92, "text": " during the quarter make sure that we help each other out uh if they're if they're road blocks or if" }, { "end": 1730.5600000000002, "start": 1725.8400000000001, "text": " they're like uh someone gets stuck somewhere with uh the whole team works out and it's working" }, { "end": 1736.16, "start": 1730.5600000000002, "text": " really quite well we have only been together for a couple months now i have to say so the last two" }, { "end": 1739.92, "start": 1736.16, "text": " and a half years since i joined any scale most of the time i was more or less working alone on our" }, { "end": 1744.8000000000002, "start": 1739.92, "text": " lip uh maybe had some help from from interns in between uh also in the beginning uh Eric was still" }, { "end": 1750.88, "start": 1744.8000000000002, "text": " there uh who then moved into the rake core team but the the this our lip team this this really like" }, { "end": 1754.88, "start": 1750.88, "text": " larger team that's that's working professionally full-time on our lip uh has only been around for" }, { "end": 1759.8400000000001, "start": 1754.88, "text": " really a couple of months now since the beginning of this year so um and i feel like 
it's it's working" }, { "end": 1764.88, "start": 1759.8400000000001, "text": " really well like we we're getting a lot of stuff done with our lip is changing uh quite a lot" }, { "end": 1770.64, "start": 1764.88, "text": " right now as we go towards ray 2.0 uh and i'm really really happy about um the the the" }, { "end": 1777.8400000000001, "start": 1770.64, "text": " yeah intermediate results so far uh i look forward really to to um yeah to all the nice changes that" }, { "end": 1783.1999999999998, "start": 1777.84, "text": " that are to come can i ask you about testing how do you test rlib yeah uh yeah testing that's" }, { "end": 1787.04, "start": 1783.1999999999998, "text": " actually one of the pain points we discussed recently in in uh like how can we be more efficient" }, { "end": 1794.3999999999999, "start": 1787.04, "text": " testing right now yeah we have we have a c-i it uh we use build kite for our c-i tests um and" }, { "end": 1800.72, "start": 1795.36, "text": " the the rlib when you when you do a uh branch ship uh from from master and then you have your own" }, { "end": 1807.4399999999998, "start": 1801.28, "text": " um branch and you push a um an update to your PR uh takes about yeah more than an hour to" }, { "end": 1811.76, "start": 1807.44, "text": " truthfully run because we have we have all the unit tests we have the what we call the c-i" }, { "end": 1815.3600000000001, "start": 1811.76, "text": " learning test which are like smaller learning tests on carpool and pendulum environments" }, { "end": 1820, "start": 1815.92, "text": " for all the different algorithms for all the different frameworks uh sometimes even different" }, { "end": 1824.96, "start": 1820, "text": " different settings and different model types uh so that's roughly an hour uh which which it" }, { "end": 1829.2, "start": 1824.96, "text": " shouldn't shouldn't take that long we we can there's a lot of things that we can do better to to" }, { "end": 1835.2, "start": 1829.2, "text": " maybe cut it in half uh one of these things is our lip is not dependent on uh on building stuff" }, { "end": 1840.4, "start": 1835.2, "text": " our lips really just source files it's a pure python uh library you we should be able to just" }, { "end": 1845.76, "start": 1840.4, "text": " pull a a ready container and just run on there uh that's something we can we can have we optimize um" }, { "end": 1851.52, "start": 1845.76, "text": " and then we have daily uh release tests that we run so uh and those are like heavier or hard" }, { "end": 1857.52, "start": 1851.52, "text": " of task learning tests on on a tary on on uh mojoko uh also for the different algorithms for the" }, { "end": 1863.76, "start": 1857.52, "text": " different frameworks uh tens of full and pie torch um and those take uh several hours to run on" }, { "end": 1870.08, "start": 1863.76, "text": " like expensive GPU machines but we uh also there we did a lot of progress lately we add a lot of" }, { "end": 1875.36, "start": 1870.08, "text": " algorithms so we have much more trust in in ourselves right now uh which is very good um and very" }, { "end": 1882.32, "start": 1875.36, "text": " important um uh but yeah it's it's a it's a huge pain point as uh as as a yeah team of always as" }, { "end": 1888.8, "start": 1882.32, "text": " developers so i understand rlib is not your first rl library uh can you tell us about how you got" }, { "end": 1894.6399999999999, "start": 1888.8, "text": " here yeah sure so the my my journey to rl actually started with uh with with 
games so i was i was" }, { "end": 1900.96, "start": 1894.6399999999999, "text": " looking at the arvel engine uh in i think it was 2016 uh and i wanted to i had this crazy idea of" }, { "end": 1906.1599999999999, "start": 1900.96, "text": " writing a system that uh where you have like a game world and then you have some some characters" }, { "end": 1910.8799999999999, "start": 1906.1599999999999, "text": " in there and the characters would kind of like learn how to uh interact and kind of play the game" }, { "end": 1915.9199999999998, "start": 1910.8799999999999, "text": " and i did wasn't even didn't even know much about rl back then um but this idea of like" }, { "end": 1920.64, "start": 1915.92, "text": " creating kind of like a background story by just making the characters uh become smarter and smarter" }, { "end": 1926.0800000000002, "start": 1920.64, "text": " and and act in this world um it got me into into rl and then i figured this is the perfect method to" }, { "end": 1931.44, "start": 1926.0800000000002, "text": " to use for this to solve this kind of problem um so that's when i started learning what rl and and" }, { "end": 1937.28, "start": 1931.44, "text": " uh also writing some some some my own libraries up front or some own algorithms uh and i started" }, { "end": 1942.96, "start": 1937.28, "text": " with uh joining the tensor force group uh the tensor force open source project uh that was in" }, { "end": 1950.64, "start": 1942.96, "text": " 2017 um and then we yeah tensor force is another another open source our library um and then with" }, { "end": 1955.6000000000001, "start": 1950.64, "text": " some of these people from the tensor force uh always is our github repo we started our graph" }, { "end": 1963.04, "start": 1956.48, "text": " um in this was 2018 to the 19 uh we published a paper comparing ourselves or comparing our" }, { "end": 1969.68, "start": 1963.04, "text": " graph with our lip uh and that paper then uh got attention uh from from the indescale team uh" }, { "end": 1976.5600000000002, "start": 1969.68, "text": " and uh at the end of 2019 uh i believe indescale reached out to me uh and also to other people from" }, { "end": 1981.2, "start": 1976.5600000000002, "text": " the our graph team and and asked us whether we would like to work for them. 
Yeah and then so real" }, { "end": 1986, "start": 1981.2, "text": " it's a really smaller library that i've came up with uh i wanted to i was always obsessed with the" }, { "end": 1991.76, "start": 1986, "text": " idea to make it really really simple to implement an algorithm um like it shouldn't be harder than" }, { "end": 1996.96, "start": 1992.64, "text": " you read the paper you see the pseudocode you understand pseudocode and then you just" }, { "end": 2002.48, "start": 1996.96, "text": " use as many lines as are in the pseudocode to code the algorithm in a library it's it's uh" }, { "end": 2007.76, "start": 2002.48, "text": " it's a tough goal to achieve but that that should be the ideal in in my opinion for for any rl" }, { "end": 2014.32, "start": 2007.76, "text": " library or the ultimate goal uh and that was that was surreal so i tried to really uh implement the" }, { "end": 2021.1200000000001, "start": 2014.32, "text": " algorithms in a very kind of like uh dense but easy to read fashion um and i had some some algorithms" }, { "end": 2026.48, "start": 2021.1200000000001, "text": " and there i think ppo ssc dqn um yeah so that was there was just like some small site project it was" }, { "end": 2030.4, "start": 2026.48, "text": " it was also related to games i had it i had a module in there where you could plug in the unrelangian" }, { "end": 2036.08, "start": 2030.4, "text": " tool to this um to this our library uh and then learn some some smaller problems very similar to" }, { "end": 2040.96, "start": 2036.08, "text": " ML agents for unity uh want to do the same with with the unrelangian and that was that surreal" }, { "end": 2045.28, "start": 2040.96, "text": " Sven is there anything else you want to share with our audience while you're here uh yeah sure" }, { "end": 2052, "start": 2045.28, "text": " so we have our race summit coming up in august uh end of august 22nd and 23rd um in san francisco" }, { "end": 2055.76, "start": 2052, "text": " it's the first race summit uh this is the third one and the first one that's actually in person" }, { "end": 2060.4, "start": 2055.76, "text": " i'm super excited to be there uh we'll fly in on on uh at the end of august there to give a" }, { "end": 2066.6400000000003, "start": 2060.4, "text": " tutorial on our lip um and there are other cool talks about how people use uh ray in industry how" }, { "end": 2071.6000000000004, "start": 2066.6400000000003, "text": " people use race libraries in industry um of course also a lot of talks in our lip yeah if you" }, { "end": 2076.2400000000002, "start": 2071.6000000000004, "text": " interested uh sign up and and join us for the summit that'll be awesome Sven mika it's been great" }, { "end": 2081.92, "start": 2076.2400000000002, "text": " chatting with you and learning about rlib uh looking forward to race summit and thanks so much" }, { "end": 2086.56, "start": 2081.92, "text": " for sharing your time and your insight with uh the talk our audience today thank you Sven thanks" }, { "end": 2113.2799999999997, "start": 2086.56, "text": " lot for having so pleasure to be here" } ]
Karol Hausman and Fei Xia
Karol Hausman and Fei Xia of Google Research on newly updated (PaLM-)SayCan, Inner Monologue, robot learning, combining robotics with language models, and more!
https://media.transistor…17f.mp3?src=site
This type of emergent capability is super interesting for us to see and super exciting for us. By using a better language model, we can improve robotics performance kind of for free. TalkRL podcast is all reinforcement learning, all the time, featuring brilliant guests, both research and applied. Join the conversation on Twitter at TalkRL podcast. I'm your host, Robin Chohan. A brief message from Anyscale, our sponsor for this episode. Reinforcement learning is gaining traction as a complementary approach to supervised learning, with applications ranging from recommender systems to games to production planning. So don't miss Ray Summit, the annual user conference for the Ray open source project, where you can hear how teams at Dow, Verizon, Riot Games and more are solving their RL challenges with RLlib. That's the Ray ecosystem's open source library for RL. Ray Summit is happening August 23rd and 24th in San Francisco. You can register at raysummit.org and use the code RaySummit22RL for a further 25% off the already reduced prices of 100 bucks for keynotes only, or 150 to add a tutorial from Sven. These prices are for the first 25 people to register. Now, I can say from personal experience that I've used Ray's RLlib and I have recommended it to consulting clients. It's easy to get started with, but it's also highly scalable and supports a variety of advanced algorithms and settings. Now on to our episode. Karol Hausman is a senior research scientist at Google Brain and an adjunct professor at Stanford working on robotics and machine learning. Karol is interested in enabling robots to acquire general-purpose skills with minimal supervision in real-world environments. Fei Xia is a research scientist with Google Research. Fei is mostly interested in robot learning in complex and unstructured environments. Previously he's been approaching this problem by learning in realistic and scalable simulation environments, including Gibson and iGibson. Most recently he's been exploring using foundation models for those challenges. Thank you both for joining us today. Thank you for having us. Thanks for having me. So I reached out to you about an interview on SayCan because I wanted to hear more about how you combine these different lines of work with language models and robotics, and I think it's really cool that you focused on this very challenging and practical domain and got some really interesting results. So let's get started with SayCan. That paper is entitled "Do As I Can, Not As I Say: Grounding Language in Robotic Affordances"; that's Ahn et al., 2022. To start with, could you give us a high-level idea of what SayCan is about? Yeah, so SayCan is about allowing robots to execute long-horizon, abstract commands that can be expressed in natural language. The goal was to allow users to just talk to the robot and describe what they want, even if the task is very long and would be very difficult for the robot to execute. We thought that by leveraging large language models we should be able to break a task down into a smaller set of steps that the robot is actually capable of doing, and then help the user with executing the instruction. The high-level idea behind it is that we want to combine large language models with robot learning in a way that they benefit each other. Large language models in this equation provide the semantic knowledge that is in them. They know quite a lot about the world from looking at text that is all over the internet.
And at the same time they can also understand what you mean exactly, and we can also use them to break down tasks into smaller steps. Then, on the other hand, robotics can be used to ground these language models in the real world. The way that large language models are trained is such that they don't really get to experience what these words actually mean. They just get to read them and learn about the statistics of which words come after other words. We were hoping that robotics can provide this actual experience of what it means to do something, what it corresponds to in the real world. So here the high-level idea was that the robots would provide this kind of grounding through affordances, so that the two combined together, LLMs and robots, can execute long-horizon tasks. And you use that phrase affordances. What does that mean here in this context? Yeah, in this context we refer to affordances as something that is aware of what the robot is capable of doing in a given situation with a given embodiment. So for instance, if you ask a robot to bring you a Coke, it should be able to tell whether it's actually possible to bring you a Coke: whether it has the right gripper or the right arm, whether it even has an arm so that it can bring it, whether there is a Coke that it can see, or whether maybe it's in some room where there's no Coke to be found. So affordances would be something that tells the robot what's currently possible given the state of the environment and the robot's embodiment. I want to briefly add that the concept of affordance comes from the American psychologist James J. Gibson in The Ecological Approach to Visual Perception. It means what the environment offers the individual. So in this case, it means what the robot can do in a certain state. That's what we mean by affordances. There's this notion of using language models for scoring these affordance phrases, if I can call them that. Can you talk about how that works? This is using a language model in a different way than I'm used to seeing: not for generation but for scoring. So generally, there are two ways that you can decode from a language model. One way is called generative mode, because a language model essentially predicts the probability of the next token conditioned on previous tokens. If you just sample from that probability, you're doing generative mode. You can do greedy sampling, or you can use some temperature and do more diverse sampling. The other way is that you force the output to be the phrases that you want and then you calculate their likelihood. That is the scoring mode. The reason that we use scoring mode is mainly to constrain the language model to speak our robot's language. Our robot only has a certain set of skills, so we want to constrain the output to be from that set of skills, and we want to compare the likelihood of different skills. Through our experiments, we have compared generative mode and scoring mode, and in general we find scoring mode to be more stable. We also tried generative mode: there, you generate some arbitrary phrase and then you still need to find the nearest neighbor among the robot's skills, and there are additional errors introduced in this mapping stage. So through the experiments we find the scoring mode to be more stable.
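To make the scoring mode concrete, here is a minimal sketch of scoring a fixed set of skill phrases as continuations of a prompt with a causal language model. The model (gpt2 as a small stand-in), the prompt, and the candidate skills are illustrative assumptions, not the actual SayCan setup.

```python
# Sketch of "scoring mode": instead of sampling free-form text, compute the
# log-likelihood of each pre-defined skill phrase as a continuation of the prompt.
# Model name, prompt, and skill strings are illustrative placeholders only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")            # stand-in for a large LM
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def score_options(prompt: str, options: list[str]) -> dict[str, float]:
    """Return the total log-probability of each option as a continuation of prompt."""
    scores = {}
    for option in options:
        prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
        full_ids = tokenizer(prompt + option, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(full_ids).logits                  # (1, seq_len, vocab)
        # log-prob of each token given the tokens before it
        log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
        targets = full_ids[:, 1:]
        token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
        # sum only over the option's tokens (assumes the prompt tokenizes identically
        # on its own and as a prefix, which is an approximation)
        n_prompt = prompt_ids.shape[1]
        scores[option] = token_lp[0, n_prompt - 1:].sum().item()
    return scores

print(score_options(
    "Human: I spilled my drink, can you help?\nRobot: 1. ",
    ["find a sponge", "pick up the sponge", "find an apple", "done"],
))
```

In practice one would batch the options and cache the prompt's activations, but the idea is the same: the model never generates freely; it only ranks phrases the robot can actually execute.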
What is the state of the art in this area before SayCan? Yeah, I think there are a few works that talk about using LLMs as zero-shot planners. There is the original GPT-3 paper that talks about the capabilities of language models as meta-learners. There's also the paper from Wenlong Huang et al. that came out at a very similar time, which talks about using language models as zero-shot planners. These didn't show real-robot results yet, but they have been showing, I think, some glimpses of what LLMs are capable of as meta-learners or as planners that could be applied to something like robotics. And I think the other body of work that has started becoming quite popular lately is using language as a conditioning mechanism for policies in robotics. An example of work like this would be BC-Z by Eric Jang and others, where you still use a large language model to find the embedding of the instruction that you're sending to the robot, and that allows you to leverage natural instructions as well. But it doesn't really extract the high-level knowledge from the LLM in the way SayCan does, where the model can contextualize the task based on everything it has learned. Tell us more about this robot. What is it? What is it capable of? So the robot that we use is a mobile manipulator from Everyday Robots. A mobile manipulator means it can navigate around and it also has an arm that can manipulate things. In this work, we mainly use the vision system; we use the camera images, which are 640 by 512 RGB images, as input. The robot has a 7-degree-of-freedom arm with a two-finger gripper attached to the end, and we mainly use that for manipulation. Finally, it has a navigation stack based on wheels, so it can drive around the scene without collisions. That's basically the robot that we use. I want to highlight that mobile manipulation is a big challenge because you need to decide where to stop to enable manipulation. If you stop too far away, then you're not able to manipulate. So generally it's a very difficult problem for us to solve, and the robot platform enables us to do that. You have taught this robot a set of skills, each with their own value function, and this is a pre-training step. How did you train these skills? What kind of skills are we talking about? Right, this is a good question. At the time when we published, there were around 300 different task instructions that included fairly simple skills such as picking, placing, moving things around, placing things upright, knocking things over, things like that. We'll be updating the paper very soon, and we'll be adding additional skills such as opening and closing drawers and putting things in and out of them. So this is the level of skill complexity that we introduced. In terms of how we train these skills, this is where I think the majority of the work goes, and this is the really, really hard part of robotics: how do you actually get the robot to move and do the thing that you want it to do? We use a combination of behavior cloning and reinforcement learning. In this case in particular, we are constantly comparing the different methods and how they scale with data: how they scale with the amount of demonstrations we have, the amount of data collected autonomously, and whether we can leverage simulation data as well, so which method is winning is constantly changing. At the time of the release of the paper, all the policies that were used on the real robot were trained using behavior cloning, and then we used value functions that were trained in simulation, leveraging all the simulation data. Simulation was then transformed to look more realistic using CycleGAN, so that the images reflect a little bit better what the real world looks like, and we were getting the value functions from those.
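Since the real-robot policies at release time were trained with behavior cloning, a minimal sketch of that supervised objective may be useful. The network architecture, observation size, and action dimension below are made-up placeholders, not the actual SayCan policies.

```python
# Minimal behavior-cloning sketch: fit a policy to demonstrated (observation, action)
# pairs with a supervised regression loss. Architecture and shapes are illustrative only.
import torch
import torch.nn as nn

policy = nn.Sequential(                  # stand-in for an image-conditioned policy network
    nn.Flatten(),
    nn.Linear(64 * 64 * 3, 256), nn.ReLU(),
    nn.Linear(256, 7),                   # e.g. a 7-DoF arm action
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

def bc_update(obs_batch: torch.Tensor, act_batch: torch.Tensor) -> float:
    """One behavior-cloning step: regress demonstrated actions from observations."""
    pred = policy(obs_batch)
    loss = nn.functional.mse_loss(pred, act_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```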
So where did the idea for SayCan come from? For us, we started with trying to incorporate planning to do more long-horizon tasks first, and we were thinking of all kinds of planners and different ways of thinking about the representation of the low-level skills and so on. As we were looking into that, Brian Ichter and Fei Xia noticed that language is a really good interface for planning. It's not only a really good interface that allows you to compose these different plans in many different ways, it's very compositional, but it also allows you to then leverage language models, which are kind of like a planner in the sky that you can just take from another lab that has trained it, use it, and see how well it works for your robotic domain. Yeah, I think during that time there was also a plethora of work discussing using language models as zero-shot X, where X could be zero-shot reasoner, zero-shot planner, zero-shot learner, and we were just thinking: what if the X is robotics? What knowledge can we extract from large language models and apply to robotics? When we talk to a language model, it produces something reasonable. For example, if you say "I spilled my drink", it will say "find cleaner", "find vacuum". It's not exactly right and it's not actionable, but we found that the knowledge is there. So the question then becomes how do we make that knowledge more actionable, and that inspired the work on SayCan. Okay, and then just one more definition. Can you clarify what you mean by grounding in this context exactly? Specifically in SayCan, grounding refers to this idea of affordance models that allow you to predict the success rate of doing a certain task given the current state: what's the probability of succeeding at the task, given that you're starting at a particular state and given the description of the task. More generally, the idea of grounding basically means that the LLMs don't really know what the words they're operating with actually mean; they're kind of just parrots that memorize different statistics, and robotics allows them to associate these words with real-world experiences. So it grounds them in real experiences, so that the robot, or the whole system, actually knows what it means to pick something up or to drop something, what it feels like and also what it looks like. It's much more grounded knowledge. And I see that SayCan turns a natural language request into a list of steps that corresponds to the set of skills it already knows. So if the human asked "how would you get a sponge from the counter and put it in the sink", then the robot comes back with a list of steps: one, find a sponge; two, pick up the sponge; three, go to the sink; et cetera. How do you get a general language model to come back with such a specific list? Yeah, maybe I can speak to this question. The way that we get the language model to produce the steps is the following. First, we use few-shot prompting: we already show the language model a couple of examples of a human asking a question and the robot listing one, two, three, four, five: what are the steps.
The few-shot prompting generally gets the structure of the answer correct. So then every time a human asks a question, the robot will answer with one, two, three, four, five; that's the few-shot prompting part. Then we also have the scoring-based decoding mechanism. We basically have the question the human asked, then "Robot: 1." with a blank left there, and we put in all the possible actions the robot can do and score the different options. So for example, when the question mentions a sponge, every option that also contains "sponge" will have a higher score. That's just how language models generally work: they score relevance. So that's the language part of the decoding scheme. We also have another branch that predicts affordances: what is the success rate of finding a sponge here, what is the success rate of picking up the sponge here. We multiply the two scores, the language score and the affordance score, and we get a combined score. All the options get a combined score, and we just choose the highest one. In this case the first step would be to find a sponge, and then we repeat the process: we append the choice, leave "2." blank, and ask the language model to score the second step. We repeat this until it outputs a "done" token, which means the entire task sequence is finished. Maybe to add a little bit to this, one aspect that at least I didn't realize initially, and that we started noticing once we deployed the system, is how interpretable it is. We can very easily visualize what the language model thinks and what the affordance model thinks, and then you can look in and see whether the affordance model didn't have that many successes in that particular scenario and because of that downgraded that particular skill, or whether the LLM made a prediction that doesn't really make sense. So you can quickly take a look and see exactly how the algorithm is progressing and why it picked that particular step as opposed to another one. So when the human asks how to do something, for the first step the robot answers based on the existing context, and then when it goes to the second step, is this the language model knowing, from general knowledge, what makes sense to do next after step one is done, combined with the scoring? So it's really leveraging the general knowledge embedded in the language model to order these things correctly, is that right? That's right, yeah. As you execute, steps get appended to the prompt, so the language model knows that it already found the object and now it should probably pick it up, and then we use the affordance models and all of that to score it. Right, that's very interesting. So this ability to chain these things together just emerges from the language model. That's correct, yeah.
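Putting the two scores together, here is a compact sketch of the planning loop just described: at each step every skill is scored by the language model and by an affordance model (a value function predicting success probability), the two scores are multiplied, and the winning skill is appended to the prompt until "done" wins. The llm_logprob, affordance, and execute functions and the skill list are stand-ins for the real models and robot, not the actual SayCan code.

```python
# Sketch of a SayCan-style decoding loop: combined score = LM probability * affordance,
# pick the best skill, execute it, append it to the prompt, repeat until "done" is chosen.
import math

SKILLS = ["find a sponge", "pick up the sponge", "go to the sink",
          "put down the sponge", "done"]

def llm_logprob(prompt: str, skill: str) -> float:
    """Placeholder: log-likelihood of `skill` as the next step (e.g. via score_options above)."""
    raise NotImplementedError

def affordance(state, skill: str) -> float:
    """Placeholder: value-function estimate of the success probability of `skill` in `state`."""
    raise NotImplementedError

def execute(skill: str, state):
    """Placeholder: run the low-level policy for `skill` and return the new state."""
    raise NotImplementedError

def saycan(instruction: str, state, max_steps: int = 10) -> list[str]:
    prompt = f"Human: {instruction}\nRobot: I will:\n"
    plan = []
    for step in range(1, max_steps + 1):
        prompt += f"{step}. "
        # combined score = LM probability * affordance probability
        best = max(SKILLS,
                   key=lambda s: math.exp(llm_logprob(prompt, s)) * affordance(state, s))
        if best == "done":
            break
        plan.append(best)
        prompt += best + "\n"
        state = execute(best, state)
    return plan
```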
I mean, I guess there are really good things about that, and then some negatives too, right? Like, it would be challenging to teach it or correct something in its planning. Is that one possible limitation here? Yeah, that's a very good point. Here we are diving into the world of planning, which is a very vast world with many different options, where you can do myopic planning, non-myopic planning, planning with feedback, and all kinds of different things. I think SayCan is just a very first step showing how you can make these open-loop plans, but they're very myopic. So for instance, if you fail at step number two, the language model doesn't really have any feedback mechanism in SayCan to realize that it failed and should probably try again, or try to fix it somehow. Some of these things we are starting to work on: how to make it a little less myopic and optimize all the steps, so that you can look a little bit more into the future and think about your plan more holistically. In the follow-up work on Inner Monologue, which I think we can talk a little bit more about, we also try to incorporate some feedback into the planner and into the language model, so that we can correct these missteps if they happen. Cool, I'm looking forward to talking about that as well in just a few minutes. In terms of SayCan, were there surprises for you in doing this? Were you pretty sure this would work from the beginning, or did you have a lot of uncertainty about certain parts of it? Yeah, I think that's a super great question. At the beginning we were not sure if it was going to work. We just tried a few examples, and it worked surprisingly well. For example, it started to work where, if I say "throw something away", it understands you need to go to the trash can, so I think that's kind of interesting. Another interesting aha moment for me was when we said "I spilled my drink, can you help?" and the robot goes to find a sponge. That was super surprising to me, because in order to do that sort of inference you need to understand a lot of world knowledge: you need to understand that if I spilled something, that means there's liquid; if there's liquid, a sponge can absorb it; so the robot should find a sponge. I always think of the sponge as the aha moment that was super surprising, and this kind of emergent capability is super cool. So that's one thing that was surprising to me. Another thing that was surprising to me is how well things scale. In the paper that we are about to update, we'll talk about PaLM-SayCan. Previously SayCan was using a language model called FLAN, which is about a 137-billion-parameter model. When we switched to a larger language model, the Pathways Language Model (PaLM), which is a 540-billion-parameter model, it solved a lot of the planning mistakes that we were seeing at the smaller scale. So it's really surprising that just by scaling the language model we solve a lot of planning errors on these commonsense reasoning problems. One particular thing that is super interesting is that language models historically don't handle negation very well. If you say "I don't like Coke, bring me a drink", it will still bring you a Coke, because "Coke" is in the context, in the previous sentence, so relevance just makes the score of Coke higher. With the new PaLM-SayCan, we find we can use a technique called chain-of-thought prompting: basically, before generating the plan, the robot also generates a chain of thought. With a chain of thought, it handles negation super well. It can say "the user doesn't like Coke, so I will bring something else; Sprite is not Coke, so I will bring Sprite", and then it will generate the plan to bring a Sprite instead. This type of emergent capability is super interesting for us to see and super exciting for us, and it surprises us a lot.
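To make the chain-of-thought prompting idea concrete, here is a rough guess at what such a planning prompt could look like. The exact wording and format PaLM-SayCan uses are not given in the conversation, so this is an illustrative sketch only.

```python
# Illustrative guess at a chain-of-thought style planning prompt; the real
# PaLM-SayCan prompt wording is not shown in the conversation.
COT_PLANNING_PROMPT = """\
Human: I don't like Coke, can you bring me a drink?
Explanation: The user does not like Coke, so I will bring a different drink.
Sprite is a drink and it is not Coke, so I will bring a Sprite.
Robot: 1. find a sprite, 2. pick up the sprite, 3. bring it to you, 4. done.

Human: I spilled my drink, can you help?
Explanation:"""
```

The explanation is generated first and then conditions the plan, which is how the negation ("not Coke") gets resolved before the skills are scored.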
Yeah, I think for me the big surprise that I wasn't expecting was this ability of the robotic system to scale with better language models. I think this is super exciting, because as we know there are many, many researchers working on getting LLMs to be better, scaling them up and improving them constantly, and seeing that by using a better language model we can improve robotics performance kind of for free, by just swapping it out without changing anything on the robot, is really exciting. It allows us to ride that wave of better and better LLMs and improve robot performance that way. So we've been seeing bigger and better-performing LLMs. I'm not sure what hardware they run on, but I'm assuming they don't run on the robot, is that right? That's right, they run on TPUs and we call them through some sort of bridge. What does the latency look like there? Is that a limiting factor at all, or is it pretty fast? Yeah, that's a great question. Some parts of the robot system are latency sensitive. For example, grasping is super sensitive to latency: if you miss one step, then you're getting outdated data and it can mess up the manipulation. Fortunately, the planning is not a time-sensitive step; the robot can just stop and think, and it can tell people "I am doing the inference". In terms of latency for the latest PaLM-SayCan, we are seeing about three seconds of latency for the reasoning, so it's actually not too bad. Usually each step takes about 30 seconds, so it's not bottlenecked by the inference speed of the planning. And then the value functions, are they running locally as well, or are they just fast anyway? Yeah, the value functions are super fast, they can run at a couple of hertz, so that's not a bottleneck. Yesterday I told my wife and my mother-in-law about the interview today and about the robots, and they were excited. They asked me, well, what can this robot cook? And I had to explain that robotics is really hard, and the state of the art is not there yet; it's no failing of SayCan, that's just how the field is right now. But what should I tell them in terms of when we might expect a robot like this to cook us a meal? It sounds pretty far off, but maybe not, the way things are going. Yeah, I think it's really hard to make predictions about things like that. I think we're making quite a lot of progress, but it's kind of difficult to foresee what challenges we'll see in getting there. One thing that I tend to mention when I get asked by my family
questions like that is Moravec's paradox, where in AI it's the easy things that are hard, and the hard things that are relatively easier. The things that seem very easy to us, such as manipulating objects, cooking a meal, walking around, playing with toys, simple manipulation skills, only seem easy because we've been doing them for thousands and thousands of years, and evolution made it so that they seem extremely easy to us. Versus the other things that require more reasoning, things like mathematical equations or playing complex games, the things that we would usually consider the intelligent things: we haven't been doing them for that long on the evolutionary scale, so the robots or the algorithms don't have that much to catch up on. So I feel like the embodied version of AI, where we actually have to manipulate the world around us and understand what's happening, this is the really, really hard part. And it's kind of difficult to get that across sometimes, because it just seems so easy: I can cook a meal very easily, and even a small kid has manipulation capabilities that far exceed what robots can do today. Okay, I'm going to try to explain that to them, thanks for that. Okay, and then in terms of the idea of end-to-end learning versus this compositional style where you're putting together prebuilt components, I'm curious how you guys see that. It seemed that at some point some people were really extolling the virtues of end-to-end deep learning, but then more recently these foundation models have become really popular, where there's a lot of pre-training involved and we don't expect to learn end-to-end, or at most do a bit of fine-tuning. Do you think the future is going to involve a lot more of this pre-training and composition, the way we're seeing here? Yeah, that's a really good question. Looking back at how robot learning has evolved, it seems that initially we started with things that are a little bit more modular: they're a little bit easier to implement, a little bit easier to understand, and they make a lot of sense initially. Then, as we get better at them and they start to work, we put them into an end-to-end system that optimizes only for the thing we care about, and it finds the right representations to communicate between these different components. That happened in the past, for instance, with perception and control, where we would have a perceptual system that would estimate the pose of an object or recognize the object and so on, and then we would take that representation and feed it to a controller. I think that right now, with the language models and these planners, we are going through a similar cycle, where we are at the very initial step where it's much easier to think of them as modular components: you have a separate planner that just gives you the next set of steps to do, and then you have a separate closed-loop controller, end-to-end in this case, that can take a short command and execute it. But I think over time, as we develop them more and more, they'll become more unified and more end-to-end. In this work in particular, in SayCan, prompting the LLMs was just the path of least resistance: it was very easy to do and we could see the results straight away. But I think we can start thinking about how we can combine it
in one big system where we can fine-tune the LLM planner as well as the low-level skills jointly, based on all the data that we are collecting. Okay, let's talk about some of the work that this is built upon. We won't go into great depth with these, but just a few brief mentions. For example, I think the RL algorithm used here is MT-Opt, based on QT-Opt, is that right? Can you briefly tell us about QT-Opt? If I understand correctly, that's what you're using to learn grasping from images with offline pre-training. That's right. Then why QT-Opt? There are other RL algorithms that could do continuous control from images. Could you spend a moment telling us why QT-Opt and MT-Opt for this case? Right, yeah, of course. In our experience, we've been experimenting with a lot of these different algorithms and a lot of different variants. One aspect that makes it a little bit different for us is that we try to do it at scale, with a lot of different tasks, a lot of robots, a lot of data, and so on, and often the algorithms, as we see them on smaller-scale benchmarks, compare differently at larger scale. With QT-Opt in particular, what we really like about it is that it's really stable. If set up correctly, it just optimizes things quite well and it usually works, and it's much less fragile than actor-critic algorithms. I think we have some hunches on why that is. One thought is that in QT-Opt the optimization of the actor is completely independent from the optimization of the Q-function: there's no learned actor, we just have an optimizer that stays constant throughout training, and that removes one aspect that can be quite difficult in actor-critic algorithms. So we just found it a little bit more stable in these large-scale settings. But I think this is not the final answer. As we explore these algorithms more, there are so many things to try, so many different combinations of algorithms, actor architectures, critic architectures, optimization schemes, and so on, that I think we'll get more answers. Just at the current time, QT-Opt was working the best for us. And Fei, you mentioned ReLMoGen, which I understand was part of your dissertation and was partly inspirational for this work. Can you briefly describe what that adds? Yeah, sure. ReLMoGen is a previous work of mine that explores using motion generation and reinforcement learning together. For that work we were also tackling the problem of mobile manipulation, which is super challenging because you need to control where to move and where to do the manipulation. It's basically a hierarchical reinforcement learning work, where the low level is motion generation, which is not learned but rather uses classical planning-based methods. What we found in that paper is that for these long-horizon problems it's beneficial to decompose the problem in a hierarchical fashion, and it's even better if the decomposed steps are semantically meaningful, such as navigation steps interleaved with manipulation steps. So I would say that's a good inspiration for the SayCan line of work, where we also decompose a long-horizon problem into a few short-horizon steps, which are more manageable to learn at the low level.
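To picture why there is no separate actor in QT-Opt, here is a rough sketch of actor-free action selection: the action is chosen by directly maximizing the learned Q-function with a cross-entropy-method search. The q_network signature, state shape, action dimension, and hyperparameters are illustrative assumptions rather than the actual QT-Opt implementation.

```python
# Sketch of QT-Opt-style action selection: no learned actor; instead, maximize the
# Q-function over actions with a cross-entropy-method (CEM) search.
import torch

def cem_argmax_q(q_network, state: torch.Tensor, action_dim: int = 7,
                 iterations: int = 3, samples: int = 64, elites: int = 6) -> torch.Tensor:
    """Approximate argmax_a Q(state, a); `state` is assumed to have shape (1, state_dim)."""
    mean = torch.zeros(action_dim)
    std = torch.ones(action_dim)
    for _ in range(iterations):
        actions = mean + std * torch.randn(samples, action_dim)          # sample candidates
        q_values = q_network(state.expand(samples, -1), actions).squeeze(-1)
        elite = actions[q_values.topk(elites).indices]                   # keep best candidates
        mean, std = elite.mean(dim=0), elite.std(dim=0) + 1e-6           # refit the proposal
    return mean
```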
Okay, and then you also mentioned another work, Actionable Models, which uses hindsight relabeling and goal chaining, I guess to help with scaling the learning from a fixed amount of data. Can you say briefly what Actionable Models contributes? Yeah, so Actionable Models was the work that we did right after MT-Opt, and I think the main contribution in terms of SayCan is quite nuanced here. It's an offline RL method that takes all the data that we collected for MT-Opt. For MT-Opt we had a pre-specified set of tasks, 12 or 14 tasks or something like that, and these were encoded as one-hot vectors, so there was just task number one, two, three. We collected a lot of data, then we did multi-task reinforcement learning with QT-Opt, which we called MT-Opt, and we were constantly talking about what other tasks to add. As you try to scale these systems, this question actually becomes quite tricky, especially as you try to do this at scale and you want the robots to run mostly autonomously and so on. This is something that didn't occur to me, at least, when we were starting that project: you have to come up with as many tasks as you can, and at the same time these tasks have to potentially reset each other so that they can run continuously and autonomously without any human intervention; they also have to be meaningful tasks, and very diverse, and so on. So it seemed that at a certain scale, just coming up with the tasks themselves becomes a bottleneck. In Actionable Models we thought that rather than thinking of all kinds of different tasks, let's just do goal-conditioned Q-learning: let's consider every state as a potential task, as a potential goal, and try to get to that goal. This was done completely offline, so we didn't have to collect any additional data; we trained on all the data collected with MT-Opt, and it worked really well. I think this was a big aha moment for us: the one-hot representations that we were using before to represent tasks were kind of difficult to work with, and goal images just seemed much closer to what we actually wanted. It would also allow us to scale the number of tasks significantly, because now any goal image is actually a task representation. And I think that was a step towards getting to language-conditioned policies, where language is very compositional; it's very natural to express to the robot what you want it to do, much more natural than a goal image, but at the same time language captures the relationships between tasks much better than one-hot vectors, for instance. If we had a task "pick up a carrot" and a task "pick up a cucumber", and one is represented as task number one and the other as task number two, in terms of the task representation they're completely orthogonal; there's nothing that they share. Whereas the way language was formed, we call both of these things "picking", because whether it's picking up a carrot or picking up a cucumber, they look very similar, and language groups them together; that's how it came about. So I think language is not only a really good interface, but it also allows for better representations for learning all of these skills together.
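As a rough illustration of the goal-conditioned, hindsight-style relabeling described above, where states actually reached in the logged data are reused as goals, here is a minimal sketch. The function name, data layout, and sparse reward convention are assumptions for illustration, not the Actionable Models code.

```python
import random

def hindsight_relabel(trajectory, goals_per_step=4):
    """Turn one logged trajectory into goal-conditioned training transitions.

    trajectory: list of (obs, action, next_obs) tuples from the offline dataset.
    Each transition is paired with goals drawn from later states of the same
    trajectory; the reward is 1 only for the transition that reaches the goal.
    """
    examples = []
    for t, (obs, action, next_obs) in enumerate(trajectory):
        later_states = [step[2] for step in trajectory[t:]]           # states reached afterwards
        for goal in random.sample(later_states, min(goals_per_step, len(later_states))):
            reward = 1.0 if goal is next_obs else 0.0                 # sparse goal-reaching reward
            examples.append({"obs": obs, "action": action,
                             "next_obs": next_obs, "goal": goal, "reward": reward})
    return examples
```

A goal-conditioned Q-function can then be trained on these relabeled transitions entirely offline, which matches the "every state is a potential task" framing above.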
Okay, let's move on to the follow-up work. This is called "Inner Monologue: Embodied Reasoning through Planning with Language Models", and that was with yourselves as authors and co-authors. This paper was published since we scheduled the interview, so it definitely feels like an unexpected bonus, and I'm excited to be able to chat about it with you. You extend SayCan in some different ways and get some really interesting results here. Can you talk about the general idea of Inner Monologue? Inner Monologue mainly tries to address the shortcomings of SayCan. SayCan does open-loop planning, and it doesn't respond to certain failure cases; it would just continue to plan further. For Inner Monologue we tried to let the language model source feedback from the environment and from the human, and then do closed-loop planning. One big inspiration for Inner Monologue is PaLM SayCan, because we find that using larger language models gives us some extra capability to play with, so we tried a more unstructured way of prompting the language model and found it can still do the planning with very high quality. So that's the main idea of Inner Monologue. The text in Inner Monologue looks a little bit like a screenplay script, with different actors: there's some narration, different statements by the human, the robot, the scene, some questions, and I gather it's all fed into the language model. What are the different types of utterances that the text can contain? That's different than SayCan, right? Right, that's different than SayCan. I would also say that Inner Monologue is inspired by Socratic Models, where they find you can use multiple models that communicate using language. That is exactly what we're doing here: there are different actors that can talk to each other, and then there is also a planning step which summarizes whatever has been said and generates a plan. Here are some of the actors that we have. The first is success detection, which detects whether a previous step was successful or not, and then it will say the action was successful or the action failed. Second, we have passive scene description. The passive scene description basically describes the scene; it can be an object detector telling you there are certain objects in the scene, there are certain objects in certain locations, there is some state of the objects. This all falls under passive scene description. There is also active scene description, where the robot can actively ask questions about the scene. It can ask the human what the color of a certain thing is, or it can ask the human, here are two drinks, which one do you like? So it will ask questions whenever it feels it needs to. These are the sources of feedback that we are gathering. We talked to Rohin Shah recently on the show, and he talked about this idea of active querying and actually learning to ask. But in this setting, how does your system figure out when it's appropriate to ask? We figure out when to ask mainly through few-shot prompting. We give it a couple of examples: when the query is ambiguous, it will ask further to clarify the query. It gets a little bit into the implementation details: when the robot finishes a task, we score different options, we score "and continue" and "and ask", and if the "and ask" score is higher, then we further prompt the language model to ask a question. So here it's a slight deviation from the SayCan-style scoring-based decoding; for these cases we find generative decoding is more helpful, and it can always generate meaningful questions to ask to reduce ambiguity, for example.
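To make the "screenplay" structure and the ask-versus-continue decision a bit more concrete, here is a hedged sketch of how the pieces described above could fit together. `llm_score` and `llm_generate` stand in for whatever scoring and generation interface the language model exposes, and the formatting is illustrative rather than the paper's exact prompt.

```python
def build_prompt(history):
    """Flatten the accumulated monologue into one prompt string.

    history: list of (actor, utterance) pairs, e.g. ("Human", "Bring me a drink"),
    ("Scene", "There is a coke and a sprite"), ("Success", "Action failed").
    """
    return "\n".join(f"{actor}: {utterance}" for actor, utterance in history)

def choose_next_step(history, skills, llm_score, llm_generate):
    """Score the known skills plus an 'ask' option; generate a question if asking wins."""
    prompt = build_prompt(history) + "\nRobot:"
    scores = {skill: llm_score(prompt, " " + skill) for skill in skills}
    scores["__ask__"] = llm_score(prompt, " ask a clarifying question")
    if max(scores, key=scores.get) == "__ask__":
        # switch from scoring-based decoding to generative decoding for the question
        return ("question", llm_generate(prompt + " ask:"))
    return ("action", max(skills, key=lambda s: scores[s]))
```

Whatever comes back, whether a success or failure message, a scene description, or the human's answer, is appended to the history, which is why the whole interaction doubles as the scratchpad discussed below.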
And you reported some interesting generalization and emergent capabilities in this Inner Monologue work. Can you talk about that, and were some of those surprising to you? Yeah, there are some generalizations or emergent capabilities that were super surprising to us. First let me briefly define what the emergent capabilities are. I guess there are two meanings. One is that the capability only emerges at a larger scale; in this case we use PaLM, the Pathways language model, and such capabilities only show up at that scale. If you use a smaller language model, it probably will not have those capabilities. The second, somewhat implicit, meaning of emergent capability is that it's not shown in the few-shot prompt, so those capabilities are completely new. One capability we found is that it can generalize to the human changing the request. For example, the human says go do task A, and then, as the robot is doing it, we insert go do task B, and the robot will change its plan to do task B. Here we can also say, never mind, go finish your previous task, and in this case the robot goes back to finish task A. This was quite surprising to us, because it understands the history, it understands what the previous task means, all due to the large language model, and our interface is very natural. There are a couple of other emergent capabilities. In one case we found it can generalize to using emoji as a query: for example, a yellow square pointing to a red circle, and it will put the yellow block into the red bowl. So this is another emergent capability that we find super interesting. Another very interesting emergent capability is that it can propose a new plan based on a prompt such as "try a new method". When the robot fails at a task, there are usually two reasons. First, its manipulation policy might have noise, so it fails at the task; in this case the best plan is to try again. It could also be that the plan is not feasible: for example, if you are trying to grasp a block and the block is too heavy, in this case the plan should be to grasp a different block. We found we can just provide a tiny hint to the language model, we can just say please try a different plan, or do you have any other ideas, and the language model will generate a different plan. So this is also super exciting for us; we had never seen this in our previous work. And I saw that Inner Monologue did well on unseen tasks, actually to me it seems surprisingly well, where the baseline method got all zeros. Did you expect it to do this well on unseen tasks? In terms of unseen tasks, I guess you are referring to the comparison with CLIPort? Yes, yes. So CLIPort is trained to do those tasks from demonstrations, so it naturally doesn't generalize to new tasks super well; it will mainly perform well on the same tasks, but it doesn't generalize to novel tasks. That's because CLIPort doesn't leverage the rich knowledge present in large language models. In our work the generalization mainly comes from the language model, so in that sense it's natural for Inner Monologue to generalize to novel tasks where we see some other methods struggle. And does Inner Monologue consider the whole text as the prompt? You mentioned a scratchpad, and I actually
didn't follow that. Does it use the whole thing as the prompt for the language model? Right, it uses the whole thing as the prompt. I mentioned the scratchpad because there is some relevant work in the NLP community that inspired Inner Monologue. Two of the papers are: one is chain-of-thought prompting, where they allow the language model to generate a chain of thought before actually decoding the answer; another is called scratchpad, where the language model can call different modules and keep some notes in the scratchpad before decoding an answer. In Inner Monologue we use the inner monologue itself as a scratchpad, so every actor can write to that scratchpad, and then we decode actions. Every time we decode, for example, a robot action, it consumes all previous history steps as the prompt. I see, okay. Can we talk about this set of pre-trained skills? How do you expand this set of skills? Karol, you said that training these skills was a large part of the work. Do you see a way around that, or are you envisioning a way to automatically acquire skills, unsupervised? How do you see that scaling up? Yeah, this is a really important question, and I can't emphasize enough how difficult it is to actually work on the low-level skills and how important it is; this is definitely the bottleneck for the entire system. I think one thing that is quite exciting about SayCan is that before, when we were thinking about what skills to add, we would usually just sit down as engineers and researchers, think about it, and vote or something like that, and then add that skill. Now we are at the level where SayCan starts to be useful and can be used in an office, for instance, where the robot can maybe bring you a snack from a kitchen or something like that, so it can start interacting with real users. I think this is probably a much better way of coming up with new tasks: we can see what things users ask for quite often, and then see what skills would enable that, so we can more automatically decide what is missing. In terms of how to add the skills, there are many options there. SayCan is quite modular in that way: you can add a skill that was trained with behavior cloning, with reinforcement learning, or it could be a scripted skill; anything works as long as it has an affordance function associated with it. So it allows us to consider all these options separately when we're thinking about these skills. We're also thinking about potentially having the language model come up with the skills that could be useful in these settings, which would automate it even further. Overall, there's a lot of work that goes into this, and I hope we'll have some answers or some reports on this soon, so stay tuned.
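A small sketch of what that modular interface might look like: each skill, however it was trained, is registered as a policy paired with an affordance estimate, and the planner only interacts with that pair. The class below and the combination of language score with affordance score are illustrative assumptions about the design being described, not the actual SayCan codebase.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Skill:
    """A low-level skill: how to execute it, and how likely it is to succeed now."""
    name: str                              # e.g. "pick up the sponge"
    policy: Callable[[object], None]       # executes the skill given the current observation
    affordance: Callable[[object], float]  # estimated success probability in this state

def rank_skills(skills: Dict[str, Skill], obs, language_scores: Dict[str, float]) -> List[str]:
    """Combine the LLM's usefulness score with each skill's affordance estimate."""
    combined = {name: language_scores[name] * skill.affordance(obs)
                for name, skill in skills.items()}
    return sorted(combined, key=combined.get, reverse=True)
```

Because behavior-cloned, RL-trained, and scripted skills all fit the same `Skill` shape, swapping or adding one does not require changing the planner, which is the modularity being described.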
It seems to me the use of these large language models in this context is maybe a bit of a double-edged sword. You showed in the Inner Monologue paper that you had a request in Chinese, and even though you didn't design it to understand Chinese, because the language model had seen Chinese before, it was able to understand it zero-shot and do the right thing, which is pretty amazing. On the other hand, the language models have all these things in them that you don't really need for this setting, or maybe don't even necessarily want. Do you want your kitchen robot to have read all of Reddit, or to understand irony and all this stuff? I don't know, maybe you do. Can you talk about the idea of using these general-purpose language models for very specialized purposes? Do you think in the future you'd want very specialized language models that were kind of pared down? It seems to me there's a tension: a good old-fashioned AI system just doesn't know enough things, and you have to keep working hard to add facts and knowledge, and here you have the opposite problem, where you have an LLM which in some ways maybe knows too much. Is that a concern at all, or not so much? First of all, we're using the general-purpose large language models mainly because of their scale, their emergent capabilities, and the built-in knowledge in them, so it would be difficult to shrink the model down while still keeping that knowledge; that would be one key challenge for us. However, we do have motivation to bring those models down, to distill them. One main reason is efficiency: currently the inference is quite heavy, and we definitely want to make it smaller so we can do inference faster. In terms of unwanted behavior, I would say the current SayCan decoding is quite safe, because we only allow it to output certain actions using the scoring mode, so we don't get a lot of undesired behavior. For us, if we want to shrink the model down, it's mainly for efficiency purposes, not because of unwanted behavior. Yeah, and in terms of specializing these general-purpose models, right now the main tool we have for this, other than affordance scoring and so on, is prompting. You can think of prompting as a way of specifying the task and specializing the model to the specific thing you want it to do. As we gather more data for the tasks we actually care about, we could also think of other ways, such as fine-tuning the model, or fine-tuning a subset of parameters, and I think there are many options we could consider to make the model a little more specialized beyond just prompting it. So there's a line in the SayCan paper, in the conclusion, that says it is also interesting to examine whether natural language is the right ontology to use to program robots. And I'm just observing that most language models seem pretty generic; they are only conditioned on the previous text, and so it's maybe not clear how to condition them on other things. Do you see wanting to have language models that can be conditioned on other things, or do you think the vanilla language models, whether they're distilled or not, are the right paradigm here? Any comments on that? There may be two aspects to this, and this may be a little more philosophical. I think the first aspect is that language just seems to be a really nice interface that is very interpretable for all of us, but it also captures the compositionality and the relationships between all the different tasks we might want robots to do. So I think it's just a really nice representation that can potentially make robot learning easier, because, as we mentioned earlier, if you have two tasks that look very similar, they will probably be described by the same set of words, and I think that's really useful. And kind of for free, on top of that, you also get the
interpretability of it. And then separately, and I think this is what your question is pointing towards, I think we should be considering other modalities in these large models and how they can influence the planners and robot learning in general. I think something like Inner Monologue or Socratic Models is just one way of doing this that is more practical, because a lot of multimodal models have a language component, so you can just ask a vision-language model to describe what it sees using language, and that's how you can incorporate it into your big language model. But as these multimodal models get better and better, I would hope we can incorporate much more into our prompt: we can incorporate what we currently see, what our confidence is in the actions we are about to take, and so on. This would be a much richer way of specifying, or kind of meta-programming, the robot. So not only can you specify, I want you to help me clean something up, but maybe you can also demonstrate something, and that's also part of the prompt, and the robot can then understand that you wanted this thing to be picked up in a certain way, or something like that. So I think there's much more work to be done on these interesting multimodal prompting mechanisms that would allow us to teach robots better. So I get that SayCan is lab work, it's not meant to be deployed in its current state, but when we eventually get to these types of robots being deployed, do you think they may have something in common with SayCan? Do you think there are parts of these systems that might be long-term advances versus stepping stones, or is it more of a stepping-stone situation? Yeah, that's a good question. I think if we think of language models as these reasoning engines that can tell us a lot about the semantics and about the world in general, probably some form of this is here to stay. These seem to be really, really powerful models that can understand common sense to a certain extent, and that I think is very helpful for robot learning, and I think we'll see this going forward. Maybe it will be a slightly different kind of model that can also incorporate other modalities, as we mentioned, but I can imagine that some form of this distilled knowledge will stay. Can you talk a bit about how you think about your future work? To what extent do you plan far in advance, or are you taking things more step by step? Do you re-plan all the time? How do you plan your future work? Yeah, that's a good question. I think it depends on the individual. For this project, I tend to split it into three main aspects. The first is data generation: we need to be able to generate a lot of data with robots. The second aspect is data-sponge algorithms, finding algorithms that are able to absorb all of the data; that's often very tricky, and we spend a lot of time there. And the third aspect is just modeling: how do you get the models to be better? I think for a long time the bottleneck was actually the algorithms themselves, how well they can absorb all the data. We saw, for instance, in language that once transformers came out, they were really, really good data sponges: you can throw a lot of data at them, and then you can observe these fascinating scaling laws, and the performance continues to improve, and
we've been trying to find an equivalent of that in robotics, whether it's an offline RL algorithm or some imitation algorithm or something else, something that can absorb as much data, and as diverse data, as possible. I think now we are slowly getting to the point where this is no longer the bottleneck; there are a lot of algorithms that can actually absorb quite a lot of data. So I think we'll then look at the state of things and see what the bottleneck is now, and I suspect it will be data generation itself: how can we develop algorithms, or even just processes, for collecting very diverse data for very diverse tasks, either on real robots, or how can we incorporate human data, how can we scale up our data collection significantly? Are you surprised by some of the fast progress in AI lately, and do you think it's going to keep accelerating? For me, I personally am really surprised that the scaling laws continue to hold. I find it absolutely fascinating, and I think we maybe take it for granted a little bit: we saw it a few times and now it's considered maybe boring, or some people refer to it as just pure engineering, that there aren't any novel ideas, it's just about scaling things up. First, I think scaling things up is extremely hard, and I haven't really subscribed to the notion that it's just engineering. It's really, really hard, and there is as much novelty there as in any novel research idea. It's mind-blowing to me that we can make so much progress by pushing in this one direction. How do you see the competition slash cooperation between different labs, and are other labs doing cool work? Yeah, there are plenty of other labs that do really cool work. I think we pay a lot of attention to what's happening in academia and in other industrial labs, and we're particularly interested in algorithms that address problems we start noticing at scale. We get a lot of inspiration from works that come out of different labs that sometimes maybe don't even realize that this is the problem that becomes really apparent when you scale things to many robots, or many robots doing many different tasks, and these are super useful things. We also tend to work with interns and student researchers, and it's always refreshing when they come in and bring all kinds of new ideas and ways to use our system. So yeah, I think we draw a lot of inspiration from those. What do you think of the concept of AGI? Do you find that idea useful to talk about, or is it a distraction? Maybe on a more personal level, it's a little hard to think about AGI when your day-to-day work is looking at the robot struggling to grasp an apple on a countertop. When you see how hard these things are and how much work it takes to actually get it to do the simplest things, it's quite difficult to imagine all the steps that would need to be taken and how at some point it will progress exponentially. From my side, I like to be more grounded and just make solid progress on robot capabilities, so I haven't been thinking about AGI too much. However, I do think that when people discuss AGI they also think about ethics and safety, and I think those are good
for us to think about early on. When we start to build these methods, we also take safety and ethics into consideration, and I think down the road, when we have more powerful models, we'll be safer in that regard. Makes sense. And it seems there's been such great progress in terms of language models being able to write these big essays and image models being able to generate incredible art, and then there's kind of a gap between that and what we see in robotics. Are we waiting for something, maybe it's the data sponge you were talking about, or the data generation, Karol, but are we waiting for some advance that can lead to some sort of ImageNet moment for robotics? Is that ahead, or is that behind us? There have been a few moments that were significant, I think, in robot learning, but I don't think we've had the ImageNet moment yet. One of the underlying hopes behind something like SayCan was to attach ourselves a little bit more to the progress that is happening in other fields, so if we find some way of having language models improve robotics, then as they improve, robotics will improve as well, and the same with multimodal models and so on, as shown in Inner Monologue. But in terms of the low-level skills, I think these are still the early days. We are quite bottlenecked by the data available to us: there isn't that much data out there of robots doing many different things, nice datasets of real robots doing diverse sorts of tasks. So that's another struggle we have to incorporate into all of this, but I think we're making decent progress there. Yeah, I think the bigger breakthroughs are still in front of us. Is there anything else I should have asked you about today, or anything else you want to share with our audience? I guess I would just briefly mention that it's really inspiring to see the progress of natural language processing trickle down into robotics and start to solve some of the robotics problems for us. In general, I think this more interdisciplinary research in AI is super exciting, and we cannot wait to see more of that coming into robotics. Yeah, I fully agree. This unification, it was really hard to imagine anything like this even a few years back, that some improvement you can make to an architecture can improve robotics and vision and language and all of these things, so it's super exciting to see something like this, where we are all pushing in one direction and we can all benefit from each other. Even for us specifically at Google, we are closely collaborating with language researchers, and it's just very cool to have this interdisciplinary team where we can all push in a single direction. On the other hand, I think it's also important, especially for academic labs, not to jump on the hype train: if there is something you're really passionate about, something you believe will improve robotics or robot learning, whatever you're interested in, I think it's important to keep pushing on that as well. I'm a little worried that we'll lose a bit of this diversity of research ideas because of this unification, so I think it's important to keep both. But yes, it's super exciting. Well, I want to thank you
both so much for joining us today and taking the time to share your insight with the TalkRL audience. Thank you so much, Fei Xia. Thanks, thanks for the invitation. And thank you, Karol Hausman. Thank you, thanks for having us.
[ { "end": 7.12, "start": 0, "text": " This type of emergent capability is super interesting for us to see and super exciting for us" }, { "end": 12.48, "start": 7.12, "text": " by using a better language model we can improve robotics performance kind of for free." }, { "end": 22.32, "start": 16.96, "text": " Talk our L-podcast is all reinforced in learning all the time, featuring brilliant guests" }, { "end": 27.68, "start": 22.32, "text": " both research and applied. Join the conversation on Twitter at talkrlpodcast." }, { "end": 29.36, "start": 27.68, "text": " I'm your host Rob and Chohan." }, { "end": 36.64, "start": 33.6, "text": " A brief message from any scale are sponsored for this episode." }, { "end": 40.88, "start": 36.64, "text": " Re-inforced in learning is gaining traction as a complimentary approach to supervised learning" }, { "end": 45.2, "start": 40.88, "text": " with applications ranging from recommender systems to games to production planning." }, { "end": 49.92, "start": 45.2, "text": " So don't miss Ray Summit, the annual user conference for the Ray open source project," }, { "end": 56.16, "start": 49.92, "text": " where you can hear how teams at DAO, Verizon, Riot Games and more are solving their RL challenges" }, { "end": 60.8, "start": 56.16, "text": " with RL Lib. That's the Ray ecosystem's open source library for RL." }, { "end": 67.92, "start": 61.519999999999996, "text": " Ray Summit is happening August 23rd and 24th in San Francisco. You can register at raysemmet.org" }, { "end": 75.75999999999999, "start": 67.92, "text": " and use the code raysemmet22RL for a further 25% off the already reduced prices of 100 bucks" }, { "end": 81.75999999999999, "start": 75.75999999999999, "text": " for Keynotes only or 150 to add a tutorial from Sven. These prices are for the first 25 people to" }, { "end": 87.04, "start": 81.76, "text": " register. Now I can see from personal experience I've used Ray's RL Lib and I have recommended it" }, { "end": 91.44, "start": 87.04, "text": " for consulting clients. It's easy to get started with but it's also highly scalable and supports" }, { "end": 95.84, "start": 91.44, "text": " a variety of advanced algorithms and settings. Now on to our episode." }, { "end": 101.44, "start": 95.84, "text": " Carol Hausman is a senior research scientist at Google Brain and an adjunct professor at Stanford" }, { "end": 106.24000000000001, "start": 101.44, "text": " working on robotics and machine learning. Carol is interested in enabling robots to acquire" }, { "end": 113.44, "start": 106.24, "text": " general purpose skills with minimal supervision in real world environments. FESHA is a research" }, { "end": 119.19999999999999, "start": 113.44, "text": " scientist with Google Research. FESHA is mostly interested in robot learning in complex and" }, { "end": 123.75999999999999, "start": 119.19999999999999, "text": " unstructured environments. Previously he's been approaching this problem by learning in realistic" }, { "end": 130.95999999999998, "start": 123.75999999999999, "text": " and scalable simulation environments including Gibson and I Gibson. Most recently he's been exploring" }, { "end": 135.04, "start": 130.95999999999998, "text": " using foundation models for those challenges. Thank you both for joining us today." }, { "end": 140, "start": 135.04, "text": " Thank you for having us. Thanks for having me. 
So I reached out to you about an interview on" }, { "end": 144.07999999999998, "start": 140, "text": " SACAN because I wanted to hear more about how you combine these different lines of work with" }, { "end": 149.12, "start": 144.07999999999998, "text": " language models and robotics and I think it's really cool that you focused on this very challenging" }, { "end": 154.16, "start": 149.12, "text": " and practical domain and got some really interesting results. So let's get started with SACAN." }, { "end": 161.28, "start": 154.16, "text": " So that paper is entitled Do as I can not as I say grounding language in robotic affordances" }, { "end": 169.52, "start": 161.28, "text": " that's on at all in 2022. To start with could you give us a high level idea of what is SACAN about?" }, { "end": 176.96, "start": 169.52, "text": " Yeah so SACAN is about allowing robots to execute long horizon abstract commands that can be" }, { "end": 182.96, "start": 176.96, "text": " expressed in natural language. So it would that the goal was to allow users to just talk to" }, { "end": 188.4, "start": 182.96, "text": " robot and describe what they want and even if the task is very long and it would be very difficult" }, { "end": 194.56, "start": 188.4, "text": " for robots to execute it. We thought that by leveraging large language models we should be able" }, { "end": 199.92000000000002, "start": 194.56, "text": " to break a task down into smaller set of steps that the robot is actually capable of doing" }, { "end": 206.64000000000001, "start": 200.48000000000002, "text": " and then helping the user with executing the instruction. The high level idea behind it is that" }, { "end": 213.36, "start": 206.64000000000001, "text": " we want to combine large language models with robot learning in a way that they benefit each other." }, { "end": 220.16000000000003, "start": 213.36, "text": " So large language models in this equation provide the semantic knowledge that is in them. So they" }, { "end": 225.36, "start": 220.16000000000003, "text": " know quite a lot about the world from looking at text that is all over the internet." }, { "end": 232.96, "start": 227.12, "text": " And at the same time they can also understand what you mean exactly and we can also use them" }, { "end": 238.96, "start": 233.84, "text": " to break down tasks into smaller steps. And then on the other hand robotics can be used to" }, { "end": 244, "start": 238.96, "text": " ground these language models in the real world. So the way that large language models are trained" }, { "end": 249.68, "start": 244, "text": " is such that they don't really get to experience what these words actually mean. They just get to" }, { "end": 255.68, "start": 249.68, "text": " read them and kind of learn about the statistics of which words comes after other words." }, { "end": 261.2, "start": 255.68, "text": " And we were hoping that robotics can provide this actual experience of what it means to do" }, { "end": 266.56, "start": 261.2, "text": " something. What does it correspond to in the real world? So here the high level idea was that" }, { "end": 272.64, "start": 266.56, "text": " the robots would provide this kind of grounding through affordances so that the two combined" }, { "end": 279.44, "start": 272.64, "text": " together, LLM's and robots can execute long horizon tasks. And you use that phrase affordances." }, { "end": 284.56, "start": 279.44, "text": " What does that mean here in this context? 
Yeah, in this context we refer to affordances as something" }, { "end": 291.52, "start": 284.56, "text": " that is aware of what the robot is capable of doing in a given situation with a given embodiment." }, { "end": 297.91999999999996, "start": 291.52, "text": " So for instance, if you ask a robot to bring you a Coke, it should be able to tell whether it's" }, { "end": 302.4, "start": 297.91999999999996, "text": " actually possible to bring you a Coke, whether it has the right gripper or the right arm, whether" }, { "end": 306.64, "start": 302.4, "text": " even it has an arm so that it can bring it, whether there was a Coke that it can see," }, { "end": 314.08, "start": 307.2, "text": " whether it's mean some room that there's no coax to be found. So affordances would be something" }, { "end": 318.64, "start": 314.08, "text": " that tells the robot what's currently possible given the state of the environment and the robots" }, { "end": 325.91999999999996, "start": 318.64, "text": " in body. I want to briefly add that the concept of affordance comes from like American psychologist" }, { "end": 331.52, "start": 325.91999999999996, "text": " James J. Gibson in the ecological approach to visual perception. It means what the environment" }, { "end": 337.36, "start": 331.52, "text": " offers individual. So in this case, it means what the robot can do in a certain state. So that's" }, { "end": 342.8, "start": 337.36, "text": " what we mean by affordances. There's this notion of using language models for scoring these" }, { "end": 347.2, "start": 342.8, "text": " affordance phrases. If I can call them that, can you can you talk about how that works? This is" }, { "end": 352, "start": 347.2, "text": " using a language model in a in a different way than I'm used to seeing not for generation but for" }, { "end": 358.32, "start": 352, "text": " scoring. So generally, there are two ways that you can decode from a language model. One way is" }, { "end": 364.15999999999997, "start": 358.32, "text": " called generative mode because language model essentially predict the probability of next token" }, { "end": 369.84, "start": 364.15999999999997, "text": " condition on previous tokens. So if you're just sample from the probability, then you're doing" }, { "end": 376.15999999999997, "start": 369.84, "text": " generative mode. You can do gritty sampling or you can do have some temperature and do like more" }, { "end": 383.28000000000003, "start": 376.16, "text": " diverse sampling. There's another way, basically, you force the output to be the phrases that you want" }, { "end": 388.88000000000005, "start": 383.28000000000003, "text": " and then you calculate the likelihood. That is the scoring mode. The reason that we use scoring" }, { "end": 395.52000000000004, "start": 388.88000000000005, "text": " mode is mainly to constrain language model to speak our robot language. So our robot only have a" }, { "end": 401.76000000000005, "start": 395.52000000000004, "text": " certain set of skills. So we want to constrain the output to be from the set of skills and we want" }, { "end": 408.56, "start": 401.76, "text": " to compare the likelihood of different skills. Through our experiments, we have compared the generative" }, { "end": 416.48, "start": 408.56, "text": " modes and scoring mode. In general, we find scoring mode to be more stable. And we also tried" }, { "end": 423.12, "start": 416.48, "text": " generative mode. 
In the generative mode, you generate some arbitrary phrase, then you still need to" }, { "end": 428.96, "start": 423.12, "text": " find nearest neighbor of the robot skill. There are some additional errors introduced in this mapping" }, { "end": 434.64, "start": 428.96, "text": " stage. So through the experiments we find the scoring mode seems to be more stable. What is the" }, { "end": 440.4, "start": 434.64, "text": " state of the art in this area before Sagan? Yeah, I think there's a few works that talk about using" }, { "end": 448.4, "start": 440.4, "text": " LLAMs as zero-shot planners. There is the original GPD3 paper that talks about the capabilities of" }, { "end": 455.67999999999995, "start": 448.4, "text": " language models as meta learners. There's also the paper from Wannongkwang at all that came out" }, { "end": 461.76, "start": 455.68, "text": " at a very similar time that talks about using language models as zero-shot planners. These don't" }, { "end": 469.36, "start": 461.76, "text": " have done talk about real robot results yet. And they have been showing, I think, some glimpses of" }, { "end": 476.96000000000004, "start": 469.36, "text": " what LLAMs are capable of in terms of just meta learners or as planners that could be applied to" }, { "end": 483.6, "start": 476.96000000000004, "text": " something like robotics. And I think that the other body of work that started being quite popular" }, { "end": 489.04, "start": 483.6, "text": " lately is just using language as a conditioning mechanism for policies for robotics." }, { "end": 497.68, "start": 490.08000000000004, "text": " An example of work like this would be BCZ by Eric Zhang and others, where you still use a" }, { "end": 503.36, "start": 497.68, "text": " large language model to find the embedding of the instruction that you're sending to the robot," }, { "end": 508.88, "start": 503.36, "text": " and that allows you to leverage natural instructions as well. But it doesn't really extract the high" }, { "end": 515.12, "start": 508.88, "text": " level knowledge from LLAM in the same way that they can thus, thus, it can contextualize it based" }, { "end": 519.6, "start": 515.12, "text": " on everything it learned. Tell us more about this robot. What is it? What is it capable of?" }, { "end": 526.32, "start": 520.32, "text": " So the robot that we use is a mobile manipulator from everyday robots. A mobile manipulator" }, { "end": 532.88, "start": 526.32, "text": " means it can navigate around and it also has an arm that can manipulate things. So in this work," }, { "end": 540.72, "start": 532.88, "text": " we mainly use the vision system. We use the camera images, which is 640 by 512 RGB images as input." }, { "end": 545.84, "start": 540.72, "text": " The robot has 7 degree of freedom arm with a 2 finger gripper attached to the end and then we" }, { "end": 552.24, "start": 545.84, "text": " mainly use that for manipulation. Finally, it has a navigation stack that it's based on wheels." }, { "end": 558.24, "start": 552.88, "text": " It can drive around in the scene without collision. So that's basically the robot that we use." }, { "end": 564.16, "start": 558.24, "text": " I want to highlight that the mobile manipulation is a big challenge because you need to decide" }, { "end": 570.32, "start": 564.16, "text": " where to stop to enable manipulation. So if you stop too far, then you're not able to manipulate." 
}, { "end": 577.36, "start": 570.32, "text": " So generally, it's a very difficult problem for us to solve and the robot platform enables us to do that." }, { "end": 582.32, "start": 577.36, "text": " You have taught this robot a set of skills, each with their own value function, and this is" }, { "end": 589.12, "start": 582.32, "text": " a pretty training step. How did you train these skills? What kind of skills are we talking about?" }, { "end": 598.4000000000001, "start": 589.12, "text": " Right. This is a good question. At the time when we published, I think this was around 300 different" }, { "end": 606.5600000000001, "start": 598.4000000000001, "text": " task instructions that included fairly simple skills such as picking, placing, moving things around," }, { "end": 612.4, "start": 606.56, "text": " placing things upright, knocking things over, things like that. And we'll be updating the paper" }, { "end": 618.3199999999999, "start": 612.4, "text": " very soon. We'll be adding additional skills such as opening and closing drawers and putting things" }, { "end": 623.52, "start": 618.3199999999999, "text": " in and out of them. So this is kind of the level of skill complexity that we introduced." }, { "end": 632.2399999999999, "start": 624.7199999999999, "text": " In terms of how we train these skills, this is I think the majority of the work goes and this is" }, { "end": 637.6, "start": 632.24, "text": " the really, really hard part of robotics. How do you actually get the robot to move and do the" }, { "end": 645.12, "start": 637.6, "text": " thing that you want it to do? So we use the combination of behavior cloning as well as" }, { "end": 651.2, "start": 645.12, "text": " train forceful learning. In this case, in particular, we are constantly comparing the different" }, { "end": 657.04, "start": 651.2, "text": " methods and how they scale with data and how they scale with the amount of demonstrations we have," }, { "end": 662.16, "start": 657.04, "text": " the amount of a ton data collected autonomously, whether we can leverage simulation data as well," }, { "end": 668.24, "start": 662.16, "text": " so this is constantly changing as to in which method is winning. At the time of the release of the" }, { "end": 673.76, "start": 668.24, "text": " paper, all the policies that were used on the real robot were trained using the behavior cloning," }, { "end": 678.3199999999999, "start": 673.76, "text": " and then we use the value functions that were trained in simulation, that were leveraging all" }, { "end": 684, "start": 678.3199999999999, "text": " the simulation data. Simulation was then transformed to look more realistic using cycle" }, { "end": 688.3199999999999, "start": 684, "text": " again so that the images reflect a little bit better what the real world looks like and we were" }, { "end": 694.1600000000001, "start": 688.32, "text": " getting the value functions from those. So where did the idea for C-can come from? For us, we" }, { "end": 701.6800000000001, "start": 694.1600000000001, "text": " started with trying to incorporate planning to do more long horizon tasks first and we were thinking" }, { "end": 707.0400000000001, "start": 701.6800000000001, "text": " of all kinds of planners and different ways of thinking about the representation of the level" }, { "end": 716.08, "start": 707.0400000000001, "text": " level skills and so on. 
And as we were looking into that in the meantime, Brian Eakter and" }, { "end": 722.32, "start": 716.08, "text": " Fasia noticed that language is a really really good interface for planning and it's not only a" }, { "end": 727.6, "start": 722.32, "text": " really good interface that allows you to compose these different plans in many different ways," }, { "end": 732.8000000000001, "start": 727.6, "text": " it's very compositional, but it also allows you to then leverage language models, which is kind of" }, { "end": 738.8000000000001, "start": 732.8000000000001, "text": " like a planner in the skies that you can just take from from from another lab that has trained it" }, { "end": 744.88, "start": 738.8000000000001, "text": " and and just use it and see how how well it works for your robotic domain. Yeah, I think during that" }, { "end": 752.4, "start": 744.88, "text": " time we also there is also a plethora of work that discuss using language model as 0 shot x where" }, { "end": 759.2, "start": 752.4, "text": " x could be like 0 shot reasoner, 0 shot planner, 0 shot learner and we were just thinking what if the" }, { "end": 765.52, "start": 759.2, "text": " x is robotics like what we can extract now like what knowledge can we extract from large language" }, { "end": 771.2, "start": 765.52, "text": " model and apply it to robotics. Apparently when we talk to a language model it produce something" }, { "end": 776.96, "start": 771.2, "text": " that is reasonable. For example, if you say I spill my drink it will say find cleaner find vacuum" }, { "end": 782.6400000000001, "start": 776.96, "text": " it's not absolutely right it's not actionable, but we find that the knowledge is there. So the" }, { "end": 788.48, "start": 782.6400000000001, "text": " question then becomes how do we make those knowledge more actionable and then that kind of" }, { "end": 793.5200000000001, "start": 788.48, "text": " inspired the work of say can. Okay, and then just one more definition. Can you clarify what you" }, { "end": 799.36, "start": 793.5200000000001, "text": " mean by grounding in this context exactly. Specifically in say can grounding refers to this idea" }, { "end": 805.28, "start": 799.36, "text": " of fordance models that are that allow you to predict what is the success rate of doing a certain" }, { "end": 811.12, "start": 805.28, "text": " task given given a current state. What's the what's the probability of you succeeding in the task given" }, { "end": 817.12, "start": 811.12, "text": " that you're starting at a particular state and given the description of the task. More generally" }, { "end": 823.6, "start": 817.12, "text": " the idea of grounding basically means that the the LLM don't really know what what the words" }, { "end": 827.52, "start": 823.6, "text": " that they're operating with the what they what they actually mean they're kind of just like" }, { "end": 832.96, "start": 827.52, "text": " parades that memorize different statistics and robotics kind of allows them to associate these" }, { "end": 839.52, "start": 832.96, "text": " words with with real world experiences. So they kind of ground them in in real experiences so that" }, { "end": 844.88, "start": 839.52, "text": " the robot actually it's or the whole system actually knows what it means to take something up or" }, { "end": 850.0799999999999, "start": 844.88, "text": " to drop something what it feels like and also what it what it looks like. 
So it's much more" }, { "end": 856.56, "start": 850.0799999999999, "text": " grounded knowledge. And I see that say can turns a natural language request into this list of steps" }, { "end": 861.28, "start": 856.56, "text": " that that corresponds to the set of skills that it already knows. So like if this human said" }, { "end": 866.4, "start": 861.92, "text": " how would you get a sponge from the counter and put it in the sink then the robot comes back with" }, { "end": 872.64, "start": 866.4, "text": " a list of steps one find a sponge to pick up the sponge three go to the sink etc. How do you get a" }, { "end": 880.56, "start": 874.16, "text": " general language model to come back with such a specific list. Yeah maybe I can speak to this" }, { "end": 886.88, "start": 880.56, "text": " question. So the way that we get the language model to produce the steps is the following. So" }, { "end": 892.64, "start": 886.88, "text": " first we use fuel shot prompting. So we already showed the language model a couple of examples" }, { "end": 898.3199999999999, "start": 892.64, "text": " of human ask a question and the robot will list one two three four five what are the steps." }, { "end": 904.4, "start": 898.3199999999999, "text": " So the fuel shot prompting generally get the structure of the answer correct. So then every time" }, { "end": 910.56, "start": 904.4, "text": " human ask a question the robot will answer like one two three four five that's the fuel shot prompting part." }, { "end": 918, "start": 910.56, "text": " Then we also have the scoring based decoding mechanism. So we basically have a question human ask" }, { "end": 925.1999999999999, "start": 918, "text": " a question and then we have robot says one and one and then we leave a blank there and we put all" }, { "end": 931.92, "start": 925.1999999999999, "text": " possible actions of the robot can do and then score different options. So for example when the" }, { "end": 938.3199999999999, "start": 931.92, "text": " question has a sponging that every option that also contains sponge will have higher score." }, { "end": 944.4799999999999, "start": 938.3199999999999, "text": " That's because just how generally language model works it scores relevance. So that's the language" }, { "end": 951.36, "start": 944.4799999999999, "text": " part of the decoding scheme. We also have another branch that predict affordances like what is the" }, { "end": 956.64, "start": 951.36, "text": " success rate of you find a sponge here what is the success rate of you pick up the sponge here." }, { "end": 962.3199999999999, "start": 956.64, "text": " We multiply the two scores the language score and also the affordance score and we get a combined" }, { "end": 968.96, "start": 962.3199999999999, "text": " score. Then all the options get a combined score and we just choose the highest combined score." }, { "end": 974.3199999999999, "start": 968.96, "text": " In this case the first step would be to find a sponge and then we repeat this process and append" }, { "end": 981.52, "start": 974.3199999999999, "text": " like two blank and then ask the language model to score the second step. We repeat this process" }, { "end": 987.12, "start": 981.52, "text": " until it outputs a done token which means the entire task sequence is finished." 
}, { "end": 993.76, "start": 987.76, "text": " Maybe to add a little bit to this I think one aspect that at least I didn't realize initially that" }, { "end": 999.68, "start": 993.76, "text": " we started noticing once we deployed the system is how interpretable it is. So we can very easily" }, { "end": 1004.56, "start": 999.68, "text": " visualize what the language model thinks and what the robot thinks what the affordance model thinks" }, { "end": 1012.16, "start": 1004.56, "text": " and then you can kind of look into the robots to see whether the affordance model didn't have that" }, { "end": 1019.5999999999999, "start": 1012.16, "text": " many successes in that particular scenario and because of that it downgrades that particular skill" }, { "end": 1025.6, "start": 1019.5999999999999, "text": " or whether the LLM makes a prediction that doesn't really make sense so you can kind of quickly" }, { "end": 1030.48, "start": 1025.6, "text": " take a look and see exactly how the algorithm is progressing and why you picked that particular step" }, { "end": 1037.92, "start": 1030.48, "text": " as opposed to another one. So when the humans asked for how to do something and the robot I understand" }, { "end": 1044.88, "start": 1037.92, "text": " for the when the first step the robot is going to answer based on the existing context and then when" }, { "end": 1050.56, "start": 1044.88, "text": " it goes to say the second step is this the language model knowing that after the step one is done" }, { "end": 1055.6, "start": 1050.56, "text": " from general knowledge what makes sense to do next and then combine with the scoring so it's really" }, { "end": 1059.92, "start": 1055.6, "text": " leveraging the general knowledge embedded in the language model to order these things correctly" }, { "end": 1065.44, "start": 1059.92, "text": " is that right? That's right yeah because as you execute it steps get appended to the to the" }, { "end": 1070, "start": 1065.44, "text": " prompt so then the language model kind of knows that I already found this point now I should" }, { "end": 1075.2, "start": 1070, "text": " probably pick it up and then I use the affordance models and all of that to score it. Right that's" }, { "end": 1081.1200000000001, "start": 1075.2, "text": " very interesting so it kind of just emerges this ability to to chain these things together just" }, { "end": 1087.1200000000001, "start": 1081.1200000000001, "text": " emerges from that from that language model. That's correct yeah I mean there's I guess there's" }, { "end": 1091.04, "start": 1087.12, "text": " really good things about that and then some negatives too right like it would be it would be" }, { "end": 1096, "start": 1091.04, "text": " challenging to teach it or correct something in its planning is that is that one possible imitation here?" 
}, { "end": 1102.3999999999999, "start": 1096, "text": " Yeah that's a that's a very good point so here we are kind of diving into the world of planning" }, { "end": 1107.4399999999998, "start": 1102.3999999999999, "text": " that is a very vast world with you know many different options where you can do myopic planning" }, { "end": 1111.6, "start": 1107.4399999999998, "text": " not myopic planning with the feedback and all kinds of different things and I think SACAN is" }, { "end": 1116.8, "start": 1111.6, "text": " just a very first step showing how you can do how you can make these open loop plans but they're" }, { "end": 1122.96, "start": 1116.8, "text": " very myopic so like for instance if you fail that step number two the language model doesn't really" }, { "end": 1129.28, "start": 1122.96, "text": " have any feedback mechanism in SACAN to realize that while I failed I should probably try it again or" }, { "end": 1134.1599999999999, "start": 1129.28, "text": " you know try to fix it somehow. Some of the some of these things we are starting to work on how to" }, { "end": 1139.04, "start": 1134.1599999999999, "text": " make it a little less myopic and kind of optimize all the steps so that you can you can kind of look" }, { "end": 1145.2, "start": 1139.04, "text": " a little bit more into the future and think about your plan more holistically in the follow-up work" }, { "end": 1150.8, "start": 1145.2, "text": " on an inner monologue that I think they could talk a little bit more about we also try to incorporate" }, { "end": 1155.76, "start": 1150.8, "text": " some some feedback into the into the planner and into the language model so that we can correct" }, { "end": 1159.8400000000001, "start": 1155.76, "text": " these these missteps if they happen. Cool I'm looking forward to talking about that as well in" }, { "end": 1165.52, "start": 1159.8400000000001, "text": " just a few minutes. In terms of SACAN were there surprises for you in doing this and were you" }, { "end": 1170.0800000000002, "start": 1165.52, "text": " like pretty sure this would work from the beginning or were you were did you have a lot of uncertainty" }, { "end": 1176.32, "start": 1170.08, "text": " about certain parts of it? Yeah I think that's a super great question. At beginning we are not sure" }, { "end": 1182.1599999999999, "start": 1176.32, "text": " if that's gonna work like we just try to feel examples and it works surprisingly well for example" }, { "end": 1188.3999999999999, "start": 1182.1599999999999, "text": " it starts to work for if I say throw something away then and understand you need to go to the trash can" }, { "end": 1194.3999999999999, "start": 1188.3999999999999, "text": " so I think that's kind of interesting. 
 Another interesting, kind of aha moment for me is when" }, { "end": 1200.64, "start": 1194.4, "text": " we say I spilled my drink, can you help, and the robot goes to find a sponge so that's super surprising" }, { "end": 1208.16, "start": 1200.64, "text": " to me because in order to do that sort of inference you need to understand a lot of world knowledge" }, { "end": 1213.2800000000002, "start": 1208.16, "text": " you need to understand that if I spill something that means there's liquid, if there's liquid a" }, { "end": 1218.8000000000002, "start": 1213.2800000000002, "text": " sponge can absorb the liquid, then the robot should find a sponge so I always think that the" }, { "end": 1226.3999999999999, "start": 1218.8, "text": " sponge was kind of an aha moment that was super surprising and this kind of emergent capability is" }, { "end": 1232.48, "start": 1226.3999999999999, "text": " super cool so that's one thing that is surprising to me. Another thing that is kind of surprising to" }, { "end": 1240.48, "start": 1232.48, "text": " me is how well things scale so in the paper that we are about to update we'll talk about PaLM-SayCan" }, { "end": 1247.12, "start": 1240.48, "text": " so previously SayCan was using a language model called FLAN which is about a 137 billion parameter" }, { "end": 1253.36, "start": 1247.12, "text": " model. When we switch it to a larger language model, which is the Pathways Language Model (PaLM), which is a" }, { "end": 1259.28, "start": 1253.36, "text": " 540 billion parameter model, then it solves a lot of the planning mistakes that we were seeing at a" }, { "end": 1265.84, "start": 1259.28, "text": " smaller scale so it's really surprising that just by scaling the language model we are solving" }, { "end": 1271.4399999999998, "start": 1265.84, "text": " a lot of planning errors on these commonsense reasoning problems. One particular thing that" }, { "end": 1277.6000000000001, "start": 1271.44, "text": " is super interesting is that language models historically don't handle negation very well" }, { "end": 1283.92, "start": 1277.6000000000001, "text": " if you say I don't like Coke, bring me a drink, it will still bring you a Coke because it" }, { "end": 1290.72, "start": 1284.96, "text": " has Coke in the context, it has Coke in the previous sentence, so the relevance" }, { "end": 1298.64, "start": 1291.44, "text": " just makes the score of the Coke higher. With the new PaLM-SayCan we find we can do a sort of" }, { "end": 1304.4, "start": 1298.64, "text": " technique called chain-of-thought prompting so basically before generating the plan the robot also" }, { "end": 1309.76, "start": 1304.4, "text": " generates a chain of thought, and with a chain of thought it handles negation super well, you can say" }, { "end": 1315.92, "start": 1310.3200000000002, "text": " the user doesn't like Coke so I will bring something else, Sprite is not Coke so I will bring" }, { "end": 1321.5200000000002, "start": 1315.92, "text": " Sprite, and then it will generate the plan to bring a Sprite instead so this type of emergent" }, { "end": 1328.4, "start": 1321.52, "text": " capability is super interesting for us to see and super exciting for us and surprises us a lot."
}, { "end": 1334.8, "start": 1329.52, "text": " Yeah I think for me the big surprise that I wasn't expecting was this this" }, { "end": 1341.68, "start": 1335.84, "text": " visibility for the robotic system to scale with better language models I think this is super" }, { "end": 1348.16, "start": 1341.68, "text": " exciting because as we know there's many many people many researchers working on getting LLMs to" }, { "end": 1354.88, "start": 1348.16, "text": " be better and scaling them up to just improving them constantly and just seeing that by using a" }, { "end": 1359.8400000000001, "start": 1354.88, "text": " better language model we can improve robotics performance kind of you know for free by just" }, { "end": 1364.0800000000002, "start": 1359.8400000000001, "text": " swapping it out without changing anything on the robot thing that is that is really exciting that" }, { "end": 1370.16, "start": 1364.0800000000002, "text": " kind of allows us to ride that wave of better and better LLMs and improving road performance that" }, { "end": 1375.1200000000001, "start": 1370.16, "text": " way so we've been seeing bigger and better performing LLMs I'm not sure what hardware they run" }, { "end": 1379.04, "start": 1375.12, "text": " on but I'm assuming they don't run on the robot is that right that's right they run on" }, { "end": 1384.9599999999998, "start": 1379.04, "text": " TPUs and we call it through some sort of bridge what does the latency look like there" }, { "end": 1390.4799999999998, "start": 1384.9599999999998, "text": " just is that a limiting factor at all or is it pretty fast yeah that's a great question so for" }, { "end": 1395.76, "start": 1390.4799999999998, "text": " some part of the robot system it's latency sensitive for example if you're doing grasping it's" }, { "end": 1400.8799999999999, "start": 1395.76, "text": " super sensitive to latency like if you miss one step then you're getting like outdated data and" }, { "end": 1407.5200000000002, "start": 1400.88, "text": " then it can mess up the manipulation fortunately for the planning it's not a time sensitive" }, { "end": 1413.8400000000001, "start": 1407.5200000000002, "text": " step like the robot can just stop and think it can tell people I am doing the inference so" }, { "end": 1420.64, "start": 1415.2, "text": " in terms of the latency for the latest palms they can we are seeing about three seconds of latency" }, { "end": 1427.92, "start": 1420.64, "text": " in terms of reasoning so it's actually not too bad usually each steps takes about like 30 seconds" }, { "end": 1434.16, "start": 1427.92, "text": " so it's not bottlenecked by the inference speed of the planning and then the value functions are" }, { "end": 1438.8000000000002, "start": 1434.16, "text": " they running locally as well or they're just fast anyway yeah the value functions are super fast" }, { "end": 1444, "start": 1438.8000000000002, "text": " they they can run couple of hertz so that's not a bottleneck I yesterday I told my wife and my" }, { "end": 1449.04, "start": 1444, "text": " mother-in-law about the interview today and about the robots and and they were they were excited" }, { "end": 1453.8400000000001, "start": 1449.04, "text": " they asked me well what can this robot cook and I had to explain that you know robotics is really" }, { "end": 1458.24, "start": 1453.84, "text": " hard and you know it's not at that state of the art is not there yet it's no feeling of" }, { "end": 1464.6399999999999, "start": 1458.24, "text": " sake and that's just 
 how the field is right now but what should I tell them in terms of when" }, { "end": 1470.3999999999999, "start": 1464.6399999999999, "text": " we might expect a robot like this to cook us a meal which sounds like pretty far off but" }, { "end": 1474.72, "start": 1470.3999999999999, "text": " maybe not the way things are going yeah I think it's really hard to make predictions about" }, { "end": 1479.84, "start": 1474.72, "text": " things like that I think we're making quite a lot of progress but it's kind of difficult to foresee" }, { "end": 1484.56, "start": 1479.84, "text": " what are the challenges that we'll see with getting there one thing that I tend to" }, { "end": 1492.6399999999999, "start": 1484.56, "text": " mention when I get asked by my family questions like that is Moravec's paradox, where in AI" }, { "end": 1498.3999999999999, "start": 1492.6399999999999, "text": " it's the easy things that are hard and it's the hard things that are relatively easier so the" }, { "end": 1503.84, "start": 1498.3999999999999, "text": " things that seem very easy to us such as manipulating objects or cooking a meal or just" }, { "end": 1508.3999999999999, "start": 1503.84, "text": " walking around and you know playing with toys and things like simple manipulation skills" }, { "end": 1514.24, "start": 1508.4, "text": " only seem easy because we've been doing them for thousands and thousands of years and" }, { "end": 1521.0400000000002, "start": 1514.24, "text": " evolution just made it so that it just seems extremely easy to us versus the other things that" }, { "end": 1527.0400000000002, "start": 1521.0400000000002, "text": " require more reasoning so things like mathematical equations or playing complex games things that" }, { "end": 1533.0400000000002, "start": 1527.0400000000002, "text": " we would usually consider the intelligent things there we haven't been doing them for that long" }, { "end": 1539.36, "start": 1533.04, "text": " on the evolutionary scale so robots or the algorithms don't have that much to catch up on so I" }, { "end": 1544.96, "start": 1539.36, "text": " feel like the embodied version of AI where we actually have to manipulate the world around us" }, { "end": 1549.6, "start": 1544.96, "text": " and understand what's happening this is the really really hard part and it's kind of difficult" }, { "end": 1556.08, "start": 1551.12, "text": " to get that across sometimes because you know it just seems so easy like I can cook a meal very" }, { "end": 1563.04, "start": 1556.08, "text": " easily or even a small kid kind of has manipulation capabilities that far" }, { "end": 1569.12, "start": 1563.04, "text": " exceed what the robots can do today okay I'm gonna try to explain that to them thanks for that okay" }, { "end": 1576.56, "start": 1569.12, "text": " and then in terms of the idea of end-to-end learning versus this compositional style where you're" }, { "end": 1582.1599999999999, "start": 1576.56, "text": " putting together prebuilt components I'm curious how you guys see that like it seemed that" }, { "end": 1588.8000000000002, "start": 1582.16, "text": " at some time you know some people were really extolling the virtues of end-to-end deep learning but then" }, { "end": 1594.4, "start": 1588.8000000000002, "text": " more recently these foundation models have become really popular where there's a lot of" }, { "end": 1599.52, "start": 1594.4, "text": " pre-training involved and we
 don't expect to learn end-to-end, or at most a bit of fine-tuning" }, { "end": 1604.5600000000002, "start": 1600.0800000000002, "text": " do you think the future is going to involve a lot more of this pre-training and composition" }, { "end": 1609.3600000000001, "start": 1604.5600000000002, "text": " the way we're seeing here yeah that's a really good question looking back at how robot" }, { "end": 1614.6399999999999, "start": 1609.36, "text": " learning has evolved it seems that initially we started with things that are a little bit more" }, { "end": 1619.84, "start": 1614.6399999999999, "text": " modular they're a little bit easier to implement a little bit easier to understand and they kind of" }, { "end": 1625.36, "start": 1619.84, "text": " make a lot of sense initially and then as we get better at them and they start to work we put" }, { "end": 1630.1599999999999, "start": 1625.36, "text": " them into this end-to-end system that optimizes only for the thing that we care about and then it" }, { "end": 1634.7199999999998, "start": 1630.1599999999999, "text": " finds the right representations to communicate between these different components that has" }, { "end": 1639.68, "start": 1634.72, "text": " happened in the past for instance with perception and control where we would have a perceptual system" }, { "end": 1645.76, "start": 1639.68, "text": " that would for instance estimate the pose of an object or recognize the object and so on" }, { "end": 1652, "start": 1645.76, "text": " and then we'll take that representation that we came up with and feed it to a controller" }, { "end": 1657.92, "start": 1652, "text": " I think that right now with the language models with these planners we are going through" }, { "end": 1663.84, "start": 1657.92, "text": " a similar cycle where we are at this very initial step where it's much easier to think of them as" }, { "end": 1668.56, "start": 1663.84, "text": " these modular components where you have a separate planner that just gives you the next set of steps" }, { "end": 1674.24, "start": 1668.56, "text": " to do and then you have a separate, in this case end-to-end," }, { "end": 1681.12, "start": 1675.76, "text": " closed loop controller that can take a short command and execute it but I think over time as we" }, { "end": 1687.12, "start": 1681.12, "text": " start to develop them more and more they'll become more unified and more end-to-end. In this" }, { "end": 1693.04, "start": 1687.12, "text": " work in particular, in SayCan, prompting the LLMs was just the path of least resistance, it" }, { "end": 1698.8, "start": 1693.04, "text": " was just very easy to do and we could see the results straight away but I think we can start" }, { "end": 1705.44, "start": 1698.8, "text": " thinking about how we can combine it into one big system where we can be fine-tuning the" }, { "end": 1711.6, "start": 1705.44, "text": " LLM planner as well as the low-level skills jointly based on all the data that we are collecting" }, { "end": 1716.6399999999999, "start": 1711.6, "text": " okay let's talk about some of the work that this is built upon we won't go into" }, { "end": 1721.92, "start": 1716.6399999999999, "text": " great depth with these but just a few brief mentions, for example I think the" }, { "end": 1729.04, "start": 1721.92, "text": " RL algorithm used here is MT-Opt, based on QT-Opt, is that right, and can you briefly tell" }, { "end": 1734.16,
"start": 1729.04, "text": " us about qt opt I think I if I understand correctly that's what you're using to learn the grasping" }, { "end": 1741.1200000000001, "start": 1734.8000000000002, "text": " from images with offline pre-training that's right the why qt opt there's other RL algorithms that" }, { "end": 1746, "start": 1741.1200000000001, "text": " that could do continuous control from images could you could you just spend a moment telling us why" }, { "end": 1752.96, "start": 1746, "text": " qt opt and empty opt for this for this case right yeah of course um yeah so in our experience we've" }, { "end": 1757.52, "start": 1752.96, "text": " been experimenting with a lot of these different algorithms and a lot of different variants I think" }, { "end": 1764, "start": 1758.4, "text": " one aspect that makes it a little bit different for us is that we try to do it at scale so with a" }, { "end": 1769.76, "start": 1764, "text": " lot of different tasks with a lot of robots a lot of data and so on and often the the algorithms" }, { "end": 1778.32, "start": 1769.76, "text": " as we see them on smaller scale benchmarks compared differently on larger scale so with qt opt in" }, { "end": 1783.76, "start": 1778.32, "text": " particular what we really like about that is that it's really stable if if setup correctly it" }, { "end": 1791.52, "start": 1783.76, "text": " just optimizes things quite well and it usually works and it's much less fragile than algorithms" }, { "end": 1799.28, "start": 1791.52, "text": " that use other actual critical algorithms I think we have some hunches on why that is one kind" }, { "end": 1808.56, "start": 1799.28, "text": " of one thought there is that in qt opt the optimization of the of the actor is completely independent" }, { "end": 1814, "start": 1808.56, "text": " from the optimization of the q function and I think that makes it just like a little bit more robust" }, { "end": 1820.96, "start": 1814, "text": " setup where there's no actor we just have another optimizer that stays constant throughout training" }, { "end": 1827.2, "start": 1822.16, "text": " and that kind of removes this one one aspect that can be quite difficult in in actor critical" }, { "end": 1831.28, "start": 1827.2, "text": " algorithms so we just found it a little bit more stable in these large scale settings but I think" }, { "end": 1837.8400000000001, "start": 1832.32, "text": " this is not the final answer I think you know as we explore these algorithms more and there's so" }, { "end": 1843.1200000000001, "start": 1837.8400000000001, "text": " many things to try so many different combinations between these algorithms different actor architectures" }, { "end": 1848.56, "start": 1843.1200000000001, "text": " critic architectures you know optimization schemes and so on I think we'll get more answers just" }, { "end": 1856.32, "start": 1849.52, "text": " at the current time to us qt opt will working the best and if you mentioned" }, { "end": 1862.08, "start": 1856.32, "text": " ralmo gen which I understand was part of your dissertation and that was partly inspirational for" }, { "end": 1869.04, "start": 1862.08, "text": " this work can you briefly describe what what that adds yeah sure so remote gen is a a previous work" }, { "end": 1875.9199999999998, "start": 1869.04, "text": " by me which explores using motion generation and rain force learning together so for that work" }, { "end": 1882.32, "start": 1876.8, "text": " we are also tackling the problem of mobile manipulation it's super 
 challenging because you need" }, { "end": 1888.24, "start": 1882.32, "text": " to control where to move and where to do the manipulation what we found in that paper so that's" }, { "end": 1894.32, "start": 1888.24, "text": " basically a hierarchical reinforcement learning work and the low level is motion generation which is" }, { "end": 1901.52, "start": 1894.32, "text": " not learned but rather some classical like planning based methods what we found is that for these" }, { "end": 1907.9199999999998, "start": 1901.52, "text": " long horizon problems it's beneficial to decompose the problem in a hierarchical fashion and it" }, { "end": 1914.96, "start": 1907.92, "text": " will be even better if the steps decomposed are semantically meaningful such as some like navigation" }, { "end": 1921.52, "start": 1914.96, "text": " steps interleaved with some manipulation steps so I would say that's a good inspiration for the" }, { "end": 1928.72, "start": 1921.52, "text": " SayCan line of work where we also decompose a long horizon problem into a few short horizon steps" }, { "end": 1935.1200000000001, "start": 1928.72, "text": " which is like more manageable to learn in the low level okay and then you also mentioned" }, { "end": 1943.36, "start": 1935.12, "text": " another work, Actionable Models, where it uses hindsight relabeling and goal-chaining I guess to help" }, { "end": 1949.9199999999998, "start": 1943.36, "text": " with scaling the learning with a fixed amount of data, can you just say briefly what this" }, { "end": 1956.32, "start": 1949.9199999999998, "text": " Actionable Models work is contributing yeah so Actionable Models was the work that we did right after MT-Opt" }, { "end": 1963.4399999999998, "start": 1956.8799999999999, "text": " where I think the main kind of contribution in terms of SayCan, this is quite nuanced here" }, { "end": 1968.8, "start": 1963.44, "text": " so this is an offline RL method that takes all the data that we used for MT-Opt we" }, { "end": 1975.68, "start": 1968.8, "text": " collected for MT-Opt; we had a pre-specified set of tasks, 12 or 14 tasks or something like that" }, { "end": 1981.92, "start": 1977.1200000000001, "text": " these were encoded as just one hot vectors so there was just task number one, two, three and so on" }, { "end": 1987.1200000000001, "start": 1983.1200000000001, "text": " we collected a lot of data with it then we did this multi-task reinforcement learning with" }, { "end": 1996.6399999999999, "start": 1987.12, "text": " QT-Opt called MT-Opt and we were constantly talking about what other tasks to add and as you try" }, { "end": 2002.9599999999998, "start": 1996.6399999999999, "text": " to scale these systems this question actually becomes quite tricky especially as you try to do" }, { "end": 2007.28, "start": 2002.9599999999998, "text": " this at scale and you want the robots to run mostly autonomously and so on and this is something that" }, { "end": 2012.6399999999999, "start": 2007.28, "text": " didn't occur at least to me when we were starting that project that you know you kind of" }, { "end": 2018.4, "start": 2012.64, "text": " have to come up with as many tasks as you can and at the same time these tasks" }, { "end": 2024, "start": 2018.4, "text": " have to potentially reset each other so that they can run continuously autonomously without any human" }, { "end": 2029.92, "start": 2024, "text": " intervention they also have to be meaningful tasks and very diverse and so on so it seemed that" }, { "end":
2037.76, "start": 2029.92, "text": " at certain scale just coming up with the tasks themselves becomes a bottleneck so in actionable models" }, { "end": 2043.6, "start": 2037.76, "text": " we thought that rather than thinking of all kinds of different tasks let's just do a goal condition" }, { "end": 2048.88, "start": 2043.6, "text": " Q learning so let's consider every state as a potential task as a potential goal and try to get" }, { "end": 2054.08, "start": 2048.88, "text": " to that goal this was done completely offline so we didn't have to collect any of the shown data" }, { "end": 2060, "start": 2054.64, "text": " and we trained on all the data collected with empty opt and it worked really well I think this was" }, { "end": 2065.92, "start": 2060, "text": " kind of a big aha moment for us in terms of you know these one hot representations that we" }, { "end": 2071.36, "start": 2065.92, "text": " were using before to represent tasks were it kind of difficult to work with and the goal images" }, { "end": 2078.08, "start": 2071.36, "text": " just seemed much closer to what we actually wanted it would also allow us to just scale the number" }, { "end": 2083.92, "start": 2078.08, "text": " of tasks significantly because now any goal images actually a task representation and I think" }, { "end": 2090.2400000000002, "start": 2083.92, "text": " that was a step towards getting to language condition policies where language is this kind of" }, { "end": 2096.56, "start": 2090.24, "text": " spacing between where it's very compositional it's very natural to express to the robot what you" }, { "end": 2104.72, "start": 2096.56, "text": " wanted to do much more of a natural than than goal image but at the same time language captures" }, { "end": 2110.8799999999997, "start": 2104.72, "text": " these different relationships between tasks much better than one hot vectors for instance so if we" }, { "end": 2116.72, "start": 2110.8799999999997, "text": " had a task that is I don't pick up a carrot and pick up a cucumber if one is represented as" }, { "end": 2121.52, "start": 2116.72, "text": " task number one and the other is represented as task number two in terms of the representation of" }, { "end": 2128, "start": 2121.52, "text": " the sass there completely orthogonal there's nothing that they share versus the way that language" }, { "end": 2134.72, "start": 2128, "text": " was formed was such that you know we call things picking because whether it's picking carrot or" }, { "end": 2139.7599999999998, "start": 2134.72, "text": " picking cucumbers they kind of look very similar the language can groups them together that's how it" }, { "end": 2145.4399999999996, "start": 2139.7599999999998, "text": " came about so I think language not it's not only a really good interface but it also allows" }, { "end": 2151.12, "start": 2145.44, "text": " for better representations for learning all of these skills together okay let's move on to follow" }, { "end": 2156.56, "start": 2151.12, "text": " up work this is called inner monologue embodied reasoning through planning with action with language" }, { "end": 2162.64, "start": 2156.56, "text": " models and that was with yourself as authors slash co authors so this paper was published" }, { "end": 2167.36, "start": 2162.64, "text": " since we scheduled the interview and it definitely feels like an unexpected bonus so I'm excited to" }, { "end": 2173.6, "start": 2167.92, "text": " to be able to chat about you with us you you extend say can in in some different ways and 
get" }, { "end": 2178.64, "start": 2173.6, "text": " some really interesting results here so can you talk about the general idea of inner monologue for" }, { "end": 2184.08, "start": 2178.64, "text": " the inner monologue it's mainly tried to address the shortcomings of say can so say can is more" }, { "end": 2191.36, "start": 2184.08, "text": " doing open loop planning and it doesn't respond to certain failure cases it would just continue to do" }, { "end": 2197.12, "start": 2191.36, "text": " like plan further for inner monologue we tried to let the language model source feedback" }, { "end": 2203.8399999999997, "start": 2197.12, "text": " from the environment and from human and then do close loop planning one big inspiration for the" }, { "end": 2209.68, "start": 2203.8399999999997, "text": " inner monologue is palm-thick say can because we find using large larger language models it gives us" }, { "end": 2216.24, "start": 2209.68, "text": " like some extra capability to play with so we try to have a more unstructured way of like prompting" }, { "end": 2222, "start": 2216.24, "text": " the language model and find it can still do the planning in a very high quality so that's kind of" }, { "end": 2227.6, "start": 2222, "text": " the main idea of inner monologue so the text in the inner monologue looks looks a little bit like a" }, { "end": 2233.76, "start": 2227.6, "text": " screenplay script with the different actors and there's some narration different statements by the" }, { "end": 2239.76, "start": 2233.76, "text": " human robot scene some questions and I gathered it's all felt and fed into the language model so what" }, { "end": 2246.08, "start": 2239.76, "text": " are the different types of utterances here that the the text can contain is different than say can" }, { "end": 2250.8, "start": 2246.08, "text": " right right that's that's different than say can I would like to say that the inner monologue is" }, { "end": 2257.04, "start": 2250.8, "text": " also inspired by socratic models where they find you can just use multiple models to communicate" }, { "end": 2264.48, "start": 2257.04, "text": " using language so this is exactly what we're doing here there are different actors that can can" }, { "end": 2271.6800000000003, "start": 2264.48, "text": " talk to each other and then there is also a planning step which summarized the whoever have talked" }, { "end": 2278.0800000000004, "start": 2271.6800000000003, "text": " and generated a plan so here are some of the actors that we have here the first is success detection" }, { "end": 2283.7599999999998, "start": 2278.08, "text": " which it detects if a previous step is successful or not and then it will say the action is successful" }, { "end": 2289.7599999999998, "start": 2283.7599999999998, "text": " or the action failed second we have passive sync description the passive sync description" }, { "end": 2294.96, "start": 2289.7599999999998, "text": " basically describes the scene it can be an object detector telling you there are certain objects" }, { "end": 2300.08, "start": 2294.96, "text": " in the scene there are certain objects in certain locations there are some like state of the object" }, { "end": 2305.68, "start": 2300.08, "text": " this all pass this all fall into the passive sync description there is also active sync description" }, { "end": 2311.3599999999997, "start": 2305.68, "text": " where the robot can actively ask questions about the scene it can ask human like what is a color" }, { "end": 2318, "start": 
2311.3599999999997, "text": " of certain things or it can ask a human here are two drinks which one do you like so it will ask" }, { "end": 2323.04, "start": 2318, "text": " question the way it feels it needs to so these are the source of feedback that we are gathering" }, { "end": 2328.16, "start": 2323.04, "text": " so we we talked to Rohan Shah recently on the show and he talked about this idea of active querying" }, { "end": 2333.12, "start": 2328.16, "text": " and actually learning to learning to ask but in this setting here how does your system" }, { "end": 2339.44, "start": 2333.12, "text": " learn or figure out when it's appropriate to ask we figure out where to ask mainly through" }, { "end": 2346.96, "start": 2340, "text": " it's still through field shot prompting we give it a couple examples when there are ambiguous" }, { "end": 2354.48, "start": 2347.92, "text": " when the query is ambiguous then it will further ask to clarify the query it's a little bit" }, { "end": 2360.4, "start": 2354.48, "text": " into the implementation detail where like the robot finishes a task and we score different options" }, { "end": 2367.6800000000003, "start": 2360.4, "text": " we score and continue and ask right so if the end ask score is higher then we will further prompt" }, { "end": 2374.56, "start": 2367.6800000000003, "text": " the language model to ask a question so here it's a slight deviation from the say can like" }, { "end": 2381.04, "start": 2374.56, "text": " scoring based decoding but for these cases we find generative decoding is more helpful here" }, { "end": 2387.2000000000003, "start": 2381.04, "text": " and it can always generate like meaningful questions to answer to ask to reduce ambiguity for" }, { "end": 2392.96, "start": 2387.2, "text": " example and you reported some interesting generalization and emergent capabilities in this" }, { "end": 2396.96, "start": 2392.96, "text": " intermodal log work can you can you talk about that and were some of those surprising to you" }, { "end": 2404, "start": 2397.7599999999998, "text": " yeah there are some some generalization or emergent capability that are super surprising to us" }, { "end": 2408.8799999999997, "start": 2404, "text": " so first let me just briefly define like what are the emergent capability I guess there are two" }, { "end": 2415.3599999999997, "start": 2408.8799999999997, "text": " meanings like one is that the capability only emerges at a larger scale so in this case we use" }, { "end": 2423.28, "start": 2415.36, "text": " the palm pathway language model and such capability only exhibit in such a scale if you use a smaller" }, { "end": 2427.92, "start": 2423.28, "text": " language model it probably will not have those capabilities the second kind of" }, { "end": 2434.88, "start": 2428.8, "text": " implicit meaning of emergent capability is that it's not a shown in the like field shot prompt so" }, { "end": 2444.4, "start": 2434.88, "text": " it's completely new to ask those capabilities one capability that we find is that it can generalize to" }, { "end": 2451.92, "start": 2444.4, "text": " like human changing the request for example the human say go to go to do task a and then as a robot" }, { "end": 2459.6800000000003, "start": 2451.92, "text": " was doing we insert go to do task b and the robot will change its plan to do task b and here we can" }, { "end": 2466.08, "start": 2459.6800000000003, "text": " also say never mind go go to finish your previous task and then in this case robot go back to finish" }, { 
"end": 2473.12, "start": 2466.08, "text": " the task a so this is quite surprising to us because we find that it understands this history" }, { "end": 2479.68, "start": 2473.12, "text": " it understands what is what does the previous task mean all due to like the large language model" }, { "end": 2485.7599999999998, "start": 2479.68, "text": " and our interface is very natural there are a couple emergent capabilities such as" }, { "end": 2494.3199999999997, "start": 2487.3599999999997, "text": " in one case we find it can also generalize to like you can use emoji as a query for example you can" }, { "end": 2505.04, "start": 2494.32, "text": " say a square a yellow square points to red circle and it will put the yellow block into the red bow" }, { "end": 2511.04, "start": 2505.04, "text": " so this is another like emergent capability that we see that is super interesting another very" }, { "end": 2517.84, "start": 2511.04, "text": " interesting emergent capability is that it can also propose a new plan based on a prompt such as" }, { "end": 2524.88, "start": 2517.84, "text": " try a new try a new method like when the robot fails at doing a task there are usually two reasons" }, { "end": 2531.1200000000003, "start": 2524.88, "text": " first it could it could be its manipulation policy has noise so it fails at doing a task in this" }, { "end": 2537.2000000000003, "start": 2531.1200000000003, "text": " case the best plan was to try again there could also be that the plan is not feasible for example" }, { "end": 2543.04, "start": 2537.2000000000003, "text": " if you are trying to grasp a block and the block is too heavy and in this case the plan would be" }, { "end": 2549.12, "start": 2543.04, "text": " to change a block to grasp so we find we can just provide a tiny hint to the language model we can" }, { "end": 2554.32, "start": 2549.12, "text": " just say please try a different plan or do you have any other idea and then the language model" }, { "end": 2560.16, "start": 2554.32, "text": " would just generate a different plan so this is also super exciting for us we have never seen this" }, { "end": 2567.52, "start": 2560.16, "text": " in our previous work and I saw that you showed inner monologue did well on on unseen tasks actually" }, { "end": 2573.28, "start": 2567.52, "text": " to me it seems surprisingly well and where is the baseline method got all zero so did you did you" }, { "end": 2580.72, "start": 2573.28, "text": " expect it to be doing this well on unseen tasks I think for in terms of unseen tasks we I guess" }, { "end": 2588.48, "start": 2580.72, "text": " you are referring to the comparison with clay ports yes yes so the clay port is trained to do" }, { "end": 2594.88, "start": 2588.48, "text": " like those tasks with demonstration so in that case it naturally doesn't generalize to new tasks" }, { "end": 2600.96, "start": 2594.88, "text": " super well like it's mainly we will perform pretty well for the same task but it doesn't generalize" }, { "end": 2606.2400000000002, "start": 2600.96, "text": " to novel tasks that's because the clipboard doesn't leverage like the rich knowledge presented" }, { "end": 2612.2400000000002, "start": 2606.2400000000002, "text": " in the large language models in our work the generalization mainly come from the language model" }, { "end": 2618.1600000000003, "start": 2612.2400000000002, "text": " in that case it's kind of natural for inner monologue to generalize to novel tasks where we see" }, { "end": 2625.92, "start": 2618.16, "text": " some 
 other methods struggle and does the inner monologue consider the whole text as the prompt" }, { "end": 2630.64, "start": 2625.92, "text": " or, you mentioned a scratchpad, I actually didn't follow that, does it use the whole" }, { "end": 2637.12, "start": 2630.64, "text": " thing as the prompt for the language model right it uses the whole thing as the prompt so" }, { "end": 2644.24, "start": 2637.12, "text": " I mentioned scratchpad because there is some relevant work in the NLP community that kind of" }, { "end": 2650.08, "start": 2644.24, "text": " inspired the inner monologue two of the papers are, one is chain-of-thought prompting where they" }, { "end": 2656.64, "start": 2650.08, "text": " just allow the language model to generate a chain of thought before actually decoding the answer another" }, { "end": 2661.9199999999996, "start": 2656.64, "text": " is called scratchpad where the language model can just like call different modules and then" }, { "end": 2669.4399999999996, "start": 2662.8799999999997, "text": " keep some notes in the scratchpad before decoding an answer in the inner monologue we use the inner" }, { "end": 2677.84, "start": 2669.44, "text": " monologue itself as a scratchpad so every actor can write to that scratchpad and then we decode" }, { "end": 2684.96, "start": 2677.84, "text": " some actions, like every time we decode for example a robot action it is consuming all previous" }, { "end": 2691.92, "start": 2684.96, "text": " history steps as a prompt I see okay can we talk about this set of pre-trained skills how do you" }, { "end": 2697.28, "start": 2691.92, "text": " expand this set of skills Karol you said that a large part of the work was training these" }, { "end": 2701.6000000000004, "start": 2697.28, "text": " skills do you see a way around that or are you envisioning a way to automatically" }, { "end": 2707.28, "start": 2701.6000000000004, "text": " acquire skills unsupervised or how do you see that scaling up yeah this is a really" }, { "end": 2713.44, "start": 2707.28, "text": " important question and I can't emphasize enough how difficult it is to actually work on the" }, { "end": 2717.52, "start": 2713.44, "text": " low level skills and how important it is, this is definitely the bottleneck for the entire system" }, { "end": 2723.6000000000004, "start": 2718.7200000000003, "text": " so I think one thing that is quite exciting about SayCan is that before when we were" }, { "end": 2729.12, "start": 2723.6, "text": " thinking about what skills to add we would usually just like sit down and you know as engineers" }, { "end": 2733.6, "start": 2729.12, "text": " and researchers we would just think about it and vote or something like this and then add that skill" }, { "end": 2742.16, "start": 2734.4, "text": " now we are at the level where SayCan starts to be useful and can be used in an office for" }, { "end": 2748.24, "start": 2742.16, "text": " instance where the robot can maybe bring you a snack from a kitchen or something like this" }, { "end": 2753.2799999999997, "start": 2748.24, "text": " so it can start interacting with real users I think this is probably a much better way" }, { "end": 2757.2000000000003, "start": 2753.28, "text": " of coming up with new tasks so we can just see what are the things that users ask for" }, { "end": 2762.6400000000003, "start": 2758.4, "text": " quite often and then see what are the skills that would enable that so we can kind of" }, { "end": 2768.88,
"start": 2762.6400000000003, "text": " more automatically decide what are the what are the the things that are missing in terms of how to" }, { "end": 2774.88, "start": 2768.88, "text": " add the skills there's many options there so sake and it's quite modular in that way" }, { "end": 2780.6400000000003, "start": 2775.76, "text": " where you can add a skill that was trained with behavior cloning with reinforcement learning" }, { "end": 2786.3199999999997, "start": 2780.64, "text": " and could be a scripted skill anything works as long as it has an affordance function associated" }, { "end": 2792.24, "start": 2786.3199999999997, "text": " with it so it kind of allows us to consider all these options separately when we when we are" }, { "end": 2798.3199999999997, "start": 2792.24, "text": " thinking about these skills we're kind of thinking about potentially also having the the language" }, { "end": 2803.92, "start": 2798.3199999999997, "text": " model come up with the skills that that could be useful in these settings so that would automate" }, { "end": 2810.4, "start": 2803.92, "text": " it even further overall yeah there's a lot of work that goes into this and I hope that will have" }, { "end": 2817.36, "start": 2810.4, "text": " some answers or some reports on this soon so stay tuned it seems to me the use of these" }, { "end": 2822.48, "start": 2817.36, "text": " large language models in this context is maybe a bit of a double edge sword like on you showed in" }, { "end": 2827.36, "start": 2822.48, "text": " the in the metal monologue paper that you had a request in Chinese and even though you didn't" }, { "end": 2831.52, "start": 2827.36, "text": " design it to understand Chinese necessarily because the language model had seen Chinese before it" }, { "end": 2836.1600000000003, "start": 2831.52, "text": " was able to understand at zero shot and do the right thing which is pretty amazing and then on" }, { "end": 2840.1600000000003, "start": 2836.1600000000003, "text": " the other hand it the language models would have all these things in them that you don't really" }, { "end": 2845.52, "start": 2840.16, "text": " need for this setting or maybe even I'm not sure if you'd even necessarily want like do you want" }, { "end": 2850.48, "start": 2845.52, "text": " your kitchen robot to have read all of read it or to understand irony and all this stuff I don't" }, { "end": 2858, "start": 2850.48, "text": " know maybe you do can you talk about like the idea of using these general purpose language models" }, { "end": 2864.16, "start": 2858, "text": " for very specialized purposes do you think in the future you'd want to have very specialized" }, { "end": 2869.44, "start": 2864.16, "text": " language models that were were kind of paired down it seems to me there's like a tension between" }, { "end": 2873.2000000000003, "start": 2869.44, "text": " like a good old fashioned AI system it just doesn't know enough things and you have to keep" }, { "end": 2877.28, "start": 2873.2000000000003, "text": " working hard to add facts and knowledge and and here you have the opposite problem where you have" }, { "end": 2882.7200000000003, "start": 2877.28, "text": " an LLM which actually in some ways maybe knows too much do you is that a concern at all or or not" }, { "end": 2888.88, "start": 2882.7200000000003, "text": " so much first of all we're using the general purpose like large language model mainly because" }, { "end": 2895.28, "start": 2888.88, "text": " their their scale and emergent capability and the 
 built-in knowledge in them so it will be" }, { "end": 2901.44, "start": 2895.28, "text": " difficult to shrink the model down while still keeping this" }, { "end": 2908.48, "start": 2901.44, "text": " knowledge so that would be like one key challenge for us however we do have motivation to bring" }, { "end": 2915.92, "start": 2908.48, "text": " those models down, like to kind of distill those models. One main thing is efficiency" }, { "end": 2922.8, "start": 2915.92, "text": " so currently the inference is quite heavy and we definitely want to like make it smaller so that" }, { "end": 2928.5600000000004, "start": 2922.8, "text": " we can do inference faster in terms of like the unwanted behavior I would say currently the" }, { "end": 2934.88, "start": 2928.5600000000004, "text": " SayCan decoding is quite safe because we only allow it to output like certain actions using" }, { "end": 2941.52, "start": 2934.88, "text": " the scoring mode so we don't get a lot of like undesired behavior so for us if we want to shrink" }, { "end": 2946.88, "start": 2941.52, "text": " the model down it's mainly for like efficiency purposes not for like unwanted behavior" }, { "end": 2952.8, "start": 2946.88, "text": " yeah I think in terms of specializing these general purpose models right now the" }, { "end": 2961.12, "start": 2952.8, "text": " main tool that we have for this other than affordance scoring and so on is prompting right so" }, { "end": 2966.7200000000003, "start": 2961.12, "text": " you can think of prompting as some way of specifying the task and specializing the model" }, { "end": 2972.8, "start": 2966.7200000000003, "text": " to the specific thing that you want it to do I think as we gather more data for the" }, { "end": 2977.84, "start": 2972.8, "text": " tasks that we actually care about we could also think of other ways, like fine-tuning the" }, { "end": 2984.4, "start": 2977.84, "text": " model or fine-tuning a subset of parameters, and I think there are many options that we" }, { "end": 2989.28, "start": 2984.4, "text": " could consider there to make the model a little bit more specialized than it" }, { "end": 2993.6000000000004, "start": 2989.28, "text": " could be with just prompting it so there's a line in the SayCan paper, in the conclusion, that says it is" }, { "end": 2998.4, "start": 2993.6000000000004, "text": " also interesting to examine whether natural language is the right ontology to use to program robots" }, { "end": 3004, "start": 2998.4, "text": " and I'm just observing that language models, most of them, seem pretty generic, they are only conditioned" }, { "end": 3010.64, "start": 3004, "text": " on the previous text and so it's maybe not clear how to condition them on other things" }, { "end": 3016.96, "start": 3010.64, "text": " do you see wanting to have language models that can be conditioned on other things or do you think" }, { "end": 3022.4, "start": 3016.96, "text": " the vanilla language models, whether they're distilled or not, are the right paradigm here" }, { "end": 3027.12, "start": 3022.4, "text": " any comments on that there may be two aspects to this and this may be like a little more" }, { "end": 3034.48, "start": 3027.12, "text": " philosophical so I think that the first aspect is that language just seems to be a really nice" }, { "end": 3041.2, "start": 3034.48, "text": " interface that is very interpretable for all of us but it also captures the compositionality" }, { "end": 3046,
"start": 3041.2, "text": " and the relationships between all the different tasks that we might consider the robots to do" }, { "end": 3053.7599999999998, "start": 3046.64, "text": " so I think it's just like a really nice representation that potentially can make a robot learning" }, { "end": 3059.28, "start": 3053.76, "text": " easier because as we mentioned earlier if you have two tasks that look very similar and they will" }, { "end": 3065.6800000000003, "start": 3059.28, "text": " be probably described by the same set of words and I think that's that's really useful and kind of" }, { "end": 3073.36, "start": 3065.6800000000003, "text": " for free on top of that you also get the interpretability of it and then separately I think this is what" }, { "end": 3080.32, "start": 3073.36, "text": " with your question is pointing towards I think we should be considering other modalities in" }, { "end": 3086.32, "start": 3080.32, "text": " these in these large models and how they can influence the planners and robot learning in general" }, { "end": 3092.88, "start": 3087.04, "text": " I think something like inner monologue or secratic models is just one way of doing this that is" }, { "end": 3098.88, "start": 3092.88, "text": " more practical because a lot of multimodal models have the language component so you can just" }, { "end": 3103.1200000000003, "start": 3098.88, "text": " kind of ask a vision vision language model to describe what it's using language and then that's" }, { "end": 3109.6800000000003, "start": 3103.1200000000003, "text": " the way you can incorporate it into your big language model but as these multimodal models could" }, { "end": 3115.2799999999997, "start": 3109.68, "text": " better and better I would hope that we can incorporate much more into our prompt we can" }, { "end": 3119.52, "start": 3115.2799999999997, "text": " incorporate what we currently see you know what's our confidence in the actions that we are about" }, { "end": 3127.04, "start": 3119.52, "text": " to take and so on this would be just a much richer way of specify or kind of meta programming the" }, { "end": 3132.24, "start": 3127.04, "text": " robot right so not only you can just specify I want you to help me clean something up but maybe" }, { "end": 3136.72, "start": 3132.24, "text": " you can also demonstrate something and that's also part of the of the prompt that the robot can" }, { "end": 3141.6, "start": 3136.72, "text": " then understand no understand that you wanted to this thing to be picked up in a certain way" }, { "end": 3147.4399999999996, "start": 3141.6, "text": " or something like that so I think there's much more work to be done in in this interesting" }, { "end": 3153.8399999999997, "start": 3147.4399999999996, "text": " prompt multi model prompting mechanisms that would allow us to to teach robots better so I get" }, { "end": 3159.6, "start": 3153.8399999999997, "text": " that say can is a is lab work it's not meant to be deployed in its current state but when we" }, { "end": 3164.56, "start": 3159.6, "text": " eventually get to these types of robots being deployed do you think that they may have something in" }, { "end": 3170.32, "start": 3164.56, "text": " common with say can or what do you think there's any parts of of these systems that might be long-term" }, { "end": 3175.2799999999997, "start": 3170.32, "text": " advances versus stepping stones or is there more stepping stone situation yeah that's a that's a" }, { "end": 3182.88, "start": 3175.2799999999997, "text": " good 
question I think if we think of language models as these reasoning engines that can tell us a" }, { "end": 3189.44, "start": 3182.88, "text": " lot about the the semantics and about the world in general I think probably some form of this" }, { "end": 3195.28, "start": 3189.44, "text": " is here to stay these seem to be just really really powerful models that can understand common sense" }, { "end": 3203.2000000000003, "start": 3195.28, "text": " to a certain extent and that I think is very very helpful for for robot learning and I think we'll" }, { "end": 3207.84, "start": 3203.2000000000003, "text": " see this going forward maybe that will be a slightly different kind of model that can also incorporate" }, { "end": 3214.7200000000003, "start": 3207.84, "text": " other modalities as we mentioned but I could I can imagine that some form of this some form of" }, { "end": 3220.72, "start": 3214.72, "text": " this distilled knowledge what's that can you talk about a bit about how you think about your" }, { "end": 3227.04, "start": 3220.72, "text": " future work to what extent do you plan for an advance or are you taking things more step by step" }, { "end": 3232.3999999999996, "start": 3227.04, "text": " do you re-plan all the time how do you plan your your future work yeah that's a that's a good" }, { "end": 3242.08, "start": 3232.3999999999996, "text": " question I think it depends on the individual I think for for this project I tend to split it into" }, { "end": 3249.6, "start": 3242.08, "text": " three main aspects um the this data generation we need to be able to just generate a lot of data" }, { "end": 3256.16, "start": 3249.6, "text": " with robots then the other aspect is data sponge algorithms so just find algorithms that are" }, { "end": 3261.92, "start": 3256.16, "text": " able to absorb all of the data and that's often very very tricky and we spend a lot of time there" }, { "end": 3269.12, "start": 3262.56, "text": " and then the the third aspect are just I said just modeling how do you get the models to be to be" }, { "end": 3278.96, "start": 3269.12, "text": " better and I think for for a long time the bottleneck was actually the the algorithms themselves how well" }, { "end": 3288, "start": 3278.96, "text": " they can absorb all the data so we we saw for instance in in in language that once transformers came" }, { "end": 3294.4, "start": 3288, "text": " out they were just really really good data sponges and you can kind of throw a lot of data at them" }, { "end": 3299.6800000000003, "start": 3294.4, "text": " and then you can observe this fascinating scaling was and the performance continues to improve" }, { "end": 3305.04, "start": 3299.6800000000003, "text": " and we've been trying to do to to find an equivalent of that in robotics whether it's an offline" }, { "end": 3309.6, "start": 3305.04, "text": " or algorithm or some imitation algorithm or something else something that can absorb as much data" }, { "end": 3315.6, "start": 3309.6, "text": " and as diverse data as possible I think now we are slowly getting to the point where this is no" }, { "end": 3319.84, "start": 3315.6, "text": " longer a bottleneck there is a lot of algorithms that can absorb actually quite a lot of data" }, { "end": 3327.44, "start": 3319.84, "text": " so I think we'll we'll kind of then look at the the state of things and see what is the bottleneck" }, { "end": 3335.76, "start": 3327.44, "text": " now and I suspect that it will be data generation itself so how can we develop algorithms or develop" 
}, { "end": 3343.2000000000003, "start": 3335.76, "text": " even just processes for collecting very diverse data for very diverse tasks either on the real robots" }, { "end": 3348, "start": 3343.2000000000003, "text": " or how can we incorporate human data how can we just scale up our data collection significantly" }, { "end": 3353.68, "start": 3348, "text": " are you surprised by some of the fast progress in AI lately and do you think it's going to keep" }, { "end": 3358.64, "start": 3353.68, "text": " accelerating for me I personally am really surprised that the scaling laws continue to hold" }, { "end": 3364.8, "start": 3358.64, "text": " I think I find it absolutely fascinating and I think we kind of maybe take it take it for granted" }, { "end": 3370.72, "start": 3364.8, "text": " a little bit that you know we we saw it a few times and now it's just like it's considered maybe" }, { "end": 3376.72, "start": 3370.72, "text": " boring or some people refer to it as just like pure engineering that there aren't any novel ideas" }, { "end": 3381.8399999999997, "start": 3376.72, "text": " it's just about scaling things up and I think first I think scaling things up is extremely hard" }, { "end": 3386, "start": 3381.8399999999997, "text": " and I haven't really subscribed to the notion of it's just engineering I think it's just it's" }, { "end": 3393.9199999999996, "start": 3386.8799999999997, "text": " it's really really hard and it's as much of a there's so many novelties there as much as in any" }, { "end": 3399.2, "start": 3393.9199999999996, "text": " novel research idea and I think it was just yeah it's mind blowing to me that we can make so much" }, { "end": 3407.2, "start": 3399.2, "text": " progress by pushing this one direction how do you see the kind of competitions lash cooperation" }, { "end": 3412.24, "start": 3407.2, "text": " between different labs and are the labs doing cool work you yeah there's plenty of other labs" }, { "end": 3418.08, "start": 3412.24, "text": " that do really cool work I think we pay a lot of attention to what's happening in academia and in" }, { "end": 3424.08, "start": 3418.08, "text": " other industrial labs and particularly interested in the algorithms that address problems that we" }, { "end": 3431.44, "start": 3424.08, "text": " um start noticing at scale so it's I think we get a lot of inspiration from different works that" }, { "end": 3436.08, "start": 3431.44, "text": " come out from from different labs that sometimes maybe they don't even realize that this is the" }, { "end": 3443.2, "start": 3436.08, "text": " problem that that is you know really apparent when you scale things to like many robots or many" }, { "end": 3450.48, "start": 3443.2, "text": " robots doing many different tasks and yeah these are super super useful things we also tend to work" }, { "end": 3456.16, "start": 3450.48, "text": " with with interns and student researchers and it's always refreshing when they when they come in" }, { "end": 3465.04, "start": 3456.16, "text": " and bring in all kinds of new ideas and ways to to use our system um so yeah I think we we" }, { "end": 3470.56, "start": 3465.04, "text": " draw a lot of inspiration from those what do you think of of the concept of a GI do you find that" }, { "end": 3477.12, "start": 3470.56, "text": " that idea useful to talk about or is it a distraction maybe like on a more personal level it's" }, { "end": 3483.04, "start": 3477.12, "text": " it's a little hard to think about it about a GI when your day-to-day work is 
 you know you're" }, { "end": 3488.72, "start": 3483.04, "text": " looking at the robot struggling with grasping like an apple you know on a countertop so like" }, { "end": 3494.56, "start": 3488.72, "text": " when you see how hard these things are and you know how much work it takes to actually get it to" }, { "end": 3499.6, "start": 3494.56, "text": " do like the simplest things it's kind of quite difficult to imagine you know all the steps that" }, { "end": 3504.96, "start": 3499.6, "text": " would need to be taken and how it just like at some point will progress exponentially" }, { "end": 3512, "start": 3504.96, "text": " from my side I like to be like more grounded and just to make solid progress towards robot" }, { "end": 3518.48, "start": 3512, "text": " capability so I haven't been thinking about AGI too much however I do think uh when people" }, { "end": 3525.44, "start": 3518.48, "text": " discuss AGI they also think about like ethics and safety and I think those are good for us to" }, { "end": 3530.7200000000003, "start": 3525.44, "text": " think about early on, when we start to build those methods we also take safety and" }, { "end": 3537.2, "start": 3530.72, "text": " ethics into consideration and I think like down the road when we have more powerful models we are" }, { "end": 3546, "start": 3537.2, "text": " uh sort of safe in that regard makes sense and it seems that, I mean, there's been such" }, { "end": 3551.3599999999997, "start": 3546, "text": " great progress in terms of the language models being able to write these big essays the" }, { "end": 3558.48, "start": 3552.24, "text": " image models being able to generate incredible art and then there's kind of a gap between that" }, { "end": 3563.44, "start": 3558.48, "text": " and what we see in robotics are we waiting for something, maybe it's the data sponge that you were" }, { "end": 3571.76, "start": 3563.44, "text": " talking about or the data generation, Karol, but are we waiting for some advance that can lead to" }, { "end": 3578.08, "start": 3571.76, "text": " some sort of ImageNet moment for robotics, is that ahead or is that behind us there's been a few" }, { "end": 3584.48, "start": 3578.08, "text": " moments that were significant I think in robot learning but I don't think we've had the ImageNet" }, { "end": 3591.12, "start": 3584.48, "text": " moment yet I think one of the underlying maybe hopes behind something like" }, { "end": 3596.2400000000002, "start": 3591.12, "text": " SayCan was to kind of attach ourselves a little bit more towards the progress that is happening in" }, { "end": 3601.68, "start": 3596.2400000000002, "text": " other fields right so if we find some way of having language models improve robotics" }, { "end": 3607.04, "start": 3601.68, "text": " then as they improve robotics will improve as well or the same with multimodal models and so on" }, { "end": 3614, "start": 3607.04, "text": " as shown in inner monologue but I think um in terms of the low level skills I think these are still" }, { "end": 3621.04, "start": 3614, "text": " the early days we're I think quite bottlenecked by the data available to us there" }, { "end": 3627.28, "start": 3621.04, "text": " isn't that much data out there of robots doing many different things you know nice data sets of" }, { "end": 3635.84, "start": 3627.28, "text": " just real robots doing diverse sorts of tasks so that's another struggle that we have to" }, {
"end": 3641.92, "start": 3635.84, "text": " incorporate in all of this but I think we're making decent progress there but yeah I think the" }, { "end": 3646.88, "start": 3641.92, "text": " the bigger breakthroughs are still in front of us is there anything else I should have asked you" }, { "end": 3651.52, "start": 3646.88, "text": " about today or anything else you want to share with our audience I guess I would just briefly" }, { "end": 3658.32, "start": 3651.52, "text": " mention that it's really inspiring to see that the progress of the natural language processing" }, { "end": 3662.96, "start": 3658.32, "text": " kind of trickle down into robotics and start to solve some of the robotics problem for us" }, { "end": 3671.6, "start": 3663.6, "text": " in general I think this more interdisciplinary researching AI is super super exciting and we cannot" }, { "end": 3676.96, "start": 3671.6, "text": " wait to see more of that coming into robotics yeah I fully agree I think this" }, { "end": 3684, "start": 3678.16, "text": " unification I think it was really hard to think anything like this even a few years back" }, { "end": 3692.88, "start": 3685.36, "text": " that you know some improvement that you can make to an architecture can you know improve" }, { "end": 3699.52, "start": 3692.88, "text": " robotics and vision and language and and all of these things so it's I'm on how it's super" }, { "end": 3704.64, "start": 3699.52, "text": " exciting to see something like this that we were kind of all pushing in in one direction and we" }, { "end": 3711.44, "start": 3704.64, "text": " can all benefit from each other and even for us specifically at Google we are you know closely" }, { "end": 3718.56, "start": 3711.44, "text": " collaborating with with language folks with language researchers and it's just very cool to have" }, { "end": 3727.7599999999998, "start": 3718.56, "text": " this you know interdisciplinary team where we can all push in in a single direction I think on" }, { "end": 3736.6400000000003, "start": 3727.76, "text": " the other hand that's also important especially for academic labs to you know don't jump on the" }, { "end": 3742.6400000000003, "start": 3737.44, "text": " on the height train and maybe like either there is something that that you're really passionate about" }, { "end": 3749.2000000000003, "start": 3742.6400000000003, "text": " and you know something that that you believe will improve robotics or robot learning" }, { "end": 3754.4, "start": 3750.1600000000003, "text": " whatever you're interested in I think it's important to keep pushing on that as well" }, { "end": 3760.56, "start": 3754.4, "text": " I'm a little worried that we'll lose a little bit of this diversity in terms of research ideas" }, { "end": 3767.12, "start": 3760.56, "text": " that are out there because of this unification so I think it's important to keep both but yes" }, { "end": 3772.1600000000003, "start": 3767.12, "text": " it's super exciting well I want to thank you both so much for joining us today and taking the time" }, { "end": 3777.28, "start": 3772.1600000000003, "text": " to share your insight with the talk our audience thank you so much Fesha thanks thanks for" }, { "end": 3789.6800000000003, "start": 3777.28, "text": " our invitation and thank you Carol Haussman thank you thanks for having us" } ]
Sai Krishna Gottipati
Sai Krishna Gottipati of AI Redefined on RL for synthesizable drug discovery, Multi-Teacher Self-Play, Cogment framework for realtime multi-actor RL, AI + Chess, and m...
https://media.transistor…80a.mp3?src=site
TalkRL podcast is all reinforcement learning, all the time, featuring brilliant guests, both research and applied. Join the conversation on Twitter at TalkRL podcast. I'm your host, Robin Chohan. I have a brief message from Anyscale, our sponsor for this episode. Reinforcement learning is gaining traction as a complementary approach to supervised learning, with applications ranging from recommender systems to games to production planning. So don't miss Ray Summit, the annual user conference for the Ray open source project, where you can hear how teams at Dow, Verizon, Riot Games and more are solving their RL challenges with RLlib. That's the Ray ecosystem's open source library for RL. Ray Summit is happening August 23rd and 24th in San Francisco. You can register at raysummit.org and use the code RaySummit22RL for a further 25% off the already reduced prices of 100 bucks for Keynotes only or 150 to add a tutorial from Sven. These prices are for the first 25 people to register. Now I can say from personal experience, I've used Ray's RLlib and I have recommended it for consulting clients. It's easy to get started with, but it's also highly scalable and supports a variety of advanced algorithms and settings. Now on to our episode. Sai Krishna Gottipati is an RL researcher at AI Redefined, working on RL, multi-agent RL, human-in-the-loop learning, and he developed the world's first RL based algorithm for synthesizable drug discovery. As a master's student at Mila, he worked on RL for active localization, board games and character generation. Sai is also an international master in chess. Sai, thanks for joining us today. Thanks for inviting me. Can you tell us a bit more about your current focus areas? Currently I'm working mostly on multi-agent RL, human-in-the-loop learning and some of its applications in some industrial settings. Can you say a bit about the settings? Sure, I think it mostly relates to our product, which is Cogment. It's a multi-actor framework. So by actor, I mean the actor could be an AI agent or a human agent or a heuristic agent and so on. So it's especially useful in very complex ecosystems where multiple actors are acting. So we have multiple industrial clients that are using Cogment for their products. For example, we are working on a project with Thales for airport security, where the idea is to defend the airport from incoming drones or any other objects, and we have two teams of drones. One is the team defending the airport and the other is the one that's trying to attack the airport. So as a defense team, we need to develop sophisticated algorithms for the different kinds of attacks, and this is where the teams within the defense team should learn to collaborate with each other and simultaneously launch an attack against the offenders. Yeah, that's one of the applications, for example. Wow, okay. So not shying away from the hard problems in multi-agent RL or the safety critical settings either. Yeah. That's amazing. So I can hear more about Cogment later on in our chat and I look forward to that. So just looking through your background, it seems like you've had a few different chapters in your research career. I see early papers on SLAM and drug discovery and then computer vision and RL fundamentals. Can you give us a brief idea of how you got to this point? During my undergrad as part of my honors project, I started in the Robotics Research Center. I think it's now called the IIIT Robotics Lab.
So at that time, the lab was working on this Mahindra Rise Challenge, which is to design an autonomous car for Indian roads. So I started working on various computer vision problems like road sign and traffic signal detection and recognition, potholes and speed breakers recognition and so on. That sounds challenging. I'll just say I learned to drive in New Delhi and the roads there are quite something. If you want to test your autonomous driving, that's got to be a really good place for a lot of tail events. Yeah, I think Hyderabad roads are even more challenging. So yeah, at that time, so this was back in 2015 or 2016. So at that time, I was still mostly using traditional computer vision techniques, but that's also the time when I slowly got introduced to deep learning. Yeah, and then I started using deep learning first for the recognition part and then also for the detection part. At the same time, I got an opportunity to work on this research project as well, which is the 3D reconstruction of the vehicles. I worked on a very small part of the project at the time, which is the key point localization. That's when I got introduced to many of the deep learning frameworks at that time. I think PyTorch wasn't even released or not that mature at that time. I was dealing with Caffe at that time, and before Caffe I was doing deep learning in MATLAB, so those were fun times. Yeah, towards the end of my undergrad, I got an admit at Mila to work with Liam Paull, in that robotics lab basically, and the project I'd be working on wasn't decided. And I thought I would continue working on some fun and challenging robotics problems. And yeah, I explored a lot of different problems in localization, SLAM and so on. And I finally got to work on the problem of active localization. And yeah, I initially tried out the traditional methods for active localization and soon realized that reinforcement learning is a very good fit for this problem. So I started using reinforcement learning for active localization and that's how I got into reinforcement learning. At the same time, yeah, I think this was the beginning of 2018, I was also taking the reinforcement learning course at McGill where I got to work on some interesting assignments and projects. And yeah, after graduating from Mila, I started working in a drug discovery company where again reinforcement learning is a very good fit for the problem I was working on then. And now I'm at AI Redefined, where I think I now find multi-agent RL and human-in-the-loop learning more challenging and interesting problems to work on. What would you say are the holy grail problems or the long term goals in your current focus areas? I think human-in-the-loop learning is at very early stages. We don't even have proper benchmarks at this point. For example, in reinforcement learning, Atari is kind of considered to be a good environment to test different algorithms on. But we don't have any such ideal environment to test human-in-the-loop learning on. I mean, even the metrics that we have to optimize aren't very clear, because it's not just about maximizing a particular reward. We also need to care about the trust factor. And I think a first very good challenge is to develop these benchmarks, develop the environments and try to optimize for these different metrics. And yeah, I think in 10 years we would have very good complex ecosystems where humans and AI agents can learn from each other, trust each other and cooperate with each other.
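To make that benchmarking point concrete, here is a minimal, purely illustrative sketch of an evaluation loop for human-in-the-loop RL that tracks more than raw return, for example how often the human had to step in; it is not from any existing benchmark, and the gym-style env, agent, and human_override callable are hypothetical placeholders.

```python
# Hypothetical sketch: evaluating a human-in-the-loop agent on more than raw return.
# `env`, `agent`, and `human_override` are placeholders, not part of any real benchmark.

def evaluate_episode(env, agent, human_override):
    obs = env.reset()
    done = False
    episode_return = 0.0
    interventions = 0
    steps = 0
    while not done:
        action = agent.act(obs)
        human_action = human_override(obs, action)  # returns None if the human does not intervene
        if human_action is not None:
            action = human_action
            interventions += 1
        obs, reward, done, info = env.step(action)
        episode_return += reward
        steps += 1
    # Report task performance alongside a crude proxy for "trust":
    # how much human effort was needed to keep the agent on track.
    return {"return": episode_return, "intervention_rate": interventions / max(steps, 1)}
```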
Yeah, I mean, the whole idea of a benchmark for human in the loop seems so difficult to execute. Like how many times can you run it? How much human time is it going to take? How replicable would the results be with different humans? Exactly. Do you feel like there's progress being made on that question, or are people kind of kicking the can down the road a bit? I've seen some multi-agent RL papers will focus on how other RL agents or other types of automated agents will respond or react. But it doesn't seem like there's any clear way to automate a human response. I mean, the whole point is that the human responds very differently than any machine ever would. So how could you ever put that in a loop in terms of, like, running huge amounts of hyperparameter sweeps or anything like that? Yeah, that is a very challenging question. And we are kind of working on a small part of that right now on the Hanabi project, where we are trying to have humans play, some multiple humans playing with other agents, and train agents in such a way that they can learn to collaborate with all the humans. Okay. And then we're going to talk about that Hanabi paper in a few minutes. So I just saw an announcement a few days ago that Mila, the research institute in Montreal, and your employer, AI Redefined, have a partnership. Can you say a bit more about AI Redefined and its mission and the partnership with Mila? And what stage is AI Redefined at with what sounds like really ambitious work? Yeah, so AI Redefined started out around 2017. It's based in Montreal and its mission, I think in my own words, is to develop complex ecosystems where humans and AI agents can, as I was saying, learn from each other, collaborate with each other and trust each other. Yeah, so I think that's the grand goal that we have. And we are kind of working on multiple projects with Mila researchers, for example, the one with Professor Sarath Chandar's group on Hanabi. And we are looking forward to working on more such projects with other Mila researchers as well and testing out the actual potential of Cogment. Awesome. Okay. So let's talk about the Cogment paper. That is Cogment, open source framework for distributed multi-actor training, deployment and operations. And that's with authors from AI Redefined and yourself as well as co-authors. So you said a little bit about Cogment. So it's really about multi-agent systems, and is it really about learning in real time or inference in real time, or what is it? Tell us more about the setting that Cogment is best for. Yeah, I wouldn't call it a multi-agent system. It's more of a multi-actor system. As I was saying, an actor could be an AI agent or a human or a heuristic agent or basically any other actor. It can be used for even normal simple single agent reinforcement learning algorithms, but then I guess you won't see any advantages compared to the other existing frameworks. Where Cogment really shines is in these multi-actor systems, because you can have multiple actors simultaneously acting in an environment, doing very different things. For example, imagine all the ways a human can interact with an AI agent. An AI agent can reward a human at every time step or vice versa. And similarly, one agent can set a curriculum and the other agent can follow the curriculum, or even simpler algorithms like behavior cloning. So these are all different ways in which a human can interact with the AI agent. Cogment is really suited for all these kinds of use cases.
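As a rough illustration of the multi-actor idea (to be clear, this is a hypothetical sketch and not the actual Cogment API), one can picture AI, heuristic, and human participants as interchangeable actors behind a common interface, with the environment stepped by whatever mix of actors is registered for a trial:

```python
# Hypothetical sketch of the multi-actor idea; this is NOT the real Cogment API.

class Actor:
    def act(self, observation):
        raise NotImplementedError

class PolicyActor(Actor):
    def __init__(self, policy):
        self.policy = policy  # e.g. a trained neural network policy
    def act(self, observation):
        return self.policy(observation)

class HeuristicActor(Actor):
    def __init__(self, rule):
        self.rule = rule  # e.g. a hand-coded controller
    def act(self, observation):
        return self.rule(observation)

class HumanActor(Actor):
    def __init__(self, ask_human):
        self.ask_human = ask_human  # e.g. a UI callback; may return a no-op action
    def act(self, observation):
        return self.ask_human(observation)

def run_trial(env, actors):
    """Steps a multi-actor environment where each registered actor contributes an action."""
    observations = env.reset()
    done = False
    while not done:
        actions = {name: actor.act(observations[name]) for name, actor in actors.items()}
        observations, rewards, done, info = env.step(actions)
```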
For example, one simple demonstration would be in the case of a simple gym environment like Lunar Lander, where an AI agent with its randomly initialized policy starts playing the game and a human can intervene at any time step in the middle of the episode. And the AI agent can learn both from its own samples and from the human interventions. So instead of continuously interacting with the agent, the human can just sit back and relax and only intervene when they think the agent is acting very stupidly. And I think this is one of the efficient uses of human time. One of the projects was on this airport security that we are working on with Thales and Matthew Taylor and Matthew Kustel from the University of Alberta. The other collaboration we are having is with Confiance.ai, which is, I think, kind of a consortium of multiple French industries and labs. And we are working on this specific case of hyperparameter optimization guided by humans. Yeah, so basically allowing the humans to explore the space of hyperparameters so that they can end up with the final optimized parameters that they want. One other interesting project is with this major player in training simulation. I think I can't reveal the name. But the project is basically on air traffic controller and pilot training, where you have multiple aerial vehicles that are queued to land at different landing spots or landing destinations, and then you receive an emergency request from a different pilot. And so how should this ATC react so that they can reroute the existing aerial vehicles and also help this new training pilot land safely? We also have this other collaboration with a renewable energy company where the goal is to basically manage the energy grid, or to decide when to sell or store the energy in the grid. It's basically an optimization problem with RL, but we could however have a human in the loop with an operator actually controlling the decisions. And you can also have different kinds of risk profiles that are controlled by the humans. So how do you think about the use of AI and RL in safety critical situations? It seems like especially with the air traffic controller case, I guess, and in the power case too. Yeah, so I think it's important to have a human in the loop and kind of have the human have a final say in these systems. And yeah, that's kind of the primary focus here as well. Okay, but you think the human and the AI together can make better decisions and safer decisions than the human on its own? Is that the goal here? Yeah, exactly. I mean, there is some complex planning that needs to be done, which in time critical situations a human might not be able to do. So the agent will do all that hard work very quickly and then it will suggest what it thinks is the best action. And if it seems like a sensible action that is not dangerous, then the human can approve that action. Basically, based on the human approval or disapproval, the agent can also learn further from these kinds of feedback. So it would be a continually learning system as well. Is it very general or is it more focused on on-policy, off-policy, offline? Is it like model-based, model-free, all of the above, or is it kind of focused on certain aspects of it, on the RL side? It's all of the above. So especially, we have this thing called retroactive rewards, where even the rewards can be given much later than when the time steps of the episode actually happened. So this gives rise to a wide range of applications as well.
For example, when an AI agent is acting in an environment, a human might not be as quick to give the reward, right? So it's useful in those cases. And what stage is Cogment at, and is it built on other tools, or is it kind of a greenfield project? Are you extending something here or is it really starting from scratch? It's mostly a greenfield project. It's based on microservice architecture. I think that's just the concept of microservice architecture. There are multiple versions of Cogment. I think the first version came out about one and a half years ago or something, and we recently released Cogment 2.0, which is more academic oriented and which is more friendly to the researchers as well. And on top of Cogment, we released something called Cogment Verse as well, which is a collection of a bunch of reinforcement learning agents and environments, like simple gym environments, PettingZoo, procedural generation environments and so on, so that it would be easy for any actual academic researcher to get started and do a couple of experiments with Cogment. I guess in the case where a human takes over, are you labeling those samples as expert demonstrations or are they considered differently? Yes, they can be stored in a different replay buffer or they can be stored in the same replay buffer. It depends on how we code it. What is your role in the Cogment project? I'm mostly developing on Cogment Verse, which is implementing and benchmarking different reinforcement learning or multi-agent algorithms with different kinds of environments. And then we also use Cogment for all of our ongoing research projects. Cool. Do you want to move on to the asymmetric self-play paper? Yeah. So I think this is a paper from OpenAI that you weren't a co-author on, but you found it interesting for our discussion. I think the idea here is to solve goal-conditioned reinforcement learning. Usually, it's a very sparse reward problem and hence it's a very challenging task to solve. So what these guys do is they introduce a new kind of agent; they call them Alice and Bob. So Alice being like a teacher agent that acts out in the environment and reaches a particular state. And then the Bob agent is supposed to reach that state reached by Alice. This way the problem of sparsity can be kind of overcome. So this paper was asymmetric self-play for automatic goal discovery in robotic manipulation, with authors from OpenAI, Matthias Plappert et al. So why, fundamentally, why do you think that splitting the learning problem in this way, using two separate agents, why is that somehow better? We see different algorithms that split the learning problem in this type of way or in related ways. Why is that somehow better? Is there some reason why it should make sense to do that? It's almost like they're setting up a game, right? Yeah, so if a single agent is acting out in the environment, the reward is very sparse, especially in a goal-conditioned environment. So I'm thinking of a robotic manipulation task where the end locations have to exactly match. Yeah, maybe even after 100 time steps, you might not be able to reach that location. And it's hard for any typical RL algorithm to learn from such kind of sparse rewards. So introducing this new agent will encourage exploration. It will encourage the first teacher agent, or the Alice agent, to go to the places it hasn't been to before, because if it's revolving around the same area, then the Bob agent can reach those locations and the teacher will be negatively rewarded.
So the teacher is always incentivized to explore more and consequently the student is incentivized to follow the teacher. I think this way the exploration is much faster and at the end of the day the agent can generalize much better even to the unseen goals. So but why do you think that is, that it works better with two agents? Like you can imagine another formulation where we just had one agent and we said, okay, we're going to give some curiosity, intrinsic curiosity, to this agent and it's going to do its best to explore everywhere, and then we're going to do some kind of hindsight replay thing to say, we'll just pretend you were trying to find these goals. It seems like that maybe could work as well, or why do you think this is better this way? Yeah, those could work as well, but I think one kind of issue or challenge I see with these intrinsic reward based methods or information-theoretic based rewards, curiosity based rewards and so on is they don't necessarily align with your actual goal. You're especially incentivizing the agent to just increase its curiosity or optimize some kind of information-theoretic metric which might not be relevant to your actual goal of solving a goal-conditioned problem. But on the other hand, this teacher student approach is kind of incentivizing the agent to reach a wide range of goals in a much quicker fashion. So the training procedure is closer to the test time procedure. It seems like the teacher here is training for the similar behavior that we actually want to see. Yeah. Right, so if maybe it's just using some kind of noisy exploration, then it's not going to be really optimized for quickly getting to any specific goal, because it never behaved that way really during training time. Yeah, correct. Yeah. All right, well, anything else you want to say about this paper? I think we've seen that general idea show up a lot of times in terms of goal selection and a separate agent trying to reach that goal as a strategy for self-play. Yeah, so I think one other interesting thing they did in this paper is add a behavior cloning loss to the student training. So usually we have seen multiple approaches before where we have a goal generating agent and another agent that's trying to reach the goal, but these goal generating agents are usually some VAEs or GANs and so on. But in the case of this asymmetric self-play paper, the teacher agent also actually acts in the environment and reaches that position. What that means for the student agent is, in case the student finds the goal too hard to reach, then the student can actually learn from the behavior cloning of the teacher. I think that really helped in much faster training. But do we have a chicken and egg problem? Like, how does the teacher know how to get there? I actually didn't follow that part. How does the teacher know how to get there? So initially the teacher moves completely randomly. So both the teacher and the student agent start out completely randomly. But once the teacher gets to a certain location, and if the student fails to reach there the first time, then it's good, the teacher agent gets rewarded. In the second episode as well, if the teacher reaches the same spot, but now the student has learned how to reach that place, so the student reaches that goal and the teacher will be negatively rewarded. So now the teacher realizes that okay, the student can reach these goals. Now I should further expand my space, and it's incentivized to explore more.
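A minimal sketch of that Alice-and-Bob loop, under my own simplifying assumptions (a generic goal-conditioned env with assumed get_state and goal_reached helpers, stand-in alice and bob agents, and binary rewards rather than the paper's exact shaping), might look like this, including the behavior cloning fallback on Alice's trajectory when Bob fails:

```python
# Hedged sketch of asymmetric self-play (Alice sets goals, Bob chases them).
# `env`, `alice`, `bob`, the horizons and the binary rewards are illustrative assumptions,
# not the exact setup in the OpenAI paper.

def asymmetric_selfplay_episode(env, alice, bob, alice_horizon=50, bob_horizon=50):
    # Phase 1: Alice acts freely; her final state becomes the goal.
    obs = env.reset()
    alice_trajectory = []
    for _ in range(alice_horizon):
        action = alice.act(obs)
        next_obs, _, done, _ = env.step(action)
        alice_trajectory.append((obs, action))
        obs = next_obs
        if done:
            break
    goal = env.get_state()  # assumed helper returning the state Alice ended in

    # Phase 2: Bob starts over and tries to reach Alice's final state.
    obs = env.reset()
    bob_succeeded = False
    for _ in range(bob_horizon):
        action = bob.act(obs, goal)
        obs, _, done, _ = env.step(action)
        if env.goal_reached(goal):  # assumed binary success test
            bob_succeeded = True
            break

    # Zero-sum style incentives: Alice is paid for goals Bob cannot yet reach.
    alice.update(reward=0.0 if bob_succeeded else 1.0)
    bob.update(reward=1.0 if bob_succeeded else 0.0)

    # If Bob failed, fall back to behavior cloning on Alice's demonstration of the goal.
    if not bob_succeeded:
        bob.behavior_cloning_update(alice_trajectory, goal)
```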
So what kind of settings do you think are most suitable for this? I'm thinking of a real world application in the context of industrial robots. For example, in kitchen robots or in some factory settings and so on, those manipulator arms have to be trained to reach different kinds of poses. So I think during their training phase, it's ideal if they were trained in this manner. We have one teacher agent trying to reach multiple locations, but we could also have multiple student agents trying to reach the same goal pose. Okay. Do you think this really makes sense in simulation and then using sim-to-real, or like literally doing all of this in the real world? Yeah, I think that's always a complex question. It depends on the specifics, but yeah, doing it in the simulation first and then sim-to-real, that should work. Okay, so let's move on to a paper that you have currently under review at a conference. And I won't say the conference name, but it's a big well known conference. The paper is called do as you teach, a multi-teacher approach to self-play in deep reinforcement learning. So can you give us a basic idea of what's going on in this paper, Sai? Yeah, so we have seen this asymmetric self-play paper and we implemented it, and then we noticed that it's working well, but not as good as we expected. So then we were thinking of what kind of improvements we can make to that. And one issue we noticed is that there is kind of a lack of diversity in how the teacher is setting the goals. It is exploring, but it is kind of exploring mostly in one direction, considering a grid world example. The teacher is setting goals, they're still challenging goals, but it's setting goals in only one direction. So I think, yeah, that's the basis for our approach. So we believe that we need multiple teachers to set diverse goals, and that could also help in faster learning of the student agent and also better generalization. And where does the stochasticity come from, the randomness in the teachers? It's random initialization of the networks, and then they all act differently because they are incentivized based on whether the student has reached the goal or not. You could get away with one teacher if the distribution was what you wanted, but you're saying you don't get your distribution from one. And that's because, so I just wonder what the other approach would be like, is there some way to fix, is there any alternative to fix the distribution? Because I think what we're saying is the distribution from any one teacher is just not distributed, basically, not evenly distributed. So is there some way to make it evenly distributed, or is there just no way and this multi-teacher idea is kind of an approach to overcome that problem? I mean, we thought of other approaches, for example, adding a diversity specific metric and so on, but I think they are really dependent on the environment or particular task at hand and not really generic, general algorithms. And I think there are some other ways you could do it. For example, adding goals to the replay buffer that are only diverse. So you let the teacher agent generate all these goals, but store only those goals in the replay buffer that are explicitly different from the goals that are already stored. But these are also computationally expensive. And how do you consider a difference between goals?
Like do you have some idea of distance between the goals, is that in terms of steps to get there, or how do you think of difference between goals? That's another challenge actually. You don't have any specific metric or distance between goals. If you're acting in a grid world, then it's clear. But again, it's usually specific to the environment you're acting in, which is why I think this multi-teacher approach is very general. It's not computationally intensive and it gives much better results. And it also shows that we are actually generating much more diverse goals. And are some of the teachers, like, other teachers competing among themselves too? Like are there kind of losing teachers and winning teachers? It's possible that a particular teacher can always get stuck in some kind of local minima. You have this danger especially in the case of a single teacher, right? It's always possible that it can always get stuck somewhere, but using multiple teachers kind of solves this issue as well. It also depends on the complexity of the environment. So if the environment is not complex enough, there is no point in having multiple teachers, because all the teachers would be generating goals around the same region, where the student had already reached that region and the teachers are not getting incentivized anymore. Well, I love the concept and I love the parallel to the real world. I think of every guest on the show as a teacher to me. I learn from every guest. And it's great to have multiple teachers because every teacher has their own distribution of areas that they are more interested in. And so to get a diverse scope is actually a really nice treat. So in this case, in this paper, there's students, teachers, and I think there's also intern agents. Can you tell us about that? What is the intern about? What are the roles? Once the teacher agents generate these goals and the student learns from those goals, we also wanted to see if these generated goals are of use at all. So we started calling this new agent the intern agent. So the intern doesn't have access to the teacher's trajectories. They only have access to the teacher's goals. Essentially, they can't use something like a behavior cloning loss or other imitation learning methods. The only way they are allowed to learn is based on this curriculum of goals. And we have observed that this curriculum of goals set by the teachers is much better compared to a random set of goals. And also, if you increase the number of teachers, the diversity of the goals generated increases and also it helps the intern learn much faster. I think you can also kind of draw the real life parallel to this one as well. That even if you don't have access to the complete lecture, but if you just have access to some references and so on, you could still learn from those references. But those references have to be accurate and useful and not just arbitrary. So this reminds me of something I love talking about, which is the PAIRED system. It's a way of curriculum design. So is there something similar to PAIRED going on here, or can you talk about the relationship between those two ideas? Yeah, they're very related. So our work can be kind of seen as a specific instance of the broader problem of these emergent ecosystems where you have one agent, let's call it a teacher agent, that's generating increasingly complex environments, and the actual reinforcement learning agent that has to solve whatever environment the teacher agent throws at it.
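To make the multi-teacher and intern setup concrete, here is a rough sketch under my own assumptions (independent teacher networks that differ only by random initialization, a student trained on all of their goals, and an intern trained on the goal curriculum alone); it illustrates the structure described above rather than reproducing the paper's training code:

```python
# Illustrative sketch of multi-teacher self-play plus the "intern" experiment.
# The teachers, student, intern and env are assumed interfaces, not the paper's actual code.

def multi_teacher_round(env, teachers, student, intern):
    goal_curriculum = []
    for teacher in teachers:  # teachers differ only by their random initialization
        goal, teacher_traj = teacher.rollout(env)          # teacher acts and ends at a goal state
        reached = student.attempt(env, goal)               # student tries to reach that goal
        teacher.update(reward=0.0 if reached else 1.0)     # teacher is rewarded for goals the student misses
        student.update(goal=goal, success=reached, demo=teacher_traj)
        goal_curriculum.append(goal)

    # The intern never sees the teacher trajectories, only the goals themselves,
    # which tests whether the generated curriculum is useful on its own.
    for goal in goal_curriculum:
        reached = intern.attempt(env, goal)
        intern.update(goal=goal, success=reached)
    return goal_curriculum
```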
So we can see kind of this goal generating teacher and the student agent as a specific instance of that, where instead of generating these complex environments, we are only restricting the generation to goals inside a specific environment. All those algorithms that are applicable in those emergent ecosystems are applicable here as well, broadly speaking. For example, I have seen approaches that use, I think, evolutionary search or genetic algorithms for these kinds of teacher agents. Can you represent these goals? Are they just states that you show the agent, that you want it to get into that state, or how do you represent the goal? Yeah, so we have tried this approach on two environments. One is Fetch and the other is a custom driving simulator. Yeah, in both the cases, we represent the position as X, Y, and yeah, we could try other things, for example a bitmap representation if it's a grid world kind of setting. So states as opposed to, not like observations. Like are these the robot arms, I think you are talking about a robot arm setting. Is that right? A simple gym version of that. And so in that case, is it using proprioceptive observations, that's like the state variables of the positions and angles of the arms, or is it more an observation like an image of the outside of the robot, or how does that work? No, it's not an image. The goal would just be included as the goal position that the arm has to reach, like X, Y. The actual state is the different positions or the velocities of the hand. I see. So what does the intern add? Is the intern like an additional experiment or does it actually make the learning better? It doesn't add to the actual student teacher training. It's an additional experiment to show the utility of the goals generated by the teachers. So what kind of problems are best suited for this type of approach do you think? So we are essentially solving goal-conditioned RL here. There are a wide variety of applications for goal-conditioned RL, I think, as we were discussing, these industrial manipulator robots or even the medical robots and so on. Cool. Okay. Do you want to move to the next paper here? Continuous coordination. So this paper is from ICML 2021. Continuous coordination as a realistic scenario for lifelong learning. And this is Nekoei as author plus co-authors. No, I wasn't involved when the paper was being published. So this is something I believe could be a good setup for testing the capabilities of Cogment. So in their paper, they established this lifelong learning setup with multiple agents, and we are currently working with these authors to have humans in the loop, to have human agents learn to cooperate with the AI agents and vice versa. So Hanabi is a quite unusual game and I think that's why it comes up in these settings. It has some very unusual properties. Can you talk about Hanabi and why it's a good candidate? Yeah, it's a very challenging multiplayer, like two to five players, cooperative card game. So if humans actually play the game for the first time, they would never win. I myself played the game multiple times, and every time a player changes, your entire strategy changes and you kind of have to start everything from the beginning, because the players really need to establish some kind of implicit connection or strategy of what they're doing. So the game is basically, every player can see every other player's cards except his own cards, and at every time step you can choose to do multiple actions.
The final goal is, so the cards are basically numbered one to five and they're colored, and the goal is to play the cards in such a way that they're arranged in increasing order from one to five across all colors. So yeah, that's a very challenging thing to do, and you could choose to give out hints to other players, or you can choose to drop the card, or you can choose to play the card. There are a very limited number of information tokens, so you can't keep giving hints forever. There are a very limited number of hints that you could give. So I mean many games, especially card games, have partial information as part of the game, and then we have that here too of course. Why is it different here? What makes the partial information here different than, say, Blackjack or any other game we might play? I think the cooperative aspect is the important one here. The goal is for all the players to play collectively so that they could either all win or all lose. And so this acts like a good benchmark for teaching agents to collaborate with each other, or bringing humans in the loop and teaching agents to cooperate or collaborate with humans. So that is unusual. I think most card games are about each person winning and not collaborating, it's more competitive. I guess there's games like Bridge with their teams, but this idea of all being on the same team but missing this crucial information is really interesting. It also seems to me a bit artificial in the sense that this game is only fun because you can't say, hey, Sai, you're carrying a yellow two and a red three. I'm not allowed to say that to you, right? It's part of the rules of the game. But as humans, that's trivial. It's a strange situation because normally we could just say our communication is so good, we could just easily clear up the situation and win together. And so somehow this game has added this artificial constraint: you cannot communicate, you have to really limit your communication bandwidth. Couldn't we short circuit the whole challenge just by letting communication flow freely, or no? No, because in realistic settings, you can of course communicate in natural language, but I think that adds a whole lot of complexity. And at this point, or at the current state of research in NLP, I don't think we can trust the systems too well. So I think that's why it's important to constrain what the agents are allowed to communicate at this point, but given these limited communication capabilities that we are perfectly safe with, can they learn useful cooperative behaviors? That's a very good challenge to have. I mean, we don't have to constrain the agents to speak in natural language. Like maybe they exchange a vector or something, a learned vector. They could do a learn-to-communicate type thing, but that would be against the rules as well, right? If they're exchanging vectors with each other, then Hanabi doesn't work. Yeah, I mean, I think the point of this is to see how well they can learn to cooperate. It's to have a challenging cooperative setting. You can of course change the rules and make it easy, but I think that won't be challenging. So I can explain the concept of this paper. So what these guys do is they first train a bunch of self-play agents, around 100 of them, so that they can get better by playing with themselves.
And then they sample, randomly sample, a few of these trained agents and make them play with each other so that they can learn the cooperative behaviors. And then in the final test phase, they again sample a bunch of agents that weren't chosen before, or that did not play with each other before. And then they make them play with each other and then see how well it works in the context of zero-shot coordination. So what we are currently trying to do, or extending this work, is to have a human agent play with these bunch of trained agents. And this is not just a challenge for the AI agent, but it's also a challenge for the bunch of human agents to learn to cooperate with these trained agents. As the trained agents keep changing, it's also important to continuously adapt to your new opponents, but also remember how you have performed with your old partners, not opponents, but partners. And we saw things like population-based training, which I think was used in StarCraft, where there were many types of strategies, and to nail things down, to keep things from sliding all over in strategy space, they had to nail things down by having all these fixed agents and then you keep growing the population. So it seems like this approach has some things in common with that. Although I think they went a little further maybe with the population-based training in terms of really keeping track of which agents were dominating which, and really focusing on training on the agents that were still a challenge, so that they could get the best ranks possible, to be efficient with that. So I wonder, is this the same type of setting? Like would population-based training also be applicable here? Is this kind of an alternative to that? Or how do you see the relationship between those two things? Yeah, basically the approaches that were used there can be used here as well. I think Hanabi is basically kind of a simpler version of the game where we don't have any of those additional complexities of, let's say, vision or other kinds of representations. The representation here is simple and the core task here is just to learn the abilities of cooperation. You know, these types of games require a really good memory. Is that true? That was Stratego, actually. Someone was saying that about Stratego, which is another game with a lot of partial information, and the idea and the comment was about the fact that, well, computers can trivially memorize any amount of data, so does that make these games less interesting for testing algorithms on, because the computer can just remember every hint? Whereas for a human, they might start losing track of the hints over time. Is that a factor here or not, not so much? So Stratego is basically a two-player game, right? So one could always kind of try to memorize most of the things, whereas in the case of Hanabi, you have the agents as well that are changing, so it's not trivial to memorize. Of course, in the context of self-play, it's easy to memorize, like if the agent is playing with itself, which is happening in the first phase of the training, then it's easy to learn. But again, that is being challenged by the next phase of training where these agents are made to play alongside other agents. And I think this is where the ability of Cogment also really shines, where you have multiple actors acting out in the environment, where these actors can either be the trained agents or the human agents, right?
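A rough sketch of that evaluation protocol, with placeholder interfaces of my own rather than code from the paper or from Cogment, and with a slot where a human actor could be swapped into the pool:

```python
# Hedged sketch of the continual coordination setup: a pool of self-play-trained agents
# is cross-played with previously unseen partners. Interfaces are assumed, not the paper's.
import random

def cross_play_phase(env, agent_pool, num_matches, learn=False):
    scores = []
    for _ in range(num_matches):
        partners = random.sample(agent_pool, k=2)   # sample partners that may never have met (2-player games)
        score = play_hanabi(env, partners, learn=learn)
        scores.append(score)
    return sum(scores) / len(scores)

def play_hanabi(env, players, learn):
    obs = env.reset()
    done = False
    score = 0.0
    while not done:
        current = players[env.current_player()]     # assumed helper exposing whose turn it is
        action = current.act(obs)
        obs, reward, done, _ = env.step(action)
        score += reward
        if learn:
            current.observe(obs, reward, done)
    return score

# A human-in-the-loop variant simply puts a human actor into the pool, e.g.:
# agent_pool = trained_agents + [HumanActor(ask_human)]
```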
So this is one natural fit that we found for Cogment. Great. So let's move on to the next paper here, which is learning to navigate the synthetically accessible chemical space using reinforcement learning, with first authors yourself and Sattarov, and with co-authors. I'm really excited about this paper. I remember this back from ICML, I think it was. And I think that's where I met you. Yeah. Yeah. I mean, I wanted to have you on the show largely because of this paper and because I just thought you were great to talk to and you had such interesting views on the work that you were doing. So, yeah, this is kind of the paper that grabbed my attention. So tell us about this exciting paper here. What did you do with this work? Yeah. So the challenge was to generate molecules that are actually synthesizable. So at that time, what people used to do before this paper was, so the molecules are usually represented as a string or as a graph. So they used different kinds of GANs, VAEs, or even reinforcement learning based methods to generate different kinds of these graph structures or these strings and so on. And once they are generated, they're obviously optimized for the reward that we wanted. But once these are generated, there is no guarantee that any of these are actually synthesizable. Yeah. So that's the challenge we were trying to overcome then. Then our approach was basically, instead of searching in the space of the structures, we should actually search in the space of chemical reactions. So we would start with a bunch of chemical reactants, choose one of them and make it react with one other reactant, you get a product, and then choose another reactant, you get one more product and so on. Repeat this process until you get a satisfying reward, or basically optimize in this particular space. So how does the chemistry part work in terms of having the data in place? Are there databases with these chemicals and these reactions and how they would transform your molecule, or how does that work? Yeah. So for the reactants, there is this database called the Enamine dataset. It contains about 150,000 molecules. So that's an initial starting database. And then for chemical reactions, we have something called reaction templates, which basically say what are the reactive parts in any of the reactants and how they react with each other to obtain a particular product, just corresponding to those reactive parts, while the carbon chains attached to the rest of the molecules stay the same. And I think SMARTS is kind of a way to represent these, and we have libraries like RDKit that compute most of these things. I mean, this is kind of implying a giant search tree, maybe not that dissimilar from a game tree, but I guess the branching factor is very huge and the depth is very large, you can't explore the whole tree, is that the story? Exactly. So you can't have any kind of normal tree search or other heuristic methods to search this space. That's why we needed reinforcement learning. Even for reinforcement learning, a space of 150,000 reactants is very huge. So at first, we choose something called a reaction template. There are about 14 of them. And once you choose a specific reaction template, the number of reactants you can choose decreases from about 150,000 to about 30,000 on average. Again, this is on an average, but for a specific template, it could be as low as 50 or as high as 100,000. So it really depends.
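For readers who want to see what a single forward-synthesis step looks like in code, here is a small sketch using RDKit. The SMARTS template (a generic amide coupling) and the reactant SMILES are toy examples chosen for illustration, not items from the paper's template library or reactant set:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Toy reaction template (amide bond formation): carboxylic acid + amine -> amide.
# The SMARTS and the reactants below are illustrative, not from the paper's template set.
rxn = AllChem.ReactionFromSmarts("[C:1](=[O:2])[OH1].[N!H0:3]>>[C:1](=[O:2])[N:3]")

acid = Chem.MolFromSmiles("CC(=O)O")      # acetic acid
amine = Chem.MolFromSmiles("NCc1ccccc1")  # benzylamine

# Apply the template to the chosen reactant pair; each result is a tuple of product mols.
products = rxn.RunReactants((acid, amine))
for prod_tuple in products:
    product = prod_tuple[0]
    Chem.SanitizeMol(product)
    print(Chem.MolToSmiles(product))
```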
So even to compute or to find the reactant in a space of 30,000 reactants is still a very hard task for reinforcement learning agents. So what we did is we predicted the action in a continuous space and then mapped it to the discrete space using the k-NN method, or just computed the first nearest neighbor. So instead of predicting a discrete number from 1 to 150,000, we predicted the properties of a molecule in a continuous space. And we pre-computed all the properties of all these 150,000 reactants beforehand, so that we can directly use the nearest neighbor method to compute the actual reactant that we want. So what is the reward design here? Yeah, so the drug discovery community works on a specific set of benchmarks. One of them is called QED, which is basically a drug-likeness score. So how likely it is that the molecule you generated is good to be a drug. And then you have the penalized logP score, which is kind of related to water solubility, I believe. And then you have other methods. For example, let's say you want to invent a drug to cure HIV, then what you do is you develop some QSAR model. So you know what the HIV target is. And then you have a very small database of molecules and how they reacted to that particular HIV target. So you train some models using some supervised method to obtain a reward model. So when you get a new molecule, you pass your molecule through this reward model and obtain a particular scalar value. So these are called QSAR models. And in that paper, we did it against three HIV based targets. Okay. So it's based on the experience of how effective past drugs have been. Yeah, not necessarily drugs, but any kind of molecules, because yeah, basically your training data shouldn't be biased. So it shouldn't just be packed with only the useful molecules. It should also have some useless molecules so that the score can be predicted accurately. So how do you represent the chemicals internally? So the molecules can be represented in different ways. The people who work with SMILES strings, they're represented as a string converted to a one-hot vector and then an embedding and so on. In the first paper, if I remember correctly, we considered a few representations, that is ECFP4, so these are all vectors. ECFP4 is a vector that contains information on the graphical structure of the molecule. And then we have something called MACCS, which is a binary vector that tells you the presence or absence of different features of the molecule. And then we have something called the multi set, which contains several features. I think there were 200 such features and we had picked 35 of them to use as a representation. So we experimented with all these kinds of representations, and I think at the end, what worked out is ECFP features as the input, because we want a robust representation as input, and then the 35 features from the multi set as the output. So these are established standard representations? Yeah. I wonder if you've been following the AlphaFold work at all, and I know that was for protein folding, a very different space. Yeah. But I wonder if you think those two lines of work have something in common or are going to overlap at some point? No, I think they're very different approaches. AlphaFold is mostly a supervised learning algorithm. But yeah, having the ability to predict the protein structures has a lot of use cases in drug discovery, but no, I don't think it's related to this work.
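Here is a hedged sketch of the action-mapping trick described above: the policy emits a point in a continuous descriptor space, the nearest precomputed reactant is looked up with a one-nearest-neighbor search, and the product can then be scored, here with RDKit's QED as a stand-in reward. The Morgan/ECFP-style fingerprint and the toy reactant list are my own illustrative choices, not the paper's exact descriptors or dataset:

```python
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, QED

def featurize(mol, n_bits=1024):
    # Morgan (ECFP-style) fingerprint as a simple stand-in for the descriptor set used in the paper.
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.float32)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Precompute features for the available reactants (toy list for illustration).
reactant_smiles = ["NCc1ccccc1", "OCCN", "Nc1ccccc1"]
reactants = [Chem.MolFromSmiles(s) for s in reactant_smiles]
reactant_features = np.stack([featurize(m) for m in reactants])

def select_reactant(continuous_action):
    # Map the policy's continuous output to the nearest real reactant (k = 1).
    dists = np.linalg.norm(reactant_features - continuous_action, axis=1)
    return reactants[int(np.argmin(dists))]

def reward(product_mol):
    # QED drug-likeness as one example of the scalar rewards mentioned above.
    return QED.qed(product_mol)
```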
These drugs are not proteins generally, right? But they could affect proteins? Yeah. So they basically react with the proteins. So one, I think the way to see it is, if you have an accurate structure of the protein, then you could probably predict its reactive properties. So this could probably help in the reward function design that you were talking about earlier. Instead of just learning from the existing database of how different molecules interacted with a particular protein target, probably the protein structure can also help in other ways of reward design. So I see this paper is steadily accumulating citations. Are people building cool things on top of this that you're aware of? Yeah, I think so. I think what this paper opened up is kind of a new chemical space for people to experiment on. So it need not be just pure reinforcement learning. So I think I've seen a few papers where people are using genetic algorithms or these evolutionary algorithms instead of RL for exploring the same kind of chemical space. And then people were trying out different representations. I think the graphical representation is very attractive. And I think I've seen one of the papers doing that. And then people also tried learning the inverse graph. So we are just doing forward synthesis, right? So people also tried to do the retrosynthesis based on the forward synthesis. So they tried to train the inverse network as well. Yeah, and I think a very important challenge is multi-objective optimization, because in drug discovery, you just don't want to optimize for one particular score. Your generated molecule should fit a specific profile. For example, it should have a particular drug-likeness score, but it should also have particular water solubility levels and particular different profiles that are not harmful to the human body, basically. So it's essentially a multi-objective optimization problem. And I think a couple of papers have started dealing with that based on this new chemical space. Awesome. That must be very gratifying for you to see as a researcher. Yeah, definitely. Yes. Okay. So coming back to chess, has your chess background influenced your approach to AI, do you think? Not so much, I think. But in general, I think being a chess player helped, because you could generally do your calculations much faster or you could kind of visualize proofs without actually putting everything on paper. I think it has helped in that way, yeah. So what about, has AI influenced your approach to chess at all? Not so much, I think. I mean, I haven't played many chess tournaments since I started doing AI. I've played three or four tournaments. So do you find chess AI interesting? Yeah, I think a lot of exciting things are happening, especially with these tabula rasa learning systems like AlphaZero and so on. I think these kinds of approaches existed before and they were tried on different games. But to see it work on chess is really exciting. I think at the end of the day, I still see that these are only acting like helpers to the Monte Carlo tree search, right? The policy networks or the value networks that these algorithms are learning, I think they're only adding as an extra help to the MCTS, and I think MCTS is still at the core of all these chess engines, which has been the case for many decades. Do you feel like this generation of AI has solved chess in a sense? Or do you think there's more interesting things that we could do in the chess domain or in closely related domains? No, no way.
I don't think so, I think we are very far from saying it to be solved, because we still see AlphaZero, Leela Zero making some mistakes and those mistakes cannot really be explained. So I think it's far from perfect or far from being solved. What do you think the reason is why that happens? What do you think is missing in the design? Yeah, so I think for any chess engine, it mostly boils down to how much computation or how many Monte Carlo tree search simulations you're allowing the engine to have. And despite having all these trained policy and value networks, if you don't allow it to explore enough, there are still a lot of blind ends. Even if it's foreseeing 25 moves, there could be something on the 26th move that the engine has missed, primarily probably because the value network failed to predict that something might happen in the next move. These are still the corner cases. When I observe some engine games, there's a lot of interesting games from AlphaZero. It has been very aggressive in some games. There are a lot of sacrifices that are very good to watch. But at the same time, it still has those components or the drawbacks that the older AI engines have. In a very closed position, it can't plan properly. It just keeps moving the pieces around without a proper futuristic plan. So it seems to me that AlphaZero can only perform as well as the function approximator is properly approximating the function, and also only as well as the data. So if it hasn't explored certain regions, or if the function approximator doesn't generalize enough or in the right way, both of those cases are where the corner cases will hit us. I've never been very clear on how perfect a fit the convolutional network really is for this problem. Seems to me it may be not the perfect fit. Exactly, I agree. That's another very good question to explore. Unlike other board games like Go, chess has a very interesting representation as well. It has multiple kinds of pieces. So you can't just represent them as numbers on a 2D map. So what people do is they use something called bitmap representations. So each piece is represented as a binary one or zero in its dedicated two dimensional map, in a multiple layered three dimensional structure. And yeah, I'm still not sure if it's the most optimal representation to have. And yeah, definitely on top of that, it's very unclear if the usual convolutional networks are suitable to these kinds of representations. There's definitely some locality and some spatial component that maybe the CNN is capturing, but also, like, a rook can move all across the board all at once. That seems like the CNN is not going to be very suitable for that part. So I do wonder about that. I think AlphaFold 1 used some CNNs and then in AlphaFold 2, they took the CNN out, because the locality restriction of the CNN wasn't helping them, because it would restrict the receptive field to the block, the CNN block. So I wonder if that's the case here. You'll never have enough data if the game is hard enough. So I wonder if the challenge is how do you get the network, how do you get the function approximator to generalize without covering every possible position? And then I wonder how to get that inductive bias that we really want, which right now seems very situation specific, designing the inductive bias. I keep going back to AlphaFold because I think it was really interesting.
They really baked in a very specific inductive bias after deeply understanding the problem. So a lot of the intelligence is right there in the inductive bias design, in the network design. And I think that there wasn't much of that in this line of work. Yeah, yeah, there are a lot of open problems to explore in this. I think I would really consider it solved if an agent can play without any search. For example, given a position, can a policy network, or using a value network, can we predict the best move in that position, which I think is impossible to achieve. Yeah, at least not in the next 20, 30 years, I don't think so. I mean, you can play AlphaZero in only one-step mode, I guess, without the full tree search, right? And it still does better than, it still has some level of skill, but it's just not that strong, right? Yeah, yeah, it's very inferior play. And in such a case, I think there are too many failure modes that can be exploited. So I mean, it begs the question like, why do we even need this type of structure, this tree search at all? I gave a talk a while ago to the data science group in Vancouver about why DQN for Atari makes sense and why the AlphaZero algorithm makes sense for a situation like Go. It's because, what I was saying, and see if you agree with me, the reason is that the true value function of Go is so bumpy and hard to predict, whereas in Atari, the value function is much smoother and easier to predict. And so DQN is enough to master that value function. But on the Go side, or maybe on the chess side, the value function changes so much from any small move. So the function is so non-smooth that you have no choice. Your function approximator is not strong enough to generalize into the future. So the only choice you have is to simulate into the future and see what the effect is on the value function. That's exactly correct. But if we had function approximators that were more powerful, that could model the complexity of chess and Go, then we wouldn't need the MCTS. But the fact is the current generation of neural networks doesn't have that property. So maybe it's a failing of the function approximator we have to make up for with this additional mechanism. Is that how you see it? Yeah, I'm still not clear at what point this function approximator would be able to solve that. I don't see that happening any time in the near future, but that's generally true. So what do you think about explainability in chess and these types of games? Like definitely when talking, you know, so I never got very far at chess. I'm not very good at chess, but I was very interested as a kid. And I remember reading books on chess strategy and there would be so many recipes and there's a lot to talk about in chess strategy. And people use a lot of metaphors and people use a lot of generalization as they're talking about the strategy, even when you're talking about open and closed positions and endgames and this and that. There's all these concepts that we throw around. I wonder what you think about explainability in terms of chess AI. Like do you think we could ever get to the point where we could have a discussion with the chess AI about strategy, or is that kind of a ridiculous concept? I think it can explain why it thinks a particular move is good, but that explanation would still be based on the variations that it's calculating and not in any natural language, like that it somehow sees this doubled pawn structure is good. I don't see that happening any time soon.
But yeah, that's something that would be useful to have. I guess there's all this work now with language models, attaching language models to everything and grounding them in everything. Do you think if we plugged a large language model into AlphaZero, we could somehow get it to explain why one side is beating the other in the latest round? It's a very tough challenge. I don't think current language models are accurate enough to do that. I mean, we'd need a lot of novel data to train such models on, which isn't easily accessible or within a reasonable amount of compute. I guess if it read chess books, and if it was able to understand the positions and somehow map them to its existing representation, then maybe we could get somewhere. It's just hard to imagine, but what I've been noticing is that plugging language models into different things is working way better than I ever imagined it would. I'm shocked by how often it works well when people try to get it to work. Yeah, I never thought about having an agent read chess books. That's definitely something interesting. So besides your own work, are there other things happening in RL or other parts of AI lately that you find really interesting, Sai? Yeah, these language models are very interesting; they're already working at a very large scale. I also like these ideas around scaling laws: what some amount of increased computation, increased network size, or increased training data size can do. There's this recent paper from Google that shows some emergent behavior: so far a smaller language model cannot solve certain arithmetic tasks, but with more compute and more scale the accuracy increases significantly. They call these emergent properties, because that particular ability to solve those math problems did not exist when they had less compute. And I want to see how far increased compute will be useful in reinforcement learning. Would you consider yourself in the "scale is all you need" camp? It's not all we need, but I think it's something we definitely need. I went to the scaling laws workshop recently, and it's very exciting. I think many people in that camp also actually believe that scale is not all you need, but it is something you definitely need. So is there anything else I should have asked you today, or that you want to share with our TalkRL audience? Yeah, check out Cogment. It's exciting. If you're working on multi-agent RL or human-in-the-loop learning, check out Cogment, and I'm happy to chat more about your ongoing projects on these topics. So is it open source, anyone can download it? Yeah, exactly, and it's easy to get started with as well, I believe. And we'll have a link in the show notes, but just for the record, where are people getting it? Yeah, it's Cogment.ai. So, Sai Krishna Gottipati, thank you so much for joining us here at TalkRL and sharing your insights with us today. Thanks so much for taking the time. Yeah, thank you for having me. I think it's my first podcast.
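One small footnote on the scaling-laws part of this exchange: those "laws" are usually power-law fits of error against compute, data, or parameter count, and emergent abilities are the cases that jump away from such a smooth fit. Below is a minimal sketch of estimating such a fit by linear regression in log-log space; the (compute, error) numbers are entirely made up for illustration and assume only numpy.

import numpy as np

# Hypothetical (compute, error) measurements; purely illustrative numbers.
compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])
error = np.array([0.52, 0.40, 0.31, 0.24, 0.185])

# Fit error ~ a * compute^(-b)  <=>  log(error) = log(a) - b * log(compute)
slope, intercept = np.polyfit(np.log(compute), np.log(error), 1)
a, b = np.exp(intercept), -slope
print(f"error ~ {a:.3g} * compute^(-{b:.3f})")
print("predicted error at 1e23 FLOPs:", a * 1e23 ** (-b))

A sharp jump in accuracy at some scale, rather than points falling near this line, is what the emergent-abilities paper discussed above is pointing at.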
[ { "end": 10.32, "start": 0, "text": " TalkRL podcast is all reinforced in learning all the time, featuring brilliant guests" }, { "end": 12.4, "start": 10.32, "text": " both research and applied." }, { "end": 15.8, "start": 12.4, "text": " Join the conversation on Twitter at TalkRL podcast." }, { "end": 21.68, "start": 15.8, "text": " I'm your host, Robin Chohan." }, { "end": 25.28, "start": 21.68, "text": " I have a brief message from any scale, our sponsor for this episode." }, { "end": 29.32, "start": 25.28, "text": " Re-enforced in learning is gaining traction as a complementary approach to supervised learning" }, { "end": 33.72, "start": 29.32, "text": " with applications ranging from recommended systems to games to production planning." }, { "end": 38.52, "start": 33.72, "text": " So don't miss Ray Summit, the annual user conference for the Ray open source project, where" }, { "end": 44.44, "start": 38.52, "text": " you can hear how teams at Dow, Verizon, Riot Games and more are solving their RL challenges" }, { "end": 46.120000000000005, "start": 44.44, "text": " with RL lib." }, { "end": 50.04, "start": 46.120000000000005, "text": " That's the Ray ecosystems open source library for RL." }, { "end": 53.84, "start": 50.04, "text": " Ray Summit is happening August 23rd and 24th in San Francisco." }, { "end": 61.400000000000006, "start": 53.84, "text": " You can register at raysubmit.org and use the code RaySummit22RL for a further 25% off" }, { "end": 67.72, "start": 61.400000000000006, "text": " the already reduced prices of 100 bucks for Keynotes only or 150 to add a tutorial from" }, { "end": 68.72, "start": 67.72, "text": " Sven." }, { "end": 70.88, "start": 68.72, "text": " These prices are for the first 25 people to register." }, { "end": 75.36, "start": 70.88, "text": " Now I can see from personal experience, I've used Ray's RL lib and I have recommended it" }, { "end": 76.68, "start": 75.36, "text": " for consulting clients." }, { "end": 80.60000000000001, "start": 76.68, "text": " It's easy to get started with, but it's also highly scalable and supports a variety of" }, { "end": 83.12, "start": 80.60000000000001, "text": " advanced algorithms and settings." }, { "end": 90.28, "start": 83.12, "text": " Now on to our, Psy Krishna Goatipati is an RL researcher at AI redefined working on RL," }, { "end": 96.04, "start": 90.28, "text": " multi-agent RL, human in the loop learning and he developed the world's first RL based" }, { "end": 99.96000000000001, "start": 96.04, "text": " algorithm for synthesizable drug discovery." }, { "end": 104.92, "start": 99.96000000000001, "text": " As a master student at Miele, he worked on RL for active localization, board games and" }, { "end": 106.16, "start": 104.92, "text": " character generation." }, { "end": 110.2, "start": 106.16, "text": " Psy is also an international master in chess." }, { "end": 111.80000000000001, "start": 110.2, "text": " Psy thanks for joining us today." }, { "end": 113.44, "start": 111.8, "text": " Thanks for inviting me." }, { "end": 116.88, "start": 113.44, "text": " Can you tell us a bit more about your current focus areas?" }, { "end": 123, "start": 116.88, "text": " Currently I'm working mostly on multi-agent RL, human in the loop learning and some of" }, { "end": 126.03999999999999, "start": 123, "text": " its applications and some industrial settings." }, { "end": 127.92, "start": 126.03999999999999, "text": " Can you say a bit about the settings?" 
}, { "end": 133.35999999999999, "start": 127.92, "text": " Sure, I think it mostly relates to our product which is Cogment." }, { "end": 136.88, "start": 133.35999999999999, "text": " It's a multi-actor framework." }, { "end": 142.28, "start": 136.88, "text": " So by actor, I mean the actor could be an AI agent or a human agent or a heuristic agent" }, { "end": 143.72, "start": 142.28, "text": " and so on." }, { "end": 151.92, "start": 143.72, "text": " So it's especially useful and very complex ecosystems where multiple actors are acting." }, { "end": 158.35999999999999, "start": 151.92, "text": " So we have multiple industrial clients that are using Cogment for their products." }, { "end": 164.84, "start": 158.35999999999999, "text": " For example, we are working on a project with Thalas for airport security where the idea" }, { "end": 173.72, "start": 164.84, "text": " is to defend the airport from incoming drones or any other objects and we have two teams" }, { "end": 175.32, "start": 173.72, "text": " of drones." }, { "end": 180.16, "start": 175.32, "text": " One is team defending the airport and the other is the one that's trying to attack the" }, { "end": 181.16, "start": 180.16, "text": " airport." }, { "end": 187.16, "start": 181.16, "text": " So as a defense team, we need to develop sophisticated algorithm." }, { "end": 195.12, "start": 187.16, "text": " So this is a different kind of attacks and this is where the teams within the defense team" }, { "end": 202, "start": 195.12, "text": " should learn to collaborate with each other and simultaneously launch and attack agonist" }, { "end": 203, "start": 202, "text": " the offenders." }, { "end": 205.51999999999998, "start": 203, "text": " Yeah, that's one of the applications for example." }, { "end": 206.51999999999998, "start": 205.51999999999998, "text": " Wow, okay." }, { "end": 211.96, "start": 206.51999999999998, "text": " So not showing away from the hard problems in multi-agent RL or the safety critical settings" }, { "end": 212.96, "start": 211.96, "text": " either." }, { "end": 213.96, "start": 212.96, "text": " Yeah." }, { "end": 214.96, "start": 213.96, "text": " That's amazing." }, { "end": 219.04000000000002, "start": 214.96, "text": " So I can hear more about Cogment later on in our chat and I look forward to that." }, { "end": 222.72, "start": 219.04000000000002, "text": " So just looking through your background, it seems like you've had a few different chapters" }, { "end": 224.04000000000002, "start": 222.72, "text": " in your research career." }, { "end": 229.96, "start": 224.04000000000002, "text": " I see early papers on Slam and drug discovery and then computer vision and RL fundamentals." }, { "end": 233.64000000000001, "start": 229.96, "text": " Can you give us a brief idea of how you got to this point?" }, { "end": 239.52, "start": 233.64000000000001, "text": " During my undergrad as part of my honors project, I started in the Robotics Research Center." }, { "end": 243.16, "start": 239.52, "text": " I think it's now called a Triple IT Robotics Lab." }, { "end": 248, "start": 243.16, "text": " So at that time, the lab was working on this Mahinder Rice Challenge, which is to design" }, { "end": 251.88, "start": 248, "text": " an autonomous driver in Garfer Indian roads." 
}, { "end": 259.4, "start": 251.88, "text": " So I started working on various computer vision problems like road sign or traffic signal," }, { "end": 265.88, "start": 259.4, "text": " detection and recognition, bot holds and speed breakers, recognition and so on." }, { "end": 266.88, "start": 265.88, "text": " That sounds challenging." }, { "end": 271.12, "start": 266.88, "text": " I'll just say I learned to drive in New Delhi and he roads there are quite." }, { "end": 275.88, "start": 271.12, "text": " If you want to test your autonomous driving, that's got to be a really good place for a" }, { "end": 276.88, "start": 275.88, "text": " lot of tail events." }, { "end": 283.16, "start": 276.88, "text": " Yeah, I think either but roads are even more challenging." }, { "end": 287.88, "start": 283.16, "text": " So yeah, at that time, so this was back in 2015 or 2016." }, { "end": 292.88, "start": 287.88, "text": " So at that time, I was mostly using still traditional computer vision techniques, but that's" }, { "end": 297.36, "start": 292.88, "text": " also the time when I slowly got introduced to deep learning." }, { "end": 304.48, "start": 297.36, "text": " Yeah, and then I started using deep learning first for the recognition part and then also" }, { "end": 306.44, "start": 304.48, "text": " for the detection part." }, { "end": 310.56, "start": 306.44, "text": " At the same time, I got an opportunity to work on this research project as well, which" }, { "end": 314.44, "start": 310.56, "text": " is the 3D reconstruction of the vehicles." }, { "end": 320.16, "start": 314.44, "text": " I worked on a very small part of the project at the time, which is the key point localization." }, { "end": 325.28000000000003, "start": 320.16, "text": " That's when I got introduced to many of the deep learning frameworks at that time." }, { "end": 330, "start": 325.28, "text": " I think PyTorch was in even released or not that mature at that time." }, { "end": 336.2, "start": 330, "text": " I was dealing with cafe ahead of that time and before cafe was doing deep learning" }, { "end": 340.11999999999995, "start": 336.2, "text": " one in MATLAB, so those are fun times." }, { "end": 351.64, "start": 340.11999999999995, "text": " Yeah, towards the end of my undergrad, I got an admit in Mela to work with Liam Paul." }, { "end": 358.08, "start": 351.64, "text": " Not that robotic slabs basically and the project I'd be working on isn't decided." }, { "end": 363.15999999999997, "start": 358.08, "text": " And I thought I would continue working on some fun and challenging robotics problems." }, { "end": 371.36, "start": 363.15999999999997, "text": " And yeah, I explored a lot of different problems in localization, slam and so on." }, { "end": 376.88, "start": 371.36, "text": " And I finally got to work on the problem of active localization." }, { "end": 382.28, "start": 376.88, "text": " And yeah, I initially tried out the traditional methods for active localization and soon realized" }, { "end": 387.12, "start": 382.28, "text": " that reinforcement learning is a very good fit for this problem." }, { "end": 391.84, "start": 387.12, "text": " So I started using reinforcement learning for active localization and that's how I got" }, { "end": 395.32, "start": 391.84, "text": " into the reinforcement learning." }, { "end": 400.12, "start": 395.32, "text": " At the same time, yeah, I think this was beginning of 2018." 
}, { "end": 405.24, "start": 400.12, "text": " I was also taking the reinforcement learning course at Michael where I got to work on some" }, { "end": 408.64, "start": 405.24, "text": " interesting assignments and projects." }, { "end": 416.56, "start": 408.64, "text": " And yeah, after graduating from Mela, I started working in a drug discovery company where" }, { "end": 421.88, "start": 416.56, "text": " again reinforcement learning is a very good fit for the problem I was working on then." }, { "end": 428.84000000000003, "start": 421.88, "text": " And then now I'm at a redefined where I think I now find multi agent barrel and human looper" }, { "end": 432.12, "start": 428.84000000000003, "text": " like more challenging and interesting problems to work on." }, { "end": 437.76, "start": 432.12, "text": " What would you say are the holy grail problems or the long term goals in your current focus" }, { "end": 438.76, "start": 437.76, "text": " areas?" }, { "end": 444.6, "start": 438.76, "text": " I think human and the loop learning is at very early stages." }, { "end": 448.36, "start": 444.6, "text": " We even don't have proper benchmarks at this point." }, { "end": 453.28000000000003, "start": 448.36, "text": " For example, reinforcement learning Atari is kind of considered to be a good environment" }, { "end": 455.64, "start": 453.28000000000003, "text": " to test different algorithms on." }, { "end": 462.56, "start": 455.64, "text": " But we don't have any such ideal algorithm to test human in the loop learning one." }, { "end": 467.91999999999996, "start": 462.56, "text": " I mean, even the metrics that we have to optimize aren't very clear because it's not just" }, { "end": 470.32, "start": 467.91999999999996, "text": " about maximizing the particular reward." }, { "end": 474.15999999999997, "start": 470.32, "text": " We should also need to care about the trust factor." }, { "end": 480.59999999999997, "start": 474.15999999999997, "text": " And I think as a first very good challenge is to develop this benchmarks, develop the" }, { "end": 485.28, "start": 480.59999999999997, "text": " environments and try to optimize for these different metrics." }, { "end": 491.71999999999997, "start": 485.28, "text": " And yeah, I think in 10 years we would have a very good complex ecosystems where humans" }, { "end": 498.35999999999996, "start": 491.71999999999997, "text": " and AI agents can learn from each other, trust each other and cooperate with each other." }, { "end": 503.91999999999996, "start": 498.35999999999996, "text": " Yeah, I mean, the whole idea of a benchmark for human in the loop seems so difficult to" }, { "end": 504.91999999999996, "start": 503.91999999999996, "text": " execute." }, { "end": 506.47999999999996, "start": 504.91999999999996, "text": " Like how many times can you run it?" }, { "end": 508.35999999999996, "start": 506.47999999999996, "text": " How much human time is it going to take?" }, { "end": 511.76, "start": 508.35999999999996, "text": " How replicable would the results be with different humans?" }, { "end": 512.76, "start": 511.76, "text": " Exactly." }, { "end": 517.6, "start": 512.76, "text": " Do you feel like there's progress is being made in that question or is it is there people" }, { "end": 519.88, "start": 517.6, "text": " kind of kicking the can down the road a bit?" 
}, { "end": 526.08, "start": 519.88, "text": " I've seen some multi agent RL papers will focus on how other RL agents or other types of" }, { "end": 529.28, "start": 526.08, "text": " automated agents will respond or react." }, { "end": 533.2, "start": 529.28, "text": " But but it doesn't seem like there's any clear way to to automate a human response." }, { "end": 536.56, "start": 533.2, "text": " I mean, the whole point is that the human response is very differently than any machine" }, { "end": 537.56, "start": 536.56, "text": " ever would." }, { "end": 542, "start": 537.56, "text": " So how could you ever put that in a loop in terms of like running huge amounts of hyper" }, { "end": 544, "start": 542, "text": " parameters, sweeps or anything like that?" }, { "end": 547.76, "start": 544, "text": " Yeah, that is a very challenging question." }, { "end": 554.44, "start": 547.76, "text": " And we are kind of working on a small part of that right now on the Hanna B project where" }, { "end": 560.92, "start": 554.44, "text": " we are trying to have a humans play, some multi humans play against other agents and train" }, { "end": 566.08, "start": 560.92, "text": " agents in such a way that they can learn to collaborate with all the humans." }, { "end": 567.08, "start": 566.08, "text": " Okay." }, { "end": 571.16, "start": 567.08, "text": " And then we're going to talk about that Hanna B paper in a few minutes." }, { "end": 575.16, "start": 571.16, "text": " So I just saw an announcement a few days ago that that Miele, the research institute in" }, { "end": 578.88, "start": 575.16, "text": " Montreal and your employer, AI redefined, have a partnership." }, { "end": 583.4399999999999, "start": 578.88, "text": " Can you say a bit more about AI redefined and its mission and the and the partnership" }, { "end": 584.4399999999999, "start": 583.4399999999999, "text": " with Miele?" }, { "end": 588.92, "start": 584.4399999999999, "text": " And what stages AI redefined that with with it sounds like really ambitious work?" }, { "end": 593.4, "start": 588.92, "text": " Yeah, so AI redefined started out around 2017." }, { "end": 597.76, "start": 593.4, "text": " It's based in Montreal and it's mission." }, { "end": 603.92, "start": 597.76, "text": " And I think in my own words, it's to develop complex ecosystems where humans and AI agents" }, { "end": 609.12, "start": 603.92, "text": " can, as I was saying, learn from each other or collaborate with each other and trust each" }, { "end": 610.12, "start": 609.12, "text": " other." }, { "end": 614.12, "start": 610.12, "text": " Yeah, so I think that's the grand goal that we have." }, { "end": 620.36, "start": 614.12, "text": " And we are kind of working on multiple projects with Miele researchers, for example, the one" }, { "end": 624.68, "start": 620.36, "text": " with Professor Sarah Chandra's group on Hanna B." }, { "end": 629, "start": 624.68, "text": " And we are looking forward to working on more such projects with other Miele researchers" }, { "end": 633.52, "start": 629, "text": " as well and test out the actual potential of Cogment." }, { "end": 634.52, "start": 633.52, "text": " Awesome." }, { "end": 635.52, "start": 634.52, "text": " Okay." }, { "end": 636.9599999999999, "start": 635.52, "text": " So let's talk about the Cogment paper." }, { "end": 641.16, "start": 636.9599999999999, "text": " That is Cogment, open source framework for distributed multi-actor training, deployment" }, { "end": 643.56, "start": 641.16, "text": " and operations." 
}, { "end": 649.3199999999999, "start": 643.56, "text": " And that's with authors, AI redefined and yourself as well as co-authors." }, { "end": 652.12, "start": 649.3199999999999, "text": " So you said a little bit about Cogments." }, { "end": 657.36, "start": 652.12, "text": " So it's really about multi-agents systems and is it really about learning in real time" }, { "end": 663.4, "start": 657.36, "text": " or inference in real time or what is the, he tells more about the setting that Cogment" }, { "end": 664.4, "start": 663.4, "text": " is best for?" }, { "end": 667.68, "start": 664.4, "text": " Yeah, I wouldn't call it a multi-agent system." }, { "end": 670.16, "start": 667.68, "text": " It's more of a multi-actor system." }, { "end": 676.6800000000001, "start": 670.16, "text": " As I was saying, actor could be an AI agent or a human or a heuristic agent or basically" }, { "end": 678.08, "start": 676.6800000000001, "text": " any other actor." }, { "end": 684.0400000000001, "start": 678.08, "text": " It can be used for even normal simple single agent reinforcement learning algorithms," }, { "end": 690.48, "start": 684.0400000000001, "text": " but then I guess you won't see any advantages compared to the other existing frameworks" }, { "end": 697.44, "start": 690.48, "text": " where Cogment really shines is in these multi-actor systems because you can have multiple" }, { "end": 703.08, "start": 697.44, "text": " actors simultaneously acting in an environment, doing very different things." }, { "end": 710.48, "start": 703.08, "text": " For example, imagine all the ways a human can interact with an AI agent." }, { "end": 716.48, "start": 710.48, "text": " An AI agent can reward a human at every time step or vice versa." }, { "end": 721.76, "start": 716.48, "text": " And similarly one agent can set a curriculum and the other agent can follow the curriculum" }, { "end": 724.64, "start": 721.76, "text": " or even simpler algorithms like behavior learning." }, { "end": 731.0400000000001, "start": 724.64, "text": " So these are all different ways in which a human can interact with the AI agent." }, { "end": 736.04, "start": 731.04, "text": " Cogment is really suited for all these kinds of use cases." }, { "end": 740.8, "start": 736.04, "text": " For example, one simple demonstration would be in the case of a simple gym environment" }, { "end": 746.36, "start": 740.8, "text": " like Luna Lander, where an AI agent with its randomly insularized policy starts playing" }, { "end": 752.12, "start": 746.36, "text": " the game and human can intervene at any time step in the middle of the episode." }, { "end": 758.4399999999999, "start": 752.12, "text": " And the AI agent can learn both from its own samples and from the human interventions." }, { "end": 764, "start": 758.44, "text": " So instead of continuously interacting with the agent, human can just sit back and relax" }, { "end": 769.32, "start": 764, "text": " and only intervene when he thinks that agent is acting very stupidly." }, { "end": 774.5200000000001, "start": 769.32, "text": " And I think this is one of the efficient uses of human time." }, { "end": 781.12, "start": 774.5200000000001, "text": " One of the projects was on this airport security that we are working with Thalas and Matthew" }, { "end": 786.2800000000001, "start": 781.12, "text": " Taylor and Matthew Kustel from University of Alberta." 
}, { "end": 792.36, "start": 786.28, "text": " The other collaboration we are having is with conference.ai, which is I think like kind" }, { "end": 797.8399999999999, "start": 792.36, "text": " of consortium of multiple French industries and labs." }, { "end": 805.72, "start": 797.8399999999999, "text": " And we are working on this specific case of hyperparameter optimization guided by humans." }, { "end": 813.3199999999999, "start": 805.72, "text": " Yeah, so basically allowing the humans to explore the space of hyperparameter so that" }, { "end": 817.84, "start": 813.32, "text": " they can end up with the final optimized parameters that they want." }, { "end": 824.2, "start": 817.84, "text": " One other interesting project is with this major player in training simulation." }, { "end": 826.44, "start": 824.2, "text": " I think I can't reveal the name." }, { "end": 834, "start": 826.44, "text": " But the project is basically in a traffic controller and pilot training where you have multiple" }, { "end": 841.2, "start": 834, "text": " aerial vehicles that are cute to land at different landing spots or landing destination." }, { "end": 845.12, "start": 841.2, "text": " And then you receive an emergency request from a different pilot." }, { "end": 852.2800000000001, "start": 845.12, "text": " And so how should this ATC react so that they can reroute the existing aerial vehicles" }, { "end": 857.1600000000001, "start": 852.2800000000001, "text": " and also help this new training pilot land safely?" }, { "end": 864.36, "start": 857.1600000000001, "text": " We also have this other collaboration with renewable energy company where the goal is" }, { "end": 871.6, "start": 864.36, "text": " to basically manage the energy grid or to decide when to sell or store the energy in the" }, { "end": 872.6, "start": 871.6, "text": " grid." }, { "end": 878.52, "start": 872.6, "text": " It's basically an optimization problem with ARL, but we could however have a human in the" }, { "end": 883.4, "start": 878.52, "text": " loop with an operator actually controlling the decisions." }, { "end": 889.44, "start": 883.4, "text": " And you can also have different kind of risk profiles that are controlled by the humans." }, { "end": 894.96, "start": 889.44, "text": " So how do you think about the use of AI and RL in safety, critical situations?" }, { "end": 899.6400000000001, "start": 894.96, "text": " It seems like especially with the aircraft traffic controller case, I guess, in the power" }, { "end": 900.6400000000001, "start": 899.6400000000001, "text": " case too." }, { "end": 908.2800000000001, "start": 900.6400000000001, "text": " Yeah, so I think it's important to have human in the loop and kind of have human as human" }, { "end": 911.7600000000001, "start": 908.2800000000001, "text": " have a final say in the systems." }, { "end": 916.72, "start": 911.7600000000001, "text": " And yeah, that's kind of primary focus set here in the final as well." }, { "end": 921, "start": 916.72, "text": " Okay, but you think the human and the AI together can make better decisions and safer" }, { "end": 923, "start": 921, "text": " decisions than the human on its own?" }, { "end": 924.1600000000001, "start": 923, "text": " Is that the goal here?" }, { "end": 925.1600000000001, "start": 924.1600000000001, "text": " Yeah, exactly." }, { "end": 930, "start": 925.1600000000001, "text": " I mean, there are some complex planning that needs to be underdone." 
}, { "end": 935.48, "start": 930, "text": " So which in a time critical situations human might not be able to do." }, { "end": 941.2, "start": 935.48, "text": " So agent will do all those hard work very quickly and then it will suggest what it thinks" }, { "end": 942.9200000000001, "start": 941.2, "text": " is the best action." }, { "end": 947.8399999999999, "start": 942.92, "text": " And if it seems like a sensible action that is not dangerous, then the human can approve" }, { "end": 953.56, "start": 947.8399999999999, "text": " that action basically based on the human approval or disapproval the agent can also learn" }, { "end": 956.56, "start": 953.56, "text": " from further from this kinds of feedback." }, { "end": 959.8, "start": 956.56, "text": " So it would be a continually learning system as well." }, { "end": 965.9599999999999, "start": 959.8, "text": " Is it very general or is it more focused on on policy off policy offline?" }, { "end": 970.16, "start": 965.9599999999999, "text": " Is it like model based model phrase all of the above or is it very is it kind of focused" }, { "end": 972.88, "start": 970.16, "text": " on certain aspects of it on the RSI?" }, { "end": 975.24, "start": 972.88, "text": " It's all of the above." }, { "end": 980.88, "start": 975.24, "text": " So especially this we have this thing called retroactive rewards where the even the rewards" }, { "end": 986.56, "start": 980.88, "text": " can be given much later than when the time steps of the episode has actually happened." }, { "end": 991.12, "start": 986.56, "text": " So this gives rise to like wide range of applications as well." }, { "end": 995.36, "start": 991.12, "text": " For example, when AI agent is acting in an environment, human might not be as quick to" }, { "end": 996.88, "start": 995.36, "text": " give the reward, right?" }, { "end": 999.04, "start": 996.88, "text": " So it's useful in those cases." }, { "end": 1005.04, "start": 999.04, "text": " And what stage is Cognitive and is it built on other tools or is it kind of a greenfield" }, { "end": 1006.04, "start": 1005.04, "text": " project?" }, { "end": 1009.64, "start": 1006.04, "text": " Are you extending something here or is it really starting from scratch?" }, { "end": 1011.88, "start": 1009.64, "text": " It's mostly a greenfield project." }, { "end": 1014.3199999999999, "start": 1011.88, "text": " It's based on microservice architecture." }, { "end": 1018.7199999999999, "start": 1014.3199999999999, "text": " I think that's like just like a concept of microservice architecture." }, { "end": 1020.3199999999999, "start": 1018.7199999999999, "text": " There are multiple versions of Cognitive." }, { "end": 1025.44, "start": 1020.3199999999999, "text": " I think the first question came out about one and a half year ago or something recently" }, { "end": 1032.64, "start": 1025.44, "text": " released Cognitive 2.0 which is more academic oriented and which is more friendly to the" }, { "end": 1034.2, "start": 1032.64, "text": " researchers as well." }, { "end": 1040.0800000000002, "start": 1034.2, "text": " And on top of Cognitive we released something called Cognitive as well, which is a collection" }, { "end": 1046.44, "start": 1040.0800000000002, "text": " of a bunch of reinforcement learning agents and environments like simple G-man environments," }, { "end": 1051.16, "start": 1046.44, "text": " pettings who procedural generation environments and so on." 
}, { "end": 1056.48, "start": 1051.16, "text": " So that it would be easy for any actual academic researcher to get started and do a couple" }, { "end": 1058.2, "start": 1056.48, "text": " of experiments with Cognitive." }, { "end": 1065.2, "start": 1058.2, "text": " I guess in the case where a human takes over, are you labeling those samples as expert" }, { "end": 1067.8000000000002, "start": 1065.2, "text": " demonstrations or are they considered differently?" }, { "end": 1073.3600000000001, "start": 1067.8000000000002, "text": " Yes, they can be stored to a different replay buffer or they can be stored on the same" }, { "end": 1074.68, "start": 1073.3600000000001, "text": " replay buffer." }, { "end": 1077.0400000000002, "start": 1074.68, "text": " It depends on how we code here." }, { "end": 1079.72, "start": 1077.0400000000002, "text": " What is your role in Cognitive project?" }, { "end": 1087.2, "start": 1079.72, "text": " I mostly developing on Cognitive first, which is implementing and benchmarking different" }, { "end": 1092.96, "start": 1087.2, "text": " reinforcement learning or multi-agent algorithms with different kinds of environments." }, { "end": 1099.68, "start": 1092.96, "text": " And then we also use Cognitive for all of our research projects, ongoing research projects." }, { "end": 1100.68, "start": 1099.68, "text": " Cool." }, { "end": 1104.16, "start": 1100.68, "text": " Do you want to move on to asymmetric self-play paper?" }, { "end": 1105.16, "start": 1104.16, "text": " Yeah." }, { "end": 1110.0400000000002, "start": 1105.16, "text": " So I think this is a paper from OpenAI that you weren't a co-author on, but you found it" }, { "end": 1112.0400000000002, "start": 1110.0400000000002, "text": " interesting for our discussion." }, { "end": 1117.44, "start": 1112.0400000000002, "text": " I think the idea here is to solve goal-conditioned reinforcement learning." }, { "end": 1124.0800000000002, "start": 1117.44, "text": " Usually, it's a very sparse, what problem and hence it's a very challenging task to solve." }, { "end": 1129.8000000000002, "start": 1124.0800000000002, "text": " So what these guys do is they introduce a new kind of agent, they call it LIs and Bob." }, { "end": 1136.04, "start": 1129.8, "text": " So LIs being like a teacher agent that acts out in the environment and reaches a particular" }, { "end": 1137.2, "start": 1136.04, "text": " state." }, { "end": 1144.12, "start": 1137.2, "text": " And then the Bob agent is supposed to reach that state reached by the LIs." }, { "end": 1148.52, "start": 1144.12, "text": " This way the problem of spasity can be kind of overcome." }, { "end": 1154.44, "start": 1148.52, "text": " So this paper was asymmetric self-play for automatic goal discovery and robotic manipulation" }, { "end": 1158.24, "start": 1154.44, "text": " with authors, OpenAI, Matthias, Clapper, et al." }, { "end": 1163.64, "start": 1158.24, "text": " So why fundamentally, why do you think that splitting the learning problem in this way" }, { "end": 1165.36, "start": 1163.64, "text": " using two separate agents?" }, { "end": 1167.04, "start": 1165.36, "text": " Why is that somehow better?" }, { "end": 1172, "start": 1167.04, "text": " We see different algorithms that split the learning problem in this type of way or in" }, { "end": 1173, "start": 1172, "text": " related ways." }, { "end": 1174.68, "start": 1173, "text": " Why is that somehow better?" 
}, { "end": 1177.48, "start": 1174.68, "text": " Is there some reason why it should make sense to do that?" }, { "end": 1179.48, "start": 1177.48, "text": " It's almost like they're setting up a game, right?" }, { "end": 1187.16, "start": 1179.48, "text": " Yeah, that's so if a single agent is acting out in the environment that what is very sparse," }, { "end": 1189.0400000000002, "start": 1187.16, "text": " especially in goal-conditioned environment." }, { "end": 1196.52, "start": 1189.0400000000002, "text": " So I'm thinking of a robotic manipulation task where all the end locations has to exactly" }, { "end": 1197.52, "start": 1196.52, "text": " match." }, { "end": 1202.68, "start": 1197.52, "text": " Yeah, maybe even after 100 time steps, you might not be able to reach that location." }, { "end": 1208.64, "start": 1202.68, "text": " And it's hard for any typical RL algorithms to learn from such kind of sparse words." }, { "end": 1213.72, "start": 1208.64, "text": " So introducing this new agent will encourage exploration." }, { "end": 1220.48, "start": 1213.72, "text": " It will encourage the first teacher agent or the LIS agent to go to the places it hasn't" }, { "end": 1225.56, "start": 1220.48, "text": " been to before because if it's revolving around the same area, then the Bob agent can" }, { "end": 1231, "start": 1225.56, "text": " reach those locations and the teacher will be negatively rewarded." }, { "end": 1238.04, "start": 1231, "text": " So teacher is always incentivized to explore more and consequently the student is incentivized" }, { "end": 1239.56, "start": 1238.04, "text": " to follow the teacher." }, { "end": 1247.44, "start": 1239.56, "text": " I think this way the exploration is much faster and the end of the day the agent can generalize" }, { "end": 1250.8799999999999, "start": 1247.44, "text": " much better even to the unseen goals." }, { "end": 1254.56, "start": 1250.8799999999999, "text": " So but why do you think that is that it works better with two agents?" }, { "end": 1258.32, "start": 1254.56, "text": " Like you can imagine another formulation where we just had one agent and we said, okay," }, { "end": 1263.04, "start": 1258.32, "text": " we're going to give some curiosity, interested in curiosity to this agent and it's going" }, { "end": 1267.12, "start": 1263.04, "text": " to do its best to explore everywhere and then it's going to and then we're going to do" }, { "end": 1272.28, "start": 1267.12, "text": " some kind of hindsight replay thing to say, we'll just pretend you were trying to find" }, { "end": 1273.28, "start": 1272.28, "text": " these goals." }, { "end": 1278.1999999999998, "start": 1273.28, "text": " It seems like that maybe could work as well or why do you think this is better this way?" }, { "end": 1285.76, "start": 1278.1999999999998, "text": " Yeah, those could work as well but I think one kind of issue or challenge I see with this" }, { "end": 1290.2399999999998, "start": 1285.76, "text": " intrinsic reward based methods or information, theoretic based rewards, curiosity based" }, { "end": 1295.32, "start": 1290.2399999999998, "text": " rewards and so on is they don't necessarily align with your actual goal." 
}, { "end": 1302, "start": 1295.32, "text": " You're especially incentivizing the agent to just increase its curiosity or optimize some" }, { "end": 1307.08, "start": 1302, "text": " kind of information, theoretic metric which might not be relevant to your actual goal" }, { "end": 1310.24, "start": 1307.08, "text": " of solving a goal condition problem." }, { "end": 1317.8799999999999, "start": 1310.24, "text": " But on the other hand, this teacher student approach is kind of incentivizing the agent" }, { "end": 1324.24, "start": 1317.8799999999999, "text": " to reach a wide range of goals in a much quick fashion." }, { "end": 1328.84, "start": 1324.24, "text": " So the training procedure is closer to the test time procedure." }, { "end": 1333.1200000000001, "start": 1328.84, "text": " It seems like the teachers here training for the similar behavior that we actually want" }, { "end": 1334.1200000000001, "start": 1333.1200000000001, "text": " to see." }, { "end": 1335.1200000000001, "start": 1334.1200000000001, "text": " Yeah." }, { "end": 1339.24, "start": 1335.1200000000001, "text": " Right, so if it maybe it's just using some kind of noisy exploration then it's not going" }, { "end": 1344.6, "start": 1339.24, "text": " to be really optimized for quickly getting to any specific goal because it never behaved" }, { "end": 1346.72, "start": 1344.6, "text": " that way really during training time." }, { "end": 1347.72, "start": 1346.72, "text": " Yeah, correct." }, { "end": 1348.72, "start": 1347.72, "text": " Yeah." }, { "end": 1350.84, "start": 1348.72, "text": " All right, well, anything else you want to say about this paper?" }, { "end": 1358.52, "start": 1350.84, "text": " I think we've seen that general idea show up a lot of times in terms of goal selection" }, { "end": 1363.04, "start": 1358.52, "text": " and a separate agent trying to reach that goal as a strategy for self-play." }, { "end": 1369.52, "start": 1363.04, "text": " Yeah, so I think one other interesting thing that did in this paper is add a behavior" }, { "end": 1373.1599999999999, "start": 1369.52, "text": " cloning loss to the student training." }, { "end": 1379.8, "start": 1373.1599999999999, "text": " So usually we have seen multiple approaches before where we have a goal generating agent" }, { "end": 1384.6, "start": 1379.8, "text": " and another agent that's trying to reach the goal, but this goal generating agents are" }, { "end": 1389.12, "start": 1384.6, "text": " usually some VIEs or cans and so on." }, { "end": 1394.68, "start": 1389.12, "text": " But in the case of this asymmetric self-play paper, the teacher agent also actually acts" }, { "end": 1397.44, "start": 1394.68, "text": " in the environment and reaches that position." }, { "end": 1402.12, "start": 1397.44, "text": " What that means for the student agent is in case the student finds the goal too hard to" }, { "end": 1407.84, "start": 1402.12, "text": " reach, then the student can actually learn from the behavior cloning of the teacher." }, { "end": 1412.12, "start": 1407.84, "text": " I think that really helped in much faster training." }, { "end": 1413.52, "start": 1412.12, "text": " But do we have a chicken and egg problem?" }, { "end": 1415.1999999999998, "start": 1413.52, "text": " Like, how does the teacher know how to get there?" }, { "end": 1416.8, "start": 1415.1999999999998, "text": " I actually didn't follow that part." }, { "end": 1418.32, "start": 1416.8, "text": " How does the teacher know how to get there?" 
}, { "end": 1420.6399999999999, "start": 1418.32, "text": " So initially teacher moves completely randomly." }, { "end": 1425.24, "start": 1420.6399999999999, "text": " So both the teacher and the student agent starts out completely randomly." }, { "end": 1431.72, "start": 1425.24, "text": " But once the teacher gets to a certain location and if the student fails to reach their first" }, { "end": 1433.12, "start": 1431.72, "text": " time, then it's good." }, { "end": 1435.36, "start": 1433.12, "text": " The teacher agent gets rewarded." }, { "end": 1440.24, "start": 1435.36, "text": " In the second episode as well, if the teacher reaches the same spot, but now the student" }, { "end": 1442.1999999999998, "start": 1440.24, "text": " has learned how to reach that place." }, { "end": 1446.8799999999999, "start": 1442.1999999999998, "text": " So the student reaches that goal and the teacher will be negatively rewarded." }, { "end": 1451.6399999999999, "start": 1446.8799999999999, "text": " So now the teacher realizes that okay, the student can reach his goals." }, { "end": 1457.6, "start": 1451.6399999999999, "text": " Now I should further expand my space and it's incentivized to export more." }, { "end": 1461.28, "start": 1457.6, "text": " So what kind of settings do you think are most suitable for this?" }, { "end": 1465.76, "start": 1461.28, "text": " I'm thinking of a real world application in the context of industrial robots." }, { "end": 1471.76, "start": 1465.76, "text": " For example, in the kitchen robots or in some factory settings and so on." }, { "end": 1476.28, "start": 1471.76, "text": " Those manipulator arms has to be trained to reach different kinds of poses." }, { "end": 1482.72, "start": 1476.28, "text": " So I think during its training phase, it's ideal if they were trained in this manner." }, { "end": 1489.84, "start": 1482.72, "text": " We have one agent, one teacher agent trying to do multiple, trying to reach multiple locations," }, { "end": 1497.28, "start": 1489.84, "text": " but it could also have multiple student agents trying to reach the same goal pose." }, { "end": 1498.28, "start": 1497.28, "text": " Okay." }, { "end": 1502.9199999999998, "start": 1498.28, "text": " Do you think this really makes sense in simulation and then using some to reel or like" }, { "end": 1507.04, "start": 1502.9199999999998, "text": " literally doing all of us in the real world?" }, { "end": 1509.8, "start": 1507.04, "text": " Yeah, I think that's always a complex question." }, { "end": 1516.3999999999999, "start": 1509.8, "text": " It depends on the specifics, but yeah, doing it in the simulation first and then seem" }, { "end": 1518.52, "start": 1516.3999999999999, "text": " to real time, so it should work." }, { "end": 1522.68, "start": 1518.52, "text": " Okay, so let's move on to a paper that you have currently under review at a conference." }, { "end": 1527.72, "start": 1522.68, "text": " And I won't say the conference name, but it's a big well known conference." }, { "end": 1533, "start": 1527.72, "text": " The papers call do as you teach a multi-teacher approach to self-play in deeper and" }, { "end": 1534.36, "start": 1533, "text": " force learning." }, { "end": 1537.24, "start": 1534.36, "text": " So can you give us a basic idea of what's going on in this paper?" }, { "end": 1538.24, "start": 1537.24, "text": " Sorry." 
}, { "end": 1544.2, "start": 1538.24, "text": " Yeah, so we have seen this as a matrix self-play paper and we implemented it and then" }, { "end": 1550.44, "start": 1544.2, "text": " we noticed that it's working well, but not as good as we expected." }, { "end": 1557, "start": 1550.44, "text": " So then we were thinking of what kind of improvements we can make to that." }, { "end": 1564.76, "start": 1557, "text": " And one issue we noticed is that there is kind of lack of diversity in how the teacher" }, { "end": 1566.48, "start": 1564.76, "text": " is setting the goals." }, { "end": 1573.48, "start": 1566.48, "text": " It is exploring, but it is kind of exploring mostly in one direction, considering grid" }, { "end": 1574.48, "start": 1573.48, "text": " world example." }, { "end": 1581, "start": 1574.48, "text": " And the teacher is setting goals in it's still challenging goals, but it's setting goals" }, { "end": 1582.8, "start": 1581, "text": " in only one direction." }, { "end": 1585.72, "start": 1582.8, "text": " So I think, yeah, that's the basis for our approach." }, { "end": 1593.68, "start": 1585.72, "text": " So we believe that we need multiple teachers to set diverse goals for that could also" }, { "end": 1600.32, "start": 1593.68, "text": " help in faster learning of the student agent and also better generalization." }, { "end": 1604.3999999999999, "start": 1600.32, "text": " And where does the stochasticity come from, the randomness in the teachers?" }, { "end": 1611.1599999999999, "start": 1604.3999999999999, "text": " It's random initialization of the networks and then they act all differently because they" }, { "end": 1615.6799999999998, "start": 1611.1599999999999, "text": " are incentivized based on whether the student has reached the goal or not." }, { "end": 1619.6, "start": 1615.6799999999998, "text": " You could get away with one teacher if the distribution was what you wanted, but you're" }, { "end": 1622.36, "start": 1619.6, "text": " saying you don't get your distribution from one." }, { "end": 1625.72, "start": 1622.36, "text": " And that's because so I just wonder what the other approach would be like, is there some" }, { "end": 1630.2, "start": 1625.72, "text": " way to fix, is there any alternative to fix the distribution?" }, { "end": 1633.24, "start": 1630.2, "text": " Because what I think what we're saying is the distribution from anyone teacher is just" }, { "end": 1636.3600000000001, "start": 1633.24, "text": " not distributed, basically, not evenly distributed." }, { "end": 1641.2, "start": 1636.3600000000001, "text": " So is there some way to make it evenly distributed or there's just no way and this is this multi" }, { "end": 1644.32, "start": 1641.2, "text": " teacher is a kind of a approach to overcome that problem?" }, { "end": 1649.92, "start": 1644.32, "text": " I mean, we thought of other approaches, for example, adding a diversity specific metric" }, { "end": 1656.8000000000002, "start": 1649.92, "text": " and so on, but I think they are really dependent on the environment or particular task at hand" }, { "end": 1661.92, "start": 1656.8000000000002, "text": " and not really gender stick, gender log rhythms." }, { "end": 1664.5600000000002, "start": 1661.92, "text": " And I think there are some other ways you could do it." }, { "end": 1670.76, "start": 1664.5600000000002, "text": " For example, adding goals to the replay buffer that are only diverse." 
}, { "end": 1675.92, "start": 1670.76, "text": " So you let the teacher agent generate all these goals, but store those goals in the replay" }, { "end": 1680.44, "start": 1675.92, "text": " buffer that are explicitly different from these goals that are already stored." }, { "end": 1683.72, "start": 1680.44, "text": " But these are also computationally expensive." }, { "end": 1686.88, "start": 1683.72, "text": " And how do you consider a difference between goals?" }, { "end": 1691.1200000000001, "start": 1686.88, "text": " Like as you have some idea of distance between the goals, is that in terms of steps to get" }, { "end": 1694.2, "start": 1691.1200000000001, "text": " there or how do you think of difference between goals?" }, { "end": 1695.92, "start": 1694.2, "text": " That's another challenge actually." }, { "end": 1701.16, "start": 1695.92, "text": " You can't, you don't have any specific metric or distance between goals." }, { "end": 1703.3200000000002, "start": 1701.16, "text": " If you're acting in a grid world, then it's clear." }, { "end": 1709.32, "start": 1703.32, "text": " But again, it's usually specific to the environment you're acting in, which is why I think this" }, { "end": 1712.32, "start": 1709.32, "text": " multi-teacher approach is very general." }, { "end": 1716.84, "start": 1712.32, "text": " It's not computationally intensive and it gives much better results." }, { "end": 1722.2, "start": 1716.84, "text": " And it also shows that we are actually generating much diverse goals." }, { "end": 1726.2, "start": 1722.2, "text": " And are some of the teachers like other teachers competing among themselves too?" }, { "end": 1729.08, "start": 1726.2, "text": " Like are they kind of losing teachers and winning teachers?" }, { "end": 1737, "start": 1729.08, "text": " It's possible that particular teacher can always get in some kind of local minima." }, { "end": 1741.12, "start": 1737, "text": " You have this danger especially in the case of a single teacher, right?" }, { "end": 1746.04, "start": 1741.12, "text": " It's always possible that it can always get stuck somewhere, but using multiple teachers" }, { "end": 1748.9199999999998, "start": 1746.04, "text": " kind of solves this issue as well." }, { "end": 1752.36, "start": 1748.9199999999998, "text": " It also depends on the complexity of the environment." }, { "end": 1757.1999999999998, "start": 1752.36, "text": " So if the environment is not complex enough, there is no point in having multiple teachers" }, { "end": 1762, "start": 1757.2, "text": " because all the teachers would be generating goals around the same region where the student" }, { "end": 1768.28, "start": 1762, "text": " had already reached that region and the teachers are not getting incentivized anymore." }, { "end": 1771.2, "start": 1768.28, "text": " Well, I love the concept and I love the parallel to the real world." }, { "end": 1775.1200000000001, "start": 1771.2, "text": " I think of every guest on the show as a teacher to me." }, { "end": 1776.64, "start": 1775.1200000000001, "text": " I learned from every guest." }, { "end": 1780.56, "start": 1776.64, "text": " And it's great to have multiple teachers because every teacher has their own distribution" }, { "end": 1783.16, "start": 1780.56, "text": " of areas that they are more interested in." }, { "end": 1787.24, "start": 1783.16, "text": " And so to get a diverse scope is actually a really nice treat." 
}, { "end": 1791.24, "start": 1787.24, "text": " So in this case, in this paper, there's students, teachers, and I think there's also intern" }, { "end": 1792.24, "start": 1791.24, "text": " agents." }, { "end": 1793.24, "start": 1792.24, "text": " Can you tell us about that?" }, { "end": 1794.24, "start": 1793.24, "text": " What is the intern about?" }, { "end": 1795.24, "start": 1794.24, "text": " What are the roles?" }, { "end": 1799.44, "start": 1795.24, "text": " Once we are at the teacher agents' generalities, the schools and the students' learns from" }, { "end": 1805.76, "start": 1799.44, "text": " those goals, we also wanted to see if these generated goals are of use at all." }, { "end": 1810.72, "start": 1805.76, "text": " So we started calling this new agent as intern agent." }, { "end": 1814.96, "start": 1810.72, "text": " So the intern doesn't have access to the teacher's trajectories." }, { "end": 1818.88, "start": 1814.96, "text": " They only have access to the teacher's goals." }, { "end": 1822.56, "start": 1818.88, "text": " Essentially, they can't use something like behavior learning laws or other implementation" }, { "end": 1823.96, "start": 1822.56, "text": " learning methods." }, { "end": 1830.52, "start": 1823.96, "text": " The only way they are allowed to learn is based on this curriculum of goals." }, { "end": 1838.2, "start": 1830.52, "text": " And we have observed that this curriculum of goals set by the teachers is much better" }, { "end": 1841.88, "start": 1838.2, "text": " compared to a random set of goals." }, { "end": 1848.76, "start": 1841.88, "text": " And also, if you increase the number of teachers, the diversity of the goals generated increases" }, { "end": 1852.8, "start": 1848.76, "text": " and also it helps the intern learn much faster." }, { "end": 1857.48, "start": 1852.8, "text": " I think you can also kind of draw the real life parallel to this one as well." }, { "end": 1862.2, "start": 1857.48, "text": " That even if you don't have access to the complete lecture, but if you just have access" }, { "end": 1866.48, "start": 1862.2, "text": " to some references and so on, you could still learn from those references." }, { "end": 1872.56, "start": 1866.48, "text": " But those references has to be accurate and useful and not just arbitrary." }, { "end": 1876.96, "start": 1872.56, "text": " So this reminds me of something I love talking about, which is the paired system." }, { "end": 1878.28, "start": 1876.96, "text": " It's a way of curriculum design." }, { "end": 1883.2, "start": 1878.28, "text": " So is there something similar to paired going on here, or can you talk about the relationship" }, { "end": 1884.2, "start": 1883.2, "text": " between those two ideas?" }, { "end": 1886.6, "start": 1884.2, "text": " Yeah, they're very related." }, { "end": 1893.5, "start": 1886.6, "text": " So our work can be kind of seen as a specific instance of the broader problem of these" }, { "end": 1900.44, "start": 1893.5, "text": " emergent ecosystems where you have one agent, let's call it a teacher agent, that's generating" }, { "end": 1905.12, "start": 1900.44, "text": " increasingly complex environments and the actual reinforcement learning agent that has" }, { "end": 1912, "start": 1905.12, "text": " to solve this whatever the environment the teacher agent throws at it." 
}, { "end": 1919.04, "start": 1912, "text": " So we can see kind of this goal generating teacher and the student agent as a specific" }, { "end": 1924.76, "start": 1919.04, "text": " instance of that, where instead of generating these complex environments, we are only" }, { "end": 1930.3999999999999, "start": 1924.76, "text": " restricting the generation to goals inside a specific environment." }, { "end": 1936.04, "start": 1930.3999999999999, "text": " All those algorithms that are applicable in those emergent ecosystems are applicable" }, { "end": 1938, "start": 1936.04, "text": " here as well broadly speaking." }, { "end": 1944.04, "start": 1938, "text": " For example, I have seen approaches that use like I think evolutionary search or genetic" }, { "end": 1946.96, "start": 1944.04, "text": " algorithms for these kinds of teacher agents." }, { "end": 1948.96, "start": 1946.96, "text": " Can you represent these goals?" }, { "end": 1953.04, "start": 1948.96, "text": " Are they just states that you show the agents that want you to get into the state or how" }, { "end": 1954.3600000000001, "start": 1953.04, "text": " do you represent the goal?" }, { "end": 1958.64, "start": 1954.3600000000001, "text": " Yeah, so we have tried this approach on two environments." }, { "end": 1963.88, "start": 1958.64, "text": " One is fetch and the other is custom driving simulator." }, { "end": 1971.04, "start": 1963.88, "text": " Yeah, in both the cases, we represent the position as X, Y and yeah, we could try other things" }, { "end": 1975.64, "start": 1971.04, "text": " for example as a bit map representation if it's a grid board kind of setting." }, { "end": 1980.6000000000001, "start": 1975.64, "text": " So states as opposed to not like observations, like are these the robot arms, I think you" }, { "end": 1982.3600000000001, "start": 1980.6000000000001, "text": " are talking about a robot arm sitting." }, { "end": 1983.3600000000001, "start": 1982.3600000000001, "text": " Is that right?" }, { "end": 1986.5200000000002, "start": 1983.3600000000001, "text": " A simple gym version of that." }, { "end": 1991.3200000000002, "start": 1986.5200000000002, "text": " And so in that case, is it using proprioceptive observations that's like the state variables" }, { "end": 1996.2800000000002, "start": 1991.3200000000002, "text": " of the positions and angles of the arms or is it more an observation like a image of" }, { "end": 2000.0800000000002, "start": 1996.2800000000002, "text": " the image of the outside of the robot or how does that work?" }, { "end": 2001.6000000000001, "start": 2000.0800000000002, "text": " No, it's not an image." }, { "end": 2007.3999999999999, "start": 2001.6, "text": " The goal would just be included as the goal position that the arm has to reach like X, Y." }, { "end": 2012.8799999999999, "start": 2007.3999999999999, "text": " The actual state is the different position or the velocities of the hand." }, { "end": 2013.8799999999999, "start": 2012.8799999999999, "text": " I see." }, { "end": 2014.8799999999999, "start": 2013.8799999999999, "text": " So what is the intern ad?" }, { "end": 2020.24, "start": 2014.8799999999999, "text": " Is it intern like an additional experiment or does it actually make the learning better?" }, { "end": 2023.36, "start": 2020.24, "text": " It doesn't add to the actual student teacher training." }, { "end": 2029.7199999999998, "start": 2023.36, "text": " It's an additional experiment to show the utility of the goals generated by the teachers." 
}, { "end": 2035.1200000000001, "start": 2029.72, "text": " So what kind of problems are best suited for this type of approach do you think?" }, { "end": 2039.08, "start": 2035.1200000000001, "text": " So we are essentially solving goal condition RL here." }, { "end": 2044.3600000000001, "start": 2039.08, "text": " There are a wide variety of applications for goal condition RL, I think as we were discussing" }, { "end": 2051.2, "start": 2044.3600000000001, "text": " this industrial manipulator robots or even the medical robots and so on." }, { "end": 2052.2, "start": 2051.2, "text": " Cool." }, { "end": 2053.2, "start": 2052.2, "text": " Okay." }, { "end": 2054.84, "start": 2053.2, "text": " Do you want to move to the next paper here?" }, { "end": 2055.84, "start": 2054.84, "text": " Continuous coordination." }, { "end": 2059.92, "start": 2055.84, "text": " So this paper is from ICML 2021." }, { "end": 2063.76, "start": 2059.92, "text": " Continuous coordination as a realistic scenario for lifelong learning." }, { "end": 2067.92, "start": 2063.76, "text": " And this is Nikoi as author plus co-authors." }, { "end": 2072.88, "start": 2067.92, "text": " No, I wasn't involved when the paper was being published." }, { "end": 2079.08, "start": 2072.88, "text": " So this is something I believe that that could be a good set up testing the capabilities" }, { "end": 2080.96, "start": 2079.08, "text": " of a government." }, { "end": 2089.04, "start": 2080.96, "text": " So in their paper, they established this lifelong learning setup with multiple agents and we" }, { "end": 2096.8, "start": 2089.04, "text": " are currently working with these authors to have humans in the loop, to have human agents" }, { "end": 2100.36, "start": 2096.8, "text": " learn to cooperate with the AI agents and vice versa." }, { "end": 2105.48, "start": 2100.36, "text": " So Hanabi is a quite unusual game and I think that's why it comes up in these settings." }, { "end": 2107.88, "start": 2105.48, "text": " It has some very unusual properties." }, { "end": 2113.56, "start": 2107.88, "text": " Can you talk about Hanabi and why it's a good candidate?" }, { "end": 2120.6, "start": 2113.56, "text": " Yeah, it's a very challenging multiplayer, like two to five players, a cooperative card" }, { "end": 2121.6, "start": 2120.6, "text": " game." }, { "end": 2127.88, "start": 2121.6, "text": " So if humans actually play the game for the first time they would never win, I myself" }, { "end": 2133, "start": 2127.88, "text": " played the games multiple times and every time a player changes your entire strategy" }, { "end": 2139.24, "start": 2133, "text": " changes and you kind of have to start everything from the beginning because the players really" }, { "end": 2147.68, "start": 2139.24, "text": " need to establish some kind of implicit connection or strategy of what they're doing." }, { "end": 2154.48, "start": 2147.68, "text": " So the game is basically every player can see every other player's cards except his own" }, { "end": 2162, "start": 2154.48, "text": " cards and at every time step you have, you can choose to do multiple actions." }, { "end": 2167.84, "start": 2162, "text": " The final goal is to, so the colors are basically numbered one to five and they're colored and" }, { "end": 2173.6, "start": 2167.84, "text": " the goal is to drop the cards in such a way that they're arranged in increasing order" }, { "end": 2177.2, "start": 2173.6, "text": " from one to five and across all colors." 
}, { "end": 2184.48, "start": 2177.2, "text": " So yeah, that's a very challenging thing to do and you could choose to give out hints" }, { "end": 2191.04, "start": 2184.48, "text": " to other players or you can choose to drop the card or you can choose to play the card." }, { "end": 2197.08, "start": 2191.04, "text": " There are a very limited number of information tokens so you can't keep giving hints" }, { "end": 2198.08, "start": 2197.08, "text": " forever." }, { "end": 2200.6, "start": 2198.08, "text": " There are very limited number of hints that you could give." }, { "end": 2206.56, "start": 2200.6, "text": " So I mean many games, especially card games, have partial information as part of the game" }, { "end": 2208.32, "start": 2206.56, "text": " and then we have that here too of course." }, { "end": 2211.2799999999997, "start": 2208.32, "text": " Why is it different here?" }, { "end": 2217.24, "start": 2211.2799999999997, "text": " What makes the partial information here different than saying Blackjack or any other game we" }, { "end": 2218.24, "start": 2217.24, "text": " might play?" }, { "end": 2222.68, "start": 2218.24, "text": " I think the cooperative aspect is important fun here." }, { "end": 2229.8799999999997, "start": 2222.68, "text": " The goal is for all the players to play collectively so that they could either all win or all lose." }, { "end": 2236.8399999999997, "start": 2229.8799999999997, "text": " And so this acts like a good benchmark for teaching agents to collaborate with each other" }, { "end": 2242.7999999999997, "start": 2236.8399999999997, "text": " or having bringing humans in the loop and teaching agents to cooperate or collaborate with" }, { "end": 2243.7999999999997, "start": 2242.7999999999997, "text": " humans." }, { "end": 2244.7999999999997, "start": 2243.7999999999997, "text": " So that is unusual." }, { "end": 2249.28, "start": 2244.8, "text": " I think most card games are about winning each person winning and not collaborating as" }, { "end": 2250.28, "start": 2249.28, "text": " a more competitive." }, { "end": 2255.32, "start": 2250.28, "text": " I guess there's games like Bridge with their teams but this idea of all being on the same" }, { "end": 2259.4, "start": 2255.32, "text": " team but missing this crucial information is really interesting." }, { "end": 2264.7200000000003, "start": 2259.4, "text": " It also seems to me a bit artificial in the sense that this game is only fun because" }, { "end": 2269.48, "start": 2264.7200000000003, "text": " you can't say, hey, Si, you're carrying a yellow two and a red three." }, { "end": 2271, "start": 2269.48, "text": " I'm not allowed to say that to you, right?" }, { "end": 2272.6000000000004, "start": 2271, "text": " It's part of the rules of the game." }, { "end": 2275.6, "start": 2272.6, "text": " But as humans, that's trivial." }, { "end": 2279.4, "start": 2275.6, "text": " It's a strange situation because normally we could just say our communication is so good," }, { "end": 2283.4, "start": 2279.4, "text": " we could just easily clear up the situation and win together." }, { "end": 2288, "start": 2283.4, "text": " And so somehow we've, this game has added this artificial constraint." }, { "end": 2289.72, "start": 2288, "text": " You cannot communicate." }, { "end": 2291.72, "start": 2289.72, "text": " You have to really limit your communication bandwidth." 
}, { "end": 2296.36, "start": 2291.72, "text": " Couldn't we short circuit the whole challenge just by letting communication flow freely or" }, { "end": 2297.36, "start": 2296.36, "text": " no?" }, { "end": 2303.04, "start": 2297.36, "text": " No, so because in real in realistic settings, you can of course communicate in natural" }, { "end": 2306.96, "start": 2303.04, "text": " language, but I think that adds a whole lot of complexity." }, { "end": 2314.04, "start": 2306.96, "text": " And at this point or at the current state of research of NLP, I don't think we can trust" }, { "end": 2316.56, "start": 2314.04, "text": " the systems too well." }, { "end": 2321.56, "start": 2316.56, "text": " So I think that's why it's important to constrain on what the agents are allowed to communicate" }, { "end": 2330.92, "start": 2321.56, "text": " at this point, but given these limited communication capabilities that we are perfectly safe, can" }, { "end": 2335.4, "start": 2330.92, "text": " this, can they learn to, can they learn useful cooperative behaviors?" }, { "end": 2336.88, "start": 2335.4, "text": " That's a very good challenge to have." }, { "end": 2339.56, "start": 2336.88, "text": " I mean, we don't have to constrain the agents to speak in natural language." }, { "end": 2343.92, "start": 2339.56, "text": " Like maybe they exchange a vector or something, a learned vector." }, { "end": 2347.56, "start": 2343.92, "text": " They could do a learn to communicate type thing, but that would be against the rules as" }, { "end": 2348.56, "start": 2347.56, "text": " well, right?" }, { "end": 2351.92, "start": 2348.56, "text": " But exchanging vectors with each other, then the Hanabi doesn't work." }, { "end": 2357.7999999999997, "start": 2351.92, "text": " Yeah, I mean, I think the point of this is to see how well they can learn to cooperate." }, { "end": 2360.12, "start": 2357.7999999999997, "text": " It's to have challenging cooperatives." }, { "end": 2365.96, "start": 2360.12, "text": " You can of course change the rules and make it easy, but I think that won't be challenging." }, { "end": 2369.04, "start": 2365.96, "text": " So I can explain the concept of this paper." }, { "end": 2370.88, "start": 2369.04, "text": " So you have the Hanabi game." }, { "end": 2378.52, "start": 2370.88, "text": " So what these guys do is the first train bunch of self-play agents around 100 of them" }, { "end": 2385.56, "start": 2378.52, "text": " so that they can get better by playing with themselves." }, { "end": 2390.6, "start": 2385.56, "text": " And then they sample, randomly sample, few of these trained agents and make them play" }, { "end": 2394.6, "start": 2390.6, "text": " with each other so that they can learn the cooperative behaviors." }, { "end": 2401.16, "start": 2394.6, "text": " And then in the final test phase, they again sample a bunch of agents and that weren't" }, { "end": 2404.6, "start": 2401.16, "text": " chosen before or that did not play with each other before." }, { "end": 2408.8399999999997, "start": 2404.6, "text": " And then they make them play with each other and then see how well it works in the context" }, { "end": 2412.2799999999997, "start": 2408.8399999999997, "text": " of a zero-shot coordination." }, { "end": 2420, "start": 2412.2799999999997, "text": " So what we are currently trying to do or extending this work is to have a human agent play" }, { "end": 2422.6, "start": 2420, "text": " with these bunch of trained agents." 
}, { "end": 2427.64, "start": 2422.6, "text": " And this is not just the challenge for the AI agent, but it's also a challenge for the" }, { "end": 2432.8399999999997, "start": 2427.64, "text": " bunch of human agents to learn to cooperate with these trained agents." }, { "end": 2437.08, "start": 2432.84, "text": " As a trained agent, keep changing." }, { "end": 2442.84, "start": 2437.08, "text": " It's also important to continuously adapt to your new opponents, but also remember how" }, { "end": 2448.96, "start": 2442.84, "text": " we have performed with your old partners, not opponents, but partners." }, { "end": 2453.52, "start": 2448.96, "text": " And we saw things like a population-based training, which I think was used in Starcraft" }, { "end": 2459.2400000000002, "start": 2453.52, "text": " where there was many types of strategies and to nail things down, to keep things from" }, { "end": 2462.1200000000003, "start": 2459.2400000000002, "text": " sliding all over in strategy space." }, { "end": 2466.6, "start": 2462.12, "text": " They had to nail things down by having all these fixed agents and then you keep growing" }, { "end": 2467.6, "start": 2466.6, "text": " the population." }, { "end": 2471.6, "start": 2467.6, "text": " So, it seems like this approach has some things in common with that." }, { "end": 2476.16, "start": 2471.6, "text": " Although the random, I think they went a little further maybe with the population-based" }, { "end": 2482, "start": 2476.16, "text": " training in terms of really keeping track of which agents were dominating which and really" }, { "end": 2489, "start": 2482, "text": " focusing on training on the agents that were still a challenge so that they could get" }, { "end": 2491.4, "start": 2489, "text": " the best ranks possible to be efficient with that." }, { "end": 2494.92, "start": 2491.4, "text": " So I wonder, is this the same type of setting?" }, { "end": 2498.32, "start": 2494.92, "text": " Like would a population-based training also be applicable here?" }, { "end": 2499.76, "start": 2498.32, "text": " Is this kind of an alternative to that?" }, { "end": 2502.12, "start": 2499.76, "text": " Or how do you see the relationship between those two things?" }, { "end": 2506.7200000000003, "start": 2502.12, "text": " Yeah, basically the approaches that were used there can be used here as well." }, { "end": 2513.04, "start": 2506.7200000000003, "text": " I think the Hanabi is basically kind of a simpler version of the game where we don't have" }, { "end": 2519.2400000000002, "start": 2513.04, "text": " any of those additional complexities of let's say, or no vision or other kinds of representations." }, { "end": 2525.04, "start": 2519.24, "text": " The representation here is simple and the code task here is just to learn the abilities" }, { "end": 2526.8399999999997, "start": 2525.04, "text": " of cooperation." }, { "end": 2531.16, "start": 2526.8399999999997, "text": " You know, these types of games require a really good memory." }, { "end": 2532.16, "start": 2531.16, "text": " Is that true?" }, { "end": 2533.16, "start": 2532.16, "text": " That was a strategic, actually." 
}, { "end": 2537.72, "start": 2533.16, "text": " Someone was saying that about strategic, which is another game with a lot of partial" }, { "end": 2543.7999999999997, "start": 2537.72, "text": " information and the idea and the comment was about the fact that well, computers can" }, { "end": 2549.7200000000003, "start": 2543.8, "text": " have trivially memorized any amount of data, so does that make these games less interesting" }, { "end": 2553.52, "start": 2549.7200000000003, "text": " for testing algorithms on because the computer can just remember every comment." }, { "end": 2558.04, "start": 2553.52, "text": " Whereas for a human, they might start losing track of the hints over time." }, { "end": 2560.32, "start": 2558.04, "text": " Is that a factor here or not, not so much?" }, { "end": 2565.5600000000004, "start": 2560.32, "text": " So in strategic, a strategic is basically a two-player game, right?" }, { "end": 2572.7200000000003, "start": 2565.5600000000004, "text": " So one could always kind of try to memorize most of the things, whereas in the case of Hanabi," }, { "end": 2579.7599999999998, "start": 2572.72, "text": " you have the agents as well that are changing, so it's not trivial to memorize." }, { "end": 2585.16, "start": 2579.7599999999998, "text": " Of course, in the context of self-play, it's easy to memorize like if the agent is playing" }, { "end": 2590.56, "start": 2585.16, "text": " with itself, then which is happening in the first phase of the training, then it's easy" }, { "end": 2591.56, "start": 2590.56, "text": " to learn." }, { "end": 2597, "start": 2591.56, "text": " But again, that is being challenged by the next phase of training where these agents are" }, { "end": 2601.2799999999997, "start": 2597, "text": " made to play against with the play alongside other agents." }, { "end": 2607.7200000000003, "start": 2601.28, "text": " And I think this is where the ability of Cognment also really shines, where you have multiple" }, { "end": 2613.5600000000004, "start": 2607.7200000000003, "text": " actors acting out in the environment where these actors can either be the trained agents" }, { "end": 2615.96, "start": 2613.5600000000004, "text": " or the human agents, right?" }, { "end": 2619.92, "start": 2615.96, "text": " So this is one natural fit that we found for Cognment." }, { "end": 2620.92, "start": 2619.92, "text": " Great." }, { "end": 2624.76, "start": 2620.92, "text": " So let's move on to the next paper here, which is learning to navigate the synthetically" }, { "end": 2630.96, "start": 2624.76, "text": " accessible chemical space using reinforcement learning with first author yourself and Sat" }, { "end": 2632.8, "start": 2630.96, "text": " Rav and with co-authors." }, { "end": 2634.2400000000002, "start": 2632.8, "text": " I'm really excited about this paper." }, { "end": 2637.16, "start": 2634.2400000000002, "text": " I remember this back from ICML." }, { "end": 2638.16, "start": 2637.16, "text": " I think it was." }, { "end": 2640.56, "start": 2638.16, "text": " And I think that's where I met you." }, { "end": 2641.56, "start": 2640.56, "text": " Yeah." }, { "end": 2642.56, "start": 2641.56, "text": " Yeah." }, { "end": 2647.88, "start": 2642.56, "text": " I mean, I wanted to have you on the show largely because of this paper and because I just" }, { "end": 2652.76, "start": 2647.88, "text": " thought you were great to talk to and you had such interesting views on the work that" }, { "end": 2654, "start": 2652.76, "text": " you were doing." 
}, { "end": 2658.52, "start": 2654, "text": " So, yeah, this is kind of the paper that's a grab my attention." }, { "end": 2661.7599999999998, "start": 2658.52, "text": " So tell us, tell us about this exciting paper here." }, { "end": 2662.7599999999998, "start": 2661.7599999999998, "text": " What did you do with this work?" }, { "end": 2663.7599999999998, "start": 2662.7599999999998, "text": " Yeah." }, { "end": 2669.12, "start": 2663.7599999999998, "text": " So the challenge was to generate molecules that are actually synthesizable." }, { "end": 2675.7599999999998, "start": 2669.12, "text": " So at that time, what people used to do before this paper was that so the molecules are usually" }, { "end": 2678.68, "start": 2675.7599999999998, "text": " represented as a string or as a graph." }, { "end": 2680.68, "start": 2678.68, "text": " So they use different kinds of cans." }, { "end": 2687.4, "start": 2680.68, "text": " We, sorry, even reinforcement learning based methods to generate different kinds of" }, { "end": 2691.2000000000003, "start": 2687.4, "text": " these graph structures or these strings and so on." }, { "end": 2697.76, "start": 2691.2000000000003, "text": " But once they are generated and they're obviously optimized for the reward that we wanted." }, { "end": 2703.48, "start": 2697.76, "text": " But once these are generated, there is no guarantee that any of these are actually synthesizable." }, { "end": 2704.48, "start": 2703.48, "text": " Yeah." }, { "end": 2709.48, "start": 2704.48, "text": " So that's the challenge we were trying to overcome then." }, { "end": 2716.2000000000003, "start": 2709.48, "text": " Then our approach was basically instead of searching in the space of the structures, we" }, { "end": 2721.3999999999996, "start": 2716.2, "text": " should actually search in the space of chemical reactions." }, { "end": 2727.7999999999997, "start": 2721.3999999999996, "text": " So we would start with a bunch of chemical reactants, choose one of them and make it react" }, { "end": 2733.2, "start": 2727.7999999999997, "text": " with one other reactant, you get a product and then choose another reactant, you get one" }, { "end": 2734.2799999999997, "start": 2733.2, "text": " more product and so on." }, { "end": 2740.6, "start": 2734.2799999999997, "text": " Repeat this process until you get satisfying the world or basically optimize in this particular" }, { "end": 2742.56, "start": 2740.6, "text": " space." }, { "end": 2747.84, "start": 2742.56, "text": " So how does the chemistry part work in terms of having the data in place?" }, { "end": 2752.7999999999997, "start": 2747.84, "text": " Is there databases with these these chemicals and these reactions and how it would transform" }, { "end": 2755.2799999999997, "start": 2752.7999999999997, "text": " your molecule or how does how does that work?" }, { "end": 2756.2799999999997, "start": 2755.2799999999997, "text": " Yeah." }, { "end": 2761.4, "start": 2756.2799999999997, "text": " So for the reactants, there is this database called inner mind datasets." }, { "end": 2765.4, "start": 2761.4, "text": " It contains about 150,000 molecules." }, { "end": 2768.96, "start": 2765.4, "text": " So that's an initial starting database." 
}, { "end": 2775.48, "start": 2768.96, "text": " And then for chemical reaction, we have something called reaction templates which basically" }, { "end": 2782.08, "start": 2775.48, "text": " say that what are reactive parts in any of the reactants and how they react with each other" }, { "end": 2788.08, "start": 2782.08, "text": " to obtain a particular product just corresponding to those reactive parts and what are the carbon" }, { "end": 2792.96, "start": 2788.08, "text": " string attached to the rest of the molecules test the same way." }, { "end": 2801.96, "start": 2792.96, "text": " And I think a smart is kind of way to represent these and we have libraries like RTKT that" }, { "end": 2805.28, "start": 2801.96, "text": " does the computes most of these things." }, { "end": 2810.8, "start": 2805.28, "text": " I mean, this is kind of implying a giant search tree, maybe not that, not that disillular" }, { "end": 2815.84, "start": 2810.8, "text": " from a game tree, but I guess the branching factor is very huge and depth is very large," }, { "end": 2818.44, "start": 2815.84, "text": " you can't explore the whole tree, is that, is that the story?" }, { "end": 2819.44, "start": 2818.44, "text": " Exactly." }, { "end": 2825.12, "start": 2819.44, "text": " So, you can't have normal any kind of research or other heuristic methods to search this" }, { "end": 2826.12, "start": 2825.12, "text": " base." }, { "end": 2829.92, "start": 2826.12, "text": " That's why we needed reinforcement learning." }, { "end": 2836.44, "start": 2829.92, "text": " Even for reinforcement learning, space of 150,000 reactants is very huge." }, { "end": 2840.2000000000003, "start": 2836.44, "text": " So at first, we choose something called reaction template." }, { "end": 2842.4, "start": 2840.2000000000003, "text": " There are about 14 and of them." }, { "end": 2848.44, "start": 2842.4, "text": " And once you choose a specific reaction template, the number of reactants, you can choose" }, { "end": 2853.16, "start": 2848.44, "text": " decreases from about 150,000 to about 30,000 on average." }, { "end": 2859.88, "start": 2853.16, "text": " Again, this is on an average, but for a specific template, it could be as low as 50 or as" }, { "end": 2860.88, "start": 2859.88, "text": " high as 100,000." }, { "end": 2862.88, "start": 2860.88, "text": " So it really depends." }, { "end": 2872.08, "start": 2862.88, "text": " So even to compute or to find the reactant in space of 30,000 reactants is still very" }, { "end": 2874.36, "start": 2872.08, "text": " hard task for reinforcement learning agents." }, { "end": 2882.6800000000003, "start": 2874.36, "text": " So what we did is we predicted the action in continuous space and then mapped it to the" }, { "end": 2888.92, "start": 2882.6800000000003, "text": " discrete space using the KNN method or just computed the first nearest neighbor." }, { "end": 2896.04, "start": 2888.92, "text": " So we predicted the proper, instead of predicting the number, a discrete number from 1 to 150,000," }, { "end": 2899.96, "start": 2896.04, "text": " we predicted the properties of a molecule in a continuous space." }, { "end": 2907.2, "start": 2899.96, "text": " And we pre-computed all the properties of all these 150,000 reactants beforehand so" }, { "end": 2912.48, "start": 2907.2, "text": " that we can directly use the nearest neighbor method to compute the actual reactant that" }, { "end": 2913.48, "start": 2912.48, "text": " we want." 
}, { "end": 2916.2400000000002, "start": 2913.48, "text": " So what is the reward design here?" }, { "end": 2923, "start": 2916.2400000000002, "text": " Yeah, so the drug discovery community works on a specific set of benchmarks." }, { "end": 2927.48, "start": 2923, "text": " One of them is called QED, which is basically a drug likeness score." }, { "end": 2933.68, "start": 2927.48, "text": " So how likely that the molecule you generated is good to be a drug." }, { "end": 2939.8, "start": 2933.68, "text": " And then you have penalized a log B score, which is kind of related to what a solubility" }, { "end": 2941.28, "start": 2939.8, "text": " I believe." }, { "end": 2945.64, "start": 2941.28, "text": " And then you have other methods." }, { "end": 2952.2, "start": 2945.64, "text": " For example, let's say you want to invent a drug to cure HIV, then what you do is you" }, { "end": 2954.4, "start": 2952.2, "text": " develop some QSAR model." }, { "end": 2958.12, "start": 2954.4, "text": " So you know what a HIV target is." }, { "end": 2965.76, "start": 2958.12, "text": " And then you have a very small database of molecules and how it reacted to that particular" }, { "end": 2967.2400000000002, "start": 2965.76, "text": " HIV target." }, { "end": 2973.92, "start": 2967.2400000000002, "text": " So you train some models using some supervised method to obtain a reward model." }, { "end": 2980.84, "start": 2973.92, "text": " So when you get a new molecule, you pass your molecule through this reward model and obtain" }, { "end": 2982.76, "start": 2980.84, "text": " a particular scalar value." }, { "end": 2985.7200000000003, "start": 2982.76, "text": " So these are called QSAR models." }, { "end": 2992.0800000000004, "start": 2985.7200000000003, "text": " And in that paper, we did it against three HIV based targets." }, { "end": 2993.0800000000004, "start": 2992.0800000000004, "text": " Okay." }, { "end": 2998.5600000000004, "start": 2993.0800000000004, "text": " So it's based on the experience of how other drugs, how effective a past drugs have been." }, { "end": 3006.5600000000004, "start": 2998.5600000000004, "text": " Yeah, not necessarily drugs, but any kind of molecules because yeah, basically your training" }, { "end": 3008.28, "start": 3006.5600000000004, "text": " data shouldn't be biased." }, { "end": 3012.52, "start": 3008.28, "text": " So it shouldn't be just be passed with only the useful molecules." }, { "end": 3019, "start": 3012.52, "text": " This should also have some useless molecules so that the score can be predicted accurately." }, { "end": 3022.52, "start": 3019, "text": " So how do you represent the chemicals internally?" }, { "end": 3027.0400000000004, "start": 3022.52, "text": " So the molecules can be represented in different ways." }, { "end": 3033.6000000000004, "start": 3027.0400000000004, "text": " The people who work with smile string, they're represented in string converted to the one" }, { "end": 3036.96, "start": 3033.6000000000004, "text": " heart vector and then embedding and so on." }, { "end": 3044.76, "start": 3036.96, "text": " First paper, if I remember correctly, we considered a few representations that is ECFP4, which" }, { "end": 3046.92, "start": 3044.76, "text": " kind, so these are all vectors." }, { "end": 3054.2, "start": 3046.92, "text": " ECFP4 is a vector that contains information of the graphical structure of the molecule." 
}, { "end": 3061.64, "start": 3054.2, "text": " And then we have something called MACCS or MACS, which is a binary vector that tells you" }, { "end": 3066.64, "start": 3061.64, "text": " the presence or absence of different features of the molecule." }, { "end": 3073.04, "start": 3066.64, "text": " And then we have something called multi set, which contains several features." }, { "end": 3080.6, "start": 3073.04, "text": " I think there were 200 such features and we had picked 35 of them to use as a representation." }, { "end": 3086.48, "start": 3080.6, "text": " So we experimented with all these kinds of representations and I think at the end," }, { "end": 3093.8399999999997, "start": 3086.48, "text": " what bugged out is ECFP features as the input because we want a robust representation" }, { "end": 3100.48, "start": 3093.84, "text": " as input and then the multi set, the 35 features from multi set as the output." }, { "end": 3103.1200000000003, "start": 3100.48, "text": " So these are established standard representations?" }, { "end": 3104.1200000000003, "start": 3103.1200000000003, "text": " Yeah." }, { "end": 3109.6000000000004, "start": 3104.1200000000003, "text": " I wonder if you've been following the alpha fold work at all and I know that was for protein" }, { "end": 3111.36, "start": 3109.6000000000004, "text": " folding very different space." }, { "end": 3112.36, "start": 3111.36, "text": " Yeah." }, { "end": 3117.88, "start": 3112.36, "text": " But I wonder if you think those two lines of work have something in common or are going" }, { "end": 3119.6000000000004, "start": 3117.88, "text": " to overlap at some point?" }, { "end": 3127.72, "start": 3119.6, "text": " No, I think they're very different approaches that alpha fold is mostly a supervised learning" }, { "end": 3129.2, "start": 3127.72, "text": " algorithm." }, { "end": 3135.12, "start": 3129.2, "text": " But yeah, having the ability to predict the protein structures has a lot of use cases" }, { "end": 3139.68, "start": 3135.12, "text": " in drug discovery, but not I don't think it's related to this work." }, { "end": 3141.88, "start": 3139.68, "text": " These drugs are not proteins generally, right?" }, { "end": 3144.08, "start": 3141.88, "text": " But they could affect proteins?" }, { "end": 3145.08, "start": 3144.08, "text": " Yeah." }, { "end": 3148.2, "start": 3145.08, "text": " So they basically react with the proteins." }, { "end": 3153.3599999999997, "start": 3148.2, "text": " So one, I think the way to see it is if you have an accurate structure of the protein," }, { "end": 3157.3199999999997, "start": 3153.3599999999997, "text": " then you could probably predict its reactive properties." }, { "end": 3163.9199999999996, "start": 3157.3199999999997, "text": " So this could probably help in the reward function design that you were talking about earlier." }, { "end": 3170.56, "start": 3163.9199999999996, "text": " Instead of just learning from the existing database of how different molecules interacted" }, { "end": 3176.9199999999996, "start": 3170.56, "text": " with a particular protein target, probably the protein structure can also help in other" }, { "end": 3179.16, "start": 3176.92, "text": " ways of reward design." }, { "end": 3183.52, "start": 3179.16, "text": " So I see this paper is deadly accumulating citations." }, { "end": 3186.4, "start": 3183.52, "text": " Are people building cool things on top of this that you're aware of?" }, { "end": 3187.4, "start": 3186.4, "text": " Yeah, I think so." 
}, { "end": 3194.48, "start": 3187.4, "text": " I think what this paper opened up is kind of a new chemical space for people to experiment" }, { "end": 3195.48, "start": 3194.48, "text": " on." }, { "end": 3197.88, "start": 3195.48, "text": " So it need not just pure reinforcement learning." }, { "end": 3205.52, "start": 3197.88, "text": " So I think I've seen a few papers where people are using genetic algorithms or this evolutionary" }, { "end": 3211.6, "start": 3205.52, "text": " algorithms instead of RL for exploring the same kind of chemical space." }, { "end": 3215.96, "start": 3211.6, "text": " And then people were trying out different representations." }, { "end": 3219.56, "start": 3215.96, "text": " I think graphical representation is very attractive." }, { "end": 3222.12, "start": 3219.56, "text": " And I think I've seen one of the papers doing that." }, { "end": 3228.44, "start": 3222.12, "text": " And then people can also, they also tried to think learning the inverse graph." }, { "end": 3231.72, "start": 3228.44, "text": " So we are just doing forwards synthesis, right?" }, { "end": 3237.08, "start": 3231.72, "text": " So people also tried to do the retro synthesis based on the forward synthesis." }, { "end": 3240.3599999999997, "start": 3237.08, "text": " So they tried to train the inverse network as well." }, { "end": 3245.3599999999997, "start": 3240.3599999999997, "text": " Yeah, I think very important challenges." }, { "end": 3250.24, "start": 3245.3599999999997, "text": " Multi-objective optimization because in drug discovery, you just don't want to optimize" }, { "end": 3252.56, "start": 3250.24, "text": " for one particular score." }, { "end": 3256.64, "start": 3252.56, "text": " Your generated molecule should fit in a specific profile." }, { "end": 3260.9199999999996, "start": 3256.64, "text": " For example, it should have a particular drug likeness score, but it should also have" }, { "end": 3266.6, "start": 3260.92, "text": " particular water solubility levels and particular different profiles that are not harmful to" }, { "end": 3268.56, "start": 3266.6, "text": " human body, basically." }, { "end": 3272.6800000000003, "start": 3268.56, "text": " So it's essentially a multi-objective optimization problem." }, { "end": 3279.52, "start": 3272.6800000000003, "text": " And I think a couple of papers have started dealing with that based on this new chemical" }, { "end": 3280.52, "start": 3279.52, "text": " space." }, { "end": 3281.52, "start": 3280.52, "text": " Awesome." }, { "end": 3283.6800000000003, "start": 3281.52, "text": " That must be very gratifying for you to see as a researcher." }, { "end": 3284.6800000000003, "start": 3283.6800000000003, "text": " Yeah, definitely." }, { "end": 3285.6800000000003, "start": 3284.6800000000003, "text": " Yes." }, { "end": 3286.6800000000003, "start": 3285.6800000000003, "text": " Okay." }, { "end": 3290.88, "start": 3286.6800000000003, "text": " So coming back to chess, has your chess background influenced your approach to AI?" }, { "end": 3291.88, "start": 3290.88, "text": " Do you think?" }, { "end": 3293.84, "start": 3291.88, "text": " Not so much, I think." }, { "end": 3300.84, "start": 3293.84, "text": " But in general, I think being a chess player helped because you could generally do your calculations" }, { "end": 3306.76, "start": 3300.84, "text": " much faster or you could kind of visualize proofs without actually putting everything" }, { "end": 3307.76, "start": 3306.76, "text": " on paper." 
}, { "end": 3309.8, "start": 3307.76, "text": " I think it has helped in that way, yeah." }, { "end": 3313.48, "start": 3309.8, "text": " So what about has AI influenced your approach to chess at all?" }, { "end": 3314.8, "start": 3313.48, "text": " Not so much, I think." }, { "end": 3320.84, "start": 3314.8, "text": " I mean, I haven't played many chess tournaments since I started doing AI." }, { "end": 3324.1600000000003, "start": 3320.84, "text": " I've played three or four tournaments." }, { "end": 3326.52, "start": 3324.1600000000003, "text": " So do you find chess AI interesting?" }, { "end": 3332.92, "start": 3326.52, "text": " Yeah, I think a lot of exciting things are happening, especially with this tabular as" }, { "end": 3336.6800000000003, "start": 3332.92, "text": " a learning system like Alpha, Alpha, Zero and so on." }, { "end": 3343.4, "start": 3336.6800000000003, "text": " I think this kind of approaches existed before and they were tried on different games." }, { "end": 3347.52, "start": 3343.4, "text": " But to see it work on chess is really exciting." }, { "end": 3354.64, "start": 3347.52, "text": " I think at the end of the day, I still see that these are only acting like a help us to" }, { "end": 3357.92, "start": 3354.64, "text": " the Monte Carlo research, right?" }, { "end": 3362.64, "start": 3357.92, "text": " The policy networks or the value networks that these algorithms are learning." }, { "end": 3370.08, "start": 3362.64, "text": " I think they're only adding as an extra help to the MCTS and I think MCTS is still at the" }, { "end": 3376.64, "start": 3370.08, "text": " core of all this chess engines, which has been since many decades." }, { "end": 3381.52, "start": 3376.64, "text": " Do you feel like this generation of AI has solved chess in a sense?" }, { "end": 3386.16, "start": 3381.52, "text": " Or do you think there's more interesting things that we could do in the chess domain or" }, { "end": 3387.72, "start": 3386.16, "text": " in close related domains?" }, { "end": 3388.72, "start": 3387.72, "text": " No, no way." }, { "end": 3396, "start": 3388.72, "text": " I don't think I think we are very far from saying it to be solved because we still see" }, { "end": 3404.68, "start": 3396, "text": " this Alpha, Zero, Lila, Zero making some mistakes and those mistakes cannot really be explained." }, { "end": 3411.16, "start": 3404.68, "text": " So I think it's far from perfect or far from being solved." }, { "end": 3415.48, "start": 3411.16, "text": " What do you think the reason is why it's that happens?" }, { "end": 3418.44, "start": 3415.48, "text": " What do you think is that with is missing in the design?" }, { "end": 3425.96, "start": 3418.44, "text": " Yeah, so I think for any chess engine, mostly boils down to how much computation or how" }, { "end": 3431.48, "start": 3425.96, "text": " many Monte Carlo research simulations you're allowing the engine to have." 
}, { "end": 3437.36, "start": 3431.48, "text": " And despite having all this trained policy and value networks, if you don't allow it" }, { "end": 3442.76, "start": 3437.36, "text": " to explore for enough, there are still a lot of blind ends even if it's forcing 25" }, { "end": 3447.96, "start": 3442.76, "text": " mows, there could be something on the 26th slide that it was 26 more than the engine" }, { "end": 3453.4, "start": 3447.96, "text": " has missed that primarily probably because the value network failed to predict that something" }, { "end": 3455.92, "start": 3453.4, "text": " might happen in the next move." }, { "end": 3458.08, "start": 3455.92, "text": " These are still the corner guesses." }, { "end": 3462, "start": 3458.08, "text": " Can I observe some engine games?" }, { "end": 3466.68, "start": 3462, "text": " There's a lot of interesting games from Alpha, Zero." }, { "end": 3470.52, "start": 3466.68, "text": " It has been very aggressive in some games." }, { "end": 3475.56, "start": 3470.52, "text": " There are a lot of sacrifices that's very good to watch." }, { "end": 3482.12, "start": 3475.56, "text": " But at the same time, it still has those components or the drawbacks that the older AI engines" }, { "end": 3483.12, "start": 3482.12, "text": " have." }, { "end": 3488.48, "start": 3483.12, "text": " In a very closed position, it can't plan properly." }, { "end": 3493.92, "start": 3488.48, "text": " It just keeps moving the pieces around without proper futuristic plan." }, { "end": 3499.2799999999997, "start": 3493.92, "text": " So it seems to me that Alpha Zero can only perform as well as the function approximator is" }, { "end": 3504.68, "start": 3499.2799999999997, "text": " properly approximating the function and also only as well as the data." }, { "end": 3509.64, "start": 3504.68, "text": " So if it hasn't explored certain regions or if the function approximator doesn't generalize" }, { "end": 3511.92, "start": 3509.64, "text": " enough or in the right way." }, { "end": 3515.84, "start": 3511.92, "text": " And in both of those cases are the where the corner cases will hit us." }, { "end": 3522.44, "start": 3515.84, "text": " I've never been very clear on how perfect a fit the convolutional network really is for" }, { "end": 3523.44, "start": 3522.44, "text": " this problem." }, { "end": 3526.2400000000002, "start": 3523.44, "text": " Seems to me it may be not the perfect fit." }, { "end": 3527.2400000000002, "start": 3526.2400000000002, "text": " Exactly, I agree." }, { "end": 3532.52, "start": 3527.2400000000002, "text": " That's another very good question to explore." }, { "end": 3538.28, "start": 3532.52, "text": " Unlike other board games like Go, chess has a very interesting representation as well." }, { "end": 3541.04, "start": 3538.28, "text": " It has multiple kinds of pieces." }, { "end": 3544.96, "start": 3541.04, "text": " So you can't just represent them as numbers on a 2D map." }, { "end": 3549.48, "start": 3544.96, "text": " So what people do is they use something called bitmap representations." }, { "end": 3556.88, "start": 3549.48, "text": " So each piece is represented in a binary one or zero in its dedicated two dimensional" }, { "end": 3563.2799999999997, "start": 3556.88, "text": " map in a multiple layered three dimensional structure." }, { "end": 3569.64, "start": 3563.2799999999997, "text": " And yeah, I'm still not sure if it's the most optimal representation to have." 
}, { "end": 3575.3599999999997, "start": 3569.64, "text": " And yeah, definitely on top of that, it's very unclear if the usual convolutional networks" }, { "end": 3578.56, "start": 3575.3599999999997, "text": " are suitable to these kind of representations." }, { "end": 3582.7999999999997, "start": 3578.56, "text": " There's definitely some locality and some spatial component that maybe the CNN is capturing," }, { "end": 3587.8399999999997, "start": 3582.7999999999997, "text": " but also like a rook and move all across the board all at once." }, { "end": 3591.68, "start": 3587.8399999999997, "text": " That seems like CNN is not going to be very suitable for that part." }, { "end": 3594, "start": 3591.68, "text": " So I do wonder about that." }, { "end": 3600.48, "start": 3594, "text": " I think in Alpha Fold, Alpha Fold 1 used some CNNs and then in Alpha Fold 2, they took" }, { "end": 3605.48, "start": 3600.48, "text": " the CNN out because of the locality restriction of the CNN wasn't helping them because it" }, { "end": 3609.28, "start": 3605.48, "text": " would restrict the field to the block, the CNN block." }, { "end": 3612.12, "start": 3609.28, "text": " So I wonder if that's the case here." }, { "end": 3615.88, "start": 3612.12, "text": " You'll never have enough data if the game is hard enough." }, { "end": 3620.16, "start": 3615.88, "text": " So I wonder if the challenge is how do you get the network, how do you get the function" }, { "end": 3624.44, "start": 3620.16, "text": " proximity to generalize without covering every possible position?" }, { "end": 3628.72, "start": 3624.44, "text": " And then I wonder if there's how to get that inductive bias that we really want, which" }, { "end": 3634.48, "start": 3628.72, "text": " seems right now, it seems very situation specific, designing the inductive bias." }, { "end": 3638.2799999999997, "start": 3634.48, "text": " I was, I keep going back to Alpha Fold because I think it was really interesting." }, { "end": 3645.08, "start": 3638.2799999999997, "text": " They really baked in a very specific inductive bias after deeply understanding the problem." }, { "end": 3649.2, "start": 3645.08, "text": " So a lot of the intelligence is right there in the inductive bias design in the network design." }, { "end": 3652.24, "start": 3649.2, "text": " And I think that there wasn't much of that in the slide of work." }, { "end": 3656.7999999999997, "start": 3652.24, "text": " Yeah, yeah, it is a lot of open problems to explore in this." }, { "end": 3664.68, "start": 3656.7999999999997, "text": " I think I really consider it solved if an agent can play without any research." }, { "end": 3670.3999999999996, "start": 3664.68, "text": " For example, if given a position can a policy network or using a value network can we predict" }, { "end": 3675.6, "start": 3670.3999999999996, "text": " the best move in that position, which I think is impossible to achieve." }, { "end": 3679.92, "start": 3675.6, "text": " Yeah, at least not in the next 20, 30 years, I don't think so." }, { "end": 3684.48, "start": 3679.92, "text": " I mean, you can play Alpha Zero in only one step mode, I guess, without the full" }, { "end": 3685.2799999999997, "start": 3684.48, "text": " research, right?" }, { "end": 3690.64, "start": 3685.2799999999997, "text": " And it still does better than it's still to have some level of skill, but it's just not" }, { "end": 3691.64, "start": 3690.64, "text": " that strong, right?" 
}, { "end": 3694.68, "start": 3691.64, "text": " Yeah, yeah, it's very inferior playing." }, { "end": 3700.48, "start": 3694.68, "text": " And in such a case, I think there are too many failure modes that can be exploited." }, { "end": 3705.2799999999997, "start": 3700.48, "text": " So I mean, it begs the question like, why do we even need this type of structure," }, { "end": 3706.84, "start": 3705.28, "text": " this tree search at all?" }, { "end": 3713.32, "start": 3706.84, "text": " I gave a talk a while ago to the data science group in Vancouver about why DQN for Atari" }, { "end": 3719.96, "start": 3713.32, "text": " makes sense and why the Alpha Zero algorithm makes sense for a situation like Go." }, { "end": 3726.1600000000003, "start": 3719.96, "text": " It's because what I was saying, and see if you agree with me, as the reason is that the" }, { "end": 3732.44, "start": 3726.1600000000003, "text": " true value function of Go is so bumpy and hard to predict that whereas in Atari, the value" }, { "end": 3735.32, "start": 3732.44, "text": " function is much smoother and easier to predict." }, { "end": 3739.56, "start": 3735.32, "text": " And so DQN is enough to master that value function." }, { "end": 3744.28, "start": 3739.56, "text": " But on the go side, or maybe on the chest side, the value function changes so much from" }, { "end": 3745.6, "start": 3744.28, "text": " any small move." }, { "end": 3749.36, "start": 3745.6, "text": " So the function is so non-smooth that you have no choice." }, { "end": 3752.88, "start": 3749.36, "text": " Your function proximity is not strong enough to generalize into the future." }, { "end": 3757.08, "start": 3752.88, "text": " So the only choice you have is to simulate into the future and see what the effect is" }, { "end": 3758.7200000000003, "start": 3757.08, "text": " on the value function." }, { "end": 3762.28, "start": 3758.7200000000003, "text": " That's exactly correct." }, { "end": 3768.52, "start": 3762.28, "text": " But if we had function approximators that were more powerful, that could model the" }, { "end": 3771.92, "start": 3768.52, "text": " complexity of chest and go, then we wouldn't need the MCTS." }, { "end": 3776.52, "start": 3771.92, "text": " But the fact is we have the current generation of neural networks doesn't have that property." }, { "end": 3780.84, "start": 3776.52, "text": " So maybe it's a failing of the function approximator we have to make up for with this additional" }, { "end": 3781.84, "start": 3780.84, "text": " mechanism." }, { "end": 3783.1200000000003, "start": 3781.84, "text": " Is that how you see it?" }, { "end": 3790.6400000000003, "start": 3783.1200000000003, "text": " Yeah, I'm still not clear at what point this function approximator would be able to" }, { "end": 3791.64, "start": 3790.64, "text": " solve that." }, { "end": 3796.2, "start": 3791.64, "text": " I don't see that happening any time in the near future, but that's generally true." }, { "end": 3800.92, "start": 3796.2, "text": " So what do you think about explainability in chess and these types of games?" }, { "end": 3805.3599999999997, "start": 3800.92, "text": " Like definitely when talking, you know, so I never got very far at chess." }, { "end": 3808.2799999999997, "start": 3805.3599999999997, "text": " I'm not very good at chess, but I was very interested as a kid." 
}, { "end": 3814.2799999999997, "start": 3808.2799999999997, "text": " And I remember reading books on chess strategy and there would be so many recipes and there's" }, { "end": 3816.6, "start": 3814.2799999999997, "text": " a lot to talk about in chess strategy." }, { "end": 3823.88, "start": 3816.6, "text": " And people use a lot of metaphors and it's not like people use a lot of generalization" }, { "end": 3827.12, "start": 3823.88, "text": " as they're talking about the strategy, even when you're talking about open and close" }, { "end": 3829.7999999999997, "start": 3827.12, "text": " positions and game and this and that." }, { "end": 3832.72, "start": 3829.7999999999997, "text": " There's all these concepts that we throw around." }, { "end": 3837.08, "start": 3832.72, "text": " I wonder what you think about explainability in terms of chess AI." }, { "end": 3841.3199999999997, "start": 3837.08, "text": " Like do you think we could ever get to the point where we could have a discussion with" }, { "end": 3845.88, "start": 3841.3199999999997, "text": " the chess AI about strategy or is that kind of a ridiculous concept?" }, { "end": 3853.44, "start": 3845.88, "text": " I think it can explain why it thinks a particular move is good, but that explanation would still" }, { "end": 3862.28, "start": 3853.44, "text": " be based on the variations that it's calculating and not in any like a natural language that" }, { "end": 3868.28, "start": 3862.28, "text": " it sees that somehow sees this double-ponstructure is good or I don't see that happening in" }, { "end": 3869.96, "start": 3868.28, "text": " time soon." }, { "end": 3874.28, "start": 3869.96, "text": " But yeah, that's something that would be useful to have." }, { "end": 3879.4, "start": 3874.28, "text": " I guess there's all this work now with language models and attaching language models to everything" }, { "end": 3881.6800000000003, "start": 3879.4, "text": " and grounding the language models and everything." }, { "end": 3887, "start": 3881.6800000000003, "text": " And do you think if we plugged in the large language model to alpha zero, we could somehow" }, { "end": 3895.6400000000003, "start": 3887, "text": " get it to explain why side-beating in the latest round?" }, { "end": 3898, "start": 3895.6400000000003, "text": " It's a very tough challenge." }, { "end": 3904.2000000000003, "start": 3898, "text": " I don't think I don't think you have current language models that accurate to do that." }, { "end": 3910.7999999999997, "start": 3904.2, "text": " I mean, it's not a lot of, we need a lot of novel data to train such models on, which" }, { "end": 3914.96, "start": 3910.7999999999997, "text": " are not easily accessible or within a reasonable amount of compute." }, { "end": 3919.3999999999996, "start": 3914.96, "text": " I guess if it read chess books and if it was able to understand the positions and somehow" }, { "end": 3924.68, "start": 3919.3999999999996, "text": " map it to its existing representation, then maybe we could get somewhere." }, { "end": 3929.16, "start": 3924.68, "text": " But it's just hard to imagine, but it seems like what I've been noticing is plugging" }, { "end": 3933.24, "start": 3929.16, "text": " all of them into the different things is working way better than I ever imagined it would." }, { "end": 3935.12, "start": 3933.24, "text": " I'm shocked by how often it's working well." }, { "end": 3936.6, "start": 3935.12, "text": " Are there people getting it to work?" 
}, { "end": 3940.8399999999997, "start": 3936.6, "text": " Yeah, never thought about having an agent reading chess books." }, { "end": 3943.24, "start": 3940.8399999999997, "text": " That's definitely something interesting." }, { "end": 3948.24, "start": 3943.24, "text": " So besides your own work, is there other things happening in RL or other parts of AI lately" }, { "end": 3949.9599999999996, "start": 3948.24, "text": " that you find really interesting side?" }, { "end": 3954.7999999999997, "start": 3949.9599999999996, "text": " Yeah, so these language models are somehow very interesting." }, { "end": 3959.24, "start": 3954.7999999999997, "text": " Yeah, they're already working at a very large scale." }, { "end": 3964.04, "start": 3959.24, "text": " But I like these ideas on scaling laws as well." }, { "end": 3970.8399999999997, "start": 3964.04, "text": " Like what some amount of increased computation or increased network size or increased training" }, { "end": 3972.3599999999997, "start": 3970.8399999999997, "text": " data size can do." }, { "end": 3979.64, "start": 3972.3599999999997, "text": " I think there's this latest paper from Google that shows some emergent behavior like so" }, { "end": 3984.24, "start": 3979.64, "text": " far and language model cannot solve some arithmetic." }, { "end": 3991.9199999999996, "start": 3984.24, "text": " But if you have more compute and more scaling than it's basically the accuracy is increasing" }, { "end": 3992.9199999999996, "start": 3991.9199999999996, "text": " significantly." }, { "end": 3999.7999999999997, "start": 3992.9199999999996, "text": " And so they call this as emergent properties because they did not that particular property" }, { "end": 4005.9599999999996, "start": 3999.7999999999997, "text": " of solving those mathematics did not exist when they had less compute." }, { "end": 4013.72, "start": 4005.9599999999996, "text": " And I want to see how far the increased compute would be useful in reinforcement learning." }, { "end": 4017.04, "start": 4013.72, "text": " Can you consider yourself in the scale as all you need to camp?" }, { "end": 4022.52, "start": 4017.04, "text": " It's not all we need, but I think it's something we definitely need." }, { "end": 4029, "start": 4022.52, "text": " I went to the scaling laws workshop recently and yeah, it's very exciting." }, { "end": 4035.64, "start": 4029, "text": " I think more many people in the camp also actually believe that scale is not all you need," }, { "end": 4039.24, "start": 4035.64, "text": " but it's something definitely that you definitely need." }, { "end": 4044, "start": 4039.24, "text": " So is there anything else that I should have asked you today or that you want to share with" }, { "end": 4045, "start": 4044, "text": " our talk our audience?" }, { "end": 4047, "start": 4045, "text": " Yeah, check out Cogment." }, { "end": 4048, "start": 4047, "text": " It's exciting." }, { "end": 4055, "start": 4048, "text": " And yeah, if you're working on multi-agentarell or human in the loop learning, check out Cogment" }, { "end": 4060.2, "start": 4055, "text": " and I'm happy to chat more about your ongoing projects on these topics." }, { "end": 4061.68, "start": 4060.2, "text": " So is it open source?" }, { "end": 4062.68, "start": 4061.68, "text": " Anyone can download?" }, { "end": 4063.68, "start": 4062.68, "text": " Yeah, exactly." }, { "end": 4066.24, "start": 4063.68, "text": " And it's easy to get started as well, I believe." 
}, { "end": 4069.7999999999997, "start": 4066.24, "text": " And we'll have a link in the show notes, but just for the record, where are people getting" }, { "end": 4070.7999999999997, "start": 4069.7999999999997, "text": " it?" }, { "end": 4072.2, "start": 4070.7999999999997, "text": " Yeah, it's Cogment.ai." }, { "end": 4076.12, "start": 4072.2, "text": " So, Sy Krishna, Gauti Pati, thank you so much for joining us here at Talk our Elle and" }, { "end": 4078.16, "start": 4076.12, "text": " sharing your insights with us today." }, { "end": 4079.16, "start": 4078.16, "text": " Thanks so much for taking the time." }, { "end": 4080.68, "start": 4079.16, "text": " Yeah, thank you for having me." }, { "end": 4110.639999999999, "start": 4080.68, "text": " I think it's my first broadcast." } ]
Aravind Srinivas 2
Aravind Srinivas, Research Scientist at OpenAI, returns to talk Decision Transformer, VideoGPT, choosing problems, and explore vs exploit in research careers
https://media.transistor…583.mp3?src=site
TalkRL podcast is all reinforcement learning, all the time, featuring brilliant guests, both research and applied. Join the conversation on Twitter at TalkRL podcast. I'm your host, Robin Chohan. Aravind Srinivas is a research scientist at OpenAI. He holds a PhD from Berkeley, where he taught the Berkeley unsupervised learning course. Aravind, thanks so much for joining us again. Thank you for having me. Of course, you were our guest back in episode 11 when you were at Berkeley. We talked about CURL and RAD and SUNRISE and your unsupervised learning course. Now I gather that you're at OpenAI. Can you tell us a bit about what your focus area is there and what your role is there? I am a researcher on the Algorithms team at OpenAI. The Algorithms team is the team that works on basic research that leads to new kinds of generative models. For example, DALL-E came out of the work done by folks on the Algorithms team, and in the past GPT-2, Image GPT and so on. My focus specifically is to work on generative models of modalities other than text, exploring new possibilities there, new architectures, new modalities and so on. When we look back at your dissertation from Berkeley, you mentioned the main axes of contributions being self-supervised or unsupervised representation learning and self-attention. Are you continuing along those lines with what you're doing now? Yeah, for sure. It's difficult not to take advantage of self-supervised learning anywhere right now in deep learning. Anything you do right now definitely gets a big boost if you leverage pre-trained self-supervised representations. Whether it be reinforcement learning or generative models or language, it is the case that having access to a pretty good pre-trained representation can make things a lot more convenient and more generalizable for you. Just to give you an example, take the work done by Aditya Ramesh, like DALL-E and DALL-E 2 that came out recently. If you actually look at the architecture, it takes a pre-trained CLIP model and then builds a generative model on top of that, instead of building a generative model from scratch. So what is CLIP? You can think of it as giant self-supervised contrastive learning done on internet-scale data. So it's hard to decouple self-supervised learning from almost anything you do right now. It's pretty much part of everything that anyone is building these days. Similarly, I'm also leveraging that for my current research. Okay, so let's get started talking about Decision Transformer: Reinforcement Learning via Sequence Modeling. That was Chen et al. in 2021. What are the main ideas happening here? It's very simple. Transformers are awesome. GPTs are awesome. They basically changed the landscape of natural language processing. For some reason, reinforcement learning is viewed separately from deep learning, in the sense that the way people think about RL is that there is this whole literature, the Rich Sutton textbook, dynamic programming, approximate DP, value iteration, policy gradients, Q-learning and so on. And the way we actually use deep learning in RL is to just use it as a function approximator to get you the representations on which you perform the classical RL algorithms. That's just been the way DeepMind started things, and everyone's just been doing the same thing after that. Several years of work have been put into just making that stack really work. That's fair.
And it's kind of worked, you know, Atari, AlphaGo and so on. But now that we have a sequence model working really well after the invention of the transformer, it is worth rethinking the whole paradigm itself. At the end of the day, what do you actually want from an RL agent? It should have access to what it's done in the past, by which we mean all the states, actions and rewards it's seen so far. That's its context of the world, its context of the environment. And it should have access to what it's supposed to be doing, whatever task the human provides the agent with. And based on this, it should decide what to do next. Ultimately, if there is a model that caters to this need, that's all you need, right? And for that, it just has to attend to all this information and decide what to do next. In some sense, it has to attend, and that's all you need, right? So, just like the transformer paper says, attention is all you need. We thought, okay, why not just treat RL as a sequence modeling problem, and given sufficient data of what an agent is supposed to be doing, it should be able to do it at test time. So if you have a good enough data set of trajectories, and by trajectories I just mean sequences of states, actions and rewards in an environment, you just give it that, just like how you give natural language strings like sentences or images, and let the transformer behave like a sponge that absorbs all this data and learns to internalize how the world works and how it gets rewarded for taking different sequences of actions and so on. For example, if it's given a trajectory of the game of Pong, the transformer will internalize that taking the action up means the paddle will go up, or that if you're near the ball, the paddle has to go down, things like that. Instead of being told, hey, this sequence of actions will lead you to this value function and so on, you don't need all that. So in some sense, you can think of this as the Software 2.0 moment for RL, right? Let the neural network write the RL algorithm into its own weights, and that's all it is. So you do this: you take a data set, just train a regular GPT-like transformer that learns to predict the future actions given the past states, actions and rewards. At test time, you just ask it to get a high reward and it'll just do it. And if you can do this reliably at a large scale, then we can basically leverage all the large-scale infrastructure that's been built for GPTs and Image GPT or DALL-E and have a similar kind of stack for robotics and control, right? And that, in my opinion at least, is the future. It's much easier to scale something when there is a whole community of people and investment and resources being put into scaling a particular infrastructure. It's very hard to scale something when you and a small community around you are the only set of people doing that. So I think I said this in the previous podcast itself: one motivating factor for doing a lot of the research I did in RL is to make RL look more like deep learning. And currently deep learning is just mostly transformers. So it's good that RL also just looks like a transformer. How did this idea come about? The way this project was conceived was that I was actually interviewing at OpenAI for a full-time position and I had some discussions with some people there. Just generally you do research chats, you know, to figure out alignment, and I talked to Alec Radford and I asked him what is really the way in which he picks problems.
So Alec Radford is the first author on GPT-1 through CLIP; he's done incredible work and is considered to be the most successful individual contributor at OpenAI. I was very curious how he picks his problems, and one intuition that he gave me was that you have to think about all these large generative models as a distillation of human activity on the internet baked into a large model. Let me break that down. If you look at the way GPT-2 was built, and GPT-3 for that matter, you leverage the fact that humans have curated a lot of content and created a lot of content on the internet in the form of text, right? You take news articles or books or Wikipedia pages; it's a lot of human work that was done and put on the internet. And you can take advantage of human ratings, like the karma for a post, and use that as some kind of implicit PageRank, and you only take the content that has sufficient karma, so that you take good content. You're basically leveraging human activity: rating pages, creating content, describing things in the form of Wikipedia articles written in a formal way, writing news articles, or conversational ability on Reddit. You're taking all these things that people do for free on the internet, the footprint they leave on the internet, and putting it into a generative model like GPT, and then at test time you can ask the GPT to do things and it becomes economically and commercially valuable, right? In some sense that gave me an insight: oh, actually you can think of these language models as agents. Even though people don't think of language models as agents because there's no reinforcement learning in them, technically there is, right? If you consider every single word as an action taken by a human agent, then the transformer is basically cloning it; it's behavior cloning activity on the internet. So that is the phrase Alec uses when describing research: it's behavior cloning human work that already exists on the internet into a large model, and then the large model becomes like an intelligent agent at test time, and the more diverse data you throw at the model, the more likely it will do amazing things. So that gave me a new insight: for example, language models are agents that can do creative writing. Copilot, GitHub Copilot, is basically a writing assistant, but you can think of it as an agent that just learned to code, right? By behavior cloning human code on GitHub. This is the same thing that Autopilot does too: Autopilot is basically cloning human driving, and DALL-E is cloning artists. So at the end of the day, the end game for creating intelligent agents, like robots or any RL agent, is to clone behavior. The only thing that you need to go beyond just cloning is to also understand what it means to solve a task. You want to know what it means to complete a task and what it means to not complete a task. You want the notion of a reward, rather than just saying, oh yeah, humans did this thing, I'll also do the same thing. So the inspiration was: okay, the current generative models are great at cloning, how do you make RL more like just cloning? So there are two parts to it then: one is the transformer, and the other is supervised learning. And then, how do you turn RL into supervised learning? One way to turn RL into supervised learning was the upside-down reinforcement learning formulation. I think that was proposed by Schmidhuber.
So I was already aware of that paper, but it was not general enough. It just took the goal as an embedding, took the current state, and tried to decide the action conditioned on the current state and the goal, so it basically formulated things like the goal-conditioned reinforcement learning setup. That would still have the same issues that come with scaling these Markovian models, so you do want something that's more general. I just combined these two insights together to get the idea. I started working on it myself with, I think, Lili Chen, and then later Igor Mordatch, the other senior author on the paper, said he was looking into BERT models for RL, as some kind of pre-trained representation that can be leveraged for any new task, in the sense that you pre-train a large BERT and then you fine-tune it on any new task you have. He was working on that with another undergrad named Kevin Lu. But that project wasn't really panning out as much, because in general nobody has really shown very successful behavior pre-training in RL. I think people have shown good results on pre-training the vision encoder and then showing that it can accelerate learning on a new task from pixels, but no one's really shown something where you pre-train an action decoder or something, and then you throw a new task at it and it just works. But the stack they were building was very useful for us too. It's just that with the same transformers, the masking is left-to-right versus this random masking, things like that. So we just decided to combine efforts, and that led to that paper. That's super interesting. Okay, I'm really glad I asked you this. You said that people weren't able to make these unsupervised models work with new tasks. Is that right? And what you were doing with decision transformer is not a new task. Is that what the difference is, or why was unsupervised learning not working before and you made it work? Yeah, so to clarify, I'm not saying decision transformer made unsupervised learning work for RL. That's not true. What I mean to say is that something like BERT is very hard to get to work for reinforcement learning, where you pre-train a very large model and then you fine-tune it on any new task. It's not clear exactly what you should pre-train. And fine-tuning is dark magic by itself. You have to figure out so many hyperparameters, like the learning rate of the Adam optimizer, how you decay the learning rate, and what the batch size is. If you use a large batch size, you might overfit quickly on a task where you have very few examples. This is a problem in NLP itself, how to efficiently fine-tune; you don't even need to ask about reinforcement learning, where it's going to be even worse, given it's hard to even train from scratch. On the other hand, something like GPTs are cool. You don't have to fine-tune. If you have a great model, zero-shot or few-shot will work at test time. So that is the advantage of training these large language models. You can ask: if you had 100 GPUs, 1000 GPUs, would you go for training a GPT, or would you go for training a BERT or a T5 model? My answer is you would go for training a GPT because of the flexibility it offers you at test time. If you wanted to fine-tune, you could still fine-tune from it. But there are so many other capabilities, like zero-shot completions, few-shot completions, prompting, prompt engineering, the natural sequentiality it offers you. So that made me think it would be an even more drastic result if you just show that a pure GPT works,
rather than saying, hey, here are some checkpoints, if you want to do your RL research you can take them and use them. That, in my opinion, wouldn't have been as impactful, because at the end of the day, what are these RL tasks? It's just a bunch of simulation benchmarks that people created for quick results. The impact of the paper is more in whether it forces people to rethink the paradigm itself, rather than serving as a checkpoint for other students to write more papers. I felt like having a pure language model stack would be better for that. So I've got to admit, I did not appreciate the magnitude of the contribution of decision transformers when the paper came out. I thought, oh, that's interesting. But the way you're talking about it now, as a very different paradigm for RL, makes it sound like a much bigger deal. I'm glad I got to hear that directly from you. Can we talk about the experiments to make this a little more concrete? Yeah, for sure. So far I've been thinking about this as supervised learning, and I don't understand how it could ever, why it could ever, do better than the training data. Or is that even important anymore? Yeah, I think it's pretty important to be able to do better than the training data. But for what it's worth, I want to clarify that that is a consequence of conditioning on the reward, and not because we're using a transformer or anything. There is a bit of a thread where somebody points out, and I actually pointed out myself, that this was a trick used in training AlphaStar. Oriol Vinyals and his team were building AlphaStar, and the way AlphaStar was trained, it actually conditions a lot on the build order. There was prior work doing similar things, but in addition to that, it conditions on the opponent's skill level and how many units you've got to build based on that. And there's a pointer network that attends to all these entities. So this was there so that at test time they could adapt to the opponent, by predicting its skill level and using that information as conditioning, for the agent to be more adaptive and flexible to whatever opponent it's playing with. I have to say, these were also ideas that inspired me when we built the architecture. It was more subconscious; I remembered this, but it was from like two years ago, and I actually forgot to credit them in the paper. Yeah, you can extrapolate beyond the training data, if you train the agent to do that in the first place, right? Only then can the agent extrapolate. Also, what does extrapolate mean? You want to tell the agent what task it's even doing in the first place, so that at test time you can give it new tasks and it can potentially do them. In some sense, if the agent has understood what it means to get a particular score, then it can potentially get a score that it's never seen in the training data. And that score can be bigger than the maximum score in the training data. I just mean this in a funny way, but it's slightly conscious, you know? If the agent has understood what it even means to achieve a certain score, whether it be good or bad, you can ask it to get a higher score than whatever scores it's seen in the training data. I'm not saying this works reliably, that, oh yeah, we've solved an incredibly amazing problem where, given a dataset of human trajectories, you can always ensure a decision transformer will get a score better than the best human in the training data. No, it only works on one or two benchmarks, I think.
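To make the return-conditioning idea concrete, here is a minimal sketch, in PyTorch, of the kind of model being described: trajectories are flattened into (return-to-go, state, action) tokens, a causal transformer is trained to predict each action from the tokens before it, and at test time you would simply feed in a high target return. The module layout, sizes and helper names are illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn as nn

class TinyDecisionTransformer(nn.Module):
    """Minimal sketch (not the released code): embed (return-to-go, state, action)
    triples, run a causal transformer, and predict each action from its state token."""
    def __init__(self, state_dim, act_dim, d_model=128, n_layer=2, n_head=4, max_len=60):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)          # return-to-go token
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(act_dim, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))  # learned positions
        layer = nn.TransformerEncoderLayer(d_model, n_head, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layer)
        self.head = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim)
        B, T, _ = states.shape
        # interleave tokens per timestep as [rtg_t, state_t, action_t]
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states), self.embed_action(actions)],
            dim=2,
        ).reshape(B, 3 * T, -1) + self.pos[:, : 3 * T]
        # causal mask so each position only attends to earlier tokens
        mask = torch.triu(torch.full((3 * T, 3 * T), float("-inf")), diagonal=1)
        h = self.backbone(tokens, mask=mask)
        return self.head(h[:, 1::3])   # action predicted from each state token

# Supervised training on offline trajectories, conditioned on return-to-go.
model = TinyDecisionTransformer(state_dim=4, act_dim=2)
rtg = torch.rand(8, 20, 1)            # returns-to-go, e.g. normalized to [0, 1]
states, actions = torch.randn(8, 20, 4), torch.randn(8, 20, 2)
loss = nn.functional.mse_loss(model(rtg, states, actions), actions)  # CE for discrete actions
loss.backward()
```

At rollout time a sketch like this would be used autoregressively: seed the return-to-go channel with a high target return, append each observed state, and subtract each received reward from the target as you go.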
There's still a lot more work to do there, but it's exciting. The capabilities are pretty cool. And so this is even without any mechanism like the argmax in Q-learning, which is how the algorithm tries to keep maximizing its return. Yeah, exactly. That's pretty amazing. Yeah, you just say, to get a score of one, you do this sequence of actions; to get a score of five, you do this sequence of actions; to get a score of 100, you do this sequence of actions. At test time, you just say, get a score of 1000. Maybe it does something more or less similar to what it's seen for getting a score of 100, but potentially slightly better than that, because it has implicitly learned what it means to do better for 100 relative to five, so it might take that behavior and extrapolate it to 1000 relative to 100. And that's why it's a paradigm shift: you don't have to do all this dynamic programming or policy gradients. You just let the deep neural network figure out what it means to optimize long-term reward. So now I'm looking at a chart in your paper, figure 3, showing different results. And what it shows is that in some results the decision transformer gets about the same as TD learning, in some cases it does better, and in some cases it does a lot better, which is surprising to me, because when I looked at this I thought, there's no argmax, how is it doing this? I was a little surprised that TD learning is represented by CQL. Would there be other algorithms that might do better to represent TD learning here? Yeah, there might be. There might be. And I think that's an active area of research, right? Many people are working on that. But to me, those are not interesting at all. I would say you can spend another five generations of PhD students, or you can spend 100,000 generations of Nvidia GPUs. Transformers are going to be the Nvidia GPU route, while coming up with more and more fancy Q-learning algorithms for offline RL is going to be the PhD route. You can decide which bet to take yourself. Actually, there was a funny comment right after the paper came out. Somebody posted the paper on Reddit, and I saw a funny comment where somebody said, you know, you should just go buy more Nvidia stock. And yeah, actually, if you did that, you would have become richer for sure. But just saying, more and more Q-learning algorithms that people come up with can potentially beat decision transformer; I think some people have published papers beating our scores. But the point is, we didn't even spend time coming up with a new algorithm or hacking the transformer to work really well or anything. In fact, if you look at Kevin's code release, it just imports Hugging Face transformers and runs it on the trajectory data. It's that simple. It also, in my opinion, reduces the barrier to entry to RL. I'm talking about it from my own perspective, as well as what I've heard from many people: oh, RL is so hard, you've got to actually take a bunch of classes, read David Silver's lectures or, you know, the Sutton and Barto book, which is super hard if you do all the exercises there. And by the time you've done that, you've lost your energy, it's been two or three months, and you've hardly made any progress.
On the other hand, if you work on computer vision or NLP, you just import Hugging Face transformers or, you know, PyTorch image models, immediately take a dataset, label it, run a model, and you feel good, you feel like you're making progress, right? The dopamine is there. The iteration speed is a lot faster. How do you do that for reinforcement learning? I think inventing even more complicated Q-learning algorithms doesn't seem like something that actually caters to the need of bringing more people to the field and making faster progress, right? It actually seems like the reverse direction of that. On the other hand, doing something like a decision transformer, which just makes RL look more and more like NLP or computer vision, is likely to make things easier for people. The number of RL algorithms is exploding exponentially, and there are variants and sub-variants of everything. Should we expect that trend to continue, or do we expect that ultimately we will converge on a family or a set of families? Are you saying that one future possibility is that we converge back to something simpler, like a decision transformer? I would go even further than that. The algorithm is in the weights of the transformer. You cannot write the algorithm. People think they can actually write the algorithm themselves. That's not possible, and they should learn to be more humble. I think people believe they can write the ultimate algorithm that will solve AI on a whiteboard, write a paper on it, and that will be the answer. In fact, David Silver has a very interesting point about this. He was very unhappy after AlphaGo was done, even though it was such a legendary moment, because he was somehow not happy that it bootstrapped AlphaGo from human data, the human Go games, and only after that did it do self-play. AlphaZero was about removing every component of hard-coding in that. It figured out for itself what it means to win and what it means to be better. Of course, that is something you can only do in a zero-sum, perfect-information game. So what I would say is, rather than coming up with more clever online or offline RL algorithms, just trying to make a decision transformer really work at scale, that is the route. And also we need the generalization abilities of language, code, things like that. If you want to build an AGI, we need something that uses all possible strings in one model. Think about a future GPT N+1 that's trained on trajectories, that's trained on internet data, that's trained on videos. That's likely to be more of a solution to the RL problem than beating the score on MuJoCo with CQL++. If we look at the two ideas of transformers and RL, there must be quite a few different ways to combine these. Obviously, many seem to use the transformer as a function approximator and then use more conventional algorithms. Is that also a reasonable approach, do you think, or is it really the self-supervised mechanism that's the important bit here? I think it's reasonable. A lot of people are writing papers on taking out the CNNs in vision-based robotics and using a vision transformer instead. Yeah, that sounds interesting. That's definitely going to give short-term progress, because any time you replace the backbone architecture with a different stack, it's likely to proliferate across anyone using any CNN anywhere. That's it. The paradigm shift is more important in the long run.
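As a contrast with the sequence-modeling route, here is a minimal sketch of the "transformer as function approximator" idea just mentioned: a small patch-based transformer encoder standing in for the CNN backbone, with the rest of a conventional agent (a Q-head here) left untouched. The patch size, pooling choice and head are illustrative assumptions rather than any particular paper's architecture.

```python
import torch
import torch.nn as nn

class PatchTransformerEncoder(nn.Module):
    """Illustrative drop-in for a CNN backbone: patchify the frame, run a small
    transformer encoder, and pool into one state feature for a conventional agent."""
    def __init__(self, img_size=84, patch=12, d_model=128, n_layer=2, n_head=4):
        super().__init__()
        self.patchify = nn.Conv2d(3, d_model, kernel_size=patch, stride=patch)
        n_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_patches, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_head, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layer)

    def forward(self, frames):                       # frames: (B, 3, 84, 84)
        x = self.patchify(frames)                    # (B, d_model, 7, 7)
        x = x.flatten(2).transpose(1, 2) + self.pos  # (B, 49, d_model)
        return self.encoder(x).mean(dim=1)           # pooled state feature

# The surrounding RL machinery is unchanged, e.g. a DQN-style head on top.
encoder = PatchTransformerEncoder()
q_head = nn.Linear(128, 6)                           # 6 discrete actions, for example
q_values = q_head(encoder(torch.randn(8, 3, 84, 84)))
```

The classical algorithm (Q-learning, policy gradients, and so on) stays as it is in this route; only the representation changes, which is exactly the distinction being drawn here.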
You tell me, is it fun to not understand Q-learning well, or all these double Q-learning variants? There's a host of people that do double Q-learning. Do you even remember it? Where do you put the max? There are two maxes, an outer max and an inner max. Soft Q-learning has an exponential, and approximations for the denominator. It's not even fun to spend one year or two years just reading all these things and going nowhere in terms of actual performance on the task that you care about. On the other hand, you just say, hey, I'm just going to import Hugging Face, I'm going to create a data loader from trajectories, I'm just going to scale the rewards and normalize them so that they're in the zero-to-one range or whatever, percentiles. I'm just going to treat the problem like a Kaggle contest and I'm just going to leverage a transformer. I won't even need to think about the optimization. All that part has been figured out for language models, so more or less it's going to work for any string. In the future, you might be able to use a diffusion model much more easily, right? You make progress in a few hours. You get something, you get an agent that actually works, and you know how to debug it. Debugging is just like debugging supervised learning. You tell me which of these is potentially going to be used by more people and have more chances of success in the long run. The diversity and range of all these complicated mechanisms people have come up with is absolutely incredible and kind of very strange. You don't see that in other parts of machine learning to the same extent. And that was kind of one of my motivations for doing this podcast: when I started to see this endless stream of these things coming out, these endless variations of algorithms, I felt a little discouraged. I thought, how am I going to keep on top of this? The only way I could think to do it was to talk to people who really know, and be able to understand which of these is important. But the trend doesn't seem to stop, the continuing diversity. That is not going to stop. And I don't think it should stop, because people have the freedom to do any kind of research, right? I would say that what you're feeling is more the norm than the exception. And just like the saying goes, right, in the long term there's no misalignment between economic value and customer value. So the research you do should serve everyone, and not just a small community of really well-read PhD students. Because in the end, the real value is only created when people take your algorithms and build robots or customer service agents, things like that. Those are not ideally going to be done by PhD students, right? Very good software engineers who can quickly bootstrap existing ML repos will likely do that, and it's going to be easier for them because they already know GPTs and stuff. So I don't think what you're saying is actually the exception. It's more the norm that people are kind of tired of just seeing countless new variants of Q-learning algorithms, promising one or two percent improvement over existing ones over four or five random seeds. It's kind of boring to see such papers. So can we talk about where something like decision transformer is most relevant, and maybe what are the limitations? Is it really relegated to tasks where we're in the huge data regime? Ideally, yes. Ideally that should be the regime where it really shines.
That's not going to be a problem, I think. In any industry where you're building an agent, you ideally want to leverage a transformer when you have a lot of data, right? That said, you might not need reward data as much as you think; you might be able to leverage a really good pre-trained model, like a language model that's just trained at the trajectory level without reward information, and then fine-tune it on a small set of trajectories that actually have reward information. So just like how we saw the data efficiency problems addressed in regular language or code or computer vision, the same set of ideas can apply here. In terms of shortcomings, I still think it's not there yet in terms of really beating the best human-engineered algorithms on these benchmarks. That was not our point either. The point is, without much tuning it's already pretty good, but it would be nice if it were made into a very reliable algorithm that works out of the box, like scikit-learn logistic regression, you know, you just take it and it just works. It would be nice to make it like that. So even if you don't get amazing performance, you get some reasonably good model, to the extent that if it doesn't work, it's more likely an issue in your data. If you can get it to that stage, then that would be really cool. Combining it with language or code, where you can ask the agent to iteratively change the code based on the feedback that you give, that would be really awesome. Iterative debugging, or getting a certain score on a Kaggle contest, those kinds of things would be super awesome to see. And robotics: you basically train an agent, given a goal, to complete the actions towards that goal, and you keep telling the robot, hey, you know, you already did this, how about actually getting closer to the object? You give feedback in between, and it can take what it's done previously and your current feedback into account and try to change its trajectory, stuff like that. Think about the rewards themselves being replaced by language feedback; that would be super cool to have. So there are so many more variations of this model that people haven't really explored yet, and I'm hoping they explore them. It's also a matter of the amount of compute you have access to; being able to try these ideas needs good compute or good foundation models to build with, so that needs a little bit of time to change too. Let's say this approach really does take over the tasks where there are large data sets. Then maybe there's still room for other approaches for small data problems, like we might face in, say, medicine. Yeah, yeah, potentially, though I do hope there's a large pre-trained model for medicine too. I think that would be awesome. If you can leverage insights across different medical problems and bake that into one model, that might be a better model than just training a small model on a small data set. We haven't reached that point yet, but it's good to be optimistic about it. So Yann LeCun, the storied researcher from Facebook AI Research, describes a cake with icing and a cherry, with the cake as unsupervised learning, the icing as supervised learning, and the cherry as RL. How do you relate that metaphor to what you're doing here with the decision transformer? Yeah, that's a great question. So the cake is in some sense the foundation to perceive and understand the world, right.
Decision transformer is the cherry. It doesn't look at the cake part. So for example, take the Atari experiments in decision transformer. It processes pixels by taking the frame and getting a latent embedding from a CNN, and then the transformer runs on top of those latents. I think the cake handles the part of what the CNN is that's used to encode the latent. The cherry is the part where you're figuring out, okay, once you've processed your sensory stream, how do you actually decide actions at a motor level. If you look at Yann LeCun's talks, there is a part where he always says we haven't really figured out how to do action hierarchies, like learning motor primitives and action hierarchies. That is the part decision transformer gets at. As for the actual cake itself, we need to build really good representation learning and generative models of high-dimensional sensory data. The work I did on CPC addresses that, and currently VideoGPT is addressing that. Those are more in that space. Decision transformer doesn't have much to do with the cake itself; it has more to do with the cherry. Another way of looking at it is that you kind of transcended the cake and its three different components, because the decision transformer really combines all these things in a way, maybe. Yeah. Hopefully you can build a really good video model and then use that as a foundation model to fine-tune by adding rewards, and so it kind of builds the entire cake in one large giant transformer. I think that would be awesome. But look, I'm obviously saying a lot more than what the paper does; the paper itself hasn't shown any result at that level. So last I checked, I saw 91 citations, I think, on Google Scholar, so people are building on this. Any comments on things that people have already built on this? It sounds like you definitely have ideas for the future, but in terms of what's been done so far, or what's in progress, any comments on that? Yeah, I saw some good papers, but most of the citations are basically like, oh yeah, offline RL has been applied with a transformer, or transformers are awesome and they're getting used in RL, or some people just use it as a baseline in their new offline RL algorithm. So I'm not so happy with the citations themselves. I mean, for sure, getting 100 citations in less than a year is awesome, but it's not like genuinely building a better model has happened yet. My feeling is people should just try to make a much larger model with a lot more trajectories and train it. That would be the real deal. It's boring, and very likely not going to get a NeurIPS paper or something, unless you spend a lot of time figuring out new capabilities that come out of such models, but that is more likely the correct thing to do. There is some interesting work done by people in Pieter Abbeel's lab, I think, some work that tried to do a decision-transformer kind of thing for robots, like figuring out what set of primitives to take and leveraging pre-trained APIs like Copilot or GPT-3. So figuring out how to integrate language into decision transformer would be super cool. But there's no particular work that I'm able to highlight and say, oh yeah, this is an incredibly awesome follow-up.
Yeah, citations are sometimes misleading, right? You might get a lot of citations, but it's not like people are actually really building new variants of your model. But there was one work from Shane Gu, I think, from Google Brain, that pre-trained on Wikipedia and then fine-tuned the decision transformer and showed some gains; that was kind of interesting. So yeah, people are getting the right ideas for sure. So we've talked about the difference between the classical RL paradigm and the supervised learning paradigm, and the decision transformer kind of combines those. What about the axis of model-free and model-based? It seems like there's sort of an implicit model in here. Can you talk about that? Yeah. The definitions of model-based and model-free are hard; different people have different ways to think about it. To me, if you say model-based is anything that predicts the future state given the previous states and actions, then decision transformer is not a model-based method. But if you say model-based is anything that just models the future, and what part of the future you choose to model is up to you, then decision transformer is a model-based RL algorithm. We intentionally chose not to predict the future pixels or future states, because in some tasks it's likely to help you and in some tasks it's not, and then you would have to hard-code the percentage of the loss that you want allocated to predicting the future state, and that will change for different environments. So that makes it not as clean as it is now. But if you do add, say, another loss for not just predicting the future actions but also the future states and future rewards, then it just becomes the ultimate world model, right? That's just an exact GPT, and that is model-based. So you can easily change DT to be model-based by just removing the masking of the state losses. It might need the right kind of latent space to predict the future, though. It's not ideal for you to just predict the future pixels in Atari. Just think about it in terms of the loss function: you have one single dimension for predicting the future action, and you have 84 by 84 dimensions for predicting one future state. So most of the compute of the model would be allocated to those roughly 7,000 pixels of the frame and just one dimension for the action, and what is the point of having a model that fills in the background of the future state but takes the wrong action? That's not very useful. So if you could do this with a good latent abstraction, where there's a good latent space for the state and you predict the latent, then that's pretty cool, and that's what ideas like DALL-E 2 are getting at: you learn a prior on the latent space and then you decode it with a diffusion sampler. I think those ideas should be investigated further. To move on to VideoGPT now: VideoGPT is what the name says, it's a GPT for video. Basically, how do you learn a generative model for video? You can't just throw a straight GPT transformer at it at the pixel level; you could, but it needs a lot more compute and memory.
You learn a latent abstraction of the video that downsamples the video into latent vectors, then you learn a GPT at the latent level, and then you upsample those latents back into pixels to actually create a video. When you train such a model on a large enough video data set, it becomes a pretty good world model for you: given the initial frame, it can predict the future and so on. So how do you evaluate a video model? There are metrics; in general, evaluating generative models is really hard. There are two ways to evaluate VideoGPT-like models. One is just the likelihood it gets in the latent space, because you're still training a GPT on the latent tokens, so you could measure the bits per dimension, the log likelihood in the latent space. But that's not a very useful metric, because these bits are not perceptible bits. So one thing you could do is measure something called Fréchet Video Distance, just like how people measure the Fréchet Inception Distance for images, where they take a lot of samples from the model, put them into the latent space of a pre-trained Inception architecture, which is just an image classifier, take a bunch of samples from the actual data distribution too, and then compare the statistics between these two batches in terms of first and second order moments and come up with a metric. So you could do the same thing for video: you take samples from the model, you take samples from the actual video data set, take a pre-trained video classifier, like a Kinetics classifier, take the latent embeddings, and just compute the statistics in that space. It is not the best metric, in the sense that optimizing for FVD is not the same as optimizing for human judgments of what would count as passing the Turing test, like, oh yeah, is this video from YouTube or from a generative model. It's not as good as something with high correlation with that. But we're not even at a point in video generation where a human would find it hard to say whether a video is from an AI or from a human; until we get there, I think optimizing for FVD sounds fine. So I guess with image generation, GAN papers and such, it's very convenient to just show some images; we're so used to evaluating faces that any discrepancy in faces is very easy for our eyes to detect. Well, it's getting harder and harder, right? People say it's hard for them to know if an image is from DALL-E or from a human artist these days. It is easy to find out, though: if you actually look at the upsampling artifacts in DALL-E, you can zoom in and figure out, oh yeah, this doesn't seem like something a human would have done. But GAN papers also use FIDs to compare; if you look at StyleGAN, they always report the FIDs that they get on the given data set. It's a good metric. So yeah, video papers can use FVD. What do you predict in terms of how far away we are from VideoGPT-type systems joining the ranks of the other types of media, in terms of really high-quality video being generated? Yeah, it's likely to happen. I think DALL-E is pretty awesome, and nothing really stops people from building something like that for video; it just requires a lot of compute and effort and time.
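For readers who want the Fréchet-distance computation described above in concrete form, here is a rough sketch that fits Gaussians to two batches of embeddings and compares their first- and second-order statistics; in practice the embeddings would come from a pretrained video classifier, such as one trained on Kinetics, which is assumed here rather than implemented.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(real_feats, fake_feats):
    """Fréchet distance between Gaussians fit to two (N, D) batches of embeddings,
    e.g. latent features of real vs. generated videos from a pretrained classifier."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):      # numerical noise can introduce tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))

# Toy usage with random stand-ins for the classifier embeddings.
rng = np.random.default_rng(0)
real = rng.normal(size=(256, 64))
fake = rng.normal(loc=0.1, size=(256, 64))
print(frechet_distance(real, fake))
```

The same formula underlies FID for images; the only difference for FVD is which pretrained network supplies the embeddings.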
Yeah, I mean, VideoGPT is not that good in terms of quality; it's more like an idea paper with some reasonable results. But to actually have a video DALL-E level model, a video that you can actually upload to YouTube and people might not be able to say it is completely AI generated, I think that's very likely to happen. If you were to take a bet that it's not going to happen in the next five years, you're likely to lose that bet. But whether it's going to happen one year or two years from now, or something like that, is not very clear. The amount of information in a video versus these other mediums, like text, for example: a page of text is such a small amount of information compared to even a few seconds of video, that I wonder where this video generation problem lies compared to the current levels of compute and GPUs that we have available; do we have the capacity to really do this at high quality? So in terms of bits, you do get more bits from a video, but in terms of useful bits, it's probably not an extremely high order of magnitude more, because there's a lot of redundancy in a video, right? Just take a frame itself: you don't need to ingest 1K by 1K frames into a model to learn a representation of it; probably 64 by 64 is fine, most of the information can be preserved, so that a latent embedding on top of that can still understand what exists in the frame. In terms of temporal information, you don't need to ingest it at the same frames per second that's needed for watching a high-quality video on YouTube; 16 FPS or 24 FPS is probably not needed for an AI to train a model on top of it. It is needed for the generative quality when you sample from such a model, though. Yeah, you do need to produce something like a 20 FPS output; for example, if you were to produce a 10-second video, you don't just want to produce 10 frames, you want to produce something like 200 or 240 frames, whatever FPS you're happy viewing on YouTube. And similarly, you don't want to produce 64 by 64 frames, you want to produce something like 720 or 1080. So that already makes your output dimension incredibly large, but that's not what stops people from actually solving the video generation problem, because truly modeling the useful bits, that's the hard part, and if you actually have access to the useful bits, then the same GPTs that work on text will work on video too.
The difficult part is figuring out what the useful bits are. You can't hard-code it. You can't say, hey, I'll just take the motion vectors and reference frames, just like how MPEG codecs are coded. You could potentially do that, right? You just take the source code of a great video compression algorithm, take whatever is communicated in terms of the codec, learn a generative model on that, and then use the same decoder as the codec, so that you're computationally much better off. But a generative model on those bits might actually be pretty hard to train, so it's not clear, and it might actually be limiting, because these codecs are designed just to preserve as much information as possible and not really to learn anything semantic. But progress is being made, right? Look at DALL-E: it's awesome, it's learning a generative model at a latent level. VQ-VAE itself is like that; VideoGPT uses a VQ-VAE, and a VQ-VAE learns a downsampled representation of your input and then you learn a generative model on that. So we are already seeing signs of life there in terms of how to do it, but once we have learned the really correct representation to put a generative model on top of, that's going to be the moment when video generation really shines. In VideoGPT we use the VQ-VAE, and that was the same stack used for Jukebox, and other work like Aaron van den Oord's VQ-VAE-2 that produced those 1024 by 1024 images. It's been shown to work for images; it's not been shown to work reliably for videos yet, but I would imagine that someone will figure this out. So a VQ-VAE is a discrete VAE, is that right? For generating samples there's a codebook, so there's some kind of limited capacity that it has. Yeah, it is lossy. You can think of VQ-VAEs as a neural-network-learned version of JPEG or MPEG. It's basically doing the same thing: it's trying to pack the bits into the latent space and then go back from those bits to the actual input, minimizing the reconstruction loss. In a JPEG, you hard-code these components: the way you downsample is you take 8 by 8 blocks of the image, run a discrete cosine transform on top of that, quantize those coefficients, and then run an inverse DCT and chunk everything together again, and you get back the image. And how many bits you quantize to in your DCT decides the lossiness of your JPEG, and that also gives you the compression ratio. Just like that, in a VQ-VAE it's how many bits you're using in the codebook, do you use 10 bits or 8 bits, and that decides how lossy it is, and based on that you get pretty good reconstructions. So it is doing the thing that JPEGs are doing, but in a more neural way, and that's cool. But you also want something that's even more semantic than a VQ-VAE. Ideas are percolating there: DALL-E 2 is basically taking the CLIP vector and then building a generative model on top of that, and CLIP is even more semantic than a VQ-VAE. But then what is the CLIP of video? That's unclear. These are difficult questions right now; nobody knows the answers to these yet. And you're right, you do need a lot of compute to make video work, and I honestly don't know if somebody will make it work very fast, but I think the likelihood that nobody makes it
work in five years from now is very low. It seems to me there's some kind of tension, like you said, with the semantics. I guess in RL there's the bullet-pixel problem, where there could be just one pixel moving across the screen that's such a small detail of the image, but it ends up changing the plot of the game: the character dies. So when you're doing this downsampling and upsampling, how do we know we're not missing some detail that's critical to the semantics of the video, like someone has a tiny little pin that they're popping the balloons with, and maybe that gets lost in the downsampling? Is there something causal needed to make this actually meaningful? Maybe some notion of temporal consistency and motion is necessary, but I don't know, right? Somebody could make it work with just a single latent vector. It's difficult to say it is not going to work unless, sorry, unless you have a theoretical proof that it cannot work; it's difficult to say, because the way things work in deep learning is existence proofs, basically: oh yeah, I made it work here. So far it doesn't exist yet, but proving that it cannot exist is much harder. It would be nice to have more structured latents for sure, that have some notion of optical flow, like a temporal latent and then an image latent in the form of an initial frame, and then a generative model that just decides the initial canvas to paint and the motions to encode, and the decoder just figures things out. That would be super cool to have. So in terms of moving images, we've also seen some awesome results in the NeRF line of work. Can you say anything about the difference or similarity between what's happening here and what's happening there? Yeah, NeRF is trying to make less use of neural decoders and more use of volumetric rendering algorithms, so it's in some sense offloading some compute from a neural decoder to something that is actually physically consistent in terms of viewing the same scene from different views geometrically. Whether that'll be part of future generative models is not clear yet, but it would be nice to have something like that, because it definitely requires less model capacity than a pure VideoGPT-like model, and it might actually be more consistent physically, which makes it more reliable in terms of building a technology around it. So it's pretty exciting; how to actually combine large representation learning models with something like NeRF is still something people are exploring. And moving on to some more general issues: as a research scientist, can you talk about how you plan your research and your roadmap, what to pursue, the explore-versus-exploit trade-off in planning your research? So I'm currently really in exploitation mode. There's stuff that already works, and there's a certain OpenAI formula for doing things, and I'm just trying to do it like that. Exploration is more risky, of course, right? You might end up with something amazing, but the probability of that is low, even though the value is much higher. Exploitation is: the value might be lower, but the probability of success is high. Ideally, you want to be at the point of Pareto optimality between probability of success and value.
Yeah, it's kind of like you want to have your cake and eat it too. It's hard. It's not like I've succeeded multiple times and can tell you exactly how to do it; I don't know, I'm trying to figure it out myself. But if you were to ask me whether I'm exploring or exploiting right now, I'm definitely exploiting. Is there anything else you want to share with our audience today? AI is really exciting, you know. I'm sure it's overwhelming to see people making so much progress every day across different companies and research labs. If you're more junior, I would say don't get discouraged; just try to work on the fundamentals, pick something you really care about, dedicate 80 hours a week to it, and keep making progress. And if you're interested in doing a PhD or something like that, it's worth really thinking hard about whether you should be working on deep learning, or even if you do work on deep learning, whether doing it in the same manner as OpenAI or Google makes sense, because it's getting more and more clear that you need large models to get amazing generalization results at this time, and it's hard for you to have an impact outside of what people are doing already. So it's good to rethink the game: if you're in a place where you don't have as much compute, you need to figure out a new formula, or look at some places that people aren't already looking into. Aravind Srinivas, thank you so much for sharing your time and your insight with us here at TalkRL. Thank you, Robin.
[ { "end": 11.46, "start": 0, "text": " TalkRL podcast is all reinforced in learning all the time, featuring brilliant guests" }, { "end": 13.540000000000001, "start": 11.46, "text": " both research and applied." }, { "end": 16.9, "start": 13.540000000000001, "text": " Join the conversation on Twitter at TalkRL podcast." }, { "end": 23.04, "start": 16.9, "text": " I'm your host, Robin Chohan." }, { "end": 25.86, "start": 23.04, "text": " Arvin Shrinivas is a research scientist at OpenAI." }, { "end": 30.4, "start": 25.86, "text": " He holds a PhD from Berkeley, where he taught the Berkeley unsupervised learning course." }, { "end": 32.4, "start": 30.4, "text": " Arvin, thanks so much for joining us again." }, { "end": 33.4, "start": 32.4, "text": " Thank you for having me." }, { "end": 38.379999999999995, "start": 33.4, "text": " Of course, you were our guest back in episode 11 when you were back at Berkeley." }, { "end": 43.46, "start": 38.379999999999995, "text": " We talked about curl and rad and sunrise and your unsupervised learning course." }, { "end": 45.980000000000004, "start": 43.46, "text": " Now I gather that you're at OpenAI." }, { "end": 50.58, "start": 45.980000000000004, "text": " Can you tell us a bit about what your focus area is there and what your role is there?" }, { "end": 55.46, "start": 50.58, "text": " I am a researcher in the algorithm's team at OpenAI." }, { "end": 62.42, "start": 55.46, "text": " The algorithm's team is the team that works on basic research that leads to new kind" }, { "end": 64.78, "start": 62.42, "text": " of generative models." }, { "end": 72.7, "start": 64.78, "text": " For example, Dali came out of the work done by folks in the algorithm's team and in" }, { "end": 78.82, "start": 72.7, "text": " the past like GPD2, image GPD and so on." }, { "end": 88.38, "start": 78.82, "text": " My focus specifically is to work on generative models of modalities other than text, exploring" }, { "end": 93.22, "start": 88.38, "text": " new possibilities there, new architectures, new modalities and so on." }, { "end": 98.25999999999999, "start": 93.22, "text": " When we look back at your dissertation from Berkeley, you mentioned the main axes of" }, { "end": 104.3, "start": 98.25999999999999, "text": " contributions being self-supervised or unsupervised representation learning and self-attention." }, { "end": 107.38, "start": 104.3, "text": " Are you continuing along those lines with what you're doing now?" }, { "end": 109.53999999999999, "start": 107.38, "text": " Yeah, for sure." }, { "end": 117.66, "start": 109.53999999999999, "text": " It's difficult to not take advantage of self-supervised learning anywhere right now in deep learning." }, { "end": 125.41999999999999, "start": 117.66, "text": " Anything you do right now definitely gets a big boost if you leverage pre-trained self-supervised" }, { "end": 126.41999999999999, "start": 125.41999999999999, "text": " representations." }, { "end": 136.34, "start": 126.41999999999999, "text": " Whether it be reinforcement learning or generative models or language, it is the case that having" }, { "end": 142.82, "start": 136.34, "text": " access to a pretty good pre-trained representation can make things a lot more convenient and more" }, { "end": 144.94, "start": 142.82, "text": " generalizable for you." }, { "end": 151.18, "start": 144.94, "text": " Just to give you an example, the work done by Aditya Ramesh like Dali, Dali too that came" }, { "end": 153.26, "start": 151.18, "text": " out recently." 
}, { "end": 158.94, "start": 153.26, "text": " If you actually look at the architecture, it takes a pre-trained clip model and then tries" }, { "end": 163.7, "start": 158.94, "text": " to build a generative model on top of that lead instead of building a generative model" }, { "end": 165.7, "start": 163.7, "text": " from scratch." }, { "end": 171.54, "start": 165.7, "text": " So at what is clip, clip, you can consider the system's kind of like giant self-supervised" }, { "end": 175.85999999999999, "start": 171.54, "text": " contrast of learning done on like internet skill data." }, { "end": 182.33999999999997, "start": 175.85999999999999, "text": " So it's hard to like decouple self-supervised learning from almost anything you do right" }, { "end": 183.33999999999997, "start": 182.33999999999997, "text": " now." }, { "end": 188.42, "start": 183.33999999999997, "text": " It's pretty much part of everything that anyone is building these days." }, { "end": 193.17999999999998, "start": 188.42, "text": " So just like that, similar to that, I'm also leveraging that for my current research." }, { "end": 198.9, "start": 193.18, "text": " Okay, so let's get started talking about decision transformer reinforcement learning" }, { "end": 200.94, "start": 198.9, "text": " by a sequence modeling." }, { "end": 205.1, "start": 200.94, "text": " That was Chen at all in 2021." }, { "end": 206.74, "start": 205.1, "text": " What are the main ideas happening here?" }, { "end": 208.14000000000001, "start": 206.74, "text": " It's very simple." }, { "end": 209.14000000000001, "start": 208.14000000000001, "text": " Transformers are awesome." }, { "end": 210.14000000000001, "start": 209.14000000000001, "text": " GPDs are awesome." }, { "end": 214.38, "start": 210.14000000000001, "text": " They basically changed the landscape of natural language processing." }, { "end": 219.66, "start": 214.38, "text": " For some reason, reinforcement learning is viewed separately from deep learning in the" }, { "end": 226.54, "start": 219.66, "text": " sense that the way people think about RL is there is this whole literature, Ritz-Sodden" }, { "end": 235.78, "start": 226.54, "text": " textbook, dynamic programming, approximate DP, value evaluation, policy gradients, Q learning" }, { "end": 237.01999999999998, "start": 235.78, "text": " and so on." }, { "end": 243.3, "start": 237.01999999999998, "text": " And the way we actually use deep learning in RL is to just use it as a function approximator" }, { "end": 247.78, "start": 243.3, "text": " to get you the representations on which you perform the classical RL algorithms." }, { "end": 253.34, "start": 247.78, "text": " Just been the way deep minds started things and everyone's just been doing the same thing." }, { "end": 258.06, "start": 253.34, "text": " After that, this like several year of the work have been put into just making that stack" }, { "end": 259.06, "start": 258.06, "text": " really work." }, { "end": 260.06, "start": 259.06, "text": " That's fair." }, { "end": 263.38, "start": 260.06, "text": " It's kind of work like you know, Atari, AlphaGo and so on." }, { "end": 271.3, "start": 263.38, "text": " But now that we have a sequence model of working really well after the invention of the transformer," }, { "end": 274.74, "start": 271.3, "text": " it is worth rethinking the whole paradigm itself." }, { "end": 279.14, "start": 274.74, "text": " At the end of the day, what do you actually want from an RL agent?" 
}, { "end": 285.54, "start": 279.14, "text": " It should have access to what it's done in the past by which we mean like all the state's" }, { "end": 289.58, "start": 285.54, "text": " actions, rewards, it's seen so far." }, { "end": 291.26, "start": 289.58, "text": " It's context of the world." }, { "end": 294.1, "start": 291.26, "text": " It's context of the environment." }, { "end": 299.62, "start": 294.1, "text": " And it should have access to what it's supposed to be doing like whatever task the human" }, { "end": 301.94, "start": 299.62, "text": " provides the agent with." }, { "end": 304.5, "start": 301.94, "text": " And based on this, it should decide what to do next." }, { "end": 313.62, "start": 304.5, "text": " Ultimately, if there is a model that cares to this need, that's all you need, right?" }, { "end": 321.22, "start": 313.62, "text": " And for that, it just has to attend to all this information and decide what to do next." }, { "end": 324.38, "start": 321.22, "text": " In some sense, it has to attend." }, { "end": 326.58, "start": 324.38, "text": " That's all you need, right?" }, { "end": 331.34, "start": 326.58, "text": " So, just like a transformer sure attention is all you need." }, { "end": 340.61999999999995, "start": 331.34, "text": " We thought, okay, why not just treat RL as a sequence model and given sufficient data" }, { "end": 344.58, "start": 340.61999999999995, "text": " of what an agent is supposed to be doing." }, { "end": 347.02, "start": 344.58, "text": " It should be able to do it at this time." }, { "end": 354.73999999999995, "start": 347.02, "text": " So if you have a good enough data set of trajectories and trajectory by trajectories, I just" }, { "end": 358.21999999999997, "start": 354.73999999999995, "text": " mean a sequence of state action rewards." }, { "end": 363.94000000000005, "start": 358.22, "text": " In environment, you just give that just like how you give natural language strings like" }, { "end": 371.14000000000004, "start": 363.94000000000005, "text": " sentences or like images and let the transformer behave like a sponge that absorbs all this" }, { "end": 378.3, "start": 371.14000000000004, "text": " data and learns to internalize how the world works and how it gets rewarded for taking" }, { "end": 380.62, "start": 378.3, "text": " different sequence of actions and so on." }, { "end": 385.58000000000004, "start": 380.62, "text": " For example, if it's given a trajectory of the game of pong, the transformer will internalize" }, { "end": 391.74, "start": 385.58, "text": " that, taking the action up means the battery will go up or if you're near the ball, the" }, { "end": 394.46, "start": 391.74, "text": " battery has to go down, things like that." }, { "end": 400.41999999999996, "start": 394.46, "text": " Instead of being told to like, hey, if the sequence of actions will lead you to this value" }, { "end": 403.65999999999997, "start": 400.41999999999996, "text": " function and so on, you don't need all that." }, { "end": 411.06, "start": 403.65999999999997, "text": " So in some sense, you can think of this as the software 2.0 moment for RL, right?" }, { "end": 418.5, "start": 411.06, "text": " Let the neural network write the weights for the RL algorithm itself and that's all it" }, { "end": 419.98, "start": 418.5, "text": " is." 
}, { "end": 428.62, "start": 419.98, "text": " So you do this, you take a data set, just train a regular GPD like transformer that learns" }, { "end": 432.86, "start": 428.62, "text": " to predict the future actions given the past state actions and rewards." }, { "end": 438.06, "start": 432.86, "text": " Test time, you just ask it to get a high reward and it'll just learn to do it." }, { "end": 448.3, "start": 438.06, "text": " But if you can do this reliably at a large scale, then we can like basically leverage" }, { "end": 456.42, "start": 448.3, "text": " every scale, large scale infrastructure that's been built for GPDs and images or Dali and" }, { "end": 460.82, "start": 456.42, "text": " how a similar kind of stack for robotics and control, right?" }, { "end": 465.42, "start": 460.82, "text": " And that that that in my opinion at least is the future." }, { "end": 471.26, "start": 465.42, "text": " It's much easier to scale something when there is a whole community of people and investment" }, { "end": 475.46000000000004, "start": 471.26, "text": " and resources being put into scaling a particular infrastructure." }, { "end": 481.5, "start": 475.46000000000004, "text": " It's very hard to scale something when you are like a small community around you is the" }, { "end": 485.02000000000004, "start": 481.5, "text": " only set of people doing that." }, { "end": 493.02000000000004, "start": 485.02000000000004, "text": " So I think I said this in the previous podcast itself, like one motivating factor for doing" }, { "end": 499.26, "start": 493.02, "text": " a lot of the research I did in RL is to make RL look more like deep learning." }, { "end": 501.82, "start": 499.26, "text": " And currently deep learning is just mostly transformers." }, { "end": 505.26, "start": 501.82, "text": " So it's good that RL also just looks like a transformer." }, { "end": 506.85999999999996, "start": 505.26, "text": " How did this idea come about?" }, { "end": 512.98, "start": 506.85999999999996, "text": " The way this project was conceived was I was actually interviewing at OpenAI for full" }, { "end": 517.62, "start": 512.98, "text": " time position and I had some discussions with some people there." }, { "end": 522.6999999999999, "start": 517.62, "text": " Like just generally you get you do research chats, like you know to figure out alignment" }, { "end": 529.58, "start": 522.7, "text": " and I talked to Alec Radford and I asked him what is like really the way in which he picks" }, { "end": 530.58, "start": 529.58, "text": " problems." }, { "end": 538.1400000000001, "start": 530.58, "text": " So Alec Radford is the first author on GPT1 to clip, you know he's done incredible work," }, { "end": 543.0200000000001, "start": 538.1400000000001, "text": " you know, considered to be the most successful independent individual contributor at OpenAI." }, { "end": 548.3000000000001, "start": 543.0200000000001, "text": " I was very curious how he picks his problems and one intuition that he gave me was you have" }, { "end": 555.66, "start": 548.3, "text": " to think about like all these like large generative models as distillation of human activity" }, { "end": 558.9399999999999, "start": 555.66, "text": " on internet baked into a large model." }, { "end": 560.8199999999999, "start": 558.9399999999999, "text": " Let me break that down." 
}, { "end": 566.4599999999999, "start": 560.8199999999999, "text": " Humans are like like if you look at the way GPT2 was built and three for that matter, you" }, { "end": 571.6999999999999, "start": 566.4599999999999, "text": " leverage the fact that humans have curated a lot of content and create a lot of content" }, { "end": 579.7, "start": 571.7, "text": " on internet in the far off text, right?" }, { "end": 580.1800000000001, "start": 579.7, "text": " Like you take news articles of books or Wikipedia pages, it's a lot of human work that was" }, { "end": 586.8000000000001, "start": 580.1800000000001, "text": " done and put on the internet and you can take advantage of human ratings like the karma" }, { "end": 592.98, "start": 586.8000000000001, "text": " for the post and you use that as some kind of implicit page rank and you only take the" }, { "end": 595.94, "start": 592.98, "text": " content that has sufficient karma so that you take good content." }, { "end": 602.58, "start": 595.94, "text": " You're basically leveraging human activity like rating pages, creating content, describing" }, { "end": 609.62, "start": 602.58, "text": " articles like in the form of Wikipedia, writing it in a formal way or writing news articles" }, { "end": 616.3000000000001, "start": 609.62, "text": " or like conversational ability on Reddit, you're taking all these things that people do for" }, { "end": 621.0600000000001, "start": 616.3000000000001, "text": " free on internet, their footprint is there on internet and putting it into a generative" }, { "end": 625.7, "start": 621.0600000000001, "text": " model like GPT and then at test time you can ask the GPT to do things and it becomes" }, { "end": 628.4200000000001, "start": 625.7, "text": " economically and commercially valuable, right?" }, { "end": 632.98, "start": 628.4200000000001, "text": " In some sense that gave me an insight, oh actually you can think of these language models" }, { "end": 639.0200000000001, "start": 632.98, "text": " like agents, even though people don't think of language models as agents because there's" }, { "end": 642.46, "start": 639.0200000000001, "text": " no reinforcement learning in it, technically there is, right?" }, { "end": 648.98, "start": 642.46, "text": " If you consider every single word has been an action taken by a human agent and now the" }, { "end": 653.9000000000001, "start": 648.98, "text": " transformer is basically cloning it, it's behavior cloning activity on the internet." }, { "end": 656.66, "start": 653.9, "text": " So that is the word alacuse when describing research." }, { "end": 663.02, "start": 656.66, "text": " It's like behavior cloning human work on the internet that already exists into a large" }, { "end": 667.34, "start": 663.02, "text": " model and then the large model becomes like an intelligent agent at test time and the" }, { "end": 672.4599999999999, "start": 667.34, "text": " more diverse data you throw at the model, the more likely that it will do like amazing" }, { "end": 674.02, "start": 672.4599999999999, "text": " things." }, { "end": 682.8199999999999, "start": 674.02, "text": " So just give me a new insight, oh for example language models are agents that can write" }, { "end": 689.82, "start": 682.82, "text": " creative writing, co-pilot, Git, co-pilot is basically a writing assistant but you can" }, { "end": 693.58, "start": 689.82, "text": " think of it as an agent that just learned to code, right?" 
}, { "end": 697.0200000000001, "start": 693.58, "text": " By behavior cloning human code on GitHub." }, { "end": 702.3000000000001, "start": 697.0200000000001, "text": " This is the same thing that autopilot does too, autopilot is basically cloning human driving" }, { "end": 706.0200000000001, "start": 702.3000000000001, "text": " and Dalai is cloning artists." }, { "end": 711.74, "start": 706.0200000000001, "text": " So at the end of the day, like the end game for creating intelligent agents like robots" }, { "end": 715.1800000000001, "start": 711.74, "text": " or any RL agent is the clone behavior." }, { "end": 720.94, "start": 715.1800000000001, "text": " The only thing that you need to go beyond just cloning is also understand what it means" }, { "end": 722.42, "start": 720.94, "text": " to solve a task." }, { "end": 725.7, "start": 722.42, "text": " You don't want to know like what does it mean to complete a task, what does it mean to" }, { "end": 727.1800000000001, "start": 725.7, "text": " not complete a task." }, { "end": 733.02, "start": 727.1800000000001, "text": " You want to know the notion of a reward to rather than just saying oh yeah humans did" }, { "end": 735.62, "start": 733.02, "text": " this thing, I'll also do the same thing." }, { "end": 741.26, "start": 735.62, "text": " So it was just the inspiration was okay like the current generative model is a greater" }, { "end": 745.22, "start": 741.26, "text": " cloning, how do you make RL more like just cloning." }, { "end": 749.5, "start": 745.22, "text": " So there are two parts to it then one is from swammer and the other is supervised learning." }, { "end": 752.1, "start": 749.5, "text": " And then how do you turn RL into supervised learning." }, { "end": 757.1, "start": 752.1, "text": " One way to turn RL into supervised learning was using the upside down reinforcement learning" }, { "end": 758.1, "start": 757.1, "text": " commulation." }, { "end": 761.42, "start": 758.1, "text": " I think that was proposed by Schmid-Jubber." }, { "end": 766.3, "start": 761.42, "text": " So I was already aware of that paper but then that was not general enough." }, { "end": 771.38, "start": 766.3, "text": " It just took the goal as an embedding and then took the current state and action and tried" }, { "end": 778.8599999999999, "start": 771.38, "text": " to decide what took the current state and the goal and just formulated it like the goal" }, { "end": 781.2199999999999, "start": 778.8599999999999, "text": " condition reinforcement learning setup." }, { "end": 788.9, "start": 781.2199999999999, "text": " So that would still have the same issues that come with scaling these Markovian models." }, { "end": 791.8599999999999, "start": 788.9, "text": " So you do want something that's more general." }, { "end": 795.3, "start": 791.8599999999999, "text": " I just combine these two insights together to get the idea." 
}, { "end": 805.42, "start": 795.3, "text": " I started working on it myself with I think Lily, Leach and then later Igor Mordash," }, { "end": 813.14, "start": 805.42, "text": " he's the other senior author on the paper, said he was looking into bird models for RL" }, { "end": 819.3, "start": 813.14, "text": " as like some kind of pre-trained representations that can be leveraged for any new task in" }, { "end": 823.66, "start": 819.3, "text": " sense that you'd pre-trained a large bird and then you find you into any new task you" }, { "end": 828.5799999999999, "start": 823.66, "text": " have with another undergrad named Kevin Liu." }, { "end": 835.38, "start": 828.5799999999999, "text": " So he was very, but that project wasn't really panning out as much because in general nobody" }, { "end": 840.06, "start": 835.38, "text": " is really shown very successful behavior pre-training in RL." }, { "end": 845.54, "start": 840.06, "text": " I think people have shown good successful results on the pre-training the vision encoder" }, { "end": 851.14, "start": 845.54, "text": " and then showing that it can accelerate the learning on a new task for pixels but no one's" }, { "end": 855.66, "start": 851.14, "text": " really shown something where you pre-training an action decoder or something and then you" }, { "end": 858.74, "start": 855.66, "text": " throw a new task and it just works." }, { "end": 863.22, "start": 858.74, "text": " But the stack they were building was very useful for us too." }, { "end": 868.54, "start": 863.22, "text": " It's just that the same front swimmers, the masking is left to right with this random" }, { "end": 870.14, "start": 868.54, "text": " masking, things like that." }, { "end": 874.62, "start": 870.14, "text": " So we just decided to combine and that led to that paper." }, { "end": 875.62, "start": 874.62, "text": " That's super interesting." }, { "end": 877.78, "start": 875.62, "text": " Okay, I'm not really glad I asked you this." }, { "end": 882.98, "start": 877.78, "text": " You said that people weren't able to make these unsupervised models work with new tasks." }, { "end": 883.98, "start": 882.98, "text": " Is that right?" }, { "end": 887.98, "start": 883.98, "text": " And what you were doing is with decision transformer is not a new task." }, { "end": 892.5799999999999, "start": 887.98, "text": " Is that what the difference is or why was unsupervised not working before and you made" }, { "end": 893.5799999999999, "start": 892.5799999999999, "text": " it work?" }, { "end": 897.3399999999999, "start": 893.5799999999999, "text": " Yeah, so to clarify, I'm not saying decision transformer made unsupervised learning" }, { "end": 898.3399999999999, "start": 897.3399999999999, "text": " work for RL." }, { "end": 900.22, "start": 898.3399999999999, "text": " That's not true." }, { "end": 905.9399999999999, "start": 900.22, "text": " What I mean to say is something like Bert is very hard to get the work for reinforcement" }, { "end": 906.9399999999999, "start": 905.9399999999999, "text": " learning." }, { "end": 911.94, "start": 906.94, "text": " And then you're, you pre-trained a very large model and then you find into any new task." }, { "end": 915.3800000000001, "start": 911.94, "text": " It's not clear exactly what you should pre-trained." }, { "end": 918.3800000000001, "start": 915.3800000000001, "text": " And fine tuning is dark magic by itself." 
}, { "end": 926.0200000000001, "start": 918.3800000000001, "text": " Like you have to figure out so many hyper parameters like the learning rate of the atom, optimizer," }, { "end": 928.6600000000001, "start": 926.0200000000001, "text": " and how you decay the learning rate." }, { "end": 931.58, "start": 928.6600000000001, "text": " And what is the bad size?" }, { "end": 936.2600000000001, "start": 931.58, "text": " If you use a large bad size, you might overfeit quickly on a task where you have very" }, { "end": 938.8199999999999, "start": 936.26, "text": " new examples." }, { "end": 941.22, "start": 938.8199999999999, "text": " This is a problem in NLP itself." }, { "end": 943.18, "start": 941.22, "text": " How to efficiently fine tune." }, { "end": 945.58, "start": 943.18, "text": " You don't even need to ask for reinforcement learning." }, { "end": 947.38, "start": 945.58, "text": " It's going to be even worse." }, { "end": 952.22, "start": 947.38, "text": " Given it's hard to even train it from scratch." }, { "end": 954.74, "start": 952.22, "text": " On the other hand, something like GPDs are cool." }, { "end": 955.9, "start": 954.74, "text": " You don't have to fine tune." }, { "end": 960.9, "start": 955.9, "text": " If you have a great model, zero shot of few shot of work at this time." }, { "end": 965.14, "start": 960.9, "text": " So that is the advantage of training these large language models." }, { "end": 972.14, "start": 965.14, "text": " You can ask if you had 100 GPUs, 1000 GPUs, would you go for training a GPD or would you" }, { "end": 974.66, "start": 972.14, "text": " go for training a bird or a T-fi model?" }, { "end": 978.38, "start": 974.66, "text": " Any answer is you would go for training a GPD because of the flexibility it offers you" }, { "end": 979.38, "start": 978.38, "text": " at test time." }, { "end": 981.62, "start": 979.38, "text": " If you wanted to fine tune, you could still fine tune from it." }, { "end": 987.78, "start": 981.62, "text": " But there are so many other capabilities like zero shot completions, few shot completions," }, { "end": 989.9399999999999, "start": 987.78, "text": " prompting, prompt engineering." }, { "end": 993.58, "start": 989.9399999999999, "text": " The natural sequentiality it offers you." }, { "end": 999.5400000000001, "start": 993.58, "text": " So that made me think, that would be even more of a drastic result if you just show that" }, { "end": 1001.46, "start": 999.5400000000001, "text": " a pure GPD works." }, { "end": 1004.7800000000001, "start": 1001.46, "text": " On rather than saying, hey, here are some checkpoints." }, { "end": 1009.14, "start": 1004.7800000000001, "text": " If you want to do your RL research, you can take it from use it." }, { "end": 1014.7800000000001, "start": 1009.14, "text": " That in my opinion, wouldn't have been as impactful because at the end of the day, what are" }, { "end": 1015.7800000000001, "start": 1014.7800000000001, "text": " these RL tests?" }, { "end": 1022.0200000000001, "start": 1015.7800000000001, "text": " It's just a bunch of simulation benchmarks that people created for quick results." }, { "end": 1027.74, "start": 1022.02, "text": " In the impact of the paper, it's more if it forces people to reading the paradigm itself" }, { "end": 1032.82, "start": 1027.74, "text": " rather than serving as a checkpoint for writing more papers for other students." }, { "end": 1036.86, "start": 1032.82, "text": " I felt like having a pure language model stack would be better for that." 
}, { "end": 1037.86, "start": 1036.86, "text": " So I got a minute." }, { "end": 1044.34, "start": 1037.86, "text": " I did not appreciate the magnitude of the contribution with decision-transforms when" }, { "end": 1045.34, "start": 1044.34, "text": " the paper came out." }, { "end": 1048.7, "start": 1045.34, "text": " I thought, oh, that's interesting." }, { "end": 1052.54, "start": 1048.7, "text": " With the way you're talking about it now, a very different paradigm for RL, it makes" }, { "end": 1054.3, "start": 1052.54, "text": " it sound like a much bigger deal." }, { "end": 1056.66, "start": 1054.3, "text": " I'm glad I got to hear that directly from you." }, { "end": 1059.42, "start": 1056.66, "text": " Can we talk about the experiments to make this a little more concrete?" }, { "end": 1061.02, "start": 1059.42, "text": " Yeah, for sure." }, { "end": 1065.46, "start": 1061.02, "text": " So far, I've been thinking about this as supervised learning and I don't understand how it" }, { "end": 1069.74, "start": 1065.46, "text": " could ever, why could it ever do better than the training data or is that even important" }, { "end": 1070.74, "start": 1069.74, "text": " anymore?" }, { "end": 1072.82, "start": 1070.74, "text": " Yeah." }, { "end": 1077.5800000000002, "start": 1072.82, "text": " I think that's pretty important to be able to do better than the training data." }, { "end": 1082.1, "start": 1077.58, "text": " But for what it's worth, I want to clarify that that is a nature of conditioning on the" }, { "end": 1085.78, "start": 1082.1, "text": " reward and not because we're using a transformer or anything." }, { "end": 1091.3, "start": 1085.78, "text": " There is a bit of a thread where somebody points, like I actually pointed out myself, that" }, { "end": 1095.62, "start": 1091.3, "text": " this was a trick used in training Alpha Star." }, { "end": 1101.22, "start": 1095.62, "text": " Oriol Vennials actually, Oriol Vennials and his team were building Alpha Star and the way" }, { "end": 1107.54, "start": 1101.22, "text": " Alpha Star was trained was it actually conditions a lot on the pre-order." }, { "end": 1112.82, "start": 1107.54, "text": " There were previous, like, sayings, but in addition to that, it conditions on the opponent's" }, { "end": 1118.1399999999999, "start": 1112.82, "text": " skill level, how many units you got to build based on that." }, { "end": 1122.42, "start": 1118.1399999999999, "text": " And there's like a pointer network that attends to all these entities." }, { "end": 1126.98, "start": 1122.42, "text": " So this was there so that at test time, they could adapt to the opponent by predicting" }, { "end": 1132.26, "start": 1126.98, "text": " its skill level and using that information to condition for the agent to be more adaptive" }, { "end": 1134.7, "start": 1132.26, "text": " and flexible to what opponent is playing with." }, { "end": 1139.54, "start": 1134.7, "text": " I have to say, these were also ideas that inspired me when we built the architecture." }, { "end": 1145.78, "start": 1139.54, "text": " It was more like a subconsciously, I remembered this, but it was like two years ago when I was" }, { "end": 1150.02, "start": 1145.78, "text": " like, I actually forgot to credit them in the paper." }, { "end": 1154.78, "start": 1150.02, "text": " Yeah, you can extrapolate beyond the training data." }, { "end": 1159.5800000000002, "start": 1154.78, "text": " If you train the agent, do that in the first place, right?" 
}, { "end": 1162.46, "start": 1159.5800000000002, "text": " The agent can only extrapolate." }, { "end": 1163.78, "start": 1162.46, "text": " Also what does extrapolate mean?" }, { "end": 1168.5, "start": 1163.78, "text": " Like you want to tell the agent what task it's even doing in the first place so that at" }, { "end": 1172.46, "start": 1168.5, "text": " test time, you can give it new tasks and it can potentially do that." }, { "end": 1177.8999999999999, "start": 1172.46, "text": " In some sense, like, if the agent has understood what it means to get a particular score, then" }, { "end": 1181.34, "start": 1177.8999999999999, "text": " it can potentially get a score that's never seen in the training data." }, { "end": 1184.78, "start": 1181.34, "text": " And that score can be bigger than the maximum score in the training data." }, { "end": 1190.8999999999999, "start": 1184.78, "text": " This is just, I just mean this in a funny way, but it's slightly conscious, you know?" }, { "end": 1195.5800000000002, "start": 1190.9, "text": " If the agent has understood what it even means to achieve a certain score, whether it" }, { "end": 1200.3400000000001, "start": 1195.5800000000002, "text": " be good or bad, you can ask it to get a higher score than whatever score it's training" }, { "end": 1201.94, "start": 1200.3400000000001, "text": " to see in the training data." }, { "end": 1206.1000000000001, "start": 1201.94, "text": " I'm not saying this works reliably that, oh, yeah, we've solved like an incredibly amazing" }, { "end": 1212.5800000000002, "start": 1206.1000000000001, "text": " problem that given a dataset of human trajectories, you can always ensure a decision-transformer will" }, { "end": 1216.18, "start": 1212.5800000000002, "text": " like get a score better than the best human in the training data." }, { "end": 1219.42, "start": 1216.18, "text": " No, it only works on one or two benchmarks, I think." }, { "end": 1223.0600000000002, "start": 1219.42, "text": " There's still a lot more work to do there, but it's exciting." }, { "end": 1225.5, "start": 1223.0600000000002, "text": " The capabilities are pretty cool." }, { "end": 1232.18, "start": 1225.5, "text": " And so this is even without any mechanism like the arg max in Q learning, which is how" }, { "end": 1236.38, "start": 1232.18, "text": " the algorithm tries to keep maximizing its return." }, { "end": 1237.38, "start": 1236.38, "text": " Yeah, exactly." }, { "end": 1238.38, "start": 1237.38, "text": " Yeah." }, { "end": 1239.38, "start": 1238.38, "text": " That's pretty amazing." }, { "end": 1244.3400000000001, "start": 1239.38, "text": " Yeah, you just say, yeah, to get a score of one, you do these sequence of actions, to" }, { "end": 1248.5800000000002, "start": 1244.3400000000001, "text": " get a score of five, you do these sequence of actions, and to get a score of 100, you" }, { "end": 1252.1, "start": 1248.58, "text": " do these test times, you just say get a score of 1000." }, { "end": 1256.74, "start": 1252.1, "text": " Maybe it does something more or less similar to what it's seen for a get a score of 100," }, { "end": 1262.22, "start": 1256.74, "text": " but potentially slightly better than that because it is implicitly learned." }, { "end": 1265.86, "start": 1262.22, "text": " What does it mean to do better for 100 relative to five?" }, { "end": 1271.74, "start": 1265.86, "text": " So it might take that behavior and paste it for relative to 100 over that." 
}, { "end": 1275.6999999999998, "start": 1271.74, "text": " And that's why it's a paradigm shift because you don't have to do all these dynamic programming" }, { "end": 1276.9399999999998, "start": 1275.6999999999998, "text": " of policy gradients." }, { "end": 1283.02, "start": 1276.94, "text": " You just let the deep neural network figure out what it means to optimize long term reward." }, { "end": 1287.8600000000001, "start": 1283.02, "text": " So now I'm looking at a chart in your paper figure three is showing different results." }, { "end": 1293.22, "start": 1287.8600000000001, "text": " And what it shows is that in some results, this is a transformer, it gets about the same" }, { "end": 1294.22, "start": 1293.22, "text": " as TD learning." }, { "end": 1296.18, "start": 1294.22, "text": " And in some cases, it does better." }, { "end": 1299.54, "start": 1296.18, "text": " And in some cases, it does a lot better, which is surprising to me." }, { "end": 1301.8600000000001, "start": 1299.54, "text": " I guess when I looked at this because I'm like, there's no arg max." }, { "end": 1303.94, "start": 1301.8600000000001, "text": " How is it doing this?" }, { "end": 1307.66, "start": 1303.94, "text": " I was a little surprised that TD learning is represented by CQL." }, { "end": 1311.38, "start": 1307.66, "text": " Would there be other algorithms that might do better to represent TD learning here?" }, { "end": 1312.7, "start": 1311.38, "text": " Yeah, there might be." }, { "end": 1313.7, "start": 1312.7, "text": " There might be." }, { "end": 1315.98, "start": 1313.7, "text": " And I think that's an active area of research, right?" }, { "end": 1318.42, "start": 1315.98, "text": " Many people are working on that." }, { "end": 1321.26, "start": 1318.42, "text": " So but to me, those are not interesting at all." }, { "end": 1328.1000000000001, "start": 1321.26, "text": " Like I would say you can spend another five generations of PhD students or you can spend" }, { "end": 1340.4599999999998, "start": 1328.1, "text": " 100,000 generations of Nvidia GPUs and transformers are going to be the Nvidia GPU route are like" }, { "end": 1345.3, "start": 1340.4599999999998, "text": " coming up with more and more fancy Q learning algorithms for offline RL is going to be the" }, { "end": 1347.1, "start": 1345.3, "text": " PhD route." }, { "end": 1350.54, "start": 1347.1, "text": " You can decide to take a bit yourself." }, { "end": 1354.62, "start": 1350.54, "text": " Actually there was a funny comment right after the paper came out." }, { "end": 1358.3, "start": 1354.62, "text": " There was a Reddit post on the paper, somebody posted it on Reddit and I saw a funny comment" }, { "end": 1364.3, "start": 1358.3, "text": " where somebody's like, you know, you should just go go buy more Nvidia stock." }, { "end": 1369.1, "start": 1364.3, "text": " And yeah, actually if you did that, you could have got like become richer for sure." }, { "end": 1373.62, "start": 1369.1, "text": " But just saying that, you know, like more and more Q learning algorithms you come up with" }, { "end": 1377.3799999999999, "start": 1373.62, "text": " potentially, they're going to be decision transformer." }, { "end": 1380.9799999999998, "start": 1377.3799999999999, "text": " I think some people have been published papers beating our scores." 
}, { "end": 1386.18, "start": 1380.98, "text": " But the point is like we didn't even spend time coming up with a new algorithm or hacking" }, { "end": 1389.18, "start": 1386.18, "text": " the transformer to work really well or anything." }, { "end": 1393.9, "start": 1389.18, "text": " It's in fact, like if you look at Kevin's code release, it just imports hugging face" }, { "end": 1397.14, "start": 1393.9, "text": " transformers and just runs it on the project rate data." }, { "end": 1398.42, "start": 1397.14, "text": " It's that simple." }, { "end": 1403.26, "start": 1398.42, "text": " It also in my opinion, it also reduces the barrier to entry to RL." }, { "end": 1406.8600000000001, "start": 1403.26, "text": " I'm talking about it from the perspective of myself as well as many people have heard" }, { "end": 1414.86, "start": 1406.86, "text": " from that, oh, I really so hard like you got to like actually like take a bunch of classes," }, { "end": 1418.74, "start": 1414.86, "text": " read David's service lectures or like, you know, the sudden and barbed up book, which" }, { "end": 1421.78, "start": 1418.74, "text": " is super hard to do all the exercises there." }, { "end": 1426.78, "start": 1421.78, "text": " And by the time like I've lost my energy, it's been like two, three months." }, { "end": 1428.6999999999998, "start": 1426.78, "text": " I'm hardly any progress." }, { "end": 1433.9399999999998, "start": 1428.6999999999998, "text": " On the other hand, like if you work on computer vision or NLP, you just import like hugging" }, { "end": 1439.94, "start": 1433.94, "text": " face transformers or like you know, or by torch image models, immediately take a dataset," }, { "end": 1443.74, "start": 1439.94, "text": " like label it as a random model, you feel good, you feel like you're making progress," }, { "end": 1444.74, "start": 1443.74, "text": " right?" }, { "end": 1446.3400000000001, "start": 1444.74, "text": " The dopamine is there." }, { "end": 1448.38, "start": 1446.3400000000001, "text": " The iteration speed is a lot faster." }, { "end": 1450.6200000000001, "start": 1448.38, "text": " How do you do that for reinforcement learning?" }, { "end": 1455.8200000000002, "start": 1450.6200000000001, "text": " It's I think like inventing even more complicated Q learning algorithms doesn't seem like" }, { "end": 1459.7, "start": 1455.8200000000002, "text": " something that actually caters the need of like bringing more people to the field and" }, { "end": 1461.66, "start": 1459.7, "text": " making faster progress, right?" }, { "end": 1464.26, "start": 1461.66, "text": " It actually seems like a reverse direction of that." }, { "end": 1468.02, "start": 1464.26, "text": " On the other hand, doing something like a decision for the farmer that just makes our" }, { "end": 1472.8600000000001, "start": 1468.02, "text": " rather more and more like NLP or super computer vision is likely to make things easier for" }, { "end": 1473.8600000000001, "start": 1472.8600000000001, "text": " people." }, { "end": 1480.02, "start": 1473.8600000000001, "text": " The amount of our algorithms is exploding exponentially and every there's variants of sub variants." }, { "end": 1484.5400000000002, "start": 1480.02, "text": " Should we expect that trend to continue or do we expect that ultimately we will converge" }, { "end": 1487.1000000000001, "start": 1484.5400000000002, "text": " on a family or a set of families?" 
}, { "end": 1492.34, "start": 1487.1, "text": " Are you saying that in one future possibility is that we converge back to something simpler" }, { "end": 1494.34, "start": 1492.34, "text": " like a decision to transfer?" }, { "end": 1496.54, "start": 1494.34, "text": " I would even go even further than that." }, { "end": 1499.5, "start": 1496.54, "text": " The algorithm is in the weight of the transformer." }, { "end": 1501.26, "start": 1499.5, "text": " You cannot write the algorithm." }, { "end": 1504.1799999999998, "start": 1501.26, "text": " People think they can actually write the algorithm themselves." }, { "end": 1507.82, "start": 1504.1799999999998, "text": " That's not possible and they should they should learn to be more humble." }, { "end": 1512.1399999999999, "start": 1507.82, "text": " I think people think like they can write the ultimate algorithm that will solve AI on a" }, { "end": 1517.3400000000001, "start": 1512.14, "text": " whiteboard, write a paper on it and that will be the answer." }, { "end": 1521.5800000000002, "start": 1517.3400000000001, "text": " In fact, David Silver has a very interesting point about this." }, { "end": 1527.1000000000001, "start": 1521.5800000000002, "text": " He was very unhappy after AlphaGo was done, even though it's such a legendary moment because" }, { "end": 1533.14, "start": 1527.1000000000001, "text": " he was just somehow not happy that it boots for Alpha from human data, the human game," }, { "end": 1538.0600000000002, "start": 1533.14, "text": " go games and after that it is self-player." }, { "end": 1543.94, "start": 1538.06, "text": " Alpha0 was removing every component of hard coding in that." }, { "end": 1548.1799999999998, "start": 1543.94, "text": " It figured out for itself what it means to win and what it means to be better." }, { "end": 1556.98, "start": 1548.1799999999998, "text": " Of course that is something you can only do in a zero-sum, a perfect information game." }, { "end": 1561.4199999999998, "start": 1556.98, "text": " That's what I would say for coming up with more clever online offline or algorithms" }, { "end": 1565.3, "start": 1561.4199999999998, "text": " was just trying to make a decision transformer really work at scale." }, { "end": 1572.7, "start": 1565.3, "text": " This is the route and also we need the generalization abilities of language code, things like that." }, { "end": 1578.7, "start": 1572.7, "text": " If you want to build an AGI, we need the thing that uses all possible strings in one model." }, { "end": 1584.98, "start": 1578.7, "text": " Think about a future GPT, L plus 1 that's trained on trajectories, that's trained on" }, { "end": 1587.46, "start": 1584.98, "text": " internet data, that's trained on videos." }, { "end": 1595.9, "start": 1587.46, "text": " That's likely to be more of a solution to the RL problem than beating the score on Mujoko" }, { "end": 1598.38, "start": 1595.9, "text": " with CQL plus plus." }, { "end": 1603.02, "start": 1598.38, "text": " If we look at the two ideas of transformers and RL, there must be quite a few different" }, { "end": 1604.42, "start": 1603.02, "text": " ways to combine these." }, { "end": 1609.06, "start": 1604.42, "text": " Obviously, many seems to use the transformer as a function approximator and then use" }, { "end": 1610.7, "start": 1609.06, "text": " more conventional algorithms." 
}, { "end": 1616.58, "start": 1610.7, "text": " Is that also a reasonable approach, do you think, or is it really the self-supervised mechanism" }, { "end": 1618.5, "start": 1616.58, "text": " that's the important bit here?" }, { "end": 1619.82, "start": 1618.5, "text": " I think it's reasonable." }, { "end": 1627.46, "start": 1619.82, "text": " A lot of people are writing papers on taking out the CNNs and we're getting based robotics" }, { "end": 1629.98, "start": 1627.46, "text": " and using a vision transformer instead." }, { "end": 1632.1399999999999, "start": 1629.98, "text": " Yeah, that sounds interesting." }, { "end": 1636.78, "start": 1632.1399999999999, "text": " That's definitely going to have the short-term progress because anytime you replace the" }, { "end": 1641.54, "start": 1636.78, "text": " backbone architecture with a different stack, it's likely to proliferate across anyone" }, { "end": 1644.54, "start": 1641.54, "text": " using any CNN anywhere." }, { "end": 1646.78, "start": 1644.54, "text": " That's it." }, { "end": 1650.82, "start": 1646.78, "text": " The paradigm shift is more important in the long run." }, { "end": 1661.06, "start": 1650.82, "text": " You tell me, is it fun to not understand QLearning well or all these WQ learning?" }, { "end": 1664.86, "start": 1661.06, "text": " There's a host of people that do WQ learning." }, { "end": 1666.6599999999999, "start": 1664.86, "text": " Do you even remember that?" }, { "end": 1667.6599999999999, "start": 1666.6599999999999, "text": " Where do you put the max?" }, { "end": 1668.6599999999999, "start": 1667.6599999999999, "text": " There are two maxes." }, { "end": 1670.6599999999999, "start": 1668.6599999999999, "text": " There's an outer max and an inner max." }, { "end": 1678.14, "start": 1670.66, "text": " The soft QLearning has an exponential and approximations for that for the denominator." }, { "end": 1684.18, "start": 1678.14, "text": " It's not even fun to spend one year or two years just reading all these things and going" }, { "end": 1689.5400000000002, "start": 1684.18, "text": " nowhere in terms of actual performance on the task that you care about." }, { "end": 1692.94, "start": 1689.5400000000002, "text": " On the other hand, you just say, hey, I'm just going to import a hugging phase." }, { "end": 1695.5800000000002, "start": 1692.94, "text": " I'm going to create a data loader from a trajectories." }, { "end": 1700.38, "start": 1695.5800000000002, "text": " I'm just going to scale the rewards and some normalize them so that they're like zero to" }, { "end": 1703.22, "start": 1700.38, "text": " one range or whatever, percentiles." }, { "end": 1707.8600000000001, "start": 1703.22, "text": " I'm just going to treat the problem like a Kaggle contest and I'm just going to leverage" }, { "end": 1708.8600000000001, "start": 1707.8600000000001, "text": " a transformer." }, { "end": 1711.5800000000002, "start": 1708.8600000000001, "text": " I won't even need to think about the optimization." }, { "end": 1714.5800000000002, "start": 1711.5800000000002, "text": " All that part has been figured out for language models so more or less it's going to work" }, { "end": 1715.8600000000001, "start": 1714.5800000000002, "text": " for any string." }, { "end": 1720.22, "start": 1715.8600000000001, "text": " In future, you might be able to use a diffusion model much easier, right?" }, { "end": 1722.3000000000002, "start": 1720.22, "text": " You make progress in like few hours." 
}, { "end": 1726.5, "start": 1722.3000000000002, "text": " You get something, you get an agent that actually works and you know how to debug." }, { "end": 1728.9, "start": 1726.5, "text": " Debugging is just like debugging supervised learning." }, { "end": 1733.18, "start": 1728.9, "text": " You tell yourself like which potentially is going to be used by more people and have" }, { "end": 1735.7, "start": 1733.18, "text": " more chances of success in the long run." }, { "end": 1740.9, "start": 1735.7, "text": " The diversity and range of all these complicated mechanisms people have come up with is absolutely" }, { "end": 1743.98, "start": 1740.9, "text": " incredible and kind of very strange." }, { "end": 1749.7800000000002, "start": 1743.98, "text": " Like you don't see that in other parts of machine learning to the same extent." }, { "end": 1754.02, "start": 1749.7800000000002, "text": " And that was kind of one of my motivations for doing this podcast is when I was starting" }, { "end": 1758.3400000000001, "start": 1754.02, "text": " to see this endless stream of these things coming out and these endless variations of" }, { "end": 1760.86, "start": 1758.34, "text": " algorithms, I felt a little discouraged." }, { "end": 1762.86, "start": 1760.86, "text": " I felt how I'm going to keep on top of this." }, { "end": 1767.82, "start": 1762.86, "text": " The only way I could think to do it was to talk to people who really know often and be" }, { "end": 1770.6999999999998, "start": 1767.82, "text": " able to understand which of these is important." }, { "end": 1774.62, "start": 1770.6999999999998, "text": " But the trend doesn't seem to stop the continuing diversity." }, { "end": 1776.6599999999999, "start": 1774.62, "text": " That is not going to stop." }, { "end": 1782.1, "start": 1776.6599999999999, "text": " And I don't think it should stop because I mean people have the free will of freedom to" }, { "end": 1784.02, "start": 1782.1, "text": " do any kind of research, right?" }, { "end": 1791.26, "start": 1784.02, "text": " Like I would say that what you're feeling is more the norm than exception." }, { "end": 1795.18, "start": 1791.26, "text": " And just like the assisting basil says, right?" }, { "end": 1801.9, "start": 1795.18, "text": " In the long term, that's no misaligned between economic value and like customer value." }, { "end": 1809.58, "start": 1801.9, "text": " So the research you do like has secured everyone and not just like a small community of like" }, { "end": 1812.66, "start": 1809.58, "text": " really well-grat PhD students." }, { "end": 1816.78, "start": 1812.66, "text": " Because at the end, like the real value is only created when people take your algorithms" }, { "end": 1821.26, "start": 1816.78, "text": " and like build robots or customer service agents, things like that." }, { "end": 1825.22, "start": 1821.26, "text": " Those are not ideally going to be done by PhD students, right?" }, { "end": 1832.38, "start": 1825.22, "text": " Like very good software engineers who can quickly like bootstrap existing ML repos will" }, { "end": 1834.26, "start": 1832.38, "text": " likely do that for them." }, { "end": 1838.8600000000001, "start": 1834.26, "text": " It's going to be easier if because they all already know like GPDs and stuff." }, { "end": 1844.8999999999999, "start": 1838.86, "text": " So I don't think what you're saying is actually the exception." 
}, { "end": 1850.5, "start": 1844.8999999999999, "text": " It's more the norm that people are kind of tired of just seeing countless new variants" }, { "end": 1856.4199999999998, "start": 1850.5, "text": " of Q learning algorithms, like promising like one or two percent improvement over existing" }, { "end": 1859.3, "start": 1856.4199999999998, "text": " ones over like four or five random seats." }, { "end": 1861.34, "start": 1859.3, "text": " It's kind of boring to see such papers." }, { "end": 1867.6599999999999, "start": 1861.34, "text": " So can we talk about where something like decision transformer is most relevant and maybe" }, { "end": 1869.14, "start": 1867.66, "text": " what are the limitations?" }, { "end": 1875.98, "start": 1869.14, "text": " Like is it really relegated to tasks where we're in the huge data regime?" }, { "end": 1876.98, "start": 1875.98, "text": " Ideally yes." }, { "end": 1880.8600000000001, "start": 1876.98, "text": " Ideally that should be the regime which is really shine." }, { "end": 1882.9, "start": 1880.8600000000001, "text": " That's not going to be a problem I think." }, { "end": 1889.8600000000001, "start": 1882.9, "text": " Like any industry where you're like building a agent you're most like like you ideally want" }, { "end": 1892.74, "start": 1889.8600000000001, "text": " a leverage a transformer when you have a lot of data, right?" }, { "end": 1899.82, "start": 1892.74, "text": " That's it you might not need a reward data as much as like you think you might be able" }, { "end": 1905.02, "start": 1899.82, "text": " to leverage a really good retrain model or like a language model." }, { "end": 1908.14, "start": 1905.02, "text": " That's just trained at a trajectory level without reward information." }, { "end": 1913.5, "start": 1908.14, "text": " You could find unit to like a small set of trajectories that actually have reward information." }, { "end": 1919.6200000000001, "start": 1913.5, "text": " So just like how we saw the data efficiency problems with regular like language or code" }, { "end": 1924.9399999999998, "start": 1919.62, "text": " or computer vision the same seems set of like ideas can apply here." }, { "end": 1930.5, "start": 1924.9399999999998, "text": " In terms of like shortcomings I still think it's not there yet in terms of really beating" }, { "end": 1935.9799999999998, "start": 1930.5, "text": " the best human engineered algorithms on these benchmarks." }, { "end": 1937.58, "start": 1935.9799999999998, "text": " That was not our point either." }, { "end": 1942.06, "start": 1937.58, "text": " Like the point is oh you know without much tuning it's already like pretty good but it would" }, { "end": 1950.58, "start": 1942.06, "text": " be nice if it's made to be a very reliable algorithm that works out of something like" }, { "end": 1955.3, "start": 1950.58, "text": " scikit-learn logistic regression you know it just take it and it just works it would be" }, { "end": 1956.82, "start": 1955.3, "text": " nice to make it like that." }, { "end": 1962.06, "start": 1956.82, "text": " So even if you don't get an amazing performance you get some reasonably good model to the" }, { "end": 1965.3, "start": 1962.06, "text": " extent that if it doesn't work it's more like an issue in your data." }, { "end": 1968.86, "start": 1965.3, "text": " If you can get it to that such a stage then that would be really cool." 
}, { "end": 1974.06, "start": 1968.86, "text": " Fitting it with language or code where you can ask a mod agent to iteratively change the" }, { "end": 1978.6999999999998, "start": 1974.06, "text": " code based on your feedback that you give that would be really awesome." }, { "end": 1983.4199999999998, "start": 1978.6999999999998, "text": " Eterative debugging or getting a certain score on a Kaggle contest those kind of things" }, { "end": 1984.9399999999998, "start": 1983.4199999999998, "text": " would be super awesome to see." }, { "end": 1991.34, "start": 1984.9399999999998, "text": " I think robotics like you basically train an agent given a goal you just train an agent" }, { "end": 1996.9399999999998, "start": 1991.34, "text": " to complete the actions where that go and you keep telling the robot like hey you know" }, { "end": 2003.1000000000001, "start": 1996.94, "text": " you already did this how about like actually getting closer to the object you give feedback" }, { "end": 2008.3, "start": 2003.1000000000001, "text": " in between and it can take what it's done previously and your current feedback into account" }, { "end": 2012.74, "start": 2008.3, "text": " and try to like change its trajectory stuff like that." }, { "end": 2016.74, "start": 2012.74, "text": " Think about rewards in this itself as like being replaced by language feedback that would" }, { "end": 2018.38, "start": 2016.74, "text": " be super cool to have." }, { "end": 2024.5, "start": 2018.38, "text": " So there are so many more variations of this model that people haven't really explored" }, { "end": 2026.9, "start": 2024.5, "text": " yet and I'm hoping they explore." }, { "end": 2030.98, "start": 2026.9, "text": " It's also a matter of like yeah the amount of compute you have access to and you know" }, { "end": 2034.66, "start": 2030.98, "text": " being able to try these ideas needs like good compute or a good foundation models to build" }, { "end": 2038.98, "start": 2034.66, "text": " with so that that needs a little bit of time to change to." }, { "end": 2043.7800000000002, "start": 2038.98, "text": " Let's say this approach really does take over the tasks where there are large data sets" }, { "end": 2048.94, "start": 2043.7800000000002, "text": " then maybe there's still room for other approaches for small data problems like we might face" }, { "end": 2050.46, "start": 2048.94, "text": " in say medicine." }, { "end": 2056.34, "start": 2050.46, "text": " Yeah, yeah potentially though I do hope there's like a large pre-trained model for medicine" }, { "end": 2057.34, "start": 2056.34, "text": " too." }, { "end": 2058.34, "start": 2057.34, "text": " I think that would be awesome." }, { "end": 2062.46, "start": 2058.34, "text": " If you can leverage like insights across different medical problems and big that into one" }, { "end": 2068.82, "start": 2062.46, "text": " model that might be better modeled and just trading a small model on a small data set." }, { "end": 2073.78, "start": 2068.82, "text": " We haven't reached those points yet but it's good to be optimistic about it." }, { "end": 2079.6600000000003, "start": 2073.78, "text": " So Jan LeCoon the storage researcher from Facebook AI Research describes a cake with the" }, { "end": 2084.86, "start": 2079.6600000000003, "text": " cake icing and the cherry with the cake as unsupervised learning and icing is supervised" }, { "end": 2086.42, "start": 2084.86, "text": " in the cherries R.O." 
}, { "end": 2090.82, "start": 2086.42, "text": " How do you relate that metaphor to what you're doing here with the transition transformer?" }, { "end": 2094.1400000000003, "start": 2090.82, "text": " Yeah that's a great question." }, { "end": 2102.9, "start": 2094.1400000000003, "text": " So the cake is in some sense like the foundation to understand the world, perceive and understand" }, { "end": 2104.3, "start": 2102.9, "text": " the world right." }, { "end": 2106.02, "start": 2104.3, "text": " Decision transformer is the cherry." }, { "end": 2108.58, "start": 2106.02, "text": " It doesn't look at the cake part." }, { "end": 2112.54, "start": 2108.58, "text": " So for example you take the Atari experiment in a decision transformer." }, { "end": 2120.3, "start": 2112.54, "text": " The processes pixels in the form of like taking the frame and getting a latent embedding" }, { "end": 2125.42, "start": 2120.3, "text": " from the CNN and then the transformer runs up on double those latens." }, { "end": 2129.54, "start": 2125.42, "text": " I think the cake handles the part of like what is the CNN that's used to encode the" }, { "end": 2130.54, "start": 2129.54, "text": " latent." }, { "end": 2136.3, "start": 2130.54, "text": " The cherry is the part where you're figuring out okay once you process your sensory stream" }, { "end": 2141.62, "start": 2136.3, "text": " how do you actually design actions at a motor level." }, { "end": 2146.5, "start": 2141.62, "text": " If you look at Yanlequin stocks there is a part that he always says like we haven't" }, { "end": 2152.66, "start": 2146.5, "text": " really figured out how to do action hierarchies like learning new primitives motor primitives" }, { "end": 2154.46, "start": 2152.66, "text": " action hierarchies." }, { "end": 2159.8599999999997, "start": 2154.46, "text": " That is a part decision transformer gets at and as for the actual cake itself we need to" }, { "end": 2165.18, "start": 2159.8599999999997, "text": " good we need to build really good representation learning and generator models of high dimensional" }, { "end": 2166.98, "start": 2165.18, "text": " sensory data." }, { "end": 2175.54, "start": 2166.98, "text": " Like the work I did on CPC addresses that currently is addressing that video GPT." }, { "end": 2178.7, "start": 2175.54, "text": " So those are those are more in that space." }, { "end": 2182.66, "start": 2178.7, "text": " Decision transformer doesn't have much to do with the cake itself it has more to do with" }, { "end": 2183.66, "start": 2182.66, "text": " the cherry." }, { "end": 2187.1, "start": 2183.66, "text": " Another way of looking at it is that you kind of transcended the cake altogether the" }, { "end": 2192.02, "start": 2187.1, "text": " three different components because the decision transformer really combines all these things" }, { "end": 2194.02, "start": 2192.02, "text": " in a way that maybe is." }, { "end": 2195.02, "start": 2194.02, "text": " Yeah." }, { "end": 2196.02, "start": 2195.02, "text": " Yeah." }, { "end": 2205.22, "start": 2196.02, "text": " Hopefully you can build a really good video model and then use that as a foundation model" }, { "end": 2211.86, "start": 2205.22, "text": " to like fine tune by adding rewards and so it kind of like builds the entire cake in" }, { "end": 2215.1, "start": 2211.86, "text": " one large giant transformer I think that will be awesome." 
}, { "end": 2221.1, "start": 2215.1, "text": " But yeah like look I'm obviously saying a lot more than what the paper does so the paper" }, { "end": 2224.18, "start": 2221.1, "text": " itself hasn't shown any result in that level." }, { "end": 2229.3799999999997, "start": 2224.18, "text": " So last I checked I saw 91 citations I think on Google Scholar so people are building" }, { "end": 2234.1, "start": 2229.3799999999997, "text": " on this any comments on things that people have already built on this sounds like you have" }, { "end": 2237.2599999999998, "start": 2234.1, "text": " definitely ideas for the future but in terms of what's been done so far." }, { "end": 2238.2599999999998, "start": 2237.2599999999998, "text": " Yeah." }, { "end": 2239.7799999999997, "start": 2238.2599999999998, "text": " Or what's in progress any comments on that?" }, { "end": 2247.54, "start": 2239.7799999999997, "text": " Yeah I saw some good papers but most of the citations are basically like oh yeah offline" }, { "end": 2252.98, "start": 2247.54, "text": " RL has been applied with the transformer or like transformers are awesome and they've been" }, { "end": 2258.18, "start": 2252.98, "text": " getting to be used in RL or like some people just use that as a baseline in their new offline" }, { "end": 2259.9, "start": 2258.18, "text": " RL algorithm." }, { "end": 2265.06, "start": 2259.9, "text": " So I'm not so happy with like the citations itself like I mean the short like getting" }, { "end": 2270.38, "start": 2265.06, "text": " 100 citations in less than a year is awesome but it's not like they're like like genuine" }, { "end": 2274.66, "start": 2270.38, "text": " like genuinely building a better model has happened yet." }, { "end": 2280.94, "start": 2274.66, "text": " My feeling is people should just try to make a much larger model with a lot more trajectories" }, { "end": 2282.42, "start": 2280.94, "text": " and train it." }, { "end": 2284.98, "start": 2282.42, "text": " And that would be the real deal." }, { "end": 2289.86, "start": 2284.98, "text": " It's boring very likely not going to get a near looks paper or something unless you know" }, { "end": 2293.7000000000003, "start": 2289.86, "text": " you spend a lot of time in figuring out like new capabilities that come out of such models" }, { "end": 2298.06, "start": 2293.7000000000003, "text": " but that is more likely the correcting to do." }, { "end": 2304.38, "start": 2298.06, "text": " There are some interesting like work done by people in Peter's lab I think like they" }, { "end": 2311.34, "start": 2304.38, "text": " the some work that tried to do a decision transformer kind of thing for robots like like figuring" }, { "end": 2317.78, "start": 2311.34, "text": " out like what set of primitives to take and like leveraging pre-trained APIs like co-pilot" }, { "end": 2320.5, "start": 2317.78, "text": " or GPT 3." }, { "end": 2325.3, "start": 2320.5, "text": " So figuring out how to integrate language you know decision transformer will be super" }, { "end": 2326.3, "start": 2325.3, "text": " cool." }, { "end": 2330.7400000000002, "start": 2326.3, "text": " But yeah I'm there's no particular work that I'm like able to highlight." }, { "end": 2333.54, "start": 2330.7400000000002, "text": " I'm saying oh yeah this is incredibly awesome follow up." 
}, { "end": 2338.42, "start": 2333.54, "text": " Yeah citations are sometimes misleading right like they they you might get a lot of citations" }, { "end": 2343.38, "start": 2338.42, "text": " but it's not like people are actually like like like really building new variants of" }, { "end": 2344.46, "start": 2343.38, "text": " your model." }, { "end": 2349.62, "start": 2344.46, "text": " But there was that there was like one work from Shane goo I think from Google brain that" }, { "end": 2353.98, "start": 2349.62, "text": " 3 trained on Wikipedia and then find you know the decision transformer and showed some" }, { "end": 2355.98, "start": 2353.98, "text": " gains that was kind of interesting." }, { "end": 2359.1800000000003, "start": 2355.98, "text": " So yeah people are getting under right right ideas for sure." }, { "end": 2363.54, "start": 2359.1800000000003, "text": " So we've talked about the difference between the RL classical RL paradigm and the supervised" }, { "end": 2367.54, "start": 2363.54, "text": " learning paradigm and the decision transformer kind of combines those." }, { "end": 2370.58, "start": 2367.54, "text": " What about the axis of model 3 and model based?" }, { "end": 2371.58, "start": 2370.58, "text": " Yeah." }, { "end": 2374.62, "start": 2371.58, "text": " It seems like there's sort of like an implicit model in here." }, { "end": 2375.62, "start": 2374.62, "text": " Yeah." }, { "end": 2376.62, "start": 2375.62, "text": " Can you talk about that?" }, { "end": 2377.62, "start": 2376.62, "text": " Yeah." }, { "end": 2378.62, "start": 2377.62, "text": " Yeah." }, { "end": 2379.62, "start": 2378.62, "text": " Yeah." }, { "end": 2384.22, "start": 2379.62, "text": " The decision transformer yeah it's the definition of model based model 3 are like you know" }, { "end": 2389.46, "start": 2384.22, "text": " it's so hard like different people have different ways to think about it." }, { "end": 2394.2599999999998, "start": 2389.46, "text": " To me like if you just say yeah model based is like anything that predicts the future state" }, { "end": 2398.46, "start": 2394.26, "text": " given the previous state and actions then decision transformer is not a model based" }, { "end": 2400.5400000000004, "start": 2398.46, "text": " or the method." }, { "end": 2405.94, "start": 2400.5400000000004, "text": " But if you see a model basis anything that just models the future and what part of the future" }, { "end": 2410.3, "start": 2405.94, "text": " you choose to models up to you then decision transformer is a model based model like like" }, { "end": 2411.42, "start": 2410.3, "text": " RL algorithm." }, { "end": 2416.34, "start": 2411.42, "text": " We intentionally chose not to predict the future pixels or future states because in some" }, { "end": 2420.5400000000004, "start": 2416.34, "text": " task it's likely to help you and some task it's not likely to help you and then you'll" }, { "end": 2425.82, "start": 2420.54, "text": " have the hard code like percentage of the loss that you want to be allocated for predicting" }, { "end": 2429.58, "start": 2425.82, "text": " the future state and that will change for different environments." }, { "end": 2432.86, "start": 2429.58, "text": " So that makes it like not so clean as it is now." 
}, { "end": 2440.34, "start": 2432.86, "text": " But if you do have like say some another loss for not just predicting the future actions" }, { "end": 2445.54, "start": 2440.34, "text": " but also the future states and future rewards then that just becomes the ultimate world" }, { "end": 2446.54, "start": 2445.54, "text": " model right." }, { "end": 2451.98, "start": 2446.54, "text": " So that's just an exact GPT and that is model based." }, { "end": 2456.3, "start": 2451.98, "text": " So now you can easily change DT to be in model based by just removing the masking of the" }, { "end": 2458.66, "start": 2456.3, "text": " state losses." }, { "end": 2462.54, "start": 2458.66, "text": " It might need the right kind of latent space to predict the future." }, { "end": 2467.38, "start": 2462.54, "text": " It's not ideal for you to just predict the future pixels in Atari." }, { "end": 2471.22, "start": 2467.38, "text": " Just think about it in terms of the loss function right like you have one single dimension" }, { "end": 2476.5, "start": 2471.22, "text": " for predicting the future action and you have like 84 by 84 dimensions for predicting" }, { "end": 2478.42, "start": 2476.5, "text": " one future state." }, { "end": 2486.02, "start": 2478.42, "text": " So most of the compute of the models can be allocated to like those 6000 pixels, 6500," }, { "end": 2495.26, "start": 2486.02, "text": " 400 pixels for the frame and just like one dimension for the action and it might what" }, { "end": 2500.66, "start": 2495.26, "text": " is the point of having a model that just like fills up the background of the future state" }, { "end": 2503.5, "start": 2500.66, "text": " but takes the wrong action." }, { "end": 2505.1, "start": 2503.5, "text": " That's not very useful." }, { "end": 2509.98, "start": 2505.1, "text": " So if you could do this in a good latent abstraction where there's like a good latent space" }, { "end": 2514.46, "start": 2509.98, "text": " for the state and you predict the latent then that's pretty cool and that's what like" }, { "end": 2519.8199999999997, "start": 2514.46, "text": " the ideas like dolly do are getting at like you just like learn a prior on the latent" }, { "end": 2523.2999999999997, "start": 2519.8199999999997, "text": " space and then you decode it with a diffusion of sampler." }, { "end": 2524.46, "start": 2523.2999999999997, "text": " Those are in the code." }, { "end": 2530.14, "start": 2524.46, "text": " I think those ideas should be investigated for her to move to video GPT now." }, { "end": 2536.02, "start": 2530.14, "text": " The GPT is what the name says it's a GPT for video models basically how do you learn" }, { "end": 2541.8199999999997, "start": 2536.02, "text": " a generator model for video you can't just throw a straight GPT transformer at it at a pixel" }, { "end": 2545.46, "start": 2541.8199999999997, "text": " level you could but it needs a lot more compute and memory." }, { "end": 2551.06, "start": 2545.46, "text": " You learn a latent abstraction of the video that down samples the video into a latent vector" }, { "end": 2555.2599999999998, "start": 2551.06, "text": " and then you learn a GPT at the latent level and then you up sample those latens back" }, { "end": 2559.14, "start": 2555.2599999999998, "text": " into the pixels to actually like create a video." 
}, { "end": 2563.7799999999997, "start": 2559.14, "text": " When you train such a model on a large enough video data set and it becomes a pretty good" }, { "end": 2568.46, "start": 2563.7799999999997, "text": " world model for you given the initial frame and predict the future and so on." }, { "end": 2570.94, "start": 2568.46, "text": " So how do you evaluate a video model?" }, { "end": 2575.1, "start": 2570.94, "text": " There are metrics in general evaluating generative models is really hard." }, { "end": 2580.02, "start": 2575.1, "text": " There are two ways to evaluate video GPT like models one is just a likelihood it gets" }, { "end": 2585.06, "start": 2580.02, "text": " in the latent space because it's you're still training a GPT on the latent tokens so" }, { "end": 2590.46, "start": 2585.06, "text": " you could measure the bits per dimension the log likelihood in the latent space but that's" }, { "end": 2594.7799999999997, "start": 2590.46, "text": " not a very useful metric because these bits are like not perceptible bits." }, { "end": 2600.7, "start": 2594.7799999999997, "text": " So one thing you could do is measure something called a fresh it video distance just like" }, { "end": 2609.06, "start": 2600.7, "text": " how people measure a fresh it in the inception distance for images where they take like a" }, { "end": 2614.34, "start": 2609.06, "text": " lot of samples from the model they put it into the latent space of the pretrain inception" }, { "end": 2621.26, "start": 2614.34, "text": " architecture that's just an image classifier and they take a bunch of samples from the" }, { "end": 2626.02, "start": 2621.26, "text": " actual data distribution to and then they compare the statistics between these two batches" }, { "end": 2630.3, "start": 2626.02, "text": " in terms of first order and second order and come up with a metric." }, { "end": 2634.02, "start": 2630.3, "text": " So you could do the same thing for a video to take samples from the model you take samples" }, { "end": 2640.2200000000003, "start": 2634.02, "text": " from the actual video data set take like a pretrain video classifier like a kinetics classifier" }, { "end": 2647.06, "start": 2640.22, "text": " and take the latent embedding of that and just compute the statistics in that space." }, { "end": 2652.4199999999996, "start": 2647.06, "text": " It is not the best metric in the sense that it's not like you optimize for FVD and you" }, { "end": 2658.18, "start": 2652.4199999999996, "text": " optimize for human like what like you know judgments of what is what seems like something" }, { "end": 2664.2599999999998, "start": 2658.18, "text": " that would you know count for the Turing test of like oh yeah is this video from a like" }, { "end": 2670.5, "start": 2664.26, "text": " a YouTube or a generative model like it's not as good as getting like high correlation" }, { "end": 2675.86, "start": 2670.5, "text": " with that but you know we're not even at a point in video generation where a human would" }, { "end": 2682.1400000000003, "start": 2675.86, "text": " find it hard to save whether that's this video is from AI or from a human until we get" }, { "end": 2685.1800000000003, "start": 2682.1400000000003, "text": " there I think optimizing for FVD sounds fine." 
}, { "end": 2690.1400000000003, "start": 2685.1800000000003, "text": " So I guess with image generation, again papers and such it's very convenient to just show" }, { "end": 2698.66, "start": 2690.14, "text": " some images were so used to evaluating faces that any discrepancy in faces is very easy" }, { "end": 2701.66, "start": 2698.66, "text": " for our eyes to detect." }, { "end": 2707.7799999999997, "start": 2701.66, "text": " Well it's getting harder and harder right like people say it's hard for them to know" }, { "end": 2713.7, "start": 2707.7799999999997, "text": " if like a dolly image is from dolly or from a human artist these days." }, { "end": 2717.7799999999997, "start": 2713.7, "text": " You can't it is easy to find out though like if you actually look at the up something" }, { "end": 2723.5, "start": 2717.78, "text": " artifacts I can dolly you could actually like zoom in and like figure out like oh yeah" }, { "end": 2730.46, "start": 2723.5, "text": " this doesn't seem like something human would have done but GAN papers also use FIDs to" }, { "end": 2735.1800000000003, "start": 2730.46, "text": " compare like you look at style GAN they always report the FIDs that they get on the new data" }, { "end": 2736.82, "start": 2735.1800000000003, "text": " set it's a good metric." }, { "end": 2739.78, "start": 2736.82, "text": " So yeah video papers can use FVDs." }, { "end": 2745.86, "start": 2739.78, "text": " What do you predict in terms of how far away are we from video GBT type systems joining" }, { "end": 2751.3, "start": 2745.86, "text": " the other ranks of the other the other types of mediums in terms of really high quality" }, { "end": 2753.3, "start": 2751.3, "text": " video being generally." }, { "end": 2755.9, "start": 2753.3, "text": " Yeah yeah it's likely to happen." }, { "end": 2763.5, "start": 2755.9, "text": " I think if the folks like like like dolly is pretty awesome and so that nothing really" }, { "end": 2768.02, "start": 2763.5, "text": " stops people from building something like that for video it requires a lot of computer" }, { "end": 2769.42, "start": 2768.02, "text": " and effort and time." }, { "end": 2774.6200000000003, "start": 2769.42, "text": " Yeah I mean video GBT is not that good in terms of quality it's more like an idea paper" }, { "end": 2781.1, "start": 2774.62, "text": " with some reasonable results but to actually have like a video dolly level like like a video" }, { "end": 2785.3399999999997, "start": 2781.1, "text": " that you can actually upload to YouTube and people might not be able to say this is completely" }, { "end": 2788.94, "start": 2785.3399999999997, "text": " AI generated I think it's a very likely to happen." }, { "end": 2792.98, "start": 2788.94, "text": " Okay like if you were to take a bet that it's not going to happen in the next five years" }, { "end": 2795.2999999999997, "start": 2792.98, "text": " you're likely to lose that bet." }, { "end": 2798.8199999999997, "start": 2795.2999999999997, "text": " But whether it's going to happen in one year or two years from now or like something" }, { "end": 2800.38, "start": 2798.8199999999997, "text": " like that it's not very clear." 
}, { "end": 2806.7000000000003, "start": 2800.38, "text": " The amount of information in a video versus these other mediums like text for example is" }, { "end": 2810.7000000000003, "start": 2806.7000000000003, "text": " a page of text is such a small amount of information compared to even a few seconds" }, { "end": 2815.86, "start": 2810.7000000000003, "text": " of video that I wonder where this problem the video generation problem lies compared" }, { "end": 2820.86, "start": 2815.86, "text": " to the current levels of compute that we have available for the current generation of" }, { "end": 2826.26, "start": 2820.86, "text": " of compute and GPUs that we have to have the capacity to to really do this at high quality." }, { "end": 2831.6600000000003, "start": 2826.26, "text": " So in terms of bits you do get more bits from a video but in terms of useful bits it" }, { "end": 2838.0600000000004, "start": 2831.6600000000003, "text": " probably is not not an extremely high order of magnitude because there's a lot of redundancy" }, { "end": 2845.7000000000003, "start": 2838.0600000000004, "text": " in a video right like just take a frame itself you don't need to ingest one k by one k frames" }, { "end": 2851.26, "start": 2845.7000000000003, "text": " to a model to learn a representation of it like probably 64 versus 64 is fine like" }, { "end": 2856.7400000000002, "start": 2851.26, "text": " most of the information can be preserved that like a latent embedding on top of that" }, { "end": 2861.38, "start": 2856.7400000000002, "text": " can still understand what exists in the frame in terms of like temporal information like" }, { "end": 2865.94, "start": 2861.38, "text": " you don't need to ingest it at the same frames per second that's needed for watching a" }, { "end": 2873.38, "start": 2865.94, "text": " high quality video on YouTube right like 16 FPS or like 24 FPS probably not needed for" }, { "end": 2875.94, "start": 2873.38, "text": " for an AI to train a model on top of that." }, { "end": 2879.78, "start": 2875.94, "text": " It is needed for the generative quality to produce such a model." }, { "end": 2886.1800000000003, "start": 2879.78, "text": " Yeah you do need to produce like a 20 FPS model like like for example if you were to" }, { "end": 2892.1000000000004, "start": 2886.1800000000003, "text": " produce like a 10 second video you don't just want to produce 10 frames right you do you" }, { "end": 2897.7000000000003, "start": 2892.1000000000004, "text": " want to produce something like 200 or like 240 frames whatever FPS you're happy viewing" }, { "end": 2904.7400000000002, "start": 2897.7000000000003, "text": " on on on YouTube and similarly you don't want to produce 64 versus 64 frames you do" }, { "end": 2910.9399999999996, "start": 2904.74, "text": " want to produce something like 720 or 1080 so that already makes you output dimension incredibly" }, { "end": 2917.2999999999997, "start": 2910.9399999999996, "text": " large but that's not what stops people from actually solving the video generation problem" }, { "end": 2924.9799999999996, "start": 2917.2999999999997, "text": " because you truly model the useful bits that's the hard part and if you actually have" }, { "end": 2930.7799999999997, "start": 2924.9799999999996, "text": " access to useful bits then the same GPT is that work on text will work on video too." 
}, { "end": 2935.46, "start": 2930.78, "text": " The difficult part is figuring out what are the useful bits like like you can't hard" }, { "end": 2942.26, "start": 2935.46, "text": " code it like you can't say hey yeah I'll just take the motion vectors and reference frame" }, { "end": 2950.1000000000004, "start": 2942.26, "text": " just like how MPEGs you know and code X are coded you could potentially do that right" }, { "end": 2954.6600000000003, "start": 2950.1000000000004, "text": " you just take you just take the source code of like a great video compression algorithm" }, { "end": 2962.02, "start": 2954.66, "text": " like like you know actually get whatever is communicated in terms of the code X and learn" }, { "end": 2968.8599999999997, "start": 2962.02, "text": " a generative model on that and then use the same decoder in the code X so that like you're" }, { "end": 2974.94, "start": 2968.8599999999997, "text": " computationally like much better but your video generation the generative model on those" }, { "end": 2982.3799999999997, "start": 2974.94, "text": " bits might actually be pretty hard to potentially model so it's not clear and it might actually" }, { "end": 2986.98, "start": 2982.38, "text": " be limiting because these code X are designed for just like preserving as much information" }, { "end": 2991.3, "start": 2986.98, "text": " as possible and not really learning anything semantic but the purpose is being made right" }, { "end": 2995.46, "start": 2991.3, "text": " like you look at dolly it's awesome it's it's learning generative model at a latent" }, { "end": 3002.82, "start": 2995.46, "text": " level VQVA itself is like that video GPT uses a VQVA and VQVA is like learn a down" }, { "end": 3008.7400000000002, "start": 3002.82, "text": " sap representation of your input and learns a generative model on that so we are already" }, { "end": 3015.74, "start": 3008.74, "text": " seeing signs of life there in terms of how to do it but once we have learned the like" }, { "end": 3020.3799999999997, "start": 3015.74, "text": " really correct representation to learn a generative model on top of that's going to be the" }, { "end": 3026.9399999999996, "start": 3020.3799999999997, "text": " moment where video generation really shines in video GPT we use the VQVA and that was" }, { "end": 3034.1, "start": 3026.9399999999996, "text": " the same stack use for code X or copilot so sorry jukebox and other like like like you" }, { "end": 3041.8199999999997, "start": 3034.1, "text": " know Aaron Vandian's work on like VQVA 2 that produced these 1224 by 1224 images it's" }, { "end": 3046.7, "start": 3041.8199999999997, "text": " been shown to work for images it's not been shown to work reliably for videos yet but" }, { "end": 3053.9, "start": 3046.7, "text": " but I would imagine that someone will figure this out so VQVA is is is a discrete V AE" }, { "end": 3059.7, "start": 3053.9, "text": " is that right yeah for for generating samples there's a code book so there's like a some" }, { "end": 3069.18, "start": 3059.7, "text": " kind of limited capacity that it has yeah it is lossy you can think of video sorry VQVA" }, { "end": 3075.2599999999998, "start": 3069.18, "text": " is as like a neural network learned version of JPEG or MPEG it's basically doing the" }, { "end": 3080.54, "start": 3075.2599999999998, "text": " same thing it's trying to pack in the bits in the latent space and then go back from" }, { "end": 3086.4199999999996, "start": 3080.54, "text": " those bits to the actual input 
minimize the reconstruction loss in a JPEG you're going" }, { "end": 3091.58, "start": 3086.42, "text": " to hard code these components by saying yeah the way you down sample is like you you take" }, { "end": 3097.98, "start": 3091.58, "text": " these 8 by 8 blocks of the image and then you run like discrete consent transform at" }, { "end": 3103.82, "start": 3097.98, "text": " up that and you quantize those coefficients and then you run an inverse DCT and then you" }, { "end": 3109.9, "start": 3103.82, "text": " chunk everything together again and you get back the image right and which how many bits" }, { "end": 3114.7000000000003, "start": 3109.9, "text": " you quantize in your DCT design like the lossy in ads of your JPEG and that also gives" }, { "end": 3119.7, "start": 3114.7, "text": " you the compression ratio just like that VQVA is how like how many bits you're using" }, { "end": 3124.8199999999997, "start": 3119.7, "text": " in the code book do you use like 10 bits or 8 bits and that decides how lossy it is and" }, { "end": 3131.46, "start": 3124.8199999999997, "text": " based on that you get like pretty good like reconstructions so it is doing the thing" }, { "end": 3137.18, "start": 3131.46, "text": " that JPEGs are doing but in a more neural way and that's cool but you also want something" }, { "end": 3142.5, "start": 3137.18, "text": " that's even more semantic than VQ ideas like uncle Purgating there like like the dolly" }, { "end": 3147.62, "start": 3142.5, "text": " tools basically taking the clip vector and then building a general demol on top of that" }, { "end": 3152.42, "start": 3147.62, "text": " so that clip is even more semantic than a VQ but then what is like what is the clip of" }, { "end": 3159.38, "start": 3152.42, "text": " video that's unclear these are like difficult difficult questions right now nobody knows" }, { "end": 3164.3, "start": 3159.38, "text": " the answers to these yet and you're right you do you need a lot of compute for making" }, { "end": 3172.7400000000002, "start": 3164.3, "text": " video work and I honestly don't know if if like somebody would make it work very fast" }, { "end": 3177.34, "start": 3172.7400000000002, "text": " but I think the likelihood that nobody makes it work in five years from now is very" }, { "end": 3181.78, "start": 3177.34, "text": " low it seems to me there's some kind of tension like you said with the semantics like I" }, { "end": 3187.94, "start": 3181.78, "text": " guess in RL there's the the bullet pixel problem where you know there could be just one pixel" }, { "end": 3191.46, "start": 3187.94, "text": " moving across the string screen that's such a small detail of the image but it ends" }, { "end": 3196.14, "start": 3191.46, "text": " up changing the plot of the game you the character dies and so when you're doing this down" }, { "end": 3202.3, "start": 3196.14, "text": " sampling and up sampling how do we know if we're not missing some of the detail that's" }, { "end": 3207.1, "start": 3202.3, "text": " so critical to the to the semantics of the video like someone has a tiny little pin" }, { "end": 3211.26, "start": 3207.1, "text": " that they're popping the balloons with and maybe that gets lost in the down sampling" }, { "end": 3219.54, "start": 3211.26, "text": " is there something causal needed to make this actually meaningful maybe maybe some notion" }, { "end": 3226.42, "start": 3219.54, "text": " of temporal consistency and motion is necessary but I don't know right like somebody could" }, { "end": 
3231.7, "start": 3226.42, "text": " make it work with just a single latent vector it's difficult to say it is not going to work" }, { "end": 3236.82, "start": 3231.7, "text": " if unless you have an existence proof that I'm sorry unless you have a theoretical proof" }, { "end": 3241.9, "start": 3236.82, "text": " that it cannot work it's difficult to say because the way things work in deep learning" }, { "end": 3246.74, "start": 3241.9, "text": " is existence probes basically oh yeah I made it work here it's so far it doesn't exist" }, { "end": 3251.8599999999997, "start": 3246.74, "text": " yet but proving that it cannot exist is much harder but it'll be nice to have most structured" }, { "end": 3260.06, "start": 3251.8599999999997, "text": " latens for sure that have like some notion of optical flow like temporal latent and then" }, { "end": 3264.8199999999997, "start": 3260.06, "text": " image latent in a form like initial frame and then a generative model just like besides" }, { "end": 3270.2999999999997, "start": 3264.8199999999997, "text": " the initial canvas to paint and like this the motions to like you know encode and the" }, { "end": 3274.5, "start": 3270.2999999999997, "text": " decoder just figures things out that'll be super cool to have." }, { "end": 3280.26, "start": 3274.5, "text": " So in terms of moving images we've also seen some awesome results in the Nerf line of" }, { "end": 3284.7, "start": 3280.26, "text": " work can you say anything about the difference or similarity between what's happening" }, { "end": 3286.66, "start": 3284.7, "text": " here and what's happening there?" }, { "end": 3296.58, "start": 3286.66, "text": " Yeah Nerf is trying to make less use of neural decoders and more use of volumetric rendering" }, { "end": 3304.34, "start": 3296.58, "text": " algorithms so it's in some sense offloading some compute from a neural" }, { "end": 3310.86, "start": 3304.34, "text": " decoder to something that actually is physically consistent in terms of viewing the same" }, { "end": 3317.38, "start": 3310.86, "text": " scene from different views geometrically." }, { "end": 3326.58, "start": 3317.38, "text": " So whether that'll be part of like future generative models that is not clear yet but it'll" }, { "end": 3332.86, "start": 3326.58, "text": " be nice to have something like that because it definitely requires less model capacity" }, { "end": 3344.3, "start": 3332.86, "text": " than a pure video GPT like model and then might actually be more consistent physically" }, { "end": 3350.1800000000003, "start": 3344.3, "text": " so that makes it like more reliable in terms of like building a technology around it." }, { "end": 3356.26, "start": 3350.1800000000003, "text": " So it's pretty exciting how to actually combine the large representation learning models" }, { "end": 3360.98, "start": 3356.26, "text": " with something like Nerf is still like something people are exploring." }, { "end": 3365.38, "start": 3360.98, "text": " And moving on to some more general issues as a research scientist can you talk about" }, { "end": 3370.3, "start": 3365.38, "text": " how you plan your research and your roadmap you know what to pursue the explore exploit" }, { "end": 3373.34, "start": 3370.3, "text": " trade-off in your in planning through your research?" 
}, { "end": 3381.82, "start": 3373.34, "text": " So I'm currently really in exploitation mode like there's stuff that already works and" }, { "end": 3386.18, "start": 3381.82, "text": " there's like a certain for open AI formula of doing things and I'm just trying to do it" }, { "end": 3387.94, "start": 3386.18, "text": " like that." }, { "end": 3392.66, "start": 3387.94, "text": " Evolution is more risky of course right like you you might end up with something amazing" }, { "end": 3397.3, "start": 3392.66, "text": " but the probability of that is low but the value function is much higher." }, { "end": 3401.86, "start": 3397.3, "text": " Explatation is value function might be low but probability success is high." }, { "end": 3406.46, "start": 3401.86, "text": " You ideally you want to be in the period of optimality between probability of success" }, { "end": 3407.46, "start": 3406.46, "text": " and value function." }, { "end": 3412.2200000000003, "start": 3407.46, "text": " Yeah like you do want to have a big cake but you want to eat it too kind of it's hard" }, { "end": 3418.5, "start": 3412.22, "text": " I'm it's not like I've succeeded multiple times to tell you exactly how to do it and I" }, { "end": 3425.3799999999997, "start": 3418.5, "text": " don't know I'm trying to figure out myself but if you if you you would ask me whether I'm" }, { "end": 3428.74, "start": 3425.3799999999997, "text": " exploring or exploiting right now I'm definitely exploiting." }, { "end": 3431.2999999999997, "start": 3428.74, "text": " Are there anything else you want to share with our audience today?" }, { "end": 3436.5, "start": 3431.2999999999997, "text": " AI is really exciting you know I'm sure it's overwhelming to see like every day people" }, { "end": 3443.46, "start": 3436.5, "text": " making so much progress across different like you know companies and research labs." }, { "end": 3449.94, "start": 3443.46, "text": " If you're more junior I would say don't get discouraged and just try to like work on" }, { "end": 3457.42, "start": 3449.94, "text": " the fundamentals and pick something you really cannot dedicate like 80 hours that we" }, { "end": 3459.82, "start": 3457.42, "text": " connect and keep making progress." }, { "end": 3464.86, "start": 3459.82, "text": " Yeah if you're interested in doing a PhD or something like that it's worth really thinking" }, { "end": 3469.98, "start": 3464.86, "text": " hard about whether you should be working on deep learning or even if you work on deep" }, { "end": 3475.78, "start": 3469.98, "text": " learning doing it the same manner as opening our Google maybe something worth thinking" }, { "end": 3481.82, "start": 3475.78, "text": " about because the more and more clear it's getting that you need large models to get" }, { "end": 3487.34, "start": 3481.82, "text": " amazing generalization results at this time it's hard for you to have an impact outside" }, { "end": 3491.82, "start": 3487.34, "text": " of what people are doing already so it's good to like rethink the game if you're in a" }, { "end": 3496.54, "start": 3491.82, "text": " place where you don't have as much compute you need to like figure out a new formula or" }, { "end": 3499.78, "start": 3496.54, "text": " like look at some places that people are already looking into." }, { "end": 3503.5800000000004, "start": 3499.78, "text": " Arvind Shrinivas thank you so much for sharing your time and your insight with us here at" }, { "end": 3504.5800000000004, "start": 3503.5800000000004, "text": " TalkRL." 
}, { "end": 3534.54, "start": 3504.58, "text": " Thank you Robin." } ]
Rohin Shah
DeepMind Research Scientist Dr. Rohin Shah on Value Alignment, Learning from Human feedback, Assistance paradigm, the BASALT MineRL competition, his Alignment Newslett...
https://media.transistor…ae3.mp3?src=site
TalkRL podcast is all reinforcement learning, all the time, featuring brilliant guests, both research and applied. Join the conversation on Twitter at TalkRL podcast. I'm your host, Robin Chohan. Rohin Shah is a research scientist at DeepMind and the editor and main contributor of the Alignment Newsletter. Thanks so much for joining us today, Rohin. Yeah, thanks for having me, Robin. Let's get started with: how do you like to describe your area of interest? On my website, the thing that I say is that I'm interested in the long-term trajectory of AI, because it seems like AI is becoming more and more capable over time, with many people thinking that someday we are going to get to artificial general intelligence, or AGI, where AI systems will be able to replace humans at most economically valuable tasks. That just seems like such an important event in the history of humanity. It seems like it would radically transform the world, and so it seems both important and interesting to understand what is going to happen and to see how we can make that important stuff happen better, so that we get good outcomes instead of bad outcomes. That's a very general statement, but I would say that's a pretty big area of interest for me. I often spend most of my time on a particular sub-question within that, which is: what are the chances that these AGI systems will be misaligned with humanity, in the sense that they will want to do things other than what humans want them to do? So A, what is the risk of that and how can it arise, and B, how can we prevent that problem from happening? Cool. Okay, so we're going to talk about some of this in more general terms later on, but first let's get a little more specific about some of your recent papers. First we have the MineRL BASALT competition on learning from human feedback, a benchmark for agents that solve almost-lifelike tasks. I gather this is based on MineRL, the Minecraft-based RL environment. We saw some competitions using that before, but here you're doing something different with MineRL. Can you tell us about BASALT and what's the idea here? So I think the basic idea is that a reward function, which is the typical tool that you use in reinforcement learning and which I expect your listeners know about, is actually a pretty poor way of specifying what you want an AI system to do if you have to write it down by hand. Reinforcement learning treats the reward function as a specification of exactly what the optimal behavior is in every possible circumstance that could possibly arise. When you wrote down that reward function, did you think of every possible situation that could ever arise and check whether your reward function was specifying the correct behavior in that situation? No, you did not do that. And so we already have lots and lots of examples of cases where people tried to write down a reward function that they thought would lead to good behavior, they actually ran reinforcement learning or some other optimization algorithm with that reward function, and they found some totally unexpected solution that did get high reward but didn't do what the designer wanted it to do. And so this motivates the question: all right, how can we specify what we want the agent to do without using handwritten reward functions?
The general class of approaches that has been developed in response to this is what I call learning from human feedback, or LfHF. The idea here is that you consider some possible situations where the AI could do things and then you ask a human: hey, in these particular situations, what should the AI system do? So you're making more local queries and local specifications, rather than having to reason about every possible circumstance that could ever arise. And then, given a large data set of human feedback on various situations, you can train an agent to meet that specification as best as it can. So people have been developing these techniques, and they include things like imitation learning, where you learn from human demonstrations of how to do the task; learning from comparisons, where humans look at two videos of agent behavior and say, the left one is better than the right one; or corrections, where the agent does something and the human says, at this point you should have taken this other action instead, that would have been better. These are all ways that you can use human feedback to train an agent to do what you want. So people have developed a lot of algorithms like this, but the evaluation of them is kind of ad hoc. People just sort of make up some new environment to test their method on; they don't really compare on a standard benchmark that everyone is using. So the big idea with BASALT was to change that, to actually make a benchmark that could reasonably fairly compare all of these different approaches. We wanted it to mimic the real-world situation as much as possible. In the real-world situation, you just have some notion in your head of what task you want your AI system to do, and then you have to take a learning-from-human-feedback algorithm and give it the appropriate feedback. So similarly, in this benchmark, we instantiate the agent in a Minecraft world and then we just tell the designer: hey, you've got to train your agent to, say, make a waterfall, that's one of our tasks, and then take a picture of it. So now the designer has in their head a notion of what the agent is supposed to do, but there's no formal specification, no reward function, nothing like that. They can then do whatever they want. They can write down a reward function by hand if that seems like an approach they want to take, they can use demonstrations, they can use preferences, they can use corrections, they can do active learning, and so on. But their job is to make an agent that actually does the task. Ideally, they want to maximize performance and minimize costs, both in terms of compute and in terms of how much human feedback it takes to train the agent.
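To make the comparisons variant concrete, here is a minimal sketch of fitting a reward model to pairwise human preferences with a Bradley-Terry style loss. It is a generic illustration rather than any team's actual BASALT entry, and the observation features, network size, and the PREFERENCES dataset are hypothetical stand-ins (PyTorch is assumed as the library).

```python
# Minimal sketch: train a reward model from pairwise human preferences
# ("comparisons" flavour of learning from human feedback).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Maps a single observation feature vector to a scalar reward."""
    def __init__(self, obs_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def clip_return(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (T, obs_dim) -> scalar predicted return (sum of per-step rewards)
        return self.net(clip).sum()

def preference_loss(model, preferred, other):
    # Bradley-Terry model: P(preferred > other) = sigmoid(R_pref - R_other);
    # maximising its log-likelihood means minimising -logsigmoid of the difference.
    return -F.logsigmoid(model.clip_return(preferred) - model.clip_return(other))

if __name__ == "__main__":
    obs_dim = 64
    model = RewardModel(obs_dim)
    opt = torch.optim.Adam(model.parameters(), lr=3e-4)
    # Hypothetical stand-in for a dataset of labelled human comparisons:
    # each entry is (clip the human preferred, the other clip).
    PREFERENCES = [(torch.randn(100, obs_dim), torch.randn(100, obs_dim)) for _ in range(32)]
    for preferred, other in PREFERENCES:
        loss = preference_loss(model, preferred, other)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # The trained model can then stand in for a hand-written reward function
    # inside any standard RL algorithm.
```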
So I watched the presentations of the top two solutions and it seemed like they showed very different approaches. The first one, KAIROS, seemed like a lot of hand engineering. I think they used 80,000-plus labeled images and built some very specific components for this. They kind of decomposed the problem, which I think is a very sensible thing to do. But then the second one, Obsidian, produced a new inverse Q-learning method, which seemed like a more general, theoretical solution. I just wondered if you have any comments on the different types of solutions that came out of this. Are those the two main classes that you saw, or did any class of solutions surprise you? Yeah, I think that's basically right. I don't think they were particularly surprising, in that we spent a lot of time making sure that the tasks couldn't trivially be solved by just hand-engineering a classical program. So even the top team did rely on a behavior-cloned navigation policy that used a neural network, though it's true they did a bunch of engineering on top of that, which according to me is just a benefit of this setup. It shows you: hey, if you're actually trying to get good performance, do you train a neural network end to end, or do you put in domain knowledge, and how much domain knowledge do you put in, and how do you do it? And in this particular case, well, the more engineering-heavy team did end up getting first, but team Obsidian was quite close behind. I would say the two approaches were actually pretty comparable. And I do agree that one is more of an engineering solution and the other is more of a research solution. So it seems to me like the goals here were things that could be modeled and learned. It seems feasible to train a network to learn the concept of looking at a waterfall; they had enough labels, and I guess that's what some contestants did. But do you have any comments on goals that are harder to model than these? I was trying to think of examples and came up with things like irony, or dance choreography scoring. How would you even begin to model those? Do we have to just continue improving our modeling toolkits so that we can make models of these reward functions, or is there some other strategy? It depends exactly what you mean by improving the modeling toolkit, but basically I think the answer is yes. The way that we improve our modeling toolkit may not look like explicit modeling, though. For example, for irony, I think it's plausible that you could get a decent reward model out of a large language model that does in fact have the concept of irony. If I remember correctly, large language models are not actually that great at humor, so I'm not sure they have the concept of irony yet, but I wouldn't be surprised if further scaling did in fact give them a concept of irony, such that we could then use them to build rewards that involve irony. And I think that's the same sort of thing as the waterfall. I agree that we can learn the concept of a waterfall, but it's not a trivial concept. If you asked me to program it by hand, I would have no idea. The only input you get is pixels: here's a rectangle of pixels, please write a program that detects the waterfall in there. I'm like, oh God, that sounds really difficult, I don't know how to do it. But if we apply machine learning, it turns out that we can recognize these sorts of concepts. And similarly, I definitely couldn't directly write a program that recognizes irony, but if you use machine learning to model all the text on the internet, the resulting model does in fact have a concept of irony that you can then try to use to build new reward functions.
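As a sketch of how a language model's grasp of a fuzzy concept could be turned into a scalar reward, here is one way to score text for irony by asking a yes/no question and comparing next-token probabilities. The model choice (gpt2, which is far too small to really have this concept) and the prompt wording are placeholder assumptions, not a recipe from the conversation.

```python
# Illustrative only: turn an LLM's yes/no judgment about irony into a scalar reward.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def irony_reward(text: str) -> float:
    """Score = P(' yes') - P(' no') for a yes/no question about the text."""
    prompt = f'Text: "{text}"\nQuestion: Is this text ironic?\nAnswer (yes or no):'
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # next-token logits
    probs = torch.softmax(logits, dim=-1)
    yes_id = tokenizer(" yes")["input_ids"][0]
    no_id = tokenizer(" no")["input_ids"][0]
    return (probs[yes_id] - probs[no_id]).item()

print(irony_reward("Oh great, another Monday. Truly the highlight of my week."))
print(irony_reward("The train departs at 9am from platform 4."))
```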
And then there's a Twitter thread related to disinformation, and I shared a line from your paper where you said learning from human feedback offers the alternative of training recommender systems to promote content that humans would predict would improve the user's well-being. I thought that was a really cool insight. Is that something you're interested in pursuing, or do you see that being a thing? I don't know whether or not it is actually feasible currently. One thing that needs to be true of recommender systems is that they need to be cheap to run, because they are being run so many times every day. I don't actually know this for a fact, I haven't done any Fermi estimates, but my guess would be that if you tried to actually run GPT-3 on, say, Facebook posts in order to rank them, that would probably be prohibitively expensive for Facebook. So there's a question of: can you get a model that actually makes reasonable predictions about the user's well-being that can also be run cheaply enough that it's not a huge cost to whoever is implementing the recommendation system? Does it take a sufficiently small amount of human feedback that you aren't bottlenecked on the cost of the humans providing the feedback? And do we have algorithms that are good enough to train recommender systems this way? I think the answer is plausibly yes to all of these. I haven't actually checked myself, nor have I even tried to do any feasibility studies. The line that you're quoting was more about: okay, why do this research at all? And I'm like, well, someday in the future this should be possible. And I stick by that. Someday in the future things will become significantly cheaper, learning from human feedback algorithms will be a lot better, and so on, and then it will just totally make sense to train recommender systems with human feedback, unless we've found something even better by then. It's just not obvious to me that it is the right choice currently. I look forward to that, and I'm really concerned, like many people are, about the disinformation and the divisiveness of social media. So that sounds great. I think everyone's used to very cheap reward functions pretty much across the board, so I guess what you're pointing to is potentially more expensive-to-evaluate reward functions, which maybe hasn't been a common thing up till now. Both more expensive reward functions, and also the model that you train with that reward function might still be very expensive to do inference with. Presumably recommender systems right now run a few linear-time algorithms on the post to compute something like 100,000 features, then do a dot product with 100,000 weights and rank things in order by those numbers. That's maybe a million flops or something, which is a tiny, tiny number of flops, whereas a forward pass of GPT-3 is several hundred billion flops. So that's something like a 10^5x increase in the amount of computation you have to do. Actually, no, that's one forward pass through GPT-3, but there are many words in a Facebook post, so multiply the 10^5 by the number of words in the post. Now we're at maybe more like a 10^7x cost increase just to do inference, even assuming you had successfully trained a model that could do recommendations.
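A back-of-envelope version of that estimate, for readers who want to see the arithmetic; the exact figures are assumptions, and only the orders of magnitude matter:

```python
# Rough cost comparison: linear ranker vs. using a GPT-3-scale model to score a post.
linear_ranker_flops = 1e6        # ~1e5 features, dot product with ~1e5 weights
gpt3_flops_per_token = 350e9     # "several hundred billion" flops per forward pass
post_tokens = 100                # a modest-length post

llm_scorer_flops = gpt3_flops_per_token * post_tokens
print(f"linear ranker : {linear_ranker_flops:.0e} flops per post")
print(f"LLM scorer    : {llm_scorer_flops:.0e} flops per post")
print(f"increase      : ~{llm_scorer_flops / linear_ranker_flops:.0e}x")
# -> roughly the "10 to the 7" increase mentioned in the conversation
```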
And the end result might be lowering engagement for the benefit of less divisive content, which is maybe not in the interest of the social media companies in the first place. Yeah, there's also a question of whether the companies will want to do this. But I think if we showed that this was feasible, that would give regulators a lot more to work with. A common problem with regulation is that you don't know what to regulate, because there's no alternative on the table to what people are already doing. If we were to come to them and say: look, there's this learning from human feedback approach, we've calculated it out, this should only increase costs by 2x, or maybe it's just the same amount of cost, and it shouldn't be too hard for companies to actually train such a model, they've already got all the infrastructure, it should maybe be, I don't know, $100,000 to train the model once — if you lay out that case, I would hope at least that it would be a lot easier for the regulators to say: yes, everyone, you must train your recommender systems to optimize for what humans would predict is good, as opposed to whatever you're doing right now. So that could really change the game. And then the bots or the divisive posters would try to game that new reward function, so they'd find different strategies. Yeah, you might imagine that you have to keep retraining in order to deal with new strategies that people find in response to you. But I think we can do this. I don't have any special information about this from working at Google, but I'm told that Google is actually pretty good at defeating spammers, for example. My Gmail spam filter works quite well as far as I can tell, despite the fact that spammers are constantly trying to evade it. Hopefully we could do the same thing here. Cool. Okay, let's move on to your next paper, Preferences Implicit in the State of the World. I understand this paper is closely related to your dissertation, which we'll link in the show notes as well. I'm just going to read a quote, and I love how you distilled this key insight. You said: the key insight of this paper is that when a robot is deployed in an environment that humans have been acting in, the state of the environment is already optimized for what humans want. Can you tell us the general idea here, and what do you mean by that statement? Maybe put yourself in the position of a robot or an AI system that knows nothing about the world. Or, sorry, it knows the laws of physics or something: it knows that there's gravity, it knows that there are solids, liquids and gases, liquids tend to take the shape of the container that they're in, stuff like that. But it doesn't know anything about humans. Imagine that it's been off in other parts of the solar system and hasn't really seen Earth. Then it comes to Earth and it's like, whoa, Earth has these super regular structures. There are these very cuboidal structures with glass panes at regular intervals that often seem to have lights inside of them, even at night when there isn't light outside of them. This is kind of shocking. You wouldn't expect this from a random configuration of atoms or something like that.
There's some sense in which the order of the world that we humans have imposed upon it is extremely surprising if you don't already know about the humans being there and what they want. So then you can imagine asking your AI system: hey, you see a lot of order here, can you figure out an explanation for why this order is there? And maybe you give it the hint: look, it was created by somebody optimizing the world — what sort of things might they have been optimizing for? And then it looks around and sees that, oh, liquids tend to be in these glasses; it would be really easy to tip over the glasses and have all the liquid spill out, but that mostly doesn't happen. So people must want their liquids in glasses, and probably I shouldn't knock them over. Vases are kind of fragile; you could easily move them a little bit to the left or right and they would fall down and break, and once they're broken you couldn't reassemble them. But nonetheless they're still not broken, so probably someone actively doesn't want them to break and is leaving them on the table. So really, I would say the idea here is that the order in the world did not just happen by random chance. It happened because of human optimization, and so from looking at the order in the world you can figure out what the humans were optimizing for. That's the basic idea underlying the paper. So there's some kind of relationship here to inverse reinforcement learning, where we're trying to recover the reward function from observing an agent's behavior. But here you're not observing the agent's behavior, right? So it's not quite inverse RL. How would you describe the relationship between what you're doing here and standard inverse RL? In terms of the formalism, inverse RL says that you observe the human's behavior over time — a sequence of states and the actions that the human took in those states. Whereas we're saying: no, we're not watching the human's behavior, we only get to see the current state. You can think of this in the inverse reinforcement learning framework as either the final state of the trajectory, or a state sampled from the stationary distribution of an infinitely long trajectory; either of those would be reasonable. But you're only observing that one thing, instead of observing the entire state-action history starting from a random initialization of the world. Other than that, you make that one change, run through all the same math, and you get a slightly different algorithm. That's basically what we did to make this paper. So with this approach, potentially you're opening up a huge amount of unsupervised learning just from observing what's happening, and you can do it almost instantaneously in terms of observation, right? You don't have to watch billions of humans for thousands of years. Yep, that's right. It does require that your AI system knows the laws of physics, or as we would call it in RL, the transition dynamics. Or, well, it needs to either know that or have some sort of data from which it can learn it, because if you just look at the state of the world and you have no idea what the laws of physics are or how things work at all, you're not going to be able to figure out how the world was optimized into that state. If you want to infer that humans don't want their vases to be broken, an important fact for that inference is that if a vase is broken, it's very hard to put it back together. That is a fact about the transition dynamics, which we assume by fiat that the agent knows. But yes, if you had enough data that self-supervised learning could teach the agent a bunch about dynamics, and the agent could then go around looking at the state of the world, in theory it could infer a lot about what humans care about.
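As an illustration of that inference, here is a toy two-state version of the vase example with made-up numbers: the robot knows the dynamics (a broken vase cannot be un-broken), assumes the human acted Boltzmann-rationally for some number of steps, observes only that the vase is currently intact, and updates its belief about whether the human cares about the vase. The horizon, rationality parameter beta, and prior are all illustrative assumptions, and for simplicity the human's policy reuses the full-horizon Q-values at every step.

```python
# Toy sketch of inferring preferences from a single observed state + known dynamics.
import numpy as np

STATES = ["intact", "broken"]
ACTIONS = ["careful", "knock_over"]

def transition(state, action):
    # Breaking the vase is irreversible; being careful preserves the current state.
    return "broken" if (state == "broken" or action == "knock_over") else "intact"

def q_values(reward, horizon):
    # Finite-horizon Q-values for a human who acts near-optimally from here on.
    values = {s: 0.0 for s in STATES}
    for _ in range(horizon):
        q = {(s, a): reward(transition(s, a)) + values[transition(s, a)]
             for s in STATES for a in ACTIONS}
        values = {s: max(q[(s, a)] for a in ACTIONS) for s in STATES}
    return q

def boltzmann(q, state, beta):
    prefs = np.array([beta * q[(state, a)] for a in ACTIONS])
    prefs -= prefs.max()
    probs = np.exp(prefs)
    return probs / probs.sum()

def prob_final_state(reward, horizon, beta, target="intact"):
    # P(state after `horizon` steps = target | Boltzmann-rational human, vase starts intact).
    q = q_values(reward, horizon)
    dist = {"intact": 1.0, "broken": 0.0}
    for _ in range(horizon):
        nxt = {s: 0.0 for s in STATES}
        for s, p_s in dist.items():
            for a, p_a in zip(ACTIONS, boltzmann(q, s, beta)):
                nxt[transition(s, a)] += p_s * p_a
        dist = nxt
    return dist[target]

def cares(s):        # hypothesis A: +1 per step while the vase is intact
    return 1.0 if s == "intact" else 0.0

def indifferent(s):  # hypothesis B: the human doesn't care about the vase
    return 0.0

beta, horizon, prior_cares = 2.0, 10, 0.5
lik_a = prob_final_state(cares, horizon, beta)
lik_b = prob_final_state(indifferent, horizon, beta)
posterior = lik_a * prior_cares / (lik_a * prior_cares + lik_b * (1 - prior_cares))
print(f"P(intact | human cares)       = {lik_a:.4f}")
print(f"P(intact | human indifferent) = {lik_b:.4f}")
print(f"P(human cares | vase intact)  = {posterior:.4f}")
```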
So I very clearly remember meeting you at the NeurIPS 2018 Deep RL Workshop in Montreal, at the poster session. I remember your poster on this: you showed a dining room that was all nicely arranged, and you were saying how a robot could learn from how things are arranged. And I just want to say, I'll say this publicly, I didn't understand at that point what you meant or why it could be important. Your angle was just so different from everything else that was being presented that day, and I really didn't get it. I will own that, it was my loss. So thanks for your patience — it only took me three and a half years or so to come around. Yeah, sorry, I didn't communicate it clearly enough, I suppose. No, I don't think it was at all on you; maybe I just lacked the background to understand. Let me put it this way: how often do you find people who have some technical understanding of AI but still don't appreciate this line of work, including alignment and things like that? Is that a common thing? I think that's reasonably common. And what do you attribute that to? What's going on there, and is that changing at all? I think it's pretty interesting. I don't think these people would say, oh, this is a boring paper, or this is an incompetent paper. I think they would say, yes, the person who wrote this paper has in fact done something impressive by the standards of "did you need to be intelligent and do good math to do this". They are more likely to say something like: okay, but so what? And that's not entirely unfair. It was the Deep RL Workshop, and here I am talking about: oh yes, imagine that you know all the dynamics and you only get to look at the state of the world, and then you think about how vases can be broken but can't be put back together, and voila, you learn that humans don't like to break vases. This is just so different from all of the things that RL usually focuses on, right? It doesn't have any of the buzzwords. There's no deep learning, there's no exploration, there's no catastrophic forgetting, nothing like that. And to be clear, all of those seem like important things to focus on, and I think many of the people who were at that workshop were focusing on those and are doing good work on them. I'm just doing something completely different, so it's not all that interesting to them, because they want to work on reinforcement learning.
I think they're making a mistake in the sense that AI alignment is important and more people should work on it, but I don't think they're making a mistake in that they're probably correct about what doesn't interest them. Okay, just so I'm clear, I was not critiquing your math or the value of anything you were doing, it was just my ability to understand the importance of this type of work. Yeah, I didn't think you were. Okay, thanks. So I will say that that day, when I first encountered your poster, I was really hung up on edge cases. In the world the robot might observe, there's hunger and there's traffic accidents — not everything is perfect, and we don't want the robot to replicate all these flaws in the world. Or in the dining room there might be dirty dishes or something. So the world is clearly not exactly how we want it to be. Is that an issue, or is that just not the thrust here? It depends a little bit, but I think in many cases it's not an issue, if you imagine that the robot somehow sees the entire world. For example, you mentioned hunger. I think the robot would notice that we do in fact spend a lot of effort making sure that at least a large number of people don't go hungry. We've built these giant vehicles, trucks and cargo ships and so on, that move food around in a way that seems at least somewhat optimized to get food to people who like that food and want to eat it. So there's lots of effort being put into it. There's not the maximal amount of effort being put into it, which I think reflects the fact that there are things we care about other than food. So I do think it would conclude: all right, humans definitely care about having food. If you use the assumption that we have in the paper, which is that humans are noisily rational, then it might also conclude things like: yes, Western countries care about getting food to the citizens of their own country, and they care a little bit about other people having food, but not that much — it's a small portion of their governments' aid budgets. So there's a positive weight there, but a fairly small weight. And that seems like maybe not the thing we want, but I also think it is in some sense an accurate reflection of what Western countries care about, if you go by their actions rather than what they say. Cool. Okay, so I'm going to move on to Benefits of Assistance over Reward Learning. This one was absolutely fascinating to me, actually mind-blowing. I highly recommend people read all of these, but I can definitely point to this one as something surprising to me. That was you as first author. Can you share the general idea of this paper, Rohin? I should say that this general idea was not novel to this paper; it's been proposed previously. The paper I'm thinking of is by Fern et al., called something like A Decision-Theoretic Model of Assistance, and there's also cooperative inverse reinforcement learning from CHAI, where I did my PhD. The idea with this paper was just to take the models that had already been proposed in those papers and explain why they were so nice.
Why I was particularly keen on these models, as opposed to other things the field could be doing. So the idea here is that generally we want to build AI systems that help us do stuff, and you could imagine two different ways that could be done. First, you could imagine a system that has two separate modules: one module is trying to figure out what the humans want, or what the humans want the AI system to do, and the other module is then trying to do the things that the first module said the people wanted. And that's kind of like what we talked about earlier with learning from human feedback and modeling reward functions — is that what that would be, exactly? I think that is often what people are thinking about. I would make a distinction between how you train the AI system and what the AI system is doing. This paper, I would say, is more about what the AI system is doing, whereas the learning from human feedback stuff is more about how you train the system. In the "what the AI system is doing" framing, I would call this value learning or reward learning, and the alternative is assistance. So although there are some surface similarities between learning from human feedback and reward learning, it is totally possible to use learning from human feedback algorithms to train an AI system that acts as though it is in the assistance paradigm, and it's also possible to use them to train an AI system that acts as though it is in the reward learning paradigm. So that's one distinction to make. To recap: the value learning or reward learning side of the two models has two separate modules, one that figures out what the humans want, and another that then acts to optimize those values. The other side, which we might call assistance, is where you still have both of those functions, but they're combined into a single module. The way you do this is you have the AI system posit that there is some true, unknown reward function theta. Only the human, who is a part of the environment, knows this theta, and their behavior depends on what theta actually is. So now the agent has to act in order to maximize theta, but it doesn't know theta, so it has to look at how the human is behaving within the environment in order to make some inferences about what theta probably is. As it gets more and more information about theta, that allows it to take more and more actions to optimize theta. Fundamentally, this learning about theta is instrumental: the agent predicts it will be useful for helping it better optimize theta in the future. So if I understand correctly, you're saying assistance is superior because the agent can reason about how to improve its model of what the human wants? How would you describe why you get all these benefits from assistance? Yeah, I think the benefits come more from the fact that these two functions are integrated. There's value learning — or reward learning — and there's control, that is, acting to optimize the learned values. In assistance, these two functions are merged into a single module that does nice Bayesian reasoning about all of it, whereas in the value learning paradigm they're separated, and it's this integration that provides the benefits. You can make plans, which is generally the domain of control, but those plans can then depend on the agent believing that in the future it's going to learn some more things about the reward function theta, which would normally be the domain of value learning. So that's an example where control is using future information about value learning in order to make its plans, whereas when those two modules are separated, you can't do that. One example that we have in the paper: imagine that you've got a robot that is asked to bake a pie for Alice. Alice is currently at the office, so the robot can't talk to her, and unfortunately the robot doesn't know what kind of pie she wants — maybe apple, blueberry, or cherry. The robot could guess, but its guess is not that likely to be correct. However, it turns out that the steps to make the pie crust are the same for all three pies. So an assistive robot can reason: hey, my plan is first make the pie crust, then wait for Alice to get home, then ask her what filling she wants, then put the filling in. That entire plan consists of both taking actions in the environment, like making the crust and putting in the filling, and also things like learning more about theta by asking Alice a question. It's integrating all of these into a single plan, whereas that plan cannot be expressed in the value learning paradigm. So the query is an action in the action space.
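A tiny numeric version of the pie example, to show why folding reward learning into planning matters. The rewards, the penalty for asking, and the prior over fillings are made-up illustration values; the structure is the "plan conditional on future feedback" idea just described.

```python
# Toy assistance game: commit to a guessed filling now, or make the crust and ask later.
PRIOR = {"apple": 0.4, "blueberry": 0.3, "cherry": 0.3}  # robot's belief over theta
REWARD_RIGHT = 10.0   # Alice gets the filling she wanted
REWARD_WRONG = 1.0    # Alice gets a pie, but the wrong filling
ASK_PENALTY = 2.0     # pie is finished a bit late because we waited to ask

def value_commit_now() -> float:
    # Reward-learning style: guess the most likely filling up front and commit to it.
    guess = max(PRIOR, key=PRIOR.get)
    return sum(p * (REWARD_RIGHT if theta == guess else REWARD_WRONG)
               for theta, p in PRIOR.items())

def value_crust_then_ask() -> float:
    # Assistance style: make the crust now (useful under every theta),
    # ask Alice when she is home, then add whichever filling she names.
    return REWARD_RIGHT - ASK_PENALTY

print(f"commit to a guess now : {value_commit_now():.2f}")    # 0.4*10 + 0.6*1 = 4.6
print(f"crust now, ask later  : {value_crust_then_ask():.2f}")  # 10 - 2 = 8.0
# The deferring plan wins: the query is just another action whose value the planner weighs.
```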
but those plans can then depend on the agent believing that in the future it's going to learn more about the reward function theta, which would normally be the domain of value learning. So that's an example where control is using future information about value learning in order to make its plans, whereas when those two modules are separated you can't do that. One example we have in the paper: imagine you've got a robot that's asked to bake a pie for Alice. Alice is currently at the office, so the robot can't talk to her, and unfortunately the robot doesn't know what kind of pie she wants — maybe apple, blueberry, or cherry. The robot could guess, but its guess is not that likely to be correct. However, it turns out that the steps to make the pie crust are the same for all three pies. So an assistive robot can reason: hey, my plan is to first make the pie crust, then wait for Alice to get home, then ask her what filling she wants, then put the filling in. That entire plan consists of both taking actions in the environment, like making the crust and putting in the filling, and also things like learning more about theta by asking Alice a question. It's integrating all of these into a single plan, whereas that plan cannot be expressed in the value learning paradigm. The query is an action in the action space. So I really liked how you laid out some levels of task complexity, and I'm going to go through them really briefly. You mentioned that traditional CS is giving instructions to the computer on how to perform a task; then using AI or ML for simpler tasks would be specifying what the task is and letting the machine figure out how to do it — I guess that's the standard RL formulation. Then for hard tasks, specifying the task is difficult, so the agent may learn a reward function from human feedback. And then you mentioned the assistance paradigm as the next level, where the human is part of the environment and has latent goals that the robot does not know. Yep. How do you see this ladder — is it a universal classification scheme, or just one high-level view? That's a good question; I haven't really thought about it before. You can imagine a different version of the highest level. Here we've talked about the assistance framing, where there is some objective but you have to infer it from human feedback. There's a different version that is maybe more in line with the way things are going with deep learning right now, which is: specifying the task is difficult, so we're only going to evaluate behaviors that the AI agent shows, and maybe also try to find some hypothetical behaviors and evaluate those as well. So that's a different way you could talk about this highest level, where you're evaluating specific behaviors rather than trying to specify the task across all possible behaviors. And maybe that would then be the highest level — I don't know, you could keep inventing new kinds of human feedback, and maybe those could be thought of as higher levels beyond that as well.
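To make the pie example above concrete, here is a toy sketch of the comparison an assistive planner can make, where asking Alice is just another action and the plan conditions on her future answer. The probabilities, the waiting cost, and the function names are invented for illustration; this is not code from the paper.

```python
# Toy sketch of the pie example: the robot holds a belief over the unknown
# reward parameter theta (which filling Alice wants) and compares a plan
# that commits now against a plan that defers until it can ask her.

belief = {"apple": 0.4, "blueberry": 0.35, "cherry": 0.25}  # P(theta)

def value_of_guessing_now(belief):
    # Commit to the most likely filling; reward 1 only if the guess is right.
    best_filling = max(belief, key=belief.get)
    return belief[best_filling]

def value_of_asking_later(belief, waiting_cost=0.05):
    # Make the crust now (useful for every theta), wait for Alice, ask which
    # filling she wants, then add it. The answer resolves theta, so the pie
    # is correct with probability 1, minus a small cost for waiting.
    return 1.0 - waiting_cost

print("guess now:", value_of_guessing_now(belief))   # 0.4
print("ask later:", value_of_asking_later(belief))   # 0.95
# An assistive planner, which treats "ask Alice" as just another action,
# prefers the second plan; a pipeline that must fix its reward estimate
# before acting cannot even represent it.
```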
So then one detail I saw in the paper: you mentioned that two-phase communicative assistance is equivalent to reward learning, and I puzzled over that line and couldn't quite understand what you meant. Can you say a little bit more about that? What does that mean, and how do you conclude that those two things are equivalent? Yeah. There are a number of definitions here — maybe I won't go through all of them, but just so listeners know, we had formal definitions of what counts as assistance and what counts as reward learning. In the reward learning case, we imagine it as: first you have a system that interacts with the human somehow — it doesn't necessarily have to ask the human questions — and develops a guess of what the reward function is. Then that guess, which could be a distribution over rewards, is passed on to a system that acts to maximize the expected reward according to that distribution over rewards. Okay. So once it's done its communication, it's learned a reward, and in phase two it doesn't have a query action at that point. That's right, exactly. Okay, cool. And "two-phase" and "communicative" both have technical definitions, but they roughly mean exactly what you would expect them to mean in order to make this equivalence true. So you mentioned three benefits of using this assistance paradigm. Can you briefly explain what those benefits are? The first one, which I already talked about, is plans conditional on future feedback. This is the example where the robot can make a plan that says: first I'll make the pie crust, then I'll wait for Alice to get back from the office, then I'll ask her what filling she wants, then I'll put in the appropriate filling. There the plan was conditional on the answer Alice was going to give in the future — the robot predicted she would give it, but couldn't actually ask the question now. That's one thing that can be done in the assistance paradigm but not in the value learning or reward learning paradigm. A second one is what we call relevance-aware active learning. Active learning is the idea that, instead of the human giving a bunch of information to the robot and the robot passively taking it and using it to update its estimate of theta, the robot actively asks the human the questions that seem most relevant to updating its understanding of the reward theta, and then the human answers those questions. That's active learning, and it can be done in both paradigms. The thing that assistance can do is have the robot only ask questions that are actually relevant for the plans it's going to have in the future.
To make this point, you might imagine that you get a household robot, and your household robot is booting up. If it's in the reward learning paradigm, it has to figure out theta right now, so it asks: at what time do you tend to prefer dinner, so I can cook it for you? That's a pretty reasonable question, and you're like, yeah, I usually eat around 7 pm. It has a few more questions like this, and later on it's like, well, if you ever want to repaint your house, what color should we paint it? And you're like, kind of blue, I guess, but why are you asking me this? And then it's like, if aliens come and invade from Mars, where would you prefer to hide? And you're like, why are you asking me this? But the thing is, all of these questions are in fact relevant to the reward function theta. The reason that, if this were a human instead of a robot, they wouldn't ask these questions is that the situations to which they're relevant probably won't come up. But in order to make that prediction, you need to be talking to the control module, which is the thing the reward learning paradigm doesn't do. The control module is the one that says: we're probably going to take these sorts of actions, which lead to these kinds of futures, and probably aliens from Mars are never going to be relevant. If you have this one unified system, then it can say: well, okay, I know that aliens from Mars are probably not going to show up any time soon, so I don't need to ask about those preferences right now. If I do find out that aliens from Mars are likely to land soon, then I will ask that question, but I can leave it until later and not bother Alice until it actually happens. So that's the second one. The final one is that, so far, I've been talking about cases where the robot learns by asking the human questions, and the human gives answers that are informative about their reward function theta. The third benefit is that you don't have to ask the human questions — you can also learn from their behavior directly, while they are going about their day and optimizing their environment. A good example: your robot starts helping out around the kitchen. It starts by doing some very obvious things — okay, there are some dirty dishes, put them in the dishwasher. Meanwhile, the human is going around starting to collect the ingredients for baking a pie. The robot can see this, notice that that's the case, and then go get out the mixing bowl and the egg beater and so on in order to help. This sort of just seeing what the human is up to and immediately starting to help is something that happens within a single episode, rather than across episodes. The value learning or reward learning paradigm could do it across episodes, where first the robot watches the human act in the environment to make an entire cake from scratch, and then the next time, when the robot is actually in the environment, it goes and helps the human out. But in the assistance paradigm it can do that learning and help out with making the cake within the episode itself, as long as it has enough understanding of how the world works and what theta is likely to be, in order to deduce with enough confidence that those actions are good to take.
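One way to picture the relevance-aware part is as a value-of-information calculation that leans on the control side's predictions about which situations will actually arise. The following sketch is purely illustrative; the questions, probabilities, and threshold are made up rather than taken from the paper.

```python
# Rough sketch of "relevance-aware" question asking: score each candidate
# question by how likely the situation it concerns is to come up and how
# much knowing the answer would help if it does, then only ask when that
# expected value of information beats the nuisance cost of asking.

ASK_COST = 0.1  # cost of bothering Alice with a question

questions = [
    # (question, P(situation arises soon), value of knowing the answer then)
    ("What time do you like dinner?",                0.99, 1.0),
    ("What colour should the house be repainted?",   0.02, 1.0),
    ("Where should we hide from Martian invaders?",  1e-6, 10.0),
]

for question, p_relevant, gain in questions:
    expected_value_of_info = p_relevant * gain
    if expected_value_of_info > ASK_COST:
        print("ask now:", question)
    else:
        print("defer  :", question)

# Only the dinner question clears the bar; the others are deferred until the
# control side's plans actually make them relevant.
```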
When you described the robot that would ask all these irrelevant questions, I couldn't help — I'm a parent — I couldn't help thinking that's the kind of thing a four-year-old would do: ask you every random question, not relevant right then. It seems like you're pointing to a more mature type of intelligence. Yeah. A lot of this — the entire paper has the assumption that we're going to write down math and then talk about agents that are optimal for that math. We're not going to think about how, in practice, we get the optimal thing; we're just asking, is the optimal thing actually the thing that we want? And so one would hope that, yes, if we're assuming the actual optimal agent, it should in fact be more mature than a four-year-old. One hopes. So can you relate this assistance paradigm back to standard inverse RL? What is the relationship between the two? Yeah, inverse RL is an example of the reward learning paradigm. It assumes that you get full demonstrations of the entire task, executed by the human teleoperating the robot — there are versions that don't assume the teleoperation part, but usually that's an assumption. Then, given those teleoperated robot demonstrations of how to do the task, the robot is supposed to infer what the task actually was and then be able to do it itself in the future, without any teleoperation. So without uncertainty? Does the inverse RL paradigm assume we're not uncertain at the end? Ah, no, it doesn't necessarily assume that. I think in many deep IRL algorithms that does end up being an assumption they use, but it's not a necessary one. It can still be uncertain, and then you would typically plan with respect to maximizing the expectation over the reward function — although you could also try to be conservative or risk-sensitive, in which case maybe you'd maximize worst-case reward if you wanted to be maximally conservative, or fifth-percentile reward, something like that. So there can be uncertainty, but the human isn't in the environment, and there's this episodic assumption where the demonstration is one episode and the robot acting is a totally different episode. That also isn't true in the assistance case. You talk about active reward learning and interactive reward learning — can you help us understand those two phrases and how they differ? Yeah. Active reward learning is just when the robot, in the reward learning paradigm, is given the ability to ask questions rather than only getting to observe what the human is doing — so hopefully that one is relatively clear. The interactive reward learning setting is mostly a thing we made up, because it was something people often brought up as "maybe this will work", so we wanted to talk about it and show why it doesn't in fact work. The idea there is that you still have your two modules — one reward learning module and one control module that don't talk to each other — but instead of doing the reward learning once and then doing control forever after, you alternate: say, ten steps of reward learning, then ten steps of control, then ten steps of reward learning, then ten steps of control, and you keep iterating between the two stages.
So why is computational complexity really high for algorithms that try to optimize over assistance? I think you mentioned that here. Yeah, everything I've talked about has sort of assumed that the agents are optimal. But if you think about what the optimal agent has to do, it has to maintain a probability distribution over all of the possible reward functions that Alice could have, and then update it over time as it sees more and more of Alice's behavior. As you probably know, full Bayesian updating over a large list of hypotheses is computationally intractable. Another way of seeing it is that if you take this assistance paradigm, you can, through a relatively simple reduction, turn it into a partially observable Markov decision process, or POMDP. The basic idea is to treat the reward function theta as an unobserved part of the state; the reward is then whatever that unobserved part of the state says, and Alice's behavior is thought of as part of the transition dynamics, which depend on that unobserved part of the state — that is, on theta. So that's a rough sketch of how you phrase assistance as a POMDP. And POMDPs are known to be very computationally intractable to solve, for basically the same reason I was just describing: to actually solve them you need to maintain a probability distribution over all the ways the unobserved parts of the state could be, and that's just intractable. So do you plan to work on this particular line of work further? I don't plan to do further direct research on this myself. I still basically agree with the point of the paper, which is: look, when you're building your AI systems, they should be reasoning in the way the assistance paradigm suggests, with this integrated reward learning and control, and they shouldn't be reasoning in the way the value learning paradigm suggests, where you first figure out what human values are and then optimize for them. I think that's a pretty important point and it will guide what we have our AI systems do in the future, and I will continue to push for it, including in projects at DeepMind. But I probably won't be doing more technical research on the math in those papers specifically, because I think it said the things I wanted to say. There's still plenty of work one could do, such as trying to come up with algorithms that directly optimize the math we wrote down, but that seems less high-leverage to me.
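For readers who want the reduction spelled out a little more, here is a minimal sketch of how an assistance problem can be wrapped up as a POMDP: theta becomes hidden state, and the human's theta-dependent policy is folded into the transition dynamics. The class and argument names are placeholders I chose, not code from the paper, and actually solving the resulting POMDP is exactly the intractable part being discussed.

```python
# Minimal sketch: an assistance problem as a POMDP whose hidden state
# includes theta, with the human's (theta-dependent) behaviour folded
# into the transitions.

import random

class AssistancePOMDP:
    def __init__(self, env_step, human_policy, reward_fn, theta_prior):
        self.env_step = env_step          # (state, robot_a, human_a) -> next_state
        self.human_policy = human_policy  # (state, theta) -> human action
        self.reward_fn = reward_fn        # (state, robot_a, theta) -> float
        self.theta_prior = theta_prior    # list of (theta, probability)

    def reset(self, initial_state):
        thetas, probs = zip(*self.theta_prior)
        self.theta = random.choices(thetas, probs)[0]  # hidden from the robot
        return initial_state

    def step(self, state, robot_action):
        # The robot never observes theta directly; it only sees the human's
        # behaviour, which leaks information about theta over time.
        human_action = self.human_policy(state, self.theta)
        next_state = self.env_step(state, robot_action, human_action)
        reward = self.reward_fn(state, robot_action, self.theta)
        return next_state, human_action, reward
```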
Okay, moving on to the next paper: On the Utility of Learning about Humans for Human-AI Coordination. That was Carroll et al., with yourself as a co-author. Can you tell us the brief, general idea here? I think this paper was written in the wake of some pretty big successes of self-play. Self-play, or very similar variants of it, is the algorithm underlying OpenAI Five, which plays Dota; AlphaStar, which plays StarCraft; and AlphaGo and AlphaZero, which play Go, chess, shogi and so on at a superhuman level. These were some of the biggest results in AI around that time, and they suggested that self-play was going to be a really big thing. The point we were making in this paper is that self-play works well when you have a two-player zero-sum game — a perfectly competitive game — because it's effectively going to cause you to explore the full space of strategies. If you're playing against yourself in a competitive game and there's any flaw in your strategy, then gradient descent is going to push you in the direction of exploiting that flaw, because you're trying to beat the other copy of you, so you're always driven to get better. In contrast, in common-payoff games, which are the most collaborative games — where all of the agents get the same payoff as each other no matter what happens, though that shared payoff differs across outcomes — you don't have a similar incentive. You have no incentive to be unexploitable. All you want is to come up with some policy that, when played against a copy of itself, gets you maximum reward. It doesn't really matter if you would play badly with somebody else, like a human — if that were true, it wouldn't come up in self-play. Self-play would just say: in every single game you played, you got the maximum reward; there's nothing to do here. So there's no force causing you to be robust to all of the possible partners you could have, whereas in the competitive game, if you weren't robust to all the opponents that could possibly arise, then you're exploitable in some way, gradient descent is incentivized to find that exploit, and then you have to become robust to it.
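Here is a tiny, self-contained illustration of that point in the simplest possible common-payoff setting, a two-action coordination game. The setup and numbers are mine, not the paper's Overcooked experiments.

```python
# In a pure coordination game (both players get 1 if they pick the same
# convention, 0 otherwise), self-play happily converges to *some* convention
# and sees maximum reward, with nothing pushing it to be robust to partners
# who use the other one.

import random

def expected_payoff(p, q):
    # p, q: each player's probability of choosing convention "A".
    return p * q + (1 - p) * (1 - q)

# Self-play: one shared policy parameter, trained against a copy of itself.
p = random.random()
for _ in range(2000):
    grad = 4 * p - 2                     # d/dp of p^2 + (1 - p)^2
    p = min(1.0, max(0.0, p + 0.01 * grad))

print("self-play payoff with itself :", expected_payoff(p, p))      # ~1.0
human = 0.0 if p > 0.5 else 1.0          # a partner using the other convention
print("payoff with that human partner:", expected_payoff(p, human))  # ~0.0

# In a two-player zero-sum game, the analogous flaw would be exploitable,
# so training the opponent would hunt it down; here nothing does.
```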
Is there any way to reformulate it so that there is that competitive pressure? You can actually do this. I know you've had Michael Dennis and, I think, also Natasha Jaques on this podcast before, and both of them are doing work that's kind of like this. With PAIRED, right? That was Natasha's. Exactly, yeah. The way you do it is to make the environment your competitor: the environment tries to make itself super complicated in a way that defeats whatever policy you were trying to use to coordinate, and this ensures you have to be robust to whichever environment you find yourself in. So that's one way to get robustness — well, it gets you robustness to environments; it doesn't necessarily get you robustness to your partners. If, for example, you wanted to cooperate with a human, you could do a similar thing and make the partner agent adversarial. Now, this doesn't work great if you literally make it adversarial, because in many interesting collaborative games — like Overcooked, which is the one we were studying here — an adversarial partner can just guarantee that you get minimum reward. It's often not difficult: in Overcooked you just stand in front of the station where you deliver the dishes you've cooked, and that's what the adversary does, and then the agent is like, well, okay, I can make a soup but I can never deliver it, so I guess I never get reward. So that naive, simple approach doesn't quite work, but you can instead try a slightly more sophisticated method where, instead of an adversarial partner, it's a partner that tries to keep you on the edge of your abilities — once your agent learns to do well with the current partner, the partner makes itself a bit harder to play with, and so on. There are a few papers like this that I'm currently failing to remember, but I think many of them followed both the self-play work and this paper of ours. So basically, yes, you can in fact do some clever tricks to get around this. It's not quite as simple and elegant as self-play, and I don't think the results are quite as good as you get with self-play, because it's still not exactly the thing that you want. So now we have a contributed question, which I'm very excited about, from Dr. Natasha Jaques, senior research scientist at Google AI and postdoc at Berkeley — we were lucky to have Natasha as our guest on episode one. Natasha asks: the most interesting questions are about why interacting with humans is so much harder, or so different, from interacting with simulated RL agents. So, Rohin, what is it about humans that makes them harder and different? Yeah, there are a bunch of factors here. Maybe the most obvious one, and probably the biggest one in practice, is that you can't just put humans in your environment to do a million steps of gradient descent, which is what we often do with our simulated RL agents. If you could somehow put a human in the loop for a million episodes, maybe the resulting agent would in fact just be really good at coordinating with humans.
In fact, I might take out the "maybe" there — I will actually predict that the resulting agent would be good with humans, as long as you had a reasonable diversity of humans to collaborate with. So my first and biggest answer is: you can't get a lot of data from humans the way you can get a lot of data from simulated RL agents, or equivalently, you can't put the human into the training loop the way you can put a simulated RL agent into the training loop. That's answer number one. Then there's another answer, which seems significantly less important, which is that humans are significantly more diverse than simulated RL agents typically are. Humans don't all act the same way, and even an individual human will act pretty differently from one episode to the next. Humans learn over time, so not only is their policy kind of stochastic, it isn't even stationary — it changes over time as they learn how to play the game and get better at it. That's another thing: RL usually assumes episodes are drawn i.i.d., and that isn't true here. Because of this non-stationarity, stochasticity and diversity, you have to get a much more robust policy in order to work with humans rather than with simulated RL agents, and that ends up being harder to do. Sometimes people try to take their simulated RL agents and make them more stochastic to be more similar to humans — for example, by taking a random action with some small probability — and I think usually this still ends up looking kind of artificial and forced when you look at the resulting behavior, such that it still doesn't require that robust a policy to collaborate well with those agents. Humans are just more challenging than that. Okay, let's move to the next paper, Evaluating the Robustness of Collaborative Agents — that was Knott et al., with yourself as a co-author. Can you give us the short version of what this paper is about? We just talked about how, in order to get your agents to work well with humans, they need to learn a pretty robust policy. One way of measuring how good your agents are at collaborating with humans is, well, you just have them play with humans and see how well that goes — which is a reasonable thing to do, and people should definitely do it. But this paper proposed a maybe simpler and more reproducible test that you can run more often, which is the basic idea from software engineering: a unit test. It's a very simple idea. You write some unit tests for the robustness of your agents: cases in which you think the correct action is unambiguous, cases that you maybe expect not to come up during training, and then you just see whether your agent does in fact do the right thing on those inputs. If your agent passes all of those tests, that's not a guarantee that it's robust, but if it fails some of them, then you've definitely found some failures of robustness. In practice, the agents we tested all failed many tests. I don't remember the exact numbers off the top of my head, but I think some of the better agents were getting scores of maybe 70 percent.
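Here is a sketch of roughly what such a robustness unit test could look like. The environment helper, the agent interface, and the action names below are invented stand-ins for illustration, not the actual test suite released with the paper; fittingly, the dummy agent here fails the tests, much as the evaluated agents often did.

```python
# Sketch of robustness unit tests in the spirit of the paper. Everything
# here is a hypothetical stand-in, not the real Overcooked test-suite API.

import unittest

def make_state(**features):
    # Stand-in for a hand-constructed environment state.
    return dict(features)

class ScriptedAgent:
    # Stand-in for the trained agent being evaluated.
    def act(self, state):
        return "move_to_serve_area"

    def act_with_history(self, history):
        return "wait_for_partner"

agent = ScriptedAgent()

class TestAgentRobustness(unittest.TestCase):
    def test_state_robustness(self):
        # A state unlikely to appear during training, but where a human
        # would agree the correct action is unambiguous.
        state = make_state(onions_in_pot=3, serve_area="blocked")
        self.assertEqual(agent.act(state), "take_alternate_route")

    def test_agent_robustness_with_memory(self):
        # The partner has stood still for 20 steps, violating what was seen
        # in training; the agent should notice over time and act alone.
        history = [make_state(partner_action="stand_still")] * 20
        self.assertEqual(agent.act_with_history(history), "deliver_soup_alone")

if __name__ == "__main__":
    unittest.main()
```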
Could we say this is related to the idea of sampling from environments outside of the training distribution, because we think those samples are related to the distribution the agent would encounter after it's deployed? Would you phrase it that way, or is it going in a different direction? I think that's pretty close. Basically everything about that seems correct, except the part where you say it's probably going to arise in the test distribution. Usually I wouldn't even try to check whether it would appear in the test distribution — that's very hard to do; you don't know what's coming. If you knew how the test distribution was going to look, and in what way it was going to differ from the training distribution, then you should just change your training distribution to be the test distribution. The fundamental challenge of robustness is usually that you don't know what your test distribution will look like. So I would say it's more like: we deliberately try to find situations that are outside the training situation, but where a human would agree that there's one unambiguously correct answer, and we test in those cases. Maybe this will make us too conservative, because a test might be in a state that would never actually come up at test time, but given that this seems very hard to know, I think it's still a good idea to be writing these tests and to take failures fairly seriously. And this paper mentions three types of robustness — can you briefly touch on them? Yeah, this is basically a categorization that we found helpful for generating the tests, and it's somewhat specific to reinforcement learning agents. The three types were: state robustness, which covers test cases in which the main thing you've changed is the state in which the agent is operating; and agent robustness, which is when one of the other agents in the environment exhibits some behavior that's unusual and not what you expected. That can be further decomposed into two types. There's agent robustness without memory, where the test doesn't require the AI system to have any memory — there's a correct action that's determinable even without memory — which might be what you want if, for some reason, you're using an MLP or a CNN as your architecture. And then there's agent robustness with memory, where the distribution shift comes from a partner agent in the environment doing something such that you have to look at its behavior over time, notice that something is violating what you expected during training, and then take some corrective action as a result. There you need memory in order to understand how the partner agent is deviating from what you expected. And then, when we're dealing with high-dimensional state, there's a ridiculous number of possible situations, and we've seen in the past that deep learning especially can be really sensitive to small, seemingly meaningless changes in this high-dimensional state. So how could we possibly think about scaling this up to a point where we don't have to test every single thing?
I think this particular approach you mostly shouldn't try to scale up in this way. It's meant to be a first quick sanity check that is already quite hard to pass for current systems — we're talking scores like 70 percent. Once you get to scores like 90 to 99 percent, then it's time to start thinking about scaling up. But suppose we got there — what do we then do? I don't think we really want to scale up the specific process of humans think of tests, humans write down tests, then we run those on the AI system. At that point I think we want to migrate to a more alignment-flavored viewpoint, which I think we were going to talk about in the near future anyway, but to preview it a little: once we scale up, we want to try to find cases where the AI system does something bad that it knew was bad — it knew it wasn't the thing its designers intended. The reason this allows you to scale up is that now you can go and inspect the AI system, try to find facts that it knows, and leverage those in order to create your test cases. One hopes that the set of things the AI knows is still plausibly a very large space, but hopefully not an exponentially growing space the way the state space is. The intuition for why this is okay is that, yes, the AI system may end up having accidents that wouldn't be caught if we were only looking for cases where it made a mistake that it knew was a mistake. But usually those aren't that bad. They can be, if your AI system is in a nuclear power plant, for example, or perhaps in a weapon system, but in many cases it's not actually that bad for the AI system to make an accidental error. The really bad errors are the ones where the AI system is intentionally doing something that is bad from the perspective of its designers. Those are really bad situations, and you don't want to get into them, so I'm most interested in thinking about how we can avoid that — and so you can try to leverage the agent's knowledge to construct inputs that you can then test the AI system on. So this is a great segue to the alignment section. How do you define alignment in AI? Maybe I'll give you two definitions that are slightly different but mostly the same. One is that an AI system is misaligned — so, not aligned — if it takes actions that it knew were against the wishes of its designers. That's basically the definition I was giving earlier. A different, more positive definition is that an AI system is aligned if it is trying to do what its designers intended for it to do. And is there some agreed-upon taxonomy of top-level topics in alignment? How does it relate to concepts like AI safety and human feedback and the different things we talked about today — how would we arrange these at a high level?
There is definitely not a canonical taxonomy of topics; there's not even a canonical definition. The one I gave doesn't include, for example, the problem of how you resolve disagreements between humans about what the AI system should do. It just says: all right, there are some designers, they wanted something, and that's what the AI system is supposed to be doing. It doesn't talk about the process by which those designers decide what the AI system is intended to do — that's not part of the problem as I'm defining it. It's obviously still an important problem, just not part of this definition as I gave it, and other people would say no, that's a bad definition, you should include that problem. So there isn't even a canonical definition, and I'll just give you my taxonomy of alignment topics. In terms of how alignment relates to AI safety, there's the general big-picture question of how we make AI beneficial for humanity, or whether it will be, which you might call AI safety or AI beneficialness or something. That you can break down into a few categories. I quite like — I'm going to forget where this taxonomy comes from — the taxonomy into accidents, misuse and structural risks. Accidents are exactly what they sound like: they happen when an AI system does something bad and nobody intended for it to do that thing. Misuse is also exactly what it sounds like: somebody gets an AI system to do something, and the thing they got it to do was something we didn't actually want — think of terrorists using AI systems to assassinate people. Structural risks are maybe less obvious than the previous two, but they arise when, as we infuse AI systems into our economy, new sorts of problems appear: do we get into races to the bottom on safety, do we get a whole bunch of increased economic competition that causes us to sacrifice many of our values in the name of productivity, and so on. So that's one starting categorization: accidents, misuse, structural risks. Within accidents, you can further separate into accidents where the AI system knew that what it was doing was bad, and accidents where it didn't know. The first one is AI alignment according to my definition — which, again, is not canonical; I think it's maybe the most common definition, but it's not canonical. So that was how alignment relates to AI safety. Then, how does the work we've talked about today relate to alignment? Again, people will disagree with me on this, but according to me, the way to build aligned AI systems — in the sense of AI systems that don't take bad actions that they knew were bad — is to use a lot of human feedback to train your AI system, where the human feedback rewards the AI system when it does things that the humans want and punishes it when it does things the humans don't want. This doesn't solve the entire problem; you then basically want to make the people providing your feedback as powerful and competent as possible. So maybe you do some interpretability on the model that you're training, in order to understand how exactly it's reasoning,
how it's making decisions. You can then feed that information to the humans who are providing feedback, and this can maybe allow them to not just select AI systems that get the right outcomes, but to select AI systems that get the right outcomes for the right reasons, which can help you get more robustness. You could also imagine having other AI systems in charge of finding new hypothetical inputs on which the AI system you're training takes a bad action — those systems say, here's an input on which your AI system does a bad thing — and then the humans say, oh, that's bad, let's put it in the training data set and give feedback on it, and so on. So I think BASALT is maybe the most obviously connected here: it's about how you train anything at all with human feedback, which is obviously a core part of this plan. Preferences Implicit in the State of the World is less clearly related. I think that paper makes more sense in a plan that's more like traditional value alignment, where your AI system has an explicit distribution over theta that it updates with evidence, so that one is less relevant to this description. The Benefits of Assistance paper is, I think, primarily a statement about what the AI system should do. What we want our human feedback providers to be doing is checking: hey, is this AI system thinking about what its users will want? If it's uncertain about what the users want, does it ask for clarification, or does it just guess? We probably want it to ask for clarification rather than guess if it's a sufficiently important thing, but if it's some probably insignificant thing, then it's fine for it to guess. Through the human feedback you can then train a system that's being very assistive. The Overcooked paper, On the Utility of Learning about Humans for Human-AI Coordination, is, I think, not that relevant to this plan unless you happen to be building an AI system that plays a collaborative game. The Evaluating Robustness paper is more relevant, in that part of what these human feedback providers will be doing is constructing inputs on which the AI system behaves badly and then training it not to behave badly on those inputs, so in that sense it also fits into this overall story. Cool. Okay, can you say a bit about your alignment newsletter — how do you describe it, how did you start it, and what's happening with it now? The alignment newsletter is supposed to be a weekly newsletter that I write summarizing recent content relevant to AI alignment. It has not been very weekly in the last couple of months, because I have been busy, but I do intend to go back to making it weekly. The origin story is kind of funny. This was while I was a PhD student at the Center for Human-Compatible AI at UC Berkeley. We were just discussing that there were a lot of papers coming out all the time, as people will probably be familiar with, and it was hard to keep track of them all. So someone suggested that maybe we should have a rotation of people who just search
for all of the new papers that came out in the past week and send an email out to everyone with links to them, so other people wouldn't have to do the search themselves. And I said, look, I just do this every week anyway; I'm happy to take on this job. Sending one email with a bunch of links is not hard; we don't need a rotation of people. So I did that internally at CHAI. A couple of weeks later I added a sentence telling people, here's the topic — maybe you should read it if you're interested in x, y and z. That went on for a while, and then I started writing slightly more extensive summaries so that people didn't have to read the paper unless it was something they were particularly interested in. Around that point people said, this is actually quite useful, you should make it public. I tested it a bit more, maybe for another three to four weeks internally at CHAI, and after that I released it publicly. It still went through a fair amount of improvement — I think after maybe 10 to 15 newsletters it felt more stable — and now, apart from the fact that I've been too busy to do it recently, it's been pretty stable for the last two years or so. Cool. Well, to the audience, I highly recommend the newsletter. And like I mentioned, when I first met you and heard about your alignment newsletter early on, at that point I didn't really appreciate the importance of alignment issues, and I have to say that really changed for me when I read the book Human Compatible by Professor Stuart Russell, who I gather is one of your PhD advisors. Yep. That book really helped me appreciate the importance of alignment-related work, and it was part of the reason I sought you out to interview you. So I'm happy to recommend that book to the audience — Professor Russell is awesome, and it's a very well-written book, full of great insight. Yep, I also strongly recommend this book. And since we're on the topic of the alignment newsletter, you can read my summary of Stuart Russell's book to get a sense of what it covers before you make the commitment of reading the entire thing. You can find that on my website under the alignment newsletter — there's a list of past issues; I think this was newsletter edition 69, but I'm not totally sure. And what was your website again? It's just my first name and last name: rohinshah.com. Okay, cool, I highly recommend that to the audience. So I wanted to ask you about how alignment work is done. A common pattern we might be familiar with in many ML papers is to show a new method and some experiments. Is work in alignment fundamentally different? What does the work entail — is there a lot of thought experiments, or how would you describe it? There's a big variety of things. Some alignment work is in fact pretty similar to typical ML work. For example, there's a lot of alignment work on questions like, can we make human feedback algorithms better? You start with some baseline and some task or environment in which you want to get an AI system to do something, and
then you try to improve upon the baseline using some ideas that you thought about. Maybe it's somewhat different because you're using human feedback, whereas typical ML research doesn't involve human feedback, but that's not that big a difference — it's still mostly the same skills. So that's probably the kind that's closest to existing ML research. There's also a lot of interpretability work, which again is working with actual machine learning models and trying to figure out what the heck they're doing. That's not the same thing as "get better performance on this task", but it's still pretty similar to some parts of machine learning. So that's one type of alignment research. Then, on the complete other side, there's a bunch of work where you think very abstractly about what future AI systems are going to look like. Maybe you think about some story by which AGI might arise — we run such-and-such algorithm, maybe with some improvements in various architectures, on such-and-such data, and it turns out you can get AGI out of this. Then, in this hypothetical, you ask: does this AGI end up misaligned? If so, how does it get misaligned? And once you've told that story — okay, now I have a story of how the AGI system was misaligned — what would I need to do in order to prevent this from happening? So you can do these pretty elaborate conceptual thought experiments. I think these are usually good as a way of ensuring that the things you're working on are actually useful. There are a few people who do these sorts of conceptual arguments almost always, and do them well, such that I think the stuff they're producing is probably going to matter in the future. But it's also very easy to end up not very grounded in what's actually going to happen, such that you end up saying things that won't actually be true in the future — and where there is some reasonably easy-to-find argument today that could convince you that the things you're saying are not going to matter. So it's pretty hard to do this kind of research because of the lack of empirical feedback loops, but I don't think it is doomed. People do in fact get some interesting results out of it, and often the best results from this line of work seem better to me than the results we get out of the empirical line of work. So you mentioned your newsletter, and then there's the Alignment Forum — if I understand correctly, that sprang out of LessWrong, is that right? I don't know if I would say it sprang out of LessWrong; it was meant to be at least somewhat separate from it, but it's definitely affiliated with LessWrong, and everything on it gets cross-posted to LessWrong. These are pretty advanced resources, from my point of view, but for the audience who are maybe just getting started with these ideas, can you recommend a couple of resources that might be a good on-ramp — I guess including Human Compatible, but anything else you'd want to mention? Yeah, Human Compatible is a pretty good suggestion.
There are other books as well. Superintelligence is more on the philosophy side. The Alignment Problem by Brian Christian has a little less on what the solutions look like and more on the intellectual history of how these concerns started arising. Life 3.0 by Max Tegmark — I don't remember how much it talks about alignment, but I assume a decent amount — is another option. Apart from books, the Alignment Forum has sequences of blog posts that don't require quite as much technical depth. For example, there's the Value Learning sequence, which I half wrote and half curated from other people's posts — I think that's a good introduction to some of the ideas in alignment. There's the Embedded Agency sequence, also on the Alignment Forum, and the Iterated Amplification sequence on the Alignment Forum. Oh, and there's the AGI Safety Fundamentals course — you can just google it; it has a publicly available curriculum, I believe. Actually, my advice is probably: ignore all the other suggestions, look at that curriculum, and read things on there. Have you seen any depictions of alignment issues in science fiction? Do these ideas come up for you when you watch or read sci-fi? They definitely come up to some extent. I think there are many ways in which the depictions aren't realistic, but they do come up. Or even outside sci-fi, in mythology — the whole Midas touch thing seems like a perfect example of misalignment. Yeah, the King Midas example is a good one; I do like it a lot. If you expand to include mythology in general, I feel like it's probably everywhere, especially if you include stories where you asked for something and got what you literally asked for, but not what you actually meant. That's really common in stories, isn't it? Yeah — you could take basically any story about genies and this will probably feature. So they really started the alignment literature back then; it's thousands of years old. The problem where there are two people, and one person wants the other person to do something, is a very important, fundamental problem that you need to deal with. There's tons of stuff in economics about this too, where it's the principal-agent problem. The alignment problem is not literally the same thing: the principal-agent problem assumes the agent already has some motivation, some utility function, and you're trying to incentivize them to do the things you want, whereas in AI alignment we get to build the agent we're delegating to, so we have more control over it. There are differences, but fundamentally, "entity A wants entity B to do something for entity A" is a super common pattern that human society has thought about a lot. So we have some more contributed questions. This one is from Nathan Lambert, a PhD student at UC Berkeley doing research on robot learning — Nathan was our guest for episode 19. Nathan says: a lot of AI alignment and AGI safety work happens in blog posts and forums. What's the right way to draw more attention from the academic community? Any comment on that? I think this is basically a reasonable strategy, in that by doing this work in blog posts and forums, people can move a lot faster.
ML is pretty good in that, relative to other academic fields, it doesn't take years to publish your paper, it only takes months — but with blog posts and forums it can be days to talk about your ideas. So you can move a lot faster if you're trusting in everyone's ability to understand which work is good and what to build on. That's, I think, the main benefit of blog posts and forums. But then, as a result, anyone who isn't an expert correctly doesn't end up reading the blog posts and forums, because if you're not an expert it's a little hard to extract the signal and ignore the noise. So then there's a group of people — not a separate group, but a group — who take a bunch of these ideas and convert them into more rigorous, correct, academically presented ideas and papers, and that's the thing you can show to the academic community in order to draw more attention. In fact, we've just been working on a project along these lines at DeepMind, which hopefully we'll release soon, about the risks from inner misalignment. So roughly my story is: you figure out conceptually what you want to do via the blog posts and forums, and then you make it rigorous, run experiments, and demonstrate things with actual examples instead of hypothetical ones, in the format of an academic paper. That's how you make it credible and convincing enough to draw attention from the academic community. Great. And then Taylor Killian asks — Taylor is a PhD student at U of T and the Vector Institute, and was our guest for episode 13 — how can we approach the alignment problem when faced with heterogeneous behavior from possibly many human actors? My interpretation of this question is: humans sometimes disagree on what things to value, and similarly disagree on what behaviors they exhibit and want the AI to exhibit, so how do you get the AI to decide on one set of values or one set of behaviors? As I talked about a little bit before, I mostly take this question to be outside the scope of the things I usually think about. I'm usually thinking: the designers have something in mind that they want the AI system to do — did the AI system actually do that thing, or at least is it trying to do that thing?
I do think this problem is in fact an important problem, but I think the solutions are probably going to be more political or societal rather than technical: you have to negotiate with other people to figure out what exactly you want your AI systems to be doing, and then you take that spec and hand it off to the AI designers, and the AI designers say, all right, now we will make an AI system with this spec. So I would say there's a separate problem of how to go from human society to something we can put inside of an AI. That's the domain of a significant portion of social science, and it has technical aspects too — social choice theory, for example, has at least some technical people trying to do mechanism design to solve these problems, and that seems great; people should do it, it's a good problem. It's unfortunately not one I have thought about very much, but I do feel pretty strongly about the factorization into one problem, which is figuring out exactly what you want to put into the AI system, and the other part, which I call the alignment problem, which is how you take that thing you want to put into the AI system and actually put it in. Okay, cool. And Taylor also asks: how do we best handle bias when learning from human expert demonstrations? This is a good question, and I would say it's an open question in the field, so I don't have a great answer, but here are some approaches people have taken. One simple thing is to get demonstrations from a wide variety of humans and hope that, to the extent that they're making mistakes, some of those mistakes will cancel out. You can invest additional effort: you get a bunch of demonstrations and then you put a lot of effort into evaluating the quality of each one, so you can label each demonstration with how high-quality it is, and then you can design an algorithm that takes the quality into account when learning. The simplest version is to discard everything that's too low quality and only keep the high-quality demonstrations, but there are algorithms that have been proposed that can make use of the low-quality ones while still trying to reach the performance of the high-quality ones. Another approach people have tried is to guess what sorts of biases are present and then build algorithms that correct for those biases. One of my older papers looks into an approach of this form. I think we did get results that were better than the baseline, but I don't think it was all that promising, so I mostly did not continue working on that approach — it just seems hard to know exactly which biases are going to show up and then correct for all of them. So those are a few thoughts on how you can try to handle bias. I don't think we know the best way to do it yet.
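As a concrete illustration of the "label each demonstration with its quality" idea, here is a generic sketch of a quality-weighted behavioral cloning loss. It is not the algorithm from any specific paper mentioned here, and the example numbers and the dummy policy are invented.

```python
# One simple way to act on quality labels: weight (or drop) each
# demonstration's contribution to an imitation-learning loss.

import math

def weighted_bc_loss(demos, policy_prob, min_quality=0.2):
    """demos: list of (observation, action, quality) with quality in [0, 1].
    policy_prob(obs, act): probability the current policy assigns to act."""
    total, weight_sum = 0.0, 0.0
    for obs, act, quality in demos:
        if quality < min_quality:
            continue  # discard demonstrations judged too unreliable
        total += quality * -math.log(policy_prob(obs, act) + 1e-8)
        weight_sum += quality
    return total / max(weight_sum, 1e-8)

# Example with a dummy uniform policy over four actions:
demos = [("s0", "a1", 0.9), ("s1", "a0", 0.5), ("s2", "a3", 0.05)]
uniform_policy = lambda obs, act: 0.25
print(weighted_bc_loss(demos, uniform_policy))
```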
Cool, thanks so much to Taylor, Nathan and Natasha for the contributed questions. You can also contribute questions to our next interviews if you reach out on our Twitter at TalkRL podcast. So we're just about wrapping up here — a few more questions for you today. Rohin, what would you say is the holy grail for your line of research? I think the holy grail is to have a procedure for training AI systems at particular tasks, where we can apply arbitrary human-understandable constraints on how the AI system achieves those tasks. For example, we could build an AI system that schedules your meetings, but ensure that it's always very respectful when it's talking to other people in order to schedule them, and that it's never discriminating based on sex or something like that. Or you could build an agent that plays Minecraft, deploy it on an entirely new multiplayer server that includes both humans and AI systems, and then say: hey, you should just go help such-and-such player with whatever it is they want to do — and the agent just does that, abiding by the norms on the multiplayer server that it joined. Or you could build a recommender system that's optimizing for what humans think is good for recommender systems to be doing, rather than optimizing for, say, engagement, if we think engagement is a bad thing to be optimizing for. So how do you see your research career plan — do you have a clear roadmap in mind, or are you doing a lot of exploration as you go? I wouldn't call it a roadmap exactly, but there's a clear plan, and the plan — we talked a bit about it earlier — is roughly: train models using human feedback, and then empower the humans providing the feedback as much as you can, ideally so that they can know everything the model knows and select the models that are getting the right outcomes for the right reasons. That's the plan — an ideal to which we aspire. We will probably not actually reach it; knowing everything the model knows is a pretty high bar, and probably we won't get there. But there are a bunch of tricks we can do that get us closer and closer, and the closer we get, the better we're doing. So: let's find more and more of those tricks, find which ones are best, see how costly they are, and so on, and ideally this leads to a significant improvement in our ability to do these things each time. I will say, though, that it took me several years to get to this point. Most of the previous years of my career have in fact involved a significant amount of exploration, which is part of why not all of the papers we've talked about really fit into this story. Is there anything else you want to mention to our audience today, Rohin? Yeah, I'm probably going to start a hiring round at DeepMind for my own team, probably sometime in the next month from the time of recording — today is March 22nd — so please do apply if you're interested in working on alignment. Great. Dr. Rohin Shah, this has been an absolute pleasure and a total honor. I want to thank you on behalf of myself and our audience. Yeah, thanks for having me — it was really fun to actually go through all of these papers in a single session; I don't think I've ever done that before.
[ { "end": 11, "start": 0, "text": " TalkRL podcast is all reinforcing learning all the time, featuring brilliant guests, both" }, { "end": 12.72, "start": 11, "text": " research and applied." }, { "end": 15.52, "start": 12.72, "text": " Join the conversation on Twitter at TalkRL podcast." }, { "end": 22.96, "start": 15.52, "text": " I'm your host, Robin Chohan." }, { "end": 28.240000000000002, "start": 22.96, "text": " Robyn Shaw is a research scientist at DeepMind and the editor and main contributor of the" }, { "end": 29.240000000000002, "start": 28.240000000000002, "text": " alignment newsletter." }, { "end": 31.72, "start": 29.24, "text": " Thanks so much for joining us today, Robin." }, { "end": 33.48, "start": 31.72, "text": " Yeah, thanks for having me, Robin." }, { "end": 37.96, "start": 33.48, "text": " Let's get started with how do you like to describe your area of interest?" }, { "end": 44.16, "start": 37.96, "text": " On my website, the thing that I say is that I'm interested in sort of a long-term trajectory" }, { "end": 49.480000000000004, "start": 44.16, "text": " of AI because it seems like AI is becoming more and more capable over time." }, { "end": 54.4, "start": 49.480000000000004, "text": " With many people thinking that someday we are going to get to artificial general intelligence" }, { "end": 60.879999999999995, "start": 54.4, "text": " or AIGI, where AI systems will be able to replace humans at most economically valuable" }, { "end": 61.879999999999995, "start": 60.879999999999995, "text": " tasks." }, { "end": 66.75999999999999, "start": 61.879999999999995, "text": " And that just seems like such an important event in the history of humanity." }, { "end": 69.16, "start": 66.75999999999999, "text": " It seems like it would radically transform the world." }, { "end": 74.75999999999999, "start": 69.16, "text": " And so it seems very important to both important and interesting to understand what is going" }, { "end": 80.08, "start": 74.75999999999999, "text": " to happen and to see how we can make that important stuff happen better, so that we get good" }, { "end": 81.92, "start": 80.08, "text": " outcomes instead of bad outcomes." }, { "end": 87.4, "start": 81.92, "text": " That's a very general statement, but I would say that's a pretty big area of interest" }, { "end": 88.4, "start": 87.4, "text": " for me." }, { "end": 95.64, "start": 88.4, "text": " And then I often spend most of my time on a particular sub-question within that, which" }, { "end": 103.88, "start": 95.64, "text": " is what are the chances that these AIGI systems will be misaligned with humanity in the sense" }, { "end": 107.72, "start": 103.88, "text": " that they will want something other than what they will want to do things other than what" }, { "end": 108.96000000000001, "start": 107.72, "text": " humans want them to do." }, { "end": 113.96, "start": 108.96, "text": " So A, what is the risk of that, how can it arise, and B, how can we prevent that problem" }, { "end": 114.96, "start": 113.96, "text": " from happening?" }, { "end": 115.96, "start": 114.96, "text": " Cool." }, { "end": 120.32, "start": 115.96, "text": " Okay, so we're going to talk about some of this in more general terms later on." }, { "end": 125.44, "start": 120.32, "text": " But first let's get a little more specific about some of your recent papers." }, { "end": 130.28, "start": 125.44, "text": " First we have the MinRL Basel competition on learning from human feedback." 
}, { "end": 134.84, "start": 130.28, "text": " And that was benchmarked for agents that solve almost life-like tasks." }, { "end": 140, "start": 134.84, "text": " So I gather this is based on the MinRL Minecraft based RL environment." }, { "end": 145.36, "start": 140, "text": " We saw some competitions on using that before, but here you're doing something different" }, { "end": 146.36, "start": 145.36, "text": " with the MinRL." }, { "end": 149.16, "start": 146.36, "text": " Can you tell us about Basel and what's the idea here?" }, { "end": 156.32, "start": 149.16, "text": " So I think the basic idea is that a reward function, which is a typical tool that you" }, { "end": 160, "start": 156.32, "text": " use in reinforcement learning, I'm sure you're less, I expect you're listeners probably" }, { "end": 161, "start": 160, "text": " know about that." }, { "end": 166.56, "start": 161, "text": " The reward function, if you have to write it down by hand, is actually a pretty not great" }, { "end": 170.2, "start": 166.56, "text": " way of specifying what you want an AI system to do." }, { "end": 174.2, "start": 170.2, "text": " Like reinforcement learning treats the reward function as a specification of exactly what" }, { "end": 178.92000000000002, "start": 174.2, "text": " the optimal behavior is to do in every possible circumstance that could possibly arise." }, { "end": 182.12, "start": 178.92000000000002, "text": " When you looked at that the reward function, did you think of every possible situation" }, { "end": 186.4, "start": 182.12, "text": " that could ever possibly arise and check whether your reward function was specifying the" }, { "end": 188.32, "start": 186.4, "text": " correct behavior in that situation?" }, { "end": 190.08, "start": 188.32, "text": " No, you did not do that." }, { "end": 195.36, "start": 190.08, "text": " And so we already have lots and lots of examples of cases where people like tried to write" }, { "end": 199.36, "start": 195.36, "text": " it right down the reward function that they thought would lead to good behavior and they" }, { "end": 204.32000000000002, "start": 199.36, "text": " actually ran reinforcement learning or some other optimization algorithm with that reward" }, { "end": 205.56, "start": 204.32000000000002, "text": " function." }, { "end": 210.04000000000002, "start": 205.56, "text": " And they found some totally unexpected solution that did get high reward, but didn't do" }, { "end": 212.16000000000003, "start": 210.04000000000002, "text": " what the designer wanted it to do." }, { "end": 216.76000000000002, "start": 212.16000000000003, "text": " And so this motivates the question of like, all right, how can we specify what we want" }, { "end": 221.44, "start": 216.76, "text": " the agent to do without using handwritten reward functions?" }, { "end": 226.6, "start": 221.44, "text": " The general class of approaches that has been developed in response to this is what I" }, { "end": 229.95999999999998, "start": 226.6, "text": " call learning from human feedback or LFHF." }, { "end": 236.16, "start": 229.95999999999998, "text": " The idea here is that you consider some possible situations where the AI could do things and" }, { "end": 242.44, "start": 236.16, "text": " then you like ask a human, hey, in these particular situations, what should the AI system do?" 
}, { "end": 248.68, "start": 242.44, "text": " So you're making more local queries and local specifications rather than having to" }, { "end": 252.28, "start": 248.68, "text": " reason about every possible circumstance that can ever arise." }, { "end": 259.24, "start": 252.28, "text": " And then given all of this, given a large data set of human feedback on various situations," }, { "end": 264.4, "start": 259.24, "text": " you can then train an agent to meet that specification as best as it can." }, { "end": 268.12, "start": 264.4, "text": " So people have been developing these techniques and includes things like imitation learning" }, { "end": 272.92, "start": 268.12, "text": " where you learn from human demonstrations of how to do the task or learning from comparisons" }, { "end": 279.08, "start": 272.92, "text": " where humans compare, look at videos of two agent, two videos of agent behavior and then" }, { "end": 285.04, "start": 279.08, "text": " say, you know, the left one is better than the right one or it includes corrections where" }, { "end": 288.36, "start": 285.04, "text": " the agent does something on humans like at this point, you should have like taken this" }, { "end": 289.36, "start": 288.36, "text": " other action instead." }, { "end": 290.68, "start": 289.36, "text": " That would have been better." }, { "end": 296.32, "start": 290.68, "text": " These are all ways that you can use human, human feedback to train an agent to do what" }, { "end": 297.32, "start": 296.32, "text": " you want." }, { "end": 301.56, "start": 297.32, "text": " So people have developed a lot of algorithms like this, but the evaluation of them is kind" }, { "end": 303.56, "start": 301.56, "text": " of ad hoc." }, { "end": 309.56, "start": 303.56, "text": " People just sort of make up some new environment to test their method on." }, { "end": 316.24, "start": 309.56, "text": " They don't really compare on any like on a standard benchmark that everyone is using." }, { "end": 322.48, "start": 316.24, "text": " So the big idea with basalt was to was to change that to actually make a benchmark that" }, { "end": 328.36, "start": 322.48, "text": " could reasonably fairly compare all of these, all of these different approaches." }, { "end": 333.12, "start": 328.36, "text": " So we like, we wanted it to mimic the real world situation as much as possible in the real" }, { "end": 334.12, "start": 333.12, "text": " world situation." }, { "end": 339.24, "start": 334.12, "text": " You just have like some notion in your head of what task you want your AI system to do." }, { "end": 342.8, "start": 339.24, "text": " And then you have to, you have to take a learning from human feedback algorithm and give" }, { "end": 344.8, "start": 342.8, "text": " it the appropriate feedback." }, { "end": 349.84000000000003, "start": 344.8, "text": " So similarly in this benchmark, we instantiate the agent in a Minecraft world and then we" }, { "end": 352.44, "start": 349.84000000000003, "text": " just tell the designer, hey, you've got to train." }, { "end": 355.76, "start": 352.44, "text": " You're an agent to say make a waterfall." }, { "end": 358.48, "start": 355.76, "text": " That's one of our tasks and then take a picture of it." }, { "end": 361.08, "start": 358.48, "text": " So we just tell the designers, you have to do this." 
}, { "end": 366.28, "start": 361.08, "text": " So now the designer has in their head, like notion of what the agent is supposed to do," }, { "end": 369.56, "start": 366.28, "text": " but there's no formal specification, no reward function, nothing like that." }, { "end": 371.08, "start": 369.56, "text": " So they can then do whatever they want." }, { "end": 374.64, "start": 371.08, "text": " They can write down a reward function by hand if that seems like an approach they want" }, { "end": 375.64, "start": 374.64, "text": " to do." }, { "end": 379.48, "start": 375.64, "text": " They can use demonstrations, they can use preferences, they can use corrections, they can do active" }, { "end": 381.36, "start": 379.48, "text": " learning and so on." }, { "end": 385, "start": 381.36, "text": " But their job is to like make an agent that actually does the task." }, { "end": 390.72, "start": 385, "text": " Ideally, they want to maximize performance and minimize costs both in terms of compute" }, { "end": 395.36, "start": 390.72, "text": " and in terms of how much human feedback it takes to train the agent." }, { "end": 400.48, "start": 395.36, "text": " So I watched the presentations of the top two solutions and it seemed like they showed" }, { "end": 402.76, "start": 400.48, "text": " very different approaches." }, { "end": 407.2, "start": 402.76, "text": " The first one, Kyros I would say, is seemed like a lot of hand engineering." }, { "end": 412.52, "start": 407.2, "text": " I think they used 80,000 plus labeled images and built some very specific components for" }, { "end": 413.52, "start": 412.52, "text": " this." }, { "end": 416.92, "start": 413.52, "text": " They kind of decomposed the problem, which I think is a very sensible thing to do." }, { "end": 420.4, "start": 416.92, "text": " But then also the second one was Obsidian." }, { "end": 425.15999999999997, "start": 420.4, "text": " They produced this inverse Q learning method, a new method, which is seemed like a more" }, { "end": 426.8, "start": 425.15999999999997, "text": " general theoretical solution." }, { "end": 430, "start": 426.8, "text": " I just wondered if you have any comments on the different types of solutions that came" }, { "end": 435.36, "start": 430, "text": " out of this or those kind of two main classes that you saw or did any classes of solutions" }, { "end": 436.36, "start": 435.36, "text": " surprise you." }, { "end": 438.76, "start": 436.36, "text": " Yeah, I think that's basically right." }, { "end": 445.36, "start": 438.76, "text": " I don't think they were particularly surprising in that it was, we spent a lot of time making" }, { "end": 450.56, "start": 445.36, "text": " sure that the tasks couldn't trivially be solved by just doing hand engineering a like" }, { "end": 451.88, "start": 450.56, "text": " classical program." }, { "end": 459.12, "start": 451.88, "text": " So even the top team did rely on a behavior-cloned navigation policy that used the neural network," }, { "end": 461.76, "start": 459.12, "text": " but it's true they've done a bunch of engineering on top of that." }, { "end": 466.52, "start": 461.76, "text": " Which I think is according to me is just a benefit of this setup." 
}, { "end": 471.36, "start": 466.52, "text": " It shows you like, hey, if you're just actually trying to get good performance, do you" }, { "end": 476.32, "start": 471.36, "text": " train in the neural network end to end or do you put in or do you put in domain knowledge" }, { "end": 481.2, "start": 476.32, "text": " and how much domain knowledge do you put in and how do you do it?" }, { "end": 484.84, "start": 481.2, "text": " And it turns out that in this particular case, the domain knowledge, well, they did end" }, { "end": 489, "start": 484.84, "text": " up getting first, but Team Obsidian was quite close behind." }, { "end": 492.16, "start": 489, "text": " I would say that the two approaches were actually pretty comparable." }, { "end": 496.28, "start": 492.16, "text": " And I do agree that I would say one is more of an engineering-y solution than the other" }, { "end": 499.24, "start": 496.28, "text": " one is more of a researchy solution." }, { "end": 504.08, "start": 499.24, "text": " So it seems to me like the goals here were things that could be modeled and learned like" }, { "end": 507.56, "start": 504.08, "text": " it seems feasible to learn the concept or to train a network to learn the concept of" }, { "end": 508.56, "start": 507.56, "text": " looking at a waterfall." }, { "end": 510.56, "start": 508.56, "text": " They had enough labels." }, { "end": 512.4, "start": 510.56, "text": " And I guess that's what some contestants did." }, { "end": 518.12, "start": 512.4, "text": " But do you have any comments on if we were to want goals that are harder to model than" }, { "end": 519.12, "start": 518.12, "text": " these things?" }, { "end": 524.68, "start": 519.12, "text": " I was trying to think examples that came up with like Arnie or Dance Choreography scoring," }, { "end": 527.24, "start": 524.68, "text": " like how would you even begin to model those things." }, { "end": 532.36, "start": 527.24, "text": " Do we have to just continue improving our modeling toolkits so that we can make models of" }, { "end": 536.36, "start": 532.36, "text": " these reward functions or is there some other strategy?" }, { "end": 539.76, "start": 536.36, "text": " It depends exactly what you mean by improving the modeling tool kit." }, { "end": 542.16, "start": 539.76, "text": " But basically, I think the answer is yes." }, { "end": 546.24, "start": 542.16, "text": " But you know, the way that we can improve our modeling tool kit, it may not look like" }, { "end": 547.84, "start": 546.24, "text": " explicit modeling." }, { "end": 555.48, "start": 547.84, "text": " So for example, for Arnie, I think you could probably get a decent, well, maybe not." }, { "end": 562.08, "start": 555.48, "text": " But it's plausible that you could get a decent reward model out of a large language model" }, { "end": 566.8000000000001, "start": 562.08, "text": " that like does in fact have the concept of irony." }, { "end": 570.52, "start": 566.8000000000001, "text": " If I remember correctly, large language models are not actually that great at humor, so I'm" }, { "end": 573.2800000000001, "start": 570.52, "text": " not sure if they have the concept of Arnie." }, { "end": 577.76, "start": 573.28, "text": " But I wouldn't be surprised that if further scaling did in fact give them a concept of" }, { "end": 585.0799999999999, "start": 577.76, "text": " irony such that we could use, we could then use them to have rewards that involve Arnie." 
}, { "end": 587.88, "start": 585.0799999999999, "text": " And I think that's the same sort of thing as like waterfall." }, { "end": 594.52, "start": 587.88, "text": " Like I agree that we can learn the concept of a waterfall, but it's not a trivial concept." }, { "end": 597.1999999999999, "start": 594.52, "text": " If you ask me to program it by hand, I would have no idea." }, { "end": 600.68, "start": 597.1999999999999, "text": " Like the only input, you get pixels as an input." }, { "end": 606.52, "start": 600.68, "text": " If you're like, here's a rectangle of pixels, please write a program that detects the" }, { "end": 607.52, "start": 606.52, "text": " waterfall in there." }, { "end": 609.4399999999999, "start": 607.52, "text": " I'm like, oh God, that sounds really difficult." }, { "end": 610.88, "start": 609.4399999999999, "text": " I don't know how to do it." }, { "end": 615.5999999999999, "start": 610.88, "text": " But we can, if we apply machine learning, then like turns out that we can recognize these" }, { "end": 617.76, "start": 615.5999999999999, "text": " sorts of concepts." }, { "end": 623.88, "start": 617.76, "text": " And similarly, I think it's not going to be like, I definitely couldn't write a program" }, { "end": 626.92, "start": 623.88, "text": " directly that can recognize Arnie." }, { "end": 631.4799999999999, "start": 626.92, "text": " But if you do machine learning, if you use machine learning to model all the text on the" }, { "end": 636.12, "start": 631.4799999999999, "text": " internet, the resulting model does in fact have a concept of irony that you can then try" }, { "end": 637.92, "start": 636.12, "text": " to use the new reward functions." }, { "end": 640.92, "start": 637.92, "text": " And then there's a Twitter thread related to disinformation." }, { "end": 645.64, "start": 640.92, "text": " And I shared a line from your paper where you said learning from human feedback offers" }, { "end": 650.04, "start": 645.64, "text": " the alternative of training recommender systems to promote content that humans would predict" }, { "end": 652.0799999999999, "start": 650.04, "text": " would improve the user's well-being." }, { "end": 654.1999999999999, "start": 652.0799999999999, "text": " And I thought that was really cool insight." }, { "end": 658.9200000000001, "start": 654.2, "text": " Is that something you're interested in pursuing or you see that being a thing?" }, { "end": 662.84, "start": 658.9200000000001, "text": " I don't know whether or not it is actually feasible currently." }, { "end": 667.88, "start": 662.84, "text": " One thing that needs to be true of recommender systems is they need to be cheap to run because" }, { "end": 671.2800000000001, "start": 667.88, "text": " they are being run so, so many times every day." }, { "end": 673.48, "start": 671.2800000000001, "text": " I don't actually know this for a fact." }, { "end": 676.1600000000001, "start": 673.48, "text": " I haven't actually done any Fermi estimates." }, { "end": 682.88, "start": 676.1600000000001, "text": " But my guess would be that if you try to actually run GPT-3 on, say, Facebook posts" }, { "end": 689.12, "start": 682.88, "text": " in order to then rank them, I think that would probably be prohibitively expensive for" }, { "end": 690.12, "start": 689.12, "text": " Facebook." 
}, { "end": 695.64, "start": 690.12, "text": " So, there's a question of, can you get a model that actually makes reasonable predictions" }, { "end": 702.04, "start": 695.64, "text": " about the user's well-being that can also be run cheaply enough that it's not a huge" }, { "end": 706.8, "start": 702.04, "text": " expensive cost to whoever is implementing the recommendation system?" }, { "end": 711.8, "start": 706.8, "text": " And also does it take a sufficiently small amount of human feedback that you aren't" }, { "end": 715.92, "start": 711.8, "text": " bottlenecked on cost from the humans providing the feedback?" }, { "end": 721.3199999999999, "start": 715.92, "text": " And also do we have algorithms that are good enough to train recommender systems this" }, { "end": 722.3199999999999, "start": 721.3199999999999, "text": " way?" }, { "end": 725.28, "start": 722.3199999999999, "text": " I think the answer is plausibly yes to all of these." }, { "end": 730.8399999999999, "start": 725.28, "text": " I haven't actually checked myself nor have I even tried to do any feasibility studies." }, { "end": 735.8399999999999, "start": 730.8399999999999, "text": " I think the line that you're quoting was more about, like, okay, why do this research" }, { "end": 736.8399999999999, "start": 735.8399999999999, "text": " at all?" }, { "end": 739.52, "start": 736.8399999999999, "text": " And I'm like, well, someday in the future, this should be possible." }, { "end": 743.84, "start": 739.52, "text": " And I stick by that, like, someday in the future things will become significantly cheaper," }, { "end": 747.0799999999999, "start": 743.84, "text": " learning from human feedback algorithms will be a lot better and so on." }, { "end": 751.72, "start": 747.0799999999999, "text": " And then, like, it will just totally make sense to recommender systems trained with human" }, { "end": 754.48, "start": 751.72, "text": " feedback unless we found something even better by then." }, { "end": 757.36, "start": 754.48, "text": " It's just not obvious to me that it is the right choice currently." }, { "end": 762.84, "start": 757.36, "text": " I look forward to that, and I'm really concerned, like, many people are about the disinformation" }, { "end": 765.48, "start": 762.84, "text": " and the divisiveness of social media." }, { "end": 766.48, "start": 765.48, "text": " So that sounds great." }, { "end": 771.32, "start": 766.48, "text": " I think everyone's used to very cheap reward functions pretty much across the board." }, { "end": 775.12, "start": 771.32, "text": " So I guess what you're kind of pointing to with these reward functions is potentially" }, { "end": 779.8000000000001, "start": 775.12, "text": " more expensive to evaluate reward functions, which is maybe hasn't been a common thing" }, { "end": 780.8000000000001, "start": 779.8000000000001, "text": " up till now." }, { "end": 785.2, "start": 780.8000000000001, "text": " Like, both more expensive reward functions and also the model that you train with that" }, { "end": 790.12, "start": 785.2, "text": " reward function might be, might still be very expensive to do inference with." 
}, { "end": 794.96, "start": 790.12, "text": " Presumably recommender systems right now are like compute these, you know, run a few linear" }, { "end": 801.48, "start": 794.96, "text": " time algorithms on the post in order to like compute it like 100 or 100,000 features," }, { "end": 806, "start": 801.48, "text": " then do a dot product with 100,000 weights, see which and then like rank things in order" }, { "end": 808.2800000000001, "start": 806, "text": " by by those numbers." }, { "end": 812.84, "start": 808.2800000000001, "text": " And that's like, you know, maybe a million flops or something, which is a tiny, tiny number" }, { "end": 820.9200000000001, "start": 812.84, "text": " of flops, whereas like a forward pass to GPT-3 is more is several hundred billion flops." }, { "end": 828.76, "start": 820.92, "text": " So that's a like 10 to the 5x increase in the amount of computation you have to do." }, { "end": 833.4799999999999, "start": 828.76, "text": " Actually, no, that's one forward pass through GPT-3, but there are many words in a Facebook" }, { "end": 834.4799999999999, "start": 833.4799999999999, "text": " post." }, { "end": 838.8399999999999, "start": 834.4799999999999, "text": " So multiply the 10 to the 5 by the number of words in the Facebook post." }, { "end": 843.0799999999999, "start": 838.8399999999999, "text": " And now we're at like maybe more like 10 to the 7 times cost increase just to do inference" }, { "end": 847.7199999999999, "start": 843.0799999999999, "text": " even as you mean you had successfully trained a model that could do recommendations." }, { "end": 853.2, "start": 847.72, "text": " And in the end result, maybe lowering engagement for the benefit of less divisive content," }, { "end": 857.0400000000001, "start": 853.2, "text": " which is maybe not in the interest of the social media companies in the first place." }, { "end": 861.5600000000001, "start": 857.0400000000001, "text": " Yeah, there's also a question of whether the companies will want to do this." }, { "end": 867.8000000000001, "start": 861.5600000000001, "text": " But I think if we, I don't know, if we like show that this was feasible, that would give" }, { "end": 874.52, "start": 867.8000000000001, "text": " regulators a much more, like I think a common problem with regulation is that you don't" }, { "end": 879.48, "start": 874.52, "text": " know what to regulate because there's no alternative on the table for what people are already" }, { "end": 880.48, "start": 879.48, "text": " doing." }, { "end": 884.6, "start": 880.48, "text": " And if we were to come to them and say, look, there's this learning from human feedback" }, { "end": 887.48, "start": 884.6, "text": " approach, we've like calculated it out." }, { "end": 892.68, "start": 887.48, "text": " This should, this should only increase cost by 2x or maybe, yeah, this should, maybe" }, { "end": 896, "start": 892.68, "text": " this is like just the same amount of cost." }, { "end": 899.3199999999999, "start": 896, "text": " And it shouldn't be too hard for companies to actually train such a model." }, { "end": 900.72, "start": 899.3199999999999, "text": " They've already got all the infrastructure." }, { "end": 905.52, "start": 900.72, "text": " They should barely be like, I don't know, $100,000 to train the model once." 
}, { "end": 911.12, "start": 905.52, "text": " And if you lay out that case, I think it's much, I would hope at least that it would" }, { "end": 915.8000000000001, "start": 911.12, "text": " be a lot easier for the regulators to be like, yes, everyone you must train your recommended" }, { "end": 921.44, "start": 915.8000000000001, "text": " systems to be optimizing for what humans would predict as good as opposed to whatever you're" }, { "end": 922.44, "start": 921.44, "text": " doing right now." }, { "end": 924.1600000000001, "start": 922.44, "text": " So that could really change the game." }, { "end": 928.44, "start": 924.1600000000001, "text": " And then the bots or the divisive posters are now trying to game that, that newer word" }, { "end": 929.44, "start": 928.44, "text": " function." }, { "end": 931.44, "start": 929.44, "text": " So that's why some different strategies." }, { "end": 937.2800000000001, "start": 931.44, "text": " Yeah, you might, you might imagine that you have to like keep retraining in order to deal" }, { "end": 941.5600000000001, "start": 937.2800000000001, "text": " with new strategies that are, that people are finding in response to you." }, { "end": 943.24, "start": 941.5600000000001, "text": " Like we can do this." }, { "end": 947.5200000000001, "start": 943.24, "text": " I don't have any special information about that on those from working at Google, but I'm" }, { "end": 953.44, "start": 947.5200000000001, "text": " told that Google is actually like pretty good at defeating, defeating spammers, for example." }, { "end": 958.96, "start": 953.44, "text": " Like in fact, my Gmail spam filter works quite well as far as I can tell despite the fact" }, { "end": 962.44, "start": 958.96, "text": " that spammers are constantly trying to evade it." }, { "end": 964.6800000000001, "start": 962.44, "text": " And we'll hopefully we could do the same thing here." }, { "end": 965.6800000000001, "start": 964.6800000000001, "text": " Cool." }, { "end": 967.2, "start": 965.6800000000001, "text": " Okay, let's move on to your next paper." }, { "end": 968.88, "start": 967.2, "text": " Preferences implicit in the state of the world." }, { "end": 972.72, "start": 968.88, "text": " I understand this paper is closely related to your dissertation." }, { "end": 976.36, "start": 972.72, "text": " We'll link to your dissertation in the showroom and says, well, I'm just going to read a quote" }, { "end": 978.76, "start": 976.36, "text": " and I love how you distilled this key insight." }, { "end": 982.84, "start": 978.76, "text": " You said the key insight of this paper is that when a robot is deployed in an environment," }, { "end": 984.12, "start": 982.84, "text": " the humans have been acting in." }, { "end": 987.08, "start": 984.12, "text": " The state of the environment is already optimized for what humans want." }, { "end": 991.76, "start": 987.08, "text": " Can you tell us the general idea here and what do you mean by that statement?" }, { "end": 998.6800000000001, "start": 991.76, "text": " Maybe like put yourself in the position of a robot or an AI system that knows nothing" }, { "end": 1000.4000000000001, "start": 998.6800000000001, "text": " about the world." }, { "end": 1004.24, "start": 1000.4000000000001, "text": " Maybe it's like, sorry, like it knows the laws of physics or something." }, { "end": 1006.5600000000001, "start": 1004.24, "text": " It knows that there's gravity." 
}, { "end": 1011.5200000000001, "start": 1006.5600000000001, "text": " It knows that like there are solids, liquids and gases, liquids tend to take the shape" }, { "end": 1015.08, "start": 1011.5200000000001, "text": " of the container that they're in, stuff like that." }, { "end": 1021.2, "start": 1015.08, "text": " It doesn't know anything about humans or maybe like, you know, it was, we imagine that" }, { "end": 1025.92, "start": 1021.2, "text": " it's sort of like off in other parts of the solar system or whatever it hasn't really" }, { "end": 1026.92, "start": 1025.92, "text": " seen in Earth." }, { "end": 1031.64, "start": 1026.92, "text": " And then it like comes to Earth and it's like, whoa, Earth has these like super regular" }, { "end": 1032.64, "start": 1031.64, "text": " structures." }, { "end": 1041.72, "start": 1032.64, "text": " There's like these like very cuboidal structures with glass panes and regular intervals" }, { "end": 1046.6000000000001, "start": 1041.72, "text": " that often seem to have lights inside of them, even though even at night when there isn't" }, { "end": 1050.72, "start": 1046.6000000000001, "text": " light outside of outside of them, this is kind of shocking." }, { "end": 1056.1200000000001, "start": 1050.72, "text": " You wouldn't expect this from a random configuration of atoms or something like that." }, { "end": 1061.24, "start": 1056.1200000000001, "text": " Like there's some sense in which they order of the world that we humans have imposed upon" }, { "end": 1067.48, "start": 1061.24, "text": " it is like extremely surprising if you don't know about humans already being there and" }, { "end": 1068.48, "start": 1067.48, "text": " what they want." }, { "end": 1074.56, "start": 1068.48, "text": " So then you can imagine asking your AIS system, hey, you see a lot of order here." }, { "end": 1079.84, "start": 1074.56, "text": " Can you like figure out an explanation for why this order is there?" }, { "end": 1084.2, "start": 1079.84, "text": " Perhaps it, and then you like, and maybe you get, and then you give it the hint of like," }, { "end": 1088.04, "start": 1084.2, "text": " look, it's, we're going to give you the hint that it was created by somebody optimizing" }, { "end": 1089.04, "start": 1088.04, "text": " the world." }, { "end": 1091.92, "start": 1089.04, "text": " What sort of things might they have been optimizing for?" }, { "end": 1096.08, "start": 1091.92, "text": " And then you like, you know, you look around, you see that like, oh, liquids, they tend" }, { "end": 1100.24, "start": 1096.08, "text": " to be in these like glasses, it would be really easy to tip over the glasses and have all" }, { "end": 1103.8, "start": 1100.24, "text": " the liquids spill out, but like that mostly doesn't happen." }, { "end": 1107.84, "start": 1103.8, "text": " So people must want to have their liquids in glasses and probably I shouldn't knock it" }, { "end": 1108.84, "start": 1107.84, "text": " over." }, { "end": 1111, "start": 1108.84, "text": " Vases, they're like kind of fragile." }, { "end": 1116.52, "start": 1111, "text": " You could like easily just like move them a little bit to the, to the left or right and" }, { "end": 1119.1599999999999, "start": 1116.52, "text": " they would like fall down and break." }, { "end": 1122.1999999999998, "start": 1119.1599999999999, "text": " And once they're broken, you couldn't then reassemble them." }, { "end": 1124.36, "start": 1122.1999999999998, "text": " But nonetheless, they're still not broken." 
}, { "end": 1128.84, "start": 1124.36, "text": " So like probably someone like actively doesn't want them to break and is leaving them on" }, { "end": 1129.84, "start": 1128.84, "text": " the table." }, { "end": 1130.84, "start": 1129.84, "text": " Yeah." }, { "end": 1134.56, "start": 1130.84, "text": " So really, I would say that the idea here is the order in the world did not just happen" }, { "end": 1136.04, "start": 1134.56, "text": " by random chance." }, { "end": 1138.36, "start": 1136.04, "text": " It happened because of human optimization." }, { "end": 1142.76, "start": 1138.36, "text": " And so from looking at the order of the world, you can figure out what the humans were optimizing" }, { "end": 1143.76, "start": 1142.76, "text": " for." }, { "end": 1145.6799999999998, "start": 1143.76, "text": " Yeah, that's the basic idea under a length of paper." }, { "end": 1149.6, "start": 1145.6799999999998, "text": " So there's some kind of relationship here to inverse reinforcement learning where we're" }, { "end": 1154.84, "start": 1149.6, "text": " trying to recover the reward function from observing an agent's behavior." }, { "end": 1157.28, "start": 1154.84, "text": " But here you're not observing the agent's behavior, right?" }, { "end": 1159.32, "start": 1157.28, "text": " So it's not quite inverse RL." }, { "end": 1163.1999999999998, "start": 1159.32, "text": " What how would you describe the relationship between what you're doing here and standard" }, { "end": 1164.36, "start": 1163.1999999999998, "text": " inverse RL?" }, { "end": 1165.36, "start": 1164.36, "text": " Yeah." }, { "end": 1172.7199999999998, "start": 1165.36, "text": " So in terms of the formalism, inverse RL says that you observe the human's behavior over" }, { "end": 1173.7199999999998, "start": 1172.7199999999998, "text": " time." }, { "end": 1177.9199999999998, "start": 1173.7199999999998, "text": " So that's a sequence of states and actions that the human took within those states." }, { "end": 1181.48, "start": 1177.92, "text": " Whereas we're just saying, nah, nah, nah, we're not watching the human's behavior." }, { "end": 1185.04, "start": 1181.48, "text": " We're just going to see only the state, the current state." }, { "end": 1186.72, "start": 1185.04, "text": " That's the only thing that we see." }, { "end": 1189.68, "start": 1186.72, "text": " And so you can think of this in the framework of inverse reinforcement learning." }, { "end": 1195.8400000000001, "start": 1189.68, "text": " You can think of this as either the final state of the trajectory or a state sampled from" }, { "end": 1200.5600000000002, "start": 1195.8400000000001, "text": " the stationary distribution from an infinitely long trajectory." }, { "end": 1202.72, "start": 1200.5600000000002, "text": " Either of those would be reasonable to do." }, { "end": 1206.3600000000001, "start": 1202.72, "text": " But you're only observing that one thing instead of observing the entire state action" }, { "end": 1210.04, "start": 1206.36, "text": " history starting from a random initialization of the world." }, { "end": 1213.4799999999998, "start": 1210.04, "text": " But other than that, you just make that one change and then you run through all the" }, { "end": 1215.9199999999998, "start": 1213.4799999999998, "text": " same map and you get a slightly different algorithm." }, { "end": 1220.1599999999999, "start": 1215.9199999999998, "text": " And that's basically what we did to make this paper." 
}, { "end": 1224.32, "start": 1220.1599999999999, "text": " So with this approach, I guess potentially you're opening up a huge amount of kind of" }, { "end": 1227.12, "start": 1224.32, "text": " unsupervised learning just from observing what's happening." }, { "end": 1231.3999999999999, "start": 1227.12, "text": " And you can kind of almost do it instantaneously in terms of observation, right?" }, { "end": 1234.1599999999999, "start": 1231.3999999999999, "text": " You don't have to watch billions of humans for thousands of years." }, { "end": 1235.1599999999999, "start": 1234.1599999999999, "text": " Yep." }, { "end": 1236.1599999999999, "start": 1235.1599999999999, "text": " That's right." }, { "end": 1242.48, "start": 1236.16, "text": " It does require that your AI system knows like the laws of physics or as we would call" }, { "end": 1246.48, "start": 1242.48, "text": " it in RL, the transition dynamics." }, { "end": 1251.0400000000002, "start": 1246.48, "text": " Or well, it needs to either know that or have some sorts of data from which it can learn" }, { "end": 1257.0400000000002, "start": 1251.0400000000002, "text": " that because if you just look at the state of the world and you have no idea what the" }, { "end": 1261.4, "start": 1257.0400000000002, "text": " laws of physics are or how things work at all, you're not going to be able to figure" }, { "end": 1264.28, "start": 1261.4, "text": " out how it was optimized into the state." }, { "end": 1269.16, "start": 1264.28, "text": " Like if you want to infer that humans don't want their bases to be broken, it's an important" }, { "end": 1275.6, "start": 1269.16, "text": " fact in order to infer that that if a base is broken, it's very hard to put it back together." }, { "end": 1281.52, "start": 1275.6, "text": " And that is the fact about the transition dynamics, which we assume by fiat that the agent" }, { "end": 1282.52, "start": 1281.52, "text": " knows." }, { "end": 1289.3999999999999, "start": 1282.52, "text": " But yes, if you had enough data such that self-supervised learning could teach the agent a bunch about" }, { "end": 1295.0400000000002, "start": 1289.4, "text": " dynamics and also then and then like also the agent could go about going around looking" }, { "end": 1300.96, "start": 1295.0400000000002, "text": " at the state of the world in theory, it could then infer a lot about what humans care about." }, { "end": 1308.2, "start": 1300.96, "text": " So I very clearly remember meeting you at Nureps 2018 Deep Arla Workshop in Montreal," }, { "end": 1309.4, "start": 1308.2, "text": " the poster session." }, { "end": 1314.68, "start": 1309.4, "text": " And I remember your poster on this and you showed a dining room that was all nicely arranged" }, { "end": 1320.28, "start": 1314.68, "text": " and you were saying how a robot could learn from how things are arranged." }, { "end": 1325.6000000000001, "start": 1320.28, "text": " And I just want to say, I'll say this publicly, I didn't understand at that point what you" }, { "end": 1328.48, "start": 1325.6000000000001, "text": " meant or why that could be important." }, { "end": 1331.5600000000002, "start": 1328.48, "text": " And it was so different, your angle was just so different than everything else that was" }, { "end": 1333.28, "start": 1331.5600000000002, "text": " being presented that day." }, { "end": 1334.64, "start": 1333.28, "text": " And I really didn't get it." }, { "end": 1336.8400000000001, "start": 1334.64, "text": " So I and I will own that." 
}, { "end": 1338.68, "start": 1336.8400000000001, "text": " It was my loss." }, { "end": 1339.68, "start": 1338.68, "text": " So thanks for your patience." }, { "end": 1343.3600000000001, "start": 1339.68, "text": " It only took me three and a half years or something to come around." }, { "end": 1348.8, "start": 1343.36, "text": " Yeah, sorry, I didn't communicate that clear or I suppose." }, { "end": 1353.28, "start": 1348.8, "text": " I don't think it was no, I don't think it was at all on you." }, { "end": 1358.6399999999999, "start": 1353.28, "text": " But maybe I just lacked the background to see why I like to understand." }, { "end": 1363.8799999999999, "start": 1358.6399999999999, "text": " Let me put it this way, like how often do you find people who have some technical understanding" }, { "end": 1370.08, "start": 1363.8799999999999, "text": " of AI, but still maybe don't appreciate some of this line of work including alignment and" }, { "end": 1371.08, "start": 1370.08, "text": " things like that?" }, { "end": 1372.28, "start": 1371.08, "text": " Is that a common thing?" }, { "end": 1375.08, "start": 1372.28, "text": " I think that's the reason of common." }, { "end": 1376.36, "start": 1375.08, "text": " And what do you attribute that to?" }, { "end": 1379.56, "start": 1376.36, "text": " What's going on there and is that changing at all?" }, { "end": 1380.8799999999999, "start": 1379.56, "text": " I think it's pretty interesting." }, { "end": 1386.92, "start": 1380.8799999999999, "text": " I don't think that these people would say that, oh, this is a boring paper or this is" }, { "end": 1389.36, "start": 1386.92, "text": " an incompetent paper." }, { "end": 1395.04, "start": 1389.36, "text": " I think they would say, yes, the person who wrote this paper is in fact, has in fact done" }, { "end": 1401.76, "start": 1395.04, "text": " something impressive by the standards of like, did you need to be intelligent and like" }, { "end": 1403.84, "start": 1401.76, "text": " do good math in order to do this?" }, { "end": 1409.08, "start": 1403.84, "text": " I think they are more likely to say something like, okay, but so what?" }, { "end": 1411.68, "start": 1409.08, "text": " And that's not entirely unfair." }, { "end": 1416.96, "start": 1411.68, "text": " It was the deep RL workshop and here I am talking about like, oh, yes, imagine that you" }, { "end": 1422.68, "start": 1416.96, "text": " know all the dynamics and also you're only getting to look at the state of the world." }, { "end": 1426.84, "start": 1422.68, "text": " And then you think about how bases can be broken but then they can't be put back together" }, { "end": 1429.6, "start": 1426.84, "text": " and voila, you learn that humans don't like to break bases." }, { "end": 1435.1999999999998, "start": 1429.6, "text": " This is just such so different from all of the things that RL usually focuses on, right?" }, { "end": 1436.9199999999998, "start": 1435.1999999999998, "text": " Like it doesn't have any of the buzzwords." }, { "end": 1441.76, "start": 1436.9199999999998, "text": " There's no like, you know, deep learning, there's no exploration, there's no, there's" }, { "end": 1444.52, "start": 1441.76, "text": " no catastrophic forgetting, nothing like that." }, { "end": 1448.08, "start": 1444.52, "text": " And to be clear, all of those seem like important things to focus on." 
}, { "end": 1452.48, "start": 1448.08, "text": " And I think many of the people who are at that workshop were focusing on those and are" }, { "end": 1454.04, "start": 1452.48, "text": " doing good work on them." }, { "end": 1455.9199999999998, "start": 1454.04, "text": " And I'm just doing something completely different." }, { "end": 1460.72, "start": 1455.92, "text": " It's like not all that interesting to them because they want to work on reinforcement" }, { "end": 1461.72, "start": 1460.72, "text": " learning." }, { "end": 1466.8400000000001, "start": 1461.72, "text": " I think they're making a mistake in the sense that like AI alignment is important and" }, { "end": 1468.88, "start": 1466.8400000000001, "text": " more people should work on it." }, { "end": 1473.88, "start": 1468.88, "text": " But I don't think they're making a mistake in that they're probably correct about what" }, { "end": 1475.72, "start": 1473.88, "text": " doesn't, doesn't interest them." }, { "end": 1479.72, "start": 1475.72, "text": " Okay, just so I'm clear, I was not critiquing your math or the value of anything you were" }, { "end": 1480.72, "start": 1479.72, "text": " doing." }, { "end": 1484.3600000000001, "start": 1480.72, "text": " It was just my ability to understand the importance of this type of work." }, { "end": 1485.88, "start": 1484.3600000000001, "text": " Yeah, I didn't think you were." }, { "end": 1487.44, "start": 1485.88, "text": " Okay, thanks." }, { "end": 1492.68, "start": 1487.44, "text": " So I will say that that day when I first encountered your poster, I was really hung up on edge" }, { "end": 1494.2, "start": 1492.68, "text": " cases." }, { "end": 1499.0400000000002, "start": 1494.2, "text": " Like, you know, there's in the world, the robot might observe, there's hunger and there's" }, { "end": 1503.2, "start": 1499.0400000000002, "text": " traffic accidents and there's things that things like, like not everything is perfect." }, { "end": 1507.3600000000001, "start": 1503.2, "text": " And we don't want the robot to replicate all these, all these flaws in the world or the" }, { "end": 1511.1200000000001, "start": 1507.3600000000001, "text": " dining room, there might be, you know, dirty dishes or something." }, { "end": 1514.0400000000002, "start": 1511.1200000000001, "text": " And so the world is clearly not exactly how we want it to be." }, { "end": 1520.12, "start": 1514.04, "text": " So how is that, is that an issue or is that, is that not an issue or is that just not" }, { "end": 1522.84, "start": 1520.12, "text": " the point of this, not, not a trust here?" }, { "end": 1524.52, "start": 1522.84, "text": " It depends a little bit." }, { "end": 1527.52, "start": 1524.52, "text": " I think in many cases, it's not an issue." }, { "end": 1530.8799999999999, "start": 1527.52, "text": " If you imagine that the robot somehow sees the entire world." }, { "end": 1533.08, "start": 1530.8799999999999, "text": " So for example, you mentioned hunger." }, { "end": 1540.92, "start": 1533.08, "text": " I think the robot would notice that we do in fact spend a lot of effort making sure that" }, { "end": 1544.0800000000002, "start": 1540.92, "text": " at least large number of people don't go hungry." 
}, { "end": 1549.72, "start": 1544.0800000000002, "text": " Like, we've built these giant vehicles, both trucks and cargo ships and so on, that" }, { "end": 1555.92, "start": 1549.72, "text": " move food around in a way that seems at least somewhat optimized to get food to people who" }, { "end": 1558.28, "start": 1555.92, "text": " like that food and want to eat it." }, { "end": 1560.5600000000002, "start": 1558.28, "text": " So there's lots of effort being put into it." }, { "end": 1565.1200000000001, "start": 1560.5600000000002, "text": " There's not like the maximal amount of effort being put into it, which I think reflects" }, { "end": 1569.1200000000001, "start": 1565.1200000000001, "text": " the fact that there are things that we care about other than food." }, { "end": 1573.6399999999999, "start": 1569.12, "text": " So I do think it would be like, all right, humans definitely care about having food." }, { "end": 1578.4799999999998, "start": 1573.6399999999999, "text": " I think it might then, like if you use the assumption that we have in the paper, which" }, { "end": 1584.28, "start": 1578.4799999999998, "text": " is that humans are the humans are noisily rational, then it might conclude things like," }, { "end": 1593.08, "start": 1584.28, "text": " I, yes, Western countries care about getting food to Western citizens, to the citizens" }, { "end": 1594.28, "start": 1593.08, "text": " of their country." }, { "end": 1599.72, "start": 1594.28, "text": " And they care a little bit about other people having food, but like not that much." }, { "end": 1603.24, "start": 1599.72, "text": " It's like a small portion of their government's aid budget." }, { "end": 1606.92, "start": 1603.24, "text": " So like there's a positive weight on their fairly small weight." }, { "end": 1612.96, "start": 1606.92, "text": " And that seems like maybe not the thing that we want to tolerant, but like also I think" }, { "end": 1619.44, "start": 1612.96, "text": " it is in some sense an accurate reflection of what Western countries care about if you" }, { "end": 1622, "start": 1619.44, "text": " go by their actions rather than what they say." }, { "end": 1628.48, "start": 1622, "text": " Cool. Okay. So I, I'm going to move on to benefits of assistance over reward learning." }, { "end": 1631.96, "start": 1628.48, "text": " And this one was absolutely fascinating to me." }, { "end": 1632.96, "start": 1631.96, "text": " Actually mind blowing." }, { "end": 1636.52, "start": 1632.96, "text": " I highly recommend people read all of these, but, but definitely I can point to this one" }, { "end": 1638.8, "start": 1636.52, "text": " as something surprising to me." }, { "end": 1640.48, "start": 1638.8, "text": " So that was you as a first author." }, { "end": 1644.68, "start": 1640.48, "text": " And can you share what is the, what's the general idea of this paper, Ron?" }, { "end": 1648.12, "start": 1644.68, "text": " I should say that this general idea was not novel to this paper." }, { "end": 1653.36, "start": 1648.12, "text": " It's been proposed previously. I'm not going to remember the paper, but it's by firm at" }, { "end": 1654.36, "start": 1653.36, "text": " all." }, { "end": 1659.56, "start": 1654.36, "text": " It's like towards decision, decision theoretic model of assistance or something like that." 
}, { "end": 1663.4799999999998, "start": 1659.56, "text": " And then there's also cooperative and research reinforcement learning from Chai where I did" }, { "end": 1664.4799999999998, "start": 1663.4799999999998, "text": " my PhD." }, { "end": 1668.4799999999998, "start": 1664.4799999999998, "text": " The idea with this paper was just to take that the, the models that had already been" }, { "end": 1673.04, "start": 1668.4799999999998, "text": " proposed in these papers and explain why they were so nice." }, { "end": 1678.3999999999999, "start": 1673.04, "text": " Why, why I was like particularly keen on these models as opposed to other things that the" }, { "end": 1679.76, "start": 1678.3999999999999, "text": " field could be doing." }, { "end": 1685.96, "start": 1679.76, "text": " So the idea here is that generally we want to build a systems that help us do stuff." }, { "end": 1689.72, "start": 1685.96, "text": " And you could imagine two different ways that this could be done." }, { "end": 1696.72, "start": 1689.72, "text": " First, you could imagine a system that has two separate modules." }, { "end": 1702.36, "start": 1696.72, "text": " One module is doing is trying to figure out what the humans want or what the humans want" }, { "end": 1703.8, "start": 1702.36, "text": " the AI system to do." }, { "end": 1709.84, "start": 1703.8, "text": " And the other module is them is trying to then do the things that the first module said" }, { "end": 1711.6, "start": 1709.84, "text": " the people wanted it to do." }, { "end": 1715.3999999999999, "start": 1711.6, "text": " And that's kind of like the one we talked about learning from human feedback earlier on" }, { "end": 1716.76, "start": 1715.3999999999999, "text": " and modeling reward functions." }, { "end": 1719.28, "start": 1716.76, "text": " Is that what that would exactly?" }, { "end": 1723.32, "start": 1719.28, "text": " I think that is often what that's often what people are thinking about." }, { "end": 1729.52, "start": 1723.32, "text": " I would make a distinction between how you train the AI system and what the AI system" }, { "end": 1730.6, "start": 1729.52, "text": " is doing." }, { "end": 1734.36, "start": 1730.6, "text": " This paper I would say is more about what the AI system is doing." }, { "end": 1740.1999999999998, "start": 1734.36, "text": " Whereas the learning from human feedback stuff is more about how you train the system." }, { "end": 1741.1999999999998, "start": 1740.1999999999998, "text": " Yeah." }, { "end": 1746.08, "start": 1741.1999999999998, "text": " In the what the AI system is doing framework, I would call this value learning or reward" }, { "end": 1747.08, "start": 1746.08, "text": " learning." }, { "end": 1748.9599999999998, "start": 1747.08, "text": " And then the alternative is assistance." }, { "end": 1752.6, "start": 1748.9599999999998, "text": " And so although there's like some surface similarities between learning from human feedback" }, { "end": 1758.36, "start": 1752.6, "text": " and award learning, it is totally possible to use learning from human feedback algorithms" }, { "end": 1766.8799999999999, "start": 1758.36, "text": " to train an AI system that acts as though it is in the assistance paradigm is also possible" }, { "end": 1771.32, "start": 1766.8799999999999, "text": " to use learning from human feedback approaches to train an AI system." }, { "end": 1776.12, "start": 1771.32, "text": " Then acts as though that then acts as though it is in the reward learning paradigm." 
}, { "end": 1782.08, "start": 1776.12, "text": " So that's one distinction to make to recap the value learning or reward learning side" }, { "end": 1788.56, "start": 1782.08, "text": " of the two models is two separate modules, one that like figures out what the humans want" }, { "end": 1793.6399999999999, "start": 1788.56, "text": " and the other that then acts to optimize those values." }, { "end": 1798.76, "start": 1793.6399999999999, "text": " And the other side which we might call assistance is where you still have both of those functions" }, { "end": 1801.6799999999998, "start": 1798.76, "text": " but they're combined into a single module." }, { "end": 1808, "start": 1801.6799999999998, "text": " And the way that you do this is you have the AI system posit that there is some true unknown" }, { "end": 1813.76, "start": 1808, "text": " reward function theta only the human, the human who is a part of the environment knows this" }, { "end": 1818, "start": 1813.76, "text": " data and their behavior depends on what the data actually is." }, { "end": 1822.8, "start": 1818, "text": " And so now these just test to act in order to maximize theta but it doesn't know theta." }, { "end": 1826.88, "start": 1822.8, "text": " So it has to like look at how the human is behaving within the environment in order to like" }, { "end": 1830.28, "start": 1826.88, "text": " make some inferences about what data probably is." }, { "end": 1833.52, "start": 1830.28, "text": " And then as it gets more and more information about data that allows it to take more and" }, { "end": 1837.36, "start": 1833.52, "text": " more like actions in order to optimize theta." }, { "end": 1843.12, "start": 1837.36, "text": " And fundamentally this like learning about theta is an instrumental action that the" }, { "end": 1849.4399999999998, "start": 1843.12, "text": " agent predicts would be useful for helping it to better optimize theta in the future." }, { "end": 1854.9199999999998, "start": 1849.4399999999998, "text": " So if I understand correctly you're saying assistance is superior because it can the" }, { "end": 1860.8, "start": 1854.9199999999998, "text": " agent can reason about how to improve its model of what the human wants or how do you" }, { "end": 1865.6, "start": 1860.8, "text": " describe why why it's you get all these benefits from assistance." }, { "end": 1870.48, "start": 1865.6, "text": " Yeah, I think that benefits come more from the fact that these two functions are integrated." }, { "end": 1875.52, "start": 1870.48, "text": " There's the value learning, the reward learning or value learning and the control." }, { "end": 1878.1599999999999, "start": 1875.52, "text": " So like acting to optimize the value learning." }, { "end": 1880.7199999999998, "start": 1878.1599999999999, "text": " So we can think of these two functions in assistance." }, { "end": 1886.24, "start": 1880.7199999999998, "text": " They're merged into a single module that does like nice good Bayesian reasoning about all of it." }, { "end": 1891.6799999999998, "start": 1886.8799999999999, "text": " Whereas in the value learning paradigm they're separated and it's this integration that" }, { "end": 1897.68, "start": 1891.68, "text": " provides the benefits you can make plans which is generally the domain of control." 
}, { "end": 1904.88, "start": 1897.68, "text": " But those plans can then depend on the agent believing that in the future it's going to learn" }, { "end": 1909.92, "start": 1904.88, "text": " some more things about the reward function theta which would normally be the domain of value" }, { "end": 1917.1200000000001, "start": 1909.92, "text": " learning. So that's an example where control is using information future information about" }, { "end": 1922.08, "start": 1917.12, "text": " value learning in order to make its plans whereas when those two modules are separated you can't" }, { "end": 1929.04, "start": 1922.08, "text": " do that. And so like one example that we have in the paper is you is like you imagine that" }, { "end": 1934.32, "start": 1930, "text": " you've got a robot who is who asked you cook dinner for Alice." }, { "end": 1940.1599999999999, "start": 1934.32, "text": " Alice is currently well not cook dinner bake a pie for Alice. Alice is currently at the office" }, { "end": 1945.76, "start": 1940.1599999999999, "text": " so the robot can't talk to her and unfortunately the robot doesn't know what kind of pie she wants." }, { "end": 1951.2, "start": 1945.76, "text": " Maybe Apple blueberry or cherry but like the robot could guess but its guess is not that likely to" }, { "end": 1957.68, "start": 1951.2, "text": " be correct. However it turns out that you know the steps to make the piecrest are the same for all" }, { "end": 1968.64, "start": 1957.68, "text": " three pies. So an assistive robot can reason the hey my plan is first make the piecrest then wait" }, { "end": 1974.8799999999999, "start": 1968.64, "text": " for Alice to get home then ask her what filling she wants then put the filling in. And that entire" }, { "end": 1981.2800000000002, "start": 1974.88, "text": " plan consists of both taking actions in the environment like making the crust and putting in the filling" }, { "end": 1987.8400000000001, "start": 1981.2800000000002, "text": " and also includes things like learn more about data by asking Alice a question. And so it's like" }, { "end": 1992.48, "start": 1987.8400000000001, "text": " integrating all of these into a single plan whereas that plan cannot be expressed in the" }, { "end": 2000.72, "start": 1992.48, "text": " value learning paradigm. The query as an action in the action space. So I really like the you laid out" }, { "end": 2005.2, "start": 2000.72, "text": " some levels of task complexity and I'm just going to go through them really briefly. You mentioned" }, { "end": 2011.76, "start": 2005.2, "text": " traditional CS is giving instructions to computer on how to perform a task and then using AI or" }, { "end": 2018.56, "start": 2011.76, "text": " ML for simpler tasks would be specifying what the task is and the machine figures out how to do it." }, { "end": 2025.3600000000001, "start": 2018.56, "text": " I guess that's standard RL formulation. And then I the hard for hard task specifying the task is" }, { "end": 2033.28, "start": 2025.36, "text": " difficult. So the agents can learn may may learn a reward function from human feedback. And then" }, { "end": 2038.56, "start": 2033.28, "text": " and then and then you mentioned assistance paradigm as as the next level where the human is part" }, { "end": 2044.6399999999999, "start": 2038.56, "text": " of the environment has latent goals that the robot does not know. Yep. How do you see this ladder?" 
}, { "end": 2050.16, "start": 2044.6399999999999, "text": " Like does this describe is this a universal classification scheme? Is it or we don't is that the" }, { "end": 2057.8399999999997, "start": 2050.16, "text": " high level? That's a good question. I haven't really thought about it before. You can imagine a" }, { "end": 2065.04, "start": 2057.8399999999997, "text": " different version of the highest level which is like here we've talked about the assistance" }, { "end": 2072.56, "start": 2065.6, "text": " framing where you're like there is some objective but you have to infer it from human feedback." }, { "end": 2076.72, "start": 2072.56, "text": " There's a different version that may is more in line with the way things are going" }, { "end": 2082.16, "start": 2076.72, "text": " with deep learning right now which is more like specifying the task is difficult. So we're only" }, { "end": 2087.7599999999998, "start": 2082.16, "text": " going to like evaluate behaviors that the AI agent shows and maybe like also try to find some" }, { "end": 2094.24, "start": 2087.7599999999998, "text": " hypothetical behaviors and evaluate those as well. So that's a different way that you could" }, { "end": 2100.24, "start": 2094.24, "text": " talk about this highest level where you're like evaluating specific behaviors rather than trying" }, { "end": 2106.16, "start": 2100.24, "text": " to specify the task across all possible behaviors. And then maybe that would have to be the highest" }, { "end": 2112.72, "start": 2106.16, "text": " level. I don't know you could just keep inventing new kinds of human feedback inputs and maybe those" }, { "end": 2119.04, "start": 2112.72, "text": " could be thought of as higher levels beyond that as well. So then one detail I mentioned I saw in" }, { "end": 2125.12, "start": 2119.04, "text": " the paper you mentioned two-phase communicative assistance is equivalent to reward learning." }, { "end": 2129.7599999999998, "start": 2125.12, "text": " And I puzzled over that line and I couldn't really quite understand what you meant. Can you say" }, { "end": 2134.16, "start": 2129.7599999999998, "text": " a little bit more about that? What does that mean? How do you conclude that those two things are" }, { "end": 2140.7999999999997, "start": 2134.16, "text": " equivalent? Yeah. So there are a number of definitions here. Maybe I won't go through all of it but" }, { "end": 2148.3999999999996, "start": 2140.7999999999997, "text": " just so that listener is no. We had definition, we had formal definitions of like what counts as" }, { "end": 2159.04, "start": 2148.3999999999996, "text": " assistance and what counts as a reward learning. In the reward learning case we imagine it as first" }, { "end": 2164.72, "start": 2159.04, "text": " you have a system that asks the human questions or actually doesn't have to ask the human questions." }, { "end": 2170.16, "start": 2164.72, "text": " But first we have a system that interacts with the human somehow and develops a guess of what" }, { "end": 2175.7599999999998, "start": 2170.16, "text": " the reward function is. And then that guess of what the reward function is which could be a" }, { "end": 2182.8, "start": 2175.7599999999998, "text": " distribution over rewards is passed on to a system that then acts to maximize the expected" }, { "end": 2187.12, "start": 2182.8, "text": " value of the sorry the expected reward according to that distribution over rewards." }, { "end": 2193.3599999999997, "start": 2187.12, "text": " Okay. Yeah. 
Okay, yeah. So once it's done its communication, it has learned a reward, and in phase two it doesn't have a query action at that point.

That's right, exactly.

Okay, cool.

And so the "two-phase" and "communicative" parts both have technical definitions, but they roughly mean exactly what you would expect them to mean in order to make this true.

So you mentioned three benefits of using this assistance paradigm. Can you briefly explain what those benefits are?

The first one, which I already talked about, is plans conditional on future feedback. This is the example where the robot can make a plan that says: hey, first I'll make the pie crust, then I'll wait for Alice to get back from the office, then I'll ask her what filling she wants, then I'll put in the appropriate filling. There the plan was conditional on the answer Alice was going to give in the future, an answer the robot predicted she would give but couldn't actually ask for right now. That's one thing that can be done in the assistance paradigm but not in the value learning or reward learning paradigm.

A second one is what we call relevance-aware active learning. Active learning is the idea that instead of the human giving a bunch of information to the robot and the robot passively taking it and using it to update its estimate of theta, the robot actively asks the human the questions that seem most relevant to updating its understanding of the reward theta, and the human answers those questions. That's active learning, and it can be done in both paradigms. The thing that assistance can do is have the robot only ask questions that are actually relevant for the plans it's going to have in the future.
To make this point, you might imagine you get a household robot, and your household robot is booting up. If it's in the reward learning paradigm, it has to figure out theta right now, so it asks: all right, at what time do you tend to prefer dinner, so I can cook that for you? That's a pretty reasonable question, and you're like, yeah, I usually eat around 7 pm. It asks a few more questions like this, and later on it's like, well, if you ever want to repaint your house, what color should we paint it? And you're like, blue, I guess, but why are you asking me this? And then it's like, if aliens come and invade from Mars, where would you prefer to hide? And you're like, why are you asking me this? The thing is, all of these questions are in fact relevant to the reward function theta. The reason that, if this were a human instead of a robot, they wouldn't ask these questions is that the situations to which they're relevant probably won't come up. But in order to make that prediction, you need to be talking to the control module, which is something the reward learning paradigm doesn't do. The control module is the one that says: all right, we're probably going to take these sorts of actions, leading to this kind of future, so aliens from Mars probably aren't ever going to be relevant. If you have one unified system, then it can say: well, okay, I know aliens from Mars are probably not going to show up anytime in the near future, so I don't need to ask about those preferences right now. If I do find out that aliens from Mars are likely to land soon, then I will ask that question, but I can leave it for later and not bother Alice until it actually happens. So that's the second one.
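A hedged sketch of what that relevance-aware filtering might look like in code (the questions, probabilities, and threshold are all invented for illustration): only questions whose answers are likely to matter for the robot's near-term plans clear the bar.

```python
# Hypothetical numbers and questions, purely to illustrate "relevance-aware" question selection.

CANDIDATE_QUESTIONS = {
    "What time do you want dinner?":            {"p_relevant_soon": 0.99, "value_if_relevant": 1.0},
    "What color should we repaint the house?":  {"p_relevant_soon": 0.05, "value_if_relevant": 1.0},
    "Where do we hide if Martians invade?":     {"p_relevant_soon": 1e-6, "value_if_relevant": 10.0},
}

ASK_THRESHOLD = 0.1  # rough cost of bothering Alice with a question

def questions_worth_asking(candidates, threshold=ASK_THRESHOLD):
    """Return the questions whose value of information, weighted by how likely the
    situation is to come up in the robot's plans, beats the cost of asking."""
    worth_it = []
    for question, info in candidates.items():
        expected_value = info["p_relevant_soon"] * info["value_if_relevant"]
        if expected_value > threshold:
            worth_it.append(question)
    return worth_it

print(questions_worth_asking(CANDIDATE_QUESTIONS))
# Only the dinner question clears the bar; the Martian question waits until
# Martians actually seem likely.
```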
And then the final one: so far I've been talking about cases where the robot learns by asking the human questions and the human gives answers that are informative about their reward function theta. The third benefit is that you don't have to ask the human questions; you can also learn from their behavior directly, while they are going about their day and optimizing their environment. A good example: your robot starts helping out around the kitchen. It starts by doing some very obvious things, like, okay, there are some dirty dishes, just put them in the dishwasher. Meanwhile the human is going around starting to collect the ingredients for baking a cake. The robot can see this, notice that that's the case, and then go and get out the mixing bowl and the egg beater and so on in order to help. This sort of seeing what the human is up to and immediately starting to help is all happening within a single episode, rather than across episodes. Value learning or reward learning could do it across episodes: first the robot watches the human act in the environment to make an entire cake from scratch, and then the next time, when the robot is actually in the environment, it goes and helps the human out. But in the assistance paradigm it can do that learning and help out with making the cake within the episode itself, as long as it has enough understanding of how the world works and what theta is likely to be in order to deduce with enough confidence that those actions are good to take.

When you described the robot that would ask all these irrelevant questions, I couldn't help thinking, as a parent, that's the kind of thing a four-year-old would do: ask you every random question, not relevant right then. It seems like you're pointing to a more mature type of intelligence.

Yeah. A lot of this is, the entire paper has an assumption of: we're going to write down math, and then we're going to talk about agents that are optimal for that math. We're not going to think about how we get the optimal thing in practice; we're just asking, is the optimal thing actually the thing that we want? So one would hope that, yes, if we're assuming the actual optimal agent, it should in fact be more mature than a four-year-old. One hopes.
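For readers who want a feel for the kind of math being referred to, here is a rough paraphrase of an assistance-game setup in LaTeX. The notation is simplified and mine, not the paper's exact formalism, so see the paper for the precise definitions.

```latex
% Rough, simplified paraphrase of an assistance game (not the paper's exact formalism).
% World: states S, human actions A^H, robot actions A^R, dynamics T, discount gamma.
% theta is the unknown reward parameter: the human observes it, the robot does not.
\theta \sim P(\theta), \qquad
a^H_t \sim \pi^H(\,\cdot \mid s_t, \theta), \qquad
\pi^{R\ast} \in \arg\max_{\pi^R} \;
\mathbb{E}\!\left[ \sum_{t} \gamma^t \, r_\theta(s_t, a^H_t, a^R_t) \right].
```

Because the robot's objective is an expectation over theta, actions that reveal theta (asking Alice, watching her cook) and actions that directly make progress are evaluated by the same criterion, which is the integration being described above.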
"start": 2590, "text": " paradigm back to standard and inverse RL what is the relationship between these two paradigms yeah" }, { "end": 2605.2, "start": 2596.96, "text": " so inverse RL assumes that it's an example of the reward learning paradigm it assumes that" }, { "end": 2610.48, "start": 2605.2, "text": " you get full demonstrations of the entire task and then you have and then you like" }, { "end": 2617.6, "start": 2611.44, "text": " executed by the human teleoperating the robot there's like versions of it that don't" }, { "end": 2627.2, "start": 2617.6, "text": " assume the teleoperation part but usually that's an assumption and then given the you know teleoperated" }, { "end": 2632.3199999999997, "start": 2627.2, "text": " robot demonstrations of how to do the task the robot is then it's supposed to infer what the task" }, { "end": 2638.3199999999997, "start": 2632.3199999999997, "text": " actually was and then be able to do it itself in the future without any teleoperation so without" }, { "end": 2643.36, "start": 2638.3199999999997, "text": " uncertainty is that true with the inverse RL paradigm assumes that we're not uncertain in the end" }, { "end": 2652, "start": 2643.36, "text": " ah no it doesn't necessarily assume that I think in many deep RL algorithms that does end" }, { "end": 2657.36, "start": 2652, "text": " up being an assumption that they use but it's not a necessary one it can still be uncertain and" }, { "end": 2664.7200000000003, "start": 2657.36, "text": " then I would plan typically with respect to maximizing the expectation over the reward function" }, { "end": 2672.56, "start": 2665.28, "text": " although you could also try to be conservative or risk sensitive and then you would be maximizing" }, { "end": 2677.36, "start": 2672.56, "text": " expected reward maybe you'd be maximizing like worst case reward if you wanted to be" }, { "end": 2682.32, "start": 2677.36, "text": " maximally conservative or something like that or a fifth percentile reward or something like that" }, { "end": 2688.08, "start": 2682.32, "text": " yeah so so there can be uncertainty but like the human isn't in the environment and there's this" }, { "end": 2692.88, "start": 2688.08, "text": " episodic assumption where like the demonstration is one episode and then when the robot is acting" }, { "end": 2697.92, "start": 2692.88, "text": " that's a totally different episode and that also is in true and the assistance case" }, { "end": 2703.2000000000003, "start": 2697.92, "text": " you talk about active reward learning and interactive reward learning can you help us understand" }, { "end": 2707.52, "start": 2703.2000000000003, "text": " those those two phrases and how they differ yeah so active reward learning is just when" }, { "end": 2714, "start": 2708.4, "text": " the robot has the ability like in the reward learning paradigm the robot is given the ability to" }, { "end": 2720, "start": 2714, "text": " ask questions rather than just getting just getting to observe what the human is doing so hopefully" }, { "end": 2727.28, "start": 2720, "text": " that one should be relatively clear the interactive reward learning setting is it's mostly just a" }, { "end": 2732.88, "start": 2727.28, "text": " thing we made up because it was a thing that people often brought up as like maybe this will work" }, { "end": 2738, "start": 2732.88, "text": " so we wanted to talk about it and show why it doesn't doesn't in fact work but the idea there is" }, { "end": 2743.1200000000003, "start": 2738, "text": " 
But the idea there is that you alternate: you still have your two modules, one reward learning module and one control module, and they don't talk to each other, but instead of doing reward learning once and then doing control forever more, you do, say, ten steps of reward learning, then ten steps of control, then ten steps of reward learning, then ten steps of control, and you keep iterating between the two stages.

So why is computational complexity really high for algorithms that try to optimize over assistance? I think you mentioned that in the paper.

Everything I've talked about has assumed that the agents are optimal by default. But if you think about what the optimal agent has to do, it has to maintain a probability distribution over all of the possible reward functions that Alice could have and then update it over time as it sees more and more of Alice's behavior, and as you probably know, full Bayesian updating over a large space of hypotheses is computationally intractable. Another way of seeing it: if you take the assistance paradigm, you can, through a relatively simple reduction, turn it into a partially observable Markov decision process, or POMDP. The basic idea is to treat the reward function theta as some unobserved part of the state, so the reward is whatever that unobserved part of the state says, and Alice's behavior is folded into the transition dynamics, which depend on the unobserved part of the state, that is, on theta. That's a rough sketch of how you phrase assistance as a POMDP. And POMDPs are known to be very computationally intractable to solve, for basically the same reason I was just giving: to actually solve them you need to maintain a probability distribution over all the ways the unobserved parts of the state could be, and that's just computationally intractable.
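To make the "unobserved part of the state" idea concrete, here is a tiny, self-contained example of the belief update the robot has to carry around. The hypotheses and the human model are made up for illustration.

```python
# Toy illustration (not from the paper) of the belief over theta when assistance
# is treated as a POMDP: theta is an unobserved part of the state, and the human's
# observed actions update a posterior over it.

THETAS = ["apple", "blueberry", "cherry"]          # possible hidden preferences
PRIOR = {t: 1.0 / len(THETAS) for t in THETAS}

def human_action_likelihood(action, theta):
    """A made-up model of the human: she mostly grabs ingredients for the pie she wants."""
    return 0.8 if action == f"grab_{theta}s" else 0.1

def update_belief(belief, observed_human_action):
    """One step of Bayesian filtering over theta given an observed human action."""
    unnormalized = {t: p * human_action_likelihood(observed_human_action, t)
                    for t, p in belief.items()}
    z = sum(unnormalized.values())
    return {t: p / z for t, p in unnormalized.items()}

belief = PRIOR
for human_action in ["grab_apples", "grab_apples"]:
    belief = update_belief(belief, human_action)
print(belief)   # the posterior concentrates on "apple"
# With three hypotheses this is trivial; with a realistic hypothesis space, this same
# computation is what makes exact POMDP solving intractable.
```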
So do you plan to work on this particular line of work further?

I don't plan to do further direct research on this myself. I still basically agree with the point of the paper, which is: look, when you're building your AI systems, they should be reasoning in the way the assistance paradigm suggests, with integrated reward learning and control, and they shouldn't be reasoning in the way the value learning paradigm suggests, where you first figure out what human values are and then optimize for them. I think that's a pretty important point, and it will guide how people build AI systems in the future, or what we have our AI systems do, and I will continue to push for that point, including on projects at DeepMind. But I probably won't be doing more technical research on the math in those papers specifically, because I think it said the things I wanted to say. There's still plenty of work one could do, such as trying to come up with algorithms to directly optimize the math we wrote down, but that seems less high-leverage to me.

Okay, moving to the next paper: On the Utility of Learning about Humans for Human-AI Coordination, that was Carroll et al. with yourself as a co-author. Can you tell us the brief general idea?
I think this paper was written in the wake of some pretty big successes of self-play. Self-play, or very similar variants of it, is the algorithm underlying OpenAI Five, which plays Dota; AlphaStar, which plays StarCraft; and AlphaGo and AlphaZero, which play Go, chess, shogi, and so on at a superhuman level. These were some of the biggest results in AI around that time, which suggested that self-play was going to be a really big thing. The point we were making in this paper is that self-play works well when you have a two-player zero-sum game, which is a perfectly competitive game, because it's effectively going to cause you to explore the full space of strategies: if you're playing against yourself in a competitive game and there's any flaw in your strategy, then gradient descent is going to push you in the direction of exploiting that flaw, because you're trying to beat the other copy of yourself, so you're always driven to get better. In contrast, in common-payoff games, which are the most collaborative games, every agent gets the same payoff no matter what happens, though that shared payoff can differ across outcomes, and you don't have a similar incentive. You don't have any incentive to be unexploitable. All you want is to come up with some policy that, if played against yourself, gets you maximum reward, but it doesn't really matter if you would play badly with somebody else, like a human. If that were true, it wouldn't come up in self-play; self-play would say, nah, in every single game you played you got the maximum reward, there's nothing to do here. So there's no force causing you to be robust to all the possible partners you could have, whereas in a competitive game, if you aren't robust to all the players that could possibly arise, you're exploitable in some way, gradient descent is incentivized to find that exploit, and then you have to become robust to it.
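Here is a toy, self-contained illustration of that failure mode (my own construction, much simpler than Overcooked): a common-payoff "convention" game where self-play happily settles on one of two equally good conventions and is never pushed to handle a partner who learned the other one.

```python
import random

ACTIONS = ["left", "right"]   # two equally good conventions; reward 1 iff both players match

def train_self_play(episodes=2000, lr=0.1, seed=0):
    """A deliberately crude 'policy' (just action preferences) trained against a copy of itself."""
    rng = random.Random(seed)
    prefs = {a: 1.0 for a in ACTIONS}
    for _ in range(episodes):
        total = sum(prefs.values())
        weights = [prefs[a] / total for a in ACTIONS]
        a1, a2 = rng.choices(ACTIONS, weights=weights, k=2)   # both players share one policy
        if a1 == a2:                                          # common payoff: both get 1 on a match
            prefs[a1] += lr
            prefs[a2] += lr
    return max(prefs, key=prefs.get)

convention = train_self_play()
print("self-play settled on:", convention)
# With a copy of itself it coordinates every time (maximum reward, so no remaining learning
# pressure), but with a partner that settled on the *other*, equally valid convention -- say,
# a human -- it scores zero. In a zero-sum game this complacency would be punished: any
# exploitable habit is a flaw the opponent (your own copy) is trained to find.
```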
}, { "end": 3115.2799999999997, "start": 3109.12, "text": " You can actually do this and so I know you've had Michael Dennis and I think also Natasha Shaks" }, { "end": 3120.24, "start": 3115.2799999999997, "text": " yeah on this podcast before and both of them are doing work that's kind of like this" }, { "end": 3128, "start": 3121.2799999999997, "text": " uh with paired right that was shakes and uh exactly yeah well the way you do it as you just say" }, { "end": 3134, "start": 3128, "text": " all right we're gonna make the environment a our competitor the environment's going to like try" }, { "end": 3139.84, "start": 3134, "text": " and like make itself super complicated in a way that defeats whatever policy" }, { "end": 3146.88, "start": 3140.64, "text": " we were trying to use to coordinate and so then this makes sure that you have to be robust to" }, { "end": 3153.04, "start": 3147.68, "text": " whichever environment you find yourself in so that's like one way to get robustness to well it's" }, { "end": 3158.16, "start": 3153.04, "text": " getting you robustness to environments it's not necessarily getting robustness to your partners" }, { "end": 3164.56, "start": 3158.16, "text": " um when like if you for example you wanted to cooperate with a human but you could do a similar" }, { "end": 3170.7999999999997, "start": 3164.56, "text": " thing there where you say we're going to also take the partner agent and we're going to make it be" }, { "end": 3177.44, "start": 3170.7999999999997, "text": " uh adversarial now this doesn't work great if you like literally make it adversarial because" }, { "end": 3184.16, "start": 3177.44, "text": " sometimes in many like interesting collaborative games um like like overcooked which is the one" }, { "end": 3189.92, "start": 3184.16, "text": " that we were studying here if your partner is an adversary they can just guarantee that you get" }, { "end": 3196.08, "start": 3189.92, "text": " minimum reward it's not it's often not difficult in this and overcooked you just like stand in front" }, { "end": 3202.24, "start": 3196.08, "text": " of the station where you deliver the dishes that you've cooked and you just stand there and that's" }, { "end": 3207.6, "start": 3202.24, "text": " what the adversary does and then the agent is just like well okay I can make a soup but I can never" }, { "end": 3214.88, "start": 3207.6, "text": " deliver it I guess I never get rewarded uh so so it doesn't quite that like naive simple approach" }, { "end": 3222.56, "start": 3214.88, "text": " doesn't quite work but you can instead you can like try to have a slightly more sophisticated method" }, { "end": 3228, "start": 3222.56, "text": " where you know the instead of being an adversarial partner it's a partner that's like" }, { "end": 3233.2799999999997, "start": 3228.88, "text": " trying to keep you on the edge of your abilities and then you like uh as you" }, { "end": 3238.5600000000004, "start": 3233.28, "text": " uh and then like once your agent learns how to like do well with the one uh with your current" }, { "end": 3243.2000000000003, "start": 3238.5600000000004, "text": " uh partner then like the partner tries to make itself a bit harder to do and so on so there" }, { "end": 3250.8, "start": 3243.2000000000003, "text": " there are a few there's a few papers like this that I am currently failing to remember but but there" }, { "end": 3255.52, "start": 3250.8, "text": " are papers that try to do this sort of thing I think many of them did end up just like following" 
}, { "end": 3262.4, "start": 3256.1600000000003, "text": " uh both the self-play work and this paper of ours so yeah and basically I think you're right" }, { "end": 3268.1600000000003, "start": 3262.4, "text": " you can in fact do some clever tricks to make things uh to make things better and to get around this" }, { "end": 3273.92, "start": 3268.88, "text": " it's not quite as simple and elegant as self-play and I don't think the results are" }, { "end": 3278.48, "start": 3273.92, "text": " quite as good as you get with self-play because it's still not exactly the thing that you want" }, { "end": 3283.2000000000003, "start": 3278.96, "text": " so now we have a contributed question which I'm very excited about from uh" }, { "end": 3288.8, "start": 3283.2000000000003, "text": " doctrine at asha jakes senior research scientist at google ai and postdoc at Berkeley and we were" }, { "end": 3294.4, "start": 3288.8, "text": " lucky to have to tasha as our guest on episode one so ntasha ntasha asked uh the most interesting" }, { "end": 3300.32, "start": 3294.4, "text": " questions are about why interacting with humans is so much harder slash so different and interacting" }, { "end": 3307.36, "start": 3300.32, "text": " with simulated rl agents so rohan what is it about humans that makes them um harder and different" }, { "end": 3314.1600000000003, "start": 3308.1600000000003, "text": " yeah there are a bunch of factors here maybe the most obvious one and probably the biggest one" }, { "end": 3320.8799999999997, "start": 3314.16, "text": " in practice is that you can't just put humans in your environment to do like a million steps" }, { "end": 3327.6, "start": 3320.8799999999997, "text": " of grading to sent on uh which often we do in fact do with our simulated rl agents and so like" }, { "end": 3334.08, "start": 3327.6, "text": " if you could just somehow put a human in the loop uh in a million it for a million episodes" }, { "end": 3339.2, "start": 3334.72, "text": " maybe then the resulting agent would in fact just be really good at coordinating with humans" }, { "end": 3343.68, "start": 3339.2, "text": " in fact i might like take out the maybe there and i will i will actually predict that" }, { "end": 3348.56, "start": 3343.68, "text": " that the resulting agent will be good with humans as long as you had like at like a reasonable" }, { "end": 3356.72, "start": 3348.56, "text": " diversity of humans um in that you had to collaborate with so my first and biggest answer is" }, { "end": 3362.3199999999997, "start": 3357.68, "text": " you can't get a lot of data from humans in the way that you can get a lot of data from simulated" }, { "end": 3368.48, "start": 3362.3199999999997, "text": " rl agents uh or equivalently you can't just put the human uh into the training loop and the way" }, { "end": 3375.12, "start": 3368.48, "text": " you can put a simulated rl agent into the training loop uh so that's answer number one and then" }, { "end": 3381.12, "start": 3375.12, "text": " there is another answer uh which seems significantly less important which is that humans" }, { "end": 3388.2400000000002, "start": 3382.2400000000002, "text": " aren't just not as are sorry are significantly more diverse than simulated rl agents typically" }, { "end": 3393.52, "start": 3388.88, "text": " humans don't all act the same way uh even an individual human will act pretty different" }, { "end": 3400.16, "start": 3393.52, "text": " um from one episode to the next humans will like learn over time uh and so they're" }, 
{ "end": 3407.12, "start": 3400.8, "text": " not only is there a policy like kind of kind of stochastic but their policy isn't even stationary" }, { "end": 3412.56, "start": 3407.12, "text": " their policy changes over time as they learn how to play the game and become better at it um" }, { "end": 3417.36, "start": 3412.56, "text": " that's another thing that rl like usually rl seems that that doesn't" }, { "end": 3423.2, "start": 3418.16, "text": " that is not in fact true that like episodes are drawn iid because of this like" }, { "end": 3430.16, "start": 3423.2, "text": " non-stationarity and stochasticity and diversity you would imagine that it like you have to get a" }, { "end": 3437.4399999999996, "start": 3430.16, "text": " much more robust policy uh in order to work with humans instead of working with simulated rl agents" }, { "end": 3444.3999999999996, "start": 3437.4399999999996, "text": " and so that uh ends up being uh that ends up you know being harder to do sometimes people try to" }, { "end": 3451.52, "start": 3444.3999999999996, "text": " like take their simulated rl agents and like make them more stochastic to be more similar to humans" }, { "end": 3456.48, "start": 3451.52, "text": " um for example by like maybe taking a random action with some small probability" }, { "end": 3465.28, "start": 3457.52, "text": " and i think usually this ends up still looking kind of like artificial and forest when you like" }, { "end": 3470.24, "start": 3465.28, "text": " look at the resulting behavior such that it still doesn't require that robust policy" }, { "end": 3478, "start": 3470.8, "text": " in order to collaborate well with those agents um and humans are just like more challenging them that" }, { "end": 3483.28, "start": 3478, "text": " okay let's really move to the next paper evaluating the robustness of collaborative agents that" }, { "end": 3488.72, "start": 3483.28, "text": " was not at all with yourself as a co-author can you give us the the short version of what this" }, { "end": 3494.72, "start": 3488.72, "text": " paper is about like we just talked about how in order to get your agency work well with humans they" }, { "end": 3500.88, "start": 3494.72, "text": " need to be they need to learn a pretty robust policy and so one way of measuring how good your" }, { "end": 3506.96, "start": 3500.88, "text": " agents are collaborating with humans is while you just like have them play with humans and see how" }, { "end": 3514.88, "start": 3506.96, "text": " well that goes which is a reasonable thing to do um and people should definitely do it but this" }, { "end": 3520.32, "start": 3514.88, "text": " paper proposed a like maybe simpler and more reproducible test that you can run more often" }, { "end": 3528.08, "start": 3521.6, "text": " which is just i mean it's the basic idea from software engineering it's just a unit test uh" }, { "end": 3533.92, "start": 3528.08, "text": " and so it's a very simple idea the idea is just write some unit tests for the robustness of your" }, { "end": 3540.8, "start": 3533.92, "text": " agents write some cases in which you think the like correct action is unambiguous like clear in cases" }, { "end": 3547.28, "start": 3540.8, "text": " that you may be expect not to come up during training during training and then just see whether" }, { "end": 3553.36, "start": 3547.28, "text": " your agent does in fact do the right thing on those inputs and that can give you like if your agent" }, { "end": 3559.6, "start": 3553.36, "text": " passes all of those 
If your agent passes all of those tests, that's not a guarantee that it's robust, but if it fails some of them, then you've definitely found some failures of robustness. I think in practice the agents we tested all failed many tests. I don't remember the exact numbers off the top of my head, but I think some of the better agents were getting scores of maybe 70 percent.

Could we say this is related to the idea of sampling from environments outside of the training distribution, because we think they're related to the distribution the agent would encounter after it's deployed? Would you phrase it that way, or is it going in a different direction?

I think that's pretty close. Basically everything about that seems correct except the part where you say it's probably going to arise in the test distribution. Usually I just wouldn't even try to check whether or not it would appear in the test distribution; that's very hard to do. If you knew what the test distribution was going to look like, and in what way it was going to be different from the training distribution, then you should just change your training distribution to be the test distribution. The fundamental challenge of robustness is usually that you don't know what your test distribution is going to look like. So I would say it's more like: we try to deliberately find situations that are outside the training distribution but where a human would agree there's one unambiguously correct answer, and we test in those cases. Maybe this will lead us to be too conservative, because the test was in a state that will never actually come up in the test distribution, but given that that seems very hard to know, I think it's still a good idea to be writing these tests and to take failures fairly seriously.

And this paper mentions three types of robustness. Can you briefly touch on the three types?

This is basically a categorization that we found helpful in generating the tests, and it's somewhat specific to reinforcement learning agents.
"end": 3701.76, "start": 3694.88, "text": " cases in which the main thing that you've changed is the state in which the agent is operating" }, { "end": 3708.7200000000003, "start": 3702.4, "text": " then there's agent robustness which is uh when one of the other agents in the environment" }, { "end": 3717.6800000000003, "start": 3709.44, "text": " exhibits some behavior that's like uh unusual and not what you expected and then that can further be" }, { "end": 3728.16, "start": 3717.68, "text": " uh decomposed into two types is agent robustness without memory where uh even like where the the test" }, { "end": 3735.52, "start": 3728.16, "text": " doesn't require the AI system to have any memory there's like a correct action that seems" }, { "end": 3742, "start": 3735.52, "text": " determinable even if the AI system doesn't have memory uh so this might be what you want to use if" }, { "end": 3749.68, "start": 3742, "text": " you for some reason are using uh an MLP or a CNN as your architecture and then there's agent robustness" }, { "end": 3758.08, "start": 3749.68, "text": " with memory uh which is where the distribution shift happens from uh an uh partner agent in the" }, { "end": 3763.76, "start": 3758.08, "text": " environment doing something that where you have to actually like look at the behavior over time" }, { "end": 3769.68, "start": 3763.76, "text": " notice that uh something is violating what what you expected during training and then take some" }, { "end": 3776.24, "start": 3769.68, "text": " corrective action as a result uh so there you need memory in order to understand um how the" }, { "end": 3781.04, "start": 3776.24, "text": " partner agent is doing something that wasn't what you expected and then I guess when we're dealing" }, { "end": 3787.3599999999997, "start": 3781.04, "text": " with uh high-dimensional state there's just a ridiculous number of permutations situations and" }, { "end": 3792.96, "start": 3787.3599999999997, "text": " we've seen in the past that uh that deep learning especially can be really sensitive to small seemingly" }, { "end": 3798, "start": 3792.96, "text": " meaningless changes in this high-dimensional state so how do we how how could we possibly think" }, { "end": 3802.88, "start": 3798, "text": " about scaling this up to a point where uh we don't have to test every single thing?" }, { "end": 3810, "start": 3802.88, "text": " I think that basically this particular approach you mostly just shouldn't try to scale up in this way" }, { "end": 3816.4, "start": 3810, "text": " it's more meant to be a like first quick sanity check that is already quite hard to pass uh for current" }, { "end": 3822.48, "start": 3816.4, "text": " systems we're talking scores like 70 percent I think once you get to like scores like 90-99" }, { "end": 3828.32, "start": 3822.48, "text": " percent uh then it's like okay that's the point it like start thinking about scaling up but like" }, { "end": 3835.52, "start": 3828.32, "text": " suppose we got there uh what do we then do? 
I don't think we really want to scale up the specific process of humans thinking of tests, humans writing down tests, and then running those on the AI system. At that point I think we want to migrate to a more alignment-flavored viewpoint, which I think we're going to talk about shortly anyway, but to say a little bit in advance: once we scale up, we want to try to find cases where the AI system does something bad that it knew was bad, that it knew wasn't the thing its designers intended. The reason this allows you to scale up is that now you can go and inspect the AI system, try to find facts that it knows, and leverage those in order to create your test cases. One hopes that the set of things the AI knows is still plausibly a very large space, but hopefully not an exponentially growing space the way the state space is. The intuition for why this is okay is that, yes, the AI system may still end up having accidents that wouldn't be caught if we were only looking for cases where it made a mistake it knew was a mistake, but usually those things aren't that bad. They can be, if your AI system is in a nuclear power plant, for example, or perhaps in a weapons system, but in many cases it's not actually that bad for the AI system to make an accidental error. The really bad errors are the ones where the AI system is intentionally making an error, doing something that is bad from the perspective of the designers. Those are really bad situations that you don't want to get into, and I'm most interested in thinking about how we can avoid them. So then you can try to leverage the agent's knowledge to construct inputs that you can then test the AI system on.
}, { "end": 3971.6, "start": 3965.44, "text": " Maybe I will give you two definitions uh that are like slightly different but mostly the same" }, { "end": 3982, "start": 3971.6, "text": " so one is that an AI system is misaligned so not aligned uh if it takes actions that it knew" }, { "end": 3988.8, "start": 3982, "text": " uh were against the wishes of its designers that's basically the definition that I was just" }, { "end": 3995.92, "start": 3988.8, "text": " giving earlier. A different more positive definition of AI alignment is and is that an AI" }, { "end": 4002.56, "start": 3995.92, "text": " system is aligned if it is trying to do what its uh designers intended for it to do." }, { "end": 4009.76, "start": 4003.28, "text": " And is there some um agreed upon taxonomy of like top level topics in alignment?" }, { "end": 4016.48, "start": 4009.76, "text": " um like how does it relate to concepts like AI safety and human feedback that different things" }, { "end": 4020.8, "start": 4016.48, "text": " that we talked about today? How do we how would we arrange these in a kind of high level?" }, { "end": 4026.88, "start": 4021.36, "text": " There is definitely not a canonical taxonomy of topics there's not even a canonical definition" }, { "end": 4035.76, "start": 4027.6000000000004, "text": " so like the one I gave doesn't include the problem for example of how you resolve disagreements" }, { "end": 4042.48, "start": 4035.76, "text": " between humans on what the AI system should do it just says all right there's some designers" }, { "end": 4048.0800000000004, "start": 4042.48, "text": " they wanted something that's what the AI system is supposed to be doing uh and it doesn't talk about" }, { "end": 4052.8, "start": 4048.0800000000004, "text": " like all right the process by which those designers decide what the AI system intends to do that's" }, { "end": 4057.6000000000004, "start": 4052.8, "text": " like not not a part of the problem as I'm defining it it's obviously still an important problem" }, { "end": 4063.5200000000004, "start": 4057.6000000000004, "text": " just like not part of this definition uh as I gave it but other people would say no that's a bad" }, { "end": 4068.56, "start": 4063.52, "text": " definition you should include that problem so there's not even a canonical definition" }, { "end": 4077.44, "start": 4068.56, "text": " so I think I will just give you maybe my taxonomy of alignment topics so in terms of how alignment" }, { "end": 4084.16, "start": 4077.44, "text": " relates to AI safety uh there's this sort of general big picture question of like how do we make" }, { "end": 4091.92, "start": 4084.72, "text": " or will AI be beneficial for humanity which you might call AI safety or AI beneficialness or" }, { "end": 4099.2, "start": 4091.92, "text": " something and on that you can break down into a few possible uh possible categories I quite like the" }, { "end": 4105.68, "start": 4100, "text": " I'm gonna forget where this where I where this taxonomy comes from but I like the taxonomy" }, { "end": 4113.92, "start": 4105.68, "text": " into accidents misuse and structural risks so accidents are exactly what they sound like accidents" }, { "end": 4119.04, "start": 4113.92, "text": " happen when an AI system does something bad and nobody intended for that the AI system to do that" }, { "end": 4125.76, "start": 4119.04, "text": " thing uh missus also exactly what it sounds like it's when it's when somebody gets an AI system to" }, { "end": 4130.48, "start": 4125.76, 
"text": " do something and that's some the thing that a got the AI system to do was something that we didn't" }, { "end": 4138.72, "start": 4130.48, "text": " actually want so think of like terrorists using AI systems um to like assassinate people uh and" }, { "end": 4145.68, "start": 4138.72, "text": " then structural risks are maybe less obvious than the previous two but structural risks happen when" }, { "end": 4152.56, "start": 4145.68, "text": " you know if as we infuse AI systems into our economy do any new sorts of problems arise do we get" }, { "end": 4159.200000000001, "start": 4152.56, "text": " into like races to the bottom on safety do we get to do we have like a whole bunch of increased" }, { "end": 4165.6, "start": 4159.200000000001, "text": " economic competition that causes to sacrifice many to sacrifice many of our values in the name of" }, { "end": 4172.56, "start": 4165.6, "text": " productivity uh stuff like that so that's like one starting categorization accidents misuse" }, { "end": 4179.360000000001, "start": 4172.56, "text": " structural risk and within accidents you can have uh you can then further separate into" }, { "end": 4186.400000000001, "start": 4180.4800000000005, "text": " accidents where the AI system knew that the thing that that was doing was bad and accidents where" }, { "end": 4193.04, "start": 4186.400000000001, "text": " the AI system didn't know that the thing that it was doing was bad and the first one is AI alignment" }, { "end": 4198.56, "start": 4193.04, "text": " according to my definition which again is not a canonical definition I think it's maybe the" }, { "end": 4204.72, "start": 4198.56, "text": " most common definition but it's like not canonical so that was like how alignment relates to AI safety" }, { "end": 4211.200000000001, "start": 4205.52, "text": " and then like how does the stuff we've been talking about today relate to alignment again people" }, { "end": 4219.6, "start": 4211.200000000001, "text": " will disagree with me on this but according to me the way to build aligned AI systems and the" }, { "end": 4228.400000000001, "start": 4219.6, "text": " sense of AI systems that don't make take bad actions that they knew were bad is that you use a lot of" }, { "end": 4235.04, "start": 4228.4, "text": " human feedback to train your AI system to where it like the human feedback you know it rewards the" }, { "end": 4240.799999999999, "start": 4235.04, "text": " AI system when it does things that the humans want and punishes the AI system when the" }, { "end": 4245.599999999999, "start": 4240.799999999999, "text": " AI system does things that the human doesn't want this doesn't solve the entire problem you you" }, { "end": 4252.879999999999, "start": 4245.599999999999, "text": " basically then just want to like make your human the the people providing your feedback as powerful" }, { "end": 4258.96, "start": 4252.88, "text": " as make them as competent as possible so maybe you could do some interpretability with the model" }, { "end": 4264.72, "start": 4258.96, "text": " that you're training in order to like understand how exactly it's like reasoning how it's making" }, { "end": 4272, "start": 4264.72, "text": " decisions you can then feed that information to the humans who are providing feedback and thus this" }, { "end": 4278.08, "start": 4272, "text": " can then maybe allow them to not just select AI systems that get the right outcomes but now they" }, { "end": 4282.72, "start": 4278.08, "text": " can select AI systems that get the 
The human feedback rewards the AI system when it does things that the humans want and punishes it when it does things that the humans don't want. This doesn't solve the entire problem; you basically then want to make the people providing your feedback as competent as possible. So maybe you do some interpretability on the model you're training, in order to understand how exactly it's reasoning and how it's making decisions. You can then feed that information to the humans who are providing feedback, and this can maybe allow them to select not just AI systems that get the right outcomes, but AI systems that get the right outcomes for the right reasons, and that can help you get more robustness. You could also imagine having some other AI systems that are in charge of finding new hypothetical inputs on which the AI system you're training takes a bad action. That system says: here's a hypothetical input on which your AI system does a bad thing, and the humans go, oh, that's bad, let's put it in the training dataset and give feedback on it, and so on.

So then I think BASALT would be the most obviously connected here, since it was about how you train agents with human feedback at all, which is obviously a core part of this plan. Preferences Implicit in the State of the World, it's less clear how that relates; I think that paper makes more sense in a plan that's more like traditional value alignment, where your AI system has an explicit distribution over theta that it's updating with evidence, so I think that one is less relevant to this description. The Benefits of Assistance paper is primarily a statement about what the AI system should do, and so what we want our human feedback providers to be doing is seeing: hey, is this AI system thinking about what its users will want? If it's uncertain about what the users will want, does it ask for clarification, or does it just guess? We probably want it to ask for clarification rather than guessing if it's a sufficiently important thing, but if it's some probably insignificant thing, then it's fine if it guesses. So through the human feedback you can train a system that's being very assistive. The Overcooked paper, On the Utility of Learning about Humans for Human-AI Coordination, is I think not that relevant to this plan, unless you happen to be building an AI system that is playing a collaborative game.
"text": " on those inputs uh so in that sense it's uh it also fits into this um overall story cool okay can" }, { "end": 4449.68, "start": 4443.52, "text": " you mention a bit about your alignment newsletter um like what what how do you how do you how do you" }, { "end": 4454.160000000001, "start": 4449.68, "text": " define that newsletter and and how did you how did you start that and what's happening with the" }, { "end": 4461.52, "start": 4454.160000000001, "text": " newsletter now the alignment newsletter is supposed to be a weekly newsletter that I write that" }, { "end": 4470.64, "start": 4461.52, "text": " summarizes uh just recent content relevant to AI alignment it has not been uh very weekly" }, { "end": 4476.08, "start": 4470.64, "text": " in the last couple of months because I have been busy but I do intend to go back to making it a" }, { "end": 4483.68, "start": 4476.08, "text": " weekly newsletter it i mean the origin story is kind of funny it was just we this was while I was a" }, { "end": 4490.160000000001, "start": 4483.68, "text": " PhD student at the Center for Human Competible AI at UC Berkeley uh we were like just discussing" }, { "end": 4496.320000000001, "start": 4490.160000000001, "text": " that like there are a lot of papers that were coming out all the time uh as people will probably" }, { "end": 4502.88, "start": 4496.32, "text": " be familiar with and it was hard to keep track of them all um and so someone suggested that hey maybe" }, { "end": 4510.96, "start": 4502.88, "text": " we should have a rotation of people who just uh search for all of the new papers that have arrived" }, { "end": 4515.679999999999, "start": 4510.96, "text": " in the past week and just send an email out to everyone just like list giving links to those" }, { "end": 4521.92, "start": 4515.679999999999, "text": " papers so other people don't have to do the search themselves and I said like look I you know I just" }, { "end": 4527.28, "start": 4521.92, "text": " do this every week anyway I I'm just happy to take on this job sending and sending one email with" }, { "end": 4533.04, "start": 4527.28, "text": " a bunch of links is not a hard uh we don't need to have this rotation of people um so I did that" }, { "end": 4540, "start": 4533.04, "text": " internally to try uh then like you know a couple of weeks later I like added a sentence that was" }, { "end": 4547.12, "start": 4540.64, "text": " telling people hey this is what this is like the topic um here is you know maybe you should read it" }, { "end": 4555.36, "start": 4547.12, "text": " if you are interested in x, y and z uh and so that happened for a while and then I think I started" }, { "end": 4560.96, "start": 4555.36, "text": " writing slightly more extensive summaries so that people didn't have to read the paper uh unless it" }, { "end": 4565.68, "start": 4560.96, "text": " was something they were particularly interested in uh and like they're around that point people were" }, { "end": 4571.599999999999, "start": 4565.68, "text": " like this is actually quite useful you should make it public uh and then I like tested it a bit more" }, { "end": 4579.52, "start": 4571.6, "text": " um maybe for another like three to four weeks internally to try and then I and and after that I" }, { "end": 4586.08, "start": 4579.52, "text": " released it publicly uh it still did go under a fair amount of improvement I think maybe after" }, { "end": 4592.56, "start": 4586.08, "text": " like 10 to 15 newsletters was when it felt more stable yeah and now 
it's like apart from the fact" }, { "end": 4598.400000000001, "start": 4592.56, "text": " that I've been too busy to do it recently it's been pretty stable for the last I don't know two" }, { "end": 4605.12, "start": 4598.4, "text": " years or so cool well uh to the audience I highly recommend the newsletter and uh like I mentioned" }, { "end": 4609.92, "start": 4605.12, "text": " you know when I first met you and and heard about your alignment newsletter early on at that" }, { "end": 4616, "start": 4609.92, "text": " point I really wasn't um I didn't really appreciate the the importance of alignment uh issues and" }, { "end": 4621.2, "start": 4616, "text": " and I got to say that really changed for me when I read the book Human Compatible by Professor" }, { "end": 4626.879999999999, "start": 4621.2, "text": " Stuart Russell who I gather is your one of your PhD advisors yep and so that book really helped" }, { "end": 4631.36, "start": 4626.88, "text": " me appreciate the importance of alignment related stuff and it was part of the reason that I" }, { "end": 4636.56, "start": 4631.36, "text": " that I saw saw you to interview you so I I'm happy to recommend that uh plug that book to to the" }, { "end": 4641.6, "start": 4636.56, "text": " audience uh Professor Russell's awesome and it's a very well written book and uh and feel a great" }, { "end": 4646.4800000000005, "start": 4641.6, "text": " insight yep I also strongly recommend this book and since we're on the topic of the alignment" }, { "end": 4652.96, "start": 4646.4800000000005, "text": " newsletter you can read my summary of uh Stuart Russell's book in order to get a sense of what it" }, { "end": 4659.28, "start": 4652.96, "text": " talks about uh before you actually make the commitment of actually reading the entire book um so you" }, { "end": 4664.08, "start": 4659.28, "text": " can find that on my website under uh alignment newsletter there's a list of past issues" }, { "end": 4670.4, "start": 4664.8, "text": " I think this was newsletter edition 69 not totally sure you can check that and what was your" }, { "end": 4676.64, "start": 4670.4, "text": " website again I it's just my first name and last name rohencha dot com okay cool I highly" }, { "end": 4683.76, "start": 4676.64, "text": " recommend doing that um to the audience and so I wanted to ask you about how you know how" }, { "end": 4689.6, "start": 4683.76, "text": " alignment work is done so a common pattern that you know we might be familiar with that in in" }, { "end": 4696, "start": 4689.6, "text": " many ML papers is to show a new method and show some experiments um but is alignment uh is work" }, { "end": 4701.52, "start": 4696, "text": " in alignment fundamentally different like what does the work uh entail in in alignment is there" }, { "end": 4707.4400000000005, "start": 4701.52, "text": " a lot of thought experiments or or how would you describe that uh there's a big variety of things" }, { "end": 4716.4800000000005, "start": 4707.4400000000005, "text": " so some alignment work um is in fact pretty similar to uh existing uh two two typical ML work" }, { "end": 4722.64, "start": 4717.68, "text": " so for example there's a lot of alignment work that's like can we make human feedback algorithms" }, { "end": 4730.72, "start": 4722.64, "text": " better uh and you know you start with some baseline and some task or environment in which you want" }, { "end": 4736.400000000001, "start": 4730.72, "text": " to get an AI system to do something and then you like try to improve 
upon the baseline using some" }, { "end": 4742.240000000001, "start": 4736.400000000001, "text": " ideas that you thought about uh and like you know maybe it's somewhat different because you're using" }, { "end": 4748.16, "start": 4742.240000000001, "text": " human feedback where is typical ML research doesn't involve human feedback but that's not that big" }, { "end": 4754.8, "start": 4748.16, "text": " a difference it's still like mostly the same skills uh so that's probably the kind that's closest" }, { "end": 4761.360000000001, "start": 4754.8, "text": " to existing ML research there's also like a lot of interpretability work which again is just like" }, { "end": 4766.56, "start": 4761.360000000001, "text": " working with actual machine learning models and trying to figure out what the heck they're doing" }, { "end": 4772.08, "start": 4766.56, "text": " also seems pretty it's like not the same thing as like get a better performance on this task but it's" }, { "end": 4778.24, "start": 4772.08, "text": " still like pretty similar to uh the general field to like some parts of the of machine learning" }, { "end": 4785.2, "start": 4778.24, "text": " so that's like one kind one type of alignment research and then there's you know on the complete" }, { "end": 4791.679999999999, "start": 4785.2, "text": " other side there is a bunch of stuff where you're like where you think very abstractly about" }, { "end": 4796.88, "start": 4791.679999999999, "text": " what future AI systems are going to look like so like maybe you're like all right maybe you think" }, { "end": 4805.04, "start": 4796.88, "text": " about how some story by which you might by which AGI might arise like we run such and such" }, { "end": 4811.5199999999995, "start": 4805.04, "text": " algorithm maybe would set some improvements in the arc in various architectures with like such and" }, { "end": 4819.04, "start": 4811.5199999999995, "text": " such data and you get a and it turns out you can get AGI out of this uh then you maybe like think" }, { "end": 4825.84, "start": 4819.04, "text": " in this hypothetical okay uh does this AGI end up getting misaligned if so how how does it get" }, { "end": 4832.48, "start": 4825.84, "text": " misaligned if yes um when you tell that story and they're like okay now I have a story of like how the" }, { "end": 4838.32, "start": 4832.48, "text": " AGI system was misaligned what would I need to do in order to like prevent this from happening" }, { "end": 4844.879999999999, "start": 4839.36, "text": " so you can do the like pretty elaborate uh conceptual dot experiments I think these are usually" }, { "end": 4851.36, "start": 4844.879999999999, "text": " good as a way of ensuring that the things that you're working on are actually useful I think there are" }, { "end": 4859.2, "start": 4851.36, "text": " a few people who do these sorts of conceptual arguments almost always and uh do them well such" }, { "end": 4865.36, "start": 4859.2, "text": " that I'm like yeah this the stuff they're producing I think is probably going to matter in the future" }, { "end": 4871.28, "start": 4865.36, "text": " but I think it's also very easy to end up not very grounded in what's actually going to happen" }, { "end": 4875.5199999999995, "start": 4871.28, "text": " such that you end up saying things that won't actually be true in the future and could" }, { "end": 4881.04, "start": 4875.5199999999995, "text": " knowably like some somewhat there is some reasonably easy to find argument today that could" }, { "end": 
4885.92, "start": 4881.04, "text": " convince you that the things you're saying are like not going to matter in the future so it's like" }, { "end": 4890.8, "start": 4885.92, "text": " pretty hard to do this research because of the lack of actual empirical feedback loops but I don't" }, { "end": 4896.64, "start": 4890.8, "text": " think it is doomed um I think people do in fact get um some interesting results out of this and often" }, { "end": 4902.64, "start": 4896.64, "text": " the results out of this that the the best results out of this line of work uh usually seen better to" }, { "end": 4907.68, "start": 4902.64, "text": " me than the results that we get out of the empirical line of work so you mentioned your newsletter" }, { "end": 4912.64, "start": 4907.68, "text": " and then there's an alignment forum if I understand that that's what that was spring out of" }, { "end": 4917.200000000001, "start": 4912.64, "text": " less wrong is that is that right I don't know if I would say it's spring out of less wrong it was" }, { "end": 4921.4400000000005, "start": 4917.200000000001, "text": " meant to be at least somewhat separate from it but it's definitely very it's definitely affiliated" }, { "end": 4926.160000000001, "start": 4921.4400000000005, "text": " with less wrong and like everything on it gets cross posted to less wrong and so these are pretty" }, { "end": 4931.200000000001, "start": 4926.160000000001, "text": " advanced resources I mean from my point of view um but to the audience who maybe is just getting" }, { "end": 4936.08, "start": 4931.200000000001, "text": " started with these ideas can you recommend uh you know a couple of resources that might be good for" }, { "end": 4940.96, "start": 4936.08, "text": " them to get like an on ramp for them um I guess including the the human compatible but" }, { "end": 4945.68, "start": 4940.96, "text": " anything else you'd want to mention yeah so human compatible is a pretty good suggestion um" }, { "end": 4952.72, "start": 4945.68, "text": " there are other books as well um so super intelligence is more on the philosophy side uh the alignment" }, { "end": 4960.32, "start": 4952.72, "text": " problem by Brian Christian is less on the like uh has a little bit less on like what what my" }, { "end": 4965.52, "start": 4960.32, "text": " solutions look like that has more to like intellectual history behind how how these concerns started" }, { "end": 4974.88, "start": 4965.52, "text": " arising on life three point oh by max tegmark i don't remember how much it talks about alignment" }, { "end": 4983.120000000001, "start": 4974.88, "text": " I assume it does a decent amount uh but that's that's another option apart from books I think" }, { "end": 4992, "start": 4983.76, "text": " so the alignment for him has um sequences of blood posts that are that that don't require" }, { "end": 4999.92, "start": 4992, "text": " quite as much um technical depth so for example it's got the value learning sequence which I" }, { "end": 5007.2, "start": 4999.92, "text": " well which I half-road half curated other people's posts um so I think that's a good" }, { "end": 5014.08, "start": 5007.84, "text": " introduction to some of the ideas in alignment uh there's the embedded agency sequence also on" }, { "end": 5020.16, "start": 5014.08, "text": " the alignment for him and the iterated amplification sequence of the alignment for him oh there's the" }, { "end": 5028, "start": 5020.16, "text": " there's an agi safety fundamentals course and then you can just google 
it it has a publicly available" }, { "end": 5034.32, "start": 5028, "text": " curriculum I believe I think really ignore all the other suggestions look at that curriculum" }, { "end": 5039.2, "start": 5034.96, "text": " and then read things on there is probably actually my advice" }, { "end": 5046.8, "start": 5039.2, "text": " Tev you've seen any uh depictions of alignment issues in science fiction or um these these ideas come" }, { "end": 5053.4400000000005, "start": 5046.8, "text": " up for you when you when you watch a read read sci-fi they definitely come up to some extent I think" }, { "end": 5059.2, "start": 5053.4400000000005, "text": " there are many ways in which the depictions aren't realistic but like they do come up or I guess even" }, { "end": 5064.4800000000005, "start": 5059.2, "text": " outside or just uh even mythology like the the the whole mightest touch thing seems like a" }, { "end": 5070.64, "start": 5064.4800000000005, "text": " perfect example of a misalignment yeah the king might as example is is a good example I do like" }, { "end": 5079.12, "start": 5070.64, "text": " a lot yeah yeah those are good examples yeah that's true if you if you expand to include mythology in" }, { "end": 5084.8, "start": 5079.12, "text": " general I feel like it's probably everywhere um especially if you include things like you asked for" }, { "end": 5090.08, "start": 5084.8, "text": " something and got what you literally asked for but not what you actually meant that's really common" }, { "end": 5096.88, "start": 5090.08, "text": " isn't it yeah in stories yeah I mean we've got like I could just take any story about jidis and" }, { "end": 5103.4400000000005, "start": 5096.88, "text": " probably this will feature um so they really started the uh alignment literature back then I guess" }, { "end": 5110.32, "start": 5103.4400000000005, "text": " thousands of years old the problem of there are two people one person wants the other person to do" }, { "end": 5116.08, "start": 5110.32, "text": " something that's just like a very important fundamental problem that you need to deal with there's" }, { "end": 5121.36, "start": 5116.08, "text": " like tons of stuff also in economics about those where it's a principal agent problem and like the" }, { "end": 5125.6, "start": 5121.36, "text": " island and problem is not literally the same thing in the principal agent problem he assumes that" }, { "end": 5130.96, "start": 5125.6, "text": " the agent had already has some motivation some utility function and you're like trying to incentivize" }, { "end": 5136, "start": 5130.96, "text": " them to do the things that you want whereas in the ai line we do get to build the agent that you're" }, { "end": 5141.84, "start": 5136, "text": " delegating to and so you have more control over it so there are differences but like fundamentally the" }, { "end": 5150.4800000000005, "start": 5141.84, "text": " like entity a wants entity be to do something for entity a is like just a super common pattern that" }, { "end": 5158.32, "start": 5150.48, "text": " human society has thought about a lot so we have some more contributing questions uh this is one" }, { "end": 5166, "start": 5158.32, "text": " from Nathan Lambert uh PhD student at UC Berkeley doing research on robot learning and Nathan was our" }, { "end": 5172.879999999999, "start": 5166, "text": " guest for episode 19 so Nathan says a lot of AI alignment and agi safety work happens on blog posts" }, { "end": 5178.16, "start": 5172.879999999999, "text": " and 
forums uh what's the right manner to draw more attention from the academic community any comment" }, { "end": 5190.16, "start": 5178.16, "text": " on that i think i think that this is basically a reasonable strategy where like by by doing this work" }, { "end": 5197.5199999999995, "start": 5190.16, "text": " on blog posts and forums people can move a lot faster uh like ml is pretty good in that uh" }, { "end": 5203.28, "start": 5198.16, "text": " like relative to other academic fields you know it doesn't take years to publish your paper it only" }, { "end": 5209.44, "start": 5203.28, "text": " takes months to publish your paper uh but blood-pussant forums it can be days to talk about your ideas" }, { "end": 5216.8, "start": 5210.5599999999995, "text": " so you can move a lot faster if you're trusting in everyone's ability to like understand which work" }, { "end": 5223.04, "start": 5216.8, "text": " is good um and what to build on uh and so that that's like i think the main benefit of blog posts and" }, { "end": 5228.96, "start": 5223.04, "text": " forums but then as a result anyone who isn't an expert correctly doesn't end up reading the" }, { "end": 5234.8, "start": 5228.96, "text": " blog posts and forums because there's not it it's a little hard if you're not an expert to extract" }, { "end": 5242.4800000000005, "start": 5234.8, "text": " the signal and ignore the noise so i think then there's like a separate group of people and not" }, { "end": 5247.84, "start": 5242.4800000000005, "text": " sorry they're not a separate group but there's a group of people who then takes a bunch of these ideas" }, { "end": 5257.6, "start": 5248.4800000000005, "text": " and then tries and then converts them into like more rigorous uh and correct and academically" }, { "end": 5265.4400000000005, "start": 5257.6, "text": " presented um ideas and in papers and that's the thing that you can like uh show to the academic" }, { "end": 5271.6, "start": 5265.4400000000005, "text": " community in order to draw more attention in fact we've just been working on a project along" }, { "end": 5277.200000000001, "start": 5271.6, "text": " these lines at deep mind which hopefully we'll really soon talking about the risks from uh" }, { "end": 5285.52, "start": 5277.200000000001, "text": " inner misalignment so yeah i think roughly my story is you figure out conceptually what you want" }, { "end": 5293.120000000001, "start": 5285.52, "text": " to do via the blog posts and forums and then you like make it rigorous and have experiments and" }, { "end": 5300.320000000001, "start": 5293.120000000001, "text": " like demonstrate things with um actual examples instead of hypothetical ones uh in the format of" }, { "end": 5306.72, "start": 5300.320000000001, "text": " an academic paper and that's how you then like make it um credible enough and convincing enough" }, { "end": 5312.96, "start": 5306.72, "text": " to draw attention from the academic community great and then Taylor Killian asks" }, { "end": 5318.56, "start": 5312.96, "text": " Taylor is a PhD student at U of T and the Fector Institute Taylor was our guest for episode 13" }, { "end": 5323.28, "start": 5319.12, "text": " and Taylor asks how can we approach the alignment problem when faced with" }, { "end": 5327.76, "start": 5323.84, "text": " heterogeneous behavior from possibly many human actors?" 
}, { "end": 5334.4800000000005, "start": 5327.76, "text": " i think under my interpretation of this question is that you know human sometimes disagree" }, { "end": 5340.8, "start": 5334.4800000000005, "text": " on what things to value and similarly disagree on what behaviors they exhibit and want the AI to" }, { "end": 5348.56, "start": 5340.8, "text": " exhibit um so how do you get the AI to decide on one set of values or one set of behaviors" }, { "end": 5357.68, "start": 5349.360000000001, "text": " and as i talked about a little bit before i mostly just take this question and like it is" }, { "end": 5362.88, "start": 5357.68, "text": " outside of the scope of the things that i usually think about i'm usually just i'm usually thinking" }, { "end": 5368.72, "start": 5362.88, "text": " about the designers have something in mind that they want the AI system to do did the AI system" }, { "end": 5373.280000000001, "start": 5368.72, "text": " actually do do that thing or at least did it is it trying to do that thing?" }, { "end": 5378.88, "start": 5373.280000000001, "text": " i do think that this problem is in fact an important problem but i think what you the the way" }, { "end": 5386.400000000001, "start": 5379.84, "text": " what your solution like your solutions are probably going to be more like political um or like" }, { "end": 5395.12, "start": 5386.400000000001, "text": " societal rather than technical where you know you have to negotiate with other people to figure out" }, { "end": 5401.28, "start": 5395.12, "text": " what exactly you want your AI systems to be doing and then you like take that take that like simple" }, { "end": 5405.599999999999, "start": 5401.28, "text": " spec and you hand it off to the AI designers and then the AI designers are all right all right now" }, { "end": 5411.44, "start": 5405.599999999999, "text": " we will make an AI system with the spec yeah so so i would say it's like yeah there's a separate" }, { "end": 5418.16, "start": 5411.44, "text": " problem of like how to go from human society to something that we can put inside of an AI" }, { "end": 5423.36, "start": 5418.16, "text": " this is like the domain of a significant portion of social science uh and it has technical" }, { "end": 5429.759999999999, "start": 5423.36, "text": " aspects too so like social choice theory for example i think has at least some technical people" }, { "end": 5438.32, "start": 5429.759999999999, "text": " trying to do a mechanism design to to solve these problems and that seems great and people should" }, { "end": 5444.799999999999, "start": 5438.32, "text": " do that it's a good problem to solve um it's unfortunately not one i have thought about very much" }, { "end": 5450.719999999999, "start": 5444.799999999999, "text": " but i do feel pretty strongly about the factorization into one part of you know one problem which" }, { "end": 5455.4400000000005, "start": 5450.72, "text": " is like figure out what exactly you want to put into the AI system and then the other part of the" }, { "end": 5459.52, "start": 5455.4400000000005, "text": " problem which i call the alignment problem which is then how do you take that thing that you want to" }, { "end": 5465.4400000000005, "start": 5459.52, "text": " put into the AI system and actually put it into the AI system okay cool and Taylor also asks how do" }, { "end": 5472.72, "start": 5465.4400000000005, "text": " we best handle bias when learning from human expert demonstrations this is a good question and i" }, { "end": 
5480.400000000001, "start": 5472.72, "text": " would say is an open question in in the field so i don't have a great answer to it but some" }, { "end": 5486.799999999999, "start": 5480.4, "text": " approaches that people have taken one simple thing is to get a uh get demonstration from a wide" }, { "end": 5492.24, "start": 5486.799999999999, "text": " variety of humans and hope that to to the extent that they're making mistakes some of those mistakes" }, { "end": 5498.24, "start": 5492.24, "text": " will cancel out you can invest additional effort like you get a bunch of demonstrations and then" }, { "end": 5504.16, "start": 5498.24, "text": " you invest a lot of effort into evaluating the quality of each of those demonstrations and then" }, { "end": 5509.92, "start": 5504.16, "text": " you can like label each demonstration with like how how high quality it is and then you can design" }, { "end": 5515.12, "start": 5509.92, "text": " an algorithm that like takes the quality into account when learning or i mean the most simple" }, { "end": 5520, "start": 5515.12, "text": " thing is you just like discard everything that's too low quality and only keep the high quality ones" }, { "end": 5525.6, "start": 5520, "text": " but there are some algorithms that have been proposed that can make use of the low quality ones" }, { "end": 5530.4800000000005, "start": 5525.6, "text": " while still trying to get to the performance of the high quality ones another approach that people" }, { "end": 5538.56, "start": 5530.4800000000005, "text": " have tried to take is to like trying yes what sorts of biases are present and then try to build" }, { "end": 5545.92, "start": 5538.56, "text": " algorithms that correct for those biases so in fact one of my older papers looks into an approach" }, { "end": 5554.320000000001, "start": 5546.8, "text": " of this forum i think i like we did get results that were better than the baseline but i don't think" }, { "end": 5560.96, "start": 5554.320000000001, "text": " it was all that promising uh so i mostly did not continue working on that approach so it just" }, { "end": 5566.64, "start": 5560.96, "text": " seems kind of hard to like know exactly which biases are going to happen and to then correct for" }, { "end": 5571.52, "start": 5566.64, "text": " all of them all right so those are a few thoughts on how you can try to handle bias i don't think we" }, { "end": 5577.280000000001, "start": 5571.52, "text": " know the best way to do it yet cool thanks so much uh to Taylor and Nathan and Natasha for" }, { "end": 5583.12, "start": 5577.280000000001, "text": " contributed questions um you can also contribute questions to our next uh interviews uh if you show" }, { "end": 5588.56, "start": 5583.12, "text": " up on our twitter at talk or oral podcast so we're just about wrapping up here uh few more questions" }, { "end": 5596.400000000001, "start": 5588.56, "text": " for you today what rohen what would you say is the holy grail for your line of research i think the" }, { "end": 5607.599999999999, "start": 5596.4, "text": " holy grail is to have a procedure for training AI systems at particular tasks um where we can" }, { "end": 5615.759999999999, "start": 5607.599999999999, "text": " them where we can apply arbitrary human understandable constraints to how the AI system achieves those" }, { "end": 5622.32, "start": 5615.759999999999, "text": " tasks so for example we can be like we can build AI system that schedules your meetings but uh" }, { "end": 5628.32, "start": 5622.32, 
"text": " uh and and like but ensure is that it's always very respectful when it's talking to other people" }, { "end": 5633.44, "start": 5628.32, "text": " in order to schedule your emails and is love never like you know discriminating based on sex or" }, { "end": 5639.2, "start": 5633.44, "text": " something like that or you can like build an agent that plays Minecraft and you can just deploy it" }, { "end": 5645.679999999999, "start": 5639.2, "text": " on an entirely new multiplayer server that includes both humans and AI systems yeah and then you can" }, { "end": 5650, "start": 5645.679999999999, "text": " say hey you should just go help such and such player with whatever it is they want to do and the" }, { "end": 5656.4, "start": 5650, "text": " agent just does that and it like abides by the norms on that uh on the multiplayer server that" }, { "end": 5664.32, "start": 5656.4, "text": " adjoined or you can build a recommender system that's just optimizing for what humans think uh is good" }, { "end": 5670.72, "start": 5664.32, "text": " for recommender systems to be doing while uh rather than optimizing for say engagement if we think" }, { "end": 5676.88, "start": 5670.72, "text": " that engagement is a bad thing to be optimizing for so how do you see your uh your research career plan" }, { "end": 5683.12, "start": 5676.88, "text": " do you have a clear roadmap in mind or are you uh doing a lot of exploration as you go i think i" }, { "end": 5690.64, "start": 5683.12, "text": " feel more like there's maybe i wouldn't call it a roadmap exactly but there's a clear plan uh and" }, { "end": 5698.4800000000005, "start": 5690.64, "text": " the plan is we talked a bit a bit about it earlier the plan is roughly train models using human feedback" }, { "end": 5704.4800000000005, "start": 5698.4800000000005, "text": " and then like empower the humans providing the feedback as much as you can um ideally so that they" }, { "end": 5709.04, "start": 5704.48, "text": " can know everything that the model knows and select the models that are getting the right outcomes" }, { "end": 5717.12, "start": 5709.04, "text": " for the right reasons i'd say like that's the plan that's like an ideal to which we aspire uh we" }, { "end": 5722.799999999999, "start": 5717.12, "text": " will probably not actually reach it knowing everything that the model knows is a pretty high bar" }, { "end": 5728.799999999999, "start": 5722.799999999999, "text": " and probably we won't get to it but there are like a bunch of tricks that we can do that get us" }, { "end": 5733.04, "start": 5728.799999999999, "text": " closer and closer to it and the closer we get to it the better the better we're doing" }, { "end": 5739.2, "start": 5733.04, "text": " um and so i'm like let us find more and more of those tricks find which ones are the best see how" }, { "end": 5746.16, "start": 5739.2, "text": " like cost how costly they are and so on um and ideally this just leads to our to a significant" }, { "end": 5751.44, "start": 5746.16, "text": " improvement in our ability to do these things every time um i i will say though that it took me" }, { "end": 5758.88, "start": 5752.4, "text": " several years to get to this point like most of the uh most of the previous years of my career" }, { "end": 5765.84, "start": 5758.88, "text": " have in fact been a significant amount of exploration uh which is part of why like not all of the" }, { "end": 5772.8, "start": 5765.84, "text": " papers uh that we've talked about so far really fit into the story 
is there anything else uh you" }, { "end": 5782.16, "start": 5772.8, "text": " want to mention to our audience today Rohin yeah um so i um probably going to start a hiring round" }, { "end": 5790.48, "start": 5782.16, "text": " at DeepMind for my own team um probably sometime in the next month from the time of recording uh" }, { "end": 5797.599999999999, "start": 5790.48, "text": " today is March 22nd so yeah please do apply if you're interested in working on an alignment" }, { "end": 5803.44, "start": 5797.599999999999, "text": " great doctor Rohin Shah this has been an absolute pleasure and and a total honor by the way" }, { "end": 5809.28, "start": 5803.44, "text": " i want to thank you for on behalf of myself and in our audience yeah thanks for having me on it was" }, { "end": 5815.12, "start": 5809.28, "text": " really fun to actually go through all of these papers uh in a single session i don't think i've" }, { "end": 5839.92, "start": 5815.12, "text": " ever done that before" } ]
Jordan Terry
Jordan Terry on maintaining Gym and PettingZoo, hardware accelerated environments and the future of RL, environment models for multi-agent RL, and more!
https://media.transistor…615.mp3?src=site
TalkRL podcast is all reinforcement learning all the time, featuring brilliant guests, both research and applied. Join the conversation on Twitter at TalkRL podcast. I'm your host, Robin Chohan. Jordan Terry is a PhD candidate at the University of Maryland, the maintainer of Gym, the maintainer and creator of PettingZoo, and the founder of Swarm Labs. Thanks so much for joining us, Jordan. Hey, good to be here. So how do you like to describe your focus area? I have been working in deep reinforcement learning, mostly multi-agent reinforcement learning, but I have some single-agent reinforcement learning work that I've done. I've essentially been pursuing a handful of hopefully high-impact, very long-term projects that are hopefully going to become public in the next three to six months, most of them anyway. Okay, that sounds exciting. So let's start with Gym. I don't think anyone could ever overstate the importance of Gym in the RL ecosystem, and I understand that you are now maintaining that project, which has got to be a huge job. Yeah, it is a memorable experience, I'll say. Awesome. So can you help us with some history of the Gym project? Yeah, so what Gym was intended to be and what it is are kind of different things. What's happened is that Gym is now basically HTTP for RL. It is the standard interface between pretty much all environment and learning code. It has been installed 35 million times. It's the most installed RL library in the world by a very large amount. So Gym essentially ended up becoming HTTP for RL, and that was a surprise to everyone, from what I've been told. There are between a hundred and a few thousand different third-party environments. I tried to estimate it and create a list of all of them at this point, but because of forks I can't even figure out a good way to put together such a list. If you listen to this and think of a way, please email me. And yeah, so it's been installed 35 million times. It's used so much that I can't quantify how much it's used anymore, and essentially all code is fundamentally built around this. For a variety of reasons, Gym ended up being progressively less maintained over time. I ended up essentially taking over the maintenance from OpenAI about five months ago now. And so now I'm in the unique position of, okay, cool, you are now in charge of the most used and consequential piece of reinforcement learning software in the world, and it hasn't been extensively maintained, ever. And one interesting consequence is that because Gym wasn't expected to be HTTP for RL, some of the design choices aren't as deliberate as you might like. Just to be clear, for what Gym was intended to be, there's nothing wrong with this approach. Gym was this little side project, almost: hey, we did a cool thing so everyone can have a shared set of benchmarks, see if we can get the community to play with it. And for this, Gym was perfectly fine. But when you end up creating HTTP for RL by accident, well, this wasn't designed for that, literally wasn't designed for that. And so now you have to figure out, what are you going to do next? And this has essentially been a large part of my life ever since. You mentioned some specific changes that you're planning and that are documented in the repo. So maybe let's start with those.
I pulled up the Gym repo and I can see here, on April 27th, 2016, Greg Brockman's first commit: hello world. I love that. Yeah, RL has changed a lot since then. So yeah, it makes total sense that we need to bring Gym up to date. So I think the community is going to owe you a debt of gratitude in advance for this. I can't imagine how much work it is. So let's talk about some of the things that you have planned. Now, seeding the random number generator is a pretty technical change that may not have too much direct impact on us, but what's the benefit there? The first interesting change to the API is changing how seeding works. So originally Gym didn't really have a standardized seed method, and then they added the .seed method. And the way that this method works, for those of you who don't know, is that you pass a number to the environment via the seed method and it just kind of goes and does its thing, and then whenever you reset, given that seed, it will reset according to that seed. And so you can use this for determinism and reproducibility of results between papers, if everyone does things correctly, which happens less often than you would hope. So the problem with the seed method, well, there's a couple of things. Number one, the internal stuff with the way that Gym did seeding was using old NumPy features that aren't recommended anymore, and so there just had to be a list of internal upgrades there. But then there's this issue of, okay, what should the API be? So there are three options for how to specify seeding for an environment. One option is to do what's done now: just have a separate method that handles this. One option is to make it an argument to environment initialization. And the other option is to make it an argument to reset. The problem with the way that it's currently done is, one, the contract isn't clear. For instance, a lot of the time this will be called as part of reset, or will call reset, depending on the third-party environment. And just from the method name, do you know if this is supposed to seed one episode or many episodes, and so on? You generally want a lot of standardization and clarity around your basic tool for reproducibility. So that's one part of the problem. Then the other part of the problem is that there's nothing in the API that prevents it from being called at any time. And so at a certain point, it ends up becoming more clear to make it a part of reset or a part of init. The reason that we've made it a part of reset, not init, is that initialization of some environments is really, really, really expensive. A lot of simulators take minutes to start up in a lot of cases, even on workstation hardware. And okay, well, that means that putting this as an argument to init is probably a bad idea. So where all of this ends up is: the current thing has an unclear contract, it's not used consistently, and it's arguably an extra method. You can't really put it in init.
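To make the three options concrete, here is a rough sketch of what each looks like from the user's side. Treat the exact signatures as illustrative of the direction being described rather than as a spec for any particular Gym release; depending on which version you have installed, one or the other call may warn or be unavailable.

```python
import gym

# Old pattern: a separate seed() method with an unclear contract
# (does it seed the next episode, or all future episodes?).
env = gym.make("CartPole-v1")
env.seed(0)
obs = env.reset()

# Direction described here: seed as an argument to reset(), so seeding is
# tied to the episode it actually affects and has a clear contract.
env = gym.make("CartPole-v1")
obs = env.reset(seed=0)

# Rejected option: seed as a constructor argument.  Re-seeding would then
# mean rebuilding the environment, which can take minutes for heavyweight
# simulators.
# env = SomeHeavySimulatorEnv(seed=0)   # hypothetical constructor
```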
This is why I think that putting it in reset makes more sense ideologically. Some people disagree with this, and that's okay, but I think that having a clear contract for what the method does for reproducibility is very important. A lot of people think, okay, it's like this and I guess it kind of works, we shouldn't break it. And I guess my driving philosophy with the API in Gym and so on is that Gym is going to be used a lot more in the next five years than it has been in the past five, right? Just having everything be frozen, having nothing be changed and not bringing it up to the standard you would hope for from what has become HTTP for RL, that isn't really an outcome that you want to see happen. And, you know, making breaking changes sucks, but we can keep them minimal. So if you have seeding logic in an environment or in learning code, moving where you pass the seed, to reset instead of to the seed method, is a trivial change, right? It's a small change, and until the 1.0 release, which is probably six months away or something like that, we're going to keep the old seed method still supported. So if you're using the seed method in your code, it will still work. So this isn't a big breaking change; it's essentially less confusing, and we need to get it done before a 1.0 release. The big breaking change, to go down the list in order, is with truncation versus termination. This is a really important and really weird issue. Yeah. Can you help the audience understand the importance of truncation versus termination? I think there was a paper on this topic by Pardo, Time Limits in Reinforcement Learning, which is where I first learned about the subtleties of the difference between those two. If you could maybe walk us through that, that'd be awesome. Yeah. So I'm not going to go into math in an audio format, but essentially what it comes down to is this. If you have a list of state-action pairs from an environment, you will have a final state-action pair. If you then want to compute things like value from it, say a state-action value or a value function or stuff like this, then computing this the correct way depends on whether you reached a state of the environment that is truly terminal, or whether you were just cut off from stepping the environment further due to a time limit, i.e. you were truncated. So if you're trying to do DQN and reproduce the paper and things like that, then this differentiation matters for correctly reproducing the paper. Additionally, this matters beyond just DQN: in policy gradient methods, when you're using GAE, this ends up coming up in the GAE logic as well. And okay, well, if this is a subtle but integral thing in how this stuff is computed, then clearly specifying it is probably important. The problem is that Gym only has a single done boolean, and this doesn't differentiate which of the two happened, and that is not what you would want. Now, what Gym does have is that for some environments, the truncation is imposed via a TimeLimit wrapper, and in those environments it adds an infos entry that indicates whether the environment was truncated.
However, this is not done for all environments where it should be applied. It's also not a widely known feature: it's not widely used in learning code and not done in third-party environments, to my knowledge. So Gym essentially just merges these two things that, from a correctness perspective, are completely different. You talk to lots of people who are somewhat famous in RL about Gym, and they're like, oh yeah, man, when I was first learning Gym I was so confused about why these were treated the same, I thought I was going crazy, and then I kind of figured it out. And this isn't good. So this is definitely the clearest case where breaking API changes are required. So there's been this issue of what you want to do, right? Yeah, just so for the audience, I think one thing that happens if you don't handle this correctly is what some people refer to as the photo-finish ending: say you train an agent to do a sprint and you stop the simulation at some point, and the agent actually thinks that's the end of the episode. Then it may just fall on its face at the last moment, because falling doesn't affect its speed at the last moment. But what you really want is to train a policy that can keep going. And the difference between truncation and really ending the episode is what happens after that point. Is that right? Yeah, that's correct. But just to go into the options that you kind of have to choose from, if you want to make this explicit and not an infos entry the way it is now, which I think we kind of have to: one option is to make done a three-state variable, which you could do with a custom class in Python, but that would be unpythonic and kind of messy. Another thing that you could do is have step return a fifth value in addition to observation, reward, done, info, so you also return truncated. And for backwards compatibility in learning code, you simply add an extra comma and underscore like you normally do for infos, and that's a trivial change for learning code to handle. Although you probably should fix your logic too, of course, but it'll still run. And then the other option is, instead of returning a boolean, to do what DM Env does and return a discount factor. Which is a lot more expressive, right, the discount factor? Or are we missing something by not having a discount factor in Gym? No, nothing to my knowledge. I'm not going to say it's impossible that there is some aspect handled by a discount factor that you cannot describe otherwise, and if there is, please email me. But to the knowledge of myself and everyone who I've talked to, there is no need for it. Essentially, where this comes down to is: should the environment handle the discount factor, or should learning code handle it? In the show notes, one question was, what is the difference between Gym and DM Env? And at least from my perspective, Gym is this minimal, simple, very pythonic, super friendly to get into interface. If you know anything about RL, you just look at it and go, oh, that's how it works, awesome, I understand now. And that is incredible.
DM Env, while it has more features and a style that's a bit more reminiscent of a formal MDP model, doesn't have that infinitely easy, pythonic, accessible quality. So if you're trying to use this for education, or if you're just trying to use the easiest thing, I think that keeping things as simple as possible is really valuable. I think that sort of ethos is what I really liked about Gym, and I think it's why it became HTTP for RL: for all of its limitations, the API design is very understandable and simple. And so this is kind of why my preference is to just have a boolean variable instead of having people think through discounting and so on. So just for the audience, I think the most common way to talk about discounting is this gamma that the agent designer chooses, and it sets the horizon as to how far in the future the agent cares about, roughly speaking. Is that how you would describe it? Yeah, essentially. I really like David Silver's explanation of what a discount factor is. It's that you get a reward when you take an action, and you're trying to figure out how useful it is to take this action now or later for your cumulative sum of rewards through an environment, right? Ideally you'd want to take a reward sooner rather than later, because you don't have a perfect model of the environment and weird things can happen, and so on. So it's essentially there as a way of incentivizing taking actions earlier and getting rewards earlier, which is much more in line with what you'd hope to have. It's like an interest charge on rewards. Yeah. But typically the simplest way we talk about it is as a global number across the entire episode or environment, whereas in DM Env we actually have a discount at every single step, which I guess is what we're discussing here, the value of having the ability to change the discount at every single step. Is that right? But I mean, you can change it on your own in learning code, right? It essentially comes down to: is discounting an environment property, or is it a learning code property? And there's a philosophical answer to this and a technical, non-breaking-change one. I guess my most pragmatic answer is that, looking at third-party learning code and looking at third-party environments, I think it would be a better outcome for the community for this to be maintained in learning code, because of how it's structured, and because with environments, a lot of the time people just take some random environment and don't really go into the details. Like, I've been working on a paper that will hopefully be submitted to a Nature sub-journal at the end of this month or early next month, and I've had to go, with the team, and make a very, very large number of fixes to environments. People just don't really do these things for environments, whereas in learning code, people do do this stuff because they kind of have to. Okay. Do you want to move on to the simulators?
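Before the simulator discussion, here is a minimal sketch of how the terminated/truncated split described above typically gets consumed in learning code, assuming the fifth-return-value style of step API. The environment, policy and value function are stand-ins, and the signatures are illustrative rather than a spec of any particular Gym release.

```python
def td_target(reward, next_value, terminated, gamma=0.99):
    """One-step bootstrapped target.

    terminated=True  -> the MDP truly ended: there is no future return,
                        so do not bootstrap from next_value.
    terminated=False -> the episode continues OR was merely truncated by a
                        time limit: the state still has a future, so bootstrap.
    """
    return reward + gamma * (1.0 - float(terminated)) * next_value


def collect_targets(env, policy, value_fn, gamma=0.99):
    """Roll out one episode against a step() that returns five values."""
    obs = env.reset(seed=0)
    targets = []
    while True:
        action = policy(obs)
        next_obs, reward, terminated, truncated, info = env.step(action)
        targets.append(td_target(reward, value_fn(next_obs), terminated, gamma))
        obs = next_obs
        if terminated or truncated:   # stop stepping either way
            return targets
```

With a single done flag that conflates the two cases, the photo-finish failure mode appears: a time-limit cutoff is treated as a true terminal state and the bootstrap term is dropped when it should have been kept.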
So one thing that's going on in Gym that I've gotten questions about: one of the biggest things when I first took over Gym was that one of the top requests was to get rid of MuJoCo. And this has turned into a very, very deep story that's probably worth telling briefly. So the problem with the MuJoCo environments, there are three problems. Number one, the MuJoCo environments depend on the MuJoCo simulator. You have to either lie to get a student license and stuff like that, or you have to pay a large amount of money, like tens of thousands of dollars, to get a professional license. This makes reproducibility hard because it's closed-source software, you can't get decent CI, and it just becomes an albatross around the field's neck. Okay, we should probably do something about that. But then things get worse, because the Python bindings for MuJoCo were mujoco-py by OpenAI (DeepMind also originally created their own Python bindings separately, before acquiring MuJoCo), and the Python bindings for MuJoCo by OpenAI weren't maintained. And then the environments themselves weren't maintained or very well documented, so there are lots of things in these environments where no one had any idea what they were doing in the action space. The guy who created the PyBullet replacements, for instance, just couldn't figure out what a lot of the stuff was doing, right? And if you look at the list of issues on Gym, most of them are tagged MuJoCo, like more than half. The reason for this is that you get bug reports on the MuJoCo environments, many of which are very, very serious, mind you, things like rendering the environment changes the reproducibility of the outcomes. That sounds bad. But we can't fix that, because no one involved in the project understands how the stuff works, and no one who I've reached out to, after reaching out to a lot of people, has understood it well enough to really contribute. So for all these reasons, something kind of had to be done. And is this MuJoCo-specific? Like, I've actually never used MuJoCo because I don't have a license, so I use PyBullet. Are these things specific to the two? The problem with the PyBullet replacements is that they didn't replace all the environments, and the environments omitted certain features because the PyBullet guy couldn't figure out what they were. I mean, they were functional, of course, but they weren't profoundly well maintained to the level you would hope for here. And I mean, you use them, you see that. So this is the problem with the MuJoCo environments, and okay, well, that's a lot of problems. So then the question comes: what are you going to do about these problems? Well, the obvious answer is to reach out to the guy who made the PyBullet drop-in environments and the PyBullet author and talk to them, and okay, we could go and recreate the environments in PyBullet, or more accurately, fix the PyBullet environments and include them in Gym. This is a potential outcome that could have been taken. However, it was pointed out that there's a better option, and that was to look at Brax. This is something that I hadn't fully appreciated until I had gotten into Gym.
So have you seen all the stuff with the hardware accelerated environments? Because this is the coolest thing ever. This is going to completely change the field. Yeah, I've got to admit I've learned a lot about it since talking to you. So just for the audience, the issue here is that sims are slow, and how can we make them fast? Is that what we're getting at here? Not quite. So, I am not a hardware person, so I'm not going to try to give the super in-depth explanation, but the abbreviated explanation, to my knowledge, is this. When you are training on, let's say, an Atari environment written in C, with the environment running on CPU and the neural network running on GPU, you end up having these very, very large slowdowns because you have to send stuff from the GPU to the CPU and back through the PCI bus. If you have an environment that, even if it's less efficient to run on a GPU, does run on the GPU and connects to the neural network directly, then you can usually get more than an order of magnitude of performance improvement. So let me get this straight. If your sim is entirely running on GPU and your RL code is entirely running on GPU, then the whole cycle between them, which is the main clock cycle of RL, is all running on GPU, you're just not in Python land at all, and you no longer have to worry about the bad performance of Python. Well, the issue isn't Python, the issue is the PCI bus. Right, so it's moving bits around. Yeah. And when you remove this back-and-forth time, even with code that isn't profoundly well done, the code ends up running tens to hundreds of times faster. And so Brax is a project out of Google Brain that provides a very well done physics simulation that can run on accelerators. One thing to mention: this means hardware accelerators generally, not just GPUs. GPUs are what most people have, but this can run on GPUs or on proprietary ML ASICs as well; it isn't a GPU-only thing. And so Brax runs more than a hundred times faster, basically, and so you can train in minutes. And I mean, you understand the implications: instead of something taking hours to train, it now trains in minutes. This changes a lot. We're in a different regime then, right? Yeah. Because then, okay, cool, a hyperparameter sweep becomes cheap. Or, hey, I did something stupid, okay, well, it isn't super expensive to go and re-run it. Or, I want to train this 10 times or 100 times or a thousand times and create actually academically legitimate research; this makes that possible. And so one thing that I've been pushing towards as much as I can is having hardware accelerated environments be the default for all Gym environments. Because, like you said, this changes everything. And here's another example of what it changes: if you're a student and you have an old, cheap GPU in your laptop, it's pretty much useless right now. Okay, well, this makes it so you're actually able to do things.
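To give a feel for what keeping the whole loop on the accelerator looks like, here is a toy JAX sketch; it is not Brax itself, and every name in it is made up for illustration. A trivial environment step is written as a pure function, then jit-compiled and vmapped so thousands of copies run in parallel on whatever device JAX is using, with no per-step host/device copies.

```python
import jax
import jax.numpy as jnp

def env_step(state, action):
    """Toy 1-D point-mass 'environment': a pure function of (state, action)."""
    position, velocity = state
    velocity = velocity + 0.1 * action
    position = position + velocity
    reward = -jnp.abs(position)          # reward for staying near the origin
    return jnp.stack([position, velocity]), reward

# Compile once, then step a whole batch of environments on the accelerator.
batched_step = jax.jit(jax.vmap(env_step))

num_envs = 4096
states = jnp.zeros((num_envs, 2))
key = jax.random.PRNGKey(0)

for _ in range(100):                      # rollout loop
    key, subkey = jax.random.split(key)
    actions = jax.random.uniform(subkey, (num_envs,), minval=-1.0, maxval=1.0)
    states, rewards = batched_step(states, actions)
# states, actions and rewards all stay on the GPU/TPU between steps, which is
# where the order-of-magnitude speedups over a CPU-simulator loop come from.
```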
Now, one argument you'll see against GPU-accelerated environments is that, of course, developing for them right now, with the tools available, takes a lot more effort. That's one. The other argument is that people are just going to use very, very large neural networks, so it won't matter anymore. And that's an argument I've heard from people I respect a lot — well, one person, I should clarify. But regardless, for most things people are doing in RL research, there's still lots of blue-sky research to be done on simple, small environments — you know, we can't solve NetHack — and you can do that work and have the environments be hardware accelerated. If I have anything to do with it, this is going to be the next big push in RL experimentally. And for a lot of environments, porting them to be hardware accelerated and run massively faster is a shockingly doable process. Yeah, I want to understand the limits of that. Because when I look at the different pieces running in the RL loop: say the entire simulator is hardware accelerated — okay, that's great, then the step function can be really fast. And on the agent side, your neural network can be on the GPU, so your agent comes out with its raw predictions very quickly. But then all the intermediate logic — of which custom agents can have tons; there's typically a lot of experimentation at that layer, different types of logic, lookups in different buffers, whatever we do to make an agent a special snowflake — that stuff seems like it would still be hard to accelerate, to get the entire loop in there. Is that right, or is it not that hard? It's not that hard. You have to factor your code in a certain way, and there will be some sort of grand-master tutorial guide to this once it starts to move into production more. But all it requires is that you factor your code in a certain way; it doesn't require any profoundly special logic. The one gotcha, though, is that hardware-accelerated environments will no longer be returning NumPy arrays. So you end up needing wrappers — and these wrappers will be added to Gym, people are working on this — that wrap whatever GPU tensor the environment outputs into a torch tensor, or TensorFlow, or whatever. That's a thing people will have to handle. The only reason we didn't have to deal with it before is that all the libraries just implicitly supported NumPy, whereas you can't just hand a JAX array to any learning library. And the other thing, just to briefly summarize: okay, I have an environment and I want to make it hardware accelerated — how does this work? There are roughly four ways of doing it. One is to use things like PyCUDA and Python bindings for CUDA; that ends up adding a lot of overhead and it's hard to do. And the easiest one: if you have a NumPy-based environment, you can often just rewrite it in JAX.
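As a rough illustration of that last option — a sketch, not taken from any real Gym environment — here is the same toy step function written with NumPy and with jax.numpy. The JAX version can then be jit-compiled and vmapped to step thousands of environment copies in one fused call on the accelerator.

```python
# "Just rewrite it in JAX": jnp mirrors most of the np API, and the JAX
# version can be jitted and vmapped across a batch of environment copies.
import numpy as np
import jax
import jax.numpy as jnp


def step_numpy(state, action):
    next_state = np.clip(state + action, -1.0, 1.0)
    reward = -np.abs(next_state).sum()
    return next_state, reward


@jax.jit
def step_jax(state, action):
    next_state = jnp.clip(state + action, -1.0, 1.0)
    reward = -jnp.abs(next_state).sum()
    return next_state, reward


# Step 4096 independent copies of the toy environment in one call.
batched_step = jax.jit(jax.vmap(step_jax))
states = jnp.zeros((4096, 3))
actions = jnp.ones((4096, 3)) * 0.1
next_states, rewards = batched_step(states, actions)
print(next_states.shape, rewards.shape)  # (4096, 3) (4096,)
```

Real environments with branching or episode termination need a bit more care (jax.lax.cond, masking done flags, and so on), but the migration is often mostly mechanical, which is the "shockingly doable" part.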
And while this is difficult in some cases, for a lot of environments it's just a doable thing. Then, if you have C-based environments, you can either modify the C environment to essentially compile to CUDA, or — I'm told, I've never done this — compile it to XLA, and then it'll run on any hardware accelerator that TensorFlow supports. If the NetHack people wanted to make NetHack hardware accelerated — and I'm just using this as an example of a very, very old C environment that's important — then, unless there are properties of that compilation I'm unaware of, this is a thing they could just do if they wanted to. I'm not calling them out specifically; again, I'm just using them as an example of an important environment built on old code. And Brax and JAX are Google products, right? Is there a dependency on TensorFlow there, or could we just as easily use PyTorch? So, TensorFlow's lowest level of execution relies on what's called XLA, which will take arbitrary tensor code and run it on a TPU or GPU — or, I believe, on applicable AMD GPUs as well, and probably other hardware in the future. JAX is essentially Python bindings for XLA that nearly replicate NumPy — it's missing a couple of features — plus things like autodiff support. That's what JAX does. And then Brax is essentially a physics library written using this accelerator-friendly alternative to NumPy. And so, just to close this off: the goal is to create Brax-based 3D physics environments as an alternative to the MuJoCo environments that are in there right now, because these new ones are maintained, they can be fixed, and so on. There will probably be a period where they ship concurrently with the current ones, so people can go play with them in Gym, find any issues, and so on — and eventually the MuJoCo environments will be pulled out. Additionally, the Box2D environments are going to be redone in Brax. That's because the Box2D environments depend on a fork of a fork of unmaintained bindings for the physics engine they use — I couldn't get them building on my machine even after trying pretty hard — and also because this runs much faster in Brax. So if you want to do massive hyperparameter sweeps on those, once they're hardware accelerated you can do the sweeps in minutes, which is awesome. We'd be in a different regime completely, which is awesome. But I do want to understand — you're talking about replacing the MuJoCo environments with Brax. Do we expect that agents trained on the MuJoCo environments are going to perform the same in Brax, or do we expect some differences? There will be some differences, inherently. These are by far the most accurate replacements ever made for MuJoCo, but they are inherently different. Now that MuJoCo has been acquired, the process has become dramatically easier, because the source is available for direct comparison. But yes, there are inherent differences. These will, at minimum, be dramatically closer to the original MuJoCo environments than the PyBullet ones that people like you use — and people have gotten by with those just fine. Okay.
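Since accelerator-resident environments hand back device arrays rather than NumPy arrays — the wrapper issue touched on above and revisited below with Jumpy — a bridge from a JAX-based environment to PyTorch learning code might look roughly like the sketch below. The wrapped environment interface here is assumed rather than taken from any particular library; the DLPack conversion calls are real JAX/PyTorch functions, though exact module paths can vary across versions.

```python
# Rough sketch: hand device arrays from a JAX-based env to PyTorch learning
# code via DLPack, without copying through host memory. The env interface
# (reset/step returning JAX arrays) is hypothetical.
import jax
import jax.dlpack
import jax.numpy as jnp
import torch
import torch.utils.dlpack


def jax_to_torch(x: jnp.ndarray) -> torch.Tensor:
    # Zero-copy handoff of a device array from JAX to PyTorch.
    return torch.utils.dlpack.from_dlpack(jax.dlpack.to_dlpack(x))


def torch_to_jax(x: torch.Tensor) -> jnp.ndarray:
    return jax.dlpack.from_dlpack(torch.utils.dlpack.to_dlpack(x))


class TorchWrapper:
    """Wraps a JAX-based env so observations come out as torch tensors."""

    def __init__(self, env):
        self.env = env

    def reset(self):
        obs = self.env.reset()
        return jax_to_torch(obs)

    def step(self, action: torch.Tensor):
        obs, reward, done, info = self.env.step(torch_to_jax(action))
        return jax_to_torch(obs), float(reward), bool(done), info
```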
And then I know there are other simulators too. I spoke to a researcher at a UBC robotics lab, and they use PyBullet and also RaiSim and Isaac Gym. So do you imagine, when this change is made to move to Brax, that all the other simulators are going to be left behind in terms of performance, or can other sims be moved over too? I don't think we're likely to see other environments or other simulators rewritten in JAX. Changing other simulators to run on hardware accelerators using whatever C they're written in is certainly something we might see. One really desirable thing about Brax, from talking to the team, is that it has — if nothing else — tied for the longest likely maintained life from where we are now, which is really good; that's another advantage out of all the hardware-accelerated options. The other aspect is that the Brax team is willing to work with us a tremendous amount on adding these environments. The problem with going down a path where the maintainers of the library aren't super supportive is that we have fairly small resources, and creating these environments from scratch is incredibly difficult. The PyBullet guy — and when I say the PyBullet guy, I mean the guy who created the PyBullet replacements for MuJoCo — really, really knows what he's doing, and he had a very hard time. So having people willing to donate stupid amounts of time, and really lend money to this, is very helpful. And you mentioned the Jumpy wrappers. Can you fill us in on what that's about? Yeah. Environments becoming hardware accelerated has this really weird property. Right now, environments return NumPy arrays, and every deep learning library just natively interoperates with those. But accelerated environments are going to return data structures other than NumPy, because NumPy doesn't natively run on the GPU. So the wrappers have to be written in a way that can handle, for example, both JAX and NumPy environments, and Jumpy is a way of doing this — and we needed to rewrite the wrappers anyway. So, okay, we're rewriting the wrappers and writing them with this new library. That's the story of the Jumpy wrappers. These wrappers will also have to include things like a JAX-environment-to-PyTorch-learning-code wrapper, and it won't cost real performance — there are examples of code that does this — but it is an extra thing you'll have to do that most people aren't used to. Anything else you want to tell us about the 1.0 roadmap? There are a couple of other cool things we're hopefully going to do. One is that for the entire time I've been in charge of Gym, we've been working on a new, fully featured, really nice website. To see roughly what it's going to look like, you can go to pettingzoo.ml, which is the current PettingZoo documentation website — it will be based off that, and it's going to be very, very comprehensive. So that's something we're actively working on. Another cool thing that will hopefully happen is that a widely used third-party environment is going to be folded into the toy_text environments.
Hopefully — that's pending so far; it's the current plan that's been publicly discussed. And then the one other thing worth mentioning is that we're making some minor changes to the render API, which you can look up — that's another one where the breaking changes won't require much work to sort out. And just to give people a sense of what's happening with the 1.0 release: in general, the overarching plan is to make all the breaking changes behind flags and such while retaining backwards compatibility, and to add all the new environments. Then in the 1.0 release, all the deprecated environments and all the backwards-compatibility shims get removed — because it will be a 1.0 release — and that will have a stable set of environments, the simpler new API, a stable set of all-new wrappers, Jumpy, and all these things. The only big thing we want to deal with after the 1.0 release is looking into the vector API. A lot of people don't like it, for very good reasons, and it's an important thing to address — but just from the perspective of, you know, I'm one person with a small number of people, we're essentially pushing that one aspect to after the 1.0 release, when all the stuff with the core API has solidified. Got it. Okay, sounds like you have your work cut out for you. Yeah — that's 2022. So, do you want to move on to PettingZoo? Yeah, I'm super excited about this project. So you have the PettingZoo environment. I see you have a PettingZoo paper on Scholar and arXiv. Yeah, it was accepted at NeurIPS. Excellent. Can you tell us about PettingZoo? Yeah. So this is how I got put in charge of Gym, in a nutshell: I created the most similar thing to it. PettingZoo is a library that was intended to be Gym for multi-agent RL. Many people had tried to create multi-agent environments with the Gym API, except there's no indication of explicit agents and related things, and it really sucked and was hard to use. So people had all these heterogeneous third-party APIs, and it was like the dark times of RL before Gym: okay, I want to reproduce the work in this paper — cool — except the APIs are different, so now you have to do a lot of software engineering, and in doing that engineering you introduce your own potential sources of irreproducibility and all that. That's not good for anyone. So the intention of PettingZoo was to be Gym for multi-agent RL. This has arguably been, in some ways, a much harder job than Gym — I mean, Gym was hard in that it was first, but having a universal multi-agent API is much harder than having a universal single-agent API, is what I mean. We ended up going through a lot of different design iterations. There's still a fairly small number of more minor breaking changes planned for the PettingZoo API — much more minor than the stuff in Gym. If you're listening to this and you're doing anything normal with PettingZoo, the way you should be, you'll be fine. So one thing you encounter right away when you get to multiplayer RL, or multiplayer games, is two different paradigms: in some games, agents take turns making moves, and in some games, all agents make their moves at once and then the environment steps. And I think you have a nice way to handle this in PettingZoo. How do you handle that? So imagine a game where all agents step together at once. That's like rock-paper-scissors, right? Everyone acts at once.
And then imagine a game like chess, where one player acts, then the other player acts, and you go back and forth, right? There are, loosely speaking, two ways to handle this. One way would be to have an API where you step one agent at a time, then the next agent, and so on — you just cycle through everyone. You can do that, and it's cool, and it's a general and desirable thing to do: stepping through each agent one at a time works for chess, and it isn't weird for rock-paper-scissors either, where everyone acts at the same time, because the environment essentially just queues the actions. Whereas if you make the alternative your default API — everyone stepping together — okay, what happens when you're playing chess? You essentially have to feed in a dummy action for one agent at a time, and that gets problematic. That's the intuition for why I believe this agent-by-agent stepping API is a better default — though there are of course still many uses for APIs that focus on the simultaneous case. The other argument — and this is essentially the argument the paper makes — is about the mental model of every agent stepping at once. In a multi-agent environment where each agent supposedly acts simultaneously, the way the code actually resolves things is that it runs through a loop over all the agents: unless you're doing some wild parallelization, there's a for loop over the agents, and each agent updates the environment sequentially on a single core. But everyone models these environments as if the agents all step at once. So there's this weird mental-model-versus-implementation discrepancy. Okay, so can you point to a real-world example of an important, widely used environment with a bug caused by that discrepancy? Yes — we found a couple, and the paper goes into the bug and the environment. It was the open-source implementation of the sequential social dilemma games. The bugs we found were actually recently patched by an undergrad who's been working with me on a different project using those environments. Essentially, what this causes is race conditions: you have internal resolution logic that depends on the ordering, and if you're thinking about the game as if that sequential resolution isn't happening, things go wrong in ways your mental model can't see.
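To make the two paradigms concrete, here is roughly what the two PettingZoo interaction loops look like. This is a hedged sketch: exact signatures and environment version suffixes change across PettingZoo releases (newer versions split `done` into termination and truncation), and `my_policy` is a placeholder standing in for your agent, not part of the library.

```python
# Sketch of PettingZoo's two interaction loops (signatures vary by version).
from pettingzoo.butterfly import pistonball_v6  # version suffix changes across releases


def my_policy(agent, observation, action_space):
    # Placeholder "policy": just sample a random action.
    return action_space.sample()


# 1) AEC API (the default): agents act one at a time. This also covers
#    turn-based games like chess; for simultaneous games the environment
#    effectively queues the actions.
env = pistonball_v6.env()
env.reset()
for agent in env.agent_iter():
    observation, reward, done, info = env.last()
    action = None if done else my_policy(agent, observation, env.action_space(agent))
    env.step(action)

# 2) Parallel API: every live agent submits an action in a single call,
#    which is what you want for throughput when many agents act at once.
parallel_env = pistonball_v6.parallel_env()
observations = parallel_env.reset()
while parallel_env.agents:
    actions = {agent: my_policy(agent, observations[agent], parallel_env.action_space(agent))
               for agent in parallel_env.agents}
    observations, rewards, dones, infos = parallel_env.step(actions)
```

The AEC loop is the default; the parallel API exists mainly for performance when many agents genuinely act simultaneously, as discussed below.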
And the de facto model people use for the simultaneous framing is the POSG — the partially observable stochastic game — and the APIs built around it. Another problem with that model is that it doesn't give you access to information you arguably should have. For example, imagine a game where five people sit in a ring and take individual turns. In many games, some fraction of your reward will be attributable to different players' turns, right? And that might — just from an understanding perspective, if not a learning perspective — be something you want to be aware of and have access to. Well, if you take the POSG approach, all the rewards get smashed together, so there's no possibility of that attribution, whether for mental-model and debugging purposes or for learning purposes. That also bothers me. So what the PettingZoo paper does, in introducing the PettingZoo library itself, is put forward this formally defined model of AEC games — agent environment cycle games — along with the mental model of sequentially stepping games, and we show they're provably equivalent to partially observable stochastic games. So POSGs — like you mentioned, that's partially observable stochastic games? Yep. It's the most general model — think of it as a multi-agent version of a partially observable MDP. There are a bunch of multi-agent models you'll see used; the POSG is the most general and commonly used one, outside of extensive-form games, but those are kind of a different thing. Yeah. As a summary, if people want to Google POSGs, I would genuinely recommend looking at the literature review in the PettingZoo paper. A lot of what you'll find if you try to Google the formal history of this stuff is bad — not that the PettingZoo paper is some sort of gift from God, but it's usable, and a lot of the textbook-ish sources on this are troubling, if anyone's interested. So that was the idea with the AEC games model; that's what we did. And the reason we implemented both the POSG-style and the AEC-based APIs in PettingZoo is that there's one really big problem with the AEC API, and that's performance in games where agents can act simultaneously. Imagine — and this is a real case; I'm working on a not-yet-announced project that involves training thousands of agents in a PettingZoo environment — okay, well, if you're doing that with the AEC API, that's not good: you have to make a call for each agent, you can't have the neural networks doing inference in parallel on the GPU, and it makes things much slower. So for that kind of thing, having the standard POSG-style parallel API is important. PettingZoo supports both — it doesn't support the parallel API for strictly sequential environments, because that would be problematic, and it treats the AEC API as the default, but it supports both. Cool. Okay. And then — so I personally competed in the NeurIPS 2018 Pommerman multi-agent competition, which was my introduction to a lot of these multi-agent RL issues. For example, one thing I noticed is that sometimes in MARL you want agents to have individual rewards, and sometimes you want to share the credit as a team reward — or sometimes I wanted to balance between those two things, to make agents more selfish or more team-focused, and I had to do all this custom stuff to make that work. How does reward work in PettingZoo? And the notion of teams, or competing teams — is that orthogonal to what PettingZoo is doing? Kind of. PettingZoo doesn't formalize a notion of teams.
In multi-agent RL, every single agent is trying to maximize its own expected future reward, right? Whether that process is cooperative or competitive or mixed depends on the rewards present in the environment, and it's a similar story with teams. PettingZoo doesn't formally delineate any of this, because it's just a property of the environment, so to speak. In PettingZoo, agents are named, so if you really need to do team-based stuff, you can use the agent names — have the first part of the name be, you know, team_blue_1, team_blue_2, and so on for different agents — but there's no formal notion of teams. As far as rewards go, the PettingZoo documentation page has a really good, thorough explanation, but the brief version is that for any agent, at any timestep where a reward occurred, you can pull that reward exactly, or you can get the cumulative reward over the last cycle of agents that acted and feed that to learning, if you don't want to deal with dictionaries of rewards. Okay. And then PettingZoo is more focused on the MDP paradigm than the planning paradigm — is that right? Yeah, PettingZoo is focused on the classical state-action-pair style of RL. The most different thing from PettingZoo that's widely used is OpenSpiel, which has things like classical backtracking. The issue with backtracking, which you want for those classes of classical games, is that it can't be supported generally for a lot of different environments — it's messy from a computational, API, and environment-design perspective. Because PettingZoo environments support pickling — well, maybe not all third-party ones do, but all first-party ones do — you can still do backtracking, but it's less computationally efficient than the native support OpenSpiel has. We didn't add it natively because it's not a commonly used feature for currently popular deep RL work outside of the specific classic games that OpenSpiel targets; it's a specialty feature for more specific things. Yeah, one other cool thing to mention about PettingZoo is just getting to see its adoption cycle. When PettingZoo was released, I had almost no professional credibility — I was very new to the field, I'd transferred from physics, I didn't really know anyone — and I ended up talking a dozen different grad students and undergrads into working on it for six to twelve months. And it's just been really, really cool to see how PettingZoo has grown from this almost obscure library into something that all the major multi-agent RL libraries use — and the ones that don't are actively working on supporting it, except for OpenSpiel, which is doing something different from what we're trying to do. It's been a really cool thing to watch for a brand-new library that I created not that long ago. There are something like thirty-plus different third-party environments for PettingZoo now — I think more than that; there's a list of them on the documentation website. And, you know, to see this huge wave of integration — if people don't know what the RL Discord is, it's what it sounds like, and it's really cool; you should Google it and check it out. I'm in there, and a lot of the open-source RL people are.
And the multi-agent RL channel on there is essentially just Stack Overflow for PettingZoo a lot of the time now, which is really worthwhile to see on a personal level — that people are using your stuff that much. Sweet. Okay, I look forward to checking that out more; it's been on my list for a while, but now it's moved up near the top. Thanks for giving the community PettingZoo. I'm sure this is just the beginning of the epic story of where PettingZoo is going to go. Hopefully. So let's move on to SuperSuit. Do you want to tell us about SuperSuit? Yeah. So SuperSuit is in the process of being killed. The story of SuperSuit is that PettingZoo needed wrappers, right? And Gym's built-in wrappers weren't very good — they still aren't; we regret almost literally all of them, and that partially motivated the Jumpy and wrapper refactors. But okay, if we're going to do PettingZoo wrappers that are comprehensive and good, why not do Gym wrappers too and put them in their own package? So SuperSuit became the wrappers package. A couple of things it did were maybe a tiny bit innovative — nothing profound. One is that we were the first to add versioning for wrappers, the way environments are versioned in Gym and PettingZoo. I think that's important, because wrappers impact reproducibility just as much as the environment does. The other aspect is having a bunch of wrappers that do small things, that people can just grab. For example, if you're using MuJoCo, most people have to write their own little code snippet to turn the float64 observations into float32 — or float16, depending — to be able to pass them to general neural network code. What was cool about SuperSuit is that it just had a bunch of these wrappers you could grab off the shelf. For a variety of reasons — in part because of Jumpy and all these other things — SuperSuit is being killed and broken up: the Gym wrappers go into the new version of gym.wrappers, and the PettingZoo ones will be moved into pettingzoo.wrappers.
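As a small illustration of the kind of utility wrapper being described — the float64-to-float32 cast — here is a minimal sketch using Gym's standard ObservationWrapper interface. This is not SuperSuit's own code, and the class name is made up; it's just the sort of snippet people otherwise rewrite by hand.

```python
# Minimal sketch of a "small utility" wrapper: cast float64 observations to
# float32 before they reach the network. Uses the standard gym.ObservationWrapper
# interface; this is not SuperSuit's implementation.
import numpy as np
import gym
from gym import spaces


class CastObservationToFloat32(gym.ObservationWrapper):
    def __init__(self, env):
        super().__init__(env)
        low = self.observation_space.low.astype(np.float32)
        high = self.observation_space.high.astype(np.float32)
        self.observation_space = spaces.Box(low=low, high=high, dtype=np.float32)

    def observation(self, observation):
        return observation.astype(np.float32)


# Typical use (MuJoCo observations are float64 by default):
# env = CastObservationToFloat32(gym.make("HalfCheetah-v3"))
```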
And then you said you wanted to speak about scientifically legitimate experiments in RL. There's this common problem in RL where you'll go and read a paper and be able to tell very quickly that reading it wasn't a good use of time — like, okay, why does anyone care? Multiple things went wrong here. Without picking on specific papers, one example of something seriously wrong is making claims about methods without doing hyperparameter searches, or with only training once, or without using the kinds of comparison methods described by Marc Bellemare and the rliable work — I'm not sure how it's pronounced — and other similar work. And we see papers like this really, really often; they get accepted to high-profile venues — in many cases, I would hypothesize, by reviewers who aren't profoundly experienced in RL, though I can't confirm that personally. This is something that really bothers me and that I think a lot more attention needs to be placed on. And, you know, it's one thing if people just want to put work out there, but the problem is this: if you want to claim your method achieves some performance, then unless you do things like hyperparameter searches, testing on a diverse set of environments, and statistically sound comparisons, you can't actually say you've shown anything — you've done all this work and contributed no new knowledge to the space. And you have about ten thousand working days in the average person's life, and this is what you're spending part of it on. I mean, there's obviously a motivation to publish papers, sure, but there are ways to do this where you're making actually scientifically valid claims about your work and contributing actual knowledge, and that's not widely done. I hope that making experiments cheaper to run with hardware acceleration will improve this, but more than anything it requires a systemic change in the culture and review process for papers. Because the part I can't get past is this: people spend large amounts of time creating these papers and getting them into peer-reviewed venues, and then don't go through the additional work needed to make scientific claims that actually contribute knowledge — knowledge that is correct, or at least that you can show is correct. You're spending a fraction of your life doing this, and it's a very finite thing. I can't wrap my head around it, and I think it's something that really needs to be addressed. Two people who I think have done a really good job in this space, in terms of scientifically legitimate reinforcement learning claims, are Marc Bellemare and Philip Thomas. But in a lot of high-profile publications, it has been shown in the literature — peer-reviewed literature, not that that means much in this field — that the claims about the superiority of methods were false. And here's the weird thing to me: for some reason I get approached by a ton of grad students wanting to work with me, and they all want to do methods papers — they want to create the new PPO, or the new whatever. The problem is that this has been done literally hundreds of times, and almost none of them have had the experimental support for their claims to be widely believed. Maybe the exceptions that have seen a little bit of meaningful use are things like PPG (Phasic Policy Gradient), which you'll see used from time to time; but beyond those there's been very little progress on this fairly important front, primarily due to the issue of credible benchmarking. So that's my "old man yells at cloud" monologue about that. This sounds related to — I'm thinking of Rishabh Agarwal, who was first author on a paper that won an outstanding paper award at NeurIPS 2021, "Deep Reinforcement Learning at the Edge of the Statistical Precipice". That's the one arguing that the statistical methods used in RL comparisons are pretty bad, and that we should do a better job of benchmarking using basically commonly understood statistics. And one of the things there was that the sample sizes are usually so small — how can we draw conclusions from maybe one or a handful of runs? So it sounds related — not the entire issue, but related. It's the same issue, yeah. And you mentioned Marc Bellemare — he's also a co-author on that paper. Yeah.
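For reference, the kind of aggregation that paper recommends — an interquartile mean across runs with bootstrap confidence intervals, rather than the mean or max of a handful of seeds — can be sketched in plain NumPy as below. The rliable library provides more careful, stratified versions of this; the numbers here are made up.

```python
# Plain-NumPy sketch of "Statistical Precipice"-style aggregation: report the
# interquartile mean (IQM) across runs with a bootstrap confidence interval.
import numpy as np


def iqm(scores: np.ndarray) -> float:
    """Mean of the middle ~50% of scores (discrete approximation;
    scipy.stats.trim_mean(scores, 0.25) is the exact version)."""
    scores = np.sort(scores)
    n = len(scores)
    return float(scores[n // 4: n - n // 4].mean())


def bootstrap_ci(scores, stat=iqm, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    stats = [stat(rng.choice(scores, size=len(scores), replace=True))
             for _ in range(n_boot)]
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])


# e.g. final normalized returns from 10 training runs of one algorithm (made up)
returns = np.array([0.62, 0.71, 0.55, 0.90, 0.68, 0.74, 0.30, 0.66, 0.81, 0.59])
print("IQM:", iqm(returns), "95% CI:", bootstrap_ci(returns))
```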
One thing related to this — if anyone's interested in a research topic and wants to work on it, you can email me and I'll tell you everything I know, because this really needs to be done — is that there's a large number of implementation-specific tricks that affect how black-box optimizers are used for automated hyperparameter tuning in RL. And I say this as someone who has written — at least as far as I can publicly tell — some of the largest amounts of automated hyperparameter tuning code for deep RL in the world, for a project I'm working on where the entire thing is massive-scale hyperparameter tuning with hundreds of GPUs. This hasn't really been studied. And coming back to the earlier point: if you want to make an accurate claim about how good your method is, you kind of have to do hyperparameter tuning in an automated manner. And if that's the case, presumably there should be a whole literature on the impact of all the generally-understood implementation tricks for hooking these two things together — and there is not. I think readily available hardware-accelerated environments, in Gym and elsewhere, would make academic-scale research on this easier. But if anyone's looking for a research topic, I think this is a foundationally important thing that, as far as I know, literally no one in the world is working on — and if anyone wants me to tell them everything I know about it, email me. Okay, that's a generous offer. I want to follow up on one aspect of that right now. I was looking at — I'm trying to remember which one it was, but either Hyperopt or Optuna — and I thought, I'm going to get to know hyperparameter tuning. So what I did is hook it up to a dummy problem where the result was just a noisy random number, and said: okay, hyperparameter tuner, try to optimize this. And what I found is that it had no awareness that its hyperparameters had no impact on the result. Yep. It gave me some answer — "I found this point that had the best result" — but there was no point at which it said, you know what, I'm going to stop turning that particular knob because it has no impact. And that felt very dumb to me. Yeah. I haven't used Hyperopt; I've used Optuna exclusively — not for any religious reason, it's just that Optuna has a bunch of built-in things, and other people have built things specifically for Optuna that I use. And if you use pruning with Optuna, it will get around that problem; there are different settings for it. But just to illustrate the scale of the issue here, let me give one example of why this is (a) such an important and (b) an incredibly low-hanging research problem.
When you're training on a reinforcement learning environment, at some point you have to take the series of reward values — the learning curve — and return one value to the black-box optimizer, because the black-box optimizer can only take one value, for good reason. Okay: what value from that curve do you return? You might say, oh, you take the best one, right? Well, the problem with taking the best one is that your hyperparameter optimizer will then go and find these really, really unstable hyperparameters — hyperparameters that induce really unstable learning — so your learning curves essentially end up flat, flat, flat, then for a single step they peak up incredibly high, "fully learning", and instantly drop back down. All your learning curves look like that, and it's super weird. So okay, maybe don't report that value. You can report the final value, and that kind of works — sure — but do you want a weighted average over the last N values, or something else? And there are related problems. Okay, I want to find the best value, but it would also be useful to somehow incentivize finding more sample-efficient hyperparameters. How do you do that, beyond just arbitrarily constraining things? What if you want to add some sort of additional penalty, returned to the black-box optimizer, for wall-clock time or resource usage — how do you integrate that in a way that doesn't screw everything up? Or variance, right? If you want reproducibility. Yeah — and the problem with variance is that you have to run everything a bunch of times, which is even more challenging. But yeah, this is what I mean: okay, what part of the reward curve do I return to the black-box optimizer? This is not a formally studied problem, as best I can tell. This could be foundational work — and there are lots of other things like this that I can get into later, if people care. And aside from being generally foundational, it's easy foundational work that has zero competition, which sounds appealing. No, that sounds like important stuff. I'm not aware of what other people have done in that space, but I've definitely faced some of those issues in the past — like, how do I plug my training curve into Hyperopt or Optuna or anything like that?
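A hedged sketch of what wiring an RL run into Optuna can look like is below: report intermediate returns so the pruner can kill bad trials, and return the mean of the last few evaluations rather than the single best point on the curve (which tends to reward unstable, spiky hyperparameters). `train_one_iteration` is a stub standing in for a real training-and-evaluation loop; the Optuna calls themselves are standard API.

```python
# Sketch: an Optuna objective for an RL run, with pruning and a "last-k mean"
# score instead of the max of the learning curve.
import numpy as np
import optuna


def train_one_iteration(lr, gamma, rng):
    # Stand-in for "train for a while, then evaluate": a noisy score that
    # loosely prefers mid-range learning rates.
    return -abs(np.log10(lr) + 3.5) + 0.99 * gamma + rng.normal(scale=0.1)


def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
    gamma = trial.suggest_float("gamma", 0.9, 0.999)
    rng = np.random.default_rng(trial.number)

    returns = []
    for step in range(20):
        returns.append(train_one_iteration(lr, gamma, rng))
        trial.report(returns[-1], step)      # let the pruner see the curve
        if trial.should_prune():
            raise optuna.TrialPruned()

    # Score = mean of the last 5 evaluations, not the max over the whole curve.
    return float(np.mean(returns[-5:]))


study = optuna.create_study(direction="maximize",
                            pruner=optuna.pruners.MedianPruner(n_warmup_steps=5))
study.optimize(objective, n_trials=50)
print(study.best_params)
```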
You mentioned that you gave up on reading RL papers under normal circumstances a long time ago, and you talked about that article we saw earlier this year, "Please Commit More Blatant Academic Fraud". That article was incredible. Do you want to tell us about your response to it? Because that's the kind of perspective you don't hear very often — people are generally not that honest, or frank, I would say. Yeah. The reason I like the article so much is that there's a specific benchmark in multi-agent RL — anyone in the space will instantly know what I'm referring to — where I don't want to call people out, because of professional issues, but essentially the entire literature on that one benchmark falls into this sort of "please commit more blatant academic fraud" category. And it's gotten to the point now where — which is wild by academic standards — there are actual allegations of genuine academic fraud, because of how badly the scientific claims in the papers in that space were made. The author of that article was basically the first person outside the core group of people to go and read things and think for a while before writing a paper — that's all he did — and he managed to, like, solve the environment set. It was in this little subspace of multi-agent RL, and it ended up being a fairly impactful thing, at least to me. And just the fact that this can happen is a detriment to all of us and a shame for the field. And, you know, everyone says reproducibility is broken and this and that bad thing in RL — I mean, you should at least try. People make mistakes in biology and such all the time; people regularly find out that an entire subfield was using invalid statistics, and that kind of thing happens, I guess — but they're at least trying to do the right thing; effort is put into doing things the right way. And I feel like where we are is somehow different from that. And this is why I stopped reading papers: how many papers a year are published that you care about, that you'll even remember — like, ten? So what's the point of reading through the deluge of new papers? And this isn't even unique to me — I'm not going to name people, but there are other, very famous professors, some of whom I think have been on here, who take the same approach: except for some very edge-case stuff, most of it isn't really worth the time to read. And if people are spending large amounts of their life churning out paper after paper after paper, and the consensus about those papers is that they're not worth reading — that's what you're spending a portion of your life doing. That's something that really, fundamentally bothers me: for the people doing it, for the community, for everyone. So I get that the entire last however-long of this interview has been "old man yells at cloud" style, but the emperor has no clothes. Let's get some clothes for this emperor, please, people. So is there anything else I should have asked you today, Jordan? I don't think so. I guess there's the standard thing at the end: what do I see as the big lines of research over the coming years, and what's the holy grail of my line of research? One thing: I would really like to wrap up what will hopefully be some moderately high-profile work in the next three to six months and get it out so I can move on with my life — get Gym 1.0 and all that stuff out so I can go deal with other things.
One problem I'd really like to try to solve personally is beating PPO across a well-chosen, diverse set of environments, with experiments done in such a way that real scientific claims can be made about meaningfully beating PPO for the first time. I think that would be a really cool problem to work on personally. And then, regarding the holy grail: in the space of RL in general, obviously, if not general intelligence, then something like GPT-3-style generality for RL would be really cool — if you want to talk to someone about that, ask Joseph Suarez at MIT. But for me personally, I think the holy grail is a unified set of best practices across all the different RL environments — for experimental validity, for making real scientific claims, all these things — that can be standardized across all the little sub-disciplines of RL, so that we can at least make better progress. The reason I think this would help with the emperor-has-no-clothes problem is that many of the people trying to solve these benchmark problems, for better or worse, don't have a strong personal incentive — for whatever reason — to make profound contributions to science; they're trying to add as many papers as possible. And if you can at least create a genuine standard for reproducibility in RL, then you can at least say: well, you didn't meet the agreed-upon, published criteria for the thing you're working on, so go fix it. That would at least make a lot of the papers that would otherwise be completely worthless, and offer almost no new empirical knowledge, offer some — even if that's not what the authors have their hearts set on, you know? I mean, there's some tension there. Some people cynically say that large organizations would be pro regulations that create barriers to entry, because they can already clear those barriers. Yeah — well, yes, but — I mean, the argument would be: if we insisted on, say, fifty runs of your algorithm, that now limits the work to the biggest labs, the ones that can actually do that for significantly complex agents and environments. So here's the problem I have with that. I'm entirely appreciative of all the constraints that come with doing this, and I think hardware acceleration helps a lot where it's applicable — there are many cases where it isn't, or people aren't able to use it. But in those cases — okay, let's say you couldn't do fifty runs, or you couldn't do a lot of hyperparameter tuning. Fine, you couldn't. But because you couldn't — even through no direct fault of your own — it still essentially precludes you from contributing scientific knowledge, or at least the vast majority of it, in the work you're publishing. If you can't do that, you're already excluded from contributing to certain areas of reinforcement learning — though of course there are many areas of RL you can contribute to without those compute resources.
But yes, you would be excluded from some other things — you already are, right? It's just that people pretend you aren't; it's just obscured. And so maybe it would clarify which areas of research it's appropriate for labs of a certain size to focus on, so that they can actually produce scientifically valid results. But then your work on making this stuff more scalable can change the equation, and presumably is better for the field. I very much hope so — we'll see how things go over here. Awesome. Well, Jordan Terry, this has been fantastic. I really enjoyed this conversation, and I really enjoyed learning about all the incredible work you're doing. Your contributions to the community are just outstanding. Thanks so much for sharing all this with TalkRL today. Thank you so much for having me. It was really nice to be here.
[ { "end": 10.72, "start": 0, "text": " TalkRL podcast is all reinforced in learning all the time featuring brilliant guests both" }, { "end": 12.52, "start": 10.72, "text": " research and applied." }, { "end": 15.52, "start": 12.52, "text": " Join the conversation on Twitter at TalkRL podcast." }, { "end": 22.32, "start": 15.52, "text": " I'm your host, Robin Chohan." }, { "end": 27.96, "start": 22.32, "text": " Jordan Terry is a PhD candidate at the University of Maryland, the maintainer of Jim, and the" }, { "end": 31.96, "start": 27.96, "text": " maintainer and creator of Petting Zoo and the founder of Swarm Labs." }, { "end": 33.84, "start": 31.96, "text": " Thanks so much for joining us, Jordan." }, { "end": 35.96, "start": 33.84, "text": " Hey, good to be here." }, { "end": 39.08, "start": 35.96, "text": " So how do you like to describe your focus area?" }, { "end": 42.96, "start": 39.08, "text": " I have been working in deep reinforcement learning." }, { "end": 46.82, "start": 42.96, "text": " Most seem multi-agent or enforceable learning, but I have some single agent reinforcement" }, { "end": 47.82, "start": 46.82, "text": " learning work that I've done." }, { "end": 53.88, "start": 47.82, "text": " I've essentially been pursuing a handful of hopefully high impact, very, very long term" }, { "end": 58.64, "start": 53.88, "text": " projects that are hopefully going to become public in the next three to six months, most" }, { "end": 59.64, "start": 58.64, "text": " of all of them." }, { "end": 60.72, "start": 59.64, "text": " Okay, that sounds exciting." }, { "end": 62.24, "start": 60.72, "text": " So let's start with Jim." }, { "end": 67.8, "start": 62.24, "text": " I don't think anyone could ever overstate the importance of Jim in the RL ecosystem, and" }, { "end": 72.16, "start": 67.8, "text": " I understand that you were now maintaining that project, which has got to be a huge job." }, { "end": 76.56, "start": 72.16, "text": " Yeah, it is a memorable experience, I'll say." }, { "end": 77.56, "start": 76.56, "text": " Awesome." }, { "end": 81.36, "start": 77.56, "text": " So can you help us with some history of the Jim project?" }, { "end": 87.28, "start": 81.36, "text": " Yeah, so what Jim was intended to be in what it is or kind of different things." }, { "end": 91.48, "start": 87.28, "text": " What's happened is that Jim is now basically HTTP for RL." }, { "end": 97.64, "start": 91.48, "text": " It is the standard interface between pretty much all environment and learning code." }, { "end": 100, "start": 97.64, "text": " It has been installed 35 million times." }, { "end": 103.44, "start": 100, "text": " It's the most installed RL library in the world by a very large amount." }, { "end": 108.16, "start": 103.44, "text": " So Jim is essentially into becoming HTTP for RL and something that was that was the price" }, { "end": 110.44, "start": 108.16, "text": " to everyone from what I've told old." }, { "end": 114.56, "start": 110.44, "text": " There is between 100 and a few thousand different third party environments." }, { "end": 117.56, "start": 114.56, "text": " I tried to estimate it and like create a list of though other than them at this point," }, { "end": 122.52, "start": 117.56, "text": " but because of forks, I can't even figure out like way to create a raise more to use" }, { "end": 123.6, "start": 122.52, "text": " to be able to hate a list." }, { "end": 126.84, "start": 123.6, "text": " If you listen to this and think of a way, please email me." 
}, { "end": 129.32, "start": 126.84, "text": " And yeah, so it's been sold 35 million times." }, { "end": 134.04, "start": 129.32, "text": " It's used so much that I can't quantify how much is used anymore and essentially all" }, { "end": 136.76, "start": 134.04, "text": " code is fundamentally built around this." }, { "end": 141.39999999999998, "start": 136.76, "text": " For a variety of reasons, Jim ended up being progressive less maintained over time." }, { "end": 147, "start": 141.39999999999998, "text": " I ended up essentially taking over the maintenance from open AI about five months ago now." }, { "end": 150.64, "start": 147, "text": " And so now I'm in the unique position of okay, cool." }, { "end": 155.84, "start": 150.64, "text": " You are now in charge of the most used and consequential piece of reinforced money software" }, { "end": 158.84, "start": 155.84, "text": " in the world that hasn't been extensively maintained ever." }, { "end": 163.04, "start": 158.84, "text": " And one interesting consequence to Jim is that because Jim wasn't expected to be HTTP" }, { "end": 166.72, "start": 163.04, "text": " for RL, some of the design choices and stuff they're made for are worth." }, { "end": 170.88, "start": 166.72, "text": " They're not as deliberate as you might like just to be clear for what Jim was intended" }, { "end": 171.88, "start": 170.88, "text": " to be." }, { "end": 172.88, "start": 171.88, "text": " There's nothing wrong with this approach." }, { "end": 177.2, "start": 172.88, "text": " Like Jim was this little side project almost of like, hey, we did a cool thing for everyone" }, { "end": 180.8, "start": 177.2, "text": " to have, you know, a share said a benchmark, see if we can go play with the community" }, { "end": 181.8, "start": 180.8, "text": " field." }, { "end": 184.04, "start": 181.8, "text": " And for this, Jim was perfectly fine." }, { "end": 185.92, "start": 184.04, "text": " Where he's allowed us on the ground with it." }, { "end": 191.07999999999998, "start": 185.92, "text": " But when you end up creating HTTP for RL by accident, you know, this wasn't designed" }, { "end": 192.07999999999998, "start": 191.07999999999998, "text": " for that." }, { "end": 193.24, "start": 192.07999999999998, "text": " Literally wasn't designed for that." }, { "end": 196.52, "start": 193.24, "text": " And so now you have to figure out what are you going to do next?" }, { "end": 200.20000000000002, "start": 196.52, "text": " And this essentially has been a large part of my life ever since." }, { "end": 205.24, "start": 200.20000000000002, "text": " You mentioned some specific changes that you're planning and that are documented in the" }, { "end": 206.24, "start": 205.24, "text": " repo." }, { "end": 209, "start": 206.24, "text": " So maybe let's, let's start with those." }, { "end": 216.64000000000001, "start": 209, "text": " I pulled up the Jim repo and I can see here on April 27th, Greg Brockman first commit," }, { "end": 217.64000000000001, "start": 216.64000000000001, "text": " 2016." }, { "end": 218.64000000000001, "start": 217.64000000000001, "text": " Hello world." }, { "end": 219.64000000000001, "start": 218.64000000000001, "text": " I love that." }, { "end": 222, "start": 219.64000000000001, "text": " Yeah, RL has changed a lot since then." }, { "end": 228.2, "start": 222, "text": " So yeah, it makes total sense that that we need to bring Jim up to date." 
}, { "end": 233.64, "start": 228.2, "text": " So I think the communities are going to owe you a debt of gratitude and advance for this." }, { "end": 235.44, "start": 233.64, "text": " I can't imagine how much work it is." }, { "end": 239.48, "start": 235.44, "text": " So let's talk about some of the, some of the things that you have planned." }, { "end": 244.16, "start": 239.48, "text": " While seeding the random number generator, pretty technical change that may not have" }, { "end": 246.92000000000002, "start": 244.16, "text": " too much direct impact on us, but what's the benefit there?" }, { "end": 251.8, "start": 246.92000000000002, "text": " The first thing interesting change the API is changing how seeding works." }, { "end": 256.36, "start": 251.8, "text": " So previously in Jim, well, originally in Jim, Jim didn't really have a standardized" }, { "end": 257.36, "start": 256.36, "text": " seed method." }, { "end": 260, "start": 257.36, "text": " And then they added the specific dot seed method." }, { "end": 264.56, "start": 260, "text": " And the way that this method works for those of you who don't know is that you pass a number" }, { "end": 266.8, "start": 264.56, "text": " to the environment via the seed method." }, { "end": 269.76, "start": 266.8, "text": " And it just kind of goes and does its thing." }, { "end": 273.12, "start": 269.76, "text": " And so then whenever you reset, given that seed, it will just reset to that seed." }, { "end": 277.76, "start": 273.12, "text": " And so you can use this for determinism and reproducibility and results between papers." }, { "end": 282.2, "start": 277.76, "text": " If you, if everyone does things correctly, which happens less often than you would hope." }, { "end": 284.59999999999997, "start": 282.2, "text": " So the problem with the seed method, well, there's a couple things." }, { "end": 288.76, "start": 284.59999999999997, "text": " Number one, the internal stuff with the way that Jim did seeding was using like features" }, { "end": 291.28, "start": 288.76, "text": " and empire that were the old version and I recommended anymore." }, { "end": 294.4, "start": 291.28, "text": " And so there just had to be a list of internal upgrades there." }, { "end": 296.76, "start": 294.4, "text": " But then there's this issue of okay, what should the API?" }, { "end": 300.28, "start": 296.76, "text": " So there are three options for how to specify seeding for an environment." }, { "end": 301.96, "start": 300.28, "text": " One option is to do what's done." }, { "end": 304.52, "start": 301.96, "text": " Just have a separate method that handles this." }, { "end": 308.44, "start": 304.52, "text": " One option is to make an argument to, to environment initialization." }, { "end": 310.59999999999997, "start": 308.44, "text": " And then the other argument is to make it an argument to reset." }, { "end": 312.91999999999996, "start": 310.59999999999997, "text": " Sorry, the other option is to make an argument to reset." }, { "end": 318.79999999999995, "start": 312.91999999999996, "text": " The problem with the way that was currently done is one, the contract isn't clear." }, { "end": 323.88, "start": 318.79999999999995, "text": " For instance, a lot of the time this will be called as part of reset or call reset and" }, { "end": 325.64, "start": 323.88, "text": " depending on different third party environments." 
}, { "end": 329.2, "start": 325.64, "text": " And just okay, well, the method seed, okay, well, just from the method name, do you know" }, { "end": 333.24, "start": 329.2, "text": " if this is supposed to see it for one environment or for many environments or so on, you generally" }, { "end": 337.08, "start": 333.24, "text": " want a lot of standardization clarity regarding, you know, your basic tool for your" }, { "end": 338.08, "start": 337.08, "text": " profitability." }, { "end": 339.76, "start": 338.08, "text": " And so that's one part of the problem." }, { "end": 343.24, "start": 339.76, "text": " Then the other part of the problem is that there's nothing in the API that prevents it" }, { "end": 346.48, "start": 343.24, "text": " from being called and need at any time." }, { "end": 352.28000000000003, "start": 346.48, "text": " And so at a certain point, it ends up becoming more clear to make it a part of seed, of" }, { "end": 353.8, "start": 352.28000000000003, "text": " reset or part of init." }, { "end": 357.12, "start": 353.8, "text": " The reason that we've made it a part of init, not reset is because initialization of" }, { "end": 359.32, "start": 357.12, "text": " some environments is really, really, really expensive." }, { "end": 363.8, "start": 359.32, "text": " I like like a lot of simulators, they could, you know, they take minutes to start up in" }, { "end": 366.68, "start": 363.8, "text": " a lot of cases, even a workstation hardware." }, { "end": 370.82, "start": 366.68, "text": " And okay, well that means that needing to, if you're going through seed that means that" }, { "end": 374.02, "start": 370.82, "text": " putting this as an argument to a init is probably a bad idea." }, { "end": 376.96, "start": 374.02, "text": " What happened with all of this is that okay?" }, { "end": 379.28, "start": 376.96, "text": " Well, the current thing has an unclear contract." }, { "end": 382.56, "start": 379.28, "text": " That's not clearly used and it's arguably another method put in." }, { "end": 384.44, "start": 382.56, "text": " You can't really put it in a knit." }, { "end": 387.48, "start": 384.44, "text": " Then this is why I think that putting reset makes more sense ideologically and some people" }, { "end": 394.98, "start": 387.48, "text": " to agree with this and this is okay, I, you know, but I think that that having a clear" }, { "end": 399.56, "start": 394.98, "text": " contract with what the method does for disability is very important. And a lot of people think" }, { "end": 403.32, "start": 399.56, "text": " that, okay, it's like this and I guess it kind of works, we shouldn't break it. And I guess" }, { "end": 406.90000000000003, "start": 403.32, "text": " my driving philosophy with API and gym and stuff is that, you know, gym is going to be" }, { "end": 411.72, "start": 406.90000000000003, "text": " used a lot more in the next five years and it has been the past five, right? Just having" }, { "end": 416.24, "start": 411.72, "text": " everything be frozen and taught, I'm having nothing be changed and not bringing it up to" }, { "end": 421.04, "start": 416.24, "text": " the standards of what you would hope for what has become HTTP for RL, you know, that this" }, { "end": 426.92, "start": 421.04, "text": " isn't really an outcome that you want to see happen. Well, you know, having a break," }, { "end": 430, "start": 426.92, "text": " making breaking changes to sucks, you know, we can keep them minimal. 
And so, you know," }, { "end": 433.64, "start": 430, "text": " okay, if you have, if you have a seeding logic and environment or learning, you know, moving" }, { "end": 437.64, "start": 433.64, "text": " the board past in the C-dress, ever is passing it to the seed method, this is a trivial change," }, { "end": 442.84000000000003, "start": 437.64, "text": " right? This is a small change and until the 1.0 release, which is, you know, six months" }, { "end": 449.35999999999996, "start": 442.84, "text": " away, some like that probably, we're going to end up having the seed method, um, the old" }, { "end": 452.96, "start": 449.35999999999996, "text": " one still supported. And so if you're, you know, using the same problems like the code," }, { "end": 458.44, "start": 452.96, "text": " then it's still work. And so this isn't a big breaking change. This is essentially, it's" }, { "end": 462.96, "start": 458.44, "text": " less confusing and, you know, we need to get it to a point to release. The big breaking" }, { "end": 469.91999999999996, "start": 462.96, "text": " change to go silent of order is with truncation versus termination. This is a really important" }, { "end": 474.92, "start": 469.92, "text": " and really weird issue. Yeah. Can you help us help the audience understand the importance" }, { "end": 481.40000000000003, "start": 474.92, "text": " of truncation versus termination? And I think there was a paper on this topic by Pardo," }, { "end": 485.32, "start": 481.40000000000003, "text": " time limits and reinforcement learning, which is where I first learned about this, the" }, { "end": 490.8, "start": 485.32, "text": " subtleties of the difference between those, those two. And yeah, if you could just maybe" }, { "end": 495.20000000000005, "start": 490.8, "text": " walk us through that, that'd be awesome. Yeah. So I'm not going to go into math and" }, { "end": 499.92, "start": 495.2, "text": " a vocal format, but essentially what it comes down to is this. Um, if you have a list of" }, { "end": 504.88, "start": 499.92, "text": " state action pairs from an environment, um, you will then have a final state action pair." }, { "end": 511.76, "start": 504.88, "text": " If you then want to go and compute, um, things like value from it for, like for a reaction" }, { "end": 516.64, "start": 511.76, "text": " value or a van or just stuff like this, then computing this, the correct way depends on" }, { "end": 521.88, "start": 516.64, "text": " whether or not the, uh, you reach a state of the environment that is truly terminal, or" }, { "end": 525.56, "start": 521.88, "text": " if you're just cut off from stepping the environment further due to time limits, I you're," }, { "end": 529.8, "start": 525.56, "text": " you are truncated. So if you're trying to do DQ end, um, reproducing, you know, the paper" }, { "end": 534.48, "start": 529.8, "text": " and stuff like this, then, um, then this differentiation to correctly reproduce the paper" }, { "end": 543.08, "start": 534.48, "text": " matters. Additionally, what happens is that this is actually a thing for works beyond, uh," }, { "end": 549.08, "start": 543.08, "text": " just DQ and stuff, uh, in policy gradient methods, um, when you're using, uh, G A E's, uh," }, { "end": 554.6800000000001, "start": 549.08, "text": " G A E logic, this ends up coming up there as well. 
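As a concrete illustration of why the distinction matters when computing values, here is a rough sketch of a one-step bootstrapped target in the DQN style; the names are mine, not from any particular library.

```python
def td_target(reward, next_value, terminated, truncated, gamma=0.99):
    if terminated:
        # True terminal state: no future reward exists, so do not bootstrap.
        return reward
    # Ongoing or merely truncated (e.g. a time limit): the next state still has value,
    # so we bootstrap from our estimate of it. Note that `truncated` deliberately does
    # NOT suppress bootstrapping; that is the whole point of the distinction.
    return reward + gamma * next_value
```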
And okay, well, if this is this like" }, { "end": 560.44, "start": 554.6800000000001, "text": " subtle integral thing and how the stuff is computed and clearly specifying it is probably" }, { "end": 567.32, "start": 560.44, "text": " important. Problem is that Jim only has done as a state and this doesn't differentiate" }, { "end": 574.44, "start": 567.32, "text": " what the deal is. And this is not what you would want. People don't really, you know, use" }, { "end": 582.12, "start": 574.44, "text": " it as a tin. Now what Jim does have is for some environments in Jim, the trunkation is imposed," }, { "end": 588.0400000000001, "start": 582.12, "text": " um, via a time limit wrapper in those environments, it adds a info's property that indicates if" }, { "end": 592.08, "start": 588.0400000000001, "text": " the environment is truncated. However, it is monitoring this is not done for all environments" }, { "end": 597.2, "start": 592.08, "text": " where where this should be applied. And this is also not a widely known feature that is not" }, { "end": 602.44, "start": 597.2, "text": " widely used in learning code and not done in a third party environments to my knowledge." }, { "end": 607.48, "start": 602.44, "text": " And so Jim essentially just merger these two things that from a co-perspective are together" }, { "end": 610.96, "start": 607.48, "text": " different. You talked to lots of people who are somewhat famous in our own album. Jim" }, { "end": 616.12, "start": 610.96, "text": " and I'm like, Oh, yeah, man. When I was first learning Jim, uh, I was so confused by" }, { "end": 619.12, "start": 616.12, "text": " they were the same, but it was like I was going crazy. And then I kind of figured out" }, { "end": 625.44, "start": 619.12, "text": " and like, Oh, okay. And this isn't good. And so, so this is definitely like the clearest" }, { "end": 629.6, "start": 625.44, "text": " case of where we're breaking API changes required. So there's been this issue of what" }, { "end": 635.4, "start": 629.6, "text": " you want to do, right? Yeah, just so for the audience, I think one one thing that happens" }, { "end": 640.52, "start": 635.4, "text": " if you don't handle this correctly is what some people refer to as the photo finish ending," }, { "end": 648.44, "start": 640.52, "text": " whereas you treat it if you train an agent to do a sprint. And you you stop the simulation" }, { "end": 653.36, "start": 648.44, "text": " at some point. And the agent actually thinks that's the end of an episode. And then it may" }, { "end": 660.04, "start": 653.36, "text": " just fall on its face at the last moment, because it doesn't affect its speed at the last" }, { "end": 663.88, "start": 660.04, "text": " moment. But what you really want is to train a policy that can keep going. And the difference" }, { "end": 669.6800000000001, "start": 663.88, "text": " between truncation and and and really ending the episode is whether is what happens after" }, { "end": 675.2, "start": 669.6800000000001, "text": " that that point. Is that right? Yeah. Yeah. That's correct. Yeah. And but just to go into" }, { "end": 682.2, "start": 675.2, "text": " the options that you kind of have to choose from one option is to make done it. Well, if" }, { "end": 685.6800000000001, "start": 682.2, "text": " you want to make this explicit not info is arguing with the way it is now, which I think" }, { "end": 690.32, "start": 685.6800000000001, "text": " we kind of have to. 
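For reference, this is roughly what the infos-based route mentioned above looks like in learning code today, assuming the environment is wrapped in Gym's TimeLimit wrapper; the key name is as I recall it, and many third-party environments never set it, which is part of the problem being described.

```python
obs, reward, done, info = env.step(action)

# The TimeLimit wrapper marks episodes it cut short. A genuine terminal state
# has done=True without this key being set.
truncated = info.get("TimeLimit.truncated", False)
terminated = done and not truncated
```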
What you end up having to do is either make done a three-state" }, { "end": 696.24, "start": 690.32, "text": " variable, which you could do with a custom class in Python, but that would be unpythonic and kind" }, { "end": 701.6800000000001, "start": 696.24, "text": " of messy. Another thing you could do is have step return a fifth value" }, { "end": 708.44, "start": 701.6800000000001, "text": " in addition to observation, reward, done, info. You could have it return truncated. And" }, { "end": 712.5200000000001, "start": 708.44, "text": " for backwards compatibility in learning code, you simply add an extra comma" }, { "end": 718.08, "start": 712.5200000000001, "text": " underscore like you normally do for infos. And that's a trivial change to learning code" }, { "end": 722.84, "start": 718.08, "text": " to handle this. Although you probably should fix the logic too, of course, but it'll still" }, { "end": 729, "start": 722.84, "text": " run. And then the other option is, instead of returning a boolean, do what dm_env does and" }, { "end": 733.5600000000001, "start": 729, "text": " return a discount factor, which is a lot more expressive, right? The discount" }, { "end": 737.32, "start": 733.5600000000001, "text": " factor. Like, are we missing something by not having a discount factor in" }, { "end": 742.12, "start": 737.32, "text": " Gym? Actually, no, nothing to my knowledge. Okay. I'm not going to say that" }, { "end": 748.6400000000001, "start": 742.12, "text": " there is. I mean, it's surely possible that there is some aspect that" }, { "end": 753.6, "start": 748.6400000000001, "text": " is handled by a discount factor that you cannot describe otherwise. And if" }, { "end": 757.5600000000001, "start": 753.6, "text": " there is, please email me. But to the knowledge of myself and everyone I've talked" }, { "end": 762.6800000000001, "start": 757.5600000000001, "text": " to, there is no way to do this and no need to do this. Essentially where this" }, { "end": 766.5200000000001, "start": 762.6800000000001, "text": " comes down to is: should the environment handle the discount factor or should learning code" }, { "end": 773.76, "start": 766.52, "text": " handle it. In the show notes, one question was what is the difference between Gym and" }, { "end": 779.88, "start": 773.76, "text": " dm_env, and at least from my perspective, Gym is this minimal, simple, very," }, { "end": 786.4399999999999, "start": 779.88, "text": " very pythonic, super friendly to get into interface. If you know anything" }, { "end": 790.1999999999999, "start": 786.4399999999999, "text": " about RL, you just look at it and go, oh, that's how it works. Awesome. I understand now. And" }, { "end": 796.4, "start": 790.1999999999999, "text": " that is incredible. dm_env, while it has more features, has a style" }, { "end": 801.68, "start": 796.4, "text": " that's a little more reminiscent of the formal MDP model, and it doesn't have that" }, { "end": 806.48, "start": 801.68, "text": " infinitely easy, pythonic, accessible thing. And so if you're trying to use this" }, { "end": 810.16, "start": 806.48, "text": " for education, or if you're trying to just use the easiest thing or whatever," }, { "end": 814.92, "start": 810.16, "text": " I think that keeping things as simple as possible is really valuable.
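A minimal sketch of the two return-signature options being weighed here. The five-value form is the shape of the proposal, not a final API, so the name and position of the extra value are assumed; the commented lines show the dm_env-flavoured alternative of a per-step discount.

```python
# Proposed five-value step (name and ordering of the extra value assumed):
obs, reward, done, truncated, info = env.step(action)

# Existing learning code that ignores truncation only needs one extra underscore:
obs, reward, done, _, info = env.step(action)

# dm_env-style alternative: a per-step discount instead of a boolean,
# with discount == 0.0 playing the role of "terminated":
# timestep = env.step(action)
# target = timestep.reward + timestep.discount * gamma * next_value
```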
And" }, { "end": 819.16, "start": 814.92, "text": " I think that that sort of ethosage in my think is what I really liked about it. And I" }, { "end": 823.8, "start": 819.16, "text": " think that it's why became HTTP for RL because for all of the mutations API design is" }, { "end": 828.64, "start": 823.8, "text": " very understandable and simple. And so this is my, this is kind of why my preference is" }, { "end": 833.4399999999999, "start": 828.64, "text": " to just have a bullying variable instead of having people think through discounting and" }, { "end": 838.88, "start": 833.4399999999999, "text": " stuff. So just for the audience, I mean, generally, we talk, I think the most common way to" }, { "end": 844.4, "start": 838.88, "text": " talk about discount discounting is this gamma that we, that the agent chooses in terms" }, { "end": 848.92, "start": 844.4, "text": " in it and it sets their horizon as to how far in the future that care about roughly speaking." }, { "end": 855.8, "start": 848.92, "text": " Is that, is that how you would describe it? Yeah, essentially, I really like David Silver" }, { "end": 860.24, "start": 855.8, "text": " explanation of what a discount factor is. And it's that, you know, you get a reward from" }, { "end": 864.5999999999999, "start": 860.24, "text": " when you, when you take this action and then you know, and you're trying to figure out" }, { "end": 871.28, "start": 864.5999999999999, "text": " how useful it is to take these action now or later in your cumulative sum of rewards" }, { "end": 876.12, "start": 871.28, "text": " through an environment, right? And ideally, you'd want to take a reward us soon rather" }, { "end": 879.76, "start": 876.12, "text": " than later, because you don't have a perfect modeling environment and where things can" }, { "end": 883.48, "start": 879.76, "text": " happen and all these things. It is, this kind of is there for a way of it's essentially" }, { "end": 889.08, "start": 883.48, "text": " it's incentivizing taking actions earlier, getting rewards earlier, which is much more" }, { "end": 893.08, "start": 889.08, "text": " in line with what you hoped to have. It's like an interest charge on, on, on rewards." }, { "end": 897.44, "start": 893.08, "text": " Yeah. But typically we talk about it, the simplest way we talk about it as a global, a number" }, { "end": 904.8, "start": 897.44, "text": " across the, the entire episode or environment, but then in DMN, we actually have a discount" }, { "end": 909.0799999999999, "start": 904.8, "text": " at every single step, which I guess is what we're discussing here about the value of having" }, { "end": 913.4399999999999, "start": 909.0799999999999, "text": " that ability to change the discount at every single step, is it right?" }, { "end": 918.7199999999999, "start": 913.4399999999999, "text": " But I mean, you can change it on your own and learning code, right? It essentially, it" }, { "end": 924.68, "start": 918.7199999999999, "text": " comes down to is discounted in an environment property or is it a learning code property?" }, { "end": 929.3199999999999, "start": 924.68, "text": " And then there's, there's a philosophical answer to this and there's a technical one, there's" }, { "end": 934.4799999999999, "start": 929.3199999999999, "text": " non-breaking change one. And I guess, I guess my, my most pragmatic answer to this is that" }, { "end": 937.84, "start": 934.48, "text": " there's looking at the third party learning code and looking at third party environments." 
}, { "end": 943.36, "start": 937.84, "text": " I think that it would be a better outcome for the community, for this to be maintained" }, { "end": 949.12, "start": 943.36, "text": " in learning code because of how it's structured and because environments, a lot of time people" }, { "end": 953.6, "start": 949.12, "text": " just take some random environment and not really going to details like I've been working" }, { "end": 958.36, "start": 953.6, "text": " on a paper that will hopefully maybe submit it to a natured sub journal at the end of" }, { "end": 963.9200000000001, "start": 958.36, "text": " this month or really next month. And I've had to go and make, you know, old, the team" }, { "end": 970.4799999999999, "start": 963.92, "text": " will miraculously go and make this very, very, very large number of fixes to environments." }, { "end": 974.16, "start": 970.4799999999999, "text": " And this isn't really done, but people doing these things, at least by a lot of team" }, { "end": 977.76, "start": 974.16, "text": " sices, whereas in learning code, people do do the stuff because they kind of have to." }, { "end": 980.4, "start": 977.76, "text": " Okay. Do we, do you want to move on to the simulators?" }, { "end": 985.4, "start": 980.4, "text": " So one thing that's going on in Jim that I've gotten questions about is one of the, one" }, { "end": 988.36, "start": 985.4, "text": " of the biggest things which Jim and I first took over is like our statutory enforcement" }, { "end": 995.36, "start": 988.36, "text": " and whatever the post the top common ones get rid of musical. And this is turned into a," }, { "end": 1002.6, "start": 995.36, "text": " to a very, very deep story. It's probably worth telling briefly. And so the story of musical" }, { "end": 1007.92, "start": 1002.6, "text": " is that mute is so the problem with the musical environments. There is three problems." }, { "end": 1015.4, "start": 1007.92, "text": " So number one, well, there were three. Number one, the musical environments are very," }, { "end": 1019.76, "start": 1015.4, "text": " they depend on the musical simulator. You have to, you have to either lie to get a student" }, { "end": 1023.56, "start": 1019.76, "text": " license to and stuff like this, or you'd have to pay a large amount of money to like tens" }, { "end": 1028.6, "start": 1023.56, "text": " of thousands of dollars to get a professional license. And this makes your reproducibility" }, { "end": 1032.96, "start": 1028.6, "text": " hard because it's close or software. You can't get a grade of CI. It just becomes" }, { "end": 1037.6, "start": 1032.96, "text": " albatross on the field's neck. And well, okay, we should probably do something about" }, { "end": 1042.72, "start": 1037.6, "text": " that. But then things get worse because the Python bindings for deep mind, sorry, for" }, { "end": 1048.16, "start": 1042.72, "text": " the Python bindings for musical were musical pie by opening a, originally deep mind also" }, { "end": 1053.04, "start": 1048.16, "text": " created their own Python bindings separately before acquiring musical. But the Python bindings" }, { "end": 1059.08, "start": 1053.04, "text": " for musical by open, I were maintained. And then the, and then the environments themselves" }, { "end": 1061.88, "start": 1059.08, "text": " weren't maintained are very well documented. And so there are lots of things in these" }, { "end": 1067.08, "start": 1061.88, "text": " environments where no one had any idea what they were doing in the action space. 
And" }, { "end": 1072.08, "start": 1067.08, "text": " and like the guy who created the Bible or placements for instance, he just couldn't" }, { "end": 1078.32, "start": 1072.08, "text": " bear it. Well, a lot of stuff was doing, right? And then if you look at the list of, of" }, { "end": 1082.1599999999999, "start": 1078.32, "text": " issues on gym, most of them are tag musical like more than half. And the reason for this" }, { "end": 1086.32, "start": 1082.1599999999999, "text": " is that, you know, you get bug reports, musical invites, many of which are very, very, very" }, { "end": 1091.56, "start": 1086.32, "text": " serious mind you. Things like rendering the environment changes are reproducibility outcomes." }, { "end": 1097.3999999999999, "start": 1091.56, "text": " That sounds bad. But we can't fix that because no one understands how the stuff works who's" }, { "end": 1100.32, "start": 1097.3999999999999, "text": " involved the project and no one who I've reached out to after reaching out to a lot of" }, { "end": 1105.48, "start": 1100.32, "text": " people hasn't or should have enough to really contribute. And so for all these reasons," }, { "end": 1109.6799999999998, "start": 1105.48, "text": " something kind of had to be done. And this is, this is a mejoko specific. Like I've" }, { "end": 1113.08, "start": 1109.6799999999998, "text": " actually never used mejoko because I don't have a license. So I use pie bullet. But are" }, { "end": 1118.3999999999999, "start": 1113.08, "text": " these things specific to the two? Mejoko would not to the problem with the pie bullet or" }, { "end": 1122, "start": 1118.3999999999999, "text": " placements is that they didn't replace all the environments and the environments, um," }, { "end": 1125.36, "start": 1122, "text": " emitted certain features because the pie bullet guy couldn't figure out what they were." }, { "end": 1130, "start": 1125.36, "text": " Uh, yeah. I mean, they were functional course, but they weren't like profoundly well maintained" }, { "end": 1134.64, "start": 1130, "text": " the level of what you would hope in this. And I mean, you use them. You see that. So this" }, { "end": 1140.08, "start": 1134.64, "text": " is the problem with with the musical environments. And okay, well, that's a lot of problems. And" }, { "end": 1144.2, "start": 1140.08, "text": " so then the question comes, okay, so what are you going to do about these problems? Well," }, { "end": 1148.6, "start": 1144.2, "text": " the obvious answer is to reach out to the pie bullet, um, we're dropping environment" }, { "end": 1152.08, "start": 1148.6, "text": " guy in the pie ball author and talk to them and stuff. And okay, you know, we could go" }, { "end": 1156.68, "start": 1152.08, "text": " on and recreate the environments in pie bullet and or more accurately include the pie, fix" }, { "end": 1159.84, "start": 1156.68, "text": " the pie bullet environments and include them in gym. This is a potential outcome that could" }, { "end": 1164.84, "start": 1159.84, "text": " be taken. However, uh, it was point out there's a better option. That was to look at" }, { "end": 1171.24, "start": 1164.84, "text": " Brax. This is something that I hadn't fully understood of until I had gotten into gym." }, { "end": 1173.6799999999998, "start": 1171.24, "text": " So have you seen all the stuff with the hard work seller environments? Because this is" }, { "end": 1177.72, "start": 1173.6799999999998, "text": " the coolest thing ever. This is going to completely change the field. 
Yeah. I got to admit" }, { "end": 1183.4399999999998, "start": 1177.72, "text": " I, uh, I learned a lot about it since talking to you and, uh, and I'm definitely, so just" }, { "end": 1188.12, "start": 1183.4399999999998, "text": " for the audience, the issue here is that a seemingly just slow and, uh, how can we make" }, { "end": 1193.76, "start": 1188.12, "text": " them fast? Is that what, is it what we're getting in here? Not quite. So, so the issues" }, { "end": 1198.6, "start": 1193.76, "text": " this I am not to hardware person. So I'm not going to try to give like the super in depth" }, { "end": 1203.08, "start": 1198.6, "text": " explanation, but the abbreviate explanation to my knowledge is this when you are training," }, { "end": 1208.08, "start": 1203.08, "text": " um, like let's say that you're training an Atari environment on C, um, within environment" }, { "end": 1213.04, "start": 1208.08, "text": " running on CPU and the neural network running on GPU and you end up having these very," }, { "end": 1218.6, "start": 1213.04, "text": " very large slowdowns because you have to go since stuff from the GPU to the CPU and back" }, { "end": 1224.76, "start": 1218.6, "text": " through the PCI bus. And if you have an environment, even if it's, you know, less efficient to run" }, { "end": 1229.48, "start": 1224.76, "text": " on a GPU, but have run on the GPU and connect to the neural network directly, then you can" }, { "end": 1235.48, "start": 1229.48, "text": " get usually over and more than order of magnitude of, uh, performance improvement. So if, so" }, { "end": 1239.8799999999999, "start": 1235.48, "text": " let me get this straight. If your, if your, your SIM is entirely running in GPU and your" }, { "end": 1243.92, "start": 1239.88, "text": " RL code is entirely running in GPU, then the whole cycle between them, which is the main" }, { "end": 1248.68, "start": 1243.92, "text": " clock cycle of RL is all running in GPU. And then your pipe, and you're, you're, you're" }, { "end": 1252.68, "start": 1248.68, "text": " just not in Python land at all. And you soon have to worry about the bad performance of" }, { "end": 1257.72, "start": 1252.68, "text": " Python. Well, the issue isn't, the issue isn't Python, the issue is the PCI bus. Right." }, { "end": 1262.64, "start": 1257.72, "text": " So move, it's moving, moving bits around. Yeah. Yeah. And, and, and this back and forth" }, { "end": 1267.8000000000002, "start": 1262.64, "text": " time, when, when you remove this, even with code that isn't profoundly well done, this" }, { "end": 1273.3999999999999, "start": 1267.8, "text": " code ends up running tens to hundreds of times faster. And so, like, and so just unlike" }, { "end": 1278.24, "start": 1273.3999999999999, "text": " random GPU, like, and so, so, so Brax is a product of a Google brain that provides very" }, { "end": 1283.8799999999999, "start": 1278.24, "text": " well done, um, these very well done, um, physics, it's very old, a physics simulation" }, { "end": 1287.76, "start": 1283.8799999999999, "text": " that can run on, on a cylinder. So one thing to mention, are six hardware accelerators." }, { "end": 1293.1599999999999, "start": 1287.76, "text": " This isn't just GPUs. GPUs are with most people, but this can run on GPUs or other proprietary" }, { "end": 1297.92, "start": 1293.16, "text": " ML, um, ASICs that were sharp and seek them out as well. This isn't, this isn't a GPU" }, { "end": 1305.24, "start": 1297.92, "text": " only thing. But yeah. 
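To make the idea of keeping the whole loop on the accelerator concrete, here is a toy sketch in JAX: an invented environment and linear policy whose rollout compiles into a single on-device program, so nothing crosses the PCI bus between steps. The dynamics are made up purely for illustration.

```python
import jax
import jax.numpy as jnp

def env_step(state, action):
    # Toy dynamics written with jnp so they stay on the accelerator.
    next_state = state + 0.1 * action
    reward = -jnp.sum(next_state ** 2)
    return next_state, reward

def policy(params, state):
    return jnp.tanh(params @ state)

@jax.jit  # environment and policy compile together into one on-device program
def rollout(params, init_state):
    def body(carry, _):
        state, total = carry
        action = policy(params, state)
        next_state, reward = env_step(state, action)
        return (next_state, total + reward), None
    (_, total_reward), _ = jax.lax.scan(body, (init_state, 0.0), None, length=128)
    return total_reward

params = 0.5 * jnp.eye(4)
print(rollout(params, jnp.ones(4)))
```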
And so, so Brax runs, um, it runs more than 100 times faster, basically." }, { "end": 1309.5600000000002, "start": 1305.24, "text": " And so we can train a minutes and I mean, you, you understand the implications of, okay," }, { "end": 1313.3200000000002, "start": 1309.5600000000002, "text": " instead of taking something, training hours and now trains in minutes, this changes a" }, { "end": 1319.0400000000002, "start": 1313.3200000000002, "text": " lot. We're in a different regime, then, right? Yeah. Yeah. Because, you know, you know," }, { "end": 1321.52, "start": 1319.0400000000002, "text": " then, okay, well, if we're gonna, we're gonna, we're gonna, okay, cool, hyper-prabbit" }, { "end": 1326.56, "start": 1321.52, "text": " sweep, that becomes cheap. Or, you know, hey, I did something stupid. Okay, well, this" }, { "end": 1331.56, "start": 1326.56, "text": " isn't super expensive to go and re-run it. Or, you know, I want to train this 10 times," }, { "end": 1336.56, "start": 1331.56, "text": " there are 100 times or a thousand times and create actually academically legitimate research." }, { "end": 1341.24, "start": 1336.56, "text": " This makes that possible. And so one thing that I've been, been trying to do as much as I" }, { "end": 1347.4, "start": 1341.24, "text": " can towards is having hard work, accelerated environments, be that a fault for all gym" }, { "end": 1351, "start": 1347.4, "text": " environments. Because like you said, the change everything, and there's another example" }, { "end": 1356.8, "start": 1351, "text": " that changes, you know, if you're a student and, you know, you have an old, cheap GPU in" }, { "end": 1361.76, "start": 1356.8, "text": " your laptop and it's useless right now. Okay, well, this makes it actually, you're actually" }, { "end": 1367.8, "start": 1361.76, "text": " able to do things. And now, the one argument that you will see against GPU-excited environments" }, { "end": 1371.16, "start": 1367.8, "text": " is that, of course, developing for them right now with the tools available, it takes a lot" }, { "end": 1374.56, "start": 1371.16, "text": " more effort. That's one. And the other argument as well, people are just gonna use these very," }, { "end": 1377.16, "start": 1374.56, "text": " very large neural networks. They won't matter anymore. And this is the arguments" }, { "end": 1381.4, "start": 1377.16, "text": " and people who are spec'd a lot of them, they're off big of a, well, a person, I wish" }, { "end": 1385.8000000000002, "start": 1381.4, "text": " clarify. And it wasn't big enough half the company either. But regardless, for most" }, { "end": 1389.96, "start": 1385.8000000000002, "text": " things that people are doing in research in RL, there's still lots of new blue sky research" }, { "end": 1395.2, "start": 1389.96, "text": " to be done on simple, small environments. Because you know, we can't solve net hack. And" }, { "end": 1399.2, "start": 1395.2, "text": " you can go and you think, and have things be hard work, accelerated. And I think, and" }, { "end": 1402.92, "start": 1399.2, "text": " if I have anything to do with this, this is gonna be the next big push in RL experimentally." }, { "end": 1407.1200000000001, "start": 1402.92, "text": " And a lot of environments, porting them to be hard work solid and run this mass amount" }, { "end": 1410.3600000000001, "start": 1407.1200000000001, "text": " faster is a shockingly doable process." 
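For a sense of what this looks like with Brax itself, here is a rough usage sketch from my recollection of an early Brax release; treat the module path, function names, and environment name as approximate, since the API has moved around between versions.

```python
import jax
import jax.numpy as jnp
from brax import envs  # module path as in early Brax releases; may differ today

env = envs.create(env_name="ant")
reset_fn = jax.jit(env.reset)  # reset and step are pure JAX functions,
step_fn = jax.jit(env.step)    # so they can be jitted and run on GPU/TPU

state = reset_fn(rng=jax.random.PRNGKey(0))
for _ in range(10):
    action = jnp.zeros(env.action_size)  # placeholder policy
    state = step_fn(state, action)
```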
}, { "end": 1414.8000000000002, "start": 1410.3600000000001, "text": " Yeah, I want to understand the limits of that. Because I guess when I look at it at the" }, { "end": 1419.68, "start": 1414.8000000000002, "text": " different pieces that are running in the RL loop, there's the, let's say the entire simulator" }, { "end": 1424.64, "start": 1419.68, "text": " was, was hard work solid rated. Okay, that's great. That's what that, then the step," }, { "end": 1430.68, "start": 1424.64, "text": " the step function actually can be really fast. But then as soon as you're, and then on" }, { "end": 1435.48, "start": 1430.68, "text": " the, on your agent side, you know, your neural network can be in the GPU. So that's great." }, { "end": 1442.2, "start": 1435.48, "text": " So your agent, you know, comes out with its raw predictions, you know, very quickly. But" }, { "end": 1449.3600000000001, "start": 1442.2, "text": " then all the intermediate logic, which of which, you know, custom agents can have tons," }, { "end": 1453.8400000000001, "start": 1449.3600000000001, "text": " is still, there's a lot of, there's typically there could be a lot of experimentation," }, { "end": 1457.8, "start": 1453.8400000000001, "text": " experimentation of that layer, a lot of different types of logic, a lot of looking up in different" }, { "end": 1463.1599999999999, "start": 1457.8, "text": " buffers or whatever we're doing to make your agent a, a special snowflake. That stuff" }, { "end": 1467.8799999999999, "start": 1463.1599999999999, "text": " seems like it would be, it would be still hard to accelerate and to get, to get the entire" }, { "end": 1472.1599999999999, "start": 1467.8799999999999, "text": " loop in there. Is that right? Are we looking at it at, at, at, it's not that hard. Is it?" }, { "end": 1476.9199999999998, "start": 1472.1599999999999, "text": " Okay. You have to factor your code in a certain way. And there'll be, it's like some sort" }, { "end": 1482.6399999999999, "start": 1476.9199999999998, "text": " of like grand master tutorial guide to this, when this comes, uh, starts to move into production" }, { "end": 1486.8799999999999, "start": 1482.6399999999999, "text": " more. But no, all this requires the fact that you're in code in a certain way. It doesn't" }, { "end": 1493.96, "start": 1486.88, "text": " require any profoundly special logic. The one kind of all with this though is that, um," }, { "end": 1498.3600000000001, "start": 1493.96, "text": " in, is that hard work solid environments will no longer be returning numpy tensors. And" }, { "end": 1502.2, "start": 1498.3600000000001, "text": " so you end up having to have, and these rappers will be able to nid gym or people work" }, { "end": 1506.0800000000002, "start": 1502.2, "text": " in this, but you'll end up having to have, you know, like wrapping whatever, uh, GPU" }, { "end": 1510.68, "start": 1506.0800000000002, "text": " tensor, this is out putting to a torch tensor or tensor flow, whatever. And so that'll" }, { "end": 1513.24, "start": 1510.68, "text": " be a thing that people will have to handle. And there are only reason we didn't have" }, { "end": 1516.8400000000001, "start": 1513.24, "text": " to do that before. The um, I was everyone just, all the libraries just kind of supported" }, { "end": 1521.56, "start": 1516.84, "text": " numpy implicitly, whereas, you know, we can't pass a jacks to, um, to another environment." 
}, { "end": 1525.72, "start": 1521.56, "text": " And the other cool thing, and just briefly summarize, okay, I have an environment, I want" }, { "end": 1530.56, "start": 1525.72, "text": " to make it harder, it started how does this work? There are like four ways of doing it." }, { "end": 1536.72, "start": 1530.56, "text": " One is to go and, um, and use things like pie, couture, whatever, and use Python, binding" }, { "end": 1540.12, "start": 1536.72, "text": " school, couture and stuff like that. It ends up, um, putting a lot of overhead and it's" }, { "end": 1544.1599999999999, "start": 1540.12, "text": " hard to do. And the easiest one is even a numpy based environment, you can just, uh, ride" }, { "end": 1548.68, "start": 1544.16, "text": " in jacks. And while this, in some cases, it's being difficult, uh, for a lot of things" }, { "end": 1553.0800000000002, "start": 1548.68, "text": " is just a doable thing. Then if you have c-based environments, you can either modify the" }, { "end": 1557.96, "start": 1553.0800000000002, "text": " c-based environment, um, to essentially compile the vercuta, or you can, um, I am told" }, { "end": 1563.1200000000001, "start": 1557.96, "text": " I never done this, um, compile it to xla. And then it'll run on the, on any hard work" }, { "end": 1568.48, "start": 1563.1200000000001, "text": " accelerator that tensor flow supports. If, if the net hat people wanted to make, um," }, { "end": 1572.48, "start": 1568.48, "text": " net tack hardware accelerated, I'm just using this because it's an example of a very, very" }, { "end": 1577.68, "start": 1572.48, "text": " old c environment that's important. Unless there are properties regarding, uh, compilation" }, { "end": 1582.32, "start": 1577.68, "text": " to actually, uh, bike codes that I'm unaware of, this is a thing that they could just do" }, { "end": 1585.44, "start": 1582.32, "text": " if they wanted to. And I'm not calling them not specifically. I'm just again using them" }, { "end": 1591.68, "start": 1585.44, "text": " as an example of an important environment involving old code. And, and, and bracks and, and" }, { "end": 1595.8, "start": 1591.68, "text": " jacks are Google products, right? There, there's no dependency on tensor flow there. We can" }, { "end": 1603.9199999999998, "start": 1595.8, "text": " just do pie torch just as easily with all. Yeah. So, um, tensor flows lowest level of execution" }, { "end": 1612.36, "start": 1603.9199999999998, "text": " or lies on, uh, what's called xla. It'll just take and run arbitrary tensor e code on" }, { "end": 1618.32, "start": 1612.36, "text": " a tpu or gpu, or I believe on a applicable AMD GPUs as well. And probably other stuff" }, { "end": 1624.04, "start": 1618.32, "text": " in the future. And jacks is Python bindings for x-laden, init replicate numpy, though" }, { "end": 1628.04, "start": 1624.04, "text": " is me, though it is missing a couple features in the pie to have auditive support. So that's" }, { "end": 1635.72, "start": 1628.04, "text": " what jacks does. And then bracks is essentially just, um, a physics library written using this" }, { "end": 1641.1599999999999, "start": 1635.72, "text": " full alternative to not buy. And so, and so they're just to kind of close this off. 
What" }, { "end": 1647.08, "start": 1641.1599999999999, "text": " our goal is is is to create fizz 3d environments as an alternative to the musical environments" }, { "end": 1651.32, "start": 1647.08, "text": " in there right now, um, because you know, these ones like are maintained, it can be fixed" }, { "end": 1654.36, "start": 1651.32, "text": " and so on. That would be a period where people can go and experiment with them and stuff." }, { "end": 1658.76, "start": 1654.36, "text": " This will probably turn to ther so with the current rates and like that. Rebole will be" }, { "end": 1662.8799999999999, "start": 1658.76, "text": " able to go and play with them in gym. People can find every issues. There are arms for" }, { "end": 1667.8799999999999, "start": 1662.8799999999999, "text": " loading many. And eventually the musical environments will be pulled out additionally. The" }, { "end": 1672.1599999999999, "start": 1667.8799999999999, "text": " box student to the environments are going to be redone and bracks. Well, um, this is" }, { "end": 1679.4399999999998, "start": 1672.1599999999999, "text": " because the box should be environments depend on a fork of a fork of and un maintained" }, { "end": 1683.1200000000001, "start": 1679.44, "text": " bindings for the physics engine that they use. And I can't get the lining of my hand if" }, { "end": 1687.8400000000001, "start": 1683.1200000000001, "text": " it's trying pretty hard. And also this runs much faster in bracks. And so if you want to do," }, { "end": 1692.64, "start": 1687.8400000000001, "text": " you know, massive hyper parameter sweeps on those, when their hard work started into the" }, { "end": 1698.24, "start": 1692.64, "text": " hyper parameter sweeps in minutes, which is awesome. We'd be in a different regime completely," }, { "end": 1703.92, "start": 1698.24, "text": " which is awesome. So I, but I do want to understand, um, you're talking about just replacing" }, { "end": 1709.04, "start": 1703.92, "text": " the jokos with bracks. Do we expect that like, for example, agents trained on the jokos are going" }, { "end": 1715.1200000000001, "start": 1709.04, "text": " to perform the same in bracks or do we expect some degree of like symptom, some differences?" }, { "end": 1720.96, "start": 1715.76, "text": " There'll be some differences inherently. Um, they are by far the most accurate placement" }, { "end": 1725.92, "start": 1720.96, "text": " sever made for me, jico, but they are inherently different. And now that the music goes acquired," }, { "end": 1730.72, "start": 1725.92, "text": " this is, this is made the process dramatically easier, of course, because then now the" }, { "end": 1734.08, "start": 1730.72, "text": " source is more for a different comparison. But no, there are, there are, there are inherent" }, { "end": 1738.56, "start": 1734.08, "text": " differences. But these will at minimum beat them, medically closer to their jolm, musical" }, { "end": 1742.56, "start": 1738.56, "text": " environments and the pie bullet ones, like people like you use. And those have been more" }, { "end": 1746.4, "start": 1742.56, "text": " inadequate for everyone. Okay. And then I know there's, there's other simulators too. Like I" }, { "end": 1753.84, "start": 1746.4, "text": " spoke to, uh, joming shia at, uh, UBC robotics lab. And they use, um, pie bullet and also race" }, { "end": 1760.8, "start": 1753.84, "text": " sim and Isaac, Jim. 
And so there's, so do you imagine when this, this change is made to, uh, to" }, { "end": 1764.9599999999998, "start": 1760.8, "text": " move to bracks that, that all the other simulators are kind of going to be left behind in terms of" }, { "end": 1770.6399999999999, "start": 1764.9599999999998, "text": " performance, or can other sims, sims be moved over to bracks too? The underlying, uh, their underlying" }, { "end": 1777.04, "start": 1770.6399999999999, "text": " logic be running. I don't think this, yeah. I don't think that we are likely to see other" }, { "end": 1781.1999999999998, "start": 1777.04, "text": " environments, rewritten in jacks or other simulators. I think that, that changing other" }, { "end": 1786, "start": 1781.2, "text": " simulators to run on hardware, using whatever see their written in is certainly think that we might" }, { "end": 1789.92, "start": 1786, "text": " see one really desirable thing about bracks as well, talking to him as I think that it has, it is" }, { "end": 1796.16, "start": 1789.92, "text": " it is, it is, if nothing else tied for the longest likely maintained life from where we are now," }, { "end": 1799.44, "start": 1796.16, "text": " which is really good. And so that's another advantage of it. And then out of all the hardware" }, { "end": 1804, "start": 1799.44, "text": " accelerated, but once it's also good, and the other aspect is that bracks team is willing to go" }, { "end": 1811.92, "start": 1804, "text": " and work with us a tremendous amount on adding these environments and stuff. And the problem with," }, { "end": 1816.4, "start": 1811.92, "text": " with doing something where the maintainers of the library aren't, you know, super supporting you" }, { "end": 1820.4, "start": 1816.4, "text": " is that, you know, we have fairly small resources for creating these environments and scratches" }, { "end": 1825.2, "start": 1820.4, "text": " actually incredibly difficult. Um, you know, the, the pie bullet guy certainly knows what he," }, { "end": 1828.16, "start": 1825.2, "text": " when I say the pie bullet guy, I mean, the guy who created that pie bullet replacement for" }, { "end": 1832.32, "start": 1828.16, "text": " the musica really, really knows what he's doing. He had a very hard time. So having people who are" }, { "end": 1839.36, "start": 1832.32, "text": " willing to donate, you know, stupid amounts of time and really lead money to this is also helpful." }, { "end": 1845.2, "start": 1839.36, "text": " And you mentioned the jumpy rappers. Can you fill us in on what, what was that about?" }, { "end": 1849.6, "start": 1845.2, "text": " Yeah. So switching environments become hard work. Sardar is going to have this really weird" }, { "end": 1855.76, "start": 1849.6, "text": " property right now. In environments that turn numpy, I tensors, arrays to neural networks to use" }, { "end": 1860.1599999999999, "start": 1855.76, "text": " and ever, and every different, deep learning library just natively interrupt those. However," }, { "end": 1864.24, "start": 1860.16, "text": " you're going to be your turn data structures other than numpy because numpy doesn't have natively" }, { "end": 1869.1200000000001, "start": 1864.24, "text": " around the GPU. 
And so you have to have the rappers written the way that they for example can handle" }, { "end": 1874.24, "start": 1869.1200000000001, "text": " both jacks and numpy environments and jumpy as a way of doing this and we need to do rapid" }, { "end": 1879.6000000000001, "start": 1874.24, "text": " ride anyways. So, okay. So we're doing rapid ride and riding them in new library. So that's with" }, { "end": 1883.28, "start": 1879.6000000000001, "text": " the story of jumpy rappers and these rappers are going to have to have, you know, like a jacks" }, { "end": 1888.3200000000002, "start": 1883.28, "text": " environment to pie torch learning code wrapper and it won't cause real performance. And you know," }, { "end": 1892.6399999999999, "start": 1888.32, "text": " there are examples of the code that works, but this is just an extra thing that you'll have to do" }, { "end": 1897.36, "start": 1892.6399999999999, "text": " that most people aren't used to. Anything else you want to tell us about the 1.0 roadmap?" }, { "end": 1900.8799999999999, "start": 1897.36, "text": " So there's a couple other cool things they're hopefully going to do. One thing is that" }, { "end": 1905.52, "start": 1900.8799999999999, "text": " for the entire duration since I've been in charge of Jim, we have been working on a new fully" }, { "end": 1910, "start": 1905.52, "text": " featured really nice website to see what it's going to roughly look like. It's you can go to" }, { "end": 1913.4399999999998, "start": 1910, "text": " pettings.mell which is the current pettings documentation website. It's going to be based off that." }, { "end": 1918.4, "start": 1913.44, "text": " That's going to be very, very comprehensive. So this is something that we're actively working on." }, { "end": 1923.28, "start": 1918.4, "text": " Another cool thing that hopefully will happen is that the widely used Jim and eager environment" }, { "end": 1929.04, "start": 1923.28, "text": " is going to be put into the toy tech environment. Hopefully the pending so far. This is the current" }, { "end": 1934.24, "start": 1929.04, "text": " plan that's been publicly discussed. And then the one other thing the board mentioning is we are" }, { "end": 1938.3200000000002, "start": 1934.24, "text": " making some minor changes to render API that you can look up. This is another one where the" }, { "end": 1944.3999999999999, "start": 1938.32, "text": " breaking changes aren't going to require much work to sort out. And just to give people a sense" }, { "end": 1951.28, "start": 1944.3999999999999, "text": " of what's happening with the 1.0 release. In general, the overarching plan is to make all the" }, { "end": 1955.12, "start": 1951.28, "text": " breaking changes via flag and stuff while retaining backwards compatibility. I'm add all new" }, { "end": 1959.04, "start": 1955.12, "text": " environments to the 1.0 release all the deprecated environments, all the backwards compatibility stuff" }, { "end": 1964.1599999999999, "start": 1959.04, "text": " will be your move because it will be a 1.0 release. And that'll have a stable set of environments," }, { "end": 1969.1200000000001, "start": 1964.16, "text": " simple new API stable set of all new rappers and Gempie and all these things. The only thing big" }, { "end": 1973.2, "start": 1969.1200000000001, "text": " things that we want to deal with after the 1.0 release is looking into the vector API. A lot of" }, { "end": 1978.3200000000002, "start": 1973.2, "text": " people don't like it for very good reasons. 
And that's a very important thing to address." }, { "end": 1982.8000000000002, "start": 1978.3200000000002, "text": " But just from the receptive of, you know, I'm one person who's small number of people were" }, { "end": 1987.76, "start": 1982.8000000000002, "text": " essentially pushing dealing with that one aspect after the 1.0 release when all the stuff with" }, { "end": 1995.92, "start": 1987.76, "text": " a query API solidified. Got it. Okay. Sounds like you have your work cut out for you. Yeah." }, { "end": 2004.56, "start": 1995.92, "text": " 2022. So do you want to move on to petting zoo? Yeah. Yeah. I'm super excited about this project." }, { "end": 2011.04, "start": 2004.56, "text": " So you have the petting zoo environment. I see that you have a paper petting on petting zoo" }, { "end": 2015.84, "start": 2011.04, "text": " on scholar and archive. Yeah. It was accepted in NERPS. Excellent. Can you tell us about petting zoo?" }, { "end": 2020, "start": 2015.84, "text": " Yeah. So this is how I got put in charge of Gempie and nut shells that I created the" }, { "end": 2023.84, "start": 2020, "text": " most similar thing to it. I created a library that was intended to be Gem for multi agent." }, { "end": 2028.56, "start": 2023.84, "text": " All right. Many people try to create multi agent environments with a Gem API except there's no" }, { "end": 2033.52, "start": 2028.56, "text": " indication of explicit aid and surely things. And it really sucked. And it's hard to use." }, { "end": 2037.12, "start": 2033.52, "text": " So people had all these, you know, header on this third party APIs. And then it's like the" }, { "end": 2044.24, "start": 2038.08, "text": " the dark times of RL before Gem. And okay. Well, I want to go and reproduce this work in this" }, { "end": 2049.2, "start": 2044.24, "text": " paper. Okay. Cool. Now, well, the APIs are different. So now you have to do a lot of software" }, { "end": 2054.72, "start": 2049.2, "text": " engineering. And whenever doing this engineering, you infuse your exponential causes for" }, { "end": 2058.72, "start": 2054.72, "text": " responsibility or all the stuff. That's not good for anyone. And so the intention of petting zoo" }, { "end": 2064.48, "start": 2058.72, "text": " was to be Gem from all day. And RL. This has arguably been in some ways a much harder job than Gem." }, { "end": 2070.32, "start": 2065.76, "text": " I mean, Gem was hard and that was the first. But like having a universal multi agent API is" }, { "end": 2074.96, "start": 2070.32, "text": " much harder than having a universal single agent API is what I mean. We end up going through a lot" }, { "end": 2081.36, "start": 2074.96, "text": " of different design iterations. There's still a fairly small number of more minor breaking changes," }, { "end": 2086.48, "start": 2081.36, "text": " much more minor than stuff in Gem that are planned for the petting zoo API. If you're listening to" }, { "end": 2090.56, "start": 2086.48, "text": " this and you're doing anything that's like normal with petting zoo, they're like you should be" }, { "end": 2097.04, "start": 2090.56, "text": " doing. You'll be fine. So one thing you encounter when you get to multiplayer RL or multiplayer games" }, { "end": 2103.92, "start": 2097.04, "text": " right away is two different paradigms in terms of some games agents take turns making move. And in" }, { "end": 2109.12, "start": 2103.92, "text": " some games, all agents make their move at once. And then it swaps over the environment. 
So" }, { "end": 2113.04, "start": 2109.12, "text": " and I think you have a nice way to handle this in petting zoo. How do you handle that?" }, { "end": 2117.84, "start": 2113.04, "text": " So imagine a game where all agents step together once. This is like rock, paper, scissors," }, { "end": 2121.52, "start": 2117.84, "text": " right? Where everyone acts at once. And then imagine a game like chess where one player acts and" }, { "end": 2125.36, "start": 2121.52, "text": " the other player acts and you go back and forth, right? Well, so there's sort of two ways to" }, { "end": 2132.32, "start": 2125.36, "text": " handle this problem, loosely speaking. One way to handle this problem would be to have an API that says," }, { "end": 2135.76, "start": 2132.32, "text": " okay, well, you can step for one agent at a time and step for the next agent so on. And so you just" }, { "end": 2139.84, "start": 2135.76, "text": " know cycle through everything, right? And you can do that. And it's cool. And this is a" }, { "end": 2145.04, "start": 2139.84, "text": " general and desirable thing to do. And what happens when you're stepping through each agent at every" }, { "end": 2150.88, "start": 2145.04, "text": " time is that, you know, this can work for chess and this doesn't suck. This isn't weird for" }, { "end": 2153.84, "start": 2150.88, "text": " all paper scissors where everyone's acting at the same time because you just, you know," }, { "end": 2159.04, "start": 2153.84, "text": " the environment just cues them essentially. Whereas if you make your default API, the alternative," }, { "end": 2164.6400000000003, "start": 2159.04, "text": " of everyone's stepping together, okay, well, what happens if you're playing chess, right? You know," }, { "end": 2170.96, "start": 2164.6400000000003, "text": " you essentially have to use a dummy action for one agent at a time. This gets problematic." }, { "end": 2178.32, "start": 2171.6000000000004, "text": " And so this is the intuition for why I believe that this sort of step-based API is a better" }, { "end": 2184.0800000000004, "start": 2178.32, "text": " default that I would say there's course still many uses for APIs that focus on the simultaneous case." }, { "end": 2188.8, "start": 2184.0800000000004, "text": " The other advantages API is that a fault. And this is essentially the argument that paper makes" }, { "end": 2196.0800000000004, "start": 2188.8, "text": " is that this mental model of each agent stepping at once. So for example, if you're in a multi-agent" }, { "end": 2200.8, "start": 2196.0800000000004, "text": " environment, each agent takes an action they step in the way that that action step and the way" }, { "end": 2204.96, "start": 2200.8, "text": " they go through this and resolve the stuff is on a code level is that they essentially run through" }, { "end": 2209.04, "start": 2204.96, "text": " a loop of all the agents, right? Unless you're doing some wild parallelization stuff," }, { "end": 2213.6, "start": 2210, "text": " there's a four loop of all the agents and each agent updates an environment sequentially on a" }, { "end": 2219.2, "start": 2213.6, "text": " single core. And but everyone models these as if, you know, they all have said it's not a kidney. So," }, { "end": 2226.2400000000002, "start": 2220.88, "text": " so this sounds like a weird mental model in implementation discrepancy. 
Okay, so can you find a" }, { "end": 2231.44, "start": 2226.2400000000002, "text": " real world example of an important wild-ears environment that has a bug caught by the discrepancy?" }, { "end": 2236.4, "start": 2231.44, "text": " Yes, we found a couple and we did and the paper goes into the bug and the environment. This was the" }, { "end": 2242.7200000000003, "start": 2236.4, "text": " open source implementation of the social sequential limit games. The bugs were all the bugs we found" }, { "end": 2247.52, "start": 2242.7200000000003, "text": " are actually recently patched by an undergrad who's been working for me on a different project using" }, { "end": 2251.92, "start": 2247.52, "text": " environments. Essentially, it was right. But essentially this causes, though, is race condition" }, { "end": 2256.4, "start": 2251.92, "text": " because you know, you have a journal logic depending, you have like, you know, resolutions depending" }, { "end": 2261.6800000000003, "start": 2256.4, "text": " on this internal logic, you know, okay, well, if you're thinking about the stuff as if it isn't having" }, { "end": 2268, "start": 2261.6800000000003, "text": " that just from a mental model, this is weird to me that like and the effect of mental model people" }, { "end": 2272.88, "start": 2268, "text": " use for the most for this is is the positive modellability APIs around this and all this stuff." }, { "end": 2278.32, "start": 2272.88, "text": " It's sensor parts observable. So, actually, games like Google us. And what happens with this model," }, { "end": 2282.4, "start": 2278.32, "text": " this is aligned with with real world scenarios, another problem with this model is that it doesn't give" }, { "end": 2287.52, "start": 2282.4, "text": " you access information that you should have. So, for example, if you go through this through a cycle," }, { "end": 2291.36, "start": 2287.52, "text": " if you know how it not be works, imagine you know, some sort of game where everyone sits in a" }, { "end": 2296.48, "start": 2291.36, "text": " ring of five people, they'll take individual turns, right? And so some fraction of your turn," }, { "end": 2301.2000000000003, "start": 2296.48, "text": " and many games, well, it's sorry, some fraction of your award will be attributable to different" }, { "end": 2305.6, "start": 2301.2000000000003, "text": " players' turns, right? And this might just from an understanding perspective, or if not learning" }, { "end": 2310, "start": 2305.6, "text": " perspective, be something you might want, you know, want to be aware of and want to have access" }, { "end": 2314.88, "start": 2310, "text": " to that information for, right? Well, what happens is that if you take the pause G approach," }, { "end": 2320.08, "start": 2314.88, "text": " all rewards are smashed together. So, there's no possibility of attribution for like mental model" }, { "end": 2324.32, "start": 2320.08, "text": " leading debugging purposes or learning purposes, though this is not what leaves for learning. And" }, { "end": 2328, "start": 2324.32, "text": " that also bothers me. So, this is, and so what the Pentings of Paper does is I've been introducing" }, { "end": 2333.84, "start": 2328, "text": " the Pentings library itself is that it puts forth this formally defined model of AC games," }, { "end": 2338.72, "start": 2333.84, "text": " as well as a mental model, which is this idea of sequentially stepping games. 
We showed their" }, { "end": 2343.2, "start": 2338.72, "text": " provably equivalent to partial to partial observable games. The alternative, let me go through two" }, { "end": 2351.68, "start": 2343.2, "text": " case studies of how the sort of unintuitive aspects of pause Gs have caused issues in real environments." }, { "end": 2357.8399999999997, "start": 2351.68, "text": " So pause Gs, like you mentioned, is partially observable stochastic games? Yep. It's the most" }, { "end": 2366.3999999999996, "start": 2357.8399999999997, "text": " general, it's think of, think of a multi agent version of a partial, a partial observable MDP." }, { "end": 2371.6800000000003, "start": 2366.4, "text": " There's about sugarmult agent models that you'll see used. Pause G is the most general and" }, { "end": 2376.1600000000003, "start": 2371.6800000000003, "text": " commonly used one outside of EFGs, but that's a kind of different thing. Yeah. As a summary," }, { "end": 2380.2400000000002, "start": 2376.1600000000003, "text": " if people want to Google pause Gs, I would genuinely recommend to look at the literature" }, { "end": 2385.52, "start": 2380.2400000000002, "text": " view on this in the Pentings of Paper. A lot of the, if you try to Google form a history of this" }, { "end": 2389.6, "start": 2385.52, "text": " stuff, what you'll see is bad in the Pentings of Paper isn't, it's not, it's some sort of gift" }, { "end": 2396.56, "start": 2389.6, "text": " from God or anything, but like it's usable and, and a lot of the like text bookish sources on" }, { "end": 2400.7999999999997, "start": 2396.56, "text": " this are troubling if anyone's interest. And so this was the idea with the game model. This is" }, { "end": 2405.8399999999997, "start": 2400.7999999999997, "text": " what we did. And the reason that we implemented both the pause Gs, style and ACMs based API in" }, { "end": 2412, "start": 2405.8399999999997, "text": " Pentings of Paper wondering, there's one really big problem with the AC API and that's performance" }, { "end": 2417.6, "start": 2412, "text": " in these games where agents can set simultaneously. So imagine, and this is a real case, I'm working on a" }, { "end": 2424.88, "start": 2417.6, "text": " not yet announced project that involves training thousands of agents in the Pentings environment." }, { "end": 2431.7599999999998, "start": 2425.7599999999998, "text": " Okay, well, if you're doing that with a, with the AC game API, this is not good. You know," }, { "end": 2436.16, "start": 2431.7599999999998, "text": " you have to make a call for each time you can't have no networks be inferencing in parallel" }, { "end": 2439.92, "start": 2436.16, "text": " on the GPU. It makes things much slower. And so for that first thing, I have this standard" }, { "end": 2444.72, "start": 2439.92, "text": " Poggy parallel API is important. And so Pentings who does support both, it doesn't support the parallel" }, { "end": 2450.24, "start": 2444.72, "text": " API for sequential environments because that would be problematic and does treat the AC and API" }, { "end": 2457.2799999999997, "start": 2450.24, "text": " as the default, but it does support both. Cool. Okay. And then. So I personally competed in the" }, { "end": 2462.8799999999997, "start": 2457.2799999999997, "text": " Europe's 2018 Palmerman multi agent competition, which was my introduction to a lot of these," }, { "end": 2468.3199999999997, "start": 2462.8799999999997, "text": " these multi agent RL issues. 
And let's for example, one thing that I noticed is like sometimes in" }, { "end": 2475.92, "start": 2468.32, "text": " Marl, you want agents to have individual rewards. And then sometimes you want to share the credit" }, { "end": 2481.2000000000003, "start": 2475.92, "text": " for the rewards and like as a team reward, or sometimes sometimes I wanted to, like I wanted to" }, { "end": 2486.4, "start": 2481.2000000000003, "text": " balance between those two things, things sometimes to make agents more selfish or more team focused." }, { "end": 2492.56, "start": 2487.6000000000004, "text": " And I had to do all this custom stuff to make that work. But does how does reward work in" }, { "end": 2500.48, "start": 2492.56, "text": " petting zoo? And how would you like, I guess the notion of like teams or competing teams," }, { "end": 2506.96, "start": 2500.48, "text": " is that like an orthogonal to what petting zoo is doing? Kind of. Penting it doesn't formalize" }, { "end": 2512.32, "start": 2506.96, "text": " notion of teams, right? So in multi agent RL, every single agent is trying to maximize" }, { "end": 2518.72, "start": 2512.32, "text": " zone this kind of future expected kind of future award, right? Whether or not this process is" }, { "end": 2523.68, "start": 2518.72, "text": " cooperative or competitive or mix some depends on the rewards present in the environment." }, { "end": 2528.8799999999997, "start": 2524.72, "text": " And this is a similar case with with regards to teams. And so petting who doesn't formally delineate" }, { "end": 2533.52, "start": 2528.8799999999997, "text": " any of this stuff, because this is just a side effect in the environment, so to speak. In petting zoo" }, { "end": 2538.56, "start": 2533.52, "text": " agents can be named. If you really need to do team adding stuff, you can use agent names and you" }, { "end": 2542.56, "start": 2538.56, "text": " know have the first part of the name be you know, like team blue on a score one that's on a score" }, { "end": 2548.96, "start": 2542.56, "text": " two and so on for different agents. But we don't have a formal notion of support for teams. As far as" }, { "end": 2554.48, "start": 2548.96, "text": " reward goes, the petting zoo on a mail page has a really good thorough explanation, but the brief" }, { "end": 2559.68, "start": 2554.48, "text": " version that yeah, for any agent for any time step that there were occurred at, you can go and pull" }, { "end": 2563.7599999999998, "start": 2559.68, "text": " the reward to look at exactly or you can get the key mode of reward over less cycle of agents that" }, { "end": 2569.36, "start": 2563.7599999999998, "text": " acted first. And feed that to learning if you don't want to go and deal with dictionaries of rewards." }, { "end": 2575.2000000000003, "start": 2569.36, "text": " Okay, and then petting zoo is more focused on the MDP paradigm more than the planning paradigm." }, { "end": 2581.28, "start": 2575.2000000000003, "text": " Is that right? Yeah, so petting zoo is focused on this classical state action pair, the stuff in RL," }, { "end": 2586.48, "start": 2581.28, "text": " the most different thing from petting zoo that's widely used as open spiel. They have things like" }, { "end": 2590.48, "start": 2586.48, "text": " a classical backtracking. The issue with backtracking that you come in these classes of" }, { "end": 2595.6, "start": 2590.48, "text": " galalcular games is that this can't be general supported for a lot of different environments. 
It's" }, { "end": 2600.3199999999997, "start": 2595.6, "text": " messy due from computational and API and environment design perspective. Because it petting zoo" }, { "end": 2605.6, "start": 2600.3199999999997, "text": " environment support pickling, well, I was maybe not a third party ones do, but all first party ones do." }, { "end": 2608.72, "start": 2605.6, "text": " You still can do the backtracking, but it's less computationally efficient than having native" }, { "end": 2613.2799999999997, "start": 2608.72, "text": " support to the open spiel. And I needed this because it's not a common use feature for currently" }, { "end": 2620, "start": 2613.2799999999997, "text": " popular deep RL stuff. First stuff outside of specific classic games like open spiel targets. So" }, { "end": 2625.2, "start": 2620, "text": " it's a specialty feature for more specific things. Yeah, one of the cool thing to mention is" }, { "end": 2629.12, "start": 2625.2, "text": " petting zoo is just getting to see the adoption cycle of it because when petting zoo was released," }, { "end": 2634.96, "start": 2629.12, "text": " I had almost no professional credibility. I was very new to the field. I transferred from physics." }, { "end": 2639.3599999999997, "start": 2634.96, "text": " I didn't really know anyone and I ended up talking about it does in different grad students" }, { "end": 2643.6, "start": 2639.3599999999997, "text": " under grads and working on it for six to 12 months. And it's and it's just been really, really" }, { "end": 2646.96, "start": 2643.6, "text": " cool to see how pettings you've grown from this almost obscure library to this thing that like" }, { "end": 2651.9199999999996, "start": 2646.96, "text": " all the different major multi agent RL libraries use. And all the ones that don't use it are currently" }, { "end": 2656, "start": 2651.92, "text": " actively working on supporting stuff for open spiel, which is doing something different that we're" }, { "end": 2659.6, "start": 2656, "text": " trying to. And so that just been a really cool thing to like just to see how as a grown to see," }, { "end": 2664.8, "start": 2659.6, "text": " there's like a brand new library that I created not that long ago. And there's something like 30 plus" }, { "end": 2668, "start": 2664.8, "text": " different third party environments for petting zoo now. I think more than that, there's a list of" }, { "end": 2672.7200000000003, "start": 2668, "text": " them on the documentation website. And you know, to see that this just like this huge wave integration" }, { "end": 2676.64, "start": 2672.7200000000003, "text": " like if people don't know what the RL discord is, it's what it sounds like. It's really cool." }, { "end": 2680.7200000000003, "start": 2676.64, "text": " Cool. You should Google it and check it out. I'm in there a lot of the open source RL people are." }, { "end": 2684.64, "start": 2680.72, "text": " And like the multi agent RL channel on that is essentially just like petting, it's like stack" }, { "end": 2689.12, "start": 2684.64, "text": " overflow for petting zoo now a lot of the time, which is really, really worthwhile to see just from" }, { "end": 2694, "start": 2689.12, "text": " a personal level that people are using your stuff this much. Sweet. Okay. I look forward to checking" }, { "end": 2698.8799999999997, "start": 2694, "text": " that out more. And it's been on my list for a while, but now it's moved up near the top now. Thanks." 
}, { "end": 2704.9599999999996, "start": 2699.9199999999996, "text": " Thanks for for giving the community petting zoo. I think this is we're just I'm sure it's just" }, { "end": 2710.48, "start": 2704.9599999999996, "text": " the beginning of the epic story of where petting zoo is going to go. Hopefully. So let's move on to" }, { "end": 2715.52, "start": 2710.48, "text": " super suit. Do you want to tell us about super suit? Yeah. So super suits in the process of being" }, { "end": 2722.96, "start": 2715.52, "text": " killed. So the story of super suit is that petting zoo needed, you know, rappers, right? And" }, { "end": 2727.2, "start": 2722.96, "text": " Jim's built in rappers weren't very good. They still aren't literally almost literally full" }, { "end": 2730.8, "start": 2727.2, "text": " of regret of them. And this is partially motivated by jumping out of the factors. But you know, okay," }, { "end": 2734.32, "start": 2730.8, "text": " well, if we're going to do petting zoo rappers that are like comprehensive and good stuff, why not" }, { "end": 2738.32, "start": 2734.32, "text": " do Jim rappers and put it in its own pack? Don't super soon have the rappers back a couple things" }, { "end": 2742.88, "start": 2738.32, "text": " that petting zoo did that were it may be a tiny bit innovative. Like they weren't profound or" }, { "end": 2748.1600000000003, "start": 2742.88, "text": " anything. One is that we were the first to add versioning for rappers the way the way that" }, { "end": 2751.28, "start": 2748.1600000000003, "text": " environments are versioned Jim and petting zoo. I think that's important because it's, you know," }, { "end": 2756.48, "start": 2751.28, "text": " rappers impact reproducibility just as much as this environment aspects to. And then the other" }, { "end": 2762.2400000000002, "start": 2756.48, "text": " aspect is to have a bunch of, you know, rappers that can be, you know, a bunch of rappers that do" }, { "end": 2765.28, "start": 2762.2400000000002, "text": " small things that people can just grab from, you know, if you know, if you're using musical," }, { "end": 2769.84, "start": 2765.28, "text": " most people have to write their own little code snippet to turn the the float 84 data type" }, { "end": 2775.52, "start": 2769.84, "text": " stuff as a turn and to float 32, it to float 32 to be able to pass a general network code or" }, { "end": 2780.8, "start": 2775.52, "text": " float 16 depending. What was cool about SuperSuit is that, you know, it just had a bunch of these" }, { "end": 2785.44, "start": 2780.8, "text": " rappers you can go and grab from for a variety of reasons, in part because of jump high and all these" }, { "end": 2792, "start": 2785.44, "text": " other things, SuperSuits being killed and broken up into Jim dot rappers and the new version of that." }, { "end": 2797.04, "start": 2792, "text": " And the petting zoo that will be moved in the petting zoo dot rappers. Okay. And then you said" }, { "end": 2804.08, "start": 2797.04, "text": " you wanted to speak about scientifically legitimate experiments in RL. There's this common problem in" }, { "end": 2810.32, "start": 2804.08, "text": " RL where, where, you know, you'll go and you'll read a paper in RL and you'll be able to tell very" }, { "end": 2816.08, "start": 2810.32, "text": " quickly that like reading this paper as a good use of time and like, okay, why does anyone care?" }, { "end": 2820.4, "start": 2816.08, "text": " Like this is multiple things went wrong here. 
And just to pick, without picking on these" }, { "end": 2826.8, "start": 2820.4, "text": " specific papers, one example of where a lot of things, where something's very serious and wrong is" }, { "end": 2831.2000000000003, "start": 2826.8, "text": " if you try to make claims about methods and stuff like this without doing hyper parameters," }, { "end": 2836, "start": 2831.2000000000003, "text": " searches or with only training once or without using the comparison methods to describe like Mark" }, { "end": 2841.76, "start": 2836, "text": " Belmer and RLiable or RLiable, I don't know how I pronounce it, other similar works. And we see" }, { "end": 2846.7200000000003, "start": 2841.76, "text": " paper like this really, really often they get accepted to, you know, hyperfodded venues by, in" }, { "end": 2851.4399999999996, "start": 2846.72, "text": " many cases, people who aren't profoundly experienced in RL or viewers, I would hypothesize," }, { "end": 2854.9599999999996, "start": 2851.4399999999996, "text": " why can't confirm this personally. This is just something that really, really bothers me that I" }, { "end": 2859.68, "start": 2854.9599999999996, "text": " think that a lot of attention needs to be put, it needs to be placed on. And, you know, it's one" }, { "end": 2865.2799999999997, "start": 2859.68, "text": " thing if people want to go and put out work because the problem is that, you know, if you want to" }, { "end": 2870.72, "start": 2865.2799999999997, "text": " claim that your method has its performance, so on, unless you do think like hyper parameter searches," }, { "end": 2874.8799999999997, "start": 2870.72, "text": " testing on a diverse set of environments, doing accurate, accurate, specific comparisons," }, { "end": 2878.2400000000002, "start": 2874.88, "text": " that you can't actually say that you've done anything like, like, you've done all this work," }, { "end": 2883.28, "start": 2878.2400000000002, "text": " and you've contributed no new knowledge to the space. And like, you know, you have about 10,000" }, { "end": 2886.48, "start": 2883.28, "text": " working days in average person's life, and this is what you're spending part of it on. And I don't," }, { "end": 2889.52, "start": 2886.48, "text": " I mean, I mean, there's obviously no motivation to publish a pair of sure, but like," }, { "end": 2893.76, "start": 2889.52, "text": " there are ways to do this where you're making actually scientifically valid claims about your," }, { "end": 2897.84, "start": 2893.76, "text": " about your working can contribute, can contribute actual knowledge. And this is something that's not" }, { "end": 2901.6800000000003, "start": 2897.84, "text": " widely done. I hope the Megan E. Serum is cheaper to run with hard work, so we'll improve this, but this" }, { "end": 2908, "start": 2901.68, "text": " is more than anything, requires a systemic change in terms of the culture and review process," }, { "end": 2913.04, "start": 2908, "text": " so on for papers. 
And but then the process beyond you, any contributions to the field or" }, { "end": 2917.2, "start": 2913.04, "text": " deference field or anything like this is that, you know, the people who are working on this," }, { "end": 2920.72, "start": 2917.2, "text": " who have spent these large amounts of time creating these papers and even the peer-reviewed venues," }, { "end": 2923.8399999999997, "start": 2920.72, "text": " and then don't go through the process of all this additional work to be able to make" }, { "end": 2928.64, "start": 2923.8399999999997, "text": " sign if it claims out their works that can actually be such that they can actually like contribute," }, { "end": 2933.92, "start": 2928.64, "text": " like knowledge that is cracked in the game, at least can you, that you can show is cracked." }, { "end": 2939.52, "start": 2934.56, "text": " Like, you're spending a fraction of your life doing this and it's a very fine thing. And it," }, { "end": 2944.4, "start": 2940.72, "text": " I can't run my head around around this. And I think there's something that really needs to be" }, { "end": 2949.52, "start": 2944.4, "text": " addressed. I think that two people have done a really good job in this space are Mark Belmer and" }, { "end": 2954.8799999999997, "start": 2949.52, "text": " Phil Thomas in terms of, you know, scientifically legit, and they're reinforced morning claims." }, { "end": 2959.76, "start": 2954.88, "text": " But in a lot of cases, in a lot of profile, high profile publications, this has been shown in" }, { "end": 2963.44, "start": 2959.76, "text": " the literature even, peer-reviewed literature, not that that means that much in this field," }, { "end": 2968.08, "start": 2963.44, "text": " for the claims about the spirited already of methods to be false. And everyone wants to go and" }, { "end": 2971.76, "start": 2968.08, "text": " this is the weird thing that I need, because for some reason I get approached by a ton of grad" }, { "end": 2977.28, "start": 2971.76, "text": " students wanting to do things with me, they all want to go and do method papers. They want to create" }, { "end": 2982, "start": 2977.28, "text": " the new PPO or the new thing like this. And the problem with that is, I mean, this has been done" }, { "end": 2987.2, "start": 2982, "text": " literally hundreds of times. And none of them have had the experimental support to be widely" }, { "end": 2992.4, "start": 2987.2, "text": " used in their claims. Maybe the one exception to this that's seen a little bit of, of many" }, { "end": 2999.76, "start": 2992.4, "text": " fleece is beyond the like one to see what we find three is Diane and PPG. You'll see use from time" }, { "end": 3004.56, "start": 2999.76, "text": " of time, but beyond those, you know, there's been no progress in this fairly important thing," }, { "end": 3010.48, "start": 3004.56, "text": " because primarily due to due to the issue of credible benchmarking. And so this is my" }, { "end": 3016.96, "start": 3010.48, "text": " man, Yosek, Clouds, monologue about that. So this, this sounds related to, you know, Rashab Agarwal" }, { "end": 3022.4, "start": 3016.96, "text": " was first author on a paper that won outstanding paper at Neurup's 2021 that's deep reinforcement" }, { "end": 3028.64, "start": 3022.4, "text": " learning at the edge of the statistical precipice. This is the one where he's saying that this" }, { "end": 3034.64, "start": 3028.64, "text": " to basically statistical methods used in RL comparisons are are pretty bad. 
And we should do a better" }, { "end": 3042.56, "start": 3034.64, "text": " job of benchmarking using basically commonly understood statistics. And, and I mean, one of the" }, { "end": 3048.48, "start": 3042.56, "text": " things here was that does usually the sample size is so small, you know, showing how, how can we" }, { "end": 3056.24, "start": 3048.48, "text": " draw conclusions with, you know, maybe one or a handful of runs. So it sounds related. It's not" }, { "end": 3061.04, "start": 3056.24, "text": " the entire issue, but it sounds related to the same. It's the same issue. Yeah. And, and you mentioned" }, { "end": 3066.16, "start": 3061.04, "text": " Mark Bellmere. He's also a co author on that co author on that paper. Yeah. One of the things" }, { "end": 3072.4, "start": 3066.16, "text": " that's related to this, if anyone's interested in some sort of topic for research, if you want to" }, { "end": 3075.36, "start": 3072.4, "text": " do this, you can email me and tell me tell you everything that I know because this really needs to" }, { "end": 3085.52, "start": 3075.36, "text": " be done is that there are a large amount of implementation specific tricks that impact how" }, { "end": 3091.28, "start": 3085.52, "text": " black box optimizers are used for automated hyper parameter shooting for RL. And I say this" }, { "end": 3097.2, "start": 3091.28, "text": " is someone who I've written, at least as far as I can publicly tell if I've written some of the" }, { "end": 3101.44, "start": 3097.2, "text": " largest amount of automated hyper parameter shooting code for deep RL in the world for a project" }, { "end": 3105.84, "start": 3101.44, "text": " that I'm trying to work on where the entire thing is like mass scale hyper parameter tuning with" }, { "end": 3110.32, "start": 3105.84, "text": " hundreds of GPUs. And this has been studied and coming back to, you know, if you want to make" }, { "end": 3114.08, "start": 3110.32, "text": " your count a claim of how good your method is, and you kind of have to do hyper parameter tuning" }, { "end": 3119.2799999999997, "start": 3114.08, "text": " in an automated manner. And if that's the case, well, presumably there should be a whole literature" }, { "end": 3125.52, "start": 3119.2799999999997, "text": " on the impact of all the generally well understood implementation tricks that correspond to" }, { "end": 3132.7999999999997, "start": 3125.52, "text": " hooking these two things together. And there is not. I think that that readily available" }, { "end": 3136.96, "start": 3132.7999999999997, "text": " hard work started in environments and stuff like gym like we hope to do would make doing academic" }, { "end": 3140.96, "start": 3136.96, "text": " scale research on this easier. But I think that if anyone's interested for research shop, I think" }, { "end": 3145.12, "start": 3140.96, "text": " this is a foundationally important thing that as far as no literally no one in the world is" }, { "end": 3150.96, "start": 3146.16, "text": " is working on. And if anyone wants to, you know, have me tell them everything I know about that is" }, { "end": 3156.16, "start": 3150.96, "text": " email me. Okay, that's a generous offer. I want to follow up on one aspect of that right now." }, { "end": 3161.36, "start": 3156.16, "text": " So I was looking at I'm trying to remember which one it was, but either you used hyper opt" }, { "end": 3167.28, "start": 3161.36, "text": " or optuna. And I was like, I'm going to get to know some hyper hyper parameter tuning. 
And so what" }, { "end": 3176.48, "start": 3167.28, "text": " I did is I hooked it up to a dummy problem where the result was just a random number like some kind" }, { "end": 3183.28, "start": 3176.48, "text": " of some noisy random number. And I said, okay, hyper parameter tuning try to optimize this. And it" }, { "end": 3191.1200000000003, "start": 3183.92, "text": " it what I found is it had no awareness that it's hyper parameters had no impact on the result." }, { "end": 3195.84, "start": 3191.1200000000003, "text": " Yep. And so it gave me some answer. It said, Oh, I found this point that had the best result." }, { "end": 3200.6400000000003, "start": 3195.84, "text": " But there was there was no point at which it said, you know what, I'm going to stop tuning" }, { "end": 3206.56, "start": 3200.6400000000003, "text": " it turning that particular knob because it has no impact. And so so I so that felt very dumb to me." }, { "end": 3212.1600000000003, "start": 3206.56, "text": " Yeah, I have not used hyper opt. I've used optuna exclusively. This isn't any of your religious" }, { "end": 3217.44, "start": 3212.1600000000003, "text": " thing. It just optuna has a bunch of built-in thing. Other people have built a bunch of things for" }, { "end": 3224.88, "start": 3217.44, "text": " optuna specifically that I use. And yeah, so if you specifically use pruning, if you use pruning" }, { "end": 3228.88, "start": 3224.88, "text": " with optuna, it'll get around this problem. There's different settings for that. But just to sort" }, { "end": 3233.6800000000003, "start": 3228.88, "text": " of illustrate the scale of the problem here, let me give one example of why this is such like a" }, { "end": 3238.6400000000003, "start": 3233.6800000000003, "text": " a important and be incredibly low hanging research problem. When you when you're training a" }, { "end": 3244.4, "start": 3238.6400000000003, "text": " reinforcement learning environment, what you do is that you go is that you go and take at some" }, { "end": 3248.88, "start": 3244.4, "text": " point of the like series of reward values and return that one value to the black box optimizer" }, { "end": 3254, "start": 3248.88, "text": " is black box optimizer can only take one value for good reason. Okay. What's value of that" }, { "end": 3258.56, "start": 3254, "text": " curve to your turn? Now you may say, oh, this is you take the best one, right? Well, the problem" }, { "end": 3266.72, "start": 3258.56, "text": " is taking the best one is that then your your is that then your hyper parameter optimizer will go" }, { "end": 3271.52, "start": 3267.28, "text": " will go and find these really, really unstable hyper parameters or hyper parameters that" }, { "end": 3275.6, "start": 3271.52, "text": " that advice really, really unstable learning. And so then your so then you're going to curve so" }, { "end": 3279.44, "start": 3275.6, "text": " essentially be like flat flat flat flat and just for a single step of like peak up incredibly" }, { "end": 3283.04, "start": 3279.44, "text": " high to like fully learn and instantly drop back down. And so all your learning curves look like" }, { "end": 3287.6, "start": 3283.04, "text": " and it's super weird. So okay, maybe don't report that value. Okay. Well, you can report the end" }, { "end": 3291.44, "start": 3287.6, "text": " value and that kind of works, right? Well, sure. 
But like do you want to take like a weighted" }, { "end": 3296.32, "start": 3291.44, "text": " average over the last end values or you know, all these things or another related problems. Okay," }, { "end": 3302.24, "start": 3296.32, "text": " well, you know, I want to find the best value, but it's also useful if if I find the best value and" }, { "end": 3307.68, "start": 3302.24, "text": " I want to try to, you know, somehow incentivize finding more sample efficient hyper parameters as" }, { "end": 3311.68, "start": 3307.68, "text": " well. Okay. How do you do this beyond? You can, you know, just arbitrarily get strained. But what if" }, { "end": 3315.3599999999997, "start": 3311.68, "text": " you want to add some sort of additional penalty, towards a turn to the hyper to the black box hyper" }, { "end": 3321.6, "start": 3315.3599999999997, "text": " parameter for either walk lock time run or resources you just have like this, how do you integrate that" }, { "end": 3325.04, "start": 3321.6, "text": " in a way that doesn't screw everything up? Or variance, right? Like if you want, or you want" }, { "end": 3330.3999999999996, "start": 3325.04, "text": " a responsibility. Yeah. Um, well, and the problem with variance, this, that's, um, is that you have to" }, { "end": 3335.7599999999998, "start": 3330.3999999999996, "text": " run them a bunch, which is even more challenging. But yeah, like this is what I mean, but like, okay," }, { "end": 3342.6400000000003, "start": 3335.76, "text": " what part of the of the award curve do I return to black box optimizer? This is not a formerly" }, { "end": 3346.88, "start": 3342.6400000000003, "text": " studied problem as best as I can tell. And like, like, this is, this is could be, could it be" }, { "end": 3350.96, "start": 3346.88, "text": " foundational work? And then lots of things like this would be that I can get into later if people" }, { "end": 3356.88, "start": 3350.96, "text": " care. And aside from just being generally foundational work, this is easy foundational work that has" }, { "end": 3363.2000000000003, "start": 3356.88, "text": " zero competition, which sounds appealing. No, that sounds, uh, that sounds like important stuff. I," }, { "end": 3367.2799999999997, "start": 3363.2, "text": " yeah, I'm not aware what other people did in that space, but I definitely have faced some of those" }, { "end": 3374.24, "start": 3368.16, "text": " issues in the past. Yeah. What, how do I plug in my curve, my training curve into, into" }, { "end": 3381.52, "start": 3374.24, "text": " hyperopt or optuna or anything like that? You mentioned that you gave up on reading RL papers" }, { "end": 3387.8399999999997, "start": 3381.52, "text": " under normal circumstances a long time ago. Uh, and you talked about this, uh, this article that we" }, { "end": 3393.36, "start": 3387.84, "text": " saw, you know, earlier this year, please commit more blade and academic. That article wasn't" }, { "end": 3398, "start": 3393.36, "text": " incredible. You want to tell us about the response to that article because I, that was, that was," }, { "end": 3402.96, "start": 3398, "text": " uh, that's the kind of, uh, perspective you don't hear very often. People are generally not that honest." }, { "end": 3410.96, "start": 3403.6800000000003, "text": " Um, or Frank, I would say. Yeah. 
Uh, so the reason I like the article so much is that" }, { "end": 3418.56, "start": 3410.96, "text": " that there is a specific benchmark of multi Asian RL and, and he says, this anyone will instantly" }, { "end": 3421.84, "start": 3418.56, "text": " know what I'm referring to is in the space. I don't want to call people out because professional" }, { "end": 3427.52, "start": 3421.84, "text": " issues, but, um, but like, like, there's essentially one, one benchmark in the space multi Asian RL," }, { "end": 3433.04, "start": 3427.52, "text": " where the entire space on this, the entire literature on this falls into this sort of, please" }, { "end": 3437.12, "start": 3433.04, "text": " commit more at more blade and academic fraud. And it's got at the point now where, where the, um," }, { "end": 3440.96, "start": 3437.12, "text": " which is wild by academic standards, where there's, where there's actual allegations of genuine" }, { "end": 3446, "start": 3440.96, "text": " academic fraud because of how badly the scientific claims that the papers in the space of a maid by," }, { "end": 3450.3199999999997, "start": 3446, "text": " like, the first person outside of the core people in it to like go and like, let's read things and" }, { "end": 3454.3199999999997, "start": 3450.3199999999997, "text": " think for a while before writing a paper first. It was, is all he did. And he matched to, like," }, { "end": 3460.3199999999997, "start": 3454.3199999999997, "text": " to solve the environment set. And it was in, in, you know, in this little subspace and multi Asian" }, { "end": 3465.68, "start": 3460.3199999999997, "text": " RL and being this fairly impactful thing, at least to me. And just the fact that this can happen is," }, { "end": 3470.48, "start": 3465.68, "text": " is, it's a detriment to all of us and ashamed of the field. And you know, everyone's, oh," }, { "end": 3475.7599999999998, "start": 3470.48, "text": " pre-produced abilities broken in this and that bad thing are an RL. And I mean, it just like," }, { "end": 3482.48, "start": 3475.7599999999998, "text": " you should at least try, like, like, like, you know, people make it like in biology and stuff." }, { "end": 3488.08, "start": 3482.48, "text": " If there are wild mistakes made very constantly in such as life, you know, turning, you know," }, { "end": 3493.04, "start": 3488.08, "text": " and people regularly find out like that an entire subfield is using invalid statistics. And stuff." }, { "end": 3497.2, "start": 3493.04, "text": " And this kind of happens, I guess, but like, they're at least trying to do the right thing. Like," }, { "end": 3502.32, "start": 3497.2, "text": " effort is put into doing things the right way. And I feel like it's somehow different than where we're at." }, { "end": 3508, "start": 3502.32, "text": " And this is why I stopped reading papers because like, you know, how many papers a year are published" }, { "end": 3512.4, "start": 3508, "text": " that you care about that you've been remembered like 10? Okay. So, so what's the point of going" }, { "end": 3517.2799999999997, "start": 3512.4, "text": " and reading through through the lose of new papers, right? 
And for never to have this even unique to me," }, { "end": 3521.68, "start": 3517.2799999999997, "text": " I'm not gonna name people, but like, there are other professors who like are very famous who," }, { "end": 3526.64, "start": 3521.68, "text": " I think some of them have been on here to make the same approach if you really know, except for" }, { "end": 3531.6, "start": 3526.64, "text": " some very edge case stuff, most of the stuff isn't really worth the time to read. And like, and like," }, { "end": 3536.7999999999997, "start": 3531.6, "text": " if people are spinning the large amounts of their life, turning out paper after paper after paper." }, { "end": 3541.7599999999998, "start": 3536.7999999999997, "text": " And this is the consensus that people have these papers like, this is what you're spending like," }, { "end": 3546.24, "start": 3541.7599999999998, "text": " a portion of your life doing. And this is something really, really fundamentally bothered me. Like," }, { "end": 3549.9199999999996, "start": 3546.24, "text": " like, just almost like for the people doing it for the community for everyone. And so I get that," }, { "end": 3554.8, "start": 3549.92, "text": " that like the entire last however long this, uh, my interview has been like, old man yells a cloud" }, { "end": 3561.12, "start": 3554.8, "text": " style, but like, the emperor has no clothes. Let's get some clothes for this emperor, please people." }, { "end": 3567.04, "start": 3562.08, "text": " So is there anything else I should have asked you, um, today, Jordan? I don't think so. Uh," }, { "end": 3572.88, "start": 3567.04, "text": " I guess, tensor this sort of thing at the end. What do I see by lines of research come over" }, { "end": 3578.7200000000003, "start": 3572.88, "text": " next come years and what is the holy grail of your line of research? One thing, I would really like" }, { "end": 3584, "start": 3578.72, "text": " to wrap up what I hopefully will be some moderately highly publicized works in the next three to six" }, { "end": 3587.6, "start": 3584, "text": " months and get those out so I can move on with my life, get Jim one point, don't know all that stuff" }, { "end": 3593.12, "start": 3587.6, "text": " out so I can again deal with other things. Um, one problem that I'd really like to try to solve" }, { "end": 3602.3199999999997, "start": 3593.12, "text": " personally is trying to beat PPO across a, uh, a well chosen diverse set of environments within" }, { "end": 3606.72, "start": 3602.3199999999997, "text": " it with experience on such a way that real scientific claims being made about beating PPO" }, { "end": 3610.3199999999997, "start": 3606.72, "text": " meaningfully for the first time. That's something I, I think it would be really cool problem to work" }, { "end": 3616.3999999999996, "start": 3610.3199999999997, "text": " on personally. Uh, and then regarding that what is the holy grail? I mean, in this, in, in the space" }, { "end": 3621.3599999999997, "start": 3616.3999999999996, "text": " of our elders, obviously, you know, if not general intelligence, there's, you know, like GPT-3 kind of" }, { "end": 3625.2, "start": 3621.3599999999997, "text": " general generality intelligence for our L that'd be really cool. If I want to talk to someone about" }, { "end": 3632.08, "start": 3625.2, "text": " that, I asked Joseph Suarez in my T. 
But for me personally, I think that I think the holy grail" }, { "end": 3637.6, "start": 3632.08, "text": " is to have a sort of like unified set of best practices regarding all the different oil environments" }, { "end": 3641.84, "start": 3637.6, "text": " and like what, uh, for, for experimental validity and making real scientific claims and all these" }, { "end": 3647.2799999999997, "start": 3641.84, "text": " things that can be sort of standardized across all the little sub disciplines of RL so that we can" }, { "end": 3652.24, "start": 3647.2799999999997, "text": " at least make better progress. I think whether this would solve coming back to the" }, { "end": 3657.12, "start": 3652.24, "text": " Emperor has no close problem is that like, you know, many of the people who who are trying to solve" }, { "end": 3661.92, "start": 3657.12, "text": " these problems are better worst don't have the most personal incentive for whatever the reason is" }, { "end": 3666.48, "start": 3662.48, "text": " to try to make profound contribution to science. They're trying to add as many papers as possible" }, { "end": 3671.04, "start": 3667.12, "text": " and they aren't, you know, trying all these things. And if you can at least create like some sort of" }, { "end": 3676.64, "start": 3671.04, "text": " genuine standard free-produced ability in RL, then you can at least say that like, well, okay," }, { "end": 3680.48, "start": 3676.64, "text": " you didn't meet the agreed upon criteria that's been published for the thing that you're working" }, { "end": 3687.52, "start": 3680.48, "text": " on for this go fix it. And then at least, you know, this will at least make a lot of the papers" }, { "end": 3692.48, "start": 3687.52, "text": " that would otherwise be completely worthless and offer almost no new real knowledge about" }, { "end": 3697.6, "start": 3692.48, "text": " anything empirically offer some even if the authors aren't even even if this isn't what the authors" }, { "end": 3702.16, "start": 3697.6, "text": " have their hearts bent on, you know? I mean, there's some tension there, I think like some people" }, { "end": 3710, "start": 3702.8, "text": " cynically say, you know, large organizations or would be pro regulations that are that build" }, { "end": 3715.68, "start": 3710, "text": " that create barriers to entry because they already they can already pass those strong barriers" }, { "end": 3720.96, "start": 3715.68, "text": " of entry. Yeah, well, yes, but I mean, there argument would be if, you know, if we insist it on," }, { "end": 3725.68, "start": 3720.96, "text": " let's say we insist it on 50 runs of your algorithm, that now limits it only to the biggest labs" }, { "end": 3730.48, "start": 3725.68, "text": " that can actually do that for significantly complex agents and environments. So here's the problem" }, { "end": 3735.44, "start": 3730.48, "text": " that I have with that, though, right? Well, I am entirely appreciative of all the constraints" }, { "end": 3739.76, "start": 3735.44, "text": " that come along with doing that. I think that hard work's a very functional help, or it's applicable," }, { "end": 3743.6800000000003, "start": 3739.76, "text": " there are many, there are many cases where it's not or people aren't able to do it. But like in" }, { "end": 3748.88, "start": 3743.6800000000003, "text": " these cases, okay, so let's say that, oh, well, I couldn't do 50 runs or I couldn't do a lot of" }, { "end": 3754.2400000000002, "start": 3748.88, "text": " hyper parameter tuning. 
Okay, it's fine. You couldn't, but like because you couldn't do this, even if" }, { "end": 3759.6000000000004, "start": 3754.2400000000002, "text": " even if you couldn't do without direct fault of your own, it still essentially precludes you from" }, { "end": 3763.5200000000004, "start": 3759.6000000000004, "text": " contributing side to knowledge or at least the vast majority of it in the work that you're publishing." }, { "end": 3769.36, "start": 3763.5200000000004, "text": " Like, like if you can't do that, you will already excluded from, from, from, from, at least your" }, { "end": 3772, "start": 3769.36, "text": " contributing to certain areas of reinforcing, but of course, many areas are reinforcing that you" }, { "end": 3777.1200000000003, "start": 3772, "text": " can contribute to without these public resources. But yes, you know, you would be excluded from" }, { "end": 3780.56, "start": 3777.1200000000003, "text": " other things you already are, right? It's just that people pretend that you aren't. It's just" }, { "end": 3785.92, "start": 3780.56, "text": " obscured. And so maybe it would clarify what areas of research it's appropriate for certain size" }, { "end": 3790.6400000000003, "start": 3785.92, "text": " labs to focus on so that they can actually produce scientifically valid results. But then your work" }, { "end": 3796, "start": 3790.6400000000003, "text": " on making this stuff more scalable can change the equation and presumably is better for the field." }, { "end": 3802.08, "start": 3796, "text": " I, I, I very much hope so we'll see how things got going over here. Awesome. Well, Jordan Terry," }, { "end": 3807.04, "start": 3802.08, "text": " this has been fantastic. I really enjoyed this conversation and I'm really enjoying learning about" }, { "end": 3811.36, "start": 3807.04, "text": " all the incredible work you're doing and your contributions to the community are, are just" }, { "end": 3816, "start": 3811.36, "text": " outstanding. And thanks so much for, for sharing all this with, with talk or else today. Thank you so" }, { "end": 3828.48, "start": 3816, "text": " much for having me. And it was really nice to be here." } ]
Robert Lange
Robert Lange on learning vs hard-coding, meta-RL, Lottery Tickets and Minimal Task Representations, Action Grammars and more!
https://media.transistor…dc1.mp3?src=site
Robert Tjarko Lange is a PhD student working at the Technical University of Berlin. Thanks so much for joining us, Robert. Thank you, Robin, for having me. So how do you like to describe your focus area? So that's actually not always an easy question to answer. I'm pretty much interested in many subfields of reinforcement learning, but lately I've been mainly focused on meta-learning and connections to evolution and evolutionary strategies. Great, so that's reflected in the papers of yours that we're going to talk about today, starting with Learning Not to Learn: Nature versus Nurture in Silico. That was a paper written by me and my supervisor Henning Sprekeler. So what was the gist of this paper? Many people know meta-learning under its pseudonym, learning to learn. The idea is essentially that you train an agent or a neural network on a distribution of tasks, and the neural network learns either a learning algorithm, or some form of initialization, or some form of features that are shared across these tasks and can be utilized across them. Back when we started working on the paper, we were interested in the biological analog and the question of nature versus nurture: how much you should adapt and how much you should essentially already hard-code from the get-go. And we felt that the meta-learning community back then was mostly interested in the adaptation part. So you want to be capable of adapting fast to a new task, given a little amount of data. But there might be some situations where adaptation is actually not the optimal strategy, and instead meta-learning should actually hard-code some form of heuristic choice. So for example, if you have an agent and that agent has a fixed amount of lifetime, then the agent might simply not be able to solve a task if it were to learn, given its lifetime, because not enough information can be integrated. And in the paper, we essentially wanted to test whether or not modern tools from meta-learning are actually capable of not only learning to learn, but also learning not to learn, and sort of enforce heuristic behavior. In the paper we have a little bit of theory for a simple bandit task, and we were actually able to show that indeed memory-based meta-learning, so the type of meta-learning where you train a recurrent neural network to solve a specific task, is actually capable of also learning not to learn. So for the meta-RL part, I see you use RL squared. And then looking at the references, it seems like there's another paper that uses that phrase. I guess one is from Duan and the OpenAI folks in 2016, which was the one I was thinking of, but then you actually referenced Wang and the DeepMind team in 2016. And you mentioned there was some interesting overlap there. Do you want to fill us in on that? Yeah, so basically the papers by Duan et al. and by Wang et al. sort of came into being simultaneously. Both sets of authors discovered essentially that if you train a recurrent neural network on a set of tasks and the network receives as an input the reward from the previous time step, then the recurrent dynamics of this RNN are going to implement a type of learning algorithm, a type of information integration essentially. Both of these papers came up with that simultaneously and had different results, but actually almost identical algorithmic implementations.
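As a concrete illustration of the RL squared setup described here, the sketch below shows how a memory-based meta-learner is typically wired up: the recurrent network receives the previous action and previous reward alongside the current observation, so any within-lifetime adaptation has to be carried by its hidden state. This is not the authors' code; the PyTorch module, layer sizes, and names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RL2Policy(nn.Module):
    """Minimal recurrent policy for memory-based meta-RL (RL^2-style sketch).

    The GRU input is [observation, one-hot previous action, previous reward],
    so any within-lifetime "learning" must be implemented by the hidden state.
    """

    def __init__(self, obs_dim: int, n_actions: int, hidden_dim: int = 64):
        super().__init__()
        self.n_actions = n_actions
        self.gru = nn.GRU(obs_dim + n_actions + 1, hidden_dim, batch_first=True)
        self.policy_head = nn.Linear(hidden_dim, n_actions)
        self.value_head = nn.Linear(hidden_dim, 1)

    def forward(self, obs, prev_action, prev_reward, hidden=None):
        # obs: (B, T, obs_dim), prev_action: (B, T) int64, prev_reward: (B, T)
        prev_a = F.one_hot(prev_action, self.n_actions).float()
        x = torch.cat([obs, prev_a, prev_reward.unsqueeze(-1)], dim=-1)
        features, hidden = self.gru(x, hidden)
        # The hidden state is carried across steps (and episodes) of the same
        # task and only reset when a new task is sampled from the distribution.
        return self.policy_head(features), self.value_head(features), hidden
```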
And in our paper, we use RL squared, or this setting of memory-based meta-learning, to essentially test our hypothesis of whether or not you can also meta-learn not to learn. Yeah, and the question of whether or not you can meta-learn heuristic behaviors is actually also really interesting for many robotic settings, in which you might not have enough time to essentially learn some form of behavior ad hoc, and you want to have a good default to which you can fall back, and a system that sort of identifies when it should learn and when not. Can you help us understand a little more about what these hard-coded behaviors are like? Is this open-loop behavior? Is it sensing and responding still? So, in general animal behavior, some really intriguing examples are, for example, giraffes, right? A giraffe that is born is basically able to walk after a single minute. So there is some form of hard-coding in terms of motor primitives going on that allows the animal to really quickly walk without having a lot of reward signal to guide its behavior while it's learning. Another example of instinctive or heuristic behaviors are fish, and David Ha likes to use this example oftentimes in his talks as well, where fish have a morphology that already hard-codes certain swimming behaviors. So in a very simple illustration, one can see a bunch of fish, and these fish move downstream, but these fish are actually dead, and the whole takeaway is essentially that the body of the fish is already sort of prone to execute certain behavior. And in the context of training neural network agents, we have two example cases. One is simply a classic bandit task in which there are two arms, and one arm is always sort of deterministic, going to give you a reward of zero, and we have another arm, and that arm's characteristics are sampled from some form of task distribution. Depending on the shape of that task distribution, it might be beneficial to explore that non-deterministic arm and figure out whether or not its average reward is above zero. If that's the case, then you should essentially continue pulling that arm, but if that's not the case, you should take the deterministic arm and always get a reward of zero. And in this case, the meta-learner essentially has to discriminate between task distributions in which it's feasible or on average beneficial to learn which arm is better, and settings in which it isn't. And when it figures out that it isn't beneficial to explore, then the recurrent dynamics of the RNN are going to implement something very stable that just tells the agent to always pick the deterministic arm. And then we have a second example, and in that second example we have an agent that has to explore some type of visual navigation task, like a maze or a grid world, and in that grid world there are, for example, rewards of different magnitudes. You can think of those rewards as being different types of food or nutrition, and they vary their location with different amounts of probability. So one reward location might always be fixed in the same place while others might vary. And in this setting, the agent has to meta-learn whether or not there is enough time to actually do the exploration to pick up a more uncertain but higher reward, or to deterministically always go to the safe location. And what we find there is that agents which are trained using memory-based meta-learning are somewhat overfitting their lifetime horizon.
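A rough numerical sketch of this two-armed bandit trade-off is below. The parameterization (a Gaussian prior over the risky arm's mean, an explore-then-commit learner, a fixed lifetime) is my own illustrative assumption, not the exact setup from the paper; it just shows how the task distribution decides whether learning beats the hard-coded "always pick the safe arm" behavior.

```python
import numpy as np

rng = np.random.default_rng(0)


def run_lifetime(risky_mean: float, lifetime: int = 20, explore_steps: int = 5) -> float:
    """Explore the risky arm briefly, then commit to whichever arm looks better.

    The safe arm always pays 0; the risky arm pays N(risky_mean, 1) per pull.
    """
    pulls = rng.normal(risky_mean, 1.0, size=explore_steps)
    total = pulls.sum()
    if pulls.mean() > 0.0:  # commit to the risky arm only if it looks profitable
        total += rng.normal(risky_mean, 1.0, size=lifetime - explore_steps).sum()
    return total  # committing to the safe arm adds nothing


for prior_mean in (-1.0, 1.0):  # prior over tasks: risky_mean ~ N(prior_mean, 1)
    returns = [run_lifetime(rng.normal(prior_mean, 1.0)) for _ in range(5_000)]
    # With a negative prior mean, the hard-coded "always safe" policy (return 0)
    # beats explore-then-commit on average, so not learning is the better default.
    print(prior_mean, round(float(np.mean(returns)), 2))
```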
So if an agent has only seen lives in its meta-training setup that consist of a lifetime of five steps, then it's not going to be able to meta-learn a more expressive exploration strategy, for example. And in these cases, even if the agent has more lifetime, it's always going to choose the sub-optimal deterministic behavior. So this lifetime overfitting in meta-learning is also something that's, in my opinion, somewhat under-explored in the literature, in the sense that you ideally would like to meta-learn a time-universal agent that's capable of adjusting its own learning algorithm based on the amount of time that it has. It might not always be the optimal strategy to explore endlessly or to be heuristic from the get-go, but we as humans are very much capable of deciding which strategy we should pick depending on the time budget that we have for a certain task. So it sounds like we might not expect fruit flies to develop language and culture. Is that what you're saying? Yeah, basically. I assume the lifetime of fruit flies is a bit too limited to make use of language. So exactly how ecology or the environment shapes what is beneficial to meta-learn is something that is explored in this paper. So were any of the results surprising to you, or were you kind of confirming what you already suspected was the case? One result that was surprising to me: you can imagine settings where there's just enough time to figure out the solution to your task, and given a little bit less time, you won't be able to do it. In these settings, what we find is that memory-based meta-learning, which is ultimately based on gradients, is very much seed dependent. So some seeds figure out that at that point you are supposed to meta-learn and adapt, and others don't. So essentially, whenever you have a setting where it's really sensitive how much time you have, the optimization landscape becomes very erratic and hard to solve. And this is something that, in my mind, meta-learning algorithms that are supposed to capture more complex adaptation should essentially incorporate or try to tackle. So besides these hard-coded behaviors and learned behaviors, I guess you might say that animals have another channel, which is learning from culture and from teachers, a teacher-student type of thing. Do you think that learning via culture or via teachers is just a subcategory of learning in the sense that you have in this paper, or do you think of it as a separate category? Like, if your agents had the ability to teach each other, do you think that would have changed things? That's a really interesting question. So basically our paper is just a starting point. We use first very simple settings in which we can do analytical work, in which we can figure out the Bayes-optimal behavior and then compare it to the meta-learned approximate Bayesian inference. But going further, this is for sure something I'm interested in, and in fact right now a master's student, D'Altia Milla, who I'm working with, is actually looking into these settings where there are experts that guide or give feedback to the agent, and the agent has to meta-learn how much to trust them. This is very much along the lines of Natasha Jaques' work, and I think it's a reasonable extension.
Yeah, I guess I sometimes forget that most of our knowledge and, quote, intelligence, not purely intelligence, but the knowledge, comes from culture and teaching, and very little we discover on our own in our own lifetime. Yeah, there are so many artifacts, right? Just think about books: there's no machine learning algorithm right now out there that could explore the world, pick up a book, and capture all the knowledge that's written in that book. So I think thinking along the lines of what sophisticated artifacts we could build into artificial systems, environments and so on is also something really interesting. Cool. Okay, let's move on to your next paper, On Lottery Tickets and Minimal Task Representations in Deep Reinforcement Learning. So this paper is a paper that originated from another master's student, Marc Vischer, who started working on lottery tickets in the context of reinforcement learning. Back then, the lottery ticket hypothesis had first popped up and was mainly demonstrated in the computer vision community. And maybe before I dive into the paper, I'm going to take a minute to introduce the lottery ticket hypothesis more conceptually. Great. Originally, people had many hypotheses about why over-parametrization is important in training deep neural networks. There were many observations which showed that if you take a small network and try to train it from scratch, given a random initialization, this fails. And people argued that over-parametrization essentially helps with the optimization by allowing more dimensions to circumvent local minima. And there have been many other tales, like an information bottleneck kind of view on these things, where you first have a network that needs to memorize the data-generating process and then afterwards essentially distill it, and in order to do this memorization you need a lot of network capacity. And then Jonathan Frankle came along and showed that this was actually not the case, and most of these observations or hypotheses were actually not really adequate. Most specifically, what he did is he came up with a procedure for how to derive sparse neural networks which are trainable from scratch to the same or similar performance levels as dense neural networks. Before, what people usually did is they took a dense neural network and they pruned it a little bit after training, and then retrained, and then pruned a little bit, and then retrained. In the lottery ticket hypothesis paper, the procedure essentially goes a different route, where you first train a neural network to convergence, and you keep the initialization of that neural network saved. Then, at the end of training, you ask the network, hey, what are your highest-magnitude weights, and you prune away the lowest-magnitude weights, like a percentage or fraction of them. Then you reset the remaining weights back to the initial values that you had before you started training, and you iterate this process, each time shaving away a little bit more of your weights. If you do so, and this procedure is called iterative magnitude pruning, you end up with a network, or with many networks, at different sparsity levels. And importantly, many of these networks at high degrees of sparsity remain trainable. So basically what this says is that there are sparse neural networks which are trainable.
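A compact sketch of the iterative magnitude pruning loop as just described: train, prune the lowest-magnitude weights globally, rewind the surviving weights to their saved initial values, repeat. This is a simplified illustration rather than the paper's implementation, and `train(model, mask)` is a hypothetical stand-in for whatever training loop (supervised or RL) is used with the mask applied.

```python
import copy
import torch
import torch.nn as nn


def global_magnitude_mask(model: nn.Module, mask: dict, prune_frac: float) -> dict:
    """Zero out the lowest-magnitude fraction of the still-active weights."""
    scores = torch.cat([(p.detach().abs() * mask[n]).flatten()
                        for n, p in model.named_parameters() if "weight" in n])
    active = scores[scores > 0]
    k = max(1, int(prune_frac * active.numel()))
    threshold = torch.kthvalue(active, k).values
    return {n: (p.detach().abs() > threshold).float() * mask[n]
            for n, p in model.named_parameters() if "weight" in n}


def iterative_magnitude_pruning(model: nn.Module, train, rounds: int = 10,
                                prune_frac: float = 0.2) -> list:
    """`train(model, mask)` is a hypothetical user-supplied training function."""
    init_state = copy.deepcopy(model.state_dict())      # save the initialization
    mask = {n: torch.ones_like(p)
            for n, p in model.named_parameters() if "weight" in n}
    tickets = []
    for _ in range(rounds):
        train(model, mask)                               # train with the mask applied
        mask = global_magnitude_mask(model, mask, prune_frac)
        model.load_state_dict(init_state)                # rewind surviving weights
        tickets.append({n: m.clone() for n, m in mask.items()})
    return tickets
```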
But maybe right now we just don't have the right, efficient way of doing the initialization at that sparsity level. Back then, a bunch of follow-up papers popped up trying to do this in different contexts, like natural language, for example, or object recognition, and in reinforcement learning there was also a first paper. But what most of these papers did is only establish that the existence of sparse trainable networks also holds in these other contexts. For example, in natural language processing you can obtain sparse transformer models which can also train up to the performance of their dense counterparts at a specific level of sparsity. But no one really looked at what's actually happening on the underlying level, at what the representations are that are shaped by this pruning process. And we wanted to essentially explore these questions in our paper. In the paper you say you use masks for the pruning. Can you tell us what these masks are? Yeah, so in general the winning lottery ticket, so the sparse network that is capable of training to high performance, consists essentially of two parts. It consists of the network that was initialized originally, and this is a dense network, and it consists of the pruning mask. That's a binary mask that essentially masks out all the weights that were pruned away. And in our paper, we come up with a set of baselines to try to disentangle the effect of the pruning mask and the weights that are preserved by that pruning mask. So you could imagine just keeping the pruning mask around and using a different initialization, but this would then discard the effect of the specific weights that were selected by the iterative magnitude pruning process. And what we wanted to do is see whether or not these masks and weights contribute differently to the ticket. This was basically based on an observation that Marc, my master's student, had: oftentimes when you train multilayer perceptron policies, the input layer is pruned a lot more than the higher layers. And when you look at what is pruned away, you find that essentially entire input dimensions of your observation are pruned away. That means all the weights that originate from one input dimension are discarded, which in turn means that the agent does not perceive that dimension when making the decisions that it has to make. And what we then looked at was whether or not this generalizes to many other tasks, as in robotics, for example. So we looked at a set of continuous control tasks and a set of visual control tasks, and in both of them we see this phenomenon appear. So the lottery ticket is essentially not only yielding a sparse neural network that's trainable, it's also yielding an interpretable inductive bias in the form of the input layer mask. What we do in the paper is we show that if you take this mask and you essentially overlay it on the environment, then dimensions which are really just task-irrelevant are being pruned away, while other dimensions which are task-relevant are preserved. And thereby the pruning mask is essentially telling the agent what's important and what's not. So I wrote this question earlier, let's see if it still makes sense. The question was whether you have ideas on which phases of learning require more model capacity. Sounds like you really need the most model capacity in the beginning. Is that right?
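The input-layer analysis described here comes down to checking, per observation dimension, how many first-layer connections survive in the winning ticket's mask; a column that is fully pruned is a dimension the agent never looks at. A small sketch with made-up numbers:

```python
import torch


def surviving_input_fractions(input_layer_mask: torch.Tensor):
    """input_layer_mask: (hidden_dim, obs_dim) binary mask of the first linear layer.

    Returns the fraction of surviving connections per observation dimension and
    the indices of dimensions that are completely ignored by the ticket.
    """
    per_dim = input_layer_mask.float().mean(dim=0)   # average over hidden units
    ignored = (per_dim == 0).nonzero(as_tuple=True)[0]
    return per_dim, ignored


# Toy example: a 4-dimensional observation where dimension 2 is fully pruned away.
mask = torch.tensor([[1, 0, 0, 1],
                     [0, 1, 0, 1],
                     [1, 1, 0, 0]])
fractions, ignored_dims = surviving_input_fractions(mask)
print(fractions)              # tensor([0.6667, 0.6667, 0.0000, 0.6667])
print(ignored_dims.tolist())  # [2] -> a task-irrelevant input dimension
```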
Like in terms of early exploration by a novice agent versus fine-tuning an agent that's nearly become an expert. Well, we actually never really looked at whether or not it's possible to prune at intermediate phases of training. The lottery ticket procedure, as is, always trains up to some form of early stopping criterion or up to a fixed number of iterations, and then you either reset back to the initial weights, or, in a specific version of the lottery ticket procedure, you rewind the weights to a fixed point a certain number of steps into the training process. But whether or not you can dynamically prune at intermediate steps of training, I'm not really certain about that. It's actually a really interesting question to ask, especially because in reinforcement learning the learning dynamics may be fairly noisy, so you might never fully get convergence in that sense. So does this mean that a lot of our neural network capacity is kind of wasted in general, in the default case? Yeah, that's also a good question. It definitely means that there might be more clever ways out there to initialize our neural networks that require fewer parameters. And potentially this also means that the remaining parameters could be used for something different. I'm not sure if there already exists work on multi-task lottery tickets or continual learning lottery tickets, but this is for sure an interesting question going forward. Do you have any speculation on how far we could go with this? Some models are getting really big now. If the lottery ticket hypothesis can be taken to its ultimate conclusion and we find ways to prune really well, do you think there's a chance that we might distill these massive networks down into something quite a lot smaller? To me as a researcher, the lottery ticket hypothesis or framework is much more of a hypothesis-testing engine than necessarily a way forward for obtaining sparse neural networks, because doing this iterative procedure, where you train a neural network, then prune a little bit, then train again, then prune again, is quite expensive. What interests me about the lottery ticket hypothesis is what it can tell us about the learning dynamics of an agent. As I was alluding to, Jonathan Frankle has a set of papers where he looks at the stability of the learning dynamics and the dependence on the order of the batches and the dataset ordering, and these types of work inspire how we think about learning dynamics in reinforcement learning. One thing that we study in the paper, for example, is whether or not training agents with explicit supervision, so in a behavioral cloning task, allows us to obtain sparser networks; that would point to reinforcement learning being inherently more complex and perhaps requiring more parameters. What do I mean by that? In behavior cloning you would train an expert on a certain task and then have a student clone, or try to imitate, the behavior of the teacher. This is most often a supervised loss. There is no exploration; instead the agent simply has to predict what the teacher would do and then execute that behavior as well. But in the reinforcement learning setting, we have the additional problem of exploration.
We have to solve the problem of learning with a non-stationary data distribution, in the sense that the agent observes different parts of the environment at different parts of training, and we have the problem of a credit assignment signal in the form of the reward, which is oftentimes noisy and sparse. What we show in our experiments is that this setting really does require more parameters. If you take the same network architecture, train it with supervised behavior cloning, do the iterative magnitude pruning procedure in that setting, and then compare this to the sparsity-performance results you get for the reinforcement learning case, reinforcement learning starts to crater a lot earlier in terms of sparsity than supervised behavior cloning. So one thing this says, just empirically, is that if you want to obtain sparse reinforcement learning or control agents, you should not use the reinforcement learning problem formulation, where you have an agent which wanders around its environment, perceives, and then learns from a reinforcement signal, but instead use some form of student-teacher distillation, because that allows you to go sparser. On the other hand, it also tells us that the reinforcement learning problem can benefit from having more parameters, which is something that until fairly recently was not necessarily clear. I guess it makes me wonder: if the agent started by imitating a mediocre agent and reinforcement learning was then used to fine-tune this agent to expert level, would it need a lot of capacity for that, or would it not need as much elbow room because it already knows a lot? That's interesting. We didn't test that setting. Another setting, closer to the classical offline RL setting, is where you have a dataset and don't necessarily have an agent exploring the environment; you just have transitions in an experience replay buffer, for example, and you have to learn from that. In that case you might also have suboptimal demonstrations, and you could run the same procedure and try to figure out whether learning from a static dataset, one not exposed to distribution shift, also requires fewer parameters. On the other hand it's fairly hard to compare, because usually these agents might not train up to the performance that the full reinforcement learning agent or the behavioral cloning agent reaches, so it's hard to make the sparsity-performance comparisons. The setting you were talking about, where you have one initial learning phase in which you do behavioral cloning and then start to fine-tune using reinforcement learning, is actually one that we didn't investigate yet. I mean, I imagine a huge amount of effort goes into defining and running these experiments and analyzing results, and it's so easy for people to come after the fact and ask, what about this and what about that; there's always a million things you could have done. No, I'm really thankful, these are great ideas, and you're right, the iterative procedure requires you to train networks at different sparsity levels sequentially, twenty times, and that's not cheap. But I think good ideas are also not cheap. So let's move to your next paper, Semantic RL with Action Grammars: Data-Efficient Learning of Hierarchical Task Abstractions.
Yeah, so this is a paper together with my former supervisor at Imperial College, Aldo Faisal, and it has a bit of a longer history in Aldo's lab. It actually originates from something that's not necessarily related to reinforcement learning at first sight, namely the evolution of tool use. Aldo, my supervisor, is very much into the concept of so-called action grammars. The idea behind action grammars is that many of our behaviors have repeating sequences which are structured using a form of high-level syntax that helps us solve tasks. One classic example might be opening a door and closing it. There are primitives in there, like grabbing the knob, which are used multiple times in solving that task, and you can think of them as a form of production rule or grammar-like structure that can be reused several times. In an older paper, he and colleagues looked at how such grammatical structures might capture the complexity of tool use over time, and what they show is that these grammars became more and more complex as evolution unfolded, and that humans developed more and more sophisticated tool use, grammatical algorithms I would say. In the paper that you were referring to, we're looking at whether one can also use such grammar structures as building blocks for hierarchical reinforcement learning. Traditionally in reinforcement learning we work with single-step actions, and agents have to optimize some form of aggregated reward metric by executing actions one at a time. But in hierarchical reinforcement learning the idea is that it might be more data efficient to learn hierarchical policies which execute certain sub-policies over multiple timesteps. One very classical example of this hierarchical reinforcement learning framework is options. In options you define sub-policies for an initiation set of states, and these sub-policies are then executed over multiple timesteps until a termination criterion says no, now you stop executing that sub-policy, and you return control back to a higher-level policy, which then executes the next sub-policy. One key question in hierarchical reinforcement learning is how you can come up with good sub-policies, and in the original paper by Sutton et al., the way they came up with the first set of options was to manually construct them. Nowadays a lot of people are interested in the automated construction of such options, which are essentially temporally extended actions. In our paper we look at whether one can use this notion of grammar, and specifically action grammars, to define a set of temporally extended actions. We're not looking at options but at macro actions, which are simply deterministic sequences of primitive actions. So, as you said, deterministic sequences: are these compound actions executed in an open-loop way, like once you begin you just roll out those actions without regard to what's happening in the environment until that sequence is done? And is that different from options in that sense? Yeah, in options you have something called a termination criterion or function, which in modern setups is oftentimes also state dependent.
If the higher-level controller says execute option one, then the sub-policy of option one is executed until the termination criterion says no, don't execute anymore. While in our setting, when we use these simpler macro actions, you always deterministically execute the full sequence of actions. Yeah, I love the connection here between language and action. I don't actually know anything about the neuroscience in this area, but I can imagine that the grammar of our tool use carries over to the grammar of language, or the ability to use language. And now we're finding in AI that transformers are going through and solving all the different modalities, so maybe there is something fundamental in those grammars to all the things that we can do, which I think is kind of a beautiful insight. And this paper, just in the sense of framing RL as an NLP problem, reminds me a bit of Decision Transformer. I guess there's more work happening in that direction, using these NLP tools on RL tasks. Do you see that as a growing trend? Yeah, like you said, right now it seems like transformers are taking over many subfields, at least for offline RL. It seems like they are very efficient at modeling sequences, and their generalization power allows you to create or sample trajectories which even go beyond what the agent has seen in the offline dataset. Back when we worked on that paper there were no transformers yet. And I often wonder how data efficient that approach ultimately is going to be, because transformers, at least in computer vision, require quite a lot of data to really outperform the inductive biases that you get from a convolution operator, for example. The same applies here: the grammars which we infer do not require a lot of data. We use grammar compression techniques from context-free grammars to construct our macro actions. In that setting we treat a sequence of actions as a sample from a language, so essentially a sentence, and we define primitive actions as our vocabulary. Then we can construct rules which generate these actions in a hierarchical fashion using these grammar compression algorithms, and they run super fast. They might be a little bit heuristic, but they create the grammar, which ultimately gives the set of macro actions, in no time, while training a transformer offline requires a lot more compute, a lot more data, and does not come with this inductive bias of a context-free grammar that we're using. Furthermore, we show in our paper that you can infer these grammars online as the agent is learning. We have a set of experiments more closely related to offline or imitation learning, where you infer a grammar based on an optimal policy and then use that grammar to train an RL agent from scratch. But you can also train an RL agent from scratch and infer macro actions as you go, as the agent gets better in the environment. The agent is then essentially bootstrapping its own learning progress by compressing it into a grammar. Cool. Yeah, I guess I like to think that classical algorithms are always preferred when they do apply, because they're super fast, efficient and exact, right?
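As a rough illustration of the grammar-compression idea mentioned above (not the exact grammar induction algorithm used in the paper), the sketch below repeatedly replaces the most frequent adjacent pair of symbols in an action trace with a new macro symbol, re-pair style; the action encoding and the example trace are made up for illustration.

```python
from collections import Counter

# Repeatedly fold the most frequent adjacent symbol pair into a new macro
# symbol. Primitive actions are encoded as ints; macros get fresh int ids.
def extract_macros(trace, n_macros=3):
    macros = {}                                   # macro id -> (sym_a, sym_b)
    next_id = max(trace) + 1
    for _ in range(n_macros):
        pairs = Counter(zip(trace, trace[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            break                                 # nothing repeats, stop
        macros[next_id] = (a, b)
        # Rewrite the trace using the new macro symbol.
        new_trace, i = [], 0
        while i < len(trace):
            if i + 1 < len(trace) and trace[i] == a and trace[i + 1] == b:
                new_trace.append(next_id)
                i += 2
            else:
                new_trace.append(trace[i])
                i += 1
        trace = new_trace
        next_id += 1
    return macros, trace

# Hypothetical example: 0 = "step forward", 1 = "turn", 2 = "open door".
macros, compressed = extract_macros([0, 1, 0, 1, 2, 0, 1, 0, 1, 2])
# Nested macros expand into longer primitive-action sequences that can be
# used as temporally extended (macro) actions.
```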
Yeah, but it's also, I guess, a trend of our time that ultimately we as computer scientists are interested in general-purpose solutions. It oftentimes feels like we first come up with more specialized solutions which leverage inductive biases, and then, once we have a clue about how things work, we broaden and generalize and let the system discover all these inductive biases, and even more, on its own. So can you say more about how this grammar evolves? As the agent is training, does the set of compound actions, or however you phrase them, grow and change through the process? Yeah, so these grammar algorithms also come with their own set of hyperparameters, and depending on these hyperparameters you get more or fewer macro actions, and this is actually something you need to control. Oftentimes it makes more sense to be a bit more conservative early on, meaning extracting fewer macro actions initially, and then, as the agent gets better, extracting more of them. This is also problem dependent. The way they evolve over time is essentially that initially there might be more macro actions with fewer primitive actions, so short macro actions, not long sub-policies, and towards the end, as the agent learns to solve the task, the macro actions can robustly be longer, so longer subsequences of actions. And does it discard any earlier macros that it found that may not be applicable anymore, or is it kind of an accumulating set? Yeah, some of them can be discarded, that happens naturally, but oftentimes you have a parse tree, like in a context-free grammar, where you compose different production rules into each other, so you have a hierarchy of macros, and rules that are composed of smaller sub-rules; smaller sub-rules make up larger, longer rules. So this paper was in 2019. Were you or your colleagues planning to pursue this direction more, or has more happened in this area since? I moved on, so I moved to Berlin after my time at Imperial, but in the years afterwards other students worked on the project, and one sub-project in which I was partially involved was trying to scale our experiments to the setting of Atari, for example, and we looked at the low-data regime, so only a few learning transitions, not the full 200 million frames. Back then we could also see that enhancing algorithms like DQN with such macro actions can actually outperform baselines like Dueling DQN, and my co-author, Petros, had a really smart idea for how to construct macro actions in experience replay buffers. Usually, in classical DQN, our experience replay buffer consists of one-step transitions: an agent finds itself in a specific state, takes an action, and receives from the environment a reward as well as the next state. This tuple is stored in the replay buffer, and then we sample batches of these transitions to construct gradient estimates. But in our setting, where we're interested in temporally extended actions, our replay buffer needs to account for these macro actions.
One simple way to do so would be to say, okay, we executed a macro action, so we store that macro action as a number in our replay buffer encoding that action, and then we store the final state that was observed as well as the return that was accumulated during the time we executed the macro action. But this would only give us macro-action transitions in our replay buffer whenever we actually execute a specific macro action. What Petros came up with is that you can also use the rest of the replay buffer, which might consist of primitive, one-step actions, to construct macro actions based on those. You can chain together different one-step transitions into transitions which are essentially macro-action transitions, even though they were not executed as macro actions but just as sequences of primitive actions. This approach he called hindsight action replay: we're composing sequences of primitive transitions into transitions that look as if a macro had been executed, which can then be used to train our value estimates for a specific macro action. That's really cool, very innovative. Yeah, and you could also, we didn't exploit this, but you could also imagine constructing these macro transitions using different discounts. You have all the primitive actions available, and you could think of constructing different macro transitions using different discounts and thereby learning at different timescales. We didn't do that, but this is something you could explore. Cool, okay, so let's move on to the MLE infrastructure. I see on your homepage that you maintain this package. Can you say more about the MLE infrastructure, and maybe situate it a bit by comparing it to other frameworks in that space? Sure.
So the MLE infrastructure is something that's actually not RL specific, but more a set of tools that I've built throughout the last years of writing papers, which allow me to execute and train neural networks in a distributed fashion. Oftentimes I'm interested in exploring the sensitivity of some architecture across a range of hyperparameters and for multiple random seeds, and I don't want to write a submission script each time to execute this on a cluster, and I don't want to have to manually copy the results onto my local machine. Instead I wrote this set of tools, the MLE infrastructure, which comes with a set of sub-packages that help me organize and orchestrate these experiments. For example, there is a tool called mle-hyperopt which allows me to very easily and in a lightweight way set up grid searches, random searches, or Bayesian optimization pipelines, and this all integrates and works well on Grid Engine clusters, Slurm clusters, and Google Cloud Platform virtual machines. The whole motivation behind it is mainly that places like Google or OpenAI have full-time research engineers; they have the money to pay for setting up a good and efficient infrastructure for testing hypotheses and doing science. But in the academic world, if you're not at a place like Mila or Stanford or Berkeley, these structures might not necessarily be in place, and what I aim to provide with the MLE infrastructure is a set of simple tools that can be used by anyone like me who doesn't find themselves in a PhD program at those institutions. It compares to packages like Ray Tune or Sacred from IDSIA, which provide similar services, but the main selling point of the MLE infrastructure is that its modules, like the hyperparameter search, the utilities I use for logging, or the utilities I use to schedule jobs, are modular and independent. That means you don't have to buy into the full ecosystem and install a SQL database or something like that; you can pick and choose which parts of the infrastructure you want to use. I also provide a wrapper, the mle-toolbox, which is a tool to run standardized and protocolled distributed machine learning experiments, but you can also just use parts of it. So are other people using it too, or is it mostly for yourself? Yeah, so basically I developed and released most of this during the last couple of months, and there are already a few people opening issues and starting to work on pull requests, and people in my lab are starting to use it. It's up and coming, let's put it that way, or at least that's how I like to think about it. I'm going to spend the next one and a half years finishing my PhD, working on it daily, and I have big plans. Very cool. Yeah, I can see there's a huge amount of plumbing behind every chart and experiment that we see in these papers. There's a big gap between being able to sketch out your code and actually running it at scale and having it reproducible and all that, so yeah, I think I'll probably check that out.
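This is not the mle-toolbox or mle-hyperopt API, just a bare-bones Python illustration of the kind of bookkeeping such infrastructure automates: sweeping a hyperparameter grid over several seeds and logging the results; `run_experiment` and the grid values are hypothetical stand-ins.

```python
import itertools
import json

# Bare-bones sweep: grid of configurations, repeated over seeds, logged to disk.
def run_experiment(config, seed):
    # Hypothetical stand-in for an actual training run; returns a score.
    return 0.0

grid = {
    "lr": [1e-4, 3e-4, 1e-3],
    "hidden_units": [64, 128],
}
seeds = [0, 1, 2]

results = []
for values in itertools.product(*grid.values()):
    config = dict(zip(grid.keys(), values))
    for seed in seeds:
        score = run_experiment(config, seed)
        results.append({"config": config, "seed": seed, "score": score})

with open("sweep_results.json", "w") as f:
    json.dump(results, f, indent=2)
```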
All of the papers we discussed today, except for the hierarchical reinforcement learning paper, were also conducted using the toolbox. You can imagine that training in the lottery ticket settings also requires quite a lot of engineering, in the sense that you have to train these networks sequentially, twenty times, and you want to keep track of all the checkpoints and all the progress, and all of this is also orchestrated with the toolbox. So most of this was developed in tandem with the projects that I've been working on. Great, we'll provide a link to this repo in the show notes, and that was a great testimonial. Okay, so let's move on to meta-RL. We talked about some of your work in meta-RL, but in general on this show we've talked about different definitions of meta-RL, and you touched on the definitions early on. I have come to refer to two of these approaches as, let's call them, the Finn and the Faust approaches: using RL to discover new RL algorithms is more like Aleksandra Faust's work, and using pre-training on related tasks to let an RL agent quickly fine-tune I think of as related to Chelsea Finn's work, like MAML. Can you say anything about how you see the scope of meta-RL, and do you have your own way of thinking about the different types and definitions? How do you define meta-RL? So basically, in the last five years a bunch of different meta-RL, or meta-learning algorithms more generally, popped up, and I think the distinction that you've made is adequate, but the way I like to think about it is more in terms of inductive biases. Something that is shared across all these formulations is that you have a task distribution, and that task distribution has some form of overlap. Given that overlap, we're interested in finding the best inductive biases that allow our agents or networks to adapt quickly to a new task that is part of that task distribution, or at least not too far away from it. Using tools like model-agnostic meta-learning allows you to come up with one inductive bias in the form of an initialization shared across tasks. For example, in a vision task this might be the inductive bias of having an edge detector in the early layers, and this is most definitely shared across many tasks. But another inductive bias might be a learning algorithm, and that learning algorithm might be very much tailored to the task distribution that you're looking at. That learning algorithm might completely abstract away certain details of the environment or the transitions that the agent makes, and it might focus on others. In my mind these are maybe two sides of the same coin, but you can also think of many other types of inductive biases. For example, prototypical networks are a different approach to meta-learning, or you might even think of the lottery ticket procedure, this iterative magnitude pruning procedure that I spoke about, as some form of inductive bias discovery. And nowadays many people, for example Louis Kirsch, have been working on discovering new learning algorithms, and I think that's a really promising direction to take, but it's not necessarily trivial. Can you talk about any other advances in meta-RL lately that you find interesting? What's going on in that field that you find exciting?
One set of algorithms that I'm very excited about, especially in the context of reinforcement learning, is the work on meta-gradients. Someone who has pioneered this work is, for example, Tom Zahavy and a set of researchers at DeepMind. The idea there is that in reinforcement learning we oftentimes have hyperparameters, like the discount parameter we spoke about, or other parameters like the amount of importance sampling correction that you might want to do in algorithms like IMPALA. These are usually set to static values, so you as a designer have to choose them a priori and then they might not change, or you might have an exploration parameter, like the epsilon in an epsilon-greedy schedule, which changes slowly over time and linearly decays. But it would be really cool to have a system that automatically tunes these parameters in an end-to-end fashion. What has been popping up recently is that you can use the same higher-order gradients as you would use in MAML to also optimize these parameters, and especially in the context of reinforcement learning, where there is non-stationarity, this seems to be very effective. You can not only optimize hyperparameters like the discount factor online, but you can also try to optimize or discover the entire RL objective offline. If you think about it, the objective functions that we use in reinforcement learning are fairly historic in some sense; the mean squared Bellman error is something that came out of approximate dynamic programming and trying to minimize some Bellman error. But an agent might actually learn better initially using a completely different objective which emphasizes other things or discounts differently. In this meta-gradient line there has been a lot of follow-up work showing that you can even parameterize entire objective functions using black-box neural networks and then optimize these offline on a set of tasks, which yields a network, or objective function, that then allows the agent to learn more effectively from scratch. So I think all of these approaches, which try to make more and more of the reinforcement learning pipeline end-to-end discoverable and tunable, are really promising. Where do you think this is going long term? Do you see a holy grail in terms of meta-RL? What kind of meta-RL achievements would make you think we've really arrived at powerful meta-learning for RL, and if we get there, what would be the effects on the rest of AI? Do you think about that stuff? Ultimately the vision from Jürgen Schmidhuber, in the sense that we might aim for systems which self-referentially refine themselves in a hierarchical loop, is one that's very appealing. Right now we're mainly talking about systems where we stack one layer on top of the other: we have higher-order gradients which optimize certain parameters in our system, but we're never thinking about going one step further in the hierarchy, and there are many reasons for that, related to the variance of the gradients, computational efficiency, and other factors. But long term I would be interested in having systems which go beyond that, so not only meta-learning a certain set of primitives or ingredients, but rendering more and more of the system exposed to such meta-gradients, and to meta-learning more generally.
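As a toy illustration of the meta-gradient machinery (tuning a hyperparameter with higher-order gradients through an inner update, MAML-style), the sketch below meta-learns an inner-loop learning rate on a synthetic regression problem. In the RL work described above the meta-parameter would instead be something like the discount factor and the losses would be RL objectives; everything here (the regression task, the choice of log learning rate) is an assumption made for the sake of a runnable example.

```python
import torch

torch.manual_seed(0)

theta = torch.randn(5, requires_grad=True)        # "agent" parameters
log_lr = torch.tensor(-3.0, requires_grad=True)   # meta-parameter (log inner learning rate)
meta_opt = torch.optim.Adam([log_lr], lr=1e-2)

def inner_loss(params, x, y):
    return ((x @ params - y) ** 2).mean()

w_true = torch.randn(5)

for step in range(200):
    x = torch.randn(64, 5)
    y = x @ w_true

    # Inner update: one gradient step, keeping the graph so we can
    # differentiate through it with respect to the meta-parameter.
    loss = inner_loss(theta, x, y)
    grad = torch.autograd.grad(loss, theta, create_graph=True)[0]
    theta_prime = theta - torch.exp(log_lr) * grad

    # Outer ("meta") objective: performance of the updated parameters on
    # fresh data; its gradient w.r.t. log_lr is the meta-gradient.
    x_val = torch.randn(64, 5)
    y_val = x_val @ w_true
    meta_loss = inner_loss(theta_prime, x_val, y_val)

    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()

    # Commit the inner update (detach so theta stays a leaf tensor).
    theta = theta_prime.detach().requires_grad_(True)
```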
So is this the fuse of what people talk about when they talk about superintelligence, the intelligence explosion, and automatically self-improving AI? When it gets down to it, is this really the path to those things? This is a speculative question and I'm not sure if I'm senior enough to actually give you a good answer. I can say that it excites me algorithmically and I feel like it's the way to go, but I can't give you a good future prediction or anything like that. I think we're probably at the same time closer to and further away from these utopian or dystopian endeavors. Cool, okay, so let's move on to economics and RL. I see you did your undergrad in economics, and you mentioned that in your Machine Learning Street Talk podcast interview, which I'll also link to in the show notes. You have this background in economics, which I think is not too common for someone working in RL, and I find that super interesting. Does economics influence your view or your approach to your machine learning work at all? Actually, much of the formalism of RL is fairly present in economics. Many macroeconomists use Markov decision processes to model household behavior, and an agent in economics is often also, not necessarily learning, but interacting and making decisions like savings and investments. Recently, people from Salesforce also tried to reframe many fundamental economics problems, like taxation and unemployment decisions, in the world of reinforcement learning. I actually decided to move out of economics because I felt like the level of description was not necessarily the most informative. Oftentimes in economics we're dealing with highly aggregated measures: inflation is an estimate based on a basket of goods, the unemployment rate is always estimated over many individuals, and the modeling has to work on that aggregated level, since otherwise it would be hard to actually capture all the heterogeneity in the population. And I feel like in reinforcement learning we don't necessarily need to go that way; a lot of very exciting work in the field of multi-agent reinforcement learning, by Jakob Foerster for example, can account for heterogeneity across agents, and I think there's a lot of potential in that, in the sense that AI may also inform modern economics in terms of policy making and increasing the well-being of all of us. That's the ultimate goal, I would claim. It's interesting you mentioned that Salesforce paper, I think that was the AI Economist. So is what you're saying that you would consider looping back to economics, or is that more a flight of fancy? I think right now we're seeing, with the recent breakthroughs by DeepMind in mathematics, chemistry, protein folding, that there seems to be no discipline where there is no room for machine learning to improve certain things. I'm not saying machine learning is going to replace every field out there, but I think it's going to help scientists test novel hypotheses. Ultimately, it seems like the hard sciences are more tractable at this point. It's not like we're getting to the point of doing psychohistory, like Asimov's, predicting people's behavior over long time spans or something. It seems like things like chemistry, physics, and mathematics are very
neatly defined, so our algorithms can attack them, but maybe things like sociology, psychology, and economics are still very squishy. How could we possibly optimize those things with these kinds of tools that need everything to be so well defined? That's a good point, but at least in economics there are certain projects which, for example, try to measure inflation in a more online fashion, not only quarterly but with an index that is updated online, essentially using machine learning. So I'm not sure if we're going to have an economics transformer in the next two months, but I feel like as we get more comfortable with these techniques, we can see more parallels across disciplines as well, and I think there may be many opportunities to support economics. There are also a bunch of people working on this right now. I mean, why is it so squishy? It's because we don't have sensors everywhere; we don't really know what's happening in high resolution. And that might actually change with these digital currencies, the central bank digital currencies, if governments could actually measure all the transactions, which is in one sense totally dystopian, but in another sense would actually allow for optimizing a lot of things that we wouldn't really think of as being possible to optimize right now. Yeah, and it's fairly apparent that we need a lot more work on the ethics side to really make sure what we want to expose to that blind, or somewhat blind, optimization and what we don't want to expose, especially with digital currencies and the move towards decentralized or centralized ones. It's a big question what we want to keep anonymous and what not, and what types of analyses we can do in which cases. So yeah, I think all of these things should be taken with a grain of salt, for sure. So, causality seems really important in economics and has a long history there, and I guess RL is coming around to causality. Do you spend much time thinking about the causal angle, and do you have any comments about that? I used to spend a lot more time thinking about causality when I was still in economics. Back then, causality in economics was a lot about estimating causal effects of some form of policy intervention. You might think of how the employment decisions of workers change if a certain policy is implemented, and you want to estimate some form of impact on employment based on that policy being enacted. In economics you usually refer to so-called quasi-natural experiments: you have a dataset where there was some form of policy intervention, and then you try to estimate, based on the dataset, what the effect might be. Then there's a lot of statistical debate about whether you should account for certain unobserved variables, for example that the workers might have selected themselves to be more or less affected by the policy intervention, and then the whole game starts about what are the right standard errors, what are the right significance levels to assess these effects, and these effects might be time-varying. And then the big question is also how a quasi-natural experiment that happened 50 years ago affects what's happening right now.
To tie this back to reinforcement learning: in reinforcement learning we have the huge benefit that oftentimes we have access to simulators, and it's a lot easier for an agent to test certain hypotheses in an exploration-based way, using insights from causality. In many ways the setting in which we find ourselves simulating agents is a lot easier, or maybe easier is the wrong word, but it allows for more experimentation than the one in economics. It's a lot harder to actually play around or enact interventions in economics than it is for a reinforcement learning agent to be exposed to an intervention. Do you see economics becoming less dismal going forward? Do you see any path to getting closer to the RL paradigm, where there are simulators that are meaningful, or where we can draw conclusions more clearly? How is that progressing on the economics side? Do you think there's hope, or will it always be this dismal? Yeah, so I guess companies like Google already run billions of experiments every day, every month, in the sense that it's very easy to change something in the UI, do some form of A/B testing, and observe the behavior of users. I think more and more governments are also moving in that direction, trying to test certain nudging techniques, for example, and trying to assess the effects of small changes on a subset of individuals. I'm not sure whether one can really test large interventions, and whether one actually needs a simulator to do so. But this is the general problem one faces when doing simulation: the simulation is always an abstraction, a downscaled abstraction of what's happening in the real world, and then you have to ask yourself how much complexity you actually need to grasp the underlying real phenomenon, while not increasing the amount of simulation time so drastically that things become too slow to actually gain any insight. I mean, it's always struck me that if we ever want to get to scientific governance, well, most of what we do in fiscal policy and how governance is done is just legacy, traditional things. If we ever want to get to the point where we have a more optimized society, it seems like we'd want to be testing more radical ideas as quickly as we can. What would that look like if we had a government where part of its mission was to get insight into how to do good governance? I don't think that's really a priority these days. I agree, but I also feel like a point that Jonathan Frankle often makes is that our leaders have to have the tools. They need to be educated in what we're doing, which is machine learning, or doing research, at least to a certain degree, because if you don't know what's possible and what's feasible right now, it's really hard to come up with governance designs that might lead us into the future. So I think a big initiative has to come about for educating our future leaders to be able to tackle these challenges of ethical decision making and exploring diverse policies, without risking discriminating against or harming people in our society. So we covered a lot of things, but on a high level, are there other things happening in RL that we didn't talk about today that are particularly interesting for you, Robert? I think I highlighted the work on meta-gradients, which I find deeply fascinating.
But one question that's intriguing me right now is also related to the lottery ticket hypothesis, and that's the question of why we actually need so many parameters to train in reinforcement learning. I realized fairly recently that if you take a step back and don't use the classical RL problem formulation and classical RL algorithms, but resort to evolutionary strategies, which directly try to optimize some form of fitness or return score, it's actually possible to obtain policies which have orders of magnitude fewer parameters. Oftentimes in evolution strategies it's even better to have fewer parameters, because that makes the search more efficient; searching larger spaces in evolutionary optimization is oftentimes harder than searching smaller parameter spaces. So I'm really interested in figuring out what the role of over-parametrization is for reinforcement learning as compared to supervised learning, and I feel like evolution strategies might provide a different angle on when over-parametrization is actually harmful and when it's beneficial. Cool, so is that part of your plan for moving forward in your research? Yeah, exactly. My PhD is mainly focused on meta-learning and its relation to evolution, not only on a high level, where you could think of meta-learning as being an evolutionary process, but also relating to evolutionary strategies: how one could even try to meta-learn evolutionary strategies, and also think about inductive biases that might be beneficial for doing evolution, and the other way around. I think you've given me a new perspective on that phrase, inductive biases, because I never would have thought of learning algorithms as an inductive bias. I guess I wasn't thinking that generally. I think you're expanding our minds today, Robert. Thank you so much, I'm really grateful. I really enjoyed this conversation with you, and thanks for taking the time to share your insight with us at TalkRL. Thank you, Robin, thank you for having me.
[ { "end": 8, "start": 0, "text": " Robert Tiacolange is a PhD student working at the Technical University of Berlin." }, { "end": 9, "start": 8, "text": " Thanks so much for joining us, Robert." }, { "end": 11, "start": 9, "text": " Thank you, Robin, for having me." }, { "end": 13, "start": 11, "text": " So how do you like to describe your focus area?" }, { "end": 17, "start": 13, "text": " So that's actually not always an easy question to answer." }, { "end": 21, "start": 17, "text": " I'm pretty much interested in many subfields of reinforcement learning," }, { "end": 27, "start": 21, "text": " but lately I've been mainly focused on meta-learning and connections to evolution" }, { "end": 30, "start": 27, "text": " and evolutionary strategies." }, { "end": 34, "start": 30, "text": " Great, so that's reflected in the papers that were your papers that we're going to talk about today." }, { "end": 39, "start": 34, "text": " Starting with learning, not to learn nature versus nurture in silico." }, { "end": 43, "start": 39, "text": " Was the paper written by me and my supervisor Henning Sprikela?" }, { "end": 45, "start": 43, "text": " So what was the gist of this paper?" }, { "end": 52, "start": 45, "text": " Many people know meta-learning, sort of under its pseudonym, learning to learn." }, { "end": 60, "start": 52, "text": " So the idea is essentially that you train an agent or a neural network on a distribution of tasks" }, { "end": 68, "start": 60, "text": " and the neural network learns essentially either a learning algorithm or some form of initialization" }, { "end": 74, "start": 68, "text": " or some form of features that are sort of shared across these tasks and can be utilized across them." }, { "end": 80, "start": 74, "text": " Back when we started working on the paper, we were sort of interested in the biological analog" }, { "end": 83, "start": 80, "text": " and the question of nature versus nurture." }, { "end": 90, "start": 83, "text": " So how much you should adapt and how much you should essentially already hard-code from the get-go." }, { "end": 97, "start": 90, "text": " And we felt like that the meta-learning community back then was mostly interested in the adaptation part." }, { "end": 101, "start": 97, "text": " So you want to be capable of adapting fast to a new task," }, { "end": 103, "start": 101, "text": " given a little amount of data." }, { "end": 108, "start": 103, "text": " But there might be some situations where adaptation is actually not the optimal strategy." }, { "end": 114, "start": 108, "text": " And instead, meta-learning should actually hard-code some form of a forestic choice." }, { "end": 120, "start": 114, "text": " So for example, if you have an agent and that agent has a fixed amount of lifetime," }, { "end": 127, "start": 120, "text": " then the agent might simply not be able to solve a task if it was to learn, given its lifetime." }, { "end": 131, "start": 127, "text": " Because there is not enough information can be integrated." }, { "end": 136, "start": 131, "text": " And in the paper, we essentially wanted to test whether or not modern tools from meta-learning" }, { "end": 146, "start": 136, "text": " are actually capable of not only learning to learn, but also learning not to learn and sort of enforce a-horistic behavior." }, { "end": 153, "start": 146, "text": " And in the paper, we sort of have a little bit of theory for a simple banded task." 
}, { "end": 157, "start": 153, "text": " And we actually able to show that indeed memory-based meta-learning," }, { "end": 163, "start": 157, "text": " so the type of meta-learning where you train a recurrent neural network to solve a specific task," }, { "end": 168, "start": 163, "text": " is actually capable of also not learning not to learn." }, { "end": 174, "start": 168, "text": " So for the meta-rl part, I see you use rl squared." }, { "end": 182, "start": 174, "text": " And then looking at the reference, it seems like there's two-there's another paper that uses that phrase." }, { "end": 189, "start": 182, "text": " I guess one is from Dwan and the OpenAI folks in 2016 was the one I was thinking of," }, { "end": 195, "start": 189, "text": " but then you actually referenced Wang and the deep-mind team in 2016." }, { "end": 199, "start": 195, "text": " And so you mentioned there was some interesting overlap there." }, { "end": 201, "start": 199, "text": " Do you want to fill us in on that?" }, { "end": 205, "start": 201, "text": " Yeah, so basically the paper by Dwan and by Wang et al," }, { "end": 212, "start": 205, "text": " they sort of came into being simultaneously, like both of the authors discovered essentially that" }, { "end": 221, "start": 212, "text": " if you train a recurrent neural network on a set of tasks and the network receives as an input," }, { "end": 226, "start": 221, "text": " the reward from the previous time step, then the recurrent dynamics of this RNN," }, { "end": 233, "start": 226, "text": " are going to implement a type of learning algorithm, a type of information integration essentially." }, { "end": 239, "start": 233, "text": " And both of these papers came up with that simultaneously and had different results," }, { "end": 244, "start": 239, "text": " but actually almost identical algorithmic implementations." }, { "end": 251, "start": 244, "text": " And in our paper, we use rl squared, or this setting of memory-based meta-learning," }, { "end": 257, "start": 251, "text": " to essentially test our hypothesis whether or not you can also meta-learn not to learn." }, { "end": 265, "start": 257, "text": " Yeah, and it's actually, so the question of whether or not you can meta-learn heristic behaviors" }, { "end": 272, "start": 265, "text": " is actually also really interesting for many robotic settings in which you might not have enough time" }, { "end": 275, "start": 272, "text": " to essentially learn some form of behavior at Hark," }, { "end": 280, "start": 275, "text": " and you want to have a good default to which you can fall back and have a system that sort of identifies" }, { "end": 282, "start": 280, "text": " when it should learn and when not." }, { "end": 286, "start": 282, "text": " Can you help us understand a little more about what these hard-coded behaviors are like?" }, { "end": 291, "start": 286, "text": " Is this open loop behavior? Is it sensing and responding still?" }, { "end": 297, "start": 291, "text": " So, the general animal behavior essentially, some really intriguing examples are," }, { "end": 305, "start": 297, "text": " for example, giraffes, right? So a giraffe that is born is basically able to walk after a single minute." 
}, { "end": 311, "start": 305, "text": " So there is some form of hard coding in terms of multiprimitives going on" }, { "end": 318, "start": 311, "text": " that allows the animal to really quickly walk without having a lot of reward signal to guide its behavior," }, { "end": 325, "start": 318, "text": " why it's learning. Another example of instinctive or heristic behaviors are fish" }, { "end": 331, "start": 325, "text": " and David Ha likes to use this example oftentimes in his talks as well," }, { "end": 337, "start": 331, "text": " where fish have a morphology that already hard-codes certain swimming behaviors." }, { "end": 347, "start": 337, "text": " So in a very simple illustration, one can see a bunch of fish and these fish move downstream," }, { "end": 353, "start": 347, "text": " but these fish are actually dead and the whole takeaway is essentially that the body of the fish" }, { "end": 357, "start": 353, "text": " is already sort of prone to execute certain behavior." }, { "end": 364, "start": 357, "text": " And in the context of training neural network agents, we sort of have two example cases." }, { "end": 372, "start": 364, "text": " One is simply like a classic banded task in which there are two arms and one arm is always sort of deterministic," }, { "end": 378, "start": 372, "text": " going to give you a reward of zero, and we have another arm and that arms characteristics are" }, { "end": 383, "start": 378, "text": " sampled from some form of task distribution. And depending on the shape of that task distribution," }, { "end": 390, "start": 383, "text": " it might be beneficial to explore that non-deterministic arm and to figure out whether or not" }, { "end": 396, "start": 390, "text": " its average reward is above zero, and if that's the case, then you should essentially continue" }, { "end": 402, "start": 396, "text": " exploring that arm, but if that's not the case, you should take the deterministic arm and get always a reward of zero." }, { "end": 410, "start": 402, "text": " And in this case, the meta-learning essentially has to discriminate between task distributions" }, { "end": 420, "start": 410, "text": " in which it's sort of feasible or on average beneficial to learn which arm is better and in which settings it isn't." }, { "end": 426, "start": 420, "text": " And when it figures out that it isn't beneficial to explore, then the recurrent dynamics of the RNN" }, { "end": 434, "start": 426, "text": " are going to implement essentially something very stale that just tells the agent always pick the deterministic arm." }, { "end": 439, "start": 434, "text": " And then we have a second example, and in that second example we have an agent that has to explore" }, { "end": 447, "start": 439, "text": " some type of visual navigation task like a maze or a grid world, and in that grid world there are, for example," }, { "end": 458, "start": 447, "text": " rewards of different magnitudes. And you can think of those rewards as sort of being different types of food or nutrition," }, { "end": 463, "start": 458, "text": " and they vary their location with different amounts of probability." }, { "end": 469, "start": 463, "text": " So one reward location might always be fixed at the same while others might vary." 
}, { "end": 475, "start": 469, "text": " And in this setting the agent sort of has to meta-learn whether or not there is enough time to actually do the exploration" }, { "end": 483, "start": 475, "text": " to pick up a more uncertain but higher reward or to deterministically always go to safe location." }, { "end": 494, "start": 483, "text": " And what we find there is that essentially agents which are trained using memory-based meta-learning are somewhat overfitting their lifetime horizon." }, { "end": 503, "start": 494, "text": " So if an agent has only seen lives essentially in its meta-training setup that consists of lifetime five," }, { "end": 511, "start": 503, "text": " then it's not going to be able to meta-learn a more expressive exploration strategy, for example." }, { "end": 519, "start": 511, "text": " And in these cases even if the agent has more lifetime it's always going to choose the sub-optimal deterministic behavior." }, { "end": 526, "start": 519, "text": " So this lifetime overfitting and meta-learning is also something that's in my opinion somewhat under-explored in the literature." }, { "end": 538, "start": 526, "text": " In the sense that you ideally would like to meta-learn a time-universal agent that's capable of adjusting its own learning algorithm based on the amount of time that it has." }, { "end": 545, "start": 538, "text": " It might not always be the optimal strategy to explore endlessly or to be heuristic from the get-go," }, { "end": 556, "start": 545, "text": " but we as humans are very much capable of separating between what strategy we should pick depending on the time budget that we have for a certain task." }, { "end": 563, "start": 556, "text": " So it sounds like we might not expect fruit flies to develop language and culture. Is that what you're saying?" }, { "end": 585, "start": 563, "text": " Yeah, basically. So I assume the lifetime of fruit flies is fairly limited to make use of language. So exactly the way how ecology or the environment sort of shapes what is beneficial to meta-learn is something that sort of explored in this paper." }, { "end": 593, "start": 585, "text": " So were any of the results surprising to you or were you kind of confirming what you already suspected was the case?" }, { "end": 606, "start": 593, "text": " One result that was surprising to me was basically you can imagine settings where there's just enough time to figure out the solution to your task." }, { "end": 610, "start": 606, "text": " And given a little bit less time, you won't be able to do it." }, { "end": 620, "start": 610, "text": " And in these settings, what we find is that memory based metal learning, which is ultimately based on gradients, is very much a seat dependent." }, { "end": 629, "start": 620, "text": " So some seeds figure out that at that point you are supposed to to to meta learn and adapt the strategy and others don't." }, { "end": 640, "start": 629, "text": " So essentially whenever you have a setting where it's really sensitive how much time you have the optimization landscape becomes very erratic or hard to solve." }, { "end": 653, "start": 640, "text": " And this is something that in my mind, more meta learning algorithms that should resemble more complex adaptation should essentially incorporate or try to tackle." 
}, { "end": 669, "start": 653, "text": " So besides these hard-coded behaviors and and and learn behaviors, I guess you might say that animals have another channel, which is learning from culture and from teachers, teachers, student type thing." }, { "end": 690, "start": 669, "text": " Do you think that that learning via like culture or by teachers is, do you consider that like just a subcategory of learning in the sense that you have in this paper or do you think of it as a separate category like if your agents had the ability to teach each other, do you think that that would have changed things?" }, { "end": 710, "start": 690, "text": " That's a really interesting question. So basically our paper is just a starting point. So we use essentially first very simple settings in which we can do analytical work in which we can figure out the base optimal behavior and then compare it to the meta learned approximate Bayesian inference." }, { "end": 729, "start": 710, "text": " But going further, this is for sure something I'm interested in and in fact right now a master student, D'Altia Milla, who I'm working with is actually looking into these settings where there are sort of experts that guide or give feedback to the agent and the agent has to meta learn how to trust." }, { "end": 737, "start": 729, "text": " This is very much along the lines of Natasha Jack's work and I think it's a it's a reasonable extension." }, { "end": 751, "start": 737, "text": " Yeah, I guess I sometimes forget that that most of our knowledge and quote intelligence, really purely intelligence, but that knowledge comes from culture and teaching and very little we discover on our own in our own lifetime." }, { "end": 764, "start": 751, "text": " Yeah, there's so many artifacts, right? Just if you think about books, you know, there's no machine learning algorithm right now out there that could explore the world, pick up a book and capture all the knowledge that's written in a book." }, { "end": 774, "start": 764, "text": " So I think, yeah, thinking along the lines of what are sophisticated artifacts that we could build into artificial systems, environments and so on." }, { "end": 776, "start": 774, "text": " It's also something really interesting." }, { "end": 782, "start": 776, "text": " Cool. Okay, let's move on to your next paper on lottery tickets and minimal task representations in the reinforcement learning." }, { "end": 795, "start": 782, "text": " So this paper is a paper that originated from another master's student, MacVisher, who started working on lottery tickets in the context of reinforcement learning." }, { "end": 805, "start": 795, "text": " And back then, the lottery tickets hypothesis first sort of popped up and was mainly demonstrated in the computer vision community." }, { "end": 814, "start": 805, "text": " And maybe before I dive into the paper, I'm going to take a minute to sort of introduce the lottery ticket hypothesis more conceptually." }, { "end": 825, "start": 814, "text": " Great. Originally, people had all very like many hypotheses about why over parametrization is important in training deep neural networks." }, { "end": 835, "start": 825, "text": " And there were many observations which showed that if you take a small network and try to prune it from scratch, given their endimensionalization, that this fails." 
}, { "end": 847, "start": 835, "text": " And people argued that over parametrization essentially helps with the optimization by allowing more dimensions to essentially circumvent local minimal." }, { "end": 861, "start": 847, "text": " And there have been many other sort of tales about like an information bottle and a kind of view on these things where you first have a network that needs to memorize the data generating process and then afterwards essentially sort of distilled it." }, { "end": 868, "start": 861, "text": " And in order to do this memorization, you need a lot of network capacity." }, { "end": 881, "start": 868, "text": " And then Jonathan Franco came along and showed that this was actually not the case. And most of these sort of observations or hypotheses were actually not really adequate." }, { "end": 897, "start": 881, "text": " And most specifically, what he did is he came up with a procedure for how to derive sparse neural networks, which are trainable from scratch to the same or similar performance levels as dense neural networks." }, { "end": 909, "start": 897, "text": " So before what people usually did is they took a dense neural network and they pruned it after training a little bit and then retrained and then pruned a little bit and then retrained." }, { "end": 922, "start": 909, "text": " And in the lottery ticket hypothesis paper, the procedure essentially goes a different route where you start to train a neural network, to convergence." }, { "end": 938, "start": 922, "text": " And you keep the initialization of that neural network saved. And then at the end of training, you ask the network, hey, what are your highest magnitude weights and you prune away the lowest magnitude weights like a percentage of fraction." }, { "end": 945, "start": 938, "text": " And then reset the remaining weights back to the initial values that you had before you started training." }, { "end": 960, "start": 945, "text": " And you iterate this process each time shaving away a little bit more of your weights. And if you do so, so this procedure is called iterative magnitude pruning, you end up with a network or with many networks at different sparsity levels." }, { "end": 968, "start": 960, "text": " And importantly, many of these networks at high degrees of sparsity remain trainable." }, { "end": 983, "start": 968, "text": " So basically what this says is there are spars neural networks which are trainable. But maybe right now we don't have the right efficient way for doing the initialization to that sparsity level." }, { "end": 1000, "start": 983, "text": " And back then, a bunch of follow up papers popped up trying to do this in different contexts like in natural language, for example, or in object recognition and in reinforcement learning that was also a first paper." }, { "end": 1027, "start": 1000, "text": " But what most of these papers did is sort of only establish that this existence of spars neural networks also is present in different other contexts like, for example, in natural language processing, you can obtain sparse transformer models, which can also train up to the performance of their dense counterparts for specific level of sparsity." }, { "end": 1039, "start": 1027, "text": " But no one really looked at sort of what's actually happening on the underlying level. So what are sort of their representations that are shaped by this pruning process." }, { "end": 1044, "start": 1039, "text": " And we wanted to essentially explore these questions in our paper." 
}, { "end": 1050, "start": 1044, "text": " In the paper you say you use masks for the pruning. Can you tell us what these masks are?" }, { "end": 1061, "start": 1050, "text": " Yeah, so in general, the winning lottery ticket, so the sparse network that is capable of training to high performance, consists essentially of two parts." }, { "end": 1072, "start": 1061, "text": " So it consists of the network that was initialized originally, and this is a dense network, and it consists of the pruning mask." }, { "end": 1080, "start": 1072, "text": " So that's a binary mask. There essentially masks out all the weights that were pruned away." }, { "end": 1093, "start": 1080, "text": " And in our paper, we essentially come up with a set of baselines to try to disentangle the effect of the pruning mask and the weights that are preserved by that pruning mask." }, { "end": 1106, "start": 1093, "text": " So you could imagine just keeping the pruning mask around and using a different initialization. But this would then sort of discard the effect of the weights that were sort of filtered out by the iterative magnitude pruning process." }, { "end": 1117, "start": 1106, "text": " And what we wanted to do is see whether or not these masks and weights contribute differently to the ticket." }, { "end": 1134, "start": 1117, "text": " And this was basically based on an observation that Mark, my master's student, had that oftentimes when you train Matina a perceptron policies, the initial layer is pruned a lot more than the higher layers." }, { "end": 1148, "start": 1134, "text": " And when you look at what is pruned away, you find that essentially entire initial dimensions of your observation are pruned away." }, { "end": 1164, "start": 1148, "text": " So that means all the weights that sort of originate from one input dimension are discarded, which in turn means that the agent does not perceive the dimension in order to do the decision making that it has to make." }, { "end": 1179, "start": 1164, "text": " And what we then sort of looked at was whether or not this generalizes to do many other tasks as in robotics, for example. So we looked at a set of continuous control tasks and a set of visual control tasks." }, { "end": 1199, "start": 1179, "text": " And in both of them, we see this phenomenon appear. So the lottery ticket is essentially not only yielding a sparse neural network that's trainable, and it's also yielding an interpretable inductive piers in the form of the input layer mask." }, { "end": 1217, "start": 1199, "text": " So what we do in the papers, we show that if you take this mask and you essentially overlay it on the environment, then dimensions which are really just task irrelevant are being pruned away, while other dimensions which are task relevant are preserved." }, { "end": 1224, "start": 1217, "text": " And thereby the pruning mask is essentially telling the agent what's important and what's not." }, { "end": 1233, "start": 1224, "text": " So I wrote this question earlier, let's see if it still makes sense. The question was if you have ideas on which phases of learning require more model capacity." }, { "end": 1244, "start": 1233, "text": " Sounds like you really really need the most model capacity in the beginning. Is that is that right? Like in terms of early exploration of a novice agent versus fine tuning an agent that's nearly become expert." 
}, { "end": 1263, "start": 1244, "text": " Well, we actually never really looked at whether or not it's possible to train to prune intermediate phases of training. Right. So the lottery ticket procedure as is always trains up to some form of early stopping criteria and more up to some form of fixed iterations." }, { "end": 1280, "start": 1263, "text": " And then you either set back to the initial weights or in a specific version of the lottery ticket procedure, you rewind the weights until a fixed period or a number of steps in the training process." }, { "end": 1295, "start": 1280, "text": " But whether or not you can sort of dynamically prune at intermediate steps of training, I'm not really certain about that. It's actually a really interesting question to ask." }, { "end": 1305, "start": 1295, "text": " Especially because in reinforcement learning oftentimes the learning dynamics may be fairly noisy. Right. So you might never fully get convergence in that sense." }, { "end": 1314, "start": 1305, "text": " So does this mean that a lot of our neural network capacity is kind of wasted in general, but the default case. Yeah, that's also a good question." }, { "end": 1323, "start": 1314, "text": " It definitely means that there might be more clever ways out there to initialize our neural networks that requires less parameters." }, { "end": 1338, "start": 1323, "text": " And potentially this also means that the remaining parameters could be used for something different. Right. So I'm not sure if there already exists work on multi task lottery tickets, right or continual learning lottery tickets." }, { "end": 1341, "start": 1338, "text": " But this is for sure an interesting question going forward." }, { "end": 1356, "start": 1341, "text": " Do you have any speculation on how far we could go with this? Like some models are getting really big now. If pruning maybe maybe if the large ticket hypothesis can be taken to its ultimate conclusion and pruning, you know, can find ways to prune really well." }, { "end": 1362, "start": 1356, "text": " Do you think there's a chance that we might distill these massive networks down into something really quite quite a lot smaller." }, { "end": 1378, "start": 1362, "text": " So to me as a researcher, the lottery ticket hypothesis or framework is much more of a hypothesis testing engine than necessarily a way forward in terms of how to obtain sparse neural networks." }, { "end": 1394, "start": 1378, "text": " So oftentimes doing this iterative procedure where you chain a neural network and then you prune a little bit and then you chain again and then you prune again a little bit." }, { "end": 1423, "start": 1394, "text": " So the ticket hypothesis is what it can tell us about the learning dynamics of an agent. Right. So as I was sort of a looting to Jonathan Frankel has a set of papers where he looks at sort of the stability of the learning dynamics and the dependence on the order of the batches and the data set ordering and these types of work." }, { "end": 1438, "start": 1423, "text": " So inspire how we think about learning dynamics and reinforcement learning. So one one thing that we study, for example, in the paper is whether or not training agents with explicit supervision." }, { "end": 1459, "start": 1438, "text": " So in a behavioral cloning task allows us to obtain more sparse networks and that would sort of point to the observation or to the point that reinforcement learning is inherently more complex and may require more parameters." 
}, { "end": 1473, "start": 1459, "text": " So what do I mean by that in behavior cloning you would chain an expert on a certain task and then have a student clone the behavior or try to imitate the behavior of the teacher." }, { "end": 1491, "start": 1473, "text": " So this is most often supervised loss. There is no exploration, but instead the agent sort of has to simply predict what the teacher would do and then execute that behavior as well. But in the reinforcement learning setting, we have sort of the additional problem of exploration." }, { "end": 1513, "start": 1491, "text": " So we have the additional problem of having to solve the problem of learning with a non stationary data distribution in the sense that the agent observes different parts of the environment and different parts of training and we have the problem of having a graded assignment signal in form of the reward, which is oftentimes noisy and sparse." }, { "end": 1539, "start": 1513, "text": " So what we show in our experiments is that this setting actually really indeed requires more parameters. So if you take the same network architecture and you train it with supervised behavior cloning and then do the iterative magnitude pruning procedure in that setting and then compare this to what sparsity performance results you would get for the reinforcement learning case." }, { "end": 1549, "start": 1539, "text": " So that reinforcement learning starts to create a lot earlier in terms of sparsity than supervised behavior cloning." }, { "end": 1568, "start": 1549, "text": " So one thing just empirically that this says is that if you want to obtain sparse reinforcement learning agents or control agents, you should not use the reinforcement learning problem formulation where you have an agent which wonders around its environment perceives and then learns from our reinforcement signal." }, { "end": 1576, "start": 1568, "text": " But instead use some form of student teacher distillation because that allows you to go sparser." }, { "end": 1605, "start": 1576, "text": " On the other end it also tells us that the reinforcement learning problem essentially can benefit from having more parameters, which is something that until fairly recently was not necessarily clear. I guess it makes me wonder if the agent started by imitating a mediocre agent and then in reinforcement learning was then used to to fine tune this agent to expert would it need a lot of capacity for that or would that be kind of we could kind of just wouldn't need as much elbow room because it already has a lot of energy." }, { "end": 1628, "start": 1605, "text": " So that's interesting. We didn't test that setting. Another setting brought forward the classical offline RL setting where you have a data set and don't have an agent exploring the environment necessarily but you just have transitions in a experience experience replay buffer for example and you have to learn from that." }, { "end": 1648, "start": 1628, "text": " In case you also have sub optimal demonstrations and you could run the same procedure and try to figure out whether or not essentially learning from a static non distribution shift exposed data set also requires less parameters." }, { "end": 1662, "start": 1648, "text": " But on the other end it's fairly hard to compare because usually these agents might not necessarily train up to the performance that the full reinforcement learning agent or the behavioral cloning agent trains up to." 
}, { "end": 1666, "start": 1662, "text": " So it's hard to make the sparsity performance comparisons." }, { "end": 1683, "start": 1666, "text": " The setting that you were talking about the one sort of way you have one initial learning phase in which you do sort of behavioral cloning and then start to fine tune using reinforcement learning is actually one that we didn't invest it yet." }, { "end": 1695, "start": 1683, "text": " I mean I imagine a huge amount of effort goes into defining and running these experiments and analyzing results and it's so easy for people to come after the fact and be like what about this and what about that and there's always a million things you could have done." }, { "end": 1713, "start": 1695, "text": " No, I'm really thankful like these are great ideas and you're right like the deciduative procedure requires you to train networks at different sparsity levels sequentially 20 times and that's not cheap but I think good ideas are also not cheap." }, { "end": 1720, "start": 1713, "text": " So let's move to your next paper that is semantic RL with action grammars, data efficient learning of hierarchical task abstractions." }, { "end": 1741, "start": 1720, "text": " Yeah, so this is a paper together with my former supervisor at Imperial College Aldo Faisal and this paper has sort of a bit of a longer history in Aldo's lab and it actually originates from something that's not related necessarily to reinforcement learning at first sight." }, { "end": 1757, "start": 1741, "text": " So that's probably the evolution of tool use algorithms back in the days and Aldo my supervisor is very much into the concept of so called action grammars." }, { "end": 1773, "start": 1757, "text": " So the idea behind action grammars is that many of our behaviors sort of have repeating sequences which are structured using a form of high level syntax that helps us solve tasks." }, { "end": 1802, "start": 1773, "text": " So one classic example might be opening a door and closing it. They are sort of primitives and they're like grabbing for the knob which I used multiple times in solving that task and you can think of them as being some form of production rule or grammar like structure that is iterated or used multiple times which can be reused several times." }, { "end": 1816, "start": 1802, "text": " And in sort of an old paper he and colleagues looked at how such grammatical structures might capture the complexity of tool use over time." }, { "end": 1832, "start": 1816, "text": " And what they show is that these grammars sort of became more and more complex with evolution unfolding and that humans developed more and more sophisticated tool use sort of grammatical algorithms I would say." }, { "end": 1846, "start": 1832, "text": " And in the paper that you were referring to we're sort of looking at whether or not one can also use such grammar structures as building blocks for hierarchical reinforcement learning." }, { "end": 1862, "start": 1846, "text": " So traditionally in reinforcement learning we were sort of a single step actions and agents have to optimize some form of aggregated reward metric based on executing actions once they're at a time." }, { "end": 1878, "start": 1862, "text": " But in hierarchical reinforcement learning the idea is that it might be more data efficient to essentially learn hierarchical policies which execute certain subpolices over multiple timestamps." 
}, { "end": 1902, "start": 1878, "text": " And one very classical example of this hierarchy reinforcement learning framework are our options and in options you define essentially subpolices for an initiation set of states and these subpolices are then basically executed over multiple timestamps until you have a termination criterion and the termination criterion says no." }, { "end": 1914, "start": 1902, "text": " Now you're stopping to execute that subpolicy and you return control back to a higher level policy and then higher level policy then executes the next subpolicy." }, { "end": 1927, "start": 1914, "text": " And one key sort of question in hierarchical reinforcement learning is how can you come up with good subpolices and in the original paper by Sun et al." }, { "end": 1948, "start": 1927, "text": " And the way how they came up with the first set of options was to manually construct them. And nowadays a lot of people are interested in sort of doing automated construction of such options which are essentially simply temporarily extended actions." }, { "end": 1961, "start": 1948, "text": " And in our paper we look at whether or not one can use this notion of grammar and specifically action grammars to define a set of temporarily extended actions." }, { "end": 1970, "start": 1961, "text": " We're not looking at options but we're looking at macro actions which are simply sort of deterministic sequences of primitive actions." }, { "end": 1989, "start": 1970, "text": " So as you said deterministic sequences are they are these actions executed in the compound actions are executed in an open loop way like once you begin you just roll out those actions without regards to what's happening in the environment until until that sequence is done." }, { "end": 1992, "start": 1989, "text": " And is that different than options in that sense?" }, { "end": 2003, "start": 1992, "text": " So yeah in options you have something called a termination criterion or function which in modern setups oftentimes is also state dependent." }, { "end": 2016, "start": 2003, "text": " And if the higher level of controller says execute option one then essentially the subpolicy of option one is executed until the termination criteria says no don't execute anymore." }, { "end": 2026, "start": 2016, "text": " While in our setting and when we use these more simple I guess macro actions you always deterministically executing the full sequence of actions." }, { "end": 2045, "start": 2026, "text": " Yeah I love the connection here with language and action and I don't actually know anything about the neuroscience in this area but I can imagine maybe that our grammar of our tool use carry it over carries over to the grammar of language or ability to use language." }, { "end": 2052, "start": 2045, "text": " And now we're finding in AI that transformers are going through and solving all the different modalities." }, { "end": 2062, "start": 2052, "text": " And so maybe there is something fundamental in those grammars to all the things that we can do which is I think kind of a beautiful insight." }, { "end": 2071, "start": 2062, "text": " And this paper reminds me a little bit of just just in the sense of framing RL as an NLP problem reminds me a bit of decision transformer." }, { "end": 2078, "start": 2071, "text": " And I guess there's more work happening that direction using these NLP tools on RL tasks." }, { "end": 2081, "start": 2078, "text": " Do you see that as a growing trend?" 
}, { "end": 2092, "start": 2081, "text": " Yeah so yeah like you said right now it seems like transformers are taking over many many subfields and at least for offline RL." }, { "end": 2110, "start": 2092, "text": " It seems like they are also very efficient in modeling sequences and their generalization power allows you to create or sample trajectories which are even going beyond or what the agent has seen right in the offline data set." }, { "end": 2134, "start": 2110, "text": " So back then when we worked on that paper there were no transformers yet. And oftentimes I also wonder how data efficient that ultimately is not going right because transformers at least in computer vision require quite a lot of data to really outperform the inductive biases that you get from from a convolution operator for example." }, { "end": 2144, "start": 2134, "text": " And the same sort of applies here where the grammars which we use or which we infer they do not require a lot of data." }, { "end": 2152, "start": 2144, "text": " So we use grammar compression techniques from context free grammars to construct our macro actions." }, { "end": 2170, "start": 2152, "text": " And what we do in that setting is we treat a sequence of actions as sort of a sample from a language. So it's a sentence essentially and then we define primitive actions as being essentially our vocabulary." }, { "end": 2193, "start": 2170, "text": " And then we can construct rules which generate these actions in a hierarchical fashion using these grammar compression algorithms and they run super fast right so oftentimes they they might be a little bit heuristic but they create the grammar which is then ultimately the set of macro actions in no time." }, { "end": 2205, "start": 2193, "text": " While training and transformer offline requires a lot more compute a lot more data and does not come with this inactive bias of a context free grammar that we're using." }, { "end": 2214, "start": 2205, "text": " And furthermore what we're showing is in our paper that you can infer these grammars online as the agent is learning." }, { "end": 2228, "start": 2214, "text": " So we have a set of experiments which is more closely related to offline or imitation learning where you just infer essentially a grammar based on an optimal policy and then use that grammar to train in our agent from scratch." }, { "end": 2238, "start": 2228, "text": " But you can also train an RL agent from scratch and then essentially infer macro actions as you go and as the agent gets better in the environment." }, { "end": 2244, "start": 2238, "text": " And then it's essentially bootstrapping its own learning progress by compressing it into a grammar." }, { "end": 2253, "start": 2244, "text": " Cool. Yeah, I guess I like to think that classical algorithms are always preferred when they when they do apply because they're super fast and efficient and exact right." }, { "end": 2267, "start": 2253, "text": " Yeah, but it's also sort of I guess a trend of our time that ultimately we as computer scientists are interested in general purpose solutions right and yeah it oftentimes feels like." }, { "end": 2283, "start": 2267, "text": " We first come up with the more specialized solutions which leverage inductive biases and then once we have a clue about how things work we we broaden and we generalize and we let the system discover all these inductive biases and even more on its own." }, { "end": 2296, "start": 2283, "text": " So can you say more about how this how this grammar evolves? 
Like I guess as the agent is training, does the set of compound actions, or however you phrase them, does that set grow and change." }, { "end": 2312, "start": 2296, "text": " Through the process. Yeah, so these grammar algorithms also come with their own set of hyperparameters, right, and depending on these hyperparameters you get more or fewer macro actions, and this is actually something you need to control." }, { "end": 2324, "start": 2312, "text": " Oftentimes it makes more sense to be a bit more conservative early on, so that means extracting fewer macro actions initially, and then sort of as the agent gets better extract more of them." }, { "end": 2340, "start": 2324, "text": " And this is also sort of problem dependent, and the way they evolve over time is essentially that initially there might be more macro actions with fewer primitive actions." }, { "end": 2354, "start": 2340, "text": " So short macro actions, not long sub-policies, and towards the end, as the agent learns to solve the task, the macro actions can be robustly longer, so yeah, longer sub-sequences of actions." }, { "end": 2361, "start": 2354, "text": " And does it discard any earlier macros that it found that may not be applicable anymore, or is it kind of an accumulating set?" }, { "end": 2389, "start": 2361, "text": " Yeah, so some of them can be discarded, that happens naturally, but oftentimes you have a parse tree, like in a context-free grammar you essentially compose different production rules into each other, right, so you have a hierarchy of macros and rules that sort of compose of smaller sub-rules, so smaller sub-rules make up larger, longer rules." }, { "end": 2397, "start": 2389, "text": " So this paper was in 2019. Are you or your colleagues planning to pursue this direction more, or has more happened in this area since?" }, { "end": 2418, "start": 2397, "text": " I moved on, so I moved to Berlin after my time at Imperial, but in the years afterwards other students worked on the project, and one sub-project in which I was partially involved was trying to scale our experiments to the setting of Atari, for example, and we looked sort of at the low data regime." }, { "end": 2440, "start": 2418, "text": " So only a few learning transitions, not the full 200 million frames, and back then we could also see that enhancing algorithms like DQN with such macro actions can actually outperform dueling DQN baselines, and back then my co-author," }, { "end": 2448, "start": 2440, "text": " Peter, he had a really smart idea for how to construct macro actions in experience replay buffers." }, { "end": 2467, "start": 2448, "text": " So usually in sort of classical DQN our experience replay buffer consists of one-step transitions, right, so an agent finds itself in a specific state, takes an action, receives from the environment a reward as well as the state it transitions to." }, { "end": 2475, "start": 2467, "text": " And so this tuple is stored in the replay buffer and then we sample batches of these transitions to construct gradient estimates." }, { "end": 2484, "start": 2475, "text": " But in our setting, where we're interested in temporally extended actions, our replay buffer needs to sort of account for these macro actions."
}, { "end": 2509, "start": 2484, "text": " And one simple way to do so would be to say okay we have an action then you execute the macro action and we store that macro action as sort of a number in our replay buffer and encoding that action and then we store the final state that was observed as well as sort of the sort of return that was accumulated during the time we executed the macro action." }, { "end": 2518, "start": 2509, "text": " But this would sort of only give us macro actions in our replay buffer whenever we execute actually a specific macro action." }, { "end": 2538, "start": 2518, "text": " But what Peter los came up with is that you can also use the rest of the replay buffer which might consist of primitive actions so one step actions to construct macro actions based on those right so you can chain together different one step transitions into transitions which are" }, { "end": 2547, "start": 2538, "text": " essentially macro actions but they were not executed as macro actions but just as sequences of primitive actions." }, { "end": 2565, "start": 2547, "text": " And this is approach he called hindsight action replay where we're essentially composing different transitions which executed a macro into transitions which then can be used to actually train our value estimates for a specific macro action." }, { "end": 2589, "start": 2565, "text": " That's really cool very innovative. Yeah and you could also like we didn't exploit this but you could also imagine that you construct these macro actions using different discounts right so you have all the primitive actions available and you could think of constructing different macro transitions using different discounts and thereby learning essentially at different timescales right." }, { "end": 2608, "start": 2589, "text": " We didn't do that but this is sort of something you could explore cool okay so let's move on to MLE infrastructure I see on your homepage that you maintain this this package can you say more about MLE infrastructure and maybe can you situate it a bit by comparing it to other frameworks in that space sure." 
}, { "end": 2637, "start": 2608, "text": " So the MLE infrastructure is something that's actually not RL specific but more a set of tools that I've built throughout the last years when writing papers that allow me to execute and to train neural networks in a distributed fashion right so oftentimes I'm interested in exploring sort of the sensitivity of some architecture across a range of hyperpermeters and I wanted to so for multiple random seeds" }, { "end": 2658, "start": 2637, "text": " and I don't want to write each time sort of a submission script that executes this on a cluster and I don't want to have to manually SAP the results onto my local machine but instead I wrote essentially this set of tools the MLE infrastructure which comes with the set of" }, { "end": 2674, "start": 2658, "text": " sub packages that help me organize and orchestrate these experiments so for example in there I have a tool called MLE hyper opt which allows me to very easily and lightweight engineer grid searches or random searches or" }, { "end": 2703, "start": 2674, "text": " basin optimization pipelines and this all integrates and works very well on grid engine clusters, slurm clusters and with Google Cloud Platform for sure machines and the whole motivation behind it is mainly that places like Google open AI they have full time research engineers right they have the money to to pay" }, { "end": 2731, "start": 2703, "text": " for setting up a good and efficient infrastructure for testing hypothesis and for doing science but oftentimes in in the world or in the academic world if you're not at a place like Miele or Stanford or Berkeley and these structures might not necessarily be in place and what I am to provide with the MLE infrastructure is a set of simple tools that can be" }, { "end": 2759, "start": 2731, "text": " used by anyone like me who doesn't find themselves in a PhD program at these institutions and it compares to packages like ray tune or sacred from India which provides similar sort of services but the main selling point behind the MLE infrastructure is that it's some modules like the hyper per meter search or the utilities" }, { "end": 2787, "start": 2759, "text": " I use to log or the utilities I use to schedule jobs they are sort of modular and independent so that means you don't have to buy into the full ecosystem installing a SQL database or something like that but you can sort of pick and choose what part of the infrastructure you want to use and I provide sort of a wrapper in the MLE toolbox which" }, { "end": 2795, "start": 2787, "text": " is a tool to run sort of standardized and protocol distributed MLE experiments but you can also just use parts of it." }, { "end": 2804, "start": 2795, "text": " So are other people using it too or is it is it mostly for yourself? Yeah no so basically I developed or I released most of this June to last two to" }, { "end": 2822, "start": 2804, "text": " an half months and they are already like a couple of people opening issues and starting to work on pull requests and yeah people in my lab I know starting to use it and it's it's up and coming let's put it that way or at least that's how I like to think about it" }, { "end": 2849, "start": 2822, "text": " and yeah I'm going to spend the next one and a half years finishing my PhD working with a daily and I have big plans. 
Very cool, yeah, I can see there's a huge amount of plumbing behind every chart and experiment that we see in these papers. There's a big gap between just being able to sketch out your code and then actually running that at scale and having it reproducible and all that, so yeah, I think I'll probably check that out." }, { "end": 2861, "start": 2849, "text": " Like all of the papers we discussed today, except for the hierarchical reinforcement learning paper, were also conducted using the toolbox, right, so you can imagine that training" }, { "end": 2876, "start": 2861, "text": " in the lottery ticket settings also requires quite a lot of engineering, in the sense that you have to train these networks sequentially 20 times and you want to keep track of all the checkpoints and all the progress, and all of this is also orchestrated with the toolbox." }, { "end": 2889, "start": 2876, "text": " So most of this sort of developed linked with the projects that I've been working on. Great, we'll provide a link to this repo in the show notes, and that was a great testimonial." }, { "end": 2903, "start": 2889, "text": " Okay, so let's move on to meta-RL. So we talked about some of your work in meta-RL, but in general on this show we've talked about different definitions of meta-RL, and you touched on the definitions early on." }, { "end": 2930, "start": 2903, "text": " I guess I have come to refer to two of these approaches as, I call them, the Finn and the Faust approaches. I guess using RL to discover new RL algorithms is more like Aleksandra Faust's work, and then using pre-training on related tasks to let an RL agent quickly do fine-tuning" }, { "end": 2946, "start": 2930, "text": " I think of as related to Chelsea Finn's work, like MAML. Can you say anything about how you see the scope of meta-RL, and do you have your own way of thinking about the different types and definitions?" }, { "end": 2948, "start": 2946, "text": " How do you define meta-RL?" }, { "end": 2966, "start": 2948, "text": " So basically in the last five years a bunch of different meta-RL, or more generally meta-learning, algorithms sort of popped up, and I think the distinction that you've made is sort of adequate, but the way I like to think about it is more in terms of inductive biases." }, { "end": 2978, "start": 2966, "text": " So something that is shared across all these formulations is that you have a task distribution and that task distribution has some form of overlap." }, { "end": 2995, "start": 2978, "text": " And given that overlap we're essentially interested in finding the best inductive biases that allow our agents or networks to adapt quickly to a new task that is part of that task distribution, or at least not too far away from that task distribution." }, { "end": 3008, "start": 2995, "text": " And using tools like model-agnostic meta-learning allows you to come up with one inductive bias in the form of an initialization shared across tasks." }, { "end": 3019, "start": 3008, "text": " Like for example in a vision task this might be the inductive bias of having an edge detector in the early layers, and this is most definitely shared across many tasks." }, { "end": 3029, "start": 3019, "text": " But another inductive bias might be a learning algorithm, and that learning algorithm might be very much tailored to the task distribution that you're looking at."
}, { "end": 3038, "start": 3029, "text": " So that learning algorithm might completely abstract away certain details of the environment or the transitions that the agent makes and it might focus on others." }, { "end": 3055, "start": 3038, "text": " So in my mind these are maybe two sides of the same coin but you can also think of many other types of inductive biases that you might come up with." }, { "end": 3071, "start": 3055, "text": " So for example prototypical neural networks are a different approach to meta learning or you might even think of the Larry ticket procedure like this iterative magnitude pruning procedure that I spoke about as some form of inductive bias discovery." }, { "end": 3081, "start": 3071, "text": " And nowadays many people like for example Lewis Kirsch has been working on discovery new learning algorithms." }, { "end": 3087, "start": 3081, "text": " And I think that's a really promising direction to take." }, { "end": 3093, "start": 3087, "text": " But it's not necessarily trivial." }, { "end": 3102, "start": 3093, "text": " Can you talk about any other advances in meta oral lately that you find interesting what's going on in that field that you find exciting." }, { "end": 3113, "start": 3102, "text": " So one set of algorithms that I'm very excited about especially in the context of reinforcement learning is the work on meta gradients." }, { "end": 3122, "start": 3113, "text": " And someone who has pioneered this work is for example Tom Zahavi and a set of researchers at Google deep mind." }, { "end": 3146, "start": 3122, "text": " And the idea there is that in reinforcement learning we oftentimes have hyperperimeters like we spoke about the discount parameter or other parameters like the amount of important sampling correction that you might want to do in algorithms like Impala." }, { "end": 3166, "start": 3146, "text": " And these are usually set to static values. So oftentimes you as a designer have to choose them a priori and then they might not change or you might have a exploration parameter like an absolute and an absolute greedy schedule which changes slowly over time and linearly decays." }, { "end": 3187, "start": 3166, "text": " But it would be really cool to have a system that automatically tunes these algorithms in an end to end fashion. And what recently sort of has been popping up is that you can use sort of the same higher order gradients as you would use them in mammal to also optimize these parameters." }, { "end": 3198, "start": 3187, "text": " And especially in the context of reinforcement learning where there is non-stationarity this seems to be very effective." }, { "end": 3212, "start": 3198, "text": " And you can not only optimize online hyperperimeters like the discount factor but you can also offline try to optimize or discover the entire RL objective." }, { "end": 3236, "start": 3212, "text": " And if you think about it oftentimes in reinforcement learning the objective functions that we use fairly historic in some sense like the mean squared Bellman error is something that came out of approximate dynamic programming right and trying to to minimize some Bellman error right." }, { "end": 3248, "start": 3236, "text": " But an agent might actually learn better initially using a completely different objective which emphasizes other things or discounts differently." 
}, { "end": 3265, "start": 3248, "text": " And in this meta-grainian work there has been a lot of follow up work trying to show that you can even sort of parameterize entire objective functions using black box neural networks and then optimize these offline on a set of tasks that yields a network or objective function." }, { "end": 3274, "start": 3265, "text": " That then allows the agent to learn more effectively from scratch essentially." }, { "end": 3286, "start": 3274, "text": " So I think all of these approaches which try to make more and more of the reinforcement learning pipeline be end to end discoverable and tunable is really promising." }, { "end": 3307, "start": 3286, "text": " Where do you think this is going long term and like do you see a holy grail in terms of meta RL what what kind of meta RL achievements would make you think like we've really arrived at powerful mental learning for RL if we get there what would be the effects on on the rest of AI you think about that stuff." }, { "end": 3322, "start": 3307, "text": " Ultimately the vision from your Schmidt Uber in the sense that we might aim for systems which self referentially refine themselves in a hierarchical loop is one that's that's very appealing right." }, { "end": 3334, "start": 3322, "text": " So right now we're mainly talking about systems where we stack one layer on top of the other right so we have higher order gradients which optimize certain parameters in our system." }, { "end": 3350, "start": 3334, "text": " But we're never thinking about going one step further in the hierarchy and there are many reasons for that related to sort of the variance of the gradients computational efficiency and other factors." }, { "end": 3373, "start": 3350, "text": " But I think long term I would be interested in having systems which go go beyond that right so not only try to meta learning a certain set of primitives or ingredients but sort of render more and more exposed to such meta gradients on meta learning more generally." }, { "end": 3386, "start": 3373, "text": " So is this the fuse of what people talk about when they talk about super-realms, intelligence explosion and automatically improving AI when it gets down to it is this really path to those things." }, { "end": 3393, "start": 3386, "text": " This is a speculative question and I'm not sure if I'm senior enough to actually give you a good answer to that." }, { "end": 3405, "start": 3393, "text": " I can say that it algorithmically excites me and I feel like it's the way to go but I can give you a good future prediction or anything like that." }, { "end": 3415, "start": 3405, "text": " I think we're probably at the same time closer and further away from these utopian or dystopian endeavors." }, { "end": 3433, "start": 3415, "text": " Cool, okay, so let's move on to economics and RL. Now I see you did your undergrad in economics and you mentioned that in your ML Street talk podcast interview which I'll link to as well in the show notes but you have this background in economics and I think that's not too common for someone working in RL and I find that super interesting." }, { "end": 3448, "start": 3433, "text": " So does economics influence your view, your approach at all to your machine learning work? So actually much of the formalism of RL is fairly present in economics." 
}, { "end": 3472, "start": 3448, "text": " So many macro economists use Markov decision processes to model household behavior and oftentimes an agent in economics is also learning and not necessarily learning but interacting and making decisions like savings and investments." }, { "end": 3488, "start": 3472, "text": " And recently people from Salesforce also tried to to sort of reframe many of the fundamental economics problems like taxation and unemployment decisions into the word of reinforcement learning." }, { "end": 3502, "start": 3488, "text": " To me, actually I decided to move out of economics because I felt like the level of description was not necessarily the most informative." }, { "end": 3521, "start": 3502, "text": " So oftentimes in economics we're dealing with highly aggregated measures right so inflation is an estimate based on a basket of goods and unemployment rate is always estimated over many individuals and the modeling has to work on that aggregated level." }, { "end": 3550, "start": 3521, "text": " And since otherwise it would be hard to reframe or to actually capture all the heterogeneity in the population. And I feel like in reinforcement learning we don't necessarily need to go that way right so a lot of very exciting work in the field of multi agent reinforcement learning by a cop first or for example can account for heterogeneity across agents and I think there's a lot of potential in that." }, { "end": 3562, "start": 3550, "text": " And the sense that AI may also inform modern economics in terms of policy making and increasing the well being of all of us right that's the ultimate goal I would claim." }, { "end": 3567, "start": 3562, "text": " So it's interesting you mentioned that Salesforce paper I think that was the AI economist." }, { "end": 3576, "start": 3567, "text": " And so so is what you're saying that you you would consider looping back to to economics or is that more flight of fancy." }, { "end": 3595, "start": 3576, "text": " So I think right now we're seeing that like the the reason breakthroughs by by deep mind in mathematics chemistry protein folding it seems like there is no discipline where there is no room for machine learning to improve certain things right." }, { "end": 3624, "start": 3595, "text": " And not saying machine learning is going to substitute every field out there and but I think it's going to help scientists test new novel hypotheses and this ultimately I think seems like the hard sciences are more tractable at this point right like it's not like we're getting to the point of doing psychohistory like as mos predicting people's behavior over long times bands or something it seems like things like chemistry and physics and mathematics are very." }, { "end": 3643, "start": 3624, "text": " Very neatly defined and so our algorithms can can attack them but but maybe things like sociology psychology economics are are still very very squishy and how could we possibly optimize those things with these these kind of tools that need everything to be so well defined." }, { "end": 3667, "start": 3643, "text": " That's a good point but at least in economics there certain projects which for example try to to measure inflation in a more online fashion right and not only quarterly but sort of have an index that is online updated using machine learning essentially so I'm not sure if we're going to have an economics transformer in the next two months." 
}, { "end": 3686, "start": 3667, "text": " But I feel like as we get more comfortable with these techniques and we can see more parallels across disciplines as well and I think there may be many opportunities to to support economics and yeah they're also a bunch of people working on this right now." }, { "end": 3706, "start": 3686, "text": " I mean why is it so squishy it's because we don't have sensors everywhere we don't really know what's happening in in high resolution and that might actually change with these digital currencies the central bank digital currencies if you know if governments could could actually measure all the transactions and all the which is in one sense totally dystopian." }, { "end": 3715, "start": 3706, "text": " But in other sense you would actually allow for optimizing a lot of things that we wouldn't really think about as being possible to optimize right now." }, { "end": 3744, "start": 3715, "text": " Yeah and it's obviously fairly apparent that we need a lot more work on the ethics side to really make sure what we want to expose to that blind or somewhat blind optimization and what we don't want to expose right like especially with digital currencies and sort of a move towards decentralized or centralized ones it's a big question what we want to keep anonymous and what not and what types of analyses we can do in." }, { "end": 3773, "start": 3744, "text": " What cases right so yeah I think all of these things should be taken with a grain of salt for sure so causality seems really important in economics and as a long history in economics and I guess RL is coming around to to causality do you do you spend much time thinking about causal angle and you have any comments about that I used to spend a lot more time thinking about causality when I still wasn't economics." }, { "end": 3800, "start": 3773, "text": " But back then or economics and or causality in economics is a lot about estimating causal effects of some form of policy intervention right so you might think of how do employment decisions of workers change if certain policies implemented and you want to estimate some form of impact on on sort of employment based on that policy being enacted." }, { "end": 3820, "start": 3800, "text": " And in economics usually you refer to so-called pseudo natural experiments so you have a data set where there was some form of policy intervention and then you try to estimate based on the data set and what the effect might be." }, { "end": 3848, "start": 3820, "text": " And then there's a lot of statistical debate where you should account for certain unobserved variables like for example that the workers might have selected themselves to be more or less affected by the policy intervention and then the whole game starts about sort of what are the right standard errors what are the right significance levels to assess these effects and these effects might be time varying." }, { "end": 3857, "start": 3848, "text": " And then the big question is also how does a quasi natural experiment that happened 50 years ago affect what's happening right now." }, { "end": 3877, "start": 3857, "text": " And to tie this back to reinforcement learning and reinforcement learning we have the huge benefit that oftentimes we have access to simulators and oftentimes it's a lot easier for an agent to to test certain hypothesis and in exploration based way using sort of insights from causality." 
}, { "end": 3906, "start": 3877, "text": " So in many ways the setting in which we find ourselves simulating agents is a lot easier or check the rules the wrong word but allows for more experimentation than the one in economics it's a lot harder to actually play around or enact interventions and economics then it is for reinforcement learning agent to essentially be exposed to an intervention." }, { "end": 3929, "start": 3906, "text": " Do you see do you see economics being less dismal going forward you see any approach to that to getting closer to the RL paradigm where there are simulators that are meaningful or we can draw conclusions more clearly like how is that progressing on the economic side do you think there's hope there will always be this dismal." }, { "end": 3948, "start": 3929, "text": " Yeah so I guess companies like like Google already run billions of experiments every day every month in the sense that it's very easy to change something in in the UI and do some form of a B testing and observe the behavior of users." }, { "end": 3975, "start": 3948, "text": " And I think more and more governments are also moving into that direction and trying to test certain nudging techniques for example and try to assess effects of small changes on a subset of individuals and yeah I'm not sure whether or not one can really test large intervention and whether or not one actually needs a simulator to do so." }, { "end": 4001, "start": 3975, "text": " But yeah this is sort of the general problem that one can face when when doing simulation right that the simulation might be always an abstraction or a downscaled abstraction of what's happening in the real world and then you have to ask yourself how much complexity do you actually need to to grasp the underlying real phenomenon while not increasing the amount of simulation time." }, { "end": 4005, "start": 4001, "text": " So drastically that things become too slow to actually gain any insight." }, { "end": 4022, "start": 4005, "text": " I mean it's always struck me that it seems if we ever want to get to scientific government governance like we're are most of what we do in physical policy and how governance is done is just legacy stuff traditional things." }, { "end": 4046, "start": 4022, "text": " And so if we ever ever get to want to get to the point where we have more optimized society it seems like we want to be testing more radical ideas as quickly as we can and what would that look like if we had if we had a government that was part of its mission was to get insight on how to do good governance and I don't think that's really a really a priority these days." }, { "end": 4075, "start": 4046, "text": " I agree but I also feel like a point that Jonathan Frankl oftentimes makes is that our leaders have to have the tools right they need to be educated in what we're doing essentially right which is machine learning or doing research at least to a certain degree because if you don't know necessarily what's possible and what's feasible right now it's really hard to come up with such governments design that might lead us into the future." }, { "end": 4098, "start": 4075, "text": " So I think a big initiative has to come about for educating our future leaders in being able to tackle these challenges of ethical decision making and exploring diverse policies while not risking to discriminate or hurt damage people in our society." 
}, { "end": 4108, "start": 4098, "text": " So we covered a lot of things but on a high level are there other things that we didn't talk about today happening in RL that are particularly interesting for you Robert." }, { "end": 4114, "start": 4108, "text": " So I think I highlighted the work on meta gradients which which I find deeply fascinating." }, { "end": 4143, "start": 4114, "text": " But one question that's sort of intriguing me right now is also related to the lottery ticket hypothesis and that's the question why we actually meet so many parameters in reinforcement learning to train and I realized fairly recently that if you take a step back and you don't use some of the classical RL problem formulation and classical RL algorithms." }, { "end": 4150, "start": 4143, "text": " But you resort back to evolutionary strategies which directly try to optimize some form of fitness or return score." }, { "end": 4159, "start": 4150, "text": " It's actually possible to obtain policies which have orders of magnitudes fewer parameters and oftentimes an evolution strategies." }, { "end": 4172, "start": 4159, "text": " It's even better to have fewer parameters because that makes the search more efficient larger search spaces in evolutionary optimization is oftentimes harder than in smaller parameters spaces." }, { "end": 4195, "start": 4172, "text": " So I'm really interested in figuring out what the role of over parametrization in reinforcement is for reinforcement learning is as compared to in supervised learning and I feel like evolution strategies might provide a different angle to when permatrization actually is harmful and when it's beneficial." }, { "end": 4203, "start": 4195, "text": " Cool so is that a part of your plan for moving forward in your research and you want to see more of your yeah okay." }, { "end": 4223, "start": 4203, "text": " Exactly so my PhD is mainly focused on meta learning and sort of the relation to to evolution but not only on a high level where you could think of meta learning being an evolutionary process but also relating to evolutionary strategies and how one could even try to meta learn evolutionary strategies." }, { "end": 4233, "start": 4223, "text": " But also think of inductive biases that might be beneficial for doing evolution and yeah the other way around." }, { "end": 4241, "start": 4233, "text": " I think you've given me a new perspective on that phrase inductive biases because I never would have thought of learning algorithms as an inductive bias." }, { "end": 4243, "start": 4241, "text": " I guess I wasn't thinking that generally." }, { "end": 4257, "start": 4243, "text": " I think you're expanding our minds today Robert thank you so much I'm really grateful I really enjoyed this conversation with you and thanks for taking the time to share your time and your insight with us at talk or all thank you Robin thank you for having me." } ]
NeurIPS 2021 Political Economy of Reinforcement Learning Systems (PERLS) Workshop
Dr. Thomas Gilbert and Dr. Mark Nitzberg on the upcoming PERLS Workshop @ NeurIPS 2021
https://media.transistor…61c.mp3?src=site
Hi listeners, today we're going to hear about the upcoming PERLS Workshop. That is the Political Economy of Reinforcement Learning Systems. That's at the NeurIPS 2021 conference and it's on Tuesday, December 14th. The link will be in the show notes. So we're going to hear from co-organizer Dr. Thomas Krendl Gilbert and also Dr. Mark Nitzberg about the PERLS Workshop. Dr. Thomas Krendl Gilbert is a postdoctoral fellow at the Digital Life Initiative at Cornell Tech. Thanks for joining us, Dr. Gilbert. Thanks for having me. So you are co-organizing the workshop? Yes, so the PERLS community is an outgrowth of conversations that I started during my time at the Simons Institute one year ago. The workshop itself is being co-organized by myself alongside Stuart Russell, who is a professor of computer science at UC Berkeley and director of the Center for Human-Compatible AI. I'm also co-organizing with Michael Dennis, who is a graduate student at CHAI; Aaron Snoswell, who I first met in the Simons program a year ago and had conversations there about PERLS; and then finally Tom Zick, who has a PhD from Berkeley as well. So why is PERLS important? Why do you want people to come to this? So PERLS is important for the same reason that reinforcement learning is important. Reinforcement learning is widely considered the single most viable technical path to general capabilities. It's also, I think, the most interesting institutional expression of what AI is going to do to society. And what I mean by that is that what makes reinforcement learning different from other branches of machine learning is that at the end of the day it's about agents that are actively learning how to navigate an environment and acquire a behavior policy on terms that, while overseen by a designer, involve many more types of feedback than there are in other branches of machine learning. And it's precisely that potential for these different types of feedback to be brought into the reward function and to be at stake in the way the environment has been specified that makes RL very exciting, not just at the technical level, but also for what we think we even mean by an intelligent agent that is able to interact socially, either in the sense of with other agents or just in relationship with human domains. What do you think might go wrong or could go wrong if people don't pay attention to this? If no one comes to PERLS and we don't talk about this and we don't plan for this as a society, what's the danger here? I think the danger here is a few things. So one danger is that we are going to end up with agents that are smarter than we even know how to document or than we even know how to account for. So again, just a very simple example would be if you trace out what it would mean for a recommender system used in social media to send content to your feed that not only you're likely to engage with, which is already the way that machine learning is used in recommender systems today, but that furthermore has learned how to slot you and nudge you over time to adopt behaviors and patterns of beliefs that will make you much more likely to adopt a certain world view that is suited to the kind of optimization it could provide. That's a different ballgame. There are a lot of interesting ways in which reinforcement learning points to a world in which AI is going to nudge us into forms of belief and behavior that we don't even understand until it's too late.
And the reason is that it learns dynamically from how it interacts with its environment rather than just statically. It's beyond just making predictions and classifications in a vacuum; what it's instead doing is intervening on a domain to restructure it according to its own specification. It's a tool for world making rather than just representing. And that's why it's transformative and that's why you need to bring in political economy to really understand it. Can you tell us more about where this idea came from? Yeah, so the seed of PERLS was originally an outgrowth of my time spent as the Law and Society Fellow at the Simons Institute in fall 2020. So at the time, the Simons Institute, which is based at UC Berkeley, which is where I got my PhD, was pursuing a program on the theory of reinforcement learning for the entire semester. And they brought me in as an expert on the legal and social implications of what this technology might mean. And in accordance with that, I was very interested in organizing a reading group with computer scientists on the topic of how it is that this different approach to optimization that reinforcement learning makes possible will affect specific human domains, like for example transportation with self-driving cars or content recommendation in social media. And how do we begin to approach that question comparatively across domains? Other than just thinking of optimization in some narrowly abstract sense, how do we try to index it into human problems and human activities, and then compare across those activities to see where certain risks are most likely to emerge first or more intensely? And I think a lot of people actually were very excited to think that way. That isn't the way that most conversations about RL had been presented to computer scientists before. I was interested to learn myself. We kind of got the ball rolling by looking at papers together. And eventually that snowballed into us really coming up with a new semantics for how we even talk about reinforcement learning, where again the story ends up being much more about how you manage and intervene on the institutional dynamics that already exist in human domains, rather than just training an agent in some strictly abstract environment according to whatever terms the designers chose for the specification. Who would you say the PERLS Workshop is for? Like, who do you want to attend? I think that there are two core audiences we're trying to reach here. One is actual computer scientists, so people who actually work with reinforcement learning, who try to use it to make headway on problems that have maybe been approached before in supervised learning, but they're interested in whether they can squeeze more out of them using RL. Maybe there are problems that were previously unsolvable, but they're interested in whether reinforcement learning can be brought to bear on them, particularly in the context of very grounded applications like self-driving cars or content recommendation or the optimization of an electrical grid, for example, how it meets human domains, on the one hand. And the other audience we're trying to reach is policymakers, people who actually think about the law, who think about standards, who think about potential statutes that maybe don't exist right now, about how we should govern AI, how we should oversee these new kinds of systems that we're building.
And what we're most interested in is to examine these places where it's not just that these communities would benefit from more cross-fertilization, like it's not just that they could teach each other new things about what RL can do that we couldn't do before and how we can respond to that using policy. It's also to really look at RL itself as an approach to policy, to really see these two different fields as working on the same questions, because that's really how I'm defining political economy: how do we manage the institutional dynamics of different domains, and how do we compare across domains towards the end of doing a better job of specifying what we mean by a good society, a good set of metrics, a good kind of social behavior, how do we want that flow to be structured. This is exactly the same question that people in the law have asked since antiquity, and it's exactly the question that reinforcement learning faces today. Do you want to say more about what kind of conversations you think will happen there, that you want to happen there? Yeah, beyond what I've just mentioned, which is really just this more conceptual grappling with how we conceive of the law through the lens of RL and, vice versa, how we conceive of the kinds of specification problems or questions that RL poses from the standpoint of the law, I think an important follow-up question to that is how certain emerging areas of legal research present a particular way of thinking about reinforcement learning. For example, what I mean by that is we're going to have several speakers at the workshop who are going to speak to the relevance of this renewed interest in antitrust policy for how we should approach questions of AI governance. I think the reason that's important is that there seems to be, and in fact in my dissertation I argued this, an affinity between the pursuit of something like the reward hypothesis in reinforcement learning, or rather the ability to design rewards and try to optimize them at ever greater planning horizons from an RL standpoint, on the one hand, and what antitrust scholars for 100 years now have referred to as monopoly power. Which is really just to say, when you're optimizing a self-driving car to navigate traffic, there's a difference between teaching an agent how to navigate a four-way stop safely, which I think it's not controversial to say we all want, versus teaching an agent to try to optimize traffic through a city grid. The latter question is not the kind of thing that the big three automakers ever asked before. It's not really a question that policymakers have ever been able to ask before, at least not in a well-framed way. And so we're now going to enter a world where the people building self-driving car fleets are able to concretely envision what it would mean for the city of San Francisco to optimize its traffic grid. And that's a public question. I mean, it's a public problem. And the fact that we're going to enter a world where private companies basically exercise discretion over a question like that, I think, is something that should give both RL practitioners and legal scholars a lot of pause. And I think to really make sense of that question, we have to do a much better job of exploring the intersection between these communities, of helping each other ask our own questions and finding a new kind of language with which to make sense of that question.
So let me just ask you, are you pro-regulation? It's a good question. It's an important question. I would say that I am pro-specification, and what I mean by that is how we define rewards, actions, states, how observable the environment is. All of these bread-and-butter questions that every RL practitioner has to ask themselves, these are questions that law and policy have thought about for centuries. And so when we talk about regulation, I think it's important to ground that question in this larger and somewhat more technical and formal context of specification. Unless we're willing to understand how these two different audiences, I think, can learn from each other and recognize that the questions they ask are the same, I don't think we're likely to come up with regulations that are good. And vice versa, I don't think we're likely to come up with specifications that are optimal, or that at least have the constraints that they should, or that are actually as well suited to the domain as we think they are. I think the problem right now is we have increasingly formal and refined technical work that amounts to basically learning a model in some specified environment, but a lot of existential uncertainty about how well that model conforms to the normative stakes and structure of the domain, the actual human context within which that model will be deployed. So if our goal is to build actual systems rather than just models, we have to begin to approach specification as a much larger and much older problem than strictly as it's been understood through the lens of reinforcement learning. Now whether that means regulation or whether that means a market-based approach, that's somewhat beyond my pay grade, and really the reason PERLS exists is to even make sense of that question. We should explore that question and do research on it rather than dogmatically assume a yes or no answer. Awesome. Okay. Anything else you want to let us know before you go? I appreciate the opportunity to talk about PERLS. I really would recommend that if you're interested in anything I just said, or anything I just said raises your hackles, please show up to the workshop and make your voice heard. The purpose of the workshop is to make a space within which new kinds of questions can be asked and new kinds of answers can be posed. It's not to commandeer this space and it's certainly not to structure it according to a particular political agenda. It's really to make it so that we're doing the best and most objective job we can of making sense of the future. Thank you, Dr. Thomas Gilbert. Thank you for having me. Dr. Nitzberg is Executive Director of CHAI at UC Berkeley, that is the Center for Human-Compatible AI, as well as head of Strategic Outreach for BAIR, that's the Berkeley AI Research lab. Welcome, Dr. Nitzberg. Thank you for having me. So we've had a couple of folks from CHAI on here before, Thomas Gilbert and Michael Dennis, and Thomas of course told us about his work on PERLS. Can you remind us, what does PERLS mean? It stands for Political Economy of Reinforcement Learning Systems, and I think of it at two levels. Reinforcement learning itself has a kind of political economy within the very nature of RL, where you specify actions, observations and rewards for a given task and domain, and then of course at the level of RL systems in the context of society and their constituents. Can you give us a hint about what you're most excited about in terms of this workshop?
Well, the questions at the center of PERLS are really how should these constituents, technologists, social scientists, policymakers and users, civil society, collaborate to design and use and regulate these massively influential systems. The biggest commercial enterprises in history, the tech platform companies, connect more than half the world's population and they affect what we see and read and do every day. So these algorithms influence so much of human life. We absolutely need the constituents connecting the technological why and how to the human why and how. And so what I'm excited about in terms of the workshop is exactly this: human history's fastest and most widespread technology rollout. NeurIPS is one of the big technology meetings each year for machine learning, and we absolutely need to factor the social considerations into what these tens of thousands of scientists meeting at NeurIPS are building and perfecting, not as a separate line of inquiry but as part of the design of the systems. So PERLS is really about policy, I saw in the description of the workshop. Is that right? Yes, and about the dialogue among the constituents. Is there some assumption that we need or might want regulation in this area, or are there some alternatives to that? There is. In fact, more broadly, there's a need for regulation just as there is for any technology that has a huge amount of influence. So if you look at the creation of transport vehicles, aircraft and so forth, there's a reason for regulation. You need standards, you need safety standards, you need testing standards, process and oversight bodies to assure that things are done in a way that minimizes harm, maximizes benefit and so forth. So what might you say to people who think that a government isn't very good at regulating things, or that RL is beyond the limited capacity of government to regulate well, and that they probably wouldn't do a good job of it? That's a fair point. I'm an armchair policy person, and I think that what I have learned is that regulation is very, very hard to do right. And in fact, I believe that we, you know, on the technology side and on the science side, we can really get some of the groundwork right, so that, you know, there are appropriate technical guidelines for better ethical, you know, social outcomes. And that's why I think this workshop may make it possible to help in crafting better regulation, and possibly simpler. Maybe we can just touch on why the political economy aspect is important for RL specifically versus other types of ML. Like, should we be talking about the political economy of unsupervised learning or ML in general? Can you remind us what is special about RL in terms of political economy? Supervised and unsupervised learning produce, in a sense, static, broadly stable systems. Even the epic systems, the semi-supervised learning systems like GPT-3, are largely static. It was trained before COVID and Biden, so if you interact with it, you'll see its output seems to be living in the past. But this is completely different with reinforcement learning systems, which really are meant to operate in dynamic environments. Not only at the individual, single one-person interaction level or one machine, but in these vast ecosystems of the platform companies like Uber and Google and so on, optimizing not just one ride or one recommendation, but vast crowds of drivers and riders or users and content.
So to me, the most important aspect of reinforcement learning over the other machine learning techniques is the case where the system is interacting with people. Content recommender systems are the prime example. And I lost no one to COVID, but I've lost friends to QAnon. And I think reinforcement learning has something to do with that. It's really the most powerful kind of machine learning when it comes to influencing human thought and therefore society. So I guess then you're also considering bandits and those types of recommenders as within scope here for PERLS? Absolutely. So what about the argument that RL is not the issue, but it's all about the application area? For example, maybe it makes sense to regulate policies on cars, but RL could also be used to control a car in a video game and we wouldn't really care about regulating that. So maybe it's really about the fact that it's a car. Or is it the fact that it's a policy, or that it's an RL policy versus a PID or an imitation learning policy? Or is it really about the fact that it's a car and cars are dangerous, and a similar argument for social media, since now, as you say, it's clear that social media can affect societal health as well. Is it really about the application area, or is it really about the fact that it's RL, like what technology is powering it? What do you think about that? I like the question. Of course, the application matters. But reinforcement learning is at the core of these really vastly, broadly adopted systems. I'm trying to think of an analogy that works here, and the best that I can come up with is grains, like wheat. Of course, it matters whether you're making pasta or cookies, or whether you're part of a huge national bakery or a small one, but what really matters here is that it makes sense to focus on how you make the wheat. And that is because it is so broadly adopted. And it has its own political economy. It affects masses of people in terms of nutrition and in many other ways. And so I think reinforcement learning is somewhat similar. There are areas in which it's very unlikely to have a broad, harmful effect or affect a large swath of people. But it so clearly has applications that are affecting all of society, and so it deserves its own attention. I see you wrote a book called Solomon's Code. Can you share how that relates to the theme of PERLS? Well, absolutely. Have you got all day? There are really so many ways in which these systems are affecting humanity in everyday lives and industries. And because of its nature, reinforcement learning has some real risks. If you pit a reinforcement learning system against a human, the house always wins. And so in this book, we've interviewed over 100 people around the world about attempts at fair distribution of the burdens and benefits and so forth. And we came to the conclusion that you really need the representatives of these different parts of society, the tech designers and the regulators and the users and the policymakers, working together to avoid the situation where you've got technology that lets the house always win and you have the house take over humanity. Our book has been called optimistic, and I know that what I'm saying doesn't sound optimistic, but I actually am quite optimistic. There's so much attention now focused, as it should be, on the societal impacts, which wasn't the case with the rise of automobiles and many other technologies. So we're really paying attention.
I'm hoping that, among other things, meetings like this PERLS Workshop are going to have a place in ensuring safe and beneficial applications. Well, I'm really looking forward to the workshop. Personally, I think the issue of technological unemployment is specifically relevant for RL. RL has the potential to do all sorts of jobs for us, and I don't think we've really answered the question of what happens then in our economy. Well, let's talk about that at the workshop, and I look forward to seeing you. Thanks so much, Dr. Nitzberg, and we'll see you at the NeurIPS PERLS Workshop. Thanks for having me.
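To make the "specification" thread of this conversation concrete, here is a minimal, hypothetical sketch of the choices Dr. Gilbert lists (rewards, actions, states, observability) as they show up in code. The environment, its class name, and its numbers are invented for illustration and are not taken from the episode or the workshop.

```python
import random


class ToyRecommenderMDP:
    """A deliberately tiny recommender-style environment.

    The specification choices an RL practitioner has to make, which the
    conversation above frames as policy questions:
      - state: the user's (hidden) interest level in each topic
      - action: which of the n_topics topics to recommend next
      - reward: short-term engagement only, a normative choice
      - observability: the agent sees clicks, never the latent interests
    """

    def __init__(self, n_topics=3, seed=0):
        self.n_topics = n_topics
        self.rng = random.Random(seed)
        self.interests = None  # latent state, hidden from the agent

    def reset(self):
        # Latent state: per-topic interest in [0, 1].
        self.interests = [self.rng.random() for _ in range(self.n_topics)]
        return self._observe(clicked=False, topic=None)

    def step(self, topic):
        # Transition: recommending a topic slightly increases interest in it,
        # a crude stand-in for the "nudging" dynamic discussed earlier.
        self.interests[topic] = min(1.0, self.interests[topic] + 0.05)
        clicked = self.rng.random() < self.interests[topic]
        # Reward specification: 1 for a click, 0 otherwise. Optimizing this
        # alone is the kind of narrow objective the discussion warns about.
        reward = 1.0 if clicked else 0.0
        return self._observe(clicked, topic), reward, False, {}

    def _observe(self, clicked, topic):
        # Observability choice: the agent only sees its last action and
        # whether it was clicked, not the user's latent interests.
        return {"last_topic": topic, "clicked": clicked}


if __name__ == "__main__":
    env = ToyRecommenderMDP()
    obs = env.reset()
    for _ in range(20):
        action = random.randrange(env.n_topics)  # random baseline policy
        obs, reward, done, info = env.step(action)
    print("final observation:", obs)
```

The point of the sketch is simply that each of these lines encodes a normative decision about what counts as state, reward, and observation, which is the sense in which specification and policy overlap in the discussion above.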
[ { "end": 5.5600000000000005, "start": 0, "text": " Hi listeners, today we're going to hear about the upcoming Pearls Workshop." }, { "end": 9.120000000000001, "start": 5.5600000000000005, "text": " That is the political economy of reinforcement learning systems." }, { "end": 15.16, "start": 9.120000000000001, "text": " That's at the Neurips 2021 conference and it's on Tuesday, December 14th." }, { "end": 16.6, "start": 15.16, "text": " The link will be in the show notes." }, { "end": 21.52, "start": 16.6, "text": " So we're going to hear from co-organizer Dr. Thomas Crendel Gilbert and also Dr. Mark" }, { "end": 23.96, "start": 21.52, "text": " Nitzberg about the Pearls Workshop." }, { "end": 29.96, "start": 23.96, "text": " Dr. Thomas Crendel Gilbert is a post-doc growth fellow at the Digital Life Initiative at" }, { "end": 30.96, "start": 29.96, "text": " Quarrel Tech." }, { "end": 31.96, "start": 30.96, "text": " Thanks for joining us Dr. Gilbert." }, { "end": 32.96, "start": 31.96, "text": " Thanks for having me." }, { "end": 34.96, "start": 32.96, "text": " So you are co-organizing the workshop?" }, { "end": 41.760000000000005, "start": 34.96, "text": " Yes, so the Pearls community is an outgrowth of conversations that I started during my time" }, { "end": 45, "start": 41.760000000000005, "text": " at Simon's Assignments Institute one year ago." }, { "end": 51.24, "start": 45, "text": " The workshop itself is being co-organized by myself alongside Stuart Russell, who is a" }, { "end": 56.36, "start": 51.24, "text": " professor of computer science at UC Berkeley Director of the Center of Human Capatable AI." }, { "end": 62.32, "start": 56.36, "text": " I'm also co-organizing with Michael Dennis, who is a graduate student at Chi." }, { "end": 66.96000000000001, "start": 62.32, "text": " Aaron Snauzwell, who I first met in the Simon's Program a year ago." }, { "end": 72.08, "start": 66.96000000000001, "text": " We had conversations there about Pearls and then finally Tom Zick, who has a PhD from" }, { "end": 73.4, "start": 72.08, "text": " Berkeley as well." }, { "end": 75.08, "start": 73.4, "text": " So why is Pearls important?" }, { "end": 76.44, "start": 75.08, "text": " Why do you want people to come to this?" }, { "end": 81.16, "start": 76.44, "text": " So Pearls is important for the same reason that reinforcement learning is important." }, { "end": 88.24, "start": 81.16, "text": " Reinforcement learning is widely considered the single most viable technical path to general" }, { "end": 90.24, "start": 88.24, "text": " capabilities." }, { "end": 98.92, "start": 90.24, "text": " It's also, I think, the most interesting institutional expression of what AI is going to do to society." }, { "end": 103, "start": 98.92, "text": " And what I mean by that is that what makes reinforcement learning different than other branches" }, { "end": 108.32, "start": 103, "text": " of machine learning is that at the end of the day it's about agents that are actively" }, { "end": 115.47999999999999, "start": 108.32, "text": " learning how to navigate an environment and incorporate a behavior policy on terms that" }, { "end": 121.24, "start": 115.47999999999999, "text": " while those are overseen by a designer, there are many more types of feedback at stake" }, { "end": 125.83999999999999, "start": 121.24, "text": " in reinforcement learning than there are in other branches of machine learning." 
}, { "end": 131.51999999999998, "start": 125.83999999999999, "text": " And it's precisely that potential for these different types of feedback to be brought" }, { "end": 137.35999999999999, "start": 131.51999999999998, "text": " into the reward function to be at stake in the way the environment has been specified" }, { "end": 142.96, "start": 137.36, "text": " that makes RL very exciting, not just at the technical level, but also for what we think" }, { "end": 149.20000000000002, "start": 142.96, "text": " we even mean by an intelligent agent that is able to interact socially, either in the" }, { "end": 154.04000000000002, "start": 149.20000000000002, "text": " sense of with other agents or just in relationship with human domains." }, { "end": 159.28000000000003, "start": 154.04000000000002, "text": " What do you think might go wrong or could go wrong if people don't pay attention to this?" }, { "end": 164.20000000000002, "start": 159.28000000000003, "text": " No one comes to Pearls and we don't talk about this and we don't plan for this as a society." }, { "end": 165.20000000000002, "start": 164.20000000000002, "text": " What's the danger here?" }, { "end": 168.79999999999998, "start": 165.2, "text": " I think the danger here is a few things." }, { "end": 174.92, "start": 168.79999999999998, "text": " So one danger is that we are going to end up with agents that are smarter than we even" }, { "end": 179.11999999999998, "start": 174.92, "text": " know how to document or than we even know how to account for." }, { "end": 186.51999999999998, "start": 179.11999999999998, "text": " So again, just a very simple example would be if you trace out what it would mean for" }, { "end": 193.83999999999997, "start": 186.51999999999998, "text": " a recommender system used in social media to send content to your feed, that not only" }, { "end": 198.76, "start": 193.84, "text": " you're likely to engage with, which is already the way that machine learning is used in" }, { "end": 205.84, "start": 198.76, "text": " recommender systems today, but that furthermore has learned how to slot you and nudge you" }, { "end": 213.28, "start": 205.84, "text": " over time to adopt behaviors and patterns of beliefs that will make you much more likely" }, { "end": 220.04, "start": 213.28, "text": " to adopt a certain world view that is suited to the kind of optimization it could provide." }, { "end": 222.24, "start": 220.04, "text": " That's a different ballgame." }, { "end": 226.44, "start": 222.24, "text": " There are a lot of interesting ways in which reinforcement learning points to a world" }, { "end": 234.72, "start": 226.44, "text": " in which AI is going to nudge us into forms of belief and behavior that we don't even" }, { "end": 236.8, "start": 234.72, "text": " understand until it's too late." }, { "end": 241.16000000000003, "start": 236.8, "text": " And the reason is that it learns dynamically from how it interacts with its environment" }, { "end": 243.64000000000001, "start": 241.16000000000003, "text": " rather than just statically." }, { "end": 249.8, "start": 243.64000000000001, "text": " It's beyond just making predictions and classifications in a vacuum and what it's instead doing" }, { "end": 255.84, "start": 249.8, "text": " is intervening on a domain to restructure it according to its own specification." }, { "end": 260.92, "start": 255.84, "text": " It's a tool for world making rather than just representing." 
}, { "end": 265.44, "start": 260.92, "text": " And that's why it's transformative and that's why you need to bring in political economy" }, { "end": 266.96000000000004, "start": 265.44, "text": " to really understand it." }, { "end": 269.6, "start": 266.96000000000004, "text": " Can you tell us more about where this idea came from?" }, { "end": 276.68, "start": 269.6, "text": " Yeah, so the seat of pearls was originally an outgrowth of my time spent as the Law" }, { "end": 281.40000000000003, "start": 276.68, "text": " and Society Fellow at the Simon's Institute in fall 2020." }, { "end": 286.52, "start": 281.40000000000003, "text": " So at the time, the Simon's Institute, which is based at UC Berkeley, which is where I" }, { "end": 294.04, "start": 286.52, "text": " got my PhD, was pursuing a program on the theory of reinforcement learning for the entire" }, { "end": 295.04, "start": 294.04, "text": " semester." }, { "end": 301.56, "start": 295.04, "text": " And they brought me in as an expert on the legal and social implications of what this technology" }, { "end": 303.24, "start": 301.56, "text": " might mean." }, { "end": 310.36, "start": 303.24, "text": " And in accordance with that, I was very interested in organizing a reading group with computer" }, { "end": 318.8, "start": 310.36, "text": " scientists on the topic of how it is that this different approach to optimization that" }, { "end": 323.56, "start": 318.8, "text": " reinforcement learning makes possible will affect specific human domains, like for example," }, { "end": 328.64, "start": 323.56, "text": " transportation with self-driving cars or content recommendation in social media." }, { "end": 332.96000000000004, "start": 328.64, "text": " And how do we begin to approach that question comparatively across domains?" }, { "end": 338.2, "start": 332.96, "text": " Other than just thinking of optimization in some narrowly abstract sense, how do we try" }, { "end": 345.47999999999996, "start": 338.2, "text": " to index it into human problems and human activities, and then compare across those activities" }, { "end": 352.2, "start": 345.47999999999996, "text": " to see where certain risks are most likely to emerge first or more intensely?" }, { "end": 355.28, "start": 352.2, "text": " And I think a lot of people actually were very excited to think that way." }, { "end": 361.32, "start": 355.28, "text": " That isn't the way that most conversations about RL had been presented to computer scientists" }, { "end": 362.32, "start": 361.32, "text": " before." }, { "end": 364.4, "start": 362.32, "text": " I was interested to learn myself." }, { "end": 368.12, "start": 364.4, "text": " We kind of got the ball rolling by looking at papers together." }, { "end": 374.2, "start": 368.12, "text": " And eventually that snowballed into really us coming up with a new semantics for how" }, { "end": 376.92, "start": 374.2, "text": " we even talk about reinforcement learning." }, { "end": 383.4, "start": 376.92, "text": " We're again, the story ends up being much more about how you manage and intervene on the" }, { "end": 389.12, "start": 383.4, "text": " institutional dynamics that already exist in human domains, rather than just training" }, { "end": 396.28000000000003, "start": 389.12, "text": " an agent in some strictly abstract environment according to whatever terms the designers" }, { "end": 398.2, "start": 396.28000000000003, "text": " chose for the specification." 
}, { "end": 400.2, "start": 398.2, "text": " Who would you say the Pearls Workshop is for?" }, { "end": 401.72, "start": 400.2, "text": " Like who do you want to attend?" }, { "end": 405.64, "start": 401.72, "text": " I think that there's two core audiences we're trying to reach here." }, { "end": 410.96, "start": 405.64, "text": " One is actual computer scientists, so people who actually work with reinforcement learning," }, { "end": 418.04, "start": 410.96, "text": " who try to use it to make headway on problems that have maybe been approached before in supervised" }, { "end": 422.6, "start": 418.04, "text": " learning, but they're interested if they can squeeze more to use that of it using RL." }, { "end": 426.76000000000005, "start": 422.6, "text": " Maybe there are problems that were previously unsolvable, but they're interested in whether" }, { "end": 431.20000000000005, "start": 426.76000000000005, "text": " reinforcement learning can be brought to bear on them, particularly in the context of" }, { "end": 437.24, "start": 431.20000000000005, "text": " very grounded applications like self-driving cars or like content recommendation or like" }, { "end": 442.36, "start": 437.24, "text": " the optimization of an electrical grid, for example, how it meets human domains on the" }, { "end": 443.68, "start": 442.36, "text": " one hand." }, { "end": 449, "start": 443.68, "text": " And the other audience we're trying to reach is policymakers, people who actually think" }, { "end": 454.88, "start": 449, "text": " about the law, who think about standards, who think about potential statutes that maybe" }, { "end": 460.72, "start": 454.88, "text": " don't exist right now, about how we should govern AI, how we should oversee these new" }, { "end": 462.88, "start": 460.72, "text": " kinds of systems that we're building." }, { "end": 468.64, "start": 462.88, "text": " And what we're most interested in is to examine these places where it's not just that these" }, { "end": 472.84000000000003, "start": 468.64, "text": " communities would benefit from more cross-fernalization, like it's not just that they could teach" }, { "end": 477.15999999999997, "start": 472.84, "text": " each other new things about what can RL do we couldn't do before, how can we respond" }, { "end": 478.56, "start": 477.15999999999997, "text": " to that using policy." }, { "end": 483.56, "start": 478.56, "text": " It's also to really look at RL itself as an approach to policy, to really sort of see" }, { "end": 489.28, "start": 483.56, "text": " these two different fields as really working on the same questions, because that's really" }, { "end": 495.71999999999997, "start": 489.28, "text": " how I'm defining political economy is how do we manage the institutional dynamics of different" }, { "end": 502.4, "start": 495.71999999999997, "text": " domains, and how do we compare across domains towards the end of doing a better job of" }, { "end": 510.15999999999997, "start": 502.4, "text": " specifying what we mean by a good society, a good set of metrics, a good kind of social" }, { "end": 514.56, "start": 510.15999999999997, "text": " behavior, how do we want that flow to be structured." }, { "end": 518.84, "start": 514.56, "text": " This is exactly the same question that people in the law have asked since antiquity, and" }, { "end": 522.0799999999999, "start": 518.84, "text": " it's exactly the question that reinforcement learning faces today." 
}, { "end": 525.9599999999999, "start": 522.0799999999999, "text": " Do you want to say more about what kind of conversations you think will happen there," }, { "end": 526.96, "start": 525.9599999999999, "text": " that you want to happen there?" }, { "end": 533.36, "start": 526.96, "text": " Yeah, beyond what I've just mentioned, which is really just this more conceptual grappling" }, { "end": 539.52, "start": 533.36, "text": " with how do we we conceive of the law through the lens of RL and how to on vice versa, how" }, { "end": 545.76, "start": 539.52, "text": " do we maybe we conceive of the kinds of specification problems or questions that are all poses from" }, { "end": 547.2800000000001, "start": 545.76, "text": " the standpoint of the law." }, { "end": 554.2800000000001, "start": 547.2800000000001, "text": " I think an important follow-up question to that is how do certain emerging areas of legal" }, { "end": 560.8, "start": 554.28, "text": " research present a particular way of thinking about reinforcement learning." }, { "end": 566, "start": 560.8, "text": " For example, what I mean by that is we're going to have several speakers at the workshop" }, { "end": 572.12, "start": 566, "text": " who are going to speak to the relevance, this renewed interest in antitrust policy for" }, { "end": 575.12, "start": 572.12, "text": " how we should approach questions of AI governance." }, { "end": 579.1999999999999, "start": 575.12, "text": " I think the reason that's important is that there seems to be, and in fact, in my dissertation" }, { "end": 586.44, "start": 579.2, "text": " I argued this, there is an affinity between the pursuit of something like the reward hypothesis" }, { "end": 593.5600000000001, "start": 586.44, "text": " in reinforcement learning, or rather the ability to design rewards and try to optimize them" }, { "end": 598.72, "start": 593.5600000000001, "text": " at ever greater planning horizons from a RL standpoint." }, { "end": 604.5600000000001, "start": 598.72, "text": " On the one hand, there's an affinity between that and what antitrust scholars for 100 years" }, { "end": 608.76, "start": 604.5600000000001, "text": " now have referred to as monopoly power, which is really just to say, when you're" }, { "end": 613.72, "start": 608.76, "text": " optimizing a self-during car to navigate traffic, there's a difference between teaching an" }, { "end": 619.24, "start": 613.72, "text": " agent how to navigate a forum where it stops safely, which I think it's not controversial" }, { "end": 624.48, "start": 619.24, "text": " to say we all want that, versus teaching an agent to try to optimize traffic through" }, { "end": 626.52, "start": 624.48, "text": " a city grid." }, { "end": 633.08, "start": 626.52, "text": " The latter question is not the kind of thing that the big three automakers ever asked before." }, { "end": 636.4399999999999, "start": 633.08, "text": " It's not really a question the policymakers have ever been able to ask before, at least" }, { "end": 638.16, "start": 636.4399999999999, "text": " not in a well-framed way." }, { "end": 644, "start": 638.16, "text": " And so we're now going to enter a world where the people building self-driving car fleets" }, { "end": 650.4399999999999, "start": 644, "text": " are able to concretely envision what it would mean for the city of San Francisco to optimize" }, { "end": 652.24, "start": 650.4399999999999, "text": " its traffic grid." }, { "end": 653.76, "start": 652.24, "text": " And that's a public question." 
}, { "end": 655.76, "start": 653.76, "text": " I mean, it's a public problem." }, { "end": 660.04, "start": 655.76, "text": " And the fact that we're going to enter a world where private companies basically exercise" }, { "end": 665.48, "start": 660.04, "text": " discretion over a question like that, I think is something that should give both RL practitioners" }, { "end": 668.96, "start": 665.48, "text": " and legal scholars a lot of pause." }, { "end": 674, "start": 668.96, "text": " And I think to really make sense of that question, we have to do a much better job of exploring" }, { "end": 680.5600000000001, "start": 674, "text": " the intersection between these communities of helping each other ask our own questions" }, { "end": 685.04, "start": 680.5600000000001, "text": " and finding a new kind of language with which to make sense of that question." }, { "end": 688.16, "start": 685.04, "text": " So let me just ask you, are you pro-regulation?" }, { "end": 689.16, "start": 688.16, "text": " It's a good question." }, { "end": 690.64, "start": 689.16, "text": " It's an important question." }, { "end": 697.64, "start": 690.64, "text": " I would say that I am pro-specification, and what I mean by that is how we define rewards," }, { "end": 703.16, "start": 697.64, "text": " actions, states, how observable the environment is." }, { "end": 708.48, "start": 703.16, "text": " All of these bread and butter questions that every RL practitioner has to ask themselves," }, { "end": 714.04, "start": 708.48, "text": " these are questions that law and policy have thought about for centuries." }, { "end": 720.04, "start": 714.04, "text": " And so when we talk about regulation, I think it's important to ground that question" }, { "end": 727.1999999999999, "start": 720.04, "text": " in this larger context and somewhat more technical and formal context of specification." }, { "end": 733.1999999999999, "start": 727.1999999999999, "text": " Unless we're willing to understand how these two different audiences, I think, can learn" }, { "end": 737.8399999999999, "start": 733.1999999999999, "text": " from each other and recognize that the questions they ask are the same, I don't think we're likely" }, { "end": 740.28, "start": 737.8399999999999, "text": " to come up with regulations that are good." }, { "end": 745.16, "start": 740.28, "text": " And vice versa, I don't think we're likely to come up with specifications that are optimal" }, { "end": 750.4399999999999, "start": 745.16, "text": " or at least don't have the constraints that they should or that are actually as well suited" }, { "end": 752.8, "start": 750.4399999999999, "text": " to the domain as we think we are." }, { "end": 760.64, "start": 752.8, "text": " I think the problem right now is we have an increasingly formal and refined technical" }, { "end": 767.8, "start": 760.64, "text": " work that amounts to basically learning a model in some specified environment." }, { "end": 774.24, "start": 767.8, "text": " But a lot of existential uncertainty about how well that model conforms to the normative" }, { "end": 780.84, "start": 774.24, "text": " stakes and structure of the domain, the actual human context within which that model will" }, { "end": 782.16, "start": 780.84, "text": " be deployed." 
}, { "end": 789.24, "start": 782.16, "text": " So if our goal is to build actual systems rather than just models, we have to begin to" }, { "end": 796.8, "start": 789.24, "text": " approach specification as a much larger and much older problem than strictly as it's" }, { "end": 799.36, "start": 796.8, "text": " been understood through the ones of reinforcement learning." }, { "end": 805.36, "start": 799.36, "text": " Now whether that means regulation or whether that means a market based approach, that's" }, { "end": 809.84, "start": 805.36, "text": " somewhat beyond my pay grade and really the reason pearls exists is to even make sense" }, { "end": 811.32, "start": 809.84, "text": " of that question." }, { "end": 816.76, "start": 811.32, "text": " We should explore that question and do research on it rather than dogmatically assume a yes" }, { "end": 817.76, "start": 816.76, "text": " or no answer." }, { "end": 818.76, "start": 817.76, "text": " Awesome." }, { "end": 819.76, "start": 818.76, "text": " Okay." }, { "end": 821.4, "start": 819.76, "text": " Anything else you want, let's just know before you go." }, { "end": 823.76, "start": 821.4, "text": " I appreciate the opportunity to talk about pearls." }, { "end": 829.04, "start": 823.76, "text": " I really would recommend that if you're interested in anything I just said or anything I just" }, { "end": 834, "start": 829.04, "text": " said raises your heckles, please show up to the workshop and make your voice heard." }, { "end": 839.7199999999999, "start": 834, "text": " The purpose of the workshop is to make a space within which new kinds of questions can" }, { "end": 844.52, "start": 839.7199999999999, "text": " be asked and new kinds of answers can be posed." }, { "end": 849.5999999999999, "start": 844.52, "text": " It's not to come and dear this space and it's certainly not to structure it in according" }, { "end": 852.8, "start": 849.5999999999999, "text": " to a particular political agenda." }, { "end": 858.3199999999999, "start": 852.8, "text": " It's really to make it so that we're doing the best job and most objective job we can" }, { "end": 860.32, "start": 858.32, "text": " have of making sense of the future." }, { "end": 861.6800000000001, "start": 860.32, "text": " Thank you Dr. Thomas Gilbert." }, { "end": 863.5200000000001, "start": 861.6800000000001, "text": " Thank you for having me." }, { "end": 868.4000000000001, "start": 863.5200000000001, "text": " Dr. Nitzberg is Executive Director of CHI at UC Berkeley, that is the Center for Human" }, { "end": 874, "start": 868.4000000000001, "text": " Compatible AI, as well as a head of Strategic Outreach for Bear, that's Berkeley AI Research" }, { "end": 875, "start": 874, "text": " Lab." }, { "end": 876, "start": 875, "text": " Welcome Dr. Nitzberg." }, { "end": 878.6, "start": 876, "text": " Thank you for having me." }, { "end": 883.36, "start": 878.6, "text": " So we've had a couple folks from CHI on here before, Thomas Gilbert and Michael Dennis" }, { "end": 886.72, "start": 883.36, "text": " and Thomas of course told us about his work on pearls." }, { "end": 889.6, "start": 886.72, "text": " We can you remind us what does pearls mean?" }, { "end": 897.28, "start": 889.6, "text": " It stands for Political Economy of Reinforcement Learning Systems and I think of it at two levels." 
}, { "end": 901.64, "start": 897.28, "text": " Reinforcement learning itself has a kind of political economy within the very nature of" }, { "end": 907.48, "start": 901.64, "text": " RL where you specify actions, observations and rewards for a given task and domain and" }, { "end": 913.72, "start": 907.48, "text": " then of course at the level of the RL systems in the context of society and their constituents." }, { "end": 918.48, "start": 913.72, "text": " And you give us a hint about what you're most excited about in terms of this workshop." }, { "end": 924.72, "start": 918.48, "text": " Well the questions at the center of pearls are really how should these constituents, technology," }, { "end": 931.8000000000001, "start": 924.72, "text": " social scientists, policymakers and users, civil society, collaborate to design and use" }, { "end": 935.32, "start": 931.8000000000001, "text": " and regulate these massively influential systems." }, { "end": 940.72, "start": 935.32, "text": " And the biggest commercial enterprises in history, the tech platform companies connect" }, { "end": 946.5600000000001, "start": 940.72, "text": " more than half the world's population and they affect what we see and read and do every day." }, { "end": 949.6, "start": 946.5600000000001, "text": " So these algorithms influence so much of human life." }, { "end": 955.76, "start": 949.6, "text": " We absolutely need the constituents connecting the technological y and how to the human y and" }, { "end": 956.76, "start": 955.76, "text": " how." }, { "end": 961.08, "start": 956.76, "text": " And so what I'm excited about in terms of the workshop is exactly this human history's" }, { "end": 964.5600000000001, "start": 961.08, "text": " fastest and most widespread technology rollout." }, { "end": 972.2399999999999, "start": 964.56, "text": " So the first one of the big technology meetings each year for machine learning and we absolutely" }, { "end": 977.56, "start": 972.2399999999999, "text": " need to factor the social considerations into what these tens of thousands of scientists" }, { "end": 982.1999999999999, "start": 977.56, "text": " meeting at NERPs are building and perfecting not as a separate line of inquiry but as part" }, { "end": 984.1199999999999, "start": 982.1999999999999, "text": " of the design of the systems." }, { "end": 988.5999999999999, "start": 984.1199999999999, "text": " So pearls is really about policy I saw in the description of the workshop." }, { "end": 989.5999999999999, "start": 988.5999999999999, "text": " Is that right?" }, { "end": 995.32, "start": 989.6, "text": " Yes and about the dialogue among the constituents." }, { "end": 1001.6800000000001, "start": 995.32, "text": " Is there some assumption that we need or might want regulation in this area or are there" }, { "end": 1003.96, "start": 1001.6800000000001, "text": " some alternatives to that?" }, { "end": 1011.52, "start": 1003.96, "text": " There is, in fact, more broadly, there's a need for regulation just as there is for" }, { "end": 1016.2, "start": 1011.52, "text": " any technology that has a huge amount of influence." }, { "end": 1022.6, "start": 1016.2, "text": " So if you look at creation of transport vehicles, aircraft and so forth, there's a reason" }, { "end": 1025.0800000000002, "start": 1022.6, "text": " for regulation." 
}, { "end": 1029.8400000000001, "start": 1025.0800000000002, "text": " You need standards, you need safety standards, you need testing standards, process and oversight" }, { "end": 1034.44, "start": 1029.8400000000001, "text": " bodies to assure that things are done in a way that minimizes harm, maximizes benefit" }, { "end": 1035.76, "start": 1034.44, "text": " and so forth." }, { "end": 1039.76, "start": 1035.76, "text": " So what might you say to people who think that a government isn't very good at regulating" }, { "end": 1044.1200000000001, "start": 1039.76, "text": " things or that RL is beyond the limited capacity of government to regulate well and that" }, { "end": 1045.88, "start": 1044.1200000000001, "text": " they probably wouldn't do a good job of it?" }, { "end": 1046.88, "start": 1045.88, "text": " That's a fair point." }, { "end": 1053.1200000000001, "start": 1046.88, "text": " I'm an armchair policy person and I think that what I have learned is that regulation" }, { "end": 1059.64, "start": 1053.1200000000001, "text": " is very, very hard to do right and in fact, I believe that we, you know, on the technology" }, { "end": 1065.2800000000002, "start": 1059.64, "text": " side and on the science side, we can really get some of the ground work right." }, { "end": 1070.3600000000001, "start": 1065.2800000000002, "text": " So that, you know, there are appropriate technical guidelines for better ethical, you know," }, { "end": 1077.6799999999998, "start": 1070.36, "text": " social outcomes and that's why I think this workshop may make it possible to help in" }, { "end": 1082.12, "start": 1077.6799999999998, "text": " crafting better regulation and possibly simpler." }, { "end": 1088.6799999999998, "start": 1082.12, "text": " Maybe we can just touch on why is the political economy aspect important for RL specifically" }, { "end": 1090.04, "start": 1088.6799999999998, "text": " versus other types of ML?" }, { "end": 1094.36, "start": 1090.04, "text": " Like should we be talking about the political economy of unsupervised learning or ML in" }, { "end": 1095.36, "start": 1094.36, "text": " general?" }, { "end": 1099.32, "start": 1095.36, "text": " Can you remind us what is special about RL in terms of political economy?" }, { "end": 1107.9199999999998, "start": 1099.32, "text": " And then unsupervised learning produce, in a sense, static, broadly stable systems, even" }, { "end": 1113.76, "start": 1107.9199999999998, "text": " the epic systems, the semi-supervised learning system like GPT-3 is largely static." }, { "end": 1116, "start": 1113.76, "text": " It was trained before COVID and Biden." }, { "end": 1120.24, "start": 1116, "text": " So if you interact with it, you'll see it's output seems to be living in the past." }, { "end": 1125.6399999999999, "start": 1120.24, "text": " But this is completely different with reinforcement learning systems that really are meant to operate" }, { "end": 1128.72, "start": 1125.6399999999999, "text": " in dynamic environments." }, { "end": 1136.1200000000001, "start": 1128.72, "text": " Not only at the individual single one person interaction level or one machine, but in these" }, { "end": 1142, "start": 1136.1200000000001, "text": " vast ecosystems that the platform companies like Uber and Google and so on, optimizing" }, { "end": 1151.2, "start": 1142, "text": " not just one ride or one recommendation, but vast crowds of drivers and riders or users" }, { "end": 1152.3600000000001, "start": 1151.2, "text": " and content." 
}, { "end": 1159.1999999999998, "start": 1152.36, "text": " So to me, the most important aspect of reinforcement learning over the other machine learning techniques" }, { "end": 1164.9599999999998, "start": 1159.1999999999998, "text": " is the case where the system is interacting with people." }, { "end": 1169.1599999999999, "start": 1164.9599999999998, "text": " Content, recommender systems are the prime example." }, { "end": 1174.56, "start": 1169.1599999999999, "text": " And I lost no one to COVID, but I've lost friends to QAnon." }, { "end": 1177.6, "start": 1174.56, "text": " And I think reinforcement learning has something to do with that." }, { "end": 1182.4399999999998, "start": 1177.6, "text": " It's really the most powerful kind of machine learning when it comes to influencing human" }, { "end": 1184.12, "start": 1182.4399999999998, "text": " thought and their force of society." }, { "end": 1190.32, "start": 1184.12, "text": " So I guess then you're also considering bandits and those type of recommenders as within" }, { "end": 1191.9199999999998, "start": 1190.32, "text": " scope here for pearls?" }, { "end": 1193.36, "start": 1191.9199999999998, "text": " Absolutely." }, { "end": 1198.48, "start": 1193.36, "text": " So what about the argument that RL is not the issue, but it's all about the application" }, { "end": 1199.48, "start": 1198.48, "text": " area." }, { "end": 1205.4399999999998, "start": 1199.48, "text": " For example, maybe it makes sense to regulate policies on cars, but RL could also be used" }, { "end": 1209.72, "start": 1205.44, "text": " to control a car in a video game and we wouldn't really care about regulating that." }, { "end": 1212.04, "start": 1209.72, "text": " So maybe it's really about the fact that it's a car." }, { "end": 1217.1200000000001, "start": 1212.04, "text": " Or is it the fact that it's a policy or that's an RL policy versus a PID or an imitation" }, { "end": 1218.1200000000001, "start": 1217.1200000000001, "text": " learning policy?" }, { "end": 1221.26, "start": 1218.1200000000001, "text": " Or is it really about the fact that it's a car and cars are dangerous and some are" }, { "end": 1225.96, "start": 1221.26, "text": " argument for social media since now, as you say, it's clear that social media can" }, { "end": 1227.3600000000001, "start": 1225.96, "text": " affect societal health as well." }, { "end": 1231.68, "start": 1227.3600000000001, "text": " Is it really about the application area or is it really about the fact that it's RL" }, { "end": 1233.8, "start": 1231.68, "text": " like what technology is powering it?" }, { "end": 1235.3200000000002, "start": 1233.8, "text": " What do you think about that?" }, { "end": 1237.24, "start": 1235.32, "text": " I like the question." }, { "end": 1239.76, "start": 1237.24, "text": " Of course, the application matters." }, { "end": 1247.6799999999998, "start": 1239.76, "text": " But reinforcement learning is at the core of these really vastly, broadly adopted systems." }, { "end": 1252.32, "start": 1247.6799999999998, "text": " I'm trying to think of an analogy that works here and the best that I can come up with" }, { "end": 1255.2, "start": 1252.32, "text": " is grains like wheat." 
}, { "end": 1260.08, "start": 1255.2, "text": " Of course, it matters whether you're making pasta or cookies or whether you're part" }, { "end": 1267.28, "start": 1260.08, "text": " of a huge national bakery or a small one, but what really matters here is that it makes" }, { "end": 1271.3999999999999, "start": 1267.28, "text": " sense to focus on how you make the wheat." }, { "end": 1274.6399999999999, "start": 1271.3999999999999, "text": " And that is because it is so broadly adopted." }, { "end": 1276.48, "start": 1274.6399999999999, "text": " And it has its own political economy." }, { "end": 1281.28, "start": 1276.48, "text": " It affects masses of people in terms of nutrition and in many other ways." }, { "end": 1285.32, "start": 1281.28, "text": " And so I think reinforcement learning is somewhat similar." }, { "end": 1294.04, "start": 1285.32, "text": " There are areas in which it's very unlikely to have a broad, harmful effect or affect" }, { "end": 1295.2, "start": 1294.04, "text": " a large swath of people." }, { "end": 1304.12, "start": 1295.2, "text": " But it's so clearly has applications that are affecting all of society and so it deserves" }, { "end": 1305.84, "start": 1304.12, "text": " its own attention." }, { "end": 1308.6799999999998, "start": 1305.84, "text": " I see you wrote a book called Solomon's Code." }, { "end": 1313.36, "start": 1308.6799999999998, "text": " Can you share how that relates to the theme of pearls?" }, { "end": 1316.32, "start": 1313.36, "text": " Well, absolutely." }, { "end": 1319, "start": 1316.32, "text": " Have you got all day?" }, { "end": 1327.1599999999999, "start": 1319, "text": " There are really so many ways in which these systems are affecting humanity in everyday" }, { "end": 1329.24, "start": 1327.1599999999999, "text": " lives and industries." }, { "end": 1335.6799999999998, "start": 1329.24, "text": " And because of its nature, reinforcement learning has some real risks." }, { "end": 1343.04, "start": 1335.6799999999998, "text": " If you pit a reinforcement learning system against a human, the house always wins." }, { "end": 1350.2, "start": 1343.04, "text": " And so in this book, we've interviewed over 100 people around the world about attempts" }, { "end": 1356.8, "start": 1350.2, "text": " at fair distribution of the burdens and benefits and so forth." }, { "end": 1362.2, "start": 1356.8, "text": " And we came to the conclusion that you really need the representatives of these different" }, { "end": 1368.2, "start": 1362.2, "text": " parts of society, the tech designers and the regulators and the users and the policymakers" }, { "end": 1375.88, "start": 1368.2, "text": " working together to avoid the situation where you've got technology that lets the house" }, { "end": 1379.88, "start": 1375.88, "text": " always win and you have the house take over humanity." }, { "end": 1386.32, "start": 1379.88, "text": " Our book has been called Optimistic and I know that what I'm saying doesn't sound" }, { "end": 1389.32, "start": 1386.32, "text": " optimistic but I actually am quite optimistic." }, { "end": 1397.72, "start": 1389.32, "text": " There's so much attention now focused as it should be on the societal impacts that wasn't" }, { "end": 1404.92, "start": 1397.72, "text": " the case with the rise of automobiles and many other technologies." }, { "end": 1406.24, "start": 1404.92, "text": " So we're really paying attention." 
}, { "end": 1412.92, "start": 1406.24, "text": " I'm hoping that among other things, meetings like this Pearls Workshop are going to have" }, { "end": 1418.52, "start": 1412.92, "text": " a place in ensuring safe and beneficial applications." }, { "end": 1420.8, "start": 1418.52, "text": " Well I'm really looking forward to the workshop." }, { "end": 1425.68, "start": 1420.8, "text": " Personally, I think the issue of technological unemployment is specifically relevant for" }, { "end": 1432.48, "start": 1425.68, "text": " all. RL has the potential to do all sorts of jobs for us and I don't think we really" }, { "end": 1435.72, "start": 1432.48, "text": " answer the question of what then in our economy." }, { "end": 1439.3600000000001, "start": 1435.72, "text": " Well let's talk about that at the workshop and I look forward to seeing you." }, { "end": 1443.3600000000001, "start": 1439.3600000000001, "text": " Thanks so much Dr. Nitzberg and we'll see you at the Neurop's Pearls Workshop." }, { "end": 1462.32, "start": 1443.36, "text": " Thanks for having me." } ]
Amy Zhang
Amy Zhang shares her work on Invariant Causal Prediction for Block MDPs, Multi-Task Reinforcement Learning with Context-based Representations, MBRL-Lib, shares insight...
https://media.transistor…c6c.mp3?src=site
This is TalkRL Podcast. All reinforcement learning, all the time. Interviews of brilliant folks across the world of RL. I'm your host, Robin Chohan. Dr. Amy Zhang is a postdoctoral scholar at UC Berkeley and a research scientist at Facebook AI Research. She will be starting as an assistant professor at UT Austin in spring 2023. Thanks so much for taking the time to do this, Dr. Zhang. Yeah, of course. Thanks for inviting me. How do you like to describe your personal research interests? Very much within the reinforcement learning framework, I think that interaction with the environment is really interesting. It has to do with a lot of the tasks in the real world that I care about. Most of my work, the problems that I choose, I typically ground in robotics, and I also have an interest in healthcare. Because I really care about these real world problems, a really important problem that I think we have in reinforcement learning, and that is now getting more traction, is generalization. I would definitely say that the focus of most of my research in the last few years has been generalization in reinforcement learning. How would you describe how you got to RL? Did your interest evolve towards that over time? Yeah, it's definitely been a winding journey. My bachelor's and master's were actually more on the double-E side, and I used to do signal processing, information theory, network coding. It was after that that I started exploring machine learning, and I actually worked in more supervised learning at the time. So I was working on recommendation systems and then further went into computer vision, and that's when I started doing deep learning. So there I was doing object detection, object classification, really sort of classic problems. I did like that it was for a lot of real problems. So we were working on doing building detection from satellite images in order to estimate population density to provide internet all over the world. What I didn't like was that most of the progress that you made, the thing that really moved the needle, was more about the data, making sure the data was clean and that you had enough of it. And I really missed having a mathematical framework that you could work in and really develop grounded algorithms. And so it seemed like reinforcement learning was a setting where you could actually do that. Although of course, reinforcement learning is also very sample inefficient, and so you do have to do a lot of, you know, just software engineering, and it requires a lot of compute. But I like that on the other side of it, you do have this really nice framework to describe the world. Cool. Okay. So I remember seeing your Plan2Vec poster back at NeurIPS 2019 in the Deep RL workshop in Vancouver. At that workshop, I talked to a lot of the poster presenters in that room for episode eight, but your poster was pretty busy, so I didn't get to talk to you. So I'm really glad we get a chance to talk now. We crossed paths at ICML. That's right. Earlier this year. That's right. Super, super excited to have you. So let's talk about a couple of your recent papers. The first one is Invariant Causal Prediction for Block MDPs, and that was with yourself and Claire Lyle. Yes. Can you give us the general gist of this paper? Yeah. So Claire and I actually started talking at RLDM, Reinforcement Learning and Decision Making, which is this really great conference that only happens once every two years.
It doesn't have formal proceedings, but it was in Montreal that year. And Claire and I had known each other because she'd done her master's at McGill, which is where I did my PhD. And we were both really interested in, I mean, our core areas, I would say, were reinforcement learning, but we were both getting really interested in causal inference. And specifically, the inspiration for this paper was another paper on invariant causal prediction, or defining invariant causal prediction, by Jonas Peters from 2015. And so that paper is very firmly in causal inference land and was focused on this idea that there are these different kinds of interventions that you can use in order to do causal discovery on data, and there were requirements on what kind of interventions you need and how much data you need in order to get statistically sound hypothesis testing, in order to find the true causal structure underlying the data that you've collected. And so it was really clear to us that there were really strong connections between causal inference and reinforcement learning. In causal inference, the assumption that you make is that you have access to all of these different environment variables, and the way that the environment variables interact with each other is through directed edges, which are cause and effect. And so the goal in causal discovery is to figure out what is the directed acyclic graph, the DAG, that connects all of these variables that you care about. And usually most of those variables are your data, your x, and then one of those variables is your target, what you're trying to infer, your y. And so if you have the right data in order to find that DAG, then that basically builds your model and you can get out of distribution generalization. And so it seemed like this idea was really useful to try to get that same type of out of distribution generalization in reinforcement learning. And so we tried to pose the same problem in reinforcement learning, figure out how to define this DAG in terms of the MDP. And here we focused specifically on out of distribution generalization for observations. So the idea is that typically the kind of state space that we work with in real problems is these high dimensional rich observations, pixels. And when you have pixels, it's not the most pared down version of the state that you could use. There are all of these distractors, these spurious correlations, right? You know, you have the sun in the sky and clouds and leaves moving in trees, but usually those are things that you don't necessarily care about for your specific task. And so in this paper, we really focused on a few-shot setting. And so this was sort of the high level motivation; when we actually get to the kinds of experiments that we did, it's obviously much more pared down and much simpler and all in simulation. But the focus was on this idea that we can have access to only a couple of different environments at training time. And if they're selected carefully, if you see variation across those training environments that models the true variation in the testing environment, the environment that you care about deploying in, then we can learn a model that will be robust to those variations, and therefore generalize out of distribution to a testing environment where other things vary, basically. So that sounds like the magical part to me, the out of distribution part, right?
Unlike domain adaptation and generalization methods, where we're just trying to sample from within a distribution. If we go back to OpenAI's Procgen, they were just trying to sort out the within-distribution generalization issue, is that right? Yeah, you could say that, I guess. So what I would say is that within distribution and out of distribution in RL is a bit of a murkier thing. So even just in the scope of Procgen, right, there are all these different levels. Let's just pick one of the simpler environments, like the maze environment. You have all of these different levels that just correspond to different maze layouts. You can define a distribution that just consists of the training set of levels from Procgen. That's your within distribution, right? And you can train on all of those. And then you could say, okay, well, maybe my test levels are a different distribution, and if I can generalize to those, that's out of distribution generalization. Because the samples we're talking about are now environments, you can define them as different MDPs rather than just a single data sample like it usually is in supervised learning. The numbers that we're talking about are so different, too. In supervised learning, you know, we have thousands, hundreds of thousands, millions of samples from a distribution, and so you can say that this distribution is very well defined by the samples that you've collected. Whereas in RL, the way it is specifically in Procgen, you can think of this as a multi-task RL problem, and now your samples from the distribution are MDPs, are separate tasks, and in the scope of Procgen, we're really only talking about tens and hundreds of samples. And so because of that, I think what we define as a distribution is just less well defined. And so saying, oh, what's in distribution versus out of distribution, it doesn't have quite the same meaning in RL as it does in supervised learning. Can you tell us more about this setting? What is a block MDP? What does that mean? Yeah. So the block MDP formulation was first defined in a paper by Simon Du et al., I think in 2019. That's a very theoretical paper, but this definition of the block MDP, I think, as a form of structured MDP, is very useful, again, in a lot of real world problems. And what the block MDP is saying, it's not a very limiting assumption, it's just saying that, let's go back to a typical MDP, right? We have a state space, action space, transition distribution, reward function. But the thing that we're really focusing on here is the state space. The block MDP is saying, okay, we have a state space, but now let's say that that's latent. We don't actually have access to that state space. Instead, we see a different observation space. And the assumption is that this observation space is much larger than the state space, but we're going to make a simplifying assumption that it's still fully observable. And that's where the block assumption comes in, which is saying that for each observation, it is possible to decode what the underlying state is. And the way that you can do this is by saying that there is a one-to-many function, a rendering function, that maps from the state s to the observation o. And each set of observations is disjoint. So a set of observations that belongs to a single state never overlaps with the set of observations that corresponds to a different state.
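As a concrete toy version of that block structure, here is a minimal sketch added for illustration (it is not code from the paper, and the one-hot encoding and Gaussian distractors are assumptions made purely to keep it tiny): a latent state is rendered into a larger observation by a one-to-many function, and the observation sets of different latent states never overlap, so the latent state can always be decoded back.

```python
# Toy block MDP observation model: a latent state s is rendered into a
# higher-dimensional observation o by appending distractor noise. The
# one-hot prefix encodes s exactly, so observation sets for different
# latent states are disjoint and s is always decodable (the "block"
# assumption: full observability despite the rich observation space).
import numpy as np

NUM_LATENT_STATES = 4
DISTRACTOR_DIM = 16
rng = np.random.default_rng(0)

def render(latent_state: int) -> np.ndarray:
    one_hot = np.zeros(NUM_LATENT_STATES)
    one_hot[latent_state] = 1.0
    distractors = rng.standard_normal(DISTRACTOR_DIM)  # task-irrelevant noise
    return np.concatenate([one_hot, distractors])      # one-to-many: fresh noise each call

def decode(observation: np.ndarray) -> int:
    # The block assumption guarantees this inverse exists.
    return int(np.argmax(observation[:NUM_LATENT_STATES]))

obs = render(2)
assert decode(obs) == 2
```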
And that's how we get full observability and kind of avoid the whole POMDP, partial observability problem. And so the reason that the block MDP formulation is nice is that it's just saying, we have an environment in which we can get gains: there is latent structure, and we can get generalization. Because you can define any MDP. You can define a worst-case MDP where you will never get generalization, where every new state that you see has nothing to do with any other previous state that you've seen. You just have to fully, exhaustively explore your entire state-action space in order to understand it. And so the block MDP formulation is just saying, we don't care about that worst case scenario. We only care about problems where there is structure, and therefore generalization is possible. And is this the same blocking as the concept from design of experiments, a blocking factor as a source of variability that's not of primary interest to the experimenter? I read that off of Wikipedia. But is that the same blocking? Actually, actually no. But that is a really interesting connection. So I think the block in block MDPs didn't come from that definition, but there's definitely a really nice link there. So the block in block MDP is really just this idea that if you construct a matrix to map from states to observations, this matrix has disjoint blocks in it. And so that's how you represent the fact that the different sets of observations that belong to different states are disjoint. In this paper you talk about model irrelevance abstraction. Can you talk about that phrase? What does that mean? So a model irrelevance abstraction. So as far as I know, this was coined in a paper by Lihong Li. It's a really nice paper, just sort of unifying this framework of state abstractions. This paper was from 2009. And that's the first place where I've seen this definition of model irrelevance abstraction. And it actually just means the same thing as bisimulation, which is another concept that I've talked about in my papers. And it's just this idea that, if you ignore the states or observations, I mean, okay, in reinforcement learning, we don't really care about features in the state space. What we really care about is reward, right? We want to learn a policy that can maximize the total return, the total sum of rewards that we can achieve in the environment. So instead of looking at states, if we just discard that and only pay attention to the reward and future reward that we get for some sequence of actions, then bisimulation and model irrelevance abstractions just say, I don't care if these states look different. If I do a test, and a test consists of a sequence of actions, then no matter what test I perform, if the sequences of rewards that these two states give me are exactly the same, and also the distributions over rewards are the same, then these two states are the same to me. And so the model irrelevance abstraction is just constructing a state abstraction, a coarser version of your state space, that only pays attention to those differences. Cool. Okay. And we featured Dr. Pablo Samuel Castro on episode five, and he did his dissertation involving bisimulation, so that concept has come up here before. So just so I understand, if we look at, let's say, a MuJoCo environment, and we look at the pixel version versus the proprioceptive version, would that be a case of bisimulation or model irrelevance, would that be related to this? Yeah. Yeah. Okay.
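For reference, the bisimulation condition described above can be written in its standard textbook form (this notation is not from the episode itself): states $s_1$ and $s_2$ are bisimilar under an equivalence relation $E$ if, for every action $a$,

```latex
% Bisimulation / model-irrelevance condition: equal immediate rewards, and
% equal transition probability into every equivalence class C of states.
\begin{align}
  \mathcal{R}(s_1, a) &= \mathcal{R}(s_2, a), \\
  \sum_{s' \in C} \mathcal{P}(s' \mid s_1, a) &= \sum_{s' \in C} \mathcal{P}(s' \mid s_2, a)
  \qquad \text{for all } C \in \mathcal{S}/E .
\end{align}
```

The model-irrelevance abstraction then maps every state in a class to a single abstract state, and the coarsest partition satisfying this is the coarsest bisimulation mentioned later in the conversation.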
So I think, the proprioceptive state versus the pixels, right? If you just look at these two different state spaces, you can define them as two different MDPs. Right. But if you ignore the state and instead just look at the reward, we know that that matches up. And so we can say that this MDP, the MDP consisting of the proprioceptive state, is bisimilar to the one consisting of pixels. And the model irrelevance abstraction, so I guess in the example that you gave, I think you said Atari, there's a one-to-one mapping, right? There's only one pixel observation that corresponds to one proprioceptive state, unless you start adding in distractors or changing backgrounds and stuff. And because this is one-to-one, you could say in either direction that one is an abstraction of the other. But typically when we talk about model irrelevance abstraction, we are talking about something coarser, something smaller, of lesser cardinality. And so in the setting where we're talking about a pixel version where irrelevant features are changing, then we can say that the proprioceptive state version of this game is the abstraction of the pixel one. The way of saying this is that what we want to find, what we care about, is the coarsest bisimulation. And that means it's just the MDP that has the fewest number of states that captures the exact same reward behavior as the original game. Cool. Okay. And we will have the link to the ICML poster session in the episode notes, and for the audience, I recommend giving that a listen. And also your co-first author here, Claire Lyle, gave a great overview in that session, including some diagrams. And I saw she gave a more in-depth talk at the Simons Institute that's also on YouTube, and so we'll link to that as well. Yeah. Yeah, awesome. Yeah, that was part of the reinforcement learning program there. And she definitely does a really good job of dissecting that paper. Totally. Okay. So can you say a little bit more about invariant causal prediction? This is the concept, I gather, that this paper is built on from the causal world, and you brought it over to RL. But how does that really work? And are these linear models? Yeah. So the original ICP, which in our paper we call linear ICP, relies on statistical tests, and you get stronger guarantees if you only focus on linear models. So with invariant causal prediction, the goal is to basically find the causal feature set. The assumption is that you've got this supervised learning problem where you have your dataset x and your labels y and you're trying to infer y from x. And the causal inference perspective on this is not just the typical one in machine learning that says, okay, there exists some model, and if I do optimization, then I, with high likelihood, will find this model that can give the right prediction. In causal inference, there's a more structured assumption, right, which is that your x and y fit together as this directed acyclic graph. And you really want to find the correct graph. And if you find the correct graph and fit the correct edge functions, then you can infer y correctly.
And so invariant causal prediction, the original paper by Jonas Peters, says that you need to have an intervention on every variable, or you need to see a change in every variable in x and how it propagates to the other variables x and y, in order to identify what the correct DAG is, to be able to eliminate all possible graphs except for one. And that one would be the correct one. That sounds like a lot of data and a lot of tests potentially, right? Yeah. So in the original paper, yeah, this paper, I would say, has algorithms in it that you don't necessarily want to apply to large scale real world problems. And I remember there's actually a really funny passage in that paper that kind of admits, yeah, this algorithm will give you the correct solution with strong guarantees, but it's also super exponential in the amount of data that you need. And so as part of the paper with Claire, in the ICP for block MDPs paper, we do a very simple parallel from that ICP, that linear ICP, to the RL version with just linear models, so assuming that the MDP just consists of linear functions. But we also adapt another version, assuming any type of function, using neural networks, so that you can have universal function approximation. But then it also means that you have this trade-off, which is that you get these really strong theoretical guarantees with the linear version and we lose them with the deep learning version. But at least it does scale up to larger scale problems. It sounds pretty magical, keeping agents from being distracted by the confusing things that would be obvious to us, but maybe in deep RL, you'd say, a curve-fitting style of learning could easily be distracted by all these things. So you talked about some of the assumptions, but is there more, in terms of things like the time delay in the rewards? It just seems like a really hard problem. And I guess, I don't know if you ever step back and think about how we do it in our brain, how humans do it, because when we're playing soccer in the rain, it's no harder than playing soccer in the sun. But to get to that point, we have so much experience and priors, which I guess we're not assuming here, right? We're coming in tabula rasa and saying, what can we do? Yeah, you're right. I mean, the kinds of priors and the amount of experience that we have, we are not giving to our agents right now. And so in that sense, it's not really a surprise that we are where we are, that we're training agents that don't have good generalization. You know, you say that this sounds almost magical, but it's really so mundane. It's really just this trade-off of how strong the assumptions are that you can make about a problem, leading to the strength of the generalization results, right? And the kinds of assumptions that we make, especially in the linear setting, in order to get those kinds of guarantees, are just not applicable. So I guess what I want to highlight is that there's no free lunch. And I think what I'm really interested in, or one path that I think is useful, is to think about different kinds of structure that do exist in the real world, and make those assumptions explicit, and figure out how that leads to gains in terms of sample complexity and in terms of generalization performance of algorithms.
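To make the flavor of the linear ICP discussed above a bit more concrete, here is a heavily simplified sketch added for illustration. It is not the algorithm from Peters et al. or from the block MDPs paper: the real method uses formal hypothesis tests, whereas this toy version just checks whether per-environment linear fits give approximately the same coefficients, and keeps the intersection of the feature subsets that pass. The environment construction and the tolerance are assumptions made for the example.

```python
# Simplified illustration of invariant prediction: a feature subset is
# "accepted" if a linear model fit separately in each environment gives
# (approximately) the same coefficients. ICP proper replaces the crude
# tolerance check below with statistical hypothesis tests.
from itertools import combinations
import numpy as np

def fit_linear(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares coefficients
    return coef

def invariant_feature_set(envs, num_features, tol=0.1):
    accepted = []
    for k in range(1, num_features + 1):
        for subset in combinations(range(num_features), k):
            coefs = [fit_linear(X[:, list(subset)], y) for X, y in envs]
            spread = max(np.max(np.abs(c - coefs[0])) for c in coefs)
            if spread < tol:                     # invariant across environments
                accepted.append(set(subset))
    # ICP reports the intersection of all accepted subsets.
    return set.intersection(*accepted) if accepted else set()

# Two toy "environments": y depends on feature 0 invariantly, while feature 1
# is a spurious correlate whose relationship to y changes across environments.
rng = np.random.default_rng(0)
def make_env(spurious_weight, n=500):
    x0 = rng.standard_normal(n)
    y = 2.0 * x0 + 0.1 * rng.standard_normal(n)
    x1 = spurious_weight * y + 0.1 * rng.standard_normal(n)
    return np.stack([x0, x1], axis=1), y

envs = [make_env(0.5), make_env(-1.5)]
print(invariant_feature_set(envs, num_features=2))  # expected: {0}
```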
Because if you think about the typical definition of an MDP, of a Markov decision process, it's so general; the definition makes no assumptions about underlying structure. Going back to the previous example that I gave, this adversarially difficult MDP that you can construct, right? There's no hope of generalization to new unseen states, and it doesn't make sense to design algorithms for that setting. And so I'm a big proponent of defining new types of MDPs, new types of structured MDPs, like contextual MDPs, hidden parameter MDPs, block MDPs, because I think we need those in order to get better algorithms and better guarantees. Awesome. Yeah. In episode two, I got to ask Michael Littman why there are so many RL algorithms, and his answer had to do with the fact that there are so many types of MDPs that we need different approaches to them. Yeah. Cool. Okay. So do you have follow-up work planned along these lines? Yeah. There have been a couple of other things. Claire and I actually have another paper that we're working on right now, again using this kind of causal inference perspective, but now for exploration, trying to develop new exploration algorithms for MDPs from this causal perspective. So that's one. Another one is on intervention design, and this was led by a PhD student, Melissa Mozifian. And so this was looking at sim-to-real transfer. So again, a type of generalization, also using a similar type of causal inference perspective to try to explain why data augmentation and domain randomization work so well for generalization, and to basically inform the type of data augmentation and the type of domain randomization that are needed, and how many are needed, in order to get generalization, or the type of generalization that we want. So those are a couple of other works that I think are along this line of causal inference and RL. Great. I look forward to it. So let's move to your next paper, that is Multi-Task Reinforcement Learning with Context-based Representations, by Sodhani et al., with yourself as a co-author, is that right? Yes, yes, that's right. So can you give us a brief version of this paper? Yeah, this paper is really fun. So this paper was looking at how we can use context, like side information, that's not really a part of the MDP formulation. But there are a lot of settings in which we have side information that is useful for our task at hand. And, you know, this could be just sort of a description of the task, or just prior knowledge about the dynamics or the environment. And so in the scope of this paper, we were actually focusing on this multi-task and meta-RL benchmark, Meta-World. So Meta-World has an easy and a hard version. There's MT10, which consists of these 10 tasks, and MT50, which has 50 tasks. But these tasks are all manipulation tasks. It's in simulation, with a robot arm and different objects. And so in MT10, examples of these different tasks are open a door, close a door, open a window, close a window, open a drawer, close a drawer. And in the multi-task setting, what we usually do is assign task IDs. And so this can just be a one-hot; we can just talk about it as integer values. You know, task one is open drawer, task two is open door. And in Meta-World, it's really funny because there are actually these very simple sentence descriptions of each of these tasks that are meant for human consumption.
And so you can read this sentence that's like, the robot arm must turn the knob in order to open this door, something like this. It's just one sentence. And from reading that sentence as a human, you're just like, okay, I know exactly what this task is. But that sentence was never meant to be part of the MDP. It's not given to the RL algorithm. The agent never uses it. And so one contribution of this paper was to show that we can design an architecture that uses this kind of contextual information, and we can beat state of the art performance just from using these simple sentences, using pre-trained language models in order to construct the context embeddings. And my hope is that this work shows that we should be thinking harder about how to incorporate that kind of context into tasks, because it can improve performance. Okay. So providing more about the context than just a task ID, or what would be the alternative there? Yeah. So in the multi-task setup, the alternative is a task ID. And the reason the task ID is so terrible is that if you're using task IDs to denote tasks at training time, it means that you have no hope of generalizing to a new unseen task at test time, because there's no semantic meaning in the task ID. There's no structure underlying the mapping from the task ID to the actual task itself. And so if you're given a new task ID at test time, you have no idea what that new task is. Whereas if you use something more descriptive, like these sentences, we show that you can actually get zero-shot generalization to new unseen tasks. I mean, the performance was quite bad. This was just a one-off experiment that we ran that was kind of a side note. But I think the idea is that if you can scale this up, if we had done this with a much larger family of tasks, we could definitely get better zero-shot generalization, because the agent would be able to learn a mapping between different words and rewards and actions. Cool. And then this phrase block comes up again. I think you call this a block contextual MDP. What does block mean here again? Yeah. So the contextual MDP setting was something that was previously defined. And it just means that you have this context which informs the transition function and reward function. And so it just creates all of these different tasks with this underlying structure. Meta-World, interestingly, made some design choices basically to make it work well with RL algorithms. So one of the downsides of using neural networks, using a lot of models, is that they require a fixed size input. And so you have all of these different tasks, but the objects in those tasks are different. And, since this is all looking at proprioceptive state, we need the dimensions of that state space to be the same across all those tasks. And the way that they did this was to have different dimensions mean different things in different tasks, because they represent different objects. And so the block component here is just to sort of reinforce that. The block component is saying not only do your reward and transition functions depend on this context, but your state space does too.
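As a toy contrast between the two kinds of context just discussed, here is a minimal sketch added for illustration (not code from the paper). The "embedding" is just feature hashing of words, standing in for the frozen pretrained language model the paper actually uses; the only point is that related task descriptions share structure, while one-hot IDs for different tasks are always mutually orthogonal.

```python
# One-hot task IDs carry no semantics; language-based contexts do, because
# related descriptions share words (and, with a real language model,
# share embedding structure).
import numpy as np

NUM_TRAIN_TASKS = 10
EMBED_DIM = 32

def one_hot_task_id(task_index: int) -> np.ndarray:
    context = np.zeros(NUM_TRAIN_TASKS)
    context[task_index] = 1.0      # says nothing about what the task is
    return context

def language_context(description: str) -> np.ndarray:
    # Crude stand-in for a frozen pretrained sentence encoder.
    context = np.zeros(EMBED_DIM)
    for word in description.lower().split():
        context[hash(word) % EMBED_DIM] += 1.0
    return context

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

open_drawer = language_context("open the drawer")
close_drawer = language_context("close the drawer")   # unseen combination
open_window = language_context("open the window")

# Related descriptions overlap, which is what makes zero-shot transfer to a
# new combination of known concepts conceivable; distinct one-hot IDs don't.
print(cosine(open_drawer, close_drawer), cosine(open_drawer, open_window))
print(cosine(one_hot_task_id(0), one_hot_task_id(1)))  # always 0.0
```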
And I think that's actually a really important component, because when we think about the whole world and all the possible tasks that you can do, you can construct this as one giant MDP where your state space is the information about the whole world. But we don't operate that way. We operate by having a focus on the objects at hand for a specific task. If you're trying to hammer nails, your focus is on the nails and the hammer, not on something sitting off in the corner. And so just because your state space can change doesn't mean that we're incapable of generalization. And so the block contextual MDP setting is just reinforcing that idea. Okay. Then I gather that in the algorithm here, which I think you called CARE, the state gets encoded by a set of encoders, and then you have attention over them. Can you talk to us about the intent with that, with the encoders and the attention? What is that doing? Why does that help? Yeah. So this was another major contribution of this paper. So the idea here is that by having these separate encoders, the goal is that we're basically trying to get some sort of compositional generalization. By compositional generalization, I just mean that same idea, right? Where if we train an agent to close a drawer and open a window, then if we tell it to close a window, it may understand what that concept is. And so we have these different encoders with the hope of each encoder mapping to some concept or object that appears in multiple tasks. So in Meta-World, in all those examples that I've been giving, I think it should be pretty clear that there are concepts like open and close and drawers and doors that appear multiple times in different tasks, just in different combinations. And so the goal of using this mixture of encoders is that each encoder will hopefully map, implicitly, to one of these concepts. And so then the context can be used to train what are basically attention weights over these encoders. And so certain encoders will only activate if that concept is necessary to solve that task. And so what we found, and this is kind of hard to verify because this is all done implicitly, right? There's no supervised mapping from encoder to word that we're actually using here. And so the way that we test whether that's what it's actually doing is by varying the number of encoders K. And in the experiments, we found that if K equals one or two, we get poorer performance, and this actually looks a lot more like the multi-task baselines for Meta-World. But if we increase K to be too large, if we set K to be the number of actual tasks that we're trying to solve, we actually found that we also get worse performance, because what ends up happening is that each encoder just gets assigned a separate task, and so no information is getting shared across tasks. And so K is something that you kind of have to tune, or you can choose K using the knowledge that you have about all of these tasks. So if we choose K to be approximately the number of concepts and objects that we think exist across this set of tasks, then we get the best performance. Cool. So I guess that shows that it's not just a matter of having enough capacity in that layer. Yeah. Yeah. It's really about how you share information. Right on. And then this paper talks about zero-shot generalization.
Can you talk about how zero-shot works in this setting? Is that like a whole new context, a textual description of a task it's never seen? Yeah. And so this is again why this kind of more descriptive task ID, or context, is more useful than using just a number or a one-hot vector. Because there is actually structure in the components of the sentence, the context, that describe the task. Okay. And we know that this is true because, as a human, if I read to you a sentence that says, please open this door for me, you know exactly what to do even if you had never opened a door before. Okay, that was a bad example. Another example that people talk about in compositional generalization is dax. So if I tell you, you don't necessarily know what dax means; I'm just going to tell you it's some motion, maybe like clap twice. If I say dax twice, you know what to do there. You know that you want to perform that dax motion twice. And so that's the same type of compositional generalization that we think we can get from this kind of architecture and using this kind of context. And so just in the scope of Meta-World, what we did was we very carefully split up these 10 tasks in MT10 so that all of the concepts and objects present in the test tasks are seen in the training tasks, but not in the exact same combinations. And so we just wanted to see, okay, if an agent is introduced to all of these components necessary to figure out what these test tasks are, but never sees those test tasks, can it perform them? And, you know, the success rates for this are pretty low. I think we were at like 30%. But that's still very promising, given that we have an agent that's given seven tasks and then asked to perform another three that it's never been trained on before. So I think that was a really promising first step towards something more impressive in terms of zero-shot generalization. So would you say that this type of system is learning grounded language? Is it attaching these words to concepts in the environment? In a very primitive way, yes. The vocabulary that we're using here consists of definitely less than 100 words. But if we can scale this up, then absolutely, I think that what we would find is that it's learning in an unsupervised way to match words and phrases to specific transition functions, reward functions, or components of the environment. I want to recommend your talk at UCL's Deciding, Acting, and Reasoning with Knowledge lab, the DARK lab, that's on YouTube, and that was from June this year. We'll have the link in the show notes. And that partly overlaps with this conversation, and you shared a lot more besides in that talk. And then at the end of this paper, you mention some angles for follow-up. Is that something you think you're doing? Yeah. So one of the obvious follow-ups here is that we can also extend all of this to rich observations. And that's actually something that we are doing now. But we can also scale this up in the way that you suggested, right? Which is increasing the vocabulary and seeing how far we can push this sort of grounded language and RL component. And so there's actually a really nice environment for this out of FAIR London, led by Tim Rocktäschel and Ed Grefenstette, I believe. So that's the NetHack environment. And I don't know if you're familiar with NetHack, but I've never played it before.
But I guess it's this old school computer game that's text-based. But the interesting thing about it is that there are a ton of different objects and agents and components of this game. And there is an extensive wiki. And I believe no human has ever beaten the game without reading this wiki. Right. Yeah. So your agent would read the wiki? Is that what you're thinking? Yeah. Yeah. Well, that's what their hope is. That's what their goal is, to have an agent that can read this wiki and learn to play the game, or, through interacting with the game while reading this text, sort of ground the text to the game and learn to solve it. It's very hard. I think it's going to take a really long time before we can get there. But they have also created a mini version of this. I think the paper is now on arXiv, and I believe the code is now publicly available. It's called MiniHack. And so it's just many simpler versions of this game. I'm actually not sure if there is text attached to it, but I think it would still be pretty easy to create, like, a paragraph explaining what's going on. And so these smaller versions are much more doable with today's RL algorithms. And so the goal is to just sort of push that envelope further and see what we can do. And so this is an environment that we're working on with a collaborator and seeing how far we can get. That sounds really exciting. And personally, I've seen NetHack for years, but I'm terrified to even try it because it looks so addictive. Cool. I look forward to that, since that sounds really powerful. So in these two papers that we just talked about, were you surprised by any of the results that you got, or do you feel more like things turned out just as you expected and planned? In the first paper with Claire, it definitely took a while for us. I think these concepts of spurious correlations and distractor variables, irrelevant features, I think they're very intuitive for us, or at least, you know, we understand what they mean in supervised learning better. But it was actually kind of hard at first to design environments correctly. As an example from supervised learning, where it's maybe a little bit easier to think about understanding what's spurious or not, it has to be carefully tuned. So as an example, say you combine the CIFAR-10 and MNIST datasets, right? Let's say that we construct a mapping from the digits 0 through 9 to the 10 classification labels from CIFAR, and we append those corresponding images together to create this joint MNIST-CIFAR dataset. And now we're going to declare that one of those things is a distractor, and the other thing is the real thing that we care about. Let's say the CIFAR label is the thing that we care about, and the MNIST digit is just a distractor. But because we've created this incredibly strong correlation between a specific digit and a specific object from CIFAR, there's no way for you to be able to tell which is the spurious relation and which is the true thing that we care about. The way to... Because they're always together? Because they're always together. And so the way that we tell which one is actually the thing that we care about is if we add in some noise. So maybe there are some examples where you have a different digit that's been attached; typically it's a one and a cat that are attached together, and maybe you have a couple of examples where a three and a cat are attached together, but the label is still the same.
Then we'll know, okay, cat is the thing that we care about and not the digit. But if you don't have that noise, then there's no way to tell. And so we had similar issues with designing RL environments in which we had the right type of variation, in order to get the failure mode that we wanted to exhibit, that we wanted to show that current RL algorithms have, in order to fix it. But that's also just because we're dealing with toy environments and don't have real world problems; these kinds of examples very clearly do exist in the real world. So there's a really nice paper, I think NeurIPS 2019, on causal confusion. They have a really nice example with autonomous driving where there's a light on your dashboard that comes on whenever you brake. And so you see demonstration data of someone driving this car, and the person brakes whenever the car in front of them brakes. So we know that the thing that we should be learning is that if you see brake lights on the car in front of you, then you should be braking. But what the agent learns instead is to pay attention to the brake light on the dashboard, and it only brakes when that brake light is on, which means it'll never brake at test time. And so that's the kind of spurious correlation that you have to be aware of. So that was for the first paper. For the second paper, CARE, the only surprise was how well it worked. I didn't expect to see such a huge gain in performance just from incorporating these very simple sentences, from giving these sentences to the agent. But it really did move the needle quite a lot and didn't require any tuning. So I thought that was really exciting, and surprising. Cool. That must have been a nice feeling. Yeah. You don't get those often. Okay. So I wanted to ask you more about generalization in general. Can you talk a bit about the difference between generalization in supervised learning versus generalization in reinforcement learning? Yeah. So at a high level, they're very similar. You can define generalization in supervised learning, in distribution versus out of distribution, as just sort of the difference in performance of your model on training data versus test data. And we can do the same thing for RL. You can say generalization in RL can be measured by the total reward you can achieve with your training policy on your training MDPs versus your test MDPs. But an MDP has a lot more components than just a data distribution that you're sampling from in supervised learning. And so you can think about different levels of generalization in reinforcement learning that I think are useful to think about. So I think the degenerate setting, the simplest setting, which is what a lot of people were working in up until a few years ago, was the setting where your training and test environments are exactly the same. If you have deterministic dynamics and just a single initial state, then it's very obvious, right, that whatever performance you get at train time will be exactly the same as the performance you get at test time. There is no testing of generalization at all. You can start testing generalization if you have an initial state distribution, because now you can control what initial states you see at train time versus test time. So now you can actually start testing your agent on unseen states. And so one way that we did this in an early paper was by controlling the random seed.
So if you control the random seed of the environment, then we can limit the number of initial states that you see at train time versus test time. And so we showed that, depending obviously on the complexity of the environment, but even for very simple environments, if you limit your agent to tens or hundreds of seeds at training time, so that's the initial states that it sees, and we have a held-out set of initial states at test time, you do see a generalization gap. You do see a performance difference of the agent on these different initial states. There's another paper on dissecting overfitting in reinforcement learning by Chiyuan Zhang, and there they show, again for these kinds of maze and tunnel environments, that if you increase the difficulty of the environment, then it can take thousands, hundreds of thousands, of different maze layouts before you can generalize to new unseen layouts. And so I think there's been a slew of papers examining generalization in RL in the last few years that really highlight how far behind we are, because we have these benchmarks with deterministic dynamics and very narrow initial state distributions, and so we just never really were testing generalization. So the initial state distribution is sort of the first rung on this ladder. But there are other things that you can change about your MDP, ways in which we as humans can generalize that our current agents can't. And so the first paper, ICP for block MDPs, was focusing on observational generalization. Can we generalize to distractors or things changing in the environment that don't affect the dynamics and reward? How do we develop algorithms that can be robust to that kind of change? You can also have a setting where your dynamics and reward can change, but there are underlying rules that stay invariant across all your tasks. As an example, the laws of physics are always the same, but different objects act differently because they have different attributes, like different mass and volume and friction coefficients. And so these are all types of multi-task settings where we should be able to get generalization that we currently just can't. And why is it that we can't do it right now? And I wonder how much of that we can blame on deep learning, in the sense that deep learning doesn't do a great job of extrapolation; it seems to me to be mostly doing interpolation. If that makes sense, do you see it that way? Is it deep learning's fault that deep RL is not generalizing that way? Or is it really that we don't know the right algorithms yet? Or is that just a really hard question that doesn't have an answer yet? There are so many different problems; the examples that I gave are a few of the ways in which our agents fail, and, you know, we can solve them one by one. Okay, okay. I will say yes, it's deep learning's fault. But it's a trade-off, right? The fact is that deep neural networks are really nice because they're universal function approximators. They can fit anything. We don't need to hard code the model. We don't need to program the laws of physics directly into an agent in order for it to learn to interact with the world. These are all trade-offs that we've made. And it just means that it's a sample efficiency issue, I think. Going back to the causal inference connections, in order to learn that correct causal model of the world, it's just going to require a lot of interaction.
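To make the seed-based generalization test described a little earlier in this answer more concrete, here is a minimal runnable sketch added for illustration: fix a small set of seeds as the "training" initial states and a held-out set for testing, then compare average return. It assumes the gymnasium package; CartPole and the random policy are stand-ins only (the paper's environments and agents were different), so with nothing actually trained the printed gap is just noise, but the protocol is the one described.

```python
# Measure the return gap between a limited set of training seeds and a
# held-out set of test seeds. With a trained policy that has overfit to
# TRAIN_SEEDS, this gap comes out positive.
import numpy as np
import gymnasium as gym

TRAIN_SEEDS = list(range(10))          # the limited set of initial states
TEST_SEEDS = list(range(1000, 1010))   # held-out initial states

def average_return(seeds, episodes_per_seed=3):
    env = gym.make("CartPole-v1")
    returns = []
    for seed in seeds:
        for _ in range(episodes_per_seed):
            obs, _ = env.reset(seed=seed)           # the seed fixes the initial state
            done, total = False, 0.0
            while not done:
                action = env.action_space.sample()  # random policy stand-in
                obs, reward, terminated, truncated, _ = env.step(action)
                total += reward
                done = terminated or truncated
            returns.append(total)
    return float(np.mean(returns))

gap = average_return(TRAIN_SEEDS) - average_return(TEST_SEEDS)
print(f"generalization gap in average return: {gap:.2f}")
```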
And so a big part of that is just scaling up our algorithms, building larger multi-task simulations so that we can develop agents that can use information from other tasks, leverage that information in order to solve a new task. I think everything that we've done so far has just been really toy. And part of the problem is the sample efficiency. I think again, it's a trade-off of what inductive biases we want to put in to improve sample efficiency and which ones we don't, where we pay the cost in sample efficiency but get this promise of better generalization. And I think we don't really know where that line is. And the line is probably different for different classes of problems. So I guess I would say we need better algorithms, we need better sample efficiency, so that we can actually do research on these larger scale problems. And we need better benchmarks, we need better simulation environments, real world environments, things that we can actually iterate on quickly. So I think these are all limiting factors. And you mentioned inductive bias. It seems like there are two sides of the coin in terms of generalization and inductive bias. Yeah. I guess when I think of deep learning, I think that the inductive bias is largely about how you frame the deep learning problem and how you structure your network. Is that the same in RL, is all the inductive bias coming from the network design, or how do you see designing the inductive bias in an RL problem? Like, is the algorithm really changing the inductive bias? It can. So let's just focus on different model-free RL algorithms. All of these different algorithms just have slightly different tricks in terms of the objective, right? The main objective is always the same. If it's policy gradient, you're just trying to get your policy to choose an action that's going to give higher return. That stays the same across all of these different algorithms. But you have different inductive biases as part of the objective, like stay close to the previous policy, do updates that don't change your policy very much. A lot of these were meant to stabilize training, but I think you can also do similar things in order to incorporate inductive biases about the real world. So yes, there are a lot of architectural things that we can do. We can use attention masks, we can use residual nets; you can do all of these things to try and incorporate these inductive biases to improve optimization, improve generalization. But we have another thing in RL that we don't really have in supervised learning as much, which is the auxiliary objective. And so people use as auxiliary objectives things like learning the dynamics of the environment, learning the reward dynamics of the environment, or keeping the entropy of the policy low or high. So there are all of these things that we do that are based on our intuition of what will work well. Thanks. I'm going to re-listen to that a few times. Okay. So let's move on to MBRL-Lib. So I see that you're a co-author for this library MBRL-Lib, which I gather is a model-based RL library from Facebook research. Can you tell us a bit about MBRL-Lib? Yeah. So this is a project that was led by Luis Pineda, who's a research engineer at FAIR Montreal. I'm really excited about this project.
So one of the difficulties of RL research, as I'm sure you know, is just reproducibility, the fact that these tiny little hacks, or design decisions, implementation decisions, have a really large impact on performance. And there has been an abundance of really great, really usable, modular libraries for model-free RL that have come out, and I think it's led to an explosion of research. It means that research is now accessible to a lot more people, because they have this platform to build off of. I think we haven't really seen this explosion as much in model-based RL, in big part because there just hasn't been a good library consisting of many algorithms that is stable, easy to use, readable, modular, and has hyperparameters for a bunch of benchmark environments. And so that's what this is. That's what MBRL-Lib is. And this is open-sourced, and the goal is to have practitioners and researchers use this library and contribute to this library in order to further model-based RL research. I see that you have Nathan Lambert as a co-author, and he hinted at this library coming out when he was on episode 19, but I think he didn't name it at the time because it wasn't announced yet. Yeah. He did that incredible work using model-based RL to get that tumbling behavior with the half cheetah, which kind of completely destroyed that benchmark, I think, with a very shocking score. So it's cool to see this. That's great to see. So I can't wait to check that out. And so right now I think there are two algorithms implemented, and I guess you're saying the plan is to have this as a long-term project and to grow that set. And so the goal is that there will also be, there have already been, users who have been contributing back to it. And so the goal is that if you use the library to develop a new algorithm and write a paper about your algorithm, you can submit a PR to add that to the library. So right now the two algorithms in the library focus on proprioceptive state, and we're working on adding some algorithms that focus on rich observations as well, so it'll be usable for both of those settings. Ooh, rich observation model-based. Yeah. Okay, so can you comment on how you see the balance between following the literature and other people's work, and coming up with your own ideas? The flood of papers doesn't seem to be slowing down, and I'm sure all researchers could spend all day, every day, just reading. How do you kind of think about that balance? I guess it is a balance. I mean, my advice would definitely be to read, like read a lot. I assume that this experience is shared by a lot of people, but I actually found that I assumed a lot more had been done than actually had been, and reading made me understand where the holes are. It makes you sort of realize what is still left to do when I enter into a new subarea. I typically tend to assume that a lot of major work has already been done, and there's no point in doing it, and there are no contributions to be made. And I think when you read the literature and you follow what's going on, and recent papers are a good way of doing this because they'll have related work sections where they explain what's been going on in the field so far and what is left to do, it really helps you see where the holes are and where you can step in and contribute.
In terms of coming up with my own ideas, and sort of that balance, I've actually found that the easiest way to come up with new ideas is just through talking to people. I guess my advice would be to just have research conversations with people in your lab, with people at conferences. This will be a lot easier when we move back to in-person conferences. But a lot of my collaborations have come about just from meeting up with people at conferences and chatting, and it just makes you realize a new fun idea, or a new insight, that doesn't seem to really have spread in the literature yet. So I think talking in terms of a balance makes it sound like it's some stationary thing. I guess I would say what it should usually be is: you're following the literature; when you enter into a new topic or new area, you should be doing a lot more following the literature, and then as you get more and more aware of what's established, what's been done in the last 10, 20 years, then talking to other people kind of helps you figure out where you can contribute. Great. And then do you spend much time focused on things well outside of RL or deep learning, in terms of reading or concepts, to find raw material for your work? I guess actually a lot of inspiration comes from the older RL literature, the pre-deep learning stuff. I think there are a lot of really nice ideas; people have thought very carefully about different assumptions to make about different environments and how to utilize those assumptions. So I found that literature to be very rich to dive into in order to get inspiration for how to develop algorithms, and algorithms with guarantees, that use deep learning and that can get scaled up to the kinds of problems that we deal with today. Yeah. So I guess it's more of the traditional RL focus. I did want to say, bisimulation, you know, the stuff that Pablo has worked on and that I've worked on, bisimulation is actually very old. It comes from the formal verification community, with regards to how you determine two systems are the same based on their inputs and outputs. And then in the 90s bisimulation got transferred over, or defined, for the RL setting, and then there's just been a slow trickle of papers, especially by Norm Ferns and Pablo, about bisimulation for RL. So, old ideas. I think there's a lot of richness in old ideas. Everything's been done before; it's just about modernizing it. Cool. Okay. And then besides your own work, are there other things happening in RL these days that you're particularly excited about? I've been focusing more on compositional generalization, and so, actually less lately and more back in the day, there's a lot of work on factored MDPs and object oriented MDPs, different kinds of assumptions of structure that lend themselves really nicely to achieving compositional generalization. I think these ideas tie a lot more closely to model-based RL rather than model-free, which is again why we've been pushing this MBRL-Lib. In terms of work that's been coming out more recently, I think there's been a lot of exciting work on the use of external data. Again, trying to use information sources or feedback sources that aren't just a reward, because reward is really hard to design in a lot of problems and it can often be sparse.
So if we can use other sources of data, like demonstrations or videos of humans performing tasks, to speed up learning, I think those things are really exciting. There's a paper out of Chelsea Finn's lab in particular; I don't remember the title, but the first author is Annie, and it was something about RL from in-the-wild videos. I think they used YouTube videos to improve the sample efficiency of RL across multiple tasks. That was really exciting. Is there anything else that I should have asked you about today, or that you think the audience might want to hear about? I'm pretty biased, but I think you did a really great job of asking a lot of questions that, at least I think, are important about generalization in reinforcement learning, so this has been really fun. Awesome. And likewise. Do you have any suggestions for the show, or the format, or who we should hear from, or anything else? I'm hoping it can be a useful resource to other people, and it's actually really hard to get critical feedback. I'm a huge fan of just more interaction, though obviously that's very difficult in podcast form; Q&A sessions are always nice. I don't know how you get feedback from your viewers or listeners, but I'm curious whether that's something you incorporate into the show. Okay. So I have done a few polls on Twitter. There's the Twitter account, which is talkrl podcast, and there are quite a few followers there and some interaction. We get some comments. I did some polls to ask people about general things, and I also asked who people would like to see, and actually a number of guests came out of those questions. So that would be one thing. I guess another thing I could do is pre-announce the guest and then get people to ask things on Twitter. Is that what you're pointing to? Yeah, I think that would be nice, just ways of getting listeners involved. Where that comes from is that I'm a huge fan of the unconference format that Ian Goodfellow has espoused, where instead of creating a conference with invited speakers who talk at a crowd, you have the participants bring the content, right? You have them create breakout groups and lead discussions, and I think it's a really great format. In terms of who else you should hear from, again from my very biased perspective: Claire Lyle. I've done a bunch of really great work with her, she's a very strong researcher, and I really enjoyed our collaborations. She's done a lot of really strong work on causal inference in RL and formal verification. Audrey Durand: she's a professor at Université Laval, and she does bandit theory as well as RL for healthcare, so I'm sure she has a lot of fun anecdotes about using RL and bandits in the real world. And George Konidaris from Brown. Obviously, I think we share a lot of the same views. Awesome, these are great suggestions. I've actually wanted to have Claire Lyle on the show for a while, and I've been lucky to meet her. I did invite Audrey and I will follow up with her, and George is a great suggestion too. So thanks for that. Dr. Amy Zhang, this has been fantastic. Thank you so much for sharing your time today and your insight with TalkRL. Yeah. Thank you for having me.
Notes and links for this episode are at talkrl.com. If you like this show, I need your support. You can help in a few ways. One, subscribe on your favorite podcast platform; subscriptions make a big difference. Two, follow us on Twitter at talkrl podcast; we love retweets. Three, give us a five-star rating on Apple Podcasts. If you don't think we deserve five stars, let us know on Twitter what we could do better. Bye.
[ { "end": 11, "start": 0, "text": " This is TalkArail Podcast." }, { "end": 13.84, "start": 11, "text": " All reinforcement learning, all the time." }, { "end": 16.68, "start": 13.84, "text": " Interviews of brilliant folks across the world of our realm." }, { "end": 20.68, "start": 16.68, "text": " I'm your host, Rob and Chauhan." }, { "end": 26.32, "start": 20.68, "text": " Dr. Amy Zhang is a postdoctoral scholar at UC Berkeley and a research scientist at Facebook" }, { "end": 27.400000000000002, "start": 26.32, "text": " AI Research." }, { "end": 32.28, "start": 27.4, "text": " She will be starting as an assistant professor at UT Austin in spring 2023." }, { "end": 34.64, "start": 32.28, "text": " Thanks so much for taking the time to do this, Dr. Zhang." }, { "end": 35.64, "start": 34.64, "text": " Yeah, of course." }, { "end": 36.64, "start": 35.64, "text": " Thanks for inviting me." }, { "end": 40.68, "start": 36.64, "text": " How do you like to describe your personal research interests?" }, { "end": 45.16, "start": 40.68, "text": " Very much within the reinforcement learning framework, I think that interaction with the" }, { "end": 46.56, "start": 45.16, "text": " environment is really interesting." }, { "end": 51.92, "start": 46.56, "text": " It has to do with a lot of the tasks in the real world that I care about." }, { "end": 60.32, "start": 51.92, "text": " Most of my work, the problems that I choose, I typically ground in robotics and I also" }, { "end": 63.52, "start": 60.32, "text": " have an interest in healthcare." }, { "end": 67.64, "start": 63.52, "text": " Because I really care about these real world problems, a really important problem that" }, { "end": 71.52000000000001, "start": 67.64, "text": " I think we have in reinforcement learning that I think is now getting more traction is" }, { "end": 72.52000000000001, "start": 71.52000000000001, "text": " generalization." }, { "end": 77.92, "start": 72.52000000000001, "text": " I would definitely say that the focus of most of my research in the last few years has" }, { "end": 80.64, "start": 77.92, "text": " been generalization in reinforcement learning." }, { "end": 82.88, "start": 80.64, "text": " How do you describe how you got to RL?" }, { "end": 86.08, "start": 82.88, "text": " Did your interest evolve towards that over time?" }, { "end": 89.84, "start": 86.08, "text": " Yeah, it's been a definitely a winding journey." }, { "end": 95.6, "start": 89.84, "text": " My bachelor's and master's were actually more on the doubly side and I used to do signal" }, { "end": 98.48, "start": 95.6, "text": " processing, information theory, network coding." }, { "end": 104.72, "start": 98.48, "text": " It was after that that I started exploring machine learning and I actually worked in" }, { "end": 106.36, "start": 104.72, "text": " more supervised learning at the time." }, { "end": 113.48, "start": 106.36, "text": " So I was working on recommendation systems and then further went into computer vision" }, { "end": 116.44, "start": 113.48, "text": " and that's when I started doing deep learning." }, { "end": 121.24, "start": 116.44, "text": " So there I was doing object detection, object classification, really a sort of classic" }, { "end": 124.68, "start": 121.24, "text": " problems." }, { "end": 130.92000000000002, "start": 124.68, "text": " I didn't like that for a lot of real problems." 
}, { "end": 136.07999999999998, "start": 130.92000000000002, "text": " So we were working on doing building detection from satellite images in order to estimate" }, { "end": 140.68, "start": 136.08, "text": " population density to provide internet all over the world." }, { "end": 146.28, "start": 140.68, "text": " What I didn't like was that a lot, most of the progress that you made, like the thing" }, { "end": 151.08, "start": 146.28, "text": " that really moved the needle was more about the data, making sure the data was clean" }, { "end": 153.44, "start": 151.08, "text": " that you had enough of it." }, { "end": 158.32000000000002, "start": 153.44, "text": " And I really missed having like a mathematical framework that you could work in and really" }, { "end": 162.44, "start": 158.32000000000002, "text": " like develop grounded algorithms." }, { "end": 166.6, "start": 162.44, "text": " And so it seemed like reinforcement learning was like a setting where you could actually do" }, { "end": 167.6, "start": 166.6, "text": " that." }, { "end": 171.07999999999998, "start": 167.6, "text": " Although of course, reinforcement learning is also very simple and efficient and so you do" }, { "end": 176.35999999999999, "start": 171.07999999999998, "text": " have to do a lot of like, you know, just software engineering and it requires a lot of" }, { "end": 177.35999999999999, "start": 176.35999999999999, "text": " compute." }, { "end": 182.76, "start": 177.35999999999999, "text": " But I like that on the other side of it, you do have this really nice framework to describe" }, { "end": 183.76, "start": 182.76, "text": " the world." }, { "end": 184.76, "start": 183.76, "text": " Cool." }, { "end": 185.76, "start": 184.76, "text": " Okay." }, { "end": 190.48, "start": 185.76, "text": " So I remember seeing your plantive ec poster that was back in Europe's 2019 in the Deep" }, { "end": 194.64, "start": 190.48, "text": " RL workshop at Vancouver at that workshop, I talked to a lot of the poster presenters" }, { "end": 198.83999999999997, "start": 194.64, "text": " in that room for the episode back in episode eight, but your poster was pretty busy." }, { "end": 200.32, "start": 198.83999999999997, "text": " So I didn't get to talk to you." }, { "end": 203.32, "start": 200.32, "text": " So I'm really glad we get a chance to talk now." }, { "end": 204.95999999999998, "start": 203.32, "text": " We cross paths at ICML." }, { "end": 205.95999999999998, "start": 204.95999999999998, "text": " That's right." }, { "end": 206.95999999999998, "start": 205.95999999999998, "text": " Early this year." }, { "end": 207.95999999999998, "start": 206.95999999999998, "text": " That's right." }, { "end": 208.95999999999998, "start": 207.95999999999998, "text": " Super, super excited to have you." }, { "end": 212.16, "start": 208.95999999999998, "text": " So let's let's talk about a couple of your recent papers." }, { "end": 216.48, "start": 212.16, "text": " The first one is invariant causal prediction for block MDPs." }, { "end": 219.56, "start": 216.48, "text": " And that was with yourself and Claire Lyle." }, { "end": 220.39999999999998, "start": 219.56, "text": " Yes." }, { "end": 223.24, "start": 220.4, "text": " Can you give us the general just of this paper?" }, { "end": 224.24, "start": 223.24, "text": " Yeah." 
}, { "end": 230.36, "start": 224.24, "text": " So Claire and I actually started talking at RL DM, Reinforcement Learning and Position Making," }, { "end": 233.68, "start": 230.36, "text": " which is this really great conference that only happens once every two years." }, { "end": 238.08, "start": 233.68, "text": " It doesn't have formal proceedings, but it was in Montreal that year." }, { "end": 242.4, "start": 238.08, "text": " And Claire and I had known each other because she'd done her masters at McGill, which is" }, { "end": 244.56, "start": 242.4, "text": " where I did my PhD." }, { "end": 249.52, "start": 244.56, "text": " And we were both really interested in, I mean, our core areas, I would say, were reinforcement" }, { "end": 253.08, "start": 249.52, "text": " learning, but we were both getting really interested in causal inference." }, { "end": 259.24, "start": 253.08, "text": " And specifically the inspiration for this paper was was another paper on invariant causal" }, { "end": 264.24, "start": 259.24, "text": " prediction or defining invariant causal prediction by UNS Peter's from 2015." }, { "end": 269.24, "start": 264.24, "text": " And so this paper is very firmly in sort of causal inference land and was focused on this" }, { "end": 274.6, "start": 269.24, "text": " idea that there are these different kinds of interventions that you can use in order to" }, { "end": 280.64000000000004, "start": 274.6, "text": " do causal discovery on data and there were requirements of what kind of interventions" }, { "end": 287.56, "start": 280.64000000000004, "text": " you need and how much data you need in order to get statistically sound hypothesis testing" }, { "end": 293.88, "start": 287.56, "text": " in order to find like the true causal structure underlying the data that you've collected." }, { "end": 299.28000000000003, "start": 293.88, "text": " And so we wanted it, it's really clear to us that there were really strong connections" }, { "end": 302.64000000000004, "start": 299.28000000000003, "text": " between causal inference and reinforcement learning." }, { "end": 306.76, "start": 302.64, "text": " In causal inference, the assumption that you make is that you have access to all of these" }, { "end": 311.12, "start": 306.76, "text": " different environment variables and the way that the environment and variables interact" }, { "end": 316.59999999999997, "start": 311.12, "text": " with each other is through directed edges, which are cause and effect." }, { "end": 323.03999999999996, "start": 316.59999999999997, "text": " And so you can, the goal in causal discovery is to figure out what is the directed acyclic" }, { "end": 327.76, "start": 323.03999999999996, "text": " graph, the DAB that connects all of these variables that you care about." }, { "end": 332.96, "start": 327.76, "text": " And so usually most of those variables are your data, your x, and then one of those variables" }, { "end": 336.52, "start": 332.96, "text": " is your target what you're trying to infer your y." }, { "end": 343.32, "start": 336.52, "text": " And so if you can, if you have the right data in order to find that DAB, then that basically" }, { "end": 347.32, "start": 343.32, "text": " builds your model and you can get out of distribution generalization." }, { "end": 352.44, "start": 347.32, "text": " And so it seems like this idea was really useful to try to get that same type of out of" }, { "end": 355.24, "start": 352.44, "text": " distribution generalization and reinforcement learning." 
}, { "end": 360.32, "start": 355.24, "text": " And so we tried to pose the same problem in reinforcement learning, figure out how to" }, { "end": 363.28000000000003, "start": 360.32, "text": " define this DAB in terms of the MDP." }, { "end": 370.2, "start": 363.28000000000003, "text": " And here we focused on specifically out of distribution generalization for observations." }, { "end": 376.32, "start": 370.2, "text": " So the idea is that typically the kind of state space that we work with in real problems" }, { "end": 381.16, "start": 376.32, "text": " is are these high dimensional bridge observations, pixels." }, { "end": 387.8, "start": 381.16, "text": " And so when you have pixels, it's not the most paired down version of the state that you" }, { "end": 388.8, "start": 387.8, "text": " could use." }, { "end": 394.56, "start": 388.8, "text": " There's all of these destructors, these like the asperius correlations, right?" }, { "end": 401.12, "start": 394.56, "text": " You know, that you have the sun in the sky and clouds and leaves moving in trees, but" }, { "end": 406.84000000000003, "start": 401.12, "text": " usually those are things that you don't necessarily care about for your specific task." }, { "end": 412.03999999999996, "start": 406.84, "text": " And so in this paper, we really focused on like a few shots setting." }, { "end": 416.67999999999995, "start": 412.03999999999996, "text": " And so this was sort of the high level motivation when we actually get to the kinds of experiments" }, { "end": 417.67999999999995, "start": 416.67999999999995, "text": " that we did." }, { "end": 422.67999999999995, "start": 417.67999999999995, "text": " It's obviously much more paired down and much simpler and all in simulation." }, { "end": 429.76, "start": 422.67999999999995, "text": " But the focus was on this idea that we can have access to only a couple of different" }, { "end": 431.84, "start": 429.76, "text": " environments at training time." }, { "end": 437.4, "start": 431.84, "text": " And if they're selected carefully, if you see variation across those training environments" }, { "end": 443.4, "start": 437.4, "text": " that models the true variation in the testing environment, the environment that you care" }, { "end": 451.79999999999995, "start": 443.4, "text": " about deploying in, then we can learn a model that will be robust to those variations." }, { "end": 458.4, "start": 451.79999999999995, "text": " And therefore generalize out of distribution to a testing environment where other things" }, { "end": 460.76, "start": 458.4, "text": " vary basically." }, { "end": 464.84, "start": 460.76, "text": " So that sounds like the magical part to me, the out of distribution part, right?" }, { "end": 471.59999999999997, "start": 464.84, "text": " Unlike domain, adapt, generalization and methods where we're just trying to sample from within" }, { "end": 473.92, "start": 471.59999999999997, "text": " a distribution." }, { "end": 480.24, "start": 473.92, "text": " If we go back to open AI's proxion, they were just trying to sort out the within distribution" }, { "end": 482, "start": 480.24, "text": " generalization issue, is that right?" }, { "end": 483.56, "start": 482, "text": " Yeah, you could say that." }, { "end": 484.56, "start": 483.56, "text": " I guess." }, { "end": 490.92, "start": 484.56, "text": " So what I would say is that within distribution and out of distribution in RL is a bit of a" }, { "end": 492.72, "start": 490.92, "text": " murky or thing." 
}, { "end": 494.88, "start": 492.72, "text": " So even just in the scope of proxion, right?" }, { "end": 496.16, "start": 494.88, "text": " There are all these different levels." }, { "end": 499.68, "start": 496.16, "text": " Let's just pick one of the simpler environments, like the maze environment." }, { "end": 503.6, "start": 499.68, "text": " You have all of these different levels that just correspond to different base layouts." }, { "end": 510.32, "start": 503.6, "text": " You can define a distribution that just consists of the training set of levels from proxion." }, { "end": 512.52, "start": 510.32, "text": " That's your within distribution, right?" }, { "end": 514.72, "start": 512.52, "text": " And you can train all of those." }, { "end": 520.56, "start": 514.72, "text": " And then you could say, OK, well, maybe my test levels are a different distribution." }, { "end": 526.6, "start": 520.56, "text": " And if I can generalize to those that's out of distribution generalization." }, { "end": 532.0799999999999, "start": 526.6, "text": " Because we're talking about samples are now environments, they're like, you know, you" }, { "end": 536.92, "start": 532.0799999999999, "text": " can define them as different MDPs rather than just a single data sample." }, { "end": 538.56, "start": 536.92, "text": " Like it usually isn't supervised learning." }, { "end": 543.2399999999999, "start": 538.56, "text": " Like the numbers that we're talking about are so different and supervised learning, you" }, { "end": 549.04, "start": 543.2399999999999, "text": " know, we have thousands, hundreds of thousands, millions of samples from a distribution." }, { "end": 554.1199999999999, "start": 549.04, "text": " And so you can say that this distribution is very well defined by the samples that you've" }, { "end": 555.1199999999999, "start": 554.1199999999999, "text": " collected." }, { "end": 560.0799999999999, "start": 555.1199999999999, "text": " Whereas in RL, the way it's specifically proxion, you know, you can think of this as like" }, { "end": 562.1999999999999, "start": 560.0799999999999, "text": " a multitask RL problem." }, { "end": 569.5200000000001, "start": 562.2, "text": " And now your samples from the distribution are MDPs, are separate tasks and in the scope" }, { "end": 576.72, "start": 569.5200000000001, "text": " of proxion, now we're really only talking about like tens and hundreds of samples." }, { "end": 582.6400000000001, "start": 576.72, "text": " And so because of that, I think, you know, what we define as a distribution, I get it's" }, { "end": 584.84, "start": 582.6400000000001, "text": " just less well defined." }, { "end": 588.24, "start": 584.84, "text": " And so I think like saying, oh, what's in distribution versus out of distribution?" }, { "end": 593.16, "start": 588.24, "text": " It doesn't have quite the same meaning in RL as it does in supervised learning." }, { "end": 596.08, "start": 593.16, "text": " Can you tell us more about this setting?" }, { "end": 597.08, "start": 596.08, "text": " What does a block MDP?" }, { "end": 598.08, "start": 597.08, "text": " What does that mean?" }, { "end": 599.08, "start": 598.08, "text": " Yeah, yeah." }, { "end": 605.88, "start": 599.08, "text": " So the block MDP formulation was first defined by a paper by Simon Doe at all, I think" }, { "end": 607.6800000000001, "start": 605.88, "text": " in 2019." }, { "end": 609.6800000000001, "start": 607.6800000000001, "text": " And that's like a very theoretical paper." 
}, { "end": 616.08, "start": 609.6800000000001, "text": " And but this definition of the block MDP, I think, as like a form of structured MDP is" }, { "end": 619.96, "start": 616.08, "text": " very useful, again, in a lot of real world problems." }, { "end": 624.36, "start": 619.96, "text": " And so what the block MDP is saying, it's not a very limiting assumption, it's just saying" }, { "end": 627.76, "start": 624.36, "text": " that in a typical, let's go back to a typical MDP, right?" }, { "end": 631.6800000000001, "start": 627.76, "text": " We have a state space, action space, transition distribution, reward function." }, { "end": 635.4000000000001, "start": 631.6800000000001, "text": " But the thing that we're really focusing on here is the state space." }, { "end": 640.24, "start": 635.4000000000001, "text": " The block MDP is saying, okay, we have a state space, but now let's say that that's" }, { "end": 641.24, "start": 640.24, "text": " latent." }, { "end": 643.08, "start": 641.24, "text": " We don't actually have access to that state space." }, { "end": 646.0400000000001, "start": 643.08, "text": " Instead, we see a different observation space." }, { "end": 652.08, "start": 646.0400000000001, "text": " And the assumption is that this observation space is much larger than the state space, but" }, { "end": 656.5200000000001, "start": 652.08, "text": " we're going to make a simplifying assumption that it's still fully observable." }, { "end": 662.5200000000001, "start": 656.5200000000001, "text": " And that's where the block assumption comes in, which is saying that for each observation," }, { "end": 666.5200000000001, "start": 662.5200000000001, "text": " it is possible to decode what the underlying state is." }, { "end": 671.96, "start": 666.5200000000001, "text": " And the way that you can do this is by saying that there is a one to many function, a rendering" }, { "end": 676.8000000000001, "start": 671.96, "text": " function that maps from the state S to the observation O." }, { "end": 681.6800000000001, "start": 676.8000000000001, "text": " And each set of observations is disjoint." }, { "end": 686.96, "start": 681.6800000000001, "text": " So a set of observations that belongs to a single state never overlaps with the set of" }, { "end": 690.44, "start": 686.96, "text": " observations that corresponds to a different state." }, { "end": 695.1600000000001, "start": 690.44, "text": " And that's how we get full observability and kind of avoid the whole POMDP partial" }, { "end": 697.96, "start": 695.1600000000001, "text": " observability problem." }, { "end": 703, "start": 697.96, "text": " And so the reason that the block MVP formulation is nice is that it's just saying, we have an" }, { "end": 706.72, "start": 703, "text": " environment in which we can get gains." }, { "end": 711.84, "start": 706.72, "text": " There is latent structure, and we can get generalization." }, { "end": 714.24, "start": 711.84, "text": " Because you can define any MVP." }, { "end": 719.9200000000001, "start": 714.24, "text": " You can define like a worst case MVP where you will never get generalization, where every" }, { "end": 724.8000000000001, "start": 719.9200000000001, "text": " new state that you see has nothing to do with any other previous state that you've seen." }, { "end": 731.0799999999999, "start": 724.8, "text": " You just have to fully exhaustively explore your entire state action space in order to understand" }, { "end": 732.56, "start": 731.0799999999999, "text": " it." 
}, { "end": 737.68, "start": 732.56, "text": " And so the block MVP formulation is just saying, we don't care about that worst case scenario." }, { "end": 742.16, "start": 737.68, "text": " We only care about problems where there is structure." }, { "end": 745.8, "start": 742.16, "text": " And therefore we can, therefore generalization is possible." }, { "end": 751.3199999999999, "start": 745.8, "text": " And is this the same blocking as the concept from design of experiments, blocking factor" }, { "end": 755.2, "start": 751.32, "text": " as a source of variability that's not a primary interest to the experiment or read that off" }, { "end": 756.2, "start": 755.2, "text": " a Wikipedia?" }, { "end": 758.2, "start": 756.2, "text": " But is that the same blocking?" }, { "end": 762.48, "start": 758.2, "text": " Actually, actually no." }, { "end": 766.1600000000001, "start": 762.48, "text": " But that is a really interesting connection." }, { "end": 772.0400000000001, "start": 766.1600000000001, "text": " So I think the block in block MDPs didn't come from that definition, but there's definitely" }, { "end": 773.7600000000001, "start": 772.0400000000001, "text": " a really nice link there." }, { "end": 781.12, "start": 773.7600000000001, "text": " So the block from block MDP is really just this idea that if you construct like a matrix" }, { "end": 788.12, "start": 781.12, "text": " that to map from states to observations, this matrix has a disjoint blocks in it." }, { "end": 794.24, "start": 788.12, "text": " And so that's how you represent the fact that the different sets of observations are" }, { "end": 796.8, "start": 794.24, "text": " disjoint that belong to different states." }, { "end": 800.68, "start": 796.8, "text": " In this paper you talk about model irrelevance abstraction." }, { "end": 801.68, "start": 800.68, "text": " Can you talk about that phrase?" }, { "end": 802.96, "start": 801.68, "text": " What does that mean?" }, { "end": 805.84, "start": 802.96, "text": " So a model irrelevance abstraction." }, { "end": 810.12, "start": 805.84, "text": " So as far as I know, this was coined by in a paper by Lee Hommley." }, { "end": 815.48, "start": 810.12, "text": " It's a really nice paper, just sort of unifying this framework of state abstractions." }, { "end": 817.88, "start": 815.48, "text": " This paper was from 2009." }, { "end": 822.88, "start": 817.88, "text": " And that's the first place where I've seen this definition of model irrelevance abstraction." }, { "end": 830.44, "start": 822.88, "text": " And it actually just means the same thing as bisonulation, which is like another concept" }, { "end": 833, "start": 830.44, "text": " that I've talked about in my papers." }, { "end": 840.04, "start": 833, "text": " And it's just this idea that if you ignore the states or observations, I mean, okay." }, { "end": 845.0799999999999, "start": 840.04, "text": " In reinforcement learning, we don't really care about features in the statespace." }, { "end": 847.16, "start": 845.0799999999999, "text": " What we really just care about is reward, right?" }, { "end": 854.04, "start": 847.16, "text": " We want to learn a policy that can maximize the total return, the total sum of rewards" }, { "end": 856.56, "start": 854.04, "text": " that we can achieve in the environment." 
}, { "end": 860.8, "start": 856.56, "text": " So instead of looking at states, if we just discard that and only pay attention to the" }, { "end": 865.88, "start": 860.8, "text": " reward and future reward that we get for some sequence of actions, then bisonulation and" }, { "end": 871.8, "start": 865.88, "text": " model irrelevance abstractions just say, I don't care if these states look different." }, { "end": 879.36, "start": 871.8, "text": " If I do a test, and a test consists of like a sequence of actions, no matter what test" }, { "end": 885.08, "start": 879.36, "text": " I perform, if the sequence of rewards that these two states give me are exactly the same," }, { "end": 891.56, "start": 885.08, "text": " like the, also the distribution over rewards are the same, then these two states are the" }, { "end": 893.36, "start": 891.56, "text": " same to me." }, { "end": 899.6800000000001, "start": 893.36, "text": " And so the model irrelevance abstraction is just saying is just constructing a state" }, { "end": 905.08, "start": 899.6800000000001, "text": " abstraction, like a coarser version of your state space that only pays attention to those" }, { "end": 906.08, "start": 905.08, "text": " differences." }, { "end": 907.08, "start": 906.08, "text": " Cool." }, { "end": 908.08, "start": 907.08, "text": " Okay." }, { "end": 914.2, "start": 908.08, "text": " And we featured Dr. Pablo Samuel Castro on episode five and he did his dissertation involving" }, { "end": 915.2, "start": 914.2, "text": " bisonulation." }, { "end": 917.6, "start": 915.2, "text": " So that concepts come up here." }, { "end": 923.04, "start": 917.6, "text": " So just to my, I understand if we look at, let's say, a mojoco environment and we look" }, { "end": 930.5999999999999, "start": 923.04, "text": " at the pixel version versus the proprioceptive version, would that be a case of bisonulation" }, { "end": 934.4399999999999, "start": 930.5999999999999, "text": " or model irrelevance, would that be related to this?" }, { "end": 935.4399999999999, "start": 934.4399999999999, "text": " Yeah." }, { "end": 936.4399999999999, "start": 935.4399999999999, "text": " Yeah." }, { "end": 937.4399999999999, "start": 936.4399999999999, "text": " Okay." }, { "end": 941.56, "start": 937.4399999999999, "text": " So I think like the proprioceptive state versus the pixel, right?" }, { "end": 946.16, "start": 941.56, "text": " So if you just look at these two different states spaces, they are, you can define them" }, { "end": 947.56, "start": 946.16, "text": " as two different of these." }, { "end": 949.56, "start": 947.56, "text": " Right." }, { "end": 955.1199999999999, "start": 949.56, "text": " But if you ignore that state and instead just look at the reward, we know that that matches" }, { "end": 956.1199999999999, "start": 955.1199999999999, "text": " up." }, { "end": 962.88, "start": 956.1199999999999, "text": " And so we can say that this MDP, the MDP consisting of the proprioceptive state is by similar" }, { "end": 966.7199999999999, "start": 962.88, "text": " to the one consisting of pixels." }, { "end": 971.1999999999999, "start": 966.7199999999999, "text": " And the model irrelevance abstraction." }, { "end": 978.5999999999999, "start": 971.1999999999999, "text": " So I guess like in the example that you gave, like, I think you said Atari, there's a one-to-one" }, { "end": 979.6, "start": 978.6, "text": " mapping, right?" 
}, { "end": 986.6800000000001, "start": 979.6, "text": " There's only one pixel observation that corresponds to one proprioceptive state unless you start" }, { "end": 990.48, "start": 986.6800000000001, "text": " adding in distractors or changing backgrounds and stuff." }, { "end": 995.84, "start": 990.48, "text": " And so because this is one-to-one, you could say it in either direction that one is an" }, { "end": 997.32, "start": 995.84, "text": " abstraction of the other." }, { "end": 1003.5600000000001, "start": 997.32, "text": " But typically when we talk about model irrelevance abstraction, we are talking about something" }, { "end": 1008.7199999999999, "start": 1003.56, "text": " coarser, something smaller of lesser coordinality, cardinality." }, { "end": 1017.4399999999999, "start": 1008.7199999999999, "text": " And so in the setting where we're talking about like a pixel version where irrelevant features" }, { "end": 1028.52, "start": 1017.4399999999999, "text": " are changing, then we can say that the proprioceptive state version of this game is the abstraction" }, { "end": 1030.76, "start": 1028.52, "text": " of the pixel ones." }, { "end": 1034.28, "start": 1030.76, "text": " The way of saying this is that what we want to find, like what we care about is the" }, { "end": 1036.6, "start": 1034.28, "text": " coarsest by simulation." }, { "end": 1043.08, "start": 1036.6, "text": " And that means it's just the version, it's the MDP that has the fewest number of states" }, { "end": 1047.8, "start": 1043.08, "text": " that captures the exact same reward behavior as the original game." }, { "end": 1048.8, "start": 1047.8, "text": " Cool." }, { "end": 1049.8, "start": 1048.8, "text": " Okay." }, { "end": 1054.36, "start": 1049.8, "text": " And we will have the link to the ICML poster session in the episode notes." }, { "end": 1057.28, "start": 1054.36, "text": " And for the audience, I recommend giving that a listen." }, { "end": 1064.76, "start": 1057.28, "text": " And also your second first author here, Claire Lyle, gave a great overview in that session," }, { "end": 1066.08, "start": 1064.76, "text": " including some diagrams." }, { "end": 1071.04, "start": 1066.08, "text": " And I saw she gave a more in-depth talk from the Simon's Institute that's also on YouTube." }, { "end": 1072.28, "start": 1071.04, "text": " And so we'll link to that as well." }, { "end": 1073.28, "start": 1072.28, "text": " Yeah." }, { "end": 1074.28, "start": 1073.28, "text": " Yeah, awesome." }, { "end": 1076.52, "start": 1074.28, "text": " Yeah, that was part of the reinforcement learning program there." }, { "end": 1080.56, "start": 1076.52, "text": " And she definitely does like a really good job of dissecting that paper." }, { "end": 1081.56, "start": 1080.56, "text": " Totally." }, { "end": 1082.56, "start": 1081.56, "text": " Okay." }, { "end": 1084.48, "start": 1082.56, "text": " So can you say a little bit more about invariant causal predictions?" }, { "end": 1088.32, "start": 1084.48, "text": " This is the concept, I gather that's the concept that this paper is built on from the" }, { "end": 1089.32, "start": 1088.32, "text": " causal world." }, { "end": 1090.84, "start": 1089.32, "text": " And you brought it over to RO." }, { "end": 1092.32, "start": 1090.84, "text": " But how does that really work?" }, { "end": 1093.84, "start": 1092.32, "text": " And are these linear models?" }, { "end": 1094.84, "start": 1093.84, "text": " Yeah." 
}, { "end": 1102.68, "start": 1094.84, "text": " So in the original paper, the original ICP, which in our paper we called linear ICP, it relies" }, { "end": 1104.88, "start": 1102.68, "text": " on statistical tests." }, { "end": 1114.04, "start": 1104.88, "text": " And so, or you get stronger guarantees if we can only focus on linear models." }, { "end": 1123, "start": 1114.04, "text": " So as an example, and with invariant causal prediction, the goal is to basically find the" }, { "end": 1124.6399999999999, "start": 1123, "text": " causal future set." }, { "end": 1131.1599999999999, "start": 1124.6399999999999, "text": " So the assumption is that you've got this supervised learning problem where you have your data set" }, { "end": 1135.96, "start": 1131.1599999999999, "text": " x and your labels y and you're trying to infer y from x." }, { "end": 1141, "start": 1135.96, "text": " And the causal on a friend's perspective on this is not just, you know, is not just" }, { "end": 1146.36, "start": 1141, "text": " like the typical one in machine learning that just says, okay, like there exists some model." }, { "end": 1151, "start": 1146.36, "text": " And if I do optimization, then I, with high likelihood, will find this model that can" }, { "end": 1153.44, "start": 1151, "text": " give the right prediction." }, { "end": 1158.8, "start": 1153.44, "text": " In causal on a friend's, there are a lot, there's like a more structured assumption, right," }, { "end": 1164.4, "start": 1158.8, "text": " which is that your x and y fit together as this directed acyclic graph." }, { "end": 1167.76, "start": 1164.4, "text": " And you really want to find like the correct graph." }, { "end": 1173.36, "start": 1167.76, "text": " And if you find the correct graph and fit the correct edge functions, then you can infer" }, { "end": 1177.16, "start": 1173.36, "text": " y correctly." }, { "end": 1182.56, "start": 1177.16, "text": " And so, invariant causal prediction, the original paper by UNS Peter says that you need" }, { "end": 1192.44, "start": 1182.56, "text": " to have an intervention on every variable or you need to see a change in every variable" }, { "end": 1205.4, "start": 1192.44, "text": " in x and how it propagates to the other variables x and y in order to identify what the correct" }, { "end": 1210.8, "start": 1205.4, "text": " dad is to be able to eliminate all possible graphs except for one." }, { "end": 1212.92, "start": 1210.8, "text": " And that one would be the correct one." }, { "end": 1218.6000000000001, "start": 1212.92, "text": " That sounds like a lot of data and a lot of tests potentially, right?" }, { "end": 1219.6000000000001, "start": 1218.6000000000001, "text": " Yeah." }, { "end": 1227.6799999999998, "start": 1219.6, "text": " So, in the original paper, yeah, so this paper, I would say, is, it has algorithms in" }, { "end": 1231.9599999999998, "start": 1227.6799999999998, "text": " it that you don't necessarily want to apply to large scale real world problems." }, { "end": 1237.6799999999998, "start": 1231.9599999999998, "text": " And I remember there's actually a really funny passage in that paper that kind of admits" }, { "end": 1244.6, "start": 1237.6799999999998, "text": " like, yeah, this algorithm will give you the correct solution with strong guarantees," }, { "end": 1250.1999999999998, "start": 1244.6, "text": " but it's also super exponential in the amount of data that you need." 
}, { "end": 1259.04, "start": 1250.1999999999998, "text": " So, and so as part of the paper with Claire in the ICP for block MVP paper, we do do just" }, { "end": 1266.1599999999999, "start": 1259.04, "text": " like a very simple parallel from that ICP, that linear ICP to the RL version with just linear" }, { "end": 1271.4399999999998, "start": 1266.1599999999999, "text": " models, so assuming that the MVP is just consisting of linear functions." }, { "end": 1278.56, "start": 1271.44, "text": " But we do also adapt another version, just, you know, like assuming any type of function" }, { "end": 1283.3200000000002, "start": 1278.56, "text": " right using neural networks so that you can have like universal function approximation." }, { "end": 1287.8400000000001, "start": 1283.3200000000002, "text": " But then it also means that you have this trade off, which is that you get these really" }, { "end": 1293.6000000000001, "start": 1287.8400000000001, "text": " strong theoretical guarantees with the linear version and we lose them with the deep learning" }, { "end": 1294.6000000000001, "start": 1293.6000000000001, "text": " version." }, { "end": 1299.88, "start": 1294.6000000000001, "text": " But at least it does scale up to two larger scale problems." }, { "end": 1303.88, "start": 1299.88, "text": " It sounds pretty magical, keeping agents from being distracted by the confusing things" }, { "end": 1311.44, "start": 1303.88, "text": " that would be obvious to us, but maybe you'd approl would say a curve fitting style learning" }, { "end": 1314.2, "start": 1311.44, "text": " could easily be distracted by all these things." }, { "end": 1317.72, "start": 1314.2, "text": " So you talked about some of the assumptions, but can you, is there, is there more in terms" }, { "end": 1322.96, "start": 1317.72, "text": " of like the time delay in the rewards and like how it just seems like your true, it's" }, { "end": 1324.44, "start": 1322.96, "text": " a really hard problem?" }, { "end": 1328.8000000000002, "start": 1324.44, "text": " And I guess, I don't know if you ever, if you ever step back and think about how we do" }, { "end": 1333.12, "start": 1328.8, "text": " it in our brain, like how humans do it, because like when we're playing soccer in the" }, { "end": 1336.1599999999999, "start": 1333.12, "text": " rain, it's, it's no harder than playing soccer in the sun." }, { "end": 1340.2, "start": 1336.1599999999999, "text": " But to get to that point, we have so much experience and priors, which I guess we're, we're" }, { "end": 1341.56, "start": 1340.2, "text": " not assuming here, right?" }, { "end": 1345.24, "start": 1341.56, "text": " We're coming in tabula rassa and saying, what can we do?" }, { "end": 1349.08, "start": 1345.24, "text": " Yeah, the amount of, you're right." }, { "end": 1354.8, "start": 1349.08, "text": " I mean, the, the kinds of priors and a lot of experience that we have, we are not giving" }, { "end": 1357.36, "start": 1354.8, "text": " to our agents right now." }, { "end": 1362.9599999999998, "start": 1357.36, "text": " And so it, in that sense, it's not really a surprise that we are where we are, that" }, { "end": 1368.4399999999998, "start": 1362.9599999999998, "text": " we have, there were training agents that don't have good generalization." }, { "end": 1374.32, "start": 1368.4399999999998, "text": " Um, you know, you, you say that this sounds almost magical, but it's, it's, it's really" }, { "end": 1375.8, "start": 1374.32, "text": " so mundane." 
}, { "end": 1380.8, "start": 1375.8, "text": " It's really just this tradeoff of like how strong are the assumptions that you can make" }, { "end": 1386.8799999999999, "start": 1380.8, "text": " about a problem, lead to the, like strength in terms of the generalization results," }, { "end": 1387.88, "start": 1386.88, "text": " right?" }, { "end": 1393.72, "start": 1387.88, "text": " And so the kinds of assumptions that we make, especially in the linear setting, in order" }, { "end": 1397.48, "start": 1393.72, "text": " to get those kinds of guarantees are just not applicable." }, { "end": 1403.2800000000002, "start": 1397.48, "text": " Um, so there's really, I guess like what I want to highlight is that there's no free lunch." }, { "end": 1408.2800000000002, "start": 1403.2800000000002, "text": " And I think what I'm really interested in or like one path that I think is, is useful" }, { "end": 1414.72, "start": 1408.2800000000002, "text": " is to think about different kinds of structure that do exist in the real world." }, { "end": 1421.32, "start": 1414.72, "text": " And, and make those assumptions explicit and figure out how that leads to gains in" }, { "end": 1426.32, "start": 1421.32, "text": " terms of sample complexity in terms of generalization performance of algorithms." }, { "end": 1431.16, "start": 1426.32, "text": " Because if you think about the typical definition of an MDP of a Markov decision process," }, { "end": 1439.64, "start": 1431.16, "text": " like it's so general, where the, the, the, the definition makes no assumptions of underlying" }, { "end": 1444.96, "start": 1439.64, "text": " structure, like going back to the previous example that I gave, like this adversarily difficult" }, { "end": 1447.0800000000002, "start": 1444.96, "text": " MDP that you can construct, right?" }, { "end": 1450.8000000000002, "start": 1447.0800000000002, "text": " There's no hope of generalization to new and seen states." }, { "end": 1455.4, "start": 1450.8000000000002, "text": " And it doesn't make sense to design algorithms for that setting." }, { "end": 1463, "start": 1455.4, "text": " And so I'm a big proponent of defining new types of MDPs, new types of structured MDPs," }, { "end": 1471.32, "start": 1463, "text": " like contextual MDPs, hidden parameter MDPs, block MDPs, because I think we need those" }, { "end": 1475.8, "start": 1471.32, "text": " in order to get better algorithms and better guarantees." }, { "end": 1476.8, "start": 1475.8, "text": " Awesome." }, { "end": 1477.8, "start": 1476.8, "text": " Yeah." }, { "end": 1481.44, "start": 1477.8, "text": " In episode two, I got to ask Michael Littman, why is there so many RL algorithms?" }, { "end": 1485.32, "start": 1481.44, "text": " And his answer had to do with the fact that there's so many types of MDPs that we need" }, { "end": 1486.88, "start": 1485.32, "text": " different approaches to them." }, { "end": 1487.88, "start": 1486.88, "text": " Yeah." }, { "end": 1488.88, "start": 1487.88, "text": " Cool." }, { "end": 1489.88, "start": 1488.88, "text": " Okay." }, { "end": 1492.56, "start": 1489.88, "text": " So do you have follow up work planned along these lines?" }, { "end": 1493.56, "start": 1492.56, "text": " Yeah." }, { "end": 1496.52, "start": 1493.56, "text": " There's been a couple of other things." 
}, { "end": 1502.6, "start": 1496.52, "text": " Claire and I actually have another paper that we're working on right now, again, using" }, { "end": 1508.24, "start": 1502.6, "text": " this kind of causal and French perspective, but now for exploration, trying to develop" }, { "end": 1515.44, "start": 1508.24, "text": " new exploration algorithms for MDPs from this causal perspective." }, { "end": 1516.44, "start": 1515.44, "text": " So that's one." }, { "end": 1519.2, "start": 1516.44, "text": " Another one is on intervention design." }, { "end": 1523.72, "start": 1519.2, "text": " And this was led by a PhD student, Miguel Melissa Mazziffian." }, { "end": 1525.88, "start": 1523.72, "text": " And so this was looking at symterial transfers." }, { "end": 1531, "start": 1525.88, "text": " So again, a type of generalization, also using this similar type of causal and French perspective" }, { "end": 1539, "start": 1531, "text": " to say, it will want to explain why data augmentation and domain randomization works so well for" }, { "end": 1540.96, "start": 1539, "text": " generalization." }, { "end": 1550.6000000000001, "start": 1540.96, "text": " And to basically inform the type of data augmentation and type of domain randomization that are needed" }, { "end": 1557.88, "start": 1550.6000000000001, "text": " and how many are needed in order to get generalization or the type of generalization that we want." }, { "end": 1563.08, "start": 1557.88, "text": " So those are a couple of other works that I think are like along this line of causal and" }, { "end": 1564.08, "start": 1563.08, "text": " French and RL." }, { "end": 1565.08, "start": 1564.08, "text": " Great." }, { "end": 1566.08, "start": 1565.08, "text": " I look forward to it." }, { "end": 1570.48, "start": 1566.08, "text": " So let's move to your next paper that is multi-sask reinforcement learning with context-based" }, { "end": 1573.3600000000001, "start": 1570.48, "text": " representations by that's by Sedani at all." }, { "end": 1575.68, "start": 1573.3600000000001, "text": " With yourself as a co-author, is that right?" }, { "end": 1577.64, "start": 1575.68, "text": " Yes, yes, that's right." }, { "end": 1580.6, "start": 1577.64, "text": " So can you give us a brief version of this paper?" }, { "end": 1583.28, "start": 1580.6, "text": " Yeah, this paper is really fun." }, { "end": 1591.04, "start": 1583.28, "text": " So this paper was looking at how we can use context like side information that's not really" }, { "end": 1594.32, "start": 1591.04, "text": " a part of, again, a part of the MDP formulation." }, { "end": 1599.6, "start": 1594.32, "text": " But there are a lot of settings in which we have side information that is useful for our" }, { "end": 1601.8, "start": 1599.6, "text": " task at hand." }, { "end": 1607.8, "start": 1601.8, "text": " And you know, like this could be just sort of like a description of the task, just like" }, { "end": 1613.12, "start": 1607.8, "text": " prior knowledge about the dynamics or the environment." }, { "end": 1619.9199999999998, "start": 1613.12, "text": " And so in the scope of this paper, we were actually focusing on this multi-task and meta-RL" }, { "end": 1621.84, "start": 1619.9199999999998, "text": " benchmark meta-world." }, { "end": 1625, "start": 1621.84, "text": " So meta-world has this like easy and hard version." }, { "end": 1631.76, "start": 1625, "text": " There's MT10, which consists of these 10 tasks, MT50 has 50 tasks." 
}, { "end": 1634.92, "start": 1631.76, "text": " But these tasks are all manipulation tasks." }, { "end": 1639.68, "start": 1634.92, "text": " There's like an insumulation, a robot arm, different objects." }, { "end": 1644.72, "start": 1639.68, "text": " And so in MT10, like examples of these different tasks are open a door, close a door, you know," }, { "end": 1650.96, "start": 1644.72, "text": " open a window, close a window, open a door, or open a door, close a door." }, { "end": 1658.04, "start": 1650.96, "text": " And in the multi-task setting, what we usually do is we assign task IDs." }, { "end": 1661.08, "start": 1658.04, "text": " And so this can just be a one-hot." }, { "end": 1664.04, "start": 1661.08, "text": " We can just talk about it as just like in these energy values." }, { "end": 1666.8400000000001, "start": 1664.04, "text": " And you know, task one is open floor." }, { "end": 1671.6000000000001, "start": 1666.8400000000001, "text": " Task two is open door." }, { "end": 1677.24, "start": 1671.6000000000001, "text": " And in meta-world, it's really funny because there's actually these very simple sentence" }, { "end": 1683.1200000000001, "start": 1677.24, "text": " descriptions of each of these tasks that are meant for human consumption." }, { "end": 1690.56, "start": 1683.1200000000001, "text": " And so you can read this sentence that's like the robot arm must, you know, like turn" }, { "end": 1692.48, "start": 1690.56, "text": " the knob in order to open this door." }, { "end": 1693.48, "start": 1692.48, "text": " Something like this." }, { "end": 1695.48, "start": 1693.48, "text": " It's just like one sentence." }, { "end": 1698.56, "start": 1695.48, "text": " And from reading that sentence as a human, you're just like, okay, I know exactly what" }, { "end": 1699.56, "start": 1698.56, "text": " this task is." }, { "end": 1702.32, "start": 1699.56, "text": " But that sentence was never meant to be part of the MVP." }, { "end": 1704.28, "start": 1702.32, "text": " It's not given to the RL algorithm." }, { "end": 1707.04, "start": 1704.28, "text": " The agent never uses it." }, { "end": 1712.32, "start": 1707.04, "text": " And so one portion, one contribution of this paper was to show that, you know, we can design" }, { "end": 1716.56, "start": 1712.32, "text": " an architecture that uses this kind of contextual information." }, { "end": 1722.84, "start": 1716.56, "text": " And we can beat state of the art performance just from using these simple sentences, using" }, { "end": 1730.12, "start": 1722.84, "text": " pre-trained language models in order to construct the context embeddings." }, { "end": 1739.08, "start": 1730.12, "text": " And my hope is that this work shows that we should be using, we should be thinking harder" }, { "end": 1747.1599999999999, "start": 1739.08, "text": " about how to incorporate that kind of context into tasks because it can improve performance." }, { "end": 1748.1599999999999, "start": 1747.1599999999999, "text": " Okay." }, { "end": 1752.8799999999999, "start": 1748.1599999999999, "text": " So providing more about the context than just a task ID or what would be the alternative" }, { "end": 1753.8799999999999, "start": 1752.8799999999999, "text": " there?" }, { "end": 1754.8799999999999, "start": 1753.8799999999999, "text": " Yeah." }, { "end": 1757.6399999999999, "start": 1754.8799999999999, "text": " So in the multitask setup, the alternative is a task ID." 
}, { "end": 1763.5600000000002, "start": 1757.64, "text": " And the reason the task ID is so terrible, it's really terrible because if you're using" }, { "end": 1770, "start": 1763.5600000000002, "text": " task IDs to denote tasks at training time, it means that you have no hope of generalizing" }, { "end": 1777.5600000000002, "start": 1770, "text": " to a new unseen task at test time because there's no, there's no semantic meaning in the task" }, { "end": 1778.5600000000002, "start": 1777.5600000000002, "text": " ID." }, { "end": 1784.4, "start": 1778.5600000000002, "text": " There's no structure underlying them mapping from the task ID to the actual task itself." }, { "end": 1788.88, "start": 1784.4, "text": " And so if you're given a new task ID at test time, you have no idea what that new task is." }, { "end": 1794.64, "start": 1788.88, "text": " Whereas if you use something more descriptive, like these sentences, we show that you can" }, { "end": 1798.4, "start": 1794.64, "text": " actually get zero-shot generalization to new unseen tasks." }, { "end": 1800.3200000000002, "start": 1798.4, "text": " I mean, the performance was quite bad." }, { "end": 1805.5600000000002, "start": 1800.3200000000002, "text": " This was just like a one-off experiment that we ran that was kind of like a side note." }, { "end": 1810.4, "start": 1805.5600000000002, "text": " But I think the idea is that like if you can scale this up, like if we had done this with" }, { "end": 1816.5600000000002, "start": 1810.4, "text": " like a much larger family of tasks, we can definitely get better zero-shot generalization" }, { "end": 1824.0800000000002, "start": 1816.5600000000002, "text": " because the agent would be able to learn a mapping between different words and rewards" }, { "end": 1825.5600000000002, "start": 1824.0800000000002, "text": " and actions." }, { "end": 1826.5600000000002, "start": 1825.5600000000002, "text": " Cool." }, { "end": 1829.2, "start": 1826.5600000000002, "text": " And then again, this phrase block comes up again." }, { "end": 1832.48, "start": 1829.2, "text": " I think you call this a block contextual MDP." }, { "end": 1834.6000000000001, "start": 1832.48, "text": " What does block mean here again?" }, { "end": 1835.6000000000001, "start": 1834.6000000000001, "text": " Yeah." }, { "end": 1840.6799999999998, "start": 1835.6, "text": " So the contextual MDP setting was something that was previously defined." }, { "end": 1848.08, "start": 1840.6799999999998, "text": " And it just means that you have this context which informs the transition function and" }, { "end": 1849.08, "start": 1848.08, "text": " reward function." }, { "end": 1854.8799999999999, "start": 1849.08, "text": " And so it just creates all of these different tasks with this underlying structure." }, { "end": 1862.56, "start": 1854.8799999999999, "text": " MetaWorld interestingly, they made some design choices basically to make it work well with" }, { "end": 1863.56, "start": 1862.56, "text": " like RL algorithm." }, { "end": 1869.84, "start": 1863.56, "text": " So one of the downsides of using neural networks, using a lot of models is that it requires" }, { "end": 1871.48, "start": 1869.84, "text": " like a fixed size input." }, { "end": 1878.8799999999999, "start": 1871.48, "text": " And so you have all of these different tasks, but the objects in those tasks are different." }, { "end": 1883.44, "start": 1878.8799999999999, "text": " But we need the, so this is all looking at perperioseceptive state." 
}, { "end": 1889.84, "start": 1883.44, "text": " And so we need the dimensions of that state space to be the same across all those tasks." }, { "end": 1895.08, "start": 1889.84, "text": " And the way that they did this was to have different dimensions mean different things in" }, { "end": 1898.9199999999998, "start": 1895.08, "text": " different tasks because they represent different objects." }, { "end": 1901.28, "start": 1898.9199999999998, "text": " And so the block component." }, { "end": 1902.28, "start": 1901.28, "text": " Yeah." }, { "end": 1907.8, "start": 1902.28, "text": " And so the block component here is just to sort of reinforce that." }, { "end": 1913.9599999999998, "start": 1907.8, "text": " And so the block component is saying not only do your reward and transition functions depend" }, { "end": 1917.3999999999999, "start": 1913.9599999999998, "text": " on this context, but your state space does too." }, { "end": 1922.4, "start": 1917.4, "text": " And I think that's actually a really important component because when we think about the" }, { "end": 1928.76, "start": 1922.4, "text": " whole world and all the possible tasks that you can do, you can construct this as one giant" }, { "end": 1933.8400000000001, "start": 1928.76, "text": " MDP where your state space is, you know, the information about the whole world." }, { "end": 1935.6000000000001, "start": 1933.8400000000001, "text": " But we don't operate that way." }, { "end": 1941.2, "start": 1935.6000000000001, "text": " Like we operate by just, you know, having a focus on the objects that hand for a specific" }, { "end": 1942.2, "start": 1941.2, "text": " task." }, { "end": 1947.24, "start": 1942.2, "text": " Or if you're, you know, trying to hammer nails and your focus on is like the nails and" }, { "end": 1951.44, "start": 1947.24, "text": " the hammer, it's not some like steep or off in the corner." }, { "end": 1958.8400000000001, "start": 1951.44, "text": " And so just because your state space can change, doesn't mean that we're incapable of generalization." }, { "end": 1962.96, "start": 1958.8400000000001, "text": " And so the block contextual MDP setting just is reinforcing that idea." }, { "end": 1963.96, "start": 1962.96, "text": " Okay." }, { "end": 1969.48, "start": 1963.96, "text": " Then and I gather that in the algorithm here, which I think you called care, the state" }, { "end": 1973.32, "start": 1969.48, "text": " gets encoded by a set of encoders." }, { "end": 1975.08, "start": 1973.32, "text": " And then you have attention over them." }, { "end": 1979.24, "start": 1975.08, "text": " Can you, can you talk to us about the intent with that, with the encoders and the attention?" }, { "end": 1980.24, "start": 1979.24, "text": " What, what is that doing?" }, { "end": 1981.24, "start": 1980.24, "text": " Why, why does that help?" }, { "end": 1982.24, "start": 1981.24, "text": " Yeah." }, { "end": 1985.1200000000001, "start": 1982.24, "text": " So this was like another major contribution of this paper." }, { "end": 1992.2, "start": 1985.1200000000001, "text": " So the idea here is that by having these separate encoders, the goal is that we're basically" }, { "end": 1995.92, "start": 1992.2, "text": " trying to get some sort of compositional generalization." }, { "end": 1999.88, "start": 1995.92, "text": " By compositional generalization, I just mean that same idea, right?" 
}, { "end": 2007.92, "start": 1999.88, "text": " Where if, if we train an agent to close a drawer and open a window, then if we tell it to" }, { "end": 2013.64, "start": 2007.92, "text": " close a window, then it may understand what that concept is." }, { "end": 2021.16, "start": 2013.64, "text": " And so we have these different encoders with the hope of each encoder mapping to some" }, { "end": 2026.28, "start": 2021.16, "text": " concept or object that appears in multiple tasks." }, { "end": 2030.92, "start": 2026.28, "text": " So in meta world, all those examples that I've been giving, like, you know, it's, I think" }, { "end": 2035.6000000000001, "start": 2030.92, "text": " it should be pretty clear that there are concepts like open and close and drawers and doors" }, { "end": 2040.92, "start": 2035.6000000000001, "text": " that appear multiple times in different tasks, but just different combinations of them." }, { "end": 2046.72, "start": 2040.92, "text": " And so the goal of, of using this mixture of encoders is that that each encoder will hopefully" }, { "end": 2051.48, "start": 2046.72, "text": " map to implicitly each one of these concepts." }, { "end": 2059.08, "start": 2051.48, "text": " And so then the context can be used to train and it's just basically attention weights over" }, { "end": 2060.44, "start": 2059.08, "text": " these encoders." }, { "end": 2068.64, "start": 2060.44, "text": " And so certain encoders will only activate if that concept is necessary to solve that task." }, { "end": 2074.7200000000003, "start": 2068.64, "text": " And so what we found, and this is kind of hard to, this is all done like implicitly," }, { "end": 2075.7200000000003, "start": 2074.7200000000003, "text": " right?" }, { "end": 2081.3199999999997, "start": 2075.72, "text": " So supervised mapping from encoder to word that we're actually using here." }, { "end": 2085.56, "start": 2081.3199999999997, "text": " And so the way that we test, like, if that's what it's actually doing is by just varying" }, { "end": 2088.68, "start": 2085.56, "text": " the number of encoders K." }, { "end": 2095.2, "start": 2088.68, "text": " And so in the experiments, we found that if K equals one or two, we get poorer performance." }, { "end": 2101.2799999999997, "start": 2095.2, "text": " And this actually looks a lot more like, you know, like the multitask baseline, slow meta" }, { "end": 2102.2799999999997, "start": 2101.2799999999997, "text": " world." }, { "end": 2108.2400000000002, "start": 2102.28, "text": " But if we increase K to be too large, like if we set K to be the number of, of actual" }, { "end": 2113.8, "start": 2108.2400000000002, "text": " tasks that we're trying to solve, we actually found that we also get worse performance because" }, { "end": 2119.52, "start": 2113.8, "text": " what ends up happening is that each encoder just gets assigned a separate task." }, { "end": 2123.1600000000003, "start": 2119.52, "text": " And so no information is getting shared across tasks." }, { "end": 2130, "start": 2123.1600000000003, "text": " And so K is something that you kind of have to tune or you can choose a K using the knowledge" }, { "end": 2132.2000000000003, "start": 2130, "text": " that you have about all of these tasks." 
}, { "end": 2136.7999999999997, "start": 2132.2, "text": " So if we choose K to be approximately the number of concepts and objects that we think" }, { "end": 2141.4399999999996, "start": 2136.7999999999997, "text": " exist across this set of tasks, then we get the best performance." }, { "end": 2142.4399999999996, "start": 2141.4399999999996, "text": " Cool." }, { "end": 2145.64, "start": 2142.4399999999996, "text": " So I guess that shows that it's not just a matter of having enough capacity in that" }, { "end": 2146.64, "start": 2145.64, "text": " lap on that layer." }, { "end": 2147.64, "start": 2146.64, "text": " Yeah." }, { "end": 2148.64, "start": 2147.64, "text": " Yeah." }, { "end": 2150.64, "start": 2148.64, "text": " It's really about how you share information." }, { "end": 2151.64, "start": 2150.64, "text": " Right on." }, { "end": 2156.3999999999996, "start": 2151.64, "text": " And then so this paper talks about zero-shot generalization." }, { "end": 2159.52, "start": 2156.3999999999996, "text": " Can you talk about how zero-shot works in this in this setting?" }, { "end": 2165.96, "start": 2159.52, "text": " Is that like a whole new context and textual description with the tasks it's never seen?" }, { "end": 2167.96, "start": 2165.96, "text": " Yeah." }, { "end": 2174.28, "start": 2167.96, "text": " And so this is again why this kind of more descriptive task ID or context is more useful" }, { "end": 2179.72, "start": 2174.28, "text": " than using just a number or a one-hot vector." }, { "end": 2184.84, "start": 2179.72, "text": " Because there is actually structure in terms of like the components of the sentence, the" }, { "end": 2188.52, "start": 2184.84, "text": " context, the describe the task." }, { "end": 2189.52, "start": 2188.52, "text": " Okay." }, { "end": 2195.52, "start": 2189.52, "text": " And we know that this is true because as a human, if I read to you a sentence that's," }, { "end": 2203.36, "start": 2195.52, "text": " please open this door for me, you know exactly what to do even if you had never opened a door" }, { "end": 2204.36, "start": 2203.36, "text": " before." }, { "end": 2205.36, "start": 2204.36, "text": " Okay." }, { "end": 2206.36, "start": 2205.36, "text": " That was a bad example." }, { "end": 2210.8, "start": 2206.36, "text": " Another example that people talk about in compositional generalization is like DAX." }, { "end": 2214.64, "start": 2210.8, "text": " So if I tell you, you don't necessarily know what DAX means." }, { "end": 2218.64, "start": 2214.64, "text": " I'm just going to tell you it's like some motion, like maybe like clap twice." }, { "end": 2221.7599999999998, "start": 2218.64, "text": " If you say DAX twice, you know what to do there." }, { "end": 2226.7999999999997, "start": 2221.7599999999998, "text": " You know that you want to perform that DAX motion twice." }, { "end": 2233.7999999999997, "start": 2226.7999999999997, "text": " And so that's the same type of compositional generalization that we think we can get from" }, { "end": 2237.7599999999998, "start": 2233.7999999999997, "text": " this kind of architecture and using this kind of context." 
}, { "end": 2243.96, "start": 2237.7599999999998, "text": " And so just like in the scope of meta-world, what we did was we very carefully split up" }, { "end": 2250.92, "start": 2243.96, "text": " these 10 tasks in empty 10 so that all of the concepts and objects present in the test" }, { "end": 2258.08, "start": 2250.92, "text": " tasks are seen in the training tasks, but not the exact same combinations." }, { "end": 2264.6, "start": 2258.08, "text": " And so we just wanted to see, okay, if an agent is introduced to all of these components" }, { "end": 2270.36, "start": 2264.6, "text": " necessary to figure out what these tests are, but never sees those tests tasks, can it" }, { "end": 2272.6, "start": 2270.36, "text": " perform that task?" }, { "end": 2275.7599999999998, "start": 2272.6, "text": " And you know, the success rates for this are pretty low." }, { "end": 2277.72, "start": 2275.7599999999998, "text": " I think we were at like 30%." }, { "end": 2282.96, "start": 2277.72, "text": " But that's still very promising, given that we have an agent that's given seven tasks" }, { "end": 2288.92, "start": 2282.96, "text": " and then asked to perform another three that it's never, never been trained on before." }, { "end": 2292.72, "start": 2288.92, "text": " So I think that was a really promising first step towards something, which is something" }, { "end": 2295.68, "start": 2292.72, "text": " more impressive in terms of zero-shed generalization." }, { "end": 2301.6, "start": 2295.68, "text": " So would you say that this type of system is learning grounded language?" }, { "end": 2306.08, "start": 2301.6, "text": " Is it attaching these words to concepts in the environment?" }, { "end": 2309.8399999999997, "start": 2306.08, "text": " In a very primitive way, yes." }, { "end": 2316.8399999999997, "start": 2309.8399999999997, "text": " Like, you know, the vocabulary that we're using here consists of like definitely less" }, { "end": 2318.88, "start": 2316.8399999999997, "text": " than 100 words." }, { "end": 2324.7599999999998, "start": 2318.88, "text": " But if we can scale this up, then absolutely, I think that what we would find is that it's" }, { "end": 2334.92, "start": 2324.76, "text": " learning in an unsupervised way to match words and phrases to specific transition functions," }, { "end": 2338.2400000000002, "start": 2334.92, "text": " reward functions, or like components of the environment." }, { "end": 2342.2000000000003, "start": 2338.2400000000002, "text": " I want to recommend your talk at UCL deciding, acting and reasoning with knowledge." }, { "end": 2344.5200000000004, "start": 2342.2000000000003, "text": " That's the dark lab that's on YouTube." }, { "end": 2345.6000000000004, "start": 2344.5200000000004, "text": " And that was from June this year." }, { "end": 2348.48, "start": 2345.6000000000004, "text": " We'll have the link in the show notes." }, { "end": 2353.36, "start": 2348.48, "text": " And that partly overlaps with this conversation and you shared a lot more besides in that" }, { "end": 2354.36, "start": 2353.36, "text": " talk." }, { "end": 2358.56, "start": 2354.36, "text": " And then at the end of this paper, you mentioned some angles for follow-up." }, { "end": 2361.36, "start": 2358.56, "text": " Is that something you might, you think you're doing?" }, { "end": 2362.36, "start": 2361.36, "text": " Yeah." 
}, { "end": 2368.96, "start": 2362.36, "text": " So one of the obvious follow-ups here is that we can also extend all of this to rich" }, { "end": 2371.28, "start": 2368.96, "text": " observations." }, { "end": 2374.88, "start": 2371.28, "text": " And so that's actually something that we are doing now." }, { "end": 2378.36, "start": 2374.88, "text": " But we can also scale this up in the way that you also suggested, right?" }, { "end": 2384.08, "start": 2378.36, "text": " Which is like increasing the vocabulary and seeing how far we can push this sort of grounded" }, { "end": 2387, "start": 2384.08, "text": " language and RL component." }, { "end": 2395.88, "start": 2387, "text": " And so there's actually a really nice environment for this out of Fair London led by Tim Rockeschel" }, { "end": 2398.64, "start": 2395.88, "text": " and Ed Greffinstead, I believe." }, { "end": 2400.52, "start": 2398.64, "text": " So that's the NetHack environment." }, { "end": 2404.08, "start": 2400.52, "text": " And so I don't know if you're familiar with NetHack, but it's this, I've never played" }, { "end": 2405.08, "start": 2404.08, "text": " it before." }, { "end": 2410.52, "start": 2405.08, "text": " But I guess it's this old school computer game that's text-based." }, { "end": 2416.12, "start": 2410.52, "text": " But the interesting thing about it is that there are a ton of different objects and agents" }, { "end": 2419.08, "start": 2416.12, "text": " and components of this game." }, { "end": 2423, "start": 2419.08, "text": " And there is an extensive wiki." }, { "end": 2427.72, "start": 2423, "text": " And I believe no human has ever beat the game without reading this wiki." }, { "end": 2428.72, "start": 2427.72, "text": " Right." }, { "end": 2429.72, "start": 2428.72, "text": " Yeah." }, { "end": 2431.72, "start": 2429.72, "text": " So your agent would read the wiki?" }, { "end": 2432.72, "start": 2431.72, "text": " Is that what you're thinking?" }, { "end": 2433.72, "start": 2432.72, "text": " Yeah." }, { "end": 2434.72, "start": 2433.72, "text": " Yeah." }, { "end": 2435.72, "start": 2434.72, "text": " Well, so that's what their hope is." }, { "end": 2442.4399999999996, "start": 2435.72, "text": " That's what their goal is, is to have an agent that can read this wiki and learn to play" }, { "end": 2447.12, "start": 2442.4399999999996, "text": " the game or like through interacting with the game while reading this text sort of ground" }, { "end": 2449.8399999999997, "start": 2447.12, "text": " the text to the game and learn to solve it." }, { "end": 2450.8399999999997, "start": 2449.8399999999997, "text": " It's very hard." }, { "end": 2455.04, "start": 2450.8399999999997, "text": " I think it's going to take a really long time before we can get there." }, { "end": 2458.04, "start": 2455.04, "text": " But they have also created a mini version of this." }, { "end": 2462.64, "start": 2458.04, "text": " I think the paper is now on archive and I believe the code is now publicly available." }, { "end": 2464.04, "start": 2462.64, "text": " It's called mini hack." }, { "end": 2471.52, "start": 2464.04, "text": " And so it's just many simpler versions of this game with smaller, I'm actually not sure" }, { "end": 2474.84, "start": 2471.52, "text": " if there is text attached to it, but I think it would still be pretty easy to create," }, { "end": 2477.88, "start": 2474.84, "text": " like, paragraph, like, explaining what's going on." 
}, { "end": 2482.96, "start": 2477.88, "text": " And so these smaller versions are much more doable with today's RL algorithms." }, { "end": 2487.4, "start": 2482.96, "text": " And so the goal is to just sort of push that envelope further and see what we can do." }, { "end": 2492.64, "start": 2487.4, "text": " And so this is an environment that we're working on with the collaborator and seeing how far" }, { "end": 2493.64, "start": 2492.64, "text": " we can get." }, { "end": 2494.64, "start": 2493.64, "text": " That sounds really exciting." }, { "end": 2498.7599999999998, "start": 2494.64, "text": " And personally, I've seen that hack for years when I'm terrified to even try it because" }, { "end": 2500.48, "start": 2498.7599999999998, "text": " it looks so addictive." }, { "end": 2501.48, "start": 2500.48, "text": " Cool." }, { "end": 2504.56, "start": 2501.48, "text": " I look forward to that since that sounds really powerful." }, { "end": 2510.3199999999997, "start": 2504.56, "text": " So in these two papers that we just talked about, were you surprised by any of the results" }, { "end": 2515.2799999999997, "start": 2510.3199999999997, "text": " that you got or do you feel more like the things turned out just as you expected and planned?" }, { "end": 2519.8799999999997, "start": 2515.2799999999997, "text": " In the first paper with Claire, it definitely took a while for us." }, { "end": 2527.48, "start": 2519.88, "text": " I think these concepts of, like, spurious correlations, and distractor variables, irrelevant features." }, { "end": 2531.88, "start": 2527.48, "text": " I think they're very intuitive for us, or at least, you know, we understand what they" }, { "end": 2535.08, "start": 2531.88, "text": " mean in supervised learning better." }, { "end": 2543.8, "start": 2535.08, "text": " But it was actually kind of hard at first to design environments correctly." }, { "end": 2548.2000000000003, "start": 2543.8, "text": " You can, as an example, from supervised learning, that's maybe a little bit easier to think" }, { "end": 2551.3199999999997, "start": 2548.2, "text": " about understanding what's spurious or not." }, { "end": 2553.48, "start": 2551.3199999999997, "text": " It has to be carefully tuned." }, { "end": 2561.64, "start": 2553.48, "text": " So as an example, if you take combined like C4-10 and M-NIST datasets, right?" }, { "end": 2570.16, "start": 2561.64, "text": " And so let's say that we construct a mapping from the digits, 0-9 to the 10 classification" }, { "end": 2572.56, "start": 2570.16, "text": " labels from C-FAR." }, { "end": 2582.4, "start": 2572.56, "text": " And we append those corresponding images together to create this joint M-NIST C-FAR dataset." }, { "end": 2587.92, "start": 2582.4, "text": " And now we're going to declare that one of those things is a distractor, and the other" }, { "end": 2589.56, "start": 2587.92, "text": " thing is the real thing that we care about." }, { "end": 2592.6, "start": 2589.56, "text": " Let's say it's like the C-FAR label is the thing that we care about, and the M-NIST" }, { "end": 2594.64, "start": 2592.6, "text": " digit is just a distractor." 
}, { "end": 2601.12, "start": 2594.64, "text": " But because we've created this incredibly strong correlation between a specific digit" }, { "end": 2611.12, "start": 2601.12, "text": " and a specific object from C-FAR, there's no way for you to be able to tell which is" }, { "end": 2616.12, "start": 2611.12, "text": " the spurious relation and which is the true thing that we care about." }, { "end": 2619.68, "start": 2616.12, "text": " The way to..." }, { "end": 2621.08, "start": 2619.68, "text": " Because they're always together?" }, { "end": 2623.16, "start": 2621.08, "text": " Because they're always together." }, { "end": 2627.4, "start": 2623.16, "text": " And so we know that the way that we tell which one is actually the thing that we care" }, { "end": 2629.92, "start": 2627.4, "text": " about is if we add in some noise." }, { "end": 2635.28, "start": 2629.92, "text": " So maybe there are some examples where you have a different digit that's been attached" }, { "end": 2639.44, "start": 2635.28, "text": " like typically it's mostly one in cat that are attached together." }, { "end": 2643.52, "start": 2639.44, "text": " And so maybe you have a couple examples that are three in cat that are attached together," }, { "end": 2645.96, "start": 2643.52, "text": " but the label is still the same." }, { "end": 2650.08, "start": 2645.96, "text": " Then we'll know, OK, cat is the thing that we care about and not the digit." }, { "end": 2653.96, "start": 2650.08, "text": " But if you don't have that, if you don't have that noise, then there's no way to tell." }, { "end": 2659.36, "start": 2653.96, "text": " And so we had similar issues with designing RO environments in which we had the right" }, { "end": 2669.36, "start": 2659.36, "text": " type of variation in order to like get the failure mode that we wanted to exhibit, that we" }, { "end": 2676.44, "start": 2669.36, "text": " wanted to show that current RL algorithms have in order to fix it." }, { "end": 2680, "start": 2676.44, "text": " But that's also just because we're dealing with toy environments and don't have real world" }, { "end": 2681, "start": 2680, "text": " problems." }, { "end": 2687.4, "start": 2681, "text": " And it's like very clear these kinds of examples in the real world that do exist." }, { "end": 2694.44, "start": 2687.4, "text": " So there's a really nice paper, I think NUREPS 2019 on causal confusion." }, { "end": 2699.92, "start": 2694.44, "text": " They have a really nice example with autonomous driving where there's like a light on your" }, { "end": 2703.88, "start": 2699.92, "text": " dashboard that denotes like a whenever you break." }, { "end": 2710.52, "start": 2703.88, "text": " And so if you see demonstration data of someone driving this car and the person breaks whenever" }, { "end": 2711.92, "start": 2710.52, "text": " a car in front of it breaks." }, { "end": 2717.96, "start": 2711.92, "text": " So we know that the thing that we should be learning is that if you see brake lights on" }, { "end": 2721.44, "start": 2717.96, "text": " in front of on the car in front of you, then you should be breaking." }, { "end": 2728.28, "start": 2721.44, "text": " But what the agent learns instead is to pay attention to the brake light on the dashboard" }, { "end": 2732.92, "start": 2728.28, "text": " and only breaks when the brake light is on, which means it'll never break at test time." }, { "end": 2738.6, "start": 2732.92, "text": " And so that's the kind of spurious correlation that you have to be aware of." 
}, { "end": 2739.84, "start": 2738.6, "text": " So that was for the first paper." }, { "end": 2744.6800000000003, "start": 2739.84, "text": " For the second paper care, the only surprise was how well it worked." }, { "end": 2753.28, "start": 2744.6800000000003, "text": " I didn't expect to see such a huge gain in performance just from incorporating these" }, { "end": 2759.08, "start": 2753.28, "text": " very simple sentences by giving these sentences to the agent." }, { "end": 2765.7200000000003, "start": 2759.08, "text": " But it really did move the needle quite a lot and didn't require any tuning." }, { "end": 2767.7200000000003, "start": 2765.7200000000003, "text": " So I thought that was really exciting." }, { "end": 2768.7200000000003, "start": 2767.7200000000003, "text": " And was surprising." }, { "end": 2769.72, "start": 2768.72, "text": " Cool." }, { "end": 2770.72, "start": 2769.72, "text": " That must have been a nice feeling." }, { "end": 2771.72, "start": 2770.72, "text": " Yeah." }, { "end": 2773.72, "start": 2771.72, "text": " You don't get those often." }, { "end": 2774.72, "start": 2773.72, "text": " Okay." }, { "end": 2780.3999999999996, "start": 2774.72, "text": " So I wanted to ask you more about generalization in general." }, { "end": 2785.16, "start": 2780.3999999999996, "text": " Can you talk a bit about the difference between generalization and supervised learning versus" }, { "end": 2787.7599999999998, "start": 2785.16, "text": " generalization and reinforcement learning?" }, { "end": 2788.7599999999998, "start": 2787.7599999999998, "text": " Yeah." }, { "end": 2792.64, "start": 2788.7599999999998, "text": " So at a high level, they're very similar." }, { "end": 2798.2, "start": 2792.64, "text": " You can define generalization and supervised learning." }, { "end": 2803, "start": 2798.2, "text": " Learning distribution versus outer distribution is just sort of like the difference in performance" }, { "end": 2806.56, "start": 2803, "text": " of your model on training data versus test data." }, { "end": 2808.16, "start": 2806.56, "text": " And we can do the same thing for our own." }, { "end": 2814.96, "start": 2808.16, "text": " You can say generalization in general can be measured by the total reward you can achieve" }, { "end": 2822.6, "start": 2814.96, "text": " with your training policy on your training, MDPs versus your test MDPs." }, { "end": 2828.72, "start": 2822.6, "text": " But an MDP has a lot more components than just data distribution that you're sampling" }, { "end": 2830.92, "start": 2828.72, "text": " from in supervised learning." }, { "end": 2835.8399999999997, "start": 2830.92, "text": " And so you can think about different levels of generalization in reinforcement learning" }, { "end": 2838.92, "start": 2835.8399999999997, "text": " that I think are useful to think about." }, { "end": 2845.88, "start": 2838.92, "text": " So I think the degenerate setting, the simplest setting, which is what a lot of people were" }, { "end": 2851.44, "start": 2845.88, "text": " working in up until a few years ago, was the setting where your training tests are exactly" }, { "end": 2852.44, "start": 2851.44, "text": " the same." }, { "end": 2859.76, "start": 2852.44, "text": " If you have deterministic dynamics, just a single initial state, then it's very obvious," }, { "end": 2860.76, "start": 2859.76, "text": " right?" 
}, { "end": 2864.84, "start": 2860.76, "text": " That like whatever performance you get at train time will be exactly the same as the performance" }, { "end": 2865.84, "start": 2864.84, "text": " you get at test time." }, { "end": 2869.32, "start": 2865.84, "text": " There is no testing of generalization at all." }, { "end": 2875.2400000000002, "start": 2869.32, "text": " You can start testing generalization if you have an initial state distribution." }, { "end": 2881.44, "start": 2875.2400000000002, "text": " Because now you can control what initial states you see at train time versus test." }, { "end": 2886.64, "start": 2881.44, "text": " So now you can actually start testing your agent on unseen states." }, { "end": 2892.4, "start": 2886.64, "text": " And so one way that we did this in an early paper was by controlling the random seed." }, { "end": 2899.04, "start": 2892.4, "text": " So if you control the random seed of the environment, then we can limit the number of initial" }, { "end": 2902.08, "start": 2899.04, "text": " states that you see at train time versus test time." }, { "end": 2909, "start": 2902.08, "text": " And so we showed that if you limit your, depending on obviously the complexity of the environment," }, { "end": 2916.4, "start": 2909, "text": " but even for very simple environments, if you limit your agent to 10 to 100 or hundreds" }, { "end": 2922.36, "start": 2916.4, "text": " of seeds at training time, so that's the initial states that it sees." }, { "end": 2925.04, "start": 2922.36, "text": " And we have a held out set of initial state that test time." }, { "end": 2927.08, "start": 2925.04, "text": " You do see generalization gap." }, { "end": 2933.92, "start": 2927.08, "text": " You do see a performance difference of the agent on these like different initial states." }, { "end": 2943.48, "start": 2933.92, "text": " Here's another paper on dissecting overfitting and reinforcement learning by Chi Yuen Zeng." }, { "end": 2949.36, "start": 2943.48, "text": " And there they show that again for these kinds of like maze and tunnel environments that" }, { "end": 2957.16, "start": 2949.36, "text": " if you increase the difficulty of the environment, then it can take thousands, hundreds of thousands" }, { "end": 2963.6800000000003, "start": 2957.16, "text": " of different maze layouts before you can generalize to new unseen layouts." }, { "end": 2969.3199999999997, "start": 2963.68, "text": " And so I think there's been like a slew of papers examining generalization in RL in the" }, { "end": 2975.2799999999997, "start": 2969.3199999999997, "text": " last few years that are really highlighted how far behind we are because we have these" }, { "end": 2981.8399999999997, "start": 2975.2799999999997, "text": " benchmarks that are deterministic dynamics and very narrow initial state distribution." }, { "end": 2986.24, "start": 2981.8399999999997, "text": " And so we just never really word testing generalization." }, { "end": 2991.7999999999997, "start": 2986.24, "text": " So initial state distribution is sort of like the first wrong on this ladder." }, { "end": 2998.84, "start": 2991.8, "text": " But there are other things that you can change about your MDP that ways in which we as humans" }, { "end": 3002.2400000000002, "start": 2998.84, "text": " can generalize that our current agents can't." }, { "end": 3009.1200000000003, "start": 3002.2400000000002, "text": " And so the first paper, ICP for block MDPs, was focusing on observational generalization." 
}, { "end": 3014.84, "start": 3009.1200000000003, "text": " Can we generalize to distractors or things changing in the environment that don't affect" }, { "end": 3018.52, "start": 3014.84, "text": " the dynamics and reward?" }, { "end": 3024.48, "start": 3018.52, "text": " How do we develop algorithms that can be robust to that kind of change?" }, { "end": 3029.7599999999998, "start": 3024.48, "text": " You can also have a setting where your dynamics and reward can change, but there are underlying" }, { "end": 3033.92, "start": 3029.7599999999998, "text": " rules that stay invariant across all your tasks." }, { "end": 3038.66, "start": 3033.92, "text": " As an example, you know, the laws of physics are always the same, but different objects" }, { "end": 3044.92, "start": 3038.66, "text": " act differently because they have different attributes, like different mass and volume" }, { "end": 3048.48, "start": 3044.92, "text": " and friction coefficients." }, { "end": 3057.52, "start": 3048.48, "text": " And so these are all types of multitask settings where we should be able to get generalization" }, { "end": 3060.84, "start": 3057.52, "text": " that we currently just can't." }, { "end": 3064.88, "start": 3060.84, "text": " And why is that that we can't do it right now?" }, { "end": 3070.04, "start": 3064.88, "text": " And I wonder if how much of that we can blame deep learning in a sense that deep learning" }, { "end": 3072.84, "start": 3070.04, "text": " doesn't do a great job of extrapolation." }, { "end": 3075, "start": 3072.84, "text": " It seems to me mostly doing interpolation." }, { "end": 3077.48, "start": 3075, "text": " If that makes sense, do you see it that way?" }, { "end": 3082.32, "start": 3077.48, "text": " Is it deep learning's fault that deep RL is not created generalizing that way?" }, { "end": 3085.08, "start": 3082.32, "text": " Or is it really, we don't know the right algorithms yet?" }, { "end": 3088.36, "start": 3085.08, "text": " Or is that just a really hard question that doesn't have any answer yet?" }, { "end": 3094.2400000000002, "start": 3088.36, "text": " There are so many different problems, like the examples that I gave are a few of ways" }, { "end": 3099.2, "start": 3094.2400000000002, "text": " in which our agents fail and, you know, we can solve them one by one." }, { "end": 3100.2, "start": 3099.2, "text": " Okay, okay." }, { "end": 3104.28, "start": 3100.2, "text": " I will say yes, it's deep learning's fault." }, { "end": 3110.88, "start": 3104.28, "text": " But it's a trade-off, right? The fact is that deep learning, deep neural networks are," }, { "end": 3113.6400000000003, "start": 3110.88, "text": " they're really nice because they're universal function approximators." }, { "end": 3115, "start": 3113.6400000000003, "text": " They can fit anything." }, { "end": 3117.36, "start": 3115, "text": " We don't need to hard code the model." }, { "end": 3123.32, "start": 3117.36, "text": " We don't need to like program the laws of physics directly into an agent in order for it" }, { "end": 3127.1600000000003, "start": 3123.32, "text": " to learn to interact with the world." }, { "end": 3129.6400000000003, "start": 3127.1600000000003, "text": " These are all trade-offs that we've made." }, { "end": 3134.2400000000002, "start": 3129.6400000000003, "text": " And it just means that it's a sample efficiency issue, I think." 
}, { "end": 3141.8399999999997, "start": 3134.24, "text": " Going back to like the causal inference connections, it just in order to like learn that correct" }, { "end": 3146.9199999999996, "start": 3141.8399999999997, "text": " causal model of the world, it's just going to require a lot of interaction." }, { "end": 3153.56, "start": 3146.9199999999996, "text": " And so a big part of that is just scaling up our algorithms, building larger multitask" }, { "end": 3161.8399999999997, "start": 3153.56, "text": " simulations so that we can develop agents that can use information from other tasks, leverage" }, { "end": 3164.7200000000003, "start": 3161.84, "text": " that information in order to solve a new task." }, { "end": 3170.36, "start": 3164.7200000000003, "text": " I think everything that we've done so far has just been really toy." }, { "end": 3173.2400000000002, "start": 3170.36, "text": " And part of the problem is the sample efficiency." }, { "end": 3178.96, "start": 3173.2400000000002, "text": " I think again, it's a trade-off of what inductive biases do we want to put in to improve sample" }, { "end": 3184.1600000000003, "start": 3178.96, "text": " efficiency and which ones we don't, and then we pay the cost of sample efficiency." }, { "end": 3189.92, "start": 3184.1600000000003, "text": " But we get this promise of better generalization." }, { "end": 3195.36, "start": 3189.92, "text": " And I think we don't really know where that line is." }, { "end": 3200.44, "start": 3195.36, "text": " And the line is probably different for different classes of problems." }, { "end": 3205.6, "start": 3200.44, "text": " So I guess I would say we need better algorithms, we need better sample efficiency so that we" }, { "end": 3209.12, "start": 3205.6, "text": " can actually do research on these like larger scale problems." }, { "end": 3216.2000000000003, "start": 3209.12, "text": " And we need better benchmarks, we need better simulation environments, real world environments," }, { "end": 3220.04, "start": 3216.2, "text": " things that we can actually iterate on quickly." }, { "end": 3223, "start": 3220.04, "text": " So I think these are all limiting factors." }, { "end": 3224.7999999999997, "start": 3223, "text": " And you mentioned inductive bias." }, { "end": 3229.24, "start": 3224.7999999999997, "text": " It seems like there's that two sides of the coin in terms of generalization and inductive" }, { "end": 3230.24, "start": 3229.24, "text": " bias." }, { "end": 3231.24, "start": 3230.24, "text": " Yeah." }, { "end": 3235.4399999999996, "start": 3231.24, "text": " And how does that, I guess when I think of deep learning, I think that the inductive bias" }, { "end": 3239.52, "start": 3235.4399999999996, "text": " is largely about how you set up, how you frame the deep learning problem and how you set" }, { "end": 3242.24, "start": 3239.52, "text": " how you structure your network." }, { "end": 3247.64, "start": 3242.24, "text": " Is that the same case in RL is all the inductive bias coming from the network design or how" }, { "end": 3251.4799999999996, "start": 3247.64, "text": " do you see designing the inductive bias in an RL problem?" }, { "end": 3254.3599999999997, "start": 3251.4799999999996, "text": " Like is the algorithm really changing the inductive bias?" }, { "end": 3255.8399999999997, "start": 3254.3599999999997, "text": " It can." 
}, { "end": 3263.2799999999997, "start": 3255.8399999999997, "text": " So a lot of, like, let's say let's just focus on like different model free RL algorithms." }, { "end": 3274, "start": 3263.28, "text": " Like, all of these different algorithms just have slightly different tricks in terms of" }, { "end": 3275.48, "start": 3274, "text": " the objective, right?" }, { "end": 3277.2400000000002, "start": 3275.48, "text": " Like the main objective is always the same." }, { "end": 3282.6000000000004, "start": 3277.2400000000002, "text": " Like if it's policy gradient, you're just trying to get your policy to choose an action" }, { "end": 3284.7200000000003, "start": 3282.6000000000004, "text": " that's going to give higher return." }, { "end": 3287.96, "start": 3284.7200000000003, "text": " Like that stays the same across all of these different algorithms." }, { "end": 3291.84, "start": 3287.96, "text": " But you have different inductive biases as part of the objective." }, { "end": 3295.96, "start": 3291.84, "text": " Like, stay close to the previous policy." }, { "end": 3300.4, "start": 3295.96, "text": " Like, do updates that don't change your policy very much." }, { "end": 3305.52, "start": 3300.4, "text": " And so all of these things, a lot of these were meant to stabilize training, but I think" }, { "end": 3311.52, "start": 3305.52, "text": " you can also do similar things in order to incorporate inductive biases about the real" }, { "end": 3312.52, "start": 3311.52, "text": " world." }, { "end": 3317.48, "start": 3312.52, "text": " So yes, there's a lot of architectural things that we can do." }, { "end": 3319.48, "start": 3317.48, "text": " We can use attention masks." }, { "end": 3324, "start": 3319.48, "text": " We can use residual nets like you can do all of these things to try and like incorporate" }, { "end": 3328.16, "start": 3324, "text": " these inductive biases to like improve optimization, improve generalization." }, { "end": 3334.44, "start": 3328.16, "text": " But we have another thing in RL that we don't really have in supervised learning as much," }, { "end": 3336.44, "start": 3334.44, "text": " which is like the sub-zoolery objective." }, { "end": 3341.28, "start": 3336.44, "text": " And so people use as auxiliary objectives like learning the dynamics of the environment," }, { "end": 3345.44, "start": 3341.28, "text": " learning the dynamics of the reward of the environment." }, { "end": 3350.64, "start": 3345.44, "text": " So keeping the entropy of the policy low or high." }, { "end": 3357.2400000000002, "start": 3350.64, "text": " So like, there are all of these things that we do that is like based on our intuition of" }, { "end": 3359.08, "start": 3357.2400000000002, "text": " what will work well." }, { "end": 3360.08, "start": 3359.08, "text": " Thanks." }, { "end": 3362.08, "start": 3360.08, "text": " I'm going to re-list into that a few times." }, { "end": 3363.08, "start": 3362.08, "text": " Okay." }, { "end": 3365.32, "start": 3363.08, "text": " So let's move on to MBRL lib." }, { "end": 3371.16, "start": 3365.32, "text": " So I see that you're a co-author for this library MBRL lib, which I gather is model-based" }, { "end": 3373.7200000000003, "start": 3371.16, "text": " RL library from Facebook research." }, { "end": 3375.9599999999996, "start": 3373.72, "text": " Can you tell us a bit about MBRL lib?" }, { "end": 3376.9599999999996, "start": 3375.9599999999996, "text": " Yeah." 
}, { "end": 3383.72, "start": 3376.9599999999996, "text": " So this is a project that was led by Luis Pineda, who's a research engineer in Fair Montreal." }, { "end": 3386.12, "start": 3383.72, "text": " I'm really excited about this project." }, { "end": 3394.48, "start": 3386.12, "text": " So one of the difficulties of RL research, as I'm sure you know, is just the reproducibility," }, { "end": 3401.7999999999997, "start": 3394.48, "text": " the fact that these tiny little hacks or like design decisions, implementation decisions" }, { "end": 3405.52, "start": 3401.8, "text": " have a really large impact on performance." }, { "end": 3414.6000000000004, "start": 3405.52, "text": " And there are, there have been an abundance of really great, really usable modular libraries" }, { "end": 3418.04, "start": 3414.6000000000004, "text": " for model-free RL that have come out." }, { "end": 3422.8, "start": 3418.04, "text": " And I think it's led to an explosion of research." }, { "end": 3429.4, "start": 3422.8, "text": " Like it means that it's now research is now accessible to a lot more people because they" }, { "end": 3432.32, "start": 3429.4, "text": " have this platform to build off of." }, { "end": 3440.48, "start": 3432.32, "text": " I think we haven't really seen this explosion as much in model-based RL in big part because" }, { "end": 3450.88, "start": 3440.48, "text": " they're just haven't been a good library consisting of many algorithms that is stable," }, { "end": 3459.36, "start": 3450.88, "text": " easy to use, readable, modular, has hyper parameters for a bunch of benchmark environments." }, { "end": 3462.1200000000003, "start": 3459.36, "text": " And so that's what this is." }, { "end": 3464.28, "start": 3462.1200000000003, "text": " So that's what NBRL lib is." }, { "end": 3466.08, "start": 3464.28, "text": " And this is open-sourced." }, { "end": 3472.88, "start": 3466.08, "text": " And the goal is to have practitioners, researchers use this library, contribute to this library" }, { "end": 3475.36, "start": 3472.88, "text": " in order to further a model-based RL research." }, { "end": 3481.4, "start": 3475.36, "text": " I see that you have Nathan Lambert as a co-author and he hints at this library coming out when" }, { "end": 3482.7200000000003, "start": 3481.4, "text": " he was on episode 19." }, { "end": 3486.04, "start": 3482.7200000000003, "text": " But I think he didn't name it at the time because it wasn't announced yet." }, { "end": 3487.04, "start": 3486.04, "text": " Yeah." }, { "end": 3492.08, "start": 3487.04, "text": " He did that incredible work using the model-based RL to get that tumbling behavior with" }, { "end": 3498.44, "start": 3492.08, "text": " a half cheetah which kind of completely destroyed that benchmark, I think, with that environment" }, { "end": 3502.2, "start": 3498.44, "text": " with a very shocking score." }, { "end": 3503.8, "start": 3502.2, "text": " So it's cool to see this." }, { "end": 3504.8, "start": 3503.8, "text": " That's great to see." }, { "end": 3506.88, "start": 3504.8, "text": " So I can't wait to check that out." }, { "end": 3509.64, "start": 3506.88, "text": " And so right now I think there's two algorithms implemented." }, { "end": 3515.12, "start": 3509.64, "text": " And so I guess you're saying the plan is to have that as a long-term project to grow" }, { "end": 3516.12, "start": 3515.12, "text": " that set." 
}, { "end": 3524.12, "start": 3516.12, "text": " And so the goal is that there will also be, there have already been sort of users who have" }, { "end": 3525.72, "start": 3524.12, "text": " been contributing back to it." }, { "end": 3533.64, "start": 3525.72, "text": " And so the goal is that if you use the library to develop a new algorithm and write a paper" }, { "end": 3541.56, "start": 3533.64, "text": " about your algorithm that you can submit a PR to add that to the library." }, { "end": 3548.2, "start": 3541.56, "text": " So right now the two algorithms in the library focus on on a proprioceptive state." }, { "end": 3553.72, "start": 3548.2, "text": " And we're working on adding some algorithms that focus on rich observations as well to" }, { "end": 3554.72, "start": 3553.72, "text": " it." }, { "end": 3558.6, "start": 3554.72, "text": " So it's sort of, it'll be usable for both of those settings." }, { "end": 3560.48, "start": 3558.6, "text": " Ooh, rich observation model-based." }, { "end": 3561.48, "start": 3560.48, "text": " Yeah." }, { "end": 3566.72, "start": 3561.48, "text": " Okay, so can you comment on how do you see the balance between like following the literature" }, { "end": 3571.04, "start": 3566.72, "text": " and other people's work and then also coming up with your ideas?" }, { "end": 3576.84, "start": 3571.04, "text": " Like this, just the flood of papers doesn't seem to be slowing down and I'm sure all researchers" }, { "end": 3579.7599999999998, "start": 3576.84, "text": " could spend all day, every day just reading." }, { "end": 3581.64, "start": 3579.7599999999998, "text": " How do you kind of think about that balance?" }, { "end": 3583.8, "start": 3581.64, "text": " I guess it is a balance." }, { "end": 3588.64, "start": 3583.8, "text": " I mean, I guess my advice would definitely be read, like read a lot." }, { "end": 3594.84, "start": 3588.64, "text": " I don't know if, I assume that this experience is shared by a lot of people." }, { "end": 3602.6400000000003, "start": 3594.84, "text": " But I actually found that I assumed a lot more had been done than actually had been and" }, { "end": 3609.88, "start": 3602.6400000000003, "text": " reading made me understand where the holes are and it makes you sort of realize like what" }, { "end": 3618.96, "start": 3609.88, "text": " is still left to do when I enter into like a new subarea." }, { "end": 3625.4, "start": 3618.96, "text": " I think I typically tend to assume that a lot of major work has already been done and" }, { "end": 3628.6, "start": 3625.4, "text": " there's no point in doing it and there's no contributions to be made." }, { "end": 3633.68, "start": 3628.6, "text": " And I think when you read the literature and you follow what's going on and recent papers" }, { "end": 3638.88, "start": 3633.68, "text": " are a good way of doing this because they'll have related work sections where they explain" }, { "end": 3644.76, "start": 3638.88, "text": " what's been going on in the field so far and what is left to do." }, { "end": 3653, "start": 3644.76, "text": " It really, it helps you see where the holes are and where you can step in and contribute." }, { "end": 3658.2400000000002, "start": 3653, "text": " In terms of coming up with my own ideas and sort of like that balance, I've actually" }, { "end": 3665.2400000000002, "start": 3658.2400000000002, "text": " found that the easiest way to come up with new ideas is just through talking to people." 
}, { "end": 3670.88, "start": 3665.2400000000002, "text": " I guess my advice would be to just have research conversations with people in your lab," }, { "end": 3673.1200000000003, "start": 3670.88, "text": " with people at conferences." }, { "end": 3677.16, "start": 3673.12, "text": " This will be a lot easier when we move back to in-person conferences." }, { "end": 3682.64, "start": 3677.16, "text": " But a lot of my collaborations have come about just from meeting up with people at conferences" }, { "end": 3690.88, "start": 3682.64, "text": " and just chatting and it just makes you realize like a new fun idea that are a new insight" }, { "end": 3695.72, "start": 3690.88, "text": " that doesn't seem to really have spread in the literature yet." }, { "end": 3701.24, "start": 3695.72, "text": " So I think in terms of like a balance makes it sound like it's some like stationary" }, { "end": 3702.24, "start": 3701.24, "text": " thing." }, { "end": 3708.3999999999996, "start": 3702.24, "text": " I guess I would say it's usually what it should be is like you're following the literature." }, { "end": 3713, "start": 3708.3999999999996, "text": " When you enter into like a new topic or new area, you should be doing a lot more following" }, { "end": 3719.3599999999997, "start": 3713, "text": " the literature and then as you get more and more aware of like what's established to" }, { "end": 3726.2799999999997, "start": 3719.3599999999997, "text": " all like what's been done like in the last like 10, 20 years, then talking to other people" }, { "end": 3731.7999999999997, "start": 3726.2799999999997, "text": " and that kind of helps you figure out like where you can contribute." }, { "end": 3732.8, "start": 3731.8, "text": " Great." }, { "end": 3738.1600000000003, "start": 3732.8, "text": " And then do you spend much time focused on things well outside of RL or deep learning" }, { "end": 3743.44, "start": 3738.1600000000003, "text": " in terms of reading or concepts to find raw material for your work?" }, { "end": 3749.92, "start": 3743.44, "text": " I guess actually like a lot of inspiration comes from like the older RL literature," }, { "end": 3752.52, "start": 3749.92, "text": " like the pre-deep learning stuff." }, { "end": 3758.4, "start": 3752.52, "text": " I think there's a lot of really nice ideas like people have thought very carefully about" }, { "end": 3763.4, "start": 3758.4, "text": " different assumptions to make about different environments like how to utilize those assumptions." }, { "end": 3769.64, "start": 3763.4, "text": " So I found that literature to be very rich to dive into in order to get inspiration for" }, { "end": 3776.7200000000003, "start": 3769.64, "text": " how to develop algorithms and algorithms with guarantees that use deep learning and that" }, { "end": 3781.48, "start": 3776.7200000000003, "text": " can get scaled up to the kinds of problems that we deal with today." }, { "end": 3782.48, "start": 3781.48, "text": " Yeah." }, { "end": 3785.08, "start": 3782.48, "text": " So I guess like it's more of like the traditional RL focus." }, { "end": 3790.2, "start": 3785.08, "text": " I did want to say like bisonulation, you know, the stuff that Pablo has worked on and" }, { "end": 3791.2, "start": 3790.2, "text": " that I've worked on." }, { "end": 3793.2799999999997, "start": 3791.2, "text": " Bisonulation is actually very old." 
}, { "end": 3799.2799999999997, "start": 3793.2799999999997, "text": " It comes from like the formal verification community with regards to like how do you determine" }, { "end": 3803.3199999999997, "start": 3799.2799999999997, "text": " two systems are the same based on their inputs and outputs." }, { "end": 3813.72, "start": 3803.3199999999997, "text": " And then in the 90s bisonulation got transferred over or defined for the RL setting and then" }, { "end": 3821.8799999999997, "start": 3813.72, "text": " there's just been like a slow trickle of papers, especially by norm firms and Pablo about bisonulation" }, { "end": 3823.48, "start": 3821.8799999999997, "text": " for RL." }, { "end": 3825.52, "start": 3823.48, "text": " So I think old ideas." }, { "end": 3829.04, "start": 3825.52, "text": " I think there's a lot of richness in old ideas." }, { "end": 3830.2799999999997, "start": 3829.04, "text": " Everything's been done before." }, { "end": 3832.48, "start": 3830.2799999999997, "text": " It's just about modernizing it." }, { "end": 3833.48, "start": 3832.48, "text": " Cool." }, { "end": 3834.48, "start": 3833.48, "text": " Okay." }, { "end": 3838.12, "start": 3834.48, "text": " And then besides your own work, are there other things happening in RL these days that" }, { "end": 3840.3999999999996, "start": 3838.12, "text": " you're particularly excited about?" }, { "end": 3846.88, "start": 3840.4, "text": " I've been focusing more on compositional generalization and so actually less lately and again," }, { "end": 3850.12, "start": 3846.88, "text": " sort of more back in the day." }, { "end": 3855.4, "start": 3850.12, "text": " There's a lot of work on factored MDPs and object oriented MDPs, different kinds of assumptions" }, { "end": 3861.96, "start": 3855.4, "text": " of structure that lend themselves really nicely to achieving compositional generalization." }, { "end": 3868.12, "start": 3861.96, "text": " I think these ideas tie a lot more closely to model based RL rather than model free," }, { "end": 3873.12, "start": 3868.12, "text": " which is again, why we've been pushing this MBRL lib." }, { "end": 3880.04, "start": 3873.12, "text": " In terms of work that's been coming out more recently, I think there's been a lot of" }, { "end": 3883.44, "start": 3880.04, "text": " exciting work on the use of external data." }, { "end": 3889.08, "start": 3883.44, "text": " Like again, trying to use information sources or feedback sources that aren't just" }, { "end": 3894.96, "start": 3889.08, "text": " a reward because reward is really hard to design in a lot of problems and it often can" }, { "end": 3896.7999999999997, "start": 3894.96, "text": " be sparse." }, { "end": 3903.88, "start": 3896.8, "text": " And so if we can use other sources of data like demonstrations, like videos of humans," }, { "end": 3910.36, "start": 3903.88, "text": " like performing tasks, then in order to speed up learning, I think those things are really" }, { "end": 3911.36, "start": 3910.36, "text": " exciting." }, { "end": 3918.52, "start": 3911.36, "text": " There's a paper out of Chelsea Finns lab, especially, I don't remember the title, but the" }, { "end": 3922.88, "start": 3918.52, "text": " first author, the first author is Annie." }, { "end": 3927.6, "start": 3922.88, "text": " He was something about in the RL from in the wild videos." 
}, { "end": 3933.6, "start": 3927.6, "text": " And so I think they used YouTube videos in order to improve sample efficiency of RL across" }, { "end": 3934.6, "start": 3933.6, "text": " multiple tasks." }, { "end": 3936.1600000000003, "start": 3934.6, "text": " That was really exciting." }, { "end": 3939.84, "start": 3936.1600000000003, "text": " Is there anything else that I should have asked you about today or that you think the" }, { "end": 3941.6800000000003, "start": 3939.84, "text": " audience might want to hear about?" }, { "end": 3947.12, "start": 3941.6800000000003, "text": " I guess that I'm pretty biased, but I think you did a really great job of asking a lot" }, { "end": 3948.12, "start": 3947.12, "text": " of questions." }, { "end": 3953.96, "start": 3948.12, "text": " At least I think are important about generalization and reinforcement learning, so yeah, this has" }, { "end": 3955.3599999999997, "start": 3953.96, "text": " been really fun." }, { "end": 3956.3599999999997, "start": 3955.3599999999997, "text": " Awesome." }, { "end": 3962.64, "start": 3956.3599999999997, "text": " And likewise, so do you have any suggestions for the show or the format or who we should" }, { "end": 3965.64, "start": 3962.64, "text": " hear from or anything else about the show?" }, { "end": 3969.7599999999998, "start": 3965.64, "text": " I'm hoping that it can be a useful resource to other people and it's actually really hard" }, { "end": 3971.64, "start": 3969.7599999999998, "text": " to get critical feedback." }, { "end": 3975.12, "start": 3971.64, "text": " I'm a huge fan of just more interaction." }, { "end": 3978.3599999999997, "start": 3975.12, "text": " Obviously that's very difficult in podcast form." }, { "end": 3982.56, "start": 3978.3599999999997, "text": " I think, yeah, I guess that is just hard to podcast." }, { "end": 3985.16, "start": 3982.56, "text": " I just like Q and A sessions are always nice." }, { "end": 3992.48, "start": 3985.16, "text": " I don't know how you, I guess, like get feedback from your viewers or listeners at this," }, { "end": 3999.16, "start": 3992.48, "text": " but I'm curious, like if that's something that you incorporate in terms of the show." }, { "end": 4000.16, "start": 3999.16, "text": " Okay." }, { "end": 4002.56, "start": 4000.16, "text": " So I have done a few polls on the Twitter." }, { "end": 4005.88, "start": 4002.56, "text": " So there's the Twitter, which is talk URL podcast." }, { "end": 4011.6, "start": 4005.88, "text": " And there's quite a few followers there and there is some interaction." }, { "end": 4012.6, "start": 4011.6, "text": " We get some comments." }, { "end": 4019.16, "start": 4012.6, "text": " I did some polls to ask people about general things, but mate, and I also asked who people" }, { "end": 4020.16, "start": 4019.16, "text": " would like to see." }, { "end": 4022.68, "start": 4020.16, "text": " And actually a number of guests came out of those questions." }, { "end": 4023.7999999999997, "start": 4022.68, "text": " So that would be one thing." }, { "end": 4029.68, "start": 4023.7999999999997, "text": " I guess another thing I could do is pre-announce the guest and then get people to ask things" }, { "end": 4030.68, "start": 4029.68, "text": " on the Twitter." }, { "end": 4032.36, "start": 4030.68, "text": " Maybe is that what you're pointing to?" }, { "end": 4034.6800000000003, "start": 4032.36, "text": " Yeah, I think, yeah, that would be nice." 
}, { "end": 4037.96, "start": 4034.6800000000003, "text": " Yeah, just like ways of getting listeners involved." }, { "end": 4038.96, "start": 4037.96, "text": " I'm a huge fan." }, { "end": 4043.36, "start": 4038.96, "text": " I guess like where that comes from is like I'm a huge fan of like the unconference format" }, { "end": 4049.1600000000003, "start": 4043.36, "text": " that Ian Goodfellow is sort of espoused where instead of creating a conference where you" }, { "end": 4056.44, "start": 4049.1600000000003, "text": " have speakers who are invited and sort of like talk out of crowd, you instead have the participants" }, { "end": 4058.08, "start": 4056.44, "text": " bringing the content, right?" }, { "end": 4063.3199999999997, "start": 4058.08, "text": " You have them create breakout groups and lead discussions and I think it's a really great" }, { "end": 4064.3199999999997, "start": 4063.3199999999997, "text": " format." }, { "end": 4070.96, "start": 4064.3199999999997, "text": " In terms of who else you should hear from, again, from my very biased perspective, I think" }, { "end": 4074.16, "start": 4070.96, "text": " Claire, I've done a bunch of really great work with her." }, { "end": 4075.48, "start": 4074.16, "text": " She's a very strong researcher." }, { "end": 4078.16, "start": 4075.48, "text": " I really enjoyed our collaborations." }, { "end": 4083.24, "start": 4078.16, "text": " She's done a lot of really strong work in terms of calls on friends in RL and formal" }, { "end": 4085.24, "start": 4083.24, "text": " verification." }, { "end": 4086.56, "start": 4085.24, "text": " Audrey Durand." }, { "end": 4094.16, "start": 4086.56, "text": " She's a professor at University of Laval as she does Bandit Theory and as well as RL" }, { "end": 4095.16, "start": 4094.16, "text": " for Healthcare." }, { "end": 4103.6, "start": 4095.16, "text": " So I'm sure she has like a lot of fun anecdotes about using RL and Bandits in the real world" }, { "end": 4105.88, "start": 4103.6, "text": " and George Conradaris from Brown." }, { "end": 4109.64, "start": 4105.88, "text": " Obviously, I think we share a lot of the same views." }, { "end": 4110.64, "start": 4109.64, "text": " Awesome." }, { "end": 4111.64, "start": 4110.64, "text": " These are great suggestions." }, { "end": 4115.8, "start": 4111.64, "text": " I've actually wanted to have Claire lie on the show for a while and I've had the been" }, { "end": 4117.320000000001, "start": 4115.8, "text": " lucky to meet her." }, { "end": 4122.12, "start": 4117.320000000001, "text": " I did invite Audrey and I will follow up with her and George is a great suggestion too." }, { "end": 4123.12, "start": 4122.12, "text": " So thanks for that." }, { "end": 4124.12, "start": 4123.12, "text": " Yeah." }, { "end": 4125.52, "start": 4124.12, "text": " Dr. Amy Zhang, this has been fantastic." }, { "end": 4129.12, "start": 4125.52, "text": " Thank you so much for sharing your time today and your insight with TalkRL." }, { "end": 4130.12, "start": 4129.12, "text": " Yeah." }, { "end": 4140.12, "start": 4130.12, "text": " Thank you for having me." }, { "end": 4144.88, "start": 4140.12, "text": " Notes and links for this episode are at talkrl.com." }, { "end": 4147.12, "start": 4144.88, "text": " If you like this show, I need your support." }, { "end": 4148.92, "start": 4147.12, "text": " You can help in a few ways." }, { "end": 4151.92, "start": 4148.92, "text": " One, subscribe on your favorite podcast platform." 
}, { "end": 4153.8, "start": 4151.92, "text": " Subscriptions make a big difference." }, { "end": 4158.4800000000005, "start": 4153.8, "text": " Two, follow us on Twitter and talk RL podcast." }, { "end": 4159.88, "start": 4158.4800000000005, "text": " We love retweets." }, { "end": 4164.96, "start": 4159.88, "text": " Three, give us a five star rating on Apple podcasts." }, { "end": 4174.16, "start": 4164.96, "text": " If you don't think we deserve five stars, let us know on Twitter what we could do better." }, { "end": 4175.16, "start": 4174.16, "text": " Bye." } ]
Xianyuan Zhan
Xianyuan Zhan on DeepThermal for controlling thermal power plants, the MORE algorithm for Model-based Offline RL, comparing AI in China and the US, and more!
https://media.transistor…bc4.mp3?src=site
This is TalkRL Podcast. All reinforcement learning, all the time. Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan. Professor Zhan is currently a research assistant professor at the Institute for AI Industry Research at Tsinghua University. He received his PhD degree at Purdue University. Before joining Tsinghua University, Professor Zhan worked as a researcher at Microsoft Research Asia and a data scientist at JD Technology. At JD Technology, he led the research that uses offline RL to optimize real-world industrial systems. Welcome, Professor Zhan. Thank you, Robin. And how do you describe your research interests? My current research interests are mainly in reinforcement learning, especially offline reinforcement learning. I have also worked on problems related to complex systems and data-driven methods in transportation applications. But currently, my major research interests are focused on offline RL. Can you say more about your RL work at JD Technology? Basically, I'm a research-oriented data scientist and I used to lead a small team there, developing data-driven control optimization algorithms, as well as some other AI algorithms for industrial optimization problems, such as this thermal power plant optimization work, where we use reinforcement learning techniques to optimize the control strategy for industrial systems. Can you tell us a bit more about the Institute for AI Industry Research at Tsinghua University? What are the main focus areas there? The Institute for AI Industry Research, or AIR as we also call it, is basically a new research institute founded at Tsinghua University last year. Its mission is to conduct cutting-edge research that is transferable to industry, and we want to solve real industrial problems. Currently, AIR focuses on three research directions: the first is AI for transportation, such as autonomous driving, and also AI for healthcare and AI for the Internet of Things. I joined AIR in July this year. Great. Okay, let's move to the main paper we're going to discuss today. So the main thing is DeepThermal: Combustion Optimization for Thermal Power Generating Units Using Offline Reinforcement Learning, which is first-authored by yourself et al. Yeah, together with several co-authors: Haoran Xu, Yue Zhang, Xiangyu Zhu, Honglei Yin, and the last one is Yu Zheng. So I remember seeing this paper soon after it first came out. I'm not sure where that was, maybe on Twitter. But then I met you at your poster session during the Reinforcement Learning for Real Life workshop at ICML 2021, and I was immediately struck by the scope of what you were doing here. I don't think I completely appreciated that the first time I encountered the paper. So I'm super glad to have you on the show and to hear directly from you in more detail about this exciting work. Can you start us off with a brief overview of what's going on in this paper? Sure. So this work is about developing a new data-driven AI system called DeepThermal to optimize the combustion efficiency of real-world thermal power generating units. For those who are not familiar with thermal power generating units, or TPGUs, basically it's the central facility in a power plant that converts the chemical energy of the coal to electric power.
And it is a very complex and large industrial system, with a size almost like a ten-story building, with lots of equipment, a huge number of sensors, and very complicated operation dynamics. So what we do is try to optimize the combustion efficiency of a TPGU so that we can achieve more efficient combustion, use less coal, and produce less emission while generating a similar amount of electricity. So basically the problem we're dealing with is a high-dimensional, continuous, constrained control problem on a large, partially observed system. And also, because the TPGU, the thermal power generating unit, is a mission-critical system, we do not have the possibility to interact with the system during model training. So we need to use offline RL techniques to solve this problem. And also in this paper we develop a new model-based offline RL algorithm called MORE to solve this problem. We developed this system, deployed it in real-world power plants, and validated it through a series of field experiments. Awesome. Okay. And then the paper claims that this is the first study that applies offline RL in a large, complex, real-world industrial control scenario that the authors are aware of. And I guess I might have earlier pointed back to DeepMind's HVAC system, the data center cooling system, though as we heard from Natasha Jaques at the beginning of this podcast, that system wasn't actually deployed. But do you have any other comments on how unique this system is? Okay, so at least from Google's NeurIPS 2018 HVAC paper, I think their solution isn't really using offline RL. Basically they learn a dynamics model from the data and use model predictive control, MPC, to perform the control. So I would say it's a model-based control problem and it is not using offline RL. I think using offline RL is what we do differently in this paper. Okay. And then before we get into more details here, I want to talk about the fact that we're dealing with coal plants here. And it's hard for me to discuss coal power plants without saying we know that coal is very dirty in terms of emissions and greenhouse gases. And for that reason, I do wish that there were no coal power plants running. But there are. And I assume that that's not your decision, but you're just here to optimize the plants as they stand. Is that right? Yeah, yeah. I think I share most of that point with you, because actually the policy makers in China have made a very clear plan to cut down coal-fired power plants over maybe the next 10 to 20 years. And it is expected that within 20 to 30 years the electricity generation contribution of coal power in China will fall from about 65% now to less than 10%. But the fact is, to make this transition you need a huge amount of investment and infrastructure building, so this process of cutting down coal-fired power plants will take quite some time. I think our work here is mainly to make this transition process greener. Basically, what we are doing is trying to do what we can to help the transition by using AI techniques to make this process better. And so apparently you were able to achieve improved performance on these plants. Can you talk about the performance and how you measured it and evaluated it? And you ran real-world tests on your systems, right? Yes, that's right.
So basically we performed a series of experiments both in the simulation environment and in real-world thermal power plants. The simulation is mainly for model selection and validation, that is, pre-selection and some partial validation. And we actually conducted a lot of real-world experiments on real-world power plants, because the system is already deployed in four or five real thermal power plants right now. For each power plant, we conducted a series of human-in-the-loop before-and-after tests. Basically the experiments go like this: we first find a time slot with a relatively stable load, say about 300 megawatts, and we record the combustion efficiency, the emissions, and other key performance indicators before the test. During the experiment, we ask the human operators to follow the optimized control strategies provided by the RL agent to adjust the control of the TPGU. We record the data every five to ten minutes, and the experiments typically last for about half an hour to maybe 1.5 or two hours. We record all these results, and we report some of the experimental results at different load segments in a power plant. We are able to achieve about 0.3% to 0.5% improvements in combustion efficiency, and also reduce the nitrogen oxide emission. 0.3% sounds small, but actually it means a lot for thermal power plants, because a typical modern TPGU usually operates with a combustion efficiency around 92 to 94%, which is already very high. So even another 0.5% becomes very difficult, and if you can consistently achieve a 0.5% increase in combustion efficiency, you can help a thermal power plant save about 3,000 tons of coal a year. Okay. And would you say that your agents behaved significantly differently than a human operator would, or was it kind of mostly doing imitation in the end? Okay, I would say it depends, because we did run some experiments to see when the agent behaves differently than a human and when it behaves similarly. We observed that at some of the states with relatively low combustion efficiency, the RL agent gives a very different strategy compared to the human operator. For example, in one power plant we found that under several operating conditions the regular human control strategies are not very good: they will inject too much primary air, or they will use unbalanced secondary air from the two sides of the burner, which makes the fireball in the burner not form in the center. Under these conditions the RL agent gives a very different control strategy, and we think it's much more reasonable than the original human strategies. For other cases, when the system operates well, I think the RL agent performs very similarly to a human operator. And then is there any way to know how well a perfect controller could do in this type of problem? Like, what do you think is the very peak efficiency that could be achieved if the RL controller was somehow perfect? Is there any way to know that? Okay, so I think this is a very hard problem, because the energy industry has investigated the combustion optimization problem for decades, and right now I think the traditional, conventional approaches have reached a bottleneck because the problems are very difficult.
And the perfect controller or perfect control strategy is very hard to find, because the situation and the environment are very complex. From a very high-level view, I would say a good controller, at least for this problem, has to find the right proportion of hot and cold wind going into the burner and also accurately adjust a dozen secondary air valves as well as several other control variables. It would be very difficult for a human to handcraft such a rule-based system. I think that's why we chose RL to solve this problem: it's a very complex system, and in most cases you are facing a black box to optimize, so it is very hard for conventional rule-based or conventional control algorithms to solve this problem. I think AI is probably the right direction for these kinds of hard problems. Can you tell us about the DeepThermal project overall, like how was this project started, whose idea was it, and who was involved? So the project actually started when we were a group of researchers at Microsoft Research Asia, and later we moved to JD Technology. When we were back at MSRA, a senior manager at China Energy Group, which is a very large state-owned energy company in China, contacted us and introduced the combustion optimization problem. They said they had spent years on this problem, they had hit some bottlenecks, and they were asking us to see if we could use some AI methods to solve it. From the beginning we found this to be a very interesting problem, because it is challenging, and also it solves a real-world problem that can have a lot of social and environmental impact. And lastly, because we could get a lot of real-world industrial operational data and we had a place to test our model on a real-world TPGU, it was a very good chance. So we brought this project to JD Technology and started a project for it there. The research and development of this project started at the beginning of 2018, and we spent almost a year to finish the first prototype and to test the first generation of our model in a China Energy power plant in March 2019. Later we spent another year to keep improving the model performance and developing a better solution. Right now the algorithm has become a product of the company, and we have deployed it in five different power plants in China. And then was this a case where you applied this MORE algorithm that you had developed in the past to this problem, or did you develop MORE specifically for the TPGU task? So basically MORE is the central component of our AI system DeepThermal. MORE itself is a general-purpose offline RL algorithm for constrained Markov decision process problems. It is a new algorithm developed in this research, and it uses a model-based offline RL framework. But of course there are a lot of engineering components in DeepThermal: for example, we also did a lot of feature engineering, especially for the TPGU optimization task, and we added a lot of hard-coded constraints to the output of the optimized RL policies. MORE is actually the previous version of the offline RL algorithm used in DeepThermal; right now we have a newer version that is probably more robust, and we might write another paper on that.
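The hard-coded constraints on the policy output mentioned above are only described at a high level in the interview. Below is a minimal, hypothetical Python sketch of what such a post-hoc safety wrapper could look like; the variable names, operating bounds, and per-step change limits are illustrative assumptions, not values from DeepThermal or the paper.

import numpy as np

# Hypothetical operating bounds for a few merged control variables (illustrative only).
ACTION_BOUNDS = {
    "primary_air_valve": (0.2, 0.9),      # fraction open
    "secondary_air_baffle": (0.1, 0.8),   # fraction open
    "coal_feeder_rate": (20.0, 60.0),     # tonnes per hour
}
MAX_DELTA = {  # largest change allowed per control step, relative to the current setting
    "primary_air_valve": 0.05,
    "secondary_air_baffle": 0.05,
    "coal_feeder_rate": 2.0,
}

def apply_hard_constraints(raw_action: dict, current_setting: dict) -> dict:
    """Clip a raw policy output to absolute bounds and to a maximum per-step change."""
    safe = {}
    for name, value in raw_action.items():
        lo, hi = ACTION_BOUNDS[name]
        value = float(np.clip(value, lo, hi))                  # absolute operating range
        delta = np.clip(value - current_setting[name],         # limit step size so the
                        -MAX_DELTA[name], MAX_DELTA[name])     # recommendation stays gradual
        safe[name] = current_setting[name] + float(delta)
    return safe

# Example: the policy suggests an aggressive jump; the wrapper tempers it.
print(apply_hard_constraints(
    raw_action={"primary_air_valve": 0.95, "secondary_air_baffle": 0.4, "coal_feeder_rate": 10.0},
    current_setting={"primary_air_valve": 0.6, "secondary_air_baffle": 0.35, "coal_feeder_rate": 35.0},
))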
Okay, and I look forward to reading that. This paper mentions over 10,000 sensors per TPGU, so that's a lot of sensors, a huge observation space. Can you tell us more about these sensors? And with that many sensors you have to worry about faulty or missing readings as well. Yeah, yeah, that's correct. So basically the TPGU is a very large piece of infrastructure, and it is composed of a lot of different equipment and a lot of sensors. Many of the sensors are monitoring the temperature of different parts of the TPGU, for example the temperature of the air, the wind, the water, the coal particles, as well as the surface of the boilers and the water and steam pipes. There are also lots of sensors monitoring the air and water pressure at different locations, and the volumes of the air, the water, the vapor, and the coal are also monitored, plus some other states like the concentration of the emissions and the current load of the TPGU. So basically it's a lot of sensors, and in many cases, for example at a certain location in a burner, there might be several different temperature sensors. So what we do is perform a series of data cleaning, filtering, and feature engineering steps to process the raw data. For example, for many of the states such as temperature, there might be multiple sensors for a similar location, and we just filter out the faulty ones, average the rest, and resample the data into maybe 20-to-30-second intervals. This helps to reduce the observation noise and some faulty readings in the data. So basically the whole process is, first, a series of outlier detection and data filtering steps, and second, some data engineering techniques to make the sensor readings more stable and accurate. These data are used in the training and the online inference for the RL agent. And then talking about the action space here, the paper mentions 70 to 100 continuous control variables. That's a lot of actions. Can you tell us more about what types of actions are used here? Okay, so basically most of the actions are adjustments of the valves and baffles in the TPGU. For example, you have a lot of valves for wind, for water, and for coal particles, and you also have the baffles and valves for some of the wind blowers, and then you have the different types of boiler and burner equipment. So the actions range from 70 to 100, and all of these are continuous variables. Some of the actions actually share the same operational mode; for example, maybe two or three actions might take the same value during operation, so these actions are merged. So finally, after we merge some of the key actions, there are about 30 to 50 continuous actions that actually go into the RL model, and it differs for different types of TPGUs. And you had a human in the loop here interpreting the actions output by your agent, is that right? And did that mitigate safety concerns, and did any safety concerns arise? Yeah, it surely mitigates some safety concerns. But our work is actually a really new thing for the energy industry in China, especially using a technique like RL to control such a mission-critical system, so at the current stage nobody in the energy industry is applying such a new technology without any caution.
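A minimal sketch, in Python with pandas, of the kind of sensor preprocessing described above: masking out-of-range readings, averaging redundant probes at the same location, and resampling to roughly 20-to-30-second intervals. The column names, valid ranges, and toy data are assumptions for illustration only, not the plant's actual signals.

import numpy as np
import pandas as pd

# Toy raw feed: two temperature probes at the same burner location, one of them faulty.
rng = np.random.default_rng(0)
idx = pd.date_range("2021-01-01", periods=600, freq="1s")
raw = pd.DataFrame({
    "burner_temp_a": 950 + rng.normal(0, 2, size=600),
    "burner_temp_b": 950 + rng.normal(0, 2, size=600),
}, index=idx)
raw.loc[raw.index[100:110], "burner_temp_b"] = -9999.0  # stuck/faulty readings

def clean_and_resample(df: pd.DataFrame, valid_range=(0, 2000), window="30s") -> pd.Series:
    """Mask out-of-range outliers, average redundant probes, resample to a coarser interval."""
    masked = df.where((df > valid_range[0]) & (df < valid_range[1]))  # crude outlier detection
    fused = masked.mean(axis=1, skipna=True)                          # average the remaining probes
    return fused.resample(window).mean()                              # ~20-30 s intervals

burner_temp = clean_and_resample(raw)
print(burner_temp.head())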
So our beginning step is actually to put a human in the loop: we provide the optimized control strategy as recommendations to the humans, and they operate the actual systems. I think it's basically the first step towards bringing such a new technology to a relatively traditional industry, but I think it makes a very good case to show that AI can actually be used in such mission-critical decision-making applications. But of course it still has a long way to go, and we need to build more robust AI algorithms for truly closed-loop control. Can you say a little bit more about how your MORE algorithm works, in a bit more detail? What are the central components of this algorithm? Okay, so MORE is basically a model-based offline RL algorithm. We chose the model-based architecture because the problem is a large-scale problem: we are dealing with a very high-dimensional state space and action space, but our data is only about two years of operational data from the power plants, so compared to the size of the optimization problem, the data is really not that much. So we first build a data-driven simulator from this data and use this simulator as a model to generate imaginary rollouts, producing simulated data to facilitate the RL training. But since we are dealing with an offline RL algorithm, the only reliable information is the data, and the simulator is also learned from data, so it's not very reliable. So the key thing MORE does is tackle the challenge of offline policy learning under constraints with an imperfect simulator. It tackles two problems. First, we are solving a constrained optimization problem, and we need to make the optimized policy satisfy the safety constraints. In essence, we introduce an additional cost critic, which is like a cost Q function, to model the safety constraint satisfaction of this optimization problem. For the reward, MORE uses the clipped double-Q technique, using two reward critic Q functions to penalize uncertainty in the reward Q function and alleviate the overestimation issue that commonly occurs in offline RL; for the cost critic Q function, we perform Q evaluation to update its value. All of this policy optimization is performed offline, and we carefully combine the real and simulated data; that is the global picture. And to address the offline learning, you basically need to tackle the problem of potentially problematic data introduced by the imperfect simulator. So what we do is introduce a new restrictive exploration strategy to fully utilize the generalizability of the simulator, while at the same time not fully trusting the simulator. We perform a series of data filtering and simulated-data processing steps to make the simulator samples reliable during training.
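A toy numpy sketch of the two critic targets just described: a pessimistic clipped double-Q target for the reward critics, and a plain Bellman target for the cost critic used to check constraint satisfaction. This is a simplified illustration of the idea, not MORE's actual update rules; the cost limit and all numbers below are made up.

import numpy as np

def reward_target(r, q1_next, q2_next, gamma=0.99):
    """Pessimistic (clipped double-Q) Bellman target for the reward critics:
    taking the minimum of two critics penalizes actions where they disagree."""
    return r + gamma * np.minimum(q1_next, q2_next)

def cost_target(c, qc_next, gamma=0.99):
    """Plain Bellman target for the cost critic used for the safety constraint."""
    return c + gamma * qc_next

def is_feasible(qc_value, cost_limit):
    """A candidate action is acceptable only if its estimated discounted cost stays under the limit."""
    return qc_value <= cost_limit

# Toy numbers for one batch of transitions.
r = np.array([1.0, 0.5]); c = np.array([0.0, 0.3])
q1n = np.array([10.0, 8.0]); q2n = np.array([9.0, 11.0]); qcn = np.array([0.5, 2.0])
print(reward_target(r, q1n, q2n))                      # uses min(q1, q2)
print(cost_target(c, qcn))
print(is_feasible(cost_target(c, qcn), cost_limit=1.0))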
So the restrictive exploration strategy actually contains two steps. We first perform a model-sensitivity-based filtering to filter out the simulator samples where the model is not certain about its prediction. We measure model sensitivity by injecting noise into the model input and computing the resulting output perturbations, and if the model has a very large sensitivity at some simulator samples, those samples are filtered out. The remaining samples are further evaluated based on their probability under the data distribution: we treat them as positive samples if they belong to the high-density region of the data, and consider them negative samples if they belong to the low-density region or are out-of-distribution. The rewards of these negative samples are penalized to avoid potential extrapolation error during the RL training. Finally, we construct a special replay buffer that combines the real data with both positive and negative simulator samples, and we use that for the offline training; we call it hybrid training. That is the overall picture. So basically it's solving a constrained optimization problem plus an offline learning process. How do the constraints work? Like, what types of safety constraints are you concerned with? Is it maximum temperatures, or a variety of things? Yeah, it's basically a variety of things. We have some temperature safety constraints; for example, the temperature cannot exceed certain safety thresholds for certain parts of the boiler. There are also some pressure constraints; for example, the internal pressure of the burner should be kept negative, otherwise there might be a risk of explosion. Also, the load generated by the TPGU must satisfy the demanded load. There is a naive solution where you just reduce the coal and the water used, so the coal consumption will drop, but then you will not generate enough electricity. So it's a series of constraints like this. And can you tell us about the reward in this task? What is your agent trying to maximize? Okay, so the reward for this problem is very straightforward. It's basically a composition of the improvement in combustion efficiency and the reduction in nitrogen oxide emission. Combustion efficiency itself is a well-defined and measured quantity in the energy industry, so we just use that. We also measure the decrease in nitrogen oxide emission, and it is paired with the increase in combustion efficiency. So basically what we are trying to do is maximize the combustion efficiency while controlling or reducing the nitrogen oxide emission. How do you approach exploration here? I gather you try not to do exploration in the real world, but do you try to explore in the simulator? How does exploration work? Okay, so basically we try to do minimal exploration because it's an offline RL problem. From the data, you really do not do any exploration; you mainly do exploitation. From the simulator, or from the model, we do some rollouts using the model, but we actually filter out a lot of data that is out of the data distribution.
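A rough numpy sketch of the two-step restrictive exploration filter described above: (1) drop simulator samples where small input noise causes large output perturbations, and (2) split the survivors into positive (in-distribution) and negative (out-of-distribution) samples whose rewards would then be penalized. The toy dynamics model, the nearest-neighbor density proxy, and the thresholds are all assumptions for illustration, not the paper's actual choices.

import numpy as np

rng = np.random.default_rng(1)

def toy_dynamics_model(x):
    """Stand-in for the learned simulator: predicts the next state from a (state, action) vector."""
    return np.tanh(x @ W)

dim, n_real = 6, 500
W = rng.normal(size=(dim, dim))
real_data = rng.normal(size=(n_real, dim))          # offline dataset of (state, action) vectors
simulated = rng.normal(scale=1.5, size=(200, dim))  # candidate simulator samples (wider spread)

def sensitivity(model, x, noise_scale=0.01, n_probes=8):
    """Output perturbation caused by small input noise; large values mean the model is unsure there."""
    base = model(x)
    probes = [model(x + rng.normal(scale=noise_scale, size=x.shape)) for _ in range(n_probes)]
    return np.max([np.linalg.norm(p - base, axis=-1) for p in probes], axis=0)

def density_proxy(x, data, k=10):
    """Crude density estimate: inverse of the mean distance to the k nearest real samples."""
    d = np.linalg.norm(data[None, :, :] - x[:, None, :], axis=-1)
    return 1.0 / (np.sort(d, axis=1)[:, :k].mean(axis=1) + 1e-8)

sens = sensitivity(toy_dynamics_model, simulated)
keep = simulated[sens < np.quantile(sens, 0.7)]              # step 1: drop high-sensitivity samples
dens = density_proxy(keep, real_data)
threshold = np.quantile(density_proxy(real_data, real_data), 0.2)
positive = keep[dens >= threshold]                           # in-distribution: trusted as-is
negative = keep[dens < threshold]                            # out-of-distribution: reward gets penalized
print(len(positive), "positive /", len(negative), "negative simulator samples")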
So I will say exploration is not really a main thing in this work; it's mainly about exploitation. Is there any hope of, or did you look at, or do you maybe plan to look at transfer learning to generalize across different plants? Or is that maybe a more distant goal, or is it really about treating each plant as its own task? Okay, so basically we are treating each plant as a different task, because usually in different plants the actual configuration and equipment of a TPGU is very different: for example, they might have different boilers, different burners, and different wind blowers. So there is really not a lot that can transfer, and in some cases even the sensors are deployed at different locations of the TPGU. So if you try to do transfer learning, it actually becomes a more challenging task. What we do instead is try to extract the maximum information from a single power plant and use that information to train our agent offline. And I was going to ask if you consider the problem solved, but you mentioned that you're already working on the next version. Can you share anything with us about the next version, or would you prefer that we wait for the next paper? Yeah, I can give some brief discussion about this. I do not think this is a well-solved problem; I would say it's ongoing work. We did achieve some improvements with human-in-the-loop control, but our final goal is actually to do closed-loop control. So what we are trying to solve right now are several improvements to the offline RL algorithm. For example, we want to do sample-efficient offline RL, because the data in a power plant might not be that much, and we want to use a small amount of data to still train a very reliable policy. Also, generalization ability is a key issue in offline RL: if we want a really high-performing policy, you need to go beyond the current dataset, so that also requires our agent to have better generalization ability. Also, since we are deploying the model in real-world power plants, we might get a chance to collect new data after they use the policy we provided, and how to use this new data together with the old offline RL model, to do offline-to-online or deployment-efficient RL, is another direction we are looking at. And lastly, robustness: because a lot of sensors in industrial equipment might have some noise, you need to make sure your policy is very robust to noisy input, and that is also what we do in the next version. We introduce some adversarial training, with noise injection, during the RL policy learning to make the policy more robust, and we did see it produce more stable optimized policies. Okay, I really look forward to reading all about your next version, as this one's already a massively impressive accomplishment. But going forward for you, what does the future look like? I guess you're going to continue working in this area of industrial RL? Yeah, so basically I shifted from JD Technology to Tsinghua University, but I'm still collaborating with my previous team on using offline RL and other data-driven decision-making methods to solve industrial control problems.
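A small illustrative check of the noise-robustness idea mentioned above: perturb the sensor observations and measure how much the recommended actions drift. The linear stand-in policy, noise scale, and dimensions are assumptions; the actual training procedure in the newer version of the system is not described in detail here.

import numpy as np

rng = np.random.default_rng(2)

def noisy(obs, scale=0.02):
    """Perturb sensor observations with small Gaussian noise, relative to their magnitude."""
    return obs * (1.0 + rng.normal(scale=scale, size=obs.shape))

def policy(obs, weights):
    """A linear stand-in policy mapping observations to bounded control adjustments."""
    return np.tanh(obs @ weights)

obs = rng.normal(size=(32, 10))          # a batch of cleaned sensor states
weights = rng.normal(size=(10, 4)) * 0.1

# Robustness check: how much do the recommended actions move under observation noise?
clean_actions = policy(obs, weights)
drift = np.linalg.norm(policy(noisy(obs), weights) - clean_actions, axis=1)
print("mean action drift under noisy sensors:", drift.mean())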
And back at Tsinghua University, I'm also looking at applying offline RL to more diverse areas such as autonomous driving and healthcare recommendations, these kinds of problems. But the main focus I am currently working on is to see if we can develop better offline RL, for example with better sample efficiency and better generalization ability, that has the potential to better solve real-world problems. So you're in maybe a rare position where you have worked on both sides, in the US at Purdue and also at Microsoft, and working in China and Singapore. Do you have any comments on how the approach to AI may differ between the US and the West, and China? Yeah, I think AI research in the US has a very good environment for encouraging innovation, where you can do a lot of theoretical work and also innovate in developing new methods to solve diverse kinds of problems. But in China, I think one advantage is, first, that AI is considered highly in the country and has a lot of investment, and also a lot of conventional, traditional industries such as energy and others are more willing to try AI solutions. In the US, maybe the new applications are mainly carried out by the big tech companies, but in China many people from other industries are willing to reach out and try something new. I think you have more flexibility to do some of the things that are interesting. In the US, I think there are a lot of regulations; maybe you can do a lot of things inside a big company, but deploying experimental research without a lot of support to develop something new is harder, I would say. So it might not be as flexible as in China. So besides your own work, are there other things happening in RL that you find quite interesting these days that you'd want to comment on? Okay, so because I'm really working on offline RL problems, one of the key issues I'm looking at is how the generalizability of offline RL can be improved, because in offline RL you are basically solving a very restrictive problem under a very restrictive setting. You cannot go beyond the data to exploit the unknown part, but for offline RL to be really useful in real-world deployment, you need to make reasonable inferences and maybe use more generalization ability to predict or infer on something you don't know, whether it's unknown data or an unknown situation. So one thing I think is very important and worth looking at is to really improve the generalization ability of the RL algorithm as well as the generalization ability of the neural models used in RL. I think this is very important and really interesting. Also, I think combining causal reasoning into RL is a very interesting direction, because most of the RL algorithms we use today are basically based on Q-learning, so you are performing inference on the Q table or Q function, and that is a lot of times not enough to solve complex problems, because most real-world problems are not simple single-step decision problems: they might have multi-step interactions and some other complex dynamics.
And I think if one can perform very good causal reasoning in the RL algorithm, that can greatly improve the performance of the algorithm, and you can also really improve the generalization ability as well as the sample efficiency. And the last thing, which is actually what I'm working on right now, is to combine contrastive learning with offline RL to enhance the sample efficiency and to get the most information out of the limited data. Is there anything else that you'd want to mention today, or that I should have asked you about? Yeah, maybe I'll just add one point: I think offline RL is a very promising area and we can really use it to solve a lot of things. For example, a lot of real-world problems actually do not have a simulator, a perfect simulator is often impossible to build, and real-world collection of data is very expensive or prohibitive. So in those cases, if we have the right amount of data, we can do offline RL and we can make RL really workable in the real world. I think that is a very good direction, but there is also a huge number of open problems, and it still takes a lot of effort to develop better offline RL algorithms that produce more reliable policies to solve real-world problems. That does need a lot of effort, and it would be a very meaningful thing to make RL really workable in real life. Professor Zhan, on behalf of myself and our listeners, I want to thank you so much for sharing your time and your insight with the TalkRL community today. Thank you. Thank you. Notes and links for this episode are at talkrl.com. If you like this show, I need your support. You can help in a few ways. If you don't think we deserve five stars, let us know on Twitter what we could do better. TalkRL.
[ { "end": 13, "start": 0, "text": " This is TalkArail Podcast. All reinforcement learning, all the time." }, { "end": 20, "start": 13, "text": " Interviews at Brilliant folks across the world of RL. I'm your host, Rob and Chauhan." }, { "end": 27, "start": 20, "text": " Professor Jean is currently a research assistant professor at the Institute for AI Industry Research at Tsinghua University." }, { "end": 39, "start": 27, "text": " He received his PhD degree at Purdue University. Before joining Tsinghua University, Professor Jean worked as a researcher at Microsoft Research Asia and a data scientist at JD Technology." }, { "end": 46, "start": 39, "text": " At JD Technology, he led the research that uses offline RL to optimize real-world industrial systems. Welcome, Professor Jean." }, { "end": 47, "start": 46, "text": " Thank you, Rob." }, { "end": 51, "start": 47, "text": " And how do you describe your research interests?" }, { "end": 59, "start": 51, "text": " My current research interests mainly in reinforcement learning, especially offline reinforcement learning." }, { "end": 69, "start": 59, "text": " I also worked on the problems related to complex systems and also the data-driven methods in transportation applications." }, { "end": 75, "start": 69, "text": " But currently, my major research interests are Foucault Sound offline RL." }, { "end": 79, "start": 75, "text": " Can you say more about your RL work at JD Technology?" }, { "end": 93, "start": 79, "text": " Basically, I'm a research-oriented data scientist and I used to live a small team there and develop data-driven control optimization algorithms," }, { "end": 106, "start": 93, "text": " as well as some of the other AI algorithms for industrial optimization problems, such as this term-power plant optimization work," }, { "end": 116, "start": 106, "text": " which is where we use some of our reinforcement learning techniques to optimize the control strategy for industrial systems." }, { "end": 124, "start": 116, "text": " Can you tell us a bit more about the Institute for AI Industry Research at Tsinghua University? What are the main focus areas there?" }, { "end": 136, "start": 124, "text": " So Institute for AI Industry Research, or we also call it AIR, is basically a new research institute found in Tsinghua University last year." }, { "end": 145, "start": 136, "text": " And it's a new research institute and its mission is to conduct cutting-edge research that are transferable to industries, and we want to solve the real industrial problems." }, { "end": 160, "start": 145, "text": " And currently, the AI Industry Research Foucault Sound 3 research directions, the first is AI for transportation, such as autonomous driving, and also AI for healthcare, and also AI for Internet of Things." }, { "end": 163, "start": 160, "text": " So basically, I joined AIR July this year." }, { "end": 167, "start": 163, "text": " Great. Okay, let's move to the main paper we're going to discuss today." }, { "end": 177, "start": 167, "text": " So the main thing is deep thermal, combustion optimization for thermal power generating units using offline reinforcement learning. That is, first offer yourself at all." }, { "end": 191, "start": 177, "text": " Yeah, and also a corporate with several authors, which are Haoran Xu, Yuejiang, and Yu Shenghua, Xiang Yu Zhu, and Honglei Ying, and last one is Yu Zheng." }, { "end": 197, "start": 191, "text": " So I remember seeing this paper soon after it first came out. 
I'm not sure where that was, maybe on Twitter." }, { "end": 207, "start": 197, "text": " But then I met you at your poster session during ICML 2021 reinforcement learning for real-life workshop, and I was immediately struck by the scope of what you were doing here." }, { "end": 211, "start": 207, "text": " I don't think I completely appreciated that the first time I encountered the paper." }, { "end": 219, "start": 211, "text": " And so I'm super glad to have you on the show and to hear directly from you in more detail about this exciting work." }, { "end": 225, "start": 219, "text": " So can you start us off with the brief overview of what's going on in this paper?" }, { "end": 238, "start": 225, "text": " Sure. So this works with developing new data driven AI system called deep thermal, a thermal to optimize the control combustion efficiency of a real world thermal power generating units." }, { "end": 250, "start": 238, "text": " For those who are not familiar with the thermal power generating units or TVGU, basically it's the central facility in a power plant that convert the chemical energy of the coal to electric power." }, { "end": 266, "start": 250, "text": " And it is a very complex and large industrial system, which has a size almost like a ten-story building with lots of equipment and huge amount of sensors and very complicated operation dynamics." }, { "end": 282, "start": 266, "text": " So what we do is trying to optimize the combustion efficiency of a TVGU so that we can achieve more sufficient combustion and also use less coal and produce less emission and to generate the similar amount of electricity." }, { "end": 294, "start": 282, "text": " So basically the problem with dealing with is the high-dimensional continuous constraint control problem with a large, partially observed system." }, { "end": 302, "start": 294, "text": " And also it's like because the TVGU or the thermal power generating units is a mission critical system." }, { "end": 308, "start": 302, "text": " So basically we do not have the possibility to have interactions with the system during model training." }, { "end": 315, "start": 308, "text": " So we need to use the offline RL technique to solve this problem." }, { "end": 322, "start": 315, "text": " And also in this paper we develop a new model based offline RL algorithm called more to solve this problem." }, { "end": 335, "start": 322, "text": " And we develop this system and we deploy it in the real-world power plants and validated through a series of the field experiments." }, { "end": 349, "start": 335, "text": " Awesome. Okay. And then in the paper it mentions that this is claiming that this is the first study that applies offline RL in a large complex real-world industrial control scenario that the authors are aware of." }, { "end": 356, "start": 349, "text": " And I guess I might have earlier pointed back to D-Mind's HVAC system, the data-centered cooling system." }, { "end": 363, "start": 356, "text": " Though as we heard from Natasha Jakes at the beginning of this podcast that that system wasn't actually deployed." }, { "end": 368, "start": 363, "text": " But I think do you have any other comments on that on how unique this system is?" }, { "end": 380, "start": 368, "text": " Okay, so at least from the Google's new IPS 2018 paper, HVAC paper, I think their solution isn't really using offline RL." 
}, { "end": 389, "start": 380, "text": " But basically they learn a dynamic model from the data and also use the model predicted control NPC to perform the control." }, { "end": 401, "start": 389, "text": " So I will say it's a model based control problem and it is not using offline RL. I think using offline RL is what we do differently in this paper." }, { "end": 407, "start": 401, "text": " Okay. And then before we get into more details here, I want to talk about the fact that we're dealing with coal plants here." }, { "end": 414, "start": 407, "text": " And it's hard for me to discuss coal power plants without saying we know that coal is very dirty in terms of emissions and greenhouse gases." }, { "end": 419, "start": 414, "text": " And for that reason, I do wish that there were no coal power plants running." }, { "end": 427, "start": 419, "text": " But there are. And I assume that that's not your decision, but you're just here to optimize the plants as they stand. Is that right?" }, { "end": 441, "start": 427, "text": " Yeah. Yeah. I think I share most of the point with you because actually the fact is like the policy makers in China has made a very clear plan to cut down coal-fired power plants in maybe next 10 to 20 years." }, { "end": 456, "start": 441, "text": " And it is expected within 20 or 20 years and the electricity generation contribution of the coal power in China will reduce from about 65% now to less than 10% in the future." }, { "end": 473, "start": 456, "text": " So, but the fact is like to make this transition, you need to you need a huge amount of investment and infrastructure building. And so this cutting down coal-fired power plants will, this process will take quite some time." }, { "end": 478, "start": 473, "text": " I think our work here is mainly to make this transition process greener." }, { "end": 490, "start": 478, "text": " And so basically what we are doing is trying to do what we can to help the transition process when to using AI technique to make this process better." }, { "end": 500, "start": 490, "text": " And so apparently you were able to achieve improved performance on these plants. Can you talk about the performance and how you measured it and evaluated it?" }, { "end": 504, "start": 500, "text": " And you ran real world tests on your systems, right?" }, { "end": 513, "start": 504, "text": " Yes, it's right. So basically we perform a series of experiments both on the simulation environment as well as the real world thermal power plants." }, { "end": 521, "start": 513, "text": " And the simulation is mainly for multiple selection and validation. It's a pretty selection and some partial validation." }, { "end": 533, "start": 521, "text": " And actually we conduct a series and lots of real world experiments on real world power plants because the system is actually already deployed in four or five real thermal power plants right now." }, { "end": 540, "start": 533, "text": " And for each powerful one, actually we conduct a series of human move before and after tests." }, { "end": 552, "start": 540, "text": " So basically it's the experiments goes like this. And we first find the time slot. And with a relatively stable load, say about 300 megawatts." }, { "end": 558, "start": 552, "text": " And we recorded a combustion efficiency, the emission, as well as other key performance indicators before the test." 
}, { "end": 568, "start": 558, "text": " And during the experiments, we asked the human operators to follow the optimized control strategies provided by the RL agent to adjust the control of a TVGU." }, { "end": 579, "start": 568, "text": " And we record the data area to maybe five to ten minutes. And the experiments TVL last for about half an hour to maybe 1.5 or two hours." }, { "end": 587, "start": 579, "text": " And we have a record all these results and we report some of the experiment results at different load segment in a power plant." }, { "end": 599, "start": 587, "text": " And we are able to achieve about 3.3% to 0.5% improvements on the combustion efficiency. And also reduces the nitrogen oxide emission." }, { "end": 612, "start": 599, "text": " 0.3% sounds small, but actually it means a lot for thermal power plants because a typical modern TVGU usually operates with a combustion efficiency around 92 to 94%." }, { "end": 627, "start": 612, "text": " And it's already very high. So even if it's a 0.5% it's become very difficult. And if you can consistently achieve 0.5% increase in the combustion efficiency, you can help the tone power plants save about 3000 tons of coal a year." }, { "end": 638, "start": 627, "text": " Okay. And would you say that your agents behaved significantly differently than a human operator would or was it kind of mostly doing imitation in the end?" }, { "end": 651, "start": 638, "text": " Okay. I would say it depends because actually we did run some experiments and to see when the agents behave differently than a human when it became similarly." }, { "end": 660, "start": 651, "text": " And we observed that at some of the states with very relatively low combustion efficiency." }, { "end": 668, "start": 660, "text": " Actually the RO agents give very different strategy compared to the human operator for example, you want power plants." }, { "end": 680, "start": 668, "text": " And we find some of the several operating conditions. The regular human conscious strategies are not very good. For example, they will inject too much coal primary air." }, { "end": 697, "start": 680, "text": " They will use unbalanced secondary air from both sides of the burner and which makes the fireball in the burner not forming the center. And during these conditions and the RO agents actually give very different control strategy." }, { "end": 716, "start": 697, "text": " And actually we think it's much more reasonable than the original human strategies. And for other cases when the system operates well. And I think the RO agents perform very similarly compared to a human operator." }, { "end": 732, "start": 716, "text": " And then is there any way to know what how well a perfect controller could do in this type of problem? Like what do you think is the very peak efficiency that could be achieved if the RL controller was somehow perfect? Is there any way to know that?" }, { "end": 753, "start": 732, "text": " Okay, so I think this is a very problem because you know it's the industry. The energy industry actually investigated the combustion optimization problem for decades. And actually right now I think there are traditional conventional approach has reached bottleneck because the problems are very difficult." }, { "end": 766, "start": 753, "text": " And the perfect controller or perfect control strategy are very hard to find because the situation and the environment is very complex." 
}, { "end": 789, "start": 766, "text": " So from the very high level view I will say good controller is at least for this problem is to find the perfect proportion of the hot and cold wind into the burner and also accurately adjust a dozen of the secondary air valves as well as several other control variables." }, { "end": 808, "start": 789, "text": " And it will be very difficult for human handcraft to human for to handcraft such such a rule based system. And I think that's why we we choose RL to solve this problem because this very complex system." }, { "end": 823, "start": 808, "text": " And also most of cases you are facing a very black box and to optimize you is very hard to for a very conventional rule based or conventional control strategy control algorithms to solve this problem." }, { "end": 828, "start": 823, "text": " And I think AI is probably a right direction for this kind of hard problems." }, { "end": 838, "start": 828, "text": " Can you tell us about the deep thermal project overall like how was this project started whose idea was this and kind of who was involved?" }, { "end": 862, "start": 838, "text": " So the project was actually started when we are a group of the researchers at Microsoft Research Asia and later we moved to GD technology. So when we are back in MSRA so a senior manager at China Energy Group which is very large state owned energy company in China contacted us and introduced a combustion optimization problem." }, { "end": 873, "start": 862, "text": " And they say they have spent for years for solving this problem and they have faced some bottleneck and they are asking us to see if you can use some AI methods to solve this." }, { "end": 886, "start": 873, "text": " And at the beginning we find this is a very interesting problem because it is challenging and also it solves a real world problem that can have a lot of social and environmental impact." }, { "end": 898, "start": 886, "text": " And lastly because we can get a lot of real world industrial operational data and we have a place to test our model in a real world TV to you." }, { "end": 906, "start": 898, "text": " So it's a very good chance. So we brought this project to GD technology and it should actually protect for this." }, { "end": 926, "start": 906, "text": " So the research and development of this project started in the beginning of the 2018 and we almost spent a year to finish the first prototype prototype and to test the first generation of our model in the C-City Energy landing power plant in March 2019." }, { "end": 941, "start": 926, "text": " And later we also spent a year to keep improving the model performance and developing the better solution. And right now the algorithm has become a product of the company and we have deployed in five different power plants in China." }, { "end": 955, "start": 941, "text": " And then was this the case where you applied this more algorithm that you developed that you developed in the past to this problem or did you develop more specifically for the TGPU task?" }, { "end": 971, "start": 955, "text": " So basically more is the central component of our AI system deep terminal. And so for more itself is a general purpose of line our algorithm for the constraint of the decision process problem." }, { "end": 990, "start": 971, "text": " And it is basically is a new algorithm developed in this research and we use a model based offline our framework. 
And but of course there is a lot of engineering components in deep terminal such as we also did a lot of feature engineering especially for the especially for the TVG optimization task." }, { "end": 1015, "start": 990, "text": " And also added a lot of hard cost of constraints to the output of the RL policy output our policy optimized policies. So more is actually the previous version of the offline our already using deep terminal right now we have a newer version and probably more robust and we might write another paper for this." }, { "end": 1032, "start": 1015, "text": " Okay and I look forward to reading that this paper mentions over 10,000 sensors per TGPU so this a lot of sensors that's a huge observation space can you tell us more about these sensors and with that many sensors you have to worry about faulty or missing missing readings as well." }, { "end": 1061, "start": 1032, "text": " Yeah, yeah, that's correct. So basically it's like the TPU is a very large infrastructure and it is composed of a lot of different equipment and a lot of sensors. And many of the sensors are actually monitoring the temperature of different parts of the TPU for example the the temperature of the air the wind and the water the cold particles as well as the surface of the boilers and water pipes water and steam pipes." }, { "end": 1090, "start": 1061, "text": " And there is also lots of sensors monitoring the air and water pressure at different locations and as well as the volume of the air the water and vaporize the power as the colds are also monitored and there's some other states like the concentration of the emissions and the and the current load of the of the TVGU such as that's not going to be a problem." }, { "end": 1104, "start": 1090, "text": " Such as such basically it's a lot of sensors and in many cases for example a certain location in a burner there might be several different temperatures sensors." }, { "end": 1111, "start": 1104, "text": " So what we do is like we have performed a series of data cleaning filtering and feature engineering to process raw data." }, { "end": 1134, "start": 1111, "text": " For example for many of the state such as temperature and they might have multiple sensors for the similar location and we just filtered 40 ones and averaged the rest and the rest of the data into maybe 20 to 30 seconds, 70 intervals." }, { "end": 1155, "start": 1134, "text": " And this actually helps to reduce the observation noise and some 40 readings from the data. So basically the whole process is first to perform a series of the odd light detection and data filtering and second is some data engineering techniques to make the sensor readings more stable and accurate." }, { "end": 1161, "start": 1155, "text": " And these data are used in the training and online inference for the RL agent." }, { "end": 1171, "start": 1161, "text": " And then talking about the action space here the paper mentions 70 to 100 continuous control variables. It's a lot of a lot of actions." }, { "end": 1176, "start": 1171, "text": " Can you can you tell us more about what types of actions are used here?" }, { "end": 1194, "start": 1176, "text": " Okay so basically most of the actions are the adjustment of the valves and the bathos in the TBGU. For example you have a lot of valves for wind, for water and for cold particles. And also you have the bathos and valves for some of the wind blowers." 
}, { "end": 1214, "start": 1194, "text": " And then you have the different types of the boiler and burner and the equipment. So the actions actually ranges from the 70 to 100 and all these are continuous variables." }, { "end": 1227, "start": 1214, "text": " Some of the actions actually share the same operational mode. For example maybe two or three sessions. Some actions might have the take the same value during the operation. So these actions are merged." }, { "end": 1241, "start": 1227, "text": " So finally so we merged some of the key actions and there are about 30 to 50 continuous actions actually goes to the RL model. And it differs from four different types of the TBGU." }, { "end": 1258, "start": 1241, "text": " And you had a human in the loop here interpreting the actions output by your agent. Is that right? And was that did that mitigate safety concerns and was that ever what is it was that ever needed to mitigate like did any safety concerns arise?" }, { "end": 1271, "start": 1258, "text": " Yeah, it surely mitigate some safety concerns, but because our work is actually a really new thing for the energy industry in China, especially using a technique like RL for such a mission critical system control." }, { "end": 1279, "start": 1271, "text": " So at the current stage and no way in the energy industry actually there is applying such a new technology without any condition." }, { "end": 1294, "start": 1279, "text": " So our beginning step is actually to put a human in the loop and we provide the optimized recommend action policy control strategy or recommendations to the human they operate on the actual systems." }, { "end": 1317, "start": 1294, "text": " I think it's basically the first step towards building such a new technology to relatively traditional industry, but I think it makes a very good case to show that actually AI can be used to such a mission critical decision making applications." }, { "end": 1327, "start": 1317, "text": " And but it's still of course it still has a long way to go and we need to have built more robust AI algorithms and the four truly close looped control." }, { "end": 1336, "start": 1327, "text": " Can you say a little bit more about how your more algorithm works in a bit more detail? What are the central components of this algorithm?" }, { "end": 1352, "start": 1336, "text": " Okay, so the so the more is basically a model based offline RL over them. So we choose the model based architecture initially because really so we have the problem is a large scale problem." }, { "end": 1356, "start": 1352, "text": " We have what we are dealing with a very high-dimensional state space and action space." }, { "end": 1369, "start": 1356, "text": " And but our data is meaning while two years operational data from our power plants so compared to the size of the optimizing problems the data is really not too much." }, { "end": 1385, "start": 1369, "text": " So we first built a data driven simulator from the from this data and they use this simulator as a model to actually generate some imaginary routes and generate some simulated data to facilitate the RL" }, { "end": 1398, "start": 1385, "text": " or in training and but since we are dealing with offline or algorithm and what we the only reliable information is actually the data and the simulator is also learned from data so it's not very reliable." 
}, { "end": 1412, "start": 1398, "text": " So the more actually the key thing it does is actually to tackle the challenge between the offline policy learning on the constraints and but with the imperfect simulator." }, { "end": 1425, "start": 1412, "text": " So it tackles two problems here first is like we were solving a constant constraint optimization problems and we are we need to make the optimized policy to satisfy the safety constraints." }, { "end": 1439, "start": 1425, "text": " So in essence we introduce is an additional cost critique which is like the cost cost q function to model an a for safety constraint satisfaction of this optimization problems." }, { "end": 1466, "start": 1439, "text": " And for this problem more actually uses the clip double q technique and by using two reward creative q functions to penalize uncertainty in the reward in the reward q function to elevate the estimation issue that commonly occurring the offline RL and for the cost critique q function and we perform q evaluation to update this value." }, { "end": 1491, "start": 1466, "text": " And all this policy optimization is performed offline and we carefully combine the real and simulator data and that is the global picture and to address the offline learning basically you need to tackle about the problem of the potential problematic data introduced by the imperfect simulator." }, { "end": 1504, "start": 1491, "text": " And so what we do is actually we introduce a new restrictive exploration strategy to free utilize the general visibility of the simulator but we at the same time we are not fully trust the data and the simulator." }, { "end": 1520, "start": 1504, "text": " So we perform a series of the data filtering and also the data processing similarly simulated data processing to actually to make the simulator samples reliable during the training." }, { "end": 1549, "start": 1520, "text": " So the restrictive exploration strategy actually contains two steps so we first perform a model sensitivity based filtering to filter the simulator samples not the model is not certain lack of prediction and the way measured model sensitivity by injecting a noise in the model input and computed rise of the output perturbations and if the model has a very large sensitivity at some simulator samples then this is samples a filter out." }, { "end": 1568, "start": 1549, "text": " So the remaining samples are further evaluated based on the probability in the data distribution and we split the data into positive samples and if they belong to the high density region of the data and the consider them as negative samples if they belong to the load density region of the data or out of the solution data." }, { "end": 1582, "start": 1568, "text": " And these negative data and the reward of this negative negative negative samples are penalized and to avoid potential exploration error during the RL training." }, { "end": 1603, "start": 1582, "text": " And so finally we constructed a very special local buffer to combine both real positive and negative simple simulator samples in the in the repay buffer and we use that to for for the offline training and we called hybrid hybrid training and that is the overall picture." }, { "end": 1611, "start": 1603, "text": " So basically it's solving a constraint optimizing problems plus also offline learning process." }, { "end": 1621, "start": 1611, "text": " How do the constraints work? Like what are the what types of safety constraints are you concerned with? 
Is it maximum temperatures or or or a variety of things?" }, { "end": 1639, "start": 1621, "text": " Yeah, it's basically a variety of things. So we have a lot of we have some some some temperature safety constraints. For example, it cannot exceed some of the safety threshold for the temperature for certain part of the boiler and also there is some pressure constraints." }, { "end": 1657, "start": 1639, "text": " For example, the internal pressure of the of the of the burner should be kept negative unless it might have the chance to for explosion and also it's like the the load generated by the the TVGU must satisfy the demand load." }, { "end": 1673, "start": 1657, "text": " So basically it's there is a naive solution that you just reduce the coal coal used and reduce the water and so the amount of the coal consumption will job but you will not get enough electricity generated." }, { "end": 1677, "start": 1673, "text": " So it's like a series constraint like this." }, { "end": 1684, "start": 1677, "text": " And can you tell us about the reward in in this task? What is your agent trying to maximize?" }, { "end": 1701, "start": 1684, "text": " Okay, so the reward for this problem is a prop is very straightforward. So basically it's the composition of the improvement in the combustion efficiency and also the reduction in the nitrogen oxide emission." }, { "end": 1726, "start": 1701, "text": " And the combustion efficiency itself is well defined and measured quantity and energy industry. So we have to use that. And also it's like we measured the decrease of the image nitrogen oxide emission and it is also paired with the combustion efficiency increase in combustion efficiency." }, { "end": 1737, "start": 1726, "text": " So basically we are so what we are trying to do is we want to maximize the combustion efficiency while control or reduce the nitrogen oxide emission." }, { "end": 1749, "start": 1737, "text": " How do you approach exploration here? I gather you don't you try not to do exploration in the in the real world but do you try to explore in the simulator how does that is exploration work?" }, { "end": 1765, "start": 1749, "text": " Okay, so basically it's like we are trying to be do minimum exploration because it's a offline RL problem. And so basically it's like from the data you really do not do any exploration and you many do exploitation." }, { "end": 1781, "start": 1765, "text": " And from the simulator or from the model actually we do some we do some some some rules from using the model. But it's like we we we we actually filter out a lot of data that out of data distribution." }, { "end": 1790, "start": 1781, "text": " So I will say it's like the exploration is not really a main thing in this work and the million about exploitation." }, { "end": 1806, "start": 1790, "text": " Is there is there any hope of or did you look at or do you maybe plan to look at transfer learning to generalize across different plants or is that maybe a more distant goal or is it is it really about treating each plant as its own task." }, { "end": 1826, "start": 1806, "text": " Okay, so basically we are treating each plant as a different task because usually in different programs the actual configuration and equipment of a TVG is very different and for example they might have different boilers and they have different kind of that borders and they have different winblowers." 
}, { "end": 1840, "start": 1826, "text": " So basically there is really not not a lot of things that can transfer and also even in some cases and the sensors are deployed at different locations of the of the TVG." }, { "end": 1846, "start": 1840, "text": " So it's like if you're trying to do transfer learning so it actually become a more challenging task." }, { "end": 1859, "start": 1846, "text": " So what we do is actually we're trying to extract the maximum information from the single power plant and just to use not information to train our agent offline." }, { "end": 1867, "start": 1859, "text": " And I was going to ask if you consider the problem solved but you you mentioned that you're working already on the next version." }, { "end": 1873, "start": 1867, "text": " Can you see is there anything with us about the next version or would you prefer that we wait wait for the next paper." }, { "end": 1890, "start": 1873, "text": " Yeah, I will just have some brief discussion about this. So I do not think this is a well solved problem and I will say it's ongoing work and we did achieve some improvements in human loop control but our final goal is actually to do the close loop control." }, { "end": 1919, "start": 1890, "text": " And so what we are doing and what we are trying to solve right now is actually several improvements for similar offline algorithms for example we want to do simple efficient offline RL for example because the data in power plant might not be that much and we want to use the small small amount data to still train very reliable." }, { "end": 1931, "start": 1919, "text": " And also is like the general ability is a place of key issue in offline RL and if we want to really have a very good performance policy." }, { "end": 1947, "start": 1931, "text": " So you need to go beyond the current data set. So that also requires our agents to have better general ability and also is like things where developing the deploy the model in the real reward power plant." }, { "end": 1967, "start": 1947, "text": " We might get a chance to collect the new data after they they use the policy we provided and how to use this new data and combine with the old offline RL model and to do offline to online or deploy efficient RL is also another direction we are looking at." }, { "end": 1978, "start": 1967, "text": " And lastly is like the robustness so because a lot of sensors in the in the in an industrial equipment might have some noises." }, { "end": 1987, "start": 1978, "text": " So you need to make sure your policy is very robust to the noise input and that is also what we do in the next version." }, { "end": 2000, "start": 1987, "text": " So the way we introduce some of the industrial training during the RL policy learning to make the policy more robust and we did see it produces more stable optimized policies." }, { "end": 2010, "start": 2000, "text": " Okay, I look I really look forward to to reading all about your next version as this one's already massively impressive accomplishment here." }, { "end": 2018, "start": 2010, "text": " But going forward for you, what does the future look like and I guess you're going to continue working in this area of industrial RL?" }, { "end": 2037, "start": 2018, "text": " Yeah, yeah, and so basically I shifted from the GD technology to the to the same way, university and but I'm still collaborating with my previous team on the using the offline RL or other data driven decision making methods to solve industrial control problems." 
}, { "end": 2053, "start": 2037, "text": " And back in team high university, so basically I will I'm also looking at applying offline RL to more diverse areas such as autonomous driving and health care recommendations such as these kinds of problems." }, { "end": 2073, "start": 2053, "text": " But I think it's like the mid focus currently I am working on is actually to see if we can develop better offline RL with example with better simple efficiency and better genus ability that has the potential to better solve the real world problems." }, { "end": 2086, "start": 2073, "text": " So you're in maybe a rare position where you have worked on both sides, both in the US at Purdue and also Microsoft and working in China and Singapore." }, { "end": 2094, "start": 2086, "text": " Could you do you have any comments on how approach to AI may differ between US and the West and China?" }, { "end": 2114, "start": 2094, "text": " Yeah, I think the AI research in US has a very good environment to encourage innovation and also where you can do a lot of theoretical and also some innovations in the developing new methods to solve diverse aspect of problems." }, { "end": 2143, "start": 2114, "text": " But in China, I think the one advantage here is like first is it's AI is actually it's like considered highly in the country and also has a lot of investment and also it's like it's like I think a lot of conventional traditional industry such as energy and other industries are more willing to try some of the AI solutions." }, { "end": 2153, "start": 2143, "text": " In the US and maybe the new applications are many carried out by the big tech companies but in China." }, { "end": 2161, "start": 2153, "text": " So many of the people from the other industries are willing to reach out and to try something new." }, { "end": 2174, "start": 2161, "text": " I think it's you have some more flexibility to do some of the things that are interesting." }, { "end": 2196, "start": 2174, "text": " In the US and think you have a lot of regulations and maybe you can do a lot of things inside the big company but to have to deploy some of the experimental stuff research without really a lot of support from the to develop something new I would say." }, { "end": 2200, "start": 2196, "text": " So it might be not as flexible as in China." }, { "end": 2208, "start": 2200, "text": " So besides your own work, are there other things happening in RL that you find quite interesting these days that you'd want to comment on?" }, { "end": 2213, "start": 2208, "text": " Okay, so I think very because I'm really working on the offline or problems." }, { "end": 2233, "start": 2213, "text": " So I think one of the key issues I'm looking at is to actually see how the general stability of offline or can be improved because offline or you are basically solving a very restrictive problem with the very under very restrictive setting." }, { "end": 2246, "start": 2233, "text": " And you cannot go beyond the data to actually exploit exploit on the Arnold part but it's like to be for a flyer to be really useful in the real world deployment." }, { "end": 2262, "start": 2246, "text": " You need to actually to make reasonable inference and to maybe use the more general ability to actually predict or infer on something you don't know whether it's Arnold data or it's the situation for an unknown." 
}, { "end": 2279, "start": 2262, "text": " So I think it's one thing I think it's very important and what's looking is to really to improve the general ability of the oral algorithm as well as the general ability of the neural models using the RL." }, { "end": 2291, "start": 2279, "text": " And I think this is a very important and really interesting and also it's like I think combining also in reasoning in the RL is also a very interesting direction." }, { "end": 2304, "start": 2291, "text": " Because most of the oral algorithm we are carrying out today is basically based on the Q learning. So basically you are performing the inference on the Q table or Q function." }, { "end": 2316, "start": 2304, "text": " And that is a lot of times not enough to solve complex problems because most of the real world problems are not the single step mark of decision process problems." }, { "end": 2324, "start": 2316, "text": " They might have the multi-step interactions and some other complex dynamics." }, { "end": 2340, "start": 2324, "text": " And I think if you if one can just perform very good cause reasoning in the oral algorithm, I think that is can greatly improve the performance of the of the algorithm and also you can really improve the general ability as well as the simple efficiency." }, { "end": 2360, "start": 2340, "text": " And the last thing is like actually so what I'm trying to even working on today right now is actually to combine contrast really learning into the offline RL to enhance the simple efficiency and also to make the to make the most information out of the limited data." }, { "end": 2366, "start": 2360, "text": " Is there anything else that you'd want to mention today or that I that I should have asked you about today." }, { "end": 2376, "start": 2366, "text": " Yeah, so maybe I just at one point is like I think offline is a very promise area and we can really use it to solve a lot of things." }, { "end": 2391, "start": 2376, "text": " And for example is like a lot of world real world problems actually do not have a simulator perfect simulator always impossible to build a simulator and the real world collection of data is very expensive or prohibitive." }, { "end": 2404, "start": 2391, "text": " So I think it's in that case is like if we have the right amount of data and we can do offline RL and we can make the RL really workable in the real world." }, { "end": 2420, "start": 2404, "text": " And I think that is a very good direction and but also it's like we have a lot of huge amount of open questions for open problems for a lot of and I think it still works a lot of effort in developing better offline RL algorithms to make." }, { "end": 2436, "start": 2420, "text": " More to produce more reliable policies to solve the real world problems and I think that let's do need a lot of effort and also it's like a better to be a very meaningful thing to make RL really workable in real life." }, { "end": 2445, "start": 2436, "text": " Professor Jean on behalf of myself and our listeners I want to thank you so much for sharing your time and your insight with the talk oral community today. Thank you." }, { "end": 2452, "start": 2445, "text": " Thank you." }, { "end": 2479, "start": 2452, "text": " Notes and links for this episode are at talkrl.com. If you like this show I need your support you can help in a few ways." }, { "end": 2489, "start": 2479, "text": " If you don't think we deserve five stars let us know on Twitter what we could do better." 
}, { "end": 2517, "start": 2489, "text": " TalkRL." } ]
Eugene Vinitsky
Eugene Vinitsky of UC Berkeley on social norms and sanctions, traffic simulation, mixed-autonomy traffic, and more!
https://media.transistor…137.mp3?src=site
This is the TalkRL Podcast. All reinforcement learning, all the time. Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan. Eugene Vinitsky is a PhD student at UC Berkeley, advised by Alexandre Bayen, and he has interned at Tesla and DeepMind. Thanks so much for joining us, Eugene. I'm really psyched to be here. I love this podcast, so yeah, super happy. Awesome, thanks so much. How do you like to describe what you do in your focus area? Yeah, so my PhD has mostly focused on transportation problems and applications of reinforcement learning to problems in transportation, particularly as it relates to designing new cruise controllers that improve some kind of performance metric on the highway, and then some amount of cooperative behavior between those cruise controllers. Then, as time has gone on and I've tried to push the boundaries of this field, I've delved a bit into multi-agent reinforcement learning in terms of analyzing some of the algorithms. The last piece, and again this ties back to the cruise control stuff, is thinking about robustness in reinforcement learning, because we designed these cruise controllers and we want to put them on the highway this summer, so we now have this concurrent nightmare of trying to think about whether RL methods are robust and how to make them robust and so on. So I guess there's three pieces there. So let's talk about them. We're going to start with one of your more recent papers, the social norms paper: A Learning Agent that Acquires Social Norms from Public Sanctions in Decentralized Multi-Agent Settings, first author yourself et al. So can you give us a brief overview of this paper? This was some joint work with some lovely folks at DeepMind, and they have built out this large set of multi-agent benchmarks that test features of cooperation and competition, all these kinds of game-theoretic questions, but in extended settings, which they call sequential social dilemmas. So cooperation and competition across time in these kinds of complex grid worlds. And one of the key sets of problems in that benchmark is overcoming collective action problems. So maybe we have some common-pool resource, like a fishery or something like that, and we want to optimize its usage. Everyone selfishly wants to use it as much as possible, but if anyone overuses it, it depletes, and then everyone is worse off than if they had cooperated. I just want to point out that on our very first episode we talked about this quite a bit with Natasha Jaques, including some of Joel Leibo's papers on this topic, so I'm really happy to hear more about it. Okay, yeah, I enjoyed that episode. Natasha is just a fantastic researcher. So we have this set of common resource dilemmas, and we want to develop agents that manage them effectively. There's some amount of evidence that humans can come up with norms and solutions that allow them to manage these resources; there's some great work by Elinor Ostrom on this topic, and others. But if you take a bunch of agents and you train them using multi-agent reinforcement learning, and you do this in a totally decentralized fashion where each agent is optimizing its own reward function, basically what happens in almost every trial is that the agents will converge on total free riding.
So to maybe put a concrete point on it, you have this setting where agents are running around a grid world trying to eat as many berries as possible, and they have a preference over which type of berries they eat. So there are, say, red berries and green berries and blue berries, and some agents prefer red and some prefer green. And by prefer, I mean that they get more reward from eating berries of that color. What you'll find if you run straightforward, decentralized multi-agent reinforcement learning is that they will all just run around eating berries. One key feature of this environment is that to maximize the number of berries, you need to go around and recolor these planting sites. As more of the planting sites in the map become the same color, the number of berries generated of that color increases linearly, so everyone is best off if the map is entirely colored one color. But instead of converging on a particular color, splitting the work, and recoloring the map so that it's entirely one color, the agents just run around eating berries, don't do any coloring work, and are all significantly worse off than if, at the beginning of the episode, they had split up the work and colored the map. So they're failing to converge to cooperative behavior, and free riding is dominant. And so we were interested in coming up with other decentralized methods that are capable of overcoming these problems and leading to cooperative solutions rather than free riding. And I want to focus on the word decentralized here. You could use some kind of centralized method: for example, you could have all the agents share a reward function, so whenever another agent eats, I also get some reward. That's not the sort of solution we're looking for, because we know that human beings solve these dilemmas without being able to share each other's reward functions. We converge on cooperative solutions without all of us being trained by the same algorithm. So we want to build algorithms that can do this in a fully decentralized fashion. We're partly motivated by the fact that, in video games, it's perfectly sensible to have all the agents controlled by the same shared algorithm. But when you think about all these new autonomous deployments that might be happening in the future, there are all these upcoming decentralized autonomous deployments where you're going to have to do this in a decentralized fashion. There are likely to be multiple autonomous car companies, each with deployed products, and they are not going to agree to share their policies with each other or share their reward functions and things like that. So there's a real, I think, both technological and pure research interest in designing decentralized algorithms that can arrive at this kind of cooperative behavior. And so we took some amount of inspiration from the literature on learning social norms and how social norms work in human systems. Basically, what we said is: a social norm is a shared classifier on events that are approved and disapproved. This is a concept really inspired by work by Gillian Hadfield. So we said, okay, a social norm is a classifier that agrees on what's approved and disapproved. So let's just do that. Let's train a classifier: let's declare some set of events to be approved and disapproved.
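A toy payoff model can make the collective-action structure described above concrete. The sketch below is not from the paper; the function name, parameter values, and the assumption of four agents are invented purely for illustration, under the assumption that berry density grows linearly with total coloring time while coloring costs an agent its own eating time.

```python
def episode_return(my_coloring_time, others_coloring_time, total_time=100.0,
                   regrowth_per_colored_second=0.005, eat_rate=1.0):
    """Toy payoff: berries I eat = my eating time * berry density, where density
    grows linearly with the total time *anyone* spent recoloring the map."""
    density = 1.0 + regrowth_per_colored_second * (my_coloring_time + others_coloring_time)
    return (total_time - my_coloring_time) * eat_rate * density

# With three other agents each coloring for 25 steps, free-riding is individually best...
print(episode_return(0.0, 75.0))    # 137.5 -- I eat the whole time
print(episode_return(25.0, 75.0))   # 112.5 -- I pitch in: worse for me
# ...yet everyone coloring beats everyone free-riding:
print(episode_return(0.0, 0.0))     # 100.0 -- all four agents free-ride
print(episode_return(25.0, 75.0))   # 112.5 -- all four agents color 25 each
```

Under these made-up numbers, free riding is the individually dominant choice even though mutual coloring is better for everyone, which is the failure mode decentralized training converges to.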
In these environments, there's a zapping action that you can use to zap other agents, and that freezes them in place; if they get zapped twice, they get some penalty reward. So let's call that a disapproved event and everything else an approved event, and we'll train a classifier that takes in some number of frames leading up to an event and predicts approved or disapproved. Then let's create some internal motivation to align with that classifier. So we add a small amount of pseudo-reward for sanctioning in agreement with your internal classifier. And then let's see what happens. What we found is that when you add that small amount of pseudo-reward for following those classifiers, we got a lot more convergence to cooperative behavior, and free riding basically disappeared from the outcomes in the environment. So the idea is that we're rating specific behaviors, is that right? We're not rating the reputation of individual agents themselves, is that the case? Yeah, there's not even really the ability to identify individual agents. These agents are little blocks walking around on a grid, and we can't distinguish one agent from another. We're just saying: we saw an agent do something; would the other agents in the scene have zapped it? And if our classifier says that they would have, then we're going to get a small amount of reward for also zapping, i.e. disapproving. And if they wouldn't have, but we still zap anyway (for example, we might zap them so that they freeze and we can steal their berry), then, if other agents would not have zapped in that scenario, we'll be penalized for zapping or punishing. So we're rating a specific behavior, and there's no notion of individual agents whatsoever. Cool. Okay. And then I guess there's no notion of explicit trust between the agents, or I guess you would say, if the agents trust each other's sanctioning data, then there is that implicit trust between the agents. How would you describe the trust scenario here? Yeah, so the sanctions are kind of ground-truth events. To take a concrete example, you might imagine that every time an autonomous car gets honked at, that honking event is stored in some database that everyone is able to access and use to develop their own model of what other agents in the scene approve or disapprove of. So I guess the trust component comes in through the fact that it's not explicitly modeled, but trust is knowing that the other agents in the scene are also behaving in the appropriate way. So if I'm spending time appropriately coloring the planting sites so that we all get increased reward in the future, the other agents in the scene are also doing so. Trust kind of operates in that manner, but there's no explicit representation of trust. Cool. So I love your example of using honks as the labels; that would be a sanctioning event. So is it true you'd have only positive labels? Do you assume that when honks are not happening, everything is kind of going well? Yeah, basically every non-honk event is an implicit approval. So if nobody honked, that was probably fine behavior. Yeah, so that's where your positive and negative labels come from.
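As a minimal sketch of the mechanism just described (a shared classifier trained on public sanction events, plus a small pseudo-reward for sanctioning in agreement with it), the code below uses a hand-rolled logistic regression and an assumed shaping coefficient `alpha`; the class and function names are invented for illustration and are not the paper's implementation.

```python
import numpy as np

class SanctionClassifier:
    """Logistic-regression classifier: would the group sanction this behavior?"""
    def __init__(self, obs_dim, lr=0.1):
        self.w = np.zeros(obs_dim)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, obs):
        # obs: feature vector summarizing the frames leading up to an event
        return 1.0 / (1.0 + np.exp(-(obs @ self.w + self.b)))

    def update(self, obs_batch, sanctioned_batch):
        # obs_batch: (N, obs_dim) observed behaviors
        # sanctioned_batch: (N,) 1 if some agent zapped/honked at it, else 0
        p = self.predict_proba(obs_batch)
        self.w -= self.lr * obs_batch.T @ (p - sanctioned_batch) / len(sanctioned_batch)
        self.b -= self.lr * np.mean(p - sanctioned_batch)

def shaped_reward(env_reward, obs_of_other, i_sanctioned, clf, alpha=0.1):
    """Add a small pseudo-reward for sanctioning in agreement with the learned norm."""
    group_would_sanction = clf.predict_proba(obs_of_other) > 0.5
    agrees = (i_sanctioned == group_would_sanction)
    return env_reward + (alpha if agrees else -alpha)
```

The key design point is that the classifier is fit only to publicly observable sanction events, so every agent can learn its own copy without any shared training or shared reward.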
Would you consider this kind of a type of centralization, in a sense? Because they're not being trained together, but their reward is coupled in that way, and I guess you have the centralized database of sanctioning events. Is that right? Is that how the centralization comes in, so that you could kind of think of this as a central system? Right, so there are a couple of pieces in which it's centralized. One, everyone in the system has the same reward function: they have the same incentive to obey the group norm, the same incentive to not disobey the group norm. So that's one piece of centralization. And yes, there is this shared buffer of sanctioning events that we can all access. There is an ablation in the paper where we ask the question: okay, can I learn what the group norm is just from my own observations? So I move around in the world and occasionally I get sanctioned. And if we pretend for a second that I could place myself into the shoes of the person who sanctioned me, which is maybe technically feasible and certainly feasible in these grid worlds, then I can see what I did that caused them to sanction me, and I can learn on those behaviors. And then I could take every set of events where I wasn't sanctioned and view those as approved. So you could also do this in a non-centralized way, and we found that that actually works pretty well too in these grid world examples. Cool. Okay. So I guess you see this type of technique potentially being useful in the real world, in the transportation setting as you described? You know, there's always a big gap between these methods in a paper and something that's readily useful. But I was very enamored of this idea of a public database of sanctioning and non-sanctioning events as a way of discovering what local social norms are in driving settings and allowing agents to adapt to them. Certainly from city to city there's huge variation in how people drive and what the norms are around that. This isn't something the techniques here really cover, but you could imagine that these sanctioning events are something these companies would be willing to share, since they're incentivized to have other people obey what their drivers think was bad behavior. And so you could maybe imagine constructing these databases and then training agents to obey local social norms in different settings. I think as a driver, it would be so satisfying to be able to label bad behavior on the road, because when someone cuts me off or makes a dangerous maneuver, there's pretty much nothing I can do. I mean, I can honk at them and they ignore it. Are you going to call the police for someone running a red light? There's really no recourse. But just to have some way to register that, I would just feel better. I would feel so much better to be able to say: I sanctioned that guy. Yeah, I mean, you'd honk at them and then, in the future, there's going to be some autonomous car that's trained to obey your preferences in some way. Though when I say that, I realize there is this issue of mechanism gaming. You have a very particular preference over how people drive; if you're honking constantly, you're going to overwhelm the buffer with your preferences.
This is not a mechanism that can't be gamed. It's still potentially interesting, and you can maybe think about ways to prevent this type of gaming. For sure. And your comment about different driving styles in different cities is pretty interesting. I guess the whole idea of social norms, to my mind, has really been called into question over the last few years, along with some of our assumptions about them. Like, how the president of the US is supposed to act was some kind of norm before, and a lot of these norms are just going out the window, and then people are asking questions like, well, who gets to define what a social norm is. But I like that here the social norm is just defined by, sort of, on average what people sanction. Is that right? Yeah, I mean, that is explicitly what it is. The minute that these agents approve, punish, and don't punish in accord with what the other agents also agree on, that's a social norm. It may be a bad one, it may be a good one, it may be totally neutral and have no effect. But if we're all approving and disapproving in the same way, which is what this reward function incentivizes you to do, then you have a social norm that's operative. That's what a social norm is: a belief that others are going to treat me in a particular way if I act a certain way. We're just trying to give that to our agents here and hope that it leads to some better outcomes. Awesome. So any follow-up work planned in this direction? Yeah, a little bit. One thing we don't really touch on in this paper that I think is interesting: the setup, as I just said, is that once agents start obeying or disobeying in accord with the classifier, if all the classifiers are relatively similar, then you have a social norm. But there's no operative mechanism here to distinguish between good social norms and bad social norms. So for example, if everyone is free riding and I punish any agent that tries to do something good, if that's our consensus, like any time an agent tries to not free ride it gets punished, then that's also a social norm in this context. It's a terrible one, but it is a social norm. And so one thing we don't touch on in this work is how we get this mechanism to select good social norms rather than bad social norms. One kind of surface answer is you can have a group selection process: now there are multiple sets of agents that are learning, and only the ones that have higher reward continue to survive, in some kind of evolutionary sense, and that's going to select for good social norms over bad ones. But I think there's an interesting question about how we can shape the learning dynamics so that they preferentially select good social norms in the first place over bad ones like everyone free riding. Okay, it's coming to mind that there are some drivers who are just so impatient that they'll honk if I'm slowing down for somebody, which I think is a good thing I should be doing, but they're still honking at me, so they're sanctioning me for what I think of as good behavior. So maybe that's an example of what you're talking about. Yeah, I mean, you get sanctioned by someone for slowing down, and maybe you learn to pick up that behavior too.
And in time we all live in this consensus norm where everyone is driving really dangerously and anyone who drives safely is honked at. And that's just the consensus. It would be terrible, but that could be the emergent social norm; maybe in some cities it already is. So you have a number of papers on traffic simulation. Can you help us get oriented with the general idea here? What kinds of general problems are you looking to tackle with simulation? So there's this really exciting, under-the-radar thing that has happened, which is that while we all talked about full self-driving and things like that, our highways started to become partially automated. There are lots of level-two cruise controllers that do lane keeping and distance keeping. You still have to pay attention and keep your hands on the wheel, but they're doing this automated driving. And in some areas the penetration of these level-two things (I'm going to call them automated cars, but I really mean cruise controllers, like automated cruise control) gets to three or four percent on a good day. This increasing automation of our highways that is happening right now, we call it mixed autonomy traffic: there's a small number of autonomous cars, a huge number of human drivers, and they're all operating together. And this is a really exciting opportunity, because we can decide what outcomes we want from the types of cruise controllers that are implemented. Every car company has some type of cruise controller, and these cruise controllers have some to-be-determined impact on traffic. You can also choose to rewrite those cruise controllers in ways that maybe optimize the throughput of a highway or optimize the energy efficiency of a highway. And that's really the type of problem I've been trying to tackle over the course of my PhD: how should we select good cruise controllers that optimize some desired traffic metric? It's a really exciting opportunity because those cruise controllers are already there, they're already deployed. We could change the parameters of those cruise controllers tomorrow; it's just a matter of the will to do so and having a good controller to use. So I gather you use a simulator called SUMO, and then you developed a system called Flow. Can you tell us a bit about SUMO to start with? What is SUMO doing? What does it model? What does it abstract away? Can you describe the simulation in a little more detail? Yeah. So SUMO is this wonderful open-source microsimulation platform, which generally does traffic at the micro level. There are a lot of different levels at which you can model traffic: you can model it at the level of flows between origins and destinations, you can model it at the level of individual traffic links, and then you can go all the way down to the bottom and model the behavior of individual drivers. Generally it models these drivers not at the level of really fine-grained automated driving using LiDAR, but more at the level of distance-keeping to the car in front of you and occasional lane-changing behavior. But it is a micro model: it's modeling the behavior of individual drivers in response to the scene around them. And so it lets us investigate questions like: if I change the behavior of a fraction of these drivers, how do the other drivers respond?
What is the impact on some relevant traffic metrics, and so on. So that's what SUMO does. It's a wonderful tool. The developers have been working on it for, I don't know, 10-plus years now and are super responsive, and my whole PhD is built on it, so I wanted to give appropriate credit to those folks for doing this wonderful open-source work. Awesome. Okay. And then you've developed a system called Flow, I gather. Can you tell us a little bit more about that one? Yeah. So I have to pause here, because Flow was developed initially by Cathy Wu and an amazing team of undergrads (Cathy is now a professor at MIT), and then Abdul Rahman and I extended it a lot, again in collaboration with a huge team of wonderful undergrads and master's students, just to give credit where credit is due. Flow is a library built atop SUMO that provides a Python interface and some pre-built building blocks for investigating the effect of autonomous vehicles and modified cruise controllers on traffic. We have built out this big set of pre-existing networks that you can play around with: there's a ring of traffic, there's a toy intersection, there's a kind of mini model of Manhattan that you can scale up and down, a model of the San Francisco-Oakland Bay Bridge, and we're about to release models of a section of the I-210 in Los Angeles and the I-24 in Tennessee. What we try to do is make it easy for existing machine learning practitioners to play with and modify SUMO using tools they already know and reinforcement learning libraries they are comfortable using. So Flow just lets you easily modify these networks, insert different types of cruise controllers, run RL algorithms, design new reward functions, and build new networks without having to go into SUMO and modify it in ways that might be harder for users. Cool. I'm not sure if you heard the episode where we had Professor Shimon Whiteson, who founded Latent Logic, which I gather models human driver behavior, and that company was acquired by Waymo. We didn't really discuss the details of that with him too much, but I gather that modeling human driver behavior is a pretty deep topic. Do other simulators do this at different levels, or are they pretty similar? How might they differ? Yeah, so there's a lot of variation here. I'm a huge fan of Professor Whiteson's work, but I assume that he and Waymo are modeling these as much more fine-grained controllers, at the level of small turns and how do I go around a corner and take an on-ramp. In SUMO, the cars are basically on rails: they can move forwards and backwards and they'll stay in their lane no matter what you do. So it's not a matter of optimizing really small-scale individual decision making. But there is a really large existing literature on modeling driving at these levels. There's a particularly popular model called the Intelligent Driver Model, which models the human driver as an ordinary differential equation. It takes in the distance to the lead car, the speed of the lead car, and your own speed, and converts that into an expected acceleration, modulated by some noise. And so this is the level that these simulators we use are operating at. There are, however, different ways to build that driver model.
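For reference, a standard textbook form of the Intelligent Driver Model that Eugene describes is sketched below. The parameter values are typical defaults chosen for illustration, not the ones used in SUMO or Flow, and the noise term he mentions is omitted.

```python
import numpy as np

def idm_accel(v, v_lead, gap,
              v0=30.0,   # desired speed (m/s)
              T=1.0,     # desired time headway (s)
              a_max=1.0, # maximum acceleration (m/s^2)
              b=1.5,     # comfortable deceleration (m/s^2)
              delta=4,   # acceleration exponent
              s0=2.0):   # minimum jam gap (m)
    """Intelligent Driver Model: acceleration of a follower given its speed,
    the lead vehicle's speed, and the bumper-to-bumper gap."""
    dv = v - v_lead                                   # closing speed
    s_star = s0 + max(0.0, v * T + v * dv / (2 * np.sqrt(a_max * b)))
    return a_max * (1 - (v / v0) ** delta - (s_star / max(gap, 1e-3)) ** 2)

# A follower matching the leader's speed with a comfortable gap accelerates gently:
print(idm_accel(v=20.0, v_lead=20.0, gap=30.0))
```

The desired-gap term s_star grows with speed and with the closing rate, which is what produces the smooth distance-keeping behavior these microsimulators rely on.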
So there are other driver models that people use, like optimal velocity models and things, but it's at one level of abstraction above what they might be doing at Waymo. And there are a lot of other simulators, Aimsun, Vissim, none of which we use; we really like that SUMO is open source. I'm not super interested in releasing tools that can't be used by other people. So we've primarily played with SUMO, but there are other simulators we've thought about that maybe operate at a slightly more fine-grained level. Since we're generally interested in traffic and transportation throughput, we want to simulate as fast as possible, and this is the lowest level we can simulate at a reasonable speed without giving up the kind of micro dynamics that we care about. Okay, that makes total sense. So let's jump into your next paper here. This is Lagrangian Control through Deep RL: Applications to Bottleneck Decongestion, first author yourself et al., in 2018. Can you give us a brief idea of what's going on in this paper, Eugene? Yeah, so I believe, if I remember correctly, that Kanaad Parvate is also a co-first author on this paper. What's happening here is we wanted to think about the San Francisco-Oakland Bay Bridge, which has a set of traffic lights on it. The traffic lights turn on whenever it gets too congested, and the goal is to mitigate this thing called capacity drop. If you think about the inflow-outflow relation (the number of cars that go into a network versus the number of cars that come out), when the inflow is small there's a linear relationship between the inflow and the outflow: as many cars go in as come out. But when you have a bottleneck, some number of lanes that constrict to a smaller number of lanes, then above a certain inflow the outflow will actually start to drop off and that linear relationship is broken. You get less outflow than inflow, and you start to get this build-up at the bottleneck. So if you want to stay in the linear regime, you will have, say, traffic lights that prevent the inflow from ever getting too large, so that you never get this bottleneck forming. What I was thinking about is whether there was a way to replace that traffic light with kind of mobile traffic lights: let the AVs act as adaptive traffic lights, look at the flow around them, determine whether their lane should go or not go, and use that to keep the inflow in the appropriate regime without having to, say, pay the cost of building another set of traffic lights. So maybe you don't deploy this on the Bay Bridge, where there's already a light, but maybe you can deploy it at other known bottlenecks without having to build new, expensive infrastructure. And what we found was that this actually worked remarkably well. The cars could almost equal the performance of the pre-existing traffic lights at a 10% penetration rate, which is a bit above where we are now. But yeah, that's the idea: how can we view autonomous cars as kind of mobile traffic lights? And so this is centralized control, unlike your previous papers, is that right? Yeah, so this is kind of a history of deep RL libraries in a nutshell. There was not a prominent existing multi-agent RL library at the time. So even though we wanted to do this in decentralized settings, this was basically one of the first RL papers we wrote, so we were still figuring things out.
And we didn't feel ready to write our own RL library, and so it's in a centralized setting. Yeah, so we take the network and we cut it up into bits, and then the centralized network makes decisions about what all the cars in a given chunk are going to do. If you're in that chunk, every car in that chunk will make the same decision. This works out okay, because mostly the decision you're making is: I'm right in front of the bottleneck, does the set of cars behind me go or not go? And if you go, every car behind you should probably also go. You don't give too much up here. And then on the terminology, what is meant by Lagrangian control? I gathered that it's in contrast to Eulerian control, but I don't actually know what either of those terms mean. Yeah, so this is kind of a suggestion on the part of my PI. If you're talking about a fluid, the Lagrangian view is when you switch from viewing the thing as a flow to looking at it as though you're riding atop an individual particle. And so here we've moved from the setting where the lights are looking at the total flow to looking at an individual vehicle within the flow as the controller. And so we call that Lagrangian, but there's nothing particularly deep there except that it is the individual elements of the flow that are making the control decisions. And so do you have some idea of what the maximum flow rate is that you could achieve with the most perfect controller, or is that super obvious given the nature of the inflow-outflow curve? There is an obvious and a non-obvious answer. If you assume that the merging dynamics are conventional human merging dynamics, then you know that the best outflow is exactly where that inflow-outflow curve starts to drop off. So you want to be exactly right before that inflow-outflow curve drops off. You can't do better than that, because if you go to higher inflows, the outflow starts to go down, so you're losing out. But if you go beyond that and you start to think about the AVs having different merging dynamics than the humans, then you can start to think about the point at which that curve drops off moving to higher inflows. So if the AVs can somehow merge more efficiently, or engineer the dynamics of the humans around them as well, then you can start to go to higher inflows. But without that, you do know where the cap is: it's exactly wherever that curve starts to drop off. And then on the centralization versus decentralization question: is it crazy to think about centralized solutions? I get that decentralized solutions are in a way more general and safer in a sense, if ever the communication system goes down, but it seems like we could be giving up a lot by not considering a centralized solution. Like, if I had a choice of both, it would be simpler to centralize it, and I might expect better results if we could centralize. Does anyone consider that as a possibility, or is that just discarded on the face of it? So there's a bit of an open research question here. In some follow-up work we did examine a decentralized solution, and we found that the decentralized solution outperformed the centralized solution: it had better throughput. And is that the next paper, Optimizing Mixed Autonomy Traffic Flow?
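Before moving to that paper, here is a toy numeric illustration of the capacity-drop picture just discussed: below a critical inflow, outflow tracks inflow; above it, the bottleneck congests and outflow collapses, so the throughput-maximizing inflow sits just below the knee of the curve. The curve shape and the numbers are invented for illustration, not measured from the paper's simulations.

```python
import numpy as np

def outflow(inflow, critical=1800.0, dropped_capacity=1500.0):
    """Toy inflow-outflow curve (vehicles/hour) with capacity drop:
    below the critical inflow, outflow equals inflow; above it, the bottleneck
    congests and outflow collapses to a lower, roughly constant value."""
    return np.where(inflow <= critical, inflow, dropped_capacity)

inflows = np.linspace(0, 3000, 301)
best = inflows[np.argmax(outflow(inflows))]
print(f"throughput-maximizing inflow ~ {best:.0f} veh/hr (just below the knee)")
```

A ramp meter, a traffic light, or an AV acting as a mobile traffic light is then just a mechanism for holding the arriving inflow near that critical value.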
Yeah, yeah, yeah. So yes, we can kind of fuse this discussion. The folks behind RLlib built this lovely multi-agent RL library that we started using for this problem. And what we found was that if we did this in a decentralized fashion, where each agent shares the same controller but all of them make their decisions locally, then we would outperform the throughput of the centralized controller. Now, in a strict technical sense this is not correct: a centralized solution should always do better than a decentralized solution. But there are associated optimization problems, right? With the centralized solution, we had to cut up the network into these little chunks, which restricted the expressiveness of our policy. And then there's also this curse-of-dimensionality issue: the centralized solution had some 20 or 40 actions, making a decision for each of these chunks, and that makes exploration harder in some ways. So while technically a centralized solution should do better than a decentralized solution, in this case our decentralized controllers did better. But that doesn't mean it's impossible; I think it is possible for someone to sit down and come up with a centralized solution that does better than our decentralized controllers. And so, given that's true, you could think about deploying centralized solutions. But when you think about centralization, the issue becomes who is in charge of writing those centralized controllers, right? Okay, so now you think about having some rule where, once you get close to a bottleneck, you pass up control to some computer sitting nearby, and that computer decides your set of actions as you try to pass through that network. But now you have to ask: who wrote that controller? How do you get the drivers to agree that they should cede control to that controller? It feels like the government starts getting involved here, and that runs into questions of policy that I don't quite know how to answer. Yeah, if we can't trust them to do a lot simpler things, then it might not be a great idea to trust them to do multi-agent centralized RL. Yeah, I mean, I think it's possible, it's just a big leap. But I mean, they do run the traffic lights. Yeah, you know, they do run the traffic lights. I don't like how they do that, to be honest. Super inefficient. But you should come to New York City, we have an amazing green wave set up here. Sometimes I hit like 20 green lights in a row. It's wonderful. Wow. Okay. And you don't get the red wave? You know, maybe; I have mostly hit the green wave. I'm sure someone out there, some traffic engineer, is sitting there thinking, no, the New York City traffic lights are horrible, I could fix them if I had the chance. But it seems pretty good to me. So I guess maybe somewhere between decentralization and centralization is something where the cars are communicating. Like, you can imagine a scenario where all the Teslas on the road have a back channel and they're sharing notes, and maybe they're all making their decisions, maybe they're sharing some observations. Do you think that's a feasible scenario? Yeah, this is called CACC, cooperative adaptive cruise control. There's a ton of work on this. I don't know about it specifically applied to this problem.
I think it'd be interesting follow-up work for someone to see what happens when you let the cars communicate with each other in this setting. I think it's possible, but it becomes challenging as you think about there being multiple actors, right? Right now, if you are in Palo Alto, a lot of the cruise controllers that are operating are Teslas. But as other car companies start to roll out their solutions, you get this cross-conflict: are the companies going to agree to coordinate their control together? What happens if there are two companies, each of which is deploying those controllers, but those controllers don't mesh well when played together? So that again runs into these kinds of technical problems. This is why I like the decentralized aspect of it: I think this decentralized controller is something you could deploy and basically ignore whatever everyone else is doing. It doesn't matter what they do; if you drive up near the front of the bottleneck, you stop, and you only go when you feel ready, it doesn't matter what the person behind you is doing. Yeah, okay, cool, makes sense. So let's move to your next paper that we've partially mentioned, that is Optimizing Mixed Autonomy Traffic Flow with Decentralized Autonomous Vehicles and Multi-Agent RL, first author yourself et al., and that was 2020. Can you walk us through the main idea of this paper? Yeah, and again, sorry to keep doing this to you, but Nathan Lichtlé is the other joint co-first author there. Thanks, I appreciate that. Yeah, so the idea of this paper is that now there are these multi-agent RL libraries that we can start using, so we started looking at fully decentralized solutions: maybe communication between the cars, but each car is making its own decisions locally. So this is something you could feasibly imagine actually deploying, and we even look at really low penetration rates, like four or five percent, and see how well you can do in those low settings, and we see that there is still some improvement over the human baseline. So here all the cars are trained to jointly optimize the throughput, so the reward function is the number of cars that got through the system, but they're all making their own decisions locally to optimize that throughput together. Yeah, and sorry, what are the cars observing in this case? We wanted this to be something that could genuinely be deployed, so we use kind of a radar observation: they see the cars in front of them, the nearest cars in front of them in each lane. It's a little unrealistic (there are definitely settings where the radar would not return some of the cars that we return), but yeah, it's a radar observation space. They're actually seeing distance to the lead cars and their speed. I was just wondering if it's possible to describe the behavior of the policies that come out. Do they act a lot like humans, or do they act in a very different way from how humans would behave? They act nothing like humans. Basically what they do is they drive up to right before the entrance to the bottleneck and then look at how many cars are in the bottleneck. And one of the lanes will decide to go: basically the AV at the front of the lane will go, and then the humans behind it will follow it along into the bottleneck.
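A rough sketch of the per-AV interface just described, a radar-style local observation of the nearest leader in each lane plus a shared throughput reward, is below; Eugene's description of the emergent platoon behavior continues right after it. The lane count, feature layout, and normalization are assumptions made for illustration, not the paper's exact design.

```python
import numpy as np

NUM_LANES = 4          # assumed lane count, for illustration only

def av_observation(ego_speed, lead_gaps, lead_speeds, congestion_estimate):
    """Radar-style local observation for one AV near the bottleneck entrance:
    own speed, distance and speed of the nearest leader in each lane, and a
    (possibly noisy) estimate of how congested the bottleneck is."""
    assert len(lead_gaps) == len(lead_speeds) == NUM_LANES
    return np.concatenate([[ego_speed], lead_gaps, lead_speeds, [congestion_estimate]])

def shared_reward(num_exited_this_step, horizon_steps):
    """Every AV receives the same reward: system throughput (outflow),
    normalized per step so the episode return is total outflow."""
    return num_exited_this_step / horizon_steps

# Each AV's policy maps its local observation to a go / no-go style action
# (e.g. a target acceleration); training maximizes the shared throughput reward.
obs = av_observation(2.0, [10.0, 12.0, 8.0, 30.0], [0.0, 1.0, 0.5, 5.0], congestion_estimate=0.7)
print(obs.shape)  # (1 + 4 + 4 + 1,) = (10,)
```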
And then all the other AVs right in front of the bottleneck in the other adjacent lanes will not go; they will wait until the first platoon has gone part of the way through, and then one of the other lanes will decide to go. So it's kind of this smooth thing where one lane goes, then another lane goes, then another lane goes, where the platoon that left is replaced by another AV that blocks the entrance. And getting the particular timings of those platoons correct is hard: you want it so that, as one platoon goes through and the second platoon starts, it will also get through the bottleneck without causing too many merge conflicts. So you'll see them stop and start and stop and start, and then occasionally, when congestion does occur in the bottleneck, they'll all wait until it's cleared out and then start this process again. Yeah, it's very inhuman. Cool. Okay. And so they're kind of acting not selfishly, right? That's why they're able to do this, whereas humans are all looking out for their own personal reward. I mean, yes and no. They are trying to maximize this cooperative objective, but because avoiding congestion makes everyone better off, if humans were to do this they would have been better off too, right? This is a case where the Nash equilibrium and the social optimum are not the same thing: everyone greedily going right away is worse off than if they had waited a little bit and tried to coordinate. Okay, so I think you were saying that there's again no communication here, but that some of the sensors might be returning more than they would with a realistic sensor, so maybe that gap could be spanned by a bit of communication? How do you see the potential for communication in these kinds of situations? Yeah, so we looked at this a little bit, but it didn't make it into the publication. You could imagine that cars nearby other cars broadcast signals, and what we were hoping to see was the emergence of some kind of car language where they would pass information up the stream. There's this bottleneck where sometimes cars get congested, and the radar often can't see into that bottleneck (it's too far away), but you could imagine the cars in the bottleneck passing information to the car behind them about the state of the bottleneck, which then gets passed to the car behind that, and so this kind of global information would be communicated backwards up the flow. We played around with that a little bit and didn't see anything exciting happening, but I think there's potential in a lot of these settings to think about what the language of cooperative autonomous cruise controllers might be and look like. So all this stuff reminds me of a project that I did in grade seven, and in that project I wrote that cars could go at full speed through intersections and then they wouldn't need traffic lights, and they could even do it in the dark as long as they were properly coordinated, like a zipper. And I showed my dad, and his first comment was: yeah, but what about what happens when someone gets a flat tire, or if a car breaks down? Then there's going to be a problem.
So do you think those kinds of issues are going to be key to handle? I mean, I understand this is preliminary work, and it's not a criticism to say you should handle every single detail, but I wonder to what extent those types of unexpected events would make a difference in models like this. Yeah, so for things like maximizing intersection throughput, those kinds of safety-critical issues are really key, but they're less key for things like this, where if I drive up, come to a stop, and only go when I'm ready, it doesn't matter what the people around me do, and a lot of the things we build are pretty robust to these issues. The one that does concern me, that we thought about a lot but don't know how to model, is this: this summer we're doing a big project where we take some of these cruise controllers we built, put them on the roadway, and try to have them smooth waves and improve the energy efficiency of traffic. It's a big project with something called the CIRCLES Consortium, which has partners from Vanderbilt, Rutgers, the Tennessee Department of Transportation, and all sorts of folks. What we don't know is how people will adapt and respond to this non-human behavior, and that isn't something we can capture in our papers either. Some of these cruise controllers keep larger gaps than humans are used to, and people could respond to that in all sorts of unusual ways: they could start lane changing more often than normal, or they could just get angry and start tailgating really aggressively. There are all sorts of ways humans can respond to these unnatural behaviors that we don't know how to model. We occasionally try to model it by letting the human driver model compute a best response to our current controller: you take your controller and optimize it, then you take your human driver model and optimize its parameters as a best response to that controller, and you go back and forth. But it's an open question for us what happens when the humans around you get annoyed or change their behavior in response to these non-human driving styles. Cool. OK, so let's move on to another paper that I just saw on Twitter today, which was a nice surprise, hot off the press on arXiv: The Surprising Effectiveness of PPO in Cooperative Multi-Agent Games, and that is by Yu et al., with yourself as a co-author, is that correct? Yeah. The genesis of this paper is that in a lot of the work we've done, we've always used multi-agent PPO for everything, the reason being that we've never been able to get some of the off-policy methods that are popular in the multi-agent RL literature to work. That's not to say they can't work; we just haven't been able to get them to work, so we've used PPO as a workhorse for everything. I was talking to Yi Wu about this, and he mentioned that at OpenAI a lot of their stuff had also been built on PPO, and they were kind of puzzled about why this algorithm wasn't more popular in the literature. So I had a very excellent undergrad, Akash Velu, and Yi Wu had a student, Chao Yu.
And we asked them to start looking into this and build some benchmarks comparing these on-policy methods to off-policy methods. Modulo the fact that we don't have an agreed standard of evidence in RL, what we found was that in three benchmarks (the multi-agent particle environments, the StarCraft Multi-Agent Challenge, and Hanabi), the on-policy methods got basically the same performance as the off-policy methods with similar sample complexity, a similar number of samples to reach that performance. We found this really surprising, because in the single-agent literature the conventional wisdom is that off-policy methods are a good deal more sample efficient than on-policy methods, and I think that's true, or at least it has been in my experience, but we were not finding it in this multi-agent setting. A piece of this might be that the sense in which you become off-policy happens a lot faster in the multi-agent setting: everyone's policies are changing, so old samples become stale faster in some way. We were able to provide some empirical support for that hypothesis, but in any case we feel pretty confident about the statement that the PPO methods perform really well in these benchmarks. Cool. Do you mind briefly describing how multi-agent PPO differs from standard PPO, and how it handles multiple agents? It's quite straightforward: instead of the value function just looking at your own state, it takes in the state of all agents in the scene, and that lets you get more accurate value estimates. So it's like a centralized critic? Yeah, it is a centralized critic: we have a centralized critic and decentralized actors, so we train centralized and act decentralized (there's a small sketch of this setup after this exchange). But another thing we found somewhat surprising is that, at least in the StarCraft Multi-Agent Challenge, it didn't really seem to matter very much: not using a centralized critic and just using straight-up normal PPO for all the agents also performed very well. I guess the way I always think about it is that the off-policy stuff has the huge advantage of being able to use whatever data we might already have, and how would we ever deploy something on-policy if it wasn't performing pretty well from the get-go? So if you have a simulator, then either one is equally feasible, but in so many cases in the real world I'm not sure on-policy would ever be realistic. What do you think about that? Yeah, I think that's a good point. I certainly think that in single-agent settings off-policy stuff is definitely going to win out. A lot of the time we have a simulator; it's pretty rare for me to imagine a circumstance where we're genuinely thinking about an agent that learns in the world without some simulator pre-training phase. That's probably a controversial statement, and I think it will not always be true, since we will definitely get to the point where we're training methods online in the real world, but at the moment you often have a simulator phase, at least partially motivated by safety reasons: you want to start by testing your stuff in simulation, so you have that simulator. I do think that if you wanted to deploy in the real world directly and learn there, then you definitely should be thinking about off-policy methods. Well, OK, let me roll back slightly: if you were trying to deploy a multi-agent system in the real world, then given the statement I made about on-policy multi-agent RL having similar sample complexity to off-policy multi-agent RL in the benchmarks we looked at, you probably should feel as comfortable deploying either one; they're going to have similar sample complexity. Cool. So do you see yourself following this up? It sounds like there are a bunch of open questions here. Do you see yourself pursuing them? Yeah, the real open question to us is why: how do we quantify the reason the off-policy methods don't seem to work as well here? Is there some notion of staleness that we can examine? And then we've only looked at this question in a fairly specific setting, fully cooperative problems with discrete action spaces, so it's fully plausible that the statements I made are not true in other settings, and I'd like to know whether they are or not. So there should be some follow-up work on that question. And then, just going back to some things we talked about earlier with respect to these sequential social dilemmas: Natasha and Joel's work on this was one of the inspirations for me starting this show, and I found the whole question of how we solve these sequential social dilemmas so fascinating. Social dilemmas are a major problem in the world today in all sorts of contexts, and can any of this work ultimately help us solve them in the real world? Think of free riders: with respect to climate change, if some nations don't sign an accord they get a free ride on everyone else's emissions cuts, and anti-vaxxers are kind of getting a free ride on everyone else being vaccinated. This shows up everywhere, and some days I think it may be the central question of our time: how do we solve these social dilemmas? So I guess my question is, do you see any of these lines of work helping us deal with social dilemmas that are admittedly a lot more complicated than the ones tackled in simulation so far? Do you see them ever getting to the point where they might really help us solve these really thorny problems in the real world? Wow, what a question. Anyone listening: I'm going to spitball here, and I'm less epistemically confident about my answers on this than on anything else. So there are a lot of thorny pieces to this puzzle. The first is that you could think about using these methods for things like incentive design: what are the appropriate incentives to push humans away from this? And you could also think about things I'm quite interested in that are kind of like AI mediation: how can we modify clusters of humans so that they're connected to the right people, such that they start to move towards the outcomes they actually want for themselves and for their society, and so on.
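As referenced above, here is a minimal sketch of that centralized-critic, decentralized-actor setup: each agent's actor sees only its local observation, while a single value function sees the joint observation during training. It's an illustration of the idea in PyTorch, not the implementation used in the paper:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Decentralized actor: maps one agent's local observation to a
    categorical action distribution. Weights can be shared across agents."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, local_obs):
        return torch.distributions.Categorical(logits=self.net(local_obs))

class CentralizedCritic(nn.Module):
    """Centralized critic: values the concatenated observations of all agents,
    which tends to give more accurate value estimates during training."""
    def __init__(self, obs_dim, n_agents, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim * n_agents, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, all_obs):            # all_obs: [batch, n_agents, obs_dim]
        return self.net(all_obs.flatten(start_dim=1)).squeeze(-1)
```

At execution time each agent only queries its actor on its own observation; the critic and the joint state are needed only while computing PPO's advantage estimates. Independent PPO is the same setup with the critic also restricted to the local observation.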
And a lot of this, though, goes back to the sample efficiency point: you're not going to do this online; you're going to do this, at least at the start, in simulation. So now there's this piece of how you build models of human beings: how they react to interventions, how they react to chatbots attempting interventions and to modifications of their social network graph, and so on. Lately there's been a lot of work building kind of LSTM-like models of how humans are going to respond to things, and those have worked much better than I thought they would; I would have guessed that some human responses are quite hard to model. But yeah, the real blocker is how you build models of human beings such that you can then begin to study interventions on their behavior. I think it's a really promising area, and what we're all kind of pushing towards is trying to get better equilibria to emerge than currently do. Yeah, absolutely. I think one thing that Natasha Jaques's paper on social influence showed was that social influence can be a helpful intrinsic motivation to help agents solve these collective action problems (there's a rough sketch of that influence bonus below): the whole idea being that if the agents can influence each other, then they can work together as a group. That seems intuitively true, and maybe a bit obvious, and I get that the state of RL today and the state of our simulations are such that those kinds of questions are more tractable than simulations that try to be extremely detailed about human behavior. But I can't help but wonder if there's something inherent in game theory that can help us find our way through some of these messes we're in right now. Some of that might have to do with human behavior, and some of it may just be pure game theory, like you say: if you design the mechanisms right, then game theory may tell us we'll have a better time finding a better equilibrium. I understand this question is very vague, but it comes up for me every time we touch on game theory on this podcast, and every time we touch on social dilemmas, which seems to be a lot more often than I expected, partly because I think it's interesting and, as I say, I think whether we get through all this could be the central question of our time. Some of these tools seem relevant, and when some top researchers at DeepMind describe what they're doing and their hopes and aspirations, they talk about using AI to solve the most important problems of our time, and to me this is maybe one of them. So sorry to drop this on you; it wasn't even in the notes, but I couldn't help it. Yeah, no, as long as you're comfortable with me spitballing, I'm very happy to talk about this. I love ambitious research. I think it's good for people to say, I want my research to tackle this impossibly hard problem that we have no idea how to do. It's good to want your research to be useful and productive, and you shouldn't feel too bad about stating ambitious things like that. Sorry, bit of a segue.
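As referenced above, here is a rough sketch of the kind of counterfactual influence bonus used in that line of work, in the spirit of Jaques et al.'s social influence paper rather than their exact implementation. The `other_policy` model is a hypothetical stand-in for a learned model of another agent's behavior:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-8):
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

def influence_bonus(other_policy, state, my_action, my_action_probs):
    """Counterfactual influence of my action on one other agent: the KL between
    that agent's action distribution given my actual action and its distribution
    with my action marginalized out under my own policy."""
    conditional = np.asarray(other_policy(state, my_action))
    marginal = sum(p_a * np.asarray(other_policy(state, a))
                   for a, p_a in enumerate(my_action_probs))
    return kl_divergence(conditional, marginal)

# The influencer's reward is then the environment reward plus a small weighted
# bonus summed over the other agents it can affect:
#   r_total = r_env + alpha * sum(influence_bonus(...) for each other agent)
```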
So yeah, I think as far as things like Natasha's paper go, which I really love, that's a good example of building some kind of structural prior into multi-agent reinforcement learning. You have some priors about the environment, like the fact that agents influence other agents while some bits of your actions don't influence other agents, and these are two separate things. Maybe building in more priors like what Natasha did there is part of the path towards more sample-efficient multi-agent reinforcement learning, which I think is a key challenge: RL is inefficient, and multi-agent RL is like ten times as inefficient. But I think one promising opportunity in this direction is that, while building models of human beings is hard, what you can do is set up the problem you care about and then train a diverse array of agents to solve it, and hopefully human behavior is somewhere within that superset. Then you can refine and pick out the agents you want, the ones that coordinate well with the humans you care about. So maybe modeling humans is too hard, but getting human behavior to be included in the set of agent behaviors you generate is not impossible, and I think that's a promising direction: methods for training really diverse sets of agents that you can then select from. And I think there's some work doing that in Overcooked that I have seen. Sorry, what was that phrase, Overcooked? Yeah, Overcooked. I've never played it, but I think it's a phone game where you collaborate with some other folks to try to cook a meal of some sort, and there's been some amount of work on how to generate partners that coordinate well with humans in it. Similarly, Jakob Foerster has some work on how to generate agents that zero-shot coordinate well with human beings in Hanabi. So I think the MARL community is starting to think about this question of how to generate agents that can interface with humans correctly even though I cannot ever train with humans; I do think we're starting to look at this question. Cool, okay. So what else is going on in RL, or in your areas of interest, that you find really interesting outside of your own work? Oh my god, so many things. I've really liked a lot of the model-based RL work that's been happening lately. Model-based systems are nice: we already have physical models of a lot of things, we've done maybe a couple hundred years' worth of studying physics, and so it's always bothered me that model-free methods are so prominent. I think that's probably shared by a ton of folks, but it really seems like model-based RL methods are taking off and doing really well, so I'm personally looking forward to playing with those and seeing how well they work, and maybe looking at some extensions of that into the multi-agent domain. And I'm hopefully starting to see more things where RL is turning out to be useful for some actual application. There's the chip design paper from Google, and there was this nice presentation, I think at NeurIPS, on designing agents that could pilot a hydrofoil, so that you could then do this co-optimization where you design a new hydrofoil, have the agents pilot it, and then use that to continually optimize the boat design. So I'm very interested in trying to see where RL can actually be used to create some real gains today. The other side of that is that I've become very interested in the robustness of RL controllers, because if you look at an RL paper and you look at the deviation of the results (this is actually a little separate from robustness per se), you'll have something like a MuJoCo hopper that 80% of the time gets 10,000 reward and 20% of the time just falls on its face, and that's really not what you want. So thinking about different ways to enable robustness with respect to uncertainty in the model of the system, or uncertainty with respect to the behavior of the other agents in the system, is something you'll hopefully be seeing more work from me on in the future. I think that stuff is really promising. Cool, I look forward to reading about what you do there. Speaking of that, what do you see yourself doing in the next few years? Do you think you'll be continuing the themes of your work that we've talked about so far? Yeah, for sure. We spent five years designing cruise controllers that we wanted to put on the highway; we're getting ready to put them on the highway this summer, and then in the following year to put even more cruise controllers on the highway and see how this works at scale. So I'm very optimistic about the ability of RL to design cruise controllers that improve the throughput and the energy efficiency of the highway, and I think we'll hopefully be putting out some empirical evidence to that point, so there's definitely some work going on there. And then I care very much about multi-agent RL becoming more sample efficient, and therefore accessible to other researchers as a tool, so I'm definitely not going to stop working on that. Cool. On a personal note, I drive a Tesla Model 3, and I just recently tried out the Autopilot feature, which is the adaptive cruise control, and it was both inspiring and a little bit terrifying, because knowing what I know about the AI actually made me probably more concerned for my safety. No incidents yet, and it did have a good warning signal when anything unexpected happened, but I look forward to that being even better, because I really don't trust the drivers on the road. Yeah, personally I'm a terrible driver and I try not to drive, so I am ready for someone to automate me out of existence. Please, someone do it. So, Eugene, anything else that you want to mention, or that I should have asked you about today? No, this has been really fun. You came prepared with some solid questions, so thank you for having me. Oh, it's been amazing. So do you have any suggestions for the show, or who we might feature next?
Yeah, so just off the top of my head, Jakob Foerster is extremely opinionated and has some really interesting perspectives on MARL research; he could be fun to have on. And then there's a Berkeley grad student, Sarah Dean, who I hope I'm not putting on the spot, but I think she does amazing work on the robustness of machine learning methods, and I'd be personally curious to hear her opinions on things, coming from a more controls-oriented background. Those are the two people who come to mind off the top of my head. Cool. Eugene Vinitsky, this has been fantastic. Thanks so much for taking the time to speak with us here at TalkRL; I really appreciate it. Yeah, this was really fun, Robin. Thanks again for having me. Notes and links for this episode are at talkrl.com. If you like this show, I need your support, and you can help in a few ways. Follow us on Twitter at TalkRL Podcast; we love retweets. Talkrl.com.
[ { "end": 13, "start": 0, "text": " This is TalkArail Podcast. All reinforcement learning, all the time." }, { "end": 20, "start": 13, "text": " Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan." }, { "end": 26, "start": 20, "text": " Eugene Vinitsky is a PhD student at UC Berkeley, advised by Alexandria Bayon," }, { "end": 31, "start": 26, "text": " and he's interned at Tesla and D-Mind. Thanks so much for joining us, Eugene." }, { "end": 35, "start": 31, "text": " He's really psyched to be here. I love this podcast, so yeah, super happy." }, { "end": 40, "start": 35, "text": " Awesome, thanks so much. How do you like to describe what you do in your focus area?" }, { "end": 46, "start": 40, "text": " Yeah, so my PhD has mostly kind of focused around transportation problems" }, { "end": 51, "start": 46, "text": " and applications of reinforcement learning to problems in transportation," }, { "end": 57, "start": 51, "text": " particularly as relates to designing new cruise controllers" }, { "end": 61, "start": 57, "text": " that improve some kind of performance metric on the highway," }, { "end": 66, "start": 61, "text": " and then some amount of cooperative behavior between those cruise controllers," }, { "end": 74, "start": 66, "text": " and then sort of as time has gone on, and I've tried to kind of push the boundaries of this field." }, { "end": 83, "start": 74, "text": " I've delved a bit into multi-agent reinforcement learning in terms of analyzing some of the algorithms," }, { "end": 91, "start": 83, "text": " and then the last piece is, and again, this ties back to this cruise control stuff," }, { "end": 95, "start": 91, "text": " is kind of thinking about robustness in reinforcement learning," }, { "end": 100, "start": 95, "text": " because we designed these cruise controllers, and we want to this summer put them on the highway," }, { "end": 105, "start": 100, "text": " and so we now have this concurrent nightmare of trying to think about how," }, { "end": 109, "start": 105, "text": " whether RL methods are robust and how to make them robust and so on." }, { "end": 111, "start": 109, "text": " So I guess there's three pieces there." }, { "end": 115, "start": 111, "text": " So let's talk about them. We're going to start with one of your more recent papers," }, { "end": 120, "start": 115, "text": " the Social Norms paper, so that was a learning agent that acquires social norms" }, { "end": 124, "start": 120, "text": " from public sanctions in a decentralized multi-agent setting." }, { "end": 127, "start": 124, "text": " That was first author yourself at all." }, { "end": 130, "start": 127, "text": " So can you give us a brief overview of this paper?" }, { "end": 134, "start": 130, "text": " This was some joint work with some lovely folks at DeepMind," }, { "end": 140, "start": 134, "text": " and they have built out this large set of multi-agent benchmarks" }, { "end": 144, "start": 140, "text": " that kind of test features of cooperation and competition," }, { "end": 149, "start": 144, "text": " all these kind of game theoretic questions, but in these extended settings," }, { "end": 152, "start": 149, "text": " which they call sequential social dilemmas." }, { "end": 159, "start": 152, "text": " So cooperation and competition across time in these kind of complex grid worlds." }, { "end": 168, "start": 159, "text": " And one of the key sets of problems in that benchmark is overcoming collective action problems." 
}, { "end": 174, "start": 168, "text": " So maybe we have some common pool resource, like a fishery or something like that." }, { "end": 179, "start": 174, "text": " And we want to kind of optimize its usage." }, { "end": 182, "start": 179, "text": " Everyone selfishly wants to use it as much as possible," }, { "end": 187, "start": 182, "text": " but if anyone overuses it, it depletes, and then everyone is worse off than if they had cooperate." }, { "end": 190, "start": 187, "text": " So I just want to point out that our very first episode," }, { "end": 194, "start": 190, "text": " we talked about this quite a bit with Natasha Jakes," }, { "end": 198, "start": 194, "text": " and talking about some of Joel Leibos papers on this topic," }, { "end": 201, "start": 198, "text": " so I'm really happy to hear more about it." }, { "end": 206, "start": 201, "text": " Okay, yeah, I enjoyed that episode. Natasha is just like a fantastic researcher." }, { "end": 210, "start": 206, "text": " We have these set of common resource dilemmas." }, { "end": 215, "start": 210, "text": " And we want to develop agents that manage them effectively." }, { "end": 219, "start": 215, "text": " There's some amount of evidence that humans can come up with," }, { "end": 222, "start": 219, "text": " kind of norms and solutions that allow them to manage these resources." }, { "end": 229, "start": 222, "text": " There's some great work by Eleanor Ostrom on this topic and others." }, { "end": 235, "start": 229, "text": " But if you take a bunch of agents and you train them using multi agent reinforcement learning," }, { "end": 240, "start": 235, "text": " and you do this in a totally decentralized fashion where each agent is optimizing its own reward function," }, { "end": 247, "start": 240, "text": " basically what happens in almost every trial is that the agents will converge on total free writing." }, { "end": 255, "start": 247, "text": " So to maybe put a concrete point on it, so you have the setting where agents have a," }, { "end": 259, "start": 255, "text": " are like running around the script world, trying to eat as many apples as possible," }, { "end": 263, "start": 259, "text": " and they have a preference over which type of apples they eat." }, { "end": 266, "start": 263, "text": " So there are, say, red apples and green apples and blue apples." }, { "end": 270, "start": 266, "text": " And some agents prefer red and some prefer green." }, { "end": 276, "start": 270, "text": " And by prefer, I mean that they get more reward from eating various of that color." }, { "end": 279, "start": 276, "text": " And what you'll find if you run like straightforward," }, { "end": 281, "start": 279, "text": " like decentralized, multi agent reinforcement learning," }, { "end": 284, "start": 281, "text": " is that they will all just run around eating apples." }, { "end": 288, "start": 284, "text": " And one key feature of this environment is that to maximize the number of apples," }, { "end": 293, "start": 288, "text": " you need to like go around and recolor these like these planting sites." }, { "end": 302, "start": 293, "text": " So when you recolor the planting sites, as all of the planting sites in this map become the same color," }, { "end": 306, "start": 302, "text": " the amount of berries generated of that color increases linearly." }, { "end": 314, "start": 306, "text": " So everyone is best off if the map is entirely colored one color." 
}, { "end": 319, "start": 314, "text": " But instead of converging on a particular color, splitting the work and recoloring the map" }, { "end": 325, "start": 319, "text": " so that it's entirely one color, the agents just kind of run around eating berries," }, { "end": 330, "start": 325, "text": " don't do any coloring work, and are all like significantly worse off than if at the beginning of the episode," }, { "end": 334, "start": 330, "text": " they had split up the work and just colored the map." }, { "end": 340, "start": 334, "text": " So they're kind of failing to converge to cooperative behavior and free writing is dominant." }, { "end": 347, "start": 340, "text": " And so we were interested in coming up with other decentralized methods" }, { "end": 356, "start": 347, "text": " that are capable of overcoming these problems and leading to cooperative solutions rather than free writing." }, { "end": 360, "start": 356, "text": " And I guess I want to focus on the word decentralized here." }, { "end": 366, "start": 360, "text": " So if you use some kind of centralized method, like for example," }, { "end": 371, "start": 366, "text": " like you could have all the agents share a reward function. So whenever another agent eats," }, { "end": 373, "start": 371, "text": " I also get some reward from that." }, { "end": 381, "start": 373, "text": " That's not the sort of solution we're looking for because we know that human beings solve these dilemmas" }, { "end": 383, "start": 381, "text": " without being able to share each other's reward functions." }, { "end": 388, "start": 383, "text": " We converge on cooperative solutions without all of us being trained by the same algorithms." }, { "end": 393, "start": 388, "text": " So we want to build algorithms that can do this in this fully decentralized fashion." }, { "end": 398, "start": 393, "text": " We're kind of motivated by the fact that, you know, all in video games," }, { "end": 403, "start": 398, "text": " it's perfectly sensible to have all the agents controlled by the same algorithm shared." }, { "end": 408, "start": 403, "text": " But when you think about all these like new autonomous deployments that might be happening in the future," }, { "end": 412, "start": 408, "text": " there are all these upcoming decentralized autonomous deployments that we're interested in" }, { "end": 415, "start": 412, "text": " where you're going to have to do this in a decentralized fashion." }, { "end": 419, "start": 415, "text": " So you know, they're likely to be multiple autonomous car companies," }, { "end": 424, "start": 419, "text": " each with deployed products, they are not going to agree to share their policies with each other" }, { "end": 427, "start": 424, "text": " or share their reward functions and things like that." }, { "end": 434, "start": 427, "text": " So, you know, there's a real, I think, both technological and just purely pure interest" }, { "end": 439, "start": 434, "text": " in designing decentralized algorithms that can come to this kind of cooperative behavior." }, { "end": 445, "start": 439, "text": " And so we took some amount of inspiration from literature on learning social norms" }, { "end": 449, "start": 445, "text": " and how social norms work in human systems." }, { "end": 456, "start": 449, "text": " Basically, what we said is, you know, a social norm is a shared classifier on events" }, { "end": 458, "start": 456, "text": " that are approved and disapproved." 
}, { "end": 462, "start": 458, "text": " And this is a concept really inspired by a work by Jillian Hadfeld." }, { "end": 467, "start": 462, "text": " And so we said, okay, all right. So a social norm is a classifier that agrees on what's approved and disapproved." }, { "end": 474, "start": 467, "text": " So let's just do that. Let's train a classifier on, let's declare some set of events to be approved and disapproved." }, { "end": 480, "start": 474, "text": " In these environments, there's a zapping action that you can use to zap other agents" }, { "end": 484, "start": 480, "text": " and that freezes them in place. And if they get zapped twice, they get some penalty reward." }, { "end": 488, "start": 484, "text": " So let's call that a disapproved action and everything else in approval action." }, { "end": 492, "start": 488, "text": " And we'll train a classifier that takes in some number of frames before scene happens" }, { "end": 494, "start": 492, "text": " and predicts approved and disapproved." }, { "end": 500, "start": 494, "text": " And then let's create some internal motivation to align with that classifier." }, { "end": 508, "start": 500, "text": " So we add a small amount of suitor reward for acting in approval or disapproval with your internal classifier." }, { "end": 510, "start": 508, "text": " And then let's see what happens." }, { "end": 516, "start": 510, "text": " And what we found is that when you add that small amount of suitor reward for following those classifiers," }, { "end": 522, "start": 516, "text": " then we got a lot more convergence to cooperative behavior and free writing basically disappeared" }, { "end": 525, "start": 522, "text": " from the outcomes in the environment." }, { "end": 530, "start": 525, "text": " So the idea is that we're rating specific behaviors is that right?" }, { "end": 535, "start": 530, "text": " We're not rating the reputation of individual agents themselves, is that the case?" }, { "end": 540, "start": 535, "text": " Yeah, so there's not even really the ability to identify individual agents." }, { "end": 545, "start": 540, "text": " So these agents are these little blocks walking around on a grid and we can't distinguish one agent from another." }, { "end": 548, "start": 545, "text": " We're just saying we saw an agent do something." }, { "end": 551, "start": 548, "text": " Would the other agents in the scene have zapped it?" }, { "end": 559, "start": 551, "text": " And if our classifier says that they would have, then we're going to get a small amount of reward for also zapping at a disapproving." }, { "end": 562, "start": 559, "text": " And if they wouldn't have, but we still zap anyway." }, { "end": 567, "start": 562, "text": " So for example, we might zap them so that they freeze and we can steal their berry." }, { "end": 575, "start": 567, "text": " If it's the case that other agents would not have zapped in that scenario, then we'll be penalized for zapping or punishing." }, { "end": 582, "start": 575, "text": " So we're not, we're rating a specific behavior and there's no notion of individual agents whatsoever." }, { "end": 588, "start": 582, "text": " Cool. Okay. And then there's, I guess there's no notion of like trust, explicit trust between, or I guess you would say," }, { "end": 597, "start": 588, "text": " if the agents trust each other's sanctioning data, then there is kind of that implicit trust between the agents that they can trust each other's sanctioning." 
}, { "end": 599, "start": 597, "text": " How would you describe the trust scenario here?" }, { "end": 603, "start": 599, "text": " Yeah. So the sanctioning are kind of ground truth events." }, { "end": 629, "start": 603, "text": " So we might imagine, so like to take a concrete example, you might imagine that every, every time an autonomous car gets honked at, that honking event is stored in some database that everyone is able to access and see and use that to develop their own model of what other agents in the scene approve or disapprove up." }, { "end": 644, "start": 629, "text": " So I guess the trust component of it comes into the fact that it's not explicitly modeled, but the trust thing is knowing that in a, the other agents in the scene are also kind of behaving in the appropriate way." }, { "end": 660, "start": 644, "text": " So I'm saying that if I'm spending time appropriately coloring the berries so that we all get the right, get increased reward in the future, the other agents in the scene are also doing trust kind of operates in that manner, but there's no explicit representation of trust." }, { "end": 668, "start": 660, "text": " Cool. So I love your example of using honks as the labels, right? That would be, it would be that would be a sanctioning event." }, { "end": 680, "start": 668, "text": " So what is it true you'd have only, you'd kind of have only positive labels like, do you have any indication of when, do you assume that when honks are not happening that everything is kind of going well?" }, { "end": 689, "start": 680, "text": " Yeah, basically every non honk event is an explicit approval. So you know, if no one hooked it, you probably, that was a fine behavior." }, { "end": 701, "start": 689, "text": " Yeah, but, but you know, this is a, yeah, so that's that's where your positive and negative labels come from." }, { "end": 707, "start": 701, "text": " Would you consider this kind of like a type of centralization in a sense because they're not being trained together, but because their reward is kind of coupled in that way, I guess you have the centralized database of sanctioning events." }, { "end": 710, "start": 707, "text": " Is that right? Is that how the centralization comes in?" }, { "end": 721, "start": 710, "text": " So that you could kind of think of this as a central system, right? So there's, there's a couple pieces in which it's centralized one everyone in the in the system has the same reward function, right?" }, { "end": 729, "start": 721, "text": " They have the same incentive to obey the group norm, the same incentive to not disobey the group norm." }, { "end": 737, "start": 729, "text": " So that's kind of a piece of centralization. And yes, there is this shared buffer that we can all access of the sanctioning events." }, { "end": 747, "start": 737, "text": " There is an ablation in the paper where we ask the question, okay, can I learn what the group norm is just from my own observations? So, right, I" }, { "end": 758, "start": 747, "text": " move around in the world occasionally I get sanctioned. And if we pretend for a second, but I could kind of place myself into the shoes of the person who sanctioned me, which is" }, { "end": 768, "start": 758, "text": " maybe technically feasible and certainly feasible in these grid worlds, then I can see what I did that caused them to sanction me." 
}, { "end": 774, "start": 768, "text": " And I can learn on those those behaviors and then I could take every set of events where I wasn't sanctioned and view those as approved." }, { "end": 783, "start": 774, "text": " So you could also do this in a not centralized way. And we found that that actually works pretty well too in these grid world examples." }, { "end": 794, "start": 783, "text": " Cool. Okay. So do you see, I guess you see this type of technique potentially being useful in the real world in the transportation setting as you described?" }, { "end": 802, "start": 794, "text": " You know, it's a bit of always there's always a big gap between these methods in a paper and something that's, you know, readily useful." }, { "end": 824, "start": 802, "text": " But I was very enamored of this idea of kind of a public database sanctioning and non sanctioning events as a way of, you know, discovering what local social norms are in driving settings and allowing agents to kind of adapt to them, you know, certainly from city to city, there's there's huge variation in how people drive." }, { "end": 847, "start": 824, "text": " And what the norms are around that. And so this isn't something really that the techniques in here cover, but you could imagine that these sanctioning events or something that these companies are willing to share since it usually, since they're kind of incentivized to have other people obey what their drivers think was was like bad behavior." }, { "end": 855, "start": 847, "text": " And so you could, you could maybe imagine constructing these databases and then training agents to obey like local social norms in different settings." }, { "end": 866, "start": 855, "text": " I think as a driver, it would be so satisfying to be able to label bad behavior on the road because when someone cuts me off or makes a dangerous maneuver, like I have, there's nothing like there's pretty much nothing I can do." }, { "end": 875, "start": 866, "text": " I mean, I'll get them. They ignore it. Are you going to call the police for someone running a red light? Like it's just really there's no recourse, but just to have have some of the like that, I would just feel better." }, { "end": 880, "start": 875, "text": " I would just feel so much better to be like, I sanctioned that guy." }, { "end": 889, "start": 880, "text": " Yeah, I mean, you'd honk at them and then you know in the future, like there's going to be some autonomous car that's trained to obey your preferences in some way." }, { "end": 905, "start": 889, "text": " There's this there when I say that I realize that there is this feature of like mechanism gaping. So, you know, you have a very particular preference over how people drive. If you're honking constantly, you're going to, you know, overwhelm the buffer with your preferences." }, { "end": 914, "start": 905, "text": " This is not a mechanism that can't be gained. It's still potentially interesting and you can maybe think about ways to prevent this type of gaming." }, { "end": 927, "start": 914, "text": " For sure. And then you're you're coming about different driving styles that are in different cities is pretty interesting. Like I guess the whole idea of social norms to my mind, the last few years has really called into question some of our assumptions about social." }, { "end": 942, "start": 927, "text": " Like, you know, how the president of the US is supposed to act was some kind of norm before. 
And a lot of these norms are just going out the window and we're and then people are asking questions like, well, what, you know, who gets to define what is a social norm." }, { "end": 951, "start": 942, "text": " But I like that here, the social norm is kind of just defined by sort of on average what people, what people sanction. Is that right?" }, { "end": 967, "start": 951, "text": " I mean, yeah, I mean, that that is explicitly what it is. And so like, you know, the minute that these agents approve and punish and don't punish in accord with the way that the other agents also agree on. That's a social norm." }, { "end": 979, "start": 967, "text": " Right. It may be a bad one. It may be a good one. It may be totally neutral and have no effect. But like if we're all approving and disapproving in the same way, which is what this board function incentivizes you to do." }, { "end": 988, "start": 979, "text": " Then then then you have you have a social norm that's operative. Yeah, that's what a social norm is. It's like a belief that other others are going to treat me in a particular way." }, { "end": 997, "start": 988, "text": " If I act a certain way, we just yeah, we're just trying to give that to our agents here and hope that that leads to some better outcomes. Awesome. So any follow up work plan in this direction." }, { "end": 1014, "start": 997, "text": " Yeah, a little bit. So one thing we don't really touch on in this paper that I think is interesting is the set up of it is as I just said, like if you, if once agents start obeying or disobeying in accord with the classifier, if all the classifier is relatively similar, then you have a social norm." }, { "end": 1024, "start": 1014, "text": " But there's no operative mechanism here to distinguish between good social norms and bad social norms. So for example, if everyone is free writing." }, { "end": 1032, "start": 1024, "text": " And I punish any agent that tries to do something good. If that's our consensus, like any time agent tries to not free ride, we get punished." }, { "end": 1039, "start": 1032, "text": " That's also a social in this context. It's a terrible one. But it is a social." }, { "end": 1049, "start": 1039, "text": " And so one thing we don't touch on in this work is like, how do we get this mechanism to select good social norms rather than bad social norms." }, { "end": 1061, "start": 1049, "text": " So, you know, one kind of surface answer is you can have like a group selection process, right? So now there are multiple sets of agents that are learning." }, { "end": 1070, "start": 1061, "text": " And so, the only ones that have higher reward continue to survive some kind of like evolutionary sense. And that's going to select for good social norms that are bad ones." }, { "end": 1081, "start": 1070, "text": " But I think there's an interesting question on maybe how we can shape the learning dynamics so that it preferentially selects good social norms in the first place over bad ones like everyone free writes." }, { "end": 1093, "start": 1081, "text": " Okay, it's coming to mind where there's some drivers who are just so impatient and they'll honk if I'm slowing down for somebody, which I think is a good I should be doing, but they're still honking me. So just sanctioning me for for actually what I think of as good behavior." }, { "end": 1095, "start": 1093, "text": " So maybe that's an example of what you're talking about." 
}, { "end": 1104, "start": 1095, "text": " Yeah, I mean, we could all like you get sanctioned by someone for slowing down. And maybe you learned to pick up that behavior too." }, { "end": 1113, "start": 1104, "text": " And in time, we all live in this consensus norm where like everyone is driving really dangerously anyone who drives safely is honked at." }, { "end": 1121, "start": 1113, "text": " And that's just the consensus. It would be terrible, but like that could be the emergent social norm, you know, maybe in some cities already is." }, { "end": 1133, "start": 1121, "text": " So you have a number of papers on traffic simulation. Can you help us get oriented with the general idea here? What kind of general problems are you looking to tackle with with simulation?" }, { "end": 1148, "start": 1133, "text": " So if this kind of really exciting under the radar thing, I think that has happened, which is that, you know, like while we all talked about full self driving and things like that, like our highways started to become partially automated." }, { "end": 1154, "start": 1148, "text": " So there are lots of level two cruise controllers that do lane keeping and distance keep." }, { "end": 1174, "start": 1154, "text": " You don't have to pay attention to keep your hands on the wheel, but you know, they they're doing this, this automated driving. And in some areas, the perpetration of these level two things, I'm going to call them automated cars, but I mean cruise control is really in like automated cruise control where these automated cars get, you know, three to four percent on a good day." }, { "end": 1189, "start": 1174, "text": " This increasing automation of our highways that is happening right now, we call this mixed autonomy traffic. So there's a small amount of autonomous cars. There's a huge number of human drivers and they're all kind of operating together." }, { "end": 1209, "start": 1189, "text": " And this is a really exciting opportunity because we can kind of decide what outcomes we want from the types of cruise controllers that are implemented. So every car company has some type of cruise controller and these cruise controllers have some to be determined impact on traffic." }, { "end": 1232, "start": 1209, "text": " You can also choose to rewrite those cruise controllers in different ways that maybe optimize the throughput of highway or optimize the energy efficiency of a highway. And so that's that's really the type of problem that I've been trying to tackle over the course of my PC is like, how should we select good cruise controllers that optimize some some desired traffic metric." }, { "end": 1246, "start": 1232, "text": " And so it's a really exciting opportunity because those cruise controllers are already there. They're already deployed. We could change the parameters of that cruise controller tomorrow. It's just a matter of will to do so and having a good good controller to use." }, { "end": 1258, "start": 1246, "text": " So I gather you use a simulator called sumo and then you developed a system called flow. Can you tell us a bit about about sumo to start with like what is what is sumo doing? What is it model? What is the abstract away?" }, { "end": 1272, "start": 1258, "text": " Simulation. Yeah. Is this an a little more detail? Yeah. So sumo is this wonderful open source micro simulation platform, which generally does traffic at the micro level. So there are a lot of different levels. You can we can you can model traffic." 
}, { "end": 1294, "start": 1272, "text": " So you can model it at the level of flows between origins and destinations. You can model it at the level of like individual traffic links. And then you can go all the way down to the bottom and like model the behavior of individual drivers generally the flow models these drivers at the level of like of not like really fine grain automated driving using LiDAR." }, { "end": 1306, "start": 1294, "text": " But more like just kind of distance keep to the car in front of you and occasional lane changing behavior, but it is a micro model. It's modeling the behavior of individual drivers in response to the scene around them." }, { "end": 1319, "start": 1306, "text": " And so it lets us investigate questions about, you know, if I change the behavior of a fraction of these drivers, how do the other drivers respond? What is the impact on some some relevant traffic metrics and so on." }, { "end": 1334, "start": 1319, "text": " So that that's what flow does. It's it's a wonderful tool. The developers of it have been working on it for I don't know 10 plus years now and are super responsive. So my whole PhD is built on it. So I wanted to give appropriate credit to those folks for doing this wonderful open source work." }, { "end": 1342, "start": 1334, "text": " Awesome. Okay. And then you've developed a system called flow. I gather. Can you tell us a little bit more about that one. Yeah. So I have to pause here." }, { "end": 1364, "start": 1342, "text": " Because it's flow is developed initially by Kathy Wu and an amazing team of undergrads. Kathy was now a professor at MIT. And then Abdul Rahman, and I extended it a lot again in collaboration with a huge team of wonderful undergrads and master students just to give credit where credits do." }, { "end": 1380, "start": 1364, "text": " So flow is a library built atop sumo that provides a Python interface and some pre-built building blocks for investigating the effect of autonomous vehicles and modified cruise controllers on traffic." }, { "end": 1400, "start": 1380, "text": " So we have built out this like big set of pre existing networks that you can play around with. So like like there's a ring of traffic. There's a toy intersection. Then there's like a kind of mini model of Manhattan that you can scale up and down a model of the San Francisco, clean, baby bridge." }, { "end": 1424, "start": 1400, "text": " And then we're about to release models of a section of the I to 10 in Los Angeles and the I 24 in Tennessee. And so what we try to do is make it easy for existing machine learning practitioners to kind of play with and modify sumo using tools that they already know how to do and reinforcement learning libraries that they are comfortable using." }, { "end": 1440, "start": 1424, "text": " So yeah, flow just lets you kind of easily modify these networks insert new different types of cruise controllers run RL algorithms design new reward functions and build new networks without having to go into sumo and modify it in ways that might be maybe harder for users." }, { "end": 1450, "start": 1440, "text": " Cool. And then I'm not sure if you heard the episode where we had professor Shimon Whiteson founded latent logic, which I gather models him and drive driver behavior." }, { "end": 1460, "start": 1450, "text": " And that company was acquired by wayma. Yeah, we didn't really discuss with him too much about the details of that, but I gather that modeling him and driver behavior is pretty deep topic." 
}, { "end": 1469, "start": 1460, "text": " And I guess other other simulators that that do this in more in in different levels or are they pretty similar or or how might they might they differ." }, { "end": 1484, "start": 1469, "text": " Yeah, so there's a lot of variation here. So I mean huge fed for professor Whiteson's work. They, but I assume that he and Wabo are modeling these as much more fine grained controllers." }, { "end": 1501, "start": 1484, "text": " So, you know, at the level of like small turns and how do I go around a corner and taken on ramp. Sumo is modeling the cars are basically on rails. They can they can move forwards and backwards and they'll stay in their lane." }, { "end": 1511, "start": 1501, "text": " No matter what you do. So it's not it's not a matter of optimizing like really small level individual decision making." }, { "end": 1526, "start": 1511, "text": " But there is a really large existing literature on modeling driving at these levels. So there's a particular popular model called the intelligent driver model, which models the human driver as an ordinary differential equation." }, { "end": 1544, "start": 1526, "text": " And you know it takes in you know distance the lead car speed of the lead car speed of yourself and converts that into an expected acceleration modulated by some noise. And so this is kind of the level that that these simulators that we use are operating at." }, { "end": 1558, "start": 1544, "text": " You know there are there are however you know different ways to build that driver model. So you know there's there's other driver models that people use like optimal velocity models and things." }, { "end": 1564, "start": 1558, "text": " But so it's a it's at like one level of abstraction above what they might be doing at headway mode." }, { "end": 1574, "start": 1564, "text": " And there are a lot of other simulators, Aimson, Vissib, none of which we use we have a we you know we really like that sumo is open source." }, { "end": 1578, "start": 1574, "text": " I'm not super interested in releasing tools that can't be used by other people." }, { "end": 1587, "start": 1578, "text": " So we've we've primarily played with sumo, but there are other simulators we've thought about that maybe operated a slightly more fine grain level." }, { "end": 1600, "start": 1587, "text": " So this is where generally interested in traffic and transportation through what we want to simulate as fast as possible. And so this is the the lowest level we can simulate at at a reasonable speed." }, { "end": 1604, "start": 1600, "text": " Without giving up the kind of micro dynamics that we care about." }, { "end": 1613, "start": 1604, "text": " Okay, that makes total sense. So let's jump into your next paper here. This is Lagrangian control through D. Barrel applications to bottleneck de congestion." }, { "end": 1620, "start": 1613, "text": " So I think that's the first author yourself at all in 2018. Can you give us a brief idea of what's going on in this paper, Eugene?" }, { "end": 1627, "start": 1620, "text": " Yeah, so I believe if I remember correctly that canad pervata is also a co first author on this paper." }, { "end": 1635, "start": 1627, "text": " So what's happening here is we wanted to think about the San Francisco Oakland Bay Bridge has a set of traffic lights on it." }, { "end": 1645, "start": 1635, "text": " Traffic lights turn on when whenever it gets too congested and the goal is to mitigate this thing called capacity drop." 
}, { "end": 1661, "start": 1645, "text": " So as if you think about the inflow outflow relation, so the number of cars that go into a network and the number of cars that go out when the inflow is small, there's like a linear relationship between the inflow and the outflow as many cars go in as come out." }, { "end": 1672, "start": 1661, "text": " So you kind of increase that when you have a bottleneck. So some number of lanes that constrict to a smaller number of lanes above a certain inflow." }, { "end": 1682, "start": 1672, "text": " The outflow will actually start to drop off that linear relationship is broken. And so you get less outflow than inflow and you can start to get this build up at the bottleneck." }, { "end": 1692, "start": 1682, "text": " So if you're in this linear regime, you will have say traffic lights that prevent the inflow from ever getting too large so that you never get this bottleneck performing." }, { "end": 1707, "start": 1692, "text": " So what I'm thinking about is thinking about if there was a way to kind of replace that traffic light with kind of mobile traffic lights. So let the AVs act as kind of adaptive traffic lights." }, { "end": 1720, "start": 1707, "text": " Look at the flow around them determine whether their lane should go or not go and then use that to kind of keep the inflow in the appropriate regime without having to say." }, { "end": 1731, "start": 1720, "text": " Have the cost of building another set of traffic lights. So maybe you don't deploy this on the bay bridge where there's already the light, but maybe you can deploy this at other known bottlenecks without having to build new expensive infrastructure." }, { "end": 1745, "start": 1731, "text": " And what we found was that you know this this actually worked remarkably well. The cars could almost equal the performance of it pre existing bottleneck at a 10% penetration rate, which is a bit above where we are now." }, { "end": 1751, "start": 1745, "text": " But yeah, that's the idea is like how can we view autonomous cars as kind of mobile traffic lights." }, { "end": 1756, "start": 1751, "text": " And so this is centralized control unlike your previous papers that right?" }, { "end": 1766, "start": 1756, "text": " Yeah, so this is kind of a history of deep RL libraries in a nutshell. There was not a prominent existing multi agent RL library at the time." }, { "end": 1775, "start": 1766, "text": " So even though we wanted to do this in decentralized settings, this was basically you know one of the first RL papers we wrote so we're still figuring things out." }, { "end": 1784, "start": 1775, "text": " And we didn't feel ready to write our own RL library. And so it's it's in a centralized setting. Yeah, so all the cars are that we take the network and we cut it up into bits." }, { "end": 1790, "start": 1784, "text": " And then the centralized network makes decisions about what all the cars and a given chunk are going to do." }, { "end": 1801, "start": 1790, "text": " If you're in that chunk, every car in that chunk will make the same decision. This works out okay because mostly the decision you're making is I'm right in front of the bottleneck does the set of cars behind me go or not go." }, { "end": 1806, "start": 1801, "text": " And so like if you go every car behind you should probably also go. You don't give too much up here." }, { "end": 1814, "start": 1806, "text": " And then on the terminology, what is meant by Lagrangian control? 
I gathered that it's in contrast to Eulerian control, but I don't actually know what either of those terms mean." }, { "end": 1827, "start": 1814, "text": " Yeah, so this is kind of a suggestion on the part of my PI. So the Lagrangian setting is when you switch from viewing the thing as a flow to looking at, like, if you're talking about a fluid," }, { "end": 1832, "start": 1827, "text": " the Lagrangian view is when you're looking as though you're riding atop an individual particle." }, { "end": 1844, "start": 1832, "text": " And so here we've moved from this setting where the lights are looking at the total flow to looking at an individual vehicle within the flow as a controller." }, { "end": 1850, "start": 1844, "text": " And so we call that Lagrangian, but there's not something particularly deep there, except that we are saying" }, { "end": 1854, "start": 1850, "text": " it is the individual elements of that flow that are making the control decisions." }, { "end": 1865, "start": 1854, "text": " And so do you have some idea of what the maximum flow rate is that you could achieve with the most perfect controller, or is that super obvious given the nature of the outflow curve?" }, { "end": 1881, "start": 1865, "text": " There is an obvious and a non-obvious answer. So if you assume that the merging dynamics are conventional human merging dynamics, then you know that the best outflow is exactly wherever that inflow-outflow curve starts to drop off." }, { "end": 1885, "start": 1881, "text": " So you want to be exactly right before that inflow-outflow curve drops off." }, { "end": 1892, "start": 1885, "text": " You can't do better than that, because if you go to higher inflows, the outflow starts to go down. And so you're losing out." }, { "end": 1903, "start": 1892, "text": " But if you go beyond that and you start to think about the AVs having different merging dynamics than the humans, then you can start to think about the point at which that curve drops off moving to higher inflows." }, { "end": 1913, "start": 1903, "text": " So if the AVs can somehow merge more efficiently, or kind of engineer the dynamics of the humans around them as well, then you can start to go to higher inflows." }, { "end": 1919, "start": 1913, "text": " But without that, you do know where the cap is. It's exactly wherever that curve starts to drop off." }, { "end": 1927, "start": 1919, "text": " And then on the centralization versus decentralization question, is it crazy to think about centralized solutions?" }, { "end": 1941, "start": 1927, "text": " I get that decentralized solutions are in a way more general, and safer in a sense, if ever the communication system goes down, but it seems like we could be giving up a lot by not considering a centralized solution." }, { "end": 1950, "start": 1941, "text": " Like, it seems like if I had a choice of both, it would be simpler to centralize it. And I might expect better results if we could centralize." }, { "end": 1956, "start": 1950, "text": " Does anyone consider that as a possibility or is that just discarded on the face of it?" }, { "end": 1965, "start": 1956, "text": " So there's a bit of an open research question here. So in some follow-up work we did examine a decentralized solution."
}, { "end": 1971, "start": 1965, "text": " And we found that that decentralized solution outperform the centralized solution like it had better throughput." }, { "end": 1976, "start": 1971, "text": " And is that that next this next paper they optimizing mixed autonomy traffic law?" }, { "end": 1981, "start": 1976, "text": " Yeah, yeah, yeah. So yes, so he can kind of fuse this discussion." }, { "end": 1991, "start": 1981, "text": " Yeah, so we we the folks at RLib built this lovely multi agent RL library that we started using for this problem." }, { "end": 1999, "start": 1991, "text": " And so what we found was that if we did this in decentralized fashion, so each agent they all share the same controller." }, { "end": 2004, "start": 1999, "text": " But they're all making their decisions locally. Then we would outperform the throughput of the centralized controller." }, { "end": 2011, "start": 2004, "text": " Now in like a strict technical sense like this is not correct like a centralized solution should always do better than a decentralized solution." }, { "end": 2022, "start": 2011, "text": " But they're they're associated optimization problems, right? So with the centralized solution, we like had to cut up the network into these little chunks that restricted the expressiveness of our policy." }, { "end": 2026, "start": 2022, "text": " And then you know, also there's kind of this like cursive dimensionality issue." }, { "end": 2036, "start": 2026, "text": " So the centralized solution had some like 20 or 40 actions, you know, is making a decision for each of these chunks and that you know makes exploration harder in some ways." }, { "end": 2041, "start": 2036, "text": " So while technically a centralized solution should do better than a decentralized solution." }, { "end": 2053, "start": 2041, "text": " In this case, our decentralized controllers did better, but that doesn't I think it is possible for some of us to sit down and come up with a centralized solution that does better than our decentralized controllers." }, { "end": 2058, "start": 2053, "text": " And so given that's true, you could think about deploying centralized solutions." }, { "end": 2067, "start": 2058, "text": " But when you think about centralization, the issue becomes who is in charge of writing those centralized controllers, right?" }, { "end": 2077, "start": 2067, "text": " So, okay, so now you think about having some rule where once you get close to a bottleneck, some you pass up control to some computer sitting nearby there." }, { "end": 2085, "start": 2077, "text": " And that computer decides your set of actions as you try to pass through that network. But now you got asked like, who wrote that controller?" }, { "end": 2090, "start": 2085, "text": " How do you get the drivers to agree that they should see control to that controller?" }, { "end": 2100, "start": 2090, "text": " It feels like the government starts again involved here. And you know, that runs into like questions of policy that I don't quite know how to answer." }, { "end": 2110, "start": 2100, "text": " Yeah, if we can't trust them to do a lot simpler things than it might not be a great idea to trust them to do multi agent to centralized RL." }, { "end": 2118, "start": 2110, "text": " Yeah, I mean, I think it's possible. I just it's a big leap. But I mean, they do they do run the traffic lights." }, { "end": 2123, "start": 2118, "text": " Yeah, you know, they do run the traffic lights. So I don't like how they do that, to be honest." 
}, { "end": 2135, "start": 2123, "text": " Super inefficient. But you should come to New York City. We have we have an amazing green wave set up here. Oh, well, sometimes I was hit like 20, 20 green lights in a row. It's wonderful. Wow. Okay." }, { "end": 2140, "start": 2135, "text": " And you don't get the red wave. You know, maybe I have mostly hit the green wave." }, { "end": 2153, "start": 2140, "text": " I'm sure someone out there, some traffic engineer sitting there and be like, no, the New York City traffic lights are horrible. I could fix them if I had the chance. But it seems pretty good to me." }, { "end": 2166, "start": 2153, "text": " So I guess maybe somewhere between decentralization and centralization is maybe something where the cars are communicating. Like you can imagine scenario where all the tezels on the road have a back channel and they're sharing notes and they're maybe they're all making their decisions." }, { "end": 2177, "start": 2166, "text": " Maybe they're sharing some observations. Do you think that's do you think that's a feasible scenario? Yeah, this is this called CACC so cooperative autonomous control." }, { "end": 2189, "start": 2177, "text": " There's a ton of work on this. I don't know about specifically applied to this problem. I think it'd be some interesting follow up work for someone to do is to see what happens when you let the cars communicate with each other in this setting." }, { "end": 2203, "start": 2189, "text": " I think that it's possible. But it becomes challenging as you think about there being multiple actors, right? So right now if you are in Palo Alto, a lot of the cruise controllers that are operating or testless." }, { "end": 2214, "start": 2203, "text": " But as other car companies start to roll out their solutions. You kind of get this this cross conflict like are they are the company is going to agree to just to like coordinate their control together." }, { "end": 2222, "start": 2214, "text": " What happens if there are two people to two companies each of which are deploying those controllers, but those controllers don't mesh well when played together." }, { "end": 2240, "start": 2222, "text": " So there's there again runs into these kind of technical problems. This why I like the decentralized aspect of it is I think this control this decentralized controller you could deploy and basically ignore whatever analysis doing it doesn't matter what they do if you drive up near the front of the bottleneck and you stop and only go when you feel ready." }, { "end": 2258, "start": 2240, "text": " It doesn't matter what the person behind you is doing yeah okay cool makes sense so let's move to your next paper that we've partially mentioned that is optimizing mixed autonomy traffic flow with decentralized autonomous vehicles and multi agent RL first author yourself at all and that was 2020." }, { "end": 2262, "start": 2258, "text": " So can you walk us through the main idea with this paper." }, { "end": 2270, "start": 2262, "text": " Yeah and again, so to keep doing this to you but Nathan Lishley is the other joint co for a stop there." }, { "end": 2272, "start": 2270, "text": " Thanks I appreciate that." }, { "end": 2282, "start": 2272, "text": " Yeah so the idea of this paper is that we now there are these multi agent RL libraries that we can start using so we started looking at the fully decentralized solutions." 
}, { "end": 2306, "start": 2282, "text": " Maybe communication between the cars but each car is making its own decisions locally. So this is something that you could feasibly imagine actually deploying and we even look at you know like really low penetration rates so like four or five percent and see how well you can do in those in those low settings and we see that there is still some improvement over the human baseline." }, { "end": 2318, "start": 2306, "text": " So here all the cars are trained to jointly optimize the throughput so the reward function is the number of cars that got through the system." }, { "end": 2324, "start": 2318, "text": " But they're all making their own decisions locally to optimize that through but together." }, { "end": 2331, "start": 2324, "text": " Yeah and sorry what are the observing the cars in this case." }, { "end": 2341, "start": 2331, "text": " And this to be something that could genuinely employed so we use kind of a radar observation so they see cars in front of them." }, { "end": 2345, "start": 2341, "text": " So they'll see like the nearest cars in front of them in each lane." }, { "end": 2351, "start": 2345, "text": " It's a little unrealistic they're definitely settings where the radar would not return some of the cars that we return." }, { "end": 2354, "start": 2351, "text": " But yeah it's a radar observation space." }, { "end": 2364, "start": 2354, "text": " So they're actually seeing distance to lead cars speed." }, { "end": 2370, "start": 2364, "text": " I was just wondering if it's possible to kind of describe like the behavior of the policies that come out to the act a lot of humans or they do or they act in a very different way to how humans would it would behave." }, { "end": 2380, "start": 2370, "text": " They act they act nothing like humans so basically what they do is they drive up to right before the entrance to the bottleneck and then they'll kind of look at how many cars are in the bottleneck." }, { "end": 2390, "start": 2380, "text": " And one of the lanes will decide to go and it'll basically the the AV at the front of the lane will go and then the huge the humans behind it will follow it along into the bottleneck." }, { "end": 2403, "start": 2390, "text": " And then all the other AV is right in front of the bottleneck in the in the other adjacent lanes will not go they will wait until a sufficient until the first platoon has kind of gone part of the way through." }, { "end": 2419, "start": 2403, "text": " And then one of the other lanes will decide to go so it's kind of this like smooth thing where one lane goes then another lane goes then another lane goes where the platoon that left will be replaced with another AV that's blocking the entrance and getting the particular timings of those platoons correct as hard." }, { "end": 2431, "start": 2419, "text": " So you kind of want that as one platoon goes through when the second platoon starts it will also get through the bottleneck without kind of causing too many merge conflicts." }, { "end": 2442, "start": 2431, "text": " So you'll see them like stop and start and stop and start and then occasionally when congestion does occur in the bottleneck they'll all kind of wait until it's cleared out and then they'll start this process again." }, { "end": 2444, "start": 2442, "text": " Yeah, it's very inhuman." }, { "end": 2455, "start": 2444, "text": " Cool. Okay. 
And then, so they're kind of acting not selfishly, right? That's why they're able to do this, whereas humans are all looking out for their own personal reward." }, { "end": 2469, "start": 2455, "text": " I mean, yes and no. So they are trying to maximize this cooperative objective, but because when you avoid congestion everyone is better off, if humans were to do this they would have been better off too, right?" }, { "end": 2482, "start": 2469, "text": " This is a case where the Nash equilibrium and the social optimum are kind of not the same thing: everyone greedily just going right away is worse off than if they had waited a little bit and kind of tried to coordinate." }, { "end": 2501, "start": 2482, "text": " Okay, so I think you were saying that there's again no communication here, but that some of the sensors might be returning more than they would with a realistic sensor, so maybe that gap could be spanned by a bit of communication. How do you see the potential for communication in these kinds of situations?" }, { "end": 2521, "start": 2501, "text": " Yeah, so we looked at this a little bit, but it didn't make it into the publication. But you could imagine that cars nearby other cars broadcast signals, and what we were hoping to see was kind of the emergence of some kind of car language where they would pass information up the stream." }, { "end": 2544, "start": 2521, "text": " So there's this bottleneck where sometimes cars get congested, and the radar often can't see into that bottleneck, it's too far away. But you could imagine the cars in the bottleneck passing information to the car behind them about the state of the bottleneck, which then gets passed to the car behind that, and so this kind of global information would be communicated backwards up the flow." }, { "end": 2561, "start": 2544, "text": " So we played around with that a little bit. We didn't see anything exciting happening, but I think there's potential in a lot of these settings to think about what the language of cooperative autonomous cruise controllers might be and look like." }, { "end": 2586, "start": 2561, "text": " So all this stuff reminds me of a project that I did in grade seven, and in that project I wrote that cars could go at full speed through intersections and they wouldn't need traffic lights, and they could even do it in the dark, as long as they were properly coordinated like a zipper. And I showed my dad, and his first comment was like, yeah, but what happens when someone gets a flat tire, or if a car breaks down, then there's going to be a problem." }, { "end": 2607, "start": 2586, "text": " So do you think that those kinds of issues are going to be key to handle? I mean, I understand this is preliminary work, and it's not a criticism to say you should handle every single detail, but I wonder to what extent those types of unexpected events would make a difference in models like this." }, { "end": 2626, "start": 2607, "text": " Yeah, so I think for things like maximizing intersection throughput these kinds of safety-critical things are really key, but they're less key for things like this, where if I drive up to the intersection and come to a stop, it doesn't matter what the people around me do, and a lot of the things that we build are kind of robust to these issues."
}, { "end": 2647, "start": 2626, "text": " The one that does concern me and that I don't know how to model and that we thought about a lot but we don't know how to model is like this summer we're doing this this big project where we take some of these cruise controllers that we built put them on the roadway and we try to have them smooth waves and improve the energy efficiency of traffic this is this big project with something called the circles consortium," }, { "end": 2655, "start": 2647, "text": " which partners from Vanderbilt and Rutgers and the Tennessee Department transportation and all sorts of folks." }, { "end": 2684, "start": 2655, "text": " And what we don't know is how people will adapt and respond to this non human behavior and this isn't something we can we do in our papers either so you know some of these car cruise controllers keep larger gaps than humans are used to and they could respond to this in all sorts of unusual ways they could start lane changing more often than normal they could just become angry and start tailgating really aggressively or you know there's all sorts of ways that humans can respond to these non natural behaviors." }, { "end": 2703, "start": 2684, "text": " And we don't know how to model and we we occasionally try to model this by just like letting the human driver model to like a best response to our current controller so you know you take your controller you optimize it then you take your human driver model and you like optimize its parameters is the best response to that assuming that humans operates" }, { "end": 2721, "start": 2703, "text": " obviously and then you go back and forth but it's kind of an open question for us is like what happens if the humans around you get annoyed or change their behavior in response to these non human driving types cool OK so let's move on to another paper that I just saw on Twitter today that you published" }, { "end": 2735, "start": 2721, "text": " which is a nice surprise hot off the press on archive so this is the surprising effectiveness of PPO in cooperative multi agent games and that is by you at all with yourself as a co author is that correct." }, { "end": 2758, "start": 2735, "text": " Yeah so the the genesis of this this paper is that in a lot of the works that we've done we've always used multi agent PPO like for everything the reason being we've never been able to get some of the off policy methods that are popular in the multi agent RL literature to work." }, { "end": 2787, "start": 2758, "text": " That's not to say that they can't we just haven't been able to get them to work so we've kind of used PPO as a workhorse for everything and I was talking to you about this and he he mentioned that you know at open AI a lot of their stuff had also been using PPO and they were kind of puzzled why this algorithm was not more more popular in literature so I had a very excellent undergrad at Cache Bay Lou and you had a student show you." }, { "end": 2816, "start": 2787, "text": " And we asked them to kind of start start looking into this and trying to build some benchmarks for these on policy methods compared to off policy methods and modulo the fact that we don't have a standard of evidence in RL what we found was that the sample complexity of these on policy methods in in three benchmarks so multi particle worlds star craft this star craft multi agent challenge and Hanabi that these on policy methods." 
}, { "end": 2845, "start": 2816, "text": " On policy methods got basically the same performance as the off policy methods in in similar sample complexity similar number of samples to get to that performance and we found this you know really surprising because I think that in in the single age of literature the conventional wisdom is that these off policy methods are good deal more sample efficient then then the on policy methods and I think that's true or at least in my experience has been true but we were not finding." }, { "end": 2874, "start": 2845, "text": " This in this in this multi agent setting and I mean a piece of this might be that you know the the the sense in which you get off policy happens a lot faster in the multi agent setting you know so like everyone's policies are changing so old samples become stale faster in some way we were able to provide kind of empirical support for that hypothesis but but we definitely yeah we feel pretty confident about this statement that the ppmeth is performed really well in these benchmarks cool" }, { "end": 2901, "start": 2874, "text": " do you mind just briefly describing how multi agent pp o differs from standard pp o how does it handle multi agents yeah it's it's quite quite straightforward so instead of the value function just looking at your state it takes in the state of all agents in the in the scene and so that lets you get more accurate value estimates so it's like a centralized critic yeah it is a centralized critic we have a centralized critic and decentralized" }, { "end": 2930, "start": 2901, "text": " active so trade centralized act decentralized but I mean another another thing that we found somewhat surprising is that at least in the starcraft multi agent challenge it didn't really seem to matter very much like the the not using a centralized critic and just using straight up normal pp o for all the agents also performed very well I mean I guess from the way I always think about it is the the off policy stuff has a huge advantage of being able to use that we might have and how would we ever" }, { "end": 2959, "start": 2930, "text": " deploy something that was on policy if it wasn't performing pretty well from the get go so I guess if you have a simulator then then either one is equal equally feasible but in so many cases in the real world I I'm not sure if on policy would be ever be realistic what do you think about that yeah I think that that's a good point so I certainly think it in single agent settings like definitely off policy stuff is is going to win out you know in the a lot of the time we have a" }, { "end": 2988, "start": 2959, "text": " simulator it's like pretty rare to me imagine a circumstance where we're genuinely thinking about an agent that learns in the world without some simulator pre training phase that is probably a controversial statement I think this will not always be true like we will definitely get to the point where we're training methods online in the real world but but at the moment you often have a simulator phase and I mean at least partially motivated by safety reasons you want to start by testing your stuff had simulation so you have that simulator" }, { "end": 3017, "start": 2988, "text": " I do think that if you if you wanted to deploy in the real world directly and learn there then you definitely should be thinking about off policy methods that well OK let me roll back slightly so if you were trying to deploy a multi agent system in the real world given the statement I made about on policy 
multi-agent RL having similar sample complexity to off-policy multi-agent RL in the benchmarks we looked at, then you probably should feel as comfortable as you can" }, { "end": 3046, "start": 3017, "text": " probably should feel as comfortable deploying either one; they're going to have similar sample complexity. Cool. So do you see yourself following this up? It sounds like there's a bunch of open questions to follow up on here. Do you see yourself pursuing that? Yeah, I mean, the real open question to us is why, you know, how do we quantify this reason that the off-policy methods do not seem to be working as well? Is there some notion of staleness that we can examine? And then, we looked at this question in one really specific setting." }, { "end": 3074, "start": 3046, "text": " So this is fully cooperative problems with discrete action spaces, so it's fully plausible that these statements I made are not true in other settings, and I'd like to know whether they are or are not, so there should be some follow-up work on that question. And then, just going back to some things we talked about earlier with respect to these sequential social dilemmas: you know, Natasha and Joel's work on this was one of the inspirations for me starting up this show." }, { "end": 3085, "start": 3074, "text": " I found it so fascinating, the whole question of how do we solve these sequential social dilemmas." }, { "end": 3110, "start": 3085, "text": " These social dilemmas are a major problem in the world today in all sorts of contexts, and can any of this work ultimately help us solve them in the real world? In terms of, you know, free riders: it could be with respect to climate change, you know, if some nations don't sign an accord they get a free ride on the emissions," }, { "end": 3138, "start": 3110, "text": " and, you know, anti-vaxxers are kind of getting a free ride on everyone else being vaccinated. This shows up everywhere, and some days I think it may be the central question of our time, how do we solve these social dilemmas. So I guess my question here is, do you see any of these lines of work helping us deal with the social dilemmas that are, admittedly, a lot more complicated" }, { "end": 3149, "start": 3138, "text": " than the ones obviously tackled in simulation so far? But do you see them ever getting to the point where they might really help us solve these really thorny problems in the real world?" }, { "end": 3161, "start": 3149, "text": " Wow, what a question. You know, anyone listening, I'm going to spitball here; I'm less epistemically confident about my answers on this than anything else." }, { "end": 3174, "start": 3161, "text": " So there are a lot of thorny pieces of this puzzle, right? So the first thing is, you could think about using these methods to do things like incentive design, right? Like, what are the appropriate" }, { "end": 3190, "start": 3174, "text": " incentives to kind of push humans away from this. And you could also think about things that I'm quite interested in that are kind of like AI mediation, so like how can we"
}, { "end": 3206, "start": 3190, "text": " Modify clusters of humans so that they're connected to the right people such that they start to like move towards the outcomes that they they actually want for themselves for their society and so on." }, { "end": 3229, "start": 3206, "text": " And a lot of this though you know like you're not this this goes back to the sample it if you see where all you're not going to do this online you're going to do this at least a start in simulation and so now there's this this piece of how do you build models of human beings and how they react interventions and how they react to chat box attempting interventions and modifications on their social network graph and so on." }, { "end": 3247, "start": 3229, "text": " And lately there's been a lot of work building kind of like LSTM like models about humans are going to respond to things that that have worked much better than I thought they would have I would I would have guessed that some some human responses are quite hard to model." }, { "end": 3276, "start": 3247, "text": " But yeah it really the real blockers like how do you build models of human beings such that you can then begin to study interventions on their behavior but I think it's a really promising area and you know it's it's really well we're all I think like kind of pushing to is trying to get better equilibrium to emerge then then currently do yeah absolutely I mean I think one thing that Natasha Jake's this paper showed on on socials lm's was." }, { "end": 3295, "start": 3276, "text": " So I was sorry on social influence was that social influence can be a helpful intrinsic motivation to help agents solve these collective action problems so the whole idea that I'm." }, { "end": 3311, "start": 3295, "text": " That if the agents can influence each other then they can work together as a group and that that kind of seems like intuitively true and maybe a bit obvious and I get that you know are the state of our all today." }, { "end": 3323, "start": 3311, "text": " And the state of our simulations is such that those kind of questions are more tractable than then then simulations that are trying to be extremely detailed with human behavior." }, { "end": 3352, "start": 3323, "text": " But I can't help but wonder if there's something inherent in game theory that can help us help us find our way through some of these messes that we're in right now and some of that might have to do with human behavior or some of it maybe some of it is just pure game theory stuff like you say like if you design the mechanisms such then game theory may tell us that we'll have a better time finding a better equilibrium." }, { "end": 3374, "start": 3352, "text": " I understand that this question is a very vague but the it comes up for me every time we touch on game theory in this podcast and every time we touch on social dilemmas which seems to be a lot more often than expected partly because if it's I think it's interesting and and as I say I think it could be central question of our time whether we get through all this." }, { "end": 3384, "start": 3374, "text": " Some of these tools seem relevant and you know some top researchers when deep mind describes what they're doing and they're and their hopes and aspirations." 
}, { "end": 3399, "start": 3384, "text": " They do talk about using AI to solve the most important problems of our time and and to me this is this is maybe one of them so so sorry to to drop this on you without it wasn't even in the notes but I couldn't help it." }, { "end": 3428, "start": 3399, "text": " Yeah no I like as long as you're comfortable with me spitballing like I'm very happy to talk about this I think you know I love ambitious ambitious research you know I think it's good for people to say like I want my research to tackle this like impossibly hard problem we have no idea how to do like that's good that's it's good to want to to your research to be useful and productive and." }, { "end": 3438, "start": 3428, "text": " And you know you shouldn't feel too bad about stating ambitious things like that sorry bit of a segue." }, { "end": 3455, "start": 3438, "text": " So yeah so I think I think as far as like things like Natasha's paper go which which I really love like that that's a good example of like building in some kind of like structural priors into into multi agent reinforcement learning right you know like." }, { "end": 3484, "start": 3455, "text": " You kind of have have some you know priors in the environment like agents influence other agents there bit of bits of your actions based that don't influence other agents and then these are two separate things and and maybe maybe like building in more priors like what Natasha did there is is part of the path towards more sample efficient multi agent reinforcement learning which is I think a key challenge a free enforcement learning is inefficient multi agent reinforcement learning is like 10 times as inefficient." }, { "end": 3499, "start": 3484, "text": " But yeah I think one one kind of promising opportunity in this direction is not you know you building models of human beings is hard but what you can do is you can." }, { "end": 3513, "start": 3499, "text": " Set up the problem you care about and then train train a diverse array of agents to solve it and hopefully like human behavior is somewhere within that superset and then you can kind of." }, { "end": 3541, "start": 3513, "text": " Refine and pick out the agents that you want that like coordinate well with the humans that you care about so like maybe maybe modeling humans is too hard but maybe getting human behavior to be included in the set of agent behaviors you generate is not impossible and I think that that's kind of a promising direction is just like methods for training really diverse sets of agents that then you can you can then select from." }, { "end": 3567, "start": 3541, "text": " And I think there's some work doing that in overcooked that I have seen and I know that sorry what was that phrase overcooked yeah overcooked so there's like overcooked is this I guess I've never played it but I think a phone game where you collaborate with some other folks to try cook meal of some sort." }, { "end": 3596, "start": 3567, "text": " And there's been some amount of work on like how to generate partners that coordinate well with humans and that and similarly Jacob forster has some some work on like how to generate agents that zero shot coordinate well with human beings in Hanabi and so I think like the moral community starting to think about this question of how do I you know generate agents that can interface with humans correctly even though I cannot ever train with humans so I do think we're starting to look at this question." 
}, { "end": 3606, "start": 3596, "text": " Cool. Okay. So what else is going on or the things going on in RL or maybe your your areas of interest that that you find really interesting outside of your your own work." }, { "end": 3618, "start": 3606, "text": " Oh my God so many things so so many things. I've really liked of late like a lot of the model based RL work that's been happening model based systems are nice." }, { "end": 3645, "start": 3618, "text": " You know we already have physical models of a lot of things we've done. I don't know maybe a couple hundred years worth of studying physics and so it's always you know bothered me that model free methods are so prominent and I think this is probably shared by the ton of folks but it's really seems like the model based RL methods are taking off and doing really well and so I'm personally looking forward to to playing with those and seeing seeing how well they work." }, { "end": 3668, "start": 3645, "text": " Maybe looking at you know some extensions of that into the multi agent domain and you know I think I am hopefully starting to see more things where RL is turning out to be useful for some actual application there you know there's the the chip design paper from Google there was this nice presentation." }, { "end": 3697, "start": 3668, "text": " I think at nirips on designing agents that could like pilot hydrophoil so that you could then do this like co optimization where you design a new hydrophoil then you have the agents like pilot the hydrofoil and then you can use that to like continually optimize the the the boat design and so like I'm very interested in trying to see where our health can actually be used to like create some real gains today and the the corresponding like the other side of that is like I become very interested in." }, { "end": 3725, "start": 3697, "text": " Like robustness of RL controllers because if you like look if you look at an RL paper and you look at like the deviation of the I mean this is this is this is separate from actually a little separate from Russ but if you look at like the deviation of the results like you'll have like a mojo co hopper that like 80% of the time gets 10,000 reward and 20% of the time just falls on its face that's really not what you want to do." }, { "end": 3748, "start": 3725, "text": " And so yeah thinking about different ways to enable robustness with regards to kind of like uncertainty in the model of the system or uncertainty with respect to the behavior of the other agents in the system is something like you know hopefully you'll be seeking more work for me some work for me in the future." }, { "end": 3762, "start": 3748, "text": " I think that that stuff is really promising cool I look forward to to reading about what you do there so what do you see yourself speaking of that what is your self doing in the next few years you do you think you'll be continuing your the themes that we've talked about of your work so far." }, { "end": 3776, "start": 3762, "text": " Yeah for sure we spent you know five years designing cruise controllers that we wanted to put on the highway we're getting ready to put them on the highway and then the summer and then in the following year like put even more cruise controllers on the highway and see how this works at scale." 
}, { "end": 3805, "start": 3776, "text": " So I am you know very optimistic about the ability of RL to design optimal cruise controllers for for improving the throughput of the the energy efficiency of the highway and I think you know will hopefully be putting out some empirical evidence to that point so definitely some some work going there yeah and then you know I care very much about multi agent RL becoming more sample efficient and you know therefore accessible to other researchers is a tool and so definitely not going to stop working on that." }, { "end": 3834, "start": 3805, "text": " Cool on a personal note I I drive a Tesla Model 3 and I just recently tried out the autopilot feature which is the the adaptive cruise control and it was both inspiring and a little bit terrifying because knowing what I know about the AI actually made me probably more more concerned for my safety no no incidents yet but and they did have a good good warning signal when when you know anything unexpected happened but I look forward to seeing that." }, { "end": 3850, "start": 3834, "text": " But I look forward to that being even even better because I really don't trust the drivers on the road yeah I mean personally I'm like terrible driver and I try not to drive so I am ready for someone to automate me out of existence please someone do it." }, { "end": 3854, "start": 3850, "text": " So Eugene anything else that you want to mention or I should have asked you about today." }, { "end": 3862, "start": 3854, "text": " No this isn't really fun I you came you came prepared with some some solid questions so yeah thank you for having me." }, { "end": 3868, "start": 3862, "text": " Oh it's been amazing so do you have any suggestions for the for the show or who we might feature next." }, { "end": 3888, "start": 3868, "text": " Yeah so I don't know if you so just this is off the top my head Jacob forster is extremely opinionated and has some really interesting perspectives on moral research could be fun to have on I think in terms of some folks who's I there's a Berkeley grad student." }, { "end": 3904, "start": 3888, "text": " I'm sure a dean who I hope I'm not putting on the spot but like I think does like amazing work on the robustness and machine learning methods yeah I would be personally curious to hear her opinions on things coming from like a more like controls background." }, { "end": 3916, "start": 3904, "text": " Yeah those are the two people who like spit off the top of my head cool Eugene Vinyski this has been fantastic thanks so much for taking the time out to speaking with us at talk or I really appreciate it." }, { "end": 3920, "start": 3916, "text": " Yeah this was really fun Robin thanks thanks again for having me." }, { "end": 3944, "start": 3920, "text": " Notes and links for this episode are at talkorl.com if you like this show I need your support you can help in a few ways." }, { "end": 3950, "start": 3944, "text": " Talkorl podcast we love retweets." }, { "end": 3960, "start": 3950, "text": " Talkorl.com." } ]
Jess Whittlestone
Jess Whittlestone on societal implications of deep reinforcement learning, AI policy, warning signs of transformative progress in AI, and more!
https://media.transistor…5fa.mp3?src=site
This is TalkRL Podcast. All reinforcement learning, all the time. Interviews of brilliant folks across the world of RL. I'm your host, Robin Chohan. Dr. Jess Whittlestone is a senior research fellow at the Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence, both at the University of Cambridge. Thanks so much for joining us today, Dr. Whittlestone. Thanks for having me. So how do you describe your focus area? So the basic focus of my research is, I guess, I'm trying to think pretty clearly about the possible long-term impacts of advances in AI and how the various choices we make today about how we develop and govern this technology may shape that. So I come from a pretty interdisciplinary background, a mix of maths and philosophy and cognitive science and some policy experience. So really a lot of what I'm trying to do is bring together different perspectives on thinking about progress in AI and its impacts, and different kinds of methodologies, to try and think more clearly about what might happen in future and what that means we should do today. So what I do doesn't really fit neatly in an academic field or have a great term. It's sort of at the intersection of various things people might call AI policy, AI ethics, AI governance, but also with a greater emphasis on this kind of more descriptive side of trying to think about how is AI actually impacting society, how might it do so in future, and then using that to inform, I guess, what I would call the more prescriptive work of thinking about what kinds of policies we need to design and how we should be governing this technology to ensure that it benefits society. I'm also co-leading a research group called AI Futures and Responsibility, which sits between these two centres that I work at, which focuses on this broad question of what we can do today to shape the long-term impacts of AI from a number of different angles. So we bring together, we've got a computer scientist, an anthropologist, someone with a background in policy, someone with a background in international law, and try and bring together all those different perspectives to think about AI governance today from this sort of longer-term perspective. So I admit I've been dying to have you on the show to hear about this first paper for what seems like a really long time, like even ages before it was even published. And I was going to ask our earlier guest, Dr. Kai Arulkumaran, to speak about it. He was a co-author, but he said I should really come to you. So thanks to Kai for the great suggestion. And here we are. And the paper is titled Societal Implications of Deep Reinforcement Learning, that's by yourself, Jess Whittlestone, Kai Arulkumaran, and Matthew Crosby. So did I get that right? And can you start us with a brief overview? Yeah, absolutely. And thanks to Kai for the recommendation. So yeah, I wrote this paper with Kai and also my colleague Matthew Crosby. They're both computer scientists working on things related to reinforcement learning; I'm coming from a bit more of this broader sort of social science, impacts, and policy perspective. And the background or real motivation for this paper was we were talking and thinking about the fact that as AI capabilities advance, they're going to have bigger impacts on society and we need to be able to sort of think ahead about how to steer this.
But the thing I've really been thinking about is how that doesn't need to involve total speculation; the technology that's impacting society today has been in research and development for well over a decade, if you think about the sort of capabilities behind the computer vision, the NLP models, the deepfakes that are being talked about today. So we should expect that the technology that's going to have the biggest impact on society in the future will emerge from the kinds of areas that are currently receiving attention and research. And deep reinforcement learning seems like a good example of something that's receiving a lot of research attention and seeing a lot of progress, but not yet being, at least, widely applied in society. So it feels like it's kind of in this sweet spot where we understand the technology well enough to potentially think through its possible impacts without having to kind of engage in wild speculation. But there's still a lot of room to think carefully about how we want to use this in society and how to mitigate any harms that might arise. So what we try to do in this paper is discuss what it might look like for RL systems to be applied more widely in society. And initially we did this by thinking through a few domains where it seems plausible we'll see more application of deep RL in coming years. So we thought particularly about online recommender systems, things like managing critical infrastructure, and applications in robotics, and then spent some time sort of thinking through the issues that might arise in these domains, as well as trying to consider how the kinds of ethics and policy challenges that are currently being discussed might be sort of pushed or strained or changed by RL systems. So I can talk through a few of the ethics and society issues that we discuss in the paper, and then if you want to go into more detail on any of them, then we can do that. One sort of obvious thing that we first started thinking about is that a lot of what RL allows, compared to supervised learning systems, is that it promises these much more autonomous systems sort of operating in the real world with much less human intervention, which is exciting, but also raises issues for these notions of the importance of human oversight over systems in order to ensure that they're safe and reliable. Currently the main way that that notion of oversight is operationalized in policy and in ethics discussions is this idea of having a human in the loop, which is less likely to be feasible with a lot of RL systems, especially if you've got, for example, deep reinforcement learning systems being used in some form of resource management, like monitoring and adjusting the energy usage in a building, which seems quite plausible as an application. And if you've got a system that's making hundreds of small decisions in a short time period, it's not really clear what human oversight of that looks like. And this is maybe particularly likely to be challenging and a concern if we've got RL systems that are doing continual learning, or even multi-agent systems; it's then much less clear what the model of oversight looks like for those systems. And I guess this is related to a second concern we brought up, which is just that deep RL systems are going to raise new and bigger challenges for ensuring the safety and reliability of AI systems, partly because they learn via trial and error.
If we're going to start deploying these systems in the real world, we really need to make sure that we have really solid approaches to safe exploration. Again, continual learning is likely to pose a challenge. We can't just do this thing that we can do with supervised learning systems, which is kind of assure their safety or reliability before deployment against some standard. Obviously, if you've got a system that's going to continue learning from real-world data, you're going to need some form of continual monitoring to be assured that it's safe. And in addition, if deep RL is going to enable us to deploy more autonomous systems in more high-stakes domains, and there's a bunch of talk about deep RL applied to smart cities, to sort of critical infrastructure and things like energy, transportation and agriculture, that simply raises the stakes of the safety challenges. If something goes wrong, it's a much bigger deal than with perhaps a lot of the systems being deployed today. And then we go on to discuss a bunch of other issues, one around the fact that the flexibility of reward functions, which is kind of what makes reinforcement learning systems so powerful, also introduces a lot of greater potential for unintended consequences. So Stuart Russell in his relatively recent book talks about this concern he has about social media content selection algorithms, which, designed to maximize the likelihood of user clicks on an item, may have this side effect of making users' preferences more predictable, which has shifted them to more extreme content, which perhaps is contributing to online polarization. This is the kind of thing we might be concerned about, where, you know, a company is optimizing for a specific objective, and actually those systems that are optimizing for the objective become quite powerful in society and, in this example, play quite a large role in influencing what content people read and how they choose it, and have this broader unintended consequence, which could end up being harmful. So I think as we start deploying these more autonomous, more optimizing systems with these kinds of more flexible reward functions that you have with reinforcement learning, we need to be thinking a lot more about these things. And then maybe I'll just briefly go over the few other things we discussed, and again, if you want to ask questions, you can. So we also talk about the potential for deep RL to increase the incentives that exist for ongoing data collection across society. We talk about security concerns; so one thing there is that, compared with other ML approaches, it can be harder to distinguish adversarial attacks from benign actions in deep RL, because due to the exploration the training data distribution is constantly changing. And then we also talk about this kind of big topic of automation, and just kind of broadly discuss the fact that advances in deep RL could really shift the susceptibility of different jobs to automation, and this hasn't really been thoroughly considered at all in any of the analyses of what kinds of jobs might get automated. And then later in the paper we take a step back.
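To make the reward-specification point above concrete (the click-maximization example), here is a toy, hypothetical Python sketch. It is not from the paper; the function names and the preference-drift measure are invented purely for illustration. The first reward is the naive engagement objective; the second adds a crude penalty for how far a user's estimated preferences have drifted.

```python
# Hypothetical illustration only: two candidate reward functions for an
# RL-based content recommender.  Nothing here comes from the paper.

def click_reward(clicked: bool) -> float:
    # The naive objective: 1 per click, 0 otherwise.
    return 1.0 if clicked else 0.0

def click_reward_with_drift_penalty(clicked: bool,
                                    preference_drift: float,
                                    penalty_weight: float = 5.0) -> float:
    # Same click signal, minus a penalty on how far the user's estimated
    # preference distribution has drifted since the start of the episode
    # (preference_drift could be, e.g., a KL divergence computed elsewhere).
    return click_reward(clicked) - penalty_weight * preference_drift
```

The point is only that the agent optimizes exactly what the first function says, clicks, and any side effect on user preferences is invisible to it unless the designer writes that effect into the reward or constrains the system in some other way.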
So we sort of discuss all these kinds of issues that might need to be considered, and then we come back to the question of, well, what are the things that are currently limiting us from applying deep RL systems more in society, what are the research barriers, and we talk through those and kind of discuss how maybe those should be things that we're keeping better track of, as kind of warning signs that we might start to see more application of reinforcement learning in society, which maybe means that policymakers and others need to be thinking more seriously and harder about the kinds of concerns that we raise in the paper. Cool. Okay, so on this show we're usually dealing with more technical papers, but recently we had Thomas Gilbert on, who has kind of a related focus in terms of political economy and RL. So I guess the intended audience for this paper is maybe a little bit different than what we usually focus on. What would you say is the main target audience you had in mind for this paper? Yeah, so I guess I would say we had two audiences in mind. Perhaps the primary audience, I suppose, is more the sort of researchers who do the kind of work that I do on the more ethics and governance side. Part of what I wanted to do with this paper was actually bring a greater understanding of some of the more technical elements of progress in AI, and a more nuanced understanding of the different types of AI systems and the different capabilities that they have, to that discussion. So rather than just talking about the impact of AI broadly, which can feel very crude and broad-brush, actually start to say, well, a lot of the systems that we're concerned about today are supervised learning, but we're seeing progress in this different form of machine learning, which maybe raises some different challenges and opportunities. And really, to that audience of people who are mostly thinking about how to govern these systems and thinking about what issues they raise, contribute a bit more of a nuanced discussion, and bring to mind for those people that maybe this is an area whose progress you should be paying attention to. So that was kind of the primary audience, but definitely, and I hope that the paper is still engaging to the more technical audience, because part of what we wanted to do was also get people who do research in reinforcement learning thinking about, especially if you're doing stuff that's closer to bringing capabilities to application, get people thinking about what the possible impacts and issues raised by the kind of research and the kind of systems they're building are. And ultimately, something we really stress at the end of the paper is that we think addressing many of the challenges that we discuss is going to need collaboration between people working on the more technical side of building RL systems and people who are thinking about the governance challenges. If we want to be able to think really carefully about what kinds of oversight these systems need, you need to bring together people who really understand the systems and the challenges that they're going to pose for oversight with people who really understand the requirements and the situations in which oversight might be needed, and for what purposes.
And I guess that's sort of what we tried to start with exactly this paper, which was a collaboration between me, someone who works on the more ethics and governance side, and a couple of people working more on the development side. This, I guess I would say, is something I try and bring to my work, and a direction I'm kind of trying to nudge this ethics and governance field in more broadly. I think it's really important that the thinking about societal impacts is underpinned by, or at least can draw on, a fairly solid understanding of this nuance and the details of what current systems can actually do and where they might be going; otherwise I think it's going to fall short of being actually relevant or practical. So that's something that I think a lot about, and so, yeah, part of the aim with this paper was to also engage people working on the more technical side with some of these issues. Okay, so it sounds like we really need a multidisciplinary perspective to make progress in this area. So once we have that, what does the path look like for getting from where we are today, where there hasn't been a whole lot of thinking in this area, especially in RL, which is why I'm excited to chat with you here, to a society that's really more prepared for this? What does that path look like? Yeah, that's a big question and an important question. Yeah, I think the first step is bringing together some of these people from different groups and backgrounds and expertise a bit more productively, in sort of talking about and figuring out what are the kinds of advances and the kinds of systems and the kinds of impacts they're going to have on society. And I think one of the challenges for doing that, it's all very well to say we need more interdisciplinary work, and I think a lot of people are shouting that in AI and in other domains, but it's challenging, because people speak different languages and have different incentives, right? Even when it comes to, like, I try and collaborate with people working in ML, and there's sometimes a bit of a challenge of, you know, the journals they want to publish in for their career are not necessarily the same journals or the conferences that I want to publish in, and so there are some challenges there that, yeah, need to be overcome to some extent. I think we're starting to see big conferences like NeurIPS and ICML and others have workshops that attempt to bring these different perspectives together and allow people to publish papers that sit at this intersection, but I think there's still some work to be done there. And then there's also bridging that gap between the academic understanding and the policy understanding and the governance, and that's another gap that I'm really trying to bridge with the work I'm doing and some of the people I'm bringing together. You know, we need to do this big-picture, open-ended exploration of
what kinds of impacts we might be concerned about in the future and what kinds of specific advances and capabilities we need to be thinking about, but then you also need to be able to tie that back to what can practically be done today, when it comes to decisions government and industry are making about these technologies, and get into some of that nitty-gritty. One specific thing I'm working on at the moment that I think is quite promising in this regard is how governments can build better capacity to measure and assess features of systems and monitor progress. A big problem for the governance of this technology at the moment is the classic problem of technology policy: technology is fast, government is slow. Regulation, and other forms of governance, generally can't keep up with the pace of progress, and I think we're really seeing this with AI. Government gets caught off guard by advances and is only really able to respond when something has already blown up in the media or already created economic activity. That, I think, is going to be a real problem as systems get more advanced and more widely deployed across society. We might see bigger harms and bigger mistakes, and we can't just have government responding to mistakes as they happen. To be a bit ahead of the curve and prepared, you don't have to speculate about the future; you just need to know better what's going on today, especially what's being applied in society and which capabilities look close to deployment. I'm working on this with Jack Clark, who used to be the policy director at OpenAI, has been involved in a lot of AI measurement and assessment initiatives, and has just launched a new company called Anthropic. We're developing a proposal around how governments can build greater capacity to get reliable information about when potentially impactful capabilities are going to be deployed, and better information about where systems might not be as robust and safe as they should be, in order to be better prepared. It's not that governments should necessarily be taking action before anything has happened, but they could be reacting to more sensitive information, not just to the huge thing that got blown up in the media but to the finer details of what's going on in the world. So that's something I'm excited about. It's definitely not the whole solution to being more prepared, but it feels like quite an important step to me. Do you think there could or should be regulation on RL, and, maybe related, do you have any thoughts on the recently proposed EU AI regulations? Yeah, I have various thoughts on this; it's a big topic. I'll start with the EU regulation and then say what I think specifically about RL. I'm generally broadly pretty positive about the EU regulation. Over the last few years I've come to be more confident that we do need some kind of regulation. I think of regulation as your bluntest tool: you can ban things, or you can put restrictions on their use, or make them conform to standards.
So I think of regulation as the thing we use when we understand reasonably well what the harms or the risks of something are. I like the approach the EU has taken in general, in that they say there are some AI systems, some applications of AI, that seem likely to cause unacceptable levels of harm, and those we're just going to ban. I can't remember the exact details, but that includes certain forms of discrimination and manipulation by AI systems. Then they say there are other kinds of AI systems that are high risk, and those will basically have to go through conformity assessments: certain processes of assessment, documentation and disclosure before they can be put on the market. And then below that there are, I think, another couple of levels of risk. Broadly that makes a lot of sense to me as an approach. Breaking systems down according to their level of risk, or the kinds of impacts they might have, is much more sensible than trying to regulate AI as a whole technology, or trying to regulate specific applications of AI just by sector or something. All that said, the devil is really in the details, and I think there are still a lot of challenges in how we choose and define which systems fall into which categories. There's a lot of ambiguity in the definitions of which systems fall under the prohibited part. With the notion of manipulative systems, for example, there's something about whether it would cause someone to behave in a way that's detrimental to them and that they wouldn't have otherwise, and assessing that seems pretty difficult. Even in the high-risk category, there's a set of domains they've identified, with specific applications under them that are high risk, but it's pretty difficult to adapt that. One thing I think is really important for this regulation is that it's able to adapt as capabilities advance. One of the big concerns with something as regimented and hard-lined as regulation is that it's hard to change, so it can become ossified, which either means it becomes irrelevant, because it doesn't really deal with the challenges of the systems we have in the future, or it focuses attention on things that aren't really what matters: we make sure there's all this documentation for these specific types of systems, but something else falls through the loopholes. So one thing that's going to be interesting and challenging to think about, and that I'm actually working on with some of my group, is what features this regulation needs in order to be able to adapt over time, and how it measures up. It does have various mechanisms that allow things to be added and changed, but there are limits on that. A really interesting test case, in terms of probing the limits of its ability to adapt, is to ask something like:
does this deal with reinforcement learning systems? At the moment it doesn't really draw those distinctions. Certainly there are specific examples of reinforcement learning systems that might end up being shoehorned under some of the categories currently treated as high risk. In my view there certainly could easily be examples of reinforcement learning systems deployed in the world that should be subject to this kind of conformity assessment: a high standard of testing for safety, reliability and robustness. If you're deploying quite autonomous systems in high-stakes domains, like some of the critical infrastructure domains I talked about earlier, especially if they're doing some form of continual learning, I think it's very sensible that they should be subject to at least some form of conformity assessment. There's a whole other question of who determines what those conformity assessments look like, but I think the principle is very sensible. I don't think we should be thinking about regulating RL as a category or as a whole; that doesn't make sense, it's an underlying capability, a mathematical framework. Certainly there are types of systems based on RL, with certain features, in certain domains, that we'll probably want to think about regulating, and thinking through the conditions under which RL systems should be regulated is a challenge we're going to need to deal with. So do you think that RL specifically is likely to contribute to more concentration of power and increased inequality? Do you think that's a major risk here? Yeah, it's a really interesting question, and I haven't really thought about it with RL specifically, so I'm going to be thinking a little bit on the fly here. I do worry in general about the way we're going with AI contributing to increasing inequality and concentration of power. A big project I've been working on recently, with my research assistant, who is amazing, is trying to more comprehensively map out different pathways by which AI might pose larger risks to society. A bit of background: I guess we came to this because the discussion about AI posing risks to society, or having very extreme impacts, still feels fairly immature. There's Nick Bostrom's Superintelligence, which is the main scenario a lot of people associate with AI risk, or extreme AI risk: the concern about developing AI systems that are as intelligent as or more intelligent than humans, and that going badly wrong. I think that's a scenario we should be concerned about, and I'm not dismissing it, but it's one very specific scenario, and the specific scenario he sketches relies on a lot of assumptions about what AI progress is going to look like: that it's likely to be this fast, centralized thing, that we're going to have this one system.
So part of what we've been trying to do is broaden and add nuance to that discussion, pulling together lots of different literatures and perspectives, and we're working towards something like a research agenda that tries to map out the different things we might be concerned about, perhaps even in scenarios where we don't reach anything like superintelligence, but where AI capabilities advance along the trajectory they're currently on and get used more widely in society. All of that is background to say that one of the scenarios that has come out of this, which there is a lot of thinking on but which isn't quite refined enough, is this idea of AI leading to power concentration and increased inequality. One of the big questions in my mind at the moment is how bad that gets, and whether it could get locked in for really long periods of time. You have a bunch of trends pushing in that direction. AI is perhaps leading to more winner-takes-all, monopolistic dynamics in tech: there's a feedback loop where the companies with the most data and computing power can design better products and services, which lets them amass more data and computing power, and that feeds back on itself. Various ML techniques are being used by tech and social media companies to improve their advertising, improve their brand, and influence what people think, in ways that make them more powerful. At the same time, on a global scale, these capabilities are being used much more in the developed world than in the developing world to boost economies, and you potentially have job losses from automation leading to greater inequality within societies, which could intersect with those broader trends. None of this is specific to RL, but insofar as RL is going to enable more powerful optimizing systems that can optimize more effectively for any given objective, or at least any objective you can specify, I do think it's likely to worsen this trend, absent other forces, because it's likely to improve the ability of social media companies to really optimize for their revenue, and that's going to drive increasing concentration of power there, potentially combined, as I said, with impacts on jobs. The caveat is that this is something we're still working on; I feel like there are a lot of gaps in the story about what exactly happens and what exactly something like RL contributes, but right now it's a thing I'm concerned about for sure. I mean, from my point of view, if you wanted to replace a worker with an AI system, it pretty much has to be framed as an RL problem, so that you can optimize the worker's actions towards the goal of profit for the business.
And so it seems like the most natural framing for that, and I guess I worry about this economic singularity. I'm not so worried about an intelligence singularity, but the economic singularity seems like a much more likely, very high probability scenario, which is just a little less sexy to talk about than AGI, but it could be here a lot sooner, and, like you say, it may have lock-in properties that are hard to escape from. So I really hope that policy people are thinking clearly about that type of risk as well, and that it's not just one item in a giant list of potential risks. It seems to me it's going to be a challenge to avoid. Maybe there are futures in which AI is highly decentralized, knowledge of its use is very decentralized, and labor itself is able to use these tools not to be replaced but to improve its productivity; maybe there's a way through there. But I really look forward to guidance from people like you on how to avoid this economic singularity that concerns me. Yeah, can I just ask, when you say economic singularity, what do you mean by that? Do you mean a scenario where there's a transition point, a discontinuity in economic growth, where it suddenly starts increasing at a different rate of change, or do you mean something different? Yeah, I just threw out that phrase; I don't know if it's a phrase at all. Maybe we've made a new phrase; I like it. So I looked it up after the show, and the phrase shows up in a couple of places. Calum Chace, in his 2015 book Surviving AI: The Promise and Peril of Artificial Intelligence, defines the economic singularity as the moment when AI renders most of us unemployed, and indeed unemployable, because our jobs have been automated. And William D. Nordhaus, in his 2015 National Bureau of Economic Research working paper titled Are We Approaching an Economic Singularity? Information Technology and the Future of Economic Growth, defines it as some boundary or singularity after which economic growth will accelerate sharply as an ever-accelerating pace of improvements cascades through the economy. And now let's return to the episode with Jess Whittlestone. Yeah, I was thinking in terms of concentration of power, and capital not really needing labor anymore. This is something I've thought about a bit, and I've heard other people talking about it; there are a few different elements to this. One is this notion of AI leading to what we might call transformative or discontinuous change: the possibility of increasing the rate of economic growth, or the rate of change of economic growth, in a way that produces a discontinuity. That's one way of describing what would happen, which is very much on the economic metrics side, but probably what that looks like in practice is being able to automate a very large proportion of economically valuable work. And whether that looks like concentration of power, or whether there's some scenario in which it happens in a more decentralized way and therefore
differently, I don't know, but it certainly seems like a scenario in which the world as we know it is very different, and I'm not confident it would be good. So yeah, definitely more thought needs to go into what that would look like. So do you have future work in mind in the direction of this paper? Yeah, I have various bits of future work in mind that are in the direction of this paper, but not an actual extension. One thing I had in mind when we finished the paper was maybe doing similar analyses and explorations for other areas of machine learning that are seeing substantial progress today. Deep RL was the one that really stood out, but we could do this for substantial progress in certain kinds of transfer learning, or certain kinds of unsupervised learning, or fill in the blank. It would be nice to do that kind of high-level thinking about which other areas of progress in ML we should be paying attention to because they might really impact society. I ended up taking a step back from that question, and more recently stepping back further to this bigger-picture question of which scenarios we should be most concerned about, as we were starting to get into, from the perspective of the longer-term impact of AI, looking beyond the AGI or superintelligence scenario. Part of the reason I did that was that I wanted a clear picture of the longer-term pathways and the longer-term forms of societal impact we might be most concerned about, in order to then trace back to the developments that are likely to matter most. So I see this as an ongoing project, an iterative process of pulling together different perspectives and research to map out some of these longer-term pathways: what might a power concentration scenario look like, or could developments in ML end up undermining humanity's competence and ability to deal with challenges in certain ways? Obviously it could also go in a more positive direction, but I tend to focus more on the risks, which we can talk about at some point. Anyway, tracing out these longer-term trajectories is partly to orient myself and get a clear picture of what really matters here. Then I think there's interesting work to be done in tracing back from that, and also looking out from the present and asking: if these are the scenarios we're concerned about, what are the kinds of developments that matter most? I think the reason we chose deep RL was that, as you said, it's hard to imagine more autonomous, generally capable systems in society that aren't based on deep RL in some sense, because it's just such a basic framework for having systems that are more like agents, that interact with the world. Advances in natural language processing and language modeling are obviously getting a lot of attention at the moment and starting to see deployment in society,
and they have the potential to really impact both the way information is produced, disseminated and assessed in society, and also work. When GPT-3, the latest language model from OpenAI, came out, one of the things that struck me, or surprised or scared me most, was the uses we very quickly saw for writing code: the fact that it's very good at taking natural language descriptions and turning them into code. I thought, oh wow, I can see this automating all kinds of software engineering. So I think that's an avenue that is going to have a lot of impact and needs more thought about what it's going to do. This approach I'm taking is quite a big, ambitious project. Figuring out what we should be concerned about in the future involves taking a very big-picture look at the longer-term pathways and mapping out different possibilities, but also looking at where we are today and what impacts these trends are going to have. So that much bigger, broader project is the natural extension of this at the moment, and I think I'll end up focusing back in at some point. I had my head down in the RL space, now I've come back up and I'm doing this much wider, broader thing, and then maybe we'll end up digging back into something more specific, but I'm not quite sure what that looks like yet. Awesome. Okay, so let's move to your next paper, Artificial Canaries: Early Warning Signs for Anticipatory and Democratic Governance of AI. That was by Carla Zoe Cremer and yourself. What's the gist of this paper? Yeah, so this paper came from Zoe Cremer, who was doing a year-long research project at the Centre for the Study of Existential Risk, where I work. She did this really great project interviewing various researchers across ML, cognitive science and philosophy to better understand what people saw as the current limitations of deep learning, particularly with respect to achieving something like human-level intelligence; she has a more precise definition, but she was asking people: what are the things you think AI systems today can't do, what are the most important milestones current deep learning systems would need to achieve to get to anything like human intelligence, and what are the things, if any, that you think deep learning systems maybe fundamentally can't do that human intelligence can? From this project she collated a really interesting, in-depth list of these milestones and capabilities, with a really wide range, from causal learning to certain forms of interpretability to concept formation, and lots of other things. She was really interested in how to distill what we got from these experts to get a better understanding, especially of the relationships between the different milestones people came up with.
Could some of these milestones serve as early warning signs of progress towards something like human-level intelligence? If these are the things the experts say we're never going to get to human-level intelligence without, then if we see substantial progress on them, we should really start paying attention. She and I were talking about this and started thinking: is there a broader methodology or approach here that could be used for identifying early warning signs of progress towards any big event or milestone in AI, whether that's human-level intelligence or something else? I was particularly interested in how we identify early warning signs of progress that might lead to really large societal impact. For example, if what you're concerned about is a scenario in which AI is able to automate 60% of jobs, can we think about the process we would go through to identify the particular areas of progress such that, if we saw progress in them, we should recognize it as progress towards that big scenario? So in this paper we report and discuss the results from her interviews, and the way we used causal mapping techniques to understand the relationships between the milestones, and thereby identify the kinds of milestones, which we call canaries, that are the really fundamental ones: we identified that there are some milestones where, if you make progress on them, it seems like you're going to make progress on a whole load of other things that people say are important. So we discuss the specific results of her study, but we also propose and discuss this broader methodology that could be used for identifying these kinds of particularly important warning signs for a broader range of events and kinds of progress in AI. So do you feel these milestones are measured in a binary, attained-or-not-attained sense, or is it more a gradual process we're monitoring? Like, if we had a milestone for vision ML, and maybe that's too broad, would you consider it attained by now, in 2021, or back when CNNs were first introduced and solved ImageNet, or maybe not yet? Yeah, I think this is one of the challenges of implementing this in practice, because I think they have to be gradual, at least the way we define them. Something like vision ML is probably too broad to be a milestone in the sense we intend; ideally it would need to be something much more specific, like the ability of ML systems to generate convincing faces that humans cannot distinguish from real faces, or something like that. Obviously even there there's a bit of ambiguity, like what level the humans need to be convinced at, but at least something like that is more specific.
As we have them in the paper, the specific milestones generated from her interviews are broader than that, though maybe not quite as broad as vision ML. I don't have the list in front of me, but some of them are pretty broad, like causal reasoning, and I think an important next step, maybe the next thing that would be valuable to do after this project, is to think about how to operationalize them more specifically and what the indicators of progress actually are; our indicators at the moment are maybe a little too broad. At the same time, I think you can also have indicators where progress is continuous. I should explain, since I didn't really explain it very well at the beginning: part of the methodology we outline is that you use expert elicitation to generate a list of the milestones that are likely to be really important, and then we use a form of causal mapping, where the idea is to bring in a wide range of experts to identify relationships between the milestones, to say, for example, that some form of concept formation is important for underpinning causal reasoning (I don't actually know if that's true, but something in that space). Then you have a map of all those relationships, and the canaries, as we call them, the particularly important warning signs, are the nodes that have more outgoing arrows than the others, the ones that underpin the rest. So going back to the point about whether they're binary or continuous: I think it's still useful to be able to say that concept formation, in some sense, looks like a really important node. There might not be any binary fact about whether it's been passed, but at least you've identified that, if what we care about is warning signs of progress towards human-level intelligence, we need to be paying a lot more attention to research in concept formation, or whatever it is. But yeah, it's a good question, and it's a detail of our methodology that could certainly do with a bit of refining.
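As an aside, here is a minimal sketch, in Python, of the out-degree idea described above: treat the expert-elicited milestones as a directed graph, where an edge from A to B means experts believe progress on A underpins progress on B, and rank milestones by how many others they feed into, directly and transitively. The milestone names and edges below are hypothetical placeholders for illustration, not the actual interview results, and this is not the authors' code.

```python
# Hypothetical milestone graph: an edge A -> B means "experts believe progress
# on A underpins progress on B". Names and edges are illustrative only.
from collections import deque

underpins = {
    "concept formation": ["causal reasoning", "compositional generalization"],
    "causal reasoning": ["robust transfer", "long-horizon planning"],
    "compositional generalization": ["robust transfer"],
    "robust transfer": ["real-world RL deployment"],
    "long-horizon planning": ["real-world RL deployment"],
    "real-world RL deployment": [],
}

def downstream_count(graph, node):
    """Number of milestones reachable from `node`, i.e. its transitive influence."""
    seen, queue = set(), deque(graph[node])
    while queue:
        nxt = queue.popleft()
        if nxt not in seen:
            seen.add(nxt)
            queue.extend(graph[nxt])
    return len(seen)

# Milestones that underpin the most downstream milestones are the "canaries":
# the ones most worth monitoring as early warning signs.
for m in sorted(underpins, key=lambda m: downstream_count(underpins, m), reverse=True):
    print(f"{m}: out-degree={len(underpins[m])}, downstream={downstream_count(underpins, m)}")
```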
So, definitely not a critique, just a comment that this kind of stuff can be a bit fuzzy, and it's interesting to think about the gradations. I was curious to what extent you feel experts can really map out future milestones. I was thinking back to how people before DQN and the Atari work might have conceived of milestones for how RL or AI would play out, or before CNNs and ImageNet, which is roughly the start of modern deep learning. Could the experts back then have done a good job of listing meaningful near-term milestones, the ones that got us to this point? Or do we understand the problems so much better now, in terms of what intelligent systems need to do, that even experts today have the ability to map things out meaningfully? What do you think about that? Yeah, it's certainly a good question, and I'm generally skeptical of any claim of the form "we couldn't do this in the past, but things are different now, so we'll be able to do it." Obviously it's hard to know, looking back, whether people in the past could have predicted the important milestones. My general response to these kinds of questions, can we meaningfully predict the future, should we have any confidence in this, is: I don't know, but I think it's worth trying, and it's better to think explicitly about what kinds of milestones might be important than not to. This is a more general point about a lot of what I'm doing, which is trying to think about the future, the long-term societal impacts of this very broad technology, which to many people sounds like a totally intractable goal: you can never get any precision on it and you're never going to get it right. The way I see it, I'm actually not trying to predict, because I don't think we can predict what the impact on society of this very broad technology will be in 10, 20 or 50 years' time. I do think it's worth thinking systematically and rigorously about what could happen, thinking through a range of possibilities, because we're making assumptions about the future all the time when we make decisions. Coming back to the milestones example, researchers and others in the community are making assumptions all the time about what will or won't be possible when they decide what to focus their attention on, and when we decide what to be concerned about. We're making all these assumptions implicitly, so I think it's much better that we make them explicit. By mapping out these different pathways we don't think we've predicted what's going to happen, but we've at least surfaced some of the interesting questions that need to be asked and some of the things we might need to prepare for. With the milestones example, part of the value also comes from the fact that part of Zoe's motivation with the deep learning limitations work was the amount of disagreement among different parts of the research community about whether and when anything like human-level intelligence is going to be possible. Part of the purpose was: if we can dig into what people think the limitations are in more detail, maybe that gives a better understanding of why people disagree, and of what kinds of evidence should make us update in one direction or the other. If loads of people think some specific form of causal learning is the thing that's going to make the difference, well then, if we see that progress, we should change our minds. So all this is to say I'm not entirely confident we can predict, with any level of certainty, what the important advances are going to be, but I do think trying to do so can at least raise helpful questions, and at least sometimes force us
to think more explicitly about what would change our minds. The other thing is that if you map out and think through the different things that could happen, the different advances we might be concerned about, I do think it helps you make better decisions in the present. It's worth spending some energy paying attention to progress in areas of AI that we think might have big impacts on society, at least I think that's true; and going back to the point about getting governments to monitor AI progress, let's not choose those areas at random, let's at least do some systematic work on which avenues of progress might underpin lots more progress and might lead to some big societal impact. We might not be right, but it's better than not trying at all. Similarly, we're going to govern this technology in some way, governments are going to do things; let's do it informed by a wide range of possible futures and things we might be concerned about, and try to identify robustly good policies that won't fall apart if our very specific assumptions turn out to be wrong. So that's a big-picture answer to your question, where the literal answer is: I'm not sure, I don't know if they would have been able to predict it, but maybe some of them would have gotten something useful. Yeah, I'm playing devil's advocate a little bit; it doesn't mean I don't have a ton of respect for your work, and I think it's really important. No, they're good questions. So do you think we would generally know when we've achieved a milestone, at the time, or do you think some of these are only clear in retrospect? Yeah, that's also a good question. I think it depends on how precisely you specify the milestone, which goes back to some of your earlier questions. We can specify them precisely enough that we could know we've achieved them, if we specify them in terms of a very specific benchmark being achieved. The vaguer a milestone is, and the more it relies on human intuitions about what's impressive, the harder it is to know until later. Did you have any examples in mind of progress you've seen in the past that we wouldn't have recognized as a big deal at the time, but did later on? I don't have a great example of that, to be honest. No, that's fine, I just wanted to ask. I think maybe the technical community might be excited about some advancement, and then it takes a while for its impact to be felt or recognized more broadly. For example, what might seem like a nice little paper in, say, curriculum learning, and I'm thinking of a specific paper here, might look like a nice little achievement in that one area, but it might actually have a huge effect downstream, and it might take a long time for us to realize its impact.
I was thinking of the PAIRED paper, and we've had the authors on; it was just one of many posters, but the potential impact of some work is just so much bigger than others, and I think it can be hard to see that at the time. Yeah, I would actually love to look at some case studies like this, because part of what I'm trying to do with my research is think about how we identify that sooner: how do you pick that curriculum learning paper out of all the posters at a conference? Part of your question is whether that's even possible, but part of the purpose of having governments, or researchers looking at societal impacts, monitoring and paying more attention to progress in machine learning, and thinking more systematically about it, is so that we at least have the ability to try to think through the societal impact of some of these things sooner. I think it's really hard, but I think it's possible to do better than we currently are. It would be really interesting to look at papers or specific techniques that ended up having a big impact, look back, and ask: could we have known this at the time, what stages did it go through, what happened? One approach I've been thinking about, which is similar, is doing more in-depth analysis of what you might call the research-to-deployment pipeline: take some capability, say the first transformer paper, and trace it all the way through to its application in Google Search. What were all the steps in between, and can we learn anything from following that path about the paths other papers or capabilities might follow? And do you think about time scales on these milestones? Do you think it's feasible to talk about when some of these things might happen, or is time not so much a focus, and it's more about the structure of the graph? How do you think about time? Yeah, how do I think about time? We were very much focused on the structure, and quite deliberately chose not to include time in this paper, partly just for simplicity: what we were really trying to do was understand the causal structure and the order of milestones, as opposed to any specific timings. I'm quite skeptical of being able to put time scales on these things; in general I lean towards thinking about the order of progression of capabilities, the order in which things come, rather than time. I might be irrationally skeptical of putting times on things, but I feel like trying to put specific times on things can get a bit distracting and arbitrary, and it's not the thing that matters so much as
the order of things. That totally makes sense. So do you have any opinions on what the really important milestones or canaries are, circa 2021? Are we going through some of these milestones pretty quickly right now, or are you focused more on the longer term? No, I definitely want to be able to think about what the important pieces of progress in the coming years will be; that's exactly my interest, grounding our thinking about longer-term impact in what's going on today. I feel like I don't have good answers to this yet; it's a really hard question. Because the deep RL paper is very present in my mind, I do think something about seeing more real-world applications of RL systems, perhaps particularly in safety-critical domains, is one. That's not a very crisp milestone, and obviously we are seeing some applications already, so maybe there's a way of operationalizing it more clearly, but I think there's some threshold we might cross in terms of being able to deploy reinforcement learning systems more easily in the real world, and that could be quite important for all the reasons I discussed before: it opens up the possibility of more autonomous systems that are more agent-like and perhaps have more general capabilities. So that's one kind of milestone, and I should think about how to specify it more precisely: if I'm saying governments need to be measuring progress, what's the measure they could actually track there? Another way to think about a milestone is not just in terms of capabilities but in terms of the signs we should be looking out for more broadly in society that something is really changing. Going back to your economic singularity term, there's something to watch out for in when we start to see detectable changes in productivity or economic growth from AI. We're seeing a lot of investment and a lot of hype around AI at the moment, but right now it's not creating huge economic benefits, though there's a lot of anticipation that it might, and when we start to see that, it could be the sign of a real explosion, lots more investment and lots more change. So yeah, those are a couple of very broad ones. Do you have any thoughts?
I think it's a really hard question, and it goes back to that same issue of whether we can tell progress is happening while it's happening or only in retrospect, and how well calibrated we are. The experts disagree all the time on what's important, on what's really happening, on what's meaningful. Most papers that get published probably won't impact anything, and some will have a ridiculously outsized impact, and it's hard to tell at the time. I think that's one of the reasons I do this show and talk to people like you: I'd like to develop my ability to see what's important, because there's so much noise. So I really don't have answers to the questions I'm asking you, and a lot of them are unfairly difficult, and I'm sorry for that, but this is the kind of stuff we have to deal with in this field. No, good questions are fun, as long as you're okay with somewhat meandering and uncertain answers, so I'm quite happy to be asked them. I do think this idea of looking back at capabilities and bits of progress from the past that we didn't know were going to be impactful but ended up being so, and doing some analysis of whether there's anything we can learn from them, would be really interesting and potentially useful. And I'd just say, part of the point of the canaries paper, and this is very early thinking that needs a lot of development, was a methodology for distilling more of the knowledge that exists in expert communities. As you say, lots of people have very different opinions about what's important, what's a big deal, and what has been a big deal, but we believe, or at least hope, that there's some signal in all of that noise if, rather than just chatting to a few people at conferences, though there's value in talking to lots of people too, you try to bring together lots of different perspectives and distill and map them and find commonalities. We're somewhat optimistic that that would at least help extract some signal, but it's very much work in progress. So I really enjoyed reading this paper, and it really changed my perspective on time and progress. Cool. And on the structure of progress, and thinking about the ability to predict: what would that map look like, and how would you develop the capability to predict that stuff? There are so many issues that come into this; it was a very memorable read, and I think about it fairly often. But it also brings to mind another type of canary map, and maybe you have this in mind already. The maps you're dealing with in this paper have to do with structured progress through technological achievements, and maybe there's an alternative map that has to do with the political, economic and democracy-related events and impacts of AI, and maybe the actions that can be taken to deal with them.
So I was thinking the nodes on that map might have to do with things like the impact of AI on democracy, in terms of voter manipulation, synthetic media influencing the political process, laws and regulations about AI being passed, AI being used for governance itself maybe, AI replacing different types of labor: all these things in the political economy, and I'm starting to like that phrase from Thomas Gilbert, political economy type stuff. If we had a slice through all that, what would that map look like? I think the technical map is in some ways simpler, because there has to be some natural progression, whereas the political economy AI map, or canary graph, would maybe be a bit more prescriptive: okay, if that happens, it would be a really good idea to push for this type of policy, and when that other thing happens, we're really going to have to strengthen our democracy in this other way, otherwise something further down the line is going to cause a real problem for democracy itself. The more I think about it, the more ambitious it seems; it seems almost impossible to really map all that out, but it also seems like something that maybe really needs to be done. I wonder, is that kind of the angle you're taking with your work, implicitly? Yeah, actually, as you were saying that, I was thinking that what you're describing is quite close to the bigger project I was describing, where we try to map out the different pathways by which AI ends up having these more extreme impacts. A large part of what we're doing there is looking at a lot of the current and possible trends that get discussed in terms of AI's impact on politics, on the media, on science, on inequality. We've been collecting together a lot of the discussion on that, and then I've literally been drawing graphs of: okay, this thing could feed into this thing, and this could end up getting you here. Part of the point of that is to identify connections, and I actually realized, a few months into doing it, that I was doing the canary thing without even intending to, drawing out these maps. That goes back to what I was saying before: the point is not that it's going to perfectly predict anything, but that it makes a bit more explicit the various things that could happen and the ways they might intersect. If AI leads to the automation of lots of jobs, how might that affect politics? Might we see more dissatisfaction and uprising and protest, and what does that mean for how people in powerful positions, who perhaps have the ability to manipulate information, act? Obviously this is all super speculative, but at least by mapping out these arrows you can start to ask really interesting and important questions about the relationships between these things and identify what could actually have a really big impact.
I guess ultimately what I'd like to be able to do is figure out how to connect this political economy, societal impact map with the technical map more easily, and that's one of the challenges: they're two very different ways of looking at the same thing. With the broader political and social impact map, one thing I can see us doing is drawing out the map of things that could happen and then asking how different advances in capabilities feed into them. What kinds of advances in AI do we need before it starts being applied to scientific research in such a way that we can automate certain kinds of scientific discovery, which might change the nature of scientific work, or make it more likely we discover dangerous technologies, or something like that? Then you ask what kinds of AI systems we would need for that, how far we are from them today, and what kinds of progress we would need to see. That really helps you start to map things out, and I'm fairly optimistic that if you bring enough people and enough expertise to this, you can start identifying the kinds of progress that are likely to be important, or at least that's a better way to identify them than people, from their perhaps narrow perspective in their own field, saying "this seems important." One thing I've been thinking about a lot is what level of understanding of the technical details people like me, who think about societal impacts, need. Earlier I said I think we need to engage with it more, but I don't think you need to be, and perhaps it's better not to be, a really narrow specialist in a specific area of ML, because then you maybe miss the big picture. So part of what I'm trying to figure out in my own learning and development is how to have that bird's-eye view: these are broadly the areas where we're seeing progress, this is what our systems can do, these are their limitations, these are the different directions things might go, so that it's possible to draw those connections between the big societal trends and the kinds of progress in research. So yeah, it's a big project and a big undertaking. Well, I'm glad you're doing it, and maybe the existence of yourself and people doing the kind of work you're doing is an important canary as well on these graphs. Maybe a positive one, or something, I don't know yet. So, related to this discussion, I was thinking about the relationship between these canaries and maybe threat models. When we assess the security of a computer system, we make a threat model, describing what the different threats are and how they can be mitigated. It seems like some of your canary graphs could be a great first step towards a threat model for the political economy and democracy itself. There are actually so many ways that our political economy and democracy itself can be attacked with AI, and if we look to game theory,
we should see that the stakes are really high, and there are many actors who would be motivated to attack different aspects of politics, the economy and democracy. With AI, the capabilities change a lot and power can be amplified a lot, so how could we possibly mitigate that? It's yet another angle, but one I hope people get around to considering and taking seriously, because our institutions and the way we run our world in many respects haven't changed that much over hundreds of years, and we have a lot of catching up to do, really fast, to keep our systems working well with capabilities like this floating around. Yeah, and again, one nice way of describing part of what I'm doing is trying to map out different threat models for AI on a pretty big scale, and to get those clearer. And I agree with you. I should say, I'm not fundamentally pessimistic about AI. This is a technology that could bring huge benefits, certainly in terms of improving the quality of healthcare and medical treatment, and, I alluded to this before, my background, my PhD, was actually in cognitive science, looking at the strengths and limitations of human rationality, so I've been quite interested in how to think about AI capabilities as complementing human capabilities: can we build systems that can do things humans can't do, rather than mimicking exactly what humans can do? In theory, I have a lot of optimism about AI complementing human strengths in solving big, important problems in the world, helping us make sense of enormous amounts of data our brains can't process, and things like that. But in practice, right now, I'm quite pessimistic, because, as you say, our institutions haven't evolved fast enough to deal with these powerful technologies. We're developing very powerful technologies in a world where there's already a large amount of inequality, and our governments aren't very well equipped to deal with it. So I really do worry that, by default, these powerful technologies, in the hands of powerful people, or in the hands of people who just aren't able to think thoroughly enough about the consequences, inevitably end up doing harm, whether that's by increasing concentration of power to an extreme point, or because we're not careful enough and use this technology to develop something even more dangerous than nuclear weapons, or because we let these systems, without thinking very hard about it, take over more and more of the economy, and then it turns out they're not really doing what we wanted, but we can't do anything about it anymore. If it were up to me, I'd say let's maybe pause AI for a bit, fix some other problems first, and then do it; no offense to the people who research AI.
it's not AI itself that I think is fundamentally problematic; it's maybe developing powerful technologies quickly without dealing with a bunch of other institutional and political problems first.
I totally agree with you. So, AI research right now seems remarkably open; it's basically all done in the open. Do you think this openness will continue, and is that important in terms of how things will progress?
Yeah, I think this is quite a complex issue. There's obviously this really strong norm towards openness in the AI community. I guess that's maybe partly come from the open source community or something; I think it's interesting to look historically at where it's come from. There's been a fair amount of debate about this over the last year or so, at least in the intellectual circles I move in, which are a little bit more on the policy and governance side of AI, but I think it's been happening in ML too. OpenAI, with their release of GPT-2, obviously prompted this quite big conversation when they said, we're not going to release the model because we're somewhat concerned about misuse. No matter what you think about that decision, it was interesting in that it started people talking about this. I don't know exactly, but my impression was that they were maybe surprised at how much backlash they got from the ML community on that, which I think just shows how strong the openness norms are.
I do think they were raising an important and difficult issue, which is that there are costs to openness. When capabilities have the potential to be misused, or, even if not maliciously misused, might be used thoughtlessly in ways that could cause harm, there is some responsibility on the research community to think about that. Drawing analogies with areas of life sciences research, like gain-of-function research, there's a point at which you say, no, we don't think it's appropriate to openly publish the details of how to manufacture a lethal pathogen. Although I think it still happens; I think people have still published something like a blueprint of a smallpox virus, I think that's happened. So most people would agree that that's not a thing we want to do.
That said, I still think norms of openness are really important in the community. It seems to me that over the last couple of years there's been a bit of a move towards acknowledging this more, at least in certain circles, with certain groups of people thinking a lot more about the fact that it's not an open versus closed decision: it's a question of, in what circumstances might sharing research widely, and in what ways, be a bit risky, and where should we think about limiting that? Actually, I wrote a paper a couple of years ago with Aviv Ovadya where we used synthetic media as a case study and talked about different decisions you might consider, attempting to break down this open-closed distinction when thinking about how widely research should be shared.
It's about recognizing, for example, that there's a difference between publishing a paper and not doing any promotion or seeking media attention for it, and that's not a decision we usually think about. There's a difference between never publishing the code for your model and publishing it a bit later, after the fact, so that there's a bit more time, which is I think what OpenAI intended to do, a bit more time to do research on some form of defence. In their case, they were interested in having more time to do research on better detecting and distinguishing synthetic media from real content, or more time to do research on reducing bias in language models. So I think we're likely to see a move towards a bit more nuanced decision-making here, a bit more thinking through the costs and benefits of sharing certain kinds of research widely and the contexts in which you might want to do that. But I don't expect anything to change very soon; I think that norm is still very strong. I suppose there's also the fact that more and more research is being done in industry now, a lot of research is moving to industry. I don't have a super good sense of how that's going to change things, but obviously you are going to end up with a lot more stuff that's proprietary. So I think it's changing a little bit, for some good and some bad reasons, but the openness norms are pretty strong and I don't see that changing hugely anytime soon. And I think, broadly, that's probably pretty good. But yes, it's definitely complex.
I saw one talk that you gave, I watched the video, where you mentioned human difficulty with rational decisions and collective action. Do you think that AI has any chance of helping us in that department?
Yeah, I mentioned this briefly earlier when I was talking about being optimistic about AI, at least in theory, and this is where my optimistic take on AI first came from: studying the limits of human decision-making and human rationality. I was quite interested in thinking about AI as decision-support tools, and in trying to understand the relative strengths and limits of human and AI decision-making. In this one talk I gave, maybe the one you saw, I was saying that the strengths are very complementary. There's this in some ways surprising phenomenon that it's easier to get AI to do things that we find really hard, like chess, than it is to get it to do the things we find really easy, like recognizing an object as a chair. In some ways I feel like it's not appreciated enough what that means, which is that we have these systems that can do very different things from us. For example, as I mentioned before, one of the strengths of machine learning is that it can learn patterns in enormous datasets that we couldn't even begin to process or make sense of. So when it comes to things like discovering drugs or medical interventions, there's a huge advantage there in terms of helping us identify robust patterns. One of the biases that comes up in human decision-making is this tendency to see patterns where there are none, and that's definitely a thing that machine learning can help us with.
The collective action one is a bit harder. So that side of things is about improving human rationality to some degree, and I definitely think there's promise there. Actually, another thing I'll say on that, something I've been quite interested in: there's an ML startup in San Francisco called Ought that's trying to develop ML-based tools to help people reflect better. I think they're starting with chatbot-type systems that ask good questions, to help with this kind of dialogic reasoning, but also to help bring together knowledge and reasoning from different sources, to help people think through, in a systematic and rigorous way, what they want and what solutions to their problems might look like. That I think is quite exciting and a cool thing to be working on.
The collective action one I feel is a little bit more complex, and I'm actually not sure what I said in that talk about what AI could do there. Collective action is obviously a huge challenge and a huge thing underpinning many problems in society, collective action on climate change, things like that. One thing I'll say is that there's a group of people, led by Allan Dafoe, a political scientist who has been working with quite a few people at DeepMind and has now gone to DeepMind himself, thinking a lot about this sort of broad field of cooperative AI. So this is both problems of how do we build AI systems that can cooperate effectively with each other, but also like how we build AI systems that help us to cooperate more effectively by perhaps for example, making it easier to make credible commitments and things like that. And I think that's really interesting and exciting. Problems of cooperation and collective action do underpin a lot of difficult things in the world, so I think there's potentially some exciting stuff there, but it's not something I've thought a load about myself.
Awesome. And what do you think the path going forward looks like for yourself?
Yeah, good question. So, just figure out all of the different things we should be concerned about with AI, and then which capabilities are going to affect them, and then... you know. So I mean, I'm trying to do this pretty broad, big-picture stuff. I don't think I'm going to figure out all the answers, but I really feel like thinking this stuff through in a lot of detail, and trying to bring together lots of expertise and perspectives on it, is at least promising for my own thinking and clarity on what matters most in this space. So I'm definitely going to spend more time doing that. I don't know for sure if I'm going to stay on a sort of traditional academic path, because I think we're at this particularly opportune moment where governments are really starting to think about governing AI and regulation and what they should do. And there's some quite exciting stuff going on in the UK and the EU and in the US too, I'm sure, but I'm just less up to speed with that.
And so I could definitely see myself going a bit more in the direction of trying to get into the weeds of influencing and shaping that, given this bigger-picture understanding I'm developing of what's going to matter most longer term. And I think even if I stay on the academic path, or if I were to go more into policy, I still see myself very much as connecting big-picture thinking to the more concrete day-to-day decisions and trying to bring that bigger-picture perspective. I'm also moving away from just doing my own research towards managing this team of researchers. And that's a thing I really love doing, because, as we've talked about, this kind of work really needs interdisciplinarity, but interdisciplinarity is challenging, and one of the things it requires is maybe a bit more explicit management, and having a team that has a bit more of a shared strategy and goals and can speak the same language. So I'm quite excited about developing this team that we have, who come from a range of backgrounds. And I'm not a researcher who likes sitting on my own in a room, although that is what most of the last year has been; I really like working with people. So I'm quite excited about developing in that direction, and just trying to keep understanding and exploring and thinking in a lot of depth about these different scenarios and what we should do.
Well, I can't wait to read all about what you come up with, yourself and your team.
Yeah, we'll see.
So besides your own work, is there other research lately that you're really excited about?
Yeah, I'd say a few general themes. One is something that I'm interested in but haven't really spent much time on: there's been quite a lot of work coming out of various academic centres and civil society on this idea of participatory futures, and how we engage a wider range of perspectives in thinking about what we do and don't want from this technology and what we might be concerned about. I've been getting increasingly interested over time in this perspective of, how do we develop this technology in a way that's more democratic and inclusive of a wide range of perspectives and concerns, and what are the benefits of that, and why should we want that?
So I have a really great colleague, Alexa Hagerty, who's got a background in anthropology, and she's really interested in this question of integrating the perspectives of people and communities affected by technology into thinking about what responsible development looks like. She's been doing some really great work on that, and there's been some really interesting work coming out of places like Nesta and some other places looking at this sort of participation question. We're hoping to do some more substantive thinking about why and when this is useful, because I think there's a bit of a tendency towards two extremes. You have one group of people who say inclusivity and participation are just obviously important, and there's an element of that, obviously we want to be inclusive, but that doesn't really get into the details of why it's beneficial. And then there are maybe other people who are dismissive and say, oh, the public don't really know, we can't really ask for their expertise. I think there's a more nuanced understanding in between, which is, no, I'm not going to go and ask the wider public to help me develop a canary map of very specific technical capabilities, right? There are places for specific expertise. But I also do think that one of the problems with AI development today is that it's being driven by a relatively narrow set of interests, and there's all this thinking, like what I'm doing, about harms, but not that much sense of collective visions of possible and exciting futures. So although this isn't a thing I'm emphasizing a lot in my own work at the moment, I'm quite excited about work that's happening that's trying to do that kind of thing: bring together more diverse perspectives and a wider range of expertise to really think in more detail about what are the ways this could be really good. So yeah, I'm excited to see more people doing that kind of stuff, and to try and contribute a bit, I guess in part because it also helps complement and offset some of the more negative stuff I'm doing. Yeah, so that's one thing I'm really excited about.
Cool. Okay, this episode has been a long time coming. We had a false start before, which was totally my own technical trouble, and then we had a lot of scheduling and rescheduling. I just want to thank you for your patience through all this, and I'm so glad that we made it happen.
Yeah, me too.
Dr. Jess Whittlestone, I really appreciate you sharing your time and your insight with TalkRL today. Thanks so much for joining us here.
Thanks so much for having me. I really enjoyed the conversation.
Three, give us a five-star rating on Apple Podcasts. If you don't think we deserve five stars, let us know on Twitter what we could do better.
[ { "end": 13, "start": 0, "text": " This is Talk Our All Podcast. All reinforcement learning, all the time." }, { "end": 20, "start": 13, "text": " Interviews of brilliant folks across the world of our all. I'm your host, Robin Chohan." }, { "end": 25, "start": 20, "text": " Dr. Jess Woodlesone is a senior research fellow at the Center for the Study of Existential Risk" }, { "end": 30, "start": 25, "text": " and the Leaver Hume Center for the Future of Intelligence, both at the University of Cambridge." }, { "end": 33, "start": 30, "text": " Thanks so much for joining us today, Dr. Woodlesone. Thanks for having me." }, { "end": 36, "start": 33, "text": " So how do you describe your focus area?" }, { "end": 40, "start": 36, "text": " So the basic focus of my research is I guess I'm trying to think pretty clearly" }, { "end": 43, "start": 40, "text": " about other possible long-term impacts of advances in AI" }, { "end": 49, "start": 43, "text": " and how the various choices we make today about how we develop and govern this technology may shape that." }, { "end": 54, "start": 49, "text": " So I come from a pretty interdisciplinary background, a mix of maths and philosophy" }, { "end": 59, "start": 54, "text": " and cognitive science and some policy experience. So really a lot of what I'm trying to do" }, { "end": 63, "start": 59, "text": " is bring together different perspectives on thinking about progress in AI and its impacts" }, { "end": 69, "start": 63, "text": " and different kinds of methodologies to try and think more clearly about what might happen in future" }, { "end": 74, "start": 69, "text": " and what that means that we should do today. So what I do doesn't really fit needily" }, { "end": 79, "start": 74, "text": " in an academic field or have a great term. It's sort of at the intersection of various things." }, { "end": 84, "start": 79, "text": " People might call AI policy, AI ethics, AI governance, but also with a greater emphasis" }, { "end": 90, "start": 84, "text": " on this kind of more descriptive side of trying to think about how is AI actually impacting society," }, { "end": 94, "start": 90, "text": " how might it do so in future, and then using that to inform, I guess," }, { "end": 99, "start": 94, "text": " what I would call the more prescriptive work of thinking about what kinds of policies do we need to design" }, { "end": 105, "start": 99, "text": " and how should we be governing this technology to ensure that it benefits society." }, { "end": 110, "start": 105, "text": " I'm also co-leading a research group called AI Futures and Responsibility" }, { "end": 115, "start": 110, "text": " which sits between these two centres that I work on, which focuses on this broad question" }, { "end": 120, "start": 115, "text": " of what we can do today to shape the long-term impacts of AI from a number of different angles." }, { "end": 125, "start": 120, "text": " So we bring together, we've got a computer scientist, an anthropologist," }, { "end": 130, "start": 125, "text": " someone with background in policy, someone with background in international law," }, { "end": 136, "start": 130, "text": " and try and bring together all those different perspectives to think about AI governance today" }, { "end": 140, "start": 136, "text": " from this sort of longer term perspective." 
}, { "end": 144, "start": 140, "text": " So I admit I've been dying to have you on the show to hear about this first paper" }, { "end": 149, "start": 144, "text": " for what seems like a really long time, like even ages before it was even published." }, { "end": 154, "start": 149, "text": " And I was going to ask our earlier guest, Dr. Kai Arulkamaran, to speak about it." }, { "end": 157, "start": 154, "text": " He was a co-author, but he said, I should really come to you." }, { "end": 160, "start": 157, "text": " So thanks to Kai for the great suggestion." }, { "end": 165, "start": 160, "text": " And here we are. And the paper is titled, societal implications of deep reinforcement learning" }, { "end": 170, "start": 165, "text": " that's by yourself, Jasperel Stone, Kai Arulkamaran, and Matthew Krosby." }, { "end": 175, "start": 170, "text": " So did I get that right? And can you start us with a brief overview?" }, { "end": 179, "start": 175, "text": " Yeah, absolutely. And thanks to Kai for the recommendation." }, { "end": 185, "start": 179, "text": " So yeah, I wrote this paper with Kai and also my colleague Matthew Krosby." }, { "end": 191, "start": 185, "text": " They're both computer scientists working on things related to reinforcement learning." }, { "end": 197, "start": 191, "text": " I'm in coming from a bit more of this broader sort of social science impacts policy perspective." }, { "end": 203, "start": 197, "text": " And the background or real motivation for this paper was we were talking and thinking about the fact" }, { "end": 207, "start": 203, "text": " that as AI capabilities advance, they're going to have bigger impacts on society" }, { "end": 210, "start": 207, "text": " and we need to be able to sort of think ahead about how to steer this." }, { "end": 216, "start": 210, "text": " But the thing I've really been thinking about is how that doesn't need to involve total speculation," }, { "end": 222, "start": 216, "text": " like the technology that's impacting society today has been researched development for well over a decade," }, { "end": 228, "start": 222, "text": " if you think about the sort of capabilities behind the computer vision, the NLP models," }, { "end": 231, "start": 228, "text": " the deepfakes that are being talked about today." }, { "end": 235, "start": 231, "text": " So we should expect that the technology that's going to have the biggest impact on society in the future" }, { "end": 240, "start": 235, "text": " will emerge from the kinds of areas that are currently receiving attention and research." }, { "end": 245, "start": 240, "text": " And deeper reinforcement learning seems like a good example of something that's receiving a lot of research" }, { "end": 251, "start": 245, "text": " is that attention and seeing a lot of progress, but not yet being at least widely applied in society." }, { "end": 256, "start": 251, "text": " So it feels like it's kind of in this sweet spot where we understand the technology well enough to potentially think" }, { "end": 261, "start": 256, "text": " through its possible impacts without having to kind of engage in world speculation." }, { "end": 266, "start": 261, "text": " But there's still a lot of room to think carefully about how we want to use this in society" }, { "end": 268, "start": 266, "text": " and how to mitigate any harms that might arise." 
}, { "end": 273, "start": 268, "text": " So what we try to do in this paper is discuss what it might look like for our systems" }, { "end": 276, "start": 273, "text": " to be applied more widely in society." }, { "end": 279, "start": 276, "text": " And initially did this by thinking through a few domains where it seems plausible," }, { "end": 282, "start": 279, "text": " we'll see more application of deep RL in coming years." }, { "end": 289, "start": 282, "text": " So we thought particularly about online recommended systems, things like managing critical infrastructure" }, { "end": 294, "start": 289, "text": " and applications in robotics, and then spend some time sort of thinking through the issues" }, { "end": 300, "start": 294, "text": " that might arise in these domains, as well as trying to consider how might the kinds of ethics and policy challenges" }, { "end": 306, "start": 300, "text": " that are currently being discussed be sort of pushed or strained or changed by our systems." }, { "end": 312, "start": 306, "text": " So I'll kind of I can talk through a few of the ethics and society issues that we discuss in the paper." }, { "end": 317, "start": 312, "text": " And then if you want to go into more detail on any of them, than we can do." }, { "end": 326, "start": 317, "text": " One sort of obvious thing that we first started thinking about is a lot of what RL allows compared to supervised learning systems" }, { "end": 331, "start": 326, "text": " is that it promises these much more autonomous systems sort of operating in the real world" }, { "end": 341, "start": 331, "text": " with much less human intervention, which is exciting, but also raises issues for these notions of the importance of human oversight" }, { "end": 345, "start": 341, "text": " over systems in order to ensure that they're safe and reliable." }, { "end": 352, "start": 345, "text": " Currently the kind of main way that that notion of oversight is operationalized in policy and in nothing discussions" }, { "end": 360, "start": 352, "text": " is this idea of having a human in the loop, which is is less likely to be feasible with a lot of RL systems," }, { "end": 367, "start": 360, "text": " especially if you've got kind of, for example, like deep reinforcement learning systems being used in some form of research management," }, { "end": 372, "start": 367, "text": " like monitoring and adjusting the energy usage in a building, which seems quite plausible as an application." }, { "end": 376, "start": 372, "text": " And if you've got a system that's making hundreds of small decisions in an initial time period," }, { "end": 379, "start": 376, "text": " it's not really clear what human oversight of that looks like." }, { "end": 386, "start": 379, "text": " And this is maybe particularly likely to be challenging in a concern if we've got RL systems that are doing continual learning" }, { "end": 393, "start": 386, "text": " or even multi-agent systems, it's then much less clear like what the model of oversight looks like for those systems." }, { "end": 395, "start": 393, "text": " And I guess this is related to a second." }, { "end": 405, "start": 395, "text": " A second concern we brought up, which is just that deep oral systems are going to raise new and bigger challenges for ensuring safety or reliability of AI systems," }, { "end": 407, "start": 405, "text": " partly because they learn via trial and error." 
}, { "end": 415, "start": 407, "text": " If we're going to start deploying these systems in the real world, we really need to make sure that we sort of have really solid approaches to safe exploration." }, { "end": 418, "start": 415, "text": " Again, continual learning is likely to pose a challenge." }, { "end": 424, "start": 418, "text": " We can't just do this thing that we can do with supervised learning systems, which is kind of assure their safety or reliability" }, { "end": 427, "start": 424, "text": " before deployment against some standard." }, { "end": 437, "start": 427, "text": " Obviously, if you've got a system that's going to continue learning from real world data, you're going to need some form of continual monitoring to be assured that it's safe." }, { "end": 444, "start": 437, "text": " And in addition, if deep RL is going to enable us just to deploy more autonomous systems in more high-sex domains," }, { "end": 452, "start": 444, "text": " there's a bunch of talk about deep RL applied to smart cities, to sort of critical infrastructure and things like energy, transportation and agriculture." }, { "end": 455, "start": 452, "text": " There simply kind of raises the stakes of safety challenges." }, { "end": 463, "start": 455, "text": " If something goes wrong, it's a much bigger deal than perhaps a lot of the systems being deployed today." }, { "end": 472, "start": 463, "text": " And then we kind of go on to discuss a bunch of other issues, one around sort of the fact that the flexibility of reward functions," }, { "end": 480, "start": 472, "text": " which is kind of what makes reinforcement learning systems so powerful that also introduces a lot of greater potential for unintended consequences." }, { "end": 495, "start": 480, "text": " So Stuart Russell in his relatively recent book talks about this concerning has about social media content selection algorithms, which you know designed to maximize the likelihood of user clicks on an item may have this side effect of making users preferences more predictable," }, { "end": 500, "start": 495, "text": " which has shifted them to more extreme context, which perhaps is contributing to online polarization." }, { "end": 506, "start": 500, "text": " This is the kind of thing we might be concerned about where you know, a company is optimizing for a specific objective." }, { "end": 519, "start": 506, "text": " And actually, you know, those systems that are optimizing for the objective become quite powerful in society have in this example, you know, play quite a large role in influencing what content people read and how they choose it." }, { "end": 524, "start": 519, "text": " And has this broader unintended consequence, which could end up being harmful." }, { "end": 537, "start": 524, "text": " So I think as we start deploying these, these more autonomous, more optimizing systems with these kind of more flexible reward functions that you might have received by learning, we need to be thinking a lot more about these things." }, { "end": 543, "start": 537, "text": " And then maybe we're briefly are just give over the few other things we've discussed and again, if you want to ask questions, who can." }, { "end": 550, "start": 543, "text": " So we talk also about the potential for deeper, I'll to increase the incentives that exist for ongoing data collection across society." 
}, { "end": 564, "start": 550, "text": " We talk about security concerns, so so one thing there is that compared with other ML approaches, it can be harder to distinguish adversarial attacks from benign actions in deep or all because the cause of the exploration, the training data distribution is constantly changing." }, { "end": 577, "start": 564, "text": " And then we also talk about this kind of big topic of automation and just kind of broadly discuss the fact that advances in deep or all could really shift the susceptibility of different jobs to automation and this hasn't really been thoroughly considered at all in any of their." }, { "end": 584, "start": 577, "text": " And analyses of what kinds of jobs might get automated and then what we do later in the paper is we kind of we take." }, { "end": 599, "start": 584, "text": " So we sort of discuss all these kinds of issues that might need to be considered and then we come back to the question of like well, what are the things that are currently limiting us from applying deeper our systems more in society, what are the research barriers and talk through those and kind of." }, { "end": 618, "start": 599, "text": " Discuss how maybe those should be things that we're keeping better track of as kind of warning signs that we might might start to see more application of reinforcement learning in society, which maybe means that policymakers and others need to be thinking more seriously and harder about the kinds of concerns that we raise in the paper." }, { "end": 631, "start": 618, "text": " Cool. Okay, so this show usually we're usually dealing with more technical papers and but recently we had Thomas Gilbert on who has kind of a related focus in terms of political economy and RL." }, { "end": 642, "start": 631, "text": " So I guess the audience here is a little bit different the intended audience for this paper, maybe a little bit different than we we usually focus on what what would you say is the main target audience you had in mind for this paper." }, { "end": 652, "start": 642, "text": " Yeah, so I guess I would say we have two two audiences in mind, perhaps like just slightly sort of the primary audience I suppose is more." }, { "end": 660, "start": 652, "text": " The sort of the sort of research as you do the kind of work that that I do on those more ethics and governance side part of what I wanted to do with this paper was actually." }, { "end": 680, "start": 660, "text": " Bring a greater understanding of some of the kind of more technical elements of progress in in AI and the more nuanced understanding of the different types of AI systems and the different capabilities that they have to that discussion so rather than just talking about the impact of AI broadly which can feel like very crude and broad brush actually start to say well." }, { "end": 690, "start": 680, "text": " You know a lot of the systems that we're concerned about today a supervised learning but we're seeing progress in this is different form of machine learning which maybe raises some different challenges and opportunities." }, { "end": 706, "start": 690, "text": " And I'm really to that audience of people who are mostly thinking about how to govern these systems and thinking about what issues they raise contribute a bit more of a sort of new discussion and and bring to mind for those people like maybe this is something you should be paying attention to progress in this area." 
}, { "end": 722, "start": 706, "text": " So that was kind of the primary audience but definitely there was also and I and I hope that the paper is still engaging to to the more technical audience because part of what we wanted to do was also get people who do research in in reinforcement learning thinking about." }, { "end": 751, "start": 722, "text": " Especially if if you're doing stuff that sort of closer to bringing bringing capabilities to application get people thinking about what the possible impacts and issues raised by the kind of research and the kind of systems that building are and ultimately I mean something we really stress at the end of the paper is that we think to address many of the challenges that we discuss is going to need collaboration between people working on the more technical side of building RL systems." }, { "end": 774, "start": 751, "text": " And people who are thinking about the governance challenges if we want to be able to you know think really carefully about what kinds of oversight the system need you need to bring together people who you know really understand the systems and the challenges that they're going to pose for oversight with people who really understand you know the requirements and the situations in which oversight might be needed and what purposes." }, { "end": 797, "start": 774, "text": " And I guess that's sort of what we try to start exactly this paper was a collaboration between me someone who works on the more ethics and governance side and a couple of people working more on the development side this I guess I would say is something I try and bring to my work in a direction I'm kind of trying to nudge this ethics and governance field in more broadly I think it's it's really important that." }, { "end": 827, "start": 797, "text": " And what I think is going to be a collaboration between the two of us is going to be a collaboration between the two of us and the two of us and the two of us are thinking about societal impacts is underpinned or at least sort of you know can draw on a fairly solid understanding of of this nuance and the details of like what current systems can actually do and where they might be going otherwise I think it's going to sort of for short of being actually relevant or practical so that's something that I think a lot about and so yeah part of the end with this paper was to also engage people working on the more technical side maybe with" }, { "end": 856, "start": 827, "text": " some of these issues. Okay so it sounds like we really need a multidisciplinary perspective to to make progress in this area so once we have that what like what is the path look like forgetting from from where we are today where there hasn't been a whole lot of thinking on this area especially in RL which is why I'm excited to chat with you here to a society that's that's really more prepared for this what what is that path look like yeah that's a big question and an important question yeah I think the first step is bringing together." }, { "end": 880, "start": 856, "text": " Some of these people from different groups and backgrounds and expertise a bit more productive in sort of talking about and and figuring out you know what are the kinds of advances and the kinds of systems and the kinds of impacts they're going to have on society and and I think you know one of the challenges for doing that it's all very well to say we need more interdisciplinary work." 
}, { "end": 909, "start": 880, "text": " I think a lot of people are shouting in in AI in another domains but it's challenging because sort of people speak different languages and have different incentives right like I even when it comes to like I try and collaborate with people working in ML and there's a sometimes a bit of a challenge of like you know the journals they want to publish in for their career and not necessarily the same journals that all the conferences that that I want to publish in and and so there's some challenges there that I think yeah I did." }, { "end": 929, "start": 909, "text": " Yeah I need to be overcome to some extent I think you know we're starting to see big conferences like your apps and I see a mile and others you know have workshops that attempt to bring these different perspectives together and allow people to to publish papers that that sit at this intersection but I think I think there's still some work to be done there." }, { "end": 947, "start": 929, "text": " I think there's then there's a sort of also bridging that gap between academic understanding and and the policy understanding and the governance and that's another gap that I'm really trying to bridge with the work I'm doing and some of the people I'm bringing together is you know we need to do this big picture open ended exploration of." }, { "end": 976, "start": 947, "text": " You know what are the things what are the kinds of impacts we might be concerned about in future like what kinds of specific advances and capabilities do we need to be thinking about but then you also need to be able to tie that back to like okay practically what can be done today when it comes to decisions that the government and industry are making about about these technologies and getting into some of that nitty gritty one one specific thing I'm working on at the moment that I think is quite quite promising in this regard." }, { "end": 988, "start": 976, "text": " Is trying to think about how governments can build better capacity to be measuring and assessing features of systems and monitoring progress so I think." }, { "end": 998, "start": 988, "text": " A big problem for the governance of this technology at the moment is the kind of classic problem with technology policy which is basically technologies fast government slow." }, { "end": 1010, "start": 998, "text": " I'm a regulation isn't and other forms of governance generally can't keep up with the pace of progress and I think we're really seeing this with AI government is getting caught off guard by advances." }, { "end": 1016, "start": 1010, "text": " I'm and only really able to respond when something's already blown up in the media or ready created economic activity." }, { "end": 1029, "start": 1016, "text": " And that I think is going to be a real problem is the systems get more advanced as they get more widely deployed across society." }, { "end": 1038, "start": 1029, "text": " Potentially you know we might see bigger bigger harms and bigger mistakes and we can't just have government responding to two mistakes as they as they happen." }, { "end": 1046, "start": 1038, "text": " Be a bit ahead of the curve and prepare you don't have to speculate about the future you just need to know what's what's going on today better." }, { "end": 1053, "start": 1046, "text": " In especially in the sort of and you know what's being applied in society what capabilities are looking close to deployment." 
}, { "end": 1060, "start": 1053, "text": " And so I'm working on this with with Jack Clark who used to be the policy director open AI." }, { "end": 1070, "start": 1060, "text": " And has been involved in a lot of AI measurement and assessment initiatives and is just launched a new project and you company called anthropic AI." }, { "end": 1086, "start": 1070, "text": " We're working on kind of developing this proposal around how can governments build greater capacity to essentially be kind of getting that reliable information about when potentially impactful capabilities are going to be deployed." }, { "end": 1096, "start": 1086, "text": " Getting better information about where systems might not be as robust and safe as they should be in order to sort of be better prepared to you know not." }, { "end": 1107, "start": 1096, "text": " It's not that they should necessarily be taking action before anything's happened but like reacting to more sensitive information not necessarily reacting to like this huge things got blown up in the media but reacting to." }, { "end": 1122, "start": 1107, "text": " Sort of the finer details of what's going on in the world so that's something I'm excited about but that's not definitely not the whole solution to be more prepared but it feels like quite an important step to me." }, { "end": 1132, "start": 1122, "text": " Do you think that there could or should be regulation on on or and maybe related any thoughts on the recently proposed EU AI regulations." }, { "end": 1137, "start": 1132, "text": " Yeah, very thoughts on this it's a big topic." }, { "end": 1149, "start": 1137, "text": " I'll start with the EU regulation and then say what I think specifically about RL I do think I'm generally broadly pretty positive about the EU regulation." }, { "end": 1155, "start": 1149, "text": " I think that we actually over the last few years I've come to be sort of like more." }, { "end": 1160, "start": 1155, "text": " Confents that we do need some kind of a regulation." }, { "end": 1168, "start": 1160, "text": " I guess I think of regulation as being you know regulation is sort of your your bluntest tool." }, { "end": 1176, "start": 1168, "text": " You know you can either ban things or you can put restrictions on their use or make them have to conform to standards." }, { "end": 1186, "start": 1176, "text": " And so I think of regulation as being the thing we use when we like reasonably well and to stand what the harms are or what the risks are of something." }, { "end": 1193, "start": 1186, "text": " So you know I like the approach that the EU has taken in general in that they say okay there are some." }, { "end": 1201, "start": 1193, "text": " There are some AI systems some applications of AI that seem like they cause or are likely to cause unacceptable levels of harm." }, { "end": 1204, "start": 1201, "text": " And so those were just going to ban and that includes." }, { "end": 1211, "start": 1204, "text": " I can't remember the exact details but it includes certain forms of sort of discrimination and manipulation from AI systems." }, { "end": 1218, "start": 1211, "text": " And then they say okay there are other kinds of AI systems that are high risk and those we're going to." }, { "end": 1223, "start": 1218, "text": " Subject to we're basically going to say like they have to go through conformity assessments so they have to go through." 
}, { "end": 1234, "start": 1223, "text": " Certain sort of processes of assessment and they need to have all this documentation and they need all this disclosure before they can they can be put in the market." }, { "end": 1242, "start": 1234, "text": " And then there's sort of then there's below that there's there's I think another couple of levels of risk and I think broadly that makes a lot of sense to me as an approach." }, { "end": 1248, "start": 1242, "text": " I think breaking systems down according to sort of like their level of risk or." }, { "end": 1259, "start": 1248, "text": " All the kinds of impacts they might have is much more sensible than trying to regulate AI as a whole technology or trying to regulate specific applications of AI." }, { "end": 1269, "start": 1259, "text": " Just by sort of sector or something but I think the all that said the devil is really in the details and I think there are still a lot of challenges and how we." }, { "end": 1276, "start": 1269, "text": " Choose and define which systems fall in which categories so there's a lot of ambiguity in the definitions of." }, { "end": 1286, "start": 1276, "text": " What in the definitions of what systems for land are the prohibited parts I think there's some with the notion of sort of manipulative systems." }, { "end": 1295, "start": 1286, "text": " There's something about you know if it would cause someone to behave in a way that's detrimental to them that they wouldn't have otherwise and you sort of think." }, { "end": 1301, "start": 1295, "text": " Assessing that to be pretty difficult and even in the high risk category." }, { "end": 1313, "start": 1301, "text": " There's sort of a set of domains that they've identified as as as having specific applications under them that are high risk but it's pretty difficult to to adapt that." }, { "end": 1317, "start": 1313, "text": " I think it may I think we." }, { "end": 1323, "start": 1317, "text": " So one of the things that I do think is really important for for this regulation is that it's able to adapt as capabilities advance." }, { "end": 1328, "start": 1323, "text": " I think one of the big concerns with something is sort of." }, { "end": 1341, "start": 1328, "text": " I suppose kind of regimented and hard lined as regulation is that it's hard to change and so it can become ossified which either means it's kind of irrelevant because it doesn't really deal with the challenges of the systems that we have in feature." }, { "end": 1346, "start": 1341, "text": " Or that in my even kind of." }, { "end": 1358, "start": 1346, "text": " Yeah does it kind of focuses attention on the things that aren't really what matters we focus attention on sort of making sure there's all this documentation for these specific types of systems but actually something falls to the loop holes." }, { "end": 1366, "start": 1358, "text": " So I think one thing that's going to be interesting and challenging to think about and actually I'm working on with some of my group is is thinking about this like okay." }, { "end": 1380, "start": 1366, "text": " What are the features of this regulation is to have in order to be able to adapt over time and how does that applies to the regulation it does have various things in place that allow it to you know things to be added things to be changed but there are some limits on that." 
}, { "end": 1387, "start": 1380, "text": " I think a really interesting test case here in terms of sort of probing the limits of its ability to adapt is to think about something like." }, { "end": 1390, "start": 1387, "text": " You know does this deal with reinforcement learning systems." }, { "end": 1404, "start": 1390, "text": " I think at the moment it doesn't really it doesn't really draw those distinctions certainly there are specific examples of reinforcement learning systems that might end up being sort of shoot in under some of the categories that are currently seen as high risk." }, { "end": 1417, "start": 1404, "text": " I think you know in my view there certainly well and could easily be examples of reinforcement learning systems that get deployed in the world that should certainly be subject to this kind of conformity assessment." }, { "end": 1437, "start": 1417, "text": " You know a high standard of testing for safety and reliability and robustness if you're you know if you're deploying quite autonomous systems in high stakes domains like some of the sort of critical infrastructure domains I talked about earlier especially if they're doing some form of continual learning I think it's very sensible that they should need to be subject to." }, { "end": 1447, "start": 1437, "text": " At least some form of conformity assessment there's a whole other question of like who determines and how does it get so that what those conformity assessments look like but I think that's very sensible." }, { "end": 1455, "start": 1447, "text": " I don't think we should be thinking about regulating RL as a category or as a whole." }, { "end": 1461, "start": 1455, "text": " I think that doesn't make sense it's an underlying capability you know it's a mathematical like framework." }, { "end": 1483, "start": 1461, "text": " Certainly there are there are types of systems that are based on RL that have certain features and certain domains that we'll probably want to think about regulating and I guess yeah thinking that through what are the what are the conditions in which RL system should be regulated is I think a challenge that we're going to need to deal with." }, { "end": 1490, "start": 1483, "text": " So do you think that RL specifically is likely to contribute to more concentration of power and increase inequality." }, { "end": 1501, "start": 1490, "text": " Do you think that's a major risk here? Yeah it's a really interesting question and I haven't really thought about this with RL specifically so I'm going to be thinking a little bit on the fly here." }, { "end": 1519, "start": 1501, "text": " I do worry in general about about the way we're going with AI contributing to increasing quality and in concentration of power so a big project I've been working on recently with my research system time clock is amazing is." }, { "end": 1529, "start": 1519, "text": " Trying to sort of like more comprehensively map out different pathways by which I might might pose sort of larger risk for society." }, { "end": 1548, "start": 1529, "text": " So yeah slight background on this is that I guess we came to this because it feels like the discussion about AI kind of posing risk society or having very extreme impacts still feels fairly immature I think you know there's Nick boss from super intelligence which is sort of like the main." 
}, { "end": 1563, "start": 1548, "text": " The main scenario that a lot of people would associate when they were here sort of like AI risk AI extreme risk is this concern about you know developing AI systems that are as intelligent or more intelligent than humans and then that going badly wrong." }, { "end": 1577, "start": 1563, "text": " I think that's like a scenario we should be concerned about I'm not I'm not dismissing that but it's also it's one very specific scenario and also the specific scenario he sketches sort of relies on a lot of assumptions about what AI progress is going to look like that it's going to be." }, { "end": 1583, "start": 1577, "text": " It's going to look like that it's likely to be this sort of fast centralized thing that we're going to have this sort of one system." }, { "end": 1606, "start": 1583, "text": " So part of what we've been trying to do is is broaden and and nuance that conversation discussion and pulling together lots of different literature and perspectives and and we're working towards something like a research agenda to try and sort of map out okay what are some different things that we we might be concerned about perhaps even in scenarios where we don't reach anything like super intelligence but you know AI capability sort of." }, { "end": 1617, "start": 1606, "text": " Advanced along the trajectory that they currently on and get used more widely in society and all of that specter grant to say that one of the one of the scenarios I think that has come out of that that." }, { "end": 1631, "start": 1617, "text": " There is a lot of thinking on but it's not quite you know find enough is this idea of you know AI really leading to power concentration and increase inequality one of the big questions I think I have my mind about that at the moment is you know how bad is." }, { "end": 1640, "start": 1631, "text": " Does that get could this get locked in for really long periods of time but I think when you have you've got a bunch of trends sort of pushing in that direction you've got the fact that." }, { "end": 1660, "start": 1640, "text": " You know AI is is perhaps leading to more winner takes all like monopolistic dynamics in tech you have this kind of feedback loop where the companies that have the most data and the most computing power can design better products and services which allowed them to like a mass more data and computing power." }, { "end": 1666, "start": 1660, "text": " And and and that kind of feeds back on itself you have the fact that you know." }, { "end": 1676, "start": 1666, "text": " Ferry sml techniques and I've been used by tech company social media companies to improve their advertising to improve their brand to influence what people." }, { "end": 1686, "start": 1676, "text": " Think in such a way that that makes them all powerful at the same time you know you also have like on a sort of global scale you have the fact that." }, { "end": 1706, "start": 1686, "text": " These capabilities have been used much more in the sort of develop them the developing world to boost their economies and then you have the fact that potentially you've got kind of potential sort of job losses due to automation leading to greater inequality and society which." 
}, { "end": 1724, "start": 1706, "text": " You know could kind of intersect with those broader trends so none of this answers should have our RL I think in so far that RL is just going to enable like more powerful optimizing systems that can optimize more effectively for sort of any given objective." }, { "end": 1736, "start": 1724, "text": " Any given sort of easily I suppose any given objective that you can specify I do think it's it's likely to worsen this trend without without other thoughts right because." }, { "end": 1747, "start": 1736, "text": " It's likely to improve the ability of social media companies to like really optimize for the for their revenue and and that's going to sort of drive." }, { "end": 1757, "start": 1747, "text": " Increasing concentration of power power there and and potentially you know as I said sort of combined with impacts on on jobs." }, { "end": 1766, "start": 1757, "text": " It could do that but you know I guess the caveat here is this is something we're working on I feel like there's still kind of a lot of gaps in in that story." }, { "end": 1774, "start": 1766, "text": " About what exactly happens and what exactly something like RL is going to contribute but right now I'm that's a thing I'm concerned about for sure." }, { "end": 1792, "start": 1774, "text": " I mean I guess for my point of view if you wanted to replace a worker with an AI system it pretty much has to be framed as an RL problem so that you can optimize for the workers actions towards the goal of profit for the business." }, { "end": 1806, "start": 1792, "text": " And so it seems like the most natural framing for that and so I guess I worry about this like economic singularity I'm not so worried about intelligence singularity but the economic" }, { "end": 1825, "start": 1806, "text": " singularity seems like a much more likely very high probability scenario which is just a little less sexy to talk about than a GI but it seems like it could be here a lot sooner and like you say it it may have some kind of lock in properties that are that are hard to to escape from." }, { "end": 1836, "start": 1825, "text": " So I really hope that policy people are thinking clearly about this that type of risk as well and it's not just one in a giant list of potential risks." }, { "end": 1854, "start": 1836, "text": " It seems to me it's going to be a challenge for us to avoid that I guess futures in which a eyes is highly decentralized and knowledge of its use is very decentralized and maybe maybe you know labor itself is able to use these tools to not be replaced but just be." }, { "end": 1867, "start": 1854, "text": " Improve and improve its productivity maybe maybe there's a way through there but I really look forward to to guidance from from people like you on how to avoid this economic singularity that that does concern me." }, { "end": 1883, "start": 1867, "text": " Yeah, try like can I just say when you say economic singularity do you mean yeah what do you mean by that you mean like a scenario where we have a sort of transition point where there's kind of like suddenly like a discontinuity and economic growth and it starts" }, { "end": 1890, "start": 1883, "text": " from really rapidly increasing a different right of change or do you mean something different. Yeah, I just threw out that phrase. I don't know if it's if it's a phrase at all." }, { "end": 1893, "start": 1890, "text": " Maybe we've made a new phrase maybe I like it." 
}, { "end": 1898, "start": 1893, "text": " So I looked it up after the show this phrase shows up in a couple places." }, { "end": 1904, "start": 1898, "text": " Calum Chase in his 2015 book surviving AI the promise and peril of artificial intelligence." }, { "end": 1925, "start": 1904, "text": " He defines economic singularity as the moment when a I renders most of us unemployed and indeed unemployable because our jobs have been automated and then William D. Nordhouse in his 2015 national bureau of economic research working paper titled are we approaching an economic singularity information technology in the future of economic growth." }, { "end": 1934, "start": 1925, "text": " Defines it as some boundary or singularity after which economic growth will accelerate sharply as an ever accelerating pace of improvements cascades through the economy." }, { "end": 1938, "start": 1934, "text": " And now let's return to the episode with just little snow." }, { "end": 1946, "start": 1938, "text": " Yeah, I was thinking in terms of concentration of power and capital not needing labor really anymore." }, { "end": 1966, "start": 1946, "text": " This is something I've I've thought about a bit and I've I've heard some other people talking about there's a few different elements to this. Yeah, one is this just sort of notion of AI leading to more what we might call like kind of transformative and think this continuous change says is possibility of just increasing the rate of economic growth." }, { "end": 1977, "start": 1966, "text": " The increasing the rate of change of economic growth in a way that kind of produces a discontinuity and I think I mean that's one way of describing what would happen right which is very much on the like economic metric side but I think." }, { "end": 1987, "start": 1977, "text": " Probably what that that looks like is being able to automate like a very large proportion of like economically valuable work." }, { "end": 1996, "start": 1987, "text": " And whether or not that looks like concentration of power or whether there's some way of doing that the some like scenario which occurs in which it's more decentralized and and therefore." }, { "end": 2009, "start": 1996, "text": " I don't know but that certainly seems like a scenario in which like the world as we know it is like very different and I am not confident it would be good so yeah that's I yeah definitely more thought needs to go into into what I would look like." }, { "end": 2029, "start": 2009, "text": " So do you have future work in mind in the in the direction of this paper yeah so not I have various bits of future work in mind sort of are in the direction of this paper but not an actual extension I mean one thing when we finished the paper I did have in mind was maybe trying to do some kind of similar analyses and explorations for." }, { "end": 2053, "start": 2029, "text": " Other areas of machine learning that are that are that are seeing kind of substantial progress today deep our was the one that that really stood out but maybe you know doing this for you know what if we see substantial progress in certain kinds of transfer learning or in certain kinds of unsupervised learning or you know fill in I haven't sort of done there it would be nice to do the kind of." 
}, { "end": 2082, "start": 2053, "text": " So high levels thinking about okay what are really other areas that might have what are other areas of progress in the mail that that we should be thinking about because they might really impact society I ended up sort of taking a step back from trying to answer that question and more recently sort of stepping back further and looking at this bigger picture question of what are the what are the sort of scenarios we should be most concerned about as we were starting to get into there." }, { "end": 2104, "start": 2082, "text": " From this perspective of sort of longer term impact of AI and looking beyond the GIC for intelligence scenario and part of the reason I did that was I felt like I wanted to have a clear picture in mind of like yeah the sort of longer term pathways that and the longer term forms of societal impact that we might be most concerned about in order to sort of then trace back to one of the developments that are likely to matter most." }, { "end": 2133, "start": 2104, "text": " So I sort of see I've gotten into this project ongoing whether this is kind of iterative process between trying to pull together different perspectives and research to to map out some of these these longer term pathways so you know what might happen a power concentration scenario look like what might it look like or or could you know developments in in an ML end up sort of undermining humanity's sort of." }, { "end": 2147, "start": 2133, "text": " Competence and ability to deal with deal with challenges in certain ways obviously it could also potentially go in the more positive direction but I tend to focus more on the risks which we can talk about at some point." }, { "end": 2154, "start": 2147, "text": " Anyway sort of tracing out these longer term trajectories kind of to like orient myself and get this clear picture of sort of what what really matters here." }, { "end": 2164, "start": 2154, "text": " But then I think there's some interesting work to be done in sort of tracing back from that but also looking out from the future and saying like well if these are the scenarios we're concerned about what are the kinds of." }, { "end": 2182, "start": 2164, "text": " Developments that perhaps matter most and you know I think the reason we chose that we chose deep RL was because this really does feel like as you said it's sort of like it's hard to imagine more autonomous generally capable systems in society that aren't based on on deep RL in in some sense because." }, { "end": 2191, "start": 2182, "text": " It's just such a sort of basic framework for for having systems that are more like agents right that that interact with the world." }, { "end": 2201, "start": 2191, "text": " I think advances in natural language processing language modeling there obviously getting a lot of attention in the moment starting to see deployment and society." }, { "end": 2216, "start": 2201, "text": " You know have potential to really impact both the way the information is produced and disseminated and assessed in society and and and work you know I think when GPT 3 the sort of." }, { "end": 2227, "start": 2216, "text": " The latest language model from open AI came out one of the things that sort of struck me or surprised or scared me most was there." 
}, { "end": 2241, "start": 2227, "text": " The uses we are very quickly seeing to write code the fact that it's like very good at taking like natural language descriptions and turning it into code and being like oh wow like I can see this was automating like all kinds of software engineering." }, { "end": 2248, "start": 2241, "text": " So I think that's an avenue that is is going to have a lot of impact and and needs more more thinking about." }, { "end": 2253, "start": 2248, "text": " What what that's going to do so." }, { "end": 2271, "start": 2253, "text": " Yeah this this kind of approach I'm taking is like quite big ambitious project of like you know figuring out what we should be concerned about in the future kind of involves taking this like very big picture look at the sort of longer term pathways and and trying to map out different possibilities." }, { "end": 2289, "start": 2271, "text": " But then also looking at sort of like where are we today and which one of the impacts of these these trends are going to have so I think looking more at some of that is there's sort of this much bigger broader project that's the natural extension of this at the moment and I think that I end up focusing in again at some point but yeah I sort of." }, { "end": 2310, "start": 2289, "text": " So I skipped my head down into into the RL space and now kind of come back up and doing this much wider broader thing and then maybe we'll kind of end up digging back into something more specific but I'm not quite sure that looks like yeah awesome okay so let's move to your next paper artificial canaries early warning signs for anticipatory and democratic governance in of AI." }, { "end": 2336, "start": 2310, "text": " That was Carla Zoe creamer and yourself so what's the gist of this paper yeah so this paper and I'm from so Zoe Zoe Kramer and she was doing sort of years research project at the center of the central risk where I work and she did this really great project where she was interviewing." }, { "end": 2357, "start": 2336, "text": " And various researches across ML cognitive science philosophy to try and better understand like what people saw the current limitations of deep learning being particularly with respect to us achieving something like human level intelligence she has more precise definition but she was asking people you know one of the things that you think AI systems today can't do." }, { "end": 2377, "start": 2357, "text": " That is sort of like the things that they would need to do what are the most important kind of milestones that the current deep learning systems need to achieve if they're going to get to anything like human intelligence and also sort of what are the things that you think if any you know deep learning systems maybe fundamentally can't do that the human intelligence can do." }, { "end": 2392, "start": 2377, "text": " And from this project she came up with a really interesting and sort of in depth list of kind of collated a lot of these these milestones and these capabilities you know so from with a really wide range so from like you know." }, { "end": 2416, "start": 2392, "text": " Cause a learning to certain forms of interpretability to concept formation and and yeah lots of different things and and she was really interested in trying to think about you know how do we distill what we have from these experts here to kind of get a better understanding especially if we think about the relationship between the different milestones that people have come up with." 
}, { "end": 2445, "start": 2416, "text": " Could some of these milestones serve as kind of early warning signs of of progress towards something like human level intelligence so if these are the things that the experts are saying like you know we're never going to get human level intelligence without well you know we should if we see progress like substantial progress on these things and we should we should really kind of I don't know start paying attention and she and I was talking about this and started kind of thinking about is there a is there a broader sort of methodology or approach here that could be." }, { "end": 2465, "start": 2445, "text": " For identifying early warning signs of progress towards sort of like any big event or milestone in AI whether that's human level intelligence or or something else and you know I particularly was was a bit more interested in thinking about you know what are." }, { "end": 2475, "start": 2465, "text": " How do we identify early warning signs of progress that my lead to like really large societal impact so for example like if what you're concerned about is you know scenario which." }, { "end": 2484, "start": 2475, "text": " I was able to automate 60% of jobs can we can we think about like what's the process we would go through if we wanted to be able to identify sort of particular." }, { "end": 2504, "start": 2484, "text": " Are you the progress that if we if we saw progress we saw progress in them we should kind of recognize that as as progress towards that sort of big scenario so in this paper we basically report and discuss the results from higher interviews and." }, { "end": 2517, "start": 2504, "text": " And the way that we used cause mapping techniques to try and understand the relationship between them and therefore set it kind of identify the ones and the kinds of milestones which we call canaries which are the sort of really fundamental ones because." }, { "end": 2533, "start": 2517, "text": " You know we identified that there are some milestones where it's like oh if you may progress on this it seems like you're going to make progress on a whole load of other things that people say are important and so we discuss the specific results of of her study but then also kind of proposing and discuss this this broader methodology that could be used for identifying." }, { "end": 2540, "start": 2533, "text": " These kinds of particularly important warning signs of a broader range of sort of events and kinds of progress in the eye." }, { "end": 2558, "start": 2540, "text": " So do you feel like these these milestones are measured in a binary like attained or not attained sense or is it more a gradual process that that we're monitoring like if we had a milestone for for the ML maybe that's too broad or would you consider that attained by now in 2021 or." }, { "end": 2563, "start": 2558, "text": " Or back when CNNs were were first introduced and solved image that or or maybe not yet." }, { "end": 2579, "start": 2563, "text": " Yeah I think I mean this is one of the this is one of the challenges I think of implementing this in practice because I think they have to be gradual at least the way that we define them I mean I think probably something like vision ML is too is too broad to." }, { "end": 2589, "start": 2579, "text": " Be a milestone in the sense that we intend that I think it would need to be something like ideally you know you define it something much more specific like." 
}, { "end": 2608, "start": 2589, "text": " The ability of ML systems to generate like convincing faces that humans cannot distinguish from real faces or something like that right and obviously that even there there's like a bit of ambiguity like you know what level the humans need to be convinced at but at least something like that is more specific." }, { "end": 2632, "start": 2608, "text": " As we have it in the paper with the with the specific milestones that that so we generated from her interviews they are broader than that although I think not quite as maybe not quite a specific as vision ML those some of them then probably I don't have it up in front of me but you know there are some that are pretty pretty broad like causal reasoning and I think probably actually for." }, { "end": 2660, "start": 2632, "text": " Probably an important next step for this and maybe a next thing that would be valuable to do moving on from this project is to think about you know how do you actually operationalize these more specifically and think about what are the what are the indicators that you've made the indicators right like our indicators at the moment and maybe maybe a little too broad at the same time I think you can also just have these indicators where progress is more is more continuous and you know if nothing else you've identified." }, { "end": 2689, "start": 2660, "text": " So I should explain I think I didn't really explain very well in the beginning of the paper but one of part of the methodology we we outline is that you you know you use expert elicitation to generate sort of a list of these are the these are the milestones that are likely to be really important and then we use this technique called a form of causal mapping to and the idea is that you you bring in kind of a wide range of experts to do this for the idea is to identify relationships." }, { "end": 2717, "start": 2689, "text": " So the between the milestone so to say like oh well some form of like concept formation is important for underpinning causal mapping I don't actually quite know if that's true but something something in that space and then you have this map where you've got all the sort of relationships and and the the canaries as we call them they're like particularly important warning signs are the other nodes that have more outgoing arrows than other sort of ones that that underpin." }, { "end": 2739, "start": 2717, "text": " So going back to the point about whether they're binary or continuous I think it's it's still useful to be able to say like oh well it looks like concept formation in some some sense is is a really important node now there might not be any binary factor whether it's been past or not but at least then you've identified like OK for what we care about is identifying" }, { "end": 2753, "start": 2739, "text": " warning signs of progress towards human level intelligence then we need to be paying a lot more attention to research in in concept formation or whatever." }, { "end": 2760, "start": 2753, "text": " But yeah it's a good it's a good question and I think it's it's a detail of our methodology that that certainly could do with a little bit of refining." }, { "end": 2775, "start": 2760, "text": " So yeah definitely not a critique just just a comment that this this kind of stuff can be a bit fuzzy and it gets interesting to think about the gradations so so I was curious to what extent you feel the experts can can really map out future milestones." 
}, { "end": 2789, "start": 2775, "text": " So I was thinking back to maybe how people before DQN and the Atari work might have conceived of these milestones of how RL or AI might play out or say before CNN's and image net." }, { "end": 2797, "start": 2789, "text": " And which is kind of modern deep learning like could the experts back then have done a good job of listening." }, { "end": 2813, "start": 2797, "text": " Meaningful milestones in the near term which got us to this point in terms of our or maybe do we understand the problems just so much better now and in terms of what intelligence systems need to do and so maybe we are at a point where where we were" }, { "end": 2819, "start": 2813, "text": " like even experts do have the ability to map out meaningfully what do you think about that." }, { "end": 2828, "start": 2819, "text": " Yeah the certainly question and I think I'm generally skeptical of any claim that's sort of like oh we couldn't do this thing in the future but in the past but things are different now and so we will be able to do it now." }, { "end": 2838, "start": 2828, "text": " I mean obviously it's it's hard to know to look back with wood people in the past have been able to to kind of predict." }, { "end": 2848, "start": 2838, "text": " In fact for milestones my general I mean my general response to these these kinds of questions you know can we can we meaningfully predict the future should we have any confidence in this is like." }, { "end": 2861, "start": 2848, "text": " I don't know but I think it's worth trying and it's better to try and think explicitly about what kinds of milestones might be important than it is to sort of." }, { "end": 2874, "start": 2861, "text": " I guess this is more general more general point and a more general point about a lot of what I'm doing which you know I've said is about trying to think about the future and I'm trying to think about the future likes long term societal impacts of this very broad technology which." }, { "end": 2890, "start": 2874, "text": " I think to many people sounds just like a totally intractable goal and you can never get any precision on it and you're never going to get it right and and I guess what I what I say to that all that the way I see it is that I'm not trying to." }, { "end": 2901, "start": 2890, "text": " I'm actually not trying to predict what I because I don't think I don't think we can predict you know this is going to be the impact on society of this very broad technology and 10 20 50 years time." }, { "end": 2914, "start": 2901, "text": " I do think it is worth thinking systematically and rigorously about what could happen in the future thinking through a range of possibilities." }, { "end": 2936, "start": 2914, "text": " Because we're kind of making we're making assumptions about the future all the time when we make decisions right and coming back to the milestones example where you know researchers and others in the community are kind of making assumptions of the time about what will or won't be possible in future when they decide what to focus their attention on." 
}, { "end": 2959, "start": 2936, "text": " When we decide what to be concerned about and and so we're making all these assumptions implicitly and so I think it's much better that we make them explicit you know by mapping out these different pathways we don't think that we've like predicted what's going to happen but we're at least on I think some of the interesting questions that need to be asked and some of the things that we might need to to prepare for." }, { "end": 2973, "start": 2959, "text": " And I think with the with the master's example part of the value of doing it is also that so I think part of Zoe's motivation with the deep learning limitations work was there seems to be a lot of disagreement among." }, { "end": 3001, "start": 2973, "text": " Different sort of parts of the research community interested in the I progress about whether and when anything like human level intelligence is going to be possible and so part of the purpose there was well if I can dig into in a bit more detail what people think the limitations are then maybe that will help to kind of give a better understanding of why people disagree and what kinds of evidence should make us update more direction or the other you know if loads of people think that." }, { "end": 3010, "start": 3001, "text": " Some specific form of course a learning is like the thing that is going to make the difference well then you know if we see that progress and then we should change our minds." }, { "end": 3030, "start": 3010, "text": " So I guess all this is to say i'm not entirely confident that we can predict with any level of uncertainty like what the important advances are going to be in in future but I do think trying to do so at least can like on a helpful questions sometimes at least kind of." }, { "end": 3050, "start": 3030, "text": " For so to think more explicitly about what things would would change our minds and and sometimes you know the other thing is that you know if you if you map out and think about like one of the different things that that could happen one of the different advances that perhaps we can be concerned about I do think it helps you to make better decisions." }, { "end": 3077, "start": 3050, "text": " Better decisions in the present so like it's worth us spending some energy paying attention to progress in different areas of AI that we think might have big impacts on society at least I think that's true going back to the point about getting getting governments to monitor our progress like let's not choose those areas at random let's at least try and do some systematic work thinking about which avenues of progress might underpin." }, { "end": 3091, "start": 3077, "text": " Loads more progress that might lead to some big societal impact we might not be right but it's better than not trying to do it all similarly you know we're going to govern this technology in some way we're going to have regulations governments are going to do things let's." }, { "end": 3104, "start": 3092, "text": " Do it informed by a wide range of possible things that might happen in in future and things we might be concerned about and sort of try and identify like robustly good policies that." }, { "end": 3119, "start": 3104, "text": " You know won't fall apart in less our very specific assumptions hold tree so that's I guess quite a big picture answer to your question where the literal answer is like i'm not sure I don't know I don't know if they would have been able to predict it." 
}, { "end": 3130, "start": 3119, "text": " But but maybe some of it would have gotten something useful yeah i'm playing devil's advocate a little bit doesn't mean I don't have a ton of respect for your work and I think it's really important." }, { "end": 3151, "start": 3130, "text": " Yeah no they're good questions so do you think that we would generally know when we've achieved a milestone like at the time or do you think that some of these are only clear in retrospect yeah that's also a good question I think it depends on how precisely you specify the milestone which kind of goes back to some of your earlier questions I think we can." }, { "end": 3164, "start": 3151, "text": " We can we can specify them precisely and this kind of points towards specifying them more precisely we can specify them precisely enough that we could we could know that we're achieved it right if you specify it in terms of a very specific benchmark being achieved." }, { "end": 3176, "start": 3164, "text": " That obviously we can know the sort of the vaguer is and the more that it relies on I don't know sort of like human intuitions about what's impressive or something." }, { "end": 3191, "start": 3176, "text": " Then maybe it's it's harder to know until later do you have did you have any examples in mind of the kinds of things that like I don't progress you've seen in the past the time we wouldn't have recognized this like a big deal but that but that we did later on." }, { "end": 3196, "start": 3191, "text": " I don't have a great example of that to be honest but I know it's fine I just wanted to." }, { "end": 3225, "start": 3196, "text": " Yeah I think that maybe the technical community might be excited about some advancement and then it takes a while for the impact of that to be felt or recognized more broadly for example you know what might seem like a you know a nice little paper in say curriculum learning and I'm thinking of a paper in specific here might be like oh that's a nice little achievement in that one area but it might actually have a huge effect downstream and that might actually take a long time for us to realize the impact of that." }, { "end": 3253, "start": 3225, "text": " I was thinking of the paired paper and we've had the authors so it was just one of many posters but the potential impacts of some are just so much more than others and I think it's it might be hard to see it at the time yeah I would actually I'd love to like try and do look at some case studies like this or something because I mean in a in in part I would say that like part of what I'm trying to do with my research is think about you know how do we identify that sooner I like how do you how do you pick that curriculum learning paper out of the posters." }, { "end": 3282, "start": 3253, "text": " At conference I guess I guess part of your question like is it possible but you know part of the purpose of this idea of having governments or research is looking at social impacts like monitoring paying more attention to progress thinking more systematically about and paying more attention to progress in machine learning is so that we kind of have this ability to at least try and think through." }, { "end": 3301, "start": 3282, "text": " You know what like the societal impact of some of these things be sooner I think it's really hard but I think it's but it possible to do do better than than we currently are but yeah it would be really interesting to look at some of the papers or you know specific." 
}, { "end": 3317, "start": 3301, "text": " I'm sort of techniques that have ended up having quite a big impact and then and looking kind of looking back and saying like could we have known this at the time what the stages this went through like what happened and she approach has been thinking about." }, { "end": 3334, "start": 3317, "text": " Is trying to sort of do more in depth analysis also of which I think is kind of similar to this of I guess what call like this sort of research deployment pipeline so take some capability you know take look at the first transformer model all the way through to like." }, { "end": 3346, "start": 3334, "text": " Application in Google search like what were all the steps in between in between like the very first transformer paper and it being applied in Google search and can we learn anything from sort of following that path about." }, { "end": 3362, "start": 3346, "text": " The path that other papers or capabilities might might follow and do you think about time skills on these milestones do you think it that it's feasible to talk about when some of these things might happen or is time not so much a focus on it's more about the structure of the graph." }, { "end": 3373, "start": 3362, "text": " How do you think about time yeah we were how do I think about time we were we were very much focused on the structure and finally deliberately chose not to include." }, { "end": 3390, "start": 3373, "text": " Time in this paper just because yeah why I guess partly just sort of like for the simplicity and really what we were trying to do was was think about understanding the causal with causal structure and sort of the order of milestones as opposed to any specific time I'm." }, { "end": 3408, "start": 3390, "text": " Quite skeptical of like being able to put time scales on these things I in general sort of more towards thinking about sort of the order of progression of capabilities and the order in which things come." }, { "end": 3426, "start": 3408, "text": " As opposed to time I think I might be I might be rational these skeptical of of trying to put times on things but I feel like sometimes trying to put specific times on things can kind of get a bit distracting and arbitrary and it's not the thing that matters so much is like sort of the." }, { "end": 3454, "start": 3426, "text": " The order of things that totally makes sense so do you have any opinions on what are the really important milestones or canaries around circa 80 2021 like are we going through some of these milestones pretty quickly right now or are you kind of focused on more long term no I mean I think I definitely I want to be able to think about you know like what are going to be important bits of pieces of progress in the in the coming years." }, { "end": 3464, "start": 3454, "text": " Like this is you know this is exactly my interest is in sort of like grounding or thinking about longer time impact and what's going on today." }, { "end": 3483, "start": 3464, "text": " I feel like I don't have good answers to this yet I mean one sort of it's a really hard question for our paper yeah I mean and because the deep our papers is very present in my mind I do think something about seeing more real world applications of our systems." 
}, { "end": 3506, "start": 3483, "text": " And particularly perhaps particularly in safety critical domains I mean that's that's not a very clear one obviously we are seeing some applications maybe there's some way of like operationalizing that more clearly but I think there's some milestone we might overcome in terms of being able to deploy reinforcement and any systems more easily in the real world." }, { "end": 3523, "start": 3506, "text": " That I think could be quite important for sort of all of the reasons I discussed before in terms of like this then is is opening up the possibility of sort of more autonomous systems that are kind of more agent like and perhaps have more general capabilities." }, { "end": 3540, "start": 3523, "text": " So that's I guess that's one miles that's one kind of milestone on like the yeah should think about how to how to specify that a bit more precisely or like what's the what's the measure we would we would have a fact right like if I'm saying governments need to be measuring progress what's the what's the measure that they could be concerned about there." }, { "end": 3562, "start": 3540, "text": " I guess another way you might think about a milestone is not just in sort of the capabilities but like what are the signs that we should be looking out for more broadly in society that maybe something is something is really changing or something is really happening and I think going back to the sort of your economics and" }, { "end": 3582, "start": 3562, "text": " singularity term there's something to watch out for in terms of sort of when we start to see kind of detectable changes in productivity or economic growth from AI I think there's there's something there about the fact that you know we're seeing a lot of investment and a lot of hyper on the" }, { "end": 3601, "start": 3582, "text": " day at the moment but right now it's not yeah it's not kind of creating huge economic economic benefits and there's some but there's a lot of anticipation that that it might and there's something about when we start to see that." }, { "end": 3613, "start": 3601, "text": " That could be the sign of of maybe a real kind of explosion and lots more investment and lots more changes but yeah there's a couple of very broad ones do you have any thoughts." }, { "end": 3629, "start": 3613, "text": " I think it's a really hard question and it kind of goes back to that same issue of can we tell when progress is happening while it's progress it's happening or only in retrospect and it's just commenting on how calibrated we are I mean the experts disagree all the time" }, { "end": 3644, "start": 3629, "text": " on what's important on what's really happening yeah what's happening that meaning that's meaningful I think most papers probably better published probably won't impact anything and some of them will have a ridiculously" }, { "end": 3657, "start": 3644, "text": " outsized impact and it's kind of hard to tell at the time so I mean I think that's one of the reasons why I do this show and why I talk to people like you because I'd like to develop my ability to see what's important because there's so much noise." }, { "end": 3672, "start": 3657, "text": " So yeah I don't have I really don't have a great I don't have answers to any of the questions I'm asking you generally and some of them are kind of fairly a lot of them are unfairly difficult and I'm kind of sorry for that but I'm kind of like this is the kind of stuff we have to deal with in this field." 
}, { "end": 3686, "start": 3672, "text": " No the good questions is fun as long as you're okay with sliding and re-entering on so I'm quite happy to be asked them I do think this idea of like maybe looking back at you know capabilities and bits of progress in the past that maybe we didn't know we're going to be impact." }, { "end": 3694, "start": 3686, "text": " But I have ended up being so and doing some and trying to do some analysis like is there anything we can learn from them would be really interesting and potentially useful." }, { "end": 3710, "start": 3694, "text": " And I just say you know part of the point of this this canary's paper was it was you know very early thinking about and this needs a lot of development but sort of thinking about methodology for trying to distill more of the sort of knowledge that exists in expert communities." }, { "end": 3725, "start": 3710, "text": " As you say lots of people will have very different opinions about what's important and what's a big deal and what's been a big deal but we we believe or at least hope that there's some signal in all of that noise if you can kind of you know rather than just going into a few people at conferences." }, { "end": 3736, "start": 3725, "text": " Although you know I think I think there's there's value in just talking to lots of people to trying to bring together like lots of different perspectives and kind of distilled and and map that and find commonalities." }, { "end": 3743, "start": 3736, "text": " We're like somewhat optimistic that that at least would help somewhat to like distill some signal but it's very much working progress." }, { "end": 3749, "start": 3743, "text": " So I really enjoyed reading this paper and it really changed my perspective on time and progress." }, { "end": 3750, "start": 3749, "text": " Cool." }, { "end": 3761, "start": 3750, "text": " And the kind of the structure of progress and the like thinking about the ability to predict you know what would that map look like and what would you need what how how would you develop that capability to predict that stuff." }, { "end": 3785, "start": 3761, "text": " There's so many issues that come into this I think it's it was just a very memorable experience reading and I'll I think about it fairly often and but it also brings to mind you know another type of canary map and maybe you have this in mind already but this it seems like the maps that you're dealing within this paper have to do with the structured progress through technological achievements." }, { "end": 3805, "start": 3785, "text": " And maybe there's an alternative map that has to do with the political and economic and democracy related events and impacts of AI and maybe actions that can be taken as well to to deal with them." }, { "end": 3820, "start": 3805, "text": " So I was thinking like the nodes on that might have to do with things like you know impact of AI and democracy in terms of you know voter manipulation synthetic media influencing the political process laws and regulations regarding AI being passed." }, { "end": 3823, "start": 3820, "text": " AI being used for governance itself maybe." }, { "end": 3832, "start": 3823, "text": " AI replacing different types of labor like there's all these things that we can kind of in the kind of political I'm using the I'm starting to like this phrase from Thomas Gilbert." }, { "end": 3849, "start": 3832, "text": " Political economy type stuff. Yeah. 
If we had a slice through all that and and what would that map look like and it that would be that wouldn't just be I think the technical map is is in some way simpler because there's there's has to be some natural progression." }, { "end": 3862, "start": 3849, "text": " Whereas the the political economy AI map or canary graph would would maybe be a bit prescriptive to like okay if that happens it would be a really good idea if we push for this type of policy." }, { "end": 3870, "start": 3862, "text": " And when that thing over there happens then we're really going to need some kind of you know we're going to have to strengthen our democracy in this other way." }, { "end": 3884, "start": 3870, "text": " Otherwise that other thing down the line is going to cause a real problem for democracy itself or so I mean I think this is the more I think about it I think it's a much more ambitious it's like almost it seems impossible to me to really map all that stuff out." }, { "end": 3894, "start": 3884, "text": " But it also seems like maybe something that that really needs to be done. I wonder is there is is that kind of the the angle you're taking with your work sort of implicitly." }, { "end": 3912, "start": 3894, "text": " Yeah actually is you're saying that I was thinking in a way what you're describing is quite close to this this bigger project I was I was describing where we sort of turn and map out these different sort of pathways by which AI ends up having these more extreme impacts like a large part of what we're trying to do there is is kind of." }, { "end": 3938, "start": 3912, "text": " Look at a lot of the current impossible trends that are getting discussed in terms of like AI's impact on on politics and on the media and on science and on inequality and and we've really been trying to like just collect together like a lot of a lot of the discussion on that and and then I mean I've literally been kind of drawing graphs of light OK well you know." }, { "end": 3959, "start": 3938, "text": " This thing could feed into this thing and this could end up getting you here and and part of the point of that is to try and identify and I actually I realized when I was doing this process a few months ago I was like I'm doing the canary thing I didn't even intend to but it was like kind of drawing out these maps one." }, { "end": 3977, "start": 3959, "text": " I and so I do feel like and that kind of goes back to where I'm saying before I've like part of the point of this being not that it's going to perfectly predict anything but that it leads makes a bit more explicit like the various different possible things that could happen in the ways that they might intersect with each other." }, { "end": 4006, "start": 3977, "text": " And you know if if AI leads to the automation of lots of jobs, how might that affect politics, you know, we might we see sort of like more dissatisfaction and uprising and protest and what does that mean for the way that people in powerful positions who perhaps have the ability to manipulate information and things act." }, { "end": 4024, "start": 4006, "text": " And obviously this is all like super speculative but like at least by mapping out these arrows you can then kind of start to ask really interesting and important questions about the relationships between these things and identify like well maybe this is the sort of thing that could actually have a really big really big impact." 
}, { "end": 4045, "start": 4024, "text": " I guess like ultimately what I'd like to be able to do is figure out sort of how to connect this like political economy societal impact map with the technical map a bit more easily and I think this is one of the challenges is kind of when you do one they're kind of two very different ways of looking at." }, { "end": 4071, "start": 4045, "text": " The same thing I think with the with the broader political social impact map one thing I can kind of see as doing is you know drawing out this map of these things that can happen could happen and then you kind of can start to think about like well how my different and Francis in capabilities feed into these things like what kinds of advances in AI do we need for it to start being applied to scientific research." }, { "end": 4095, "start": 4071, "text": " In such a way that we're able to like automate certain kinds of scientific discovery that might you know change the nature of scientific work or might make it more likely we discovered dangerous technologies or something like that and then if you think about what kind of what kinds of AI systems do we need to be able to do this and then how far are we from this today." }, { "end": 4109, "start": 4095, "text": " What kinds of progress would we need to see that then really helps you to start to map out like okay I then I'm fairly optimistic if you are kind of bringing bringing enough people enough for you to this that maybe you are then able to start identifying." }, { "end": 4117, "start": 4109, "text": " The kinds of progress that are likely to be important or at least that that's a better way to try and identify that then just sort of." }, { "end": 4144, "start": 4117, "text": " People from there perhaps narrow perspective in their field saying well this seems important and I think one thing I've been thinking about a lot is like what level of kind of understanding the technical details do people like me who think about kind of societal impacts need and I think you know earlier I said you know I think they need to be engaging with more than it seems but I don't think I think you don't need to be." }, { "end": 4153, "start": 4144, "text": " And perhaps it's better not to be like a really narrow specialist in a specific area of ML because then you kind of." }, { "end": 4167, "start": 4153, "text": " You may be missed the miss the big picture and so part of what I'm trying to sort of figure out in my own like learning and development and understanding is like you know how to have that kind of birds I view of while this is what this is broadly the areas that." }, { "end": 4176, "start": 4167, "text": " The we're kind of seeing progress in this is what our systems can do this is their limitations these are the different avenues we might see things going." }, { "end": 4187, "start": 4176, "text": " So that it's then possible to sort of draw those connections between the big societal trends and the kinds of progress and research so yeah it's a big project and it's a big undertaking." }, { "end": 4197, "start": 4187, "text": " Well I'm glad you're doing it and maybe you know the existence of yourself and people doing the work like you're doing is is an important canary as well on these craft." }, { "end": 4209, "start": 4197, "text": " I'm sorry I may be a positive one or something I don't know yet so I guess just related to this discussion I was thinking about the relationship between these these canaries and maybe threat models." 
}, { "end": 4220, "start": 4209, "text": " Like when when we go to look at the security of a computer system will make a threat model talking about what the different threats are and how we can be minute they can be mitigated." }, { "end": 4238, "start": 4220, "text": " It seems like some of your canary graphs could be a great first step in terms of a threat model for you know the political economy and democracy itself there's actually so many ways that our political economy and democracy itself can be attacked by AI and if we think of if we look to game theory." }, { "end": 4258, "start": 4238, "text": " We should see that the stakes are really high and there's many actors that would be motivated to attack different aspects of politics economy and democracy and with AI these the capabilities change a lot and power can be amplified a lot and so how could we possibly mitigate it." }, { "end": 4285, "start": 4258, "text": " So those I mean it's it's yet another angle but it seems like one that I hope people get around to considering and taking seriously because I think the institutions and everything that we the way that we run our world in many ways hasn't changed that much over hundreds of years and we have a lot of catching up to do really fast to keep our systems working working well with capabilities like this floating around." }, { "end": 4298, "start": 4285, "text": " Yeah I think and again I mean I think one good one one way of nice way that you can be of kind of describing part of what I'm doing is is trying to map out sort of different threat models for AI on a pretty on a pretty big scale but but get those clearer." }, { "end": 4314, "start": 4298, "text": " Yeah and I agree with you I kind of I should say like I think I'm not fundamentally pessimistic about AI like I think you know this is this is a technology that could bring huge huge benefit certainly like in terms of sort of like you know improving the quality of health care and." }, { "end": 4320, "start": 4314, "text": " Medical treatment and and potentially I think you will I alluded to this before but you know these." }, { "end": 4332, "start": 4320, "text": " I you know I have a background my PhD was actually in cognitive science and and I was sort of looking at the strengths and limitations of human rationality and I think I think I've been quite interested this." }, { "end": 4342, "start": 4332, "text": " You know how to think about AI capabilities is complementing human capabilities and and can we sort of think about building systems in ways that." }, { "end": 4350, "start": 4342, "text": " Systems that can do things the humans can do perhaps rather than mimicking exactly what what humans can do so like in theory I'm very." }, { "end": 4365, "start": 4350, "text": " I have a lot of optimism about about AI is kind of being able to to complement human like strengths in solving big important problems in the world you know just helping us to make sense of enormous amounts of data that are brain calm process and." }, { "end": 4372, "start": 4365, "text": " And things like that but in practice right now I'm quite pessimistic because I feel like." }, { "end": 4391, "start": 4372, "text": " As you say like we sort of with with our institutions haven't really evolved fast enough to deal with these powerful technologies with developing very powerful technologies in a world where there's already you know large amounts of inequality." 
}, { "end": 4402, "start": 4391, "text": " And our government and our very well equipped to deal with it and so I really do worry that by default." }, { "end": 4412, "start": 4402, "text": " These powerful technologies in the hands of of powerful people or in the hands of people who just aren't able to think super thoroughly about the way the consequences." }, { "end": 4430, "start": 4412, "text": " Do sort of inevitably end up doing harm whether that's by you know increasing concentration of power to to an extreme point whether it's because we're not careful enough and we use this technology to develop something you know as a more dangerous than the nuclear weapons." }, { "end": 4442, "start": 4430, "text": " Whether it's because we just let these systems without thinking very hard about it take over more and more of the economy and then it kind of turns out they're not really doing what we wanted." }, { "end": 4445, "start": 4442, "text": " But we can't do anything about it anymore I think." }, { "end": 4458, "start": 4445, "text": " Yeah but if it was up to me I would be like let's maybe pause I out of the problem for a bit and and and fix some other problems and then let's do it and let's do all that you know people who research I got." }, { "end": 4469, "start": 4458, "text": " Yeah it's not AI itself that I think is like fundamentally problematic I think it's like maybe sort of developing powerful technologies quickly without without dealing with a bunch of other like institutional and political problems first." }, { "end": 4476, "start": 4469, "text": " I totally agree with you so yeah I research right now seems remarkably open it's done all in the open." }, { "end": 4484, "start": 4476, "text": " What do you think this openness will will continue and is that important in terms of how things will progress." }, { "end": 4494, "start": 4484, "text": " Yeah I think this is this is quite a complex issue there's obviously been sort of there's this real strong sort of norm towards openness in the in the AI community." }, { "end": 4503, "start": 4494, "text": " I don't know I guess that's like maybe partly come from from the open source community or something I think it's interesting to look historically at where that's come from." }, { "end": 4512, "start": 4503, "text": " There's been a fair amount of debate over the last year or so sort of more about this at least there's debate in the kind of circles that I." }, { "end": 4527, "start": 4512, "text": " The intellectual circles that I move in which is like a little bit more on the sort of policy and governance side of the AI but I think it's been happening in the ML to open AI with the but they're released of G2 obviously prompted this." }, { "end": 4537, "start": 4527, "text": " This quite big conversation when they sort of said we're not going to release we're not going to release the model because we're we're somewhat concerned about misuse." }, { "end": 4555, "start": 4537, "text": " I think no matter what you think about that decision it was interesting and that it started people talking about this and that was I think there was they were surprised at how much I don't know exactly but my impression was they were maybe surprised at how much backlash they got from the the ML community on that which I think it just shows how strong the openness norms are." }, { "end": 4565, "start": 4555, "text": " I do think they were raising an important and difficult issue which is you know when there is there are there are costs to openness." 
}, { "end": 4582, "start": 4565, "text": " You know when when capabilities have the potential to be misused or you know even if not sort of maliciously misused if they if they might be use thoughtlessly in ways that could could cause harm." }, { "end": 4605, "start": 4582, "text": " There is some responsibility I think on on the research community to think about and you know drawing at allergies or sort of areas of of life sciences research like like gain a function research right like there's a point at which you say no we don't think it's appropriate to publish openly the details of." }, { "end": 4621, "start": 4605, "text": " How to manufacture a little pandemic although I think it still happens I think people still publish like a blueprint of a smallpox virus I think that's happened so I think you know most most people would agree that that's not a thing we want to do." }, { "end": 4642, "start": 4621, "text": " I was still thinking that norms of openness are really important in the community it seems to me over the last couple of years there's been a bit of a move towards acknowledging that more at least in certain circles and at least certain groups of people thinking a lot more about sort of you know it's not an open versus closed decision right it's a question of." }, { "end": 4663, "start": 4642, "text": " In what circumstances might sharing research widely in what ways like be a bit risky and and where should we think about limiting that actually I wrote this paper a couple of years with a Vivo Vajra about we looked we sort of use synthetic media as a case study and." }, { "end": 4676, "start": 4663, "text": " And talks about different decisions that you might consider kind of attempting to break down this open closed distinction when thinking about how widely research should be shared so it's kind of." }, { "end": 4692, "start": 4676, "text": " It's about like you know there's a difference between publishing a paper and not you know not really doing any promotion and getting media attention for it right and that's that's not a decision we think about there's a difference between never publishing." }, { "end": 4706, "start": 4692, "text": " And then you're doing the code for your model and publishing it a bit later after the fact so that there's a bit more time for this is I think what I intended to do so there's a bit more time for to do research on." }, { "end": 4713, "start": 4706, "text": " Sort of any form of defense it so in their case they were interested in having a bit more time to do research on." }, { "end": 4726, "start": 4713, "text": " Which is to better detecting and distinguishing synthetic media from from real content or more time to do research on reducing bias in language models." }, { "end": 4740, "start": 4726, "text": " So I think we are likely to see a move towards sort of a bit more nuanced decision making here a bit more thinking through the cost and benefits of sharing certain kinds of." }, { "end": 4747, "start": 4740, "text": " Research widely and some context in which you might want to do that but I don't expect anything to change." }, { "end": 4758, "start": 4747, "text": " Very soon like I think that norm is still very strong I suppose there's also the fact that you know more and more research is is being done in industry now a lot of research as a moving to industry." 
}, { "end": 4767, "start": 4758, "text": " I don't have a super good sense of how that's going to change things but obviously you are going to end up with a lot more stuff that's that's proprietary to so." }, { "end": 4774, "start": 4767, "text": " I think it's changing a little bit some good and some bad reasons I but I think the openness norms are like pretty strong and so I don't see it changing." }, { "end": 4779, "start": 4774, "text": " Usually anytime soon and I think broadly that's probably pretty good." }, { "end": 4792, "start": 4779, "text": " But yes it's definitely complex I saw one talk that you gave I watch the video for you mentioned human difficulty with rational decisions and collective action do you think that AI." }, { "end": 4796, "start": 4792, "text": " Has any chance of helping us in that department yeah I think I mean I mentioned this this briefly." }, { "end": 4808, "start": 4796, "text": " Earlier when I was sort of talking about being optimistic about AI at least in in theory and this is where I first my sort of optimistic take on AI first came from is I you know studying." }, { "end": 4816, "start": 4808, "text": " The limits of a human decision making and human rationality and was was quite interested in thinking about sort of AI as decisions for." }, { "end": 4833, "start": 4816, "text": " Tools and this idea of trying to understand sort of the the the limits and the relative strengths and limits of human and AI decision making and I think this one talk I gave and maybe it was the one that you are so sort of saying you know these the strengths are very complimentary you know that this." }, { "end": 4841, "start": 4833, "text": " Sort of in some way surprising phenomenon that we found that AI is easier to get AI to do things that we find really hard like." }, { "end": 4850, "start": 4841, "text": " Chess and it is to get them to do the things we find really easy like recognizing an object as a chair and and in some ways I feel like that sort of." }, { "end": 4859, "start": 4850, "text": " Not appreciated enough what it is which is like we have these systems that can do very different things for a you know for example again as I mentioned before it's like well the strengths of." }, { "end": 4867, "start": 4859, "text": " Of machine learning other that you know they can they can learn patterns in like enormous data sets that we couldn't even possibly." }, { "end": 4876, "start": 4867, "text": " Begin to process or or make sense of so when it comes to things like you know discovering drugs or medical interventions." }, { "end": 4881, "start": 4876, "text": " There's like a huge advantage there in terms of." }, { "end": 4890, "start": 4881, "text": " In terms of sort of helping us to identify robust patterns you know one of the biases that comes up in human decision making is this kind of tendency to see patterns where they're on and." }, { "end": 4896, "start": 4890, "text": " And I think that's definitely a thing that that machine learning can help us with." }, { "end": 4908, "start": 4896, "text": " The collective action one is a bit more so that's sort of like improving human personality to some degree I definitely think there's promise there actually another thing I'll say on that is a thing I've been kind of interested in is." }, { "end": 4910, "start": 4908, "text": " There's um." 
}, { "end": 4922, "start": 4910, "text": " There's an ml startup in San Francisco called or that are trying to basically develop ML based tools to help people reflect better so I think the idea is really to sort of." }, { "end": 4927, "start": 4922, "text": " Use ml to." }, { "end": 4933, "start": 4927, "text": " I think they're starting with sort of chat bot type systems to sort of like ask good questions." }, { "end": 4937, "start": 4933, "text": " To help with this kind of like dialogic reasoning." }, { "end": 4947, "start": 4937, "text": " And but also to kind of help try and bring together to sort of like knowledge and reasoning from different sources to help people just like think through and what systematic and." }, { "end": 4953, "start": 4947, "text": " And rigorous way you know what they want what solutions to their problems might look like." }, { "end": 4958, "start": 4953, "text": " And that I think is is quite exciting and and call is a." }, { "end": 4961, "start": 4958, "text": " Yeah as a thing to be working on." }, { "end": 4971, "start": 4961, "text": " Yeah the collective action one I feel like is a little bit more complex and I'm actually not sure I'm not what did I say on that talk that they I could do." }, { "end": 4977, "start": 4971, "text": " I think you know collective action is obviously a huge." }, { "end": 4983, "start": 4977, "text": " A huge challenge and a huge thing underpinning many problems in in society." }, { "end": 4992, "start": 4983, "text": " You know collective action on climate change things like that one thing I'll say is that there is a there are a group of people." }, { "end": 5002, "start": 4992, "text": " By Alan de so who was until recently." }, { "end": 5012, "start": 5002, "text": " He has been thinking a lot he is a political scientist and he's been working with quite a few people at deep mind and he's he's now gone to deep minds." }, { "end": 5014.8, "start": 5012, "text": " this sort of broad field of cooperative AI." }, { "end": 5018.2, "start": 5014.8, "text": " So this is both problems of how do we build AI systems" }, { "end": 5020.2, "start": 5018.2, "text": " that can cooperate effectively with each other," }, { "end": 5024.6, "start": 5021.72, "text": " but also like how we build AI systems" }, { "end": 5028.04, "start": 5024.6, "text": " that help us to cooperate more effectively" }, { "end": 5030.44, "start": 5028.04, "text": " by perhaps for example, making it easier" }, { "end": 5033.4, "start": 5030.44, "text": " to make credible commitments and things like that." }, { "end": 5037.16, "start": 5033.4, "text": " And I think that's really interesting and exciting." }, { "end": 5040.72, "start": 5037.16, "text": " I think, yeah, problems of cooperation do underpin" }, { "end": 5044.360000000001, "start": 5040.72, "text": " like a lot of and collective action problems" }, { "end": 5046.240000000001, "start": 5044.360000000001, "text": " do underpin a lot of difficult things in the world." }, { "end": 5049.64, "start": 5046.240000000001, "text": " So I think there's potentially some exciting stuff there," }, { "end": 5053.6, "start": 5049.64, "text": " but it's not something I thought a load about myself." }, { "end": 5054.4400000000005, "start": 5053.6, "text": " Awesome." }, { "end": 5056.4800000000005, "start": 5054.4400000000005, "text": " And what do you think the path going forward" }, { "end": 5057.84, "start": 5056.4800000000005, "text": " looks like for yourself?" 
}, { "end": 5059.68, "start": 5057.84, "text": " Yeah, good question." }, { "end": 5062.76, "start": 5059.68, "text": " So just figure out all of the different things" }, { "end": 5064.68, "start": 5062.76, "text": " you should be concerned about with AI and then like," }, { "end": 5066.400000000001, "start": 5064.68, "text": " which capabilities are going to affect them?" }, { "end": 5068.280000000001, "start": 5066.400000000001, "text": " And then no." }, { "end": 5070.44, "start": 5068.28, "text": " So I mean, I'm trying to do this pretty broad," }, { "end": 5071.24, "start": 5070.44, "text": " big picture stuff." }, { "end": 5073.5599999999995, "start": 5071.24, "text": " I don't think I'm going to figure out any answers," }, { "end": 5075.84, "start": 5073.5599999999995, "text": " but I really feel like thinking this stuff through" }, { "end": 5080.84, "start": 5075.84, "text": " in a lot of detail and trying to sort of bring together" }, { "end": 5084.04, "start": 5081.679999999999, "text": " lots of expertise and perspectives in that" }, { "end": 5086.759999999999, "start": 5084.04, "text": " is at least promising for like my own thinking" }, { "end": 5089.24, "start": 5086.759999999999, "text": " and clarity and what most matters in this space." }, { "end": 5094.08, "start": 5090.88, "text": " So I'm definitely just going to spend more time doing that." }, { "end": 5096.04, "start": 5094.08, "text": " I'm the sort of person who's," }, { "end": 5098.92, "start": 5096.04, "text": " you know, I don't know for sure if I'm going to stay" }, { "end": 5101.44, "start": 5098.92, "text": " in sort of a traditional academic path" }, { "end": 5104.96, "start": 5101.44, "text": " or whether I, you know, I think we're at this particular" }, { "end": 5108.92, "start": 5104.96, "text": " opportune moment in terms of governments," }, { "end": 5111.68, "start": 5108.92, "text": " particularly, are really starting to think about governing" }, { "end": 5115.04, "start": 5111.68, "text": " our AI and regulation and what they do." }, { "end": 5117.44, "start": 5115.04, "text": " And there's some quite exciting stuff going on" }, { "end": 5121.48, "start": 5117.44, "text": " in the UK and the EU and in the US too, I'm sure," }, { "end": 5123, "start": 5121.48, "text": " but I'm just less up to speed with that." }, { "end": 5125.6, "start": 5123, "text": " And so I could definitely see myself going" }, { "end": 5127.360000000001, "start": 5125.6, "text": " a bit more of a direction of kind of trying to get" }, { "end": 5130.120000000001, "start": 5127.360000000001, "text": " into the weeds of influencing that" }, { "end": 5132.72, "start": 5130.120000000001, "text": " and shaping that given this kind of," }, { "end": 5134.96, "start": 5132.72, "text": " this sort of bigger picture understanding" }, { "end": 5137.4800000000005, "start": 5134.96, "text": " I'm developing of what's going to matter most longer term." 
}, { "end": 5140.92, "start": 5137.4800000000005, "text": " And I think even if I stay on the sort of academic path" }, { "end": 5142.64, "start": 5140.92, "text": " or if I were to go more into policy," }, { "end": 5147.280000000001, "start": 5142.64, "text": " I still see myself very much as like bringing this kind" }, { "end": 5149.08, "start": 5147.280000000001, "text": " of connecting big picture thinking" }, { "end": 5152.56, "start": 5149.08, "text": " to the more kind of concrete day-to-day decisions" }, { "end": 5157.56, "start": 5152.56, "text": " and trying to bring that bigger picture perspective." }, { "end": 5161.080000000001, "start": 5157.68, "text": " I'm also kind of moving more into," }, { "end": 5164.160000000001, "start": 5161.080000000001, "text": " away from kind of just doing my own research towards managing" }, { "end": 5165.200000000001, "start": 5164.160000000001, "text": " this team of researchers." }, { "end": 5167.080000000001, "start": 5165.200000000001, "text": " And that's a thing I really love doing" }, { "end": 5170.04, "start": 5167.080000000001, "text": " because I think if you want, as we've talked about this kind" }, { "end": 5173, "start": 5170.04, "text": " of work really needs interdisciplinarity," }, { "end": 5174.84, "start": 5173, "text": " but interdisciplinarity is challenging" }, { "end": 5177.8, "start": 5174.84, "text": " and I think, you know, one of the things that requires" }, { "end": 5181.56, "start": 5177.8, "text": " is maybe sort of like a bit more explicit." }, { "end": 5186.56, "start": 5181.56, "text": " Management and having a team that has kind of a bit more" }, { "end": 5190.88, "start": 5187.8, "text": " of a shared strategy and goals and can kind of speak" }, { "end": 5191.72, "start": 5190.88, "text": " the same language." }, { "end": 5193.52, "start": 5191.72, "text": " So I'm quite excited about sort of developing this team" }, { "end": 5195.84, "start": 5193.52, "text": " that we have who come from a range of backgrounds." }, { "end": 5198.400000000001, "start": 5195.84, "text": " And I just, I don't know, I'm not a researcher" }, { "end": 5200.92, "start": 5198.400000000001, "text": " who likes sitting on my own interim," }, { "end": 5204.04, "start": 5200.92, "text": " although that is what most of the last year has been." }, { "end": 5205.280000000001, "start": 5204.04, "text": " I really like working with people." }, { "end": 5208.92, "start": 5205.280000000001, "text": " So like I'm quite excited about kind of developing" }, { "end": 5212.4, "start": 5208.92, "text": " in that direction and just trying to just trying to keep" }, { "end": 5217.4, "start": 5212.4, "text": " understanding and exploring and thinking, yeah," }, { "end": 5220.32, "start": 5217.72, "text": " thinking in a lot of depth about these different scenarios" }, { "end": 5222.56, "start": 5220.32, "text": " and what we should do." }, { "end": 5225.92, "start": 5222.56, "text": " Well, I can't wait to read all about what you come up with" }, { "end": 5227.28, "start": 5225.92, "text": " with yourself and your team." }, { "end": 5228.92, "start": 5227.28, "text": " Yeah, we'll see." }, { "end": 5232.64, "start": 5228.92, "text": " So besides your own work, is there other research lately" }, { "end": 5234.6, "start": 5232.64, "text": " that you're really excited about?" }, { "end": 5238.84, "start": 5234.6, "text": " Yeah, I'd say like a few general themes." 
}, { "end": 5242.6, "start": 5240.240000000001, "text": " One is like, I mean, something that I'm interested in" }, { "end": 5244.400000000001, "start": 5242.6, "text": " but haven't really gone so much time on," }, { "end": 5248.360000000001, "start": 5244.400000000001, "text": " but there's been quite a lot of work coming out." }, { "end": 5251.200000000001, "start": 5248.360000000001, "text": " A various different academic centers and civil society" }, { "end": 5255.200000000001, "start": 5251.200000000001, "text": " sort of on this idea of like participatory futures" }, { "end": 5260.200000000001, "start": 5255.200000000001, "text": " and like how do we engage a wider range of perspectives" }, { "end": 5262.04, "start": 5260.400000000001, "text": " and sort of thinking about what we do" }, { "end": 5263.56, "start": 5262.04, "text": " and don't want from this technology" }, { "end": 5265.92, "start": 5263.56, "text": " and what we might be concerned about?" }, { "end": 5268.400000000001, "start": 5265.92, "text": " I've sort of been getting increasingly interested" }, { "end": 5271.4800000000005, "start": 5268.400000000001, "text": " over time in this sort of perspective of like," }, { "end": 5274.160000000001, "start": 5271.4800000000005, "text": " how do we develop this technology in a way" }, { "end": 5277.92, "start": 5274.160000000001, "text": " that's more kind of democratic and inclusive" }, { "end": 5280.400000000001, "start": 5277.92, "text": " of a wide range of perspectives and concerns" }, { "end": 5282.96, "start": 5280.400000000001, "text": " and one of the benefits of that and why should we want that?" }, { "end": 5287.080000000001, "start": 5282.96, "text": " So I have a really great colleague, Alex Tahagatee" }, { "end": 5289.84, "start": 5287.080000000001, "text": " who's got a background in anthropology" }, { "end": 5292.4400000000005, "start": 5289.84, "text": " and she's really interested in this question of sort of like," }, { "end": 5296.48, "start": 5292.44, "text": " you know, integrating the perspectives of sort of people" }, { "end": 5298.4, "start": 5296.48, "text": " and communities affected by technology" }, { "end": 5301.24, "start": 5298.4, "text": " into thinking about what responsible development looks like" }, { "end": 5304.24, "start": 5301.24, "text": " and so I'm hoping to, she's been doing some really great work" }, { "end": 5306.12, "start": 5304.24, "text": " on that and there's been some really interesting work" }, { "end": 5309.839999999999, "start": 5306.12, "text": " coming out of places like Nesta" }, { "end": 5313.879999999999, "start": 5309.839999999999, "text": " and some other places sort of looking at this," }, { "end": 5315.5199999999995, "start": 5313.879999999999, "text": " this is quite a participation question" }, { "end": 5318.5199999999995, "start": 5315.5199999999995, "text": " and we're hoping to do some sort of more substantive thinking" }, { "end": 5320.2, "start": 5318.5199999999995, "text": " about, you know, why and when this is useful" }, { "end": 5324.8, "start": 5320.2, "text": " because I think there's a bit of a tendency of kind of two extremes" }, { "end": 5326.4, "start": 5324.8, "text": " where you have one group of people who say like," }, { "end": 5329.12, "start": 5326.4, "text": " inclusivity and like participation is just like," }, { "end": 5331.24, "start": 5329.12, "text": " obviously important and like maybe, you know," }, { "end": 5334.76, "start": 5331.24, "text": " there's an element of that like, 
obviously we want to be inclusive" }, { "end": 5337.48, "start": 5334.76, "text": " but that sort of, you know, it doesn't really get into the details" }, { "end": 5338.76, "start": 5337.48, "text": " of like why this is beneficial" }, { "end": 5340.96, "start": 5338.76, "text": " and then there are maybe other people who kind of be dismissive" }, { "end": 5344.2, "start": 5340.96, "text": " and say like, oh, we, you know, the public don't really know," }, { "end": 5346.04, "start": 5344.2, "text": " we can't really ask for their expertise" }, { "end": 5348.36, "start": 5346.04, "text": " and I think there's a more nuanced understanding in between this" }, { "end": 5350.88, "start": 5348.36, "text": " which is like, no, we don't want to go and ask," }, { "end": 5355.28, "start": 5350.88, "text": " like I'm not going to go and ask sort of the wider public" }, { "end": 5359.24, "start": 5355.28, "text": " to tell me what specific, like to try and help me like," }, { "end": 5362.5199999999995, "start": 5359.24, "text": " develop a canary map of like very specific technical capabilities," }, { "end": 5365.5599999999995, "start": 5362.5199999999995, "text": " right? Like there are places for like specific expertise" }, { "end": 5367.08, "start": 5365.5599999999995, "text": " but I also do think that like, you know," }, { "end": 5374.08, "start": 5367.08, "text": " one of the problems with AI development today is that we don't have," }, { "end": 5380.28, "start": 5374.08, "text": " we kind of, you know, it is being driven by relatively narrow set of interests" }, { "end": 5383.16, "start": 5380.28, "text": " and we sort of, there is all this thinking like that" }, { "end": 5387, "start": 5383.16, "text": " that I'm doing about harms but not that much sense of like," }, { "end": 5393.5599999999995, "start": 5387, "text": " kind of collective visions of kind of possible and exciting features" }, { "end": 5396.8, "start": 5393.5599999999995, "text": " and so although this isn't like a thing that I'm emphasizing a lot" }, { "end": 5399.4, "start": 5396.8, "text": " in my own work at the time, I'm really, I'm quite excited about work" }, { "end": 5403.16, "start": 5399.4, "text": " that's happening that's kind of trying to do that kind of thing," }, { "end": 5406.68, "start": 5403.16, "text": " like bring together more diverse perspectives" }, { "end": 5409.88, "start": 5406.68, "text": " and like a wider range of expertise to really think in more detail" }, { "end": 5411.68, "start": 5409.88, "text": " about like what are the ways this could be really good?" }, { "end": 5415.48, "start": 5411.68, "text": " So yeah, I'm excited to see more people doing that kind of stuff" }, { "end": 5419.32, "start": 5415.48, "text": " and to try and contribute a bit, I guess in part because it also helps" }, { "end": 5424.4, "start": 5419.32, "text": " kind of complement and offset some of the more negative stuff I'm doing." }, { "end": 5427.72, "start": 5424.4, "text": " Yeah, so that's one thing I'm really excited about." }, { "end": 5430.48, "start": 5427.72, "text": " Cool, okay, this episode has been a long time in the coming." }, { "end": 5434.4, "start": 5430.48, "text": " We've had a fall start before which is, which was totally my whole" }, { "end": 5439.4, "start": 5434.4, "text": " technical trouble and then we had a lot of scheduling, rescheduling." 
}, { "end": 5441.799999999999, "start": 5439.4, "text": " I just, I just want to thank you for your patience through all this" }, { "end": 5443.599999999999, "start": 5441.799999999999, "text": " and I'm so glad that we made it happen." }, { "end": 5444.5199999999995, "start": 5443.599999999999, "text": " Yeah, me too." }, { "end": 5447.48, "start": 5444.5199999999995, "text": " Dr. Jess Whitlestone, I really appreciate you sharing your time" }, { "end": 5449.16, "start": 5447.48, "text": " and your insight with Talk Areal today." }, { "end": 5450.599999999999, "start": 5449.16, "text": " Thanks so much for joining us here." }, { "end": 5451.639999999999, "start": 5450.599999999999, "text": " Thanks so much for having me." }, { "end": 5481.200000000001, "start": 5451.64, "text": " I'm pretty into it at the conversation." }, { "end": 5485.12, "start": 5481.2, "text": " Three, give us a five-star rating on Apple podcasts." }, { "end": 5513.64, "start": 5485.12, "text": " If you don't think we deserve five stars, let us know on Twitter what we could do better." } ]
Aleksandra Faust
Aleksandra Faust of Google Brain Research on AutoRL, meta-RL, learning to learn & learning to teach, curriculum learning, collaborations between senior and junior ...
https://media.transistor…5ef.mp3?src=site
This is TalkRL Podcast. All reinforcement learning, all the time. Interviews of brilliant folks across the world of RL. I'm your host, Robin Chohan. Dr. Aleksandra Faust is a staff research scientist and reinforcement learning research team co-founder at Google Brain Research. Thanks for taking the time to do this, Dr. Faust. Thank you for having me. I'm really excited. Awesome. So how do you describe your research focus? I'm interested in making reinforcement learning scalable and suitable for complex decision making in the real, interactive world, and in learning something about cognitive science in the process. Specifically, nowadays I'm interested in two main tracks. First, figuring out how to continuously learn new and more complex tasks, fundamentally more complex things, not only improving competency on a single set of tasks. And second, treating reinforcement learning training as a decision-making process itself and applying learning and automation methods with a population of agents to get better training. So I see that your PhD dissertation involved RL. Can you tell us briefly about that? Sure. That was a fun time. My dissertation was about learning preference-balancing tasks. Those are tasks with opposing preferences, but without known hard or soft constraints. For example, imagine setting glasses on the table: we want to get the task done as soon as possible, but not break the glass in the process. All of us can do it, yet none of us knows exactly how much force will break the glass. So the idea was to learn these preference-balancing tasks with reinforcement learning. We did this first in the context of a quadrotor with a suspended load, basically asking it to deliver the load, the package, with minimum swinging. So this is a drone delivery task, and I believe it was the first application of reinforcement learning to UAVs. We then asked under what conditions the policy that we learn will drive the quadrotor to the goal, and we derived verifiable conditions under which that was the case. It turned out that for any control-affine system, which is a fancy way of saying a system controlled by a force that has enough power to overcome wind and so on, if the value function ends up being positive definite, we are guaranteed to reach the destination. And that holds in the presence of disturbances like wind, and it even worked in multi-agent systems. We also wanted to show that this technique holds for classic computer science problems such as resilient sorting, where we need to sort an array and the computer is unreliable and gives us wrong answers up to 50% of the time. The key to this method was connecting state value functions in reinforcement learning with Lyapunov functions from control theory, and using the tools from stability theory and control theory to analyze the system's behavior.
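As a rough illustration of that connection between value functions and Lyapunov functions, here is a minimal sketch (my own toy construction, not the dissertation's method; the value function, goal state, and rollout below are all hypothetical): treat W(s) = V(goal) - V(s) as a Lyapunov candidate and check that it is positive away from the goal and decreases along observed transitions.

```python
import numpy as np

def lyapunov_checks(V, transitions, goal, tol=1e-6):
    """Check Lyapunov-style conditions on a candidate built from a learned
    state-value function V(s).

    W(s) = V(goal) - V(s) plays the role of the Lyapunov candidate:
      1. W(s) > 0 away from the goal (positive definite).
      2. W decreases along observed transitions (s -> s_next).
    `transitions` is an iterable of (s, s_next) pairs sampled from rollouts.
    """
    W = lambda s: V(goal) - V(s)
    positive_definite = all(
        W(s) > tol for s, _ in transitions if not np.allclose(s, goal)
    )
    decreasing = all(W(s_next) < W(s) + tol for s, s_next in transitions)
    return positive_definite, decreasing

# Hypothetical usage with a toy quadratic value function and 1-D states:
goal = np.array([0.0])
V = lambda s: -float(np.sum((s - goal) ** 2))   # value peaks at the goal
rollout = [(np.array([x]), np.array([0.9 * x])) for x in np.linspace(2.0, 0.1, 20)]
print(lyapunov_checks(V, rollout, goal))         # (True, True)
```

When both checks pass over a representative set of states and transitions, the learned value function is behaving like the stability certificates from control theory that the answer above describes.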
So let's jump into the first paper, Learning Navigation Behaviors End-to-End with AutoRL; that was Chiang et al., with yourself as a co-author. What was the gist of this AutoRL paper? It was basically a similar idea, and in a sense an extension of my PhD work. In the PhD work, the assumption was that we didn't know the relationship between the preferences, but we had good intuition about what the important preferences might be. In this case, we go one step further and observe that many reinforcement learning tasks are difficult to solve because the task's true objective is binary: either the task was completed or it was not. That's very difficult for an agent to learn; it's what the literature refers to as the sparse reward problem. So, as engineers making these methods work, we end up spending endless time engineering these proxy rewards and so on. We observed that we actually have good intuition about what the important features might be that tell us how well the agent is doing with respect to completing the task, for example how far from the goal it is, its orientation, its speed, and so on, but, just like before, we didn't know how they relate to each other, the weights that combine them into a single function. So learning came to the rescue. In this particular work, we focused on two tasks in RL, both in mobile navigation: one was goal-conditioned policies, and the second was path following in real, unstructured environments. We selected these two because they were good building blocks that were, first, unsolved at the time and, second, if solved, usable as building blocks for a larger navigation system. Okay. I remember hand-designing reward shaping for a certain task, and I kept wishing I could somehow learn what those coefficients should be, but I just assumed that was impossible; it would be too expensive. That was true for me, because I just had a desktop GPU and a small budget, but I guess you don't have those constraints for some of your projects. That probably helps a lot. Yeah, it helps, yes. So let's move on to the next paper, Evolving Rewards to Automate Reinforcement Learning, which you first-authored. Can you give us an idea of what this paper is about? Sure. After having reward learning work on the robot navigation tasks and other robot tasks as well, for actual robots, we wanted to know how general this technique really was. So in this paper, we applied the method across different benchmark tasks, reinforcement learning algorithms, and different types of objectives. And we learned some surprising things: we learned that the learned proxy reward is tightly coupled with the reinforcement learning algorithm. If we're using Soft Actor-Critic, we end up with one reward, and if we're using PPO, we end up with a different reward for the same objective on the same task. It was surprising, but in retrospect it makes sense, because the reward serves as a guiding heuristic for the learning, so the loss and the reward are very tightly coupled. If I understand correctly, we end up having to try a lot of different variations on these proxy rewards to see how they work out. Do you think a time will ever come when we can compute or estimate them without so much trial and error, or is that just a hopeless thing? It is to some extent trial and error, but it's not completely; it is learning.
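A minimal sketch of what that learning loop can look like (my own sketch, not the AutoRL implementation: `train_policy` and `true_task_score` are hypothetical stand-ins for an inner RL training run and the sparse true objective, and the weight search here is a simple cross-entropy-style loop, one of the method families named next):

```python
import numpy as np

def proxy_reward(features, weights):
    """Proxy reward: a weighted combination of task-relevant features,
    e.g. [-distance_to_goal, -heading_error, -speed_penalty]."""
    return float(np.dot(weights, features))

def search_reward_weights(train_policy, true_task_score, n_features,
                          iters=10, pop_size=8, elite_frac=0.25):
    """Cross-entropy-style outer loop over reward weights.

    Each candidate weight vector defines a proxy reward; a policy is trained
    against it and then scored on the true (sparse) objective, e.g. success
    at reaching the goal. The sampling distribution shifts toward the
    best-scoring candidates.
    """
    mean, std = np.zeros(n_features), np.ones(n_features)
    n_elite = max(1, int(pop_size * elite_frac))
    for _ in range(iters):
        candidates = np.random.randn(pop_size, n_features) * std + mean
        scores = []
        for w in candidates:
            policy = train_policy(lambda f, w=w: proxy_reward(f, w))  # inner RL run
            scores.append(true_task_score(policy))                    # sparse objective
        elite = candidates[np.argsort(scores)[-n_elite:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    return mean
```

Each candidate weight vector combines the hand-chosen features mentioned above (distance to goal, orientation, speed) into one reward; because every candidate costs a full RL training run over a population of agents, this is also where the compute trade-offs discussed later in the conversation come from.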
The methods we're using for this learning are either Gaussian bandits or cross-entropy methods, and that is somewhat sample-efficient, more than just brute force. That said, there is a lot we can do to make this learning more practical. One obvious thing is that in this particular work, the agents in the training population do not communicate and don't share experience with each other. So moving this into the offline setting, where we have a population of agents sharing a pre-collected dataset, would improve the computational cost and the time it takes to train tremendously, because we don't need to run the simulator in the loop just to do the training. The second way to go about it is that we're doing this exercise for a single task at a time, and that's highly inefficient. Imagine being able to learn intrinsic rewards over families of related tasks at the same time, just like we do: we learn a bunch of tasks at the same time and the intrinsic reward along with them, which is basically internal feedback on how well we're proceeding with the task. So learning a good internal mechanism is a promising method. I think it's here to stay; I would love to see more methods that make it better and more scalable, and there are a number of ways to go about that. Okay, so let's move on to Evolving Reinforcement Learning Algorithms, by Co-Reyes et al. in 2021. Can you give us a brief overview of this one? Sure. In this paper, we made the observation that there are lots of new RL algorithms coming out every day, and tweaking the loss function seems to be the thing to do. And we observed that a loss function is really nothing more than a computational graph over certain quantities: the policy's outputs, the state, the action, the observed next state, and so on. So the question was: can we learn a new algorithm's loss function that is trained on a small set of cheap environments, and then generalize it and apply it, just like any other loss function that we know and love, in unseen environments? That was the gist of the paper, and we were able to find several losses that were trained on very simple environments such as the inverted pendulum, lunar lander, and a couple of simple mazes, and actually outperformed on some of the Atari games, actually on all of the Atari games that we tested on. That's amazing.
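To pin down what "a loss function is a computational graph" means here, a toy sketch (my own illustration; the paper's actual search space, operators, and discovered losses differ): the standard DQN-style TD loss written as a small expression graph that a search procedure could mutate node by node and then score by training in the cheap environments mentioned above.

```python
import numpy as np

# A loss as a small expression graph: each node is (op, children),
# and leaves name the per-transition inputs available to the loss.
leaf = lambda name: ("leaf", name)
td_target = ("add", [leaf("reward"),
                     ("mul", [leaf("gamma"), ("max", [leaf("q_next")])])])
q_pred = ("select", [leaf("q_values"), leaf("action")])
DQN_LOSS_GRAPH = ("square", [("sub", [td_target, q_pred])])

OPS = {
    "square": lambda a: a ** 2,
    "sub": lambda a, b: a - b,
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
    "max": lambda a: float(np.max(a)),
    "select": lambda q, i: float(q[int(i)]),
}

def evaluate(node, inputs):
    """Recursively evaluate a loss graph on one transition's inputs."""
    op, children = node
    if op == "leaf":
        return inputs[children]
    return OPS[op](*[evaluate(child, inputs) for child in children])

# Evaluates to the usual squared TD error: (r + gamma * max_a' Q(s',a') - Q(s,a)) ** 2
transition = {"reward": 1.0, "gamma": 0.99, "action": 1,
              "q_values": np.array([0.2, 0.5]), "q_next": np.array([0.3, 0.4])}
print(evaluate(DQN_LOSS_GRAPH, transition))  # 0.896 ** 2, about 0.803
```

A search can rewire such a graph, swap operators, or add nodes, and the result still plugs into the same training code, which is what makes the "one-line change" deployments mentioned below possible.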
And I see this paper cites AutoML-Zero and shares two co-authors, Esteban Real and Quoc Le, with that paper, and I guess it uses a similar representation, though I think the present paper is searching through graphs whereas AutoML-Zero seemed to be searching through linear sequences of instructions, if I understood that correctly. But in both cases I imagine some researcher had to pore over the discovered algorithms and figure out how to explain what they're doing, which seems like reverse engineering an alien artifact. Can you talk about this reversing process? How did that go in this paper? Yep, you're right, that's exactly what it is, and I personally find it very exciting. It is that to some extent, but it's not that bad, and it's not that different from how we normally analyze algorithms. The difference is that when we design an algorithm, we have the design in mind, so we don't need to explain it backwards. But the process is what makes it exciting: we learn something new that we didn't know about the algorithms. We see some very creative loss functions. One example is a loss function that consists of two max functions, which effectively creates a small decision tree in the state space and uses a different loss function in each partition it creates. I personally would never have thought of constructing a loss function that way, but it kind of makes sense. It is surprisingly challenging to do the backward analysis, but we're using the same mathematical tools that we know and are already used to from explaining the deficiencies of existing algorithms. So do you consider that type of activity under the umbrella of explainable AI? It seems a little bit different, but also a little bit similar; how do you categorize that type of approach? I didn't think of it that way, but now that you put it that way, it's a really good way to put it. It is always very helpful when two fields can be connected and benefit from each other's tools. In this case, the output is a math formula, so all the tools from calculus and analysis that we know and love and use on an everyday basis can still be used to analyze these algorithms. That's exciting, and it also makes deployment easy, because it's the same interface; in the case of the algorithms that we found, it's literally a one-line change over DQN. So do you see this kind of data mining and interpretation of the output of an AI algorithm as a growing area in AI, or do you think it will stay an exotic niche? I think interpretation is very important. It's very understudied, and it will need to grow more important as we start applying AI systems in the real world and getting them into the hands of users. There are two reasons why it's important. First, it builds trust with the end users: knowing what to expect goes a long way towards accepting the technology. For example, according to a Pew study from 2017, 56% of Americans would not want to ride in a self-driving car, because they don't trust the technology and are not willing to give up control to a machine in a life-or-death situation. Similar results hold for surgical robots as well. So the burden is on us, technology developers and researchers, to learn and develop the methods that earn the trust of the users. Second, these techniques bring genuinely new insights. This is the first time that we have thousands of reinforcement learning algorithms and their performance, and there are some surprising observations. For example, there are fewer than a dozen distinct performance values, which means that a lot of algorithms, even though their loss functions look very different, in practice perform the same. I don't think we have a very good explanation for why yet, but that's an observation I would love to see the community try to explain. Like you mentioned earlier, people are coming up with new RL algorithms every day. How long do you think before a big chunk of these are going to be produced in an automated way, something similar to this? I think that should be a good focus, because all of our mental energy is limited.
So automating the pieces that can be automated, and focusing our energy on designing the elements and on interpretation and so on, is, I guess, a better use of our time. So I'm hoping we find a way to make these techniques more broadly available, and that means both computationally and by sharing the results, sharing the datasets, and everything else needed to move the field in that direction. I mean, it seems like it would be quite a challenge to find the sweet spot when you're designing this type of search space. You could imagine designing the space to be smaller and easier to cover but less expressive, and then you don't find certain types of algorithms; or maybe it's more expressive, but then it's massive, it's harder to interpret, and it's hard to actually discover those good algorithms in that large space. So do you see it that way? How do you think about designing the search spaces you need for this, and does it involve a lot of trial and error? In general, and this is my personal research approach, and it goes both for designing simulators and for designing these spaces, we should aim to find the smallest or simplest space that does the job. So it makes sense to start small and expand: start with an ambitious end goal, but start small, get some results, and expand. A smaller search space allows for faster iterations, and that helps improve the process. But it does require quite a bit of trial and error. Hopefully, one thing we can do in the future, by repeating these experiments across a number of applications, is to better understand the trade-offs of different search elements and then move towards best-practice guidelines. But for now, in this early state, we're still developing our own intuition about what works and what doesn't. Okay, so let's move on to Adversarial Environment Generation for Learning to Navigate the Web, by Gur et al., with yourself as a co-author. And let me just say, looking over this paper, some days I definitely feel like some websites are generated by an adversarial AI, and I feel like I have about an 80% success rate trying to use them, so I feel like I could use some agents to help me; I can relate right away. But could you give us the actual idea of this paper? Yeah, I'm very excited about this direction. The idea is simple. Consider things we do online: purchasing airline tickets on a commercial airline, changing a password, logging in to a number of different websites, ordering food, and so on. We generally have no major problem adapting to a new task, for example, I want to purchase a ticket on a new airline, or hey, let's go buy movie tickets, and we often don't have issues dealing with website redesigns. So why is that? Can AI and reinforcement learning do that? And why is it reasonable to expect to generalize to, say, movie tickets, and not to spaghetti making? There are some fundamental questions here. Underneath all of these tasks is a combination of simple manipulation skills, like entering the correct information in a text field or selecting a date, and navigation, which is basically: move to the next room by hitting the next or submit button, and don't get lost along the way by subscribing to a newsletter or so on.
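As a toy sketch of that decomposition (my own illustration with hypothetical skill names, not the paper's environment or API): a web task as a dependency graph over low-level manipulation skills, where navigation amounts to choosing which ready skill to execute next. The next part of the answer formalizes this as compositional tasks.

```python
# Hypothetical "book a flight" task: each skill lists the skills it depends on.
FLIGHT_TASK = {
    "enter_origin": [],
    "enter_destination": [],
    "select_date": [],
    "click_search": ["enter_origin", "enter_destination", "select_date"],
    "choose_flight": ["click_search"],
    "enter_payment": ["choose_flight"],
    "click_submit": ["enter_payment"],
}

def ready_skills(task, done):
    """Skills whose prerequisites are all completed: the agent's current choices."""
    return [s for s, deps in task.items()
            if s not in done and all(d in done for d in deps)]

def rollout(task, policy):
    """Run a policy (done, choices -> skill) until the final submit succeeds."""
    done = set()
    while "click_submit" not in done:
        choices = ready_skills(task, done)
        done.add(policy(done, choices))
    return done

# A trivial policy that just picks the first ready skill.
print(rollout(FLIGHT_TASK, lambda done, choices: choices[0]))
```

Swapping in a different dependency graph (change password, order food, buy movie tickets) reuses the same small skill set, which is the sense in which these tasks form a related family.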
This kind of space is what we refer to as compositional tasks. Those are tasks that consist of a set of basic manipulation skills connected together into a dependency graph: you need to complete a number of these manipulation tasks before you can proceed to the next phase, and so on, and you need to learn to navigate by completing those manipulation tasks. At this point we can start talking about a family of related tasks, and the reason why it makes sense to be able to generalize between, say, movie tickets and passwords, and not to cooking spaghetti. In the most recent work we actually propose a formal framework for doing this with Petri nets, and just as a sneak peek, because that paper is not out yet: the space of learnable tasks is huge. With just 45 of these basic skills, that creates a task space of 10 to the power of 24 different tasks that are solvable with that skill set, and if we go to a 3,500-skill set, that creates a task space of 10 to the power of 31. That's huge. So in this line of work, we aim to train a single reinforcement learning agent that can complete all of these compositional tasks without additional training. I'm very excited about this line of work, because it would both give us a formal framework for reasoning about what is learnable and what is not, and enable us to create agents that qualitatively learn more difficult tasks, seeded with a few basic behaviors. Cool. And as a note to listeners, we featured a co-author of this paper, Natasha Jaques, on our very first episode of TalkRL. I see that this work also cites and builds upon PAIRED, from the paper Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design by Michael Dennis et al., also with Jaques as a co-author. And we were lucky enough to have Michael Dennis on the show not too long ago to talk about PAIRED; that was back in January. So before we get into this, can you remind us of the basic idea of PAIRED? Sure. And by the way, both Natasha and Michael are just amazing. The basic idea behind PAIRED is that for many tasks it helps to guide the agent with a curriculum, and hand-designing a curriculum is difficult. So the idea is to pose curriculum learning as a multi-agent game, where we have one agent, the adversary, that creates difficult challenges, and another agent that is learning how to solve those challenges. The adversary creates the most difficult environment it can, and then the agent does the best it can. Now, to provide the adversary with a learning signal, there is the regret, which measures, in lay terms, the learning potential of the agent. The paper proposes adding an additional agent and estimating the regret as the difference in performance of the two agents that are training. That way, the adversary can make things more difficult when it observes that there is a lot more for the agent to learn, and it keeps experimenting with different environment setups if the agents are performing the same and it observes zero regret. So that's the gist of the paper. Super elegant formulation, and I loved that paper; as soon as I saw the poster I thought, this is beautiful. But here you're going further: you introduced flexible PAIRED and B-PAIRED. Can you explain flexible PAIRED and B-PAIRED? Yeah, I love their paper; I was not an author on it, but it's very elegant and I love it. So yeah, there was an interesting journey with these extensions.
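A minimal sketch of the regret signal just recapped, plus the population-based variant that comes up in the extensions below (my own paraphrase in code, assuming episode returns are already measured; not the PAIRED implementation):

```python
def paired_regret(antagonist_return: float, protagonist_return: float) -> float:
    """Original PAIRED: regret is estimated as the performance gap between the
    two agents training on the environment the adversary generated. The
    adversary is trained to maximize this gap; the protagonist shrinks it by
    learning to solve the environment."""
    return antagonist_return - protagonist_return

def population_regret(returns: list[float]) -> float:
    """Population-based variant: regret as the gap between the best-performing
    agent in the group and the group average, with no fixed antagonist or
    protagonist roles."""
    best = max(returns)
    mean = sum(returns) / len(returns)
    return best - mean

# Example: one pair of agents, then a population of four, on one generated environment.
print(paired_regret(0.9, 0.4))                   # 0.5
print(population_regret([0.9, 0.4, 0.6, 0.5]))   # 0.9 - 0.6 = 0.3
```

When the regret is near zero, the adversary has no signal that the learner could do better, which is exactly the failure mode on harder compositional tasks that the answer below describes.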
Remember that we're interested in compositional tasks. The topology of these tasks is like a series of the tasks in the original PAIRED paper: a set of connected rooms where agents need to complete smaller challenges along the way. And because the tasks are orders of magnitude harder, the regret as estimated in the PAIRED paper is often zero, and that doesn't lead us anywhere. Second, upon further analysis, and this was not obvious to us from the get-go, we realized that in the original paper the adversary is partly learning to create solvable environments. Many of the environments it generates are not solvable, so it has to learn to create feasible ones. We don't have that problem in this context, because all of our tasks are solvable by design; they are very complicated, but they are solvable. So, to solve the compositional tasks with this learned curriculum, we created an adversarial LSTM that creates websites up to ten pages long and places up to a hundred elements on each page for the agent to deal with. We also equipped the LSTM with explicit control of the design budget: basically, the LSTM decides how many design elements and pages are appropriate to create, given the observed competency of the agent it's training. That's what we call budgeted PAIRED, or B-PAIRED. To do that, we added an additional loss component, which encourages the LSTM to increase the difficulty when the agents are doing well and decrease it when they're struggling, and the regret loss from the original PAIRED, slightly modified, is used for fine control over the selection of individual skills and design elements to place on the page. The second part is that we made the regret estimation a bit more generic, extending it to a population of agents; in the original PAIRED it's just two agents, and that's fixed. In our case, we compute regret as the difference between the best-performing agent in the group and the average, and that makes training a bit more stable. So there's no longer an antagonist and a protagonist. That's the idea. With these two modifications we were able to train these generic navigation policies for basically any website, and we observed that the complexity of the learned tasks steadily increases with prolonged training. Okay. If we look back at the different methods we talked about today, the common theme is that they seem to involve a fair bit of compute, especially with these outer meta-learning or evolutionary loops around RL, RL itself being relatively expensive in compute. So I wonder how you think about the cost-benefit, whether using that compute is worthwhile in each case. Is it obvious where the line is, or does it usually involve experiments to decide? How do you think about that? It goes both ways. In my mind, investing in compute is a design choice, and I tend to ask two questions. First, is the task tedious and repetitive? Have we spent months, if not years, tuning the rewards in various problems across different applications and whatnot? And the second question is: will the solution be reusable? Does it make sense to invest in compute when the result of that computation is something that can be used over and over again?
For example, when we learn policies with the learned rewards, we can use those policies for higher-order planning in multi-agent systems, for a rendezvous task, where each agent controls itself and they do joint planning and so on. So that makes sense to do. If the answer to either of these two questions is no, then the investment in compute is probably not justified. So it always makes sense to start small, see if there is promising reusability or engineering time saved, and then invest more in computation. And do you start with a fixed compute budget in mind and see what you can do with it, or do you sometimes fix a problem and then try to estimate the compute budget you need? How does that work? It's more that we start with a fixed budget, because even at Google the budget is fixed; we have a fixed allowance to begin with and go from there. But if we've done something similar before, we have a good sense of the computational budget needed and a good estimate of what's required, and then we can ask for more and carve out the problem. For example, with the loss functions, our first dip into that space was on value-based methods, and that proved to work, so it makes sense to rethink what a better or larger computational budget in that space might be, given that we're producing a database of algorithms and so on that the community can use and build upon. And just to be clear, how do you like to define meta-RL? It seems like it could mean a few different things. Sure. In the community, meta-RL comes in basically two main flavors. One is learning to learn. The other is methods like MAML, model-agnostic meta-learning, and so on, in which case we learn a generic policy and then learn specialized policies based on additional data. My research focus is more geared towards the generic learning-to-learn methods, and I define meta-RL as a trainer, or learning agent, that aids the learning of the RL agent. So it's a multi-agent system: the meta-trainer is training the RL agent. The meta-trainer can be evolution, it can be reinforcement learning, it can be supervised learning and whatnot; the methods differ, but the paradigm is that we have an RL agent under training and a meta-trainer that aids the training of that RL agent. Okay, thanks for clarifying; those two flavors were getting very mixed up in my mind, and so I was thinking, this is not like MAML, so is that not meta-RL? Yeah, so the other term I tend to use is learning to learn, and I think that's a little bit clearer, because it's focused on that particular flavor of meta-learning. We could describe MAML that way too, right? Like, it's learning to learn very quickly how to compensate for different surfaces or something. Okay, so we'll go with that. Following that, it seems like we're moving up the stack in a sense. If we look back a few decades, we would have said the compute for deep RL itself would have been prohibitive, and maybe a few years ago we would have said meta-RL would be prohibitive in terms of compute. But do you foresee a time in the future where we could go up yet another layer and talk about meta-meta-RL being feasible?
For example, maybe to explore various types of search space designs, like we mentioned earlier. Could this process continue on and on, or do you see diminishing returns, so that we would just stop at some point? I love this question; I'm enthusiastically nodding here on the other side. In fact, that's something I would be very curious to know how to do. Interesting questions to ask here would be: when and how do we accept and learn new skills, and under what circumstances do we forget old ones? Other interesting aspects would be the integration of various methods and learning more about the interdependencies between them. For example, imagine learning the rewards, the neural network architecture, the loss function, and the curriculum at the same time, in the same swoop. That would be cool. I don't think we are close to doing that, but we're just dipping our toes into this space, being able to focus more on learning the dependencies in this complex learning system: what is the role of the neural network architecture, what is the role of the reward, and so on, and how they interplay. That's very exciting. You described your work, and we were talking before the episode, in terms of learning to learn, learning to teach, and, I don't know if you use the phrase, but learning to reward. And I guess when you put them that way, they sound very relatable, almost relatable to humans. Is that how you think about them? Yeah. They do come as part of the cognitive process, and I did mention that I'm broadly interested in learning something about cognitive science in the process and drawing inspiration from it. So each one of these maps to a different cognitive process. We have some recent work on joint attention, also with Natasha Jaques, where we showed that in multi-agent systems, when the agents share attention layers and we reward the shared attention, it actually improves performance on cooperative multi-agent tasks. That's a very cool result, and very interesting, because we know from cognitive science, psychology, and biology that this happens in humans and not, it seems, in other primates. Okay, and so how do you think about how this meta-RL relates to humans or animals? Do you think there is some learning to learn that happens with humans or animals? That's a very good question, and I'm really glad you asked it. There are a number of cognitive processes, and I've focused mostly on evolution, reinforcement, self-supervision, and curriculum. In biological systems, evolution determines the fixed hyperparameters of the system, which are tailored precisely for the tasks and environments the system will be performing in; our height and so on are tailored to the environment we're going to operate in. That seems to translate directly to AI in terms of the hyperparameters for reinforcement learning: neural architecture search is a hyperparameter, how many layers, and the rewards as well. Next, biological systems learn skills, very basic ones at first, and we do that through interaction with the world, banging on objects, right, testing gravity and so on, and this loosely maps to reinforcement and imitation learning.
Next, biological systems practice: we practice these skills, and we observe not only our own behaviors but those of others, and we learn causality and learn to predict what the future might hold. We call this intuition, but it's really prior experience compressed into a predictive model that's trained with self-supervision. For example, with a wall ahead of you, you're not going to walk straight ahead, even though you haven't experienced that particular wall, because you've seen and experienced examples before. We've had a series of papers on self-supervision, so we can learn pretty much anything we're curious about: both our own policies, without actually knowing the policies, and our teammates' as well. That predictive model can be of higher or lower quality, but it's never perfect, just like our skills are never perfect. We can still use it, in combination with the basic skills that we have, to imagine scenarios, and we can start learning more complex skills that require more planning, such as navigation over long distances, or combinatorial and compositional tasks like we did in navigation and manipulation. And in biological systems, the learning process and the things we learn change over time through a curriculum, and we saw evidence for that through PAIRED-like papers in the literature, both my work and external work. The biologically inspired ideas tend to transfer, to some extent. So I think the future is going to be very interesting: seeing how hierarchical, model-based, and multi-agent RL come together to learn increasingly complex tasks. We treat them now as separate subfields of RL, but they seem to be very highly related to each other, and we don't quite know how to put them together into a single system. I mean, it seems like evolution is doing our meta-RL, right? It's building the ability to learn. So is it purely evolution? Because I wonder if you could say that within our lifetime we're doing something like MAML, because as we get more skills, it's easier for us to get more. You know, when we learn our fifth sport, it's actually a lot easier because we already have some, in the same way as MAML, or something. Yeah, that's a very good point, and this list is not meant to be completely exhaustive; it's merely the list of work I've personally worked on. There is a lot more to be said, and one way to think of it, at least the taxonomy I'm starting to observe, is: what are the fixed hyperparameters? Some things are fixed for life, more or less, whether they're fixed over the life of the agent is questionable, and other things change over time. Curriculum-based methods and the PAIRED-style setups, where we have a teacher and a student, or an adversary and whatnot, are more linked towards adaptation. I would put MAML into that space as well, but how to put these two together at the same time, I have no idea how to do. That's what makes it an interesting field, I suppose. Yeah. Awesome. So do you plan to continue focusing on these themes going forward with your work? Yeah, I think it's a super exciting and very rich area to explore. And I mean, consider this: being able to train RL agents and make them work is a very human-intensive process right now. And that means looking at the process of training itself.
As we talked about before, that process is tedious work, and we put our cognitive cycles into making the sequential decisions to solve it. So if we offload that and solve RL with RL, or with other learning methods, we open ourselves to more opportunities, and ultimately that automation will not only lead to better policies, but even more to deeper insights about how and why and when these methods work. And it will free us from the burden of babysitting experiments and let us focus on what matters most. So, for the listeners, Dr. Faust and I crossed paths at the ICLR conference in the Gather Town video chat, and I also attended your mentoring sessions on research careers, so I can definitely recommend your mentoring advice. Is mentoring a significant focus for you? Thanks so much, Robin. Yes, mentorship is very important to me, and I find it personally gratifying. I've learned a lot from each mentoring relationship that I have, and I highly recommend it to anybody; I'm a mentee to this day. To quote my own mentor: no matter where you are in your career, you can always mentor someone. Even high school students can mentor middle school students, so there's really no excuse. I love that. Okay, can you say anything about what you see as the ideal working relationship between a senior researcher and a junior researcher, for example, what should each contribute to the work? Any working relationship is a two-way process and comes from combining the strengths and weaknesses of each party. In any successful relationship, one party is contributing something and learning something else, and the same goes for the other party. The same holds for the junior and senior researcher in a field: they are complementary roles, and each party has a different role to fulfill. From the perspective of the senior researcher: a research career is filled with lots of ups and downs, and it is really not for the faint of heart. Both wins and losses are highly personal, and when we're in the lows, when things are not working and we don't know what the next question is or when we're going to have a solution, the low periods can feel indefinite. So the senior researcher brings experience and can fill several roles there. First, the senior researcher understands the cyclical nature of the process and is able to help with the emotional journey, and I really don't want to underestimate the emotional journey; we don't talk about it much, but we should. They can bring encouragement at the times of the lows. To quote one of my prior professors: research feels a lot like banging your head against the wall, and after enough experience, you know that eventually the wall is going to give up. So they can infuse a sense of confidence. Second, the senior researcher has developed an intuition and taste for good problems, so he or she needs to steer the junior person towards asking the right questions and really equip them to develop their own taste in research. And third, the senior researcher should be in a better position to judge the full potential of the research, because I've seen several times that people get excited, like, okay, it's been a low, right, it's tiring, and the minute things start to work, it's, okay, let's go publish it and so on.
Like, no, maybe we should hang on a little bit more, because there is a lot more that can be done in a short time if we dig into this a bit deeper. And then finally, the senior researcher can help the junior person with visibility and connections, and with reflecting their strengths back to them. And again, on strengths: it's very easy for most of us, we're trained to do it, to nitpick and find deficiencies, and we're very good at finding deficiencies in ourselves, but it's not obvious to us what our strengths are. Senior people can help with that. The junior researchers, in turn, bring creativity and more focused time on fewer projects, since senior people tend to be spread thin, and they have more detail and depth in particular technical areas. So keeping the senior researcher informed about the findings and the process, that's the giving-back part, and that can form a very productive partnership. That was super interesting. Okay, so besides what we talked about today, are there other things going on in RL these days that you find quite interesting? Lots of things. Besides breaking the barrier of skill learning and generalization over these families of tasks and environments, and treating RL training as a decision-making problem and solving it, I'm really excited about RL coming together as systems for doing social good. There is tremendous opportunity to use autonomous agents, or RL systems, call them however you want, to automate repetitive tasks that we do. That's not only convenient and frees us up, but also opens a huge opportunity for people with accessibility needs, or other people who are not that comfortable with the technology and are unable to complete those tasks on their own. And that really makes a difference for people. What that will require is that we think about RL agents and RL systems in terms of interfaces that are compatible with human users, so having vision and natural language and so on is going to be a very important addition, alongside interpretability and so on. Some other work I'm very excited about right now: the chip design work from Anna and Azalia that came out earlier this week, and the balloon navigation from Bellemare's team. Seeing RL work on these really gnarly real-world problems is super exciting. In terms of particular methods: there is a version of population-based RL, social RL, where we're training groups of agents and agents are learning from other agents, which Natasha Jaques is spearheading, and I'm very excited about that, treating learning as a social construct. The offline RL methods similarly focus on learning from limited interactions with the world, and this has been one of the key bottlenecks of RL when it comes to bringing it into the real world. Lastly, I think very soon we'll need to look into the fairness and interpretability of decision making. That's something we've had the luxury of not doing so far in RL, but as we move towards really real-world problems it's going to be incredibly important. And the second thing is developing theoretical foundations around generalization in RL. The field is moving towards generalization across environments, tasks, and systems, and the good old Markov decision process, or POMDP, abstraction is not very useful as it is.
So developing new theoretical tools will be very exciting in this space. Wow, okay. Can you say more about that? You were saying that the MDP and POMDP abstractions are limiting in some ways. Can you talk about how they're not useful or limiting, and what alternatives we could have there? Sure. A POMDP, or let's say an MDP, tells you that your RL problem is states, actions, hidden dynamics, a reward that we can observe, and a discount factor gamma, right? And yet when we're solving any problem, the first things we ask are: what is the environment, what is the task, what is the system we're going to run on, and so on. None of this really exists in the POMDP. Basically what we're doing, and this is a largely unexplored area that we just handle empirically, is mapping the real world and real problems into a POMDP. The second issue is that the POMDP says: here is a fixed state space, here are your fixed actions, and so on. What we really ask when we talk about generalization is how an agent can solve some number of tasks across some number of environments, which at that point is a family of POMDPs. In the compositional task formulation it can be a chain of related MDPs, and so on. We don't have tools for that; there are really no theoretical tools or problem definitions that can guide us there. So we end up in this highly empirical regime where we're defining environments and benchmarks and whatnot, but the theory really works on the MDP and the POMDP without looking at the structure behind the environments and tasks. And by tasks, you don't just mean rewards, right? The reward is just what we're using to get the task done. Exactly. So we talked about learning the rewards as task objectives, and if you read the theory of reinforcement learning, I think in Barto's book, it says the reward should be your task objective, which means we should not even be talking about intrinsic rewards. Okay, fine, but now we do use intrinsic rewards and we put them in the POMDP, and what that really means is that we're creating a proxy POMDP for the one we actually want to solve. So you see what is going on. So ideally, you would be able to state more directly what it is we need, in a framework that has some theoretical basis to get us there. Is that kind of what you're saying? Yes. I don't know how to do it, but I think we can and should do better. Sounds like some of your work is maybe building the bridge there, with the meta-RL taking us there. I am hoping so. In the compositional task paper that is going to come out on arXiv soon, we put foundations under compositional tasks through Petri nets, and that lets you actually see the topology of the task. We propose that the task is a graph, a Petri net that controls the state, and directly out of that graph we can infer the POMDPs that we can use for training RL agents. And we can define the family of related graphs, of Petri nets, that describes this space. So at least that gives somewhat of a framework; we're just scratching the surface, and I don't think this is necessarily the right way to think about it, but it is a way. Very cool.
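To make the idea of a task graph inducing a family of related MDPs a bit more concrete, here is a minimal, hypothetical sketch in Python. It is not the Petri-net formulation from the forthcoming paper mentioned above; the names (SkillSpec, TaskGraph, the toy web-navigation skills) and the structure are invented purely for illustration. Each subtask carries a sparse true objective plus an optional proxy (intrinsic) reward, and every chain through the dependency graph induces one member of the family of MDPs.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

# Hypothetical illustration: a compositional task as a dependency graph of
# basic "skills". Each complete chain through the graph induces one concrete
# MDP specification, so the graph describes a family of related MDPs rather
# than a single fixed one.

@dataclass
class SkillSpec:
    name: str
    # Sparse true objective: did this subtask complete?
    success: Callable[[dict], bool]
    # Optional shaped/learned proxy reward used only to guide learning.
    proxy_reward: Callable[[dict], float] = lambda obs: 0.0

@dataclass
class TaskGraph:
    skills: Dict[str, SkillSpec]
    edges: List[Tuple[str, str]] = field(default_factory=list)  # (prerequisite, dependent)

    def successors(self, name: str) -> List[str]:
        return [b for a, b in self.edges if a == name]

    def enumerate_chains(self, start: str) -> List[List[str]]:
        """All skill chains from `start` to a terminal skill; each chain is
        one member of the induced family of MDPs."""
        nexts = self.successors(start)
        if not nexts:
            return [[start]]
        return [[start] + rest for n in nexts for rest in self.enumerate_chains(n)]

def chain_reward(chain: List[str], graph: TaskGraph, obs: dict, stage: int) -> float:
    """Reward in the proxy MDP for one chain: shaped signal for the current
    subtask plus a sparse bonus when its true objective is met."""
    skill = graph.skills[chain[stage]]
    return skill.proxy_reward(obs) + (1.0 if skill.success(obs) else 0.0)

# Example: a tiny web-navigation-style task family.
graph = TaskGraph(
    skills={
        "fill_username": SkillSpec("fill_username", lambda o: o.get("username_ok", False)),
        "fill_password": SkillSpec("fill_password", lambda o: o.get("password_ok", False)),
        "click_submit":  SkillSpec("click_submit",  lambda o: o.get("submitted", False)),
    },
    edges=[("fill_username", "fill_password"), ("fill_password", "click_submit")],
)

for chain in graph.enumerate_chains("fill_username"):
    print("induced MDP over subtasks:", " -> ".join(chain))
    print("reward at stage 0:", chain_reward(chain, graph, {"username_ok": True}, 0))
```

The point of the sketch is only that the graph, not any single fixed MDP, is the primary object: the individual proxy MDPs and their shaped rewards are derived from it.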
Okay, so is there anything else I should have asked you about today, or anything you want to share with our listeners? It is a very exciting time to be in reinforcement learning. I think reinforcement learning is really on the cusp of a breakthrough that requires a holistic and multidisciplinary approach: research, tools and frameworks, and challenging applications really need to come together to make progress. And it will happen soon; it's kind of happening now. I personally feel very honored and privileged to be part of this journey at this point in time, and super grateful to all of my collaborators for joining the ride and sharing the journey. Fantastic. Any suggestions for the TalkRL show here? This looks great, and thank you so much for having me. It has been a great conversation. It's been so great to have you, Dr. Alexandra Faust. Thanks for your time and your insight, and thanks for sharing with TalkRL. Thank you so much. Thank you for joining us today.
[ { "end": 12, "start": 0, "text": " This is Talk by Rail Podcast. All reinforcement learning, all the time." }, { "end": 20, "start": 12, "text": " Interviews of brilliant folks across the world of RL. I'm your host, Rob and Chauhan." }, { "end": 27, "start": 20, "text": " Dr. Alexandra Faust is a staff research scientist and reinforcement learning research team co-founder at Google Brain Research." }, { "end": 33, "start": 27, "text": " Thanks for taking the time to do this Dr. Faust. Thank you for having me. I'm really excited." }, { "end": 37, "start": 33, "text": " Awesome. So how do you describe your research focus?" }, { "end": 50, "start": 37, "text": " So in the process, I'm interested in making reinforcement learning scalable and suitable for complex decision making in real interactive world and learning something about cognitive science in the process." }, { "end": 68, "start": 50, "text": " Specifically, nowadays I'm interested in two main tracks or things. First figuring out how to continuously learn new and more complex tasks, fundamentally more complex things, not only improve in the competency of a single set of tasks." }, { "end": 81, "start": 68, "text": " And the second is treating reinforcement learning training as a decision making process itself and applying learning and automation methods with the population of agents to get better type of training." }, { "end": 86, "start": 81, "text": " So I see that your PhD dissertation involved RL. Can you tell us briefly about that?" }, { "end": 94, "start": 86, "text": " Sure. That was fun time. My dissertation was about preference, a balancing task, learning preference, balancing task." }, { "end": 105, "start": 94, "text": " I mean, it's a task, a task with the opposite preferences, but without known hard or self constraints. So for example, imagine setting lots on the table." }, { "end": 113, "start": 105, "text": " We all we want to get this task done as soon as possible, but not really break the glass in the process. All of us can do it yet." }, { "end": 122, "start": 113, "text": " None of us know what is exactly forced. It's needed. That's going to break the glass. Right. So the idea here is to find." }, { "end": 127, "start": 122, "text": " And the preference preference balance is task with reinforcement learning." }, { "end": 138, "start": 127, "text": " So we did this first in a context of the quadrorder with a suspended load and basically asking to deliver the load or the package with a minimum swinging." }, { "end": 146, "start": 138, "text": " So this is a drone delivery task. I believe that this was the first application of reinforcement learning to UABs." }, { "end": 155, "start": 146, "text": " Here we then asked under what conditions the policy that we learn will drive the quadrorder to the goal." }, { "end": 161, "start": 155, "text": " And we derived the very very viable conditions under which that was the case." }, { "end": 171, "start": 161, "text": " It turned out that actually for any control of fine system, which is a fancy way of saying systems control by a force that have enough power to overcome wind and so on." }, { "end": 177, "start": 171, "text": " And if the value function ends up being positive definite, we are guaranteed to reach the destination." }, { "end": 182, "start": 177, "text": " And that's given goes in the presence of disturbances like wind and so on." }, { "end": 196, "start": 182, "text": " And even worked in the multi agent systems. 
And we want to show that this technique hole for the classic computer science such as resilient sorting when we need to sort an array and the computer is unreliable and gives us wrong answers." }, { "end": 211, "start": 196, "text": " Up to 50% of time. So the key here for this method was connecting the state value functions in reinforcement learning with the control, the upper no functions from the control theory and using the tools from the stability theory and the control theory to analyze the system behavior." }, { "end": 222, "start": 211, "text": " So let's jump into the first paper that is learning navigation behaviors and to end with auto RL and that was Chang at all and with you yourself as a co author." }, { "end": 227, "start": 222, "text": " So what was the just of of this auto RL paper?" }, { "end": 232, "start": 227, "text": " So it was basically similar idea." }, { "end": 250, "start": 232, "text": " And in a sense extension of my PhD work, the assumption in this case was that we didn't know relationship between the preferences since the PhD work with another relationship between the preferences, but we had good intuition about what the important preferences might be." }, { "end": 265, "start": 250, "text": " In this case, we're going to move on this one step further and observe that in many reinforce learning tasks, they are difficult to solve because the task true objective is either task was completed or it was not completed." }, { "end": 272, "start": 265, "text": " And that's very difficult for agent to learn. This is what we kind of refer to in the literature as part of our problem." }, { "end": 282, "start": 272, "text": " So we ended up as kind of engineers and kind of making this method of work. We end up spending endless time on engineering this proxy interest your keywords and so on." }, { "end": 295, "start": 282, "text": " So we observed that we actually have a good intuition about what might be important features that would give us the reward on how well the agent is doing with respect to completion of the tasks." }, { "end": 307, "start": 295, "text": " For example, how far from the goal it is the orientation, the speed and so on, but just like before we didn't know how they relate to each other, so kind of having these weights that kind of put them together in a function." }, { "end": 312, "start": 307, "text": " So learning came to the rescue to solve this task." }, { "end": 325, "start": 312, "text": " So in this particular work, we focused on two tasks in realm, both mobile navigation, one was goal condition policies and the second was path following in real unstructured environments." }, { "end": 338, "start": 325, "text": " And we selected these two because these two tasks were good building blocks that were a unsolved at the time and be if sold it can be used as a building box for the larger navigation system." }, { "end": 348, "start": 338, "text": " Okay, so I mean, I remember hand designing, reward shaping for a certain task and I kept wishing that I could somehow learn what these what these coefficients could be, but I just assumed that it was just that was impossible." }, { "end": 355, "start": 348, "text": " It would be too expensive and that was true for me because I just had a desktop GPU and a small budget, but I guess I guess you don't have those constraints for some of your projects." }, { "end": 356, "start": 355, "text": " I probably helps a lot." }, { "end": 358, "start": 356, "text": " Yeah, it helps. Yes." 
}, { "end": 369, "start": 358, "text": " So let's move on to the next paper, evolving rewards to automate reinforcement learning that was first off or yourself. So can you give us an idea of what this paper is about?" }, { "end": 382, "start": 369, "text": " Sure. So after having the reward learning work on the robot navigation task and other robots tasks as well, further actual robots, we want to know how really general this technique was." }, { "end": 390, "start": 382, "text": " So in this paper, we applied the method across different benchmarking task reinforcement learning algorithms and different types of objective." }, { "end": 399, "start": 390, "text": " And we learned some surprising things we learned that in Tristic reward is tightly coupled with the reinforcement learning algorithm." }, { "end": 409, "start": 399, "text": " So if we're using soft extrocritic, we end up with a one reward and we're using PPO, we end up with a different reward for the same objective on the same task." }, { "end": 418, "start": 409, "text": " So in retrospect, it was very surprising, but in retrospect, that makes sense because the reward serves as a guiding heuristic for the learning." }, { "end": 423, "start": 418, "text": " So the last and the reward are very closely tightly coupled." }, { "end": 431, "start": 423, "text": " If I understand correctly, we end up having to try a lot of different variations on these proxure rewards to see how they work out." }, { "end": 442, "start": 431, "text": " Do you think that we could ever, you know, a time will ever come when we can kind of compute these somehow or estimate them without a lot of without so much trial and error, or is that just kind of a hopeless thing?" }, { "end": 448, "start": 442, "text": " So it's not completely, it is to some extent trial and error, but it is learning." }, { "end": 467, "start": 448, "text": " And the methods we're using in this case for learning are either Gaussian bandits or cross entropy methods, and it is somewhat sample efficient more than just brute force that said the there is a lot that we can do to make this learning more practical." }, { "end": 477, "start": 467, "text": " One obvious thing is that in this particular work, the agents in the training population do not communicate and don't share experience with each other." }, { "end": 499, "start": 477, "text": " So moving in moving this in the offline setting, where we have population of agents that shares a data set that was pre collected would improve the, yeah, the computational complexity over the time that it takes to train tremendously because we don't need to run the simulator in the loop just to the training in the process." }, { "end": 509, "start": 499, "text": " The second way to go about that is that we're doing this exercise for a single task at a time and that's highly inefficient." }, { "end": 517, "start": 509, "text": " Imagine being able to learn interest security words over familiar related tasks at the same time, just like we do." }, { "end": 529, "start": 517, "text": " We will learn a bunch of tasks at the same time and the interest security words along with it, which is basically internal feedback on how well we're going proceeding with the task." }, { "end": 540, "start": 529, "text": " So learning good internal mechanism is a promising method. I think it's here to stay would love to see more methods that make it better and more scalable and learn a number of ways to go about that." 
}, { "end": 552, "start": 540, "text": " Okay, so let's move on to evolving reinforcement learning algorithms that was by co raise at all in 2021. Can you give us a brief overview of this of this one?" }, { "end": 569, "start": 552, "text": " Sure. So in this paper, we asked the question or made an observation that learning loss functions nowadays, there's lots of new RL algorithms coming out every day and this seems like tweaking the loss function is thing to do." }, { "end": 584, "start": 569, "text": " And we observed that loss function is really nothing more than a computational graph over certain parameters of the policies or the state action observed state and so on." }, { "end": 604, "start": 584, "text": " So the question was, well, can we learn a new algorithm loss function that is trained on small set cheap environments and then be generalized and applied just like any other loss function that we know and love in unseen environments and so on." }, { "end": 625, "start": 604, "text": " That was the gist of the paper and we are able to find several losses and so on that actually we're training in very simple environments such as the inverted pendulum and the lunar lander and couple simple mazes and actually outperform some of the Atari games." }, { "end": 649, "start": 625, "text": " Actually all of the Atari games that we tested on that's amazing and I see this paper sites auto ML zero and shares two co authors that's real and Lee with that paper and I guess it uses a similar representation though I think the present paper is searching through graphs where as auto ML zero seemed to be searching through linear sequences of instructions if I understood that correctly." }, { "end": 665, "start": 649, "text": " But in both cases I'm imagining some researcher had to pour over all the things that the algorithm discovered the discovered algorithms and kind of figure out how to explain what it's doing which kind of seems like reverse engineering in alien artifact." }, { "end": 670, "start": 665, "text": " Can you talk about this reversing process? How did that go in this paper?" }, { "end": 694, "start": 670, "text": " Yep, you're right and that's exactly what it is and I personally find it very exciting. But it does that to some extent and it's not that bad and not that different from how we normally analyze algorithms did the difference is that when we design the algorithm we kind of have the design that we have in mind." }, { "end": 706, "start": 694, "text": " So we don't need to necessarily backward explain but the process is is what makes it exciting we learn something new that we didn't know about the algorithms." }, { "end": 725, "start": 706, "text": " We see the same some very creative loss functions one example is a loss function that consists of two max functions which effectively creates a small decision tree in the state space and using a different loss function in each partition that it creates." }, { "end": 750, "start": 725, "text": " I personally would never thought of constructing a loss function that that way but it kind of makes sense. Yeah, it's it's it's a surprisingly challenging to do the backward analysis but we're using the same mathematical tools that we know and that this is the process where were you already used to when we're explaining the deficiency of the existing algorithms." 
}, { "end": 761, "start": 750, "text": " So do you consider that that type of activity and approach under the umbrella of explainable AI it seems a little bit different but it's a little bit similar or how do you categorize that that type of approach." }, { "end": 773, "start": 761, "text": " I didn't think of it that way but now that you put it that way it's a really good way to put it. It is always very helpful and two fields can be connected and benefits from each other tools." }, { "end": 788, "start": 773, "text": " And in this case, I produce this math formula so all the tools from calculus and analysis that we know and love and use on every day today basis we can still use that tool set for analysis of all these algorithms." }, { "end": 803, "start": 788, "text": " And that's that's exciting and it's also easy for for the deployment because it's again, it's the same interface so we can just in this in the case of these algorithms that we found it's literally one line change over the Q and S." }, { "end": 817, "start": 803, "text": " So do you see this kind of data mining and interpreting the output of an AI algorithm as like a growing area in AI or is this or do you think this is kind of an exotic niche will stay as an exotic niche." }, { "end": 831, "start": 817, "text": " I think the interpretation is very important. It's very understudied and will need to grow more important as we start applying AI systems in the real world and getting them into the hands of the users." }, { "end": 843, "start": 831, "text": " And the reason there are two reasons why they're important first it builds the trust with the end users knowing what to expect goes a long way towards accepting the technology." }, { "end": 862, "start": 843, "text": " For example, according to a Pew Study from 2017 56% of Americans would not want to ride in a cell driving car because they don't trust the technology and they're not willing to give up the control to a machine in a life or death situation." }, { "end": 880, "start": 862, "text": " Okay, the similar results hold for surgical robots as well. So the the burden is on us technology development and researchers to work to to learn and develop the methods to earn the trust of the users." }, { "end": 892, "start": 880, "text": " The second this techniques bring really new insights. This is the first time that we have thousands of the reinforcement learning algorithms and their performance." }, { "end": 909, "start": 892, "text": " And there are some surprising observations. For example, there are only less than dozen different performance values, which means that a lot of algorithms even though their loss functions look very differently in practice, they perform the same." }, { "end": 921, "start": 909, "text": " And I think we have a very good explanation to know why, but that's the observation that a lot to see kind of community try to explain that." }, { "end": 926, "start": 921, "text": " Like people are running like as you mentioned earlier, people are coming up with new oral algorithms every day." }, { "end": 936, "start": 926, "text": " How long do you think before you know a big chunk of these are going to be produced in an automated way, something similar to this?" }, { "end": 954, "start": 936, "text": " I think that should be a good focus because all of our energy mental energy is limited. So automating the pieces that can be automated focusing our energy on the design, the elements and interpretation and so on is I guess better use of our time." 
}, { "end": 970, "start": 954, "text": " So I'm hoping that these techniques that we find the way to make them more broadly available. And that means both computationally and sharing the results and sharing the data sets and all the nine years that we need to go about, but then move the field in that direction." }, { "end": 977, "start": 970, "text": " I mean, it seems like it would be quite a challenge to find the sweet spot when you're designing this type of search space." }, { "end": 990, "start": 977, "text": " Like you could imagine designing the space to be easier to cover and smaller but less expressive and then you don't find certain types of algorithms or maybe it's more expressive and then it's massive, massive and then it's harder to interpret it." }, { "end": 1005, "start": 990, "text": " It's hard to actually discover those those good algorithms in that large space. So is it, do you see it that way or how do you think about designing the types of search spaces that you need for this and does it involve a lot of trial and error or how does that process go?" }, { "end": 1020, "start": 1005, "text": " So in general, this is my personal research approach. I think and goes both for designing simulators and designing these spaces is that we should aim to find the smallest or the simplest space that does the job." }, { "end": 1037, "start": 1020, "text": " So it makes sense to start small and expand it start with with a vicious end goal, but start small and get some results and expand the smaller search space allows for faster iterations and that kind of helps can improve that process." }, { "end": 1058, "start": 1037, "text": " But it does require what's a trial and error, the and hopefully I think one thing that we need to do in the future by kind of having repeated that experiments across number of applications is to understand or better trade offs of different search elements and then go move towards having the best practice guidelines." }, { "end": 1067, "start": 1058, "text": " But for now in this early state, we're still developing our own intuition over what works and what doesn't work." }, { "end": 1077, "start": 1067, "text": " Okay, so let's move on to adversarial environment generation for learning to navigate the web. That was by Goor at all with with yourself as a co-author." }, { "end": 1092, "start": 1077, "text": " And let me just say, you know, looking over this paper, some days I definitely feel like some websites are generated by an adversary AI and I feel like I have about an 80% success rate, you know, trying to use them and I feel like I could use some agents to help me so I can relate right away." }, { "end": 1097, "start": 1092, "text": " But could you could you give us the general the actual idea of this paper?" }, { "end": 1114, "start": 1097, "text": " Yeah, yeah, I'm very excited about this direction. The idea is simple. So consider things we do online purchasing airline tickets on a commercial airline, change password, logging to number of different websites order food and so on." }, { "end": 1127, "start": 1114, "text": " We generally have no major problems adapting to a new task, for example, you know, I want to purchase your ticket on a new airline or hey, let's go buy movie tickets." }, { "end": 1136, "start": 1127, "text": " Or oftentimes don't have issues dealing with a website redesigns. So why is that?" }, { "end": 1149, "start": 1136, "text": " Can AI and reinforcement learning do that? 
And why is it reasonable to expect to generalize to say movie tickets and not spaghetti making?" }, { "end": 1153, "start": 1149, "text": " Right. So this is some fundamental questions here." }, { "end": 1166, "start": 1153, "text": " So underneath all of these tasks is a combination of simple manipulation skills, like enter the correct information in a text field or select the date and so on." }, { "end": 1178, "start": 1166, "text": " And navigation, which basically tell you let's move to the next room by hitting next button or submit button, button, not get lost in a way by subscribing to a newsletter or so on." }, { "end": 1190, "start": 1178, "text": " And this kind of space is what we refer to as compositional tasks. Those are the tasks that consists of set of basic manipulation skills that are connected together into dependency graph." }, { "end": 1203, "start": 1190, "text": " We kind of need to complete number of these manipulation tasks before you can proceed to the next phase and so on. And we need to learn to navigate by completing those manipulation tasks." }, { "end": 1218, "start": 1203, "text": " So at this point, we can start talking about the family of related tasks and the reason why it makes sense to be able to generalize between same movie tickets and passwords and not the cooking spaghetti." }, { "end": 1233, "start": 1218, "text": " So in this in the most recent work, we propose actually a formal framework for doing this for petri nets and just as a sneak because that paper is not out yet. The space of learnable task is huge." }, { "end": 1253, "start": 1233, "text": " So we just 45 skills like these basic skills that creates a task space of 10 to the power 24 different tasks that are solvable with this skill set. And if we kind of do 35 hundred skill set that creates a task space of 10 to the power 31 that's huge." }, { "end": 1265, "start": 1253, "text": " So in this line of work, we aim to train a single reinforcement learning agent that can complete all of these compositional tasks without additional training." }, { "end": 1282, "start": 1265, "text": " And very excited about this line of work because it would allow us to both have formal framework for reasoning what is learnable and what's not learnable and also enable us to create agents that quite qualitatively learn more difficult tasks that are seeded on with few basic behaviors." }, { "end": 1290, "start": 1282, "text": " Cool. OK. And then as a note to listeners here, we featured co author of this paper Natasha jakes on our very first episode of talk our role." }, { "end": 1303, "start": 1290, "text": " And I see that this work also sites and builds upon paired from the paper emergent complexity and zero shot shot transfer via unsupervised environment design. That was by Michael Dennis at all also with the jakes as a co author." }, { "end": 1310, "start": 1303, "text": " And we were lucky enough to have Michael Dennis on the show not too long ago to talk about paired that was back in January." }, { "end": 1315, "start": 1310, "text": " So before we get into this, can you remind us of the basic idea of paired." }, { "end": 1320, "start": 1315, "text": " Sure. And by the way, both Natasha and Michael are just amazing." }, { "end": 1332, "start": 1320, "text": " The basic idea behind pair paired is that for many tasks, it helps to guide the agent with the curriculum and hand designing curriculum is difficult." 
}, { "end": 1347, "start": 1332, "text": " So the idea is to pose the curriculum learning as a multi agent game where we have one agent adversary that creates difficult challenges and then agent that it's learning how to solve these challenges." }, { "end": 1359, "start": 1347, "text": " And the adversary is creating the most difficult environment you can and then the agent does the best it can now to be able to provide the adversary with the learning signal for itself." }, { "end": 1375, "start": 1359, "text": " The regret which measures in late terms, the learning potential of the agent, the paper proposes adding an additional agent and estimate the regret is a difference in performance of the two that are training." }, { "end": 1388, "start": 1375, "text": " So that way the adversary can make things more difficult when it observes that there is a lot more for the agent to learn and keep experimenting with different environment setups if the agents are performing the same observing zero regret." }, { "end": 1397, "start": 1388, "text": " So that's what's the just of the paper. So super elegant formulation here and I love that paper as soon as I saw the poster, I'm like, this is beautiful." }, { "end": 1405, "start": 1397, "text": " So but here we're going further you introduced flexible paired and be paired. Can you can you explain flexible paired and be paired." }, { "end": 1411, "start": 1405, "text": " Yeah, I love their paper was not author on this but very elegant and I love it." }, { "end": 1419, "start": 1411, "text": " The so yeah, there was the interesting journey with kind of extensions. Remember that we're interested in a compositional tasks." }, { "end": 1428, "start": 1419, "text": " So the topology of this task is like a serious of the task in the the original paper, paired paper did." }, { "end": 1436, "start": 1428, "text": " They are kind of set of connected rooms where kind of agents need to complete smaller challenges along the way." }, { "end": 1447, "start": 1436, "text": " And because the tasks are orders of magnitude harder, regret is estimated in a paired paper is often zero." }, { "end": 1454, "start": 1447, "text": " And that doesn't kind of lead us everywhere. Second upon the further analysis and this was not obvious." }, { "end": 1466, "start": 1454, "text": " So what we prefer from the get go is we realized that in the original paper, paper, the adversary is learning to create solvable environments." }, { "end": 1472, "start": 1466, "text": " And we don't have problem in this context because all of all of our tasks are solvable by design." }, { "end": 1476, "start": 1472, "text": " They are very complicated, but they are solvable." }, { "end": 1485, "start": 1476, "text": " And in the original paper, many of these environments are not solvable in the adversaries learning to create feasible environments." }, { "end": 1501, "start": 1485, "text": " So the to solve the compositional task with this learn curriculum, we created an adversarial STM that creates up to 10 page long websites and places up to 100 elements on each page of the agency to do." }, { "end": 1507, "start": 1501, "text": " And also equip the LSTM with an explicit control of the design budget." }, { "end": 1520, "start": 1507, "text": " Basically LSTM decides how many design elements and pages are appropriate to create an output given the observed competency of the agent that it's training." }, { "end": 1532, "start": 1520, "text": " And that that's what we call the budgeted pair or be paired. 
And yeah, so that that to that, we then added the additional to do that." }, { "end": 1542, "start": 1532, "text": " We added additional loss component, which encourages the LSTM to increase the difficulty when the agents are doing well and decrease it when they're struggling." }, { "end": 1553, "start": 1542, "text": " And the regret loss from the original pair, which kind of modified is used for fine control over selection of individual skills and design elements to place on the page." }, { "end": 1564, "start": 1553, "text": " And then the second part is that we made the regret estimation bit more generic, extending to population of agents in the original periods, just two agents in managed fixed." }, { "end": 1572, "start": 1564, "text": " In our case, we compute regret is difference between the best performing agent in a group and the average." }, { "end": 1575, "start": 1572, "text": " And that train that makes training a bit more stable." }, { "end": 1580, "start": 1575, "text": " So there's no there's no longer an antagonist and protagonist. That's the idea." }, { "end": 1589, "start": 1580, "text": " And the so with these two modifications were able to train those generic navigation policies for basically any website and so on." }, { "end": 1595, "start": 1589, "text": " And we observed that the complexity of the learn tasks, a cellular increases with a prolonged training." }, { "end": 1610, "start": 1595, "text": " Okay. If we look back at the different methods we talked about today, the common theme is they seem to involve a fair bit of compute, especially with this outer metal learning or evolutionary loops around RL, RL itself being relatively expensive and compute." }, { "end": 1619, "start": 1610, "text": " So I wonder how you think about the cost benefit when using whether using that compute is worthwhile in each case." }, { "end": 1628, "start": 1619, "text": " Like is it obvious where the line is of worthwhile and not or does it usually involve experiments to decide where the line is? How do you think about that?" }, { "end": 1636, "start": 1628, "text": " It goes both ways. So in my mind, the investing in compute is a design choice." }, { "end": 1652, "start": 1636, "text": " And I tend to ask two questions. First is the task tedious and repetitive? Are we learning heavy spent months? It's not yours tuning the rewards in various problems across different applications and whatnot." }, { "end": 1681, "start": 1652, "text": " And then the second question is, is the solution with the solution be reusable? Does it make sense to invest in compute when the result of that computation is something that can be used over and over again? For example, when we learn the policies with the learn rewards, we can use these policies for higher order planning and multi agent systems for the Vendevoot task, but then each agent kind of controls itself." }, { "end": 1693, "start": 1681, "text": " And they're doing the joint planning and so on. So that makes sense to do. So the answer to any of these two question is no, then this investment in the compute is probably not justified." }, { "end": 1703, "start": 1693, "text": " So it makes sense always to start small and see if there is promising kind of reusability or the same engineering saving time and then invest more into computation." }, { "end": 1716, "start": 1703, "text": " And do you start with like a fixed compute budget in mind and see what you can do with that? 
Or do you sometimes fix a problem and then try to estimate the budget you need? Compute budget you need? How does that work? How do you think about that?" }, { "end": 1735, "start": 1716, "text": " Yeah, so it's more that we start with a fixed budget because even in Google the budget is fixed and we actually have fixed a lot allowance of the budget to begin with and go from from there." }, { "end": 1749, "start": 1735, "text": " But if we've done something similar before, we have good sense of the need for the computational budget and have a good estimate what is needed." }, { "end": 1775, "start": 1749, "text": " Then we can ask for more and then kind of carve out the problem. So for example, now with the last functions, our first dip in that space was on the value based functions that proved to work." }, { "end": 1792, "start": 1775, "text": " The it makes sense to kind of rethink and what might be better or larger computation of a budget in that space given that we're producing the database of the algorithms and so on that the community can use and build upon." }, { "end": 1809, "start": 1792, "text": " And just to be clear, how do you like to define meta or all it seems like it could mean a few different things. Sure. So in community meta are well comes in basically two main flavors. One is learning to learn." }, { "end": 1824, "start": 1809, "text": " And the other method is methods like my model, meta learning and so on in which case we learn a generic policy and then we learn specialization policies based on the additional data and so on." }, { "end": 1840, "start": 1824, "text": " My research focus is more geared towards generic learning methods and defining meta are all is a trainer or learning agent that aids learning of the RL agents." }, { "end": 1868, "start": 1840, "text": " So it's a multi agent system and the meta trainer is training the RL agent now for the meta trainer. It can be either evolution, it can be reinforcement learning can be supervised and what not in the methods there differ, but the the paradigms that we have a RL agent under training and the meta trainer that aids the training of of the meta trainer of the RL agent." }, { "end": 1877, "start": 1868, "text": " Okay, yeah, thanks for clarifying those two two flavors were getting very mixed up, I think, in my mind. And so I was thinking this is not like mammal, but that's not metaro." }, { "end": 1890, "start": 1877, "text": " Yeah, so the other term that I tend to use is learning to learn and I think that's a little bit more clear because it is focused on all the particular flavor of the meta learning." }, { "end": 1898, "start": 1890, "text": " We could describe mammal that way too, right? Like is learning to learn very quickly how to compensate for different service or something." }, { "end": 1916, "start": 1898, "text": " Okay, so, okay, so we'll go with that. So, so following, following that seems like we're moving up the stack in a sense. Like, you know, if we look back a few decades ago, deep RL itself, we would say, well, it's the compute is too too too much for deep RL. It would have been prohibitive." }, { "end": 1921, "start": 1916, "text": " And maybe a few years ago, we would say meta RL would have been prohibitive in terms of compute." }, { "end": 1932, "start": 1921, "text": " And but do you see a for time for see like sometime in the future where we could go even up another layer and talk about meta RL being being feasible." 
}, { "end": 1945, "start": 1932, "text": " For example, maybe to explore various types of search spaces designs like we mentioned earlier. Could this process continue on and on or do you see that there would be diminishing returns and we would we would just stop at some point." }, { "end": 1950, "start": 1945, "text": " I love this question. I mean, to zestically nodding here on the other side." }, { "end": 1957, "start": 1950, "text": " And in fact, there's something that that would be very curious to know how to do." }, { "end": 1968, "start": 1957, "text": " Interesting questions to ask here would be when and how do we accept and learn new skills under what circumstances do we forget once." }, { "end": 1978, "start": 1968, "text": " Some other interesting aspects would be integration of different various methods and learning more about interdependency between them." }, { "end": 1988, "start": 1978, "text": " So for example, imagine learning rewards neural network architecture search and loss function and the curriculum in at the same time in the same swoop." }, { "end": 2007, "start": 1988, "text": " That would be cool. I think I don't think that we are close to being doing doing that. But we're kind of just dipping our toes in this space and being able to kind of focus more on the learning the dependencies in this complex learning system." }, { "end": 2018, "start": 2007, "text": " So how what is the role of the neural network architecture, right? What is the role of the reward and so on and how they interplay in every very exciting." }, { "end": 2029, "start": 2018, "text": " You described your work and we were talking before the episode in terms of learning to learning to learn, learning to teach and I don't know if you use the phrase, but learning to reward." }, { "end": 2036, "start": 2029, "text": " And I guess when you put them that way, they sound very relatable and almost like relatable to humans. Is that how you think about them?" }, { "end": 2050, "start": 2036, "text": " Yeah, yeah. The they do come as a part of the cognitive process and I did mention that I'm broadly interested in learning something about cognitive science in the process and drawing the exploration to it." }, { "end": 2054, "start": 2050, "text": " So each one of these maps to a different cognitive process." }, { "end": 2071, "start": 2054, "text": " We have some recent work on the joint attention and we showed it was also with Natasha Jake's the we showed that in the multi agent systems that when they're sharing attention layers and we reward sharing the shared attention." }, { "end": 2090, "start": 2071, "text": " Actually improve performance on the cooperative multi agent tasks, which is very cool result and kind of very interested because we know from cognitive science and psychology biology that happens in in humans and not in other say prime might seem not." }, { "end": 2106, "start": 2090, "text": " Okay, and then so how do you think about the how do you think about how this meta or our relates to see humans are animals like do you think is there some learning to learn that happens in with humans are animals." 
}, { "end": 2132, "start": 2106, "text": " So that's a very good question and I'm really glad you ask it so there is a number of cognitive processes and I've focused on the evolution reinforcement cell supervision and curriculum mostly so in biological systems, the evolution determines the fixed hyper parameters of the system that is tailored precisely for the task in environments, then the system will be performing." }, { "end": 2140, "start": 2132, "text": " So our height and all the art tailor to the environment that we're going to operate and so on." }, { "end": 2155, "start": 2140, "text": " The and that seems to translate directly to AI in terms of the hyper parameters for the reinforcement learning, normal network architecture search is a hyper parameter how many layers and the rewards as well." }, { "end": 2168, "start": 2155, "text": " So text biological systems learn skills very basic ones at first and we do that through interaction with the world, banking on the objects right." }, { "end": 2177, "start": 2168, "text": " The testing that the gravity and so on and this loosely maps to the reinforcement and imitation learning." }, { "end": 2194, "start": 2177, "text": " So next is the biological system practices we practice these skills and we observe not only our behaviors, but those of others and we learn the causality and learn to predict the future what future my hold." }, { "end": 2222, "start": 2194, "text": " The we call this intuition, but it's really prior experience compressed into a predictive model that's trained with the self supervision for example having a wall in head of you you're not going to go straight ahead even now you haven't experienced that particular wall because you've seen examples and experience examples before we've kind of had some serious of papers in the self supervision." }, { "end": 2232, "start": 2222, "text": " So we can learn pretty much anything that that we're curious about about both our policies without knowing actually policies and our teammates as well." }, { "end": 2241, "start": 2232, "text": " So then that predictive model can be higher or lower quality, but it's never perfect just like our skills and never perfect." }, { "end": 2261, "start": 2241, "text": " But we can still use them in combination with the basic skills that we have to imagine scenarios and we can start learning more complex skills that require more planning such as navigation over long distances or combinatorial and compositional task like we did in navigation and manipulation." }, { "end": 2276, "start": 2261, "text": " So in biological systems, the learning process and things that we learn changes over time through the curriculum and we saw the evidence also through paired like papers in the literature, both my working kind of external literature." }, { "end": 2294, "start": 2276, "text": " The the biological inspired ideas tend to transfer to some extent. So I think the future is going to be very interesting to see how hierarchical model plays based multi agent RL come together to learn increasingly complex tasks." }, { "end": 2308, "start": 2294, "text": " We treat them now as a separate subfield of the RL, but they seem to be very highly related to each other and we don't quite know how to put them together this single system." }, { "end": 2315, "start": 2308, "text": " I mean, it seems like evolution is doing our matter or our right it's it's building ability to learn." 
}, { "end": 2328, "start": 2315, "text": " So is it is it purely evolution because I think I wonder if you could say I wonder if you could say that in our lifetime we're doing something like mammal because as we get more skills than it's easier for us to get it." }, { "end": 2335, "start": 2328, "text": " You know when we learn our fifth sport, it's actually a lot easier because we already have some in the same way as a mammal or something." }, { "end": 2345, "start": 2335, "text": " Yeah, that that's a very good point and this is not meant to be completely exhaustive listed will merely the list of the work I personally worked on." }, { "end": 2356, "start": 2345, "text": " There is a lot more there to be said and one way to kind of think of it is at least kind of the tax on a way that I'm starting to observe is what are the fixed hyperparameters." }, { "end": 2367, "start": 2356, "text": " So some things are fixed for life more more or less in the life of the agent is questionable and the other things kind of change over time." }, { "end": 2378, "start": 2367, "text": " The so curriculum based methods and the paired business where we're having kind of teacher and student or adversary and what not are more learned linked towards adaptation." }, { "end": 2386, "start": 2378, "text": " I think I would put mammal into that space as well, but how how to put these two together at the same time." }, { "end": 2388, "start": 2386, "text": " I don't idea how to do." }, { "end": 2391, "start": 2388, "text": " That's what makes it an interesting field, I suppose." }, { "end": 2392, "start": 2391, "text": " Yeah, yeah, yeah." }, { "end": 2398, "start": 2392, "text": " Awesome. So do you plan to continue focusing on on these themes going forward with your work?" }, { "end": 2405, "start": 2398, "text": " Yeah, yeah. I think it's super exciting and very rich area to explore the." }, { "end": 2415, "start": 2405, "text": " Yeah, and I mean, consider this being able to train the RL agents and making them work is very human intensive process right now." }, { "end": 2419, "start": 2415, "text": " And yeah, that means the process of training." }, { "end": 2428, "start": 2419, "text": " As we talked about before, that process is trading work and putting our cognitive cycles into that to make the sequential decisions to solve them." }, { "end": 2436, "start": 2428, "text": " So if we offload that and solve the RL with RL or other learning methods, the." }, { "end": 2453, "start": 2436, "text": " I mean, we can open ourselves to more opportunities and ultimately that automation will not only lead to better policies, but even more to deeper insights about how and why and why and when our networks." }, { "end": 2459, "start": 2453, "text": " And I will free us from the burden of picking experiments and able to focus on kind of what matters most." }, { "end": 2466, "start": 2459, "text": " So for the listeners, Dr. Fas and I crossed paths at ICLR conference in the Gather Town video chat." }, { "end": 2471, "start": 2466, "text": " And I also attended your your mentoring sessions on on research careers." }, { "end": 2477, "start": 2471, "text": " So I can definitely recommend your mentoring advice and is mentoring a significant focus for you." }, { "end": 2487, "start": 2477, "text": " Thanks, thanks so much, Robin. Yeah, actually mentorship is very important to me and I find this personally gratifying." }, { "end": 2493, "start": 2487, "text": " I've learned a lot from each mentor in relationship that I have." 
}, { "end": 2498, "start": 2493, "text": " And I highly recommend anybody to do it." }, { "end": 2508, "start": 2498, "text": " I'm not a mentor to the end to quote my mentor, my material, MUSC, no matter where you incur your you can always mentor someone." }, { "end": 2513, "start": 2508, "text": " Even high school or students can mentor me all school students and really no excuse." }, { "end": 2515, "start": 2513, "text": " I love that." }, { "end": 2527, "start": 2515, "text": " Okay, so and then can you say anything about the what you see as the ideal working relationship between a senior researcher and a junior researcher like, like for example, what should each each contribute to the work?" }, { "end": 2536, "start": 2527, "text": " Any working relationship is a two way process and comes from combining strengths and weaknesses of each party." }, { "end": 2545, "start": 2536, "text": " In any successful relationship, one party is contributing something and learning something else and go same for the other party as well." }, { "end": 2548, "start": 2545, "text": " So the so that that kind of goes." }, { "end": 2553, "start": 2548, "text": " Same thing for the junior slash senior researcher in a field." }, { "end": 2559, "start": 2553, "text": " They are complimentary roles and each party kind of has different role to fulfill." }, { "end": 2566, "start": 2559, "text": " So from the perspective of the senior researcher research career is filled with lots of ups and downs." }, { "end": 2569, "start": 2566, "text": " And it is really not for faint of heart." }, { "end": 2574, "start": 2569, "text": " The both wins and losses are highly personal personal." }, { "end": 2582, "start": 2574, "text": " And when we're in the lows when things are not working and we don't know what the next question is and when we're going to have a solution." }, { "end": 2585, "start": 2582, "text": " The last." }, { "end": 2588, "start": 2585, "text": " Periods can feel indefinite." }, { "end": 2596, "start": 2588, "text": " So senior researcher brings experience and can kind of fill in several roles to that is expect." }, { "end": 2610, "start": 2596, "text": " First, the cyclical nature of the process, the senior researcher understands that and is able to aid with the emotional journey and I really don't want to underestimate emotional journey." }, { "end": 2613, "start": 2610, "text": " And we talk about that, but we should." }, { "end": 2617, "start": 2613, "text": " And bring the encouragement at the times of the lows." }, { "end": 2624, "start": 2617, "text": " Again, to quote one of my prior professors, say research is lots of feels lots like banging heads on the wall." }, { "end": 2631, "start": 2624, "text": " And after having enough experience, you know that eventually wall is going to give up." }, { "end": 2634, "start": 2631, "text": " So they can abuse and sense of confidence." }, { "end": 2641, "start": 2634, "text": " So second role, the senior researcher has developed an intuition and taste for good problems." }, { "end": 2651, "start": 2641, "text": " So he or she needs to steer the junior person towards asking the right question and kind of really equipping with kind of developing the taste in research." }, { "end": 2659, "start": 2651, "text": " The and then the third senior researcher should be in a better position to judge the full potential of the research." }, { "end": 2664, "start": 2659, "text": " Because I've seen several times that people get excited like, OK, it's been low." 
}, { "end": 2665, "start": 2664, "text": " Right, it's tiring." }, { "end": 2668, "start": 2665, "text": " The minimum things start to work." }, { "end": 2670, "start": 2668, "text": " It's like, OK, let's go publish it and so on." }, { "end": 2676, "start": 2670, "text": " Like, no, maybe we should just kind of hang on a little bit more because there is a lot more that can be done in a short time." }, { "end": 2679, "start": 2676, "text": " If we cannot give this a bit deeper." }, { "end": 2688, "start": 2679, "text": " So and then finally, the senior researcher can help junior person with visibility and connections and reflecting their strengths to them." }, { "end": 2702, "start": 2688, "text": " And again, with the strengths, it's very easy for us for most of us is we're trained to do that to pick to to Nick pick and find the deficiency and we're very good to find in the efficiency with ourselves." }, { "end": 2705, "start": 2702, "text": " And it's not obvious to us what our strengths are." }, { "end": 2708, "start": 2705, "text": " So the senior people can help with that." }, { "end": 2719, "start": 2708, "text": " And then the junior researchers bring the creativity and more focused time on fewer projects than a senior senior people tend to be kind of spread thin." }, { "end": 2735, "start": 2719, "text": " The and have more details and depth on a particular technical areas. So keeping the senior researcher informed about the findings and the process that's kind of the giving back part that can form very productive partnership." }, { "end": 2745, "start": 2735, "text": " That was super interesting. Okay. So besides what we talked about today as there are other things going on in our all these days that you find quite interesting lots of things." }, { "end": 2754, "start": 2745, "text": " So besides breaking the barrier of skill learning and generalization over these families of the tasks and the environments." }, { "end": 2767, "start": 2754, "text": " And treating RL is decision making problem solving it. I'm really excited about RL coming together as systems for doing social good." }, { "end": 2777, "start": 2767, "text": " There is tremendous opportunity to use autonomous agents or all systems called them however you want to automate repetitive tasks that we do." }, { "end": 2794, "start": 2777, "text": " So not that's not only just convenient and freezes, but also opens a huge opportunity with people with the accessibility needs or maybe other people that are not that comfortable with the technology who are unable to complete those tasks on their own." }, { "end": 2799, "start": 2794, "text": " And that really makes a difference for for for the people." }, { "end": 2808, "start": 2799, "text": " We here will be that we think about RL agents and RL systems in terms of the interfaces that are compatible with human users." }, { "end": 2813, "start": 2808, "text": " So having vision and natural language." }, { "end": 2833, "start": 2813, "text": " And so on is going to be very key important addition to the interpretability and so on. So I understand some other word I'm very excited about right now like chip design from Anna and Azalia that came out earlier this week and the balloon navigation from our bell." }, { "end": 2844, "start": 2833, "text": " Valemar seeing the RL work in these really smartly real world problems is super super exciting. So in terms of the particular methods together." }, { "end": 2848, "start": 2844, "text": " There is a version of this population based RL." 
}, { "end": 2860, "start": 2848, "text": " Social RL where we're kind of training groups of the agents and agents are learning from agents at Natasha Jakes is spearheading an offline RL." }, { "end": 2865, "start": 2860, "text": " I'm very excited about that." }, { "end": 2869, "start": 2865, "text": " As treating the learning as a social construct." }, { "end": 2877, "start": 2869, "text": " The offline RL methods, similar focus on learning from limited interactions with the world." }, { "end": 2885, "start": 2877, "text": " And this has been one of the kind of key bottlenecks of the RL when it gets to kind of bringing it into real world." }, { "end": 2894, "start": 2885, "text": " So lastly, I think very soon we'll need to look into the fairness and interpretability of decision making." }, { "end": 2903, "start": 2894, "text": " And that's something that we had the luxury of not doing so far." }, { "end": 2916, "start": 2903, "text": " The in the terms of the RL, but as we're moving towards really real world problems is going to be incredibly important. And then the second thing is develop theoretical foundations around generalization in the RL." }, { "end": 2929, "start": 2916, "text": " The field is moving towards generalization across environments, tasks and systems and the good old market decision process or palm MVP, the abstraction is not very useful as it is." }, { "end": 2934, "start": 2929, "text": " So developing new theoretical tools will be very exciting in this space." }, { "end": 2937, "start": 2934, "text": " Wow. Okay. Can you can you say more about that?" }, { "end": 2941, "start": 2937, "text": " You were saying how the MVP and Palm D.P. abstractions are limiting in some ways." }, { "end": 2948, "start": 2941, "text": " So what can you talk about how they how they're not useful or limiting and what alternatives could we have there?" }, { "end": 2955, "start": 2948, "text": " Sure. So palm MVP or let's say MVP tells you that your RL problem is state action." }, { "end": 2961, "start": 2955, "text": " Hidden dynamics hidden reward that we can observe and gamma function right." }, { "end": 2967, "start": 2961, "text": " And yet when we're solving any problem, the first things we ask is like, OK, what is the environment? What is a task?" }, { "end": 2975, "start": 2967, "text": " What is the system? We're going to run and so on. None of this really exists in in in the palm MVP." }, { "end": 2986, "start": 2975, "text": " Basically what we're doing is and this is like the very undiscovered area that we're just kind of empirically doing is we're mapping the real world and the real problems into palm MVP." }, { "end": 2994, "start": 2986, "text": " Right. Second question about that is that palm MVP says, OK, here's a fixed state, here's your fixed actions between and so on." }, { "end": 3009, "start": 2994, "text": " What we really ask when we're talking about generalization is how this agent can solve a number of tasks in M number of environments, right, which at this point then is a family of palm MP." }, { "end": 3016, "start": 3009, "text": " It can be a chain in the compositional in the compositional task formulation." }, { "end": 3029, "start": 3016, "text": " It is a chain of M DPs related chain and so on. We don't have tools to do that. There is absolutely no theoretical tools or definition of the problem that can kind of eat us in that." 
}, { "end": 3037, "start": 3029, "text": " So we end up in this very kind of highly empirical environment where we're defining environments and we're defining benchmarks and what not." }, { "end": 3046, "start": 3037, "text": " But really the theory works on the palm MVP in the palm and the PC without looking at the structure behind environments and tasks and so on." }, { "end": 3054, "start": 3046, "text": " And by tasks, I mean, you don't just mean rewards, right, the reward to what we're using to get the task done." }, { "end": 3058, "start": 3054, "text": " Exactly. Right. So we talked about learning the rewards." }, { "end": 3074, "start": 3058, "text": " Our task objectives and if you can all read the theory of reinforcement learning, they'll tell you, I think in Bartos book is that the reward should be your task objective, which means we should not be even talking about interest or keywords." }, { "end": 3085, "start": 3074, "text": " Okay, fine. Now we do interest or keywords and we put that in palm MP, but in what it really is is that we're creating a proxy palm MP for the one that we actually want to solve." }, { "end": 3090, "start": 3085, "text": " Right. So you see what is going." }, { "end": 3099, "start": 3090, "text": " So ideally, you would be able to more directly state what it is we need and in a framework that has some theoretical basis to get us there." }, { "end": 3105, "start": 3099, "text": " Is that kind of what you're saying? Yes, I don't know how to do it, but I think that we can and should do better." }, { "end": 3111, "start": 3105, "text": " Sounds like some of your work is maybe building the bridge there. If the matter or else taking us there." }, { "end": 3123, "start": 3111, "text": " I am hoping that matter, I'm hoping that the compositional in the compositional paper position task paper that is going to come out and archive soon." }, { "end": 3127, "start": 3123, "text": " We put foundations of the compositional tasks through petri nets." }, { "end": 3131, "start": 3127, "text": " And that allows the actually can see that apology of the task." }, { "end": 3142, "start": 3131, "text": " And we actually propose that from that the task is a graph, which is a petri net that can kind of control the state and directly out of that graph." }, { "end": 3149, "start": 3142, "text": " We can infer the palm MPs that are that we can use them for solving RL agents." }, { "end": 3156, "start": 3149, "text": " And we can define the family of the related graphs of the petri nets that describe this space." }, { "end": 3161, "start": 3156, "text": " So at least that kind of gives somewhat of the of the framework you're just scratching the surface." }, { "end": 3167, "start": 3161, "text": " I don't think this necessarily the right way to think of it or something, but it is a way." }, { "end": 3175, "start": 3167, "text": " Very cool. Okay, so is there anything else I should have I should have asked you about today or anything that you want to share with our listeners today?" }, { "end": 3179, "start": 3175, "text": " It is very exciting time to be in reinforcement learning." }, { "end": 3195, "start": 3179, "text": " I think reinforcement learning is really the cost of breakthrough that requires a really holistic and multidisciplinary approach between research tools and frameworks and challenging applications really need to come together to try to progress." }, { "end": 3199, "start": 3195, "text": " And it will happen soon. It's kind of happening now." 
}, { "end": 3212, "start": 3199, "text": " So I personally feel very honored and privileged to be part of this journey at this point of time and super grateful to all of my collaborators for kind of joining the ride and sharing the sharing the journey." }, { "end": 3216, "start": 3212, "text": " Fantastic. Any any suggestions for the talk or I'll show here?" }, { "end": 3222, "start": 3216, "text": " This looks great and thank you so much for having me. It has been great conversation." }, { "end": 3228, "start": 3222, "text": " It's been so great to have you, Dr. Alexandra Faust. Thanks for your time and your insight and thanks for sharing with talk or I'll." }, { "end": 3230, "start": 3228, "text": " Thank you so much." }, { "end": 3260, "start": 3258, "text": " Thank you for joining us today." } ]
Sam Ritter
Sam Ritter of DeepMind on Neuroscience and RL, Episodic Memory, Meta-RL, Synthetic Returns, the MERLIN agent, decoding brain activation, and more!
https://media.transistor…4a6.mp3?src=site
This is TalkRL Podcast. All reinforcement learning, all the time. Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan. Dr. Sam Ritter is a research scientist at DeepMind. Thanks for joining us today, Sam. Thanks so much. I'm really happy to be here. I'm a big fan of the show, so I'm especially excited to chat with you about episodic memory and deep RL in neuroscience. Great. Okay, so you've already started, but how do you describe your area of research? Yeah, so I guess for the last almost four years now, I've been really focused on deep reinforcement learning, and especially the use of a particular data structure in deep RL, and that data structure is called episodic memory, which sounds, I don't know, really specific or fancy, but really it just means that your deep RL agent has an agent state that can grow to arbitrary size, so it's kind of the opposite of having a fixed-size agent state. In the course of working with that data structure in deep RL, I've gotten the chance to do some agent performance development work, as well as some neuroscience work, so using these deep RL systems as models of human cognition and what might be happening in the brain. So let's start with meta-learning. In your dissertation, you focused on episodic meta-RL. Is that right? Not just episodic RL? And can you remind us how meta-RL is defined? Episodic RL, you know, I kind of think of as starting with this at this point fairly classic paper by Máté Lengyel and Peter Dayan, where they basically said, look, we can do value function learning with a sort of kernel regression model rather than a parametric system. So in that setup, we're going to have basically a storage of all the past states that we've seen, and with each one of those past states we're going to record something about the return, the rewards that we saw after those states. And then when we go to estimate the value of some new state, we'll do a weighted sum over those past states in some embedding space, using a kernel that gives us basically a scalar for each past state that estimates how similar it is to the current one, and we'll just do a weighted sum over the returns associated with them in order to estimate the current value function. So it's a very specific kind of algorithm, and recently the term episodic RL has been broadened a little bit in some work by Nathaniel Daw and colleagues, but that's kind of the most canonical form of it, really. In contrast, episodic meta-RL comes out of this meta-reinforcement learning work, which really started with, or at the very least was popularized by, a couple of papers in 2016, one by Jane Wang and another one by Duan and colleagues. And in those papers, they demonstrated that you could learn, by reinforcement learning, how to do reinforcement learning. So specifically, they would just train an A3C-style agent, so basically an RL agent with a recurrent memory, in those cases an LSTM, and you could train those agents on tasks that required RL to be done sort of inside the task. So an example would be a bandit problem. So in those papers, they trained these LSTM-based agents on these, say, 100-step bandit problems.
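For readers who want to see the shape of that kernel-regression style episodic value estimate, here is a minimal sketch in Python. It is only an illustration of the idea as described, not the Lengyel and Dayan implementation; the Gaussian kernel, the bandwidth, and the memory layout are all assumptions made for the example.

```python
import numpy as np

class EpisodicValueEstimator:
    """Non-parametric value estimate: a kernel-weighted sum of stored returns."""

    def __init__(self, bandwidth=1.0):
        self.embeddings = []  # embeddings of past states
        self.returns = []     # return observed after each stored state
        self.bandwidth = bandwidth

    def store(self, state_embedding, observed_return):
        # The memory grows without bound, i.e. an agent state of arbitrary size.
        self.embeddings.append(np.asarray(state_embedding, dtype=np.float64))
        self.returns.append(float(observed_return))

    def value(self, state_embedding):
        if not self.embeddings:
            return 0.0
        query = np.asarray(state_embedding, dtype=np.float64)
        keys = np.stack(self.embeddings)
        # Gaussian kernel: one similarity scalar per past state.
        sq_dists = np.sum((keys - query) ** 2, axis=1)
        weights = np.exp(-sq_dists / (2.0 * self.bandwidth ** 2))
        weights /= weights.sum() + 1e-8
        # Weighted sum of the returns associated with similar past states.
        return float(np.dot(weights, np.asarray(self.returns)))
```

That is the non-parametric flavor of episodic RL; the meta-RL agents discussed next instead learn the learning algorithm itself inside a recurrent network.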
And what they observed was that these agents could actually learn to perform what appears to be close to Bayes-optimal exploration, and sort of learn to handle the exploration-exploitation trade-off. So it was a really kind of cool way of thinking about what goes on in these recurrent reinforcement learning systems. We can actually think of them as learning reinforcement learning algorithms that they will execute at inference time. So then episodic meta-RL basically comes out of some work that I did following on Jane Wang's meta-RL work, where we basically said, all right, but look, if we have this fixed-width-memory recurrent controller and it's capable of learning bandit algorithms, maybe we can learn more sophisticated algorithms if we provide that system with an episodic store. And so yeah, a lot of the work that you and I will talk about today, Robin, will be kind of expounding on this point, demonstrating what we can do if we add episodic memory to meta-RL. And I'll just actually point out that the sort of narrow form of episodic RL that I mentioned is the kind of thing that you could in principle learn to do by meta-RL with an episodic memory. So it does kind of all fit together. We can say, oh, by episodic meta-RL, we can learn to do episodic RL. We have the data structure and we have sort of the neural networks in place to learn to do that. So meta-RL, is that something that happens in the brain? I guess I always assumed that we evolved to do RL in the brain and maybe our meta-RL is just something we do in software. Is that true, or does that happen in neuroscience? I think that question is definitely an open one. So the hypothesis that meta-RL might happen in the brain is sort of built up in a fairly recent paper from Jane Wang and colleagues at DeepMind, from I think 2017 or 2018 when it was published. It's in Nature Neuroscience, and they basically use this LSTM A3C agent as a model of human behavior and animal behavior, and even neural recordings from animals as they carry out various sort of lab tasks. And what Jane and colleagues show in that paper is that this really simple model is basically just a recurrent working memory whose parameters, or weights or synapses, if you will, are learned to maximize reward as it's defined in these lab tasks. And by using that as a model, you can recover some really specific characteristics of human and animal behavior, and some striking features in the neural recordings that are made as they're carrying out that behavior. Do we have definitive evidence that the brain does meta-RL? I think that's always going to be hard to ascertain for certain. But I think the evidence is pretty good, and it's a model that at this point has a fair amount of recognition in the cognitive neuroscience community. I guess from my point of view, episodic RL and meta-RL are both a little bit exotic, and here you are treating these two things, the intersection of these two things, together. Can you talk about why you wanted to pair them up? Yeah, I mean, I guess it seemed like a really obvious pairing in context. So maybe I can describe that context and it'll seem obvious to you and the audience as well. So basically I was in the middle of my PhD around the time that Jane did this meta-RL work, and I thought it was really, really interesting.
I'd been working on some language stuff before, but I wanted to make a switch; I was getting really keen on reinforcement learning. So I went to my advisor, Matt Botvinick, who was the senior author on that paper, and I asked him, what's bugging you about meta-RL? What's not quite there? What are some interesting avenues? What he said was that the meta-RL agent, the one that has basically this recurrent working memory in the form of an LSTM, that agent basically behaves like a human who has hippocampal amnesia. So what is that? So the hippocampus is this structure in the mammalian brain that appears to have a large role in the storage of information for long periods of time, and specifically in what's called episodic memory, so the ability to remember specific events from the past. And what Matt was getting at there is that with these meta-RL agents, you could drop them into, say, a bandit problem, and they would be really smart at trying an arm and then making the correct inference, based on the reward they got, about what the right arm to try next would be, basically doing really smart, close to Bayes-optimal decision making. But as soon as you pull them out of that bandit problem and drop them in another one, they completely forget the solution that they learned. So throughout the course of their experience with that previous bandit problem, they pretty much figured out the task, they solved the task, and as soon as you pull them out, they completely forget it. So you can drop them back in that same bandit problem again, and just like a hippocampal amnesic would do, they have to start from scratch. They have the general knowledge about how to solve bandit problems, but they just don't remember that specific one. They don't remember the episode, if you will, so they kind of lack episodic memory. And at the same time, Greg Wayne and colleagues at DeepMind had demonstrated success using episodic memory, that is, in the definition I was using earlier, this ever-growing buffer of vectors, in deep reinforcement learning. So the Merlin paper, I don't know if it had come out yet, but we at that point knew that that agent worked well, and we knew that you could do episodic memory in deep RL, which was quite exciting at that point. It was a novel demonstration. And so my advisor Matt and I were thinking, okay, let's see if we can endow these meta-RL agents with a hippocampus. And that basically is what kick-started the work that ended up in my dissertation: it started with this goal of, let's make it so that meta-RL agents can remember the solutions that they discover using their smart meta-RL strategies. I just want to say I enjoyed your two meta-learning talks on YouTube, and I encourage the audience to check them out. We'll have links to them on the episode page. So can you tell us more about how you contrast meta-learning with some closely related terms, like transfer learning and few-shot learning? Is it closely related to those? I'd say, yeah, it feels very related to me. So I guess first, kind of a basic definition of meta-learning. Sort of intuitively, a broad definition of meta-learning is just learning how to learn. So for instance, you learn your first programming language, and it's really hard to do. You haven't learned how to browse Stack Overflow or read docs, and you don't know what data structures are, etc.
But then the second programming language you learn is a lot easier, because you've learned some background knowledge and you kind of know the strategies that work. You know little exercises you can set for yourself, and things like that that enable you to learn much faster by having learned how to do it. In machine learning, we have a more specific and narrow, and also more formal, definition of what meta-learning is. And I think it's useful to have that formal definition to contrast with the other problem settings that you pointed out, Robin. So the formal definition of meta-learning, the meta-learning setting, is that you have a learner which is faced with a series of learning tasks, so tasks where, in order to solve them, you have to learn some stuff. And each one of those learning tasks is drawn from some distribution. So basically I have this big bag of learning tasks, and for each episode of my learner's experience, I'm going to pull out a learning task and make the learner try to solve it. So with the bandit tasks in the meta-RL setting, you've got this big bag of bandit tasks, and each one has a different set of reward parameters for each arm. And you make your, in this case, meta reinforcement learner, you make that agent learn how to discover the rewarding arms, or the arm reward parameters, in any given task that it might face. So yeah, then coming to the question of how meta-learning relates to few-shot learning, transfer learning, etc., I think it depends on which setting we contrast against. So, few-shot learning: I guess I could say that meta-learning is one way of producing a learning system that can do few-shot learning. So with meta-learning, I could say, oh, I've got this few-shot learning problem; for whatever reason I need to see one or two examples each of a few, let's say, ImageNet object categories, and each example is an image. And then I have to generalize from those examples to categorize other images that might be in the same categories. One way that I could do that, provided that I had a bunch of other ImageNet categories and data examples, is that I could set up a meta-learning training distribution from which I could draw a whole bunch of these few-shot learning problems. And then I could evaluate this system on held-out few-shot learning problems, or in other words, on held-out categories. And in fact, this is what the matching networks paper does, and a whole bunch of other papers after that. And yeah, it's an exciting way to do it. Similarly with transfer: if you have the problem setting where you want your learner to learn some stuff in some environment or in some task setting, and then in another task setting make use of the same information, a primary problem there, if you're just using standard neural network training, is that your network overfits to the first task, and it's not able to make use of the commonalities between the first task and the second task. And the way that meta-learning gets around that is to say, well, okay, we're actually going to sample a lot of these tasks and train the neural network in sequence to solve them, and this prohibits it from overfitting to any particular one. And kind of similarly with domain generalization, it's the same kind of idea: you want to be able to generalize to a new domain.
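To make that formal setting concrete, here is a minimal sketch of the outer loop being described: sample a learning task from a distribution, let the learner adapt inside it, then update the learner's weights across many such tasks. The bandit sampler and the learner interface (reset_inner_state, act, observe, update_weights) are hypothetical names chosen for illustration, not code from any of the papers discussed.

```python
import random

def sample_bandit_task(num_arms=2):
    """A 'task' is a bandit with its own hidden arm reward probabilities."""
    probs = [random.random() for _ in range(num_arms)]
    return lambda arm: 1.0 if random.random() < probs[arm] else 0.0

def meta_train(learner, num_tasks=10_000, steps_per_task=100):
    # Outer loop: IID samples from the task distribution.
    for _ in range(num_tasks):
        pull = sample_bandit_task()
        learner.reset_inner_state()          # e.g. reset the LSTM state
        trajectory = []
        # Inner loop: the learner must explore and exploit within this task.
        for _ in range(steps_per_task):
            arm = learner.act()
            reward = pull(arm)
            learner.observe(arm, reward)     # feed action and reward back in
            trajectory.append((arm, reward))
        # Outer update: adjust weights (e.g. with an A3C-style update) so that,
        # across tasks, the inner behavior maximizes return.
        learner.update_weights(trajectory)
```

Few-shot classification and domain generalization fit the same template: swap the bandit sampler for a sampler over classification episodes or over domains.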
If you can sample a bunch of different domains, then you can prevent it from overfitting to any particular one, such that the key commonalities, the key generalities across them, can kind of emerge in your policy. Yeah, and continual learning, I'll just do one more of these. Continual learning, I guess I always think of that setting as one where you've kind of asserted that your agent has to experience task A for a while, and then it will experience task B, and there's no option to have the task setting or the environment give you experience from those two tasks interspersed. So it's very much non-IID. And I think that one, in contrast to the other ones we mentioned, is just a very different problem setting from meta-learning, where you're kind of insisting that we are not allowed to do this IID task sampling like we do in meta-learning. So in some ways it feels like a more challenging and more open problem than meta-learning to me. Which I guess is because our current generation of function approximators really likes IID, and really likes being able to go back, or kind of needs to go back, to sample previous tasks, and has trouble with this domain shift. Is that right? That's exactly how I think about it. Yeah, I know that people are developing lots of methods for trying to make the learning system itself, or the neural networks, less reliant on this IID experience generation. But you know, I think it's an exciting open research area. There are some replay methods that seem particularly promising to me. But I think you nailed it. Yeah, the bog-standard deep learning methods really require IID sampling, and meta-learning kind of just says, fine, we'll just IID sample learning tasks for you. Okay, so let's move to the first paper we're going to talk about today, that is Unsupervised Predictive Memory in a Goal-Directed Agent. That's the Merlin paper by Wayne et al in 2018. Can you give us an overview of this paper, Sam? Totally. Yeah, so I mentioned this paper a little bit before, because at least for us, for kind of the little community of researchers that I am a part of, it kind of opened the door for using non-parametric memory, or just an agent state that grows, in deep RL. So, a little bit of context here. This was around 2016, when this work was being done. The A3C paper had come out quite recently, so Vlad Mnih had kind of shown, okay, right, we can do DQN, which has a feed-forward architecture, but now with A3C we can actually use architectures with memory. So we could use an LSTM in a deep RL agent and get great results. And around this time we had the NTM, neural Turing machines, and what became the DNC, and there were the memory networks from Jason Weston and colleagues. So in addition to kind of wanting to go from feed-forward to recurrent, there was this really obvious next step of, let's also try to do the NTM thing of having an external memory. So not only do we have recurrence, we have some kind of memory; we also can get away from the fixed vector size inherent in a recurrent working memory. And this seemed to be producing good results in supervised learning, so I think it was a really obvious and exciting direction. The issue was it didn't really work right away in RL. So around the time that I started at DeepMind, this was kind of the state of things. And I wasn't working on deep RL agents at the time.
But I kind of was aware that there was, you know, some impediment here. And with the Merlin paper, the basic idea was let's move on the assumption that we're not getting great results in RL with external memory because there's something about the noisy RL gradients that make it so that when you propagate gradients between these sort of retrieval from the memory and the writing to the memory, things break down and it doesn't work. So, they kind of proceed on that assumption and say, well, maybe we can use an unsupervised learning method to determine what we write to memory so that we don't have to backprop those gradients through or we don't have to rely on them anyway. And then we can just learn the retrieval mechanism onto these sort of fixed vectors. And that worked really well. That's basically the Merlin paper is demonstrating that. So, I have kind of puzzled over this agent a few times over the years. It seems like every year. So, I pull up the diagram like, what the heck is this thing trying to do? Yeah. Seems very different than agent design design. When more are you still looking at? Totally. So, in the figure, and this is one of the times that I wish we had video, but in the figure that describes the overall agent design figure one, it shows a few different agents that RL LSTM, RL Mem, and then the final Merlin agent. And these seem to be, I guess, variations on A3C. Could you very briefly talk about how these three agents differ? What are they doing that's different? Yeah, definitely. So, first, I want to say, like, I'm totally with you and being confused by that diagram. I actually hadn't looked at that diagram in quite a long time because I just sort of knew the architecture from talking to the authors over the years. And I went back to look at it before the interview today because I just wanted to remember exactly what was in the paper. And I found it incredibly hard to parse. So, thank you. Yeah, I mean, I don't think you're off base by any means finding that thing slightly inscrutable. I think that the main, like, the primary components of those agents are quite straightforward though. So, the RL LSTM, as far as I know that is just A3C. There's nothing different about that agent, substantively from what's in sort of Vlad's A3C paper. Yeah, RL Mem is then the addition of an external memory where at every time step you're projecting your hidden state with a neural network. And I don't remember if that's a linear layer in MLP to, you know, some size. So, let's say you've got, like, a 128 size LSTM state. I'm going to project out to, you know, a 50-dimensional vector that I'm going to store. And then, at the same time, at every time step, you do a retrieval by, with a different neural network projecting out to some, you know, 50 dimensions again. And then, I don't remember actually if that architecture does, I think it's probably a dot product retrieval there. Where you basically do a dot product between the vector that you generated for retrieval or the key, as it's often called, and the vector you want to retrieve, which is often called the key. And, you know, based on the weight you get back from the dot product, you basically do a weighted sum over all of the memories. So, you're trying to find similar memories that are similar somehow to the current situation, is that right? I think that's exactly right. You can imagine learning, you know, an embedding space where that similarity is something really, really specific. 
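A minimal sketch of that read operation may help: project the current hidden state to a query, compare it against the stored keys with a dot product, and return a softmax-weighted sum of the stored memories. The dimensions, the softmax, and the single read head below are illustrative assumptions, not the exact Merlin or RL-MEM configuration.

```python
import numpy as np

def episodic_read(hidden_state, W_query, keys, values):
    """One attention-style retrieval from an ever-growing episodic store.

    hidden_state: current controller (e.g. LSTM) state, shape [d_hidden]
    W_query:      learned projection into the key space, shape [d_key, d_hidden]
    keys:         stored keys, one per past time step, shape [n_mem, d_key]
    values:       stored memory vectors, shape [n_mem, d_value]
    """
    query = W_query @ hidden_state      # project the controller state to the key space
    scores = keys @ query               # dot-product similarity, one scalar per memory
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()            # softmax over all memories
    return weights @ values             # weighted sum of the retrieved memories

# Illustrative shapes only.
rng = np.random.default_rng(0)
h = rng.normal(size=128)
W_q = rng.normal(size=(50, 128))
stored_keys = rng.normal(size=(200, 50))
stored_values = rng.normal(size=(200, 50))
readout = episodic_read(h, W_q, stored_keys, stored_values)   # shape (50,)
```

In the usual terminology, the projected vector is the query and the stored vectors it is compared against are the keys, and what counts as "similar" is determined entirely by the learned embedding, which is the point being elaborated next.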
It's like, I want states that had a similar color, but I don't care about what objects were present, or I want states where I was seeing an object with a similar shape, but I don't care what the color was. So, it's kind of similarity in some learned space, and you retrieve information based on, yeah, based on those embeddings and the similarity in that space. Cool. And then the Merlin agent goes beyond that, adding this memory-based predictor. What's going on there? Yeah, that's right. So, the way that that works is you basically, rather than, you know, just storing a projection that you're then going to- I don't remember actually if that, with that RLMM baseline, they back-propagated through it, or if they didn't. In the diagram, it suggests that they did- that they did back-prop through it, which slightly surprising, because there's a lighter paper, this MRA paper, where they show that things actually work pretty well. It's different tasks when you do that. So, I'm a little bit reluctant to read too much into, like, little details about that. But in any case, with that agent, you're just storing a vector that you've either learned from gradients from the future that are back-propagated through your retrieval pathway, or are just untrained. So, you're just storing something, and it's a random projection, which believe it or not, that actually does work pretty well out of the time. But sometimes, not sufficiently well, depending on the task. So, with Merlin, what they do is they just run a variational autoencoder on the- sort of the current LSTM state in order to decide what to store. So, it's basically the standard, again, this is like 2016, so VAE's, or all the rage. So, there's this kind of obvious logical step, which is like, oh, we're not happy with the random projections. And we're basically not getting good performance when we put random projections in the memory. And things aren't getting much better when we're back-propagating through the memory. So, maybe there's some unsupervised loss that we can put on this thing that will make it so that the things in memory are useful in the future. And because VAE's are really popular right now, let's try to VAE. So, we'll basically take the vector that we were going to store, and we'll treat it as the latent for a variational autoencoder. And I think in this work, they actually were trying to predict the next time step. So, I guess it's a variational next step predictor. But, you know, a lot of these details like that, again, I wouldn't read too much into them because it changed a lot over the cycle of this project. Basically, the idea is like, we're going to sample some latent and try to either reconstruct the current frame that we're seeing, or predict the next frame. And I think it's actually predicting the next frame in that diagram, if I remember right. And basically, I think the main thrust of this paper is like, look, if we add this unsupervised loss, such that we're not just putting random projections in the memory, we're putting representations of the current time step that have been shaped by this next step prediction loss, then we can get better performance in our tasks. So, what type of tasks does this type of agent work well for? Right. So, they have this latent learning task where the agent is kind of just running around in an environment and then, you know, for a while without a goal. And then it's given a goal and it has to navigate to it. They have the memory game, which is a really nice task. 
It's kind of the one that you might have played as a kid, where I think maybe clue us like this. I think there are some common games that are like this, where you basically have a bunch of cards laid out on a table face down, and you're allowed to flip over one at a time and see what the face is. And there are some pairs in this set of cards that you have. And your goal is to flip over a pair and sequence. So, you basically have to remember where the things are that you've flipped over so that you can intentionally flip over to an arrow. Is that a familiar game? Yeah, I remember playing that as a kid. Okay, nice. Yeah, that was like a classic. I remember they were really excited when they finally got that one to work. So, yeah, it's tasks like that that require like some of the latent learning setting, for instance, an LSTM will just probably not remember very much of what it's seen during the sort of exploration base. Whereas the agent with the external memory and with representations in it that are shaped in a reasonable way so they're not just random projections can actually remember what it saw before in order to navigate effectively. So, we had Rich Sutton's bitter lesson article recently where he talked about data always wins. And maybe is that more data always wins? Is that, does that inform your choice to focus on meta-RL that even learning, the algorithm, one level higher than maybe traditionally consider is something worth doing because with the right data and learning system we can learn to do that better than we could hand design? Yeah, that's an interesting question. I would say that wasn't part of my original motivation, but I think it wouldn't have been a bad reason to do it. And it might be a good reason to keep working on meta-RL now. Yeah, it's interesting that bitter lesson because, you know, there are a lot of settings right now that I would like to be able to make progress in that I can't make progress in with meta-RL. So, it's a very basic one just Atari. So, the whole idea with meta-RL is that you're going to sample all these tasks from a task distribution and by training your learner on it you're going to get good at solving those tasks super fast. So, okay, can I do that with Atari? I really can because I've only got 57 Atari games and they're really quite different from one another. So, you know, I can't sample 10,000 Atari games or a million Atari games and then the 57 real ones are samples from that distribution such that I can, you know, have those as my held out tasks and learn by meta-learning a great, you know, or really great policy for learning these ones that I care about. So, yeah, I wonder about that bitter lesson maybe. I guess there's hypothesis that eventually will have access to so much data, you know, maybe by learning on all of YouTube then we can, you know, with other algorithms, it wouldn't be exactly meta-RL like we have now. But with some clever algorithms we could basically just let the data tell the age and how to solve those. But I have to say right now as of 2021, I'm really keen on methods that are a little bit more, I don't want to say hand design, but where, you know, as a researcher we can design something that can solve Atari a lot faster than we can right now because they're practically speaking isn't a way to just let the data do it. Cool. And then how does going back to Merlin, how does it handle exploration, is it using a standard A3C exploration strategy or is it doing something different there anything? Yeah, yeah, yeah, it is. 
So that agent, and actually all the agents we'll talk about up until the very last paper, basically are just using the same exploration and credit assignment, basically all the RL is the same as A3C or Impala, as kind of your standard agent, and it's really just the architecture that's changing in these. There is one caveat to that, which is that in at least one of their experiments they did do reward shaping, like in the latent learning one, because if you just used the policy entropy exploration from A3C they wouldn't be able to run around the map to see stuff in order to do the latent learning. So yeah, I think that demonstrates just how much, in this past work, we were kind of stuck with some of the limitations of the basic RL algorithm we were working with. And then, speaking of sampling games in Atari and not having enough games to sample from, would you consider something like ProcGen from OpenAI, which is generating new levels, would you consider that kind of a form of meta-RL, because you're learning how to solve these sampled games, or is that kind of splitting hairs at that point? It definitely feels like a meta-RL setting to me. I think there's the Alchemy data set that came out from DeepMind recently. There's CoinRun, which was another nice one where you could programmatically generate tasks. And of course, in MuJoCo and continuous control, there's a lot of meta-RL work that's just sampling from a distribution over, you know, how much the legs weigh and things like that. So yeah, I mean, I'm definitely excited about those. I think the tough thing is trying to generalize. So let's say you've trained an agent on ProcGen, or what's that other one, I'm blanking on the name of it, but something like Alchemy. Let's say you trained an agent on those, but then you want to generalize to Atari, or you want to generalize to some other game or some task that you want to solve in an applied setting. There's not really a way to do that generalization. It really only works within the task distribution that you've kind of cooked up. Okay, so let's move on to the next paper here today. That's meta-RL without forgetting: Been There, Done That: Meta-Learning with Episodic Recall. That's by yourself, Samuel Ritter, in 2018. So can you give us a lowdown on the main idea in this paper? Yeah, for sure. So this is the one that we were talking about earlier, where my advisor said that meta-RL acts like a hippocampal amnesic, so let's try to fix that. So basically, in this work, I kind of picked up where Merlin left off, or kind of went through this door that Merlin had opened, if you will, to say, all right, we've got this recurrent working memory that's doing these cool things, solving these bandit tasks, for instance. And I basically want to take the knowledge that this recurrent network has gained through some really smart exploration, and I want to store it in such a way that it can be retrieved later when the agent needs it again. And so basically, the way we ended up doing that was by having an episodic store of LSTM states. So, in contrast to Merlin and a lot of other episodic memory work, where the thing you're storing is some projection of the LSTM state, in this setting we were like, no, the LSTM state has exactly what we need.
Like, for instance, it has the bandit parameters encoded in it, or at least it has the state of the bandit-solving algorithm encoded in it, if you will; it's kind of got the belief state. And so we're just going to store that raw in the episodic memory, and then we're going to pair it with a contextual cue. So these were recurring bandits, right, in the basic task setting from that paper. And each time a bandit would re-occur, it would come along with, I think in those experiments, an Omniglot digit. So it was kind of like you're at a bandit in a casino, and there's a picture above the bandit that you're playing. And so later on, the next night or later the same night, you come back to the same bandit, and you're like, oh, I remember that picture, I remember that I actually found that this arm was working really well all the time. I assume an actual casino is more like a wandering bandit, so it's randomized or something. But in this task setup, when the agent saw the same image again, it could assume that it was in a bandit problem with the same arm parameters. So if it had solved that bandit problem before, or at least done some exploration and gotten some information about what was probably good or probably not good, then it could assume that that would still be the case. And so, with this episodic memory, what our agents could do is a search in the memory, by doing this neural network-based query, like in Merlin, over the contextual cue representations. And then one key aspect of this work that differentiates it from pretty much all the other episodic memory deep RL work I know of is what happens when we retrieve from the episodic memory. Rather than doing some projection of what we retrieved and feeding it as input to the recurrent network, we basically said, well, we know that vector that we stored has the information we need, and we know that the recurrent weights, so the dynamics of the LSTM, are already shaped in such a way as to process that thing. So let's just sum it onto the LSTM state as though it was another component of the LSTM. So within an LSTM, you have the input times a gate plus the previous state times a gate. And here we were saying, well, let's actually add another gate, a reinstatement gate, for the past LSTM states that we've retrieved based on contextual similarity. And we'll just multiply the retrieved LSTM states, the old LSTM states, by this gate and sum it on with the other two. And that turned out to work super well. Did that surprise you, or did you expect that to work? I remember, so I kind of expected that it would work to retrieve this information from memory and feed it into the LSTM somehow. I was really happy to see that this particular way of doing it worked a lot better than feeding it as an input, or projecting and feeding it as an input. And I was especially happy with the formulation where it was basically like another gate. And I think when we talk about the dissertation, part of that will become clear. But I do remember I had this meeting with Razvan Pascanu and he suggested something about that way of gating, or he mentioned something about how you could treat this thing as though it was another gate. And I went back to the code and ended up trying this particular version. And when that version worked out, it was like, that is so cool. Like I didn't expect that to work. It ended up just being kind of pretty, I suppose.
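As a rough sketch of that reinstatement idea: retrieve a past LSTM cell state by similarity over contextual cues (for example with the kind of attention read sketched earlier), then add it into the current cell state through its own learned gate, alongside the usual forget and input gates. This is a simplified single-step illustration under assumed shapes, not the exact architecture from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reinstatement_step(c_prev, candidate, retrieved_c,
                       forget_gate_pre, input_gate_pre, reinstate_gate_pre):
    """Cell-state update with an extra gate for the retrieved episodic memory.

    c_prev:        previous LSTM cell state
    candidate:     new candidate cell content (tanh of the usual pre-activation)
    retrieved_c:   past cell state retrieved from the episodic store by cue similarity
    *_gate_pre:    gate pre-activations produced by the network's learned weights
    """
    f = sigmoid(forget_gate_pre)       # standard forget gate
    i = sigmoid(input_gate_pre)        # standard input gate
    r = sigmoid(reinstate_gate_pre)    # extra gate for episodic reinstatement
    # Standard LSTM terms plus a gated copy of the retrieved past state.
    return f * c_prev + i * candidate + r * retrieved_c
```

The retrieved state is treated as just another additive, gated contribution to the cell state, which is the "treat it as another gate" formulation described above.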
So speaking of your dissertation, let's get to that. This is Meta-Reinforcement Learning with Episodic Recall: An Integrative Theory of Reward-Driven Learning, and that's Samuel Ritter, 2019. So can you briefly tell us what your dissertation was about? Sure. Yeah. So basically the paper we just talked about was in ICML, and I was kind of happy with that as a machine learning paper or machine learning project. And then it's kind of natural to ask the question, can we take this architecture that we've designed seriously as a model of reward-driven learning as it happens in the brain? And there were some reasons to think that might be a good idea. So, you know, my advisor had this qualm with the original meta-RL model, that it acted like a hippocampal amnesic, and they actually had this paper where they argued for that model as a model of what's going on in the brain, specifically as a model of the interaction between prefrontal cortex and striatal dopamine. And so, I mentioned that Jane Wang and colleagues had this paper in Nature Neuroscience on meta-reinforcement learning, and that's the one I'm talking about here. So yeah, they had this nice paper where they provided all of this evidence that you can explain a bunch of different behavioral findings and neural recordings using this idea that you have this recurrent working memory, and, this is the thing I didn't mention before, that the weights of that recurrent working memory are trained by a learning signal that's produced by the striatum, which is this sub-cortical structure, it sits below the cortex, and it projects neurons into the cortex as well as other areas. And these neurons emit this neurotransmitter dopamine, which is thought, among other effects, to modulate plasticity. So the idea in this Nature Neuroscience paper is that maybe what's going on in human and animal learning involves the striatal dopamine system, which is, by the way, very well studied; some of the most exciting and believable findings in cognitive neuroscience are about this system. So it was really exciting when Jane and Matt came out with this paper saying, well, maybe what that system is doing, this system that we generally know a lot about in terms of how it works but whose downstream effects we don't know exactly, maybe its effect is to train the dynamics of this recurrent working memory such that that working memory can then execute reinforcement learning algorithms. So that's kind of a big statement, so let me just unpack it a little bit more. The striatal dopamine system implements a very simple algorithm. Specifically, we think of it as doing reward prediction errors and kind of doing value prediction, so we think of it as basically a TD learning implementation. And one problem with this is that animal behavior and human behavior show evidence for lots of different kinds of learning algorithms that are seemingly more sophisticated than that. And so in Jane and Matt's paper they argue, well, the reason that that's possible is that these more sophisticated behaviors and these more sophisticated learning phenomena are arising via meta reinforcement learning that is carried out by this very simple striatal learning system. That was basically the prior work. And then one big hole in that picture is the fact that that system just forgets everything as soon as it learns it.
So, you've, you've got this hard one knowledge that you've gleaned through this really smart reinforcement learning algorithm that you learned through strideal dopamine. But then you just forget it right away as soon as you go into another context and that's really not ideal doesn't seem like a full model of human learning. And so, in my dissertation, the idea was, well, can we model the hippocampuses role in this picture? And can we do that modeling with an episodic memory based deeper reinforcement learning agent? So, basically, we took that architecture with the, this reinstatement process where we store LSTM states and we say, well, actually, we're storing like cortical states, we're storing like firing rates across cortex, we're storing those in such a way that they can be retrieved when a similar situation happens again. And this is what I said right there is like a very classic model at a high level of what the hippocampus does going back at least to David Mar. And kind of in our model, we were just implementing in a very different way rather than having like an attractor network, like they would have an older modeling work. We just have this slot based memory where we just store everything separately, which kind of gets us away from some implementational challenges that makes it really hard to work with those models. And by doing that, basically, the thesis shows that you can capture some specific data that pertains to the interaction between episodic learning and the kinds of learning that are associated with stridal dopamine and prefrontal cortex. So it kind of puts it all together into one picture with cortex, hippocampus, and the striatum, and they're kind of associated learning phenomena altogether so you can see it via a unified account. Okay, so in this dissertation, you talk about how different parts of the brain are involved in different types of learning. And if I gather correctly, that's model-free learning with dopamine and model-based learning in the prefrontal cortex and episodic learning with the hippocampus. So can you maybe tell us more about these three types of learning in the brain and what is each type best used for? Sure, so model-free learning is associated with stridal and it's sort of dopamine-ergic projections. And the idea is that that is your very basic habit learning or your kind of learning that comes down to I did something and then later something good happens happened. So I'm just going to do that thing over and over again. I'm going to do it more regardless of what happened in between. So, you know, for example, you develop a habit of, you know, you're out to work every morning and you don't really have to plan that route every time. You just know, oh, when I come to this intersection, I take a left. Simple as that. That's the kind of behavior that we associate with model-free learning. Model-based, on the other hand, is typically associated with the prefrontal cortex and it's more about explicit planning. So, when you're playing chess, for instance, or at least, you know, kind of how I imagine people play chess, I think experts might have slightly different strategies for playing. But some of like me is definitely a amateur. It's all about trying to predict out sequences of moves in order to, you know, predict some outcome that's really relatively far in the future so that I can decide what to do now. 
And this contrasts very strikingly with habit learning where you basically just have to try entire sequences of experience over and over again in order to see what leads to good outcomes and to gradually learn to do the things that achieve reward. Okay, I think you asked about also episodic. And that's kind of a newcomer, I guess, to the discussion in neuroscience about reward-driven learning strategies. So, whereas, you know, model-free learning and model-based learning have these very historic legacies. Episodic learning really starts with, as far as I can tell, with this paper from Lang-Yan Diane, and then more recently with some, there's kind of a preview-preview paper from Nathaniel Daw and Sam Gershman, and then a bunch of empirical work from Aaron Bornstein. That basically argues that the picture you get with just model-based and model-free is sort of incomplete. In part because you can see in sort of laboratory experiments, the effect of specific single past events on people's decision-making. So, you can like show people a context while they're making some decision and then something good or something bad happens. And then like days later, you can show them that context again, and they'll be very heavily influenced by it in a way that's not predicted by sort of the classical model-free model-based paradigms. And, you know, so these researches have argued that, look, there's this kind of missing, that there's an explanation for this type of behavior that's missing, and it seems really, you know, obvious that the hippocampus would probably be involved with this given its sort of long-standing association with memory for specific past events. Do you think that our use of these kinds of learnings are different between babies and adults? Like, is this change over our life? Yeah, that's an interesting question. I sort of surprisingly, there's not as much research on that as I might have thought, but there is one paper from 2016. I think it's from the Cornell Medical School, where they show that, yeah, if you have children and adults perform some of these classic lab tasks that assess model-based versus model-free learning behavior, that, you know, children exhibit much less model-based. Much less model-based behavior than adults do, which is kind of intuitive, you know, this sort of habit learning does seem somehow easier or to require less than model-based reasoning does. And I can say that over, of course, in my life, it does seem like I'm better at planning than I was when I was like four years old or five years old. Yeah, it's just one paper and some intuition there, so I think it's still a pretty open area of question. And then looking at the broader picture, do these types of learning appear at a certain point in evolution, or how do you think about, you know, whether some of these are more recent or ancient? Yeah, yeah, that's an interesting one as well. And again, there's a lot less work on this than I might have thought, but I asked some friends about this. When you asked the question, as we were preparing for this, some folks who have, you know, some really in-depth knowledge of sort of the cognitive neuroscience of these learning strategies. And it did turn up one paper that was interesting, which was showing, or I guess it was arguing for the possibility that the transition from aquatic life to land-based life may have come with greater value. 
That is, a greater requirement for planning, and they do some simulations to show that under water there's more direct line of sight possible, and there's less to occlude your vision, and so there's less of a need for planning. And so they do some simulation experiments that suggest that it's plausible that at that transition in evolutionary history, there may have been a dramatic increase in planning abilities. But yeah, that's about planning, and not exactly about model-based RL, which is like one way you could do planning. So I guess the main takeaway is, again, this is a wide open area of questioning. So yeah, if you want to start a cognitive neurobiology, evolutionary neurobiology lab, I think this might be a cool question to start with. Let's move to the next paper, that is Rapid Task Solving in Novel Environments, that's by Ritter et al. in 2020. Can you give us the gist of this one? Yeah, for sure. And I do want to point out, actually, that the next two papers we're going to talk about, in contrast to the earlier ones, which were done during my PhD, so it was all very sort of lonely and isolated work, these were done in tight collaboration with some other people, so I really cannot, by any means, take all or even much of the credit for it. So especially, I want to say, David Raposo, who I've been collaborating with for the last two years, has been amazing. I think my work's gotten a lot better from it. And also, in this paper, Ryan Faulkner and Laurent Sartran, we were kind of all working on it full time. It wasn't this kind of single-author, the-first-author-does-99%-of-the-work sort of thing. It was really, really nice. And yeah, basically, in this work, we wanted to push the boundaries of meta-reinforcement learning with episodic memory. And specifically, we wanted to do it in service of building agents that could build models on the fly. So we wanted an agent that we could drop into some new environment, and it would kind of know how to start exploring the environment in an intelligent way, kind of the way that meta-RL agents do. But further, we wanted that agent to be able to gather knowledge and then repurpose it, and basically use that knowledge for planning in ways that previous meta-RL agents just don't do. So, to be concrete about it, in a navigation setting, because that's maybe the easiest to think about, we wanted to be able to drop an agent into, say, some neighborhood and tell it, we want you to go to this goal location, show it an image of where we wanted it to go, and have it intelligently explore the map in order to find that goal location. And then we wanted to be able to give it another goal, and have it remember what it saw along the way and piece together a better strategy for getting to that goal than just its basic exploration. And, you know, our reasoning was that after some relatively small amount of experience in an environment, agents with this sort of ability should be able to just plan shortest paths to any goal that we would give them. And this is something that humans can do, at least intuitively; it seems like this is something we're quite good at, but it's definitely not within the wheelhouse of agents before these ones. Part of the reason that we were keen on this is we thought that we could do it really well with episodic memory and deep RL.
And so we started out with tasks like the one I described, so maybe I'll just go with that one, we used the street learning environment that came out a couple years ago. And we just kind of modified that environment so that we could sample neighborhoods from many different cities for each task, and then we'd have held out neighborhoods to evaluate the agents on. And with this task, we tried, you know, first of all, the basic RNN-based meta learners, and they do not do well at this sort of thing. Specifically, they don't do well at the planning aspect, so they can learn good exploration strategies, which is what we saw in the sort of the previous generation of meta-RL agents, but they really weren't able to plan these shortest paths. And we assumed that that was because these LSTM agents just couldn't remember what they had seen, because these were pretty long episodes. You can't remember over dozens of time steps, this kind of information we suspected. So we then moved to the next step of more sophisticated memory agents, which was Merlin, and there was this other one called MRA that had come out more recently. And these are agents that are like the one from my dissertation, and like Merlin, they would basically do a weighted sum of the memories in the episodic store, based on a single query, or maybe a multi-headed query, and they would basically use that to inform a policy. And those agents also, we saw, were just not able to do this planning. So at that point, we were like, this is interesting, because these agents clearly remember everything that they need to plan these shortest paths. By enough time in the episode, we can be pretty sure that they've seen what they need to see, but they're not able to actually execute a planning algorithm over that information. And so we tried a bunch of different approaches to enable agents to plan with this information that was in their episodic memory, and the algorithm that we converged on that worked the best, and we kind of liked the best, was this so-called episodic planning network, which is basically an extension to those old memory agents, where you could retrieve from the memory, and then run self-attention over the items that you retrieved. And I think actually in that paper, we just run self-attention over the whole memory, because the episodes aren't that long. And, right, so probably people will be familiar with self-attention. Basically, the idea is like each memory queries to each other memory. And basically, we would iteratively self-attention, so we would select a set of vectors to attend over, and then get a new state over that set of vectors as a result, and then we would iterate with the same attention model, that same process, some number of times, and then we would send the results out to the policy. And even though it might not be obvious, especially because I'm describing it with words, it might be a little hard to see, and the diagrams hopefully is a little easier to see. This kind of architecture is, in principle, capable of learning algorithms like value iteration, where you basically, at least intuitively, you could imagine storing the identity of each state in each one of the rows of the self-attention vectors, or in each self-attention vector. And then the self-attention could carry out this kind of adjacency computation that you do in value iteration. 
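A rough sketch of that episodic planning step, as I understand the description: attend over the stored memories to form a workspace, then iterate self-attention over that workspace before handing a summary to the policy. Everything here, the shapes, the single attention head, the number of iterations, and the mean-pooled summary, is an illustrative assumption rather than the published architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, W_q, W_k, W_v):
    """Single-head self-attention: every row of x attends to every other row."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores, axis=-1) @ v

def episodic_planning(memories, W_q, W_k, W_v, num_iters=4):
    """Iterate self-attention starting from the (fixed) episodic memories.

    The stored memories are never rewritten; each iteration produces a new
    'workspace' of the same shape, which is what can support value-iteration-like
    propagation of information between remembered states.
    """
    workspace = memories.copy()
    for _ in range(num_iters):
        workspace = self_attention(workspace, W_q, W_k, W_v)
    return workspace.mean(axis=0)   # summary vector passed on to the policy

# Illustrative shapes only: 64 memories of dimension 32.
rng = np.random.default_rng(0)
mems = rng.normal(size=(64, 32))
W = [rng.normal(size=(32, 32)) * 0.1 for _ in range(3)]
plan_summary = episodic_planning(mems, *W)
```

Reusing the same attention weights at every iteration is what lets an algorithm like value iteration, which applies the same update repeatedly, generalize to more iterations or larger maps than were seen in training.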
And so basically, we saw that, first of all, this agent worked really well, saw the task it was planning with perfect shortest paths after only a small amount of experience in each environment. But further, we were able to analyze the agent and find evidence that it does actually learn value iteration like algorithm, and further that that algorithm generalizes to much larger maps than the ones that we trained on. So in value iteration, we're writing updates as we go. Is that right? And is there some notion of writing updates as well? Yeah, definitely. This was the thing that I was worried might not come through with my description. So basically, when you're doing self-attention, you have a set of states, and you attend to all of them, or you attend from each one to each other one. And then the result is another set of vectors of the same dimensionality. And so that basically provides you with something like a little workspace in which you can do this iterative updating. Does that make sense? So you're okay. So if I understand correctly, the agent's not updating any memories, but it's using its workspace to figure out the value. Is that right? Exactly. Yeah, yeah, that's right. So, yeah, it's a deep architecture discussion for voice only, but I think we're getting there. Yeah, that's exactly right. So the way that it works is you fix the memories so that they are the same on every time step. So you just write the new one on every time step. But then you kind of have, if you will, this workspace, it's kind of like your working memory, I guess, that you can then iteratively update in order to execute whatever algorithm makes sense given the information that's fixed in the episodic memory. I hope a little bit later we can talk about contrasting these different aspects of episodic memory. But let's move on to the next paper. We're going to talk about today is synthetic returns for long term credit assignment. And that is David Raposo at all in 2021. What was the basic idea of this paper Sam? I mentioned that I felt slightly frustrated that I couldn't like hand off the the meta RL algorithms that we were developing to, you know, other researchers who are trying to solve Atari or, you know, other tasks that don't have a whole distribution built in. And so after we finished that previous piece of work, I was really thinking about what can we do, given the expertise that we've built up and kind of improving the capabilities of these DBRL agents with this kind of data structure. What can we do that we can hand off that we'll just work in any environment, even those that don't come with like a whole distribution over similar environments. And this paper was kind of the result of that. So we kind of identified that the credit assignment, especially long term credit assignment, is kind of a primary bottleneck for for DBRL agents, especially for the kinds of tasks that, or at least some of the tasks that are of very central interest in organizations like DMI right now. And we kind of, you know, identified that with this data structure, we have a lot of opportunities for doing credit assignment that aren't available when you have, you know, only your present state or only your kind of belief state all rolled into one vector. And so this paper was basically an effort towards kind of making good on the promise of this data structure for doing credit assignment over a much longer period of periods of time. And with some kind of better variance properties, then what you get is sort of standard RL algorithms. 
I hope a little bit later we can talk about contrasting these different aspects of episodic memory. But let's move on to the next paper we're going to talk about today, which is Synthetic Returns for Long-Term Credit Assignment, and that is David Raposo et al., 2021. What was the basic idea of this paper, Sam? I mentioned that I felt slightly frustrated that I couldn't hand off the meta-RL algorithms we were developing to other researchers who are trying to solve Atari or other tasks that don't have a whole distribution built in. And so after we finished that previous piece of work, I was really thinking about: given the expertise we've built up in improving the capabilities of these deep RL agents with this kind of data structure, what can we do that we can hand off, that will just work in any environment, even those that don't come with a whole distribution over similar environments? And this paper was kind of the result of that. So we identified that credit assignment, especially long-term credit assignment, is a primary bottleneck for deep RL agents, especially for some of the tasks that are of very central interest in organizations like DeepMind right now. And we identified that with this data structure, we have a lot of opportunities for doing credit assignment that aren't available when you have only your present state, or only your belief state, all rolled into one vector. And so this paper was basically an effort towards making good on the promise of this data structure for doing credit assignment over much longer periods of time, and with better variance properties than what you get with standard RL algorithms. So can you contrast the idea of synthetic returns with N-step TD learning? It seems like there's something a little bit similar there in terms of bringing the value far forward or far backwards. N-step TD learning is, I guess, an example of this very general class of credit assignment algorithms, and I think the vast majority of the credit assignment algorithms I know of are in this category, certainly the ones that are prominent in deep RL, where basically, in order to define the value or the utility of a state or a state-action pair, you try to sum up over all the rewards that happened after that state or state-action pair. And with deep learning, we do this with a particular class of function approximator: we use neural networks to try to predict value, and then maybe we also train a policy that optimizes the same signal, the sum of all the future rewards. This is true in N-step TD learning, and even if you just do Monte Carlo rollouts all the way out, it has the same property. And one issue with that is that you might have a bunch of rewards that came after some state or state-action pair that are completely unrelated to it, but then you have some future rewards that are related, and you don't really have a way to segregate the unrelated and related rewards from each other in order to learn only about the ones that matter. Instead, you just have to sample a lot of experience so that you can let the unrelated ones average out. They'll average out to zero, but it takes a lot of data to do that. And we really wanted to get around that. Actually, I'll just take a step back, because when you asked about TD learning, you might have been asking about this point: with TD learning, you can bring rewards arbitrarily far back if you're willing to wait long enough, through value function bootstrapping. But you're still faced with the fact that the value function bootstrap you're predicting is itself trying to predict a bunch of rewards that might be totally unrelated to the state or state-action pair you want a value estimate for. So with the synthetic return algorithm, the idea was: maybe, using the episodic memory, we can learn, for each state (we could have done state-action pairs, but in this case we did states because it was a little simpler), which future states have rewards that are related to, or predictable by, the current state. And then we'll create a utility estimate (it may not be exactly a value) that is sensitive only to those, so we can just right off the bat ignore all the ones that are unrelated. And that's done with kind of straight-up regression, is that right? Yeah, exactly that. It's basically a linear regression where we say: at every time step, we'll get some reward, and we'll try to predict that reward using a sum over some scalar output function of all the past time steps, or all the memories, if you will. The result is a linear regression model that, for each state, tries to output its reward as a sum over all past states, and then the weights of that regression model act like a credit assignment signal. They basically say: this past state contributed this amount to the current reward that I'm getting.
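A hedged sketch of the regression just described, written in PyTorch as my own simplification (the actual Raposo et al. model includes gating and other details omitted here; the layer sizes and names are assumptions): each stored memory gets a learned scalar contribution, the current reward is regressed onto the sum of those contributions plus a current-state bias, and the per-memory contribution is then read off as the credit assignment signal.

```python
import torch
import torch.nn as nn

class SyntheticReturnHead(nn.Module):
    """Simplified sketch: regress the current reward onto a sum of scalar
    contributions from past state embeddings (the episodic memories)."""
    def __init__(self, dim):
        super().__init__()
        self.contribution = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                          nn.Linear(64, 1))   # c(m_k)
        self.bias = nn.Linear(dim, 1)                          # b(s_t)

    def forward(self, memories, current_state):
        # memories: (T, dim) past state embeddings; current_state: (dim,)
        contribs = self.contribution(memories).squeeze(-1)     # (T,)
        pred_reward = contribs.sum() + self.bias(current_state)
        return pred_reward, contribs

# Illustrative training step: MSE between predicted and observed reward.
head = SyntheticReturnHead(dim=32)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
memories, state, r_t = torch.randn(20, 32), torch.randn(32), torch.tensor([0.7])
pred, contribs = head(memories, state)
loss = (pred - r_t).pow(2).mean()
opt.zero_grad(); loss.backward(); opt.step()
# After training, contribs[k] plays the role of the credit assigned to step k.
```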
And so using this method, I think you got some really pretty dramatic results, is that right? Like in the skiing game and others. Can you tell us about the performance here? Sure. So basically we started with the simplest possible task to illustrate for ourselves how this works, called the chain task. But then we moved to something slightly more complicated, which is catch, this sort of classic debugging task, but with a twist, where the agent would play like 20 rounds of catch without any rewards, and then at the end it would get rewards for all of them. So it had this challenge of figuring out which of these catch games went well, or basically, in this undifferentiated stream of experience, what happened to lead to whatever reward signal you got at the end of 20 rounds. And basically we saw that the regression model worked just as we would want it to: at that last time step, the memory output function would be high only for the time steps at which the agent caught the ball, and it would be zero, or close to zero, everywhere else. So if you plot the memory output function for each memory over time, you see a spike every time the agent caught the ball. And intuitively, if you just plug that spiking signal in as an auxiliary reward, you'll learn a policy. And when I say plug it in as an auxiliary reward, I mean you take whatever agent you have, in our case an IMPALA agent, and you just augment the reward with this memory output signal. If you do that, then yeah, you'll learn a policy that catches the ball. It works super well. And actually, we didn't see a lot of seed variance there, and it wasn't very hyperparameter sensitive, so that was really nice to see. And then, yeah, you mentioned skiing. Skiing actually has a pretty similar structure to that, where you have to go through all of these gates. This is an Atari game, and at the end you get a reward for all the gates that you hit, or all the gates that you pass through. And just as a little context, this game was among the last to be solved at all by a deep RL agent, and it was only by Agent57, in spring of 2020, that it was solved. In that case, they had a giant discount factor and they had to wait almost 80 billion steps for this thing to be solved. And it was believed that the reason for that is this really long reward delay and this kind of variance in the credit assignment signal. So yeah, we were really happy that we were able to just totally nail that task, solving it in more like a billion and a half steps. So I think it was something like a 25x gain, because actually they had some seeds that were solving it faster than that headline number. But it was a really big speedup, even with a much less sophisticated agent than the R2D2-based Agent57. So it really felt like we figured out what was hard in that task and just really solved it. And that was basically where we ended with that paper.
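For concreteness, this is roughly what "augment the reward with the memory output signal" amounts to; the mixing coefficients here are assumptions rather than the paper's actual values, and the underlying learner (IMPALA in their case) is otherwise unchanged.

```python
def augmented_reward(env_reward, synthetic_contribution, alpha=0.3, beta=1.0):
    """Combine the environment reward with the learned credit signal.

    synthetic_contribution is the regression model's scalar output for the
    current state, i.e. how much this state is predicted to contribute to a
    later reward. alpha and beta are illustrative coefficients only.
    """
    return beta * env_reward + alpha * synthetic_contribution

# The actor-critic learner then trains on augmented_reward(r_t, c_t)
# in place of the raw environment reward r_t.
```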
So do you feel like this is making progress towards solving temporal abstraction in RL? Yeah, that's a really interesting one. One way I could imagine this contributing to the effort towards temporally abstract agents would basically be to compress the memories that go into this regression model. So right now we have this regression model where we have basically one regression weight for every time step. But it doesn't really have to be that way. We could have one regression weight for every 15 time steps, or every 30 time steps, and I guess we could even learn the number of time steps that are treated as a single chunk for the purposes of credit assignment. And then we could do the same process hierarchically. We could say, well, my regression model says there is a reward component of three associated with these 30 time steps, so now I'm going to do another regression over those 30 time steps to see which of them, in the context of this policy, contributed that reward. As a result, you're getting a sort of hierarchical, temporally abstract learning algorithm. The performance here is super interesting: with such a simple method, to improve results so much on a really hard task. Congrats. I appreciate it. Yeah, it was really exciting to see that result, I'm not going to lie. But can we talk about the limits here? First of all, is this entirely on policy? Because I think the regression is assuming that the policy is going to be similar next time around. Is that right? Those are really good questions. I'm happy to talk about the general limitations, and the on-policy case, I think, is an interesting one. So it's almost on policy with IMPALA; it's very slightly off policy, basically. And I think it's an interesting question how this will play out in the off-policy case, and maybe this is what you're getting at. Basically, we're learning something that's like a value function, this utility function, that is policy dependent, right? The regression weight that I get for some particular time step depends on what the agent did in between that time step and the reward that's being decomposed. So you've got this policy-dependent utility function. Is this whole setup going to work when we're learning off policy, when some of the data points in our regression data set are from old policies? At least from a theoretical point of view, it's a really interesting and tough question, because the prior work that I know of basically does an off-policy correction that amounts to down-weighting the gradient, or down-weighting the learning signal, as you roll time forward, if your current policy differs from the one that generated the data you're learning from. And in this case, our stated intention is to learn really long-term dependencies where a lot of the stuff that happened in the middle doesn't matter. So if we use a Retrace-style off-policy correction, we're going to just kill our learning signal, even though it may have been unnecessary to do that.
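A toy numerical illustration of that worry, assuming a Retrace-style correction that multiplies truncated importance ratios min(1, pi/mu) along the trajectory: even a mild, constant policy mismatch drives the correction toward zero over the long delays this method is targeting. The 0.95 per-step ratio is invented for the example.

```python
import numpy as np

def truncated_trace(ratios):
    """Cumulative product of per-step truncated importance weights,
    the schematic form of a Retrace-style off-policy correction."""
    return np.cumprod(np.minimum(1.0, ratios))

# Suppose the current policy assigns 95% of the probability that the
# behaviour policy assigned at every intermediate step.
ratios = np.full(1000, 0.95)
trace = truncated_trace(ratios)
print(trace[9], trace[99], trace[999])
# ~0.6 after 10 steps, ~0.006 after 100, ~1e-22 after 1000: the correction
# wipes out the learning signal for rewards delayed by hundreds of steps.
```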
Is that the issue that you were getting at? Yeah, I guess I just wanted to talk about some of the assumptions that go into that regression, and it seems like that's one of them: that the regression is conditioned on the policy, or on being near-policy, or something. Yeah, it could be. One maybe counterpoint here is that I'm not sure it's exactly an assumption of the algorithm so much as just the setting we were in experimentally so far. It's possible that things will actually work better when we have a distribution over policies that we're learning from. Something that we saw, for instance in skiing, is that there's this tendency of the agent to learn the task and then forget it. It seems like it learned a good regression model that spikes for the gate hits, or for some other good actions, but then the agent starts doing those actions all the time, and it no longer needs to decompose reward in that way, because it's just always getting that reward. It doesn't need to look at those gate-hit variables anymore. And it's because of the on-policyness that this happens. Because your policy is always exactly the same, you don't have to do regression; you just know what the reward is going to be in the current state, because you always do the same thing. So the regression model kind of breaks down if you're too on policy and your policy is too perfect. So I think it's a really interesting experimental question at this point: what happens when we have a replay buffer, so that our utility function is no longer defined over a policy pi, but over an expectation over policies pi sampled from replay. I'm really excited to explore that at some point. In terms of the brain, we talked about TD learning, and one-step TD learning with bootstrapping can be quite inefficient, slowly propagating that value signal. Can you say anything about how that compares to what's happening in the brain? Do we have that problem in the brain of inefficient one-step TD learning, or are steps in time treated in a very different way in the brain, or maybe we just don't know yet? Yeah, I think it's a really interesting question, and as with a lot of questions about neuroscience, this one verges mostly towards the last option you gave: we don't totally know. The classic result that dopamine tracks the reward prediction error suggests there is some evidence for time-step-by-time-step, TD-like reward learning going on in the brain. I have some caveats around that, but those are super detailed; I don't think we need to go that far into the weeds here. I think it's enough to say that there's evidence for that. But there isn't evidence that that's the only kind of learning that goes on in the brain. As a matter of fact, given some behaviors that humans exhibit, it would be almost impossible for that to be the only way learning is done in the brain. Specifically, it seems like humans are able to do very jumpy credit assignment. This project, actually, if I track it all the way back, got started, or had its inception, when my advisor, Matt Botvinick, gave this example. It was years ago, way before we started the project.
He gave this example of eating some food: let's say you're in a foreign country and you eat some food, and then the next day you get sick, and you have this big gap between the thing that caused the bad outcome and the bad outcome itself. But generally we don't really have much trouble with this, especially if there was something a little bit suspect about the food, like if you're in a country where it's not advised to drink the water, but you eat some food that's probably been washed with tap water. Then if you get sick the next day, it's relatively straightforward to think back and be like, okay, what might it have been? Oh yeah, I did that thing yesterday, it was a little bit suspect, maybe that's why I'm feeling sick today. And it's very jumpy; it's clearly not like TD learning. It's not like you have to get sick a thousand times and, by incrementally propagating negative reward backwards, get back to the thing that you ate. So I guess that's a long way of saying that I think it's likely the brain does something very roughly akin to what we've done in this paper, but I don't know yet of specific evidence for that. I think that is a really interesting direction for research, maybe even something to pick up if I have time in the future. Okay, so let's talk more about episodic memory. Can you contrast the idea of episodic memory with the idea of replay that we might be more used to in deep RL? There's something in common there, but what's the difference? The broad distinction is that replay tends to be memory that persists across episode boundaries, whereas the Merlin-style episodic memory is just within an episode. Replay is generally used, for instance in neural episodic control, this paper from Pritzel, Blundell, and colleagues, when you're trying to remember back to some situation that was pretty much just like your current one, and you want to remember whether things were good or bad afterwards in order to evaluate your state. Whereas episodic memory is where you're trying to remember back to something earlier in the episode that might be completely different from what's happening now, but has some bearing on what's about to happen. So for instance, we have the key-door task, which is in a couple of recent papers from us and others, where you have an early phase where you need to learn to pick up a key, then some other stuff happens later, and you need to open the door with the key in order to get a bigger reward. In that case, you need to use your episodic memory when you're about to open the door, to think back to whether or not you've got the key. It's not about thinking back to other situations where I was near a door and whether they went good or bad, like with replay and episodic control. Instead, it's about the connection between a priori unrelated events. Does that get at the question? Yeah, and it raises another question for me: what is an episode? If you think of it in terms of neuroscience, is an episode like very short-term memory? We don't have this notion of resetting in our brain. So how do you see the notion of an episode and the episode boundaries? Yeah, totally. As I was saying that, I was realizing how confusing the naming there is, because this episodic memory is within an episode.
But you might think it would be remembering back to a previous episode, and that's what replay does. So yeah, I think it's one of these cases where the nomenclature, the namespace, is a bit polluted, because one use of it is coming from neuroscience and psychology and the other is coming from reinforcement learning. But to answer the question about what it means: in psychology and neuroscience, it's often defined loosely to just be some specific thing that you remember from the past, a specific event. And the word event encodes a lot of vagueness, and I think that plays out in there being a whole, very long-running literature in psych and neuroscience on event representations. The question of how people decide when an event is over and when an event has begun is sort of an open question. For our purposes, or for my purposes developing these kinds of agents, I basically think of an episode (this is going to sound really confusing) as just a time step, or it could be multiple time steps compressed into one slot. So whatever I've put into a slot in my episodic memory, that's going to be the quote-unquote episode, as bad as the nomenclature is there. So if we have an agent that's maybe doing continual or lifelong learning, does that distinction break down between replay and episodic memory, because there is just one episode? Yeah, I think it definitely could. If you're in a setting like that, whether it's replay, kind of like DQN or neural episodic control style replay, versus Merlin-style episodic memory, I think it might come down to the algorithm that you're using on that data. If it looks more like a kernel regression to try to estimate your current state value, then maybe it's like replay. But if you're doing more of a sophisticated transformer architecture over a bunch of past states that you retrieved from this data store, then maybe it looks more like an episodic memory. I think it would really become a fuzzier boundary. So if I understand correctly, it seems like the episodic memory is used in slightly different ways in these different agents. Can you summarize the different ways in which episodic memory is accessed, and what is stored, in this range of agents? I get the sense that some of it is sort of content-addressed, finding similar things, and in other cases you had a notion of attention. So it seems like there's a range of designs here. Can you help us understand the design space a little bit, in summary? Right. You've got this data structure, which is just a buffer that you can put vectors in, and the design questions are: what vectors should I put in it, how should I produce them, and what should I do with those vectors when I pull them out? The answer to those questions from the standard models of cortical and hippocampal learning, so the model of how storage and retrieval is done that you would find in Randy O'Reilly's neuroscience models or Edmund Rolls' neuroscience models, basically says that what you store is just your cortical state itself. This is kind of what's in the model in my thesis. You don't project it, you don't do anything special, you just store it.
And then when you retrieve it, you retrieve it using this sort of associative mechanism, like you mentioned. At every time step, you look for things that are similar to your current cortical state, maybe along some axis, so maybe you learn some projection of your current cortical state for doing that associative retrieval. And then when you retrieve it, you've reinstated it: you just plug those activations back in. With something like Merlin, you have a little more mechanism in between the cortical state, if you will, or the working memory, and the storage. So in Merlin, you say, well, I don't think that's going to be quite enough. I don't know enough about the future when I'm in the past to know what I should look for, what about my sensory experience matters. You've got this huge barrage of visual data coming in and a pretty small vector to store as a memory; I don't know what I should encode. So I'll use a self-supervised learning objective to shape the representation I store. And then when I retrieve it, I mean, this was the thing, I don't remember exactly what they do in Merlin. I think they do an associative retrieval using the LSTM hidden state, and then they feed the result into the working memory as an input. So you've got your observation coming in through a conv net and then into your LSTM through the input gate, and then you've got this additional vector concatenated with the conv output. Maybe I'll just mention that transformers can also be described with this paradigm. You can say, with a transformer, at every time step I'm writing some vector, and maybe I'm going to backpropagate through the retrieval process in order to determine what to write, or maybe I won't. In the episodic planning networks paper, we have this transformer-like architecture and we just don't backpropagate through it; we found that we didn't need to. And then on the retrieval side, instead of just querying with one attention head from the current working memory state to all of the memories, we have each memory query all the other ones, and then we aggregate the result with a reduce-max, I think. So at this point, you're getting into the full spectrum of deep learning hackery, all to answer this question of: I've got this big batch of vectors, how am I going to turn it into one vector to send out to my policy layer? I think it's a really exciting space of architectures that's also being explored elsewhere. I'll just make a quick mention of Emilio Parisotto's paper where he uses transformers in RL. It's a bit of a different architecture, but it still fits into this same paradigm; it's just some different decisions about exactly what that self-attention architecture looks like.
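To summarize that design space in code, here is a hedged sketch of two read mechanisms over the same buffer of stored vectors: a single-query associative (dot-product) read of the kind used in Merlin-style retrieval, and an EPN-style read where every memory attends to every other memory and the result is collapsed with a max. The single-head simplification, shapes, and scaling are my assumptions, not the exact published architectures.

```python
import torch
import torch.nn.functional as F

def associative_read(query, memories):
    """Content-based retrieval: weight each stored vector by its dot-product
    similarity to a query derived from the current working-memory state."""
    weights = F.softmax(memories @ query, dim=0)        # (T,)
    return weights @ memories                           # weighted sum, (dim,)

def epn_style_read(memories):
    """Each memory queries all the others via self-attention; the updated
    vectors are then reduced to a single summary with an elementwise max."""
    scores = memories @ memories.T / memories.shape[-1] ** 0.5
    updated = F.softmax(scores, dim=-1) @ memories      # (T, dim)
    return updated.max(dim=0).values                    # reduce-max, (dim,)

memories = torch.randn(50, 32)   # 50 stored slots, 32 dims each (illustrative)
query = torch.randn(32)
summary_a = associative_read(query, memories)
summary_b = epn_style_read(memories)
# Either summary would be fed to the policy network alongside the current
# observation encoding.
```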
Can you talk a bit about the relationship between RL and neuroscience versus theoretical RL versus empirical RL? How do you see them informing each other? I can't say what it's like from inside theoretical RL, looking out to neuro and empirical work, because I'm just not inside that community enough. But I can definitely say that RL and neuroscience seem to have gained a huge amount from theoretical RL. Just the most basic ideas, even going back to Sutton and Barto, show that connection. Clearly, some more advanced theoretical RL, like involved convergence proofs and whatnot, maybe doesn't show up so much in the neuroscience literature, but I think we all feel good knowing that it's there. Somehow, when we see dopamine neuron firing tracking the reward prediction error really well, we feel even better about it knowing that there's so much theory underlying, for instance, the convergence properties of those kinds of algorithms. And then for empirical RL, clearly empirical RL takes tremendous value from theory. I can only speak from my experience, but I obviously think a lot about at least intuitive psychology and some ideas from neuroscience when trying to build agents, and I think it might be that way for a lot of researchers. So there seems to be a connection between those two nodes as well. And then how do you think about learning in the brain in terms of how efficient it is compared to what we might do with algorithms? I guess the brain is still so far advanced compared to our current-day algorithms, but can we imagine exceeding the brain's efficiency in learning at some point, or is that just unimaginably far off? Yeah, clearly right now people can learn in certain settings a lot faster than any algorithms can. A well-documented hypothesis for why that is, is that humans, especially adult humans, have a lot of knowledge that our tabula rasa deep RL agents don't have. And there's this open question of whether, with enough of the right data, you could get current methods to behave like humans do in terms of their learning speed. I think that was part of why meta-RL was really popular for a little while. I think, though, it's unclear whether it's going to be possible to procedurally generate the kind of data that's required to have a really convincing demonstration that these kinds of algorithms can learn at the pace that humans do, in any kind of convincing environment, I suppose. And I think it's going to be really interesting to see, over the next few years or decade, whether there are improvements on the algorithm side that enable a tabula rasa agent to get close to what humans can do, or whether there will be cases where some researchers are able to find problem settings where there is the right kind of data to learn to do something really impressive. I don't want to bet against progress here; I think it probably will happen that we see some kind of compelling demonstration, but I'm not sure how long it'll be. I see that you co-authored a paper in Nature Communications on decoding brain activations, Toward a Universal Decoder of Linguistic Meaning from Brain Activation, Pereira et al., 2018. And recently we saw Neuralink decoding a monkey's brain signals to use as a game controller, directly with thoughts. I'm wondering what you think about what they're trying to do. Are these kinds of brain-computer interfaces inevitable, or is it unclear how well they'll work out? What do you think about that? Yeah, it's an interesting one. I guess I'll just initially say that the Neuralink result is a demonstration of something that has been done before, and quite a long time ago.
If memory serves, I think it was in the early 2000s that it was first demonstrated that you could have a monkey controlling a cursor on a screen, and as far as I know, that's basically what the Neuralink demonstration was. So yeah, I'm excited that it's driving up some public interest in brain-machine interfaces. I'm also slightly quizzical, because it's been around for a long time; I think it's probably because Elon Musk just has such a star factor, he makes it more interesting, I suppose. So Neuralink specifically seems to be, I don't know, maybe in the middle of the pack of a bunch of startups that are working in this space, and there's been a lot of work in academic labs for ages, really. I guess since the 90s, it seems like things were really taking off with respect to this. I think it's a really interesting direction to go. So yeah, I did work on that paper that you mentioned, and actually, for most of my PhD I was working on decoding sentences from fMRI data, which is nice because fMRI is not an invasive method like the Neuralink demonstration, where you have to crack the skull open and stick some electrodes in; it's very invasive. But the signal-to-noise ratio is just too low. That work didn't really pan out, and I don't see much evidence that anyone else has been able to get it to work really well either. Both with fMRI and with EEG, there just doesn't seem to be quite enough signal to do really useful things. With these more invasive methods, though, it's possible to do really amazing things, and this seems to be most evident in medical applications, where you have someone who has an injury, or for some other reason has lost control of their body, and enabling them to regain some control is super exciting. As far as I know, a main impediment is just the ability to leave the recording device in the brain for very long. My knowledge here is slightly outdated, maybe a couple of years old, but when I was briefly thinking about moving in this direction, that's what I was hearing as the primary issue. So one of the reasons I didn't go that way is that it seemed like it was really more of a task for immunologists and materials scientists to get these electrodes working; the actual machine learning side of things, or the neuroscience side if you will, doesn't seem to be the bottleneck. So I'm curious what will happen with that field over the next few decades. Cool, okay. And then, besides what we've talked about here, what's going on in neuroscience and RL these days that you're excited about? I'm really excited about all the batch RL papers that have been coming out. It seems like people are getting really serious about making RL application ready, industry ready. I'm really keen on that. Also, deep RL agents, the canonical methods that are nearly on policy or replay based, are just getting a lot better. MuZero and Muesli and even Agent57 are showing that there really is a lot more room for improvement there in terms of final performance and sample efficiency, so I'm really excited to see where that goes. In neuroscience, to make a quick shout-out to something I'm excited about coming out of Princeton, from my old department: Qihong Lu and Ken Norman are working on using these sort of slot-based, episodic memory architectures to model all sorts of phenomena in human behavior and cognition.
And I think that's really exciting, because the old modeling paradigm, I think I might have mentioned it, with attractor networks, has had some inconveniences that made it hard to make progress. So I'm excited to see what will happen with that modeling paradigm moving forward. And then looking forward, what do you see yourself doing? How do you see your path going forward? Are you going to continue on these themes we've talked about here? I think for a while, at least, I'm excited to keep exploring the possibility of developing agents that get higher final performance and learn more efficiently, most likely continuing to use algorithms with this nonparametric agent state, just because I haven't run out of ideas with it yet. And if there's a chance along the way to say something that would be meaningful or useful to neuroscientists, by treating those agents as models of the brain and cognition, then yeah, I'll definitely be trying to do that in the next couple of years. Dr. Sam Ritter, thank you so much for doing this and taking the time out of your day to speak with me and our audience. I've learned a ton today and I'm sure the audience is going to love this. Thank you, Sam Ritter. Awesome, thank you so much, Robin. It was a ton of fun, and if anyone in the audience wants to chat about this kind of stuff, I'm usually around on email, so feel free to ping me, and thanks again, Robin. Notes and links for this episode are at talkrl.com. If you like this show, I need your support. You can help in a few ways. One, subscribe on your favorite podcast platform. Subscriptions make a big difference. Two, follow us on Twitter at talkrl podcast. We love retweets. Three, give us a five-star rating on Apple Podcasts. If you don't think we deserve five stars, let us know on Twitter what we could do better.
[ { "end": 12, "start": 0, "text": " This is Talk by Rail Podcast. All reinforcement learning, all the time." }, { "end": 20, "start": 12, "text": " Interviews at Brilliant folks across the world of RL. I'm your host, Rob and Chauhan." }, { "end": 24, "start": 20, "text": " Dr. Sam Ritter is a research scientist and deep-mind. Thanks for joining us today, Sam." }, { "end": 29, "start": 24, "text": " Thanks so much. I'm really happy to be here. I'm a big fan of the show, so I'm especially excited to" }, { "end": 33, "start": 29, "text": " chat with you about episodic memory and deep RL in neuroscience." }, { "end": 38, "start": 33, "text": " Great. Okay, so you've already started, but how do you describe your area of research?" }, { "end": 44, "start": 38, "text": " Yeah, so I guess for the last almost four years now, I've been really focused on deep reinforcement learning." }, { "end": 52, "start": 44, "text": " And especially the use of a particular data structure in deep RL, and that data structure is called episodic memory," }, { "end": 58, "start": 52, "text": " which sounds, I don't know, really specific or fancy, but really it just means that your deep RL agent has" }, { "end": 67, "start": 58, "text": " an agent state that can grow to arbitrary size, so it's kind of the opposite of having a fixed size agent state." }, { "end": 78, "start": 67, "text": " In the kind of course of working with that data structure in deep RL, I've gotten the chance to do some kind of agent performance development work," }, { "end": 87, "start": 78, "text": " as well as some neuroscience work. So using these deep RL systems as models of human cognition and what might be happening in the brain." }, { "end": 93, "start": 87, "text": " So let's start with meta learning. In your dissertation, you focused on episodic meta RL." }, { "end": 99, "start": 93, "text": " Is that right? Not just episodic RL? And can you remind us how meta RL is defined?" }, { "end": 109, "start": 99, "text": " Episodic RL, you know, I kind of think of as starting with this at this point, fairly classic paper by Matei Lignel and Peter Diane," }, { "end": 121, "start": 109, "text": " or they basically said, you know, look, we can do, you know, value function learning with a sort of a kernel regression model rather than a parametric system." }, { "end": 137, "start": 121, "text": " So in that setup, we're going to have basically a storage of all the past states that we've seen, and we're going to, with each one of those past states, record, you know, something about the return, the rewards that we're seeing after those states." }, { "end": 154, "start": 137, "text": " And then we go to estimate the value of some new state. We'll do like awaited sum over those past states and some embedding space using, you know, a kernel that'll give us basically a scalar for each past state has that kind of estimates how similar it is the current one." }, { "end": 161, "start": 154, "text": " And we'll just do awaited sum over those returns that are associated with them in order to estimate the current value function." }, { "end": 175, "start": 161, "text": " So it's a very specific kind of algorithm. And it's recently been the term episodic RL has been kind of broadened a little bit in some recent work by Nathaniel Daw and colleagues." }, { "end": 203, "start": 175, "text": " But that's kind of the most canonical form of it really. 
In contrast, episodic meta-RL comes out of this, you know, meta reinforcement learning work, which really started with, you know, a couple of papers, really, it was popularized very least by a couple of papers in 2016, one by Jane Wong and another one by Duane and colleagues." }, { "end": 211, "start": 203, "text": " And in those papers, they demonstrated that you could learn by reinforcement learning how to do reinforcement learning." }, { "end": 222, "start": 211, "text": " So specifically, they would just train a, you know, a three C style agent. So basically an RL agent with a recurrent memory." }, { "end": 233, "start": 222, "text": " In those cases in LSTM, you could train those agents on tasks that required RL to be done sort of inside the task. So an example would be a bandit problem." }, { "end": 245, "start": 233, "text": " So in those papers, they trained these LSTM based agents on, you know, these can step or 100 step bandit problems." }, { "end": 263, "start": 245, "text": " And what they observed was that these agents could actually learn to perform what appear to be close to Bayes optimal exploration and sort of handle the explain and learn to handle the exploration exploitation trade off." }, { "end": 279, "start": 263, "text": " So it was a really kind of cool way of thinking about what goes on in these recurrent reinforcement learning systems. We can actually think of them as learning reinforcement learning algorithms that they will execute at inference time." }, { "end": 304, "start": 279, "text": " So then episodic met RL is basically the some work that comes out of some work that I did following on Jane Wong's met RL work where we basically said, all right, but look, if we have this fixed width memory recurrent controller and it's capable of doing, you know, of learning bandit algorithms." }, { "end": 311, "start": 304, "text": " Maybe we can learn more sophisticated algorithms if we provide that system with an episodic store." }, { "end": 320, "start": 311, "text": " And so yeah, a lot of the work that you and I will talk about today, Robin will be kind of expounding on this point." }, { "end": 337, "start": 320, "text": " I'm kind of demonstrating what we can do if we add episodic to met RL and I'll just actually point out that the sort of narrow form of episodic RL that I mentioned is the kind of thing that you could in principle learn to do by met RL with an episodic memory." }, { "end": 351, "start": 337, "text": " So it does kind of all fit together. We can say, oh, by episodic met RL, we can learn to do episodic RL. We have the data structure and we have sort of the neural networks in place to learn to do that." }, { "end": 364, "start": 351, "text": " So meta RL is that something that happens in the brain? I guess I always assume that we evolved to do RL in the brain and maybe our meta RL is just something we do in software." }, { "end": 367, "start": 364, "text": " Is that true or does that happen in neuroscience?" }, { "end": 386, "start": 367, "text": " I think it's question is definitely an open one. So the hypothesis that meta RL might happen in the brain is sort of built up in a fairly recent paper from Jane Wong and colleagues at DeepMind from I think 2017 or 2018 when it was published." }, { "end": 408, "start": 386, "text": " It's in nature neuroscience and they basically use this LSTM A3C agent as a model of human behavior and animal behavior and even neural recordings from animals as they carry out various sort of lab tasks." 
}, { "end": 431, "start": 408, "text": " And what Jane and colleagues show in that paper is that with this really simple model, which is basically just a recurrent working memory that has sort of parameters that are or weights or synapses, if you will, that are learned to maximize reward as it's defined in these lab tasks." }, { "end": 449, "start": 431, "text": " And by using that as a model, you can recover some really specific characteristics of human and animal behavior and some striking sort of features in the neural recordings that are made as they're carrying out that behavior." }, { "end": 471, "start": 449, "text": " Do we have definitive evidence that the brain does meta RL? I think that's always going to be hard to ascertain for certain. But I think the evidence is pretty good and it's a model that I think is at this point has a fair amount of recognition by the cognitive neuroscience community." }, { "end": 480, "start": 471, "text": " I guess for my point of view, episodic RL and meta RL are both a little bit exotic and here you are treating these two things, the intersection of these two things together." }, { "end": 485, "start": 480, "text": " Can you talk about why you wanted to treat them together, the pair of us up?" }, { "end": 500, "start": 485, "text": " Yeah, I mean, I guess it seemed like a really obvious pairing in context. So maybe I can describe that context and it'll seem obvious to you in the audience as well." }, { "end": 509, "start": 500, "text": " So basically I was in the middle of my PhD around the time that Jane did this meta RL work and I thought it was really, really interesting." }, { "end": 516, "start": 509, "text": " I've been working on some language stuff before but I wanted to make a switch I was getting really keen on reinforcement learning." }, { "end": 529, "start": 516, "text": " So I went to my advisor, Matt Bafinick, who was the senior author on that paper. I asked him, what's bugging you about meta RL? What's not quite there? What are some interesting avenues?" }, { "end": 545, "start": 529, "text": " What he said was that the meta RL agent, the one that has basically this recurrent working memory in the form of an LSTM, that agent basically behaves like a human who has hippocampal amnesia." }, { "end": 565, "start": 545, "text": " So what is that? So the hippocampus is this structure in the mammalian brain that appears to have a large role in the storage of information for long periods of time and specifically in what's called episodic memory." }, { "end": 593, "start": 565, "text": " So the ability to remember specific events from the past and what Matt was getting out there is that these meta RL agents, you could drop them into say a bandit problem and they would be really smart at trying an arm and then making the correct inference based on the reward they got, what the right arm to try next would be and basically doing really smart close to base optimal decision making." }, { "end": 602, "start": 593, "text": " But as soon as you pull them out of that bandit problem and drop them in another one, they completely forget the solution that they learned." }, { "end": 611, "start": 602, "text": " So throughout the course of their experience with that previous bandit problem, they pretty much figured out the task, they solved the task, and as soon as you pull them out, they completely forget it." 
}, { "end": 624, "start": 611, "text": " So you can drop them back in that same bandit problem again and just like a hippocampal amnesia would do, they have to start from scratch and they have the general knowledge about how to solve bandit problems." }, { "end": 631, "start": 624, "text": " But they just don't remember that specific one. They don't remember the episode, if you will, so they kind of lack episodic memory." }, { "end": 647, "start": 631, "text": " And at the same time, Greg Wayne and colleagues at DeepMind had demonstrated sort of success using episodic memory that is in the definition I was using earlier," }, { "end": 666, "start": 647, "text": " like using episodic memory, this ever growing buffer of vectors in deep reinforcement learning. So the Merlin paper had, I don't know if it would come out, but we at that point knew that that agent worked well." }, { "end": 675, "start": 666, "text": " And we knew that you could do episodic memory in deep RL, which was quite exciting at that point. It was a novel demonstration." }, { "end": 686, "start": 675, "text": " And so my advisor Matt and I were thinking, okay, let's see if we can endow these meta-RL agents with a hippocampus." }, { "end": 704, "start": 686, "text": " And that basically is what kind of kick started the work that ended up in my dissertation was basically, and it was basically starting with this goal of like, let's make it so that meta-RL agents can remember the solutions that they discover," }, { "end": 707, "start": 704, "text": " using their smart meta-RL strategies." }, { "end": 717, "start": 707, "text": " I just want to say I enjoyed your two meta-learning talks on YouTube, and I encourage the audience to check them out. We'll have links to them in the episode page." }, { "end": 727, "start": 717, "text": " So can you tell us more about how you contrast meta-learning to some closely related terms, like transfer learning and fuchsia learning? Is it closely related to those?" }, { "end": 734, "start": 727, "text": " I'd say, yeah, it feels very related to me. So I guess first, kind of a basic definition of meta-learning." }, { "end": 739, "start": 734, "text": " Sort of intuitively, like a broad definition of meta-learning is just learning how to learn." }, { "end": 752, "start": 739, "text": " So for instance, you learn your first programming language, and it's really hard to do. You haven't learned how to browse stack over flow or read docs, and you don't know what data structures are, etc." }, { "end": 759, "start": 752, "text": " But then the second programming language you learn is a lot easier because you've learned some background knowledge, and you kind of know the strategies that work." }, { "end": 771, "start": 759, "text": " You know kind of little exercises you can set for yourself. And things like that that enable you to learn much faster by having learned how to do it." }, { "end": 780, "start": 771, "text": " In machine learning, we have a more sort of specific and narrow, and also a more formal definition of what meta-learning is." }, { "end": 787, "start": 780, "text": " And I think it's useful to have that formal definition to contrast with the other problem settings that you pointed out, Robin." }, { "end": 796, "start": 787, "text": " So the formal definition of meta-learning, the meta-learning setting, is that you have a learner, which is faced with a series of learning tasks." }, { "end": 804, "start": 796, "text": " So tasks where, in order to solve it, you have to learn some stuff. 
And each one of those learning tasks is drawn from some distribution." }, { "end": 814, "start": 804, "text": " So basically I have this big bag of learning tasks, and I'm just going to, for each episode of my learner's experience, I'm going to pull out a learning task, make the learner try to solve it." }, { "end": 823, "start": 814, "text": " So with the bandit tasks in the meta-rl setting, you've got this big bag of bandit tasks, and each one has a different set of parameters, reward parameters for each arm." }, { "end": 838, "start": 823, "text": " And you kind of make your, in this case, meta reinforcement learner, you make that agent learn how to discover the rewarding arms or the arm reward parameters in any given task that it might face." }, { "end": 846, "start": 838, "text": " So yeah, then coming to the, you know, the question of how meta-learning relates to a few shot learning, transfer learning, etc." }, { "end": 865, "start": 846, "text": " So I think it, it depends on which setting we contrast against. So, a few shot learning, I guess I could say that meta-learning is one way of, you know, producing a learning system that can do few shot learning." }, { "end": 886, "start": 865, "text": " So with meta-learning, I could say, oh, I've got this few shot learning problem, like, you know, I've got, you know, for whatever reason I have the need to see one or two examples of a few, let's say, image-neck category classes of a few object categories, and each example is an image." }, { "end": 911, "start": 886, "text": " And then I have to, you know, generalize from those examples to categorize other images that might be in the same category. One way that I could do that provided that I had a bunch of other image-neck categories and data examples is that I could set up a meta-learning training distribution from which I could draw a whole bunch of these few shot learning problems." }, { "end": 920, "start": 911, "text": " And then I could evaluate this system on held out few shot learning problems, or in other words, on held out categories." }, { "end": 927, "start": 920, "text": " And in fact, this is what, you know, the matching networks paper does, and a whole bunch of other papers after that." }, { "end": 944, "start": 927, "text": " And yeah, it's an exciting way to do it. Similarly, with transfer, you know, if you kind of have the setup that are the problem setting where you want your learner to learn some stuff in some environment or in some, some task setting." }, { "end": 963, "start": 944, "text": " And then in another task setting, make use of the same information, you know, a primary problem there if you're just using standard neural network training is that your network overfits to the first task, and it's not able to make use of the commonalities between the first task and the second task." }, { "end": 979, "start": 963, "text": " And the way that meta-learning gets around that is to say, well, okay, we're actually going to sample a lot of these tasks and train the neural network in sequence to solve them. And this prohibits it from overfitting to any particular one." }, { "end": 1002, "start": 979, "text": " And kind of similarly with domain generalization, it's the same kind of idea you want to be able to generalize to a new domain. If you can sample a bunch of different domains, then you can prevent from overfitting to any particular one such that the sort of the key commonalities, the key generalities across them, can kind of emerge in your policy." 
}, { "end": 1021, "start": 1002, "text": " Yeah, and in continual learning, I'll just do one more of these. Continual learning, I guess I always think of that setting as one where you've kind of asserted that your agent has to experience task A for a while, and then it will experience task B." }, { "end": 1035, "start": 1021, "text": " And there's no option to have the task setting or the environment give you experience from those two tasks in interspersed. So it's very much an on-ID." }, { "end": 1046, "start": 1035, "text": " And I think that that one, in contrast to the other ones we mentioned, I think that that is just a very different problem setting from meta-learning, where you're kind of insisting that we are not allowed to do this IID task sampling like we do in meta-learning." }, { "end": 1053, "start": 1046, "text": " So in some ways it's maybe a, it feels like a more challenging and more open problem than meta-learning to me." }, { "end": 1065, "start": 1053, "text": " Which I guess our current generation of function approximators really likes IID and really likes being able to go back or kind of needs to go back to sample previous tasks and has trouble with this domain shift. Is that right?" }, { "end": 1083, "start": 1065, "text": " That's exactly how I think about it. Yeah, I know that people are developing lots of methods for trying to make the learning system itself or to make the neural networks less reliant on this IID experience generation." }, { "end": 1102, "start": 1083, "text": " But you know, I think it's an exciting open research area. There are some replay methods that seem particularly promising to me. But I think you nailed it. Yeah, it's sort of the bog standard deep learning methods really require IID sampling and meta-learning kind of just says fine. We'll just IID sample learning tasks for you." }, { "end": 1115, "start": 1102, "text": " Okay, so let's move to the first paper we're going to talk about today that is unsupervised predictive memory in a goal directed agent. That's the Merlin paper by Wayne et al in 2018. Can you give us an overview of this paper ship?" }, { "end": 1127, "start": 1115, "text": " Totally. Yeah, so, yeah, I mentioned this paper a little bit before because at least for us for kind of the little community of researchers that I am a part of." }, { "end": 1136, "start": 1127, "text": " It kind of opened the door for using non parametric memory or just like an agent state that grows in D. Barrel." }, { "end": 1147, "start": 1136, "text": " So, so a little bit of context here. This was around 2016 when when this work was being done. And you know, the A3C paper had come out quite recently." }, { "end": 1161, "start": 1147, "text": " So, so Vlad, me kind of shown, okay, right, we can do D. UN, which has a feed forward architecture. But now with A3C we can actually use architectures with memory." }, { "end": 1176, "start": 1161, "text": " So, we could use an LSTM in a D. Barrel agent and get great results. And you know, around this time we had the NTM, neural-turning machines and what became the DNC and there were the memory networks from Jason Wadsen and colleagues." }, { "end": 1189, "start": 1176, "text": " So, in addition to kind of wanting to go from feed forward to recurrent, there was this really obvious next step of like let's also try to do the NTM thing of having an external memory." }, { "end": 1204, "start": 1189, "text": " So, not only do we have, you know, recurrence, we have some kind of memory. 
We also can get away from the fixed, fixed width through the fixed vector size inherent in a recurrent working memory." }, { "end": 1215, "start": 1204, "text": " And this seemed to be producing good results and supervised learning. So, I think it was a really, you know, obvious and exciting direction. The issue was it didn't really work right away in RL." }, { "end": 1224, "start": 1215, "text": " So, around the time that I started a D. Mine, this was kind of the state of things. And I wasn't working on D. Barrel agents at the time." }, { "end": 1252, "start": 1224, "text": " But I kind of was aware that there was, you know, some impediment here. And with the Merlin paper, the basic idea was let's move on the assumption that we're not getting great results in RL with external memory because there's something about the noisy RL gradients that make it so that when you propagate gradients between these sort of retrieval from the memory and the writing to the memory," }, { "end": 1270, "start": 1252, "text": " things break down and it doesn't work. So, they kind of proceed on that assumption and say, well, maybe we can use an unsupervised learning method to determine what we write to memory so that we don't have to backprop those gradients through or we don't have to rely on them anyway." }, { "end": 1276, "start": 1270, "text": " And then we can just learn the retrieval mechanism onto these sort of fixed vectors." }, { "end": 1282, "start": 1276, "text": " And that worked really well. That's basically the Merlin paper is demonstrating that." }, { "end": 1291, "start": 1282, "text": " So, I have kind of puzzled over this agent a few times over the years. It seems like every year. So, I pull up the diagram like, what the heck is this thing trying to do?" }, { "end": 1291, "start": 1291, "text": " Yeah." }, { "end": 1296, "start": 1291, "text": " Seems very different than agent design design. When more are you still looking at?" }, { "end": 1297, "start": 1296, "text": " Totally." }, { "end": 1313, "start": 1297, "text": " So, in the figure, and this is one of the times that I wish we had video, but in the figure that describes the overall agent design figure one, it shows a few different agents that RL LSTM, RL Mem, and then the final Merlin agent." }, { "end": 1322, "start": 1313, "text": " And these seem to be, I guess, variations on A3C. Could you very briefly talk about how these three agents differ? What are they doing that's different?" }, { "end": 1336, "start": 1322, "text": " Yeah, definitely. So, first, I want to say, like, I'm totally with you and being confused by that diagram. I actually hadn't looked at that diagram in quite a long time because I just sort of knew the architecture from talking to the authors over the years." }, { "end": 1344, "start": 1336, "text": " And I went back to look at it before the interview today because I just wanted to remember exactly what was in the paper. And I found it incredibly hard to parse." }, { "end": 1345, "start": 1344, "text": " So, thank you." }, { "end": 1355, "start": 1345, "text": " Yeah, I mean, I don't think you're off base by any means finding that thing slightly inscrutable." }, { "end": 1367, "start": 1355, "text": " I think that the main, like, the primary components of those agents are quite straightforward though. So, the RL LSTM, as far as I know that is just A3C." }, { "end": 1377, "start": 1367, "text": " There's nothing different about that agent, substantively from what's in sort of Vlad's A3C paper." 
}, { "end": 1391, "start": 1377, "text": " Yeah, RL Mem is then the addition of an external memory where at every time step you're projecting your hidden state with a neural network." }, { "end": 1401, "start": 1391, "text": " And I don't remember if that's a linear layer in MLP to, you know, some size. So, let's say you've got, like, a 128 size LSTM state." }, { "end": 1407, "start": 1401, "text": " I'm going to project out to, you know, a 50-dimensional vector that I'm going to store." }, { "end": 1421, "start": 1407, "text": " And then, at the same time, at every time step, you do a retrieval by, with a different neural network projecting out to some, you know, 50 dimensions again." }, { "end": 1425, "start": 1421, "text": " And then, I don't remember actually if that architecture does, I think it's probably a dot product retrieval there." }, { "end": 1439, "start": 1425, "text": " Where you basically do a dot product between the vector that you generated for retrieval or the key, as it's often called, and the vector you want to retrieve, which is often called the key." }, { "end": 1451, "start": 1439, "text": " And, you know, based on the weight you get back from the dot product, you basically do a weighted sum over all of the memories." }, { "end": 1455, "start": 1451, "text": " So, you're trying to find similar memories that are similar somehow to the current situation, is that right?" }, { "end": 1466, "start": 1455, "text": " I think that's exactly right. You can imagine learning, you know, an embedding space where that similarity is something really, really specific." }, { "end": 1477, "start": 1466, "text": " It's like, I want states that had a similar color, but I don't care about what objects were present, or I want states where I was seeing an object with a similar shape, but I don't care what the color was." }, { "end": 1491, "start": 1477, "text": " So, it's kind of similarity in some learned space, and you retrieve information based on, yeah, based on those embeddings and the similarity in that space." }, { "end": 1498, "start": 1491, "text": " Cool. And then the Merlin agent goes beyond that, adding this memory-based predictor. What's going on there?" }, { "end": 1509, "start": 1498, "text": " Yeah, that's right. So, the way that that works is you basically, rather than, you know, just storing a projection that you're then going to-" }, { "end": 1514, "start": 1509, "text": " I don't remember actually if that, with that RLMM baseline, they back-propagated through it, or if they didn't." }, { "end": 1526, "start": 1514, "text": " In the diagram, it suggests that they did- that they did back-prop through it, which slightly surprising, because there's a lighter paper, this MRA paper, where they show that things actually work pretty well." }, { "end": 1534, "start": 1526, "text": " It's different tasks when you do that. So, I'm a little bit reluctant to read too much into, like, little details about that." }, { "end": 1546, "start": 1534, "text": " But in any case, with that agent, you're just storing a vector that you've either learned from gradients from the future that are back-propagated through your retrieval pathway, or are just untrained." }, { "end": 1553, "start": 1546, "text": " So, you're just storing something, and it's a random projection, which believe it or not, that actually does work pretty well out of the time." }, { "end": 1558, "start": 1553, "text": " But sometimes, not sufficiently well, depending on the task." 
}, { "end": 1572, "start": 1558, "text": " So, with Merlin, what they do is they just run a variational autoencoder on the- sort of the current LSTM state in order to decide what to store." }, { "end": 1585, "start": 1572, "text": " So, it's basically the standard, again, this is like 2016, so VAE's, or all the rage. So, there's this kind of obvious logical step, which is like, oh, we're not happy with the random projections." }, { "end": 1590, "start": 1585, "text": " And we're basically not getting good performance when we put random projections in the memory." }, { "end": 1594, "start": 1590, "text": " And things aren't getting much better when we're back-propagating through the memory." }, { "end": 1602, "start": 1594, "text": " So, maybe there's some unsupervised loss that we can put on this thing that will make it so that the things in memory are useful in the future." }, { "end": 1612, "start": 1602, "text": " And because VAE's are really popular right now, let's try to VAE. So, we'll basically take the vector that we were going to store, and we'll treat it as the latent for a variational autoencoder." }, { "end": 1621, "start": 1612, "text": " And I think in this work, they actually were trying to predict the next time step. So, I guess it's a variational next step predictor." }, { "end": 1627, "start": 1621, "text": " But, you know, a lot of these details like that, again, I wouldn't read too much into them because it changed a lot over the cycle of this project." }, { "end": 1637, "start": 1627, "text": " Basically, the idea is like, we're going to sample some latent and try to either reconstruct the current frame that we're seeing, or predict the next frame." }, { "end": 1641, "start": 1637, "text": " And I think it's actually predicting the next frame in that diagram, if I remember right." }, { "end": 1663, "start": 1641, "text": " And basically, I think the main thrust of this paper is like, look, if we add this unsupervised loss, such that we're not just putting random projections in the memory, we're putting representations of the current time step that have been shaped by this next step prediction loss, then we can get better performance in our tasks." }, { "end": 1668, "start": 1663, "text": " So, what type of tasks does this type of agent work well for?" }, { "end": 1678, "start": 1668, "text": " Right. So, they have this latent learning task where the agent is kind of just running around in an environment and then, you know, for a while without a goal." }, { "end": 1691, "start": 1678, "text": " And then it's given a goal and it has to navigate to it. They have the memory game, which is a really nice task. It's kind of the one that you might have played as a kid, where I think maybe clue us like this." }, { "end": 1702, "start": 1691, "text": " I think there are some common games that are like this, where you basically have a bunch of cards laid out on a table face down, and you're allowed to flip over one at a time and see what the face is." }, { "end": 1718, "start": 1702, "text": " And there are some pairs in this set of cards that you have. And your goal is to flip over a pair and sequence. So, you basically have to remember where the things are that you've flipped over so that you can intentionally flip over to an arrow." }, { "end": 1723, "start": 1718, "text": " Is that a familiar game? Yeah, I remember playing that as a kid." }, { "end": 1732, "start": 1723, "text": " Okay, nice. Yeah, that was like a classic. 
I remember they were really excited when they finally got that one to work." }, { "end": 1746, "start": 1732, "text": " So, yeah, it's tasks like that. In, like, the latent learning setting, for instance, an LSTM will just probably not remember very much of what it's seen during the sort of exploration phase." }, { "end": 1759, "start": 1746, "text": " Whereas the agent with the external memory, and with representations in it that are shaped in a reasonable way so they're not just random projections, can actually remember what it saw before in order to navigate effectively." }, { "end": 1769, "start": 1759, "text": " So, we had Rich Sutton's bitter lesson article recently, where he talked about how data always wins. And maybe it's that more data always wins?" }, { "end": 1787, "start": 1769, "text": " Does that inform your choice to focus on meta-RL, that even learning the algorithm, one level higher than we'd maybe traditionally consider, is something worth doing, because with the right data and learning system we can learn to do that better than we could hand design?" }, { "end": 1794, "start": 1787, "text": " Yeah, that's an interesting question. I would say that wasn't part of my original motivation, but I think it wouldn't have been a bad reason to do it." }, { "end": 1809, "start": 1794, "text": " And it might be a good reason to keep working on meta-RL now. Yeah, it's interesting, that bitter lesson, because, you know, there are a lot of settings right now that I would like to be able to make progress in that I can't make progress in with meta-RL." }, { "end": 1820, "start": 1809, "text": " So, a very basic one is just Atari. So, the whole idea with meta-RL is that you're going to sample all these tasks from a task distribution and by training your learner on it you're going to get good at solving those tasks super fast." }, { "end": 1831, "start": 1820, "text": " So, okay, can I do that with Atari? I really can't, because I've only got 57 Atari games and they're really quite different from one another." }, { "end": 1849, "start": 1831, "text": " So, you know, I can't sample 10,000 Atari games or a million Atari games such that the 57 real ones are samples from that distribution, such that I can, you know, have those as my held out tasks and learn by meta-learning a great, you know, a really great policy for learning these ones that I care about." }, { "end": 1873, "start": 1849, "text": " So, yeah, I wonder about that bitter lesson maybe. I guess there's a hypothesis that eventually we'll have access to so much data, you know, maybe by learning on all of YouTube, then we can, you know, with other algorithms, it wouldn't be exactly meta-RL like we have now, but with some clever algorithms we could basically just let the data tell the agent how to solve those." }, { "end": 1893, "start": 1873, "text": " But I have to say right now, as of 2021, I'm really keen on methods that are a little bit more, I don't want to say hand designed, but where, you know, as researchers we can design something that can solve Atari a lot faster than we can right now, because there practically speaking isn't a way to just let the data do it." }, { "end": 1903, "start": 1893, "text": " Cool. And then, going back to Merlin, how does it handle exploration? Is it using a standard A3C exploration strategy or is it doing anything different there?" }, { "end": 1922, "start": 1903, "text": " Yeah, yeah, yeah, it is.
So, so that agent, and actually all the agents we'll talk about up until the very last paper, basically are just using the exploration and the credit assignment, basically all the RL, the same as in A3C or IMPALA, kind of your standard agents." }, { "end": 1942, "start": 1922, "text": " And it's really just the architecture that's changing in these. There is one caveat to that, which I think in at least one of their experiments they did like reward shaping, like in the latent learning one, because if you just used the policy entropy exploration from A3C they wouldn't be able to run around the map to see stuff in order to do the latent learning." }, { "end": 1955, "start": 1942, "text": " So, yeah, I think that demonstrates just how much in this past work we were kind of stuck with some of the limitations of the sort of basic RL algorithm we were working with." }, { "end": 1971, "start": 1955, "text": " And then speaking of sampling games in Atari and not having enough games to sample from, I mean, would you consider something like ProcGen from OpenAI, which is generating new levels, would you consider that kind of a form of meta-RL because you're learning how to solve this sampled data?" }, { "end": 1975, "start": 1971, "text": " So, solve these sampled games, or is that kind of splitting hairs at that point?" }, { "end": 1987, "start": 1975, "text": " It definitely feels like a meta-RL setting to me. I think, you know, there's that, there's the Alchemy dataset that came out from DeepMind recently." }, { "end": 2000, "start": 1987, "text": " There's CoinRun, it was another nice one where you could programmatically generate tasks. And of course, like in, you know, MuJoCo and continuous control, there's a lot of meta-RL work that's just like sampling from like a distribution over," }, { "end": 2010, "start": 2000, "text": " you know, how much the legs weigh and things like that. So, yeah, I mean, I'm definitely excited about those. I think the tough thing is trying to generalize." }, { "end": 2019, "start": 2010, "text": " So, let's say you've trained an agent on like ProcGen or what's that other one? I'm blanking on the name of it, but something like Alchemy." }, { "end": 2030, "start": 2019, "text": " Let's say you trained an agent on those, but then you want to generalize to Atari or you want to generalize to some other game or some tasks that you want to solve in an applied setting." }, { "end": 2038, "start": 2030, "text": " There's not really a way to do that generalization. It really only works within the task distribution that you've kind of cooked up." }, { "end": 2050, "start": 2038, "text": " Okay, so let's move on to the next paper here today. That's meta-RL without forgetting: Been There, Done That: Meta-Learning with Episodic Recall. That's by yourself, Samuel Ritter, in 2018." }, { "end": 2053, "start": 2050, "text": " So, can you give us a lowdown on the main idea in this paper?" }, { "end": 2063, "start": 2053, "text": " Yeah, for sure. So, this is the one that we were talking about earlier, where my advisor said that meta-RL acts like a hippocampal amnesic. Let's try to fix that." }, { "end": 2083, "start": 2063, "text": " So, basically, in this work, I kind of picked up where Merlin left off, or kind of went through this door that Merlin had opened, if you will, to say, all right, we've got this episodic, or sorry, we've got this recurrent working memory that's doing these cool things, solving these bandit tasks, for instance."
}, { "end": 2102, "start": 2083, "text": " And I basically want to take the knowledge that this recurrent network has gained through some really smart exploration. And I want to store it in such a way that it can be retrieved later when the agent needs it again." }, { "end": 2119, "start": 2102, "text": " And so, basically, the way we ended up doing that was by having an episodic store of LSTM states. So, in contrast to Merlin, a lot of other episodic memory work, where the thing you're storing is some projection of the LSTM state." }, { "end": 2137, "start": 2119, "text": " In this setting, we were like, no, like the LSTM state has exactly what we need. Like, for instance, it has the bandit parameters encoded in it, or at least it has the sort of state of the bandit solving algorithm encoded in it, if you will, it's kind of got the belief state." }, { "end": 2154, "start": 2137, "text": " And so, we're just going to store that raw in the episodic memory, and then we're going to pair it with a contextual queue. So, these were recurring bandits, right, and the sort of the basic task setting from that paper." }, { "end": 2172, "start": 2154, "text": " And each time a bandit would re-occur, it would come along with, I think in those experiments, it was an Omnichlott digit. So, it was kind of like your hat, a bandit in a casino, and there's a picture above the bandit that you're playing." }, { "end": 2187, "start": 2172, "text": " And so, later on, the next night or later, the same night, you come back to the same bandit, and you're like, oh, I remember that picture, I remember that I actually found that this arm was working really well all the time. I assume an actual casino is more like a wandering bandit, so it's randomized or something." }, { "end": 2210, "start": 2187, "text": " But in this fast setup, when the agent saw the same image again, it could assume that it was in a bandit problem with the same arm parameters. So, if it had solved that bandit problem before, or at least done some exploration and learned, gotten some information about what was probably good or probably not good, then it could assume that that would still be the case." }, { "end": 2227, "start": 2210, "text": " And so, with this episodic memory, what our agents could do is do a search in the memory by doing this neural network-based query like in Maryland over the contextual Q representations." }, { "end": 2247, "start": 2227, "text": " And then one kind of key aspect of this work that differentiates it from pretty much all the other episodic memory, DBRL work I know, is that when we retrieved from the episodic memory, again, rather than doing some projection of what we retrieved and feeding it as input to the recurrent network." }, { "end": 2261, "start": 2247, "text": " Instead, we basically said, well, we know that vector that we stored has the information we need, and we know that the recurrent weights, so the dynamics of the LSTM are already shaped in such a way to process that thing." }, { "end": 2287, "start": 2261, "text": " So, let's just sum it on to the LSTM state as though it was another component of the LSTM. So, within LSTM, you have the input times a gate plus the previous state times a gate. And here we were saying, well, let's actually add another gate for reinstatement for the past LSTM states that we've retrieved based on contextual similarity of another gate." 
}, { "end": 2298, "start": 2287, "text": " And we'll just multiply the retrieved LSTM states, the old LSTM states by this gate and some goes on with the other two. And that turned out to work super well." }, { "end": 2301, "start": 2298, "text": " Do that surprise you, or did you expect that to work?" }, { "end": 2312, "start": 2301, "text": " I remember, so I kind of expected that it would work to retrieve this information from memory and feed it into the LSTM somehow." }, { "end": 2322, "start": 2312, "text": " I was really happy to see that this particular way of doing it worked a lot better than feeding it as an input or projecting and feeding as an input." }, { "end": 2330, "start": 2322, "text": " And I was especially happy with the formulation where it was basically like another gate. And I think when we talk about the dissertation, part of that will become clear." }, { "end": 2345, "start": 2330, "text": " But I do remember I had this meeting with Ross von Paskanu and he suggested something about that way of gating. Or he mentioned something about how you could treat this thing as though it was another gate." }, { "end": 2356, "start": 2345, "text": " And like I went back to the code and like, ended up trying this particular version. And when that version worked out, it's like, that is so cool. Like I didn't expect that to work. It ended up just being kind of pretty, I suppose." }, { "end": 2368, "start": 2356, "text": " So speaking of your dissertation, let's get to that. This is meta-reinforced learning with episodic recall and integrative theory of reward driven learning. And that's Samuel Ritter 2019." }, { "end": 2371, "start": 2368, "text": " So can you briefly tell us what your dissertation was about?" }, { "end": 2386, "start": 2371, "text": " Sure. Yeah. So basically the paper we just talked about was in ICML. And it's kind of happy with that as a sort of machine learning paper or machine learning project." }, { "end": 2398, "start": 2386, "text": " And then what's kind of natural to ask the question, can we take this architecture that we've designed seriously as a model of reward driven learning as it happens in the brain?" }, { "end": 2410, "start": 2398, "text": " And there were some reasons I think that might be a good idea. So, you know, my advisor had this, you know, qualm with the original meta-RL model that it acted like a hippocampalian music." }, { "end": 2427, "start": 2410, "text": " And they actually had this paper where they argued for that model as a model of what's going on in the brain, specifically as a model of the interaction between prefrontal cortex and the strideal dopamine." }, { "end": 2439, "start": 2427, "text": " And so, I mentioned that Jane and colleagues, Jane Wong and colleagues had this paper in Nature Neuroscience on meta-reinforced learning. And that's the one I'm talking about here." }, { "end": 2454, "start": 2439, "text": " So, yeah, they had this nice paper where they provided all of this evidence that you can explain a bunch of different behavioral and findings and neural recordings using this idea that you have this recurrent working memory." }, { "end": 2475, "start": 2454, "text": " And this is the thing I didn't mention before. And that the weights of that recurrent working memory are trained by a learning signal that's produced by the striateum, which is this sub-cortical structure, it sits below the cortex, and it projects neurons into the cortex as well as other areas." 
}, { "end": 2486, "start": 2475, "text": " And these neurons emit this neurotransmitter dopamine, which is thought among other effects to kind of modulate plasticity." }, { "end": 2515, "start": 2486, "text": " So, the idea in this Nature Neuroscience paper is that maybe what's going on in human and animal learning is that the strideal dopamine system, which is, by the way, very well studied, and some of the most kind of exciting and believable findings in cognitive neuroscience are about this system." }, { "end": 2528, "start": 2515, "text": " So, it was really exciting when Jane and Matt came out with this paper saying, well, maybe what that system is doing, the system that we generally know a lot about how it works, we don't know exactly what it's kind of downstream effects are." }, { "end": 2539, "start": 2528, "text": " Maybe what it's effects are to train the dynamics of this recurrent working memory such that that working memory can then execute reinforcement learning algorithms." }, { "end": 2545, "start": 2539, "text": " So, just a, that's kind of like a big statement. So, let me just unpack it a little bit more." }, { "end": 2554, "start": 2545, "text": " Like, the strideal dopamine system is really a very simple, I guess it implements a very simple algorithm." }, { "end": 2561, "start": 2554, "text": " Specifically, we think of it as doing reward prediction errors and kind of doing value prediction." }, { "end": 2579, "start": 2561, "text": " So, we think of it as basically a TD learning implementation. And one problem with this is that the animal behavior and human behavior shows evidence for lots of different kinds of learning algorithms that are more seemingly more sophisticated than that." }, { "end": 2598, "start": 2579, "text": " And so, and Jane and Matt's paper they argue, well, the reason that that's possible is that there's more sophisticated behaviors and this more sophisticated learning phenomena are arising via meta reinforcement learning that is carried out by this very simple strideal learning system." }, { "end": 2610, "start": 2598, "text": " That was, that was basically the prior work. And then one big hole in that picture is the fact that that system just forgets everything as soon as it learns it." }, { "end": 2622, "start": 2610, "text": " So, you've, you've got this hard one knowledge that you've gleaned through this really smart reinforcement learning algorithm that you learned through strideal dopamine." }, { "end": 2631, "start": 2622, "text": " But then you just forget it right away as soon as you go into another context and that's really not ideal doesn't seem like a full model of human learning." }, { "end": 2643, "start": 2631, "text": " And so, in my dissertation, the idea was, well, can we model the hippocampuses role in this picture? And can we do that modeling with an episodic memory based deeper reinforcement learning agent?" }, { "end": 2665, "start": 2643, "text": " So, basically, we took that architecture with the, this reinstatement process where we store LSTM states and we say, well, actually, we're storing like cortical states, we're storing like firing rates across cortex, we're storing those in such a way that they can be retrieved when a similar situation happens again." }, { "end": 2675, "start": 2665, "text": " And this is what I said right there is like a very classic model at a high level of what the hippocampus does going back at least to David Mar." 
}, { "end": 2685, "start": 2675, "text": " And kind of in our model, we were just implementing in a very different way rather than having like an attractor network, like they would have an older modeling work." }, { "end": 2696, "start": 2685, "text": " We just have this slot based memory where we just store everything separately, which kind of gets us away from some implementational challenges that makes it really hard to work with those models." }, { "end": 2714, "start": 2696, "text": " And by doing that, basically, the thesis shows that you can capture some specific data that pertains to the interaction between episodic learning and the kinds of learning that are associated with stridal dopamine and prefrontal cortex." }, { "end": 2728, "start": 2714, "text": " So it kind of puts it all together into one picture with cortex, hippocampus, and the striatum, and they're kind of associated learning phenomena altogether so you can see it via a unified account." }, { "end": 2734, "start": 2728, "text": " Okay, so in this dissertation, you talk about how different parts of the brain are involved in different types of learning." }, { "end": 2745, "start": 2734, "text": " And if I gather correctly, that's model-free learning with dopamine and model-based learning in the prefrontal cortex and episodic learning with the hippocampus." }, { "end": 2754, "start": 2745, "text": " So can you maybe tell us more about these three types of learning in the brain and what is each type best used for?" }, { "end": 2764, "start": 2754, "text": " Sure, so model-free learning is associated with stridal and it's sort of dopamine-ergic projections." }, { "end": 2778, "start": 2764, "text": " And the idea is that that is your very basic habit learning or your kind of learning that comes down to I did something and then later something good happens happened." }, { "end": 2785, "start": 2778, "text": " So I'm just going to do that thing over and over again. I'm going to do it more regardless of what happened in between." }, { "end": 2798, "start": 2785, "text": " So, you know, for example, you develop a habit of, you know, you're out to work every morning and you don't really have to plan that route every time." }, { "end": 2808, "start": 2798, "text": " You just know, oh, when I come to this intersection, I take a left. Simple as that. That's the kind of behavior that we associate with model-free learning." }, { "end": 2816, "start": 2808, "text": " Model-based, on the other hand, is typically associated with the prefrontal cortex and it's more about explicit planning." }, { "end": 2829, "start": 2816, "text": " So, when you're playing chess, for instance, or at least, you know, kind of how I imagine people play chess, I think experts might have slightly different strategies for playing." }, { "end": 2849, "start": 2829, "text": " But some of like me is definitely a amateur. It's all about trying to predict out sequences of moves in order to, you know, predict some outcome that's really relatively far in the future so that I can decide what to do now." }, { "end": 2866, "start": 2849, "text": " And this contrasts very strikingly with habit learning where you basically just have to try entire sequences of experience over and over again in order to see what leads to good outcomes and to gradually learn to do the things that achieve reward." }, { "end": 2886, "start": 2866, "text": " Okay, I think you asked about also episodic. 
And that's kind of a newcomer, I guess, to the discussion in neuroscience about reward-driven learning strategies. So, whereas, you know, model-free learning and model-based learning have these very historic legacies," }, { "end": 2902, "start": 2886, "text": " episodic learning really starts, as far as I can tell, with this paper from Lengyel and Dayan, and then more recently with, there's kind of a review paper from Nathaniel Daw and Sam Gershman, and then a bunch of empirical work from Aaron Bornstein." }, { "end": 2923, "start": 2902, "text": " That basically argues that the picture you get with just model-based and model-free is sort of incomplete. In part because you can see, in sort of laboratory experiments, the effect of specific single past events on people's decision-making." }, { "end": 2946, "start": 2923, "text": " So, you can like show people a context while they're making some decision and then something good or something bad happens. And then like days later, you can show them that context again, and they'll be very heavily influenced by it in a way that's not predicted by sort of the classical model-free model-based paradigms." }, { "end": 2964, "start": 2946, "text": " And, you know, so these researchers have argued that, look, there's this kind of missing, that there's an explanation for this type of behavior that's missing, and it seems really, you know, obvious that the hippocampus would probably be involved with this given its sort of long-standing association with memory for specific past events." }, { "end": 2971, "start": 2964, "text": " Do you think that our use of these kinds of learning is different between babies and adults? Like, does this change over our lives?" }, { "end": 3000, "start": 2971, "text": " Yeah, that's an interesting question. Sort of surprisingly, there's not as much research on that as I might have thought, but there is one paper from 2016. I think it's from the Cornell Medical School, where they show that, yeah, if you have children and adults perform some of these classic lab tasks that assess model-based versus model-free learning behavior, that, you know, children exhibit much less model-based," }, { "end": 3016, "start": 3000, "text": " much less model-based behavior than adults do, which is kind of intuitive, you know, this sort of habit learning does seem somehow easier, or to require less, than model-based reasoning does." }, { "end": 3023, "start": 3016, "text": " And I can say that over the course of my life, it does seem like I'm better at planning than I was when I was like four years old or five years old." }, { "end": 3030, "start": 3023, "text": " Yeah, it's just one paper and some intuition there, so I think it's still a pretty open question." }, { "end": 3043, "start": 3030, "text": " And then looking at the broader picture, do these types of learning appear at a certain point in evolution, or how do you think about, you know, whether some of these are more recent or ancient?" }, { "end": 3052, "start": 3043, "text": " Yeah, yeah, that's an interesting one as well. And again, there's a lot less work on this than I might have thought, but I asked some friends about this" }, { "end": 3065, "start": 3052, "text": " when you asked the question, as we were preparing for this, some folks who have, you know, some really in-depth knowledge of sort of the cognitive neuroscience of these learning strategies."
}, { "end": 3081, "start": 3065, "text": " And it did turn up one paper that was interesting, which was showing, or I guess it was arguing for the possibility that the transition from aquatic life to land-based life may have come with greater value." }, { "end": 3094, "start": 3081, "text": " So, it's a greater requirement for planning, and they do some simulations to show that under water there's, you know, more direct line of sight that's possible, and there's less to occlude your vision." }, { "end": 3108, "start": 3094, "text": " And so there's less of a need for planning, and so they do some simulation experiments that suggest that it's plausible that at that transition in evolutionary history, there may have been, you know, a dramatic increase in planning abilities." }, { "end": 3117, "start": 3108, "text": " But, yeah, that's about planning, and not exactly about model-based RL, which is like one way you could do planning." }, { "end": 3131, "start": 3117, "text": " So, I guess the main takeaway is, again, this is a wide open area of questioning. So, yeah, if you want to start a cognitive neurobiology evolutionary neurobiology lab, I think this might be a cool question to start with." }, { "end": 3141, "start": 3131, "text": " Let's move to the next paper that is RapidTas Solving in Novel Environments, that's by Ritter et al. in 2020. Can you give us the just of this one?" }, { "end": 3152, "start": 3141, "text": " Yeah, for sure. And I do want to point out, actually, that at this point, you know, the next two papers we're going to talk about in contrast to the earlier ones, which we're doing in my PhD, so it was all very sort of lonely and isolated work." }, { "end": 3161, "start": 3152, "text": " These were done in tight collaboration with some other people, so I really cannot, by any means, take all or even much of the credit for it." }, { "end": 3170, "start": 3161, "text": " So, especially, I want to say, DeVeter Poso, I've been collaborating with for the last two years has been amazing. I think my work's gotten a lot better from it." }, { "end": 3186, "start": 3170, "text": " And also, in this paper, Ryan Faulkner and Laurent Sartrein, we were kind of all working on it, full time. It wasn't this kind of single author. The first author does 99% of the work sort of thing. It was really, really nice." }, { "end": 3208, "start": 3186, "text": " And, yeah, basically, in this work, we wanted to push the boundaries of metery enforcement learning with episodic memory. And specifically, we wanted to do it in service of building agents that could build models on the fly." }, { "end": 3220, "start": 3208, "text": " So, we wanted an agent that we could drop into some new environment, and it would kind of know how to start exploring the environment in an intelligent way, kind of the way that met RL agents do." }, { "end": 3235, "start": 3220, "text": " But further, we wanted that agent to be able to gather knowledge and then repurpose it, and then basically use that knowledge for planning in ways that previous met RL agents just don't do." }, { "end": 3257, "start": 3235, "text": " So, to be concrete about it, in a navigation setting, because that's maybe the easiest to think about. We wanted to be able to drop an agent into, say, some neighborhood, and tell it, we want you to go to this goal location, show it an image of where we wanted to go, and have it intelligently explore the map in order to find that goal location." 
}, { "end": 3272, "start": 3257, "text": " And then we wanted to be able to give it another goal, and have it remember what it saw along the way, and piece together a better strategy for getting to that goal than just kind of its basic exploration." }, { "end": 3285, "start": 3272, "text": " And, you know, our reasoning was that after some relatively small amount of experience in an environment, agents with this sort of ability should be able to just plan by short as paths to any goal that we would give it." }, { "end": 3298, "start": 3285, "text": " And this is something that humans can do, at least intuitively, it seems like this is something we're quite good at, but it's definitely not within the wheelhouse of agents before these ones." }, { "end": 3305, "start": 3298, "text": " Part of the reason that we were keen on this is we thought that we could do it really well with episodic memory and deep RL." }, { "end": 3316, "start": 3305, "text": " And so we started out with tasks like the one I described, so maybe I'll just go with that one, we used the street learning environment that came out a couple years ago." }, { "end": 3330, "start": 3316, "text": " And we just kind of modified that environment so that we could sample neighborhoods from many different cities for each task, and then we'd have held out neighborhoods to evaluate the agents on." }, { "end": 3346, "start": 3330, "text": " And with this task, we tried, you know, first of all, the basic RNN-based meta learners, and they do not do well at this sort of thing." }, { "end": 3359, "start": 3346, "text": " Specifically, they don't do well at the planning aspect, so they can learn good exploration strategies, which is what we saw in the sort of the previous generation of meta-RL agents, but they really weren't able to plan these shortest paths." }, { "end": 3366, "start": 3359, "text": " And we assumed that that was because these LSTM agents just couldn't remember what they had seen, because these were pretty long episodes." }, { "end": 3373, "start": 3366, "text": " You can't remember over dozens of time steps, this kind of information we suspected." }, { "end": 3384, "start": 3373, "text": " So we then moved to the next step of more sophisticated memory agents, which was Merlin, and there was this other one called MRA that had come out more recently." }, { "end": 3407, "start": 3384, "text": " And these are agents that are like the one from my dissertation, and like Merlin, they would basically do a weighted sum of the memories in the episodic store, based on a single query, or maybe a multi-headed query, and they would basically use that to inform a policy." }, { "end": 3418, "start": 3407, "text": " And those agents also, we saw, were just not able to do this planning. So at that point, we were like, this is interesting, because these agents clearly remember everything that they need to plan these shortest paths." }, { "end": 3429, "start": 3418, "text": " By enough time in the episode, we can be pretty sure that they've seen what they need to see, but they're not able to actually execute a planning algorithm over that information." 
}, { "end": 3456, "start": 3429, "text": " And so we tried a bunch of different approaches to enable agents to plan with this information that was in their episodic memory, and the algorithm that we converged on that worked the best, and we kind of liked the best, was this so-called episodic planning network, which is basically an extension to those old memory agents, where you could retrieve from the memory," }, { "end": 3468, "start": 3456, "text": " and then run self-attention over the items that you retrieved. And I think actually in that paper, we just run self-attention over the whole memory, because the episodes aren't that long." }, { "end": 3479, "start": 3468, "text": " And, right, so probably people will be familiar with self-attention. Basically, the idea is like each memory queries to each other memory." }, { "end": 3506, "start": 3479, "text": " And basically, we would iteratively self-attention, so we would select a set of vectors to attend over, and then get a new state over that set of vectors as a result, and then we would iterate with the same attention model, that same process, some number of times, and then we would send the results out to the policy." }, { "end": 3514, "start": 3506, "text": " And even though it might not be obvious, especially because I'm describing it with words, it might be a little hard to see, and the diagrams hopefully is a little easier to see." }, { "end": 3533, "start": 3514, "text": " This kind of architecture is, in principle, capable of learning algorithms like value iteration, where you basically, at least intuitively, you could imagine storing the identity of each state in each one of the rows of the self-attention vectors, or in each self-attention vector." }, { "end": 3544, "start": 3533, "text": " And then the self-attention could carry out this kind of adjacency computation that you do in value iteration." }, { "end": 3558, "start": 3544, "text": " And so basically, we saw that, first of all, this agent worked really well, saw the task it was planning with perfect shortest paths after only a small amount of experience in each environment." }, { "end": 3574, "start": 3558, "text": " But further, we were able to analyze the agent and find evidence that it does actually learn value iteration like algorithm, and further that that algorithm generalizes to much larger maps than the ones that we trained on." }, { "end": 3582, "start": 3574, "text": " So in value iteration, we're writing updates as we go. Is that right? And is there some notion of writing updates as well?" }, { "end": 3597, "start": 3582, "text": " Yeah, definitely. This was the thing that I was worried might not come through with my description. So basically, when you're doing self-attention, you have a set of states, and you attend to all of them, or you attend from each one to each other one." }, { "end": 3603, "start": 3597, "text": " And then the result is another set of vectors of the same dimensionality." }, { "end": 3614, "start": 3603, "text": " And so that basically provides you with something like a little workspace in which you can do this iterative updating. Does that make sense?" }, { "end": 3624, "start": 3614, "text": " So you're okay. So if I understand correctly, the agent's not updating any memories, but it's using its workspace to figure out the value. Is that right?" }, { "end": 3641, "start": 3624, "text": " Exactly. Yeah, yeah, that's right. So, yeah, it's a deep architecture discussion for voice only, but I think we're getting there. 
Yeah, that's exactly right. So the way that it works is you fix the memories so that they are the same on every time step." }, { "end": 3658, "start": 3641, "text": " You just write the new one on every time step. But then you kind of have, if you will, this workspace, it's kind of like your working memory, I guess, that you can then iteratively update in order to execute whatever algorithm makes sense given the information that's fixed in the episodic memory." }, { "end": 3673, "start": 3658, "text": " I hope a little bit later we can talk about contrasting these different aspects of episodic memory. But let's move on to the next paper we're going to talk about today, which is Synthetic Returns for Long-Term Credit Assignment." }, { "end": 3684, "start": 3673, "text": " And that is David Raposo et al. in 2021. What was the basic idea of this paper, Sam?" }, { "end": 3705, "start": 3684, "text": " I mentioned that I felt slightly frustrated that I couldn't, like, hand off the meta-RL algorithms that we were developing to, you know, other researchers who are trying to solve Atari or, you know, other tasks that don't have a whole distribution built in." }, { "end": 3721, "start": 3705, "text": " And so after we finished that previous piece of work, I was really thinking about what can we do, given the expertise that we've built up in kind of improving the capabilities of these deep RL agents with this kind of data structure." }, { "end": 3736, "start": 3721, "text": " What can we do that we can hand off, that will just work in any environment, even those that don't come with like a whole distribution over similar environments? And this paper was kind of the result of that." }, { "end": 3759, "start": 3736, "text": " So we kind of identified that credit assignment, especially long term credit assignment, is kind of a primary bottleneck for deep RL agents, especially for the kinds of tasks that, or at least some of the tasks that are of very central interest in organizations like DeepMind right now." }, { "end": 3777, "start": 3759, "text": " And we kind of, you know, identified that with this data structure, we have a lot of opportunities for doing credit assignment that aren't available when you have, you know, only your present state or only your kind of belief state all rolled into one vector." }, { "end": 3789, "start": 3777, "text": " And so this paper was basically an effort towards kind of making good on the promise of this data structure for doing credit assignment over much longer periods of time," }, { "end": 3796, "start": 3789, "text": " and with some kind of better variance properties than what you get with sort of standard RL algorithms." }, { "end": 3810, "start": 3796, "text": " So can you contrast the idea of synthetic returns with N-step TD learning? Like, it seems like there's something a little bit similar there in terms of bringing the value far forward or far backwards." }, { "end": 3823, "start": 3810, "text": " N-step TD learning is, I guess, an example of this very general class of credit assignment algorithms, which I think, you know, the vast majority of the credit assignment algorithms" }, { "end": 3842, "start": 3823, "text": " I know of are in this category, the ones that are prominent in deep RL certainly, where basically in order to sort of define the value or the utility of a state or a state action pair, you basically try to sum up over all the rewards that happened after that state or state action pair."
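As a small illustration of the target being described, here is a minimal Python sketch of an n-step return: every reward in the window contributes to the target for time t, whether or not it had anything to do with what the agent did at time t, which is exactly the variance problem discussed next. The function and variable names are mine, not from any particular codebase.

```python
def n_step_return(rewards, values, t, n, gamma=0.99):
    """Standard n-step TD target for time t: the discounted sum of the next n
    rewards plus a bootstrapped value estimate at the cutoff."""
    G, discount, T = 0.0, 1.0, len(rewards)
    for k in range(t, min(t + n, T)):
        G += discount * rewards[k]      # related and unrelated rewards alike
        discount *= gamma
    if t + n < T:
        G += discount * values[t + n]   # bootstrap on the learned value function
    return G
```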
}, { "end": 3859, "start": 3842, "text": " And you know, with with deep learning, we do this with a particular class of function approximator. We use neural networks to try to predict value and then maybe we use maybe we also will kind of train a policy that basically optimizes the same signal with some of all the future rewards." }, { "end": 3880, "start": 3859, "text": " This is true in in step TD learning and you know, even if you just do Monte Carlo unrolls all the way out, it's the same property and one issue with that is that you might have a bunch of rewards that came after some state action pair or some state that are completely unrelated to that state." }, { "end": 3895, "start": 3880, "text": " But then you have some future rewards that are related and you don't really have a way to segregate those unrelated and related rewards out segregate them from each other in order to sort of learn only about the ones that matter." }, { "end": 3906, "start": 3895, "text": " Instead, you kind of just have to sample a lot of experience so that you can let the unrelated ones average out. They'll just average out to zero, but it takes a lot of data to do that." }, { "end": 3933, "start": 3906, "text": " And so we really wanted to get around that. And so we do and I actually will just take a step back as I think when you asked about TD learning, you might have been asking about this point, like what TD learning, you can just bring rewards arbitrarily far back if you're willing to wait long enough through kind of sort of the value function learning the boot, the value function bootstrapping." }, { "end": 3949, "start": 3933, "text": " But you still are faced with this fact that the value function bootstrap that you're predicting is itself trying to predict a bunch of rewards that might be totally unrelated to the state or state action pair you want to get a valuation for." }, { "end": 3965, "start": 3949, "text": " And so with the synthetic return algorithm, the idea was maybe using the episodic memory, we can for each state and we could have done state action pairs, but in this case, we did states because it was a little simpler." }, { "end": 3989, "start": 3965, "text": " Maybe we can learn which future states have rewards that are related or are predictable by the current state and then we'll create a value estimate or a utility estimate, it may not be exactly a value will create a utility estimate that is sensitive only to those ones so we can just right off the bat ignore all the ones that are unrelated." }, { "end": 4013, "start": 3989, "text": " And that's done with with kind of straight up regression, is that right? Yeah, exactly that. Yeah, so it's basically a linear regression where we say at every time step, we'll get some reward and we'll try to predict that reward using a sum over some scalar output function of all the past time steps, where all the memories, if you will." }, { "end": 4041, "start": 4013, "text": " And the result of that is like a linear regression model that for each state tries to output its reward as a sum overall past states and then the sort of the weights of that regression model act like they act like a credit assignment signal, they basically say." }, { "end": 4058, "start": 4041, "text": " This past state contributed this amount to the current reward that I'm getting. And so using this method, I think you got really pretty dramatic results, is that right? Like in the skiing game and others, can you tell us about the performance here?" 
}, { "end": 4078, "start": 4058, "text": " Sure, so basically we started with really simple, the simplest possible task to illustrate for ourselves, including how this work called it the chain task. But then we moved to something slightly more complicated, which is catch, this sort of classic debugging task." }, { "end": 4089, "start": 4078, "text": " But with a twist where you would base the agent would play like 20 rounds of catch without any rewards, and then at the end it would get rewards for all of them." }, { "end": 4104, "start": 4089, "text": " So kind of had this challenge of figuring out which of these catch games went well or basically in this undifferentiated stream of experience, what happened to lead to whatever rewards signal you got at the end of 20 rounds." }, { "end": 4121, "start": 4104, "text": " And basically we saw that the regression model worked just as we would want to, it worked in such a way that at that last time step, the memory output function would be high only for the time steps at which the agent caught the ball." }, { "end": 4138, "start": 4121, "text": " And it would be zero, or be close to zero everywhere else. So as a result, you would basically see these little spikes. If you kind of plot the memory output function for each memory over time, you would see a spike every time the agent caught the ball." }, { "end": 4161, "start": 4138, "text": " And sort of intuitively, if you just plug that signal, the spiking signal, and as an auxiliary reward, you'll learn a policy. And when I say plug it in as an auxiliary reward, I mean you just take whatever agent you have, in our case it was an Apollo agent, and you just augment the reward with this memory output signal." }, { "end": 4175, "start": 4161, "text": " If you just do that, then yeah, you'll find a policy that catches the ball. It works super well. And it's actually, yeah, we didn't have a lot of sea variance there. It's pretty hypergramminer and sensitive. So that was really nice to see. And then yeah, you mentioned skiing." }, { "end": 4198, "start": 4175, "text": " Skiing actually has a pretty similar structure to that where you have to kind of go through all of these gates. This is a tary game. And at the end, you get a reward for all the gates that you hit are all the gates that you pass through. And you know, just as little context, this game was among the last to be solved at all by DeepRL agent." }, { "end": 4212, "start": 4198, "text": " And that was just by agent 57 and spring of 2020 that it was solved. And in that case, they had like a giant discount factor and they had to wait for 80 billion steps almost for this thing to be solved." }, { "end": 4221, "start": 4212, "text": " And you know, it was believed that the reason for that is this really long reward delay. And this kind of variance in the credit assignment signal." }, { "end": 4235, "start": 4221, "text": " So yeah, we're really happy that we were able to just totally nail that task, solving it in more like a billion and a half. So I think it was like a 25 25x gain because actually they had some seeds that were solving it in like 50 million or something like this." }, { "end": 4244, "start": 4235, "text": " But it was, you know, a really big speed up even with them much worse or much less sophisticated agent than the Rgt2 agent 57." }, { "end": 4253, "start": 4244, "text": " So it really felt like we kind of figured out what was hard in that task and just really solved it. And that was basically where we ended with that paper." 
}, { "end": 4259, "start": 4253, "text": " So do you feel like this is making progress towards solving a temporal abstraction in RL?" }, { "end": 4273, "start": 4259, "text": " Yeah, that's a really interesting one. I think it could be one way I could imagine this sort of contributing to, you know, effort towards having." }, { "end": 4285, "start": 4273, "text": " Temporal abstraction agents would basically be to like compress the memories that go into this regression model." }, { "end": 4291, "start": 4285, "text": " So right now we have this regression model where we have basically one weight, one grished for every time step." }, { "end": 4306, "start": 4291, "text": " But it doesn't really have to be that way. We could have one regression weight for every 15 time steps or every 30 time steps. And I guess we could even learn the number of time steps that are kind of treated as a single chunk for the purposes of credit assignment." }, { "end": 4318, "start": 4306, "text": " And then we could kind of do the same process hierarchically. We could say, well, my regression model says that there is, you know, or reward component of three associated with this 30 time steps." }, { "end": 4331, "start": 4318, "text": " So now I'm going to do another regression over those 30 time steps to see which of them contributed, which of them, you know, in the context of this policy contributed that reward." }, { "end": 4340, "start": 4331, "text": " Yeah, as a result, you're kind of getting sort of a hierarchical and sort of temporarily abstract learning algorithm." }, { "end": 4349, "start": 4340, "text": " The performance here is super interesting, like with such a simple method to improve results so much on a really hard task. Congrats." }, { "end": 4354, "start": 4349, "text": " I appreciate it. Yeah. It was really exciting to see that result. I'm not going to lie." }, { "end": 4365, "start": 4354, "text": " So, but can we talk about the limits here? Like what are first of all, is this entirely on policy? Because I think the regression is assuming that the policy is going to be similar next time around. Is that right?" }, { "end": 4376, "start": 4365, "text": " Oh, I think that those are really good questions. So, so yeah, I'm happy to talk about the general limitations. And specifically on policy case, I think is an interesting one." }, { "end": 4390, "start": 4376, "text": " So, it's, yeah, it's almost on policy with Impala. It's very slightly off policy, basically. And I think it's, it's an interesting question how this will play out in the off policy case." }, { "end": 4402, "start": 4390, "text": " And I think maybe this is, is this what you're getting at? So, basically, you know, we're learning something that's like a value function as like this utility function that is policy dependent, right?" }, { "end": 4415, "start": 4402, "text": " Like the, you know, the regression way that I get to some particular time step depends on what I did, on what the agent did in the future in between that time step and the reward that's being decomposed." }, { "end": 4425, "start": 4415, "text": " So, you've got this, yeah, policy dependent utility function. And so, is this whole setup going to work when we're learning off policy?" }, { "end": 4432, "start": 4425, "text": " Basically, when we're doing, you know, some of the data points in our regression data set are from old policies. 
And I think it's a, at least from a theoretical point of view, it's a really interesting and sort of tough question, because, you know, the data points are from old policies." }, { "end": 4456, "start": 4432, "text": " The prior work that I know of, they basically do this off policy correction that amounts to, you know, down-weighting the gradient or down-weighting the learning signal" }, { "end": 4465, "start": 4456, "text": " as you roll time forward, if your policy differs, if your current policy differs from the one that generated the data that you're learning from." }, { "end": 4475, "start": 4465, "text": " And in this case, our stated intention is to learn on really long term dependencies where a lot of the stuff that happened in the middle doesn't matter." }, { "end": 4486, "start": 4475, "text": " So, if we use like a, you know, Retrace-style off policy correction, we're going to just kill our learning signal, even though it may have been unnecessary to do that. Is that the issue that you were getting at?" }, { "end": 4500, "start": 4486, "text": " Yeah, I just, I guess I just wanted to talk about some of the assumptions that go into that regression. And it seems like that's one of them, that the regression is kind of conditioned on the policy, or maybe on being near policy, or something." }, { "end": 4517, "start": 4500, "text": " Yeah, it could be. I mean, one maybe counterpoint here is that I'm not sure if it's exactly like an assumption of the algorithm more than it is just the setting we were in experimentally before." }, { "end": 4523, "start": 4517, "text": " It's possible that things will actually work better when we're, when we have kind of a distribution over policies that we're learning from." }, { "end": 4537, "start": 4523, "text": " Like, something that we saw actually is that, for instance, in skiing, there's this tendency of the agent to learn the task and then, like, forget it." }, { "end": 4558, "start": 4537, "text": " So it kind of seems like it learned a good regression model that, you know, spikes for the gate hits or for some other good actions, but then the agent starts doing those actions all the time and it kind of no longer needs to decompose reward in that way because it's just always getting that reward." }, { "end": 4568, "start": 4558, "text": " It doesn't need to look at those gate hit variables anymore. And it's, and it's because of the on-policiness that this happens." }, { "end": 4576, "start": 4568, "text": " Because your policy is always exactly the same, you don't have to do regression. You just know what the reward is going to be in the current state because you always do the same thing." }, { "end": 4582, "start": 4576, "text": " So the regression model kind of breaks down if you're too on policy and your policy is too perfect." }, { "end": 4596, "start": 4582, "text": " So I think it's a really interesting experimental question, for, for, at this point, what happens when we have a replay buffer, so that now our utility function is no longer defined over, you know, a policy pi." }, { "end": 4602, "start": 4596, "text": " It's defined over an expectation over policies pi that are sampled from a replay." }, { "end": 4605, "start": 4602, "text": " I, yeah, I'm really excited to explore that somewhere."
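As a toy illustration of the Retrace point above, the per-step truncated importance ratios that such corrections multiply together shrink toward zero over long gaps, even when the policy mismatch is mild. This is just a sketch of that arithmetic, with made-up probabilities; it is not an implementation of the synthetic-return method or of Retrace as used in any particular agent.

```python
import numpy as np

def truncated_trace(pi_probs, mu_probs, lam=1.0):
    """Cumulative product of Retrace-style coefficients c_t = lam * min(1, pi/mu).
    Over a long delay the product collapses toward zero, discarding the credit
    signal for events far in the past."""
    c = lam * np.minimum(1.0, np.asarray(pi_probs) / np.asarray(mu_probs))
    return np.cumprod(c)

rng = np.random.default_rng(0)
T = 200
mu = np.full(T, 0.25)                                        # behaviour policy action probs
pi = np.clip(mu + rng.normal(0.0, 0.05, size=T), 1e-3, 1.0)  # slightly different target policy
print(truncated_trace(pi, mu)[-1])                           # effectively zero after 200 steps
```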
}, { "end": 4616, "start": 4605, "text": " In terms of the brain, we talked about TD learning and one step TD learning with bootstrapping can be quite inefficient, slowly propagating that value signal." }, { "end": 4633, "start": 4616, "text": " Can you say anything about how that compares to what's happening in the brain? Like, do we have that problem in the brain of inefficient one step TD learning, or are steps in time treated in a very different way in the brain, or maybe we just don't know yet." }, { "end": 4647, "start": 4633, "text": " Yeah, I think it's a really interesting question. I think, I think with a lot of questions about neuroscience, this one verges mostly towards the last option you gave, that we don't totally know." }, { "end": 4666, "start": 4647, "text": " So, right, I mean, this classic result that dopamine tracks the RPE suggests that there is some evidence that there is kind of time step by time step TD-like reward learning going on in the brain." }, { "end": 4679, "start": 4666, "text": " I think there are some caveats that I have around that, but those are super detailed. I don't think we need to go that far in the weeds at this point. I think it's enough to say that there's evidence for that." }, { "end": 4699, "start": 4679, "text": " But there isn't evidence that that's the only kind of learning that goes on in the brain. And as a matter of fact, given some behaviors that humans exhibit, it would be almost impossible for that to be the only way that learning is done in the brain." }, { "end": 4717, "start": 4699, "text": " And specifically, it seems like humans are able to do very jumpy credit assignment. This project, actually, I think if I track it all the way back, it got started, or its inception was when my advisor, Matt Botvinick, and I gave this example, it was like years ago, it was way before we started the project." }, { "end": 4739, "start": 4717, "text": " He gave this example of eating some food, let's say you're in a foreign country and you eat some food, and then the next day you get sick, and you have this big gap between the thing that caused the bad outcome and the bad outcome itself." }, { "end": 4755, "start": 4739, "text": " But he gave this example that generally we don't really have much trouble with this, especially if there was something a little bit suspect about the food, like if you're in a country where it's not advised to drink the water, but you eat some food that's probably been washed with tap water." }, { "end": 4783, "start": 4755, "text": " Then if you get sick the next day, it's relatively straightforward to think back and be like, okay, what might it have been? Oh yeah, I did that thing yesterday, it was a little bit suspect, maybe that's why I'm feeling sick today. And it's very jumpy, it's clearly not like TD learning, it's not like you have to get sick a thousand times, and by incrementally propagating backwards, negative reward, get back to the thing that you ate." }, { "end": 4797, "start": 4783, "text": " So I guess that's a long way of saying, I think that it's likely that the brain does something very roughly akin to what we've done in this paper." }, { "end": 4812, "start": 4797, "text": " But I don't think that there's, at least I don't know of yet, specific evidence for that. I think that is a really interesting direction for research, maybe even for something to pick up if I have time."
}, { "end": 4828, "start": 4812, "text": " In the future. Okay, so let's talk more about episodic memory. Can you contrast the idea of episodic memory with the idea of replay that we might be more used to in deep RL? There's something in common there, but what's the difference?" }, { "end": 4850, "start": 4828, "text": " Sort of the broad distinction is that replay tends to be memory that persists across episode boundaries, whereas the Merlin style episodic memory is just within episode. Replay is generally used, for instance, in neural episodic control, this paper from Pritzel, Blundell, and colleagues." }, { "end": 4862, "start": 4850, "text": " Replay is used when you're trying to remember back to some situation that was pretty much just like yours, your current one, and you want to remember if things were good or bad afterwards in order to evaluate your state." }, { "end": 4875, "start": 4862, "text": " Whereas episodic memory is where you're trying to remember back to something earlier in the present episode that might be completely different from what's happening now, but it has some bearing on what's about to happen." }, { "end": 4894, "start": 4875, "text": " And so you need to, so for instance, we have like the key door task, where this is in a couple of recent papers from us and others, where you have an early phase where you need to pick up a key, you need to learn to pick it up, and then some other stuff happens later, you need to open the door with the key in order to get a bigger reward." }, { "end": 4906, "start": 4894, "text": " And in that case, you need to use your episodic memory when you're about to open the door to think back to whether or not you've got the key, and it's not about thinking like what are other situations where I was near a door where things went good or bad, like with replay and episodic control." }, { "end": 4915, "start": 4906, "text": " Instead, it's about kind of the connection between a priori unrelated events. Does that kind of get to the question?" }, { "end": 4932, "start": 4915, "text": " Yeah, and it raises another question for me, like what is an episode? And if you think of it in terms of neuroscience, is an episode like very short term memory? Or like, we don't have this notion of resetting in our brain. So how do you see the notion of an episode and the episode boundaries?" }, { "end": 4942, "start": 4932, "text": " Yeah, yeah, totally. Yeah, as I was saying that I was realizing how confusing the naming there is, because it's like episodic memory is within an episode." }, { "end": 4964, "start": 4942, "text": " But you might think it would be remembering back to a previous episode, but that's what replay does. So yeah, I think it's one of these cases where the nomenclature is kind of, you know, the name space is a bit polluted because one use of it is coming from neuroscience and psychology and the other is coming from like reinforcement learning." }, { "end": 4981, "start": 4964, "text": " But yeah, so to answer that question about what it means, I guess in psychology and neuroscience, it's often defined kind of loosely to just be like some specific thing that you remember from the past, a specific event." }, { "end": 4997, "start": 4981, "text": " And then I think the word event kind of encodes a lot of vagueness. And I think that plays out in there being kind of a whole literature, a very long running literature in psych and neuroscience on event representations."
}, { "end": 5013, "start": 4997, "text": " Like the question is how do people decide when an event is over and when an event has begun. That's sort of an open question, and I guess for our purposes, or for my purposes developing these kinds of agents," }, { "end": 5031, "start": 5013, "text": " I basically think of, this is going to sound really confusing, but I think of an episode as just a time step, or it could be multiple time steps compressed into one slot. So it's kind of like whatever I've put into a slot in my episodic memory, that's going to be the quote episode, as bad as the nomenclature is there." }, { "end": 5043, "start": 5031, "text": " So if we have an agent that's maybe doing continual or lifelong learning, does that distinction break down between replay and episodic memory, because there is just one episode?" }, { "end": 5058, "start": 5043, "text": " Yeah, I think it definitely could. I think if you're in a setting like that, whether it's replay, you know, whether it's kind of like DQN or neural episodic control style replay versus Merlin style episodic memory," }, { "end": 5070, "start": 5058, "text": " I think it might come down to the algorithm that you're using on that data. You know, if it looks more like just a kernel regression to try to estimate your current state value, then maybe it's like replay." }, { "end": 5084, "start": 5070, "text": " But if you're doing more of like a sophisticated transformer architecture over a bunch of past states that you retrieved from this data store, then maybe it looks more like an episodic memory. I think it would really become a fuzzier boundary." }, { "end": 5092, "start": 5084, "text": " So if I understand correctly, it seems like the episodic memory is used in slightly different ways in these different agents." }, { "end": 5102, "start": 5092, "text": " Can you kind of summarize how, or the different ways in which episodic memory is accessed and what is stored here in this range of agents?" }, { "end": 5112, "start": 5102, "text": " Like I get the sense that some of it is like sort of content addressed, finding similar things. And then in other cases, you had a notion of attention." }, { "end": 5121, "start": 5112, "text": " So it seems like it's not, there's a range of designs here. Can you just help us understand the design space a little bit in summary?" }, { "end": 5134, "start": 5121, "text": " Right. I mean, I guess you've got this data structure, which is just a buffer that you can put vectors in. And the design question is what vectors should I put in it? How should I produce them?" }, { "end": 5152, "start": 5134, "text": " And what should I do with those vectors when I pull them out? And so I guess the answer to those questions from, sort of, I guess the standard models of cortical and hippocampal learning." }, { "end": 5167, "start": 5152, "text": " So kind of the model of how the storage and retrieval is done that you would find in Randy O'Reilly's neuroscience models or Edmund Rolls' neuroscience models basically says that what you store is just your cortical state itself." }, { "end": 5173, "start": 5167, "text": " So this is kind of what's in like the model in my thesis. You don't project it, you don't do anything special with it, you just store it."
}, { "end": 5190, "start": 5173, "text": " And then when you retrieve it, you retrieve it using this sort of associative mechanism, like you mentioned. You basically at every time step, you look for things that are similar to your current cortical state, maybe along some axis." }, { "end": 5199, "start": 5190, "text": " So maybe you learn some projection of your current cortical state for doing that associative retrieval." }, { "end": 5207, "start": 5199, "text": " And then when you retrieve it, you just, you know, you've reinstated it. You just plug those activations back in." }, { "end": 5221, "start": 5207, "text": " So with something like Merlin, you kind of have a little bit more, you know, mechanism in between the cortical state, if you will, or the working memory and the storage." }, { "end": 5232, "start": 5221, "text": " So in Merlin, you say, well, I don't think that's going to be quite enough. Like I don't know enough about the future when I'm in the past to know what I should look for. Like what about my sensory experience matters?" }, { "end": 5238, "start": 5232, "text": " Like you got this huge barrage of visual data coming in and a pretty small vector to store in memory. I don't know what I should encode." }, { "end": 5248, "start": 5238, "text": " So I'll use like a self supervised learning objective to shape the representation I store." }, { "end": 5259, "start": 5248, "text": " And then when I retrieve it, I mean, this was the thing, I don't remember exactly what they do in Merlin. I think that they do an associative retrieval using the LSTM hidden state." }, { "end": 5266, "start": 5259, "text": " And then I think that they feed it back. They feed the result into the working memory as an input." }, { "end": 5278, "start": 5266, "text": " So it's like, you know, you've got your observation coming in through a conv net and then into your LSTM through the input gate. And then you've got this additional vector concatenated with the conv output." }, { "end": 5284, "start": 5278, "text": " Maybe I'll just mention that Transformers also can be kind of described with this paradigm." }, { "end": 5297, "start": 5284, "text": " So you can basically say with a transformer, at every time step I'm writing some vector. And maybe I'm going to back propagate through the retrieval process in order to determine what to write, or maybe I won't." }, { "end": 5304, "start": 5297, "text": " So in the episodic planning networks paper, we have this Transformer-like architecture and we just don't back propagate through." }, { "end": 5325, "start": 5304, "text": " We found that we didn't need to. And then on the retrieval side, we basically, instead of just querying with one attention head from the current working memory state to all of the memories, we just have each memory query all the other ones." }, { "end": 5336, "start": 5325, "text": " And then we aggregate the result with like a reduce-max, I think. So at this point, you're getting into like the full spectrum of just like deep learning hackery." }, { "end": 5343, "start": 5336, "text": " All to answer this question of like, I've got this big batch of vectors. How am I going to turn it into one vector to send it out to my policy layer?" }, { "end": 5356, "start": 5343, "text": " And yeah, I think, you know, it's a really exciting space of architectures. It's also being explored elsewhere. I'll just make a quick mention of Emilio Parisotto's paper where he uses Transformers in RL."
}, { "end": 5365, "start": 5356, "text": " It's a bit of a different architecture, but still fits into this same kind of paradigm where it's just some different decisions about exactly what that self attention architecture looks like." }, { "end": 5375, "start": 5365, "text": " Can you talk a bit about the relationship between RL and neuroscience versus theoretical RL versus empirical RL? Like how do you see them informing each other?" }, { "end": 5388, "start": 5375, "text": " I can't say what it's like from inside of theoretical RL, you know, looking out to neuro and empirical, because I'm just not inside that community enough." }, { "end": 5406, "start": 5388, "text": " But I can definitely say that RL and neuroscience seems to have gained a huge amount from theoretical RL. I mean, just sort of the most basic ideas, even going back to Sutton and Barto, kind of show that connection." }, { "end": 5420, "start": 5406, "text": " Now, some more advanced theoretical RL, like, you know, involved convergence proofs and whatnot, maybe doesn't show up so much in the neuroscience literature. But I think we all feel good knowing that it's there." }, { "end": 5440, "start": 5420, "text": " That makes sense. Like somehow when we see dopamine neuron firing tracking the reward prediction error really well, we feel even better about it knowing that there's so much theory, you know, underlying those convergence properties, for instance, of those kinds of algorithms." }, { "end": 5458, "start": 5440, "text": " And then for empirical RL, it's, you know, clearly empirical RL takes tremendous value from theory. And I think I can only speak from my experience." }, { "end": 5476, "start": 5458, "text": " I obviously think a lot about, you know, at least intuitive psychology and some ideas from neuroscience when trying to build agents. And I think it might be that way for a lot of researchers. So there seems to be sort of a, you know, connection between those two nodes as well." }, { "end": 5499, "start": 5476, "text": " And then how do you think about learning in the brain in terms of how efficient it is compared to what we might do with algorithms? I guess the brain is still so far advanced compared to our current day algorithms. But can we imagine exceeding the brain's efficiency in learning at some point, or is that just unimaginably far off?" }, { "end": 5522, "start": 5499, "text": " Yeah, clearly right now people can learn in certain settings a lot faster than any algorithms can. I think a well documented hypothesis for why that is is that, you know, humans have a lot of, especially adult humans, have a lot of knowledge that a tabula rasa, you know, deep RL agent doesn't have." }, { "end": 5538, "start": 5522, "text": " And yeah, I think that there's this open question of whether, with sort of enough of the right data, you could get current methods to behave like humans do in terms of their learning speed." }, { "end": 5567, "start": 5538, "text": " And I think that was part of why meta RL was really popular for a little while. I think though it's unclear whether it's going to be possible to procedurally generate the kind of data that's required to have a really convincing demonstration that these kinds of algorithms can learn at the pace that humans do in any kind of like a convincing environment, I suppose."
}, { "end": 5586, "start": 5567, "text": " And I think it's going to be really interesting to see over the next few years or decade, whether there are improvements on the algorithm side that enable a tabula rasa agent to get kind of close to what humans can do." }, { "end": 5602, "start": 5586, "text": " Or whether there will be cases where some researchers are able to find problem settings where there is the right kind of data to learn to do something really impressive. I don't want to, you know, bet against progress here, I think that it probably will happen that we see some kind of compelling demonstration." }, { "end": 5605, "start": 5602, "text": " But I'm not sure how long it'll be." }, { "end": 5619, "start": 5605, "text": " I see that you co-authored a paper in Nature Communications on decoding brain activations, that was Toward a Universal Decoder of Linguistic Meaning from Brain Activation, Pereira et al. 2018." }, { "end": 5632, "start": 5619, "text": " And recently we saw Neuralink decoding a monkey brain's signals to use as a game controller, directly with thoughts. So, wondering what you think about what they're trying to do." }, { "end": 5641, "start": 5632, "text": " Is this kind of brain-computer interface inevitable, or is it unclear how well it'll work out? What do you think about that?" }, { "end": 5658, "start": 5641, "text": " Yeah, it's an interesting one. So, I guess I'll just initially say that the Neuralink result is kind of a demonstration of something that has been done before, and quite a long time ago." }, { "end": 5666, "start": 5658, "text": " If memory serves, I think it was in the early 2000s that it was first demonstrated that you could have a monkey controlling a cursor on a screen." }, { "end": 5679, "start": 5666, "text": " And as far as I know, that's basically what the Neuralink demonstration was. So, yeah, I mean, I'm excited that it's driving up some kind of public interest in brain-machine interfaces." }, { "end": 5689, "start": 5679, "text": " I'm also slightly quizzical because it's been around for a long time. I think it's probably because Elon Musk just has such a star factor." }, { "end": 5703, "start": 5689, "text": " He just kind of makes it more interesting, I suppose. So, yeah, I think Neuralink specifically seems to be, I don't know, maybe in the middle of the pack of a bunch of startups that are working in this space." }, { "end": 5712, "start": 5703, "text": " And there's been a lot of work in academic labs for ages, really. I guess since the 90s, it seems like things were really taking off with respect to this." }, { "end": 5732, "start": 5712, "text": " I think it's a really interesting direction to go. So, yeah, I did work on that paper that you mentioned. And actually, most of my PhD I was working on decoding sentences from fMRI data, which is nice because fMRI is not an invasive method like the Neuralink demonstration." }, { "end": 5738, "start": 5732, "text": " You kind of have to crack the skull open and stick some electrodes in, it's very, you know, invasive." }, { "end": 5752, "start": 5738, "text": " But the signal-to-noise ratio is just too low. That work didn't really pan out. And I don't see much evidence that anyone else has been able to get it to work really well either." }, { "end": 5763, "start": 5752, "text": " Both with fMRI and with EEG, it just doesn't seem to be quite enough signal to do really useful things."
}, { "end": 5781, "start": 5763, "text": " So, yeah, with these more invasive methods though, it's possible to do really amazing things. And this seems to be most evident in medical applications, where you have someone who has an injury or an illness," }, { "end": 5786, "start": 5781, "text": " or for some other reason has lost control of their body, and being able to give some of that back to them is super exciting." }, { "end": 5795, "start": 5786, "text": " As far as I know, a main impediment is just the ability to leave the recording device in the brain for very long." }, { "end": 5809, "start": 5795, "text": " I should caveat this as it's slightly outdated, maybe a couple years old. But when I was briefly thinking about moving in this direction, that's what I was hearing as kind of the primary issue." }, { "end": 5826, "start": 5809, "text": " So one of the reasons I didn't go that way is it seemed like it was really more of a task for kind of like immunologists and materials scientists to get these electrodes working; the actual machine learning side of things, or the neuroscience side if you will, doesn't seem to be the bottleneck." }, { "end": 5832, "start": 5826, "text": " So I'm curious what will happen with that field over the next few decades." }, { "end": 5841, "start": 5832, "text": " Cool. Okay. And then besides what we've talked about here, what's going on in neuroscience and RL these days that you're excited about?" }, { "end": 5853, "start": 5841, "text": " I'm really excited about all the batch RL papers that have been coming out. It seems like people are getting really serious about making RL kind of application ready, industry ready." }, { "end": 5866, "start": 5853, "text": " I'm really keen on that. Also, just sort of deep RL agents, the canonical methods that are kind of nearly on-policy or replay based, are just getting a lot better." }, { "end": 5878, "start": 5866, "text": " So like MuZero and Muesli and even Agent57 are showing that there really is a lot more room for improvement there in terms of final performance and sample efficiency." }, { "end": 5891, "start": 5878, "text": " So I'm really excited to see where that goes. In neuroscience, just to make a quick shout out, something I'm excited about is coming out of Princeton, from my old department." }, { "end": 5904, "start": 5891, "text": " Qihong Lu and Ken Norman are kind of working on using these sort of slot based memory, episodic memory architectures to model all sorts of phenomena in human behavior and cognition." }, { "end": 5913, "start": 5904, "text": " And I think that's really exciting because the old modeling paradigm, I think I might have mentioned it, with like attractor networks, had some inconveniences that made it hard to make progress." }, { "end": 5921, "start": 5913, "text": " So I'm excited to see what will kind of happen with that modeling paradigm moving forward." }, { "end": 5931, "start": 5921, "text": " And then looking forward, what do you see yourself doing? How do you see your path going forward? Are you going to continue on these themes we've talked about here?" }, { "end": 5951, "start": 5931, "text": " I think for a while, at least, I'm excited to explore more the possibility of developing agents that learn to get higher final performance and learn more efficiently, you know, most likely continuing to use algorithms with this nonparametric agent state, just because, it seems, I haven't run out of ideas with it yet."
}, { "end": 5969, "start": 5951, "text": " And yeah, if there's a chance along the way to do, I guess to say something that would be meaningful or useful to neuroscientists by treating those agents as models of the brain and cognition, then yeah, I'll definitely be trying to do that in the next couple of years." }, { "end": 5982, "start": 5969, "text": " Dr. Sam Ritter, thank you so much for doing this and taking the time out of your day for speaking with me and our audience. I've learned a ton today and I'm sure audience is going to love this. Thank you, Sam Ritter." }, { "end": 6000, "start": 5982, "text": " Awesome, thank you so much, Robin. It was a ton of fun and if anyone in the audience wants to chat about this kind of stuff, usually around on email, so yeah, feel free to pick me and thanks again, Robin." }, { "end": 6014, "start": 6000, "text": " Notes and links for this episode are at talkrl.com. If you like this show, I need your support. You can help in a few ways. One, subscribe on your favorite podcast platform. Subscriptions make a big difference." }, { "end": 6021, "start": 6014, "text": " Two, follow us on Twitter and talkrl podcast. We love retweets." }, { "end": 6031, "start": 6021, "text": " Three, give us a five-star rating on Apple podcasts. If you don't think we deserve five stars, let us know on Twitter what we could do better." } ]
Thomas Krendl Gilbert
Thomas Krendl Gilbert on the Political Economy of Reinforcement Learning Systems & Autonomous Vehicles, Sociotechnical Commitments, AI Development for the Public I...
https://media.transistor…412.mp3?src=site
This is TalkRL Podcast. All reinforcement learning, all the time. Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan. Thomas Krendl Gilbert is a PhD student at UC Berkeley's Center for Human Compatible AI, specializing in machine ethics and epistemology. Thanks for joining us today, Thomas. Thanks for having me. How do you describe your focus area? My focus area of machine ethics research is basically the philosophical intersection of what are at present largely distinct strands of technical research. These being AI safety, ML fairness, machine learning fairness, and what's called human in the loop system design. I think it's very important that the AI systems we are building simultaneously be demonstrably safe, as well as fair, for distinct populations of people, while also remaining under the direct control of the people with whom they will interface or interact. The problem is that it's not entirely clear what it means to do all these things at once. And in fact, that is itself a very old philosophical problem that AI is just now rediscovering. These being the problems of defining relevant domain features, the learned model, and the system interface. These are not new, these are actually just being reinvented by AI today. Can you tell us a bit about your journey to this focus? How did you get to this point? My undergraduate degree, I actually started off in astrophysics. I shortly thereafter switched to a double major in philosophy and sociology. I got more interested in modeling people than in stars. I then was a Fulbright scholar in Denmark studying the history of existentialism. And then after that got an MPhil in political thought and intellectual history at the University of Cambridge. Then I came to Berkeley after that. I got an MA in sociology first. And while at Berkeley realized that most of my interests were going to get reinvented by AI one way or another. So I decided to design my own PhD, named it machine ethics and epistemology. That seemed like the most organic way of combining my interests in the future of society as well as the foundations of political theory. That's so diverse. So you list the fields associated with your PhD as history and theory of AI, moral cognition, technology and delegation. So for those of us that are not familiar with these fields, can you tell us what they're about? Yeah, it might be easiest to explain all those in reference to this classic notion of a trolley problem. The question of who should the self-driving car kill? Should it kill one person to save five? That sort of thing. History and theory of AI was basically the question, how would different major canonical AI theorists like John McCarthy, Marvin Minsky, as well as more recent deep learning people like Geoffrey Hinton solve the trolley problem? So that hinged on this question of do we really think that there is some precise set of abstract moral rules that we could program into some AI system for it to just sort of follow? Or instead, is it a question of how much or what kinds of data we would need to show the system for it to learn how it is that people would actually themselves make a trolley problem decision? So moral cognition was then about, okay, so what exactly is going on inside someone's head when they're doing a trolley problem? Can we isolate the neural mechanisms, the cognitive mechanisms for how they make these decisions?
How are abstract philosophies, whether that's utilitarianism, whether that's Kantian ethics, related to how the brain actually works? And there's a lot of interesting work in this field of moral cognition, cognitive psychology, that tries to study this problem empirically. And then finally, technology and delegation is about, okay, so what difference does it make if we have AI systems or machines making these decisions for us? So if we let them solve trolley problems, what distinctive moral calculus or problem is at stake for us when we allow them to do that on our behalf? So that ends up being a lot about getting certain stakeholders engaged in these questions, having them explain what their own criteria are for dealing with morally complex questions and how confident we should or shouldn't be in automating how we answer those things. So we had your CHAI colleague, Michael Dennis, recently as well, but can you remind us what CHAI is about? And also, is CHAI part of a larger network of institutions that focus on these types of issues, and how would you characterize CHAI's place in that network? Like does it have certain specific areas of relative strength and specialization? CHAI is working on technical solutions to this problem known as value alignment, the problem of how we get AI systems to learn our values as we intend them to be learned. Largely through the way that AI systems can observe our own human behavior and our own decisions and preferences, and from that extract some kind of objective function that would itself define then what it is we want and how much we want it relative to other things. CHAI is probably most comparable to the technical AI safety labs at DeepMind and OpenAI. It's different in that it's an academic lab, so it's led by Stuart Russell, who's a professor at Berkeley, as well as Mark Nitzberg, who is the executive director. I should add that almost all of CHAI's work is highly technical, so these are AI theorists and computer scientists. I'm distinctive in that I'm CHAI's resident humanist in a sense, so what I do is collaborate with many other members of CHAI to try to add an integrated socio-technical perspective on the problems that they are working on. We might say there's like a kind of intellectual stack in ML where the top layer is general ideas and terminology, providing direction, and maybe there's some kind of middle layer that's more theoretical math and practical algorithms, and down to engineering and deployment where the rubber meets the road and things get deployed. Do you see things that way, or what part of the stack would you say that you're most focused on? In many ways my work consists in rethinking that stack to make it structurally aligned with human values. So a lot of abstract discussions of value alignment tend to focus exclusively on the objective function or on the uncertainty that surrounds the objective function, on the general ideas and terminology of value. Without questioning these other layers or the order in which those layers get stacked. In my view the most pressing and interesting questions in value alignment actually lie in how all these things interact, how explicit conceptions of value, as well as mathematical representation, as well as problems of mechanism design, and finally engineering practice itself are themselves permitted to interact with each other.
You know, why is it that computer science has a certain relationship with engineering, and both of those have a certain relationship with data science, and all of those have a certain relationship with the actual stakeholders or consumers of these systems. This is what I mean by the political economy of reinforcement learning, which we'll discuss more later on. So let's talk about the hard choices paper you co-authored, that is Hard Choices in Artificial Intelligence: Addressing Normative Uncertainty through Sociotechnical Commitments, that was with Dobbe, yourself, and Mintz. So what is the gist of this paper? The paper as you mentioned is co-authored with who at the time were fellow graduate students at Berkeley, they've now graduated, and the paper itself reinterprets many of the prominent positions in machine ethics right now by means of a basic distinction between what we call normative uncertainty and what philosophers have traditionally called vagueness. So normative uncertainty is how many practitioners of machine learning and reinforcement learning have approached the problem of optimizing an AI system, namely the notion that if we give it enough data and if we provide a sufficiently good learning algorithm, eventually the system will figure out what good behavior actually means in a way that we don't have to directly specify. But this itself actually presupposes an answer to a problem that philosophers have been thinking about since the time of Plato and Aristotle, namely the question of how it is we account for a fundamental lack of clarity about the problem we are facing, in other words when we don't have a deterministic problem formulation at hand. How can we be confident that any such algorithm could arrive at good behavior, period? And the original example debated in ancient Greece was called the sorites paradox, which was a technical name for this situation in which there's a certain number of grains of sand, and the question was how many grains of sand does it take before that constitutes a heap of sand. So it's this question of the heapness of the grains, and indeed there were different positions at the time for answering this question. You could argue that there was in fact some objective answer to the number of grains that would make a heap, even if we don't know what it is. But perhaps if we had more information about the nature of the sand or about how large it is, then we could discern an answer to the question. You could also argue that different communities or city states might legitimately answer that question differently because they use the word heap to refer to slightly different things, these are different language communities in other words. So that was another position. Or you could argue in fact there is no answer, that it's impossible to specify a deterministic formulation of heapness because the notion of heap might be inherently vague. So these are all different approaches to how we resolve vagueness, how we make sense of vagueness, either as a feature of the world, a feature of our minds, or a feature of the communities that we belong to, and the point we make in this paper is that all of these positions are actually represented
in the machine ethics literature, implicitly. And instead of arbitrarily kind of going with the one that makes the most intuitive sense, whether you're a computer scientist or an engineer or somebody who works on the AI governance formulation of AI, the question instead should be what a principled approach to the problem of vagueness really means in the context of the particular system that is being evaluated for development. So this paper refers to existential risk, or x-risk, and also fairness, accountability and transparency approaches to safety. Are your concerns totally separate from these, or more like a superset of these? So what I call x-risk and the FAccT style approaches are examples of two of these schools of thought that I was describing earlier. X-risk has a strong affinity with what philosophers have called epistemicism, which is the view, as I was saying before, that there is an objective answer to this problem but that we might be ignorant at present of what the answer is. So x-risk more or less assumes that the answer could be learned by observing human behaviors or learning from human preferences, with enough data at such a scale that eventually the answer can be found. The fairness literature instead often assumes a position that philosophers call semantic indeterminacy, which is the notion that different communities might well disagree about what a safe system actually means, and that this has to be taken into account, this indeterminacy must be considered when building a system. To make that concrete, a self-driving car might still not be safe even if it never kills people. If it's often getting in crashes, if it suddenly accelerates or decelerates in ways that make people feel uncomfortable, it's completely imaginable that stakeholders would not feel safe around it even though it's technically never going to cause any fatalities. So our concerns in the paper are a superset in the sense that we're saying look, all of these assumptions may or may not be relevant as more or less valid interpretations of what a good system would mean in the particular context of the question we are asking. So the question is really what do we want to do with this system? What do we want to call good, rather than just treating one of these schools of thought as paradigmatic and just running with it? The paper also includes a case study involving the ACLU, that's the American Civil Liberties Union. They're critiquing Amazon's face recognition system. Can you remind us about what happened in that case, and how do you interpret the problem in that case? Yes. So the ACLU made use of the Rekognition API designed by the Amazon team and found that the system, when applied to the faces of members of Congress, was both inaccurate in its classification, so it got many of the faces wrong, and it was also racist in the way it was inaccurate. So it was differentially more likely to misclassify the faces of black members of Congress than non-black members of Congress. So the ACLU did this to show how inadequate the system was from a design standpoint. Interestingly, the Amazon team responded to justify what the purpose of the system was, namely that their API was deliberately designed to be very restrictive, such that they put enormous work into defining exactly which police departments could have access to it based off of standing relationships with those organizations.
And so therefore, the system is trustworthy because even though it might be technically inaccurate in certain instances, it's never meant to be used in those instances unless we completely trust the institution or the organization that's using it. So it was sort of like a matter of saying you used it in a way it wasn't meant to be used. And the ACLU responded saying that, well wait, if that's true, why has your team over the weeks that we've been debating this in public suddenly puts so much time and energy into updating the classifier to improve its technical accuracy? There seems to be a contradiction there. So I think that's actually a profound point. What the ACLU was pointing out was that at least two different interpretations of vagueness were simultaneously at stake and the Amazon team was being inconsistent its use of them. You can't simultaneously claim that an ironclad API could solve this problem, which is really a kind of appeal to resolving semantic indeterminacy. And yet also claim that greater system accuracy will help, which is to reference a form of epistemicism. You can't both claim that racial bias is something you can technically minimize or eliminate and something that can be entirely avoided by just partnering with the right law enforcement agencies. Their theory of the case itself was inconsistent. So in the conclusion in this paper, it says this set of socio-technical commitments will need to be integrated into the training of engineers, data scientists and designers as qualifications. So can you tell us what do you mean by socio-technical commitments here? A socio-technical commitment is sort of like a promise or a pledge that the developers of the system make to the stakeholders whose concerns are implicated in the system's specification. It's a way of asking to what extent am I as a designer responsible for how my system performs. For example, if I'm deploying a facial recognition system and I claim it will perform according to a certain threshold of accuracy, what guarantees can I give not just that it will meet that threshold, but that that threshold is in fact the right one that it's the good threshold. So it's both technical and normative, which is sort of how I'm defining socio-technical as the bridge or the boundary between those two things. To make that concrete again, a system that is 99% accurate but whose stakes are life or death might still not be good. It might still not be one stakeholder's want with good reason. Whereas one that is 70% accurate, like just something simple, like a content recommendation in social media or the Pandora app recommending me songs. That might be fine given that the stakes are so much lower, so this is a matter of context. The commitments are context specific and it's a way of indexing your relationship with stakeholders as a developer. Can you tell us more about what you think this type of training might look like? This is an open question. We are considering this right now. I have some follow-up work exploring this idea. I will say that I believe years from now, it will be considered very strange that AI designers were building algorithms for self-driving cars and drones and credit scoring systems without ever talking to a pedestrian or to a homeowner or to a disadvantaged community. 
I think all the time about how we need AI theorists and engineers to do something like a medical residency, something like a clinical environment where they have to diagnose real systems that already exist and are affecting real people, and then diagnose them in terms of the actual stakes as they are able to be observed, rather than just speculated about based off of what we think may or may not happen once they're deployed. Right now, a lot of AI research is like a lot of very brilliant young people who are very committed to learning how to sail. They then go on to spend the next several years, or even in many cases much of their careers, sitting on the shore learning how to expertly tie different kinds of knots. Instead, what we need is to put those people out on the water and actually go sailing, learn how to sail. Learning the kinds of knots is only really important so that once you're on the water, you can make a good choice in the context of, okay, given the wind speed and direction and where we're trying to go, why I should tie a bowline rather than some other kind of knot. If there are any sailors listening to this, they'll get that joke. It's a matter of situating your knowledge in the context of the stakes such that you're exercising good judgment rather than just some kind of claim to accuracy. So I guess when you talk about engaging various types of stakeholders and more deep engagement, that might be with the stakeholders that might be affected by the system. I guess from the point of view of maybe corporations that just want to start deploying products and start profiting, what would compel these organizations to want to do that, all that extra work and all the costs associated with it? Is it something we would need to force them to do as a society with regulation, or how might that work? I think that there is more than one approach here, but yes, in brief, I think that we need to change the incentives of these companies so that it is in their own self interest to do this. And in fact, I think it is in their own self interest for these companies to do this, because if you're deploying a system without taking these considerations into account, that system is not in fact going to be a good system. It might work in the short term, it might help in conforming to some short term business model, but it is not actually aligned in any structural sense with our notion of collective good. This is really, really important and I think massively misunderstood or underappreciated in the AI landscape right now. What it means to build a good AI system means not just that it is aligned with preferences that we can model, but that it is aligned with where we want society to be headed, and that the system is well aimed with respect to what kind of polity we want to be. And that's why I don't think you can separate questions of corporate governance, questions of regulation, questions of substantive conceptions of human value from technical questions of problem specification and optimization. They are in fact part of the same sociotechnical landscape, and this straddling of that boundary is what all of my papers are trying to do, to reveal questions at the intersection of these two sides, rather than to purport to answer them in advance of being more clear and aware of the stakes of the question. Let's move on to your white paper on PERLS and autonomous vehicles, that is Mapping the Political Economy of Reinforcement Learning Systems: The Case of Autonomous Vehicles, by yourself, Thomas Krendl Gilbert.
So first, to help us get oriented, can you explain the phrase political economy for those of us who might not be too familiar with it? Yes, political economy is a very old term. So historically political economy is what social science basically was before it splintered off into modern academic specializations of psychology, economics, political science, sociology, history. It's a combination of all of these fields, and that's really just because it tries to examine how we define the good society, what is a good society, and what is the set of institutions, human behaviors, virtues that are needed to constitute that vision. More specifically, political economy has tried to examine how it is that many of the same human values and preferences are expressed differently. They have different modalities, whether they're expressed economically or politically. Just think of the way that a choice you're making as a consumer is different than one that you're making as a citizen, that voting with your dollar is not the same as actually voting. So in terms of reinforcement learning, to make this a little bit more technically grounded, it's useful to think about this from the standpoint of what makes optimization and specification different. Markets are sort of like social optimizers for the things that we want, that try to effectively match consumers and producers, where the producers compete over providing some good or service that consumers are choosing. Whereas politics is more like a social specifier. It's about how we collectively choose to define the things we want in terms of their importance for living well together. Okay, that was helpful. So now what was the main idea in this paper? The central idea is to elaborate this relationship I'm seeing between future forms of RL optimization and different forms of monopoly power that have been studied by legal scholars and political philosophers. A good example to illustrate this is a self-driving car, we discussed this in the paper, I discuss it. If you use reinforcement learning to learn a particular kind of routing algorithm for a car, then as you scale up the fleet and the vehicle platform, you are essentially privatizing public infrastructure, namely the access to the road, because you are as a designer in the position of deciding what it means to drive optimally on those roads, and in effect nudging other drivers and road users to conform to that resultant definition of driving. So it is a different way of investigating how self-driving cars should drive. So what we are in fact building when we build autonomous vehicles are monopolies in the making. At what point does it make more sense to think of these companies and services as public utilities, rather than what they now are, which is private corporate entities? So you talk about the reward hypothesis. Can you remind us what that is? Yes, the reward hypothesis, and I'm drawing from the formulation of Rich Sutton and Michael Littman when I'm talking about it, is a kind of philosophical statement about how intelligent behavior can be defined rigorously. The formal definition is that the hypothesis is the maximization of the expected value of the cumulative sum of a received scalar signal, and the idea is that that maximization is all of what we mean by intelligent behavior, any intelligent behavior oriented to achieving some task. The point there in layman's terms is that it should be possible to exhaustively specify what it means to do an activity well in terms of that activity.
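To pin down the informal statement above, here is the way the reward hypothesis is usually written; this is a sketch drawn from standard RL treatments rather than from Gilbert's white paper itself. The claim is that any goal worth pursuing can be expressed as maximizing the expected return, a (possibly discounted) cumulative sum of a single scalar reward signal:

$$ G_t = \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1}, \qquad \pi^{*} = \arg\max_{\pi} \, \mathbb{E}_{\pi}\left[ G_0 \right], $$

where $R_t$ is the scalar reward, $\gamma \in [0, 1)$ is a discount factor, and $\pi$ is the agent's policy. Everything Gilbert questions in what follows lives in the choice of that single scalar $R_t$.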
So for example, if you're playing Super Mario Bros., every jump that you perform, every coin you collect, every enemy you kill, any amount of points you acquire, every level you beat, somehow each of those things brings you either closer or farther away from beating the game and freeing Princess Toadstool at the end, that there's in fact some precise answer to this question in theory of what it would mean to play the game well based off of every action you take within the game. In fact, I think most human activities are not clearly like this because most of the things humans value are not scalar in nature. They are rather pluralistic. They exist on different normative scales. They can point in different directions depending on the context. In other words, most human values are more like vector than they are like a scalar. Instead, I think that when you really think through the word hypothesis, it makes more sense to approach reinforcement learning specification as a kind of translation of human concerns and priorities into a scalar conception of value rather than just assuming that scalar conception is just sort of there for our system to discover. The problem of the designer faces then is how to make that translation a good one rather than a bad one. So some RL systems can include constraints as well as rewards like for safety or other reasons to constraints also fit into this reward hypothesis framework. So the political economy of RL is about identifying and formalizing the constraints needed to complement the reward hypothesis. That is what I mean by a good translation of the domain. When the paper I define the political economy of RL formally as the science of determining the limits of the hypothesis for a given domain, these limits have to be specified in order for the behavior of the system to be interpreted as good or bad by the other sorts of agents, namely other humans, stakeholders, perhaps other AIs affected by its behavior. This is not meant to be a rejection of the hypothesis. It's really just to state, we state what it is, which is a hypothesis. And the way we evaluate hypotheses is by empirically investigating the domain and evaluating our different assumptions about how we think it works such that we arrive at a formulation of our system that is well mapped onto the integrity of that domain. So it seems to me that the reward function in RL is making something explicit, but it's something that already exists. It's a way to balance between multiple concerns. And I would say most organizations today are generally already optimizing balancing multiple concerns, but usually in an implicit way, like without numerical coefficients. And we generally wouldn't know how they prioritize or make trade-offs exactly between these things internally until something goes wrong and we hear about in the news. So when talking about how RL changes things in terms of political economy, do you think it's a difference in degree, like this large scale automation just makes the details of these reward functions like a bit more impactful? Or is it really a change of kind? Does RL take us somewhere very different due to these reward functions? Right, this is the key question. I don't think there is a deterministic answer to this question. It's the kind of question that actually my reading group on the political economy of RL is always trying to ask. Until we identify the constraints on an RL system, we can't really understand whether or how differences of degree and kind are at stake. 
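One minimal way to write down what "determining the limits of the hypothesis" could look like formally is a constrained MDP, in which stakeholder concerns enter as explicit cost signals with thresholds rather than being folded into the scalar reward. This is a sketch under that assumption, not a formalism proposed in the white paper:

$$ \max_{\pi} \; \mathbb{E}_{\pi}\!\left[ \sum_{t} \gamma^{t} R_t \right] \quad \text{subject to} \quad \mathbb{E}_{\pi}\!\left[ \sum_{t} \gamma^{t} C_i(s_t, a_t) \right] \le d_i, \quad i = 1, \dots, k, $$

where each cost $C_i$ stands in for one constraint on the domain (road wear or passenger discomfort, say) and each threshold $d_i$ is exactly the kind of quantity that would have to be set through the stakeholder engagement discussed later in the interview.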
So it is important to understand that reinforcement learning is often not just making something explicit which already exists. It's often proposing an interpretation of intelligent behavior that has never before been written down. And there may be uncertainty about that definition, even from the standpoint of the designer; there could also be disagreement about it from the standpoint of stakeholders. Or there could be lack of clarity about that definition's application to the particular deployment context. And again, what I just said, those different dimensions of this uncertainty, that's actually the connection to the hard choices paper, because in fact each of those problems, each of those ambiguities, has this legacy in the history of philosophy to these different schools of thought, namely epistemicism, semantic indeterminacy, and what we call in that paper ontic vagueness. The idea, basically, is that look, the political economy of RL is about investigating the domain to figure out how this structural transformation is varying in either degree or in kind. Is your view that moving to RL means that organizations should be exposing, and maybe even publicly debating, the design of their reward functions? Or like, do you see a world in which reward functions will be or should be standardized? Like today we would expect that Tesla and Waymo have reward functions that may differ to some extent, but they're judged mainly on outcomes in terms of safety, performance, profitability, things like that. Right, I believe it will be both. To some extent, these reward functions will need to be subject to external standards, and to some extent competing firms' functions can be allowed to vary, and judged in terms of outcomes on various parameters of concern, whether that's to different cities, different neighborhoods, different highways, etc. The point of this PERLS perspective that I'm outlining is for practitioners to get better at understanding this spectrum, and figuring out, according to appropriate assumptions about good behavior in the context of this city with this platform and this routing, what are the features that should be allowed to vary, and what are the features that should not be allowed to vary. There's a balance implicit there between the standards we need to hold ourselves to in order for the system really to perform well, and also the legitimately distinct approaches to optimization that our platform might pursue based off of what users of that platform or users of roads may or may not want, to choose to be a good partner. So this paper mentions the case of potholes, in that vehicle fleets could structurally generate them in specific places depending on their routing. So what do you see as a solution to that type of problem? Yes, so I don't think this is specific to reinforcement learning. For example, we already know this is happening with apps like Waze, which have caused congestion on particular routes once the app deems them to be more efficient than other routes for routing purposes. So I think the way this plays out with reinforcement learning is that we need to be thinking about what makes near and medium term solutions different than long term solutions. In the near and medium term, it means we need to be careful about the scale and degree of deployment that we make for self-driving cars. Wearing down public roads with these systems is inevitable as they scale, as they are deployed more and more intensively.
That's just of a piece with what it means to test these systems well. So what we need to do is try to interact and consult with stakeholders about what modes of deployment are acceptable as this new kind of wearing down becomes a more and more salient feature of the domain. In the long term, as I hinted earlier, I do think it means that self-driving car companies will need to be reconceived as providing a public service and regulated accordingly, namely as public utilities, even if they are or remain privately built and technically overseen. In other words, self-driving car firms will need to be found liable for the damages that their fleets do, and we will have to substantively reinterpret our conceptions of tort law, our conception of what sorts of harm constitute public damage, in order to actually oversee what kind of service is being provided to us. We don't have to do that. We don't have to reinterpret the law in this way, but I think that we should, and I also think that even if we don't, we should be honest about what that would mean, which would be that roads would no longer be public, and we would really only retain access to them thanks to the beneficence of these private companies. So I'm wondering if we have the political will or capacity to deal with these types of issues. It seems to me we live in a world where it's hard to get people to wear masks, even though we're clearly in a pandemic. And then we have social media polarizing people for ad clicks, and we're having trouble dealing with that issue, which seems fairly straightforward. So would the governments of tomorrow want to take that on, or would they be overburdened and glad to offload the details of reward functions to the FAANG companies of the day, and then maybe haul them in for a televised hearing when things go wrong at some point? It's a great question, and it's one that I think about basically every day in the context of my research. And I don't have great answers, but I will highlight a theme of both the hard choices paper and of this paper as well, which is that I actually think we need much more democratic approaches to our own specification, and to how we approach the problem of AI development more generally. That might sound strange, because we live in a political moment where it's very hard to imagine politics going well and engaging stakeholders in a way that doesn't quickly become dysfunctional. I mean, if you imagine some kind of open, Wild West-style debate on Twitter about what the reward function for driving should be, I don't think it would go well. And that's also not really what I mean by a democratic set of solutions. I think it's more a question of how we are able to hold these companies accountable in ways such that it becomes in their own self-interest to govern themselves in a more democratic fashion. That doesn't necessarily mean that people inside the company should vote up or down on what the reward function should be. What it means is that we create channels for dissent within these companies, and also a path to external stakeholders, so that we are able to have the types of feedback with the environment that will actually inform us whether the RL specification that we chose is good or inadequate. So it might sound strange or ironic, but I really deeply believe from my collaborations with computer scientists and AI theorists that the path to good specification is democratic commitments.
It is a leveraging of new types and forms of feedback with the environment that in fact constitutes a kind of democratic ethos. It is stakeholder engagement. There should be ways in which citizens, municipal bodies, state-level departments of motor vehicles, etc., are able to access the API and are able to begin to question the featurization adopted by developers as the system is being deployed. This is in the developer's interest, because if you don't have this feedback, I don't see how you are going to be confident, technically, in the performance of your system. You can wash your hands of it and say it is optimal, but in doing so, you are choosing not to investigate the more substantive question of how optimal performance relates to a substantive conception of good performance. Those are not the same thing. In fact, that is what is at stake and what makes optimization different than specification. So this is where I want the field to move and where I think it has to move. I think computer scientists need to engage more with policy. I think that the opacity and the erosion of our current civic institutions and political norms need to be confronted as a problem that demands a new set of institutions, a reformed understanding of our own civic commitments in the context of AI development. I can't help but be reminded how political some of this stuff is. When you get right down to it, if a company is doing something with a large workforce of human labor versus, in the future, doing it way more optimally with a large fleet of robots, it is kind of the same thing, but it is totally different. It is almost like turning the capitalism dial way up, and the speed of capitalism way up, and making it very clear, blatantly clear, what the real objectives of the whole enterprise are. Maybe environmentalists talk about capitalism as a mechanism to turn nature into money and wastelands, or something like that. If that process gets sped up by a factor of umpteen orders of magnitude, we are going to have to really face that problem a lot more directly. What does that reward function really mean? It might just be some formulas in a robot, but it is also multiplied over these giant fleets, so where does the interest of the public come into that equation? And it seems like that is what you are trying to do with your PERLS research group. Yes, so the PERLS research group is an attempt to extend the themes of my white paper, and of related work being done by other graduate students, mostly at Berkeley, into a community and into a research agenda. So it's an attempt to think about how it is that the systems we are building are going to find themselves having a certain relationship with markets or a certain relationship with existing social institutions, whether that's politics, whether that's domestic settings, whether that's individual behavior online, whether that's traffic. You know, we don't decide these things as designers. We can design the systems in whatever way we want, but we don't design them on terms of our own choosing. Many of the terms that are given to us are a product of the way the world has been set up in advance of our system being deployed. And so the aim of this PERLS group is to help people who are already trying to think about this problem, maybe largely in the context of RL specification.
And I should also add that multi-agent RL, I think, is an interesting field that's emerging, in part just because they can't help but think this way. They cannot take the world for granted, and they have to reflect very deeply on how their agents are learning simultaneously from each other, and from the way that they think other agents are learning about them, and so on. It requires you to think about the sociology, the structure of the world that they are learning to navigate. So there's an interesting way in which these emerging subfields of RL are kind of reinventing political economy, in a way that I find very exciting, very intellectually interesting, and also very profound in a political sense. And also in an economic sense, because really what we're going to be able to do with these systems is re-specify the institutions that already exist and specify institutions that have never existed before. And that's very powerful. There's a lot of opportunity for optimism there. And so the attempt of this community is really to kind of lovingly invite different branches of RL into this conversation, as a way to advance their own technical work, and, as I was suggesting before, as a way to improve and refine the metaphors and just the semantics of how the public at large should be thinking about RL. Rather than just as an impending future that they can't do anything about, it should be a way for them to reconceive of their own agency and to feel empowered to articulate their values on a new terrain, the terrain of reinforcement learning. And I noticed one of the readings for your research group is Societal Implications of Deep Reinforcement Learning by Jess Whittlestone, Kai Arulkumaran, and Matthew Crosby. And I just want to note to listeners that we had Kai on very recently, and we'll be talking to the author Jess Whittlestone about that paper next week. Moving on to your other paper, that's AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks by Andrus et al. Do you want to give us the lowdown on this paper? Right. This paper is a collaboration with other members of my student group at Berkeley, so it's a co-authored paper with all of us: McKane Andrus, who's now at the Partnership on AI, Sarah Dean, myself, Nathan Lambert, who I believe you had on this podcast previously, and also Tom Zick. The paper is more of an STS-style investigation of different fields of AI that are emerging, many of which we've discussed: AI safety, fair machine learning, human-in-the-loop autonomy, as well as control theory. And in the paper we look to the history of these subfields to try to understand how they first became interested in different types of social risk and eventually claimed to have found technical solutions to those types of social risk. So in brief, AI safety found a way to care about extinction, the problem of the complete eradication of life. Fair machine learning is much more specifically interested in inequity or inequality across subgroups. And human-in-the-loop researchers tend to be more strictly concerned with accidents, with whether your system undergoes some kind of mishap or some unanticipated rupture in assumptions that makes it break. So what we try to look at in the paper is how these fields' definitions of each of those risks are problematic and actually remain quite vague in certain ways.
So there is really this attempt to help these communities understand what they can learn from each other, rather than just to critique those definitions. It's sort of to look at how these communities can look to each other to see what other ways there are of conceiving of the original problem that they see their formalizations as making tractable. So do you consider socio-technics as that set of, like, safety, fairness, and human-in-the-loop autonomy? And do you think that's an exhaustive list, or are we just starting to find what's contained in that set? I don't think it's an exhaustive list. I think of socio-technics as a question. It's the question of how we address the gaps between our definitions of the problem and our tools for working on the problem. Another way to say it is that it's this problem of defining the relationship between your system, the system you're building, and external reality. It's to recognize that there is a need for some interface there, but that the definition of that interface is vague or imprecise. And so there are these two levels of uncertainty. There's model uncertainty, which is something that you can minimize by reducing the model bias. But then there's also uncertainty about the model. Is this the right model? What other assumptions do we need? Which assumptions are wrong? That includes this socio-technics problem, and that's really where it enters into the picture. One way of illustrating that is that I've seen papers where, for example, AI safety papers reference the impending death of the human race or the end of human civilization, and then they try to illustrate some technical intervention on this with a toy example of robots that are trying to figure out how to slice a cake in half to share it without explicitly negotiating. From a technical standpoint, that toy example doesn't really matter. But as a metaphor, it's quite imprecise. If you were to explain this to someone who doesn't understand what a Markov decision process is or what inverse reinforcement learning means, they would be totally mystified by a paper that is illustrating existential risk through an example that is so constrained. And so socio-technics is just a way of asking how we address situations like that in a way that doesn't make our own work misread. Because you want your work to set itself up for success. And setting yourself up for success means that you're able to illustrate and interpret the significance of your formal work in a way that the people who are going to be interacting with the actual system can understand, because only if they can understand it can they provide you with the types of feedback that you would need to confirm the assumptions behind the original specification. So we need to see all of these layers of socio-technical abstraction as being in a workable relationship with each other, rather than as sort of "not my problem, that's the public's problem," or "that's the stakeholder's problem," or "that's PR's problem," or "that's the lawyers' problem." It's pretty clearly all of our problem. And so you want that division of labor to reflect what we want the system to be, rather than being incidental and outside of your specification. This reminds me of a podcast I listened to yesterday, the Robot Brains podcast by Pieter Abbeel, where he was interviewing Yann LeCun from Facebook about this very issue of what the objective function is for Facebook.
Yann LeCun was claiming that the actual objective function they use is not just engagement, but actually some proxies for users' enjoyment of their time. But what does that really mean? Someone sitting on the couch eating fast food might say they're enjoying their time, but from a psychologist's perspective, they're not setting themselves up for long-term happiness and success. So I guess it goes back to what you were saying in the very beginning about deciding on what value really means. Yeah, and this is again the oldest problem there is in the book. Aristotle's name for this was eudaimonia. It's human flourishing. What does it mean to build a system that doesn't just provide me what I expected it to provide, but one that actually improves my life, one that helps me aim my own life in a good way in the context of the particular activity? I think that an enormous proportion of the ethical and political problems that we face with RL and AI as a whole are actually very, very well understood by classical thinkers. They obviously didn't talk about RL or about robots. They had their own language for how we should think about these things. I'll just mention as an aside that the first several books of the Politics by Aristotle are about precisely this problem of how it is that artificial beings can be made to reliably serve humans in a way that approximates the good life. Of course, that's because for Aristotle that's what a slave was, and so that's the way this problem was understood at the time. But it's kind of funny to me that probably the single most influential work of political theory ever written, at least in Western philosophy, is specifically about the problem of how we make systems that are value-aligned with the people they are meant to serve. And yet I see very little work or engagement with these ideas in the context of AI safety, or AI theory, or even technical work on value alignment. And one of your papers, or one of the papers in this set, I'm not sure which one, was referring to these old stories, like the story of the genie in Arabian Nights, and the story I know from my childhood, the monkey's paw, where this thing could grant a wish and it would make your wish come true, but not in the way you want, which is so much like these failure modes of RL. So yeah, that makes total sense. You can connect it back to these things that maybe everyone is familiar with. Yeah, I think the example that Stuart Russell has used repeatedly is what he calls the King Midas problem, the story of King Midas being that he wished for the ability to turn anything he touched into gold. He almost immediately regretted that decision because he, you know, starved to death, or there was the prospect of that: you can't drink water, you can't eat food. I think that some versions of the story have it ending with him in despair reaching out to touch his daughter, and then she turns to gold too, so he basically kills her. I think this is actually an example of what I mean. We just need better metaphors to describe the problems that we as designers are facing when we are trying to build these systems.
And there's an extremely rich treasure trove in Western intellectual history, and maybe even beyond it, to better communicate these concerns to the public in ways that I think the public is in fact ready to understand. It's just that we've kind of sadly learned to not trust our ability to engage them, and so we haven't trusted our own ability to creatively reimagine these metaphors. I was going to give this example: the best example of an abstraction trap that I can think of in the context of this paper is the trolley problem. Someone who builds self-driving cars could tell you that this is not how self-driving cars actually work or how they're actually engineered. The idea that there is some deterministic formula for who the car should kill is one of the most dystopian things that I've ever heard, and also just completely off base for how these systems are actually going to exist as a fact of real people's lives. But we do like to think in terms of trolley problems, because in a superficial way it seems to connect a real ethical concern with what in principle could be a design decision, or what the public imagines designers doing when they're building these systems. So instead the question should be: what would stakeholders think about my system in terms of its actual performance? Would they understand it? Is a technical solution worthwhile? Or if not, what further feedback do I need from them for such a technical solution to be able to be envisioned? So I think trolley problems themselves are an excellent example of a problematic metaphor that is doing much more to limit our ability to engage the public than it is to actually empower the public to give us the information we need to really design these systems well. In this AI development paper you spoke about abstraction traps. Can you tell us about abstraction traps, and how do we tell if we're stuck in one? So I just gave the example of the trolley problem being a kind of abstraction trap. An abstraction trap is a way of thinking that you have a technical handle on a social problem, in a way that you have lost the ability to reflexively interrogate. So again, we could imagine a way of designing self-driving cars where in fact there was some rule that you could just encode for how they should behave in a particular situation like that, which would lead them to, quote unquote, deliberately kill one person rather than five, to use some kind of utility calculus. And a calculus like that is involved in how we think about MDPs and how we think about reinforcement learning, but not at that level of abstraction. So an abstraction trap is really just a way of saying that there are different levels of abstraction that are simultaneously at stake in what it would in fact mean to develop your system responsibly. Again, just to stick with the example of cars: there are different levels of abstraction in terms of, is the car safe from the standpoint of the pedestrian, is it safe from the standpoint of other drivers, is it safe from the standpoint of people in the car, is it safe from the standpoint of the municipal planner, is it safe from the standpoint of the department of transportation, is it safe from the standpoint of the state department of motor vehicles. Those are all different layers of abstraction that in their own way would have different criteria for what it would mean to build the system in a way that is safe. And the way you avoid these traps, well, actually a good example comes to mind.
I think it's this phenomenon called rubber ducking, where you have a rubber duck on your desk, and if you're stuck designing something or engineering something or programming something, you turn to the rubber duck and you try to explain what you're doing to the rubber duck. And that's meant to kind of lift you out of your cognitive disorientation or confusion. I think we need to rubber duck abstraction traps. And the way you do that is, you know, not by putting a rubber duck on your desk; it's by having some kind of line, some kind of through line or connection, to the stakeholders who are involved in the abstraction layer that you're working on. So for example, if you're optimizing traffic over a city grid, you should have some kind of connection to the municipal planner or the urban layout designer, whoever's job it is to have overseen the development of that city grid, whatever office it is. And you should basically rubber duck them: try to explain to them what you're doing and have them explain to you in turn, okay, that either makes sense or it doesn't, or here's how I would do it, or these are the features you need to make sure you include and these are the ones you don't have to include, because those are my agenda and not yours, or here's what kind of validation would make sense or not. It's a matter of registering how far removed your design assumptions are from what would make sense to the people who actually inhabit that layer of abstraction. We want to minimize that distance so that it doesn't impede the way we're developing these systems. Is there maybe an economic trap, where all of these, you know, socio-technical concerns are satisfied but things still turn out really bad for almost everyone? Like maybe systems end up technically safe and fair in terms of, you know, the FATE criteria, but then they produce economic outcomes that are super unfair. Or is that maybe outside of the scope of this type of work, and really lives in economics? I think it's a great example. I think it's absolutely in scope here. This is also the overlap with the PERLS paper and the PERLS reading group. So economic traps are a specific form of abstraction trap, where the system is behaving optimally but was misspecified in a way that could create enormous inequalities or forms of regulatory capture for which there may be no specifically technical solution. And I mean, the field of economics itself often falls into traps like this. That's something that other branches of social science often critique it for: saying that there's some mismatch between, you know, your assumptions about what is called homo economicus and how people actually live. You know, this assumption that people are utility-maximizing, that people are rational with respect to what they want. This style of investigation that behavioral economists have pioneered, or have elaborated in many contexts, is more or less problematic depending on how you think the domain in which people live actually works. And so economic traps are not new, but they're also very much what's at stake for PERLS. So we talked about educating the public, but earlier when we were talking about this issue, you mentioned the idea of organizing new publics as part of educating the public. Can you explain what you meant by that, and any other comments on how the public should be involved? Yeah, I think that we've circled this a few times now.
I think that the way we should engage the public should reflect the use of language, which, to be blunt, basically means the use of metaphor, to more or less accurately index the way the system is going to perform in terms of the actual expectations of the stakeholders who will be affected by its performance. So yeah, this is really going to come down to: are we as designers, and also just as AI developers at scale, going to be able to organize new publics in a way that they can articulate their own concerns, to themselves and to us, that makes us understand features of the environment that we otherwise wouldn't have thought to include in the model specification? I really mean this in a very technical sense: if we don't empower the public, not just by giving them more autonomy or by listening to them more, but by giving them opportunities to articulate their own experience of a system, then we won't actually know if we've misfeatured the environment. And again, we discussed this in the context of the so-called stack, the machine learning or RL stack that determines the way that model assumptions are related to the data, to the model, to the API, in this very specific hierarchy. The reason we need to empower the public is not just because it seems more ethical or more responsible or something kind of loosely humanistic like that. It's really so that we can identify the ways in which the layers of that hierarchy are themselves misspecified and unworkable with respect to the problem that the system is meant to solve, or just to interact with. We need to find ways of soliciting, from the people who have the criteria in their own experience for what the specification should be, ways to relay that back to the designers. I really don't consider this to be idealistic or utopian, just because I have analytically become convinced from my own work and collaboration with RL designers that we will need this, or else there will be all sorts of systems we're building whose behavior may very well seem optimal but will be very difficult to evaluate as good or bad. So do you hope that this kind of work eventually influences public policy, then? What do you think the pipeline is from here to the point where it's influencing policy? Right, I think it's crucial that we find a way to influence policy. I am myself moving into policy in my own work. We need better policy. Policy in many ways is going to be a bottleneck for RL specification. I'm pursuing this right now in the context of autonomous vehicles, trying to point out ways of reconceiving the relationship between the Department of Transportation and self-driving car firms in a way that will improve our own understanding of road features and the way that we go about building these computation stacks. We need more people who are straddling the line between research and policy, and as I said earlier, I think that we need, and hopefully we'll get, a new generation of practitioners who are comfortable doing so, because there's enormous low-hanging fruit there to be plucked if people are ready to do so. So besides your own work, what are the things happening at CHAI or elsewhere that you find really interesting in terms of ML or RL? Yeah, I mentioned multi-agent reinforcement learning topics. I think that they are extremely important. That field is going to grow rapidly.
Multi-agent RL has a kind of natural affinity with PERLS, and that's just because political economy is about thinking of the world in terms of multi-agency and multiple forms of agency, and there's a sense in which that has to happen even before we try to solve AI directly. So I think that multi-agent RL in general is a field that I try to keep a close eye on. Another I would mention, and you can imagine I read quite broadly so I'm trying to come up with a good answer, another field that I pay especially close attention to is law and political economy. There's a blog, I think it's called LPEBlog.org, I believe, that is an emerging network of scholars and lawyers who are just now finishing their law degrees on the East Coast, although there's a branch at Berkeley now as well, who are trying to reconceive of the law as itself kind of like code, basically, something that tries to determine the relationship between markets and politics rather than just being a reaction to the way that markets work; that it's actually an active ingredient in social order and social structure. And some of the work that they're doing, you know, talking about AI and content recommendation, is extremely insightful with respect to how we should specify AI systems in ways that the law can speak to, and in ways that the law can learn from, in order for us to make sure we're building systems that are good rather than just systems that conform to what the law already requires us to do or not do as designers. Cool, and we will have a link to that and everything else we've mentioned here on the episode page at talkrl.com. Okay, so Thomas, what do you see for yourself in the future? Do you have a clear long-term path in mind, or do you see a lot of exploration? I see a lot of exploration. That's how I got this far. I learned a tremendous amount from my collaborators and from keeping my mind open and pursuing all sorts of new ideas and projects intuitively. I don't spend too much time thinking about which projects are more or less likely to pan out, I think just because this is such a growth area. I mean, basically what we've been talking about in this podcast is the future of capitalism. I don't think that's going to become any less important. Any paper that seems like it's going to say something interesting with respect to that, I basically pursue. My near-term goal is trying to grow PERLS as a community. That's, I think, some of the most important work that I've done in my career, and that I'm likely to do for maybe the next couple of years. I would encourage anyone who is listening who might be interested in that to please reach out or email me, or visit our website at geesegraduates.org, which has a link to the PERLS description and provides information about how to sign up. Cool. I absolutely wish you and the PERLS community luck in helping adjust the outcomes for all of us in a better direction. Thomas Gilbert, I've got to say, this has been fantastic. I've rambled a lot, but it's only because I really enjoyed the topic and talking with you. It's been such a pleasure to have you. Thanks for sharing your time and your insights with TalkRL. Thanks a lot. It was a pleasure to be here. Give us a five-star rating on Apple Podcasts. If you don't think we deserve five stars, let us know on Twitter what we could do better.
[ { "end": 13, "start": 0, "text": " This is TalkAriel Podcast. All reinforcement learning, all the time." }, { "end": 20.5, "start": 13, "text": " Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan." }, { "end": 25.3, "start": 20.5, "text": " Thomas Crendel Gilbert is a PhD student at UC Berkeley's Center for Human Compatible" }, { "end": 29, "start": 25.3, "text": " AI, specializing in machine ethics and epistemology." }, { "end": 34, "start": 29, "text": " Thanks for joining us today, Thomas. Thanks for having me. How do you describe your focus area?" }, { "end": 40, "start": 34, "text": " My focus area of machine ethics research is basically the philosophical intersection of what are present," }, { "end": 49, "start": 40, "text": " largely distinct strands of technical research. These being AI safety, ML fairness, machine learning fairness," }, { "end": 57, "start": 49, "text": " and what's called human in the loop system design. I think it's very important that the AI systems we are building simultaneously" }, { "end": 62, "start": 57, "text": " be demonstrably safe, as well as fair, for distinct populations of people," }, { "end": 70, "start": 62, "text": " while also remaining under the direct control of the people with whom they will interface or interact." }, { "end": 74, "start": 70, "text": " The problem is that it's not entirely clear what it means to do all these things at once." }, { "end": 80, "start": 74, "text": " And in fact, that is itself a very old philosophical problem that AI is just now we discovering." }, { "end": 87, "start": 80, "text": " These being the problems of defining relevant domain features, the learned model, and the system interface." }, { "end": 90, "start": 87, "text": " These are not new, these are actually just being reinvented by AI today." }, { "end": 95, "start": 90, "text": " Can you tell us a bit about your journey to this focus? How did you get to this point?" }, { "end": 105, "start": 95, "text": " My undergraduate degree, I actually started off in astrophysics. I shortly thereafter switched to a double major in philosophy and sociology." }, { "end": 115, "start": 105, "text": " I got more interested in modeling people than in stars. I then was a full-bright scholar in Denmark studying the history of existentialism." }, { "end": 121, "start": 115, "text": " And then after that got an M-fill in political thought and intellectual history at the University of Cambridge." }, { "end": 127, "start": 121, "text": " Then I came to Berkeley after that. I had a got an MA in sociology first." }, { "end": 134, "start": 127, "text": " And while at Berkeley realized that most of my interests were going to get reinvented by AI one way or another." }, { "end": 139, "start": 134, "text": " So I decided to design my own PhD, named it machine ethics and epistemology." }, { "end": 145, "start": 139, "text": " That seemed like the most organic way of combining my interests in the future of society as well as the foundations of political theory." }, { "end": 154, "start": 145, "text": " That's so diverse. So you list the fields associated with your PhD as history and theory of AI, moral cognition, technology and delegation." }, { "end": 160, "start": 154, "text": " So for those of us that are not familiar with these fields, can you let us tell us what they're about?" }, { "end": 166, "start": 160, "text": " Yeah, it might be easiest to explain all those in reference to this classic notion of a trolley problem." 
}, { "end": 173, "start": 166, "text": " The question of who should the self-driving car kill? Should it kill one person to save five? That sort of thing." }, { "end": 187, "start": 173, "text": " History and theory of AI was basically the question, how would different major canonical AI theorists like John McCarthy, Marvin Minsky, as well as more recent deep learning people like Jeffrey Hinton solve the trolley problem?" }, { "end": 199, "start": 187, "text": " So that hinted on this question of do we really think that there is some precise set of abstract moral rules that we could program into some AI system for it to just sort of follow?" }, { "end": 206, "start": 199, "text": " Or instead, is it a question of how much or what kinds of data we would need to show the system for it to learn?" }, { "end": 211, "start": 206, "text": " How it is that people would actually themselves make a trolley problem decision." }, { "end": 219, "start": 211, "text": " So moral cognition was then about, okay, so what exactly is going on inside someone's head when they're doing a trolley problem?" }, { "end": 226, "start": 219, "text": " Can we isolate the neural mechanisms, the cognitive mechanisms for how they make these decisions?" }, { "end": 235, "start": 226, "text": " How are abstract philosophies, whether that's utilitarianism, whether that's Kantian ethics related to how the brain actually works?" }, { "end": 243, "start": 235, "text": " And there's a lot of interesting work in this field of moral cognition, cognitive psychology that tries to study this problem empirically." }, { "end": 253, "start": 243, "text": " And then finally, technology and delegation is about, okay, so what difference does it make if we have AI systems or machines making these decisions for us?" }, { "end": 265, "start": 253, "text": " So if we let them solve trolley problems, what distinctive moral calculus or problem is it's stake for us when we allow them to do that on our behalf?" }, { "end": 281, "start": 265, "text": " So that ends up being a lot about getting certain stakeholders engaged in these questions, having them explain what their own criteria are for dealing with morally complex questions and how confident we should have shouldn't be in automating how we answer those things." }, { "end": 288, "start": 281, "text": " So we had your CHI colleague, Michael Denison recently as well, but can you remind us what CHI is about?" }, { "end": 296, "start": 288, "text": " And also it's CHI part of a larger network of institutions that focus on these types of issues and how would you characterize CHI's place in that network?" }, { "end": 302, "start": 296, "text": " Like does it have certain specific areas of relative strength and specialization?" }, { "end": 314, "start": 302, "text": " CHI is working on technical solutions to this problem known as value alignment, the problem of how we get AI systems to learn our values as we intend them to be learned." }, { "end": 330, "start": 314, "text": " Largely through the way that AI systems can observe our own human behavior and our own decisions and preferences and from that extract some kind of objective function that would, that would itself define then what it is we want and how much we want it relative to other things." }, { "end": 336, "start": 330, "text": " CHI is probably most comparable to the technical AI safety labs, a deep mind and open AI." 
}, { "end": 345, "start": 336, "text": " It's different in that it's an academic lab, so it's led by Stuart Russell, who's a professor at Berkeley as well as Mark Nitzberg, who is the executive director." }, { "end": 352, "start": 345, "text": " I should add that almost all of CHI's work is highly technical, so these are AI theorists and computer scientists." }, { "end": 367, "start": 352, "text": " I'm distinctive in that I'm CHI's resident humanist in a sense, so what I do is collaborate with many other members of CHI to try to add an integrated socio-technical perspective on the problems that they are working on." }, { "end": 390, "start": 367, "text": " We might say there's like a kind of intellectual stack in ML where the top layer is general ideas in terminology, providing direction, and maybe there's some kind of middle layer that's more theoretical math and practical algorithms and down to engineering and deployment where the rubber heaps meets the road and things get deployed." }, { "end": 397, "start": 390, "text": " Do you see things that way or what part of the stack would you say that you're most focused on?" }, { "end": 404, "start": 397, "text": " In many ways my work consists in rethinking that stack to make it structurally aligned with human values." }, { "end": 418, "start": 404, "text": " So a lot of abstract discussions of value alignment tend to focus exclusively on the objective function or on the uncertainty that surrounds the objective function, on the general ideas and terminology of value." }, { "end": 423, "start": 418, "text": " Without questioning these other layers or the order in which those layers get stacked." }, { "end": 445, "start": 423, "text": " In my view the most pressing and interesting questions in value alignment actually lie in how all these things interact, how explicit conceptions of value, as well as mathematical representation, as well as problems of mechanism design, and finally engineering practice itself are themselves permitted to interact with each other." }, { "end": 457, "start": 445, "text": " You know why is it that computer science has a certain relationship with engineering and both of those have a certain relationship with data science and all of those have a certain relationship with the actual stakeholders or consumers of these systems." }, { "end": 464, "start": 457, "text": " This is what I mean by the political economy of reinforcement learning which we'll discuss more later on." }, { "end": 478, "start": 464, "text": " So let's talk about the hard choices paper you co-authored that is hard choices in artificial intelligence addressing normative uncertainty through sociotechnical commitments that was with doby yourself and mints." }, { "end": 481, "start": 478, "text": " So what is the just of this paper?" }, { "end": 505, "start": 481, "text": " The paper as you mentioned is co-authored with who at the time were fellow graduate students that Berkeley they've now graduated and the paper itself reinterprets many of the prominent positions in machine ethics right now by means of a basic distinction between what we call normative uncertainty and what philosophers have traditionally called vagueness." 
}, { "end": 531, "start": 505, "text": " So normative uncertainty is how many practitioners of machine learning and reinforcement learning have approached the problem of optimizing an AI system namely the notion that if we give it enough data and if we provide a sufficiently good learning algorithm eventually the system will figure out what good behavior actually means in a way that we don't have to directly specify." }, { "end": 553, "start": 531, "text": " But this itself actually presupposes an answer to a problem that philosophers have been thinking about since the time of Plato and Aristotle namely the question of how it is we account for a fundamental lack of clarity about the problem we are facing in other words when we don't have a deterministic problem formulation at hand." }, { "end": 579, "start": 553, "text": " How can we be confident that any such algorithm could arrive at good behavior period and the original example debated in ancient Greece was called the varieties paradox which was a technical name for this situation in which there's a certain number of grains of sand and the question was how many grains of sand does it take before that constitutes a heap of sand." }, { "end": 596, "start": 579, "text": " So it's this question of the heapness of the cranes and indeed there were there were different positions at the time for answering this question you could argue that there was in fact some objective answer to the number of grains that would make a heap even if we don't know what it is." }, { "end": 620, "start": 596, "text": " But perhaps if we had more information about the nature of the sand or about how large it is then we could we could discern an answer to the question you could also argue the different communities or city states might legitimately answer that question differently because they use the word heap to refer to slightly different things these are different language communities in other words." }, { "end": 649, "start": 620, "text": " So that was another position or you could argue in fact there is no answer that it's impossible to specify a deterministic formulation of heapness because the notion of heat might be inherently vague so these are all different approaches to how we resolve vagueness how we make sense of vagueness either as a feature of the world a feature of our minds or a feature of the communities that we belong to and the point we make in this paper is that all of these positions are actually represented." }, { "end": 676, "start": 649, "text": " In the machine ethics literature implicitly and instead of arbitrarily kind of going with the one that makes the most intuitive sense whether you're a computer scientist or an engineer or there are somebody who works on the AI governance formulation of AI the question instead should be what a principle approach to the problem of the agnus really means in the context of the particular system." }, { "end": 694, "start": 676, "text": " That is being evaluated for for development. So this paper refers to access central risk or ex-risk and also fairness accountability and transparency approaches to safety like are your concerns totally separate from these or more like a superset of these." 
}, { "end": 722, "start": 694, "text": " So what I call ex-risk and the fat style approaches are examples of two of these schools of thought that I was describing earlier ex-risk has a strong affinity with what philosophers have called epistemicism which is the view that they're as I was saying before that there is an objective answer to this problem but that we might be ignorant and present of what the answer is." }, { "end": 734, "start": 722, "text": " So what I think is the answer is that there are more assumes that the answer that could be learned by observing human behaviors or learning from human preferences with enough data at such a scale that eventually the answer can be found." }, { "end": 761, "start": 734, "text": " The fairness literature instead often assumes a position that philosophers call semantic indeterminacy which is the notion that different communities might well disagree about what a safe system actually means and that this has to be taken into account this indeterminacy must be considered when building a system to make that concrete a self-driving car might still not be safe even if it never kills people." }, { "end": 777, "start": 761, "text": " So if it's often getting in crashes, if it suddenly accelerates or decelerates in ways that may people feel uncomfortable, it's completely imaginable that stakeholders would not feel safe around it even though it's technically never going to cause any fatalities." }, { "end": 793, "start": 777, "text": " So our concerns in the paper are superset in the sense that we're saying look all of these assumptions may or may not be relevant as more or less valid interpretations of what a good system would mean in the particular context of the question we are asking." }, { "end": 805, "start": 793, "text": " So the question is really what do we want to do with this system? What do we want to call good rather than just treating one of these schools of thought as paradigmatic and just running with it?" }, { "end": 815, "start": 805, "text": " The paper also includes a case study involving the ACLU that's the American Civil Liberties Union. They're critiquing Amazon's face recognition system." }, { "end": 820, "start": 815, "text": " Can you remind us about what happened in that case and how do you interpret the problem in that case?" }, { "end": 834, "start": 820, "text": " Yes. So the ACLU made use of the recognition API designed by the Amazon team and found that the system, when applied to the faces of members of Congress," }, { "end": 843, "start": 834, "text": " was both inaccurate in its classification, so it got many of the faces wrong, and it was also racist in the way it was inaccurate." }, { "end": 853, "start": 843, "text": " So it was differentially more likely to misclassify the faces of black members of Congress than non-black members of Congress." }, { "end": 882, "start": 853, "text": " So the ACLU did this to show how inadequate the system was from a design standpoint. Interestingly, the Amazon team responded to justify what the purpose of the system was, namely that their API was deliberately designed to be very restrictive, such that they put enormous work into defining exactly which police departments could have access to it based off of standing relationships with those organizations." 
}, { "end": 897, "start": 882, "text": " And so therefore, the system is trustworthy because even though it might be technically inaccurate in certain instances, it's never meant to be used in those instances unless we completely trust the institution or the organization that's using it." }, { "end": 902, "start": 897, "text": " So it was sort of like a matter of saying you used it in a way it wasn't meant to be used." }, { "end": 917, "start": 902, "text": " And the ACLU responded saying that, well wait, if that's true, why has your team over the weeks that we've been debating this in public suddenly puts so much time and energy into updating the classifier to improve its technical accuracy?" }, { "end": 922, "start": 917, "text": " There seems to be a contradiction there. So I think that's actually a profound point." }, { "end": 932, "start": 922, "text": " What the ACLU was pointing out was that at least two different interpretations of vagueness were simultaneously at stake and the Amazon team was being inconsistent its use of them." }, { "end": 941, "start": 932, "text": " You can't simultaneously claim that an ironclad API could solve this problem, which is really a kind of appeal to resolving semantic indeterminacy." }, { "end": 961, "start": 941, "text": " And yet also claim that greater system accuracy will help, which is to reference a form of epistemicism. You can't both claim that racial bias is something you can technically minimize or eliminate and something that can be entirely avoided by just partnering with the right law enforcement agencies." }, { "end": 964, "start": 961, "text": " Their theory of the case itself was inconsistent." }, { "end": 976, "start": 964, "text": " So in the conclusion in this paper, it says this set of socio-technical commitments will need to be integrated into the training of engineers, data scientists and designers as qualifications." }, { "end": 981, "start": 976, "text": " So can you tell us what do you mean by socio-technical commitments here?" }, { "end": 995, "start": 981, "text": " A socio-technical commitment is sort of like a promise or a pledge that the developers of the system make to the stakeholders whose concerns are implicated in the system's specification." }, { "end": 1002, "start": 995, "text": " It's a way of asking to what extent am I as a designer responsible for how my system performs." }, { "end": 1017, "start": 1002, "text": " For example, if I'm deploying a facial recognition system and I claim it will perform according to a certain threshold of accuracy, what guarantees can I give not just that it will meet that threshold, but that that threshold is in fact the right one that it's the good threshold." }, { "end": 1027, "start": 1017, "text": " So it's both technical and normative, which is sort of how I'm defining socio-technical as the bridge or the boundary between those two things." }, { "end": 1034, "start": 1027, "text": " To make that concrete again, a system that is 99% accurate but whose stakes are life or death might still not be good." }, { "end": 1038, "start": 1034, "text": " It might still not be one stakeholder's want with good reason." }, { "end": 1048, "start": 1038, "text": " Whereas one that is 70% accurate, like just something simple, like a content recommendation in social media or the Pandora app recommending me songs." }, { "end": 1062, "start": 1048, "text": " That might be fine given that the stakes are so much lower, so this is a matter of context. 
The commitments are context specific and it's a way of indexing your relationship with stakeholders as a developer." }, { "end": 1066, "start": 1062, "text": " Can you tell us more about what you think this type of training might look like?" }, { "end": 1073, "start": 1066, "text": " This is an open question. We are considering this right now. I have some follow-up work exploring this idea." }, { "end": 1095, "start": 1073, "text": " I will say that I believe years from now, it will be considered very strange that AI designers were building algorithms for self-driving cars and drones and credit scoring systems without ever talking to a pedestrian or to a homeowner or to a disadvantaged community." }, { "end": 1120, "start": 1095, "text": " I think all the time about how we need AI theorists and engineers to do something like a medical residency, something like a clinical environment where they have to diagnose real systems that already exist and are affecting real people and then diagnose them in terms of the actual stakes as they are able to be observed." }, { "end": 1126, "start": 1120, "text": " Rather than just speculated about based off of what we think may or may not happen once they're deployed." }, { "end": 1137, "start": 1126, "text": " Right now, a lot of AI research is like a lot of very brilliant young people who are very committed to learn how to sail." }, { "end": 1149, "start": 1137, "text": " They then go on to spend the next several years or even in many cases, much of their careers sitting on the shore learning how to expertly tie different kinds of knots." }, { "end": 1157, "start": 1149, "text": " Instead, what we need is to put those people out on the water and actually go sailing, learn how to sail." }, { "end": 1176, "start": 1157, "text": " Learning the kind of knots is only really important so that once you're on the water, you can make a good choice in the context of, okay, given the wind speed and direction where we're trying to go, why should I tie a bowlin, rather than some other kind of knot if there are any sailors listening to this?" }, { "end": 1191, "start": 1176, "text": " They'll get that joke. It's a matter of situating your knowledge in the context of the stakes such that you're exercising good judgment rather than just some kind of claim to accuracy." }, { "end": 1200, "start": 1191, "text": " So I guess when you talk about engaging various types of stakeholders and more deep engagement that might be with the stakeholders that might be affected by the system." }, { "end": 1216, "start": 1200, "text": " I guess from the point of view of maybe corporations that just want to start deploying products and start profiting, what would compel these organizations to want to do that, all that extra work and all the costs associated with them?" }, { "end": 1222, "start": 1216, "text": " Is it something we would need to force them to do as a society with regulation or how might that work?" }, { "end": 1233, "start": 1222, "text": " I think that there is more than one approach here, but yes, in brief, I think that we need to change the incentives of these companies so that it is in their own self interest to do this." }, { "end": 1247, "start": 1233, "text": " And in fact, I think it is in their own self interest for these companies to do this because if you're deploying a system without taking these considerations into account, that system is not in fact going to be a good system." 
}, { "end": 1263, "start": 1247, "text": " It might work in the short term, it might help in conforming to some short term business model, but it is not actually aligned in any structural sense with our notion of collective good." }, { "end": 1285, "start": 1263, "text": " This is really, really important and I think massively misunderstood or underappreciated in the AI landscape right now, what it means to build a good AI system means not just that it is aligned with preferences that we can model, but that is aligned with where we want society to be headed." }, { "end": 1308, "start": 1285, "text": " And that the system is well aimed with respect to what kind of polity we want to be. And that's why I don't think you can separate questions of corporate governance, questions of regulation, questions of substantive conceptions of human value from technical questions of problem specification and optimization." }, { "end": 1327, "start": 1308, "text": " So in fact, part of the same sociotechnical landscape and this this straddling of that boundary is what all of my papers are trying to do to reveal questions at the intersection of of these two sides, rather than to" }, { "end": 1348, "start": 1327, "text": " purport to answer them in advance of being more clear and aware of the stakes of the question. Let's move on to your your white paper on pearls and autonomous vehicles, that is mapping the political economy of reinforcement learning systems, the case of autonomous vehicles by yourself, Thomas Colonel Gilbert." }, { "end": 1357, "start": 1348, "text": " So first to get help us get oriented, can you first explain the phrase political economy for those of us who might not be too familiar with it." }, { "end": 1377, "start": 1357, "text": " Yes, political economy is a very old term. So historically political economy is what social science basically was before it splintered off into modern academic specializations of psychology, economics, political science, sociology, history." }, { "end": 1400, "start": 1377, "text": " It's a combination of all of these fields and that's really just because it tries to examine how we define the good society, what is a good society, and what is the set of institutions, human behaviors, virtues that are needed to constitute that vision." }, { "end": 1415, "start": 1400, "text": " More specifically, political economy has tried to examine how it is that many of the same human values and preferences are expressed differently. They have different modalities, whether they're expressed economically or politically." }, { "end": 1430, "start": 1415, "text": " Just think of the way that a choice you're making as a consumer is different than one that you're making as a citizen, that voting with your dollar is not the same as actually voting." }, { "end": 1440, "start": 1430, "text": " So in terms of reinforcement learning to make this a little bit more technically grounded, it's useful to think about this from the standpoint of what makes optimization and specification different." }, { "end": 1454, "start": 1440, "text": " Markets are sort of like social optimizers for the things that we want that try to effectively match consumers and producers with the producers compete over providing some good or service that consumers are choosing." }, { "end": 1470, "start": 1454, "text": " Whereas politics is more like a social specifier. It's about how we collectively choose to define the things we want in terms of their importance for living well together." 
}, { "end": 1475, "start": 1470, "text": " Okay, that was helpful. So now what was the main idea in this paper?" }, { "end": 1492, "start": 1475, "text": " The central idea is to elaborate this relationship I'm seeing between future forms of RL optimization and different forms of monopoly power that have been studied by legal scholars and political philosophers." }, { "end": 1521, "start": 1492, "text": " A good example to illustrate this is a self-driving car we discussed this in the paper. I discuss it. If you use reinforcement learning to learn a particular kind of routing algorithm for a car, then as you scale up the fleet and the vehicle platform, you are essentially privatizing public infrastructure, namely the access to the road, because you are as a designer in the position of deciding what it means." }, { "end": 1533, "start": 1521, "text": " It means to drive optimally on those roads and in effect nudging other drivers and road users to conform to that resultant definition of driving." }, { "end": 1538, "start": 1533, "text": " So it is a different way of investigating how self-driving cars should drive." }, { "end": 1555, "start": 1538, "text": " So what we are in fact building when we build autonomous vehicles are monopolies in the making. At what point does it make more sense to think of these companies and services as public utilities, rather than what they now are, which is private corporate entities?" }, { "end": 1575, "start": 1555, "text": " So you talk about the reward hypothesis. Can you remind us what that is? Yes, the reward hypothesis, and I'm drawing from the formulation of rich Sutton and Michael Lidman, when I'm talking about it, is a kind of philosophical statement about how intelligent behavior can be defined rigorously." }, { "end": 1595, "start": 1575, "text": " The formal definition is that the hypothesis is the maximization of expected value of the cumulative sum of received scalar signal. And the idea that that maximization is all of what we mean by intelligent behavior, any intelligent behavior oriented to achieving some task." }, { "end": 1606, "start": 1595, "text": " The point there in layman's terms is that it should be possible to exhaustively specify what it means to do an activity well in terms of that activity." }, { "end": 1619, "start": 1606, "text": " So for example, if you're playing Super Mario Bros., every jump that you perform, every coin you collect, every enemy you kill, any amount of points you acquire, every level you beat," }, { "end": 1639, "start": 1619, "text": " somehow each of those things brings you either closer or farther away from beating the game and freeing Princess Toadstool at the end, that there's in fact some precise answer to this question in theory of what it would mean to play the game well based off of every action you take within the game." }, { "end": 1660, "start": 1639, "text": " In fact, I think most human activities are not clearly like this because most of the things humans value are not scalar in nature. They are rather pluralistic. They exist on different normative scales. They can point in different directions depending on the context. In other words, most human values are more like vector than they are like a scalar." 
}, { "end": 1682, "start": 1660, "text": " Instead, I think that when you really think through the word hypothesis, it makes more sense to approach reinforcement learning specification as a kind of translation of human concerns and priorities into a scalar conception of value rather than just assuming that scalar conception is just sort of there for our system to discover." }, { "end": 1700, "start": 1682, "text": " The problem of the designer faces then is how to make that translation a good one rather than a bad one. So some RL systems can include constraints as well as rewards like for safety or other reasons to constraints also fit into this reward hypothesis framework." }, { "end": 1715, "start": 1700, "text": " So the political economy of RL is about identifying and formalizing the constraints needed to complement the reward hypothesis. That is what I mean by a good translation of the domain." }, { "end": 1739, "start": 1715, "text": " When the paper I define the political economy of RL formally as the science of determining the limits of the hypothesis for a given domain, these limits have to be specified in order for the behavior of the system to be interpreted as good or bad by the other sorts of agents, namely other humans, stakeholders, perhaps other AIs affected by its behavior." }, { "end": 1766, "start": 1739, "text": " This is not meant to be a rejection of the hypothesis. It's really just to state, we state what it is, which is a hypothesis. And the way we evaluate hypotheses is by empirically investigating the domain and evaluating our different assumptions about how we think it works such that we arrive at a formulation of our system that is well mapped onto the integrity of that domain." }, { "end": 1776, "start": 1766, "text": " So it seems to me that the reward function in RL is making something explicit, but it's something that already exists. It's a way to balance between multiple concerns." }, { "end": 1789, "start": 1776, "text": " And I would say most organizations today are generally already optimizing balancing multiple concerns, but usually in an implicit way, like without numerical coefficients." }, { "end": 1799, "start": 1789, "text": " And we generally wouldn't know how they prioritize or make trade-offs exactly between these things internally until something goes wrong and we hear about in the news." }, { "end": 1812, "start": 1799, "text": " So when talking about how RL changes things in terms of political economy, do you think it's a difference in degree, like this large scale automation just makes the details of these reward functions like a bit more impactful?" }, { "end": 1821, "start": 1812, "text": " Or is it really a change of kind? Does RL take us somewhere very different due to these reward functions?" }, { "end": 1836, "start": 1821, "text": " Right, this is the key question. I don't think there is a deterministic answer to this question. It's the kind of question that actually my reading group on the political economy of RL is always trying to ask." }, { "end": 1847, "start": 1836, "text": " Until we identify the constraints on an RL system, we can't really understand whether or how differences of degree and kind are at stake." }, { "end": 1855, "start": 1847, "text": " So it is important to understand that reinforcement learning is often not just making something explicit, which already exists." }, { "end": 1869, "start": 1855, "text": " It's often proposing an interpretation of intelligent behavior that has never before written down. 
And there may be uncertainty about that definition, even from the standpoint of the designer, there could also be disagreement about it from the standpoint of stakeholders." }, { "end": 1876, "start": 1869, "text": " Or there could be lack of clarity about that definition's application to the particular deployment context." }, { "end": 1899, "start": 1876, "text": " And again, what I just said, those different dimensions of this uncertainty, that's actually the connection to the Hard Choices paper, because in fact, each of those problems, each of those ambiguities, has this legacy in the history of philosophy to these different schools of thought, namely epistemicism, semantic indeterminacy, and what we call in that paper, ontic vagueness." }, { "end": 1912, "start": 1899, "text": " The idea, basically, is that look, the political economy of RL is about investigating the domain to figure out how this structural transformation is varying in either degree or in kind." }, { "end": 1921, "start": 1912, "text": " Is your view that moving to RL means that organizations should be exposing, and maybe even publicly debating, the design of their reward functions?" }, { "end": 1927, "start": 1921, "text": " Or like, do you see a world in which reward functions will be or should be standardized?" }, { "end": 1942, "start": 1927, "text": " Like today we would expect that Tesla and Waymo have reward functions that may differ to some extent, but they're judged mainly on outcomes in terms of safety, performance, profitability, things like that." }, { "end": 1955, "start": 1942, "text": " Right, I believe it will be both. To some extent, these reward functions will need to be subject to external standards, and to some extent competing firms' functions can be allowed to vary," }, { "end": 1965, "start": 1955, "text": " and judged in terms of outcomes on various parameters of concern, whether that's for different cities, different neighborhoods, different highways, etc." }, { "end": 1984, "start": 1965, "text": " The point of this PERLS perspective that I'm outlining is for practitioners to get better at understanding this spectrum, and figuring out, according to appropriate assumptions about good behavior in the context of this city with this platform and this routing," }, { "end": 1991, "start": 1984, "text": " what are the features that should be allowed to vary, and what are the features that should not be allowed to vary?" }, { "end": 2013, "start": 1991, "text": " There's a balance implicit there between the standards we need to hold ourselves to in order for the system really to perform well, and also the legitimately distinct approaches to optimization that our platform might pursue based off of what users of that platform or users of roads may or may not want to choose to be a good partner." }, { "end": 2028, "start": 2013, "text": " So this paper mentions the case of potholes, in that vehicle fleets could structurally generate them in specific places depending on their routing." }, { "end": 2032, "start": 2028, "text": " So what do you see as a solution to that type of problem?" }, { "end": 2048, "start": 2032, "text": " Yes, so I don't think this is specific to reinforcement learning. For example, we already know this is happening with apps like Waze, which have caused congestion on particular routes once the app deems them to be more efficient than other routes for routing purposes."
}, { "end": 2066, "start": 2048, "text": " So I think the way this plays out with reinforcement learning is that we need to be thinking about what makes near and medium term solutions different than long term solutions. In the near and medium term, it means we need to be careful about the scale and degree of deployment that we make for self-driven cars." }, { "end": 2075, "start": 2066, "text": " Wearing down public roads with these systems is inevitable as they scale, as they are deployed more and more intensively." }, { "end": 2095, "start": 2075, "text": " That's just of a piece with what it means to test these systems well. So what we need to do is try and interact and consult with stakeholders about what modes of deployment are acceptable as these as this new kind of wearing down becomes a more and more salient feature of the domain." }, { "end": 2114, "start": 2095, "text": " In the long term, as I hinted earlier, I do think it means that self-driving car companies will need to be reconceived as providing a public service and regulated accordingly, namely as public utilities, even if they are or remain privately built and technically overseeing." }, { "end": 2135, "start": 2114, "text": " In other words, self-driving car firms will need to be found liable for the damages that their fleets do, and we will have to substantively reinterpret our conceptions of tort law, our conception of what sorts of harm constitute public damage in order to actually oversee what these what kind of service is being provided to us." }, { "end": 2149, "start": 2135, "text": " We don't have to do that. We don't have to reinterpret the law in this way, but I think that we should, and I also think that even if we don't, we should be honest about what that would mean, which would be that roads would no longer be public." }, { "end": 2157, "start": 2149, "text": " And we would really only retain access to them thanks to the beneficence of these private companies." }, { "end": 2173, "start": 2157, "text": " So I'm wondering if we have the political will or capacity to deal with these types of issues. Like it seems like to me, we live in this world where it's hard to get people to wear masks, even though it's clearly we're in a pandemic." }, { "end": 2180, "start": 2173, "text": " And then we have social media, polarizing people for ad clicks, and we're having trouble dealing with that issue, which seems to seem very straightforward." }, { "end": 2200, "start": 2180, "text": " So would the governments of tomorrow want to want to take that on, or would they be just overburdened and be glad to offload details of reward functions to the fang companies of the day, and then maybe haul them in for televised hearing when things go wrong at some point." }, { "end": 2227, "start": 2200, "text": " It's a great question, and it's one that I think about basically every day in the context of my research. And I don't have great answers, but I will highlight a theme of both the hard choices paper and of this paper as well, which is that I actually think we need much more democratic approaches to our own specification." }, { "end": 2246, "start": 2227, "text": " And to how we approach the problem of AI development more generally, that might sound strange because we live in a political moment where it's very hard to imagine politics going well and engaging stakeholders in a way that doesn't quickly become dysfunctional." 
}, { "end": 2259, "start": 2246, "text": " I mean, if you imagine some kind of just open Wild West style, you know, debate on Twitter about what the reward function for driving should be, I don't think it would go well." }, { "end": 2265, "start": 2259, "text": " And that's also not really what I mean by a democratic set of solutions." }, { "end": 2282, "start": 2265, "text": " I think that it's more a question of how are we able to hold these companies accountable in ways that it becomes in their own self interest to govern themselves in a more democratic fashion." }, { "end": 2311, "start": 2282, "text": " That doesn't necessarily mean that people inside the company should vote up or down on what the reward function should be. What it means is that we create channels for dissent within these companies and also as a path to external stakeholders so that we are able to have the types of feedback with the environment that will actually inform us whether the RL specification that we chose is good or inadequate." }, { "end": 2329, "start": 2311, "text": " So it might sound strange or ironic, but in fact, I really deeply believe from my collaborations with with computer scientists and AI theorists that the path to good specification is democratic commitments." }, { "end": 2344, "start": 2329, "text": " It is a leveraging of new types and forms of feedback to the environment that in fact constitutes a kind of democratic ethos. It is stakeholder engagement." }, { "end": 2365, "start": 2344, "text": " There should be ways in which citizens, municipal bodies, state level departments of motor vehicles, etc, are able to access the API are able to begin to question the featureization adopted by developers as the system is being deployed." }, { "end": 2375, "start": 2365, "text": " This is in the developer's interest because if you don't have this feedback, I don't see how you are going to be confident technically in the performance of your system." }, { "end": 2389, "start": 2375, "text": " You can watch your hands of it and say it is optimal, but in doing so, you are choosing to not investigate this more substantive question of how optimal performance relates to a substantive conception of good performance." }, { "end": 2396, "start": 2389, "text": " Those are not the same thing. In fact, that is what is at stake and what makes optimization different than specification." }, { "end": 2403, "start": 2396, "text": " So this is where I want the field to move and where I think it has to move." }, { "end": 2431, "start": 2403, "text": " I think computer scientists need to engage more with policy. I think that the opacity and the erosion of our current institutions, civic institutions and political norms needs to be confronted as a problem that demands a new set of institutions, a reformed understanding of our own civic commitments in the context of AI development." }, { "end": 2450, "start": 2431, "text": " I can't help but be reminded how political some of this stuff is. When you get right down to it, if a company is doing something with a large workforce of human labor versus in the future, they are doing it way more optimally with a large fleet of robots," }, { "end": 2472, "start": 2450, "text": " it is kind of the same thing, but it is totally different. It is almost like turning the capitalism dial way up and the speed of capitalism way up and making very clear, blatantly bald face clear, what the real objectives of the whole enterprise are." 
}, { "end": 2492, "start": 2472, "text": " Maybe environmentalists talk about capitalism as a mechanism to turn nature into money and wastelands or something like that. If that process gets sped up by a factor of umpteen orders of magnitude, we are going to have to really face that problem a lot more directly." }, { "end": 2508, "start": 2492, "text": " What does that reward function really mean? It might just be some formulas in a robot, but it also multiplied over these giant fleets, where does the interest of the public come into that equation?" }, { "end": 2514, "start": 2508, "text": " And it seems like that is what you are trying to do with your pearls research group." }, { "end": 2533, "start": 2514, "text": " Yes, so the pearls research group is it's an attempt to extend the themes of my of my white paper and of related work being done by other graduate students, mostly a Berkeley into a community and into a research agenda." }, { "end": 2562, "start": 2533, "text": " So it's an attempt to think about how it is that the systems that we are building are going to find themselves having a certain relationship with markets or a certain relationship with existing social institutions, whether that's politics, whether that's domestic settings, whether that's individual behavior online, whether that's traffic." }, { "end": 2567, "start": 2562, "text": " You know, we don't we don't decide these things as designers." }, { "end": 2574, "start": 2567, "text": " We can design the systems in whatever way we want, but we don't design them on terms of our own choosing." }, { "end": 2583, "start": 2574, "text": " Many of the terms of our that are given to us are a product of the way the world has been set up in advance of our system being deployed." }, { "end": 2597, "start": 2583, "text": " And so the aim of this group, this pearls group is to help people who are trying to already think about this problem, maybe largely in the context of specification of RL specification." }, { "end": 2607, "start": 2597, "text": " And I should also add a multi agent RL, I think, is an interesting field that's emerging in part just because they can't help but think this way." }, { "end": 2622, "start": 2607, "text": " They cannot help but not take the world for granted and reflect very deeply on how their agents are learning simultaneously from each other and from the way that they think other agents are learning about them and so on." }, { "end": 2630, "start": 2622, "text": " It requires you to think about the sociology, the structure of the world that they are learning to navigate." }, { "end": 2649, "start": 2630, "text": " So there there's an interesting way in which these emerging subfields of RL are kind of reinventing political economy in a way that I find very exciting, very intellectually interesting and also very profound in a in a political sense." }, { "end": 2663, "start": 2649, "text": " And also in an economic sense because really what we're going to be able to do with these systems is re-specify the institutions that already exist and specify institutions that have never existed before." }, { "end": 2669, "start": 2663, "text": " And that's very powerful. There's a lot of opportunity for optimism there." 
}, { "end": 2696, "start": 2669, "text": " And so the attempt of this community is really to kind of lovingly invite different branches of RL into this conversation as a way to advance their own technical work and just as a way to as I was suggesting before a way to improve and refine the metaphors and just the semantics of how the public at large should be thinking about RL." }, { "end": 2715, "start": 2696, "text": " Rather than just as an impending future that they can't do anything about it should be a way for them to re conceive of their own agency and to feel empowered to articulate their values on a new terrain, the terrain of reinforcement learning." }, { "end": 2727, "start": 2715, "text": " And notice one of the readings for your research group is societal implications of deep reinforcement learning by Jess Whitelstone and Kai Erulkomaran and and Crosby Matthew Crosby." }, { "end": 2738, "start": 2727, "text": " And I just want to know to listeners we had Kai on very recently and and we'll be talking to the author Jess Whitelstone about that paper next week." }, { "end": 2748, "start": 2738, "text": " Moving on to your other paper that's AI development for the public interest from abstraction traps to socio technical risks by Andrews at all." }, { "end": 2751, "start": 2748, "text": " Do you want to give us the low down on on this paper?" }, { "end": 2761, "start": 2751, "text": " Right. This paper is a collaboration with other members of my student group at Berkeley. So yeah, that would be so it's a co authored paper with all of us." }, { "end": 2773, "start": 2761, "text": " It's McCain andress who's now at the partnership on AI serideen myself Nathan Lambert who I believe you had on previously to this podcast and also Tom zick." }, { "end": 2790, "start": 2773, "text": " And the paper is more of an STS style investigation of different fields of AI that are emerging many of which we've discussed. So a safety fair machine learning human in the loop autonomy as well as control theory." }, { "end": 2805, "start": 2790, "text": " And in the paper we look to the history of these subfields to try to understand how they first became interested in different types of social risk and eventually claimed to have found technical solutions to those types of social risk." }, { "end": 2822, "start": 2805, "text": " So in brief, AI safety found a way to care about extinction, the problem of the complete eradication of life. Fair machine learning is much more specifically interested in inequity or inequality across subgroups and human in the loop." }, { "end": 2836, "start": 2822, "text": " The loop researchers tend to be more strictly concerned with accidents if your system just undergo some kind of mishap or some unanticipated rupture in assumptions that makes it break." }, { "end": 2847, "start": 2836, "text": " So what we what we try and look at in the paper is how these fields definitions of each of those risks are problematic and actually remain quite vague in certain ways." }, { "end": 2856, "start": 2847, "text": " So there is really this attempt to help these communities understand what they can learn from each other rather than just to critique those definitions." }, { "end": 2871, "start": 2856, "text": " It's sort of to look at how these these communities can look to each other to see what other ways there are of conceiving of the original problem that they see their formalizations as making tractable." 
}, { "end": 2888, "start": 2871, "text": " So do you consider socio-technics as that set of like safety fairness and human in a loop autonomy? And do you think that that's like an exhaustive list or are we just starting to to find what's contained in that set?" }, { "end": 2904, "start": 2888, "text": " I don't think it's an exhaustive list. I think of socio-technics as a question. It's the question of how we address the gaps between our definitions of the problem and our tools for working on the problem." }, { "end": 2922, "start": 2904, "text": " It's another way to say it is it's this problem of defining the relationship between your system, the system you're building and external reality. It's to recognize that there is a need for some interface there, but that the definition of that interface is vague or it's in precise." }, { "end": 2935, "start": 2922, "text": " And so there's these two levels of uncertainty. There's model uncertainty, which is something that you can minimize by reducing the model bias. But then there's also uncertainty about the model." }, { "end": 2949, "start": 2935, "text": " Is this the right model? What other assumptions do we need? Which assumptions are wrong? That includes this socio-technics problem. And that's really where it enters into the picture." }, { "end": 2962, "start": 2949, "text": " One way of illustrating that is I've seen papers where each community, for example, I have safety papers that reference the impending death of the human race or the end of human civilization." }, { "end": 2976, "start": 2962, "text": " And then they try to illustrate some technical intervention on this by a toy example of robots that are trying to figure out how to slice a cake in half to share it without explicitly negotiating." }, { "end": 3001, "start": 2976, "text": " From a technical standpoint, that toy example, it doesn't really matter. But as a metaphor, it's quite imprecise. If you were to explain this to someone who doesn't understand what a Markov decision process is or what inverse reinforcement learning means, they would be totally mystified by a paper that is illustrating existential risk through some kind of example that's so constrained." }, { "end": 3012, "start": 3001, "text": " And so, like, socio-technics is just a way of asking, how do we address situations like that in a way that doesn't make our own work misread?" }, { "end": 3030, "start": 3012, "text": " Because you want your work to set itself up for success. And setting yourself up for success means that you're able to illustrate and interpret the significance of your formal work in a way that the people who are going to be interacting with the actual system," }, { "end": 3043, "start": 3030, "text": " can understand, because only if they can understand it, can they provide you with the types of feedback that you would need to confirm the assumptions behind the original specification." }, { "end": 3052, "start": 3043, "text": " So we need to see all of these layers of socio-technical abstraction as in a workable relationship with each other." }, { "end": 3062, "start": 3052, "text": " Rather than as sort of not my problem, that's the public's problem or that's the stakeholder's problem or that's PR's problem, that's the lawyers problem." }, { "end": 3075, "start": 3062, "text": " It's pretty clearly all of our problem. And so you want that division of labor to reflect what we want the system to be, rather than just incidental and outside of your specification." 
}, { "end": 3089, "start": 3075, "text": " This reminds me of a podcast I listened to yesterday, the Robot Brains podcast by Peter Abiel, and he was interviewing Yon the Koon from Facebook about this very issue of what the objective function is for Facebook." }, { "end": 3102, "start": 3089, "text": " Yon the Koon was saying who's claiming that the actual objective function that uses not just engagement, but it's actually some proxies for users and enjoyment of their time." }, { "end": 3116, "start": 3102, "text": " But what does that really mean? Like someone sitting on the couch eating fast food might be saying they're enjoying their time, but from a psychologist perspective, they're not setting themselves up for long-term happiness and success." }, { "end": 3125, "start": 3116, "text": " So I guess it goes back to what you were saying in the very beginning about deciding on what value really means." }, { "end": 3134, "start": 3125, "text": " Yeah, and this is again, this is the oldest problem there is in the book. So Aristotle's name for this was Udaemonia. It's human flourishing." }, { "end": 3152, "start": 3134, "text": " What does it mean to build a system that doesn't just provide me what I expected to provide, but one that actually improves my life, one that helps me aim my own life in a good way in the context of the particular activity." }, { "end": 3168, "start": 3152, "text": " I think that an enormous proportion of the ethical and political problems that we face with RL and AI as a whole is something that are actually very, very well understood by classical thinkers." }, { "end": 3194, "start": 3168, "text": " They obviously didn't talk about RL or about robots. They had their own language for how we should think about these things. I'll just basically as an aside mentioned that the first several books of the politics by Aristotle are about precisely this problem of how it is that artificial beings can be made to reliably service humans in a way that approximates the good life." }, { "end": 3223, "start": 3194, "text": " Of course, that's because for Aristotle that's what a slave was. And so that's the way that this was understood at the time was this problem, but it's kind of funny to me that probably the single most influential work of political theory ever written, at least in Western philosophy, is specifically about the problem of how we make systems that are value aligned with the people they are meant to serve." }, { "end": 3236, "start": 3223, "text": " And yet I see very little work or very little engagement with these ideas in the context of AI safety or of AI theory or even of technical work on value alignment." }, { "end": 3262, "start": 3236, "text": " And one of your papers or one of the papers in this set and I'm not sure which one was referring to these also referring to these old stories like the stories of the genie in Arabian nights and the story that I know for my childhood, the monkeys paw where this thing could make a wish and it would make your wish come true but not in the way you want, which is so, so much like these failure modes of RL." }, { "end": 3270, "start": 3262, "text": " So, so yeah, that makes total sense. You can connect it back to these things that that maybe everyone everyone is familiar with." 
}, { "end": 3283, "start": 3270, "text": " Yeah, I think the example that Stuart Russell has used repeatedly is what he calls the the King Midas problem, which you know the story of King Midas being that he wished for the ability to touch anything and turned into gold." }, { "end": 3295, "start": 3283, "text": " He almost immediately regretted that decision because he you know starved to death or there was the prospect of that you can't drink, you can't drink water, you can't eat food." }, { "end": 3304, "start": 3295, "text": " I think that some versions of the story have it ending with him in despair reaching out to touch his daughter and then she turns to gold too, so he basically kills her." }, { "end": 3317, "start": 3304, "text": " I think that there's this is actually an example of what I mean. We just need better metaphors to being to describe the problems that we as designers are facing when we are trying to build these systems." }, { "end": 3331, "start": 3317, "text": " And there's an extremely rich treasure trove in Western intellectual history and maybe even beyond it to better communicate these concerns to the public in ways that I think the public is in fact ready to understand." }, { "end": 3343, "start": 3331, "text": " It's just that we've kind of sadly learned to not trust our ability to engage them. And so we haven't trusted our own ability to creatively reimagine these metaphors." }, { "end": 3354, "start": 3343, "text": " I was going to give this example the best example of an abstraction trap that I can think of in the context of this paper is a trolley problem." }, { "end": 3363, "start": 3354, "text": " Someone who builds self-driving cars could tell you that this is not how self-driving cars actually work or how they're actually engineered." }, { "end": 3383, "start": 3363, "text": " The idea that there is some deterministic formula for who the car should kill is one of the most dystopian things that I've ever heard and also just completely off base for how these systems are actually going to exist." }, { "end": 3402, "start": 3383, "text": " It's actually a fact of real people. But we do like to think in terms of trolley problems because in a superficial way it seems to connect a real ethical concern with what in principle could be a design decision or what the public imagines designers doing when they're building these systems." }, { "end": 3423, "start": 3402, "text": " So instead the question should be what would stakeholders think about my system in terms of its actual performance? Would they understand it? Is a technical solution worthwhile? Or if not, what further feedback do I need from them for such a technical solution to be able to be envisioned?" }, { "end": 3442, "start": 3423, "text": " So I think trolley problems themselves are an excellent example of a problematic metaphor that is doing much more to limit our ability to engage the public than it is to actually empower it to give us the information we need to really design these systems well." }, { "end": 3451, "start": 3442, "text": " In this AI development paper you spoke about abstraction traps. Can you tell us about abstraction traps and how do we tell if we're stuck in one?" }, { "end": 3473, "start": 3451, "text": " So I just gave the example of a trolley problem being a kind of abstraction trap. An abstraction trap is it's a way of thinking that you have a technical handle on a social problem in a way that you have lost the ability to reflexively interrogate." 
}, { "end": 3487, "start": 3473, "text": " So again, we could imagine a way of designing self-driving cars where in fact there was some some some rule that you could just encode for how they should behave in a particular situation like that." }, { "end": 3503, "start": 3487, "text": " They would lead them to quote unquote deliberately kill one person rather than five to use some kind of utility calculus and security calculus is involved in how we think about MDPs and how we think about reinforcement learning." }, { "end": 3523, "start": 3503, "text": " But not at that level of abstraction so an abstraction trap is really just a way of saying that there are different levels of abstraction that are simultaneously at stake in what it in fact would mean to develop your system responsibly." }, { "end": 3547, "start": 3523, "text": " Again, just to restrict the example of cars. There are different levels of abstraction in terms of is the cars safe from the standpoint of the pedestrian, is the cars safe from the standpoint of other drivers, is it safe from the standpoint of people in the car, is it safe from the standpoint of the municipal planner, is it safe from the standpoint of the department of transportation." }, { "end": 3552.24, "start": 3547, "text": " say from the standpoint of the state department of motor vehicles. Those are all different" }, { "end": 3559.96, "start": 3552.24, "text": " layers of abstraction that in their own way would have different criteria for what it would" }, { "end": 3568.56, "start": 3559.96, "text": " mean to build the system in a way that is safe. And the way you avoid these traps is by," }, { "end": 3573.84, "start": 3568.56, "text": " you know, it actually a good example. I think it's called, you know, this phenomenon of" }, { "end": 3579.2000000000003, "start": 3573.84, "text": " rubber ducking where you have a rubber duck on your desk. And if you're stuck designing" }, { "end": 3583.84, "start": 3579.2000000000003, "text": " something or engineering something or programming something, you turn to the rubber duck and" }, { "end": 3588.44, "start": 3583.84, "text": " you try and explain what you're doing to the rubber duck. And that's meant to kind of" }, { "end": 3595.48, "start": 3588.44, "text": " lift you out of your cognitive disorientation or confusion. I think we need to rubber duck" }, { "end": 3599.04, "start": 3595.48, "text": " abstraction traps. And the way you do that is, you know, not by putting a rubber duck" }, { "end": 3605.92, "start": 3599.04, "text": " on your desk, it's by having some kind of line, some kind of through line or connection" }, { "end": 3613.32, "start": 3605.92, "text": " to the stakeholders who are involved in the abstraction layer that you're working on." }, { "end": 3619.72, "start": 3613.32, "text": " So for example, if you're optimizing traffic over a city grid, you should have some kind" }, { "end": 3627.16, "start": 3619.72, "text": " of connection to the municipal planner or this the urban layout designer, whoever person's" }, { "end": 3633.2, "start": 3627.16, "text": " job it is to have overseen the development of that city grid, like whatever office it" }, { "end": 3637.72, "start": 3633.2, "text": " is. 
And you should basically rubber duck them and basically try and explain to them what" }, { "end": 3641.96, "start": 3637.72, "text": " you're doing and have them explain to you in turn, okay, that either makes sense or it" }, { "end": 3645.72, "start": 3641.96, "text": " doesn't or here's how I would do it or these are the features you need to make sure you" }, { "end": 3649.3999999999996, "start": 3645.72, "text": " can you include and one of the ones you don't have to include because those are my agenda" }, { "end": 3654.24, "start": 3649.3999999999996, "text": " and not yours. Or here's what kind of validation would make sense or not. Like it's a matter" }, { "end": 3664.8399999999997, "start": 3654.24, "text": " of registering the how far removed your design assumptions are from what would make sense" }, { "end": 3671, "start": 3664.8399999999997, "text": " to the people who actually inhabit that layer of abstraction. We want to minimize that" }, { "end": 3677.12, "start": 3671, "text": " distance so that it doesn't impede the way we're developing these systems." }, { "end": 3683.3599999999997, "start": 3677.12, "text": " Is there maybe an economic trap where all of these, you know, socio-technical concerns" }, { "end": 3689.08, "start": 3683.36, "text": " are satisfied but things still turn out really bad for almost everyone. Like maybe systems" }, { "end": 3696.96, "start": 3689.08, "text": " end up technically safe and fair in terms of, you know, the fate criteria, but then they" }, { "end": 3703.6, "start": 3696.96, "text": " produce, you know, economic outcomes that are super unfair. Or is that maybe outside" }, { "end": 3709.32, "start": 3703.6, "text": " of the scope of this type of work and really lives in economics?" }, { "end": 3714.88, "start": 3709.32, "text": " I think it's a great example. I think it's absolutely in scope here. This is also the" }, { "end": 3721.2000000000003, "start": 3714.88, "text": " overlap with the pearls paper and the pearls reading group. So economic traps are a specific" }, { "end": 3726.6800000000003, "start": 3721.2000000000003, "text": " form of abstraction trap where the system is behaving optimally but was mispecified" }, { "end": 3734, "start": 3726.6800000000003, "text": " in a way that could create enormous inequalities or forms of regulatory capture for which there" }, { "end": 3741.28, "start": 3734, "text": " may be no specifically technical solution. And I mean, the field of economics itself often" }, { "end": 3746.76, "start": 3741.28, "text": " falls into traps like this. And so, you know, that's something that other branches of social" }, { "end": 3753.36, "start": 3746.76, "text": " science often try and critique it for doing is to say that there's some mismatch between," }, { "end": 3757.62, "start": 3753.36, "text": " you know, your assumptions of what are called homo-economicists and how people actually" }, { "end": 3762.68, "start": 3757.62, "text": " live. You know, this assumption that people are utility maximizing, that people are rational" }, { "end": 3769.2, "start": 3762.68, "text": " with respect to what they want. This style of investigation that behavioral economists" }, { "end": 3777.2, "start": 3769.2, "text": " have pioneered or have elaborated in many contexts, that's more or less problematic depending" }, { "end": 3785, "start": 3777.2, "text": " on how you think the domain actually works in which people live. 
And so, economic traps" }, { "end": 3788.7999999999997, "start": 3785, "text": " are not new but they're also very much what's at stake for pearls." }, { "end": 3794.0800000000004, "start": 3788.8, "text": " So we talked about educating the public but earlier when we were talking about this issue," }, { "end": 3801.36, "start": 3794.0800000000004, "text": " you mentioned the idea of organizing new publics as part of educating the public. Can you" }, { "end": 3806.6800000000003, "start": 3801.36, "text": " tell, can you explain what, what did you mean by that and any other comments on how the" }, { "end": 3807.96, "start": 3806.6800000000003, "text": " public should be involved?" }, { "end": 3813.5600000000004, "start": 3807.96, "text": " Yeah, I think that we've circled this a few times now. I think that the way we should" }, { "end": 3823.2799999999997, "start": 3813.56, "text": " engage the public should reflect the use of language which to be blunt basically means" }, { "end": 3832.84, "start": 3823.2799999999997, "text": " the use of metaphor to more or less accurately index the way the system is going to perform" }, { "end": 3840.12, "start": 3832.84, "text": " in terms of the actual expectations of the stakeholders who will be affected by its performance." }, { "end": 3846.8399999999997, "start": 3840.12, "text": " So yeah, this is really going to ultimately come down to, are we as designers and also" }, { "end": 3853.24, "start": 3846.8399999999997, "text": " just as AI developers at scale, are we going to be able to organize new publics in a way" }, { "end": 3862.64, "start": 3853.24, "text": " that they can articulate their own concerns to themselves and to us that make us understand" }, { "end": 3867.68, "start": 3862.64, "text": " features of the environment that we otherwise wouldn't have thought to include in the model" }, { "end": 3869, "start": 3867.68, "text": " specification." }, { "end": 3876.04, "start": 3869, "text": " I really mean this in a very technical sense that if we don't empower the public, not just" }, { "end": 3884.24, "start": 3876.04, "text": " by giving them more autonomy or by listening to them more, but by giving them opportunities" }, { "end": 3891.24, "start": 3884.24, "text": " to articulate their own experience of a system, then we won't actually know if we've misfeatured" }, { "end": 3897.44, "start": 3891.24, "text": " the environment. There's this kind of, and again, we discussed this in the context of" }, { "end": 3903.88, "start": 3897.44, "text": " the so-called stack, the machine learning or the RL stack that determines the way that" }, { "end": 3908.08, "start": 3903.88, "text": " model assumptions are related to the data, related to the model are related to the API" }, { "end": 3912.28, "start": 3908.08, "text": " are related to this very specific hierarchy." }, { "end": 3917.56, "start": 3912.28, "text": " The reason we need to empower the public is not just because it seems more ethical or" }, { "end": 3923.52, "start": 3917.56, "text": " it seems more responsible or something kind of loosely humanistic like that." }, { "end": 3930.8, "start": 3923.52, "text": " It's really so that we can identify the ways in which that hierarchy are themselves" }, { "end": 3938, "start": 3930.8, "text": " misspecified and unworkable with respect to the problem that the system is meant to solve" }, { "end": 3940.88, "start": 3938, "text": " or just to interact with." 
}, { "end": 3947.7599999999998, "start": 3940.88, "text": " We need to find ways of soliciting from the people who have the criteria in their own" }, { "end": 3955, "start": 3947.76, "text": " experience for what the specification should be to relay that back to the designers." }, { "end": 3959.92, "start": 3955, "text": " I really don't consider this to be idealistic or utopian just because I just analytically" }, { "end": 3966.1600000000003, "start": 3959.92, "text": " have become convinced from my own work and collaboration with RL designers that we will" }, { "end": 3972.1200000000003, "start": 3966.1600000000003, "text": " need this or else there will be all sorts of systems we're building whose behavior may" }, { "end": 3978.04, "start": 3972.12, "text": " very well seem optimal but it will be very difficult to evaluate as good or bad." }, { "end": 3982.92, "start": 3978.04, "text": " So do you hope that this kind of work eventually influences public policy then I guess, right?" }, { "end": 3988.44, "start": 3982.92, "text": " What do you think the pipeline is from here to the point where it's influencing policy?" }, { "end": 3993.2, "start": 3988.44, "text": " Right, I think it's crucial that we find a way to influence policy." }, { "end": 3998.04, "start": 3993.2, "text": " I am myself moving into policy in my own work." }, { "end": 4005.04, "start": 3998.04, "text": " We need better policy. Policy in many ways is going to be a bottleneck for RL specification." }, { "end": 4009.88, "start": 4005.04, "text": " I'm pursuing this right now in the context of autonomous vehicles and trying to point" }, { "end": 4017.4, "start": 4009.88, "text": " out ways of reconceiving the relationship between the Department of Transportation and self-driving" }, { "end": 4025.04, "start": 4017.4, "text": " car firms in a way that will improve our own understanding of road features and the way" }, { "end": 4028.36, "start": 4025.04, "text": " that we go about building these computation stacks." }, { "end": 4033.32, "start": 4028.36, "text": " We need more people who are straddling the line between research and policy and as I said" }, { "end": 4039.72, "start": 4033.32, "text": " earlier I think that we need and hopefully we'll get a new generation of practitioners" }, { "end": 4045.72, "start": 4039.72, "text": " who are comfortable doing so because there's enormous slow-hanging fruit there to be" }, { "end": 4047.8, "start": 4045.72, "text": " plodged if people are ready to do so." }, { "end": 4051.92, "start": 4047.8, "text": " To besides your own work, what are the things that are happening at Chai or elsewhere that" }, { "end": 4055.28, "start": 4051.92, "text": " you find really interesting in terms of ML or RL?" }, { "end": 4058.2400000000002, "start": 4055.28, "text": " Yeah, I mentioned multi-agent reinforcement learning topics." }, { "end": 4061.76, "start": 4058.2400000000002, "text": " I think that they are extremely important." }, { "end": 4065.56, "start": 4061.76, "text": " That field is going to grow rapidly." }, { "end": 4071.6800000000003, "start": 4065.56, "text": " Multi-agent RL has a kind of natural affinity with pearls and that's just because political" }, { "end": 4077.4, "start": 4071.6800000000003, "text": " economy is about thinking of the world in terms of multi-agency and multiple forms of agency" }, { "end": 4084.84, "start": 4077.4, "text": " and that there's a sense in which that has to happen even before we try to solve AI directly." 
}, { "end": 4090.04, "start": 4084.84, "text": " So I think that multi-agent RL in general is a field that I try and keep a close eye" }, { "end": 4091.44, "start": 4090.04, "text": " on." }, { "end": 4096.4400000000005, "start": 4091.44, "text": " Another I would mention, you can imagine I read quite broadly so I'm trying to come up" }, { "end": 4098.4, "start": 4096.4400000000005, "text": " with a good answer." }, { "end": 4105.2, "start": 4098.4, "text": " Another field that I pay especially close attention to is law and political economy which" }, { "end": 4112.5199999999995, "start": 4105.2, "text": " is there's a blog, I think it's called it's LPEBlog.org I believe, that is an emerging network" }, { "end": 4119.44, "start": 4112.5199999999995, "text": " of scholars and lawyers who are just now finishing their law degrees on the East Coast, although" }, { "end": 4125.5199999999995, "start": 4119.44, "text": " their branch is actually at Berkeley now as well, who are trying to reconsive of the law" }, { "end": 4134.599999999999, "start": 4125.5199999999995, "text": " as itself kind of like code basically that tries to determine the relationship between" }, { "end": 4139.96, "start": 4134.6, "text": " markets and politics rather than just being a reaction to the way that markets work that" }, { "end": 4144.04, "start": 4139.96, "text": " it's actually an active ingredient in social order and social structure." }, { "end": 4148.6, "start": 4144.04, "text": " And I think some of the work that they're doing, you know, talk about AI and content recommendation," }, { "end": 4154.6, "start": 4148.6, "text": " I think some of the work they're doing is extremely insightful with respect to how we" }, { "end": 4161.92, "start": 4154.6, "text": " should specify AI systems in ways that the law can speak to and in ways that the law can" }, { "end": 4167.12, "start": 4161.92, "text": " learn from in order for us to make sure there were building systems that are good rather" }, { "end": 4175.12, "start": 4167.12, "text": " than just systems that conform to what law already requires us to do or not do as designers." }, { "end": 4180.12, "start": 4175.12, "text": " Cool and we will have a link to that and everything else we've mentioned here on the episode" }, { "end": 4182.12, "start": 4180.12, "text": " page at talkarell.com." }, { "end": 4186.8, "start": 4182.12, "text": " Okay, so Thomas, what do you see for yourself in the future?" }, { "end": 4193.12, "start": 4186.8, "text": " Do you have a clear long-term path in mind or do you see a lot of exploration?" }, { "end": 4195.76, "start": 4193.12, "text": " I see a lot of exploration." }, { "end": 4196.92, "start": 4195.76, "text": " That's how I got this far." }, { "end": 4204.320000000001, "start": 4196.92, "text": " I learned a tremendous amount from my collaborators and from keeping my mind open and pursuing all" }, { "end": 4207.4800000000005, "start": 4204.320000000001, "text": " sorts of new ideas and projects intuitively." }, { "end": 4212.12, "start": 4207.4800000000005, "text": " I don't spend too much time thinking about which projects are more or less likely to" }, { "end": 4217.04, "start": 4212.12, "text": " pan out, I think just because this is such a growth area." }, { "end": 4222.12, "start": 4217.04, "text": " I mean, basically what we've been talking about is the future of capitalism in this podcast." }, { "end": 4225.92, "start": 4222.12, "text": " I don't think that that's going to become any less important." 
}, { "end": 4229.92, "start": 4225.92, "text": " Any paper that seems like it's going to be saying something interesting with respect" }, { "end": 4232.2, "start": 4229.92, "text": " to that, I basically pursue it." }, { "end": 4236.2, "start": 4232.2, "text": " My near-term goal is trying to grow pearls as a community." }, { "end": 4240.5599999999995, "start": 4236.2, "text": " That's I think some of the most important work that I've done in my career and that I'm" }, { "end": 4244.400000000001, "start": 4240.56, "text": " likely to do for maybe the next couple of years." }, { "end": 4249.92, "start": 4244.400000000001, "text": " I would encourage anyone who is listening who might be interested in that to please reach" }, { "end": 4257.04, "start": 4249.92, "text": " out or email me or visit our website if you go to geyscraduates.org." }, { "end": 4262.240000000001, "start": 4257.04, "text": " That has a link to the pearls description and provides you information about how to sign" }, { "end": 4263.240000000001, "start": 4262.240000000001, "text": " up." }, { "end": 4264.240000000001, "start": 4263.240000000001, "text": " Cool." }, { "end": 4271.719999999999, "start": 4264.24, "text": " I absolutely wish you and the pearls community luck in helping adjust the outcomes for all" }, { "end": 4273.88, "start": 4271.719999999999, "text": " of us in the better direction." }, { "end": 4276.8, "start": 4273.88, "text": " Tom Skillbert, I got to say, this has been fantastic." }, { "end": 4280.679999999999, "start": 4276.8, "text": " I've rambled a lot, but it's only because I really enjoyed the topic and talking with" }, { "end": 4281.679999999999, "start": 4280.679999999999, "text": " you." }, { "end": 4284.04, "start": 4281.679999999999, "text": " It's been such a pleasure to have you." }, { "end": 4287.28, "start": 4284.04, "text": " Thanks for sharing your time and your insights with talkorrel." }, { "end": 4288.76, "start": 4287.28, "text": " Thanks a lot." }, { "end": 4318.04, "start": 4288.76, "text": " It was a pleasure to be here." }, { "end": 4322.92, "start": 4319.76, "text": " Three, give us a five-star rating on Apple podcasts." }, { "end": 4351.4400000000005, "start": 4322.92, "text": " If you don't think we deserve five stars, let us know on Twitter what we could do better." } ]
Marc G. Bellemare
Marc G. Bellemare shares insight on his work including Deep Q-Networks, Distributional RL, Project Loon and RL in the Stratosphere, the origins of the Arcade Learning ...
https://media.transistor…49e.mp3?src=site
This is TalkRL Podcast. All reinforcement learning, all the time. Interviews with brilliant folks across the world of RL. I'm your host, Robin Chauhan. So I am super excited to introduce our guest today. Professor Marc Bellemare is a research scientist at Google Research, an adjunct professor at McGill University and a Canada CIFAR AI Chair. Thanks so much for joining us today, Professor Bellemare. Thank you, it's a real pleasure to be here today. So how do you describe your area of focus? Right, so that's a great question to start. I'm a reinforcement learning researcher. So really, I care about all things reinforcement learning. But if I want to narrow down a little bit, I'd say I care about two things primarily. The first one, I would say, is the problem of representation learning or learning representations. And the other problem is the problem of exploration. And the way I think of these two problems is basically, you know, how do we think or understand how any intelligent agent describes in their brain or in their machines what they know. And then how do they behave or how do they act on the basis of that knowledge. And in some sense, you know, to me, this is really the core of artificial intelligence, especially when we think about agents and reinforcement learning. And as humans, we're incredibly good at this. So let me give you an example. When I've moved to a new city in the past, at first, you know, none of the streets, none of the signs, none of the landmarks were known to me. And so what do we do? Well, you know, at first we start exploring the, maybe the neighborhood looking for a grocery store, looking for a pub, looking for a park. And we're very good in general at sort of making this mental map very quickly from very few experience samples. And knowing where to go next, you know, I found a grocery store. I'm not going to go look for more grocery stores, even though there might be a better one just around the corner. So can you tell us a bit about your path in coming to RL? How did you end up in RL? I've been excited about reinforcement learning since my early days as an undergraduate student. And actually before then, I can sort of date this back to my teenage years, when I was really interested in AI. Actually, unfortunately I can't find it anymore, but there used to be this GeoCities webpage where I very naively had laid out a plan for doing AI research. I think I was 11 or 12 years old at that point. But I was lucky at McGill University to be introduced to RL specifically by working with Professor Doina Precup, who's in Montreal, who was teaching the AI class. And then I really loved the idea of reinforcement learning and it made sense to me that AI should be about learning. And so from that point on, we worked together. I joined her lab to be an undergraduate research assistant, did my master's with Professor Precup. And then after that, I went to the University of Alberta where Richard Sutton was and is and many other phenomenal researchers in RL. So it was accidental that I ran into RL, but I loved it. It was love at first sight. And since then, it's been just following where RL is happening and what's exciting in the field. So as many of our listeners will know, you've been involved in many of the important advances in RL research, including, I'll just list a few here, co-authoring the DQN Nature paper that arguably started the Deep RL revolution.
Introducing the ALE, the Arcade Learning Environment, which has been a central RL benchmark and still is, and of course, distributional RL. So how much of this path would you say was planned in advance, or did it involve a lot of exploration or luck? It's hard to separate luck from plan. Definitely, I didn't come into this thinking, I'm going to start my research in reinforcement learning as an undergrad, and then 11 years down the road, or was it 15 years down the road, I'll have this distributional reinforcement learning idea. I'm very much someone who likes to think of opportunities as something to be taken or not taken. I think in many cases it's a mix of being in the right place at the right moment, but also challenging myself to be in those places, and, one way to think about it maybe, to avoid early local minima and to try to push the boundaries of what we know and challenge what other people think about the field. Specifically, if I think about distributional reinforcement learning, it's a great example for me of a project that simmered for a very long time before we eventually put out the paper in 2017, and it's still something ongoing now. The project actually started very early in my time at DeepMind. I was working at the time with Joel Veness, who I had worked with during my PhD at the University of Alberta, and he had this idea of predicting the probability distributions of random returns. At the time this seemed very strange and esoteric. We worked on this, and to this day I think it's phenomenal work: we actually used a compression algorithm to do reinforcement learning. It's a bit wild. And when we were done, there were actually a few open questions. We looked at them and said, we have no idea how to deal with these problems, but it feels like we should work on them. It took about three years to eventually get to distributional RL. So there was no plan to get there, but the question was there, and when it felt like we had the right pieces in place, then, with other coauthors, we actually took on this problem. So I really like to think of the work I'm most proud of as a build-up of experience rather than a single idea. So we've seen an explosion in work building on the seminal DQN letter that you co-authored in Nature in 2015, and more variants seem to show up on arXiv almost every week. Google Scholar says there are over 14,000 citations for that paper. So when you did that work, did you have a sense that you were creating this whole new field? Right. It's pretty amazing the amount of interest and the revolution that the DQN algorithm created. It's worth pointing out that people had used neural networks with reinforcement learning before: dating back all the way to Gerald Tesauro's TD-Gammon, which explicitly used a network of sigmoid units to learn to play backgammon, and when Andrew Ng and Pieter Abbeel flew helicopters in the early-to-mid 2000s, they were also using neural networks as part of the project. I think the big revolution with DQN was taking this to the next level. In fact, Martin Riedmiller was a member of the team who worked on this; he'd also been using neural networks in similar contexts. But with Atari, we had an extra piece, which is that we wanted the system to be general purpose. And this was really a game changer, right, to say you have 60 games and you need to play all of these 60 games.
And we'd struggled during my PhD to come up with the right solution to this problem; we could only think of heuristics. DQN was revolutionary because it said, here is one way you do it, in a very clean and simple manner. Now, you asked, did we know we were going to create this whole field at the time? The paper was incredibly controversial because it went against, I think, a lot of the things we thought were true or important in reinforcement learning. So let me say it actually took me about two years to even think that this was an important result, in the sense that this is how we should do things from there on. There's a great anecdote about what changed my mind. It happened that the DQN work had been done on roughly 55 games from the Atari 2600, but the paper that I'd written during my PhD had a different set of games; there were three games that hadn't been included in the DQN paper for various engineering reasons. So I saw these games and I thought, here's my chance to prove to people that they're wrong about deep neural networks. I'll run DQN on these three games and it will fail miserably; I'll collect my own human scores, and, I'm exaggerating a little bit here, but effectively this is a real test set. And lo and behold, I trained DQN on these three games and it beat me single-handedly on all three, and I thought, that's it, there's just no going around this evidence. So I think when David Silver introduced you at NeurIPS 2020, he described your work on distributional RL as one of the most important innovations in RL to date; I'm paraphrasing there. So I wonder, besides distributional RL, would you describe any other innovations since DQN as being very important to the theory and practice of RL on that level? What types of things might we consider fundamental advances versus more incremental improvements? I think there's been a lot of both. Sometimes even things we might think of as incremental still have long-term value, because they get developed over multiple papers, and sometimes they're important even though we haven't finished exploring them in some sense. If I think of prioritized replay, which is something we're actually revisiting right now, I think prioritized replay has been an important piece in the puzzle; I don't think we fully understand it just yet. Certainly, I think a major technical achievement, almost a paradigm shift, is the idea of doing distributed computing and distributed reinforcement learning: the idea that if we have a simulator, we can now go and train or run hundreds, if not thousands, of agents in parallel to collect the data, and also distribute the computation, the learning part, across multiple accelerators. That's been fundamental in all the projects where, right now, the only way we know how to solve the problem is by throwing a massive amount of compute at it. It might bring the training time down from years to a matter of weeks, and that's day and night for any kind of practical application. So DQN itself was relatively simple, and since then complexity has gone up quite a bit. If we look for example at Agent57, which is also targeting the ALE, you'd have to read a lot of papers to understand all the different components in Agent57.
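As a quick aside on the prioritized replay idea mentioned above: below is a minimal sketch of proportional prioritized sampling, in the spirit of Schaul et al.'s prioritized experience replay. The class name, constants, and list-based storage are illustrative choices of mine, not from the episode or from any particular codebase.

```python
import numpy as np

class ProportionalReplay:
    """Toy proportional prioritized replay: transitions are sampled with
    probability p_i^alpha / sum_k p_k^alpha, where p_i tracks the TD error."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities = [], []

    def add(self, transition, priority=1.0):
        if len(self.data) >= self.capacity:   # drop the oldest transition when full
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        probs = np.asarray(self.priorities) ** self.alpha
        probs = probs / probs.sum()
        idx = np.random.choice(len(self.data), size=batch_size, p=probs)
        return idx, [self.data[i] for i in idx]

    def update_priorities(self, idx, td_errors, eps=1e-6):
        # larger recent TD error => more likely to be replayed
        for i, err in zip(idx, td_errors):
            self.priorities[i] = abs(float(err)) + eps
```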
So I wonder how you feel about where things are going in terms of figuring out what the key components are that are needed to do RL well. Is that process just getting started, or are we nearly there in terms of figuring out what that is? How do we know when we've arrived? I think a challenge in knowing what we need and what we don't need is that, as a research community, I think we need more tests of the methods in new settings. An example I like when I think about this question in particular is the UCT algorithm, which was effectively designed for search in large environments and really had its heyday in computer Go. UCT is effectively a very fast search technique based on a few simple principles, and it's often thought of as a very simple idea that's very difficult to break in code. I think in many ways the state of reinforcement learning right now is almost the opposite of this: we have a lot of bells and whistles, and it's not clear that they're reliable or robust. But that said, I think the pieces are there; we just need to figure out, through more trial and error, maybe through more experimentation, which parts matter. So I think we have most of the parts, and if I look at our experience working with Loon on flying balloons with deep reinforcement learning, there we made the choice of keeping it simple. It was a choice we had to make just because we were building everything from the ground up, and when you're building everything from the ground up, anything you leave in the system that you haven't tried out might cause you trouble down the road. So we see agents and algorithms getting more complex. Do you think that diversity and complexity will continue, and specialization too? Or do you see things unifying at some point? It seems like there's almost a family tree of RL agents and algorithms; do you see that continuing to split, or some kind of unification happening? I don't think we'll see some kind of unification. It's always been a challenge in reinforcement learning, given the vast diversity of problems that we want to use our algorithms on, that each of these problems maybe needs to be handled a bit differently. An analogy here is that if we think about computer vision and, say, natural language processing, these two things are fairly different perceptual spaces, but the kinds of problems that people look at are also pretty different. Continuous control and, let's say, Atari are not as distinct as those two, but they're still pretty distinct, and it might just not be possible to unify the two if really what we care about is top performance on one of these benchmarks. That said, I think if we let go a little bit of the desire to have state-of-the-art performance on a benchmark, and we focused more on, will this do the job, then we would start unifying algorithms a bit more. So, maybe related: we see games like StarCraft needing a lot more domain-specific structure in their agents, as we see in AlphaStar, and then with DQN we had very simple monolithic agents that might not do well without that structure. Should we expect monolithic agents to be useful going forward, or are they maybe just a phase? Or is it because our function approximators aren't that good yet that we need these more complex
agent designs, and then in the future could better function approximators allow us to fall back to monolithic designs again? I think it's interesting to also ask the question, why do we expect an agent to be monolithic? Let me try to unpack this a little bit. If you look at DQN, DQN was already what I would call an agent architecture, where one piece is the network, one piece is the learning rule, one piece is the replay buffer, one piece is the target network, and one piece is how you select actions. So already I would actually call DQN an architecture more than a monolithic design, and I agree with you that this seems to be a trend that's continued. I think it's actually very natural that if we have a complex system with a lot of moving parts, we might want to build specialised modules to deal with each of these parts, and it might not be possible to write down, if you will, a unifying equation that would tie all these parts together in a very nice, elegant mathematical or algorithmic formulation. Think about an operating system: nobody would expect an operating system to be monolithic, I think, in that respect. So, speaking of function approximators, neural networks have come a long way since the original DQN. Do you think we'll get more progress just from tagging along with supervised learning and the improvements in neural networks and function approximators? I think we've seen pretty impressive gains in performance from using transformers, but it's not clear to me that the problems that supervised learning is addressing are the problems that RL needs to address. In that sense, what I would love to see is more transformer-like things designed for reinforcement learning; in fact, we saw a bit of a flurry of this early in the days of DQN and Atari, and we see a bit less of it now. I would say that where deep learning or supervised learning has assisted reinforcement learning is when the modalities look the same: if you have images as inputs, then you use a convolutional network or something like it to process those images. But if your inputs are a vector of atmospheric data, then maybe the convolutional network doesn't make sense anymore. So people talk about three types of ML: unsupervised, supervised, and reinforcement learning, and it seems clear that reinforcement learning can subsume supervised learning just by treating label predictions as actions. I wonder, do you think that anything could ever subsume RL, or would we always think of it as the cherry on top, as Yann LeCun says? Or is that even a question that makes any sense? On the cherry-on-top point, I think there are many models we aren't even considering right now. A different way to think about this is, first of all, RL is a problem setting, not necessarily a solution. And I think in that respect, if we just think of the class of problems we can describe with RL, then it's a pretty wide class. There are problems that don't fit in the paradigm of, say, a Markov decision process, which assumes that, effectively, given a state, it doesn't matter what happened in the past. There's actually this framework called AIXI, by Marcus Hutter, which is an incredibly general framework, so it's good and interesting to ask the question, why aren't we all using AIXI?
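To make the "DQN is an agent architecture" point above concrete, here is a toy sketch that names the same pieces: a value network, a replay buffer, a target network, an action-selection rule, and a learning rule. It uses a linear Q-function and plain numpy so it stays short and runnable; this is my own illustration of the pattern, not the DeepMind implementation.

```python
import random
from collections import deque
import numpy as np

class DQNStyleAgent:
    """Sketch of the DQN pieces: network, replay buffer, target network,
    action selection, and learning rule (linear Q instead of a deep net)."""

    def __init__(self, obs_dim, n_actions, gamma=0.99, lr=1e-3, eps=0.1):
        self.W = np.zeros((obs_dim, n_actions))   # the "network": a linear Q-function
        self.W_target = self.W.copy()             # the target network
        self.replay = deque(maxlen=100_000)       # the replay buffer
        self.gamma, self.lr, self.eps = gamma, lr, eps
        self.n_actions = n_actions

    def act(self, obs):                            # action selection: epsilon-greedy
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        return int(np.argmax(obs @ self.W))

    def remember(self, obs, a, r, next_obs, done):
        self.replay.append((obs, a, r, next_obs, done))

    def learn(self, batch_size=32):                # learning rule: one-step Q-learning
        if len(self.replay) < batch_size:
            return
        for obs, a, r, next_obs, done in random.sample(self.replay, batch_size):
            target = r if done else r + self.gamma * np.max(next_obs @ self.W_target)
            td_error = target - (obs @ self.W)[a]
            self.W[:, a] += self.lr * td_error * obs   # semi-gradient update

    def sync_target(self):                         # periodically copy the weights
        self.W_target = self.W.copy()
```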
That's a model that could subsume RL in the way that you're asking, and we don't use it, I think, because it's so general that it's very difficult to make progress with it. So maybe a different way to answer your question is that I think we will need a new model, and we'll get to a new model once we understand the failings of the current one, the same way that, for a while now, we've understood that it's very difficult to learn to make decisions from trial and error in a purely machine learning or supervised learning context. So what would you say are the main bottlenecks to progress in RL right now? I would say that benchmarking is an incredible bottleneck, and let me elaborate a little bit on this. It's not so much the availability or unavailability of benchmarks, but rather that we don't really have problems that I feel are fundamentally challenging us in new ways. To hark back to the previous question, we don't really have problems that challenge our use of reinforcement learning as a model for how an agent interacts with its environment. And why is that? I think in part it's because, ever since deep reinforcement learning came around, we've had a lot of interesting follow-up questions, and they're all incredibly important, but it might also be a question of hardware, or a question of computation, that we're lacking the inspiration, if you will, to go to the next step. Ray Kurzweil often talks about S-shaped progress: we're in the flat part of the progress curve, and for a while everything looks the same, and then there's this paradigm shift and everything changes, and then we're on a new flat part. I definitely feel like we're in a flat part right now, and when somebody comes up with that next paradigm, then we'll see a massive upheaval and it'll unlock everything. Can you say anything about the relationship between empirical RL, theoretical RL, and RL in neuroscience? Do they all inform each other, or is there maybe more structure than that? How do you see those three things? They certainly inform each other, and for me personally, the way this plays out is that I love reading papers across the field and trying to understand the perspectives that these different subfields take on the problem. Now, your question is, do they inform each other, or more than that? I think one challenge when you cross fields is being able to speak the same language and to understand the problems and challenges faced by each subfield. As somebody who's really interested in reinforcement learning and empirical reinforcement learning, first of all, I'm very grateful that my colleagues on both sides seem to be very happy with this travelling back and forth, but it does make it really difficult, because we have to play catch-up, understanding both why people working on the theory care about a specific question and how it translates into a practical concern, if it does, and vice versa: people might work on a practical concern that would be fairly easy to address from a theoretical perspective. At the end of the day, I think this is how we make progress: by bringing new perspectives into our own problems. So, you work on exploration.
How do you explain why exploration in RL is such a hard problem? Why is it so hard? I think there are a number of answers to this question; maybe the most important one is that we don't know. I don't think we actually know what we're looking for. From a theoretical perspective, exploration is really well understood; there's still phenomenal work coming out in the space, but I think we've identified the major pieces of the puzzle, and we can derive algorithms and have sample complexity bounds that say, if you collect this much information, then you're done. So why is that not enough, why aren't we done from a practical perspective? Well, first of all, it's very difficult for theoretical results to go beyond a certain point or a certain level of precision, so they tend, for example, to have a worst-case analysis of a problem, and maybe the worst-case problems are just really, really hard and we don't encounter them in practical terms. The other aspect goes back to this modeling perspective, which is that I don't think we really know what it means to explore in most scenarios that we actually care about. Our notion of what exploration means is really grounded in theory, which says, collect enough data that you can make the right decisions. But what if it's never possible to have enough data? I think it's Jeff Bezos who likes to say that we should make decisions when we have about 70% of the available information. That's more about business, of course, but it suggests that there are a lot of situations where we'll never have enough information; there will always be this uncertainty, and maybe we need to think about exploration differently in that respect. Do you think humans are good at solving the exploration problem? Not in the sense that theoretical exploration would have it. I think humans are very good at having some heuristics, but actually I'm going to say no, humans are terrible at exploration. There's a great example of this, if I remember it correctly: a famous athlete at the Olympics discovered what's called the Fosbury Flop. The Fosbury Flop, in a nutshell, was for jumping over bars, and it was a completely different way of jumping: running, jumping, and actually flopping over backwards to go over the high bar. It's a very simple mechanism, and once that athlete discovered that jump, everybody started doing it, because it made sense, and before that nobody had thought about it. So what does that tell us? You had generations of athletes doing these high jumps and not discovering the Fosbury Flop, and what did it take to get there? I think in many situations, when we find a good enough solution, we stick with it, and often exploration occurs because we see somebody do it better than we have been doing it, or it occurs because we're forced to explore and it's taken out of our hands. But when we have a chance, I think we're actually very poor at it. And we saw on Twitter an agent coming up with that jump in some kind of simulation.
That's right, that was just a few days ago, and it was actually a really fun moment to see this online. I haven't had a chance to go through the full details of this work, but you would imagine exactly that: how do you incentivize an agent, and what are the conditions in which it's going to be incentivized to say, there's something you need to be looking for here that's better? On the other hand, the agent couldn't get injured as it tried thousands of variations on the jump. Exactly, and I think that's actually a very important point. I wonder if some of our biases towards not exploring could be injury; it could be time; it could also be that we have other things on our mind that day and so we're not in a mindset to try things out. When I order from a restaurant, and it's been a while now since I ordered at a restaurant, I might stick with something I know, just because I don't go out very often and there's a risk to making the wrong choice. So maybe we're being myopic in some sense: we're making choices that are immediately useful as opposed to optimizing for the long term. So going back to the ALE: when you first did the ALE paper, how did you think about when it might be solved? Or was early DQN already effective on the ALE at that point? So by early DQN, I don't know if you mean the very early work I did during my PhD, is that right? Well, there was a 2013 variant. I see. So the ALE, maybe I can digress very quickly here, was actually designed all the way back to 2008, although it took a few years to get it off the ground. For me, the earliest experiments with playing Atari games go back even before DQN, to the work I did during my PhD and also to what Yavar Naddaf did during his master's. These methods were fairly primitive, and what they did was what we knew how to do before; speaking of exploration, we were stuck in a certain way of doing things. We would write down a program that would extract a large number of features from the image, and we called these domain-independent features because it had to be a program that could work for all 60 games. Then the agent would learn from these features, and the learner actually would learn to play some of these games, but it also performed quite poorly on other games and was quite slow at times. So to answer your question, how long did we think it would take to get to where we got: let's say in 2013, when we published the Arcade Learning Environment paper, we thought it was 5 to 10 years before we would make significant progress, on the basis that we didn't know how to make these features. It turns out we were completely wrong, and the answer was, well, throw a convolutional neural network at it and let it do its magic. So the ALE continues to be used to this day. Would you consider the ALE still unsolved by today's generations of agents? How will we know when the ALE is really outgrown, and any guess on when that might happen, how long it will stay relevant? Right, so the term solved is always tricky, because I've used the label solved in the context of, for example, checkers: Jonathan Schaeffer and his team at the University of Alberta basically solved checkers and showed that if you play optimally from the first position in checkers, it's a draw.
That is what I would call solved. When we look at Atari, and it's the same thing for Go really, the game is so large that we're not at the point where we can say, this is the optimal play, in the sense that it will give you the optimal rate of reward. But that said, in a different sense we're very close to saying, well, we have superhuman players, so aren't we done? So I think in that sense the ALE is solved: we have algorithms for creating policies that achieve things that, in many of these games, are beyond humans. There are actually a few games that for various reasons are really hard to play on a keyboard, where humans do pretty poorly; Breakout is one of them, and you see these agents actually finishing the level in this case, because there's a bug where the emulator will crash after level two. So these, I suppose, we would call almost solved. But I think the real value of a benchmark is not so much in being solved as in how it inspires us and challenges us, and one place where it's very clear we haven't done this is that we haven't really demonstrated something we thought we would be able to do much more quickly when we started working on this: the ability of an agent to learn quickly and from few experiences. There has actually been a lot of interesting work in model-based RL trying to get there, but I don't think we're quite there. Brenden Lake had a great paper in 2016 where they made that point much better than I'm making it right now. They said, let's actually look at human players and ask the question, how long does it take humans to learn to play a game like Breakout or Frostbite? And we see a learning curve within episodes: by the third playthrough you're already twice as good as you were in the first playthrough. We're nowhere near this with RL today. So the original Atari games themselves came out in, I think, 1977, that's 44 years ago, and the RL community is kind of still working on them with the ALE. Do you think that games will always be far ahead of our ability to learn strong agents to play them? Right, so I'm guessing here you mean by games that there are new games being created that are more challenging. I don't think so, and the answer to this, I suppose, is multifaceted. But let's look at the games that people are playing today. My partner and I have been playing a lot of Overcooked lately, and in many ways Overcooked is a lot simpler a game than some games we saw in the 1990s and early 2000s. So I don't think games are getting more complex, but you're right that maybe there's something special about Atari that we've lost since. Atari games were games that came out of the arcades; they had to fit in an arcade cabinet, and, not all of them but almost all of them, they were designed to be fast-paced and to give the player continuous reward. You play Space Invaders and you have to keep increasing your score, and the game has to end soon so that the arcade cabinet can collect more quarters. That's something very special, and even with the NES we don't see this anymore, and even less so with later platforms. You have these beautiful open-ended games today that are just completely different; Minecraft is a great example.
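Since the ALE comes up throughout this conversation, here is a minimal random-agent loop on an Atari game as it is commonly run today. It assumes the gymnasium and ale-py packages with the Atari ROMs installed, rather than the original stand-alone ALE interface, and the environment id is just an example.

```python
# Minimal random-agent loop on an Atari game via the ALE.
# Assumes `gymnasium` and `ale-py` are installed; ROM setup varies by version.
import gymnasium as gym

env = gym.make("ALE/Breakout-v5")        # example environment id
obs, info = env.reset(seed=0)
episode_return = 0.0
done = False
while not done:
    action = env.action_space.sample()   # random policy as a placeholder
    obs, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
    done = terminated or truncated
print(f"random-agent episode return: {episode_return}")
env.close()
```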
So maybe video game developers will keep coming up with new ways of thinking about games that are more complicated, but I think we have a good handle on certain kinds of games. Maybe more importantly, I think games are just one reflection of our lives, and what's more likely to happen is that as we understand better how to apply RL in a real-life context, games will look easier too, as opposed to the path we've taken so far. An Atari game is really a very crude depiction of real life; Pong is a game of tennis, in some sense. And so once we've cracked the real-life scenarios a bit better, I think it'll be easier to play these games. And then, I guess related: do you see a natural successor to the ALE, or is the successor a larger set of benchmarks? Would it make sense to collect some games from more recent times into a new benchmark? Or, I guess you partly answered that, the challenges might be of a different kind. Right, exactly. I don't think that a bigger ALE is the solution. We've seen some incredible work in the RL community trying to develop new benchmarks. If I think about OpenAI, for example, OpenAI has been working hard at this almost since their inception: they started with Gym, they looked at Universe for a little while, and there's been Procgen, a procedurally generated benchmark that came out recently. All of these are challenging the field in their own way, but really your question is, again, which part of games do we still need to figure out? I think one thing that Atari did that benchmarks previously hadn't done is to say, you really have to address the perceptual challenge: how do you map images to actions? The domains we had before were, for the most part, except for special-case applications, much simpler. If you think about Mountain Car, you have a vector of two real values and you've got to make a decision based on those two real values, or there's a lot of grid-world reinforcement learning where everything is effectively something you could draw on a piece of paper. So when we look at more games or bigger games, we're really just saying, let's keep the perceptual component and crank up the volume to 11, but to me that's not really changing the fundamental question. Now, the more distinct question, I think, is when games get more complicated; what it means to play Minecraft is a very different question from playing Pong. But maybe there are other ways we can ask this question, such as: why is Minecraft an interesting problem in the first place, and what is the challenge it's trying to make us face? I think we don't really have a good answer to that, which is why we haven't seen a natural successor to the ALE just yet, and maybe in some sense we've seen a fragmentation of the field into multiple benchmarks. So we had your colleague Dr. Marlos Machado on recently, and he spoke about the Loon controller, which you both worked on, and we'll have links to his talks on the episode page at talkrl.com. We will also have a link to the talk that you gave at the University of Maryland recently, where you went into detail on this work.
So we have limited time here, so I'm not going to ask you to repeat all the very interesting things you said in that talk, and I recommend listeners check out the Machado episode, his talk, and your talk. But could you remind us of the overall goal of the Loon project and of the controller itself? For sure. Loon is a subsidiary of Alphabet that is now winding down, unfortunately, but was tasked with developing these giant balloons that could fly in the stratosphere, and one of the missions was to deliver internet connectivity to regions where that might be difficult for various infrastructure reasons. As part of this challenge, the balloons being flown are what we would call underactuated: effectively, one of these balloons is floating in the stratosphere, about 20 kilometers high in the air, and the only things it can do are go up, go down, or maintain its altitude. So if you want to get from point A to point B, what you have to do is catch winds going in the right direction, and as you can imagine, that's really complicated when you're flying in this messy, chaotic wind field in the stratosphere. So what we did is we actually used reinforcement learning to learn a flight controller that could do all of this in simulation, and then deployed that reinforcement learning controller to fly the balloons, and we actually saw these balloons deployed, for example, over Kenya. This was a massive success for us. Maybe in January we could go to Flightradar, the website, and actually see the flight paths of a deep RL agent over Kenya; that was magical for me. I understand it performed really well. Did you expect that in the beginning, or were there points where you had some doubts that this was going to work out well? I don't think I had any doubts, in the sense that the way the CTO of Loon, Sal Candido, pitched the project to me, along with my colleague James Davidson, who was involved very early in the project, made it very clear that this was a perfect fit for reinforcement learning because of this underactuated nature, where really we thought no other controller was going to be able to do as well here. It was a perfect fit both in terms of the model and in terms of the tools we had available to us: a small number of discrete actions; the analogy is not perfect, but it really does look like Atari in the stratosphere. So to me it made sense that this is what we should try to do, and indeed, the way we went about this is not quite Atari, but we tried to follow the pattern of the AlphaGo project: some design choices are less important, how deep the network is, what kind of training, and you focus on getting the right things right. And I think that paid off. So I didn't have a doubt that this would work; I was surprised at how quickly we got there. So was it obvious to you right away, it sounds like it was, that you would use model-free, off-policy, distributional RL for this? Was that very clear to you from the get-go, or did it take some deliberation to decide to go with that?
We ran some experiments very early on with actor-critic methods, which I suppose would be model-free but value-based, and maybe a bit less off-policy. I was actually hoping, following this AlphaGo pattern, that we could use something like AlphaGo, which is tree search with value estimates. And this is actually what I learned working with Loon: you have to understand the problem, and the problem will dictate some of your solutions. In this case, the simulator is pretty slow, and trying to do any kind of search with the simulator is really hard. We can do search, but it's with a simulator of the simulator, and that's not ideal. So model-free just emerged as the thing we could do well, and it also happened to be really the only thing we could do. As for the distributional RL part, I think the project would have worked well without it, but it just made sense because we knew it so well; we could control it and guarantee the quality of the process. Were you pretty confident that the simulator was going to be good enough to get the results you needed, and that there wouldn't be a big sim-to-real gap, or was that not a risk? I wasn't confident at all, and I think Marlos would say the same. We were incredibly surprised, when the balloon flew its first flight in July 2019, that it flew so well. I used to have Marlos's words on record about this; let's just say he was quite surprised. You used the word risk, and in some sense we weren't too concerned about risk, because when you fly one of these balloons there are a lot of safety layers, of course. This is a real system, and I think this is something that RL researchers sometimes forget: when you implement reinforcement learning in a real application, the RL is just one part of a very large system, down to the engineer looking at the balloon and saying, that balloon is doing something fishy, I'll take control. So the risk wasn't there, but the positive result was a surprise for sure. So if I understood, the reward is entirely about staying within that designated circle, is that right? And there was a bit of shaping outside the circle. Did that reward function take some deliberation, or was it pretty obvious to you? Exactly. So the reward function, to restate it, is: if you're within 50 kilometers of the station that you want to station-keep at, then you receive a reward of plus one. It's very classic; we tried to keep things simple: zero if you're not in the circle and one otherwise. We found it useful to add a bit of shaping outside the region; that's an artifact, we didn't really need it to get good performance, but it helped a little bit. There's another component, which is that we discourage power usage, and this was done because power, when you're flying a balloon, is of course at a premium; the balloon is solar powered, and that was necessary. Just to get technical for a second, it's a multiplicative power penalty, where we shrink the reward on the basis of using power. Did it take a lot of tuning? Not really. It's funny, early on we actually tuned it quite a bit, but we were sort of moving the wrong piece; the reason things weren't working is that we didn't have the right distributed training code. Once we fixed the distributed training code, we realized the reward function didn't actually matter that much.
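A toy version of the reward just described, to make its shape concrete: plus one inside the 50 km station-keeping circle, a small amount of shaping outside it, and a multiplicative power penalty. The functional form of the shaping and all of the constants here are illustrative guesses of mine, not Loon's production values.

```python
import math

RADIUS_KM = 50.0   # station-keeping radius described in the episode

def station_keeping_reward(distance_km, power_used,
                           power_coeff=0.95, shaping_scale=0.05, decay_km=100.0):
    """distance_km: distance from the station; power_used: normalized to [0, 1]."""
    if distance_km <= RADIUS_KM:
        base = 1.0                                     # +1 inside the circle
    else:
        # small shaping reward that decays with distance outside the circle
        base = shaping_scale * math.exp(-(distance_km - RADIUS_KM) / decay_km)
    # multiplicative power penalty: using power shrinks the reward
    return base * (power_coeff ** power_used)
```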
The way I like to encapsulate this is: when you're formulating a problem as a reinforcement learning problem, you don't want to tell the agent how to do it, you want to tell it what success is, and for us success is being within the region, so that's what we should go for. Really cool. Okay, so you mentioned how magical it was. I can't imagine what it must have felt like after so much of your work over the years, DQN, the distributional work you've done, and the whole field being partly driven by the ALE as a benchmark, all combined to come up with this result. So did you see this as one of the highlights of your career? What did it mean to you to see that? Well, is it a highlight of my career? I hope so; it's hard to know, I can't judge the future just yet. But what was so great for me here was to see, as you say, on a problem that nobody had really considered before, that we could bring the tools we knew so well to great success. Let me break it down a little bit. I think the most exciting thing to us was just the fact that it worked in that setting, forget about good performance, forget about beating the state of the art, even though the controller that Loon already had was really powerful and tuned for production capabilities; just the fact that it worked was pretty impressive. The reason for this is that reinforcement learning at its core is just a handful of equations, maybe now with an architecture thrown on top, and to say this thing starts its life, if you want to call it that, knowing nothing about balloons and, just by trial and error, gets to a point where it's now flying a balloon very, very well, that is just amazing. So, maybe as a parenthesis here, this is actually what got me into reinforcement learning, which I didn't mention at the beginning: my very first project was actually redoing Gerald Tesauro's work, applying neural networks to backgammon. This was an eye-opening moment for me. I knew about RL, I had read the textbook, I had taken Professor Precup's class, and I wrote this backgammon program, and because I like to tinker with things I also wrote an interface so I could play against the program. I think one or two months into my internship I trained a program with a neural network and it beat me, and I'm not a bad backgammon player, and I thought, this is amazing: this is a collection of numbers that's beating me at a game that matters to me. So, generally speaking, I think that's the feeling we've had with the Loon project. Awesome. Okay, so moving to distributional RL. You've shown everyone how effective distributional RL is. Do you think that we should be learning distributions even in supervised learning, or is there something very specific about value functions that makes learning distributions for them especially helpful? I think so, actually. A good colleague of mine, Martha White at the University of Alberta, has a paper where she looked at this question specifically: should we think of using a classification loss in contexts where we're doing regression, which is in some sense the abstract version of the question you're asking.
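As one concrete place where that classification view shows up in distributional RL: C51 represents the return distribution as a categorical distribution over a fixed support, projects the Bellman target back onto that support, and trains with a cross-entropy loss. Below is a minimal numpy sketch of the projection step as I understand it from the 2017 C51 paper; the variable names are mine and the loops are written for clarity, not speed.

```python
import numpy as np

def project_categorical(rewards, dones, next_probs, gamma, v_min=-10.0, v_max=10.0):
    """Project r + gamma * Z(s', a*) onto the fixed support z_0..z_{N-1} (C51-style).
    rewards, dones: shape (batch,); next_probs: shape (batch, n_atoms)."""
    n_atoms = next_probs.shape[-1]
    delta_z = (v_max - v_min) / (n_atoms - 1)
    z = np.linspace(v_min, v_max, n_atoms)
    projected = np.zeros_like(next_probs)
    for i in range(rewards.shape[0]):
        for j in range(n_atoms):
            # Bellman-update each atom, clipping it to the support
            tz = np.clip(rewards[i] + (1.0 - dones[i]) * gamma * z[j], v_min, v_max)
            b = (tz - v_min) / delta_z              # fractional index on the support
            lo, hi = int(np.floor(b)), int(np.ceil(b))
            if lo == hi:                            # lands exactly on an atom
                projected[i, lo] += next_probs[i, j]
            else:                                   # split the mass between neighbours
                projected[i, lo] += next_probs[i, j] * (hi - b)
                projected[i, hi] += next_probs[i, j] * (b - lo)
    return projected
```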
I do think that in reinforcement learning there's something a little bit more interesting going on, and that's because a lot of the distributions we encounter are a lot more varied. In both cases we're mapping inputs, maybe images or vectors, to outputs, and in supervised learning the outputs are targets and we don't really expect the distribution to be too complicated; maybe that's wrong, maybe we should change our view on this. When we think about return distributions, they're really more elaborate. We've seen this when we look at return distributions coming out of Atari, for example; we have these visualizations in the 2017 work with Space Invaders, and we've seen this in a lot of continuous control tasks. The analogy I like to use here is that you can think of classic RL as taking a photograph of the world: there are the real interactions that the agent has with its environment, the expected value is a bit like a black-and-white photograph, and what distributional RL gives you is a colour version of that same photograph. It really shows you all the details that you would otherwise be missing out on. I don't think we even know yet what to do with these details, but now we have colour photography; what are we going to do with it? So in terms of distributional agents for discrete, off-policy, I guess model-free RL, we've seen a number of different ways of representing the value distribution: you showed C51, and then there's quantile regression, IQN, implicit quantile networks, and then FQF came along. I'm wondering, is the problem of how to represent these distributions solved now, or are there still some open questions there? I guess there are always open questions; it's only a question of how deep you want to go or how precise you want to be. In the context of distributional reinforcement learning specifically, I guess you're asking about the part I would call the representation: how do we describe the probability distribution, and how do we operate on it? I don't think we have a perfect algorithm. A lot of the field is focused on understanding a little bit better how to make algorithms that are based on the right loss functions, and on coupling the right representation with the right loss function. But maybe the answer to the question, are we done, depends on what we want to do with these algorithms. For example, we know that if we want to maximize expectation, we really need to predict the expectation, except that predicting a distribution has this funny effect where we get more stable behaviour that we don't really understand. But if, instead of maximizing expectation, we want to have risk-sensitive behaviour, maybe I don't want to shave 30 seconds off my commute if it means there's a 50% chance that I'll miss my turn, then for those kinds of questions I think we don't yet have a good answer on how to do distributional RL. I really enjoyed your talk in November 2020 that was a tour of distributional reinforcement learning, and again we'll have a link to that on the episode page. In that talk you mentioned that there's some evidence for TD learning and also distributional value functions in the brain, is that right? And was that part of your inspiration at all for focusing on distributional RL?
It wasn't, actually; that evidence, as far as I can tell, came out after we'd done the original work, but it's been a real thrill to see the neuroscience community pick up on this and be quite curious to try to understand it. In some sense it makes sense that if you can learn it and it's relevant to behavior, then the brain should be learning it. And then you said in that talk, and I'm paraphrasing, something like: the methods use stationary policies for fixed worlds, and in the real world it doesn't really make sense to react in a fixed way. Could you help us understand what you meant by that? Were you talking about exploration, or ensuring sensible responses to new observations, or continual learning, or something else? I suppose it's a pretty cryptic comment that I made during the talk. The way I like to describe this is that most of our interactions are one-offs: I'll go to a restaurant once, or I'll go on vacation to a remote location once. When we use reinforcement learning, the paradigm we're in is one of repeated trial and error, where I've done the same thing thousands of times. So to me there's a bit of a disconnect between how reality proceeds and the framework we're operating in, and I see this as something we maybe need to address going forward. So I want to ask you about something you said on Twitter; I follow your Twitter and I encourage listeners to check out Professor Bellemare's Twitter account. You said: the RL of the future is not an algorithm in the TCS sense; we think of an algorithm as the equations for learning the policy, and that's probably too narrow; instead we need to think agent architectures and understand the relation between equations and non-equations. Can you say anything more about that, in terms of what you mean by the non-equation part? So I guess that's another cryptic comment. We talked about agent architectures before, and some of these ideas we've touched on today: the fact that reinforcement learning, at the end of the day, gives us a model. If I go back, in fact, to a colleague's point, he was saying maybe we just need the right RL algorithms. You know, can we start unifying things, for example, or can we come up with the right learning rule? And my response to this is, maybe we just need to change the model. The non-equations here are all the things that are not modeled or don't really fit in that neat mathematical framework. I think there's a lot that we don't understand that doesn't fit in the mold of RL that is worth revisiting. Cool. And then briefly, I was hoping to ask you about a paper you co-authored, Hyperbolic Discounting and Learning over Multiple Horizons, that's Fedus et al., 2019. So far, most work seems to use a fixed gamma discount for a fixed horizon, and this paper looks at multiple horizons. Do you think the ALE led us to focus on MDPs with a certain range of horizons, and what types of environments might benefit from these multiple horizons, and maybe hyperbolic discounting? Totally. I don't think anybody has ever made that remark before, but certainly the nature of Atari games means that, for the vast majority, they take place at a certain temporal resolution, if you will. You're playing at the arcade and you have to respond every half second; you have to do something important, and maybe every three seconds you receive a point for doing something important.
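A brief aside on the hyperbolic discounting paper raised above: if I recall the construction in Fedus et al. (2019) correctly, the key identity is that a hyperbolic discount is a mixture of exponential discounts, so an agent that learns value estimates for several discount factors can recover hyperbolically discounted values as a weighted combination of them. In LaTeX, for $k > 0$ and $t \ge 0$:

```latex
\frac{1}{1 + k t} \;=\; \int_0^1 \gamma^{\,t}\,
  \underbrace{\tfrac{1}{k}\,\gamma^{1/k - 1}}_{\text{mixing weight } w(\gamma)}\, d\gamma
\;\approx\; \sum_i w_i \, \gamma_i^{\,t}
```

which is why training a single agent to predict values over multiple horizons (a grid of $\gamma_i$) is the practical recipe, as I understand that paper.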
And so this is why, in fact, we've been able to use a fairly constant discount factor across the entire ALE. Now, there have been domains, for example Dota 2, which OpenAI worked on, where they used a much larger discount factor; I think it's in the tens of thousands of steps in terms of the equivalent horizon, which would be a hundred times bigger than Atari, or maybe even a thousand times bigger. As for what kinds of environments would benefit from hyperbolic discounting or multiple horizons, I think as we move towards more naturalistic environments and less game-like environments, we'll see more of this. I think we see plenty of examples of problems where what matters more is to get a big payoff in the future. Maybe what we haven't figured out is why hyperbolic discounting would be better in that context. Is there truly a reason to be hyperbolic? One theory that I love is that we're bounded agents, bounded-rationality agents, and so we have to make choices on the basis of finite data and a constantly changing world, and hyperbolic discounting is maybe one solution to that problem. Can you say a bit about your current research interests and the types of things your students are working on these days? We're looking at a lot of different directions, and in part that's a consequence of a broad interest on my end, and also of feeling that, in looking for that next paradigm shift, we have to keep an open mind that we won't get our next big breakthrough just by changing Q-learning into a new version of Q-learning. One of the directions we're excited about is, again, in the space of representation learning: trying to understand how we describe more complex phenomena. Right now, in some sense, we've been so tied to the success of DQN that the only real way we know how to build representations is to do deep learning and let the magic happen. While that's good, it's a little bit unsatisfying that we can't go deeper and understand it better. And a place where we've been doing some work that I'm quite excited about, which I think is going to come out quite soon, is revisiting benchmarking. Again, one way to challenge ourselves is to understand how we design good benchmarks and how we study benchmarks. What we've been finding, in fact, is that a lot of the progress we're making in the field is sometimes maybe stationary, and not much is changing. So this is some of the work we've been working on lately. Beyond your own work and your group's work, are there things happening in RL lately that you're pretty excited about? I think from an applications perspective, offline RL is going to be a game changer. It used to be called batch RL, so it's not a new problem, but we see it, for example, when we move from video games, where we have a simulator, to robotics, where we don't: we need to understand how to learn from a fixed set of data. Really, what supervised learning does so well, reinforcement learning really struggles with. So I'm very excited about that; I think it's coming along quite soon, and I'm very excited to see where we can take things. I also love the work that's going on in model-based RL. I think it's addressing an important question of how we deal with counterfactuals, the fact that many things that arise around us we've never been exposed to before. Last but not least, I'm also excited to see what neuroscience can contribute to reinforcement learning.
Whenever I pick up a neuroscience paper, I see an incredible amount of interesting phenomena that we're completely ignoring. Professor Bellemare, I can't really explain to you how much this episode meant to me. I've been reading your name in the literature for years, I've been a big fan, and if anyone had told me when I started this podcast in 2019 that I would have a chance to interview you, I probably would not have believed them. So I want to thank you so much for sharing your time and your insight with me and with the TalkRL community today. Thank you, Professor Bellemare. And the same to you, thank you. This has been a fantastic opportunity. Thanks for listening. Give us a five-star rating on Apple Podcasts. If you don't think we deserve five stars, let us know on Twitter what we could do better.
[ { "end": 13, "start": 0, "text": " This is TalkArail Podcast. All reinforcement learning, all the time." }, { "end": 20, "start": 13, "text": " Interviews with brilliant folks across the world of RL. I'm your host, Rob and Chauhan." }, { "end": 27, "start": 20, "text": " So I am super excited to introduce our guest today. Professor Mark Belmer is a research scientist at Google Research." }, { "end": 32, "start": 27, "text": " An adjunct professor at McGill University and a candidate CIFAR AI Chair." }, { "end": 34, "start": 32, "text": " Thanks so much for joining us today, Professor Belmer." }, { "end": 36, "start": 34, "text": " Thank you, it's a real pleasure to be here today." }, { "end": 39, "start": 36, "text": " So how do you describe your area of focus?" }, { "end": 44, "start": 39, "text": " Right, so that's a great question to start. I'm a reinforcement learning researcher." }, { "end": 47, "start": 44, "text": " So really, I care about all things reinforcement learning." }, { "end": 54, "start": 47, "text": " But if I want to narrow down a little bit, I'd say I care about two things primarily." }, { "end": 59, "start": 54, "text": " The first one, I would say, is the problem of representation learning or learning representations." }, { "end": 61, "start": 59, "text": " And the other problem is the problem of exploration." }, { "end": 75, "start": 61, "text": " And the way I think of these two problems is basically, you know, how do we think or understand how any intelligent agent describes in their brain or in their machines what they know." }, { "end": 80, "start": 75, "text": " And then how do they behave or how do they act on the basis of that knowledge." }, { "end": 90, "start": 80, "text": " And in some sense, you know, to me, this is really the core of artificial intelligence, especially when we think about agents and reinforcement learning." }, { "end": 94, "start": 90, "text": " And as humans, we're incredibly good at this." }, { "end": 96, "start": 94, "text": " So let me give you an example." }, { "end": 108, "start": 96, "text": " When I've moved to a new city in the past, at first, you know, none of the streets, none of the signs, none of the landmarks were known to me." }, { "end": 119, "start": 108, "text": " And so what do we do? Well, you know, at first we start exploring the, maybe the neighborhood looking for grocery store, looking for a pub, looking for a park." }, { "end": 128, "start": 119, "text": " And we're very good in general at sort of making this mental map very quickly from very few experience of the samples." }, { "end": 131, "start": 128, "text": " And knowing where to go next, you know, I found a grocery store." }, { "end": 136, "start": 131, "text": " I'm not going to go look for more grocery stores, even though there might be a better one just around the corner." }, { "end": 141, "start": 136, "text": " So can you tell us a bit about your path in coming to RL? How did you end up in RL?" }, { "end": 149, "start": 141, "text": " I've been excited about reinforcement learning since my early days as an undergraduate student." }, { "end": 156, "start": 149, "text": " And actually before then, I can sort of date this back to my teenage, which I was really interested in AI." }, { "end": 166, "start": 156, "text": " Actually, there's unfortunately, I can't find it anymore, but the used to be this geosities webpage where very naively had laid out a plan for doing AI research." 
}, { "end": 169, "start": 166, "text": " I think I'm 11 or 12 years old at that point." }, { "end": 182, "start": 169, "text": " But I was lucky at McGill University to be introduced to RL specifically by working with professor, professor, doing a prep up who's in Montreal, who was teaching the AI class." }, { "end": 188, "start": 182, "text": " And then I really loved the idea of reinforcement learning and made sense to me that AI should be about learning." }, { "end": 198, "start": 188, "text": " And so from that point on, we worked together. I joined or lab to be an undergraduate research assistant, did my masters with professor per cup." }, { "end": 209, "start": 198, "text": " And then after that, I went to the University of Alberta where Richard Sutton was and is and many other phenomenal researchers in RL." }, { "end": 215, "start": 209, "text": " So it was accidental that I ran into RL, but I loved it. It was loved at first sight." }, { "end": 223, "start": 215, "text": " And since then, it's been just following where RLS is happening and what's exciting in the field." }, { "end": 237, "start": 223, "text": " So as many of our listeners will know, you've been involved in many of the important advances in RL research, including I'll just list a few here, co-authoring the DQN Nature paper that arguably started the Deep RL revolution." }, { "end": 247, "start": 237, "text": " Introducing the ALE, the Arcade Learning Environment, which has been a central RL benchmark and still is, and of course, distribution RL." }, { "end": 255, "start": 247, "text": " So how much of this path would you say was like kind of planned in advance or did it involve a lot of exploration or luck?" }, { "end": 266, "start": 255, "text": " It's hard to separate luck from plan. Definitely, I didn't come into this thinking, you know, I'm going to start my research and reinforcement learning as an undergrad." }, { "end": 273, "start": 266, "text": " And then 11 years down the road or was it 15 years down the road, I'll have this distribution reinforcement learning idea." }, { "end": 279, "start": 273, "text": " I'm very much someone who likes to think of opportunities as something to be taken and not taken." }, { "end": 289, "start": 279, "text": " And I think in many cases it's a mix of being in the right place at the right moment, but also challenging myself to be in those places." }, { "end": 299, "start": 289, "text": " And you know, one way to think about it maybe, to avoid early local minima and try to push the boundaries of what we know and challenge what other people think about the field." }, { "end": 312, "start": 299, "text": " So specifically, if I think about the distribution or reinforcement learning, it's a great example for me of a project that simmered really for a very long time before we eventually put out the paper in 2017us." }, { "end": 327, "start": 312, "text": " Still something ongoing now. The project actually started very early at my time at DeepMind. I was working at a time with my Joveness who had been my PhD advisor actually at University of Alberta." }, { "end": 335, "start": 327, "text": " And he had this idea of predicting the probability distributions of random returns." }, { "end": 343, "start": 335, "text": " And the time this seemed very strange and isoteric. And we worked on this and we actually, to this day, I think this is phenomenal work." }, { "end": 347, "start": 343, "text": " We actually use a compression algorithm to do reinforcement learning. It's a bit wild." 
}, { "end": 354, "start": 347, "text": " And when we were done, there were actually a few open questions. We looked at this and we said we have no idea how to deal with these problems." }, { "end": 359, "start": 354, "text": " But it feels like we should work on them. And it took about three years to eventually get to distribution around." }, { "end": 370, "start": 359, "text": " And so there was no plan to get there. But the question was there. And when it felt like we had the right pieces in place, then we, then with other coauthors, we actually took on this problem." }, { "end": 380, "start": 370, "text": " And so I really like to think of it as, you know, the work and most proud of is, is a build up of experience rather than a single sort of idea." }, { "end": 392, "start": 380, "text": " So we've seen an explosion in work building on the seminal DQN letter that you coauthored in nature in 2015. And more variants seem to show up all the time on archive almost every week." }, { "end": 402, "start": 392, "text": " And Google Scholar says there's over 14,000 citations for that paper. So when you did that work, did you have a sense that that you were creating this whole new field?" }, { "end": 412, "start": 402, "text": " Right. It's pretty amazing the amount of interest and the revolution that the DQN algorithm created." }, { "end": 429, "start": 412, "text": " It's worth pointing out actually that people had used neural networks before with reinforcement learning. Right. So dating back all the way to Jerry Tazoro's TD GAMMEN was explicitly using a network of sigmoid units to learn to play back GAMMEN." }, { "end": 439, "start": 429, "text": " And when Andrew Eng and Peter Biel flew helicopters early in mid 2000s, they were also using neural networks as part of the project." }, { "end": 450, "start": 439, "text": " I think the big revolution with DQN was sort of taking this to the next level. And in fact, Martin Ridd Miller was a member of the team who worked in this." }, { "end": 460, "start": 450, "text": " He'd also been using neural networks in similar context. But with Atari, we had an extra piece, which is we wanted the system to be a general purpose." }, { "end": 471, "start": 460, "text": " And this was really a game changer, right. To say you have 60 games that you need to play all of these 60 games. And we'd struggled to come up with the right solution during my PhD to this problem." }, { "end": 479, "start": 471, "text": " We could only think of heuristics. DQN was revolutionary because it said here is one way you do it in a very, very clean and simple manner." }, { "end": 491, "start": 479, "text": " Now, you asked, did we know we were going to create this whole field at the time? The paper was incredibly controversial because it went against, I think, a lot of the things we thought were true or important in reinforcement learning." }, { "end": 500, "start": 491, "text": " So let me say it actually took me about two years to even think that this was an important result in the sense that this is how we should do things from there on." }, { "end": 517, "start": 500, "text": " And there's a great anecdote from me that what changed my mind was the we had it happened that actually the DQN work we had done on roughly 55 games from the Atari 2600." }, { "end": 527, "start": 517, "text": " But the paper that I'd written during my PhD had a different set of games. There was three games that hadn't been included in the DQN paper for various engineering reasons." 
}, { "end": 538, "start": 527, "text": " So I saw these games and I thought here's my chance to prove to people that they're wrong about deep neural networks. And I'll run DQN at these three games that will fail miserably." }, { "end": 547, "start": 538, "text": " I will have, you know, I'll collect my own human scores and then you know, I'm exaggerating a little bit here, but effectively this is a real test set." }, { "end": 557, "start": 547, "text": " And I'll be hold I trained DQN at these three games and it beat me single handedly on all three games and I thought that's it. There's just there's just no going around this evidence." }, { "end": 569, "start": 557, "text": " So I think when David Silver introduced you at Neurips 2020, he described your work on distribution or as one of the most important innovations in RL to date. I'm paraphrasing there." }, { "end": 586, "start": 569, "text": " So I wonder besides distribution RL, would you describe any other innovation since DQN as being very important to the theory and practice of RL on that level? Like what types of things might we consider fundamental advances versus more incremental improvements?" }, { "end": 599, "start": 586, "text": " I think there's been a lot of both sometimes, you know, even things might think of as incremental still have still have long term value because they get developed over multiple papers." }, { "end": 611, "start": 599, "text": " And sometimes they're important even though we haven't finished exploring them in some sense. If I think of a prioritized replay, which is something actually we're revisiting right now, I think prioritized replay has been an important piece in the puzzle." }, { "end": 628, "start": 611, "text": " I don't think we fully understand it just yet. Certainly, I think a major technical achievement or almost like a paradigm shift as the idea of doing distributed distributed computing and distributed reinforcement learning." }, { "end": 643, "start": 628, "text": " And the idea that if we have a simulator, we can now go and train or run hundreds, if not thousands of agents in parallel to collect the data and also distribute of course the computation, the learning part on multiple accelerators." }, { "end": 662, "start": 643, "text": " That's been fundamental in all projects where right now the only way we know how to solve these problems is by throwing a massive amount of compute at them. Right. So it might bring down the training time down from years to a matter of weeks and that's day and night for any kind of practical application." }, { "end": 680, "start": 662, "text": " So DQN itself was relatively simple and since then complexity has gone up quite a bit. If we look for example at agent 57, which is also targeting ALE, you'd have to read a lot of papers to understand all the different components in agent 57." }, { "end": 697, "start": 680, "text": " So I wonder how you feel about where things are going in terms of figuring out what the key components are that are needed to do our own well and is that process just kind of getting started or are we nearly there in terms of figuring out what that is and what is there. How do we know when we arrived there." }, { "end": 710, "start": 697, "text": " I think a challenge in knowing what we need and what we don't need is that we as a research community, I think we need more tests of the methods in new settings." 
}, { "end": 728, "start": 710, "text": " An example I like when I think about this question particular is the UCT algorithm that that was effectively designed for search in large environments and really has had its heyday in the game of computer goal." }, { "end": 740, "start": 728, "text": " The UCT is effectively a very fast search technique based on a very few simple principles. UCT is often thought of as it's a very simple idea that's very difficult to break in code." }, { "end": 750, "start": 740, "text": " And I think in many ways the state of reinforcement right now is almost the opposite of this, which is we have a lot of bells and whistles and it's not clear that they're reliable or robust." }, { "end": 760, "start": 750, "text": " But this set I think the pieces are there. We just need to figure out through more trial and error maybe through more experimentation which parts matter." }, { "end": 773, "start": 760, "text": " So I think I think we have most of the parts and if I look at our experience working with Lune on flying balloons with deep reinforcement learning there, we made the choice of keeping it simple." }, { "end": 786, "start": 773, "text": " And it was a choice we had to make just because we were building everything from the ground up and when you're building everything for the ground up, any sort of thing that you leave in the system that you haven't tried out might cause you trouble down the road." }, { "end": 795, "start": 786, "text": " So we see agents and algorithms getting more complex. Do you think that that diversity and complexity will continue in specialization?" }, { "end": 809, "start": 795, "text": " Or do you see things kind of unifying at some point like it seems like there's so many different almost like a family tree of RL agents and algorithms. Do you see that continuing to to split or some kind of unification happening?" }, { "end": 815, "start": 809, "text": " I do think as I don't think we'll see some kind of unification." }, { "end": 829, "start": 815, "text": " But I think it's always been a challenge in reinforcement learning given the vast diversity of problems that we want to bring that we want to use our algorithms on really each of these problems maybe needs to be handled a bit differently." }, { "end": 842, "start": 829, "text": " An analogy here is that if we think about computer vision and say natural language processing these two things are you know fairly different perceptual spaces but also the kind of problems that people look at are also pretty different." }, { "end": 858, "start": 842, "text": " Continuous control and let's say Atari are not as distinct at these two things but they're still pretty distinct right and it might just not be possible to unify the two if really what we care about is stop performance on one of these benchmarks." }, { "end": 870, "start": 858, "text": " So this said I think if if we let go a little bit of the state of the art the desire to have state of the art performance in a benchmark and we focused more on will this do the job then we would start unifying algorithms a bit more." }, { "end": 888, "start": 870, "text": " So maybe related like we see games like starcraft needing a lot more domain specific structure in their agents like we see an alpha star and then with DQN we had very simple monolithic agents that might not do well in in without that structure." 
}, { "end": 902, "start": 888, "text": " Should we expect monolithic agents to be useful going forward or they just maybe a phase or is it maybe because our function approximators aren't that good yet that we need to we need these these more complex." }, { "end": 909, "start": 902, "text": " Agent designs and then in the future could maybe better function approximators allow us to fall back to monolithic designs again." }, { "end": 933, "start": 909, "text": " I think it's interesting to also ask the question why do we expect an agent to to be monolithic and so let me try to unpack this a little bit here if you look at DQN DQN was already an agent what I would call an agent architecture where one piece is the network one piece is the learning rule one piece is the replay buffer one piece is the target network one piece is how you select actions." }, { "end": 959, "start": 933, "text": " And so so already I think DQN I would actually call it an architecture more than monolithic design and agree with you that this seems to be to be a trend that's continued I think it's actually very natural that if we have a complex system with a lot of moving parts we might want to build specialised modules to deal with each of these parts." }, { "end": 978, "start": 959, "text": " And it might not be possible to write down if you will a unifying equation that would unify all these parts in a very nice elegant mathematical or algorithmic formulation think about an operating system right nobody would expect an operating system to be monolithic I think in that respect." }, { "end": 1006, "start": 978, "text": " So speaking of function approximators like neural networks have come a long way since since the original DQN. Do you think that we need will get more progress from just from like tagging along with supervised learning and the improvements in neural networks and function approximators I think we've seen a pretty impressive gains and performance from using transformers but it's not clear to me that the problems that supervised learning is addressing are the problems that RL needs to address." }, { "end": 1033, "start": 1006, "text": " So in that sense what I would love to see is more transformer like things designed for reinforcement learning and maybe in fact we saw a bit of a flurry of this early in the days of DQN and Atari and we see a bit less of it now I would say we're deep learning or supervised learning as assisted reinforcement learning is when the modalities look the same right if you have images as inputs then you use a convolutional network or something like it to process these images." }, { "end": 1043, "start": 1033, "text": " But if your images if your inputs are you know a vector of atmospheric data then maybe the convolutional network doesn't make sense anymore." }, { "end": 1056, "start": 1043, "text": " So people talk about three types of ML unsupervised supervised and reinforcement learning and then it seems clear that reinforce learning can subsume supervised learning just by treating action labels as or actions as as label predictions." }, { "end": 1068, "start": 1056, "text": " I wonder if do you think that anything could ever a subsume RL or would we ever always think of it as like the cherry on top as John the Coons says or is that even a question that make any sense." 
}, { "end": 1087, "start": 1068, "text": " That is our territory on top I think there's many models we aren't even considering right now a different way to think about this is why are we why do we do first of all you know RL is is rich or the RL is is a is a is a problem setting not necessarily a solution." }, { "end": 1116, "start": 1087, "text": " And I think in that respect if we just think of this the class of problems we can we can describe with RL then it's a pretty white class there are problems that don't fit in the paradigm of say a mark of decision process right which assumes which assumes that effectively given a state it doesn't matter it doesn't matter what happened in the past there's actually this framework called e i x i by a Marker's footer who who is an incredibly general framework so it's good and interesting to ask the question why aren't we all using a i x i." }, { "end": 1145, "start": 1116, "text": " That's a model that could subsume RL and the way that you're asking and we don't use it I think because it's so general that it's very difficult to make progress so maybe a different way to answer your question is we'll I think we will need a new model and we'll get to a new model once we understand the failings of the current model the same way that for while now we've understood that it's very difficult to learn from trial and error to make decisions in a purely machine learning or super rezoning context." }, { "end": 1151, "start": 1145, "text": " So what would you say are the main bottlenecks to progress in RL right now?" }, { "end": 1174, "start": 1151, "text": " I would say that benchmarking is an incredible bottleneck and actually let me let me elaborate a little bit on this it's not so much the availability or an availability of benchmarks but rather we don't really have problems that I feel have fundamentally challenging us in new ways actually want to heart back to the previous question we don't really have problems that we can't really explain to you." }, { "end": 1201, "start": 1174, "text": " So we have problems that are challenging our use of reinforcement learning as a model for how an agent interacts with its environment and you know why is that I think in part it's because ever since deep reinforcementings come around we've had a lot of interesting follow up questions and they're all incredibly important but it's also that it might be a question of hardware or it might be a question of computation that we're" }, { "end": 1219, "start": 1201, "text": " lacking the inspiration if you will to go to the next step you know raker as well often talks about s shaped progress right we were the flat part of the progress curve and for a while everything looks the same and then there's this paradigm shift and everything changes and then we we're a new flat part." }, { "end": 1228, "start": 1219, "text": " I definitely feel like we're in a flat part right now and when somebody comes up with that next paradigm then we'll see a massive upheaval and I'll unlock everything." 
}, { "end": 1248, "start": 1228, "text": " Can you say anything about the relationship between empirical or theoretical or and and or in neuroscience like do they all inform each other or is there maybe more structure than that how do you see those three things they certainly inform each other and for me personally the way the way that" }, { "end": 1269, "start": 1248, "text": " this plays out is that I love reading papers across the field and trying to understand the perspectives that these different subfields will take on the problem now now your question is you know do they inform each other do more than that I think one challenge when you when" }, { "end": 1284, "start": 1269, "text": " you're in a cross field is to be able to to speak the same language and to understand the problems that a challenges faced by by one of the subfields as somebody who's actually" }, { "end": 1293, "start": 1284, "text": " really interested in reinforcement learning and empirical reinforcement learning I'm first of all I'm very grateful that my colleagues on both sides sort of seem to be very happy with with this." }, { "end": 1308, "start": 1293, "text": " This traveling but it does make it really difficult because we have to play catch up understanding both why do people working on the theory care about this specific question and how does it translate into a practical concern does it and and vice versa right people might" }, { "end": 1318, "start": 1308, "text": " work on a practical concern which would be a fairly easy to address from a theoretical perspective but in the end of the day I think this is how we make progress is by bringing new perspectives into our own problems." }, { "end": 1334, "start": 1318, "text": " So you work on exploration. How do you explain why exploration in RL is such a hard problem why is it so hard I think there's a number of answers to this question the maybe the most important one is that we don't know I don't" }, { "end": 1354, "start": 1334, "text": " think we actually know what we're looking for actually from a theoretical perspective exploration is really well understood there's still phenomenal work coming out in the space but we I think we've identified the major pieces of the puzzle and we can derive algorithms and have sample complexity bounds that say if you collect as much information then then you're done." 
}, { "end": 1368, "start": 1354, "text": " And so why is that not enough why aren't we done from a practical perspective well first of all it's very difficult for theoretical results to go beyond a certain point or certain level of precision so they tend to" }, { "end": 1397, "start": 1368, "text": " be an example to have a worst case analysis of a problem and maybe the worst case problems are just really really hard and we we don't encounter them in practical terms the other aspect of this is it goes back to this this modeling perspective which is I don't think we really know what it means to explore in most scenarios that we actually care about right so so our our notion of what exploration means is is really grounded in theory which is collect enough data that you have the right you can make the right decisions but what if it's never" }, { "end": 1413, "start": 1397, "text": " possible to have enough data you know I think it's Jeff Bezos likes to say that we should make decisions when we have about 70% of the available information right that suggests that you know in this is more about business" }, { "end": 1421, "start": 1413, "text": " of course but that suggests that there's a lot of situations where we'll never have enough information they'll always be this uncertainty and maybe we need to think about exploration differently in that respect." }, { "end": 1435, "start": 1421, "text": " Do you think humans are good at solving the exploration problem not in the sense that theoretical exploration would have it I think humans are very good at having some heuristics actually I'm going to say no humans a terrible exploration." }, { "end": 1461, "start": 1435, "text": " There's this great example of this is this is this is this is taken from the LC if I can remember correctly now this is a famous athlete at the Olympics that discovered the what's called the Fuzzbreeflop and the Fuzzbreeflop in a nutshell was this was this was for jumping over bars and it was a completely different completely" }, { "end": 1490, "start": 1461, "text": " different way of jumping so so running and jumping and actually flopping over backwards to go over the high bar and you know it's a very simple mechanism and once once that athlete discovered that jump everybody started doing it because it made sense and before the nobody had thought about it and so what does that tell us that you had generations of athletes doing these high jumps and not discovering the Fuzzbreeflop and and what did it take to get there and so so" }, { "end": 1511, "start": 1490, "text": " I think it in many situations when we find a good enough solution we stick with it and often exploration occurs because we see somebody do it better than we have been doing it or it occurs because we're sort of forced to explore right and it's taken out of our hands but when we have a chance I think we actually were very poor at it." }, { "end": 1539, "start": 1511, "text": " And we saw on Twitter an agent coming up with that with that jump in some kind of simulation. That's right that's right there was just a few days ago that was actually a really fun moment to see this to see this online and exactly I haven't had a chance to prove to prove the whole details of this work but we would exactly imagine that how do you incentivize an agent and whether the conditions in which it's going to be incentivized to say there's something you need to be looking for here that's better." 
}, { "end": 1568, "start": 1539, "text": " On the other hand the agent couldn't get injured as it tried thousands of variations on jump stuff exactly I think that's actually a very important point that I wonder if some of the some of our biases towards not exploring could be injury it could be it could be time it could be also that you know we have other things in our mind that day and so we're not in a mindset to try things out we just want you know when I order from the restaurant it's been a while now since I ordered a restaurant." }, { "end": 1586, "start": 1568, "text": " But when I ordered restaurant I might stick with something I know just because you know this is I don't go out very often and there's a risk to making the wrong choice and so I might as well maybe we're being my epic in some sense that we are we're making choices that are immediately useful as opposed to optimizing for the long term." }, { "end": 1590, "start": 1586, "text": " So going back to a le when you first did the a le paper." }, { "end": 1596, "start": 1590, "text": " How did you think about when it might be solved or was early DQ and already effective on a le at that point." }, { "end": 1604, "start": 1596, "text": " So by early DQ and I don't know if you mean the the very early the work I did read my PhD is that right." }, { "end": 1607, "start": 1604, "text": " Well there was a 2013 variant." }, { "end": 1622, "start": 1607, "text": " I see. So the early maybe I can actually digress very quickly here so the the early actually was designed all the way back to 2008 although it took a few years to get it off the ground." }, { "end": 1634, "start": 1622, "text": " So for me the earliest experiments with playing a tire games go back even before DQ and to the work I did during my PhD and also master's student did during their master's the Varnadaf." }, { "end": 1643, "start": 1634, "text": " And so these methods were were fairly primitive and what they did is they did actually what we knew how to do before so speaking of exploration we were stuck in a certain way of doing things." }, { "end": 1659, "start": 1643, "text": " We we would write down a program that would extract a large number of features from from the image and we call these domain independent features because they had to be a good program that could work for all 60 games." }, { "end": 1669, "start": 1659, "text": " And then the agent would learn from these features and these actually the learner would learn to some of these games but it also performed quite poorly in other games and was quite slow at times." }, { "end": 1674, "start": 1669, "text": " So to answer your question how long did we think it would take to get to where we got." }, { "end": 1687, "start": 1674, "text": " Let's say in 2013 when we published the arcade learning environment paper we thought it was 5 to 10 years before we would make significant progress on the basis that we didn't know how to make these features." }, { "end": 1695, "start": 1687, "text": " And it turns out we were completely wrong and the answer was well you know through a convolutional neural network at it and let it do its magic." }, { "end": 1711, "start": 1695, "text": " So ALE is continued to be used to this day and so would you consider ALE still unsolved by today's generations of agents or or how we know when ALE is is really outgrown and any guess on when that that might happen how long will it stay relevant." 
}, { "end": 1734, "start": 1711, "text": " Right so the the term on solve is always tricky because I've used this the label label solved in a context of example checkers so when Jonathan Schaeffer at the University of Alberta and in his team found basically solved checkers and said if you play optimally from the first position in checkers it's a draw." }, { "end": 1756, "start": 1734, "text": " That is what I would call solved when we look at a Tari and it's the same thing for go really in the game is so large that we're not at that point where we can say this is the optimal play in the sense that it will give you the optimal rate of reward but this said I think you know in a different sense we're very close to saying well we have super human players so aren't we done." }, { "end": 1785, "start": 1756, "text": " So I think in that sense the ALE is is solved we have algorithms for creating policies that that do achieve things that you know in many of these games are beyond humans there's actually a few games for various reasons that are really hard to play on the keyboard and humans to pretty poorly break out as one of them and you know you see these agents actually in this case they are finishing the level because there's a bug where the entire record will crash after level two so these are suppose we would call almost solved." }, { "end": 1814, "start": 1785, "text": " But I think the real value of a benchmark is not so much in being solved as much as how it inspires us and challenges us and one place where it's very clear we haven't done this is we haven't really demonstrated something we thought we would be able to do much more quickly when we start working this demonstrated the the ability of an agent to learn quickly and from few experiments and as it has been actually a lot of interesting work in model based our own trying to get there but I don't think we're not we're quite there." }, { "end": 1840, "start": 1814, "text": " Brandon Lake had a great paper in 2016 where they made that point much better than I'm making it right now what they said let's actually look at human human agents and ask the question how long does it take humans to learn to play a game like breakout of frostbite and we see a learning curve with an episodes right it's the third playthrough and you're already twice as good as you were in the first playthrough where no where near this with our all today." }, { "end": 1850, "start": 1840, "text": " So the original Atari games themselves came out in I think 1977 that's 44 years ago and the RL communities is kind of still working on them with the alley." }, { "end": 1869, "start": 1850, "text": " Do you think that games will always be like far ahead of our ability to to learn strong agents to play them that's right so I'm guessing here you mean by games that there are new games being created that are more challenging I don't think so and the answer to this I suppose is multifaceted but actually let's look at the." 
}, { "end": 1898, "start": 1869, "text": " Games that people are playing today you know my partner and I haven't playing a lot of overcooked lately and in many ways overcooked is a lot simpler game than some games we saw in the 1990s at the early 2000s so I don't think games that is getting more complex but you're right that maybe there's something special about Atari that we've lost since Atari games where games that came out of the arcade cabinet the arcades and then you know they had to be in the market cabinet and they they were all designed for the most part of the game." }, { "end": 1927, "start": 1898, "text": " Actually not all of them at all most almost all of them were designed to be fast to be fast space and to to give the player continuous reward right your play space invaders and you have to keep getting score and increasing your score and the game has to and soon so that the arcade cabinet can collect more quarters that's something very special and even with the NES we don't see this anymore and even less so with with later platforms and you know you have these beautiful open ended games today that that just are completely different Minecraft as a great." }, { "end": 1956, "start": 1927, "text": " So maybe maybe the video games developers will keep coming up with new ways of thinking about games that are more complicated but I think I think we you know we have a good handle on certain kinds of games may be more importantly I think games are just one reflection of our lives and what's more likely to happen is as we understand better how to apply RL in a real life context then games will look easier to right as opposed to to the way we've taken now which is ignored and we're going to be able to see what we're going to do." }, { "end": 1966, "start": 1956, "text": " So I think it's ignored as an entire game is really a depiction a very crude depiction of a real life you know bomb is a game of tennis in some sense." }, { "end": 1971, "start": 1966, "text": " And so once we've cracked out the real life scenario is a bit better than I think it'll be easier to play these games." }, { "end": 1985, "start": 1971, "text": " And then I guess related is there do you see a natural successor to ALE or is this is the successor like a larger set of benchmarks like what it makes sense to collect some games for more recent times into a new benchmark or" }, { "end": 1988, "start": 1985, "text": " I guess you partly answered that that the challenges might be of a different kind." }, { "end": 1994, "start": 1988, "text": " Right exactly I don't think I don't think that the bigger ALE is the solution." }, { "end": 2000, "start": 1994, "text": " We've seen some incredible work in the RL community trying to develop these new benchmarks." }, { "end": 2007, "start": 2000, "text": " If I think about OpenAI for example OpenAI has been working hard at this for the last almost since their inception." }, { "end": 2018, "start": 2007, "text": " They started with Jim they looked at universe for a little while you know there's been proctgen that is a procedurally generated benchmark that came out recently." }, { "end": 2028, "start": 2018, "text": " And all of these are challenging the field in their own way but it's really that your question is again which part of games do we still need to figure out." 
}, { "end": 2054, "start": 2028, "text": " And I think one thing that Atari did that benchmarks previously hadn't done is to say you really have to address the perceptual challenge how do you map images to actions right the domains we had before for the most part let's say except for sort of special case applications where much more you know if you think about mountain car where you have a vector of two it's two real values and you've got to make a decision based on these two real values or" }, { "end": 2083, "start": 2054, "text": " a lot of grid world we have a lot of great world reinforcement learning where everything is effectively you could draw it in a piece of paper so when we look at more games are bigger games we're just really saying let's keep the perceptual component and you know crank up the volume to 11 but that's not to me that's not really changing the fundamental question now the more distinct question I think is when games get more complicated and again what is it me to play minecraft it's a very different question and playing ball but maybe there's other ways that we can ask this question." }, { "end": 2102, "start": 2083, "text": " As this question which is why is minecraft an interesting problem in the first place and what is the challenge that it's trying to to make us face and I think we don't really know we don't really have a good answer to this question which is why we haven't seen a natural successor to the early just yet and maybe in some sense we've seen a fragmentation of the field into multiple benchmarks." }, { "end": 2122, "start": 2102, "text": " So we had your PhD advisor doctor Marlos Machado on recently and he spoke about the loon controller which you both worked on and we'll have links to his talks on the episode page at talk or all calm and we will also have a link to your talk that you gave to the University of Maryland recently where you went into detail on this work." }, { "end": 2141, "start": 2122, "text": " So I mean we have limited time here so I'm not going to ask you to repeat all the very interesting things that you said in that talk and I recommend listeners check out the Machado episode and his talk and your talk but could you remind us of the overall goal of the loon project and and of the controller itself." }, { "end": 2167, "start": 2141, "text": " For sure so loon is a subsidiary of alphabet that that is now winding down unfortunately but was tasked with developing basically the loons giant balloons that could fly in the stratosphere and one of the missions is to deliver internet connectivity to regions where that might be difficult for various infrastructure reasons." }, { "end": 2188, "start": 2167, "text": " And so as part of this challenge the the balloons that are being flown are what we would call undirectured effectively one of these balloons is a floating in the stratosphere but 20 kilometers high in the air and the only things that it can do is it can go up down or maintain its altitude." }, { "end": 2201, "start": 2188, "text": " And so if you want to get from point eight point B what you have to do is you have to catch winds going in the right direction and that's as you can imagine it's really complicated when you're flying in this is this messy chaotic wind field in the stratosphere." 
}, { "end": 2218, "start": 2201, "text": " So what we did is we actually used reinforcement learning to learn a flight controller that could do all of this in simulation and then deploy that reinforcement controller to to fly the balloons and we actually we actually saw these balloons deploy the example over Kenya." }, { "end": 2230, "start": 2218, "text": " This was a massive success for us to maybe in January we could go to flight radar the website and actually on the website see the flight paths of a deep oral agent over Kenya this was magical for me." }, { "end": 2237, "start": 2230, "text": " I understand it performed really well and did you expect that in the beginning or were there points where you had some doubts that this was going to work out well." }, { "end": 2250, "start": 2237, "text": " I don't think I had any doubts in the sense that the way that the CTO of Loon South Candido pitched the project to me and also a colleague of my colleagues James Davidson who was involved a very early in a project." }, { "end": 2266, "start": 2250, "text": " It was very clear that this was a perfect fit for reinforcement learning because of this underectuated nature where really we thought no other controller is really going to be able to do as well here and both the perfect fit in terms of the model but also the tools." }, { "end": 2295, "start": 2266, "text": " That we had available to us so you know discrete small number of discrete actions the the analogy is not perfect but really does look like Atari in the stratosphere and so to me that made sense that this is what we should try to do and indeed you know the way we went about this is not quite Atari but we try to follow the pattern of the ethical project of of you know some choices are less importance of design choices less importance you happy the network what kind of training and and focus on the right." }, { "end": 2315, "start": 2295, "text": " And I think that paid off so I didn't have a doubt that this would work I was surprised at how quickly we got there so was it obvious to you right away sounds like it was that you would use model free off policy distribution or for this like was that very clear from you from the get go or do you ever take any deliberation to decide to to go with that." }, { "end": 2331, "start": 2315, "text": " And then some experiments very early on with actor critic methods which I suppose would be model free but value based and maybe a bit less of policy I was actually hoping following this ethical pattern that we could use something like like a" }, { "end": 2346, "start": 2331, "text": " little bit which is three search with value estimates and this is actually what what I learned working with Lune which is you have to understand the problem and the problem will dictate some of your solutions in this case the simulator is pretty slow and trying to do any kind of search with the simulator is really hard." }, { "end": 2350, "start": 2346, "text": " We can do search but it's with a simulator of the simulator and that's not ideal." }, { "end": 2357, "start": 2350, "text": " So model free just emerges the thing we could do well and it also happened to be the really the only thing we could do." }, { "end": 2367, "start": 2357, "text": " The distribution are apart I think the project would have worked well without it but it just made sense because we knew it so well we could control it and guarantee quality of the process." 
}, { "end": 2376, "start": 2367, "text": " Were you pretty confident that the simulator was going to be good enough to get the results you needed and there wouldn't be like a big sim to real gap or was that not a risk." }, { "end": 2389, "start": 2376, "text": " I was in confident at all and I think that Marlos would say the same we were incredibly surprised when the balloon flew its first flight in July 2019 that it flew so well." }, { "end": 2397, "start": 2389, "text": " I used to have Marlos's words and record about this let's let's say that he was quite surprised." }, { "end": 2412, "start": 2397, "text": " The thing so you use the word risk and in some sense we weren't too concerned about risk because when you fly one of these balloons there's a lot of safety layers of course that you know this is a real system and I think this is something that sometimes our research is forget." }, { "end": 2426, "start": 2412, "text": " When you implement reinforcement learning and a real application the our is just one part of a very large system and so you know down to the engineer looking at the balloon and asking the question that balloon does something fishy I'll take control." }, { "end": 2433, "start": 2426, "text": " So the risk wasn't there but the positive results was a surprise for sure." }, { "end": 2444, "start": 2433, "text": " So if I understood the reward is entirely about staying within that designated circle is that right and there was a bit of shaping outside the circle did that did that" }, { "end": 2455, "start": 2444, "text": " that reward function takes some some deliberation or was that pretty obvious to you exactly so the reward function maybe to to restate it is if you're within 50 kilometers of the station that you want the station" }, { "end": 2473, "start": 2455, "text": " keep at then you you receive a reward of plus one so it's very classic right we we try to keep something simple zero if you're not in the circle and one otherwise we found it useful to add a bit of shaping outside the region that's an artifact we really didn't need it to get good performance but it helped a little bit." }, { "end": 2493, "start": 2473, "text": " There's another component which is that we discourage power usage and this was done because power when you're flying a balloon is actually of course at a premium the balloon is solar powered and that was necessary so just to get technical for a second it's a multiplicative power penalty where we shrink the reward on the basis of using power." }, { "end": 2513, "start": 2493, "text": " Did it take a lot of tuning not really I think it's funny that early on we actually tuned it quite a bit but it was sort of moving the wrong piece the reason why things were in working is because we didn't have the right distributed training code once we fix the distributed training code we we realized a reward function didn't actually matter that much." }, { "end": 2531, "start": 2513, "text": " The reason for this the way I like to encapsulate this is when you're formulating a problem is a reinforcement problem you don't want to tell the agent how to do it you want to tell it what success is and for us success is one within the region so that's what we should go for really cool." 
}, { "end": 2553, "start": 2531, "text": " Okay so you mentioned how magical it was I can't imagine what it must have felt like after so much of your work really combined in over the years to come up with this result in terms of DQN the distribution work you've done and and and the whole field is partly driven by ALE as a benchmark." }, { "end": 2560, "start": 2553, "text": " So did you did you see this as a kind of one of the highlights of your career like how what do it mean to you to see that." }, { "end": 2576, "start": 2560, "text": " Well in terms of and you know is it a highlight of my career I hope so it's hard to know I can't judge of the future just yet but what was so great for me here is to to see as you say on a problem that nobody had really" }, { "end": 2596, "start": 2576, "text": " considered before that we could we could bring the tools we knew so well to to great success. Let me break it down a little bit here I think the most exciting thing to us was just the fact that our work in that setting forget about good performance forget about beating state of the art" }, { "end": 2625, "start": 2596, "text": " even though the controller that loon already had was really really powerful and tuned you know for production capabilities just the fact that it worked was pretty impressive the reason for this is you know reinforcement learning and its core is just a handful of equations and maybe you know now with an architecture thrown on top and to say this thing starts its life you want to call it that knowing nothing about balloons and just by trial and error gets to a point" }, { "end": 2646, "start": 2625, "text": " where it's now flying a balloon very very well that is just just just just just just amazing right so you know maybe making a parenthesis here this is actually what got me into reinforcement learning I didn't mention is at the beginning my very first project was actually applying redoing Jerry to Zora's work applying neural networks to backgammon" }, { "end": 2667, "start": 2646, "text": " and this was an eye opening moment to me I I knew about our L I dread you know I dread a textbook at taking a professor for cops class and I wrote down this bag and program and because I love to take care with with things I also wrote an interface I could play against the player and I think one or two months within my" }, { "end": 2684, "start": 2667, "text": " my internship I trained a program within your network and it beat me and you know I'm not a bad back and player and it beat me and I thought this is amazing you know this is this is a collection of numbers that's beating me at at the game that matters to me." }, { "end": 2690, "start": 2684, "text": " So I think that's generally generally speaking the feeling we've had with the loan project in general." }, { "end": 2698, "start": 2690, "text": " Awesome. Okay so moving to distribution RL you shown you shown everyone how effective distribution RL is." }, { "end": 2708, "start": 2698, "text": " Do you think that we should be learning distributions even in supervised learning or is there something very specific about value functions that makes learning distributions for them especially helpful." 
}, { "end": 2718, "start": 2708, "text": " I think so I actually have a good colleague of mine Martha White at the University of Alberta actually has a paper where she looked at this question specifically should we should we think of using" }, { "end": 2726, "start": 2718, "text": " the classification loss in context where we're doing regression which is in some sense sort of the abstract version of the question you're asking." }, { "end": 2745, "start": 2726, "text": " I do think that in reinforcement learning there's something a little bit more interesting that happens and that's because a lot of the distributions that we encounter are a lot more varied right so in both cases we're mapping inputs maybe images or vectors to outputs and in the" }, { "end": 2755, "start": 2745, "text": " group is learning the outputs are targets and we don't really expect the distribution to be too complicated maybe that's wrong right maybe we we should change our view on this." }, { "end": 2770, "start": 2755, "text": " When we think about return distributions they're really more elaborate right we've seen this when we when we look at return distributions coming out of Atari for example we have these visualizations in a 2017 work with space invaders we've seen this a" }, { "end": 2789, "start": 2770, "text": " lot of things in continuous control tasks. The analogy that that I like to use here is you can think of of RL classic RL as you know taking a photograph of the world there's the real interactions that the agent has it with the" }, { "end": 2801, "start": 2789, "text": " respect that value is a bit like a black and white photograph and it was distribution all our gives you is a color version of that same photograph it really shows you all the details that you you would otherwise be missing out on." }, { "end": 2809, "start": 2801, "text": " I don't think we even know yet what to do with these details but now we have a color a color you know we have color photography what are we going to do with it." }, { "end": 2832, "start": 2809, "text": " So in terms of distribution agents for discrete off policy I guess model for you are we've seen a number of different ways of presenting the value function distribution so you shown C 51 and then there's quantile regression and IQ and implicit quantile networks and then FQF came came along." }, { "end": 2840, "start": 2832, "text": " I'm wondering is the problem of how to represent these distributions solved now or or there's some open questions there still." }, { "end": 2858, "start": 2840, "text": " I guess there's always open questions and it's only question of how deep do you want to go or how precise do you want to be in the context of distribution reinforcement specifically I guess you're asking about the part I would call the representation how do we describe the probability distribution how do we operate the" }, { "end": 2875, "start": 2858, "text": " probability distribution I don't think we have a perfect algorithm the a lot of the field is focused on understanding a little bit better how to make algorithm is at our based on the right loss functions and the right coupling the right representation with the right loss function." 
}, { "end": 2904, "start": 2875, "text": " But maybe the answer to the question are we done depends on what we want to do with the with these algorithms if for example we want to actually we know that if we want to maximize expectation we really need to predict the expectation except we know that predicting distribution has this funny effect where we we get more stable behavior and then we don't really understand but if instead of maximizing expectation now we want to be to have risk sensitive behavior you know maybe" }, { "end": 2918, "start": 2904, "text": " I don't want to shave off 30 seconds of my commute if it means that there's a 50% chance that I'll miss my turn for those kinds of questions I think we don't have a good answer and how to do distribution or I'll just yet." }, { "end": 2927, "start": 2918, "text": " I really enjoyed your kind of talk in November 2020 on distribution or that was a tour of distribution reinforcement learning and again we'll have a link to that on the episode page." }, { "end": 2937, "start": 2927, "text": " But in this talk you mentioned that there's some evidence for tea learning and also distributional value functions in the brain is that right and was that part of your inspiration at all for focusing on distribution or." }, { "end": 2953, "start": 2937, "text": " It wasn't actually the evidence as far as an extent came out after we've done the original work but it's been a real thrill to see to see the neuroscience community pick up on this and be quite curious to try to understand this." }, { "end": 2959, "start": 2953, "text": " You know in some sense it makes sense that if you can learn it and it's relevant to behavior that the brain should be learning it." }, { "end": 2973, "start": 2959, "text": " So and then you said in that talk and I'm paraphrasing you said something like the methods use stationary policies for fixed worlds and then in the real world it doesn't really make sense to react in a fixed way." }, { "end": 2982, "start": 2973, "text": " Could you help us understand maybe what you meant by that were talking about exploration or ensuring sensible responses to new observations or continue learning or maybe some of the." }, { "end": 3011, "start": 2982, "text": " I suppose it's a pretty cryptic comment that I made during the talk my feeling here is that maybe the way I like to describe this is that most most of our interactions are one of you know I'll go to a restaurant once or I'll go on vacation to remote location once and when we use reinforcement learning the paradigm we're in is one of repeated trial and error right I've done the same things." }, { "end": 3033, "start": 3011, "text": " Thousands of times so to me there's a bit of a disconnect here between how reality proceeds and the framework that we're operating in so I see this is something that we maybe we need to address going forward so I want to ask you about something you said on Twitter I follow your Twitter and I encourage listeners to to check out Professor Realmeyer's Twitter account." }, { "end": 3047, "start": 3033, "text": " So you said you said the the RL of the future is not an algorithm in the TCS sense we think of algorithm as the equations for learning the policy that's probably too narrow instead we need to think." }, { "end": 3057, "start": 3047, "text": " Agent architectures understand the relation between equations and non equations can you say anything more about that in terms of what do you mean by the non equation part." 
}, { "end": 3080, "start": 3057, "text": " So I guess that's also another cryptic comment we talked about agent architectures before the thing actually some of these ideas we've talked about today the fact that reinforcement learning at the end of the day gives us a model and if I go back in fact and read my colleague Gerger noise point he was saying maybe we need the right RL algorithms." }, { "end": 3086.44, "start": 3080, "text": " You know, can we start unifying things, for example, or can we come up with the right learning rule?" }, { "end": 3090.6, "start": 3086.44, "text": " And my response to this is maybe we just need to change the model." }, { "end": 3097.44, "start": 3090.6, "text": " And the non-equations here are all the things that are not modeled or don't really fit in that neat mathematical framework." }, { "end": 3106.68, "start": 3097.44, "text": " I think there's a lot that we don't understand that doesn't fit in the mold of RL that is worth revisiting." }, { "end": 3112.68, "start": 3106.68, "text": " Cool. And then briefly, I was hoping to ask you about a paper you co-authored hyperbolic discounting and learning over multiple horizons." }, { "end": 3115.68, "start": 3112.68, "text": " That's fetus at all 2019." }, { "end": 3121.68, "start": 3115.68, "text": " And so far, most work seems to use a fixed gamma discount for a fixed horizon." }, { "end": 3125.68, "start": 3121.68, "text": " And this paper looks at, I guess, multiple horizons." }, { "end": 3135.68, "start": 3125.68, "text": " Do you think like ALE let us to focus on MDPs with a very certain range of horizons or what type of environments may benefit from these multiple horizons?" }, { "end": 3138.68, "start": 3135.68, "text": " And maybe hyperbolic discounting?" }, { "end": 3142.68, "start": 3138.68, "text": " Totally. I don't think anybody has ever made that remark before." }, { "end": 3154.68, "start": 3142.68, "text": " But certainly the nature of Atari games means that for the vast majority, they take place at a certain temporal resolution, if you will." }, { "end": 3160.68, "start": 3154.68, "text": " You're playing at the arcade and you have to respond every half second." }, { "end": 3166.68, "start": 3160.68, "text": " You have to do something important and maybe every three seconds you receive a point for doing something important." }, { "end": 3173.68, "start": 3166.68, "text": " And so this is why, in fact, we've been able to use a fairly constant discount factor across the entire ALE." }, { "end": 3180.68, "start": 3173.68, "text": " Now, this had, there have been domains, for example, Dota 2 that Open AI worked on where they use a much larger discount factor." }, { "end": 3187.68, "start": 3180.68, "text": " I think it's in, you know, tens of thousands of equivalent steps in terms of the equivalent horizon." }, { "end": 3193.68, "start": 3187.68, "text": " This would be a hundred times bigger than Atari or maybe even a thousand times bigger." }, { "end": 3200.68, "start": 3193.68, "text": " I think the reason why I suppose, you know, what kind of environments would benefit from hyperbolic discounting or multiple horizons?" }, { "end": 3207.68, "start": 3200.68, "text": " I think as we move towards more naturalistic environments and less game-like environments, we'll see more of this." }, { "end": 3214.68, "start": 3207.68, "text": " I see, I think we see plenty of examples of problems where, you know, what matters more is to get a big 80 of the future." 
}, { "end": 3218.68, "start": 3214.68, "text": " Maybe what we haven't figured out is why hyperbolic discounting would be better in that context." }, { "end": 3221.68, "start": 3218.68, "text": " Is there truly a reason to be hyperbolic?" }, { "end": 3227.68, "start": 3221.68, "text": " One theory that I love is that we're bounded agents, bounded rationality agents," }, { "end": 3233.68, "start": 3227.68, "text": " and so we have to make some choices on the basis of finite data and a constantly changing world and hyperbolic discounting." }, { "end": 3236.68, "start": 3233.68, "text": " It's maybe one solution to this problem." }, { "end": 3243.68, "start": 3236.68, "text": " Can you say a bit about your current research interests and type of things your students are working on these days?" }, { "end": 3246.68, "start": 3243.68, "text": " I think we're looking at a lot of different directions." }, { "end": 3252.68, "start": 3246.68, "text": " And in part, that's a consequence of maybe a broad interest on my end." }, { "end": 3267.68, "start": 3252.68, "text": " And also, feeling that, in looking for that next paradigm shift, we do have to keep an open mind that we won't get our next big breakthrough just by changing the queue learning into a new version of queue learning." }, { "end": 3278.68, "start": 3267.68, "text": " One of the directions we're excited about is, again, in a space of representation learning, trying to understand how do we describe more complex phenomena." }, { "end": 3289.68, "start": 3278.68, "text": " Right now, in some sense, we've been tied to the success of DQN that we, the only real way we know how to build representations is to do deep learning and let the magic happen." }, { "end": 3296.68, "start": 3289.68, "text": " And while that's good, it's a little bit unsatisfying that we can't go deeper and understand this better." }, { "end": 3304.68, "start": 3296.68, "text": " And the place where we've been doing some work that I'm quite excited, I think, is going to come out quite soon, is to revisit benchmarking." }, { "end": 3312.68, "start": 3304.68, "text": " Again, one way to challenge ourselves is to understand how we design good benchmarks and steady benchmarks." }, { "end": 3322.68, "start": 3312.68, "text": " And so that, what we've been finding, in fact, is a little bit that a lot of the progress that we're making in the field sometimes is maybe stationary and not much as changing." }, { "end": 3325.68, "start": 3322.68, "text": " So this is some of the work that we've been working on lately." }, { "end": 3331.68, "start": 3325.68, "text": " Your own work and your group's work. Are there things happening in RL lately that you're pretty excited about?" }, { "end": 3336.68, "start": 3331.68, "text": " I think from an application's perspective, offline RL is going to be a game changer." }, { "end": 3340.68, "start": 3336.68, "text": " It used to be called Batch RL, so it's not a new problem." }, { "end": 3350.68, "start": 3340.68, "text": " But we see it, for example, when we move from video games where we have a simulator to robotics where we don't, we need to understand how to learn from a fixed set of data." }, { "end": 3355.68, "start": 3350.68, "text": " Really what supervised learning does so well, reinforcement, really struggles with." }, { "end": 3360.68, "start": 3355.68, "text": " So that I'm very excited. I think it's going to come in quite soon. I'm very excited to see where we can take things." 
}, { "end": 3373.68, "start": 3360.68, "text": " I also love the work that's going on a model-based RL. I think it's addressing an important question of how do we deal with counterfactuals, the fact that many things that arise around us we've mounted a bit exposed to before." }, { "end": 3380.68, "start": 3373.68, "text": " I think last but not least, I'm also excited to see what neuroscience can contribute to reinforcement learning." }, { "end": 3387.68, "start": 3380.68, "text": " Whenever I pick up a neuroscience paper, I see an incredible amount of interesting phenomena that we're completely ignoring." }, { "end": 3395.68, "start": 3387.68, "text": " Professor Belmer, I can't really explain to you how much this episode meant to me. I've been reading your name in the literature for years." }, { "end": 3405.68, "start": 3395.68, "text": " I've been a big fan and if anyone would have told me that I would have a chance to interview you in this podcast when I started in 2019, I probably would not have believed them." }, { "end": 3412.68, "start": 3405.68, "text": " So I want to thank you so much for sharing your time and your insight with me in the talk our RL community today. Thank you, Professor Belmer." }, { "end": 3416.68, "start": 3412.68, "text": " And the same to you, thank you. This has been a fantastic opportunity." }, { "end": 3426.68, "start": 3416.68, "text": " Thanks for watching." }, { "end": 3458.68, "start": 3446.68, "text": " Give us a five-star rating on Apple podcasts. If you don't think we deserve five stars, let us know on Twitter what we could do better." } ]
Robert Osazuwa Ness
Dr. Robert Osazuwa Ness on Causal Inference, Probabilistic and Generative Models, Causality and RL, AltDeep School of AI, Pyro, and more!
https://media.transistor…35b.mp3?src=site
This is TalkRL Podcast. All reinforcement learning, all the time. Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan. Dr. Robert Osazuwa Ness is an adjunct at Northeastern University, an ML research engineer at Gamalon, and the founder of AltDeep School of AI. He holds a PhD in statistics; he studied at Johns Hopkins and then at Purdue University. Robert, thanks so much for being on the show. Thanks so much for having me. So how do you describe your area of interest? Sure. I focus on the intersection of causal modeling, probabilistic modeling, and machine learning. A big goal of mine is to introduce more causal reasoning methods into the machine learning community, particularly when it comes to generative models. So can you tell us a bit about your PhD thesis? I think it had to do with causal models in systems biology, is that right? Yeah, that's right. There's a bit of a story there. Prior to my PhD I was living in China, working at some internet companies, and I got interested in working with data and engineering data-driven apps. That's what drew me to statistics. I had also read a few books at the time on synthetic biology and this idea that you could use symbolic logic, like you could code a program, into biological circuits and it would serve some function. That was the research I was interested in working on when I started my PhD. I ended up working on problems in systems biology because, if synthetic biology is about engineering cells, systems biology is about reverse engineering them, and for the reverse engineering problem, inference techniques become much more important. A big area in systems biology is the attempt to take data and reconstruct molecular pathways, or even go from a built-up pathway model and turn that model into something that can actually simulate data. That's what drove me to causal inference. One approach there is to take data, say of protein signaling within a cell, and apply algorithms that will reconstruct cause and effect relationships between the various components of that system. In the systems biology community they call that causal inference; more precisely, that's structure learning or causal discovery. My PhD research was trying to take causal discovery algorithms and ensconce them in a sequential experimental design framework, so that experimentalists could actually use these techniques to drive scientific discovery. The way that happened was there was this woman named Karen Sachs. She pioneered this method of using causal Bayesian network learning algorithms to reconstruct signaling pathways. It was funny, because I had read this paper and was really inspired by it, and shortly after I ran into her at a conference. I didn't know who she was; we were watching a talk we both thought was boring, so we snuck out to the little coffee and food tables before everybody else to beat the rush. We started chatting, and then she introduced herself, and I'm like, oh my god, that's who that is. We actually became collaborators and we're still really good friends. So I ended up taking her methods and wrapping them with an active learning framework that would allow you to say, okay, well,
I'm not so much interested in getting this big old hairy causal graph; I really want to drive some kind of reward function, say for example some new discovery, some hypothesis that has a low probability of being true and then turns out to be true, and then you get the paper, the accolades, and the funding and all that great stuff that comes after. So I built that active learning framework around that. It was a Bayesian active learning approach that would allow you to take causal discovery and operationalize it, essentially, and that was my first introduction to causal modeling and to reinforcement learning, insofar as active learning is a special case of reinforcement learning. So you had an agent that was maximizing your funding. I was building an agent for people who need to maximize funding. Personally, I guarantee you I'll never work on another grant application while I live. Okay, I'm sure there's a lot behind that statement. So it seems like causality became a really hot topic over the past few years in the ML community, and I can't help but wonder why something so fundamental took a really long time for everyone to get around to and think about clearly. Any comments on that? Yeah, I think there are a lot of reasons for that. One is the problem of transfer learning. I think we're all starting to realize that training with a loss function that optimizes predictive performance is insufficient to give you good transferability, or stable performance of your model across environments. There are lots of ways we're trying to address that, and we've used a lot of heuristics, for example ways of trying to avoid overfitting, but I think we're realizing that even that doesn't really get us where we want to go. I also think there's been a cultural gap between the people who've worked on these causal inference problems, the causal inference research community, and the machine learning community. They're working on different problems: causal inference people tend to be working on problems in the social sciences and public health, while the ML community, when it writes papers, is often looking at specific data sets or milestones and trying to do well on those benchmarks. There are also different stakes and different epistemological values. What do I mean by that? I think causal inference researchers focus very much on objective truth, because the stakes are high, because they're working on problems in policy and health. If I'm talking about whether smoking causes cancer, that's a problem that's going to affect people's lives, it's going to affect the bottom line of some major corporations, it's going to affect public health policy. So you want to make sure that you're right; the cost of being wrong there is expensive, and not just in terms of money but often in terms of life. And because of that focus on objective truth, there's a lot of emphasis on mathematical rigor.
In contrast, I think the machine learning community is focused on predictive performance and benchmarks because they're trying to push the state of the art, and if something works extremely well and we don't have a mathematical theory for why it works that well, that's okay, so long as we have a trajectory for moving forward. That kind of characterizes deep learning, but there are other branches of machine learning, say for example latent variable models inspired by computational psychology, where the idea is: we have this model for how the system works, say a topic model, where a document is driven by the topics that are present in it. We know that's incomplete, we know there are many other things that determine a document, but that model might be good enough for the problem we're trying to solve. Alternatively, you might take the computational cognitive science approach of saying, all right, I want to build AI, here's my theory about how humans reason about the problem, let me build a model that can duplicate that. That's a good way of maybe building AI, which is to say: let's look at real intelligence and see if we can reverse engineer it. The real human making those judgments or decisions or predictions might be wrong, but you're not interested in whether the prediction is right or wrong; you're interested in how faithfully you can replicate the way the human reasons, because human intelligence is pretty good. So those are two different sets of values and two different sets of goals, and I think that creates a kind of cultural divide. If you're a machine learning researcher and you have some problem you want to solve, and you realize you need some kind of causal inference solution, so you dive into the causal inference literature, it's a little bit opaque, because they're not only talking about different things, they have a whole different set of values and a different set of goals. I think it should also be mentioned that there are different workflows. Deep learning has led to very strong improvements in the state of the art, and that's led to this workflow where you focus on mapping raw data to the output, end-to-end machine learning. You don't want to reconstruct a new model each time; you just want the right architecture with the right inductive bias for the right problem, applied to the raw inputs, predicting the outputs, whatever they are, the label, the reward, whatever, and you let gradient descent take care of everything in between. And that doesn't work in causality, because you have to make explicit structural assumptions about how the system works.
There are people in machine learning who are trying to avoid that, say by using deep learning approaches to learn causal structure from data, which in theory lets you skip the structural assumptions because you're learning the structure. But it's mathematically proven that there are some assumptions you can't learn from data, not without some kind of inductive bias, and that inductive bias tends to need to be provided in some explicit way to the modeling algorithm. You're not going to get it implicitly through max pooling or attention; it's not going to just come out of some off-the-shelf architecture. Maybe tomorrow they come up with some new architecture that solves everything, but you can point to the math and say: listen, here's the thing you can't learn from data; if you want to solve this, you're going to have to expose some kind of inductive bias or some kind of Bayesian prior to the thing, otherwise it's not going to work. And a lot of our end-to-end machine learning workflows don't really have that kind of interface. Those are my three reasons; I think there are others too, but does that make sense? Yeah, I mean, if you spend a lot of time thinking about Atari and MuJoCo environments, it seems like notions of causality are kind of optional, in the sense that you can get quite far without that stuff, and you might just forget that it's there if you're not dealing with messy real-world data, with domain shifts and things like that. Is that fair to say? Like in Atari, does your agent care what causes death beyond the immediate actions it needs to take? One of the things I've been harping on with other researchers who talk about causal inference in the domain of AI and machine learning is that they often write papers with these nice little toy models, like a little four-node DAG or a structural causal model with a linear assumption, and I'm saying, you show this to a machine learning person and they're going to scoff. These simple little pocket models might be useful for proving some idea; causal inference people do this a lot, because we're like, oh look, here's this little tiny network, and let me show you how, if you try to estimate things a certain way, you can go completely in the wrong direction. That's useful because it's a very simple way of showing how things can go wrong, but these little tiny models often feel a bit contrived. So I would say, if you have an idea for how this could improve sequential decision making under uncertainty, implement it in OpenAI Gym. Don't just go for the simplest model possible; go for the simplest model possible in OpenAI Gym. Use a FrozenLake example, or use, what's the game where you're shooting the aliens as they come down, Space Invaders.
Use one of those simple games. I mean, those are still simple, they're Atari games, it's not like you're playing Cyberpunk. And those games have, well, there's a reinforcement learning course by Charles Isbell, I think it's on Udacity, I don't remember exactly, but one of the things he says in that course that's interesting is, when he explains the transition function in the Bellman equations, he says this thing encapsulates the physics. Actually, I don't remember if it was Charles Isbell or Michael Littman who says this, but anyway, he says that the transition function encapsulates the physics of the world. So there should be some connection between causality and physics, hopefully. I would say that the physics of the Atari game is the causal specification of that system. So in theory, if you had an agent, a learner, who in a model-based reinforcement learning approach had some kind of knowledge of the physics of the underlying system, of the game it's playing, then it could certainly reason causally about which of its actions are going to affect the environment. But to your point, you might think: should every agent have an understanding of the underlying physics of the world it's operating in? And the answer could be: it depends. If we look at cognitive science, there's a lot to be said about this idea that humans have an intuitive physics, a folk physics model in their heads, when it comes to understanding physical objects and their interactions, so cars hitting deer, billiard balls bouncing off the sides of the table. And they also say that humans have a quote-unquote intuitive "physics" for other domains, like an intuitive psychology. You and I could be sitting at a cafe, watch some couple across the cafe have a conversation, and we would make pretty good inferences about that conversation based on an intuitive theory of psychology we're applying, one we don't really learn from data, we're just kind of born with it. So there's a lot to be said for a model-based approach where the transition function is driven by some kind of domain physics. But to your point, there's the problem of trying to train something in a simulated environment and then taking it out of the simulated environment and having it work in reality, say for example with robots. Often the reason why this is hard to do can be characterized in causal terms, which is to say: when you create a simulation environment, you try to reduce all the variables in the system to only those you think the agent needs to be worrying about, and if you're wrong about that, if there are some things the agent needs to worry about that you've excluded from the simulation but that could hurt or affect that agent in the real world,
then you're going to have an issue, and in causal inference terms we call this the problem of confounding, or latent confounders. So those are two aspects of how causality comes into play there. So how do you define the idea of cause? Like, why are we doing the show today: is it because we clicked on the interview link, or because you're a successful researcher, or because of the big bang? How do we think about what really caused something? It's a good question. For one, we have to recognize that philosophers have been trying to define and parse causality for millennia now; even Buddha had a definition of causality. I think oftentimes people just want to focus on bread-and-butter machine learning problems and focus on the math, and it gets a little bit uncomfortable when you delve into philosophy, but unfortunately here you have to, actually I think fortunately you have to, because it's really interesting to talk about what it means for something to be a cause and what it means for something to be an effect. This is right up there with those other philosophical problems of data science and machine learning, like the problem of induction, for example. And it's relevant today in terms of how people approach causal problems. There are a few distinctions there. There's a manipulability theory of causality and there's a counterfactual theory of causality. Manipulability means, say, A and B are correlated, but if I do something to A, B is affected, while if I do something to B, A is not affected, then A causes B. Somebody might object to that and say, well, the problem with that argument is that your definition of causality requires the presence of a human agent, but presumably things cause things to happen on Mars, and that has nothing to do with us. Another competing theory is the counterfactual theory of causality, where you say: A and B moved together, I observed that A did this and then B did that, but had A not done this, B would not have done that. That's a counterfactual definition. There are other philosophical aspects to it. There's a teleological understanding of causality: for example, if I ask you why this knife is sharp, you might say because it was sharpened on a whetstone, or you might say because it is supposed to cut things in half. There we're asking whether causality lies in mechanism or in function. We have dependence notions of causality, which is what you might think of with a directed graph, where you're basically trying to boil causality down into ideas of things being dependent on one another or conditionally independent of one another. And there's this idea of type causality and actual causality: type causality is what we typically see when we draw a graph, like saying smoking causes cancer, while actual causality is focused on
the events, the outcomes. Like, Eli smokes for 20 years and as a result he has this kind of cancer. So there's a lot to unpack there when we're trying to define causality. The examples you gave, is it because somebody clicked the link, or because of my background, or because my mother gave birth to me, that's actually an example of proximate cause. That's a legal term, but we have a formal definition of it in causal inference; the colloquial, or rather legal, term is proximate cause. Would the Holocaust have happened had Hitler's mother not met Hitler's father? It probably wouldn't have happened, and yet we wouldn't blame Hitler's mother and father, or at least their meeting, for that outcome. That's the idea of proximate cause, which is really important when you're trying to ask why something happened, or assign blame, or, in reinforcement learning terms, figure out regret. So you teach about both Rubin's potential outcomes framework and Judea Pearl's structural causal models. Can you talk about these two frameworks: should they coexist, do we need them both? Yeah, people talk about this. What I'll say is that there's an equivalence between the two approaches, meaning the axioms are such that if you can use the potential outcomes framework to solve a problem, then you can also solve it in the Pearlian framework, and vice versa. Practically, how easy it is to solve a problem of course depends on the problem you're trying to solve. I like to think of it as the difference between functional programming and object-oriented programming: there's nothing you can do in one paradigm that you can't do in the other, but you might prefer to solve certain problems in one paradigm relative to the other for various personal reasons, like maybe one way is more fun for you, as well as practical reasons, like maybe you're working with a database schema and so it's easy to think in terms of entities and attributes. So I think one needs to think about it in those terms: what is best for the class of problems you're trying to solve? There are a lot of practical differences. For me, one of the key distinguishing features of the Pearlian approach is that it makes a crystal clear distinction between causal ideas and statistical ideas.
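To make that distinction concrete, here is a minimal sketch, not from the episode and with all numbers and variable names invented, of a confounded system where the statistical quantity E[Y|X=x] and the causal quantity E[Y|do(X=x)] disagree:

```python
# A minimal sketch of the causal-vs-statistical split: conditioning on X in
# observational data is not the same as intervening on X when a confounder Z
# drives both X and Y.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

z = rng.normal(size=n)                                 # confounder
x = (z + rng.normal(size=n) > 0).astype(float)         # "treatment" influenced by z
y = 2.0 * x + 3.0 * z + rng.normal(size=n)             # outcome driven by x and z

# Statistical quantity: E[Y | X=1] - E[Y | X=0] (picks up the confounder)
observational = y[x == 1].mean() - y[x == 0].mean()

# Causal quantity: E[Y | do(X=1)] - E[Y | do(X=0)], simulated by overriding x
y_do1 = 2.0 * 1.0 + 3.0 * z + rng.normal(size=n)
y_do0 = 2.0 * 0.0 + 3.0 * z + rng.normal(size=n)
interventional = y_do1.mean() - y_do0.mean()

print(f"E[Y|X=1] - E[Y|X=0]         ~ {observational:.2f}")   # around 5.4: biased
print(f"E[Y|do(X=1)] - E[Y|do(X=0)] ~ {interventional:.2f}")  # around 2.0: true effect
```

The two quantities coincide only when no unblocked back-door path connects X and Y, which is exactly the kind of statement the graphical machinery lets you check before worrying about estimation.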
In contrast, the potential outcomes literature focuses closely on the statistics. Making that clear distinction matters: if I want to teach you how to reason causally about your system, it's nice if I can separate the causal ideas from the statistical ideas, because the statistical ideas I can tell you to go look up in a stats book, while we focus on the causal ideas. It makes it easier to learn, especially if you don't have a background in statistics or economics or social science, so it doesn't feel like you have to pick up a new master's degree just to learn how to apply causal reasoning to your problem. It also provides a nice unifying framework for thinking about the various problems you might face within causal inference. For example, I often teach economists, and when you learn economics you learn a bunch of causal inference techniques that are very much methods: you have this kind of problem, use this type of methodology; you have that problem, use that methodology. You don't really have any overarching link in your head that explains all of these and why they work, and the Pearlian approach does that. An example might be propensity score matching, which is a technique for adjusting for confounding, and it's really easy for me to come at it from a Pearlian standpoint and explain to you, using graphs and ideas like d-separation, exactly how propensity score matching works (see the sketch just below). That's often a win when I work with people from an economics or econometrics background: I can finally tie together all these causal inference methods they're using, and they'll actually know why they work. On the other hand, the potential outcomes approach is extremely practical: they focus on the stats, which is important when you actually need to solve a problem. To use a quasi-reinforcement-learning example, let's say you work at a tech company as a marketer and you need to construct a sequential email campaign, where the content of each email and the time it's sent depend on whether the previous emails were opened, and maybe the reward is not just that they click on an email but that they go and engage on a website. Engagement, say, is quantified in your company by how much time they spend on the site over two weeks, taking the area under that curve or something like that. So you have a response that's probably tricky to model parametrically, you have some kind of confounding that depends on time, and you need to adjust for that confounding, so you might go and say, I'm going to construct some kind of instrumental variable. I'm throwing words out there, but this is just a technique that people use.
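Before the instrumental-variable example continues, here is a hedged sketch of the propensity-score adjustment just mentioned, using inverse-propensity weighting rather than matching for brevity; the data, coefficients, and variable names are invented for illustration:

```python
# Propensity-score-style adjustment for a single observed confounder z.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 100_000

z = rng.normal(size=n)                           # observed confounder
t = rng.binomial(1, 1 / (1 + np.exp(-1.5 * z)))  # treatment assignment depends on z
y = 2.0 * t + 3.0 * z + rng.normal(size=n)       # true treatment effect is 2.0

naive = y[t == 1].mean() - y[t == 0].mean()      # biased by the back-door path T <- Z -> Y

# Estimate the propensity score e(z) = P(T=1 | z), then reweight each arm so the
# confounder distribution matches the whole population; d-separation justifies
# adjusting for z, since conditioning on z blocks the back-door path.
e = LogisticRegression().fit(z.reshape(-1, 1), t).predict_proba(z.reshape(-1, 1))[:, 1]
ate_ipw = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))

print(f"naive difference in means: {naive:.2f}")    # noticeably above 2
print(f"IPW-adjusted estimate:     {ate_ipw:.2f}")  # close to 2
```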
I want to find some instrumental variable, and in order to find that instrument you have certain candidates available and you have to figure out exactly how to make it all work. The potential outcomes literature will provide papers that say: when you have this kind of data and this kind of problem, here's how you solve it, and by the way it's going to be really nice because it's going to reduce variance and it's going to converge quickly. The Pearlian stuff never talks about any of that; they assume you can solve your own statistical and parametric modeling issues, and they focus on the high-level concepts, and that's fine. So it really depends on how you want to approach it. One example: I was thinking about our conversation today, and I remembered reading Rich Sutton's The Bitter Lesson, that essay. Now let me ask you, because whenever I try to talk about this with people, I think the interpretation depends on your own prior beliefs; the essay is almost a mirror that reflects whatever you bring to it. My takeaway from it, and I guess it's really more of a blog post, was: if there is a brute-force computational method that can solve a problem, and you're thinking about an alternative method that incorporates domain knowledge because that domain knowledge might make it more efficient, might let you solve the problem faster or more easily with less computational work, don't do it, because eventually compute is just going to get cheaper and the cost will become moot while you have wasted your time coming up with this domain-knowledge-based approach. Is that a good takeaway, a good read? I read it in a similar way, for sure, but I agree with you, it's like those inkblot tests, you can read so many things into it. I mean, just from what you've been saying today, when I compare that to the blog post: you were saying that there are certain types of information you can't learn from data, so adding more data doesn't actually get you there, and so maybe the bitter lesson doesn't fully apply in cases where you need some information outside of the data, like a causal DAG. Right, so my Rorschach test for reading that essay, as a person with a causal modeling background, is this: what's nice about the Pearlian approach, again, is that when you take that approach to learning it, you get this very clear line in your head between causal ideas and statistical ideas.
And one of the key causal ideas is identifiability, which is to say: you are interested in asking questions like why is this happening, or what would happen if I did that, and we formalize those questions with very clear definitions of things like intervention and counterfactual. Sometimes we arrive at a problem where you can't answer the question with the data; it's fundamentally not identifiable, meaning you can't answer it even with infinite data. This nice separation of causal ideas and statistics means you can evaluate a problem for identifiability without the statistical concerns, because once you've resolved the identifiability issues, once you know it's a causal query you can answer, it's just a matter of coming up with the right statistical approach, and if you can brute-force that approach, then great, do it. But if the question fundamentally cannot be answered, then maybe you need to start thinking about how you can bring in domain knowledge to close that gap. If, for example, I can only narrow a quantity down to some equivalence class of values, then maybe I can bring in domain knowledge in the form of a prior or some kind of inductive bias that turns it from an unsolvable problem into a solvable one, and then I can throw all my compute at it. I think the Pearlian approach makes it really easy to think in those terms. But I use potential outcomes approaches all the time, so they come together for me, and I don't really think there's much substance to that debate, although other people definitely think there is. Yeah, the bitter lesson, I keep coming back to that, and it seems like maybe he was talking about something specific, some part of the problem that you can solve with brute force. But even then, if you look at leading agents today, their performance does depend on huge amounts of data, but the components of the agent have also been carefully hand-engineered to make the best use of that data. So it seems like there's always some kind of line, some domain knowledge, hand-designed things made by engineers; I mean, engineers and researchers came up with the CNN design to allow these agents to benefit from brute force. Yeah, I think the idea of radical empiricism in machine learning is a bit of a red herring. I think of it in terms of inductive bias. Tom Mitchell wrote that great paper, I guess in the mid-80s, about inductive bias, and laid it out in clear Cartesian terms, where he shows you that there will be multiple solutions consistent with the world.
So if I want to generalize from this training data, there are just multiple solutions to this problem, and my algorithm needs to be able to pick one over the others, and that's the inductive bias. There are a lot of ways we can approach inductive bias: if we're being Bayesian, we can encode it into a prior; the way deep learning architectures tend to do it is to make it implicit in the architecture itself, so convolutions and max pooling and attention, these are inductive biases. Oftentimes what happens is we discover what the inductive bias is retrospectively: somebody figures out that, hey, if you use this architecture then it works; why does it work, let's go analyze it; oh, it does really well with natural language, with hierarchy, and it tends to, I don't know, assume that words that co-occur tend to be close together, something like that. But your inductive bias is still just implicit in the architecture. So in your example, if people are hand-crafting agents, they're essentially trying to say, okay, when the agent encounters this situation it needs to act a certain way, and all of that is arguably encoding some kind of domain knowledge into the agent. Culturally, this argument is sometimes presented to me as: machine learning engineers don't want to use any kind of domain knowledge at all. I don't think that's true. I think they just want to do it once. What does that mean? It means they don't want, every time there's a problem, to construct a new model, put in all these structural assumptions, and build bespoke models for every problem. I think that's a good idea, but most people don't want to do that; they just want an approach they can import, train, and spin up, and they're comfortable that it's going to adjust to the situation. They'll be happy for it to have an inductive bias like invariance to translation, as you have with convolutional neural networks, so long as they don't have to re-implement it each time. But invariance to translation, if you're assuming your domain is invariant to translation, then you're putting in some domain knowledge, right? Like, if you are modeling Picassos, then it wouldn't work.
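Picking up the earlier point that some causal facts are provably not learnable from data alone, here is a small sketch, with invented numbers, of two structurally different models, X causes Y versus Y causes X, that induce exactly the same observational distribution, so only an assumption or inductive bias can choose between them:

```python
# Two linear-Gaussian models with opposite causal directions but identical joints.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Model A: X causes Y
x_a = rng.normal(size=n)
y_a = 0.5 * x_a + np.sqrt(0.75) * rng.normal(size=n)

# Model B: Y causes X, with coefficients chosen to match Model A's joint exactly
y_b = rng.normal(size=n)
x_b = 0.5 * y_b + np.sqrt(0.75) * rng.normal(size=n)

for name, (x, y) in {"A (X -> Y)": (x_a, y_a), "B (Y -> X)": (x_b, y_b)}.items():
    print(f"{name}: var(X)={x.var():.2f} var(Y)={y.var():.2f} corr={np.corrcoef(x, y)[0, 1]:.2f}")

# Both print var(X)=1.00, var(Y)=1.00, corr=0.50 -- yet do(X := x) changes Y only
# in Model A. Choosing between them is an inductive bias, not a fit to the data.
```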
So you said you started in economics, which I gather is mostly focused on the potential outcomes framework. Yeah, honestly I didn't learn any causal inference when I was studying economics, or rather I learned a little bit, like I said: difference-in-differences, propensity score matching, those kinds of techniques. They're basically a Batman utility belt: pull out this technique in this situation, that technique in that situation, but you never really understand what they're doing. So I guess that was a bit of an introduction to potential outcomes, but I never even thought of it as causal inference; it was just, when this thing happens do this, when that happens do that, kind of thinking. Even when I got my PhD in stats, they didn't teach us anything about causal inference beyond the gold standard, the randomized clinical trial, ordinary randomized experiments. It's really all self-taught. During my PhD, alongside my dissertation, I was taking Daphne Koller's course on Coursera on probabilistic graphical models, I was buying books, I was writing code and submitting to the code bases, so I had to pick it up on my own. Cool, okay, and you definitely did, and now you teach it. So can we move on to Gamalon? Can you tell us a bit about Gamalon and what you're working on there? Sure, so I work at a startup called Gamalon. Gamalon is focused on building natural language understanding SaaS for sales and marketing. Our flagship product is a conversational AI whose main distinguishing feature is that it tries to understand who it's talking to. This is me explaining it as an engineer who builds it; I'm not sure what the go-to-market people say publicly, so if this sounds a little different from what you find online, so be it. To me, the main distinguishing feature is that it's trying to understand who it's talking to, what they want to know, what they're trying to understand, and then provide that information, escalating to some kind of conversion, like here's an informational PDF you can download, or you can leave an email, or you can schedule a demo if it's appropriate. We actually have a very strong policy against being aggressive; the focus is on understanding the visitor, understanding what they're trying to do and understand, and helping them do so. I can't talk too much about the tech, except to say that I was offered the role because of my experience in probabilistic modeling, probabilistic programming, learning graphical structures, working with unstructured data, and doing unsupervised learning in discrete structural spaces, so take that for what it's worth. In my role there I focus very much on the challenge of building a SaaS product around core AI tech, which is a fantastically interesting and challenging problem. You see a lot of startups that are basically investor-funded research, so I feel really fortunate to be at a place with this very strong cross-pollination between building a software product, building AI, and
the kind of research problems that come up when you're trying to solve an AI problem. That's the intersection where I want to live. So you have all the challenges of doing a SaaS business, plus you're pushing the envelope on the AI as well; it's got to be a challenge. We often talk about how we have to solve all these little problems, and each one of them could be a paper, and we have to fit it within a scrum process of two-week sprints and work with full-stack developers and product people, and it's really fun. It's a fun place to work, and we're hiring, so if people are interested they should apply. Awesome, okay. And at the same time you're running AltDeep, your school of AI. Can you tell us about that, what is the idea with AltDeep? Yeah, the goal of AltDeep is to help people who are working in quantitative fields improve their practice, their reasoning, their decision making. The problem I'm trying to solve is this: we work in a multi-disciplinary field, and we feel like, oh man, I need to know more about signal processing to solve this problem, or I wish I knew more about category theory, or that community seems to have a nice mental model for understanding this domain but I come from a different background, or I need to learn more about causal machine learning for this problem I'm working on. But it's really hard to do that in a labor market driven by the FAANGs, who incentivize us to over-specialize, to get really good at a specific set of tools and have a very narrow way of looking at things. So the goal of AltDeep is to make it easy for you to acquire mental models that people in other disciplines have already figured out. We do that with workshops and courses that we run, a newsletter, and a community site. It's currently closed, but I'll tell you what, if you want I can send you a link that you can pass to your listeners who are interested; we're going to open it up soon, but for now it's invite-only. That'd be awesome. So can you tell us a little more about the types of courses you have right now? We have a causal machine learning workshop that runs in three cohorts a year, although we're going to increase that soon, so stay tuned. We have a probabilistic modeling workshop, which takes some of the traditional workflows that developed in computational Bayesian statistics, combines them with advances in approximate inference, and extends them to some of the more cutting-edge approaches we see: implicit models, using simulators that don't have likelihood functions as models and still being able to do inference, deep probabilistic models. I see it as: if you look at some of the best
textbooks on Bayesian inference, they tend to ignore the cutting edge of machine learning, and this workshop solves that problem. We have a course called Refactor: Evolve Beyond a Glorified Curve Fitter, which takes a high-level look at decision theory and Bayesian modeling, not just Bayes' rule but the actual high-level concepts of thinking like a Bayesian about a problem, takes in aspects of communication theory, and, as we were just talking about inductive biases, does a whole breakdown of inductive biases across various problems in machine learning. Then some things in the pipeline: we have a course on building Ethereum dApps, Ethereum applications, focused on social decision science, behavioral science, and game theory; a course on applied category theory that's in the works; and another course coming on decision science. I took your causal modeling course and it really opened my eyes to so many things on causality; I feel like I was missing out on a lot before I took it. I loved the course. I have this new vocabulary and skill set, but probably the most important thing for me is the mental frameworks, like you were saying, the way to think about this stuff and where you need it, and seeing how blind I was to it before. I chose your course partly because my friend at MDA in Vancouver recommended it, and that's actually how I met you, because of him, so shout out to him. But also, I found you have this ability to talk from all the different sides: you're authoritative on the causal modeling side and also on the latest machine learning and deep learning methods, and like you were saying, usually people are one or the other. Being able to easily converse about how these things relate, and to do it with clarity, made it a great decision. I'm super happy I did it, highly recommended, and thanks for the excellent experience with that course. Thank you. I'm dark-skinned, so not much of a blusher, but I would be blushing. Well, you deserve it; that was very helpful to me, and I'm sure it will be to anyone who takes it. So you're also a co-author of the well-known R package bnlearn. Can you tell us about that? Can we talk a bit about some tools?
Yeah, I was a contributor to bnlearn. I think co-author is strong, because the main author has done the bulk of the work. That package is generally about learning structure, not necessarily causal structure, although it can be used to learn causal structure. My contributions were mostly adding algorithms for learning causal structure, particularly in a sequential decision-making style. Say I'm trying to learn a causal graph, and the actions I'm able to take are interventions on nodes of the graph, and suppose each action costs ten bucks: what is the trajectory of actions that leads to fully resolving the causal graph as inexpensively as possible? I built algorithms that let you do that kind of thing. So can you tell us about other tools, your favorite tools, for causal inference and for building these data-generating-process models? Yeah, I'm a big fan of probabilistic programming, particularly probabilistic programming languages that use a deep learning framework like PyTorch or TensorFlow. So I'm going to plug Pyro, which is a PyTorch-based probabilistic programming language: you can implement deep learning models within this probabilistic modeling setting, and it has abstractions for causal reasoning, including a do-operator that will transform a model according to an intervention. I'm close with some of the developers of that platform; Eli Bingham is one of the core developers, he's a member of the AltDeep community, and he participates in a weekly causal probabilistic programming reading group we run on Mondays. There are also a lot of really cutting-edge probabilistic modeling techniques in Julia that are amenable to causal reasoning, because, for example, they let you build a generative model where the generator is a simulator and then give you the ability to do inference on that model, which is not trivial because simulators don't necessarily have likelihood functions, so it's hard to apply Bayes' rule. If you can do that, then all you need is a little bit of causal semantics on top and you can do some really powerful causal reasoning. Those tools often come out of research labs, and I haven't seen any become widely popular yet; I tend not to focus on tools until there's a nice developer community behind them. But the broader Julia community is certainly very healthy, growing quickly, and doing a lot of interesting things. So that's where I'm focusing these days: Pyro in the Python environment, and various things happening in Julia.
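As a concrete illustration of the do-operator just mentioned, here is a minimal sketch using Pyro's poutine.do handler; the toy model, coefficients, and sample counts are invented, and details may differ slightly across Pyro versions:

```python
# Intervening on a generative model with Pyro's do-handler.
import torch
import pyro
import pyro.distributions as dist
from pyro import poutine

def model():
    z = pyro.sample("z", dist.Normal(0.0, 1.0))                 # confounder
    x = pyro.sample("x", dist.Normal(z, 1.0))                   # treatment
    y = pyro.sample("y", dist.Normal(2.0 * x + 3.0 * z, 1.0))   # outcome
    return y

# Graph surgery: replace the mechanism generating x with the constant 1.0
intervened_model = poutine.do(model, data={"x": torch.tensor(1.0)})

# Forward-simulate the interventional distribution p(y | do(x = 1))
samples = torch.stack([intervened_model() for _ in range(5_000)])
print("E[y | do(x=1)] is roughly", samples.mean().item())        # about 2.0 here
```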
So where would you say we are with our understanding of causal inference and the tools that we have, as a community? Are we still in the kindergarten stage of figuring out the basics, or are we close to having everything sorted out? I'd say we're still pretty early days, because there are few libraries. For example, Amit Sharma over at Microsoft Research has a package called DoWhy, which allows you to do causal inference without having to understand all the machinery underneath: you specify a directed graph that represents the causal assumptions of your system and the thing you're trying to infer, you provide it with the data, and it takes care of the rest. That's a really powerful tool if you're looking at bread-and-butter causal inference problems. But say, for example, you want to spin up an agent that's going to try to learn a transition function using causal semantics: you would have to hand-code that. If you're going to do some kind of planning procedure where the agent has a causal model of how its interventions are going to affect the environment, you would have to implement that. If you're going to come up with a model that generates explanations based on a causal model, you would have to implement that yourself. So I think we're at very, very early stages in terms of tooling.
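For readers who want to see the shape of the DoWhy workflow just described, here is a hedged sketch; argument names and the graph format can vary between DoWhy versions, and the data and effect size are made up:

```python
# Specify assumptions as a graph, check identifiability, then estimate.
import numpy as np
import pandas as pd
from dowhy import CausalModel

rng = np.random.default_rng(3)
n = 20_000
z = rng.normal(size=n)
x = rng.binomial(1, 1 / (1 + np.exp(-z)))
y = 2.0 * x + 3.0 * z + rng.normal(size=n)
df = pd.DataFrame({"z": z, "x": x, "y": y})

model = CausalModel(
    data=df,
    treatment="x",
    outcome="y",
    graph="digraph { z -> x; z -> y; x -> y; }",  # the causal assumptions, stated explicitly
)
estimand = model.identify_effect()                 # is the effect identifiable at all?
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
print(estimate.value)                              # should land near the true effect, 2.0
```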
So let's talk about causality and RL. I know there have been a lot of papers that look at different aspects of the relationship between these two things. One very basic thing that has never been super clear to me: can we say that online RL is inherently causal, in the sense that it's learning about interventions by actually intervening, whereas offline RL is trying to learn about these interventions not by doing them, just by observing some fixed data? Well, RL, that term means a lot; what it means depends on who you talk to. I want to take the broader definition that Rich Sutton uses, which is that RL means not just the method, like optimizing the reward, but also the class of problems we want to solve; and if we want to be even broader, we can just talk about agent models, about creating agents that can make decisions under uncertainty, particularly in sequence. And yes, it's inherently causal in the sense that actions change the environment. In causal inference we have a very clear idea of what an intervention is and how it changes the underlying joint probability distribution created by the data-generating process, and that's exactly what an action is. To address your question about online versus offline: it applies in both settings. If I'm trying to learn an interventional distribution, like the conditional distribution of a reward given that I take some action, and I'm able to actually take actions, then I'm using interventions to learn an interventional distribution, and that's great. If I'm observing other agents in an offline setting, then I have to reason about whether I should treat those agents' actions as active interventions, or think of those actions as random variables and just model them alongside everything else; there are decisions to make there. But these are all things we can define clearly within the framework of causal inference: if there's some confounding variable causing that agent to be biased in the actions it takes, can I adjust for it? So I think it's relevant in both the online and offline settings. Elias Bareinboim has plenty of examples in the online setting, with bandit algorithms where there's confounding. He talks about an example where the bandit in a casino problem changes its behavior according to the state of the subject, the person playing the game, so it's kind of an adversarial bandit, and you can solve this problem using causal approaches; he has a technique called causal Thompson sampling. In the offline setting there's counterfactual reasoning. Counterfactual means: I married this person and now I'm happy; had I not married this person, would I still be happy? That's a counterfactual statement, and you want a causal model to answer that kind of query. In the reinforcement learning offline setting you say: here's this agent in production making decisions according to this policy; had it deployed a different policy, would it have gotten higher reward? That's called counterfactual policy evaluation, and it's certainly an offline problem, but you can have it in the online setting as well. You could say: the agent has been playing the game up to this time and got this reward at time T, and at time T minus K it made this decision; had it taken another action, what would the reward be right now? And honestly, that's how we as humans operate. When you're living your life, it's not based on a million past lives you've lived and learned from; you're learning as you go, and you only get one epoch. So you're making decisions based on counterfactual reasoning, and you're trying to minimize counterfactual regret. We're in the continual learning regime, I suppose, in real life, and online counterfactual reasoning is super important in that kind of regime.
So do we need to worry about causality in RL even on the very simple side? When we're in Atari or in MuJoCo land, do we need to think about it, or can we safely ignore it? Is there something to be gained by paying attention to causal inference even in those simpler settings?

The causal inference researcher in me says: listen, you're dealing with intervention distributions but you're modeling observational distributions, so if there's any kind of confounding, your choice of the best action can be biased. Exactly how biased isn't clear, but it's always going to be a risk. The engineer in me says: fine, just use the baseline method, and if it proves insufficient for the problem, then try to enhance it with some kind of causal model. What comes to mind is some work by Vicarious AI, their work on schema networks, where they show you can not only have the agent learn a good policy with far less data, but if you change the game in a way it hasn't seen before, it's able to adapt, because it has a causal model of how the game works.

I love that paper, I've been a fan of it for ages, and it makes it so simple, right? It's kind of how we think of the game. If you asked a child to explain how Space Invaders works, what is it doing, they would probably come up with some kind of explanation.

Yeah, and if you change the rules of the game, if you add some variations to the rules, a human who is familiar with the old rules is typically able to make predictions about how things will work under the new rules, and it seems to me you would want a lot of algorithms to have that ability. But there are people who work on many different practical reinforcement learning problems, and maybe there are some problems where that doesn't matter, where the rules are always going to be the same, and if you want to handle interventions you just treat interventions like random variables in the system and make sure you simulate every possible combination in the training data. Maybe that works in a lot of scenarios. So the engineer in me says: do what you can get done quickly, that works in a robust way, and that lets you get to your next milestone. The causal inference researcher in me says: hey, if actions are interventions, then you're working with an intervention distribution, and if you're trying to approximate it with just observations and no intervention model, then you could be biased, in fact you could be severely biased, and that can lead to catastrophe.
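To make the researcher's worry concrete, here is a small simulation with made-up numbers in which a hidden confounder (say, player skill) drives both the logged action and the reward. Ranking actions by their observational average reward picks the wrong arm; the interventional averages, which we can compute only because we wrote the simulator, rank them correctly.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hidden confounder U (e.g. "high-skill player"), never logged.
u = rng.binomial(1, 0.5, size=n)

# Logging policy depends on U: skilled players mostly pick action 1.
a = rng.binomial(1, np.where(u == 1, 0.9, 0.1))

# True interventional effect: action 0 is better for everyone,
# but U adds a big bonus to reward regardless of the action taken.
base = np.where(a == 0, 1.0, 0.5)
r = base + 2.0 * u + rng.normal(0, 0.1, size=n)

# Observational estimates E[R | A=a]: biased by the confounder.
obs = [r[a == k].mean() for k in (0, 1)]

# Interventional values E[R | do(A=a)]: average over U's marginal P(U=1)=0.5.
do = [(1.0 if k == 0 else 0.5) + 2.0 * 0.5 for k in (0, 1)]

print("E[R | A=a]    :", [round(v, 2) for v in obs])  # prefers action 1
print("E[R | do(A=a)]:", [round(v, 2) for v in do])   # prefers action 0
```

With these made-up numbers, the observational averages come out near 1.2 and 2.3 (so action 1 looks better), while the interventional values are 2.0 and 1.5 (so action 0 actually is better).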
I guess the other thing to mention here is that Pearlian causal models, that is, causal graphical models and structural causal models, are a special kind of probabilistic model, a special kind of generative model, and there is a lot of work on agent models from the probabilistic modeling standpoint. One thing you can do if you want to optimize an objective function is take the function that maps an action to a value and find the argmax. The other thing you can do is treat it as a Bayesian inference problem; this paradigm is called planning as inference. There's a class of researchers who work on applying Bayesian, or more generally probabilistic, approaches to agent modeling, and you could take Pearlian structural causal models or causal graphical models, implement them in those frameworks, and get the additional causal semantics. Then whatever advantages those probabilistic modeling approaches provide, you can enhance them with new capabilities because you have the causal abstractions built in.

Well, we had Taylor Killian on the show recently, and he talked about how, in some of his research, policies learned at one hospital to optimize treatment didn't work so well when taken to another hospital with a different distribution of patients, and these types of tools helped improve the results there; I won't try to summarize it any further. In your causality course you talk about this notion of free will in an agent. Can you remind us what you mean by that, and how is it helpful outside of philosophy? Does an Atari agent have this type of free will, or can we make RL agents that do?

So this discussion of free will, of blame, of intention, of explanation: these are things we would like to have in engineered agents that make decisions under uncertainty. Why would the phrase free will come up? If people listening Google causal decision theory and compare it to evidential decision theory, they'll find this is an old argument in decision theory, and there are problems, like Newcomb's problem, where you can show that you get different results under the different procedures. This ties into the idea of why a policy learned in one hospital doesn't transfer to another: that would be a setting where evidential decision theory fails and causal decision theory succeeds. The idea is simple. Causal decision theory says you can't model actions as random variables; you need to model them as interventions, and an intervention is an operation on the joint probability distribution over the random variables. Even in the setting where you have a stochastic policy that's generating an action, in the causal modeling approach, once the action is realized, yes, it's a stochastic output, but we don't treat it as a random variable; we treat it as an operation.
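A minimal sketch of what "treating the action as an operation" means, reusing the same flavor of toy model as the earlier snippet (the structural equations are made up): conditioning on an action filters the observational joint distribution, so the confounder gets dragged along with it, whereas do() deletes the action's structural equation and sets the action by fiat, leaving the confounder at its marginal.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample(n, do_a=None):
    """Sample from a toy SCM: U -> A, U -> R, A -> R (all equations invented).

    With do_a=None we sample observationally; otherwise we perform the
    intervention do(A = do_a): delete A's structural equation and set A
    directly, leaving everything upstream of A (here, U) untouched.
    """
    u = rng.binomial(1, 0.5, size=n)                     # confounder
    if do_a is None:
        a = rng.binomial(1, np.where(u == 1, 0.9, 0.1))  # A := f(U, noise)
    else:
        a = np.full(n, do_a)                             # A := do_a
    r = np.where(a == 0, 1.0, 0.5) + 2.0 * u + rng.normal(0, 0.1, n)
    return u, a, r

# Conditioning on A=1 in observational data drags U along with it...
u_obs, a_obs, r_obs = sample(200_000)
print("P(U=1 | A=1)    :", round(u_obs[a_obs == 1].mean(), 2))  # ~0.9

# ...whereas do(A=1) leaves U at its marginal, because the U -> A edge is cut.
u_do, _, r_do = sample(200_000, do_a=1)
print("P(U=1 | do(A=1)):", round(u_do.mean(), 2))               # ~0.5
print("E[R | A=1]      :", round(r_obs[a_obs == 1].mean(), 2))
print("E[R | do(A=1)]  :", round(r_do.mean(), 2))
```

That deleted edge is exactly the "operation on the joint distribution" being described here.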
How does that apply to free will? The idea is that if an agent's actions are just some stochastic function of elements of its environment, if my actions are just generated from a conditional probability distribution conditioned on the state of the environment, then I'm really just an automaton reacting to things, as opposed to something that's introspective and deliberate, choosing my actions according to some reasoning about how those actions are going to affect my environment. It's free will in the sense that we might watch an amoeba or a white blood cell in a petri dish, seemingly making decisions, moving around trying to absorb some antigen or some food, but we wouldn't typically want to think of it as having free will; it's just reacting to signals in its environment.

And so, as I mentioned, things like regret and blame: the free will argument shows up when we generate explanations. When we explain why something happened, we often oscillate between the idea of being passive reactors to our environment and being active agents. I usually teach the Adam and Eve story in the course: when God asks Adam why he ate the apple, he says, well, the woman you put here with me gave me the apple, and so I ate it; and then the woman says, well, the snake said it was good to eat, and so I ate it. But when Eve is considering whether to eat the apple, she's listening to what the snake says, yet she's being deliberative: she's reasoning about the intervention, if I eat this, then this thing will happen. The explanation she gives for her decision oscillates between framing herself as an active agent and as a passive agent.

This is important in the reinforcement learning setting; it's not just philosophy. Say I have a household robot that's cleaning the house, but it does so in a way that inconveniences me. Maybe it wakes me up in the middle of the night, or maybe it runs the laundry while I'm taking a shower, and I go to the robot and say, hey, don't do that. If the robot is going to improve in the future, it needs to reason about why I don't want it to do that. It could just work on reinforcement, but it's probably going to get stuck in weird behaviors if it does that. What you really want it to do is reason about why you were unhappy. It needs to think in terms of what I mentioned earlier, actual causality: retrospectively, this happened, which led that to happen, so he's unhappy because I did this thing while he was showering. Is that to say he doesn't want me to do this thing at this time, or that I should never do this thing, or only that I shouldn't do it when he's showering? And then it can update its policy accordingly. So these philosophical discussions concern how we can create agents that don't just react to plus or minus signals about whether an action was good or bad, but rather generate explanations for why an action was good or bad, or led to a good or bad outcome, and act accordingly. This becomes really important in settings where we have partial information and there's a random element in the environment. Say, for example, you're playing No-Limit Texas Hold'em: that's the kind of game where you could be playing an optimal strategy and still lose, and you could be playing an inferior strategy and still win. If you can generate explanations for why you win or lose in a way that's robust to that kind of uncertainty, you can cope with those partial-information, stochastic settings.
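The poker example is the home turf of counterfactual regret minimization (CFR), which is how large imperfect-information games like No-Limit Hold'em are typically solved. The snippet below is not CFR itself, just its building block, regret matching, run in self-play on rock-paper-scissors as a stand-in game: even though any single round is noisy evidence about strategy quality, the time-averaged strategy heads toward the equilibrium.

```python
import numpy as np

# Rock-paper-scissors payoff matrix for player 1 (zero-sum game).
PAYOFF = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]], dtype=float)

def regret_matching(cum_regret):
    """Turn cumulative positive regrets into a mixed strategy."""
    pos = np.maximum(cum_regret, 0.0)
    total = pos.sum()
    return pos / total if total > 0 else np.full(len(cum_regret), 1.0 / len(cum_regret))

rng = np.random.default_rng(3)
regret = [np.zeros(3), np.zeros(3)]        # cumulative regrets per player
strategy_sum = [np.zeros(3), np.zeros(3)]  # for the time-averaged strategy

for _ in range(20_000):
    strats = [regret_matching(regret[0]), regret_matching(regret[1])]
    actions = [rng.choice(3, p=s) for s in strats]
    for p in (0, 1):
        strategy_sum[p] += strats[p]
        # Counterfactual values: payoff of each action had we played it,
        # holding the opponent's realized action fixed.
        other = actions[1 - p]
        cf = PAYOFF[:, other] if p == 0 else -PAYOFF[other, :]
        realized = cf[actions[p]]
        regret[p] += cf - realized         # regret for not having played each action

avg = strategy_sum[0] / strategy_sum[0].sum()
print("Player 1 average strategy:", np.round(avg, 3))  # approaches (1/3, 1/3, 1/3)
```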
I guess in that example the robot now needs to know about showers, which before maybe it didn't care about, it wasn't part of its observation. Would we say that was like an unobserved confounder?

If the robot is going to generate an explanation as to why you are unhappy, and the real reason you're unhappy is that it ran a wash cycle while you were in the shower, then in order for it to land on that explanation it needs to have a model that includes showers and includes a relationship between showers and clothes washing, one that says if you run a hot wash cycle while somebody is showering, they're going to lose hot water. That's assuming a lot, but maybe that's a Rich Sutton problem, maybe we can brute force it.

Maybe we can move on to the ICML workshop on causal reinforcement learning by Professor Elias Bareinboim, who is also from Purdue, I just realized. I saw you covered this on your Twitch channel, and by the way, we'll link to the Twitch channel and all the links we talked about on the episode page at talkrl.com. That was a really intense lecture that covered tons of ground, but could you share a few takeaways from that workshop for me and my listeners?

Yeah, I think Elias has done more work than anybody in connecting causal inference, particularly Pearlian causal inference (his advisor was Judea Pearl), to reinforcement learning problems. That workshop is his overview of breaking reinforcement learning down in causal terms, into the different sets of problems you want to solve. I definitely recommend it, and he updates it every time he gives it; I think it's the second or third time he's given that talk. On my Twitch channel I do a little bit of live coding and a little bit of paper reading, but one of the things I really like to do is live-watch a workshop, or a summary of a paper, or a podcast covering a paper, something like that. So that's what I did: I did a few Twitch streams covering that workshop by Elias.

I've enjoyed the Twitch streams and your reading groups, and we'll link to all that good stuff. Moving forward, what does the future look like for you? You've got a lot of things going on. What are you looking forward to in the next year, and can you tell us a bit about your path going forward?
I like doing three things: engineering, teaching, and writing, and I plan to do a lot more of all three. When it comes to engineering, I'm very much focused on the intersection between AI and building SaaS products. I think we need more people who are willing to get on the front lines of that problem. They said software is eating the world, and we were hoping AI would too, but we're hitting a whole bunch of problems when it comes to the cost of training models and how robust they are after we've trained them, and trying to shoehorn these things into the past several decades of software best practices is not really working. That's a problem to be solved, but we also have hard research problems to solve, all the while getting feedback from customers and trying to reach product-market fit. I think that's a really fantastic problem space, and I encourage more people to think about it.

In terms of teaching, I'm working on more coursework for AltDeep. Like I said, we're starting to look at ways we can reconcile ideas from behavioral science and behavioral economics and connect them to Ethereum, we're looking at another workshop on applied category theory, and there are a few more items in the pipeline.

In terms of writing, I really hope to be writing more. I have a newsletter that's fairly popular, and I'm at a weekly cadence now; heck, I'd like to get it up to a daily cadence. One of the advisors, one of the early people involved with Substack, the newsletter company, is someone I know from my days in China, and he has a really popular daily newsletter called Sinocism. I'd love to be able to do that kind of thing with machine learning, but there's only so much time in a day. We also have this growing community, and we're interested in getting more people to join, so there's a lot of work happening there as well. So many things happening on many fronts; I'm trying to narrow it down a little bit and learn how to say no, but it's tough.

I look forward to it all. Links to everything will be on the episode page. Dr. Robert Ness, it's been a real pleasure having you on the show and chatting with you. Thanks so much for sharing your time and your insight with myself and our listeners today.

Thanks, Robin, for having me, I really enjoyed this. I think if there's any community within the machine learning space where causal reasoning is going to be the killer app, it's going to be reinforcement learning. I'm not going to say that the absence of causal methods is what's holding RL back from becoming much more practical and applied, but I think that if people were to adopt the mental models that come with this manner of thinking, we would see a lot of breakthroughs. I'll leave it on that note.

Notes and links for this episode are at talkrl.com. If you like this show, I need your support. You can help in a few ways. One, subscribe on your favorite podcast platform; subscriptions make a big difference. Two, follow us on Twitter at TalkRL Podcast; we love retweets. Three, give us a five-star rating on Apple Podcasts. If you don't think we deserve five stars, let us know on Twitter what we could do better.
[ { "end": 12, "start": 0, "text": " This is Talk by Rail Podcast. All reinforcement learning, all the time." }, { "end": 19, "start": 12, "text": " Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan." }, { "end": 24, "start": 19, "text": " Dr. Robert Osa-Zoo-Anness is an adjunct at the Northeastern University," }, { "end": 29, "start": 24, "text": " an ML research engineer at Gamalon and the founder of all deep school of AI." }, { "end": 36, "start": 29, "text": " He holds a PhD in statistics. He studied at John Hopkins and then at Purdue University." }, { "end": 38, "start": 36, "text": " Robert, thanks so much for being on the show." }, { "end": 39, "start": 38, "text": " Thanks so much for having me." }, { "end": 42, "start": 39, "text": " Yeah, so how do you describe your area of interest?" }, { "end": 51, "start": 42, "text": " Sure. I focus on the intersection of causal modeling and probabilistic modeling and machine learning." }, { "end": 61, "start": 51, "text": " I'd say that my big goal of mine is to introduce more causal reasoning methods" }, { "end": 69, "start": 61, "text": " into machine learning community, particularly when it comes to generative models within machine learning." }, { "end": 76, "start": 69, "text": " So can you tell us a bit about your PhD thesis? I think it had to do with causal models in system biology, is that right?" }, { "end": 78, "start": 76, "text": " Yeah, that's right." }, { "end": 83, "start": 78, "text": " So it's kind of a story there for it is so prior to my PhD I was living in China." }, { "end": 85, "start": 83, "text": " I worked at some internet companies." }, { "end": 94, "start": 85, "text": " And I got interested in working with data and engineering data-driven apps." }, { "end": 96, "start": 94, "text": " So that's what drew me to statistics." }, { "end": 105, "start": 96, "text": " I also had read a few books at a time on synthetic biology and this idea that you could use symbolic logic" }, { "end": 112, "start": 105, "text": " like you could code a program into biological circuits and it would serve some function." }, { "end": 119, "start": 112, "text": " And so that was the research that I was interested in on working on when I started my PhD." }, { "end": 131, "start": 119, "text": " And so I ended up working on problems in systems biology because statistical inference is if synthetic biology is about engineering cells," }, { "end": 135, "start": 131, "text": " systems biology is about reverse engineering them." }, { "end": 139, "start": 135, "text": " So the reverse engineering problem inference techniques become much more important." }, { "end": 152, "start": 139, "text": " And so part of a big area in systems biology is this attempt to take data and reconstruct molecular pathways" }, { "end": 158, "start": 152, "text": " or even go from a built-up pathway model and even turn that model into something that can actually simulate data." }, { "end": 164, "start": 158, "text": " And so that's what drove me to causal inference." }, { "end": 174, "start": 164, "text": " So one thing that you can do there, one approach is to take data, say for example, of protein signaling within a cell" }, { "end": 187, "start": 174, "text": " and apply algorithms that will reconstruct cause and affect relationships between various components of that system." 
}, { "end": 196, "start": 187, "text": " So some people see actually in the systems biology community they call that causal inference more precisely" }, { "end": 200, "start": 196, "text": " that's structured learning or causal discovery." }, { "end": 213, "start": 200, "text": " And so my PhD research was trying to take causal discovery algorithms and you and and and and sconce them in a experimental and a sequential experimental design framework" }, { "end": 221, "start": 213, "text": " so that experimentalists could actually use these techniques to drive scientific discovery." }, { "end": 227, "start": 221, "text": " So the way that happened was there was this woman named Karen Sachs." }, { "end": 236, "start": 227, "text": " She pioneered this method of using causal Bayesian network learning algorithms to reconstruct signaling pathways." }, { "end": 244, "start": 236, "text": " And it was funny because I had read this paper and I was you know I was really inspired by it and then not shortly after I ran into her at a conference." }, { "end": 254, "start": 244, "text": " I didn't know she was it was just we were watching a talk we both thought it was boring so we snuck out to get to the little coffee food tables before everybody else did beat the rush." }, { "end": 260, "start": 254, "text": " And so we just started chatting each other up and then she introduced herself and I'm like oh my god that's who that is." }, { "end": 278, "start": 260, "text": " And so we actually became collaborators and we're still really good friends and but and so I so I ended up taking her methods and wrapping them with an active learning framework that would allow you to say okay well." }, { "end": 300, "start": 278, "text": " I'm not so much interested in kind of getting this big old Harry causal graph I really want to kind of drive some kind of reward function say for example some new discovery some hypothesis that you have a low probability of being true and then turns out to be true for example and then you get a paper the acolytes and the funding and all that great stuff that comes after so." }, { "end": 315, "start": 300, "text": " I took and so I built that active learning framework around that and and so you can you can it was a Bayesian active learning approach that would allow you to take causal discovery and operationalize it essentially and so that was my first introduction to." }, { "end": 321, "start": 315, "text": " Cause of modeling and to reinforce learning and so far as active learning is a special case of reinforce learning." }, { "end": 336, "start": 321, "text": " So you had an agent that was maximizing your funding. I was building an agent for people who need to maximize funding personally I will never I guarantee you I'll never work on another grand application while I live." }, { "end": 340, "start": 336, "text": " Okay I'm sure there's a lot behind that statement." }, { "end": 360, "start": 340, "text": " So it seems like causality became a really hot topic over the past few years in the M.O. community and and and I and I can't help but wonder why something so so fundamental took took a really long time for everyone to get around to and and think about clearly." }, { "end": 375, "start": 360, "text": " Any comments on that yeah I think there's a lot of reasons for that so I think one is the problem of transfer learning we basically I think we're all kind of starting to realize that." 
}, { "end": 397, "start": 375, "text": " Trading for a loss function that optimizes predictive performance is insufficient to give you good transfer ability and or or or or stable or stable performance of your model across environments and and there are lots of ways that we're trying to address that and we've used a lot of" }, { "end": 409, "start": 397, "text": " eristics say for example ways of trying to avoid overfitting but I think we're realizing that even that doesn't really get us to where you want to go." }, { "end": 422, "start": 409, "text": " I also think there's a there's been a lot there's been a cultural gap between the people who've worked on these causal inference problems is causal inference research community and the machine learning community I think that" }, { "end": 448, "start": 422, "text": " you're working on different problems so causal inference people tend to be working on problems and social sciences and public health the M.O. community is what you know when the M.O. community rights papers are often looking at specific data sets or milestones and trying to define performance against doing doing well on on those benchmarks there's also." }, { "end": 454, "start": 448, "text": " I think there's different stakes right so and a different." }, { "end": 472, "start": 454, "text": " Epistemological values so what do I mean that by that I mean I think that you know causal inference researchers focus very much on objective truth right so and because the stakes are high because we're working on problems in policy and health right so if I say that." }, { "end": 500, "start": 472, "text": " If I'm talking about does smoking cause cancer that's you know that's a that's a problem that's going to affect people's lives is also going to affect the bottom line of some major corporations is also going to affect public health policy so you want to you want to make sure that you're right that the cost of being wrong there is expensive and and not just in terms of money but in terms of of life often." }, { "end": 505, "start": 500, "text": " And so because of that focus on on objective truth there's." }, { "end": 511, "start": 505, "text": " There's a lot of emphasis on mathematical rigor." }, { "end": 529, "start": 511, "text": " And contrast I think the machine learning community is focused on predictive performance and benchmarks because they're trying to push the state of the art right and if it if something works extremely well and we don't have a mathematical theory for why it works that well that's okay so long as we know as long as we have a track." }, { "end": 543, "start": 529, "text": " A trajectory for for moving forward and also we're off we're often not looking you know if you look at so that in that kind of characterizes deep learning but there's other branches of of." }, { "end": 571, "start": 543, "text": " Machine learning say for example you know latent variable models models and machine learning that are inspired by computational psychology or the idea is you know we want we have this model for how the system works say for example topic model like a document is is driven by the topics that are present in the document we know that's in complete we know that's not." }, { "end": 586, "start": 571, "text": " We know that we know that's not entirely true we know that there's other there's many other things that go into what what you know what determines a document but it that model might be good enough for the problem that we're trying to solve." 
}, { "end": 595, "start": 586, "text": " Alternatively if you're if you're taking account if you're taking the approach to the computational cognitive science approach of saying like all right well." }, { "end": 611, "start": 595, "text": " I want to build AI here's how humans here's my theory about how humans reason about the problem let me build a model that can duplicate that and so." }, { "end": 640, "start": 611, "text": " You know because if you know what's that that's a good way of maybe building AI which is to say like well let's look at real AI and see if we can't if we can reverse engineer it and so that's again that's not about being you know that that real human who's making those judgments are those decisions or those predictions might be wrong but you're not interested in whether or not the prediction is right or wrong or interested in and how faithful you can replicate the way the human reasons because the human intelligence you know it's pretty good so." }, { "end": 669, "start": 640, "text": " Yeah those are two different views on all of those are two different sets of values and two different sets of goals and so I think that creates a kind of cultural divide that makes it difficult for the you know if you're interested in if you're working if you're machine learning research here and you have some problem that you want to solve and you want to dive into the in realize that you need some kind of causal inference solution and so you want to go dive into the causal inference literature it's a little bit opaque because." }, { "end": 675, "start": 669, "text": " And they're not only talking about different things but they have a whole different set of values and a different set of goals." }, { "end": 685, "start": 675, "text": " I think also it should be mentioned that there's different workflows right so deep learning has led to you." }, { "end": 703, "start": 685, "text": " And you know very strong improvements in the state of the art and and that's led to this this workflow where you kind of focus on mapping raw data to the output right you just kind of you you want to have end to end machine learning." }, { "end": 730, "start": 703, "text": " And you don't want to kind of reconstruct the new model each time you just want to kind of have a you want to have the right architecture with the right inductive bias for the right problem applied to the raw inputs and and and predicts the outputs whatever it is the label the reward whatever and everything you just let you let gradient the sentence take care of everything in between right and that doesn't work." }, { "end": 740, "start": 730, "text": " In causality because you have to make explicit structural causal you have to make explicit structural assumptions about how the system works." }, { "end": 757, "start": 740, "text": " And you know so you and there are people in machine learning who are trying to say avoid that by say for example using deep learning approaches to learn causal structure which from data which will allow you to in theory kind of skip the structural assumptions because you're learning the structure but there are some." }, { "end": 786, "start": 757, "text": " We can it's mathematically proven that there are some assumptions that you can't learn from data right not without some kind of inductive bias and that inductive bias tends to need to be provided in some explicit way to the modeling algorithm you're not going to get it kind of implicitly through you know you know max pooling or or you know attention." 
}, { "end": 792, "start": 786, "text": " Right it's not going to come off it's not going to just come out of some off the shelf the architecture." }, { "end": 807, "start": 792, "text": " Not going to what I think that's true I mean maybe tomorrow they come up with some new architecture that that that's all everything but I but you can you can point to the math and say like now listen this is you can't hear the thing that you can't learn from data if you want to" }, { "end": 825, "start": 807, "text": " advertise this you're very going to use that the expo some kind of inductive bias or some kind of you know Bayesian prior on the thing otherwise it's not going to work and and that and a lot of our end to end machine learning workflows don't really admit that kind of that didn't really have that kind of interface." }, { "end": 853, "start": 825, "text": " That's my that's my three I think there's other reasons to but yeah I don't know does that make sense yeah I mean I think if you spend a lot of time thinking about Atari and mojoko environments it seems like notions of causality are kind of optional in a sense like you can get so far without doing that stuff and you" }, { "end": 862, "start": 853, "text": " might just forget that it's there if you're not dealing with this messy real world real world data with with with the domain shifts and things like that is that fair to say." }, { "end": 872, "start": 862, "text": " Like in Atari do you care that what caused the death of your agent does your does your agent care what causes death beyond." }, { "end": 889, "start": 872, "text": " The immediate actions it needs to take to know one of the things that I've been I've been kind of harping other researchers in you know who are talking about causal inference in the domain of AI and artificial and and machine learning I say." }, { "end": 914, "start": 889, "text": " You know they often come up with papers of these little nice little toy models like you know little for no dag or a you know structural causal model with with you know a linear assumption and I'm saying here's like you show this to machine learning person you're going to scoff like there this is not the kind you know these simple little little pocket models they might be useful for kind of proving proving some idea." }, { "end": 940, "start": 914, "text": " Yeah causal inferences causal inference people do this lab because we're like oh look here's this little tiny network and let me show you how this big you know how you can kind of if you try to estimate things a certain way that you can kind of go completely in the wrong direction and that's useful because it's a very simple way of showing how things can go wrong but they're off these these little tiny models are often I don't know they feel a bit contrived and so if you." }, { "end": 969, "start": 940, "text": " I would say like if you have an idea for how how this thing could could improve sequential decision making on our uncertainty implemented in open AI right like don't go for the simplest model possible go for the simplest model possible in in the in opening I Jim right so you use a frozen like example use you know use what's the game whether shooting aliens when they're coming down space invaders space invaders." 
}, { "end": 993, "start": 969, "text": " You know use use one of those simple I mean those are still those are still simple right there are tary games I mean it's not like we're it's not like your you know you're playing cyberpunk right so it's it's um and those games have a there is caught so there's a reinforce learning course by Charles is Bell and on I think it's on." }, { "end": 1022, "start": 993, "text": " I don't know if you actually something but anyway you know one of the things he says in that work in that course that's interesting is that the so Charles is about when he explains the transition function in the Bellman equations he says you know this this thing in caps in capsulates the physics I actually I don't remember for was Charles is my Michael litman that says this but anyway he says that the transition function in cap encapsulates the physics of the world right and so there should be some connection." }, { "end": 1042, "start": 1022, "text": " Between causality and physics hopefully right so I would say that the physics of the Atari game is a is the causal specification of that system right and so if you could and so you could in theory." }, { "end": 1059, "start": 1042, "text": " Yeah if you if you had an agent you had a learner who was and who knew in a model based reinforcement learning a pro model based reinforcement learning approach had some kind of knowledge of the physics of the underlying system." }, { "end": 1073, "start": 1059, "text": " The game that it's playing then could certainly reason causally about about you know about which house actions are going to affect the environment." }, { "end": 1087, "start": 1073, "text": " But to your point I think and you might and so you might think you might think yours of well you know like that seems like a lot like should every agent have an understanding of the underlying physics of the world that is that is operating in." }, { "end": 1112, "start": 1087, "text": " And the answer could be depends right like so if we look at cognitive science there's there's a lot to be said about this idea that humans have an intuitive physics and you know they have a or folk physics model in their head when it comes to understanding you know physical objects and their interactions so cars hitting deer billiard balls bouncing off the sides of the table." }, { "end": 1118, "start": 1112, "text": " And that they also say that humans have you know a quote and folk kind of intuitive." }, { "end": 1141, "start": 1118, "text": " Quote and quote physics for other domains like a intuitive psychology for example like you could you and I could be sitting at a cafe and watch some some couple across the cafe have a have a conversation and we would make pretty good inferences about that conversation based on the theory of you know intuitive and intuitive theory of psychology that we're able to that we're applying there that we're not." }, { "end": 1144, "start": 1141, "text": " We don't really learn from data we're just kind of born with it and so." }, { "end": 1156, "start": 1144, "text": " And there's there's a lot to be said for a kind of a model based approach that has a or the transition function is kind of driven by some kind of domain physics." }, { "end": 1164, "start": 1156, "text": " But to your point the problem of trying to train something in a simulated environment and then." }, { "end": 1172, "start": 1164, "text": " Taking it out of the simulated environment and having it work in reality say for example you know robots." 
}, { "end": 1190, "start": 1172, "text": " And often they're often the reason why this has this is hard to do can be characterized in causal terms which is to say that you know when you create a when you create a simulation environment you try to make." }, { "end": 1213, "start": 1190, "text": " You try to reduce the all the variables in the system to to only those that are that you think the the agent needs to be worrying about and so if you're wrong about that if there are some things the agent needs to worry about that you've excluded from the simulation system that that would that could hurt that agent or affect that agent in the real world." }, { "end": 1222, "start": 1213, "text": " Then it's going to you're going to have an issue and and and causal inference terms we call this the problem of confounding by you know or latent confounders so." }, { "end": 1225, "start": 1222, "text": " And it's two aspects of how causality kind of comes into play there." }, { "end": 1235, "start": 1225, "text": " So how do you define the idea of cause like why are we doing the show today is it because we clicked on the interview link or because you're a successful researcher." }, { "end": 1240, "start": 1235, "text": " Or because of the big bang how do we think about what really caused something." }, { "end": 1258, "start": 1240, "text": " I mean it's a good question so for one we have to kind of be recognized that philosophers have been trying to define and parse causality for like millennia now right like even Buddha had a definition of causality right and so." }, { "end": 1284, "start": 1258, "text": " So you know I think oftentimes people what just kind of want to focus on on kind of bread and butter machine learning problems when I focus on the math and it gets to get a little bit uncomfortable when you when you delve into philosophy but unfortunately here you have to actually I think fortunately you have to because it's actually really interesting to be talking about kind of what's you know." }, { "end": 1297, "start": 1284, "text": " What does it mean for something to be a cause what does it mean for something to be effect you know these are this is right up there with those kinds of other philosophical problems of data science machine learning like you know the problem of induction for example." }, { "end": 1307, "start": 1297, "text": " And it's relevant today in terms of you know how people approach causal problems safer example there's there's a few." }, { "end": 1314, "start": 1307, "text": " There's a lot of economies there like so there is something called." }, { "end": 1328, "start": 1314, "text": " There's a there's a manipul a manipul excuse me manipulability theory of causality and there's a counterfactual theory of causality so the manipulability means that say." }, { "end": 1340, "start": 1328, "text": " A and B are correlated but if I do something to a B is affected but if I do something to be and a is not affected then a causes B and then." }, { "end": 1354, "start": 1340, "text": " Somebody might object to that and say well the problem with that argument is that you're defining causes causality by or your definition of causality requires a presence of a human agent right but presumably things caused things to happen on Mars." }, { "end": 1363, "start": 1354, "text": " And has nothing to do with us right and so another competing theory there might be the kind of factual theory of causality where you say." 
}, { "end": 1367, "start": 1363, "text": " Yeah a and B moved together and." }, { "end": 1381, "start": 1367, "text": " A did this and so I observed the a did this and then B had this B did that but had a not done this B would not have done that right so that's a counterfactual definition." }, { "end": 1393, "start": 1381, "text": " There's other there's other philosophical aspects to it select there's a teleological understanding of causality so for example if I ask you." }, { "end": 1407, "start": 1393, "text": " Why is this knife sharp you might say because it was sharpened on a what on a what stone or you might say because it is it is it is supposed to cut things cut things in half right and so." }, { "end": 1416, "start": 1407, "text": " Those you know so there we're talking about you know is where is causality there in terms of mechanism or isn't there in terms of function." }, { "end": 1426, "start": 1416, "text": " We have we have dependence notions of causality which says that you know this is what you might think of with the directed graph where you have." }, { "end": 1437, "start": 1426, "text": " You're basically trying to boil down causality into ideas of of things being dependent on one another or things being conditionally independent of one another." }, { "end": 1454, "start": 1437, "text": " There's this idea of type causality and actual causality so again type causality is again what you might is what we typically see in when we draw a graph like we say smoking causes cancer while type cause while actual causality is focused on." }, { "end": 1467, "start": 1454, "text": " The events on outcomes right so like Eli has Eli smokes for 20 years and as a result he has this kind of cancer right and so." }, { "end": 1473, "start": 1467, "text": " I have a sort of there's a lot to kind of impact there when we're trying to define a causality and." }, { "end": 1477, "start": 1473, "text": " The examples that you gave so you're saying like is it because." }, { "end": 1488, "start": 1477, "text": " Somebody clicked the link or is because of my background or is it because of you know my mother gave birth to me so this that's actually an example of proximal cause so this is a legal term." }, { "end": 1500, "start": 1488, "text": " But it being and we have a formal definition of it in cause win for instance but you know the colloquial or rather legal term the more popular term is proximal cause but it's it's you know." }, { "end": 1510, "start": 1500, "text": " If you know with the holocaust have happened had hit of Hitler's mother not met Hitler's father." }, { "end": 1529, "start": 1510, "text": " It wouldn't have happened and yet we are probably wouldn't have happened and yet we could you know we wouldn't blame Hitler's mother and father for this for that outcome so or at least their meat for that outcome or and so that's the idea proximal cause which is really important when you're trying to ask why something happened or assign blame or." }, { "end": 1533, "start": 1529, "text": " You know in reinforce learning terms figure out regret." }, { "end": 1549, "start": 1533, "text": " So you teach about both Rubens potential outcomes framework and also Judea pearls structural cause of models can you talk about the kind of these two frameworks should they coexist do we need them both." 
}, { "end": 1571, "start": 1549, "text": " Yeah so this is people talk about this I think what I'll say is that there's there any equivalence between both approaches which means that you know the accident the actions are such that if you can solve if you can use the potential outcomes framework to solve a problem then you can also solve it and the perlian framework and vice versa and so." }, { "end": 1583, "start": 1571, "text": " You know it's practically practically no different terms of like how easy it is to solve a problem and then of course depends on the problem that you're trying to solve you know I like to think of it as." }, { "end": 1599, "start": 1583, "text": " You know the difference between functional programming and object oriented programming for example there's nothing that you can do in one paradigm that you can't do in the other but you might prefer to solve certain problems in one program and one paradigm relative to the other for various personal reasons like maybe it's." }, { "end": 1619, "start": 1599, "text": " One way is funner for you and as well practical reasons right like maybe you know if you're working with I don't know if you're working with me old piece better if you're working with a you know a database schema and so it's easy to think about things in terms of entities and attributes." }, { "end": 1645, "start": 1619, "text": " Yeah so like I think one needs to think about it from in those terms like what is you know what is best for the class of problems that you're trying to solve there's a lot of practical differences so one for me one of the key distinguishing features of the perlian approach is that it makes a crystal clear distinction between cause ideas and statistical ideas." }, { "end": 1651, "start": 1645, "text": " And then contrast the potential outcomes literature focuses closely on the statistics and so so does that mean so number one that making that clear distinction and so if I for me when I want to teach you how to reason cause or your system it's nice if I can separate the cause ideas from the statistical ideas because the statistical ideas I can tell you to go look up in a stats book right while the cause of ideas you know we can focus on the" }, { "end": 1691, "start": 1675, "text": " and it makes it easier for you to learn especially if you don't have a background in you know statistics or economics or social science and so it doesn't feel like you're having to pick up a new you know masters degree just to learn how to solve you know how to apply causal reasoning to your problem." }, { "end": 1715, "start": 1691, "text": " But also it provides a nice unifying framework for thinking about various problems that you might face within causal inference like say for example I often teach economist and economist when you learn economics you learn a bunch of causal inference techniques that are very much methods right they're not they don't have any like you don't really know what the theory is behind them." }, { "end": 1743, "start": 1715, "text": " Like you know here's the problem that you you have this kind of problem uses this type of methodology and you know you have that problem use that methodology you don't really have any overarching link in your head that explains all of these and why they work and and the Perlian approach does that so the example might be propensity square matching which you know is a technique for adjusting for confounding and it's really easy for me to come to you from a program." 
}, { "end": 1755, "start": 1743, "text": " To come to you from a Perlian standpoint and explain to you using graphs and ideas like de separation why how exactly propensity square matching works." }, { "end": 1769, "start": 1755, "text": " And so like I that's often a win for me when I work with people with economics with economics or econometrics background I can finally explain to them or a tie together all these causal inference methods are using what they and it'll actually know why they work." }, { "end": 1789, "start": 1769, "text": " But on the other hand the potential outcomes approach is extremely practical right so they like I said they focus on the stats which is important when you actually need to solve a problem which you know so for say for example like you work at a tech company and you need to construct some." }, { "end": 1802, "start": 1789, "text": " Alright so usually a reinforcement learning example let's say that you are you are you are a marketer and you need to construct some kind of email sequential email campaign that's going to." }, { "end": 1812, "start": 1802, "text": " Send you emails in the content of the email the time of the email is sent depends on you know whether or not the previous emails were opened and." }, { "end": 1826, "start": 1812, "text": " Maybe the reward is not just that they click on email but maybe they go and they you know engage on a website right and engagement is you know like you're going to say all right well in my company we quantify engagement by." }, { "end": 1835, "start": 1826, "text": " Quantifying how much time they spend on the site over two weeks and then just like taking the area under that curve or something like that so you have you know a." }, { "end": 1856, "start": 1835, "text": " You have this response that's probably tricky to model parametrically you have you have some kind of confounding that's happened that confounding is depends on time right and so you need to adjust for that confounding and so you so you might go and say I'm going to go and construct some kind of instrumental variable so I'm throwing words out there but this is just a technique that people use." }, { "end": 1873, "start": 1856, "text": " I want to find some instrumental variable and you know so you know you in order to find that instrument you have these certain instruments available to you and you have to kind of figure out exactly how to how to make it all work like the potential outcomes literature will provide papers that's kind of say here's how you solve." }, { "end": 1885, "start": 1873, "text": " When you have this kind of data and you have this kind of problem is here's how you solve the problem and and by the way like it's going to be really nice because it's going to reduce variance and it's going to and you can get there it's going to converge very quickly." }, { "end": 1909, "start": 1885, "text": " You know perlian stuff never talks about any of that they just they just they don't they they assume that you can solve your own kind of statistical and parametric modeling issues and focus on the high level concepts and that's fine and so it's it really kind of depends on how you want to approach it and when an example I was thinking about kind of our conversation today and I remember reading." 
}, { "end": 1936, "start": 1909, "text": " Rich Sutton's the bitter lesson that essay you don't talk about that essay he writes about now let me ask you because I was I whenever I try to talk about this with people I think sometimes that the under the interpretation of the paper depends on kind of how you so it's almost the the essay is a little bit kind of a mirror that reflects kind of your own prior beliefs going into the essay but." }, { "end": 1961, "start": 1936, "text": " The way I might take away from the paper was that I guess it's really more of a blog post but that's if there is a brute force computational method that can solve a problem and you're thinking about an alternative method for solving the problems that incorporates domain knowledge and you're thinking about it because that domain knowledge might make it more efficient." }, { "end": 1985, "start": 1961, "text": " It might make me solve the problem faster more easily with with less kind of computational work don't do it because eventually the compute is just going to get cheaper and and the cost will become moot while you have wasted your time trying to come up with this domain's just domain knowledge based approach is that is that kind of is that a good takeaway or good read." }, { "end": 2014, "start": 1985, "text": " I read it in a summer way for sure but I agree with you I think it's like it's like those ink block tests that just tell you what you think about you can read so many things into it I mean just just what you've been saying so far today when I compare that to the blog post like you were saying that there's certain types of information that you can't learn from data so adding more data doesn't actually get you there and so maybe the better lesson." }, { "end": 2023, "start": 2014, "text": " Maybe the better lesson doesn't fully apply in cases where you need some extra some information outside of the data like like causal dag." }, { "end": 2042, "start": 2023, "text": " Great I say like you know so yeah my raw shock test for reading that essay as a person with a causal modeling background is that the what's nice about the Perlian approaches again is it when you when you took that approach learning it you get this very clear line in your head between causal ideas and statistical ideas." }, { "end": 2071, "start": 2042, "text": " And one of the key causal ideas is that you know this idea of identifiability which is to say that hey you you know you are interested in asking questions like why is this happening or what would happen if I did that and though we did what we did is we take these we we formalized those ideas with very clear definitions of things like intervention and counterfactual and so and what we we we we've done we we've arrived at a problem which is to say like you can't answer those questions with the data you have to do that." }, { "end": 2100, "start": 2071, "text": " It's fundamentally not identifiable that means that you can't answer that question even with infinite data right and and so like this nice separation of these causal ideas and statistics means that you can evaluate a problem for identifiability without the statistical concerns because you know because we're certain is right that if it's once you've solved the identifiability issues once you once you know it's a problem that you can't answer the question." 
}, { "end": 2114, "start": 2100, "text": " You know it's a problem that you can solve that you that you it's a causal created you can answer that it's just and it's just a matter of coming up with the right statistical approach to answering it and if you can brute force that if you can brute force that approach then great do it." }, { "end": 2143, "start": 2114, "text": " But if you if it if it fundamentally cannot be answered then maybe then you need to start thinking about how you can you can bring in domain knowledge to close that that gap right so like if there's some you know safer example I can only solve a problem to some kind of class of values and equivalence class of values then you know I then then maybe then I can bring in some kind of domain knowledge in a form of a prior or in a form of some kind of inductive bias that would actually" }, { "end": 2170, "start": 2143, "text": " turn it from an unsolvable problem into a to a solvable problem and then I can throw all my computer at it right and so I think so I think the kind of probably an approach makes it really easy to think in those terms but yeah I you know I use potential outcomes approaches all the time and so they come together for me so I don't really think there's much there's not much much substance that debate although other people definitely think there is." }, { "end": 2198, "start": 2170, "text": " Yeah the better lesson keep coming back to that and it seems like maybe he was talking about something specific like some part of the problem that you can solve with brute force but even then if you look at you know leading agents today yeah they do their performance does depend on on huge amounts of data but also the components of the agent have been carefully hand engineered to make best use of that data" }, { "end": 2216, "start": 2198, "text": " so it seems like there's always some kind of there's always some kind of line between domain knowledge hand design things made by engineers I mean even engineers came up with or researchers came up with the CNN design to allow it to allow these agents to benefit from from brute force." }, { "end": 2245, "start": 2216, "text": " Yeah I always think of it yeah I mean I think the idea that of like radical empiricism and machine learning is a bit of a kind of red herring I think of it in terms of inductive bias right so you know Tom Mitchell wrote that that great paper I guess was a mid 80s about kind of inductive bias and kind of laid it out in these clear Cartesian terms where you have where you know he shows you that like if I you know there will be multiple solutions to the world." 
}, { "end": 2274, "start": 2245, "text": " So if I want to generalize from this this training data there's just multiple solutions to this problem and and I need to have my algorithm needs some needs to be able to just pick one over the other ones and so like that's the inductive bias right and so the way and there's a lot of ways we can approach inductive bias we could so we could we could for example for being Bayesian we can encode it into a prior right" }, { "end": 2303, "start": 2274, "text": " the way deep learning architectures tend to do it is make it implicit in the architecture itself so you know convolutions and max pooling right attention these are you know these are inductive biases and oftentimes what happens is we kind of we get to the these we get to these we discover kind of what the inductive bias is kind of retrospectively like somebody kind of figures out that hey if you use this architecture" }, { "end": 2316, "start": 2303, "text": " then it works why does it work let me let's guess let's go look as analyze oh it's it's favoring it seems to you know natural language it does really well with you know hierarchy and and it tends to" }, { "end": 2327, "start": 2316, "text": " I don't know preserve you know assume that words that core core tend to be close together something like that but like you know it's still like you're inductive bias is just implicit in the architecture" }, { "end": 2356, "start": 2327, "text": " in so you know in your example like if people are kind of engineering are kind of hand crafting agents there there essentially trying to say like okay one day agent and in counter encounters this situation needs to kind of back to certain way and so all that is arguably kind of encoding some kind of domain knowledge into the agent I think culturally what I sometimes this argument is presented to me like machine learning engineers don't want to use any kind of domain knowledge at all and I'm at I'm" }, { "end": 2370, "start": 2356, "text": " and I said I don't think that's true I say that I think that they just want to do it once what does that mean means that like they just want to they don't want to every time there's a problem they have the answer they got to construct a new model and put in all these structural" }, { "end": 2382, "start": 2370, "text": " structural assumptions and do it and then like you know kind of make bespoke modeling for every problem models for every problem I think that's a good idea but most people don't want to do that they just want to kind of have this" }, { "end": 2396, "start": 2382, "text": " approach that they can kind of import and then train and then spin up and they just and and and they're just comfortable that it's going to adjust to the situation if you have to kind of if you rights if you" }, { "end": 2407, "start": 2396, "text": " and so they'll they'll they'll they're happy for it to have it inducted by us like you know in variance to translation or or like you have with" }, { "end": 2419, "start": 2407, "text": " a convolutional neural networks so long as they don't have to like re re implement it each time and so but you know in very instant to translation if you're assuming that your your domain is in" }, { "end": 2427, "start": 2419, "text": " and variant to translation then it's you make your your you're putting in some domain knowledge right like if you are modeling Picasso's then it wouldn't work." 
}, { "end": 2443, "start": 2427, "text": " So you could see started in economics and early on which I gather is mostly focused on on the potential outcomes. Yeah I mean I didn't learn honestly like I didn't learn any causal inference in when I was studying economics and I say I didn't learn any" }, { "end": 2455, "start": 2443, "text": " causal I learned a little bit of like like I said difference and differences propensity score matching those kinds of techniques like they're basically like you know Batman utility belt" }, { "end": 2463, "start": 2455, "text": " you know pull out this technique for when you when this in this situation but you don't you never really understand what they were doing but and so I guess that was a little bit of my" }, { "end": 2473, "start": 2463, "text": " an introduction potential outcomes but again they're even thought of it as causal inference it was just like when this thing happens do this when that happens do that kind of thinking even during the" }, { "end": 2483, "start": 2473, "text": " you know I got my PhDs and stats I didn't they didn't teach us anything about causal inference beyond the gold standard the clinical trial right and so" }, { "end": 2495, "start": 2483, "text": " over randomized clinical trials and so I guess just ordinary randomized experiments I really do it's all self taught really I mean I was during my PhD was my dissertation but I I was" }, { "end": 2502, "start": 2495, "text": " taking like you know definitely colors course on Coursera on proper physical models I was buying books I was you know I was like" }, { "end": 2514, "start": 2502, "text": " right in Coursera yeah yeah writing writing code submitting the code basis so like I was I had to kind of pick it up on my own cool okay and you definitely did and now you teach it." 
}, { "end": 2523, "start": 2514, "text": " So Gamalon can we move on to Gamalon can you tell us a bit about Gamalon and what what you're working on there sure so yeah so I work at an" }, { "end": 2533, "start": 2523, "text": " I still have called Gamalon Gamalon is focused on building natural language understanding SAS for sales and marketing so our flagship con our" }, { "end": 2542, "start": 2533, "text": " flagship product is a conversational AI whose main distinguishing feature is that it tries to understand who it's talking to I mean this is me" }, { "end": 2553, "start": 2542, "text": " explaining it as you know an engineer who builds it I'm not sure kind of what you know what's what the what the you know go to market people say publicly so if this sounds a little bit" }, { "end": 2561, "start": 2553, "text": " different from what you find online so be it but I mean to me the main distinguishing feature is that it's trying to understand who it's" }, { "end": 2573, "start": 2561, "text": " talking to what they want to know what they're trying to understand and then provide that information and escalating that to some kind of conversion like you know here's" }, { "end": 2581, "start": 2573, "text": " a informational PDF or or you know do you can download or you know here you can leave an email or you can schedule a demo if it's" }, { "end": 2589, "start": 2581, "text": " appropriate right we don't want you know we actually have a very strong policy against not being aggressive but focusing on understanding" }, { "end": 2598, "start": 2589, "text": " the understanding the visitor and understanding what is you're trying to do and understand and helping them do so and so I can't talk too" }, { "end": 2605, "start": 2598, "text": " much about kind of the tech except to say you know that I was offered the role because of my experience in" }, { "end": 2612, "start": 2605, "text": " probabilistic modeling probabilistic programming and working in learning graphical structures and you know are kind of" }, { "end": 2622, "start": 2612, "text": " working with unstructured data doing unsupervised learning and discrete discrete structural spaces so you also take that for what is worth" }, { "end": 2635, "start": 2622, "text": " and I and you know in my role there I focus very much on this challenge of building a SaaS product around and core AI tech which is you know it's" }, { "end": 2644, "start": 2635, "text": " fantastically interesting and challenging problem and I think a lot of you see a lot of startups that are basically kind of investor funded research" }, { "end": 2651, "start": 2644, "text": " and so like you know it's really I feel really kind of fortunate to be at a place where you have this very strong" }, { "end": 2660, "start": 2651, "text": " you know cross-pollination between building a software product and and building AI and building you know and kind of research" }, { "end": 2669, "start": 2660, "text": " kind of research problems that happen in when you're trying to solve an AI problem and so that's that's you know that's the intersection where I want to live at" }, { "end": 2676, "start": 2669, "text": " to have all the all the challenges of of doing a SaaS business and then plus you're pushing the envelope on the AI as well" }, { "end": 2681, "start": 2676, "text": " it's got to be a challenge and we often talk about it's like you know we have to solve all these little problems and each one of" }, { "end": 2690, "start": 2681, "text": " these problems could be a paper and we 
have to fit it fit it within like you know kind of a scrum process of two-week sprints and it's" }, { "end": 2700, "start": 2690, "text": " it and work with full-sac developers and product people and it's really fun well it's a fun place to work and we're hiring so you guys" }, { "end": 2707, "start": 2700, "text": " that you're interested people should apply awesome okay and then at the same time you're doing you're doing all the all deep your" }, { "end": 2714, "start": 2707, "text": " school of AI can you tell us about that what is the what is the idea with all deep yeah so the goal of all deep is to help" }, { "end": 2721, "start": 2714, "text": " clients right just people who are who are working in quantitative quantitative fields you know improve their practice their" }, { "end": 2728, "start": 2721, "text": " reasoning their decision making you know the problem that I'm trying to solve is you know working this multi-disciplinary field and you" }, { "end": 2734, "start": 2728, "text": " know that's oh man I need to know more about similar processing to solve this problems like oh I need to say I wish I" }, { "end": 2743, "start": 2734, "text": " knew more about you know category theory or I wish I knew more about you know causal causal machine learning and but it's" }, { "end": 2755, "start": 2743, "text": " really hard to do that in a labor market where the fangs are incentivized us to over specialized right and so we work in this multi-disciplinary field and we" }, { "end": 2763, "start": 2755, "text": " feel like oh I need to learn more about you know signal processing ideas to solve this solve my problems or like oh like they seem" }, { "end": 2774, "start": 2763, "text": " to have a nice framework for or let nice mental model for understanding this this domain but I have this background or I need to learn more" }, { "end": 2781, "start": 2774, "text": " about causal machine learning to solve this this problem that I'm working on but the labor market tends to you know it's driven by these" }, { "end": 2789, "start": 2781, "text": " fangs who encourage us to over specialized right and just kind of get really good at using a specific set of tools and have a very narrow way" }, { "end": 2795, "start": 2789, "text": " of looking at things and so and go a vault deep is to make it easy for you to acquire mental models that people and other" }, { "end": 2801, "start": 2795, "text": " disciplines have already figured out and so that's what we do and we do that with these workshops and courses that we run" }, { "end": 2808, "start": 2801, "text": " we newsletter and a community site it's it's currently closed but you know I'll tell you what if you want I could send you" }, { "end": 2815, "start": 2808, "text": " a link that you can pass to your listeners who are interested we're going to open it up soon but for now it's kind of it" }, { "end": 2823, "start": 2815, "text": " and by only thing that'd be awesome so so can you tell us a little more about the the types of courses that you have right now and we" }, { "end": 2831, "start": 2823, "text": " have a causal machine learning workshop that's run in three cohorts a year although we're going to increase that soon so the" }, { "end": 2839, "start": 2831, "text": " bushes stay tuned there we have a probabilistic modeling workshop and so what that does is it takes some of the traditional" }, { "end": 2851, "start": 2839, "text": " workflows that we've seen develop in Bayesian statistical computational Bayesian statistics and then combining 
them with advances in" }, { "end": 2863, "start": 2851, "text": " approximate inference and and and then extending them to some of the more kind of cutting edge approaches that we see" }, { "end": 2875, "start": 2863, "text": " things like hardcore implicit models using simulators as without they don't have likelihood functions as models and still being able to do inference" }, { "end": 2887, "start": 2875, "text": " deep probabilistic models and so so I kind of see that as you know if you look at some of the best kind of textbooks on" }, { "end": 2895, "start": 2887, "text": " Bayesian inference they tend to focus it then they ignore kind of the cutting edge of machine learning and so this one solves that" }, { "end": 2905, "start": 2895, "text": " problem we have a let's see we got a course called refactor devolved beyond a glorified curve fitter and so this is" }, { "end": 2919, "start": 2905, "text": " this is this is a course that is is takes a high level looks at kind of coxide decision theory Bayesian modeling not just like Bayes rule but the actual kind of" }, { "end": 2926, "start": 2919, "text": " high level concepts of thinking Bayesian about a problem and you know takes some takes aspects of communication" }, { "end": 2936, "start": 2926, "text": " theory that as a whole we were just talking about inductive biases does this whole breakdown of inductive biases across various problems of machine learning" }, { "end": 2950, "start": 2936, "text": " and then some things that we have in the pipeline we have a course on building if Ethereum dApps or Ethereum applications that is that are focused on" }, { "end": 2961, "start": 2950, "text": " social decision science behavioral science game theory we have a course on applied category theory that's in the works and so another" }, { "end": 2969, "start": 2961, "text": " work another course coming on on decision science I took your causal modeling course and it really opened my eyes to so many things on" }, { "end": 2977, "start": 2969, "text": " causality and I feel like I was missing on a lot before I before I took that course I love the course I have this new vocabulary and skill set and probably" }, { "end": 2985, "start": 2977, "text": " but probably the most important thing for me is like the just the mental frameworks like you were saying I've had to think about this stuff and where you need it and kind of seeing how how blind I was about this" }, { "end": 2994, "start": 2985, "text": " stuff before so I chose your course one because my friend I am a savvy at mda and Vancouver recommended it and that's how I met you actually because because of him so" }, { "end": 3001, "start": 2994, "text": " show to to I am but also I found you you have this ability to talk from all different sides you're so you're like authoritative on the causal" }, { "end": 3009, "start": 3001, "text": " modeling side and then also with the latest with with machine learning methods and deep learning methods and like you were saying usually people are one or the" }, { "end": 3017, "start": 3009, "text": " other and just being able to easily converse about how these things relate and to do that with clarity and so that that made it a great" }, { "end": 3024, "start": 3017, "text": " decision I'm super happy I did it highly recommended and and thanks for the the excellence experience with that course." }, { "end": 3035, "start": 3024, "text": " Thank you I'm dark skin so not much of a blusher but I would I would be blessing. 
Well you deserve it no that was great that was very helpful to me and sure that anyone who takes it." }, { "end": 3045, "start": 3037, "text": " So you also call co author to well known our package beyond learn can you tell us tell us about that can we talk a bit about some tools." }, { "end": 3062, "start": 3045, "text": " Yeah I was a contributor to be unlearn I think co author is strong because I mean the main author has done the bulk of the work but yeah so what my contributions to that package were really about so that" }, { "end": 3073, "start": 3062, "text": " package is generally about is our package is generally focused on learning structure not necessarily causal structure and so what my but you" }, { "end": 3101, "start": 3073, "text": " know it can be used to learn causal structure but so what I did is my contributions there were mostly in terms of adding algorithms for learning causal structure particularly in a in a sequential decision making style where like if you know let's say like I you know here and this thing is like I'm trying to learn a causal graph and the actions that I'm able to apply are interventions to nodes on the on the graph and" }, { "end": 3125, "start": 3101, "text": " let's suppose that each action cost you know 10 bucks and so what is I want to kind of resolve all of the you know what is the path what is the trajectory of actions that leads to fully realizing the causal graph as inexpensively as possible so like that was so I built a lot of" }, { "end": 3139, "start": 3125, "text": " actions that allows you to do that kind of thing that's being learned and so so can you tell us about in other tools of your favorite tools in terms of causal inference and building these data generating process models." }, { "end": 3165, "start": 3139, "text": " Yeah so I'm a big fan of probabilistic programming particularly probabilistic programming languages that use a deep learning kind of framework like pie torch or tensor flow so I'm going to plug a pyro which is a pie torch based probabilistic programming language so you can implement deep learning models but within this kind of" }, { "end": 3176, "start": 3165, "text": " probabilistic modeling setting it has abstractions for causal reasoning and has an operator that do operator that will contain a model of an intervention." }, { "end": 3192, "start": 3176, "text": " I'm close with with some of the developers of that platform Eli Bingham is one of the core developers and he's a member of the all deep community he participates in a weekly causal probabilistic programming reading group that we run Mondays." }, { "end": 3210, "start": 3192, "text": " There's a lot of really cutting edge probabilistic model what yeah probabilistic modeling techniques in Julia that are really amenable to causal reasoning because say for example they'll they'll allow you to have the" }, { "end": 3227, "start": 3210, "text": " build a generative model where the generator is a simulator you and then give you an ability to kind of do inference on inference on that on that model which is not trivial because simulators don't don't necessarily have likelihood functions and so therefore it's hard to apply base rule." 
}, { "end": 3256, "start": 3227, "text": " So if you can do that and you if you all you need to do is add a little bit of causal semantics and you can do some really powerful causal reasoning those tend to be much more those tools often comes out of research labs and I haven't seen any become kind of widely popular yet I tend not to focus on tools until there's a nice developer community behind them and so but the the broader Julia community is certainly very healthy and growing quickly and doing a lot of interesting things and so that's that's kind of why." }, { "end": 3283, "start": 3256, "text": " So you know, pyro and in Python in the Python environment and various things happening in Julia or focusing these days so where would you say we are with our understanding of of causal inference and the tools that we have are we as again I mean like as a community or where the field is are we still in the kindergarten stage of just figuring out the basics or are we kind of close to having everything sorted out." }, { "end": 3312, "start": 3283, "text": " Is there any way to the only comment on that I say we're still pretty early days because there are few libraries so for example fell on a mits Sharma over at Microsoft research has a package called do I which allows you to do causal inference without kind of having to understand like you specifically specify directed graph that represents the causal assumptions of your system and then thing that you're trying to infer and I'm going to say that you're trying to do that." }, { "end": 3332, "start": 3312, "text": " So you're trying to infer and you provide it with the data and it takes care of the rest and so that's a really powerful tool I think if you're just kind of looking at bread and butter causal inference problems if you're you know but say for example like something that allows you to I don't know say for example" }, { "end": 3350, "start": 3332, "text": " learning like you want to kind of spin up a an agents that is you're going to try and learn a transition function using causal semantics you would have to hand code that you're going to try and do some kind of planning procedure where the" }, { "end": 3370, "start": 3350, "text": " the agent has a causal model about how its interventions are going to affect the environment you would have to implement that let's say you're going to try and come up with a model that generates explanations based on a causal model that would you would have to implement that selectors I think we're very very early stages in terms of tooling." }, { "end": 3398, "start": 3370, "text": " So let's talk about causality and RL and I know there's been a lot of papers that look at different aspects of the relationship between these two things one very basic thing that I've been kind of having has never been super clear to me is like can we say that RL online RL seems to be inherently causal in the sense that it's learning interventions by doing the interventions by intervening" }, { "end": 3426, "start": 3398, "text": " where it may be offline RL it's trying to learn learn about these interventions but not by doing them just by observing some some fixed data RL that term means a lot it can depends on who you talked to exactly what it means like I want to take the broader definition that I kind of reach certain uses which is that RL is means not just the method like you know kind of optimizing the word but also the class of problems that we want to solve and if we want to be able to do that we want to do that." 
}, { "end": 3455, "start": 3426, "text": " So we want to solve and if we want to be even broader than that we can just talk about kind of agent models which you know we're talking about agents making decision and like built writing you know creating agents that can make decisions under uncertainty particularly in sequence and yeah it's the it's inherently causal in the sense that actions change the environment and so if there is an action so in causal inference we have this very clear idea of what an intervention is and how it changes the data generation." }, { "end": 3480, "start": 3455, "text": " How it changes the underlying joint probability distribution created by that data generating process and that's exactly what an action is right and so we can not still get the address your question about online or offline and it applies in both settings right so if I'm trying to learn an intervention distribution so like it's a probability of some distribution of reward some" }, { "end": 3508, "start": 3480, "text": " reward some conditional probability distribution on a reward function given I do some kind of action if I'm able to actually do actions then I'm using interventions to learn intervention distribution and that's and that's and that's great if I'm observing other agents and in an offline setting then I and then I have to kind of reason about OK well should I be you know" }, { "end": 3521, "start": 3508, "text": " should I be treating these agents actions as as it is kind of active interventions or should I be thinking of them kind of these actions like random variables and that just going to model them alongside like there's there's some decisions that I have to make" }, { "end": 3537, "start": 3521, "text": " there but you know these are all kinds of things that we can define clearly within the framework of causal inference and you know so if there is some kind of confounder confounding variable that's causing this agent to be bias in the actions that that's making it if so can I adjust for it yeah so" }, { "end": 3565, "start": 3537, "text": " I think it's relevant both in the online and offline setting you know there's Elias Baron Boyam has this has plenty of examples in the online setting for you know with banded algorithms where there's some confounding you know he talks about an example of a case where the bandit in this casino problem is changing its behavior according to the state of the subject of the person who's playing the game for example and so I can kind of" }, { "end": 3575, "start": 3565, "text": " there's an adversarial and it's kind of an adversarial bandit and that you can solve this problem using kind of causal approaches he has a technique called causal Thompson Thompson sampling" }, { "end": 3587, "start": 3575, "text": " in the offline setting there's kind of factual reasoning right so like kind of factual means like you know I married this person and now I'm happy had I not married this person what I still be happy" }, { "end": 3597, "start": 3587, "text": " that kind of that's a kind of factual statement like you can do you can you and you you and you what you would want to causal model to answer that kind of query in in" }, { "end": 3609, "start": 3597, "text": " reinforcement learning offline setting you say like okay well here's this agent in production making decisions according to this policy had it's been had it deployed a different policy what it" }, { "end": 3619, "start": 3609, "text": " was that in higher reward so that's called you know 
counterfactual policy evaluation and so yeah and that's certainly kind of an offline problem but you can" }, { "end": 3627, "start": 3619, "text": " actually have it in online setting as well actually so you could say for example the agent is I have been playing the game up to this time" }, { "end": 3642, "start": 3627, "text": " and I got this reward a time T and at time let's say like a time T minus K I made this decision had I made another decision or had another action what my reward be right now" }, { "end": 3651, "start": 3642, "text": " and honestly like that's how we as humans interact right like we when when you're living your life is not based on a million past lives that you've lived and you've learned from all of them" }, { "end": 3660, "start": 3651, "text": " it's like you're you're learning as you go and you only get really one you know epoch you know so like that's it so you're making decisions based on" }, { "end": 3671, "start": 3660, "text": " counterfactual reasoning and you're trying to minimize counterfactual regret we're kind of in the continual learning regime I suppose in real life so yeah so like you know so online" }, { "end": 3679, "start": 3671, "text": " causal kind of factual reasoning is super important in that and that kind of regime so do we need to like how do we know when we need to worry about" }, { "end": 3685, "start": 3679, "text": " the quality and RL like just and the very simple side do we need do we need to think about it when we're in Atari and we're in in" }, { "end": 3692, "start": 3685, "text": " the Joku land or can we just completely safely ignored there's something to be gained by by paying attention to" }, { "end": 3699, "start": 3692, "text": " causal inference even in those simpler settings. All right so the causal inference research or me says that like listen" }, { "end": 3706, "start": 3699, "text": " you're dealing with intervention distributions and you're but you're modeling observational distributions right and" }, { "end": 3717, "start": 3706, "text": " so like if there's any kind of confounding your choice of the best action can be biased and exactly how biased it's not clear but it's always" }, { "end": 3725, "start": 3717, "text": " going to be a risk. The engineer in me says fine just kind of use the baseline method and if it proves insufficient to the" }, { "end": 3733, "start": 3725, "text": " problem then then try to enhance it with some kind of causal model I don't know what comes the mind is some work by" }, { "end": 3741, "start": 3733, "text": " carriers AI you know that some of their their work on schema networks where they kind of they they show you can not just" }, { "end": 3751, "start": 3741, "text": " have the the agent learn a good policy with fewer with with with far less data but if you change the game in a way it" }, { "end": 3758, "start": 3751, "text": " hasn't seen before it's able to adapt because it has you know a causal model of how the game works. 
I love that" }, { "end": 3765, "start": 3758, "text": " paper I've been a fan of that paper for ages and and it's so it makes it so simple right and it's kind of how we think of the game I think right how" }, { "end": 3772, "start": 3765, "text": " like if you had a aastic child to explain how to space invaders work what is it doing they probably come up with some kind of" }, { "end": 3780, "start": 3772, "text": " explanation some yeah yeah and and and and if you change the rules of the game where you add you know you add some kind of" }, { "end": 3788, "start": 3780, "text": " variations to the rules the human who is familiar with the old rules is is typically able to to make predictions for how things are" }, { "end": 3795, "start": 3788, "text": " going to work on the new rules and it seems to me that a lot of algorithms you would want to you would want to be able to have that but I don't" }, { "end": 3802, "start": 3795, "text": " know I you know I'm not a I mean there are people who work on a lot of different practical reinforce learning problems and maybe like" }, { "end": 3811, "start": 3802, "text": " maybe there are some problems where that doesn't matter like that the rules are always going to be the same and that you know if you want to handle interventions you just you" }, { "end": 3819, "start": 3811, "text": " know you just treat interventions like a random variable in the system and and just make sure that you simulate every possible combination" }, { "end": 3826, "start": 3819, "text": " and the training data and so you know I don't know maybe that works in a lot of scenarios so yeah the engineering me and says like you know" }, { "end": 3834, "start": 3826, "text": " do what you can get done quickly and and works in a robust way and and let's you get to your next milestone they cause a" }, { "end": 3840, "start": 3834, "text": " inference researcher me and says hey if you're working with interventions and actions are interventions that you're working" }, { "end": 3847, "start": 3840, "text": " with an intervention distribution and you're trying to to approximate them with a with just observations and you" }, { "end": 3853, "start": 3847, "text": " know I have an intervention model then you could be biased in fact you could be severely biased and the in a" }, { "end": 3860, "start": 3853, "text": " lead to catastrophe I guess the other thing to mention here is that it's the Perlian causal models like like" }, { "end": 3867, "start": 3860, "text": " causal graphical models structural causal models there are a special kind of probabilistic model right" }, { "end": 3875, "start": 3867, "text": " especially kind of generative model there is a lot of work from the probabilistic modeling standpoint of an" }, { "end": 3883, "start": 3875, "text": " agent models right so like you know one thing you can do if you want to optimize and you know an objective" }, { "end": 3888, "start": 3883, "text": " function is you know I have some function that takes an action and you find the argomax right but the other" }, { "end": 3894, "start": 3888, "text": " thing that you can do is treat it like a Bayesian inference problem and so like this is this paradigm is called" }, { "end": 3901, "start": 3894, "text": " planning as inference there's a class of researchers who work on applying a Bayesian or more general kind of" }, { "end": 3907, "start": 3901, "text": " probabilistic approaches to agent modeling and you could take some of the the Perlian kind of" }, { "end": 3912, "start": 3907, 
"text": " structural causal models or causal graphical models and implement them in those frameworks and then have" }, { "end": 3919, "start": 3912, "text": " the additional semantics of causal semantics and then make some and then whatever whatever advantages" }, { "end": 3924, "start": 3919, "text": " those probabilistic modeling approaches provide you can enhance them with some new capabilities if you have" }, { "end": 3930, "start": 3924, "text": " the causal abstractions built in. Well we had Taylor Killingen on the show recently and he talked about" }, { "end": 3940, "start": 3930, "text": " how in some of his research applying policies learned at one hospital to optimize treatment didn't work so" }, { "end": 3946, "start": 3940, "text": " well if you took it to another hospital that had a different distribution of patients and so these types of" }, { "end": 3953, "start": 3946, "text": " tools helped improve the results there and I won't try to summarize that any further. In your cosality course you" }, { "end": 3959, "start": 3953, "text": " talk about this notion of free will in an agent. Can you remind us what do you mean by that and and how is that" }, { "end": 3967, "start": 3959, "text": " helpful beyond outside of philosophy like doesn't Atari agent have this type of free will or can we make" }, { "end": 3979, "start": 3967, "text": " like RL agents. So the discussion of free will of blame of intention of explanation these are" }, { "end": 3986, "start": 3979, "text": " things that we would like to have in engineered agents that are making decisions under certainty. So why" }, { "end": 3991, "start": 3986, "text": " when word is free will come up. So if people are listening here if they kind of Google kind of causal" }, { "end": 3998, "start": 3991, "text": " decision theory compare it to evidential decision theory. This is kind of an old argument in" }, { "end": 4004, "start": 3998, "text": " decision theory and you can and they have problems like you know Newcombe's dilemma for example where you can" }, { "end": 4010, "start": 4004, "text": " show that you get different results under different procedures and this you know to the idea of why a" }, { "end": 4015, "start": 4010, "text": " policy learned in one hospital doesn't transfer to another one. This is this would be a setting" }, { "end": 4020, "start": 4015, "text": " where evidential decision theory failed and causal decision theory would succeed. And so the idea is simple" }, { "end": 4028, "start": 4020, "text": " causes decision theory says that you can't model actions as random variables you need to you need to" }, { "end": 4039, "start": 4028, "text": " model them as interventions as as a and an intervention is something that that is a an operation on a on the" }, { "end": 4045, "start": 4039, "text": " joint probability solution between random variables. And even in the setting where you have a policy that's" }, { "end": 4053, "start": 4045, "text": " the caustic right so it's generating an action you know in the causal modeling approach we once the" }, { "end": 4060, "start": 4053, "text": " action is realized like yes it's the caustic output but we don't treat it as a as a random variable we" }, { "end": 4067, "start": 4060, "text": " treated as a as an operation. How does that apply to free will? 
The idea is that if an agent's actions are" }, { "end": 4075, "start": 4067, "text": " just some stochastic function of elements of its environment so it's just a random like my actions are" }, { "end": 4081, "start": 4075, "text": " just generated from a conditional probability distribution that's conditioned on elements you know the state of the" }, { "end": 4087, "start": 4081, "text": " environment. And then I'm really just kind of like this automata just reacting to things right as opposed to" }, { "end": 4096, "start": 4087, "text": " something as something that's introspective and deliberate and and choosing my actions according to" }, { "end": 4104, "start": 4096, "text": " some some some reasoning about how my actions are going to affect my environment. You know so it's free" }, { "end": 4110, "start": 4104, "text": " well in a sense that you know we might say like you know an amiiba or a white blood cell that you watch in a" }, { "end": 4117, "start": 4110, "text": " petri dish kind of making decisions and move around trying to absorb some kind of antigen or or" }, { "end": 4122, "start": 4117, "text": " foods or something like that. We wouldn't we typically want to think of it as having free will which" }, { "end": 4127, "start": 4122, "text": " just kind of reacting to signals in its environment. And and so like I mentioned things like" }, { "end": 4137, "start": 4127, "text": " regret and and blame right so the free will argument there is when we explain and explanations right so" }, { "end": 4145, "start": 4137, "text": " when we explain why something happened oftentimes we we delete we kind of oscillate between this idea of you" }, { "end": 4152, "start": 4145, "text": " know us being kind of passive reactors to our environment right like you know usually Adam and Eve stories" }, { "end": 4159, "start": 4152, "text": " what I teach in the course and like you know when when when when God asks Adam why did you eat the apple" }, { "end": 4166, "start": 4159, "text": " he says well the woman that you you gave that you put here with me gave me the apple and so I" }, { "end": 4175, "start": 4166, "text": " ate it's and then the woman says well the snake said it was good deed and so I ate it but when when" }, { "end": 4180, "start": 4175, "text": " when Eve is considering whether to eat the apple she's you know she's she's listening to what the snake says" }, { "end": 4185, "start": 4180, "text": " but she's being delivered to it for the deliberative she's like you know if I you know she's reasoning if I" }, { "end": 4190, "start": 4185, "text": " know about the intervention like if I eat this then this thing will happen the explanation that she gives her" }, { "end": 4196, "start": 4190, "text": " and her decision is actually kind of oscillating between that's framing of the problem as as" }, { "end": 4201, "start": 4196, "text": " of being an active agent and the passive agent and so this is important in the reinforce learning setting" }, { "end": 4208, "start": 4201, "text": " it's not just philosophy it means that like if I have a household robot that is it's cleaning the house" }, { "end": 4213, "start": 4208, "text": " but it you know it does so in a way that inconveniences me and maybe it wakes me up in a middle of the night" }, { "end": 4218, "start": 4213, "text": " maybe it's you know running laundry what I'm taking a shower right and I go to the and I go to the robot" }, { "end": 4223, "start": 4218, "text": " and I say hey don't do that right well the robot now is a 
reason if it's going to improve in the" }, { "end": 4227, "start": 4223, "text": " future it needs the reason why it what I don't want it to do that it you know it could just kind of" }, { "end": 4232, "start": 4227, "text": " work on reinforcements but you know it's probably going to get stuck in some kind of you know" }, { "end": 4238, "start": 4232, "text": " it's going to probably get stuck in weird behaviors if I if it does that it you really wanted to do" }, { "end": 4246, "start": 4238, "text": " is reason why you were unhappy right are you and so it needs to kind of think about it easy kind of" }, { "end": 4251, "start": 4246, "text": " think in terms of I mentioned earlier actual causality and he's to think about like okay well" }, { "end": 4257, "start": 4251, "text": " retrospectively this happened was a was a which which led that to happen and so I did you know" }, { "end": 4263, "start": 4257, "text": " so these he's unhappy because I did this thing while he was showering this is not the" }, { "end": 4268, "start": 4263, "text": " say he doesn't want me to do this thing at this time or I just you know I that or I just you know" }, { "end": 4274, "start": 4268, "text": " or when he's only he I just should not do this thing because or when he's souring and so I could" }, { "end": 4279, "start": 4274, "text": " and so I could and so I can update its policy accordingly and so you know having an this kind of" }, { "end": 4285, "start": 4279, "text": " this discussion about these philosophical discussions kind of concern how we can create" }, { "end": 4291, "start": 4285, "text": " agents that can not just kind of react to plus or minus signals about whether an action" }, { "end": 4297, "start": 4291, "text": " was good or bad but rather generate explanations for why an action was good or bad or led" }, { "end": 4303, "start": 4297, "text": " to a good or bad outcome and and after accordingly this becomes really important in particularly" }, { "end": 4309, "start": 4303, "text": " in settings where we have partial information and and outcomes are there's a" }, { "end": 4314, "start": 4309, "text": " there's maybe there's a random element in the environment because you know like say for example" }, { "end": 4320, "start": 4314, "text": " you're playing no limit Texas Holdham right like that's a kind of game where you know if you" }, { "end": 4325, "start": 4320, "text": " lose it you could be playing an optimal strategy and still lose and you can believe playing an" }, { "end": 4330, "start": 4325, "text": " inferior strategy and still win right and so you want you want to you know if you can if you can" }, { "end": 4335, "start": 4330, "text": " generate explanations for why you win or lose in a way that's robust to that kind of uncertainty" }, { "end": 4342, "start": 4335, "text": " you can you can cope with those partial information random you know stochastic settings." }, { "end": 4348, "start": 4342, "text": " I guess in that example the robot now needs to know about showers which before maybe it didn't" }, { "end": 4353, "start": 4348, "text": " care about wasn't part of its observation was like an unobserved confounder would we say that?" 
}, { "end": 4358, "start": 4353, "text": " If the robot is going to generate an explanation as to why you are unhappy it needs to" }, { "end": 4362, "start": 4358, "text": " and the real reason that's the way you're unhappy is because you know it ran you know" }, { "end": 4366, "start": 4362, "text": " instead of washing clothes while you were in the shower you know then in order for it to land" }, { "end": 4372, "start": 4366, "text": " on that explanation it needs to have a model that has showers and has a relationship between" }, { "end": 4377, "start": 4372, "text": " showers and clothes washing that shows that when the when when when you run a hot a hot" }, { "end": 4382, "start": 4377, "text": " wash cycle when somebody's showering if they're going to lose hot water right and so like that's that's assuming" }, { "end": 4389, "start": 4382, "text": " a lot but you know maybe that's a rich sudden problem maybe we can brute force that." }, { "end": 4395, "start": 4389, "text": " Maybe we can move on to the the ICML workshop on causal RL by professor alias Barron Boyme" }, { "end": 4400, "start": 4395, "text": " and he's also from Purdue I just realized so I saw you covered this on your twitch channel" }, { "end": 4405, "start": 4400, "text": " and by the way we'll link to the twitch channel and all the links we talked about on the on the" }, { "end": 4413, "start": 4405, "text": " page at on talk rl.com but so that was that was a really intense lecture to cover tons of ground" }, { "end": 4418, "start": 4413, "text": " but could you for me and my listeners can you share a few takeaways from from that workshop?" }, { "end": 4424, "start": 4418, "text": " Yeah so I think that lias has done the most more work than anybody in connecting causal" }, { "end": 4429, "start": 4424, "text": " inference particularly pearl pearly and causal inference his his advisor was Judeo" }, { "end": 4436, "start": 4429, "text": " pearl is a hyperlipses name sorry to reinforce learning problems and yeah so" }, { "end": 4442, "start": 4436, "text": " it that workshop is kind of his overview of breaking down reinforcement learning problems" }, { "end": 4446, "start": 4442, "text": " in terms of different causal terms so breaking breaking down reinforcement learning" }, { "end": 4451, "start": 4446, "text": " in causal terms into different sets of problems that you want to solve and so it's a yeah I" }, { "end": 4456, "start": 4451, "text": " definitely recommend it and he he updates it every time he gives it so I think it's the second" }, { "end": 4460, "start": 4456, "text": " or third time he's given that talk yeah so I guess you know how much which channel I do a little bit" }, { "end": 4464, "start": 4460, "text": " live coding do a little bit of paper reading but I you know one of the things I really like to do is" }, { "end": 4472, "start": 4464, "text": " live watch a workshop or a summary of a paper or you know a podcast on that's covering a paper" }, { "end": 4476, "start": 4472, "text": " or something like that so yeah that's what you're talking about that's what I did I did a few" }, { "end": 4480, "start": 4476, "text": " twitch streams covering covering that that workshop by lias." 
}, { "end": 4484, "start": 4480, "text": " I've been joined the twitch streams and your reading groups yeah we'll link to all that good stuff" }, { "end": 4490, "start": 4484, "text": " moving forward so what does a viewer look like for you you got a lot of things going on when you" }, { "end": 4496, "start": 4490, "text": " what are you looking forward to the next year and and can you tell us a bit about your path going forward." }, { "end": 4502, "start": 4496, "text": " I like doing three things like like engineering I like teaching I like writing and I plan to do" }, { "end": 4508, "start": 4502, "text": " a lot more of that all three of those when it comes to the engineering I'm very much focused on" }, { "end": 4516, "start": 4508, "text": " at that intersection between AI and you know building SaaS product right I think we need more people who are" }, { "end": 4522, "start": 4516, "text": " willing to kind of get on the front line of that problem they said software is eating the world and they were hoping that AI" }, { "end": 4528, "start": 4522, "text": " would too and we're hitting a whole bunch of problems when it comes to cost the training models and how robust they are after they" }, { "end": 4536, "start": 4528, "text": " we've trained them and trying to actually shoehorn these things into the path the past several decades of software" }, { "end": 4542, "start": 4536, "text": " and the best practices is not really working and so that's and that's a problem to be solved but also" }, { "end": 4550, "start": 4542, "text": " we have hard researchy problems that we need to solve all the while getting feedback from customers" }, { "end": 4558, "start": 4550, "text": " and trying to make it so that we actually get some product market fit and so I think that is a really" }, { "end": 4565, "start": 4558, "text": " fantastic you know problem space and I encourage more people to think about it in terms of teaching I am working" }, { "end": 4573, "start": 4565, "text": " on more coursework for all deep like a city where starting looking at some of these the ways that we can" }, { "end": 4581, "start": 4573, "text": " reconcile things from behavioral science and or behavioral economics and connecting them to Ethereum" }, { "end": 4587, "start": 4581, "text": " we're looking at another workshop on applied category theory a few more items in the pipeline there" }, { "end": 4592, "start": 4587, "text": " in terms of writing I really hope to be writing more I have a newsletter that's fairly popular" }, { "end": 4597, "start": 4592, "text": " and I'm at a weekly cadence now I'd like heck I'd like to get it up to a daily cadence" }, { "end": 4603, "start": 4597, "text": " one of the advisors the kind of the early members of the board that's a sub stack that newsletter company" }, { "end": 4610, "start": 4603, "text": " was an omen to remind for my days in China and he has a really popular newsletter called sinicism" }, { "end": 4616, "start": 4610, "text": " and it's a daily newsletter and it's I'd love to be able to do that kind of thing with machine learning" }, { "end": 4621, "start": 4616, "text": " but you know there's only that much time in a day so you know we also have this growing community" }, { "end": 4626, "start": 4621, "text": " and we're interested in getting more people to join up so there's a lot of work happening there as well" }, { "end": 4632, "start": 4626, "text": " so many things happening on many fronts trying to narrow it down a little bit try to learn how to say no but 
it's tough" }, { "end": 4636, "start": 4632, "text": " I look forward to it all there are links to everything will be in the episode page" }, { "end": 4641, "start": 4636, "text": " Dr. Robert Ness it's been it's been a real pleasure having you on the show chatting with you" }, { "end": 4646, "start": 4641, "text": " thanks so much for sharing your time and your insight with with myself and our listeners today" }, { "end": 4654, "start": 4646, "text": " thanks Robin for having me and I really enjoyed this and I I hope that's you know I think if there's any community within the machine learning space" }, { "end": 4659, "start": 4654, "text": " where causal reasoning is going to be the killer app it's going to be reinforcement learning" }, { "end": 4668, "start": 4659, "text": " I'm not going to say that like the absence of the adoption of causal methods is kind of what's holding RL back for becoming kind of much more practical and applied" }, { "end": 4676, "start": 4668, "text": " but I think that if people were to adopt the mental models that come with adopting this manner of thinking" }, { "end": 4681, "start": 4676, "text": " and then we would see a lot of breakthroughs I'll leave with that" }, { "end": 4692, "start": 4688, "text": " notes and links for this episode are at talkrl.com" }, { "end": 4700, "start": 4692, "text": " if you like this show I need your support you can help in a few ways once subscribe on your favorite podcast platform" }, { "end": 4703, "start": 4700, "text": " subscriptions make a big difference" }, { "end": 4709, "start": 4703, "text": " two follow us on Twitter and talk RL podcast we love retweets" }, { "end": 4724, "start": 4709, "text": " three give us a five star rating on Apple podcasts if you don't think we deserve five stars let us know on Twitter what we could do better" } ]
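The do-operator support in Pyro mentioned above can be made concrete with a minimal sketch (not from the episode; the toy model, variable names, and parameters are assumptions chosen only for illustration). It shows how pyro.poutine.do clamps an intervened variable so that downstream sampling uses the forced value while upstream variables keep their own distributions.

    import torch
    import pyro
    import pyro.distributions as dist
    from pyro import poutine

    def model():
        # Hypothetical toy structural model: a confounder z drives both the
        # "action" x and the reward-like outcome y.
        z = pyro.sample("z", dist.Bernoulli(0.5))
        x = pyro.sample("x", dist.Bernoulli(0.2 + 0.6 * z))
        y = pyro.sample("y", dist.Normal(2.0 * z - x, 1.0))
        return y

    # p(y | do(x=1)): the do handler severs the z -> x edge by clamping x,
    # so downstream code sees x = 1 while z keeps its marginal distribution.
    do_model = poutine.do(model, data={"x": torch.tensor(1.0)})
    samples = torch.stack([do_model() for _ in range(10_000)])
    print("Monte Carlo estimate of E[y | do(x=1)]:", samples.mean().item())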
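The related point about confounding in logged, offline data, that the action which looks best observationally can differ from the action that is best under intervention, can be seen in a few lines of NumPy. This is a minimal sketch with made-up numbers, not an example from the episode.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000

    # Hypothetical logging setup: a hidden variable u influences both which action
    # the logging agent tends to pick and the reward it receives.
    u = rng.binomial(1, 0.5, size=n)                 # unobserved confounder
    a = rng.binomial(1, np.where(u == 1, 0.9, 0.1))  # logging policy prefers a=1 when u=1
    r = rng.normal(2.0 * u - 1.0 * a, 1.0)           # u raises reward, a=1 actually lowers it

    # Naive observational estimates from the logged data, E[r | a]:
    print("E[r | a=1] ~", round(r[a == 1].mean(), 2))   # looks better (~0.8)
    print("E[r | a=0] ~", round(r[a == 0].mean(), 2))   # looks worse (~0.2)

    # Interventional quantities, E[r | do(a)], with the action chosen independently of u:
    print("E[r | do(a=1)] ~", round(rng.normal(2.0 * u - 1.0, 1.0).mean(), 2))  # ~0.0
    print("E[r | do(a=0)] ~", round(rng.normal(2.0 * u, 1.0).mean(), 2))        # ~1.0, a=0 is actually better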
Marlos C. Machado
Marlos C. Machado on Arcade Learning Environment Evaluation, Generalization and Exploration in RL, Eigenoptions, Autonomous navigation of stratospheric balloons with R...
https://media.transistor…e7d.mp3?src=site
This is TalkRL Podcast, all reinforcement learning, all the time. Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan. Dr. Marlos Machado is a research scientist at DeepMind and an adjunct professor at the University of Alberta. He holds a PhD from the University of Alberta and a Master of Science and Bachelor of Science from UFMG in Brazil. Thanks so much for joining us, Marlos. Thanks for having me. I'm excited to chat with you today. So how do you describe your area of interest? I am generally interested in artificial intelligence, which is quite broad, but I am interested in the different aspects of intelligence, not necessarily married to one approach. I've been especially fascinated by the problem of decision making, and specifically sequential decision making, when actions have delayed consequences. So in the past couple of years I've been specializing more and more in reinforcement learning, but I am interested in things related to that: representation learning, abstractions and things like that, which is pretty broad. Let's put it this way. So the journey of your career has included the University of Alberta, Google Brain and now DeepMind. Can you tell us how your perspective and maybe your approach to research has evolved between these different chapters of your career? I did my PhD at the University of Alberta, and obviously your approach to research changes a lot during your PhD, when you start you're just learning. And I don't even know if it's fair to say that something changed from when I left the U of A, but I think that the U of A shaped a lot of how I see research: the big concern for the fundamentals, for being sound, for being precise in what you're saying, and trying to understand the phenomena. Not trying to get too much into maybe getting people excited about science, but to actually be a scientist and ask the questions: what is the phenomenon we are observing, what is the hypothesis, and how can we do good science about that? When I left the U of A and went to Google Brain, one thing that I was very excited about was the ability to scale the research. As much as I was doing what people call deep RL at the U of A, it was always a struggle with a couple of GPUs to do research. When I went to Google Brain, I definitely had access to more resources and I thought I could explore that a little bit more, ask different questions, or be more careful about some of the questions that I was going to ask. And there is also this perspective that you start to be exposed to much more once you are not a grad student, because as a grad student you are fundamentally concerned about your thesis. So as much as you explore other areas and you're talking to people, you still eventually have to write a PhD thesis on your research. And at Brain, I was also happy that I could explore more and diversify my interests and my research. And DeepMind, I joined DeepMind a month or two ago, so it's hard to say that something has changed about my research, but one thing that I'm excited about at DeepMind is the number of people around me that have reinforcement learning as their main research interest. I'm already benefiting a lot from having these different perspectives and having very deep and meaningful discussions about reinforcement learning problems that I care about, and I'm very excited to see what comes out of that. Awesome.
So would you say that the conception of what reinforcement learning is, what it's about, and the important aspects of it, these fundamental things, do you think the conception of these things differs much between these institutions, or are they all looking at it in a similar way? I find it hard to characterize that, because I don't think that there is a mandate from an institution on what reinforcement learning is. And I've been very lucky that all these institutions that I've been part of are very broad and have several researchers and several groups doing research on reinforcement learning. So it's hard for me to characterize, this is how Google Brain or DeepMind sees reinforcement learning. I think that there are definitely differences that I could perceive. And again, this is just a very personal note, maybe I'm even hedging the response too much, but obviously this is a personal perspective, and mainly Brain and DeepMind are such wide institutions that I'm grossly mischaracterizing anything that I say, because I'm sure that there is a group that I'm not aware of that is doing things differently. But one distinction that I see is that at the U of A there was always this very big discussion about, yes, how can we come up with intelligent agents, but the focus has never been so much on what we would call deep RL. I did deep RL, I wrote a couple of papers on deep reinforcement learning when I was in my PhD, but at the U of A a lot of the professors and a lot of the research groups are not necessarily so excited about deep reinforcement learning research per se. They are very interested in the fundamentals of RL, and for that, if you need function approximation, you can just do linear function approximation and things like that, because they really want to control as much as they can, to isolate everything and explain one process. And then at Brain and DeepMind, I think that they share a lot of similarities. I would say that one of the big differences at Brain is that the groups are often localized. So there is the Montreal group, and it has its own flavor of research, and the Mountain View group has its own, and as much as the different groups talk, you can see even in the publications that the same groups tend to publish together. Different groups have different approaches, but I think that one of the things is that Brain has a big focus on more than reinforcement learning, right? At Brain, reinforcement learning, from a percentage perspective, as much as there are amazing researchers at Brain doing reinforcement learning research, and there are a lot of them, the majority of the research at Brain still seems to me to be focused on deep learning. While at DeepMind, I guess, reinforcement learning is at the center, or at least from my perspective it feels at the center of things. So it's not so much about how you see the problem or the problem formulation, but it's maybe more the atmosphere, and it shapes some of the discussions that you have. But in all these three places I always had absolute freedom to do whatever I want. So in a sense it's my perspective of reinforcement learning, not anything imposed on me by anyone else. That was super interesting, and it's great to chat with you, you being in this unique position to be able to comment on the different perspectives of these world leading institutions.
So we were going to talk about a few of your papers, starting with revisiting the ALE. This is a first-author paper of yours, Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents, Machado et al. 2018. Before this paper came out, can you help us understand what the issues were with comparing results across studies in Atari, and then how did you address them here? Sure, yeah, that's an interesting paper because it was in the works for a very long time. The original Arcade Learning Environment paper came out in 2013 from Michael Bowling's group at the University of Alberta, with Marc Bellemare as the first author. And then people slowly started to get excited about it, until of course the deep RL explosion that got everyone's attention, and Atari was the main benchmark that people used. And I think there was that first phase where people were just getting used to the framework, getting used to the problems and the questions you could ask and what the limits of computation were that we could explore. And because people were exploring from so many perspectives, sometimes it felt that they were not making apples-to-apples comparisons. And I am very, very annoying about that; those who work with me know that I get very upset with these types of comparisons. And it always bugged me. To give a concrete example, when the original Atari paper came out, they were using episodes as a metric, so the number of episodes in the environment is how you measure the agent. Basically you just let the agent interact with the environment for a thousand episodes, or whatever the number was, I don't remember, and you report the performance at the end. But the tricky thing here is that if you have an agent that learns well and starts to learn a good policy early on, either by chance or because it's a better agent, you're going to live longer, right? And then the episodes become much longer. If your episodes are much longer, then the agent is actually seeing much more data than it was before, because this is a thing that is evolving. When the DQN paper came out, I think they did a more appropriate thing, which was to say, no, we're going to count the number of frames that the agent has seen, the number of interactions with the environment. But because everyone else was doing episodes before, now you start to have comparisons between numbers of frames and numbers of episodes, and then you are not even comparing the same number of interactions with the environment. And you could see this in a couple of papers. There are other things, like, is the agent allowed to know when it loses a life, or does the agent only know when all the lives are lost and it gets to start the game again? And you can keep adding up those things; in the paper we have a whole list of them. But you can see that these details matter, and they matter a lot. Of course, the number of interactions with the environment your algorithm gets is a major thing, but even: do you get to know the number of lives or not, and how do you deal with that? Another thing, for example: do you get to know the number of actions the agent has access to? Because if you're playing Pong, you only go up and down, right? You don't have that many actions.
So you have three actions: up, down, and stay. While in other games you have the full 18 actions. So is the agent supposed to learn that some of the actions have no effect, or can we just tell the agent which ones are available? In the paper we were very careful, and we iterated a lot on this. We did not want to say, oh, this is what you should do. But it's important to acknowledge that if you are assuming your agent knows the effective action set, the minimal action set, let's call it, the minimum set of actions that actually have an effect in the environment, well, that agent is not going to spend time figuring out that it should not consider those other actions, and so it's not fair to make that comparison. So when we wrote this paper, and it actually started back in 2015, when we organized a workshop about all this research that was being done on how to do reinforcement learning across this vast range of domains, and how we could get performance that is, in a sense, general purpose, as we would call it. A lot of people, the leaders of the reinforcement learning community at the time, were discussing this and saying, yes, we have to fix this, we have to have some guidelines to help the whole field. At the time I was one of the organizers of the workshop, and we said, yeah, let's write a paper about that. The paper took much longer to be written for all sorts of reasons, but at the end I think it did what it was set up to do and what a lot of people were expecting us to do, which was to come up with at least some discussion about this, and some examples of apples-to-apples comparisons and things that you would expect. One of the reasons the paper took so long to be written is also that, from the moment we started doing this, we started to realize that maybe we could do some things better. We could add stochasticity to the environment, because it was deterministic, or we could add modes, as we did, which vastly increases the number of games that you have access to. And as we kept adding these, and we wanted to write a solid paper, it took quite some time to get out, but eventually, in 2018, we published it at JAIR. And to be fair to JAIR, because journals get a bad reputation, the review process was fairly short, so it was not that the journal was holding us back. And how was the paper received? Did everyone latch onto this as the definitive way to benchmark with Atari, or did it take some time to diffuse? Did everyone agree that this is the right way to do it, the protocol? I mean, I want to say that the paper was well received. People oftentimes associate my name with that paper, so yeah, I think it was well received. There is a big difference between being well received and becoming a standard in the field, and I think you cannot force people to do what you want them to do; maybe you're not even suggesting the right thing. So in this context, how much people actually decided to listen to it varies. Even recently we had some big results that were not following it. People still use different protocols, different ways of doing the statistics, and it depends on the version of Atari that they are using. But I want to say that I've been in the review process a couple of times where I see other reviewers saying, oh, you're not following this paper's guidelines and you should.
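As a rough illustration of the frames-versus-episodes point discussed above, here is a minimal sketch of evaluating under a fixed budget of environment frames rather than a fixed number of episodes. The `agent` and `env` objects are generic placeholders with an assumed interface, not the ALE's actual API.

```python
# Minimal sketch (hypothetical agent/env interface, not the ALE API):
# evaluate under a fixed budget of environment frames, so an agent whose
# episodes run longer does not silently receive more data than one that
# dies quickly.
def evaluate_by_frames(agent, env, frame_budget=200_000):
    frames, returns, episode_return = 0, [], 0.0
    obs = env.reset()
    while frames < frame_budget:
        action = agent.act(obs)
        obs, reward, done = env.step(action)  # one frame of interaction
        episode_return += reward
        frames += 1
        if done:
            returns.append(episode_return)
            episode_return = 0.0
            obs = env.reset()
    return returns  # report performance as a function of frames, not episodes
```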
So I think there is a general consensus in the community that it's at least one good standard to follow, and I'll take that; I think it's good enough. So this paper is really how I first encountered your name. As I told you before our interview, it came up during my first NeurIPS, in 2018, and Go-Explore was a big part of the Deep RL Workshop. Some of the discussion after that was around whether their methods adhered to the guidelines in your paper. So that's how I came to know you for the first time. Yeah, it was interesting, because when the Go-Explore paper came out, I didn't say anything, but I could see people bringing up my paper and saying, oh, you're not following the guidelines. And essentially our paper got, I want to say, a lot of attention because of Go-Explore, so I think it was good for everyone. And I tweeted about this yesterday or the day before, I don't remember, but I finally got the chance to read the final version of Go-Explore, the Nature paper, and I was very happy and impressed by how much they actually listened to the feedback. I really congratulate the authors for taking the feedback from the community and saying, yes, if this stochasticity is something that is important, we are going to address it. I think it was a good outcome and a good example of science and the community talking to each other. So can you tell us about determinism in the ALE, and stochasticity, and sticky actions? How does all that stuff work? We take a lot of things for granted nowadays, but the Atari 2600, in a sense, predates stochasticity: it didn't have a source of stochasticity built into the console. The best they could do was to use the state of the RAM when the game was loading to try to come up with some notion of stochasticity. Which means that the vast majority of the games are deterministic. If you execute the same sequence of actions, you're always going to have exactly the same outcome, which is fine; there are a lot of problems in the real world where this is how things operate. On the other hand, it feels that there is something missing, right? Because a lot of the problems we have also contain some inherent stochasticity, maybe because you don't control the environment, or maybe because you don't control your own actions, you cannot time every microsecond of how you're going to move each of your muscles. So we felt that there was something missing, and that was the stochasticity. Because the original notion of the Arcade Learning Environment, this set of Atari games that we use for reinforcement learning evaluation, the original idea, at least how I see it, and I was not on the first paper, was that we want a single agent that we can deploy across all these different environments. And it's exactly the same algorithm; it's just going to run and it's going to learn how to do well. So there is no specialization per game, right? There is nothing. And this is, in a sense, what allows us to have a general purpose algorithm, because if I have an algorithm where I say, hey, learn how to play tennis, and it does, and then, learn how to shoot aliens, and it does, under the same interface and so on, that's a much better algorithm than one I just evaluated on, I don't know, playing tennis.
So you had this general purpose approach, and it felt to us with time that stochasticity was a big part of it, because we could see, or we had hypothesized, that some of the papers coming out were, in a sense, implicitly exploiting the determinism. And in the paper we came up with the simplest version of that, which we call the Brute, which was to show that we could come up with a learning algorithm, if you will, that didn't look at the state at all. Basically we just ignore the screen and blindly learn a sequence of actions, which is what we call open-loop planning, and it could sometimes do better than the state-of-the-art algorithms at the time. And somehow, to us, it felt wrong. How can we have what we call an intelligent agent that is learning something without even considering what it's observing? Stochasticity was a way we could bring up this discussion, at least as an extra dimension that should be considered. And our solution was sticky actions. I did a lot of the development of the ALE framework itself on the back end, and it's very low level, let's put it that way. When you look at the code for the ALE, or the Atari emulator, you don't have a source of randomness. So it was very difficult to say, oh, we're going to add randomness to the game itself, because that was going to be a lot of work, and I didn't want to spend two years of my PhD doing that. So we asked, what would be a meaningful way of thinking about this? And then came sticky actions, which was the notion that even when a human is playing Atari, they don't have the feeling that the game is deterministic. And the reason they don't is that a human cannot time, oh, I'm going to shoot every 30 milliseconds, or something like that, because humans have lags and reflexes and so on. So what sticky actions do is introduce a probability that, every time you execute an action, it takes a little bit longer, maybe one or two more interactions with the environment, for that action to actually take effect, which mimics the delay humans could have in reacting. And as we showed in some of the results, when we were trying to see how to break the Brute, for example, this deterministic algorithm that could do well, even this very simple notion of stochasticity would break it, because it was clearly exploiting something that, at least from my perspective, was not ideal to exploit. So it seems very realistic: you're in the 80s, you go to the arcade, it's an old machine, someone spilled Pepsi on the controller, and these are your sticky actions. It's perfect, I love that. And I love the fact that sticky now has two meanings, because I don't know if I'd want to play with sticky buttons, but sure. Okay, so let's move on to your work on generalization. There are a couple of papers here: first, by first author Farebrother, Generalization and Regularization in DQN, from 2018, and more recently, by Agarwal et al., Contrastive Behavioral Similarity Embeddings for Generalization in Reinforcement Learning, at ICLR 2021. So can you help us understand in simple terms what's going on in these papers? Yeah, sure.
I think this question of generalization started to bother me, in a sense, when I was writing the Revisiting the ALE paper. How it came to be was that one of the things we added to the ALE was this notion of modes. When you see, I don't know, Freeway, for example, the game where the chicken is crossing the road, everyone is familiar with it from talks and so on, you have a yellow blob trying to cross the road while cars come by. But what people don't realize is that the developers of these Atari games were so good that they were not satisfied with putting a single game in 2K of RAM; they wanted to put 16, 32 games. And somehow they managed to do that. So on the Atari console, you had some switches and you could actually change the mode of the game. In Freeway, for example, where you cross the road, you could change the timing so you could go to rush hour, and then you had more cars. And then it's a different game, in a sense, but it's not, right? It's the same sprites, it's the same principle of the game, it's the same idea: you go up and you avoid cars. But by flipping this switch to a new mode, you have a new environment, a new reinforcement learning environment. And when I was seeing this, even when we were proposing these modes, introducing them as a research problem, it felt that, yes, you can call it all sorts of things, but to me this is in essence a problem of generalization. I want my agent to be able to learn, by playing two or three variations of Freeway, that, yes, I want to go up and avoid cars. So now, if there is a new pattern of cars showing up, or the cars are at different speeds, ideally the agent would not suck at playing that game. And this first paper that you mentioned, the paper with Jesse Farebrother and Michael Bowling, came when I was working with Jesse, who was at the time an undergrad student. I was posing this question to him and he was like, yes, that's very interesting. And then we started to explore how much the gold standard at the time, we chose DQN, was, in a sense, overfitting to the single game it was being trained on. What would happen if we actually trained DQN on one of these environments and then basically just changed the speed of the cars? And lo and behold, as by now we all know, these algorithms, just by the simple definition of them, have no incentive to generalize beyond the tasks that they are seeing. So we were showing this, and we were showing that even if we revisited some basics of machine learning, like, what would happen if we regularized this network? If we use regularization to improve generalization, we could see some benefits. And we were also asking, what if we reload the weights of the network and just retrain the last layer, for example, something like that, would we still be able to leverage the representation? Because arguably the sprites are the same, so the representation should be transferable. So we were exploring a lot of these questions in this paper. And the solutions, if you even want to call them solutions, were more about raising awareness of the problem than necessarily proposing new solutions; they were too simple. And then life happened, I don't know, finishing a PhD, getting a job, and so on.
But this question was always at the back of my mind, and eventually I managed to convince Rishabh Agarwal, who was a resident at Google Brain at the time, that this question of generalization was an interesting one. And we worked with Pablo Castro and Marc Bellemare on that. Eventually I was really happy with the solution that we came up with, and I say we, but Rishabh should get a lot of the credit. Maybe even taking a step back, we were asking the question, how can we learn a representation? Because by now it seemed pretty clear to all of us that the representation we were learning was not generalizing. And we were asking, how can we train an agent to learn a representation in such a way that if it sees a different environment, a different task, but a very similar one, it's still going to know what to do? And some folks at Microsoft Research in Montreal had come up with this very simple environment that I think captures some of these notions very well, which is this: all you want to do is have an agent learn to jump over a block. And then what you can do is move the position of the block that the agent needs to jump over, only on the x axis, so basically just move it right or left. But you can also, and let's say this is a pixel-based task, put the screen that the agent is looking at on a bigger screen and just move it up and down. What I mean by that is that you can take the floor where the agent is standing and just shift the floor up or down. So now you have two dimensions that you can vary: you can shift the floor up or down, but the agent is still standing on the same floor and still needs to jump over the same block. And lo and behold, it's literally the same problem, the pixels are just shifted, and the network can't do it. The network is really bad at doing that. And if you shift the obstacle as well, it's again really bad. But there is an underlying representation here that would solve all these problems, right? If, instead of latching onto random pixels on the screen or something like that, the agent were able to learn the distance between the agent and the block, well, nothing matters anymore, because this is invariant. That's the key word here: now the representation is going to be invariant to all these changes. And I'm talking about this jumping world because I think it's the most didactic example of this, but eventually, from this discussion we had about this notion of invariance, we started to ask what we could do to learn representations that are invariant. And then comes this paper, where we said, well, maybe what we should do is learn in a couple of these different environments, let's put it this way, where we know the optimal policy, and then we should look back and say, wait, if I'm acting the same in these two environments, even though they look very different from the network's perspective, does that mean that these states are actually the same?
So, we didn't run experiments on Super Mario Brothers, but it's a famous game and I like to give this example, which is: let's say that you learn to jump over the turtle, and that's what you need to do, right? If you go forward and the background is completely different, but you are still just jumping over the turtle, or avoiding an obstacle, it's kind of the same thing, right? It's just like, oh yeah, I guess now I'm in a state where I should execute that sequence of actions. And by doing that, you should learn to say, oh yeah, so I guess this doesn't matter, and this doesn't matter. What we were trying to do with this paper, and thus comes the title of the paper, is this notion of behavioral similarity. If the agent is behaving similarly in different instantiations of the same problem, maybe that means the states should at least be considered equivalent. And we do this. I really like the paper because it has both theory and also a lot of empirical data, and eventually we were able to create a loss function that allows us to learn an embedding that captures this similarity. It starts to put together the states where, yes, if you're behaving the same in these two different setups, even though they look very different, maybe these things are the same. And this is one of the things the network is trying to learn. So, you talk about finding state embeddings with similar long-term behavior here. How do you define long-term behavior? Yeah. So what we can do here is think about how the agent is going to act at the current time step, right? If you want to think about very short-term behavior, it's going to be one step, and basically you can say, well, am I going to go up here, and am I going to go up in this other instantiation of the environment? That would be the short-term behavior. And then what you do is start to make this longer: now you're not only looking at one action, you're looking at multiple actions into the future. And the way we do this is inspired by the notion of bisimulation metrics. You look at how similar the policy is at the current time step you are at, and then you also look at the discounted distance between the distributions of states that you're going to see in the future. So it's discounted, and the long term comes from this discounting, right? Because if we have gamma equal to zero, basically we're not looking into the future, and if we have gamma equal to something bigger than zero, let's say 0.9, we're looking at a couple of time steps; we still care a lot about where we are at the beginning, but there is this exponential decay, and we are looking at this distribution of things that we're going to see in the future. And if they match, or they're close enough, because of course it's not about matching exactly, then we start to try to put these things together. Is there a relationship here between this work and the idea of options? Like, is there a close relationship? Yes and no. In this work we are not trying to learn these sequences of actions, these courses of action, so at first, no. But the reason I say yes is because I like to think about these things as trying to find abstractions. It's all about abstractions, right?
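To make the long-term behavior recursion above a bit more concrete, here is a rough tabular sketch under strong simplifying assumptions: deterministic dynamics, a single tabular policy over a shared state index, and total-variation distance between action distributions. It is only an illustration of the idea of "policy mismatch now plus a discounted term for where the two environments take you next", not the exact loss used in the paper.

```python
import numpy as np

def behavioral_distance(pi, next_state, gamma=0.9, iters=200):
    """Rough sketch: pi is (n_states, n_actions) action probabilities,
    next_state[s] is the deterministic successor of state s."""
    n = pi.shape[0]
    d = np.zeros((n, n))
    for _ in range(iters):  # fixed-point iteration of the recursion
        for x in range(n):
            for y in range(n):
                policy_mismatch = 0.5 * np.abs(pi[x] - pi[y]).sum()
                d[x, y] = policy_mismatch + gamma * d[next_state[x], next_state[y]]
    return d  # small d[x, y] -> embed states x and y close together
```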
And I think the way it relates to options is that options are abstractions in the action space: given that I'm going to act, how can I abstract a sequence of actions into something more meaningful? And what we're looking at in this paper, I would say, is more a notion of abstraction in the state space, which is: given the observations, how can I abstract these states into something more amenable and more useful for generalization? So they're definitely touching different notions of abstraction, I would say. But there is no notion of explicitly trying to use extended sequences of actions in this paper. And I think I saw that there was some notion of being agnostic to reward in the embedding, is that true? And is the policy here still trying to maximize returns? Yes, it is true. We are agnostic in the sense that, as I was just describing the math, I was not talking about rewards at any point, right? We just look at the behaviors: if the agent is behaving similarly at two different places, maybe these states belong together. So it is reward-agnostic in that context. But this is just one of the loss functions that we use, the one that is trying to shape the representation learning process. We still have the standard deep RL formulation, if you will: we are trying to maximize return, and that loss is driving the learning of the policy as well. So we definitely want to maximize return, that's the goal, but we have something extra, let's say, that is just trying to nudge the representation learning process. So, all other things being equal, maybe we should learn a representation that is better, if you will. Cool. Okay, let's move on to exploration. You've done a lot of work in exploration; you said you focused on exploration for your PhD and had a number of papers in this area. Do you want to tell us a bit about your thesis? Yeah, sure. So when I started my PhD back in 2013, which is literally the year that the Arcade Learning Environment came out, I was very excited about that framework. And I was looking at those problems and asking, what do agents actually fail to do? And even when the deep RL agents came, it was the same question: what can't they do? And one of the things they couldn't do well at the time was this set of games where it was very difficult to find a positive reward in the environment. It required a very long sequence of actions, or it required exactly the right sequence of actions, because you could die before getting there. And this was something that was interesting to me. I like to make the joke that in the first term, the first semester of my PhD, I was taking Rich Sutton's reinforcement learning grad course, and the project he asked us to do was to take a Roomba, an iRobot Roomba, and make it learn something. You have to implement a reinforcement learning algorithm on this robot, and you have to be able to demonstrate that the robot is learning, and be creative about what you want the robot to learn. And I was like, of course I want to impress Rich Sutton, so I'm going to do something very fancy. And what I wanted to do was to have the robot learn how to dock into the charging station.
And I tried, and I failed miserably at the time. I remember that at the end of the course I was the only one to go out there and say, hey, look, I tried all these things, but I failed, and I don't have a learning demonstration to show you. And the reason I failed was exactly because the robot would never reach the dock for the first time just by flailing around randomly. So how could I expect it to learn? And I make the joke that I started my thesis out of spite, like, no, I have to be able to solve this problem, because it was an embarrassing moment at the beginning of my career. And then comes Atari and all those things. So I was genuinely curious about this question. I believe that we shouldn't be hand-crafting rewards that tell the agent how to do something, like, you should follow along this path, because then we are solving the problem for the agent. But if we only reward the agent for doing the right thing, let's say docking to the charging station, well, how can we expect it to do that? This was a very important question that kept bugging me for a long time. And then all these Atari successes started to show up, and lo and behold, I guess everyone has heard about Montezuma's Revenge and how challenging it is; it's just another instance of the same problem. And as you can expect, this problem starts to show up in all sorts of places when you start to think about reinforcement learning problems. So it was a question that piqued my curiosity, and eventually, and we can talk about this in more low-level detail, the thesis statement that I had was that we should be learning representations, and we should be able to learn those representations without relying on the reward function. Meaning that if you just say, oh, I'm going to train a deep RL agent with, I don't know, the squared TD loss or some other loss that you like, and I'm going to call the representation whatever weights I learned by backprop at the bottom of the network, that is not going to cut it, because if you never see a reward, you're not going to have a signal to backprop. But if we learn a representation that does not depend on a non-zero reward, then we should use that representation to guide the exploration. Meaning that if I'm in an environment, in a room, let's say, and I learn a representation of that room, I'm not going to be able to learn a very good representation of the door if I rarely go there. And that's actually the really big problem, right? This is an exploration problem, because now you have, let's say, a bottleneck and you have to go through it, just to give an example. And depending on the representation you learn, you're able to capture exactly that. So what I proposed is that we should use this representation to actually guide the exploration process and tell the agent, oh, no, look, all of this you mastered, but that part over there you didn't, so maybe you should try to go there. And that was the general gist of the work. So in these papers a number of terms come up. I wonder if we can take a moment to talk about these terms in brief, for example, proto-value functions. Yeah.
What does that mean? And is that a useful concept today? Yes, it is. Or, I mean, I think it is. So it goes exactly to the question I was telling you about, right? If we learn the representation, then the representation should guide us to where we want to be. And proto-value functions are one of those representations that you could learn. It predates deep RL, it predates the DQN paper; it was introduced in 2005. The name proto-value functions comes exactly because they come before you learn the value function. And it was this method that says: look, if we think about the environment as a graph, we can actually try to capture the properties of that graph in a set of features. And the way Sridhar Mahadevan's paper does it, what they say is, look, these properties are good enough that you can actually use them as features when you learn to maximize return. So proto-value functions were a representation learning method, let's put it that way. Now, there are some very pretty pictures in the original papers, and I really like them, so oftentimes you'll find them in my papers as well. I find them pretty. Let's say you have a grid world and you learn these proto-value functions; then you can see what the representation looks like. And if you think about, for example, an environment with four rooms, you can see that what these proto-value functions capture are exactly the four different rooms that you have. These are the first features that you learn. So you realize that, look, they are different: when you're inside a room, all those states kind of look the same, but it's very different from being outside the room. And just to be more precise here, what proto-value functions are: you think about the environment as a graph, from that graph you can compute the adjacency matrix, and from the adjacency matrix you can compute a matrix called the graph Laplacian. The proto-value functions are the eigenvectors of that graph Laplacian. And the reason I'm saying this is that the eigenvectors are what actually capture the dynamics of the environment, the diffusion properties of the environment, how things would diffuse in that environment. And what I realized was, wait, if we have a representation that is telling me, look, here is one room, and here is another room, and we can just learn that, it's learnable, what if, instead of using that as a representation, I used it as a goal, and said, well, I actually want to learn an option that takes me to the room that I can identify? So out of the box, you immediately learn four options, which are sequences of actions that take you to the four rooms in the environment, and which allow the agent to operate at a different level of abstraction, right?
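As a small illustration of the construction just described, here is a minimal sketch of proto-value functions: build the adjacency matrix of the state graph, form the graph Laplacian, and take the eigenvectors associated with the smallest eigenvalues as features. This uses the combinatorial Laplacian L = D - A for simplicity (normalized variants are also common), and the tiny chain example is purely illustrative.

```python
import numpy as np

def proto_value_functions(adjacency, k=4):
    A = np.asarray(adjacency, dtype=float)
    D = np.diag(A.sum(axis=1))        # degree matrix
    L = D - A                         # combinatorial graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, :k]             # smoothest k eigenvectors = first PVFs

# Tiny example: a 4-state chain 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(proto_value_functions(A, k=2))
```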
So if I tell someone, oh, there is a million dollars hidden somewhere in your house, you are not going to go back and forth step by step until you find it, right? You're going to say, oh, maybe I should go to this room or to that room. You're going to think at a different level of abstraction. And at the time, the first paper that I wrote on this topic was about using proto-value functions: we could use this representation to actually learn options that allow us to explore much better, connecting the points that are farther apart in this graph that is the environment. And that's what we did. So as you say, proto-value functions and some of these concepts show up in earlier papers, and we see a lot of examples in grid worlds. I wonder, do these notions carry over to higher dimensional observations? If we thought about the graph of states and connectivities and adjacencies in Atari, it would be quite a graph. Do these concepts carry over to the higher dimensional cases, where the graphs are less crisp and simple? Yeah, that's a great question. And a lot of the research that I did in my PhD on this line of work, about learning options out of this, was exactly on that: can I actually scale those things? So maybe there are two threads there. One is that with proto-value functions, what we are actually talking about is this notion of looking at the eigenvectors of the graph Laplacian, which is what proto-value functions are. And by 2018 or 2019, I believe, a couple of papers started to come out on how you could actually use neural networks to estimate this Laplacian representation at a higher dimensional scale. One of the papers that comes to mind, for example, is a paper entitled The Laplacian in RL, by Yifan Wu and others; at the time Yifan was an intern at Google Brain. It was literally scaling these ideas, saying, look, we can use neural networks to estimate this in a much better way, and at the time they had experiments with MuJoCo, for example. So there was research being done, and there are other papers as well; Spectral Inference Networks from David Pfau is another example that comes to mind, where they were trying to say, look, we can capture these spectral properties with neural networks and learn them. So the answer is yes, some other people did scale this up. And on a different thread, which is the one I followed, when I was trying to scale this up, what I noticed is that the eigenvectors of the graph Laplacian are essentially the same as the eigenvectors of something called the successor representation. And the successor representation also has extensions to larger domains, known as successor features, and I was able to leverage that to learn these options. So there were different threads we could explore that basically allowed us to scale this up. I don't think the research is over; I still think there are a lot of challenges to address. I was never happy with the results that we got in Atari, for example. But I think it shows promise, and our ability to scale this, to come up with new methods, shows that it's a fruitful research area. Interesting. Okay. But is there a kind of a chicken and egg problem here?
Like, if I just think back to the linear algebra, to get the eigenvectors of a matrix you kind of need the matrix, or at least some approximation of the matrix, not just a little corner of it. And in RL, starting out, we wouldn't know all the state transitions. So how do we determine the exploration strategies when we don't know the state space in advance? How does that work? Yeah, that's a great question, and that's the question I'm actually most excited about with these methods. You're absolutely right that a lot of these methods, dating back to the first one, proto-value functions, assume some knowledge of the environment: in a sense, you know the graph and then you compute the eigenvectors. But if you don't know the graph, how can you do this? And what I noticed, and this has been a big chunk of my research, is that we can do this learning in an incremental way. What I mean by that is the following. You can have an agent just wandering in the environment for a while and learning a representation. This is not going to be a representation that captures the whole environment, just the places the agent can get to. And if you look at what you get by learning with these methods that I proposed, you can extract options out of this representation that take you to what I call the boundary of your knowledge. So in a sense, you've wandered around for quite some time, and then you say, okay, now I want to learn how to get to the border of my knowledge, to the border of where I've been. You go there, and then you can do it again, but now you're starting from somewhere else. And then you can slowly build your knowledge of the environment, slowly visit the environment, because you are not having to pay the price of always starting from the same set of states with a random walk, which is really slow. The very simple example that I had, in a workshop paper that we wrote back in 2016, is this: imagine that you just want to walk down a line, and you're going to represent your states by the binary encoding of their numbers. So state one, for example, is going to be 0 0 0 1, state two is going to be 0 0 1 0, state three is going to be 0 0 1 1, and so on. If you do that, and you start from state number one, let's say, you're going to go a lot to state two, state three, state four, right? Because a random walk moves, in expectation, at the square root of the number of time steps from its start state. So it's going to stay very close to the origin, which means you're going to flip those first bits of your representation a lot: oh, I see the first bit flipping all the time, and so on. At the moment you say, okay, I want to learn an option out of that, which is what I call an eigenoption, what this method tries to do is learn the things that you know you can do, but that are very difficult for you to do. Maybe that's flipping the fourth bit, because you have flipped it once or twice, but you haven't flipped it constantly. So now you learn an option that says, flip the fourth bit, right?
So now, if you're going to do a random walk again, because we're just doing random walks, every now and then you might sample that option that says flip the fourth bit. So now you're not starting from the start state anymore, you're starting from state 16, and then if you want to flip the fifth bit, which is state 32, you're already halfway there, right? You don't have to start from the start state and do 32 right actions, let's put it that way; you take one action that takes you halfway there and then you keep going. And we could show that, I mean, this is a cartoonish depiction of it, but these ideas are very powerful, and it does allow us to go around and around the environment while the random walk is still struggling to get past the first couple of dozen states. And it's exactly this notion of, I want you to learn to do the things that I know I can do, but that are difficult. The representation is capturing this, and the fact that we don't have the whole graph is exactly what's giving us this information. So in the past we did this with options. When I was trying to learn these options online, I also realized that we could do this with counts, because the successor representation was implicitly encoding state visitation counts, so we actually had a paper doing that as well. But I would say it all stems from the very same nature of the problem: the representation is guiding you because you are learning it and you don't have information about the whole environment. Very cool. Okay, thanks for explaining that for us. And can you tell us, what do you mean by successor representation? Briefly, what does that mean? Yeah, yeah. So the successor representation is an idea that is relatively old by machine learning standards; it's from 1993, when Peter Dayan introduced it. And it's this notion that, when you think about how you represent a state, you don't want to think about the state just in terms of where it is in space, in Euclidean terms, but you also want to represent that state as a function of the states that you can get to from it, the successor states. The example is: imagine that you are behind a wall. The state that is exactly opposite you, on the other side of the wall, is very far from you, because you have to go around the wall, but if you just look at the Euclidean distance, arguably these states are similar. And Peter Dayan had this insight to say, no, let's look at what the successor states are. Given that I'm in the current state, and looking at the policy that I'm following, what is the expected discounted number of times I'm going to visit each state in my trajectory? And the successor representation gives you this notion of the dynamics of the environment, again without thinking about the reward. And it has all these connections to neuroscience; there are a couple of papers now suggesting that the hippocampus actually encodes something really similar to the successor representation.
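For concreteness, here is a minimal tabular sketch of the successor representation as just described. `P_pi` is a hypothetical policy-induced transition matrix; the closed form below is the standard (I - gamma * P_pi)^-1 expression, and a comment notes the incremental TD version.

```python
import numpy as np

def successor_representation(P_pi, gamma=0.95):
    """SR[s, s'] = expected discounted number of visits to s' starting from s,
    following the policy that induced P_pi. A TD version would instead update
    SR[s] toward one_hot(s) + gamma * SR[s_next] after each observed transition."""
    n = P_pi.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * P_pi)
```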
And I like to think about it as a credit assignment object: if I know where the rewards are in the environment, because I know exactly what the next states are, the dynamics of the environment, I can easily assign credit, because I know, given that the reward is here and this is how I'm going to visit states in the future, I now have value functions. So it's a super powerful object. We have been using it for discovering options and for exploration; people at DeepMind, for example, Andre Barreto and others, have been using it for transfer. But it's this notion of just knowing what the future holds, in a sense, and anticipating that. And so if I understood you right, it's not a pure representation of the transition function, but it's conditioned on the agent's policy, on some kind of policy. Exactly. Actually, if you want to be very precise, you can think about the successor representation not as learning how to maximize a reward; you can think about doing temporal difference learning or something like that, but instead of actually observing the reward, you get a reward of one when you visit a given state. So you're going to have one successor representation per state, if you will. The successor representation is always with respect to two states, right? Given a current state, every time that you visit a state, let's say state two, you get a reward of one, and you learn a successor representation for that state. And then you do this for states three and four, and so on. At the end you're going to have a vector for a given state, and it's conditioned on the policy. It's literally just a value function, but instead of a reward you have a cumulant, which is the state visitation. Okay. And another phrase that came up here is covering options. Can you help us understand that phrase as well, Marlos? Yes, yes. That's a really neat idea that came out of George Konidaris's group. Yuu Jinnai was the grad student who proposed it; he was the first author. When I was finishing my PhD, one of the problems I had with this concept of eigenoptions was that I was learning too many options, because I was looking at the eigenvectors of the graph Laplacian, and I had as many eigenvectors as I had states. And it was not clear to me how many options I wanted. Do I want 10? Do I want 15? I could do an empirical analysis, but I was not able to develop the theory for that at the time. And then Yuu Jinnai came along with this very cute idea, which is what they call covering options, which was: wait, if you have the first eigenvector, and what the eigenvector is giving you is this notion of connectivity, you just need to connect the two states that the eigenvector shows are the most distant, you just connect the two furthest-apart states that you have. What you get by doing that is that you could very well be reducing the diameter of the graph. By reducing the diameter of the graph, you're making the environment more connected, and if you do this enough times, you're going to improve exploration. So the idea of covering options was discovering options that would connect the two states that were furthest apart.
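A rough sketch of the covering-options selection step just described, assuming the state graph is known and connected: look at an eigenvector of the graph Laplacian (here the one for the second-smallest eigenvalue) and pick the two states at its extremes as the endpoints a new option should connect. Learning the option policy that actually reaches the chosen state is omitted.

```python
import numpy as np

def states_to_connect(adjacency):
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    v = eigvecs[:, 1]                       # eigenvector for 2nd-smallest eigenvalue
    # The two states where this eigenvector is most extreme are the ones the
    # graph says are "furthest apart"; connecting them shrinks the diameter.
    return int(np.argmin(v)), int(np.argmax(v))
```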
And the pretty thing about it is that it allows you to answer the question, how many eigenvectors do I need? Because the top ones are the ones giving you this notion of distance. So Yuu Jinnai did this, and when I saw it, it was very exciting; this is really cool because it actually answers one of the questions I had been thinking about. So I reached out to him, and we worked together to extend this notion of covering options to bigger domains in a more scalable way, using also the expertise that I had about tricks to scale these things up. And eventually we wrote the deep covering options paper, which builds on this idea of, again, trying to find the sequences of actions that allow you to better connect the environment. Awesome. Okay. So how can we compare these types of approaches to maybe how humans might think of exploration? I'm imagining, if you ask a kid who's playing Atari, can you try something new, they might respond with something a little bit more semantic, like, I'm going to make the ship go somewhere it hasn't been before, or I'm going to shoot a type of enemy that I haven't shot before. My sense is that they would describe things in terms of concepts that they've parsed out, like ship and shot and enemy, which I think basic RL wouldn't have access to as distilled concepts. I wonder, is any of your work on representations related to this idea of distilling these concepts in a semantic way, or do you think we might, at some point, need a semantic layer to explore in the same way a child might? Yeah, I think that's a good question. Maybe to address your first question, about how these exploration methods relate to those notions: I think they relate very little. There is definitely a notion of, I want to go to a place that I've never been before. But people often think only about exploration and not about the interplay that representation has with it; that is what I tried to do in my work, which was a little bit different in a sense. And of course, now this is changing a lot. But when we go on to think about these notions of, in a sense, semantics, there are two things there that are important. One is that we have to acknowledge that the kid we're talking about here lives in a world; this kid has a lot of experience outside the game they are playing. This is actually something that annoys me a little bit, when I read in a paper that deep RL agents take 30 days of gameplay to learn how to play an Atari game, while humans learn how to play a new game in two minutes. No, they don't. If I put my newborn daughter to play this game, she's not going to play it after two minutes. She actually needs years of experience to be able to learn how to do this. So I think there is a lot going on there; a lot of it is about abstracting concepts that we learn elsewhere. The abstraction of a ship, for example, is useful not because the only place it shows up is a game, but because it shows up everywhere else. And I actually would like to see this study: just go to a kindergarten today, show a screenshot of Atari games, and ask, what are those things? I bet the kids cannot even label them, this is clearly a ship or an enemy; it's so extraterrestrial for them. So we come up with these abstractions, and they are obviously useful.
But it's expecting a lot from the agents that we have right now to do this, because they don't have all the rich experience that we have elsewhere in the world. You control a submarine in Seaquest, for example, an Atari game; you control a submarine and you're supposed to shoot fish. Like, what? But yeah, we have this notion that we shoot incoming objects, so that's what you're going to do. So there is this notion of trying to anthropomorphize AI a lot. I think it's important in a sense, and there is work being done in terms of explainability and reliability, but it's a complicated discussion. And I think a lot of these things matter a lot because they start to touch on the notion of models, for example; we probably want to be model-based when we want to do exploration in these very complex environments. But I don't think it's something like, oh, I want this to be baked in, because there are a lot of social constructs, and words are just words, right? Labeling something, I don't know, an outlet, it's just a label. If I'm speaking Portuguese, I'm going to call it a tomada, and it's the same thing. But now, if we can ground this in the agent's experience, I think that's a much more meaningful question. Let's say you have a robot, and the robot says, I don't know what it's called, but I know that if I approach that thing and I plug in there, my power level goes up. Then suddenly you have a more grounded notion of, oh, this is what I can call this. And by doing this, now I see a completely different outlet, but I can say, oh, wait, this is also the same thing, because the outcome is the same. And then we start to be able to label things. But I don't expect that AI is just going to pick up labels that we as humans defined, because it's, in a sense, not even useful, unless we want to spend hundreds of man-hours labeling things and expecting the AI to use them, and it's still going to struggle to generalize, because these are just labels; they're not grounded in its experience, in a sense. Awesome. Okay. So let's move on to the Loon project and your work on that. I've got to say, I really enjoyed your presentation on the Loon work at the University of Alberta AI seminar series, and we'll include a link to that on our episode page; I encourage listeners to definitely check that out. And I was really excited to read this paper because it showed reinforcement learning succeeding at an actually useful, full real-world task outside of simulation, and doing a great job. There's a ton going on in this paper, but could you start out maybe giving us a general overview of what the goal was and an overview of the solution that you and your team came up with? Yeah, sure. I was very excited about that paper. When I joined Brain, that's what I spent most of my time working on for pretty much the first year, when I saw the opportunity of working on that project, because I thought it was really exciting. One of the things that was exciting to me was exactly this opportunity to actually deploy reinforcement learning in the real world, test our algorithms, and see how far we could go. And what we were trying to do was this: we partnered with Loon, which was this other bet at Alphabet. And what Loon was trying to do, from my perspective as a scientist, was to provide internet to places that are hard to reach.
So the problem is that, once you think about how we get internet, we often have antennas in our cities, and an antenna has a range of, I don't know, five kilometers or so. So you build a big antenna and then you can serve a circle with a radius of five kilometers. And that's really good if you're in a big city, because you're serving a lot of people. The problem, though, is that a lot of the time there are places that are very sparsely populated, and sometimes it's even hard to get to these places to build antennas. Let's say tribes in the middle of the Amazon forest. So how do you provide internet to those people? And then the Loon folks had this idea: well, what if we had a very, very big antenna, say a 50-kilometer-tall antenna? Of course, you're not going to build an antenna that is 50 kilometers tall, but then they had this idea: what if we put a balloon up there, and the balloon operates as an antenna? Because it's going to be so much higher than you could otherwise get, it's going to serve a much bigger region, and then it makes sense that we can actually provide internet to these people. That was their idea as a company, as far as I can tell, or one of their ideas. And if you start from this premise, the balloon needs to stay stationary above a region to serve the internet. The problem is that, of course, there are winds, and the balloons are going to be blown away from where they want to be all the time. So the balloon can't just stay where we leave it; the balloon needs to navigate to make sure that it's always going back to that position. It's riding the winds, if you will. These balloons are not propelled; they only have the ability to go up or down. They are fixed-volume balloons, and in a sense the intuition is the same as hot air balloons, but they work very much like submarines, in that you have a fixed volume. So, if you want to go up in the stratosphere, you need to reduce your density. What do you do to reduce your density? Well, with the same volume, you pump air out of the balloon; now the density is lower, and you go up. And if you want to go down, you pump air in, and then you sink the balloon. So just by being able to pump air in and out, you are able to go up and down. And by going up and down, luckily, the stratosphere has winds going in all sorts of directions, so you can try to go up or down to the altitude where there is a wind blowing in the right direction. And what we were trying to do was to have an agent that would learn how to navigate those winds, going up or down and riding the right winds, to always be serving the same region. And this is what we did at the end, and we deployed it. So I don't know much about the stratosphere at all, but is it always possible to find a way to get back? Or is it sometimes just completely impossible to go the other way, and, regardless of how great the controller is, the balloon is forced to drift outside of the zone? Does that happen? Yes, absolutely.
So sometimes the winds are all blowing in one single direction and there is nothing you can do; the balloon is going to be blown away. We didn't get into this, but Loon was not serving a region with only one balloon at a time, right? They had multiple balloons, exactly because of that risk, among others. But yes, it happens; we could see this happen to our controllers as well. And then there is also a meaningful question here: even though you're going to be blown away, how can you minimize how far you're blown away? Or how can you make sure that you're blown away into a region from which you can eventually come back? That's one of the things that is really difficult about this problem, because the time horizon we're talking about is days. Sometimes you're going to be blown away for a whole day, or two, or three, and then it's like, now I want to go back, right? And somehow you still need to plan. There are some interesting things about the stratosphere here that I'm not going to pretend I can speak about confidently. For example, this is much easier to do in the equatorial region than it is near the poles, exactly because of the wind patterns, and a lot of what we did at Loon was between the tropics, which are close to the equator. But even in our paper, one of the things we did was to estimate the best you could possibly do, the maximum achievable with any controller. At the time, if I remember the number right, it was about 70 percent, because the other 30 percent of the time it didn't matter what you wanted to do; there were no winds that would allow it. This was only in simulation, not in the real world, but it shows that sometimes it does happen.

And you said that sometimes you have to consider sequences of days. How do you break down the time steps? What is the step size, the time scale for the actions here?

Yeah. What we did was to break it down so that the balloon would take an action every three minutes. And the discount factor we were using, the gamma, was 0.993, which gives you, if you want to think about it as an effective horizon, something around 200-ish steps into the future. So at any time step we were, in a sense, looking ahead a large part of a day into the future. And not only that: depending on how we incorporated our features and how we learned, what we actually observed is that our balloons were being effective on very long time scales.

Okay, so there are a number of names on this paper. Can you tell us about the structure of the team, how the team broke up the work and the roles, and what was your role?

Yeah. This is definitely the biggest project that I've worked on in my life, and this was reflected in the author list of the paper. One way to break it down, for sure, is between the Brain collaborators and the Loon collaborators. We worked very closely throughout the whole process: they were running experiments with the agents in parallel, and we were discussing how to do the experiments in the real world and the deployment. So it was work that we really did together.
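Circling back to the three-minute steps and gamma of 0.993 mentioned a moment ago: the standard rule of thumb for the effective horizon of a discounted objective is 1/(1-gamma). A quick sketch of that arithmetic is below; the figures quoted in the conversation are ballpark recollections, so they will not match this exactly, and the result is the textbook estimate rather than an official number from the Loon paper.

```python
# Effective-horizon rule of thumb for a discounted objective: H ~ 1 / (1 - gamma).
# gamma and the step length are the values quoted in the interview; the horizon
# below is the generic estimate, not a figure taken from the Loon paper itself.
gamma = 0.993
step_minutes = 3

effective_horizon_steps = 1.0 / (1.0 - gamma)                     # ~143 steps
lookahead_hours = effective_horizon_steps * step_minutes / 60.0   # ~7 hours

print(f"{effective_horizon_steps:.0f} steps  ~ {lookahead_hours:.1f} hours of lookahead")
```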
The Loon collaborators were amazing, but you could see there was definitely this split: a group of people with very deep expertise in the balloons, the stratosphere, and how these things work, while we had the expertise in reinforcement learning. At the same time, even though there are a bunch of names on the paper, this was a relatively small team for the size of the effort. What I mean is that, naturally, a lot of us touched pretty much all the components of the solution. I worked on developing the algorithm and on what we actually deployed: how we designed the features, which relates to a paper I wrote a while ago, how we did exploration, what algorithms we used, how we trained those things. That was one thing I did, but listing everything I did is kind of silly, because we all worked fairly heavily on all sorts of fronts. One thing I spent a lot of time on was the empirical evaluation, and dealing with some of the challenges that come up when you're working with a real product, namely that things keep moving, right? They had a product, a balloon flying in the stratosphere, and they wanted to keep making it better; it's not as if the whole company was frozen for us so we could have a stable environment. Sometimes things would change: the balloons would change, the simulator would change some parameters, and so on. I spent a lot of time trying to keep us sane about this: how can we make sure we're making meaningful progress on this ever-changing task? What changed, how did it change, what was the impact on performance? Understanding the simulator, how it worked, how we could get meaningful data out of it and what it was telling us, analyzing the data, writing the paper. So, in a sense, it's silly to enumerate everything that has to happen for a paper, but what I'm trying to say is that this was a truly collaborative effort. It was very exciting because I got to work with a lot of people on a big project, and a lot of us touched a lot of pieces of it.

So at the very beginning, when you heard about this project, did you think, oh yeah, this is going to work out great, or did you have doubts? And were you surprised in the end? The final performance seemed really good. How did your feelings change throughout the project?

Well, when I first heard about it, I was excited, because I had been excited about Loon even before; I thought it was a cool idea. Back when I was doing my job interviews, people would ask, what do you want to do in your research? And I would say that one of the things I want to do is develop algorithms that will allow us to actually deploy RL agents in the real world. So it just seemed like an amazing opportunity: I had just joined the company and there was this project actually starting. And I thought, well, I could do that; it doesn't have to be only a promise. So I was excited about that.
At first I was very excited and very hopeful; I said, this is going to work great. Of course, I was naive, I didn't know how difficult the problem was. But we managed to get some good successes early on, and I think that gave us hope. By July of 2019 we had already deployed the first balloon, and the project had started at Brain in March, so within a couple of months we had deployed a first balloon. It was not the final solution that we ended up with, but we had some early wins that kept us hopeful about the progress of the project.

I was definitely very surprised, and it was scary the first time we deployed the balloon: oh my god, we're doing this. Of course the Loon engineers knew much better than we did, as researchers, all the safety protocols they had put in place to make sure nothing crazy was going to happen. A lot of the work we did was actually learning from them. Like, can the balloons hit each other? And they're like, no, they're riding the winds, how could they hit each other? They're moving at literally the same speed, unless you somehow sync one balloon going up while the other one goes down, and even then. So yeah, they can't collide, and up in the stratosphere there is nothing else there, so they're not going to collide with other things either; and there are safety layers, they're not going to burst, and all those things. So I was excited, and honestly a little bit scared the first time the balloon was deployed, because I didn't yet know how all the safety layers worked. With time it became, yeah, I'm comfortable, I know how to assign this controller to these balloons in the real world and how we can run these experiments.

Maybe I'm just circling around your question, but I should say that although this project was very challenging in terms of dealing with the infrastructure, because it's a real product with real-world infrastructure and the iteration cycle is very slow, not like a simulator where you get a result in a couple of hours or even days, the early successes we had and everyone's excitement kept us going. And it's one of those things, right? You work on something for so long and you know the product so well that once the final result comes in, it looks like, yes, we knew this was going to be the case; the surprise was gone a couple of months earlier, because by then you already know what to expect. But I was still very happy, and anxious, while we processed the data from the balloons we had flying over the equator, just to make sure that our models were right. It was very exciting to see that yes, we have statistical confidence, we are better, and it works pretty well.

I bet that must have been a great moment for the whole team.

And another thing that I wasn't thinking about at the beginning, but that was exciting, was that it was actually being used by people, right? They were flying balloons in Kenya with our agent, and people were getting internet because we developed something that allowed them to have an extra hour or a couple of extra hours of internet every day. That was very rewarding, in a sense. Awesome.
So can you tell us a bit more about the specifics, like how the observation and action spaces work?

Yeah. The action space is abstracted so that the balloon just goes up, goes down, or stays. Every three minutes the balloon gets an action that says go up, go down, or stay, and what that means is that the balloon is going to go up for three minutes, or go down for three minutes, or stay where it is, until the next three minutes are up. Of course, behind the curtains there is a lot of low-level control going on, like how much air to pump and which valves to open, but from our perspective, or at least from the perspective of the agent, the actions were up, down, and stay.

The observation space was something we iterated on a couple of times, but at a high level, what the agent had access to was information about the winds above it and below it. You can think of this as a wind column over the stratosphere that the balloon can navigate. We discretized that into what we call pressure levels, and basically the observation was the information about the winds at each one of those: the speed of the wind, the angle the wind was blowing, and a third variable that was quite important and different from what people usually do, which is the uncertainty about those winds. Because the balloon is flying where it is, it knows exactly what the winds are at its own location, but it doesn't know what the winds look like five kilometers above it. So what we did was to use forecasts that come from other sources about what the winds were going to look like, and we had a Gaussian process that fused those forecasts with the observations we had, giving us a finer-grained picture of the winds. It says: look, the forecast says the wind at this pressure level is going to blow to the north at 50 kilometers an hour, but you are just, I don't know, 500 meters above that level and you're seeing something completely different; we're going to fuse these, this is what we think the wind looks like, and this is the uncertainty we have about it. So basically we were characterizing the winds in the stratosphere by their speed, the angle they were blowing, and the uncertainty we had about them. On top of that, we also had what you could call the global state variables of the balloon: the amount of power the balloon had, the time of day, where the station is, and so on. So that's what the balloon was observing: basically the winds, plus its own status.

You mentioned that you used some insights from your shallow RL paper in designing this controller. Do you want to briefly describe the relationship there?

Yeah. Back in 2016 we were trying to develop linear features that captured the properties of the networks used in deep RL. Basically we were asking: we have these deep RL agents and they are doing amazingly well on Atari; what are the features, what are the inductive biases, that these networks have that actually allow the agents to do so well? And we came up with a couple of them. One that was really important was this notion of invariance, like the translation invariance that you get from convolutional networks, right? You can apply the same filter in all sorts of places in the image.
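Circling back to the observation and action spaces just described, here is a schematic of what that interface could look like. The class names, fields, and units are invented for illustration; this is not the actual Loon or Balloon Learning Environment API.

```python
# Schematic of the agent interface as described in the conversation.
# Names, units, and the number of pressure levels are illustrative only.
from dataclasses import dataclass
from enum import Enum
from typing import List

class Action(Enum):
    UP = 0      # ascend over the next three-minute interval
    DOWN = 1    # descend over the next three-minute interval
    STAY = 2    # hold the current altitude

@dataclass
class WindLevel:
    """GP-fused wind estimate at one discretized pressure level."""
    speed_mps: float        # wind speed
    bearing_rad: float      # direction the wind is blowing
    uncertainty: float      # how much the fused estimate is trusted

@dataclass
class Observation:
    wind_column: List[WindLevel]   # levels above and below the balloon
    power_remaining: float         # battery / solar state
    time_of_day: float             # e.g. fraction of the day elapsed
    distance_to_station_km: float  # how far the balloon is from its target

def dummy_policy(obs: Observation) -> Action:
    # Placeholder decision rule, just to show how the pieces fit together.
    return Action.STAY if obs.distance_to_station_km < 1.0 else Action.UP
```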
And then you also have relationships like, oh, there are two filters where one is above the other, so if that pattern appears in different parts of the screen, it's the same relationship. And one of the tricks I learned fairly early on is that if you have a representation that is centered, let's say agent-centric, where everything is relative to the agent, it makes a lot of difference, because the agent can generalize much better: it doesn't have to relearn the same thing in all sorts of places it observes; every time something is above the agent, it's the same input. So this was one of the tricks we used when we were designing the inputs for the agent flying the balloons: our features are relative to the agent. When the balloon goes up, the whole feature vector, in a sense, shifts with the balloon, so that we always capture the notion of what the winds above me are, not what the winds at 15 kilometers of altitude are, but what the winds above the balloon are. That was a huge boost in performance and it was very important to how the network learned to do this. It's quite neat.

I noticed that relative-observation issue when I was doing the Pommerman 2018 NeurIPS competition as well, having that relative observation. So when you talk about the observation of the winds above and below the balloon, is that coming from off-board data? It's not from sensors on the balloon, is it? How does the balloon know what's happening above and below?

So the balloon doesn't know what's happening above and below. The balloon knows exactly what's happening where it is, and then above and below it's a combination of the forecasts we have and observations from other balloons. So there's definitely communication between balloons about their surroundings.

You mentioned how important uncertainty is here. Was that surprising to you?

Well, it's not surprising in the sense that it seems fairly obvious that the agent should be able to reason about how confident it is about its surroundings and about what it believes to be the state of the world, right? In that sense it was not surprising. What was interesting is that this is not common practice in the field, so it was interesting to see how important it ended up being for us. In a sense, I want to say it's one of the interesting contributions of the paper on the RL side, and it still needs to be explored further: how far can we go with this notion of incorporating uncertainty into the agent, and how well can the agent reason about it, or learn a representation that takes the uncertainty into consideration, if you will?

Can you talk about the decision to use model-free RL here? Could model-based planning or model predictive control approaches have worked, or were they discarded right away?

I find it hard to say whether they could work; I would never say no, because it's always a matter of making them work, right? Give it a try. But I think one of the really challenging things is that we cannot model the stratosphere, right? We cannot model the weather.
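Before continuing, here is a small sketch of the balloon-relative ("egocentric") feature trick described above. The array layout, window size, and feature ordering are invented for illustration and are not the paper's exact encoding.

```python
# Sketch of balloon-relative wind features: instead of indexing the wind column
# by absolute altitude, re-centre it on the balloon's current pressure level so
# that "the wind just above me" always lands at the same position in the vector.
import numpy as np

def relative_wind_features(wind_column: np.ndarray, balloon_level: int, half_window: int = 5):
    """wind_column: (num_levels, num_features) array of per-level wind features."""
    num_levels, num_features = wind_column.shape
    window = np.zeros((2 * half_window + 1, num_features))
    for offset in range(-half_window, half_window + 1):
        level = balloon_level + offset
        if 0 <= level < num_levels:                 # pad with zeros outside the column
            window[offset + half_window] = wind_column[level]
    return window.ravel()   # row `half_window` is always the balloon's own level

# e.g. 40 pressure levels x (speed, bearing, uncertainty); values are random here
column = np.random.rand(40, 3)
features = relative_wind_features(column, balloon_level=12)
```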
So although there could be some components of model-based RL here, if you really want to do planning — and even among the baselines we had search-based methods and planning methods that rely on a model of the weather — they work relatively well in simulation, but they would never work in practice, because the mismatch between what the model says and what the winds actually are in the stratosphere is so big that it's hopeless. In the paper we had this search-based controller that we used as a baseline, and it works almost perfectly in simulation, but when you go to the real world, it's just never going to work. So maybe there is a way, but there is definitely the problem of model mismatch, and the inability we have, as humans, to model the stratosphere and the winds at the level of granularity that we needed for this project.

So I heard that the Loon project itself was canceled, and I'm sure that had nothing to do with the controller, because your controller seemed like it worked really well. I felt really sad when I heard the news, but I'm glad we got to see how this would work. Do you think it might continue in the future, or do you think it's closed for good?

I don't know. I don't necessarily have much more insight than anyone who read the blog posts, because by the time of the announcement I had already left Brain and gone to DeepMind; it came out fairly close to that time. I want to say it was definitely not about the RL controller; the RL controller was working great. It seems it was much more a business decision than anything else. I was very happy to work with them, I think it was a great project and a great experience, but there is more to these things than just the scientific endeavor, right? There is a business plan, and it's way above my pay grade, if you will, to understand how they ended up making this decision. We were working with a fairly small team, just trying to get the best controller, and that was my role as a scientist.

I meant to ask you about exploration. I think the exploration strategy was quite simple, but did it take some iterations to settle on, or was it clear to you early on that this would be the right way to do exploration?

Yeah, the exploration approach was fairly simple, but at the same time it was rewarding from a personal perspective. We did not iterate much over it; the solution we ended up with was to have the balloon say, look, I'm going to go to that altitude and stay there for a while. If you think about how we often do exploration, it's with random actions, and here that would mean I just go up or down for three minutes, which was never going to work, right? So the fact that we ended up doing this temporally extended exploration, which ties all the way back to my PhD work, where I was advocating for using options to do temporally extended exploration because it's much better than dithering, was really rewarding, and I was happy with that. And it worked really well; honestly, we never revisited that solution.
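To illustrate the contrast just described, here is a minimal sketch of per-step random exploration ("dithering") versus option-style exploration that commits to a randomly chosen altitude for a while. The commitment length, action names, and number of levels are placeholders, not the values used in the Loon controller.

```python
# Contrast between per-step random exploration and temporally extended
# exploration that commits to a target pressure level for several steps.
import random

ACTIONS = ["up", "down", "stay"]

def dithering_action():
    # Plain dithering: a fresh random action every three-minute step.
    return random.choice(ACTIONS)

class CommitToAltitude:
    """Option-style exploration: pick a target pressure level, hold it for K steps."""
    def __init__(self, num_levels: int, commit_steps: int = 20):
        self.num_levels = num_levels
        self.commit_steps = commit_steps
        self.target = None
        self.steps_left = 0

    def action(self, current_level: int) -> str:
        if self.steps_left == 0:                        # start a new commitment
            self.target = random.randrange(self.num_levels)
            self.steps_left = self.commit_steps
        self.steps_left -= 1
        if current_level < self.target:
            return "up"
        if current_level > self.target:
            return "down"
        return "stay"

explorer = CommitToAltitude(num_levels=40)
print([explorer.action(current_level=12) for _ in range(5)])
```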
It's one of the things that it would be interesting to revisit.

If this project had continued, if you had a chance to work on it more, would you have had future work in mind? Can you say anything about that, or did it feel completely wrapped up?

Oh no, absolutely. I think there is a lot of future work that could be done, both to improve the controller and in scientific terms. Further understanding the role of uncertainty in the inputs would be very interesting. The notion of exploration, and understanding whether there were better exploration strategies, would also be interesting. One thing we were also discussing was that, right now, we were just training in simulation and deploying in the real world, but naturally we were collecting more data in the real world. Could we use that data to make our controller better, and fine-tune it to the actual balloon we were flying? Because each balloon is a little bit different: one balloon is a little bit older, one battery is a little bit stronger or weaker. All of those were fairly interesting questions; there are just interesting scientific questions there.

Moving to more general issues outside of Loon: besides your own work, are there things happening in reinforcement learning these days that you're personally excited about?

Yeah, I think we're in a very exciting time for the research. One of the things I'm excited about is model-based RL and how we are seeing some of the first big successes of model-based RL in these large domains; MuZero is one example, but there are others. How do we actually learn models that allow us to plan in these environments? I think this is a very promising research area and we're just scratching the surface: what should these models look like, what should we be modeling? Another thing I'm very curious and excited about, but also a little bit scared of as I try to catch up with the literature, is this notion of how we incorporate causality into reinforcement learning. Because it seems that, if we go back to this notion of generalization and of wanting invariance across observations, we want to be able to extract the causal factors in the environment that are causing these changes. This is something I'm very excited about; there are some people doing research on it, and I'm curious to see what comes out of that. And overall, I guess, representation learning. We are seeing some efforts now on how to think about learning representations for reinforcement learning, and not just using representations learned from, I don't know, whatever backprop happens to give us, but actually thinking about the reinforcement learning problem and what would be useful representations for it. These are some of the things I'm excited about, and I want to see how much progress we can make.

And looking forward, what do you see yourself working on in the future? Do you expect to continue working on the themes you've touched on here and take them further?

I am still very curious and excited about this notion of learning options that become more and more complex, building up more and more complex behavior, in this setting of lifelong learning, if you will, or continual learning, as some people call it.
I think this is really interesting, this notion of: how can we learn? Maybe first I just learn some very basic skills, but those allow me to bootstrap and see things that I couldn't before, and then I can learn even more complex skills that let me bootstrap even further, until we get to very complex behavior. This is something I'm still curious about. I am very curious about generalization in RL; it's something I might be doing more of in the future. I am also doing some more careful work on representation learning and understanding what properties we want the representations of our RL agents to have, and to learn. All of these are things related to what I did in the past that I'm excited about. But there is also something we didn't discuss today that is kind of new for me and that I've been doing since I joined Brain, which is a collaboration with Nicolas Le Roux and some other researchers: going back to the basics of policy gradient methods, really understanding what they are doing, how they work, and challenging some common beliefs. This is also something I want to continue doing; I think we have learned a lot in the past couple of years and we can now start to get some good results out of that as well. But overall I would say that I'm very interested in this notion of abstraction: how do we learn abstractions in both state space and action space? Abstraction in state space relates to generalization, and in action space it relates to options, and how we can use those for better credit assignment and exploration and things like that.

So, Dr. Marlos Machado, this has been a super enjoyable conversation for me, and definitely not the shortest interview we've ever done, but I've learned a ton from hearing from you and I'm sure the audience has too. Thank you so much for sharing your time and your insight with TalkRL.

Thank you very much for having me.

Notes and links for this episode are at talkrl.com. If you like this show, I need your support. You can help in a few ways. 1. Subscribe on your favorite podcast platform; subscriptions make a big difference. 2. Follow us on Twitter at TalkRL podcast. 3. Give us a five-star rating on Apple Podcasts. If you don't think we deserve five stars, let us know on Twitter what we could do better.
[ { "end": 12.8, "start": 0, "text": " This is Taka Rail Podcast, all reinforcement learning, all the time." }, { "end": 15.8, "start": 12.8, "text": " Interviews with brilliant folks across the world of RL." }, { "end": 21.1, "start": 15.8, "text": " I'm your host, Robin Chohan." }, { "end": 26.32, "start": 21.1, "text": " Dr. Marlos Machado is a research scientist at DeepMind and an adjunct professor at the" }, { "end": 27.96, "start": 26.32, "text": " University of Alberta." }, { "end": 32.2, "start": 27.96, "text": " He holds a PhD from the University of Alberta and Master's in Science and Bachelor's" }, { "end": 34.44, "start": 32.2, "text": " Science from UFMG in Brazil." }, { "end": 36.08, "start": 34.44, "text": " Thanks so much for joining us, Marlos." }, { "end": 37.08, "start": 36.08, "text": " Thanks for having me." }, { "end": 38.96, "start": 37.08, "text": " I'm excited to start with you today." }, { "end": 41.32, "start": 38.96, "text": " So how do you describe your area of interest?" }, { "end": 47.28, "start": 41.32, "text": " I am generally interested in artificial intelligence, which is quite broad, but I am interested" }, { "end": 52.88, "start": 47.28, "text": " in this different aspect of intelligence, not necessarily married to an approach." }, { "end": 56.96, "start": 52.88, "text": " I've been especially fascinated by the problem of decision making." }, { "end": 60.68, "start": 56.96, "text": " And specifically, the sequential decision making when actions have delayed consequences." }, { "end": 66.28, "start": 60.68, "text": " So in the past couple of years, I've been specializing more and more in reinforcement" }, { "end": 71.28, "start": 66.28, "text": " learning, but I am interested in how things related to that, related to representation," }, { "end": 74.44, "start": 71.28, "text": " learning abstractions and things like that, which is pretty broad." }, { "end": 75.8, "start": 74.44, "text": " Let's put this way." }, { "end": 80.38, "start": 75.8, "text": " So the journey of your career is included in University of Alberta, Google Brain and" }, { "end": 81.38, "start": 80.38, "text": " now DeepMind." }, { "end": 86.76, "start": 81.38, "text": " Can you tell us how your perspective and maybe your approach to research has evolved between" }, { "end": 88.68, "start": 86.76, "text": " these different chapters of your career?" }, { "end": 95.52000000000001, "start": 88.68, "text": " I did my PhD at University of Alberta and obviously your approach to research changes a lot" }, { "end": 98.4, "start": 95.52000000000001, "text": " during your PhD when you start your just learning." 
}, { "end": 102.44, "start": 98.4, "text": " And I think that I don't even know if it's fair to say that something changed from" }, { "end": 108.08000000000001, "start": 102.44, "text": " when I left UFM, but I think that it's definitely, I think that UFM shaped a lot of how I see" }, { "end": 115.60000000000001, "start": 108.08000000000001, "text": " research, how the big concern for the fundamentals for being sound, for being precise, at what" }, { "end": 121.52, "start": 115.6, "text": " you're saying, and trying to understand the phenomena, not trying to get too much into" }, { "end": 125.36, "start": 121.52, "text": " maybe getting people excited about science, but to actually be a scientist and ask the" }, { "end": 130.51999999999998, "start": 125.36, "text": " question, what is the phenomena we are observing, what is the hypothesis and how can we do good" }, { "end": 131.88, "start": 130.51999999999998, "text": " science about that?" }, { "end": 137.07999999999998, "start": 131.88, "text": " When I left UFM and I went to Google Brain, one thing that I was very excited about was" }, { "end": 138.95999999999998, "start": 137.07999999999998, "text": " disability to scale the research." }, { "end": 145.56, "start": 138.96, "text": " As much as I was doing what people call Deep RL at UFM, it was always a struggle with a" }, { "end": 150.04000000000002, "start": 145.56, "text": " couple of those NGOs to do research." }, { "end": 153.76000000000002, "start": 150.04000000000002, "text": " When I went to Google Brain, then I definitely had access to more resources and I thought I" }, { "end": 157.8, "start": 153.76000000000002, "text": " could explore that a little bit more, ask different questions or be more careful about some" }, { "end": 162.08, "start": 157.8, "text": " of the questions that I was going to ask about that." }, { "end": 166.68, "start": 162.08, "text": " And there is also this perspective that obviously you start to be exposed too much more once" }, { "end": 170.76000000000002, "start": 166.68, "text": " you are not a grad student, because as a grad student, you are fundamentally concerned" }, { "end": 171.76000000000002, "start": 170.76000000000002, "text": " about your thesis." }, { "end": 176.28, "start": 171.76000000000002, "text": " So as much as you explore other areas and you're talking to people, you still eventually" }, { "end": 178.96, "start": 176.28, "text": " have to write a PhD thesis into the research." }, { "end": 184.36, "start": 178.96, "text": " And at Braña, I was also happy that I could explore more and diversify my interests or" }, { "end": 185.36, "start": 184.36, "text": " my research." }, { "end": 190.48000000000002, "start": 185.36, "text": " And Deep Mind, I joined Deep Mind a month or two ago." }, { "end": 195.32, "start": 190.48000000000002, "text": " So it's hard to say that something changed about my research, but one thing that I'm excited" }, { "end": 199.84, "start": 195.32, "text": " about Deep Mind is the amount of people around me that have reinforcement learning as their" }, { "end": 201.16, "start": 199.84, "text": " main research interest." }, { "end": 205.44, "start": 201.16, "text": " And I'm already benefiting a lot from having these different perspectives and having very" }, { "end": 209.23999999999998, "start": 205.44, "text": " deep and meaningful discussions about reinforcement learning problems that I care about." }, { "end": 212.28, "start": 209.23999999999998, "text": " And I'm very excited to let's come out of that." 
}, { "end": 213.28, "start": 212.28, "text": " Awesome." }, { "end": 218.79999999999998, "start": 213.28, "text": " So could you say that or would you say that the conception of what reinforcement learning" }, { "end": 223.64, "start": 218.79999999999998, "text": " is and what it's about and the important aspects of it?" }, { "end": 228.44, "start": 223.64, "text": " These fundamental things, do you think the conception of these things differs much between" }, { "end": 232.64, "start": 228.44, "text": " these institutions or are they all looking at it in a similar way?" }, { "end": 238.2, "start": 232.64, "text": " I find it hard to characterize that because I don't think that there is a mandate from" }, { "end": 241.64, "start": 238.2, "text": " an institution of what reinforcement learning is." }, { "end": 247.64, "start": 241.64, "text": " And I've been very lucky that all these institutions that I've been part of, they are very broad" }, { "end": 253.35999999999999, "start": 247.64, "text": " and that have several researchers and several groups doing research on reinforcement learning." }, { "end": 258.36, "start": 253.36, "text": " So it's hard for me to characterize this is how Google Brain or Deep Mind sees reinforcement" }, { "end": 259.92, "start": 258.36, "text": " learning." }, { "end": 263.92, "start": 259.92, "text": " I think that there are definitely differences that I could perceive." }, { "end": 269.28000000000003, "start": 263.92, "text": " And again, this is just a very personal note, maybe I'm even quoting too much the response," }, { "end": 275.04, "start": 269.28000000000003, "text": " but it's obviously this is a personal perspective and this mainly brain and deep mind, they are" }, { "end": 279.2, "start": 275.04, "text": " such a wide institution that I'm grossly mischaracterizing anything that I say because" }, { "end": 282.92, "start": 279.2, "text": " I'm sure that there is a group that I'm not aware of that is doing things differently." }, { "end": 290.08000000000004, "start": 282.92, "text": " But one distinction that I see is that at Huawei there was always this very big discussion" }, { "end": 297.40000000000003, "start": 290.08000000000004, "text": " about yes, how can we come up with intelligent agents, but the focus has never been so much" }, { "end": 299.6, "start": 297.40000000000003, "text": " in what we would call the URL." }, { "end": 305.28000000000003, "start": 299.6, "text": " I did the deep URL, I wrote a couple of papers about the reinforcement learning when I was" }, { "end": 310.56, "start": 305.28000000000003, "text": " in my PhD, but Huawei is not like a lot of the professors and a lot of the research groups" }, { "end": 314.24, "start": 310.56, "text": " there, they are not necessarily so excited about the deep reinforcement learning research" }, { "end": 315.24, "start": 314.24, "text": " person." }, { "end": 320.08, "start": 315.24, "text": " They are very interested in the fundamentals of a row and then for that you if you need" }, { "end": 324.56, "start": 320.08, "text": " the function approximation, you can just do linear function approximation and things like" }, { "end": 330, "start": 324.56, "text": " that, because they really want to control as much as you can to isolate everything and" }, { "end": 331.44, "start": 330, "text": " explain one process." 
}, { "end": 337.96, "start": 331.44, "text": " And then once you once and then at brain, brain deep mind, I think that they have they share" }, { "end": 342.59999999999997, "start": 337.96, "text": " a lot of similarities, I would say that one of the and I guess one of the big difference" }, { "end": 346.79999999999995, "start": 342.59999999999997, "text": " that brains that the groups are often localized." }, { "end": 351.44, "start": 346.79999999999995, "text": " So like there is the Montreal group and then it has its own flavor research, the mountain" }, { "end": 352.44, "start": 351.44, "text": " view group." }, { "end": 356.47999999999996, "start": 352.44, "text": " And as much as the different groups talk, you can see even in the publications that this" }, { "end": 358.47999999999996, "start": 356.47999999999996, "text": " group stage will publish together." }, { "end": 362.44, "start": 358.47999999999996, "text": " Different groups have different approaches, but I think that one of the things that brains" }, { "end": 367.96, "start": 362.44, "text": " that brain has a big focus are more than reinforcement learning, right?" }, { "end": 372.4, "start": 367.96, "text": " Brain reinforcement from a cartonage perspective as much as they are amazing researchers at" }, { "end": 375.52, "start": 372.4, "text": " brain that do research, they're reinforcement learning and there are a lot of them." }, { "end": 379.32, "start": 375.52, "text": " The majority of the research brain still seems to me that it's focused on deep learning." }, { "end": 385.12, "start": 379.32, "text": " Why I guess a deep mind reinforcement learning, it's at the center or at least for from" }, { "end": 388.12, "start": 385.12, "text": " my perspective, feels at the center of this." }, { "end": 393.24, "start": 388.12, "text": " So it's not so much about how you see the problem or the problem formulation, but it's" }, { "end": 397.52, "start": 393.24, "text": " maybe a metaphor, it's and it's shaping some of the things that the discussions that" }, { "end": 398.52, "start": 397.52, "text": " you have." }, { "end": 402.28000000000003, "start": 398.52, "text": " But in all these three places I always had absolutely freedom to do whatever I want." }, { "end": 405.84000000000003, "start": 402.28000000000003, "text": " So in a sense, it's my perspective of a reinforcement learning and of anything imposed" }, { "end": 407.56, "start": 405.84000000000003, "text": " to me by anyone else." }, { "end": 410.6, "start": 407.56, "text": " That was super interesting and it's great to chat with you." }, { "end": 414.24, "start": 410.6, "text": " You being in this unique position to be able to comment on the different perspectives of" }, { "end": 415.84000000000003, "start": 414.24, "text": " these world leading institutions." }, { "end": 420.28, "start": 415.84, "text": " So we were going to talk about a few of your papers and starting with revisiting the" }, { "end": 421.28, "start": 420.28, "text": " ALE." }, { "end": 425.59999999999997, "start": 421.28, "text": " So this is a first author paper viewers revisiting the arcade learning environment, evaluation" }, { "end": 431.64, "start": 425.59999999999997, "text": " protocols and open problems for general agents and that was Machado 2018." 
}, { "end": 435.84, "start": 431.64, "text": " Before this paper came out, can you help us understand like what were the issues with" }, { "end": 442.32, "start": 435.84, "text": " comparing results across studies in Atari and then how did you address them here?" }, { "end": 447.88, "start": 442.32, "text": " Sure, yeah, that's an interesting paper because it wasn't the works for a very long time." }, { "end": 454.4, "start": 447.88, "text": " The original arcade learning environment paper came out in 2013 from Mike Polnitz group" }, { "end": 458.6, "start": 454.4, "text": " at 12 a.m. with Mark Belmer as a first author." }, { "end": 464.6, "start": 458.6, "text": " And then people who started to get excited is slowly get excited about it until of course" }, { "end": 471, "start": 464.6, "text": " the deep parallel explosion that got everyone's attention and Atari was the main benchmark" }, { "end": 472.4, "start": 471, "text": " that people used." }, { "end": 477.08, "start": 472.4, "text": " And I think that there was that first phase that people were just getting used to the framework" }, { "end": 481.2, "start": 477.08, "text": " and getting used to the problems and the questions that you could ask and what were the limits" }, { "end": 483.4, "start": 481.2, "text": " of computation that we could explore." }, { "end": 489.12, "start": 483.4, "text": " And because people were exploring from so many perspectives, sometimes it felt that" }, { "end": 492.68, "start": 489.12, "text": " they were not making Apple's travel comparisons." }, { "end": 495.76, "start": 492.68, "text": " And I am very, very annoying about that." }, { "end": 501.32, "start": 495.76, "text": " Those who work with me know that I get very upset with these type of comparisons." }, { "end": 502.32, "start": 501.32, "text": " And it always bugged." }, { "end": 511.64, "start": 502.32, "text": " And to give a concrete example, when the original Atari paper came out, it was about the" }, { "end": 515.92, "start": 511.64, "text": " number of, they were using episodes as a metric." }, { "end": 519.08, "start": 515.92, "text": " So the number of episodes in the environment show measure the agent." }, { "end": 523.16, "start": 519.08, "text": " So basically they just get straight to the environment for a thousand episodes." }, { "end": 525.12, "start": 523.16, "text": " I don't remember what the number was." }, { "end": 528.08, "start": 525.12, "text": " And that's you're going to report the performance at the end." }, { "end": 534.76, "start": 528.08, "text": " But the tricky thing here is that if you have an agent that it gets, that it learns well" }, { "end": 540.24, "start": 534.76, "text": " and it starts to learn a good policy either by chance or because it's a better agent" }, { "end": 543.88, "start": 540.24, "text": " or early on, you're going to live longer, right?" }, { "end": 546.08, "start": 543.88, "text": " And then the episodes become much longer." }, { "end": 550.36, "start": 546.08, "text": " If your episodes are much longer, then the agent is actually see much more data than it" }, { "end": 552.44, "start": 550.36, "text": " was before." }, { "end": 555.72, "start": 552.44, "text": " Because this is a thing that it's evolving." 
}, { "end": 562.0400000000001, "start": 555.72, "text": " When the VQN paper came out, I think that they did a more appropriate thing, which is" }, { "end": 566.48, "start": 562.0400000000001, "text": " said, no, we're going to count the number of frames, for example, of this that the agent" }, { "end": 568.84, "start": 566.48, "text": " has seen, the number of interactions with the environment." }, { "end": 574.24, "start": 568.84, "text": " But because everyone else was doing episodes before, well, now you start to have comparisons" }, { "end": 577.2800000000001, "start": 574.24, "text": " between number of frames and number of episodes and then you are not even comparing" }, { "end": 580.2, "start": 577.2800000000001, "text": " the same number of interactions with the environment." }, { "end": 582.6400000000001, "start": 580.2, "text": " And you could see this in a couple of papers." }, { "end": 587.6400000000001, "start": 582.6400000000001, "text": " There are other things like, oh, is the agent allowed to know when it dies or does the agent" }, { "end": 592.5600000000001, "start": 587.6400000000001, "text": " like when it loses a life or does the agent only knows when all the lives are lost and" }, { "end": 598.32, "start": 592.5600000000001, "text": " then the agent gets to start the game again." }, { "end": 603.88, "start": 598.32, "text": " And you can keep adding up those things like in the paper, we have a whole list of those." }, { "end": 606.4000000000001, "start": 603.88, "text": " But you can see that this is more details matter." }, { "end": 607.4000000000001, "start": 606.4000000000001, "text": " They matter a lot." }, { "end": 610.48, "start": 607.4, "text": " Of course, if you're talking about the number of interactions with the environment, the" }, { "end": 613.72, "start": 610.48, "text": " same coefficients of your algorithm, it's a major thing." }, { "end": 618.04, "start": 613.72, "text": " But even like, do you get to know the number of lives or do you don't?" }, { "end": 620.1999999999999, "start": 618.04, "text": " How do you do with that?" }, { "end": 624.68, "start": 620.1999999999999, "text": " Is this another thing, for example, do you get to know the number of actions that the" }, { "end": 625.68, "start": 624.68, "text": " agent has access to?" }, { "end": 629.3199999999999, "start": 625.68, "text": " Because, for example, if you're playing a pong, you only go up and down, right?" }, { "end": 630.76, "start": 629.3199999999999, "text": " Like, you don't have that many actions." }, { "end": 633.24, "start": 630.76, "text": " So you have three actions up, down, and stay." }, { "end": 635.64, "start": 633.24, "text": " Why, in other games, you have actual 18 actions." }, { "end": 639.6, "start": 635.64, "text": " So is the agent supposed to learn that some of the actions have no effect?" }, { "end": 645.28, "start": 639.6, "text": " Or can we start telling the agent that they are not available?" }, { "end": 647.04, "start": 645.28, "text": " And in the paper, we were very careful." }, { "end": 648.16, "start": 647.04, "text": " It irritated a lot about this." }, { "end": 651.16, "start": 648.16, "text": " Should not say that, oh, this is what you should do." 
}, { "end": 654.88, "start": 651.16, "text": " But in a sense, it's important that you acknowledge that you're maybe, if you are assuming" }, { "end": 659.48, "start": 654.88, "text": " that your agent knows what is the effect, the action set, the minimum action set, let's" }, { "end": 662.64, "start": 659.48, "text": " call it the minimum number of actions that are effective in the environment." }, { "end": 666, "start": 662.64, "text": " Well, maybe the agent's not going to spend so much time trying to figure out that it" }, { "end": 668.16, "start": 666, "text": " should not consider those other actions." }, { "end": 670.16, "start": 668.16, "text": " And so it's not fair to make this comparison." }, { "end": 676.68, "start": 670.16, "text": " So when we wrote this paper, this paper actually started back in 2015, when we organized a" }, { "end": 683.1999999999999, "start": 676.68, "text": " workshop at 3.5 about all this research that was doing about how to do this general, this" }, { "end": 688.72, "start": 683.1999999999999, "text": " research and reinforcement learning and this vast range of domains and how we could get" }, { "end": 693.32, "start": 688.72, "text": " this performance that it's in a sense general purpose, what we would call." }, { "end": 698.08, "start": 693.32, "text": " And a lot of the people, like the leads of the reinforcement learning community at the" }, { "end": 702.32, "start": 698.08, "text": " time, they were discussing this and they were saying, yes, we have to fix it." }, { "end": 706.44, "start": 702.32, "text": " We have to have some guidelines to help the whole field." }, { "end": 710.2, "start": 706.44, "text": " And at the time, I was one of the organizers of the workshop and then we said, yeah, let's" }, { "end": 712.64, "start": 710.2, "text": " write a paper about that." }, { "end": 718.88, "start": 712.64, "text": " The paper took much longer to be written for all sorts of reasons, but at the end, I" }, { "end": 723.4, "start": 718.88, "text": " think that it did what it was set up to do and what a lot of the people were expecting" }, { "end": 727.36, "start": 723.4, "text": " us to do, which is to come up with at least some discussion about that." }, { "end": 731.3199999999999, "start": 727.36, "text": " And some examples of apples to apples, comparisons and things that you would expect." }, { "end": 735.3199999999999, "start": 731.3199999999999, "text": " One of the reasons that the paper took so long to be written is also because from the moment" }, { "end": 738.52, "start": 735.3199999999999, "text": " that we started doing this, we started to realize that, oh, maybe we can do some things" }, { "end": 739.52, "start": 738.52, "text": " better." }, { "end": 743.6, "start": 739.52, "text": " We can add stochasticity to the environment because it was a terministic or we can add" }, { "end": 749.68, "start": 743.6, "text": " modes as we added, which is a which vastly increases the number of games that you have access" }, { "end": 750.68, "start": 749.68, "text": " to." 
}, { "end": 755.28, "start": 750.68, "text": " And as we kept adding these and we wanted to write a solid paper, this took quite some" }, { "end": 760.64, "start": 755.28, "text": " time to get out, but eventually 2018, we published that chair and to be fair to chair" }, { "end": 764.24, "start": 760.64, "text": " because journals got a bad reputation, the reveals that it was fairly short, so it was" }, { "end": 766.52, "start": 764.24, "text": " not that the journal was holding us back." }, { "end": 773, "start": 766.52, "text": " And how was the paper received? Like did everyone latch onto this as the definitive way to" }, { "end": 776.36, "start": 773, "text": " benchmark with Atari or did it take some time to diffuse?" }, { "end": 780.12, "start": 776.36, "text": " Did everyone agree that this is the right way to do it, the protocol?" }, { "end": 783.96, "start": 780.12, "text": " I mean, I want to say that the paper was well received." }, { "end": 788.88, "start": 783.96, "text": " People oftentimes associate my name to that paper." }, { "end": 790.84, "start": 788.88, "text": " So yeah, I think it was well received." }, { "end": 794.68, "start": 790.84, "text": " There is a big difference between being well received and becoming a standard in the" }, { "end": 795.68, "start": 794.68, "text": " field." }, { "end": 799.3199999999999, "start": 795.68, "text": " And I think you cannot force people to do what you want them to do." }, { "end": 801.68, "start": 799.3199999999999, "text": " Maybe you're not even suggesting the right thing." }, { "end": 808.92, "start": 801.68, "text": " So in this context, I think that the whether the paper was how much people actually decided" }, { "end": 814.4, "start": 808.92, "text": " to listen to various even recently, we had some big results that were not following." }, { "end": 816.76, "start": 814.4, "text": " Still people use other different protocols to do with statistics." }, { "end": 820.0799999999999, "start": 816.76, "text": " It depends on the version that they are using on the Atari." }, { "end": 824.1999999999999, "start": 820.0799999999999, "text": " But I want to say that I've seen I've been on the review process a couple of times where" }, { "end": 829, "start": 824.2, "text": " I see other reviewers saying, oh, you're not following the this paper's guidelines and" }, { "end": 830, "start": 829, "text": " you should." }, { "end": 833.6800000000001, "start": 830, "text": " So I think that there is this general consensus in the community that it's at least one good" }, { "end": 835.4000000000001, "start": 833.6800000000001, "text": " standard to fall." }, { "end": 836.4000000000001, "start": 835.4000000000001, "text": " And I'll take that." }, { "end": 837.6800000000001, "start": 836.4000000000001, "text": " I think it's good enough." }, { "end": 840.32, "start": 837.6800000000001, "text": " So this paper is really how I first encountered your name." }, { "end": 846.72, "start": 840.32, "text": " And as I told you before, our interview, it came up during my first Neurops as 2018" }, { "end": 850.4000000000001, "start": 846.72, "text": " and Go Explorer was a big part of the D-Bar All Workshop." }, { "end": 854.88, "start": 850.4, "text": " If some of the discussion after that was around whether there are methods adhered to the" }, { "end": 856.8, "start": 854.88, "text": " guidelines in your paper." }, { "end": 860.12, "start": 856.8, "text": " So that's how I came to you to know you for the first time." 
}, { "end": 865.0799999999999, "start": 860.12, "text": " Yeah, it was interesting because when the go explorer paper came out, I didn't say anything" }, { "end": 868.56, "start": 865.0799999999999, "text": " but I could say that people bringing up my paper say, oh, you're not following the" }, { "end": 873.88, "start": 868.56, "text": " app and essentially somehow our paper got, I want to say got it." }, { "end": 875.92, "start": 873.88, "text": " I also a lot of attention because of the explorer." }, { "end": 877.72, "start": 875.92, "text": " So I think it was good for everyone." }, { "end": 883.36, "start": 877.72, "text": " And I was tweeted about this yesterday or the day before." }, { "end": 884.36, "start": 883.36, "text": " I don't remember." }, { "end": 888.52, "start": 884.36, "text": " But I finally got the chance of reading the go explorer to the final version, the nature" }, { "end": 889.52, "start": 888.52, "text": " paper." }, { "end": 895.64, "start": 889.52, "text": " And I was very happy and impressed by how much they actually listened and they wanted to" }, { "end": 898.32, "start": 895.64, "text": " join us to write." }, { "end": 903.5600000000001, "start": 898.32, "text": " And I really congratulate the authors for like taking the feedback from the community and" }, { "end": 907.52, "start": 903.5600000000001, "text": " saying that yes, if this toxicity is something that is important, we are going to address" }, { "end": 908.52, "start": 907.52, "text": " that." }, { "end": 912.52, "start": 908.52, "text": " I think it was a good outcome and a good example of science in the community talking to each" }, { "end": 913.52, "start": 912.52, "text": " other." }, { "end": 920.0799999999999, "start": 913.52, "text": " So can you tell us about a determinism in ALE and stochasticity and sticky actions?" }, { "end": 921.48, "start": 920.0799999999999, "text": " How does all that stuff work?" }, { "end": 927.6, "start": 921.48, "text": " We take a lot of things for granted nowadays, but back in the day for a far, they also predates" }, { "end": 929.72, "start": 927.6, "text": " stochasticity in a sense." }, { "end": 935.88, "start": 929.72, "text": " So the Atari 2600 didn't have a source of stochasticity built in the controller." }, { "end": 941.32, "start": 935.88, "text": " The best they could do was to use the state of the RAM when the game was loading to try" }, { "end": 944.32, "start": 941.32, "text": " to come up with some notion of stochasticity." }, { "end": 948.12, "start": 944.32, "text": " Which means that most of the games, the vast majority of the games, they are deterministic." }, { "end": 952.08, "start": 948.12, "text": " Like, if you execute the same sequence of actions, you're always going to have exactly the" }, { "end": 954.64, "start": 952.08, "text": " same outcome, which is fine." }, { "end": 959.48, "start": 954.64, "text": " And we have a lot of domains that are a lot of problems in the real world that where" }, { "end": 964.08, "start": 959.48, "text": " this is the how it operates." }, { "end": 967.48, "start": 964.08, "text": " So on the other hand, it feels that there is something missing, right?" 
}, { "end": 971.6800000000001, "start": 967.48, "text": " Because a lot of the problems that we have, they also have some inherent stochasticity maybe" }, { "end": 975.24, "start": 971.6800000000001, "text": " because you don't control the environment, but maybe because you don't control your own" }, { "end": 980.8000000000001, "start": 975.24, "text": " actions like you cannot time every microsecond of how you're going to do each other muscles." }, { "end": 986.12, "start": 980.8000000000001, "text": " So we felt that there was something missing and this was the stochasticity." }, { "end": 992.44, "start": 986.12, "text": " We felt that by adding, because the original notion of the Atari, what's called arcade" }, { "end": 997.2, "start": 992.44, "text": " learning environment, which are the set of Atari games that we use for reinforcement learning" }, { "end": 1003.2800000000001, "start": 997.2, "text": " evaluation, the original idea, at least how I see it, I was not in the first paper, was" }, { "end": 1008.96, "start": 1003.2800000000001, "text": " that we want our agent to basically do a single agent that we can deploy after all these" }, { "end": 1009.96, "start": 1008.96, "text": " different environments." }, { "end": 1012.4000000000001, "start": 1009.96, "text": " And it's exactly the same algorithm." }, { "end": 1015.7600000000001, "start": 1012.4000000000001, "text": " It's just going to run and it's going to learn how to do well." }, { "end": 1017.96, "start": 1015.7600000000001, "text": " So there is no specialization per game, right?" }, { "end": 1019.24, "start": 1017.96, "text": " There is nothing." }, { "end": 1023.88, "start": 1019.24, "text": " And this is in a sense what allows us to have a general purpose algorithm because well," }, { "end": 1026.92, "start": 1023.88, "text": " if I have an algorithm that I say, hey, learn how to play tennis and it does." }, { "end": 1030.96, "start": 1026.92, "text": " And then learn how to shoot aliens and it does." }, { "end": 1034.68, "start": 1030.96, "text": " Of course, under the same interface and so on, it's a much better algorithm than you" }, { "end": 1039.52, "start": 1034.68, "text": " just say, oh, this algorithm I just evaluated it to, I don't know, play tennis." }, { "end": 1044.96, "start": 1039.52, "text": " So you had this general purpose approach and it felt to us with time that the stochasticity" }, { "end": 1049.72, "start": 1044.96, "text": " was a big part of it because we could see or we had hypothesized that some of the papers" }, { "end": 1051.48, "start": 1049.72, "text": " that we were seeing came out." }, { "end": 1055.56, "start": 1051.48, "text": " They were in a sense implicitly exploiting the determinism." }, { "end": 1061.64, "start": 1055.56, "text": " And in the paper, we came up with the simplest version of that that we call the brute, which" }, { "end": 1066.4, "start": 1061.64, "text": " was to show that we could come up with a learning algorithm if you will, that even if we didn't" }, { "end": 1069.4, "start": 1066.4, "text": " look at the state at all, basically we're just going to ignore this screen." }, { "end": 1073.92, "start": 1069.4, "text": " We're just going to blindly, which is what we call open loop planning, we're just going" }, { "end": 1075.92, "start": 1073.92, "text": " to blindly learn a sequence of actions." }, { "end": 1080.52, "start": 1075.92, "text": " We could sometimes do better than the state of the algorithm at the time." 
}, { "end": 1082.24, "start": 1080.52, "text": " And somehow, to us, it felt wrong." }, { "end": 1086.44, "start": 1082.24, "text": " We're like, quick, how can we have what we call an intelligent agent that is learning something" }, { "end": 1090.28, "start": 1086.44, "text": " that is not even considering what it's observing just like." }, { "end": 1095.16, "start": 1090.28, "text": " And the stochasticity was a way that we could bring this discussion and at least at this" }, { "end": 1097.92, "start": 1095.16, "text": " extra dimension of it should be considered." }, { "end": 1103.76, "start": 1097.92, "text": " And our solution was the stick actions, which basically because the ALE, I did a lot" }, { "end": 1108.24, "start": 1103.76, "text": " of the development of the framework itself on the back end." }, { "end": 1111.24, "start": 1108.24, "text": " And it's, yeah, it's very low level." }, { "end": 1112.24, "start": 1111.24, "text": " Let's put this way." }, { "end": 1116.76, "start": 1112.24, "text": " But when you look at the code for the ALE or like the Atari's emulator, you don't have" }, { "end": 1117.92, "start": 1116.76, "text": " a source of randomness." }, { "end": 1122.12, "start": 1117.92, "text": " So it was very difficult to say, oh, we're going to add randomness in the game itself," }, { "end": 1126.32, "start": 1122.12, "text": " because that was going to be like, oh, a lot of work." }, { "end": 1128.6, "start": 1126.32, "text": " I didn't want to spend two years of my PhD doing that." }, { "end": 1133.12, "start": 1128.6, "text": " So what we saw, well, but what would be a meaningful way of thinking about that?" }, { "end": 1136.1999999999998, "start": 1133.12, "text": " And then it comes as stick actions, which was this notion as well." }, { "end": 1140.8799999999999, "start": 1136.1999999999998, "text": " Even if a human is playing at heart, they didn't have this feeling that the game was deterministic." }, { "end": 1143.3999999999999, "start": 1140.8799999999999, "text": " And the reason they didn't have it is because a human cannot time." }, { "end": 1148.4399999999998, "start": 1143.3999999999999, "text": " Oh, I'm going to shoot every 30 milliseconds or something like that because humans have" }, { "end": 1150.6399999999999, "start": 1148.4399999999998, "text": " legs and the reflex and things like that." }, { "end": 1154.6, "start": 1150.6399999999999, "text": " So by what stick actions does is that there is a probability that every time that you" }, { "end": 1159.36, "start": 1154.6, "text": " execute an action, it's going to take a little bit longer, maybe one or two in interactions" }, { "end": 1163.6, "start": 1159.36, "text": " with environment for that action actually take effect, which is trying to make some delay" }, { "end": 1166.9599999999998, "start": 1163.6, "text": " that humans could have in your reacting." }, { "end": 1171.84, "start": 1166.9599999999998, "text": " And that was already like, as we showed in some of the results when we are trying to see" }, { "end": 1177.24, "start": 1171.84, "text": " how to break the brute, for example, which is this notion of this deterministic algorithm" }, { "end": 1182.04, "start": 1177.24, "text": " that could do well, we showed that yes, even this very simple notion of stochasticity" }, { "end": 1187.56, "start": 1182.04, "text": " would break it because it was clearly exploiting something that, at least from my perspective," }, { "end": 1189.12, "start": 1187.56, "text": " it was not ideal to exploit." 
}, { "end": 1190.12, "start": 1189.12, "text": " So it seems very realistic." }, { "end": 1194.4799999999998, "start": 1190.12, "text": " You're in the 80s, you go to the arcade, it's an old machine, someone spilled Pepsi in" }, { "end": 1197.4399999999998, "start": 1194.4799999999998, "text": " the controller, and this is your sticky actions." }, { "end": 1198.4399999999998, "start": 1197.4399999999998, "text": " It's perfect." }, { "end": 1199.4399999999998, "start": 1198.4399999999998, "text": " I love that." }, { "end": 1203.2399999999998, "start": 1199.4399999999998, "text": " And I love the fact that sticky now has two puns because it has two minis because I don't" }, { "end": 1206.1599999999999, "start": 1203.2399999999998, "text": " know if I want to play sticky buttons, but sure." }, { "end": 1210.56, "start": 1206.1599999999999, "text": " Okay, so let's move on to your work in generalization." }, { "end": 1217.56, "start": 1210.56, "text": " So there's a couple of papers here first by, first author, fair brother, generalization" }, { "end": 1220.9199999999998, "start": 1217.56, "text": " and regularization in DQN from 2018." }, { "end": 1227.28, "start": 1220.9199999999998, "text": " And more recently, by Agarwal at all, contrasted behavioral similarity and vettings for generalization" }, { "end": 1228.48, "start": 1227.28, "text": " and reinforcement learning." }, { "end": 1230.8799999999999, "start": 1228.48, "text": " That's at ICLR-21." }, { "end": 1236.8799999999999, "start": 1230.8799999999999, "text": " So can you help us understand in simple terms what's going on in these papers?" }, { "end": 1237.8799999999999, "start": 1236.8799999999999, "text": " Yeah, sure." }, { "end": 1245.24, "start": 1237.8799999999999, "text": " I think that, so this question of generalization started to bother me in a sense when I was" }, { "end": 1248.16, "start": 1245.24, "text": " writing the revisiting the ALP." }, { "end": 1253.92, "start": 1248.16, "text": " And how it came to be was this notion that one of the things that we added in the ALP" }, { "end": 1255.72, "start": 1253.92, "text": " was this notion of modes." }, { "end": 1259.96, "start": 1255.72, "text": " So when you see, I don't know, freeway, for example, the game that where the chicken is crossing" }, { "end": 1264.16, "start": 1259.96, "text": " the road, everyone is very familiar with the talks and so on, or you have a yellow blob" }, { "end": 1266.48, "start": 1264.16, "text": " trying to cross the road and cars are coming by." }, { "end": 1271.68, "start": 1266.48, "text": " But what people don't realize is that the developers of this Atari games, they were so good" }, { "end": 1275.44, "start": 1271.68, "text": " that they were not satisfied in putting a single game in 2K of RAM." }, { "end": 1278.04, "start": 1275.44, "text": " They wanted to put 16 32 games." }, { "end": 1279.72, "start": 1278.04, "text": " And somehow they managed to do that." }, { "end": 1285, "start": 1279.72, "text": " So in the Atari console, what you had is that you had some switches and you could actually" }, { "end": 1286.44, "start": 1285, "text": " change the mode of the game." }, { "end": 1290.28, "start": 1286.44, "text": " So in freeway, for example, if you can cross the road, you could change the time so you" }, { "end": 1294.64, "start": 1290.28, "text": " could go to rush hour and then you had more cars." }, { "end": 1297.0800000000002, "start": 1294.64, "text": " And then it's a different game in a sense." 
}, { "end": 1298.0800000000002, "start": 1297.0800000000002, "text": " But it's not, right?" }, { "end": 1299.0800000000002, "start": 1298.0800000000002, "text": " It's the same sprites." }, { "end": 1300.72, "start": 1299.0800000000002, "text": " It's the same principle of the game." }, { "end": 1301.72, "start": 1300.72, "text": " It's the same idea." }, { "end": 1303.28, "start": 1301.72, "text": " You go up and you avoid cars." }, { "end": 1307.8, "start": 1303.28, "text": " But by flipping this switch in this new mode, you have a new environment, a new reinforcement" }, { "end": 1308.8, "start": 1307.8, "text": " or a new environment." }, { "end": 1313.04, "start": 1308.8, "text": " And when I was seeing this, and even when we were proposing these modes, like introducing" }, { "end": 1318.48, "start": 1313.04, "text": " it as a research problem, it felt that, yes, you can call it all sorts of things." }, { "end": 1321.44, "start": 1318.48, "text": " But to me, this isn't generally a problem of generalization." }, { "end": 1326.2, "start": 1321.44, "text": " I want my agent to be able to learn by playing two or three games of freeway that, yes, I" }, { "end": 1327.64, "start": 1326.2, "text": " want to go up and avoid cars." }, { "end": 1332.76, "start": 1327.64, "text": " So now if there is a new pattern of cars showing up or the cars are at different speeds, ideally" }, { "end": 1337.3600000000001, "start": 1332.76, "text": " the agent would not suck at playing that game." }, { "end": 1342.4, "start": 1337.3600000000001, "text": " And this first paper that you mentioned, which is a paper with Jesse Fearbrother and Michael" }, { "end": 1347.8000000000002, "start": 1342.4, "text": " Bowling, was when I was working with Jesse, who was at the time, an undergrad student." }, { "end": 1350.92, "start": 1347.8000000000002, "text": " And I was posing this question to him and he was like, yes, that's very interesting." }, { "end": 1356.48, "start": 1350.92, "text": " And then we start to explore how much the gold standard at the time we chose to QN was," }, { "end": 1360.3600000000001, "start": 1356.48, "text": " in a sense, overfeiting to this, to a single game that it was being trained." }, { "end": 1365.24, "start": 1360.3600000000001, "text": " What would happen if we actually put DQN to train one of these environments and then" }, { "end": 1368.08, "start": 1365.24, "text": " basically just change the speed of the cars?" }, { "end": 1374.6, "start": 1368.08, "text": " And low and behold, as by now we all know, these algorithms, they are not, by just the" }, { "end": 1379.24, "start": 1374.6, "text": " simple definition of them, they have no incentive to generalize beyond the tasks that they" }, { "end": 1380.24, "start": 1379.24, "text": " are seeing." }, { "end": 1386.3600000000001, "start": 1380.24, "text": " So we were showing these and we were showing that even if we revisited some basics of" }, { "end": 1391.24, "start": 1386.36, "text": " machine learning, like, look at what would we do if we, what would happen if we regularized" }, { "end": 1392.24, "start": 1391.24, "text": " this network?" }, { "end": 1396.6799999999998, "start": 1392.24, "text": " If we use regularization to improve generalization, we could see some benefits." 
}, { "end": 1400.84, "start": 1396.6799999999998, "text": " And we were asking quite a lot of, but what if we, we want to reload the weights of the" }, { "end": 1404.9199999999998, "start": 1400.84, "text": " network and just trying a last layer, for example, something like that, would we still" }, { "end": 1409.08, "start": 1404.9199999999998, "text": " be able to leverage what it's the representation because arguably the sprites are the same." }, { "end": 1414.84, "start": 1409.08, "text": " So the ability to extract, we could represent it should be transferable." }, { "end": 1418.84, "start": 1414.84, "text": " So we were exploring a lot of these questions in this paper." }, { "end": 1423.6799999999998, "start": 1418.84, "text": " And, and, and, and, and, and the solutions, like, even if you want to call it a solution," }, { "end": 1429.08, "start": 1423.6799999999998, "text": " it was more raising our weariness to the problem than necessarily proposing any new solutions." }, { "end": 1431.8799999999999, "start": 1429.08, "text": " They, they were too simple and then life happened." }, { "end": 1435.72, "start": 1431.8799999999999, "text": " I don't know, finishing PhD, getting a job and so on." }, { "end": 1439.12, "start": 1435.72, "text": " But this question was always at the back of my mind and eventually I, I, I managed to" }, { "end": 1444.6399999999999, "start": 1439.12, "text": " come in Sri Shad, Agarwal, who, who was a resident at Google brand at the time that this question" }, { "end": 1446.84, "start": 1444.6399999999999, "text": " of generalization was an interesting one." }, { "end": 1451.1599999999999, "start": 1446.84, "text": " And we worked with Pablo Castro and, and Mark Belmer on that." }, { "end": 1455.32, "start": 1451.1599999999999, "text": " And eventually I was really happy with one of the solutions, the solution that we came" }, { "end": 1460.8, "start": 1455.32, "text": " up with, we, we, we, we, was this notion that, and I say we, but, like, we should get" }, { "end": 1466.3999999999999, "start": 1460.8, "text": " on the credit, which was this notion that, what if, maybe even taking a step back, we" }, { "end": 1469, "start": 1466.3999999999999, "text": " were asking the question, how can we learn a representation?" }, { "end": 1473.44, "start": 1469, "text": " Because by now it's, it seems pretty clear to all of us that the representation that you're" }, { "end": 1476.12, "start": 1473.44, "text": " being, that we're learning, we're not, was not generalizing." }, { "end": 1479.44, "start": 1476.12, "text": " And we're asking the question, how can we train an agent to learn a representation in" }, { "end": 1485.2, "start": 1479.44, "text": " such a way that if it's seen a different environment, a different test, but it's very similar." }, { "end": 1488.56, "start": 1485.2, "text": " It's still no, it's still going to know what to do." }, { "end": 1494.8, "start": 1488.56, "text": " And, and then we can, and some folks at Microsoft research, in Montreal had come up with this" }, { "end": 1500.04, "start": 1494.8, "text": " very simple environment that I think that captures the, what the, like, some of these notions" }, { "end": 1504.44, "start": 1500.04, "text": " very well, which is this, this notion of all you want to do is to have an agent learning" }, { "end": 1506.52, "start": 1504.44, "text": " to jump over a block." 
}, { "end": 1511.1599999999999, "start": 1506.52, "text": " And then what you can do is that you can move the position of the block that agent needs" }, { "end": 1513.08, "start": 1511.1599999999999, "text": " to jump over, like only on the x axis." }, { "end": 1516.1599999999999, "start": 1513.08, "text": " So basically just move it right or left." }, { "end": 1521.56, "start": 1516.1599999999999, "text": " But you can also put, you can also put this, this, let's say, this is a pixel task," }, { "end": 1526.56, "start": 1521.56, "text": " a pixel-based task, you can always put this screen that the agent's looking on a bigger" }, { "end": 1529.24, "start": 1526.56, "text": " screen and then you can just move it up and down." }, { "end": 1534.3999999999999, "start": 1529.24, "text": " So now, or like what I mean by that is that you can have the floor where the agent is" }, { "end": 1537.6399999999999, "start": 1534.3999999999999, "text": " leaving and then you can just shift the floor up or down." }, { "end": 1541.6, "start": 1537.6399999999999, "text": " So now you have two dimensions of that you can vary, you can just shift the floor up or" }, { "end": 1545.8799999999999, "start": 1541.6, "text": " down, but the agent is still sitting on the same floor and the agent still needs to jump" }, { "end": 1547.72, "start": 1545.8799999999999, "text": " over the same block." }, { "end": 1552.44, "start": 1547.72, "text": " And lo and behold, it's literally the same problem, the pixels are just shifted and then" }, { "end": 1554.96, "start": 1552.44, "text": " the network can't do it." }, { "end": 1556.96, "start": 1554.96, "text": " The network is really bad at doing that." }, { "end": 1562.56, "start": 1556.96, "text": " And if you shift the obstacle as well, the position is again really bad, but there isn't" }, { "end": 1566.8, "start": 1562.56, "text": " under line representation here that would solve all the problems, right?" }, { "end": 1570.64, "start": 1566.8, "text": " Like if instead of latching to random pixels on the screen or something like that, what" }, { "end": 1574.56, "start": 1570.64, "text": " we were seeing is that well, maybe the agent should be able to learn the distance between" }, { "end": 1575.72, "start": 1574.56, "text": " the agent and the block." }, { "end": 1578.76, "start": 1575.72, "text": " Well, nothing matters anymore because this is invariant, right?" }, { "end": 1579.76, "start": 1578.76, "text": " That's the key word here." }, { "end": 1583.64, "start": 1579.76, "text": " Now the representation is going to be invariant to all these changes." }, { "end": 1588.72, "start": 1583.64, "text": " And I'm talking about this jumping word because it's, I think it's the most dietic example" }, { "end": 1595.88, "start": 1588.72, "text": " of this, but eventually from this discussion that we had about this notion of invariance," }, { "end": 1600.64, "start": 1595.88, "text": " we start to ask what could we do to learn representations that are invariant." }, { "end": 1604.88, "start": 1600.64, "text": " And then it comes this paper which is what we said, well, maybe what we should do is that" }, { "end": 1608.1200000000001, "start": 1604.88, "text": " we should learn in a couple of these different environments." }, { "end": 1609.8400000000001, "start": 1608.1200000000001, "text": " Let's put this way." }, { "end": 1612.88, "start": 1609.8400000000001, "text": " How should we allow the optimal policy?" 
}, { "end": 1618.5600000000002, "start": 1612.88, "text": " And then we should try to look back and say, wait, but if I'm acting the same on this" }, { "end": 1624, "start": 1618.5600000000002, "text": " true environment, even though they look very different from the network's perspective," }, { "end": 1626.0800000000002, "start": 1624, "text": " does it mean that these states are actually the same?" }, { "end": 1631.8400000000001, "start": 1626.0800000000002, "text": " So we don't do research, we didn't run experiments on super Mario Brothers, but it's a famous" }, { "end": 1635.36, "start": 1631.84, "text": " game I like to give this example, which is just like, let's say that you learn to jump" }, { "end": 1637.24, "start": 1635.36, "text": " over the turtle." }, { "end": 1638.8799999999999, "start": 1637.24, "text": " And that's what you need to do, right?" }, { "end": 1642.84, "start": 1638.8799999999999, "text": " If you go forward and on the background is completely different, but you still are only" }, { "end": 1645.24, "start": 1642.84, "text": " jumping over the turtle or you're avoiding an obstacle." }, { "end": 1646.84, "start": 1645.24, "text": " So it's kind of the same thing, right?" }, { "end": 1652.24, "start": 1646.84, "text": " It's just like, oh, yeah, I guess now I'm in this state that I should learn how to execute" }, { "end": 1653.72, "start": 1652.24, "text": " the sequence of actions." }, { "end": 1657.1999999999998, "start": 1653.72, "text": " And by doing that, you should learn to say, oh, yeah, so I guess this doesn't matter," }, { "end": 1658.3999999999999, "start": 1657.1999999999998, "text": " this doesn't matter." }, { "end": 1664.24, "start": 1658.4, "text": " What we're trying to do with this paper was this, and thus it comes the type of the paper," }, { "end": 1667.24, "start": 1664.24, "text": " which is this notion of behavior similarity." }, { "end": 1671.96, "start": 1667.24, "text": " And we wanted, like, if the agent is behaving similarly in different instantiations of the" }, { "end": 1677.68, "start": 1671.96, "text": " same problem, maybe this means that the states should be, at least consider it should be" }, { "end": 1678.68, "start": 1677.68, "text": " equivalent." }, { "end": 1680.68, "start": 1678.68, "text": " And we do this." }, { "end": 1685.64, "start": 1680.68, "text": " And then we, the paper, I really like the paper because it has both theory and also a lot" }, { "end": 1689.88, "start": 1685.64, "text": " of empirical data and we did this in a way that eventually we were able to create a" }, { "end": 1694.72, "start": 1689.88, "text": " loss function that allows us to learn and embed in that captures this similarity." }, { "end": 1699.2, "start": 1694.72, "text": " And it starts to put together the states that yes, if you're behaving the same in two" }, { "end": 1704.24, "start": 1699.2, "text": " different ways, even though in these two different setups, even though they look very differently," }, { "end": 1706.0400000000002, "start": 1704.24, "text": " maybe these things are the same." }, { "end": 1709.2800000000002, "start": 1706.0400000000002, "text": " And this is one of the things that the network is trying to do to put to learn this in" }, { "end": 1716.28, "start": 1709.28, "text": " a different way." }, { "end": 1727.44, "start": 1716.28, "text": " So, you talk about finding state embeddings with similar long-term behavior here." }, { "end": 1730.12, "start": 1727.44, "text": " How do you define long-term behavior?" 
}, { "end": 1731.12, "start": 1730.12, "text": " Yeah." }, { "end": 1736.12, "start": 1731.12, "text": " So what we can do here is that we can think about how is the agent going to act at the" }, { "end": 1737.44, "start": 1736.12, "text": " current time step." }, { "end": 1738.44, "start": 1737.44, "text": " Right?" }, { "end": 1744.6000000000001, "start": 1738.44, "text": " So if you want to think about very short term behavior, it's going to be one step." }, { "end": 1750.88, "start": 1744.6000000000001, "text": " And basically you can say, well, am I going to go up here and am I going to go up like" }, { "end": 1753.6000000000001, "start": 1750.88, "text": " this under the extension of the environment?" }, { "end": 1755.04, "start": 1753.6000000000001, "text": " And this would be the short behavior." }, { "end": 1758.44, "start": 1755.04, "text": " And then what you do is that then you start to make this longer." }, { "end": 1761.56, "start": 1758.44, "text": " So now you're not only looking at one action, you're looking at multiple actions in the" }, { "end": 1762.56, "start": 1761.56, "text": " future." }, { "end": 1767.56, "start": 1762.56, "text": " And the way we do this is inspired by this notion of bicimulation metric." }, { "end": 1775.8799999999999, "start": 1767.56, "text": " And just look at how similar the policy is at the current time step that you are at." }, { "end": 1780.36, "start": 1775.8799999999999, "text": " And then you also look at the discounted distance between the distribution of states that you're" }, { "end": 1782.6399999999999, "start": 1780.36, "text": " going to look in the future." }, { "end": 1786.6399999999999, "start": 1782.6399999999999, "text": " So by doing that, it's discounted." }, { "end": 1789.12, "start": 1786.6399999999999, "text": " So the long term comes from this discounting, right?" }, { "end": 1793.84, "start": 1789.12, "text": " Because if we have a gamma equals zero, basically we're not looking in the future." }, { "end": 1796.8799999999999, "start": 1793.84, "text": " And if we have gamma equals something bigger than zero, let's say 0.9." }, { "end": 1798.64, "start": 1796.88, "text": " And we're looking at a couple of times steps." }, { "end": 1803.8000000000002, "start": 1798.64, "text": " We still concern a lot about where we are at the beginning, but there is this exponential" }, { "end": 1804.8000000000002, "start": 1803.8000000000002, "text": " decay." }, { "end": 1807.48, "start": 1804.8000000000002, "text": " And then we are looking at this distribution of things that we're going to see in the future." }, { "end": 1811.5600000000002, "start": 1807.48, "text": " And if they match, then we're going to say, or they're close enough because of course" }, { "end": 1813.44, "start": 1811.5600000000002, "text": " it's not about matching exactly." }, { "end": 1815.24, "start": 1813.44, "text": " Then we start to try to put these things together." }, { "end": 1820.0400000000002, "start": 1815.24, "text": " Is there a relationship here between this work and the idea of options?" }, { "end": 1823.2, "start": 1820.0400000000002, "text": " Like, is there a close relationship?" }, { "end": 1824.2, "start": 1823.2, "text": " Yes and no." }, { "end": 1829.44, "start": 1824.2, "text": " In this list, we are not trying to learn, because this sequence of actions are like this," }, { "end": 1831.68, "start": 1829.44, "text": " this course is of actions." }, { "end": 1833.6000000000001, "start": 1831.68, "text": " So at first, no." 
}, { "end": 1838.96, "start": 1833.6000000000001, "text": " But the reason I say yes is just because I like to think about this thing as trying to" }, { "end": 1839.96, "start": 1838.96, "text": " find different." }, { "end": 1842.28, "start": 1839.96, "text": " It's all about abstractions, right?" }, { "end": 1847.64, "start": 1842.28, "text": " And I think that the way it's about options is that options are abstractions in the action" }, { "end": 1848.64, "start": 1847.64, "text": " space." }, { "end": 1852.48, "start": 1848.64, "text": " So given that I'm going to act, how can I abstract the sequence of actions into something" }, { "end": 1853.48, "start": 1852.48, "text": " more meaningful?" }, { "end": 1857.52, "start": 1853.48, "text": " And what we're looking from this in this paper, I would say that it's more an ocean of" }, { "end": 1863.04, "start": 1857.52, "text": " abstraction in the state space, which is, even the observations, how can I abstract these" }, { "end": 1868.04, "start": 1863.04, "text": " states into something more amenable and more useful for generalization?" }, { "end": 1872.64, "start": 1868.04, "text": " So they're definitely touching a different notion of abstraction, I would say." }, { "end": 1878.52, "start": 1872.64, "text": " But yeah, but there is no notion of explicitly trying to use this extended sequence of actions" }, { "end": 1880.4, "start": 1878.52, "text": " from this paper." }, { "end": 1884.4, "start": 1880.4, "text": " And I think I saw that there was some notion of being agnostic to reward." }, { "end": 1885.64, "start": 1884.4, "text": " In the embedding, is that true?" }, { "end": 1889, "start": 1885.64, "text": " And is the policy here still trying to maximize returns?" }, { "end": 1890.52, "start": 1889, "text": " Yes, it is true." }, { "end": 1895.5600000000002, "start": 1890.52, "text": " We are agnostic in the sense that as I was just described in the math, I was not talking" }, { "end": 1897.52, "start": 1895.5600000000002, "text": " about rewards at any point, right?" }, { "end": 1901.2800000000002, "start": 1897.52, "text": " So we would just look at the different behaviors as in the toe." }, { "end": 1907.3600000000001, "start": 1901.2800000000002, "text": " If the agent is behaving differently, like behaving similarly at two different places," }, { "end": 1910.3200000000002, "start": 1907.3600000000001, "text": " maybe this states are together." }, { "end": 1913.8, "start": 1910.32, "text": " So it is rewarding agnostic in that context." }, { "end": 1917.8799999999999, "start": 1913.8, "text": " But this is just one of the last functions that we use, which is the one that they are" }, { "end": 1920.24, "start": 1917.8799999999999, "text": " trying to shape the representation learning process." }, { "end": 1923.4399999999998, "start": 1920.24, "text": " We still have this standard D-parallel formulation, if you will." }, { "end": 1930.52, "start": 1923.4399999999998, "text": " We are trying to maximize return, and this loss is driving the learning the policy as" }, { "end": 1931.52, "start": 1930.52, "text": " well." }, { "end": 1936.6799999999998, "start": 1931.52, "text": " So we're definitely, we definitely want to maximize return that's the goal." }, { "end": 1940.44, "start": 1936.68, "text": " But we have something extra, let's say, that is just trying to nudge the representation" }, { "end": 1941.44, "start": 1940.44, "text": " learning process." 
}, { "end": 1945.24, "start": 1941.44, "text": " So all other things being equal, maybe we should learn a representation that it's better" }, { "end": 1946.24, "start": 1945.24, "text": " if you would." }, { "end": 1947.24, "start": 1946.24, "text": " Cool." }, { "end": 1948.24, "start": 1947.24, "text": " Okay, let's move on to exploration." }, { "end": 1950.68, "start": 1948.24, "text": " You've done a lot of working exploration." }, { "end": 1955.2, "start": 1950.68, "text": " You said you focused on exploration for your PhD and had a number of papers in this area." }, { "end": 1958.28, "start": 1955.2, "text": " Do you want to tell us a bit about your thesis?" }, { "end": 1959.28, "start": 1958.28, "text": " Yeah, sure." }, { "end": 1964.4, "start": 1959.28, "text": " So when I started my PhD back in 2013, which is literally the year that AirKid Learning" }, { "end": 1969.1200000000001, "start": 1964.4, "text": " Environment came out, so I was very excited about that framework." }, { "end": 1974, "start": 1969.1200000000001, "text": " And I was asking, and I was looking at those problems, and I was like, what are agents actually" }, { "end": 1975.16, "start": 1974, "text": " fail to do?" }, { "end": 1980.68, "start": 1975.16, "text": " And even when the D-parallel agents came, it was the same question, like, what they can" }, { "end": 1981.68, "start": 1980.68, "text": " do?" }, { "end": 1986.2800000000002, "start": 1981.68, "text": " And one of the things that they couldn't do was this notion of, there is a set of games" }, { "end": 1993.52, "start": 1986.2800000000002, "text": " that basically these agents can't do well, they couldn't do well at the time." }, { "end": 1998.24, "start": 1993.52, "text": " And this was this game where basically you had, it was very difficult to find a positive" }, { "end": 2000.4, "start": 1998.24, "text": " reward in the environment." }, { "end": 2005.52, "start": 2000.4, "text": " It required a very long sequence of actions or it required you to, or it required you" }, { "end": 2010.28, "start": 2005.52, "text": " the right sequence of actions because you could die before getting there." }, { "end": 2015.4, "start": 2010.28, "text": " And this was something that was interesting to me, and maybe I like to make the joke that" }, { "end": 2021.52, "start": 2015.4, "text": " in the first term, the first semester that I started my PhD, I was taking a read sentence" }, { "end": 2024.44, "start": 2021.52, "text": " reinforcement learning grade course." }, { "end": 2030.4, "start": 2024.44, "text": " And the project that he asked us to do was to get a Rumba and the I-Rubba, then say," }, { "end": 2033.12, "start": 2030.4, "text": " you have to do, you have to make it learn something." }, { "end": 2038.76, "start": 2033.12, "text": " So you have to implement reinforcement learning algorithm in this robot, and you have to" }, { "end": 2041.92, "start": 2038.76, "text": " be able to demonstrate that this robot is learning that." }, { "end": 2044.4, "start": 2041.92, "text": " And be creative on what you want the robot to learn." }, { "end": 2047.2, "start": 2044.4, "text": " And then I was like, of course, I want to impress with certain, so I'm going to do something" }, { "end": 2048.2, "start": 2047.2, "text": " very fancy." }, { "end": 2053.68, "start": 2048.2, "text": " And what I wanted to do was I wanted to have the robot to learn how to dock into the" }, { "end": 2056.08, "start": 2053.68, "text": " charging station." 
}, { "end": 2059.96, "start": 2056.08, "text": " And I tried, and I failed miserably at the time." }, { "end": 2062.96, "start": 2059.96, "text": " And I remember that at the end of the course, I was the only one to go out there and say," }, { "end": 2065.12, "start": 2062.96, "text": " hey, look, I tried all these things, but I failed." }, { "end": 2068.2799999999997, "start": 2065.12, "text": " And I don't have a learning demonstration to show you." }, { "end": 2072.52, "start": 2068.2799999999997, "text": " And the reason I failed was exactly because the robot would never latch for the first" }, { "end": 2074.52, "start": 2072.52, "text": " time if it's falling around the mock." }, { "end": 2076.8799999999997, "start": 2074.52, "text": " So how could I expect it to learn?" }, { "end": 2081.48, "start": 2076.88, "text": " And I make the joke that I started my, my, each of these is out of spite of like, no," }, { "end": 2086, "start": 2081.48, "text": " I have to be able to solve this problem because like it was an embarrassing moment in my," }, { "end": 2091.4, "start": 2086, "text": " in my, in the beginning of my career, and then comes Atari and, and all those things." }, { "end": 2094.36, "start": 2091.4, "text": " So I was generally curious about this question." }, { "end": 2098.52, "start": 2094.36, "text": " Like, well, I believe that we shouldn't be hand crafting rewards that are telling the" }, { "end": 2102.08, "start": 2098.52, "text": " agent how to do something like, oh, you should follow along this path because then we" }, { "end": 2103.36, "start": 2102.08, "text": " are solving the problem for the agent." }, { "end": 2106.52, "start": 2103.36, "text": " But if we want to reward the agent by just doing the right thing, let's say, talking" }, { "end": 2109.96, "start": 2106.52, "text": " to a charging station, well, how can we expect it to do that?" }, { "end": 2116.28, "start": 2109.96, "text": " And this was a very, a very important question that kept bugging me for a long time." }, { "end": 2120.72, "start": 2116.28, "text": " And, and then the Atari games, this is all these Atari successes start to show up and then" }, { "end": 2124.8, "start": 2120.72, "text": " lo and behold, like I guess everyone has heard about Motezuma's revenge, except how challenging" }, { "end": 2125.8, "start": 2124.8, "text": " it is." }, { "end": 2128.08, "start": 2125.8, "text": " And it's just another instance of the same problem." }, { "end": 2132.08, "start": 2128.08, "text": " And as you can expect, this problem starts to show up all sorts of places when you start" }, { "end": 2134.56, "start": 2132.08, "text": " to think about reinforcement learning problems." }, { "end": 2139.88, "start": 2134.56, "text": " So it seemed to, it was a question that picked my mind and I was curious about it." }, { "end": 2147.12, "start": 2139.88, "text": " And eventually what I ended up proposing, like it's, and we can talk more about this," }, { "end": 2149.48, "start": 2147.12, "text": " in a more low level detail." }, { "end": 2154.08, "start": 2149.48, "text": " But the, let's say, the thesis statement that I had was that I was proposing that at the" }, { "end": 2157.68, "start": 2154.08, "text": " end, we should be learning representations." }, { "end": 2161.52, "start": 2157.68, "text": " And these representations have, we should be able to learn the representations without" }, { "end": 2164.32, "start": 2161.52, "text": " relying on the reward function." 
}, { "end": 2168.88, "start": 2164.32, "text": " Meaning that if you just say that, oh, I'm going to train a DPRL agent with, I don't" }, { "end": 2171.48, "start": 2168.88, "text": " know, the squareity delos or some other loss that you like." }, { "end": 2175.4, "start": 2171.48, "text": " And I'm going to learn to call the representation of whatever are the ways that I learned by" }, { "end": 2178.56, "start": 2175.4, "text": " back prop at the beginning of the network." }, { "end": 2181.44, "start": 2178.56, "text": " This is not going to cut it because if you never see a reward, you're not going to have" }, { "end": 2183.04, "start": 2181.44, "text": " a signal to back point." }, { "end": 2189.84, "start": 2183.04, "text": " So, but if we learn a representation that doesn't not depend on a non-zero reward, and we" }, { "end": 2193.2000000000003, "start": 2189.84, "text": " should use that representation to guide the exploration." }, { "end": 2197.72, "start": 2193.2, "text": " Meaning that, if I mean in an environment in a room, let's say, and I learned a representation" }, { "end": 2202.72, "start": 2197.72, "text": " about that room, I'm not going to be able to learn a representation, a very good representation" }, { "end": 2205.48, "start": 2202.72, "text": " about a door if I rarely go to the others." }, { "end": 2207.52, "start": 2205.48, "text": " And that's actually the really big problem, right?" }, { "end": 2211.56, "start": 2207.52, "text": " Like, this is exploration problem because now you have a, let's say, a bottleneck and" }, { "end": 2214.4399999999996, "start": 2211.56, "text": " you have to go over that, just to give an example." }, { "end": 2218.3599999999997, "start": 2214.4399999999996, "text": " And the representation is, if you, depending on the representation that you learn, you're" }, { "end": 2223.7200000000003, "start": 2218.36, "text": " able to capture exactly that, and then what I was proposed that we should use this representation" }, { "end": 2228.32, "start": 2223.7200000000003, "text": " to actually guide us in the exploration process and tell the agent, oh, no, no, look, this," }, { "end": 2231.84, "start": 2228.32, "text": " all this is you mastered, but that part over there you didn't." }, { "end": 2234.2000000000003, "start": 2231.84, "text": " So maybe you should try to go there." }, { "end": 2238.48, "start": 2234.2000000000003, "text": " And that's, and that was the general just of the work." }, { "end": 2242.1200000000003, "start": 2238.48, "text": " So in these papers, a number of terms come up." }, { "end": 2249.2, "start": 2242.12, "text": " I wonder if we can take a moment to just to talk about these terms and brief, for example," }, { "end": 2250.3599999999997, "start": 2249.2, "text": " proto-value function." }, { "end": 2251.3599999999997, "start": 2250.3599999999997, "text": " Yeah." }, { "end": 2252.3599999999997, "start": 2251.3599999999997, "text": " What does that mean?" }, { "end": 2253.3599999999997, "start": 2252.3599999999997, "text": " Is that right?" }, { "end": 2255.3599999999997, "start": 2253.3599999999997, "text": " And is that a useful concept today?" }, { "end": 2256.3599999999997, "start": 2255.3599999999997, "text": " Yes, it is." }, { "end": 2259.7599999999998, "start": 2256.3599999999997, "text": " So, or I mean, I think it is." }, { "end": 2264.24, "start": 2259.7599999999998, "text": " So it was exactly to the question that I was telling you about, right?" 
}, { "end": 2268.4, "start": 2264.24, "text": " Like, if we learn the representation, then the representation should guide us, sure," }, { "end": 2270.44, "start": 2268.4, "text": " sure, sure, where we want to be." }, { "end": 2275, "start": 2270.44, "text": " And proto-value functions are one of those representations that you could learn." }, { "end": 2277.2400000000002, "start": 2275, "text": " It predates the parallel." }, { "end": 2280.16, "start": 2277.2400000000002, "text": " It predates the, the DQN paper." }, { "end": 2282.6, "start": 2280.16, "text": " It was introduced actually in 2005." }, { "end": 2286.56, "start": 2282.6, "text": " And at the time, it was introduced just as a way of learning a representation." }, { "end": 2290.8, "start": 2286.56, "text": " And the word proto-value functions comes exactly because it comes before you learn the" }, { "end": 2292.12, "start": 2290.8, "text": " value function." }, { "end": 2297.32, "start": 2292.12, "text": " And it was this method that says that, look, if we think about the environment as a graph," }, { "end": 2302.76, "start": 2297.32, "text": " we could actually try to capture the properties of that graph into a set of features." }, { "end": 2306.92, "start": 2302.76, "text": " And these properties are then, and then the way the paper, the, the, the, the, sure," }, { "end": 2310.6800000000003, "start": 2306.92, "text": " the armahadevance paper, a, a, a, a, my, join this paper." }, { "end": 2315.28, "start": 2310.6800000000003, "text": " What this paper does, what, what they do is that they, they say, look, these properties" }, { "end": 2319.56, "start": 2315.28, "text": " are good enough that you can actually use them as features and you learn to maximize" }, { "end": 2320.56, "start": 2319.56, "text": " return." }, { "end": 2324.56, "start": 2320.56, "text": " So proto-value functions were this representation learning method." }, { "end": 2325.76, "start": 2324.56, "text": " Let's put this way." }, { "end": 2331.32, "start": 2325.76, "text": " Now, there are some very pretty pictures that, in the original papers, and then I really" }, { "end": 2332.32, "start": 2331.32, "text": " like them." }, { "end": 2333.88, "start": 2332.32, "text": " So oftentimes you find them in my papers as well." }, { "end": 2339.2000000000003, "start": 2333.88, "text": " I find them pretty, which is, let's say, you, you have a grid world, and then you try" }, { "end": 2340.8, "start": 2339.2000000000003, "text": " to learn this proto-value function." }, { "end": 2343.88, "start": 2340.8, "text": " Then you can see like what the representation looks like." }, { "end": 2348.36, "start": 2343.88, "text": " And if you think about, for example, an environment with four rooms, you can see that what this" }, { "end": 2352.88, "start": 2348.36, "text": " proto-value function capture are exactly the four different rooms that you, that you have." }, { "end": 2354.4, "start": 2352.88, "text": " These are the first features that you learn." }, { "end": 2356.08, "start": 2354.4, "text": " So realize that look, they are different." }, { "end": 2359.7200000000003, "start": 2356.08, "text": " These, these rooms are kind of, when you're inside the room, all these states kind of look" }, { "end": 2363.36, "start": 2359.7200000000003, "text": " the same, but it's very different from being outside the room." 
}, { "end": 2368.7200000000003, "start": 2363.36, "text": " And when I was looking at them, and at the time this was a representation learning process," }, { "end": 2374.92, "start": 2368.7200000000003, "text": " they, and, and just to be more precise here, what proto-value functions are is you, you," }, { "end": 2376.92, "start": 2374.92, "text": " you think about the environment as a graph." }, { "end": 2381.88, "start": 2376.92, "text": " From that graph, you can compute the JSON-C matrix or, and from that, the JSON matrix, you" }, { "end": 2385.2400000000002, "start": 2381.88, "text": " can compute a matrix that is called the graph-flow-placian." }, { "end": 2388.96, "start": 2385.2400000000002, "text": " And then the proto-value functions are the eigenfactors of that graph-flow-placian." }, { "end": 2393.96, "start": 2388.96, "text": " And the reason I'm saying this is because the eigenfactors, they are, they are what actually" }, { "end": 2396.36, "start": 2393.96, "text": " captures this dynamics of the environment." }, { "end": 2400.48, "start": 2396.36, "text": " Well, this diffusion properties of the environment, if you, how, how, how they can, it would diffuse" }, { "end": 2401.96, "start": 2400.48, "text": " in that environment." }, { "end": 2408, "start": 2401.96, "text": " And what, what I realized that way, but wait, if, if, if we have a representation that," }, { "end": 2411.88, "start": 2408, "text": " if we have a representation that it's telling me that, look, there is one room, and here" }, { "end": 2416.52, "start": 2411.88, "text": " is another room, and we can just learn that, if it, it's just learnable." }, { "end": 2421.44, "start": 2416.52, "text": " What if instead of using that as a representation, I used that as a, as a goal, and I said, well," }, { "end": 2425.6, "start": 2421.44, "text": " I want to actually learn an option that takes me to the room that I can identify." }, { "end": 2430.48, "start": 2425.6, "text": " So out of the box, you immediately learn four options, which are like this sequence of" }, { "end": 2433.8, "start": 2430.48, "text": " actions that take you to the four rooms in the environment." }, { "end": 2439.1600000000003, "start": 2433.8, "text": " And which allows you, or allows the agent to operate at a different level of abstraction," }, { "end": 2440.1600000000003, "start": 2439.1600000000003, "text": " right?" }, { "end": 2445.1200000000003, "start": 2440.1600000000003, "text": " So if I, if I tell someone, oh, there is a million dollars hidden somewhere in your house," }, { "end": 2450.1200000000003, "start": 2445.1200000000003, "text": " you are not going to go back and forth in terms of steps until you find that, right?" }, { "end": 2452.6000000000004, "start": 2450.1200000000003, "text": " You're going to say, oh, maybe I should go to this room or to this room." }, { "end": 2455.48, "start": 2452.6000000000004, "text": " You're going to think about a different level of abstraction." }, { "end": 2459.88, "start": 2455.48, "text": " And what at the time, this was the first paper that I wrote on this topic was about" }, { "end": 2464.2000000000003, "start": 2459.88, "text": " the portfolio value functions." 
}, { "end": 2468.36, "start": 2464.2000000000003, "text": " We could use this representation to actually learn the options that allow us to explore" }, { "end": 2473.32, "start": 2468.36, "text": " much better and assess better, connecting the points that were further apart in this" }, { "end": 2476.48, "start": 2473.32, "text": " graph that is the environment." }, { "end": 2478.2000000000003, "start": 2476.48, "text": " And that's what we did." }, { "end": 2483.6, "start": 2478.2000000000003, "text": " So as you say, the portfolio value function and some of these concepts show up in earlier" }, { "end": 2487, "start": 2483.6, "text": " papers and we see a lot of examples in grid worlds." }, { "end": 2493.28, "start": 2487, "text": " I wonder, are these, do these notions carry over to higher dimensional observations?" }, { "end": 2498.68, "start": 2493.28, "text": " Like, I guess, if we thought about the graph of states and conactivities and adjacencies" }, { "end": 2501.52, "start": 2498.68, "text": " in Atari, it would be quite a graph." }, { "end": 2505.8, "start": 2501.52, "text": " We can do these concepts carry over to the higher dimensional cases where the graphs" }, { "end": 2507.84, "start": 2505.8, "text": " are less crisp and simple." }, { "end": 2510.32, "start": 2507.84, "text": " Yeah, no, that's a great question." }, { "end": 2515, "start": 2510.32, "text": " And a lot of the research that I did in my PhD on this line of work about learning options" }, { "end": 2516.92, "start": 2515, "text": " out of that was exactly an essential." }, { "end": 2518.48, "start": 2516.92, "text": " I actually scale those things." }, { "end": 2525.08, "start": 2518.48, "text": " So the eventually out of the portfolio functions, what I noticed was that, so maybe there are" }, { "end": 2526.56, "start": 2525.08, "text": " two threads there." }, { "end": 2531.2000000000003, "start": 2526.56, "text": " One is that from portfolio functions, what we are actually talking about is this notion" }, { "end": 2536.44, "start": 2531.2000000000003, "text": " of looking at the eigenvectors of the graph class, which is the name of the portfolio" }, { "end": 2537.44, "start": 2536.44, "text": " function." }, { "end": 2546.04, "start": 2537.44, "text": " And by 2019, even 2008, I believe, a couple of papers start to come up how you could" }, { "end": 2556.32, "start": 2546.04, "text": " actually use Neuronatrix to estimate this Laplace function with Neuronatrix in a higher" }, { "end": 2557.32, "start": 2556.32, "text": " dimensional scale." }, { "end": 2560.72, "start": 2557.32, "text": " So one of the papers that comes to mind, for example, is a paper that is entitled Laplace" }, { "end": 2565.48, "start": 2560.72, "text": " in an Arale by if I knew and others at the time they were, if I was an internet Google" }, { "end": 2566.48, "start": 2565.48, "text": " Prey." }, { "end": 2571.24, "start": 2566.48, "text": " It was literally scaling these ideas and say, look, we can use Neuronatrix to estimate" }, { "end": 2572.6, "start": 2571.24, "text": " this in a much better way." }, { "end": 2575.12, "start": 2572.6, "text": " And at the time, they had experiments with Mujoko, for example." 
}, { "end": 2580.64, "start": 2575.12, "text": " So there was research being done and there are other papers as well, Spectraifers Networks" }, { "end": 2585.08, "start": 2580.64, "text": " from David Fall is another example that comes to mind, that they were trying to say that," }, { "end": 2589.56, "start": 2585.08, "text": " look, we can capture this Spectraport properties with Neuronatrix and learn them." }, { "end": 2591.12, "start": 2589.56, "text": " So the answer is yes." }, { "end": 2593.12, "start": 2591.12, "text": " Some other people did scale this up." }, { "end": 2598.6, "start": 2593.12, "text": " And on a different thread, which is what I followed, I was also looking at, when I was trying" }, { "end": 2603.68, "start": 2598.6, "text": " to scale this up, what I noticed is that, what I noticed is that the eigenvectors of" }, { "end": 2610.44, "start": 2603.68, "text": " the graph Laplace, during the same as the eigenvectors of something that is called the" }, { "end": 2612.2, "start": 2610.44, "text": " success representation." }, { "end": 2619.56, "start": 2612.2, "text": " And the success representation also had extensions to larger domains, known as successor features." }, { "end": 2623.16, "start": 2619.56, "text": " And I was also able to leverage that to learn this option." }, { "end": 2629.52, "start": 2623.16, "text": " So there were different threads where we could explore that basically allowed us to scale" }, { "end": 2630.6, "start": 2629.52, "text": " this up." }, { "end": 2632.8399999999997, "start": 2630.6, "text": " I don't think that the research is over." }, { "end": 2636.52, "start": 2632.84, "text": " I still think that there are a lot of challenges should do." }, { "end": 2639.76, "start": 2636.52, "text": " I was never happy with the results that we got in Atari, for example." }, { "end": 2642.08, "start": 2639.76, "text": " But I think that it shows promise." }, { "end": 2646.48, "start": 2642.08, "text": " And our ability to scale this exactly, to come up with new methods, show that it's a fruitful" }, { "end": 2648.1600000000003, "start": 2646.48, "text": " research area." }, { "end": 2649.1600000000003, "start": 2648.1600000000003, "text": " Interesting." }, { "end": 2650.1600000000003, "start": 2649.1600000000003, "text": " Okay." }, { "end": 2653.2400000000002, "start": 2650.1600000000003, "text": " But is there a kind of a chicken and egg problem here?" }, { "end": 2658.08, "start": 2653.2400000000002, "text": " Like if I just think back to the linear algebra to get the eigenvectors for matrix, you" }, { "end": 2664.08, "start": 2658.08, "text": " kind of need the matrix, at least some approximation of the matrix, not just a little corner of" }, { "end": 2665.08, "start": 2664.08, "text": " it." }, { "end": 2670.7599999999998, "start": 2665.08, "text": " And in RL, we, starting out, we wouldn't know all the state transitions." }, { "end": 2677.12, "start": 2670.7599999999998, "text": " So how do we determine the exploration strategies when we don't know the state's space in advance?" }, { "end": 2678.12, "start": 2677.12, "text": " How does that work?" }, { "end": 2679.12, "start": 2678.12, "text": " Yeah." }, { "end": 2681.64, "start": 2679.12, "text": " So I think that's a great question." }, { "end": 2685.64, "start": 2681.64, "text": " And that's the question that I'm actually excited about on this method." 
}, { "end": 2691.16, "start": 2685.64, "text": " Because you're absolutely right for a lot of these methods, like even the dating back" }, { "end": 2695.3599999999997, "start": 2691.16, "text": " to the first one, which is this graph, the path of protovalu functions, there's some" }, { "end": 2696.8399999999997, "start": 2695.3599999999997, "text": " knowledge of the environment." }, { "end": 2699.6, "start": 2696.8399999999997, "text": " So in a sense, you know the graph and then you compute the eigenvectors." }, { "end": 2702.4, "start": 2699.6, "text": " But if you don't know the graph, how can you know?" }, { "end": 2703.96, "start": 2702.4, "text": " How can you know?" }, { "end": 2711.12, "start": 2703.96, "text": " And what I noticed, and this has been a big chunk of my research, is that we can have incremental," }, { "end": 2713.68, "start": 2711.12, "text": " we can have learning in incremental ways." }, { "end": 2715.72, "start": 2713.68, "text": " And what I mean by that is the following." }, { "end": 2721.3199999999997, "start": 2715.72, "text": " You can have an agent, a different and just wondering the environment for a while and learning" }, { "end": 2722.3199999999997, "start": 2721.3199999999997, "text": " a representation." }, { "end": 2726.64, "start": 2722.3199999999997, "text": " This is not going to be a representation that captures the whole environment." }, { "end": 2730.7999999999997, "start": 2726.64, "text": " It's just the place that the agent can get to." }, { "end": 2738.8399999999997, "start": 2730.7999999999997, "text": " Out of that, you, if you look at what you get by learning with these methods that I" }, { "end": 2743.3599999999997, "start": 2738.8399999999997, "text": " proposed, you'll basically, you'll have an agent that learns." }, { "end": 2747.2000000000003, "start": 2743.36, "text": " You can extract options out of this representation that takes what I call the boundary of your" }, { "end": 2748.2000000000003, "start": 2747.2000000000003, "text": " knowledge." }, { "end": 2754.56, "start": 2748.2000000000003, "text": " So in a sense, you've, you've did around for quite some time and they were realizing," }, { "end": 2758.56, "start": 2754.56, "text": " okay, now you want to learn how to get to the border of my knowledge, you, to the border" }, { "end": 2763.2400000000002, "start": 2758.56, "text": " of where I've been, you go there and then you can do it again, but now you're starting" }, { "end": 2765.04, "start": 2763.2400000000002, "text": " from somewhere else." }, { "end": 2770, "start": 2765.04, "text": " And that makes the, and then you can slowly build your knowledge about the environment" }, { "end": 2774.12, "start": 2770, "text": " for slowly visit the environment because you are not being having to pay the price of" }, { "end": 2778.56, "start": 2774.12, "text": " starting from always the same set of states when a random walk, which is really slow for" }, { "end": 2780.04, "start": 2778.56, "text": " you to move." }, { "end": 2784.88, "start": 2780.04, "text": " The very simple example that I had, it's a workshop, a workshop paper that we wrote back" }, { "end": 2790.68, "start": 2784.88, "text": " in 2016, is that imagine that you just want to walk down a line, okay, and they state" }, { "end": 2794.52, "start": 2790.68, "text": " that you have, you're going to represent your states by the binary encoding of their" }, { "end": 2795.52, "start": 2794.52, "text": " numbers." 
}, { "end": 2798.28, "start": 2795.52, "text": " So states one, for example, is going to be 0, 0, 0, 1." }, { "end": 2800.76, "start": 2798.28, "text": " State tree is going to be 0, 0, 1, 0." }, { "end": 2803.6000000000004, "start": 2800.76, "text": " State tree is going to be 0, 0, 0, 0." }, { "end": 2809.44, "start": 2803.6000000000004, "text": " And if you do that, and you just start from the state number one, let's say, you're going" }, { "end": 2813.8, "start": 2809.44, "text": " to go a lot, just a two, a state three, state four, right, because a random walk moves" }, { "end": 2817.1200000000003, "start": 2813.8, "text": " at the square root, number of time steps and expectation from its start state." }, { "end": 2821.1600000000003, "start": 2817.1200000000003, "text": " So it's going to be very close to the origin, which means that we're going to flip those" }, { "end": 2823.4, "start": 2821.1600000000003, "text": " first bits on your representation a lot." }, { "end": 2828.48, "start": 2823.4, "text": " Oh, I see the first bit flipping all the time and so on." }, { "end": 2834.6800000000003, "start": 2828.48, "text": " At the moment that you, at the moment that you, that you say, okay, but I want to learn" }, { "end": 2837.8, "start": 2834.6800000000003, "text": " an option out of that, which is what I call an eigen option." }, { "end": 2842.64, "start": 2837.8, "text": " What is, this method is try to do is that they try to learn the thing that you know you" }, { "end": 2845.2000000000003, "start": 2842.64, "text": " can do, but it's very difficult for you to do." }, { "end": 2849.32, "start": 2845.2000000000003, "text": " Means that maybe it's flipping the fourth bit because you haven't, you have flipped it" }, { "end": 2852.12, "start": 2849.32, "text": " once or twice, but you had flipped it constantly." }, { "end": 2857, "start": 2852.12, "text": " So now you learn an option that says flip the fourth bit, right?" }, { "end": 2861.3599999999997, "start": 2857, "text": " So now if you're going to do a random walk again, because we're just doing random walks," }, { "end": 2864.68, "start": 2861.3599999999997, "text": " every now and then you might sample that option, which says flip the fourth bit." }, { "end": 2869.68, "start": 2864.68, "text": " So now you're not starting from the start state anymore, you're starting from the state" }, { "end": 2877.6, "start": 2869.68, "text": " 16 or 18 now, I'm just 16, I guess, 16 and then if you want to flip the, the state," }, { "end": 2881.4, "start": 2877.6, "text": " the fifth bit, which is the state 32, you're already halfway there, right?" }, { "end": 2885.48, "start": 2881.4, "text": " You don't have to start from the start state and do 32 right actions." }, { "end": 2886.88, "start": 2885.48, "text": " Let's put this way." }, { "end": 2891.6800000000003, "start": 2886.88, "text": " You just take one action that takes you halfway through there and then you keep going." }, { "end": 2896.6, "start": 2891.6800000000003, "text": " And we could show that, I mean, this is a cartoonish depiction of this, but these ideas" }, { "end": 2901.52, "start": 2896.6, "text": " are very powerful and it does allow us to go around and around on the environment while" }, { "end": 2906.2400000000002, "start": 2901.52, "text": " you're the random walk is too struggling to get past the first couple of those in states." 
}, { "end": 2911.2000000000003, "start": 2906.2400000000002, "text": " And it's exactly this notion of, I want you to learn to do things" }, { "end": 2913.68, "start": 2911.2, "text": " that I know I can do, but it's difficult." }, { "end": 2916.8399999999997, "start": 2913.68, "text": " And the representation is capturing this, and the fact that we don't have the whole graph" }, { "end": 2920.52, "start": 2916.8399999999997, "text": " is exactly what's giving us this information." }, { "end": 2922.52, "start": 2920.52, "text": " So in the past we did this for options." }, { "end": 2925.3999999999996, "start": 2922.52, "text": " When I was trying to learn these options online, I also realized that we could do these" }, { "end": 2930.16, "start": 2925.3999999999996, "text": " counts, because the representation, the successor representation, was implicitly encoding" }, { "end": 2932.3999999999996, "start": 2930.16, "text": " state visitation counts." }, { "end": 2935.2, "start": 2932.3999999999996, "text": " So we actually had a paper doing that as well." }, { "end": 2940.24, "start": 2935.2, "text": " But I would say it all stems from the very same nature of the problem." }, { "end": 2943.7599999999998, "start": 2940.24, "text": " And it's this idea that the representation is guiding you, because you are learning and you" }, { "end": 2946.7599999999998, "start": 2943.7599999999998, "text": " don't exactly have information about the whole environment." }, { "end": 2947.7599999999998, "start": 2946.7599999999998, "text": " Very cool." }, { "end": 2949.56, "start": 2947.7599999999998, "text": " Okay, thanks for explaining that for us." }, { "end": 2953.3999999999996, "start": 2949.56, "text": " And can you tell us what do you mean by successor representation?" }, { "end": 2954.72, "start": 2953.3999999999996, "text": " Briefly, what does that mean?" }, { "end": 2955.72, "start": 2954.72, "text": " Yeah, yeah." }, { "end": 2961.6, "start": 2955.72, "text": " So the successor representation is this idea that is relatively old by machine learning" }, { "end": 2962.6, "start": 2961.6, "text": " standards." }, { "end": 2966.6, "start": 2962.6, "text": " It's from 1993, when Peter Dayan introduced it." }, { "end": 2972, "start": 2966.6, "text": " And it was this notion that, when you think about" }, { "end": 2977.4, "start": 2972, "text": " how you represent a state, you don't want to think about the state just in terms" }, { "end": 2981.3199999999997, "start": 2977.4, "text": " of where it is in this space, like in Euclidean terms." }, { "end": 2985.56, "start": 2981.3199999999997, "text": " But you also want to represent the state compared to the states that you can get to; you" }, { "end": 2990.8399999999997, "start": 2985.56, "text": " also want to represent that state as a function of the states that you can get to from" }, { "end": 2993.16, "start": 2990.8399999999997, "text": " that state, which are the successor states." }, { "end": 2998.44, "start": 2993.16, "text": " So the example is that imagine that you are behind a wall, right?" }, { "end": 3002.72, "start": 2998.44, "text": " The state that is exactly opposite to you on the other side of the wall." }, { "end": 3005.3999999999996, "start": 3002.72, "text": " It's very far from you because you have to go around the wall."
}, { "end": 3009.64, "start": 3005.3999999999996, "text": " But if you just look at the Euclidean distance, arguably these states are similar." }, { "end": 3013.24, "start": 3009.64, "text": " And Peter Dayan had this insight of saying, no, no, let's look at what the successor" }, { "end": 3014.3999999999996, "start": 3013.24, "text": " states are." }, { "end": 3020.8799999999997, "start": 3014.3999999999996, "text": " So given that I'm in a current state, what is the discounted number of visits to the states that" }, { "end": 3025.44, "start": 3020.88, "text": " follow, and I'm going to just look at the policy that I'm following, what is the expectation," }, { "end": 3029.76, "start": 3025.44, "text": " what is the expected number of times I'm going to visit that state in my" }, { "end": 3031.52, "start": 3029.76, "text": " trajectory." }, { "end": 3036.28, "start": 3031.52, "text": " And this successor representation gives you this notion of the dynamics of the environment." }, { "end": 3042.6, "start": 3036.28, "text": " And again, without thinking about the reward, and it has all these connections to neuroscience:" }, { "end": 3046.12, "start": 3042.6, "text": " there are a couple of papers now suggesting that the hippocampus actually" }, { "end": 3050.64, "start": 3046.12, "text": " encodes something really similar to the successor representation." }, { "end": 3056.68, "start": 3050.64, "text": " And I like to think about it as a credit assignment object, that if I know where the rewards are" }, { "end": 3060.7599999999998, "start": 3056.68, "text": " in the environment, because I know exactly what the next states are, the dynamics of the" }, { "end": 3065.7599999999998, "start": 3060.7599999999998, "text": " environment, I can easily assign credit because I know, oh, given that the reward is here" }, { "end": 3070.7599999999998, "start": 3065.7599999999998, "text": " and this is how I'm going to visit states in the future, I now have value functions." }, { "end": 3078.36, "start": 3070.76, "text": " So it's a super powerful object, we have been using it for discovering options, for exploration," }, { "end": 3083.44, "start": 3078.36, "text": " people at DeepMind, for example, André Barreto and others, have been doing this for transfer," }, { "end": 3088.84, "start": 3083.44, "text": " but it's this notion of just knowing what the future holds, in a sense, and anticipating" }, { "end": 3089.84, "start": 3088.84, "text": " that." }, { "end": 3095.1200000000003, "start": 3089.84, "text": " And so if I understood you right, it's not a pure representation of the transition function" }, { "end": 3099.28, "start": 3095.1200000000003, "text": " but it's conditioned on the agent's policy, on some kind of policy." }, { "end": 3100.28, "start": 3099.28, "text": " Exactly." }, { "end": 3107.1200000000003, "start": 3100.28, "text": " So, actually, if you want to be very precise, what you're going to do is that you're just" }, { "end": 3111.48, "start": 3107.1200000000003, "text": " going to, the successor representation is, you can think about it not anymore as just learning" }, { "end": 3116.6400000000003, "start": 3111.48, "text": " how to maximize a reward, let's say, so you can think about doing temporal difference" }, { "end": 3120.48, "start": 3116.6400000000003, "text": " learning or something like that, but instead of actually observing the reward, you're going" }, { "end": 3122.96, "start": 3120.48, "text": " to get a reward of one when you visit a state."
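As a minimal sketch of the idea being described here, the tabular successor representation can be learned with ordinary TD updates in which the "reward" for the column of state j is simply the indicator of visiting j. The Python below is an illustration written for this transcript, not code from Dayan's 1993 paper or from any of the work discussed; the toy 5-state chain, the step size, and the discount factor are assumptions for the example.

```python
import numpy as np

def successor_representation_td(transitions, n_states, gamma=0.95, alpha=0.1):
    """Tabular successor representation learned with TD(0).

    `transitions` is a list of (s, s_next) pairs gathered while following a
    fixed policy.  psi[s, j] estimates the expected discounted number of
    visits to state j (including the current one) when starting from state
    s under that policy: a value function whose cumulant for column j is
    the indicator of being in state j.
    """
    psi = np.zeros((n_states, n_states))
    for s, s_next in transitions:
        cumulant = np.zeros(n_states)
        cumulant[s] = 1.0                       # 'a reward of one when you visit a state'
        td_target = cumulant + gamma * psi[s_next]
        psi[s] += alpha * (td_target - psi[s])  # one TD(0) update per transition
    return psi

if __name__ == "__main__":
    # Toy data: a random walk on a 5-state chain with reflecting ends.
    rng = np.random.default_rng(0)
    s, transitions = 0, []
    for _ in range(20000):
        s_next = int(np.clip(s + rng.choice([-1, 1]), 0, 4))
        transitions.append((s, s_next))
        s = s_next
    print(np.round(successor_representation_td(transitions, n_states=5), 2))
```

Each row of the resulting matrix is literally a value function under the random-walk policy, with state visitation as the cumulant, which is the point made a few segments further on.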
}, { "end": 3125.76, "start": 3122.96, "text": " So you're going to have one successor representation, if you will." }, { "end": 3129.48, "start": 3125.76, "text": " So the successor representation is always with respect to two states, right?" }, { "end": 3133, "start": 3129.48, "text": " But given a current state, you basically, every time that you visit a state, let's say" }, { "end": 3135.6, "start": 3133, "text": " two, you get a reward of one." }, { "end": 3137.72, "start": 3135.6, "text": " And then you learn a successor representation for that state." }, { "end": 3139.88, "start": 3137.72, "text": " And then you do this for state three and four." }, { "end": 3144.16, "start": 3139.88, "text": " So basically, you are going to have a vector at the end, which is, for a" }, { "end": 3148.28, "start": 3144.16, "text": " given state, you have the vector for that state, and it's conditioned on the policy." }, { "end": 3152.52, "start": 3148.28, "text": " It's literally just a value function, but instead of a reward, you actually have a cumulant," }, { "end": 3154.52, "start": 3152.52, "text": " which is the state visitation." }, { "end": 3155.52, "start": 3154.52, "text": " Okay." }, { "end": 3158.2, "start": 3155.52, "text": " And another phrase that came up here is covering options." }, { "end": 3161.24, "start": 3158.2, "text": " Can you help us understand that phrase as well, Marlos?" }, { "end": 3162.24, "start": 3161.24, "text": " Yes, yes." }, { "end": 3167.68, "start": 3162.24, "text": " That's a really neat idea that came out of George Konidaris's group." }, { "end": 3170, "start": 3167.68, "text": " Yuu Jinnai was the grad student who proposed it." }, { "end": 3171.7999999999997, "start": 3170, "text": " He was the first author." }, { "end": 3176.8799999999997, "start": 3171.7999999999997, "text": " And when I was finishing my PhD, one of the problems that I had with this concept of eigenoptions" }, { "end": 3181.7599999999998, "start": 3176.8799999999997, "text": " was that I was learning too many options, because I was looking at the eigenvectors of the" }, { "end": 3185.2, "start": 3181.7599999999998, "text": " graph Laplacian, and I had as many eigenvectors as I had states." }, { "end": 3189.52, "start": 3185.2, "text": " And it was not clear to me how many options do I want?" }, { "end": 3190.52, "start": 3189.52, "text": " Do I want 10?" }, { "end": 3191.52, "start": 3190.52, "text": " Do I want 15?" }, { "end": 3197.68, "start": 3191.52, "text": " I could do an empirical analysis, but I was not able to develop the theory at the time" }, { "end": 3198.68, "start": 3197.68, "text": " for that." }, { "end": 3203.2, "start": 3198.68, "text": " And then Yuu Jinnai came along and they had this very cute idea, which is what they call" }, { "end": 3211.4399999999996, "start": 3203.2, "text": " the covering options, which was, wait, if you have the eigenvector, the first eigenvector," }, { "end": 3214.64, "start": 3211.4399999999996, "text": " and what the eigenvector is giving is this notion of connectivity." }, { "end": 3221.64, "start": 3214.64, "text": " You just need to connect the two states that you would get from the eigenvector, like" }, { "end": 3226.64, "start": 3221.64, "text": " the ones that the eigenvector has shown you, like how distant different states are." }, { "end": 3231.6, "start": 3226.64, "text": " And you just connect the two furthest apart states that you have."
}, { "end": 3236.4, "start": 3231.6, "text": " What you get by doing that is that you could very well be reducing the diameter of the" }, { "end": 3237.4, "start": 3236.4, "text": " graph." }, { "end": 3241.48, "start": 3237.4, "text": " By reducing the diameter of the graph, you're making the environment more connected." }, { "end": 3248.12, "start": 3241.48, "text": " And if you do this enough times, you're going to improve exploration." }, { "end": 3251.4, "start": 3248.12, "text": " The idea of covering options was discovering options that would connect the two states" }, { "end": 3253.32, "start": 3251.4, "text": " that were furthest apart." }, { "end": 3258.72, "start": 3253.32, "text": " And the pretty thing about it is that it allows you to just answer the question, how many" }, { "end": 3260.72, "start": 3258.72, "text": " eigenvectors do I need?" }, { "end": 3264.32, "start": 3260.72, "text": " Because the top ones are the ones giving you this notion of distance." }, { "end": 3266.48, "start": 3264.32, "text": " So Yuu Jinnai did this." }, { "end": 3267.48, "start": 3266.48, "text": " And when I saw this," }, { "end": 3268.48, "start": 3267.48, "text": " it was very exciting." }, { "end": 3272.04, "start": 3268.48, "text": " This is really cool because it actually answers one of the questions that I was thinking" }, { "end": 3273.04, "start": 3272.04, "text": " about." }, { "end": 3277.8, "start": 3273.04, "text": " So I reached out to him, and we worked together to extend this notion of covering options" }, { "end": 3282.36, "start": 3277.8, "text": " to bigger domains in a more scalable way, using also the expertise that I had about" }, { "end": 3286.16, "start": 3282.36, "text": " tricks to scale these things up." }, { "end": 3291.12, "start": 3286.16, "text": " And eventually we wrote this deep covering options paper that builds on this idea of, again," }, { "end": 3294.76, "start": 3291.12, "text": " trying to find the sequences of actions that allow you to better connect the environment." }, { "end": 3295.76, "start": 3294.76, "text": " Awesome." }, { "end": 3296.76, "start": 3295.76, "text": " Okay." }, { "end": 3304.84, "start": 3296.76, "text": " So how can we compare these types of approaches to maybe how humans might think of exploration?" }, { "end": 3310.96, "start": 3304.84, "text": " Like I'm imagining, if you ask a kid who's playing Atari, can you try something new?" }, { "end": 3314.1600000000003, "start": 3310.96, "text": " They might respond with something a little bit more semantic." }, { "end": 3319.84, "start": 3314.1600000000003, "text": " Like, I'm going to make the ship go to somewhere where it hasn't been before, or I'm going" }, { "end": 3322.5200000000004, "start": 3319.84, "text": " to shoot a type of enemy that I haven't shot before." }, { "end": 3327.16, "start": 3322.52, "text": " My sense is that they would describe things in terms of concepts that they've parsed" }, { "end": 3335.68, "start": 3327.16, "text": " out like ship and shot and enemy, which I think basic RL wouldn't have access to, those" }, { "end": 3336.92, "start": 3335.68, "text": " distilled concepts."
}, { "end": 3345.2, "start": 3336.92, "text": " I wonder, is any of your work on representations related to this idea of distilling these concepts" }, { "end": 3351.24, "start": 3345.2, "text": " in a semantic way, or do you think we might, at some point, need a semantic layer to explore" }, { "end": 3354.04, "start": 3351.24, "text": " in the same way that a child might do?" }, { "end": 3356.9599999999996, "start": 3354.04, "text": " Yeah, I think that's a good question." }, { "end": 3361.04, "start": 3356.9599999999996, "text": " Maybe to address your first question about how these exploration methods relate to" }, { "end": 3362.04, "start": 3361.04, "text": " these notions." }, { "end": 3364.12, "start": 3362.04, "text": " I think that they relate very little." }, { "end": 3369.12, "start": 3364.12, "text": " There is definitely a notion of, I want to go to a place that I've never been before." }, { "end": 3373.7599999999998, "start": 3369.12, "text": " But I don't think that's often what people are thinking about with exploration; this is" }, { "end": 3377.12, "start": 3373.7599999999998, "text": " what I tried to do in my work, which was a little bit different in a sense." }, { "end": 3381.16, "start": 3377.12, "text": " People often think only about exploration, not about the interplay that representation has on" }, { "end": 3383.3599999999997, "start": 3381.16, "text": " that." }, { "end": 3386.68, "start": 3383.3599999999997, "text": " But I think that, and of course now, this is changing a lot." }, { "end": 3393.04, "start": 3386.68, "text": " But I think that when we go to think about these notions of, in a sense, semantics," }, { "end": 3395.7999999999997, "start": 3393.04, "text": " there are two things there that are important." }, { "end": 3401.08, "start": 3395.7999999999997, "text": " One is that we have to acknowledge that the kid that we're talking about here lives" }, { "end": 3402.08, "start": 3401.08, "text": " in a world." }, { "end": 3406.68, "start": 3402.08, "text": " This kid has a lot of experience outside this game that they are playing." }, { "end": 3409.64, "start": 3406.68, "text": " This is actually something that pisses me off a little bit when I read in papers saying" }, { "end": 3418.64, "start": 3409.64, "text": " that Atari RL agents take 30 days of gameplay to learn how to play a game." }, { "end": 3420.56, "start": 3418.64, "text": " Why are humans learning how to play a new game in two minutes?" }, { "end": 3421.56, "start": 3420.56, "text": " No, they don't." }, { "end": 3425.64, "start": 3421.56, "text": " Because if I put my newborn daughter to play this game, she's not going to play it after" }, { "end": 3426.64, "start": 3425.64, "text": " two minutes." }, { "end": 3431.8799999999997, "start": 3426.64, "text": " She actually needs years of experience to be able to just learn how to do this." }, { "end": 3434.72, "start": 3431.8799999999997, "text": " So I think that there is a lot going on there." }, { "end": 3440.3199999999997, "start": 3434.72, "text": " It's a lot about trying to abstract concepts that we learn elsewhere." }, { "end": 3445.04, "start": 3440.3199999999997, "text": " The abstraction of a ship, for example, it's useful not because the only place that" }, { "end": 3449.2799999999997, "start": 3445.04, "text": " it shows up is a game, but because it shows up everywhere else."
}, { "end": 3454.4399999999996, "start": 3449.2799999999997, "text": " And oftentimes, I actually would like to see this study: just go to a kindergarten" }, { "end": 3457.6, "start": 3454.4399999999996, "text": " today and show a screenshot of Atari games and ask, what are those things?" }, { "end": 3460.2, "start": 3457.6, "text": " And I bet that the kids cannot even label this." }, { "end": 3463.04, "start": 3460.2, "text": " This is clearly a ship or an enemy." }, { "end": 3465.68, "start": 3463.04, "text": " It's so extraterrestrial for them." }, { "end": 3469.72, "start": 3465.68, "text": " So we come up with these abstractions and they are obviously useful." }, { "end": 3473.6, "start": 3469.72, "text": " But it's expecting a lot from the agents that we have right now to do this, because they" }, { "end": 3477.7599999999998, "start": 3473.6, "text": " don't have all the rich experience that we have elsewhere in the world." }, { "end": 3481.72, "start": 3477.7599999999998, "text": " You control a submarine in Seaquest, for example, the Atari game, you control a submarine and you" }, { "end": 3483.36, "start": 3481.72, "text": " should shoot fish." }, { "end": 3484.92, "start": 3483.36, "text": " Like what?" }, { "end": 3488.8, "start": 3484.92, "text": " But yeah, we have this notion that we shoot incoming objects and then that's what you're" }, { "end": 3489.8, "start": 3488.8, "text": " going to do." }, { "end": 3494.76, "start": 3489.8, "text": " So there is this notion of trying to anthropomorphize AI a lot." }, { "end": 3499.04, "start": 3494.76, "text": " I think it's important to be able to, in a sense, understand what the AI is doing in terms" }, { "end": 3503.1200000000003, "start": 3499.04, "text": " of explainability and reliability." }, { "end": 3506.2400000000002, "start": 3503.1200000000003, "text": " But it's a complicated discussion." }, { "end": 3511.1600000000003, "start": 3506.2400000000002, "text": " And then I think that a lot of these things matter a lot because it starts to touch" }, { "end": 3516.04, "start": 3511.1600000000003, "text": " upon the notion of models, for example; we want a model, we probably want to be model-based" }, { "end": 3520.36, "start": 3516.04, "text": " when we want to do exploration in these very complex environments." }, { "end": 3525.36, "start": 3520.36, "text": " But it's definitely, I don't think it's something like, oh, I want this to be baked in, because" }, { "end": 3528.12, "start": 3525.36, "text": " there is a lot of social constructs." }, { "end": 3530.36, "start": 3528.12, "text": " And words are just words, right?" }, { "end": 3533.2799999999997, "start": 3530.36, "text": " Like labeling something, I don't know, an outlet." }, { "end": 3534.84, "start": 3533.2799999999997, "text": " It's just a label." }, { "end": 3538.48, "start": 3534.84, "text": " If I'm Portuguese, I'm going to call it a tomada and it's the same thing." }, { "end": 3543.2, "start": 3538.48, "text": " But now if we can ground this on the agent's experience, I think that that's a much more meaningful" }, { "end": 3544.2, "start": 3543.2, "text": " question." }, { "end": 3545.52, "start": 3544.2, "text": " Let's say that you have a robot." }, { "end": 3548.64, "start": 3545.52, "text": " And the robot says, I don't know what it's called, but I know that if I approach that thing" }, { "end": 3554.68, "start": 3548.64, "text": " and I plug in there, my power level goes up."
}, { "end": 3560.44, "start": 3554.68, "text": " And then suddenly, now you have a more grounded notion of, oh, this is how I can call this." }, { "end": 3564.44, "start": 3560.44, "text": " And by doing this, now I see a completely different outlet, but I can say, oh, wait, this" }, { "end": 3568.24, "start": 3564.44, "text": " is also the same thing because the outcome is the same." }, { "end": 3570.12, "start": 3568.24, "text": " And then we start to be able to label things." }, { "end": 3575.48, "start": 3570.12, "text": " But to just expect that AI is going to randomly pick up labels that we as humans defined," }, { "end": 3579.96, "start": 3575.48, "text": " it seems, I think, in a sense, not even useful." }, { "end": 3584.84, "start": 3579.96, "text": " Unless we want to spend hundreds of man hours labeling things and expecting the AI to do" }, { "end": 3589.48, "start": 3584.84, "text": " this, and it's still going to struggle to generalize because it's just labels." }, { "end": 3592.52, "start": 3589.48, "text": " It's not grounded on its experience in a sense." }, { "end": 3593.52, "start": 3592.52, "text": " Awesome." }, { "end": 3594.52, "start": 3593.52, "text": " Okay." }, { "end": 3597.36, "start": 3594.52, "text": " So let's move on to the Loon project and your work on that." }, { "end": 3603.56, "start": 3597.36, "text": " So I got to say, I really enjoyed your presentation on the Loon work at the University of Alberta" }, { "end": 3607.2799999999997, "start": 3603.56, "text": " AI seminar series, and we'll include a link to that on our episode page." }, { "end": 3610.96, "start": 3607.2799999999997, "text": " And I encourage listeners to definitely check that out." }, { "end": 3616, "start": 3610.96, "text": " And I was really excited to read this paper because it showed, you know, reinforcement" }, { "end": 3617.7599999999998, "start": 3616, "text": " learning succeeding in an actual," }, { "end": 3622.7599999999998, "start": 3617.7599999999998, "text": " full real world task outside of simulation and doing a great" }, { "end": 3623.7599999999998, "start": 3622.7599999999998, "text": " job." }, { "end": 3625, "start": 3623.7599999999998, "text": " There's a ton going on in this paper." }, { "end": 3629.04, "start": 3625, "text": " But could you start out maybe giving us a general overview of what the goal here was" }, { "end": 3633.64, "start": 3629.04, "text": " and an overview of the solution that you and your team came up with?" }, { "end": 3634.64, "start": 3633.64, "text": " Yeah, sure." }, { "end": 3641, "start": 3634.64, "text": " So, yeah, I was very excited about that project when I joined Brain, and pretty much" }, { "end": 3645.16, "start": 3641, "text": " for the first year that's what I spent most of my time working on, when I saw the opportunity" }, { "end": 3647.64, "start": 3645.16, "text": " of working on that project, because I thought it was really exciting." }, { "end": 3649.36, "start": 3647.64, "text": " And in a sense, it was the goal." }, { "end": 3653.6, "start": 3649.36, "text": " One of the things that was exciting to me was exactly this opportunity to actually deploy" }, { "end": 3658.36, "start": 3653.6, "text": " reinforcement learning in the real world and test our algorithms and see how far we" }, { "end": 3660, "start": 3658.36, "text": " could go."
}, { "end": 3664.2000000000003, "start": 3660, "text": " And what we were trying to do actually was that we partnered with Loon, which was this" }, { "end": 3666.6, "start": 3664.2000000000003, "text": " other bet at Alphabet." }, { "end": 3673.28, "start": 3666.6, "text": " And what Loon was trying to do, from my perspective as a scientist, was that they wanted to provide" }, { "end": 3676.76, "start": 3673.28, "text": " internet to places that were hard to get to." }, { "end": 3682.44, "start": 3676.76, "text": " So the problem is that once you think about how we get internet, we often get antennas" }, { "end": 3684.2400000000002, "start": 3682.44, "text": " in our cities." }, { "end": 3687.04, "start": 3684.2400000000002, "text": " And the antennas have a range of, I don't know, five kilometers or so." }, { "end": 3692.4, "start": 3687.04, "text": " So you build up a big antenna and then you can serve a circle with a radius of" }, { "end": 3694.12, "start": 3692.4, "text": " five kilometers." }, { "end": 3698.48, "start": 3694.12, "text": " And that's really good if you think about, I don't know, a big city, because you're" }, { "end": 3700.16, "start": 3698.48, "text": " serving a lot of people." }, { "end": 3706.68, "start": 3700.16, "text": " The problem, though, is that a lot of times there are places that are very sparsely populated." }, { "end": 3709.48, "start": 3706.68, "text": " And sometimes it's even hard to get to these places to build antennas." }, { "end": 3713.6, "start": 3709.48, "text": " Let's say tribes in the middle of the Amazon forest." }, { "end": 3715.84, "start": 3713.6, "text": " So how do you provide internet to those people?" }, { "end": 3719.84, "start": 3715.84, "text": " And then these Loon folks had this idea of saying that, well, what if we had a very, very" }, { "end": 3720.84, "start": 3719.84, "text": " big antenna?" }, { "end": 3723.1200000000003, "start": 3720.84, "text": " Let's say a 50 kilometer tall antenna." }, { "end": 3726.1200000000003, "start": 3723.1200000000003, "text": " Of course, you're not going to build an antenna that is 50 kilometers tall." }, { "end": 3729, "start": 3726.1200000000003, "text": " But then they had this idea of, what if we put a balloon up there and the balloon is" }, { "end": 3730.6800000000003, "start": 3729, "text": " going to operate as an antenna." }, { "end": 3735.32, "start": 3730.6800000000003, "text": " And because it's going to be so much higher than you would get, it's going to serve a" }, { "end": 3737.1200000000003, "start": 3735.32, "text": " much bigger region." }, { "end": 3740.56, "start": 3737.1200000000003, "text": " And then it makes sense that we can actually provide internet to these people." }, { "end": 3745.92, "start": 3740.56, "text": " That was their idea as a company, as far as I can tell, or one of their ideas." }, { "end": 3751.48, "start": 3745.92, "text": " And then what you have to do is that if you start from this premise and you want a balloon" }, { "end": 3756.08, "start": 3751.48, "text": " to be there, the balloon needs to be stationary above a region to serve the internet." }, { "end": 3761.44, "start": 3756.08, "text": " The problem is that, of course, there are winds and the balloons are going to be blown from" }, { "end": 3763.7599999999998, "start": 3761.44, "text": " where they want to be all the time." }, { "end": 3768.04, "start": 3763.7599999999998, "text": " So in a sense, the balloon can't just stay there if we leave it there."
}, { "end": 3773.72, "start": 3768.04, "text": " So the balloon needs to navigate to make sure that it's always going back to that position." }, { "end": 3777.52, "start": 3773.72, "text": " So it's riding the winds, if you will." }, { "end": 3782.88, "start": 3777.52, "text": " These balloons, they only have the ability to, they are not propelled." }, { "end": 3786.2799999999997, "start": 3782.88, "text": " So they only have the ability to go up or down." }, { "end": 3789.88, "start": 3786.2799999999997, "text": " They are fixed volume balloons." }, { "end": 3793.7599999999998, "start": 3789.88, "text": " And in a sense, they work very similar to hot air balloons or submarines." }, { "end": 3799.6800000000003, "start": 3793.76, "text": " So the intuition is the same as hot air balloons, but they work very similar to submarines," }, { "end": 3801.88, "start": 3799.6800000000003, "text": " which is that you have a fixed volume." }, { "end": 3807.44, "start": 3801.88, "text": " So now, if you want to go up in the stratosphere, what you need to do is reduce" }, { "end": 3808.44, "start": 3807.44, "text": " your density." }, { "end": 3810.0400000000004, "start": 3808.44, "text": " So what do you do to reduce your density?" }, { "end": 3813.6400000000003, "start": 3810.0400000000004, "text": " Well, with the same volume, you pump air out of the balloon." }, { "end": 3819.0800000000004, "start": 3813.6400000000003, "text": " So now, with the same volume, the density is going to be lower and then you go up." }, { "end": 3823.0400000000004, "start": 3819.0800000000004, "text": " And if you want to go down, you just pump air inside and then you sink the balloon." }, { "end": 3829.32, "start": 3823.04, "text": " And by just being able to pump air in and out, you" }, { "end": 3831.2799999999997, "start": 3829.32, "text": " are able to go up and down." }, { "end": 3835.88, "start": 3831.2799999999997, "text": " And by going up and down, luckily, I guess, the stratosphere has winds going in all sorts of" }, { "end": 3836.88, "start": 3835.88, "text": " directions." }, { "end": 3840.52, "start": 3836.88, "text": " So basically, you can now try to go up or down to go to the altitude where you have a" }, { "end": 3843.24, "start": 3840.52, "text": " wind blowing in the right direction." }, { "end": 3848.2799999999997, "start": 3843.24, "text": " And what we were trying to do was to have an agent that would learn how to navigate those" }, { "end": 3854.1600000000003, "start": 3848.28, "text": " winds in a way that it was going to go up or down and ride the right winds to be always" }, { "end": 3855.52, "start": 3854.1600000000003, "text": " serving the same region." }, { "end": 3858.5600000000004, "start": 3855.52, "text": " And this is what we did at the end, and we deployed it." }, { "end": 3864.28, "start": 3858.5600000000004, "text": " So I don't know much about the stratosphere at all, but is it always possible to find a" }, { "end": 3865.6400000000003, "start": 3864.28, "text": " way to get back?" }, { "end": 3870.0800000000004, "start": 3865.6400000000003, "text": " Like, is it sometimes just completely impossible to go" }, { "end": 3872.6000000000004, "start": 3870.0800000000004, "text": " the other way with the balloon?"
}, { "end": 3877, "start": 3872.6000000000004, "text": " Regardless of how great the controller is, the balloon is forced to drift away outside" }, { "end": 3878, "start": 3877, "text": " of the zone." }, { "end": 3879, "start": 3878, "text": " Does that happen?" }, { "end": 3880, "start": 3879, "text": " Yes, absolutely." }, { "end": 3884.4, "start": 3880, "text": " So sometimes the winds are blowing in one single direction and then" }, { "end": 3886.68, "start": 3884.4, "text": " there is nothing you can do, the balloon is going to be blown away." }, { "end": 3891.48, "start": 3886.68, "text": " We didn't go there, but Loon was not serving a region at the time with only" }, { "end": 3892.48, "start": 3891.48, "text": " one balloon, right?" }, { "end": 3895.36, "start": 3892.48, "text": " They had multiple balloons exactly because of that risk as well." }, { "end": 3896.72, "start": 3895.36, "text": " But yes, it happens." }, { "end": 3899.56, "start": 3896.72, "text": " We could see this happen to our controllers as well." }, { "end": 3901.16, "start": 3899.56, "text": " But then there is also a meaningful question here." }, { "end": 3905.8, "start": 3901.16, "text": " It is just like, well, even though you're going to be blown away, how can you minimize" }, { "end": 3908.5600000000004, "start": 3905.8, "text": " how far you're going to be blown away?" }, { "end": 3912.88, "start": 3908.5600000000004, "text": " Or how can you make sure that you are blown away into a region that you can" }, { "end": 3913.88, "start": 3912.88, "text": " eventually come back from?" }, { "end": 3914.88, "start": 3913.88, "text": " Right?" }, { "end": 3917.76, "start": 3914.88, "text": " And that's one of the things that are really difficult about this problem, because the time" }, { "end": 3920.1600000000003, "start": 3917.76, "text": " horizon we are talking about here is days." }, { "end": 3923.92, "start": 3920.1600000000003, "text": " Sometimes you're going to be blown away for a whole day or two or three and it's like," }, { "end": 3925.5600000000004, "start": 3923.92, "text": " now I want to go back, right?" }, { "end": 3927.8, "start": 3925.5600000000004, "text": " And somehow you still need to plan." }, { "end": 3931.52, "start": 3927.8, "text": " There are some interesting things about the stratosphere that I'm not going to pretend" }, { "end": 3935.04, "start": 3931.52, "text": " that I can confidently say things about." }, { "end": 3942.04, "start": 3935.04, "text": " So for example, this is much easier to be done in the equatorial region, the equator," }, { "end": 3946.12, "start": 3942.04, "text": " than it is to be done at the poles, exactly because of the patterns of winds that we have," }, { "end": 3947.6, "start": 3946.12, "text": " for example." }, { "end": 3954.6, "start": 3947.6, "text": " And a lot of the things that we did at Loon were between the tropics, which are somewhat close" }, { "end": 3957.36, "start": 3954.6, "text": " to the equator." }, { "end": 3963.44, "start": 3957.36, "text": " But even in our paper, one of the things that we did was to estimate what was the" }, { "end": 3968.68, "start": 3963.44, "text": " maximum you could do, what's the best you could do with our controller?"
}, { "end": 3973.08, "start": 3968.68, "text": " And at the time, if I remember the number right, it was about 70 percent, because the other 30 percent" }, { "end": 3977.04, "start": 3973.08, "text": " of the time, it didn't matter what you wanted to do." }, { "end": 3979, "start": 3977.04, "text": " There were no winds that would allow you to do this." }, { "end": 3980, "start": 3979, "text": " This was only in simulation." }, { "end": 3982.84, "start": 3980, "text": " This was not in the real world." }, { "end": 3985.64, "start": 3982.84, "text": " But this shows that sometimes it does happen." }, { "end": 3992.48, "start": 3985.64, "text": " And then you said that sometimes you have to consider sequences of days." }, { "end": 3994.72, "start": 3992.48, "text": " How do you break down the time steps?" }, { "end": 3996.72, "start": 3994.72, "text": " What is the step size?" }, { "end": 3999.68, "start": 3996.72, "text": " What is the time scale for the actions here?" }, { "end": 4000.68, "start": 3999.68, "text": " Yeah." }, { "end": 4006.76, "start": 4000.68, "text": " So the time scale, what we did was that we broke it down in a way that the balloon" }, { "end": 4011.44, "start": 4006.76, "text": " would take an action every three minutes." }, { "end": 4018.2400000000002, "start": 4011.44, "text": " And the gamma that we were using as the discount factor is 0.993." }, { "end": 4024.68, "start": 4018.24, "text": " So it gave us, if you want to think about the effective horizon of 0.993, it was around" }, { "end": 4029, "start": 4024.68, "text": " 200 ish steps in the future." }, { "end": 4035, "start": 4029, "text": " So we had almost a day that we could, at any time step, we were looking ahead, in a sense," }, { "end": 4038.08, "start": 4035, "text": " almost a day in the future." }, { "end": 4042.8399999999997, "start": 4038.08, "text": " And not only that, depending on how we incorporated our features and how we" }, { "end": 4048.2, "start": 4042.8399999999997, "text": " learned, actually what we observed is that our balloons were being effective at very" }, { "end": 4049.2, "start": 4048.2, "text": " long time scales." }, { "end": 4052.12, "start": 4049.2, "text": " Okay, so there's a number of names on this paper." }, { "end": 4058.68, "start": 4052.12, "text": " Can you tell us about the structure of the team and how the team broke out the work and" }, { "end": 4059.68, "start": 4058.68, "text": " the roles?" }, { "end": 4062.52, "start": 4059.68, "text": " And what was your role in this team?" }, { "end": 4068.3199999999997, "start": 4062.52, "text": " Yeah, this is definitely the biggest project that I've worked on in my life." }, { "end": 4076.12, "start": 4068.3199999999997, "text": " And this was reflected in the authors on the paper, right?" }, { "end": 4081.3599999999997, "start": 4076.12, "text": " So let me start by saying how we can break this down." }, { "end": 4086.12, "start": 4081.3599999999997, "text": " What I'm trying to say here is that one way that we could break this down for sure" }, { "end": 4091.16, "start": 4086.12, "text": " is between the Brain collaborators and the Loon collaborators, right?" }, { "end": 4094.04, "start": 4091.16, "text": " We worked very closely through the whole process." }, { "end": 4100.68, "start": 4094.04, "text": " So they were running experiments with the RL agents and we were discussing how to do the" }, { "end": 4102.64, "start": 4100.68, "text": " experiments in the real world and the deployment."
}, { "end": 4106.240000000001, "start": 4102.64, "text": " So it was work that we did really together." }, { "end": 4110.52, "start": 4106.240000000001, "text": " They were amazing collaborators, the Loon collaborators, but you can see that there was definitely" }, { "end": 4115.280000000001, "start": 4110.52, "text": " this notion of, there was a group of people that had very deep expertise on the balloons" }, { "end": 4120.52, "start": 4115.280000000001, "text": " and the stratosphere and how these things work, while we had expertise on reinforcement learning." }, { "end": 4126.72, "start": 4120.52, "text": " But at the same time, although there is a bunch of names on the paper, this was a relatively" }, { "end": 4130.12, "start": 4126.72, "text": " small team for the size of the effort." }, { "end": 4135.88, "start": 4130.12, "text": " And what I mean by that is that naturally what happened is that a lot of us touched" }, { "end": 4139.48, "start": 4135.88, "text": " pretty much all the components of the solution." }, { "end": 4149.12, "start": 4139.48, "text": " So I worked on developing the algorithm, coming up with what we actually deployed, so how we" }, { "end": 4152.64, "start": 4149.12, "text": " developed the features, which was something that was related to a paper that I wrote" }, { "end": 4161.88, "start": 4152.64, "text": " a while ago, how we do exploration, what algorithms do we use, how do we train those things." }, { "end": 4167.56, "start": 4161.88, "text": " So this was one thing that I did, but there was also, I mean, listing all the things that" }, { "end": 4172.200000000001, "start": 4167.56, "text": " I did, it's kind of silly because we worked on this fairly heavily on all sorts of" }, { "end": 4173.200000000001, "start": 4172.200000000001, "text": " fronts." }, { "end": 4177.96, "start": 4173.200000000001, "text": " But one thing that I spent a lot of time doing was thinking about the empirical evaluation" }, { "end": 4183.2, "start": 4177.96, "text": " and actually dealing with some of the challenges that come up when you're working with" }, { "end": 4188.12, "start": 4183.2, "text": " a real product, which is this notion that the environment, like, things are moving," }, { "end": 4189.12, "start": 4188.12, "text": " right?" }, { "end": 4193.4800000000005, "start": 4189.12, "text": " Like, you have a product, they had balloons that were flying in the stratosphere, they wanted" }, { "end": 4194.56, "start": 4193.4800000000005, "text": " to make it better." }, { "end": 4198.68, "start": 4194.56, "text": " It's not that everyone was freezing the whole company for us, so we could have a stable" }, { "end": 4199.68, "start": 4198.68, "text": " environment." }, { "end": 4205.76, "start": 4199.68, "text": " So sometimes things would change, sometimes the balloons would change and the simulator" }, { "end": 4207.92, "start": 4205.76, "text": " would change some parameters and so on." }, { "end": 4214.2, "start": 4207.92, "text": " And I spent a lot of time also trying to keep sanity on this, like how can we make sure" }, { "end": 4219.84, "start": 4214.2, "text": " that we're making meaningful progress on this ever changing task."
}, { "end": 4225.64, "start": 4219.84, "text": " And what was changed, how did it change, what was the impact on performance; and understanding" }, { "end": 4231.04, "start": 4225.64, "text": " the simulator, how the simulator worked, how we could actually get" }, { "end": 4237.08, "start": 4231.04, "text": " meaningful data and what the simulator was telling us and how this was working, analyzing" }, { "end": 4239.12, "start": 4237.08, "text": " the data, writing the paper." }, { "end": 4244.12, "start": 4239.12, "text": " So yeah, in a sense, it's silly because I'm enumerating all the things that need" }, { "end": 4249.12, "start": 4244.12, "text": " to be done for the paper to happen, but maybe what I'm trying to say is that this was a truly" }, { "end": 4251.36, "start": 4249.12, "text": " collaborative effort." }, { "end": 4256.16, "start": 4251.36, "text": " It was very exciting because I got to work together with a lot of people on a project" }, { "end": 4257.56, "start": 4256.16, "text": " and it was big." }, { "end": 4261.84, "start": 4257.56, "text": " So yeah, a lot of us touched a lot of pieces of that." }, { "end": 4265.92, "start": 4261.84, "text": " So in the very beginning, when you heard about this project, did you think that, oh yeah," }, { "end": 4271.240000000001, "start": 4265.92, "text": " this is going to work out great, or did you have any doubts? And in the end, were you" }, { "end": 4274.160000000001, "start": 4271.240000000001, "text": " surprised? The final performance seemed actually really good." }, { "end": 4276.4800000000005, "start": 4274.160000000001, "text": " So did that surprise you when you got to that point?" }, { "end": 4279.84, "start": 4276.4800000000005, "text": " How did the feeling change throughout the project?" }, { "end": 4284.280000000001, "start": 4279.84, "text": " Well, so when I first heard about it, I was excited, because I was excited about" }, { "end": 4287.4, "start": 4284.28, "text": " Loon even before; I thought, oh, this is a cool idea." }, { "end": 4291.32, "start": 4287.4, "text": " And back when I was doing my job interviews, people would ask, oh, what do" }, { "end": 4293.08, "start": 4291.32, "text": " you want to do for your research?" }, { "end": 4295.44, "start": 4293.08, "text": " And I would say that, well, one of the things that I want to do is that I want to make" }, { "end": 4299.679999999999, "start": 4295.44, "text": " sure that I develop algorithms that are going to allow us to actually deploy" }, { "end": 4302.96, "start": 4299.679999999999, "text": " these RL agents in the real world." }, { "end": 4305.12, "start": 4302.96, "text": " So it just seemed like an amazing opportunity." }, { "end": 4308.92, "start": 4305.12, "text": " I had just joined this company and then there is this project that was actually starting." }, { "end": 4311.92, "start": 4308.92, "text": " And I was like, well, I could do that."
}, { "end": 4315.72, "start": 4311.92, "text": " It doesn't have to only be a promise." }, { "end": 4318.12, "start": 4315.72, "text": " So I was excited about that." }, { "end": 4322.32, "start": 4318.12, "text": " At first, I think that at first, I was very excited." }, { "end": 4323.32, "start": 4322.32, "text": " I was very hopeful." }, { "end": 4324.84, "start": 4323.32, "text": " And I said, this is going to work great." }, { "end": 4325.84, "start": 4324.84, "text": " Of course, I was naive." }, { "end": 4330.32, "start": 4325.84, "text": " I didn't know how difficult the problem was." }, { "end": 4334.92, "start": 4330.32, "text": " But we managed to get some good successes at the beginning." }, { "end": 4336.56, "start": 4334.92, "text": " And I think that this gave us hope." }, { "end": 4344.160000000001, "start": 4336.56, "text": " So in July of 2019, we had already deployed the first balloon." }, { "end": 4347.080000000001, "start": 4344.160000000001, "text": " And the project had started at Brain in March." }, { "end": 4349.84, "start": 4347.080000000001, "text": " So in a couple of months, we had already deployed the first balloon." }, { "end": 4353.080000000001, "start": 4349.84, "text": " It was not the final solution that we came up with." }, { "end": 4361.400000000001, "start": 4353.080000000001, "text": " But I think that we had some early wins that kept us hopeful about the progress of the project." }, { "end": 4363.080000000001, "start": 4361.400000000001, "text": " I was definitely very surprised." }, { "end": 4365.96, "start": 4363.080000000001, "text": " And it was scary the first time we deployed the balloon." }, { "end": 4368.92, "start": 4365.96, "text": " So, oh my god, we're doing this." }, { "end": 4374.28, "start": 4368.92, "text": " Of course, the Loon engineers, they knew much better than we did as researchers" }, { "end": 4378.92, "start": 4374.28, "text": " all the safety protocols that they had put in place to make sure that nothing crazy" }, { "end": 4381.24, "start": 4378.92, "text": " was going to happen." }, { "end": 4385.2, "start": 4381.24, "text": " And a lot of the work that we did was actually learning with them, like, oh, can the balloons" }, { "end": 4386.2, "start": 4385.2, "text": " hit each other?" }, { "end": 4388.04, "start": 4386.2, "text": " They're like, no, they're riding the winds." }, { "end": 4389.4, "start": 4388.04, "text": " How could they hit each other?" }, { "end": 4391.44, "start": 4389.4, "text": " Like, they're literally at the same speed." }, { "end": 4395.68, "start": 4391.44, "text": " Unless you just manage to sync one balloon that goes up while the other one is at the same" }, { "end": 4396.68, "start": 4395.68, "text": " place anyway." }, { "end": 4400.200000000001, "start": 4396.68, "text": " So yeah, they can't collide; up in that space," }, { "end": 4401.200000000001, "start": 4400.200000000001, "text": " there is nothing there." }, { "end": 4403.96, "start": 4401.200000000001, "text": " So they cannot collide with other things." }, { "end": 4406.96, "start": 4403.96, "text": " And the safety layers, they are not going to burst and all those things." }, { "end": 4410, "start": 4406.96, "text": " So I was excited." }, { "end": 4411.8, "start": 4410, "text": " Then, honestly, a little bit scared" }, { "end": 4415.12, "start": 4411.8, "text": " the first time that the balloon was deployed, because I had no idea how all the safety" }, { "end": 4418.400000000001, "start": 4415.12, "text": " layers worked; with time," }, { "end": 4420.400000000001, "start": 4418.400000000001, "text": " it's just like, yeah, I'm comfortable." }, { "end": 4424.400000000001, "start": 4420.400000000001, "text": " I know how they assign this controller to these balloons in the real world and how" }, { "end": 4426.96, "start": 4424.4, "text": " you can do these experiments."
}, { "end": 4431.759999999999, "start": 4426.96, "text": " So yeah, maybe I'm just circling around your question, but I should say that although" }, { "end": 4436.719999999999, "start": 4431.759999999999, "text": " this project seemed challenging, it was very challenging in terms of dealing with" }, { "end": 4442.839999999999, "start": 4436.719999999999, "text": " the infrastructure, because it's a real product's infrastructure, real world infrastructure." }, { "end": 4445.12, "start": 4442.839999999999, "text": " The iteration cycle is very slow, right?" }, { "end": 4449.679999999999, "start": 4445.12, "text": " Because it's not a simulator where you get a result in a couple of hours or" }, { "end": 4451.48, "start": 4449.679999999999, "text": " even days." }, { "end": 4456.24, "start": 4451.48, "text": " So there was this challenge, but I think that the early successes that we had and the excitement" }, { "end": 4460.599999999999, "start": 4456.24, "text": " of everyone just made it clear that we were going to keep working on this." }, { "end": 4462.799999999999, "start": 4460.599999999999, "text": " And it's one of those things, right?" }, { "end": 4465.36, "start": 4462.799999999999, "text": " When you work for so long and you know the product so well," }, { "end": 4469.2, "start": 4465.36, "text": " once the final result gets in, it looks like, yes, we knew this was going to be the" }, { "end": 4470.2, "start": 4469.2, "text": " case." }, { "end": 4474.5599999999995, "start": 4470.2, "text": " So the surprise was gone a couple of months earlier, because you already know" }, { "end": 4475.799999999999, "start": 4474.5599999999995, "text": " what to expect." }, { "end": 4479.799999999999, "start": 4475.799999999999, "text": " But I was still very happy and anxious after we processed the data." }, { "end": 4483.72, "start": 4479.8, "text": " I was anxious while we were processing the data, when we had spent so long flying balloons" }, { "end": 4487.4800000000005, "start": 4483.72, "text": " at the equator, just to make sure that our model was right." }, { "end": 4490.72, "start": 4487.4800000000005, "text": " And it was very exciting just to see that, yes, we have statistical confidence, we are" }, { "end": 4492.52, "start": 4490.72, "text": " better and it works pretty well." }, { "end": 4495.12, "start": 4492.52, "text": " I bet that must have been a great moment for the whole team." }, { "end": 4501.72, "start": 4495.12, "text": " And I guess another thing that I was not thinking about at the beginning, but I was" }, { "end": 4504.76, "start": 4501.72, "text": " excited that it was actually being used by people, right?" }, { "end": 4509.56, "start": 4504.76, "text": " Like, they were flying balloons in Kenya with our agent and people were getting internet" }, { "end": 4514.080000000001, "start": 4509.56, "text": " because we developed something that was allowing them to have a couple of extra" }, { "end": 4516.64, "start": 4514.080000000001, "text": " hours of internet every day." }, { "end": 4519.160000000001, "start": 4516.64, "text": " And it was very rewarding in a sense." }, { "end": 4520.160000000001, "start": 4519.160000000001, "text": " Awesome." }, { "end": 4527.200000000001, "start": 4520.160000000001, "text": " So can you tell us a bit more specifics, like about how the observation and action spaces" }, { "end": 4528.200000000001, "start": 4527.200000000001, "text": " work?"
}, { "end": 4529.200000000001, "start": 4528.200000000001, "text": " Yeah, yeah." }, { "end": 4535.240000000001, "start": 4529.200000000001, "text": " So the action space is abstracted in a way that the balloon just goes up, down, or" }, { "end": 4536.240000000001, "start": 4535.240000000001, "text": " stays." }, { "end": 4540.04, "start": 4536.24, "text": " And every three minutes, the balloon gets an action that says go up, go" }, { "end": 4541.04, "start": 4540.04, "text": " down, or stay." }, { "end": 4545.2, "start": 4541.04, "text": " And what this means is that the balloon is going to go up for three minutes or go down" }, { "end": 4549.16, "start": 4545.2, "text": " for three minutes or stay where it is until another three minutes is done." }, { "end": 4553.48, "start": 4549.16, "text": " Of course, behind the curtains, there is a lot of low level control going on, right?" }, { "end": 4556.8, "start": 4553.48, "text": " Of, like, how to pump air in and out of the envelope." }, { "end": 4560.88, "start": 4556.8, "text": " But from our perspective at the end, from the perspective of the agent at least, the action" }, { "end": 4563.48, "start": 4560.88, "text": " was up, down, and stay." }, { "end": 4568.44, "start": 4563.48, "text": " And the observation space was something that we iterated on a couple of times." }, { "end": 4576.04, "start": 4568.44, "text": " But at a high level, what the agent had access to was the information about the winds above" }, { "end": 4577.839999999999, "start": 4576.04, "text": " it and below it." }, { "end": 4582.799999999999, "start": 4577.839999999999, "text": " So you can think about this as a wind column going up the stratosphere that the balloon" }, { "end": 4584.679999999999, "start": 4582.799999999999, "text": " could navigate." }, { "end": 4589.2, "start": 4584.679999999999, "text": " And then we discretized that into what we call pressure levels." }, { "end": 4593.04, "start": 4589.2, "text": " And basically this was the information about the winds in each one of those." }, { "end": 4598.44, "start": 4593.04, "text": " And by that information, I mean the velocity of the wind, the angle that the wind was blowing," }, { "end": 4603.92, "start": 4598.44, "text": " and the third variable, that was quite important and different from what people usually use," }, { "end": 4607.2, "start": 4603.92, "text": " which is the uncertainty about those winds." }, { "end": 4612.08, "start": 4607.2, "text": " Because the balloon is flying, the balloon knows exactly what the winds are where" }, { "end": 4613.08, "start": 4612.08, "text": " the balloon is." }, { "end": 4614.08, "start": 4613.08, "text": " Right?" }, { "end": 4617.88, "start": 4614.08, "text": " But the balloon doesn't know what the winds look like five kilometers above it." }, { "end": 4626.72, "start": 4617.88, "text": " So what the agent did is that we were using these predictions that come from other sources" }, { "end": 4629.16, "start": 4626.72, "text": " about what the winds were going to look like." }, { "end": 4635.36, "start": 4629.16, "text": " We had a Gaussian process that was kind of fusing those predictions with the observations" }, { "end": 4641.64, "start": 4635.36, "text": " that we had and giving us a finer granularity of the winds, saying that, look, given that" }, { "end": 4646.32, "start": 4641.64, "text": " the prediction says that the wind is going to blow to the north at 50 kilometers."
}, { "end": 4651.36, "start": 4646.32, "text": " But you are just, I don't know, 500 meters above this pressure level." }, { "end": 4653.16, "start": 4651.36, "text": " And we're seeing something completely different." }, { "end": 4657.04, "start": 4653.16, "text": " We're going to fuse this, and this is what we think the wind looks like." }, { "end": 4660.84, "start": 4657.04, "text": " And if we think that, this is the uncertainty that we have." }, { "end": 4665.719999999999, "start": 4660.84, "text": " So basically we were characterizing the winds in the stratosphere based on this" }, { "end": 4673.639999999999, "start": 4665.719999999999, "text": " notion of the velocity, the angle, and the uncertainty that we had about those." }, { "end": 4678.64, "start": 4673.64, "text": " And not only that, we also had what we would call the global state variables of the" }, { "end": 4679.64, "start": 4678.64, "text": " balloon, right?" }, { "end": 4684.64, "start": 4679.64, "text": " Like the amount of power that the balloon had, the time of day, where the" }, { "end": 4685.64, "start": 4684.64, "text": " station is." }, { "end": 4692.280000000001, "start": 4685.64, "text": " So this is what the balloon was observing, which is basically the winds and its status." }, { "end": 4697.160000000001, "start": 4692.280000000001, "text": " You mentioned that you used some insights from your shallow RL paper in designing this" }, { "end": 4698.160000000001, "start": 4697.160000000001, "text": " controller." }, { "end": 4700.72, "start": 4698.160000000001, "text": " Do you want to briefly mention the relationship there?" }, { "end": 4707.88, "start": 4700.72, "text": " Yeah, so back in 2016, we were trying to develop features, linear features, that captured" }, { "end": 4711.8, "start": 4707.88, "text": " the properties of the networks in deep RL." }, { "end": 4716.04, "start": 4711.8, "text": " So basically we were asking the question of, well, we have these deep RL agents and" }, { "end": 4718.4400000000005, "start": 4716.04, "text": " they are doing amazingly well on Atari." }, { "end": 4720.04, "start": 4718.4400000000005, "text": " What are the features?" }, { "end": 4724.240000000001, "start": 4720.04, "text": " What are the inductive biases that this network had that actually allowed this agent to do" }, { "end": 4726.04, "start": 4724.240000000001, "text": " so well?" }, { "end": 4728, "start": 4726.04, "text": " And we came up with a couple of them." }, { "end": 4734.52, "start": 4728, "text": " One of them that was really important was this notion of translation" }, { "end": 4736.88, "start": 4734.52, "text": " invariance that you get from convolutional networks, right?" }, { "end": 4741.08, "start": 4736.88, "text": " That you can apply the same filter in all sorts of places in the image." }, { "end": 4743.72, "start": 4741.08, "text": " And then you also have this relationship which is like, oh, there are two filters where" }, { "end": 4745.36, "start": 4743.72, "text": " one is above the other." }, { "end": 4751.76, "start": 4745.36, "text": " So we know that if this happens in different parts of the screen, it's the same relationship."
}, { "end": 4755.04, "start": 4751.76, "text": " And this was one of the things that one of the tricks that I learned fairly early on that" }, { "end": 4760.12, "start": 4755.04, "text": " if we have a representation that is centered, let's say that it's agent centric and everything" }, { "end": 4763.08, "start": 4760.12, "text": " is relative to the agent." }, { "end": 4767.44, "start": 4763.08, "text": " This makes a lot of difference because it requires the agent can generalize much better" }, { "end": 4769.32, "start": 4767.44, "text": " because it doesn't have to learn the same thing." }, { "end": 4773.4, "start": 4769.32, "text": " I know all sorts of places that it observes, it can every time that something is above the" }, { "end": 4775.88, "start": 4773.4, "text": " agent, well, it's the same input." }, { "end": 4780.72, "start": 4775.88, "text": " So one of the, so this was one of the tricks that we used when we were designing these inputs" }, { "end": 4785.320000000001, "start": 4780.72, "text": " for the agent and when we were flying the balloons, which is that our features are relative" }, { "end": 4786.52, "start": 4785.320000000001, "text": " to the agent." }, { "end": 4790.6, "start": 4786.52, "text": " So actually when the balloon goes up, the whole feature vector in a sense shifts with the" }, { "end": 4795.240000000001, "start": 4790.6, "text": " balloon to make sure that we always have the notion of what are the winds above me," }, { "end": 4799.92, "start": 4795.240000000001, "text": " and not what are the winds at the 15 kilometer altitude, but like what are the winds above" }, { "end": 4800.92, "start": 4799.92, "text": " the balloon." }, { "end": 4805.2, "start": 4800.92, "text": " And this was a huge boosting performance and that was very important on how the network" }, { "end": 4806.2, "start": 4805.2, "text": " learned to do this." }, { "end": 4809.8, "start": 4806.2, "text": " And it was, yeah, it's quite neat." }, { "end": 4815.56, "start": 4809.8, "text": " I noticed that relative observation issue when I was doing the Palmerman 2018 Neurob's" }, { "end": 4819.400000000001, "start": 4815.56, "text": " competition as well, having that relative observation." }, { "end": 4824.56, "start": 4819.400000000001, "text": " So so when you talk about the observation of the winds above and below the balloon, is" }, { "end": 4830.2, "start": 4824.56, "text": " that coming from off board, off board data, like it's not, it's not from sensors on the" }, { "end": 4831.2, "start": 4830.2, "text": " balloon, is it?" }, { "end": 4833.6, "start": 4831.2, "text": " How does the balloon know what's happening above and below?" }, { "end": 4835.72, "start": 4833.6, "text": " So the balloon doesn't know what's happening above and below." }, { "end": 4839.64, "start": 4835.72, "text": " The balloon knows exactly what's happening where it is and then above and below, it's" }, { "end": 4843.68, "start": 4839.64, "text": " both the predictions that we have and observations from other balloon." }, { "end": 4848.04, "start": 4843.68, "text": " So there's definitely communication of the balloon about what are the surrounding." }, { "end": 4850.92, "start": 4848.04, "text": " And so you mentioned how important uncertainty is here." }, { "end": 4852.76, "start": 4850.92, "text": " Was that surprising to you?" 
}, { "end": 4857.16, "start": 4852.76, "text": " Well, it's not a surprise in this, is that it seems fairly obvious that we should," }, { "end": 4861.68, "start": 4857.16, "text": " the agent should be able to reason about how confident it is of its surroundings and what" }, { "end": 4865.280000000001, "start": 4861.68, "text": " it believes to be the state of the world, right?" }, { "end": 4868.88, "start": 4865.28, "text": " So in that context, it was not surprising, but it was what was interesting was that this" }, { "end": 4871.24, "start": 4868.88, "text": " is not common practice in the field." }, { "end": 4876.759999999999, "start": 4871.24, "text": " So it was interesting to see how important that ended up being for us." }, { "end": 4880.32, "start": 4876.759999999999, "text": " And in a sense, it's, I want to say that it's one of the interesting contributions of" }, { "end": 4884.04, "start": 4880.32, "text": " the paper as well in terms of our Earth mix side and it still needs to be explored." }, { "end": 4890, "start": 4884.04, "text": " Or like how far can we go with this notion of incorporating uncertainty into the agent," }, { "end": 4894.2, "start": 4890, "text": " and how much, how well the agent can, can reason about that or learn a representation" }, { "end": 4897.28, "start": 4894.2, "text": " that takes the answer to the central consideration, if you will." }, { "end": 4900.48, "start": 4897.28, "text": " Can you talk about the decision to use Model 3 or L here?" }, { "end": 4905.88, "start": 4900.48, "text": " Like could Model based planning or Model predictive control approach his work here, or would" }, { "end": 4907.88, "start": 4905.88, "text": " they discard it right away?" }, { "end": 4914.48, "start": 4907.88, "text": " I find it hard to say that could they work and like I will never ask you no because it's" }, { "end": 4916.92, "start": 4914.48, "text": " always a matter of making them work, right?" }, { "end": 4917.92, "start": 4916.92, "text": " Like, give it a try." }, { "end": 4922.88, "start": 4917.92, "text": " But I think that one of the really challenging things is that we cannot model this practice" }, { "end": 4923.88, "start": 4922.88, "text": " here, right?" }, { "end": 4925.68, "start": 4923.88, "text": " We cannot model the weather." }, { "end": 4932.72, "start": 4925.68, "text": " So although there could be some components of Model based here, if you want to be completely," }, { "end": 4937.24, "start": 4932.72, "text": " like if you really want to do planning, like we had, even in the baselines, we have search" }, { "end": 4941.92, "start": 4937.24, "text": " based methods and planning methods that rely on a model of the weather." }, { "end": 4947.04, "start": 4941.92, "text": " And they work relatively well in the simulation, but they would, they never worked in practice" }, { "end": 4951.28, "start": 4947.04, "text": " because like the model, the mismatch between what the model is and what the winds actually" }, { "end": 4953.64, "start": 4951.28, "text": " are in the stratosphere is so big that it's hopeless." }, { "end": 4960.280000000001, "start": 4953.64, "text": " So in the paper we had this search based controller that if we used as a baseline, almost as an" }, { "end": 4963.96, "start": 4960.280000000001, "text": " work on the simulation, but then when we're going to the real world, it's just like, yeah," }, { "end": 4965.64, "start": 4963.96, "text": " it's never going to work." 
}, { "end": 4970.88, "start": 4965.64, "text": " So maybe there is a way, but there is definitely the problem of Model mismatch and the inability" }, { "end": 4974.88, "start": 4970.88, "text": " that we have, if we as humans have of modeling the stratosphere and the winds at the level" }, { "end": 4978.280000000001, "start": 4974.88, "text": " of granularity that we needed for that project." }, { "end": 4982.92, "start": 4978.280000000001, "text": " So I heard that the Lume project itself was canceled and I'm sure that had nothing to do" }, { "end": 4987.96, "start": 4982.92, "text": " with the controller because your controller seemed like it worked really well, which is," }, { "end": 4992.2, "start": 4987.96, "text": " and I felt really sad when I heard the news, but I guess it's glad that we also glad that" }, { "end": 4995.96, "start": 4992.2, "text": " we benefited from seeing how this would work." }, { "end": 5000.4400000000005, "start": 4995.96, "text": " Is it, do you think it might continue in the future or do you think that's a permanent," }, { "end": 5001.4400000000005, "start": 5000.4400000000005, "text": " it's closed for now?" }, { "end": 5002.76, "start": 5001.4400000000005, "text": " I don't know." }, { "end": 5008.08, "start": 5002.76, "text": " I don't necessarily have much more insight than any of the people who read the blog posts" }, { "end": 5013.36, "start": 5008.08, "text": " have about that because I was actually with the, they would apply it and then we had the" }, { "end": 5016.5599999999995, "start": 5013.36, "text": " people accept I ended up leaving Brian going to the mine." }, { "end": 5022.76, "start": 5016.5599999999995, "text": " So it was fairly close to the time that the announcement came out as well." }, { "end": 5027.04, "start": 5022.76, "text": " I think that definitely, I mean, I want to say definitely not with respect to the RL" }, { "end": 5029.68, "start": 5027.04, "text": " controller, the RL controller was working great." }, { "end": 5034.12, "start": 5029.68, "text": " It seems that it was much more a business decision than anything else." }, { "end": 5036, "start": 5034.12, "text": " And yeah, I don't know." }, { "end": 5037.72, "start": 5036, "text": " I was very happy to work with them." }, { "end": 5044.96, "start": 5037.72, "text": " I think it's a great project and a great, it was a great experience, but there is more" }, { "end": 5048.36, "start": 5044.96, "text": " to those things than necessarily just the scientific endeavor, right?" }, { "end": 5053.76, "start": 5048.36, "text": " There is a business plan and it's way above my pay grade if you will." }, { "end": 5059.24, "start": 5053.76, "text": " So to understand what's going on or even understand like how they ended up making this decision" }, { "end": 5063.44, "start": 5059.24, "text": " and the clients, we were working with a fairly small team and just trying to get the best" }, { "end": 5064.96, "start": 5063.44, "text": " controller." }, { "end": 5068.12, "start": 5064.96, "text": " And that was my real, as a scientist." }, { "end": 5070.92, "start": 5068.12, "text": " I meant to ask you about exploration." }, { "end": 5075.72, "start": 5070.92, "text": " I think the exploration strategy was quite simple, but did that strategy take some iterations" }, { "end": 5080.64, "start": 5075.72, "text": " to settle on or was it very clear to you from early on that that would be the right way" }, { "end": 5081.64, "start": 5080.64, "text": " to do exploration?" 
}, { "end": 5086.92, "start": 5081.64, "text": " Yeah, the exploration approach was fairly simple, but at the same time it was rewarding" }, { "end": 5091.64, "start": 5086.92, "text": " from a personal perspective because we did not iterate over that, but the solution that" }, { "end": 5094.92, "start": 5091.64, "text": " we ended up having was to have a balloon saying that, look, I want to say, I'm going" }, { "end": 5098.96, "start": 5094.92, "text": " to go to that altitude and stay there for a while." }, { "end": 5103.68, "start": 5098.96, "text": " And if you think about how we have often do exploration, it's with random odds." }, { "end": 5106.52, "start": 5103.68, "text": " And that would mean I just want to go up or down for three minutes." }, { "end": 5108.8, "start": 5106.52, "text": " And that was never going to work, right?" }, { "end": 5113.04, "start": 5108.8, "text": " So the fact that we ended up doing this temporary extended exploration, which ties back to my" }, { "end": 5117.4, "start": 5113.04, "text": " work all the way back to my PhD, which is I was advocating for this, let's use options" }, { "end": 5120.88, "start": 5117.4, "text": " to do this temporary extent of exploration because it's much better than the different," }, { "end": 5124.04, "start": 5120.88, "text": " was really rewarding and I was happy with that." }, { "end": 5125.04, "start": 5124.04, "text": " And it worked really well." }, { "end": 5127.96, "start": 5125.04, "text": " And honestly, we never resisted that solution." }, { "end": 5131.72, "start": 5127.96, "text": " It's one of the things that it would be interesting to do." }, { "end": 5136.16, "start": 5131.72, "text": " If this project continued, if you had a chance to work on a more wood, would you have feature" }, { "end": 5137.16, "start": 5136.16, "text": " work in mind?" }, { "end": 5138.16, "start": 5137.16, "text": " Can you say anything about that?" }, { "end": 5140.56, "start": 5138.16, "text": " Or do you feel it was like a completely wrapped up?" }, { "end": 5141.56, "start": 5140.56, "text": " Oh, no, absolutely." }, { "end": 5149.04, "start": 5141.56, "text": " I think there is a lot of future work that could be done, like both to improve the controller," }, { "end": 5151.04, "start": 5149.04, "text": " but also in terms of scientific terms." }, { "end": 5155.28, "start": 5151.04, "text": " And the further understanding this role of uncertainty in the inputs would be very interesting." }, { "end": 5160.8, "start": 5155.28, "text": " There's no sound of exploration and understanding that if there were better exploration strategies" }, { "end": 5162.76, "start": 5160.8, "text": " would also be something interesting." }, { "end": 5167.4, "start": 5162.76, "text": " One thing that we were also discussing was that, well, right now we were just training" }, { "end": 5170.8, "start": 5167.4, "text": " a simulation to applying the real world, but naturally we were collecting more data in" }, { "end": 5171.8, "start": 5170.8, "text": " the real world." }, { "end": 5174.76, "start": 5171.8, "text": " Could we use that data to make our controller better?" }, { "end": 5178.16, "start": 5174.76, "text": " And fine-tune it to actually the balloon that we were flying because each balloon is a" }, { "end": 5183.88, "start": 5178.16, "text": " little bit different, the balloon is a little bit older, the battery is a little bit stronger" }, { "end": 5185.36, "start": 5183.88, "text": " or weaker." 
}, { "end": 5188.5199999999995, "start": 5185.36, "text": " And all those things were questions that are fairly interesting." }, { "end": 5191.44, "start": 5188.5199999999995, "text": " And yeah, there are just interesting scientific questions." }, { "end": 5196.92, "start": 5191.44, "text": " Actually, the general issues outside of Lune, but besides your own work, are there things" }, { "end": 5201.639999999999, "start": 5196.92, "text": " happening in reinforcement learning these days that you're personally excited about?" }, { "end": 5206.92, "start": 5201.639999999999, "text": " Yeah, I think that we're getting to, we're in a very exciting time and the research." }, { "end": 5211.4, "start": 5206.92, "text": " I think that some of the things that I'm excited about is one of the things is model-based" }, { "end": 5216.04, "start": 5211.4, "text": " around and how we are seeing some of the first big successes in model-based around and" }, { "end": 5219.68, "start": 5216.04, "text": " just like this large domains, we use zero is one example, but there are others." }, { "end": 5223.8, "start": 5219.68, "text": " So like, how do we actually learn models that allow us to plan in this environment?" }, { "end": 5228.28, "start": 5223.8, "text": " I think that this is a very promising research area and we're just scratching the surface." }, { "end": 5231.64, "start": 5228.28, "text": " What this model should look like, what should we be modeling?" }, { "end": 5237.8, "start": 5231.64, "text": " This is one of the things that I'm excited about, I'm very curious, excited, interested," }, { "end": 5241.96, "start": 5237.8, "text": " but also a little bit scared and trying to catch up with the literature on this notion" }, { "end": 5245.92, "start": 5241.96, "text": " of how we incorporate causality into reinforcement learning." }, { "end": 5249.4800000000005, "start": 5245.92, "text": " Because it seems that if we go back to this notion of generalization and this notion of" }, { "end": 5255.04, "start": 5249.4800000000005, "text": " we want in various across observations, it seems that we want to be able to extract the" }, { "end": 5259.76, "start": 5255.04, "text": " causal factors in the environment that are causing this changes." }, { "end": 5263.96, "start": 5259.76, "text": " And this is something that I'm very excited about and there are some people doing research" }, { "end": 5268.4400000000005, "start": 5263.96, "text": " on this and curious to see what's going to come out of that." }, { "end": 5270.84, "start": 5268.4400000000005, "text": " And overall, I guess, representation learning." }, { "end": 5275.84, "start": 5270.84, "text": " I'm curious to see, we are seeing some efforts now on how we start to think about learning" }, { "end": 5280.64, "start": 5275.84, "text": " representations for reinforcement learning and not just using representations learned from" }, { "end": 5284.08, "start": 5280.64, "text": " I don't know, a random metric game back prop, but how can we actually think about the" }, { "end": 5288, "start": 5284.08, "text": " reinforcement learning problem and what would be useful representations for that." }, { "end": 5293.48, "start": 5288, "text": " I think that these are some of the things that I'm excited about and I want to see how" }, { "end": 5295.72, "start": 5293.48, "text": " much progress we can make." }, { "end": 5300.04, "start": 5295.72, "text": " And then looking forward, what do you see yourself working on in the future?" 
}, { "end": 5306.36, "start": 5300.04, "text": " Do you expect to continue working on the themes you've touched on here and taking them further?" }, { "end": 5313.56, "start": 5306.36, "text": " So I am still very curious and excited about this notion of learning options, terribly" }, { "end": 5320.320000000001, "start": 5313.56, "text": " that become more and more complex, more and more complex behavior in this notion of lifelong" }, { "end": 5324.72, "start": 5320.320000000001, "text": " learning, if you will, that it's one of the or continual learning that people use." }, { "end": 5329.64, "start": 5324.72, "text": " I think that this is really interesting, this notion of how can we learn?" }, { "end": 5333.120000000001, "start": 5329.64, "text": " Maybe I'm first going to just learn some very basic skills, but this is going to allow" }, { "end": 5335.8, "start": 5333.120000000001, "text": " me to bootstrap to see things that I couldn't before." }, { "end": 5339.52, "start": 5335.8, "text": " And then I can learn even more complex skills that allow me to bootstrap even more until" }, { "end": 5341.360000000001, "start": 5339.52, "text": " we get to very complex behavior." }, { "end": 5344.88, "start": 5341.36, "text": " This is something that I'm still curious about." }, { "end": 5349.88, "start": 5344.88, "text": " I am very curious about generalization in RL, it's something that I might be doing in" }, { "end": 5352.12, "start": 5349.88, "text": " the future like more." }, { "end": 5359.759999999999, "start": 5352.12, "text": " I am also doing some more careful work about representation learning and understanding" }, { "end": 5363.5599999999995, "start": 5359.759999999999, "text": " what are the properties of representations in RL that we want." }, { "end": 5368.36, "start": 5363.5599999999995, "text": " The representation we want our RL agents to have, and to learn." }, { "end": 5372.799999999999, "start": 5368.36, "text": " So all these are things that are related to what I did in the past and I'm excited about." }, { "end": 5377.08, "start": 5372.799999999999, "text": " But there are also some things that one other thing that I didn't we didn't discuss today," }, { "end": 5380.799999999999, "start": 5377.08, "text": " but that I'm also, which is kind of new for me, I've been doing this in the past since" }, { "end": 5385.16, "start": 5380.799999999999, "text": " I joined Brain, which is this collaboration with Nikola Lehu and some other researchers." }, { "end": 5389.92, "start": 5385.16, "text": " It's about going to the basics of policy gradient methods and really understanding what" }, { "end": 5393.839999999999, "start": 5389.92, "text": " they are doing and how they work and challenging some common beliefs." }, { "end": 5396, "start": 5393.839999999999, "text": " This is also something that I want to continue doing." }, { "end": 5401.08, "start": 5396, "text": " I think that we have learned a lot in the past couple of years and we can now start to" }, { "end": 5403.68, "start": 5401.08, "text": " get some good results out of that as well." }, { "end": 5408.44, "start": 5403.68, "text": " So yeah, I think that these are, but overall I would say that I'm very interested in this" }, { "end": 5410.2, "start": 5408.44, "text": " notion of abstraction." }, { "end": 5415.2, "start": 5410.2, "text": " How do we learn abstractions in space, which means representation of abstractions in" }, { "end": 5417.8, "start": 5415.2, "text": " directions at space and state space." 
}, { "end": 5422.4, "start": 5417.8, "text": " The state space would be generalization and action space would be options and how we can" }, { "end": 5427.879999999999, "start": 5422.4, "text": " use that to the better credit assignment and exploration and things like that." }, { "end": 5434.08, "start": 5427.879999999999, "text": " So Dr. Marles Pachado, this has been a super enjoyable conversation for me and not definitely" }, { "end": 5439.799999999999, "start": 5434.08, "text": " not the shortest interview we've ever done, but I know I've learned a ton from hearing" }, { "end": 5442.36, "start": 5439.799999999999, "text": " from you and I'm sure audience has to." }, { "end": 5446.36, "start": 5442.36, "text": " Thank you so much for sharing your time and your insight with talk Arrell." }, { "end": 5456.16, "start": 5446.36, "text": " Thank you very much for having me." }, { "end": 5461.08, "start": 5456.16, "text": " Notes and links for this episode are at talkarrell.com." }, { "end": 5463.32, "start": 5461.08, "text": " If you like this show, I need your support." }, { "end": 5465.12, "start": 5463.32, "text": " You can help in a few ways." }, { "end": 5468.12, "start": 5465.12, "text": " Want to subscribe on your favorite podcast platform?" }, { "end": 5470.12, "start": 5468.12, "text": " Subscriptions make a big difference." }, { "end": 5478.04, "start": 5470.12, "text": " 2. Follow us on Twitter and talkarrell podcast." }, { "end": 5481.2, "start": 5478.04, "text": " 3. Give us a 5 star rating on Apple Podcasts." }, { "end": 5508.84, "start": 5481.2, "text": " If you don't think we deserve 5 stars, let us know on Twitter what we could do better." } ]
Nathan Lambert
Nathan Lambert on Model-based RL, Trajectory-based models, Quadrotor control, Hyperparameter Optimization for MBRL, RL vs PID control, and more!
https://media.transistor…c7e.mp3?src=site
This is TalkRL Podcast. All reinforcement learning, all the time. Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan. Nathan Lambert is a PhD candidate at UC Berkeley. Nathan, thanks so much for being here. Hi, Robin. Thanks for having me. What is the focus of your work? So I kind of emerged into this model-based RL area from a potentially different path than a lot of people in the deep RL space. I came from kind of a minimum-data framing, where I was assembling these robots by hand that take hours to make, and they break within just a minute of collecting data. So I wanted to know what is the fastest way to generate a controller for an unknown system. And we kind of arrived at model-based RL because it was shown to be very sample efficient in some simulated tasks. But also, when you have a model, there's a lot of interesting interpretation that you can do to try to understand what your system is doing and learn from the experiments that you run. And RL is very interesting as a framework, so we've been growing into studying how model-based RL works, and then with some applications to novel robotics. So I encountered your work at NeurIPS 2020, just recently, specifically this next paper, Learning Accurate Long-Term Dynamics for Model-Based Reinforcement Learning, Lambert et al. 2020. Can you tell us about this paper? What's the main idea here? Yeah, so in this paper we started trying to rethink model predictive control. What happens in a lot of model-based RL algorithms now is they unroll these dynamics predictions by compounding predictions through a neural network. And anyone that's tried to compound predictions through a neural network knows that you get some weird numerical behavior and essentially diverging predictions. So we're trying to rethink how predictions in model-based RL could be done. And it's pretty hard. The standard paradigm I'll refer to a couple of times is this one-step model, where you have a discrete Markov decision process and you're modeling the change in the state. So it's a delta formulation: you pass in a state and an action at a time, and it predicts the change to give you the state at the next time. That works, but you have this compounding problem. In this work we are trying to come up with a new time-dependent prediction mechanism. We call this the trajectory-based model, which really comes from the fact that it's trained on all of the sub-trajectories within a rollout. So you take an episode or trial, in simulation or on the real robot, and you get this couple-hundred-time-step-long segment. What we did is take each state in there and label each segment into the future, so you have a lot of segments of that trajectory, and we explicitly added a time variable to the prediction. So instead of a state and an action, it takes in a state and a time step, and then also control parameters, which are related to the action. But really the core of it is we wanted to be able to predict at a specific time, to get away from the compounding predictions that one-step models use. What about the policy? Is it, I guess, that the model that's learned is specific to that one policy? Is that right?
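To make the compounding-error issue described above concrete, here is a minimal sketch (not the paper's code) of the standard one-step delta dynamics model and the recursive rollout it forces; the network sizes and names are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of a one-step "delta" dynamics model
# and the compounding rollout it implies. Sizes and names are assumptions.
import torch
import torch.nn as nn

class OneStepModel(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        # Predict the change in state (delta formulation).
        return state + self.net(torch.cat([state, action], dim=-1))

def rollout(model, s0, actions):
    """Compound one-step predictions: errors accumulate with the horizon."""
    states, s = [s0], s0
    for a in actions:          # horizon-length loop
        s = model(s, a)        # each prediction is fed back into the model
        states.append(s)
    return torch.stack(states)
```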
Yes, it is in a practical way, and this is something that a lot of my research has come to work on, maybe not necessarily intentionally: the idea that a dynamics model is going to focus on a specific thing, and you should control that. We'll talk about this more with some later papers, but in this case, with the trajectory-based model, we pass in some closed-form controller parameters. So we did our experiments with LQR and with PID on some robotic tasks, and we passed those tuned controller parameters into the network as an input to help with the long-term prediction accuracy. The thought behind that is that the action sequence you would use to unroll one-step models is all generated from, and correlated with, those control parameters. So it's kind of a compression of information, but it comes at a cost. For example, we're currently trying to do research to figure out how to embed some neural network policy into control parameters, because you can no longer take data from a different algorithm. Like, I can't take dynamics data from a robot running soft actor-critic and try to incorporate it into my model-based approach. So it's a trade-off: the dynamics model is specialized in something, and that should hopefully be correlated with how you're going to use it for control. So the control parameters are telling the model something about the policy that we'll want to use. Yeah, so explicitly, an example that we use in the paper: we do a Reacher task, which is a robotic arm in space, and we control the joint angles with PID control, which is a classic proportional-integral-derivative controller. It gives these nice smooth responses, and what we do is pass in the constants that define the PID controller and also a joint angle target. One of the most important ones is actually the joint angle target for the PID controller, because that controls where in space the end effector will be. And then from there it can predict the long-term dynamics of where the arm will end up. So you train over a large range of those control parameters and then it's able to interpolate between them. Is that the idea there? Yeah, so we collect a whole bunch of different PID tunings and joint angle targets, in hopes that it covers the whole space of the task. If you're running the task at a low level, generally the goal is resampled at a different 3D position in every trial, so the policy parameters, in this case the PID parameters, generalize to cover that space. And are these deterministic models that are being learned, or I guess could the concept carry over to stochastic models as well? Yeah, so in the paper we mostly characterize the deterministic models, but if you're digging deeper into the model-based RL literature, there are kind of two axes that people have been working on. There's the idea of ensembling, and there's the idea of using a different loss function to create a probabilistic model. Ensembling is designed to help with the epistemic uncertainty by kind of smoothing out your model capacity over the dataset with cross-validation. And probabilistic models are designed to capture the uncertainty that is inherent to the task at hand. So if you rerun a robotic task multiple times, you're going to get multiple outcomes, because most of these processes are somewhat stochastic.
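As a rough illustration of the trajectory-based formulation described above, the sketch below builds training pairs from every sub-trajectory of a rollout and feeds a time index plus controller parameters to the model. This is a simplified reconstruction, not the paper's implementation, and the PID class is a generic textbook controller used only to show what the "control parameters" could be.

```python
# Rough sketch of trajectory-based training data: every
# (start state, elapsed time, controller parameters) input is labeled with
# the state actually reached. Simplified reconstruction, not the paper's code.
import numpy as np

class PID:
    """Generic PID controller; its gains and target are the
    'control parameters' passed to the dynamics model."""
    def __init__(self, kp, ki, kd, target):
        self.kp, self.ki, self.kd, self.target = kp, ki, kd, target
        self.integral, self.prev_err = 0.0, 0.0

    def act(self, measurement, dt):
        err = self.target - measurement
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def trajectory_dataset(states, ctrl_params):
    """states: array of shape (T, state_dim) from one rollout.
    ctrl_params: 1D array, e.g. [kp, ki, kd, target].
    Returns inputs (s_i, t, ctrl_params) and targets s_{i+t} for every
    sub-trajectory -- many labeled pairs per rollout instead of T-1."""
    X, Y = [], []
    T = len(states)
    for i in range(T):
        for t in range(1, T - i):
            X.append(np.concatenate([states[i], [t], ctrl_params]))
            Y.append(states[i + t])
    return np.array(X), np.array(Y)
```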
And there's a paper from Kurtland Chua in Sergey Levine's group that did this — they have their PETS algorithm — that characterized these trade-offs. And the same changes to models work for the trajectory-based model. We didn't really have room to go into all the details in the paper, but one of the interesting ones is when you apply this probabilistic loss function to the trajectory-based model, you get a stable uncertainty estimate where the uncertainty is roughly proportional to the oscillations, or the uncertainty, in the task. If you run a standard feedback controller like PID or LQR, depending on the tuning, a very common behavior is oscillation, and we found that with the probabilistic loss function the uncertainty bounds of the trajectory-based model capture where the oscillations could be, rather than tracking the exact frequency of the oscillations. And that's pretty nice, because if you're trying to plan into the future with that, you can say, oh, the state will likely be somewhere in this region. We don't talk about safety in the paper, but understanding where the distribution of potential states could be is very important if you know that your individual prediction is likely to have some error. Yeah, I love the fact that you get just so many more samples from a trajectory using your method. It seems so elegant and kind of beautiful. I right away was like, wow, there's something very appealing about this. And that's why I wanted to reach out to you, you know, besides all your other interesting work. But help me understand: a traditional model, I think, would also predict reward. Are you trying to predict reward too, or is this mostly about the state transitions? We mostly focused on the state transitions. It's kind of a very practical point in model-based RL. In building these frameworks, if you're doing model predictive control, you kind of have to pass it a reward function, and then it can compute the actual reward from the states. The other way to do it is to have the model predict the rewards. I personally haven't seen a big difference, and the original thrust of this paper was purely on how do we improve prediction accuracy, so we didn't try predicting rewards. But that is something. When I say predict rewards, something that people do is, explicitly with the neural network, you have an output that is the reward at each state, so it's trying to learn what the reward function would be. In this paper, we did do something related, which is computing the reward from the projected state trajectory, which is a little bit different because you're using the reward function from the environment. And generally what we found is that the trajectories we were predicting were much more stable — stable in the sense that you don't get weird numerical errors from predicting really far into the future — or they just fit well to the tasks at hand, so using the reward function from the environment, we could predict the downstream rewards pretty well. Whereas the problem with one-step models is that as this compounding error comes about, the reward is very hard to predict.
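For readers unfamiliar with the "different loss function" mentioned here, a common way to make a dynamics model probabilistic (as in PETS-style models) is to have the network output a mean and log-variance and train with a Gaussian negative log-likelihood. The sketch below is a generic illustration with assumed names and sizes, not code from the paper.

```python
# Generic sketch of a probabilistic dynamics head trained with Gaussian
# negative log-likelihood, as in PETS-style models. Names and sizes are
# assumptions for illustration.
import torch
import torch.nn as nn

class ProbabilisticDynamics(nn.Module):
    def __init__(self, in_dim, state_dim, hidden=200):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, state_dim)
        self.log_var = nn.Linear(hidden, state_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.log_var(h).clamp(-10.0, 4.0)

def gaussian_nll(mean, log_var, target):
    # Negative log-likelihood of the target under a diagonal Gaussian.
    inv_var = torch.exp(-log_var)
    return (((target - mean) ** 2) * inv_var + log_var).sum(dim=-1).mean()

# An ensemble for epistemic uncertainty is then just several such models
# trained on bootstrapped/resampled versions of the same dataset.
```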
And ultimately, there's kind of this, the crux of doing any RL problem is you're trying to optimize the task at hand and it kind of gives you one more tool to understand the learning of the dynamics that you have if you can correlate it with the actual reward of a task. That makes total sense. Yeah, I love this paper and I've definitely experienced that problem of the I call it the tyranny of the one step. And so this is a really elegant way of handling that. Okay, so let's move on. You have a few of the papers here. We would love to chat about so you have objective mismatch in model based reinforcement learning, Lambert at all 2020 and that was at a conference on learning for dynamics and control. Can you tell us what's the idea here? So this paper is very related to the one that we have talked about and it's kind of the backwards order from how I did the research, but we're kind of learning about both of them in synergy. So this paper was looking at the model based RL framework and trying to understand how this modular framework where we have a dynamic model training. And then so generally separate and then we do the control optimization and how this handoff and is more of a handoff than an exchange of information. It's kind of one and then the other and how doing this sequentially results in potentially suboptimal performance and what signals we may be able to see that can improve the model based RL framework. And what we did what we centered this paper on is kind of trying to understand what the optimization of the one step model for prediction accuracy. What it is doing and what it is not doing in terms of understanding the environment in terms of optimizing the task at hand. So you show how model likelihood does not correspond closely with episode reward and that seems to be the thrust of some of the most important charts. Can you explain what exactly does that mean? Yeah, so this is something that I was hinting at earlier in this conversation and ultimately I've come to phrase this as when you're learning a dynamic model from data, there's really no free lunch. So what you're doing is you have some distribution of data and in RL it's kind of iteratively learned distribution. So as you learn a task you're kind of building this data set out it grows a little bit into your state space and a dynamics model learns accuracy kind of uniformly with respect to the density of data that you have. But in reality the distribution of your data does not match up exactly well with the distribution of the data for the tasks that you're trying to solve. So the task you're trying to solve could be like an expert distribution, but the data that you get is some on policy data normally started with purely random state action pairs and those are probably not very relevant to the task you're trying to solve. And when you're using one step models it's optimizing over that entire state space rather than the task at hand. So you can end up with an accurate model overall that is not necessarily accurate at the task you want to do. So the readout for the machine learning engineer is going to say I have done supervised learning I have done it well. My loss is very low. I think that this model will be useful for downstream control. But in reality a globally accurate model in terms of is like is accurate on average and there are sub areas such as for the task that might actually result in lower reward. So that's that's the long story of it. But it's very it's very nuanced. 
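The "globally accurate but locally wrong" failure described here can be checked with a simple diagnostic: compare a model's error on the whole replay buffer against its error restricted to states near the task-relevant (e.g., expert) distribution. The snippet below is a hypothetical diagnostic in that spirit, with an assumed `model.predict` interface; it is not an artifact of the paper.

```python
# Hypothetical diagnostic for objective mismatch: a model can have low
# average error over the replay buffer yet high error on the states that
# matter for the task (e.g., near an expert trajectory).
import numpy as np

def one_step_mse(model, states, actions, next_states):
    preds = model.predict(states, actions)   # assumed model API
    return float(np.mean((preds - next_states) ** 2))

def mse_near_expert(model, buffer, expert_states, radius):
    """buffer: dict with 'obs', 'act', 'next_obs' arrays.
    Keep only transitions whose state lies within `radius` of some expert
    state, then measure error there."""
    dists = np.linalg.norm(
        buffer["obs"][:, None, :] - expert_states[None, :, :], axis=-1
    ).min(axis=1)
    mask = dists < radius
    return one_step_mse(model,
                        buffer["obs"][mask],
                        buffer["act"][mask],
                        buffer["next_obs"][mask])

# A large gap between the two numbers is the "objective mismatch":
# global_mse = one_step_mse(model, buffer["obs"], buffer["act"], buffer["next_obs"])
# expert_mse = mse_near_expert(model, buffer, expert_states, radius=0.1)
```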
So then the question is, what do we do with our model? Do we keep training for accuracy? And I would say yes, but we want to be able to focus our accuracy on the areas that we are interested in. And I guess it's challenging because we don't always know in advance what the areas are that we're interested in. We don't know where the expert is going to want to go. Is that true? Is that the case? Yes, so there's an example in this paper where we take a robotic task and we predefine an expert trajectory, and we show that when you predefine an expert trajectory and weight the dynamics model training around that trajectory — so you prioritize samples near the expert — the learning is much faster, the data efficiency is higher, and the performance of the downstream controller is better. So it's like, okay, we knew what we needed to learn, so if you focus there, it does better. When I have the time, I would like to return to this paper and build out that algorithm, where you have to figure out how to iteratively update your expert and iteratively figure out what your expert trajectory should be. That's kind of the open question in terms of applying this in the real world, and not in a simulated environment with a task you've already solved. So this paper had a whole section on an adversarial attack on model performance that I thought was pretty interesting. Do you want to tell us more about that, how that worked and what that was for? Yeah, this is one of my favorite results from the paper, probably because it was also so easy to get done. A lot of times in research, people know that you work at your keyboard a long time to get any numerical result to work. This one was done in an afternoon. We ultimately just set it up and it worked immediately, which is when you know that it's probably doing something right and you're not just forcing random seeds or anything. What we did is we wanted to fine-tune a model so that the accuracy you get as a readout over the whole training dataset remains high, but the task performance goes down — and this goes back to what I was saying about no free lunch and dynamics models. So ultimately what we did is we took a CMA-ES optimizer to change the weights and biases of the output layer of a feed-forward dynamics model, so the one-step dynamics model. And as we were tuning the dynamics model parameters, we used it for a model predictive controller in the cartpole environment, and we were looking for a model that reported a high average accuracy but got a low reward. By just iterating a few trials of CMA-ES, we were able to find a model that reported the same level of global accuracy over the on-policy dataset, but in reality, if you probed deeper, the model accuracy over the area of interest, or what we have called the expert, was very low. And therefore the reward for the cartpole task dropped from full performance to just 50% very rapidly. So it's saying that we're not doing that much intelligent design with how we weight the dynamics models and what the one-step models are doing, and it's pretty easy to find a model that isn't useful for solving the task but might look useful if you were just training on the dataset at hand. So it's like a worst-case type thing. Yeah, I kind of interpret this as like the space of bad models might be much more populated than the space of useful models.
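A sketch of the kind of search described above is given below, using the `cma` package for CMA-ES. The `validation_mse`, `run_mpc_episode`, and output-layer get/set helpers are assumed placeholder interfaces, so this shows the shape of the attack rather than the paper's exact setup.

```python
# Sketch of the adversarial fine-tuning idea described above: search over
# the output-layer weights of a one-step model for a setting that keeps the
# global accuracy readout high while the MPC reward drops. Uses the `cma`
# package; validation_mse(model, data), run_mpc_episode(env, model), and
# the get/set output-layer helpers are assumed placeholder interfaces.
import numpy as np
import cma

def adversarial_search(model, env, val_data, validation_mse, run_mpc_episode,
                       sigma=0.05, iters=20):
    baseline_mse = validation_mse(model, val_data)
    x0 = model.get_output_layer()                 # assumed helper: flat params

    def objective(flat_params):
        model.set_output_layer(np.asarray(flat_params))  # assumed helper
        mse = validation_mse(model, val_data)            # global accuracy readout
        reward = run_mpc_episode(env, model)             # downstream task reward
        # Heavily penalize any loss of global accuracy, otherwise minimize reward.
        penalty = 100.0 * max(0.0, mse - 1.05 * baseline_mse)
        return reward + penalty                          # CMA-ES minimizes this

    es = cma.CMAEvolutionStrategy(x0, sigma)
    for _ in range(iters):
        candidates = es.ask()
        es.tell(candidates, [objective(c) for c in candidates])
    return es.result.xbest
```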
So it's like searching in a sparse space, but we don't have the right metrics built for it. It's a very, very deep paper, and I have talked with some of my co-authors about this, and we need to keep digging into the subject. On one hand it's exciting, because I think model-based RL has a lot of opportunity for growth, where there are these problems that are pretty numerically obvious, and I think there are the right people to solve them. But it definitely warrants some more deep thinking. Cool. Okay, that'll give us lots more to talk about in the future. So how about moving on to your next paper here, Low-Level Control of a Quadrotor with Deep Model-Based Reinforcement Learning, that was Lambert et al. 2019. So this is using a really small robot, was it the Crazyflie robot, can you tell us about the robot to start with? Yeah, so this is a good connection to the research vision that I portrayed in the intro. The Crazyflie is, I'm going to guess, 27 grams or 32 grams. It's wallet-sized, so pretty small, very lightweight, and we chose it one because a lot of researchers use it, and two because it was the smallest one we could find that's kind of ready to fly. And it was related to this effort of trying to use model-based RL to control a micro-robot, which is called the ionocraft in my lab, which is the one that I hand-assemble. It uses 2000 volts to create a plasma region and literally uses silent ion thrust to fly; it's pretty magical. We were trying to develop methods that we could use to control that and robots like it, and the Crazyflie is what we settled on because it is very well supported — I would happily work with the Crazyflie more. And we were trying to see, if we know nothing about it, can we learn to fly by just controlling the motor voltages? So we turned all the onboard controllers off and asked, how hard is it to learn a simple locomotion task? Okay, so that was the goal of the paper, can you tell us about the paper overall, what's the thrust here? Yeah, so this was my entrance into model-based RL, and I think the paper is a good example of practically what you need to do to get one of these systems set up. Now I easily see some of the limitations of model-based RL through the paper, but it was just vanilla deterministic one-step models, and we didn't use ensembles online because ensembles take longer to compute in model predictive control. We set up the model predictive control on a GPU, and with CUDA it's just a lot of for loops, and you optimize it to remove everything except the feed-forward neural network passes, which run in parallel. So you can run model-based model predictive control anywhere from 25 to 100 hertz. You can run it at 200 hertz, but that's not really MPC because you're predicting like one step into the future. If you look at the paper, it gives you the frequencies with different numbers of action samples you're evaluating and different time horizons, but it kind of made me see where the state of computational hardware is.
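The GPU-based MPC described here is essentially random shooting: sample many candidate action sequences, roll each through the learned model in one batched forward pass per step, score them with the reward function, and execute the first action of the best sequence. The sketch below is a generic PyTorch version under those assumptions, not the code used on the Crazyflie; `model` and `reward_fn` are assumed batched interfaces.

```python
# Generic random-shooting MPC with a learned one-step model: all candidate
# action sequences are advanced in parallel with batched forward passes.
# Not the Crazyflie code; `model` and `reward_fn` are assumed interfaces.
import torch

@torch.no_grad()
def mpc_action(model, reward_fn, state, action_dim,
               horizon=12, n_samples=1000, action_low=-1.0, action_high=1.0):
    device = state.device
    # (n_samples, horizon, action_dim) candidate action sequences.
    actions = (action_high - action_low) * torch.rand(
        n_samples, horizon, action_dim, device=device) + action_low
    states = state.expand(n_samples, -1).clone()
    returns = torch.zeros(n_samples, device=device)
    for t in range(horizon):
        states = model(states, actions[:, t])        # batched one-step prediction
        returns += reward_fn(states, actions[:, t])  # batched reward
    best = torch.argmax(returns)
    return actions[best, 0]                          # execute only the first action
```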
And thankfully the computation was kind of the bottleneck but thankfully things are continuing to improve there so in a couple of years we'll get another 2x or 4x improvement in computation and I think that could actually translate to a lot more useful model based control as we can take more samples and more rapidly run our controller which is a huge limitation in practical tasks. And this was a was it off board control. So pretty nice support nicely supported infrastructure with the robot operating system Ross and what happens is we with some hacking we increased the radio communication frequency from the robot to a base station. So the robot would send encoded state data to a base station and the base station is where the GPU was so the GPU would wait for the event which is new state data and upon new state data it would start the NPC and they were kind of communicating back and forth well faster than the actual control frequency to kind of combat some practical difficulties with RF transmission i.e. dropped packets. So it was a kind of well engineered as a strong word for research but we spent a lot of time on the engineering of the actual crazy fly hardware system in order to get it to be robust with low level control when you're doing low level control if you take one bad action things will likely crash if you're doing trajectory planning you tend to have a more robust low level controller on so you have a little bit more flexibility which is ultimately the low level control is what made the pilot. So the low level control is what made the paper hard is we're running at 25 or 50 Hertz where the internal controllers are normally running at 500 Hertz so if our NPC took a couple bad actions in a row it would crash which is breath because sometimes I had to just replace the propellers and replace motors and start over but I think that is good lessons for where robotics research is that getting these results is actually really hard so we need to think about some things like how is the good thing. How is the computer infrastructure setup can we integrate safer exploration and all these things that I kind of know now after working in deeper a little bit more but at the time it was just so motivated to actually get something to learn in front of me which is a pretty magical experience the first time you do it is a robot kind of comes to life cool okay so I was looking at slide to slide deck you had. And you noted on one of the slides that this is just a really challenging control problem at low frequencies like you mentioned lower frequency than the frequency that the I guess the bot is expecting less than 50 Hertz and then you said not a direct competitor to PID so can you tell us about that like good or our model based our eventually be the direct competitive PID is an amateur matter of just speeding things up or can you say more with the line between. PID and what what makes sense for PID and what makes sense for for something like our our model based our own is a very important point so the paper is an important demonstration of what can be done but thing like a lot of commercial off the shelf robots use this thing called PID because it's so easy to work with. 
And for a system that's not going to have a lot of changing dynamics, and it's kind of one and done — you make the robot, it's not going to change payloads, it's not going to crash much — PID is great, because you can tune the parameters and then it's going to work really well; it's low computation, it's well studied. But I think model-based learning, and any sort of RL, becomes very well motivated when some of these things come into the real world, when they become interactive, when they change environments. So there's an idea I like, which is that model-based RL could be operating one step up in the hierarchy, so PIDs would take over for the low-level control, because they're honestly way more impressive than what I demonstrated — what we demonstrated was a proof of concept, and hopefully we can try this on other robots as well. But if model-based RL is then a task planner, it could tune the PID parameters on the fly. Like if you pick up a new object and your inertial properties change, or if one of the motors is weakened, model-based RL has the capability to retrain from recent data and then adapt, which PID mechanisms are not set up to handle well. And model-free RL kind of fits in between there, where model-free RL policies are not very adaptable, but a model-free RL policy potentially could replace a PID policy. A basic processor — like, I've benchmarked this on my laptop many times, and that's years old at this point — can run a soft actor-critic policy at over a kilohertz, so that's pretty good; that's much better than a model predictive controller. So using model-free RL for skill primitives is something that I think would be interesting, and then I still think the idea of having some model that reasons about the world at a higher level is very important. So I like the idea of model-based RL at a higher level of the hierarchy, and then classical controllers and model-free RL operating below it. I'm definitely no expert in control theory, although I did take a course in that with my computer engineering degree, and it seemed mostly about achieving set points, like settling and stabilizing on a new target value. Whereas RL is, I guess, optimizing this reward, which can be anything we want, so free-form. And I've seen some people in robotics use RL to set the set point for the PID, to try to achieve that set point. So are there multiple ways that these things can be combined, and maybe that's the open question, of exactly how they can be combined? Or does it seem pretty clear to you, the right way to fit them together? Yeah, I would say that's like the big open question.
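One way to picture the hierarchy described here — a learned model periodically re-selecting low-level controller gains while the classical controller runs the fast inner loop — is sketched below. The `model` and `reward_fn` callables, the gain ranges, and the assumption that the first state element is the controlled variable are all illustrative placeholders, not anything from the papers discussed.

```python
# Illustration of the hierarchy described above: a learned dynamics model
# scores candidate PD gains via imagined rollouts (slow outer loop), while
# the chosen gains run the fast inner control loop. `model(s, a)` and
# `reward_fn(s)` are assumed placeholder interfaces.
import numpy as np

def score_gains(model, reward_fn, state, gains, horizon=50, dt=0.02):
    """Roll the learned model forward under a PD controller with these gains."""
    kp, kd, target = gains
    total, s, prev_err = 0.0, np.array(state, dtype=float), 0.0
    for _ in range(horizon):
        err = target - s[0]                 # assume s[0] is the controlled variable
        a = kp * err + kd * (err - prev_err) / dt
        prev_err = err
        s = model(s, np.array([a]))         # learned one-step prediction
        total += reward_fn(s)
    return total

def retune_controller(model, reward_fn, state, n_candidates=64, rng=np.random):
    """Slow outer loop: pick the best gains according to model rollouts,
    then hand them to the fast inner-loop controller."""
    candidates = rng.uniform(low=[0.1, 0.0, -1.0],
                             high=[10.0, 1.0, 1.0],
                             size=(n_candidates, 3))
    scores = [score_gains(model, reward_fn, state, g) for g in candidates]
    return candidates[int(np.argmax(scores))]
```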
Something that kind of grinds my gears sometimes is how little the two communities learn from each other, and I have been lucky to work with some people that try to split the middle between control theory and learning-based methods, because they kind of solve the same problems in different ways, and they all have their pros and cons. I mean, it's to the point where the fields use different letters for things — states versus observations, actions versus inputs — but there's a lot of overlap, and that intersection is something that I'm very interested in, though I haven't had much time to explore it. There's one paper where we applied the same model-based RL framework in simulation to compare it to closed-form derived controllers for nonlinear control. So we derived something called a Lie bracket, and then set up this contrived model-based RL problem to see if the model-based RL learns exactly what the closed-form nonlinear controller does. And it kind of does, and it's like, okay, it's showing you can use a learning-based method to solve something that would take an expert to derive by hand. But there really are these methods that can hand off between one another, and we need more people to work at the intersection and understand what has been solved by control theory and what we don't need to re-solve with data. So I've seen some talks by Professor Ben Recht from UC Berkeley as well on this exact topic, and, you know, which one is better, or things that RL is just trying to reinvent without understanding what control theory has done. And I think a lot of control theory seems to be about stability and guarantees and things like that, which in RL is kind of like all bets are off, right — how do we prove that something doesn't oscillate, or how do we prove anything at all? Yeah, and I do think that there will be continued progress here. I have seen a bit more caution from the RL community recently. I mean, if you want, go back and watch the deep RL workshop panel from last year at NeurIPS — the panel was all kind of, ooh, people think RL is fading a bit, and stuff like that, and it's mostly represented by model-free RL practitioners, and the intersection with control is limited. But I think that's something we'll see continue to grow in the coming years. There's a new conference, Learning for Dynamics and Control, L4DC, which kind of occupies that crowd — Ben Recht I think was on the board, and then Claire Tomlin, and representatives from similar groups at other schools that I'm less familiar with. I think that's one to watch when it comes to actual practical uses of learning and how that interfaces with control. And my last comment before we move on to your next fascinating paper here: I guess when I first came to RL, I right away found model-free RL almost distasteful in a way — like, I use it, it's useful, I get it, but it seems like we're just throwing away data in a certain sense, because as soon as we want to adjust the task, we're kind of starting from scratch again. And it's hard for me to imagine, going forward, that model-based won't have a much bigger share of the pie, basically just for that fundamental reason.
Yeah, I like to push back a little bit. I work in model-based RL and I love to figure out the best ways to motivate it, and I think for the reason you brought up, which is building models and using some of the data more, that's something industry will like. Industry likes guarantees; industry likes to know what's happening. So even if you're using RL at all, it's hard to see outside the box, so to say; it's hard to know exactly what's happening. But I think some adoption of model-based RL by industry might just be due to the fact that they can see a bit more of what's going on — they don't want to lose money on things. But model-free RL has a super elegant motivation, which is that it's end-to-end and it's just learning purely from data, and it does work very well. So as long as model-free RL is solving some things that model-based RL doesn't solve, I don't have any qualms with recommending people try it on things, and the burden is kind of on the model-based people to catch up and surpass it if we're going to make all these claims about models being so elegant and useful — which I think there is some momentum for, but it is a two-sided discussion. So let's move on to On the Importance of Hyperparameter Optimization for Model-Based Reinforcement Learning, that's Zhang et al. 2021. Can you give us the high level on this paper? Yeah, this is a paper that I joined after, I think, its first round of review, and it's a very dense one. But at the high level, the title says it perfectly: it's studying what happens if you use AutoML, so automatic hyperparameter tuning, within the model-based RL framework — the wide variety of effects that you can see. These effects are best captured by task performance, but you really see other interesting trade-offs on the model side, which is what is actually being modeled by the dynamics model, and it connects to these questions of what we should be modeling in model-based RL. It's a very interesting numerical study showing how young the field is in some ways, in terms of most practitioners. The best case study I get is people talk to me about model-based RL and they're like, oh, you're using deep networks — how much tuning do people do there, what are the tricks to getting your deep neural network to predict states well? And it's like, everyone uses two hidden layers of about 200 neurons and then doesn't really touch it, and we just train it from there and don't really do anything, which looks kind of like a joke when you read this paper. It goes to the point where, if you tune the model and the controller optimizers — if you fine-tune them, and if you do something called dynamic hyperparameter tuning — you can literally break the simulators that you're working in. So the framework was set up for success, and maybe us grad students, as grad-student parameter tuners, didn't exploit it quite enough, or we weren't good enough to do so by hand. But there is a lot of opportunity for studying how to incorporate parameter tuning into some algorithms that have already been published.
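As a toy picture of the kind of automatic tuning the paper studies, the sketch below runs a simple random search over a few model-based RL hyperparameters, scoring each configuration by the return of a full training run. The search space and the `train_mbrl(config) -> mean_return` function are assumptions for illustration, not the paper's setup (which uses more sophisticated AutoML methods).

```python
# Toy random-search tuner over model-based RL hyperparameters. The search
# space and `train_mbrl(config) -> mean_return` are illustrative
# assumptions, not the configuration used in the paper.
import random

SEARCH_SPACE = {
    "model_lr":      [1e-4, 3e-4, 1e-3],
    "hidden_size":   [128, 200, 256, 512],
    "ensemble_size": [1, 3, 5, 7],
    "plan_horizon":  [5, 10, 20, 30],
    "mpc_samples":   [200, 500, 1000],
    "train_epochs":  [5, 20, 50],
}

def sample_config(rng):
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def tune(train_mbrl, n_trials=30, seed=0):
    rng = random.Random(seed)
    best_cfg, best_return = None, float("-inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        ret = train_mbrl(cfg)   # expensive: full MBRL run, returns mean episode return
        if ret > best_return:
            best_cfg, best_return = cfg, ret
    return best_cfg, best_return
```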
Okay, so you've been talking about dynamic tuning, and I gather from the paper that's talking about the fact that the same hyperparameters might not be ideal to use throughout the whole training — is that right, is that what we mean by dynamic tuning? Yeah, so I learned that this existed well later than I would have wanted to, and how I learned it is from working on a re-implementation of the PETS code, which is that same Chua et al. paper that I referenced, from about 2018. If you look at it, the different environments have different dynamics model parameters, and the interesting one is that the half-cheetah environment does something called incremental model training. It gets its first random batch of data and trains that like a normal network, and then after the first trial it does something that's meta-learning-like, where it takes gradient steps from the previous model's parameters, which is different, because a lot of times people just retrain models from scratch. Ultimately they did that because that's what worked, and that is a discrete change in hyperparameters, which is still not a lot. But dynamic hyperparameter optimization is the idea that at each trial you can fine-tune your model parameters. You would want to do that because you have maybe broader data, you might have more data, so supervised learning is easier to do; and as you have more data, it might be a little bit easier to run a model predictive controller, and then you might be able to increase the model horizon as your model gets better, or run more samples in your model predictive controller if you are at a harder state and need to find a more precise action to choose. Changing these things online is really something that's not exploited, mostly because it does take a lot of computation, and trying to integrate a whole other set of research code, which is automatic parameter tuning, into a model-based RL library while running it online — that's a lot of infrastructure that most academic labs don't really have. It almost reminds me of things like RL-squared: would we want to use RL to tune the hyperparameters of the model-based RL in our loop — is that kind of what we're getting at here, or how might we do that? Yeah, so I kind of see AutoML as kind of like RL; it just tends to use simpler methods than what is deployed in deep RL — simpler is not necessarily the best word, it uses different methods than deep RL uses. A classic example is Bayesian optimization. I've set up parameter tuning with Bayesian optimization, which is another iterative algorithm that works pretty well, and in the paper they also talk about population-based methods, which really all go back to the rich history of RL, and you could call it RL on RL. I just think that, given the magnitude of the results shown in this paper without using something like deep RL, I don't think we need to do deep RL on deep RL. But when you're running an RL loop around another RL loop, the specification of your problem is really hard to reason about, because RL is pretty vague in its definition. It's just a world and an agent, and then all the variables within there are fair game for it to optimize — it's not the most specified.
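A minimal sketch of the "dynamic" tuning idea mentioned above — warm-starting the model from the previous trial's weights and growing the planning budget as data accumulates — is given below. All of the helper functions (`collect_random_episode`, `train_model`, `run_episode`, `concat`), the schedule, and the reuse of `mpc_action` from the earlier MPC sketch are assumed placeholders, so this is just the shape of the loop, not the paper's algorithm.

```python
# Shape of a dynamic (per-trial) hyperparameter schedule for model-based RL:
# warm-start the model each trial and grow the planning horizon / sample
# budget as the dataset grows. All helper functions are assumed placeholders.
def run_mbrl_with_dynamic_hparams(env, model, n_trials=20):
    data = collect_random_episode(env)            # initial random exploration
    horizon, n_samples, epochs = 5, 200, 50
    for trial in range(n_trials):
        # Incremental training: take gradient steps from the previous
        # weights instead of re-initializing from scratch.
        train_model(model, data, epochs=epochs, reinit=False)
        episode = run_episode(env, lambda s: mpc_action(
            model, env.reward_fn, s, env.action_dim,
            horizon=horizon, n_samples=n_samples))
        data = concat(data, episode)
        # Dynamic schedule: with more data, plan further ahead, evaluate
        # more candidates, and fine-tune for fewer epochs per trial.
        horizon = min(horizon + 2, 30)
        n_samples = min(int(n_samples * 1.3), 2000)
        epochs = max(epochs - 5, 10)
    return model
```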
Okay, but with this dynamic tuning, are we optimizing a one-step problem, kind of like a bandit setting, or are we trying to say that this sequence of three sets of hyperparameters is getting us to where we want to go, so we're looking at a multi-step framing? Does that make sense?

Open disclaimer, I am not the expert on AutoML. The group we were working with was developing some new AutoML algorithms, so I definitely defer to the paper if you want to be one hundred percent certain of my answers. But my understanding is that it has a sense of history, or momentum: it understands what worked at the previous time step, incorporates some of that information, and builds into the future. So it does have a short memory, but the optimization is run at every trial to see if there's a new set of parameters to use.

Cool, okay. And at a high level, do you think the cost of all this complexity, with more hyperparameters in model-based RL, is just an unavoidable price we have to pay to get the benefit of better sample complexity? Or is all that extra hyperparameter complexity maybe just an artifact of where things stand today, and one day we might find a simpler way that doesn't require that cost, complexity, and large set of hyperparameters? What do you think about that?

Yeah, I still think we're on the upward trend of system complexity with RL and machine learning models, and we're also going to start seeing more hierarchical solutions that are both model-free and model-based. As the complexity increases, the tools for handling complexity are going to develop, so I think AutoML is going to be slotted in almost automatically at most research groups in any data-driven area. I think that's going to happen within, I don't know, one to ten years, depending on how cutting-edge the group is and its resources, just because, I'm not going to say it's a solved problem, but there are free gains to be had by running more compute, and we all know compute is getting more accessible. Over time the variables and their importance will be understood, and some of the other variables I think will become more static. Model-based RL, by having more subsystems, is likely going to have more hyperparameters, but I don't think that is going to be a huge limiting factor forever; I think it will stabilize.

I guess if we look at the brain as the prototype of a truly intelligent system, then there have just got to be umpteen hyperparameters in there that we never had to tune, but they were tuned over millions of years of evolution, so we don't have to think about them now. So maybe we're going through the same process but having to do it intentionally, which could be kind of painful the first time around.

Yeah, I like that analogy. Hopefully I'm not going to need the genetics of getting chased by a lion, but there are definitely already learned and tuned parameters in my body to respond to those types of situations.

Okay, so Nathan, how do you see your path going forward? Do you have a very specific goal, or a certain way of deciding what you focus on next within the area you described?
Yeah, I definitely have a lot more going on broadly in the robotics space right now than specifically in model-based RL, and some of that is due to who I'm working with and the problems at hand. It goes back to the conversation on the intersection of control theory with learning-based methods, and I think to understand autonomy at a big-picture level, understanding the full picture of methods is very important, so I'm enjoying learning more about things like decentralized control and multi-agent systems. But the next work on my, I would say, thesis path, or my personal path, is that we're trying to reduce the computation of model predictive control in model-based RL. We're doing that by learning more about imitation learning and looking at some of the offline RL literature, to try to make it so we don't have to run MPC online. If you can do all that computation offline, you can compute your predicted trajectories offline from logged data and then do imitation learning of the model predictive controller into a feed-forward policy. That would make it a lot easier to run model-based RL on a real robot, because you won't need a GPU online. So that's the next thing I'm focused on, but I tend to take a pretty broad approach to things and wander around a bit, and that might change: I'm planning to finish my PhD probably around the end of this year, and I might move somewhere else where there's more support for pushing on RL types of things, and I would happily broaden the scope of that work.

Cool, yeah. That dichotomy, between running MPC and how expensive that is, versus just distilling it into a policy, makes me wonder, and I don't know if this is a fair question, but are we actually doing MPC in our minds when we're playing sports and things like that, or are we just doing muscle memory? I don't know enough to say, but that's an interesting question to me.

I've had a lot of conversations with people who work in areas that relate to biology, and leveraging the neuroscience analogy strongly is not something I like to do, but taking the statement weakly, they do think that mice replay models in their brains. There's some preliminary evidence that if you have a mouse in a maze trying to get to the exit, it replays the visual neurons that correspond to traveling through the maze. I don't know, this is something I'm trying to figure out in my own time. I blog about robotics and RL a lot, and there's also the interesting process of writing and distilling the ideas, which is something I think academia is good for, but a lot of our research prioritizes just getting papers out the door, so some of that idea distillation, in terms of what we're actually doing, might get missed.

So for the audience, we will link to Nathan's blog in the episode show notes on talkrl.com. I want to go back to that little robot you had, the ionocraft; we didn't say too much about it. You said how small it was and that it had a very interesting way of producing thrust. So how small is the ionocraft, and do you have some affinity for these tiny robots?
It's probably a love-hate relationship. I mean, it's been a project that's run through my whole PhD, and it's grounded how I think about problems. It's this nickel-sized robot in area, it essentially weighs zero to the human hand, it weighs milligrams; if you put an off-the-shelf IMU on it, the IMU raises the mass by like 50 percent, and that's just a little silicon die, like a few grains of sand. So it's very tiny. It's made in a silicon nanofabrication process, which is what my advisor specializes in, and it's some of the stuff that I did in undergrad. So this is my somewhat unusual path into RL: I worked on a lot of electrical engineering hardware, I learned a lot about models, and a lot of the older math, like the old school of genetic algorithms and that whole area. And I like that I'm trained in a bit of a different way; I think it produces some cool results in RL when you have people coming from different backgrounds, and that's why I want to work with neuroscientists and people like that.

But back to small robots: it's kind of the most interesting application space at hand. We might be able to hand-tune PID parameters for the robot, but the limiting factor, and the uncertainty in the environment, is that we have to hook up something like five to eight tethers to the robot to supply power and actually control the thrusters. There are a lot of analytical models you can build to figure out things like the wire force, which has a certain spring constant, and the wire mass also affects the flight, and for every flight the wires are different. So it quickly becomes either rapid adaptation of model learning, or you're on the clock trying to hand-tune a PID parameter set within a minute or two, because it probably won't work on a different robot. So it's been interesting, and it's kind of like a carrot in front of me that's always leading me on through the PhD, discovering these interesting things about model-based RL along the way. I definitely have the benefit of having someone who has let me chase the carrot wherever it takes me and not just required me to solve any one task at hand, which has resulted in a whole bunch of different investigations in model-based learning.

Yeah, I love these tiny bots. Do you see yourself going for even smaller robots, or was that more of a phase?

If you can solve the self-assembly problem. The scale they're at now is grad-student assembly, which is really hard, and if you go up a factor of two or four, the grad-student assembly becomes really easy. But if you go down any further, you need some automatic assembly mechanism, which is pretty tricky. But I think the idea of novel robots and small robots is really good for the imagination, and for understanding that we are still coming up with new robots, and there are an infinite number of tasks to solve with them. I mean, it took me a bit to get here, but ultimately, if you look at my website, my quals talk, the second slide is just, look at all these cool robots. There's a lot from Stanford, there's my group, they're really all over, and I think that points to
a future where you can come up with an idea for how something can move, and if something can move, it probably can solve some task. Being able to just learn a controller for any real set of actuators and structures would be really exciting. It's just infinite robots.

So besides your own work and what we've talked about so far, are there other things going on in RL that you find pretty interesting these days?

I think offline RL is going to be big. I haven't studied the specifics, but there are people I respect a lot who are putting a lot of chips on the table in that area, and also just because it removes some of the safety concerns: if you can give RL systems just the right dataset and they give you an output, you don't have the interaction with the real world, which can be tricky. Like, I'm somewhat worried about RL systems for internet processes; if you just start unrolling a general RL system for what's displayed on my phone, it's really hard to model what the effects on me are going to be, and harder still to model the effects on me in the context of my peers. Offline RL slows some of those feedback loops down. And I don't think we've talked about feedback a lot in this conversation, but RL is inherently a feedback structure, and control theory showed that feedback is incredibly powerful. Trying to understand what RL's notion of feedback is, how that relates to feedback control, which is the classical method, and how feedback compounds intelligence and creates these emergent properties, I think is something that is really powerful.

Cool. And from my little experience with control systems, it seemed like it was also saying feedback can be super dangerous, because it can set up these oscillations and make things super unstable.

Yeah, and there are also funny things, like if you have two stable modes that you switch between, and both are under feedback, switching between the two stable modes can give you an unstable mode. So there are a lot of little nuggets in control theory that might be unexpected.

Yeah, that's the kind of stuff I think RL can learn from control, for sure.

There are definitely some people starting to explore it. Looking at this model predictive control work, there's work from Francesco Borrelli and Vijay Kumar, who are trying to do similar distillations and apply learning methods to optimal model predictive control. They're doing things like, with optimal model predictive control you can learn a second controller that acts as a kind of safety critic, and things like this, while still trying to keep some of the optimality constraints. I'm falling behind in terms of understanding all the math in the optimal control, but hopefully I can catch up. Those things excite me a lot; keeping any notion of optimality seems impossible, but they seem to be making progress on it.

Nathan Lambert, this has been really fascinating and a good time. Thanks so much for sharing your time and your insight with the TalkRL audience and with me today, really appreciate it.

Yeah, I'm really happy to be here, Robin.

Notes and links for this episode are at talkrl.com. If you like this show, I need your support, and you can help in a few ways. One, subscribe on your favorite podcast platform; subscriptions make a big difference. Two, follow us on Twitter at talkrl podcast; we love retweets.
Three, give us a five-star rating on Apple Podcasts. If you don't think we deserve five stars, let us know on Twitter what we could do better.
[ { "end": 12, "start": 0, "text": " This is TalkArail Podcast. All reinforcement learning, all the time." }, { "end": 20, "start": 12, "text": " Interviews at Brilliant folks across the world of RL. I'm your host, Robin Chohan." }, { "end": 26, "start": 20, "text": " Nathan Lambert is a PhD candidate at UC Berkeley. Nathan, thanks so much for being here." }, { "end": 39, "start": 26, "text": " Hi, Robin. Thanks for having me. What is the focus of your work? So I kind of emerged into this model-based RL area from a potentially different path than a lot of people in the D-BRL space." }, { "end": 50, "start": 39, "text": " And I came from kind of a minimum data framing where I was assembling these robots by hand that take hours to make and they break within just like a minute of collective data." }, { "end": 61, "start": 50, "text": " So I wanted to know what is the fastest way to generate a controller with an unknown system. And we kind of arrived at Model-based RL because it was showing to be very sample efficient in some simulated tasks." }, { "end": 70, "start": 61, "text": " But then also when you have a model, there's a lot of interesting interpretation that you can do to try to understand what your system is doing and kind of learn from the experiments that you run." }, { "end": 80, "start": 70, "text": " And RL is very interesting as a framework, so we've kind of been growing into studying how Model-based RL works and then with some applications to novel robotics." }, { "end": 92, "start": 80, "text": " So I encountered your work at Neurup's 2020, just recently, specifically this next paper, learning accurate long-term dynamics for Model-based reinforcement learning, Lambert at all 2020." }, { "end": 95, "start": 92, "text": " Can you tell us about this paper? What's the main idea here?" }, { "end": 113, "start": 95, "text": " Yeah, so in this paper we are kind of, we started trying to think about Model-Preditive Control where what happens in a lot of Model-based RL algorithms now is they kind of unroll these dynamics predictions by compounding predictions through a neural network." }, { "end": 126, "start": 113, "text": " And anyone that's tried to compound predictions through a neural network knows that you get some weird numerical behavior and essentially diverging predictions. And we're trying to rethink how predictions in Model-based RL could be done." }, { "end": 138, "start": 126, "text": " And it's pretty hard, the standard paradigm I'll refer to a couple times is like this one step model where in a one step model you have a discrete Markov decision process and you're modeling the change in the state." }, { "end": 146, "start": 138, "text": " So it's like a delta formulation where you pass in a state and an action matter time and it predicts the change to give you the state of the next time." }, { "end": 154, "start": 146, "text": " And that works, but you have this compounding problem. But in this work we are trying to come up with a new time dependent prediction mechanism." }, { "end": 163, "start": 154, "text": " And we call this the trajectory-based model, which is really coming from the fact that it's trained on all of the sub trajectories within a rollout." }, { "end": 178, "start": 163, "text": " So you take an episode of trial and simulation or the real robot, you get this like a couple hundred time step long segment and then you have, and what we did is we take each state in there and you label each segment into the future." 
}, { "end": 185, "start": 178, "text": " So you have a lot of segments of that trajectory and we explicitly added a time variable to the prediction." }, { "end": 202, "start": 185, "text": " So instead of a state in an action, it takes in a state and a time step and then also control parameters, which is kind of a, it's related to the action, but really the core of it is we wanted to be able to predict at a specific time to get away from the compounding predictions that one step models use." }, { "end": 208, "start": 202, "text": " What about the policy? Is it, I guess that the model that's learned is specific to that one policy? Is that right?" }, { "end": 226, "start": 208, "text": " Yes, it is in a practical way and this is something that a lot of my research has come to work on, but maybe not necessarily intentionally is like the idea of a dynamic model is going to be like focusing on a specific thing and you should control that." }, { "end": 249, "start": 226, "text": " So we'll talk about this more with some later papers, but in this case the trajectory based model we pass in some closed form controller parameters. So we did our experiments with LQR with PID and on some robotic tasks and we passed those tuned controller parameters into the network as an input to help with the long term prediction accuracy." }, { "end": 260, "start": 249, "text": " The thought behind that is an action sequence that you would use to kind of unroll one step models, the action sequence are all taken and correlated from those control parameters." }, { "end": 277, "start": 260, "text": " So it's kind of a compression of information, but then it comes at a cost where say we're currently trying to do research to figure out how to embed some neural network policy into control parameters because you no longer can take data from a different algorithm." }, { "end": 293, "start": 277, "text": " Like I can't take dynamic data from a robot running software, doctor critic and try to incorporate it into my model based approach. So it's kind of a trade off. It's like what the dynamics model does is it is specialized in something and that should be hopefully correlated with how you're going to use it with control." }, { "end": 298, "start": 293, "text": " So the control parameters are telling the model something about the policy that will be want to use." }, { "end": 314, "start": 298, "text": " Yeah, so explicitly like an example that we use in the papers, we do a a reader task, which is a robotic arm in space and we control the joint angles with the PID control, which is a classic like it's a proportional integral integral derivative control." }, { "end": 323, "start": 314, "text": " It kind of gives these nice smooth responses and what we do is we pass in the constants that define the PID control and then also a joint angle target." }, { "end": 333, "start": 323, "text": " So one of the most important ones is actually the joint angle target for the PID control because that kind of like controls where in space the end effect or would be so." }, { "end": 339, "start": 333, "text": " And then from there it can predict kind of the long term dynamics of where the arm will end up." }, { "end": 346, "start": 339, "text": " So you train over a large range of those control parameters and then it's able to to interpolate between. Is that the idea there?" 
}, { "end": 356, "start": 346, "text": " Yeah, so we collect a whole bunch of different PID tunings and joint angle targets and then hope in hopes that it kind of covers the whole space of the task." }, { "end": 364, "start": 356, "text": " So if you like running the task at a low level generally the goal is resampled at a different 3D position in every trial." }, { "end": 371, "start": 364, "text": " So then the policy parameters in this case PID parameters we generalize to cover that space." }, { "end": 379, "start": 371, "text": " And are these deterministic models that are being learned or I guess the concept could carry over to the classic models as well." }, { "end": 390, "start": 379, "text": " Yeah, so in the paper we mostly characterize the deterministic models but if you're digging deeper into a model based or a literature there's kind of two axes that people have been working on." }, { "end": 396, "start": 390, "text": " There's the idea of unsombling and then there's the idea of using a different loss function to create a probabilistic model." }, { "end": 406, "start": 396, "text": " Unsombling is designed to help with the epistemic uncertainty by kind of smoothing out your model capacity over the dataset with cross validation." }, { "end": 413, "start": 406, "text": " And probabilistic models are designed to kind of capture the uncertainty that is inherent to the task at hand." }, { "end": 420, "start": 413, "text": " So if you rerun a robotic task multiple times you're going to get multiple outcomes because most of these processes are somewhat stochastic." }, { "end": 431, "start": 420, "text": " And generally there's a paper from Kurt Lentua and Sergei Levin's group that did this with they have like their pets algorithm that characterize these tradeoffs." }, { "end": 435, "start": 431, "text": " And the same changes to models work for the trajectory based model." }, { "end": 446, "start": 435, "text": " So we didn't really have room to go into all the details in the paper but one of the interesting ones is when you apply this probabilistic loss function to the trajectory based model." }, { "end": 457, "start": 446, "text": " You get a stable uncertainty estimate where the uncertainty is roughly proportional to kind of like the oscillations or the uncertainty in the task." }, { "end": 471, "start": 457, "text": " If you run like a standard feedback controller like PID or LQR depending on the tuning of very common behavior is kind of oscillations and we found that the uncertainty in the trajectory based model with the probabilistic loss function," }, { "end": 478, "start": 471, "text": " the bounds kind of model where the oscillations could be rather than tracking the exact frequency of the oscillations." }, { "end": 486, "start": 478, "text": " And that's pretty nice because if you're trying to plan into the future with that you can kind of say like, oh the state will likely be somewhere in this region." }, { "end": 498, "start": 486, "text": " And kind of that's we don't talk about safety in the paper but understanding where the distribution of potential states could be is very important if you know that your individual prediction is likely to have some error." }, { "end": 511, "start": 498, "text": " Yeah, I love the fact that you get just so many more samples from a trajectory using your method. It seems so elegant and kind of beautiful. I write away as like, wow that's something very appealing about this." 
}, { "end": 516, "start": 511, "text": " And that's why I wanted to reach out to you, you know, besides all your other interesting work." }, { "end": 526, "start": 516, "text": " But help me understand model a traditional model I think would also predict reward. Are you trying to predict reward two or is this mostly about the state transitions?" }, { "end": 537, "start": 526, "text": " We mostly focused on the state transitions. So it's kind of a very practical point in model based RL. So in building these frameworks, if you're like doing model predictive control, you kind of have to pass it a reward function." }, { "end": 545, "start": 537, "text": " And you pass it a reward function and I can compute the compute the actual reward from the states. And the other way to do it is then have the model predict the rewards." }, { "end": 556, "start": 545, "text": " And I personally haven't seen a big difference in the original thrust of this paper was just purely on like, how do we improve prediction accuracy? So we didn't try predicting rewards." }, { "end": 570, "start": 556, "text": " But that is something. And like in terms of when I say predict rewards, something that people do is explicitly with the neural network, you have a output that is the reward at each state." }, { "end": 585, "start": 570, "text": " So it's kind of trying to learn what the reward function would be. And then in this paper, we did do something related, which is computing the reward from the projected state trajectory, which is a little bit different because you're using the reward function from the environment." }, { "end": 608, "start": 585, "text": " And generally what we found is that the trajectories that we were predicting were much more stable, stable in the sense that you don't get weird numerical errors from predicting really far into the future or they just fit well to the tasks at hand where using the reward function from the environment, we could predict the downstream rewards pretty well." }, { "end": 633, "start": 608, "text": " Where the problem with one step models is that as this compounding error comes about, the reward is very hard to predict. And ultimately, there's kind of this, the crux of doing any RL problem is you're trying to optimize the task at hand and it kind of gives you one more tool to understand the learning of the dynamics that you have if you can correlate it with the actual reward of a task." }, { "end": 644, "start": 633, "text": " That makes total sense. Yeah, I love this paper and I've definitely experienced that problem of the I call it the tyranny of the one step. And so this is a really elegant way of handling that." }, { "end": 659, "start": 644, "text": " Okay, so let's move on. You have a few of the papers here. We would love to chat about so you have objective mismatch in model based reinforcement learning, Lambert at all 2020 and that was at a conference on learning for dynamics and control. Can you tell us what's the idea here?" }, { "end": 671, "start": 659, "text": " So this paper is very related to the one that we have talked about and it's kind of the backwards order from how I did the research, but we're kind of learning about both of them in synergy." }, { "end": 680, "start": 671, "text": " So this paper was looking at the model based RL framework and trying to understand how this modular framework where we have a dynamic model training." 
}, { "end": 690, "start": 680, "text": " And then so generally separate and then we do the control optimization and how this handoff and is more of a handoff than an exchange of information." }, { "end": 702, "start": 690, "text": " It's kind of one and then the other and how doing this sequentially results in potentially suboptimal performance and what signals we may be able to see that can improve the model based RL framework." }, { "end": 712, "start": 702, "text": " And what we did what we centered this paper on is kind of trying to understand what the optimization of the one step model for prediction accuracy." }, { "end": 720, "start": 712, "text": " What it is doing and what it is not doing in terms of understanding the environment in terms of optimizing the task at hand." }, { "end": 733, "start": 720, "text": " So you show how model likelihood does not correspond closely with episode reward and that seems to be the thrust of some of the most important charts. Can you explain what exactly does that mean?" }, { "end": 745, "start": 733, "text": " Yeah, so this is something that I was hinting at earlier in this conversation and ultimately I've come to phrase this as when you're learning a dynamic model from data, there's really no free lunch." }, { "end": 753, "start": 745, "text": " So what you're doing is you have some distribution of data and in RL it's kind of iteratively learned distribution." }, { "end": 766, "start": 753, "text": " So as you learn a task you're kind of building this data set out it grows a little bit into your state space and a dynamics model learns accuracy kind of uniformly with respect to the density of data that you have." }, { "end": 776, "start": 766, "text": " But in reality the distribution of your data does not match up exactly well with the distribution of the data for the tasks that you're trying to solve." }, { "end": 789, "start": 776, "text": " So the task you're trying to solve could be like an expert distribution, but the data that you get is some on policy data normally started with purely random state action pairs and those are probably not very relevant to the task you're trying to solve." }, { "end": 796, "start": 789, "text": " And when you're using one step models it's optimizing over that entire state space rather than the task at hand." }, { "end": 804, "start": 796, "text": " So you can end up with an accurate model overall that is not necessarily accurate at the task you want to do." }, { "end": 811, "start": 804, "text": " So the readout for the machine learning engineer is going to say I have done supervised learning I have done it well." }, { "end": 830, "start": 811, "text": " My loss is very low. I think that this model will be useful for downstream control. But in reality a globally accurate model in terms of is like is accurate on average and there are sub areas such as for the task that might actually result in lower reward." }, { "end": 845, "start": 830, "text": " So that's that's the long story of it. But it's very it's very nuanced. So then it comes to say like what do we do with our model. It's like do we keep training for accuracy? And I would say yes, but we want to be able to focus our accuracy on the areas that we are interested in." }, { "end": 851, "start": 845, "text": " And I guess it's challenging because we don't always know in advance what the area is we're interested in." }, { "end": 854, "start": 851, "text": " We don't know where the experts going to want to go. Is that true. Is that the case?" 
}, { "end": 869, "start": 854, "text": " Yes, so there's an example in this paper where we take a robotic task and we predefined an expert trajectory and we show that when you predefined an expert trajectory and wait the dynamics model around that trajectory." }, { "end": 880, "start": 869, "text": " So you prioritize samples near the expert. The learning rate is much higher. The data efficiency is higher in the performance of the downstream controllers better." }, { "end": 897, "start": 880, "text": " So it's like, okay, we knew what we needed to learn. So if you focus there, it does better. But kind of when I have the time, I would like to return to this paper and kind of build out that algorithm and you have to figure out how to iteratively update your expert and iteratively figure out what your expert trajectory should be." }, { "end": 905, "start": 897, "text": " So that's kind of the open question in terms of applying this in the real world and not in a simulated environment with a task you've already solved." }, { "end": 915, "start": 905, "text": " So this paper had a whole section on adversarial attack on model performance that I thought was pretty interesting. Do you want to tell us more about that, how that worked and what that was for?" }, { "end": 919, "start": 915, "text": " Yeah, this is one of my favorite results from the paper, probably because it was also so easy to get done." }, { "end": 928, "start": 919, "text": " And a lot of times in research people know that you kind of work at your keyboard a long time to get any numerical result to work." }, { "end": 938, "start": 928, "text": " This one was done in an afternoon. So it ultimately like kind of just set it up and it worked immediately, which is when you know that it's probably doing something right and you're not just forcing random seeds or anything." }, { "end": 949, "start": 938, "text": " And what we did is we kind of wanted to fine tune a model so that the accuracy you get as a read out over the whole training data set remains high." }, { "end": 965, "start": 949, "text": " But the task performance goes down and this kind of goes back to what I was saying about no free lunch and dynamics models. So ultimately what we did is we took like a CMA ES optimizer to change the weights and biases of the output layer of a feed forward dynamics model." }, { "end": 975, "start": 965, "text": " And so the one step dynamics model. And then as we are tuning the dynamics model parameters, we used it for a model predictive controller in the carpal environment." }, { "end": 985, "start": 975, "text": " And we are looking for a model that reported a high average accuracy, but got a low reward. So by just iterating a few trials of CMA ES." }, { "end": 1002, "start": 985, "text": " We were able to find a model that reported the same level of global accuracy over the on policy data set. But in reality, if you probed deeper the model accuracy over like the area of interest or what we have called the expert was very low." }, { "end": 1019, "start": 1002, "text": " And therefore the reward for the carpal drop task dropped from full to just 50% performance very rapidly. So it's kind of saying like there's we're not doing that much intelligent design with how to wait the dynamics models and what the one step models are doing." }, { "end": 1029, "start": 1019, "text": " And it's pretty easy to find a model that isn't useful for solving the task, but might look useful if you were just training this on the data set at hand." 
}, { "end": 1044, "start": 1029, "text": " So it's like a worst case type thing. Yeah, I kind of interpret this as like the space for bad models might be well more populated than the space for useful models. So it's like searching in the sparse space, but we don't have the right metrics built for it." }, { "end": 1065, "start": 1044, "text": " It's a very, very deep paper and I have talked with some of my co authors about this and we like need to keep digging into the subject. And on one hand is exciting because I think model based or else has a lot of opportunity for growth where there's these problems that are pretty numerically obvious. And I think that there's the right people to solve them." }, { "end": 1082, "start": 1065, "text": " But it's like it definitely warrant some more deep thinking cool. Okay, then that'll give us lots more to talk about in the future. So how about moving on to your next paper here low level control of a quadruder with deep model based reinforcement learning that was Lambert at all 2019." }, { "end": 1099, "start": 1082, "text": " So this is using a really small robot was it the crazy fly robot can you tell us about the robots to start with. Yeah, so this is a good connection to my research vision that I portrayed in the intro. So the crazy fly is I'm going to guess you are 27 grams or 32 grams. It's" }, { "end": 1114, "start": 1099, "text": " a wallet sized so pretty small very lightweight and we were we chose it one because a lot of researchers use it in two because it was the smallest one that we could find that's kind of ready to play and it was related to this." }, { "end": 1141, "start": 1114, "text": " Trying to use model based or else to control a micro robot which is called the ionicraft in my lab which is the one that I hand assemble it uses 2000 volts to create a plasma region and literally uses silent ion thrust to fly it's it's pretty magical and we were trying to develop methods that we could use to control that and robots like it and the crazy fly is what we settled on because it is very well supported I would happily work with the crazy fly more." }, { "end": 1153, "start": 1141, "text": " And we're trying to see if we know nothing about it can we learn to fly by just controlling the motor voltages so it turned all the on board controllers often like how hard is it to learn a simple locomotion task." }, { "end": 1170, "start": 1153, "text": " Okay, so that was the goal of the paper can you tell us about the kind of the paper overall what's the thrust here. Yeah, so this was this is my entrance into model based our out and I think the paper is a good example of practically what you need to do to get one of these systems set up." }, { "end": 1199, "start": 1170, "text": " Now I easily see some of the limitations of model based are all through the paper but it was just vanilla deterministic one step models and we didn't use ensembles online because ensembles take longer to compute in model predictive control we set up the model predictive control on a GPU and with kuda it's just a lot of for loop and you kind of optimize it to remove everything except for the feed forward neural network passes in parallel and you can see that the model is not going to be able to do that." }, { "end": 1206, "start": 1199, "text": " So you can run the process in parallel and you can run model based model predictive control anywhere from 25 to 100 hertz." 
}, { "end": 1225, "start": 1206, "text": " You can run it at 200 hertz but that's not really NPC because you're predicting like one step into the future. So if you look at the paper it can give you some frequencies with different number of samples that you number number of action samples are evaluating in different time horizons but it kind of made me see where the point of computational hardware is." }, { "end": 1249, "start": 1225, "text": " And thankfully the computation was kind of the bottleneck but thankfully things are continuing to improve there so in a couple of years we'll get another 2x or 4x improvement in computation and I think that could actually translate to a lot more useful model based control as we can take more samples and more rapidly run our controller which is a huge limitation in practical tasks." }, { "end": 1252, "start": 1249, "text": " And this was a was it off board control." }, { "end": 1268, "start": 1252, "text": " So pretty nice support nicely supported infrastructure with the robot operating system Ross and what happens is we with some hacking we increased the radio communication frequency from the robot to a base station." }, { "end": 1293, "start": 1268, "text": " So the robot would send encoded state data to a base station and the base station is where the GPU was so the GPU would wait for the event which is new state data and upon new state data it would start the NPC and they were kind of communicating back and forth well faster than the actual control frequency to kind of combat some practical difficulties with RF transmission i.e. dropped packets." }, { "end": 1322, "start": 1293, "text": " So it was a kind of well engineered as a strong word for research but we spent a lot of time on the engineering of the actual crazy fly hardware system in order to get it to be robust with low level control when you're doing low level control if you take one bad action things will likely crash if you're doing trajectory planning you tend to have a more robust low level controller on so you have a little bit more flexibility which is ultimately the low level control is what made the pilot." }, { "end": 1351, "start": 1322, "text": " So the low level control is what made the paper hard is we're running at 25 or 50 Hertz where the internal controllers are normally running at 500 Hertz so if our NPC took a couple bad actions in a row it would crash which is breath because sometimes I had to just replace the propellers and replace motors and start over but I think that is good lessons for where robotics research is that getting these results is actually really hard so we need to think about some things like how is the good thing." }, { "end": 1373, "start": 1351, "text": " How is the computer infrastructure setup can we integrate safer exploration and all these things that I kind of know now after working in deeper a little bit more but at the time it was just so motivated to actually get something to learn in front of me which is a pretty magical experience the first time you do it is a robot kind of comes to life cool okay so I was looking at slide to slide deck you had." 
}, { "end": 1401, "start": 1373, "text": " And you noted on one of the slides that this is just a really challenging control problem at low frequencies like you mentioned lower frequency than the frequency that the I guess the bot is expecting less than 50 Hertz and then you said not a direct competitor to PID so can you tell us about that like good or our model based our eventually be the direct competitive PID is an amateur matter of just speeding things up or can you say more with the line between." }, { "end": 1420, "start": 1401, "text": " PID and what what makes sense for PID and what makes sense for for something like our our model based our own is a very important point so the paper is an important demonstration of what can be done but thing like a lot of commercial off the shelf robots use this thing called PID because it's so easy to work with." }, { "end": 1449, "start": 1420, "text": " And for a system that's not going to have a lot of changing dynamics and it's kind of one and done you make the robot it's it's not going to change payloads it's not going to crash much of PID is great because you tune you can tune the parameters and then it's going to work really well it's low computation it's well studied but I think model based learning and any sort of RL comes to be very well motivated when some of these things come into the real world when they become interactive when they change environments." }, { "end": 1472, "start": 1449, "text": " So there's kind of an idea that I think is it's kind of like model based RL could be operating at one step up in the hierarchy so PIDs would take over for the low level control because they're honestly way more impressive than what I demonstrated what we demonstrated was like a was a proof of concept and that hopefully we can try this on other robots as well." }, { "end": 1494, "start": 1472, "text": " But if model based RL is then a task planner it could tune the PID parameters on the fly like if you pick up a new object in your inertial properties change or if one of the motors is weakened model based RL kind of has the capability to retrain from recent data and then adapt which PID mechanisms are not set up to handle well." }, { "end": 1509, "start": 1494, "text": " And model free RL kind of fits in between there where model free RL policies are not very adaptable but a model free RL policy potentially could replace a PID policy so a basic processor like I've" }, { "end": 1538, "start": 1509, "text": " baseline to my laptop many of times and that's years old at this point it can run a soft actor critic policy over killer hurts so that's pretty good that's much better than a model predictive controller and kind of using model free RL for skill primitives is something that I think would be interesting and then I still think the idea of having some model that reasons about the world that a higher level is very important so then I like the idea of model based RL kind of or model based RL." }, { "end": 1545, "start": 1538, "text": " It's a higher level of the hierarchy and then classical controllers and model free RL operating below it." }, { "end": 1559, "start": 1545, "text": " I'm definitely no expert in control theory although I did take a course in that when I with my computer engineering degree and it's it's it's seen mostly about achieving set points like setting the begin stabilizing on a new target value." 
}, { "end": 1579, "start": 1559, "text": " And where is RL is I guess optimizing this reward which can be anything we want so free form and I guess I've seen some people in robotics use the RL to set the set point for the PID to try to achieve that set point so is there is there multiple ways that these things can be combined and maybe that's is the" }, { "end": 1589, "start": 1579, "text": " open question of exactly how they can be combined like or or is it seemed pretty clear to you the right way to to fit them together." }, { "end": 1608, "start": 1589, "text": " Yeah, I would say that's like the big open question. Tell me that kind of grants my gears sometimes is held little the two communities can learn from each other and I have been lucky to work with some people that kind of try to split the middle between control theory and learning based methods because they kind of solve the same problems in different ways and they all have their pros and cons but I mean" }, { "end": 1636, "start": 1608, "text": " it's to the point where like the fields use different letters for things it's like states versus states and observations and actions and inputs it's like but there's a lot of overlap and that intersection is something that I'm very interested in like I haven't had much time to explore it now there's one paper where we applied the same model base or L framework and in simulation to kind of compare it to like closed form derived" }, { "end": 1650, "start": 1636, "text": " controllers for non-linear control so we derived something called a lead bracket and then kind of set up this contrived model based our problem to see if the model based our learns exactly what the closed form non-linear controller does." }, { "end": 1674, "start": 1650, "text": " So it kind of does and it's like okay it's it's like showing you can do a learning based method to solve something that would take an expert to derive by hand but there really are these methods that can hand off between one and another and we need more people to kind of work at the intersection and understand what has been solved by control theory and what we don't need to resolve with data." }, { "end": 1703, "start": 1674, "text": " So there's I've seen some talks by Professor Ben Rack from UC Berkeley as well on on this exact topic and how you know which which ones better or or things things that RL is just trying to reinvent without understanding what control theory is done and I think a lot of control theory seems to be about like stability and guarantees and things like that which in RL is kind of like all bets are off right how do we prove that something doesn't oscillate or how do we prevent anything at all." }, { "end": 1732, "start": 1703, "text": " Yeah and I do think that there will be continued progress here I have seen a bit more caution from the RL community recently I mean if you want go back and watch the DPRL workshop from last year at NEROPS with like the panel the panel is all kind of like OOO like RL's people think RL is fading a bit and and stuff like that and it's mostly just it's mostly represented by model for RL practice learners and the intersection with control theory." 
}, { "end": 1761, "start": 1732, "text": " The intersection with control is limited but I think that's something that we'll see to continue to grow in the coming years it's it's there's a new conference like learning for decision and control or learning for dynamics and control is L4 DC which kind of occupies the crowd of like Ben Rack I think was on the on the board and then Claire Tomlin and representatives from similar similar groups but other schools that I'm less familiar with and I I think that's one to watch when it comes to actual practical." }, { "end": 1789, "start": 1761, "text": " So I think that's a good thing to do is to actually practical uses of learning and how that interfaces with control and my last comment before we move on to your next fascinating paper here I guess when I first came to RL I right away I was I found model free are all kind of almost just tasteful in a way like I use it it's useful I get it but it seems like we're just throwing away data in a certain sense because as soon as we want to adjust the task we're kind of starting from scratch again." }, { "end": 1799, "start": 1789, "text": " And the it's hard for me to imagine the far future that I'm or going forward that model based won't have like a much bigger share the pie basically just for that fundamental reason." }, { "end": 1818, "start": 1799, "text": " Yeah I like to push back a little bit I work in model based and I love to figure out the best ways to motivate it and I think for the reason you brought up which is like building models and using some of the data more I think that's something the industry will like like industry likes guarantees industry likes to know what's happening so even if you're using if you're using our all it all it's it's hard to" }, { "end": 1847, "start": 1818, "text": " see outside the box so to say it's hard to know exactly what's happening but I think some adoption in model based or all my industry might just be due to the case that they can see a bit more that's going on like they don't want to lose money on things but model for RL says has a super elegant motivation which is it's end to end and it's just learning purely from data and it does work very well so as long as model for RL is solving some things that model based or all doesn't solve I don't" }, { "end": 1866, "start": 1847, "text": " don't have any qualms with recommending people to try it on things and the kind of the burdens on the model model based people it kind of catch up and surpass if we're going to make all the these claims about models being so elegant and useful which I think there is some momentum for but it is a two sided discussion." }, { "end": 1876, "start": 1866, "text": " So let's move on to on the importance of hyper parameter optimization for model based reinforcement learning that's zang at all 2021 can you give us the high level on this paper?" 
}, { "end": 1905, "start": 1876, "text": " Yeah, this is a paper that I join after I think it's first round of review and it's a it's a very dense one but the high level the title says it perfectly it's studying how if you use auto parameter auto ML so automatic parameter tuning within the model based RL framework it's studying the wide variety of effects that you can see and these effects best are captured by task performance but" }, { "end": 1930, "start": 1905, "text": " really you see other interesting trade offs at the model side which is like what is what is actually being modeled by the dynamics model and it kind of connects these questions of what should we be modeling in model based neural but it's very interesting just numerical study of showing how young the field is in some ways in terms of most practitioners like the best case study I get is people talk to me about model based RL and they're like oh we're using deep networks like how much how much is it going to be a lot of things that we can do is really important to you and I think it's very interesting." }, { "end": 1959, "start": 1930, "text": " So what I get is people talk to me about model based RL and they're like oh you're using deep networks like how much how much tuning do people do there like what are the tricks to getting your deep neural network to predict states well is like everyone uses two hidden layers of about 200 neurons and then doesn't really touch it and we just train it from there and don't really do anything which is like kind of a joke when you read this paper and it goes to the point where you tune the model and the control optimizers if you find tune them." }, { "end": 1985, "start": 1959, "text": " And if you do something called dynamic hyper parameter tuning you can literally break the simulators that you're working in so the framework was set up for success and then maybe us grad students as grad student parameter tuners didn't exploit quite enough or we weren't good enough to do so by hand but there is a lot of opportunity for studying like how to incorporate parameter tuning into some of algorithms that have already been published." }, { "end": 1999, "start": 1985, "text": " Okay, so you've been talking about dynamic tuning and I gather from the paper that's that that's talking about the fact that the same hyper parameters not might not be ideal to use throughout the whole trainings is that right that we mean by dynamic tuning." }, { "end": 2024, "start": 1999, "text": " Yeah, so I learned that this existed well later than I would have wanted to and how I learned this is from working on a re implementation of the pets code which is that same currently to a at all paper that I referenced from about 2018 and if you look at it the different environments have different dynamics model parameters and the interesting one is the half cheetah environment does something called incremental model training." }, { "end": 2051, "start": 2024, "text": " So it gets its first random batch of data and it trains that like a normal network and then after the first trial it kind of does something that's metal learning like where it takes gradient steps from the previous models parameters which is different because a lot of times people just retrain models from scratch and ultimately they did that because that's what worked and that is a discrete change in hyper parameters which which is still not a lot." 
}, { "end": 2080, "start": 2051, "text": " But dynamic hyper parameter optimization is the idea that at each trial you can kind of fine tune your model parameters and you would want to do that because you have maybe broader data you might have more data so supervised learning is easier to do and as you have more data and might be a little bit easier to run a model predictive controller and then you might be able to like increase the model horizon as your model gets better or run more samples in your model predictive controller if you are at a high level." }, { "end": 2090, "start": 2080, "text": " So you are at a harder state and need to find a more precise action to choose and kind of changing these things online is really something that's not exploited." }, { "end": 2109, "start": 2090, "text": " Mostly because it does take a lot of computation to try to integrate a whole nother research like it's a whole nother set of research code which is automatic parameter tuning into a model based or a library while running it online that's a lot of infrastructure that most academic labs don't really have." }, { "end": 2125, "start": 2109, "text": " It almost reminds me of like things like RL squared and like would we want to use RL to tune the hyper parameters of the model based RL in our loop is that is that kind of what we're getting out of her or how how might we do that." }, { "end": 2153, "start": 2125, "text": " Yeah, so I kind of see auto ML is kind of like RL it just uses tends to use simpler methods than what is deployed simpler is not necessarily the best word it uses different methods than deeper also uses the elite a classic example is like Bayesian optimization I've set up parameter tuning with Bayesian optimization which is another iterative algorithm that works pretty well and in the paper they also talk about population based methods which." }, { "end": 2182, "start": 2153, "text": " Really all go back to the rich history of RL and you could call it like RL on RL I just think that the in like the magnitude of results shown in this paper without using something like deep RL I don't think we need to do deep RL on R and deep RL but it really is and then it is when you're running a RL loop around another RL loop with the specification of your problem is is really hard to reason about it's like because RL is pretty vague and it's definition." }, { "end": 2192, "start": 2182, "text": " It's just a it's a world and an agent and then all the variables within there are fair game for it to optimize it's not the most specified." }, { "end": 2209, "start": 2192, "text": " Okay, but with this dynamic tuning are we are we optimizing a one step problem like is it is a kind of like a band it setting or are we trying to say you know this sequence of three sets of hyper parameters is getting us to the where we want to go and then and we're looking at a multi step framing does that make sense." }, { "end": 2227, "start": 2209, "text": " Open disclaimer I am not the expert on auto ml this group that we are working with was developing some new auto ml algorithm so I definitely defer to the paper if you want to be 100% certain in my answers but my understanding is kind of it it has a sense of history says or momentum and." 
}, { "end": 2241, "start": 2227, "text": " It kind of understand what understands what worked at the previous time step and incorporate some of that information in and then kind of builds into the future so does have a short memory but the optimization is run at every time stop think if there's a." }, { "end": 2249, "start": 2241, "text": " A new set of parameters to use cool okay and so on a high level like do you think the cost." }, { "end": 2275, "start": 2249, "text": " Of you know all this complexity with more hyper parameters in model based RL is just an unavoidable thing that we have to cost we have to pay to get the benefit of the better sample complexity with model based RL or or maybe is that the complexity in that all that extra hyper parameters is that maybe just an artifact of of where things stand today and and and one day we might find a simpler way that doesn't." }, { "end": 2281, "start": 2275, "text": " Require all that cost and complexity and large set of hyper parameters what do you think about that." }, { "end": 2293, "start": 2281, "text": " Yeah I still think we're kind of on the upward trend of system complexity with RL and machine learning models where we're also going to start seeing more hierarchical solutions that are both model free and model based and." }, { "end": 2310, "start": 2293, "text": " As the complexity increases the tools for handling complexity are going to develop so I think auto ml is going to be kind of slotted in automatically to new most research groups in any data driven area I think that's going to happen within." }, { "end": 2329, "start": 2310, "text": " I don't know if I won to 10 years depending on how cutting edge like group is and the resources just because it it's I'm not going to say it's a solved problem but it there is kind of free gains to be had by running more compute and we all know compute is getting more accessible but." }, { "end": 2351, "start": 2329, "text": " Over time the variables and importance will kind of be understood and then some of the other variables I think will be more static and then I model based or by having more subsystems is likely going to have more parameters but I don't think that that is going to be a huge limiting factor forever I think it will kind of stabilize." }, { "end": 2376, "start": 2351, "text": " I guess if we look at the brain as the prototype of a truly intelligent system then there's just got to be umpteen hyper parameters in there that well we never had to tune but they were tuned over you know millions of years of evolution so so we don't have to think about them now and so maybe we're kind of going through the same process but having to do it intentionally which could be kind of painful the first time around." }, { "end": 2391, "start": 2376, "text": " Yeah I like that analogy like I'm not going to you hopefully I'm not going to use the genetics of getting chased by a lion but there is definitely already learned and tuned parameters in my body to respond to those type of situations first there." }, { "end": 2404, "start": 2391, "text": " Okay so um so Nathan how do you see your path going forward you have like a certain goal of very specific goal or already have a certain way of deciding what you focus on next kind of within the the area that you stated." 
}, { "end": 2433, "start": 2404, "text": " Yeah I definitely have a lot more going on broadly in the robotics space right now then specifically a model based or all and some of that is due to who I'm working with and kind of the problems at hand and goes back to the conversation on control theory intersection with learning based methods and I think to understand autonomy at a big picture understanding the full picture of methods is very important so I'm enjoying learning some more more like decentralized control multi-dent systems but" }, { "end": 2460, "start": 2433, "text": " the kind of like next work on my what would say thesis path or my personal path is we're trying to reduce the computation of model predictive control model based or we're doing so by learning about learning more on imitation learning looking at some offline or literature to try to make it so we don't have to run NPC online if you can do all that information offline so you can put your predicted trajectories offline from log to data" }, { "end": 2489.96, "start": 2460, "text": " and then kind of do imitation learning of the model predictive controller into a feed forward policy that would make it a lot easier to run model based or on a real robot because you won't need a GPU online so that's kind of the next thing that I'm focused on but I tend to take a pretty broad approach to things and kind of wander around a bit and that might change as my I'm playing to finish my PhD probably around the end of this year and I might move somewhere else where there's more support for push" }, { "end": 2517.96, "start": 2489.96, "text": " on our L type of things and I would happily brought in the scope of that work cool yeah that that I'm the dichotomy between like running NPC and how expensive that is and versus just selling something like I don't know if this is a fair question but just it makes me wonder like are we doing NPC in our minds we actually doing NPC when we're playing sports and stuff or are we just kind of doing muscle memory I don't know enough to say but that that's an interesting question to me" }, { "end": 2547.92, "start": 2517.96, "text": " I've had a lot of conversations with this that work in areas about that relate to biology per se and leveraging the neuroscience analogy strongly is not something I like to do but take the statement weekly is they do think that they're mice kind of replay models in their brain so there's some preliminary evidence that if you have a mice in an maze trying to get to the output they like replay their visual neurons that correspond to traveling through the maze and" }, { "end": 2574.92, "start": 2547.92, "text": " I don't know this is something that I'm trying to figure out kind of my own time I like blog about robotics and RL a lot it's just like I don't know there's also the interesting process of writing and distilling the ideas which is something that I think academia is good for but I don't know a lot of our research kind of prioritizes just getting papers out the door so some of that idea distillation in terms of what we're actually doing might get missed." 
}, { "end": 2602.92, "start": 2574.92, "text": " So for the audience we will we'll link to Nathan's blog and the on the episode show notes on talk our calm I want to go actually go back to that little robot you had the ion of craft and we didn't we didn't say too much about it you said how small it was and how it had a very very interesting way that you produced the thrust but so how small is I on a craft and and do you have some affinity for these tiny robots." }, { "end": 2631.92, "start": 2602.92, "text": " I it's probably a love hate relationship I mean it's a better project that's going on through my whole page dean it's kind of grounded where how I think of problems and so it's this kind of nickel sized robot in area it's probably essentially weighs zero to the human hand it weighs in milligrams like if you put it off the shelf I am you on it that I'm you races the mass by like 50% and that's just a little silicon die." }, { "end": 2659.92, "start": 2631.92, "text": " It's like a few grains of sand type of thing and so this is very tiny it's made in a silicon nano fab process which is kind of what my advisor specializes in it's some of the stuff that I did an undergrad so this is kind of my unusual path into RL into math it's kind of like I worked on a lot of electrical engineering hardware I learned a lot about models I learned a lot about like for you and kind of the old school of genetic algorithms and kind of all that area." }, { "end": 2688.92, "start": 2659.92, "text": " The some of the older math and I I like that I'm trained in a bit different ways I think it produces some cool results in RL when you kind of have people that are from different backgrounds and that's why I kind of want to work with neuroscientists and stuff like that but back to smaller lots it's kind of the most interesting application space at hand it's like we we might be able to hand tune PID parameters for the robot but the limiting factor and the uncertainty of the environment is that we have to hook up like five to eight." }, { "end": 2717.92, "start": 2688.92, "text": " Five to eight tethers to the robot to supply power and to actually control the thrusters and there's a lot of analytical models that you can do in figuring out like what is the wire force it's a certain spring constant and also the wire mass affects the flight and for every flight the wires would be different so it's quickly becoming like either a rapid adaptation of model learning or you need to you're on the clock and you're trying to hand tune a PID parameter." }, { "end": 2723.92, "start": 2717.92, "text": " Set within like a minute or two because it probably won't work on a different robot so it's." }, { "end": 2739.92, "start": 2723.92, "text": " It's kind of been interesting and it's just kind of like a carrot in front of me that's always leading me on through the PhD and kind of discovering these interesting things about model based or along the way which I definitely have the benefit of having someone that's allowed me to kind of." }, { "end": 2748.92, "start": 2739.92, "text": " Chase the carrot wherever it takes me and not just require me to solve any one task at hand which has resulted in a whole bunch of different investigations and model based learning." }, { "end": 2755.92, "start": 2748.92, "text": " Yeah I love these tiny bots like do you do you see yourself going for even smaller robots or is that was that more a phase." 
}, { "end": 2767.92, "start": 2755.92, "text": " If you can solve the self assembly problem so the scale that they're at now is grad student assembly which is really hard and if you go up a factor of two or four the grad student assembly becomes really easy." }, { "end": 2775.92, "start": 2767.92, "text": " But if you go down any further you kind of need to get some automatic assembly mechanism which is pretty tricky but I think." }, { "end": 2796.92, "start": 2775.92, "text": " Like the idea of novel robots and small robots weeks is really good for the imagination and kind of understanding that we are still coming up with new robots and there are an infinite amount of tasks to solve with respect to those I mean it took me a bit to get here but ultimately if you look at my website my calls talk the second slide is just like look at all these cool robots." }, { "end": 2801.92, "start": 2796.92, "text": " There's a lot of Stanford there's my group there they're really all over and I think that's what the." }, { "end": 2815.92, "start": 2801.92, "text": " A future where you can kind of come up with an idea for how something can move and then if something can move it probably can solve some task and being able to just learn a controller for any real set of actuators and structures would be really exciting." }, { "end": 2817.92, "start": 2815.92, "text": " It's just infinite robots." }, { "end": 2825.92, "start": 2817.92, "text": " So besides your own work and what we talked about so far are there other things going on in RL that you find pretty interesting these days." }, { "end": 2836.92, "start": 2825.92, "text": " I think offline RL is going to be big and haven't studied this specifics there's people that I respect a lot that are putting a lot of chips on the table in that area but also just because." }, { "end": 2846.92, "start": 2836.92, "text": " It removes some of the safety concerns if you can give RL systems just the right data set and then I can give you an output you don't have the interaction with the." }, { "end": 2866.92, "start": 2846.92, "text": " With the real world that can be tricky so like I'm somewhat worried about RL systems for internet processes so like if you just start unrolling a general RL system for what's displayed on my phone it's really hard to model with the effects on me are going to be in harder to model with the effects on me in the context of my peers is and." }, { "end": 2895.92, "start": 2866.92, "text": " Offline RL kind of slows some of those feedback loops down and I don't think we've talked about feedback a lot in this talk but RL is inherently a feedback structure and control theory showed that feedback is incredibly powerful and like trying to understand what RL's notion of feedback is how that relates to feedback control which is some classical methods and then kind of like how feedback compounds intelligence and kind of creates these emergent properties I think is." }, { "end": 2914.92, "start": 2895.92, "text": " Something that is really powerful cool and then I mean for my for my little experience with control systems it seemed like it was also saying feedback can be super dangerous because it can set up these these oscillations I can make things super unstable yeah and there's also there's also fun." 
}, { "end": 2929.92, "start": 2914.92, "text": " Funny things like if you have two stable stable modes that you switch between and both are under feedback if you switch between two stable modes you can get an unstable mode so there's a lot of like little nuggets and control theory that might be unexpected." }, { "end": 2934.92, "start": 2929.92, "text": " That sounds yeah that's the kind of stuff I think that RL can learn from control for sure." }, { "end": 2963.92, "start": 2934.92, "text": " There's definitely some people that are starting to explore it I mean in looking at this model predictive control work there's work from Francesco Burrelli and V. J. Camar who are trying to do similar distillations and applying learning methods to optimal model predictive control and they're doing things like with optimal model predictive control you can learn a second controller that kind of acts as a safety critic and things like this and still try to set up some of the optimality constraints." }, { "end": 2979.92, "start": 2963.92, "text": " So I am falling behind in terms of understanding all the math in the optimal control but hopefully it can catch up and those things excite me a lot with keeping any notion of optimality it seems impossible but they seem to be making progress on it." }, { "end": 2988.92, "start": 2979.92, "text": " Nathan Lambert this has been really fascinating and a good time thanks thanks so much for sharing your time and your insight with the talk our audience and with me today really appreciate it." }, { "end": 2995.92, "start": 2988.92, "text": " Yeah I'm really happy to be here Robin." }, { "end": 3003.92, "start": 2995.92, "text": " Notes and links for this episode are at talkrl.com." }, { "end": 3008.92, "start": 3003.92, "text": " If you like this show I need your support you can help in a few ways." }, { "end": 3020.92, "start": 3008.92, "text": " One subscribe on your favorite podcast platform subscriptions make a big difference. Two follow us on Twitter and talkrl podcast we love retweets." }, { "end": 3035.92, "start": 3020.92, "text": " Three give us a five star rating on Apple podcasts if you don't think we deserve five stars let us know on Twitter what we could do better." } ]
Kai Arulkumaran
Kai Arulkumaran on AlphaStar and Evolutionary Computation, Domain Randomisation, Upside-Down Reinforcement Learning, Araya, NNAISENSE, and more!
https://media.transistor…6c7.mp3?src=site
This is the TalkRL Podcast. All reinforcement learning, all the time. Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan. Dr. Kai Arulkumaran is a researcher at Araya in Tokyo, Japan. Kai, thanks for joining us today. Thank you. Can you tell us a bit about your main area of focus? Right. So I don't really have a focus. I think I'm known for kind of branching out and doing lots of different things. But the sort of underlying thing that's interested me is to understand and replicate biological intelligence and apply this to real-world tasks. So my main topics are deep learning and reinforcement learning. If you place researchers who are interested in general AI on a sort of spectrum between first principles or good old-fashioned symbolic AI versus whole-brain replication, then I'm interested in cognitive principles and less at the level of neurons. That's sitting a little more on the neuroscience side than maybe some end-to-end deep learning people. But I'm also not tied to this sort of thing. So I've worked on practical applications from robotics to medical imaging and even more fundamental deep learning things, like architectures such as adaptive neural trees. Can you tell us a bit about Araya? Sure. So Araya was founded by Ryota Kanai about half a decade ago to work at the intersection of AI and neuroscience. So the engineering department provides AI solutions, whereas the research department just goes ahead and does fundamental research. So Ryota and a lot of the other researchers here specialize in consciousness, which is fascinating. So we all believe we have it, but we don't really know what it is, or what it's useful for, if it's useful at all, or if it's just a side effect of other things that are happening in the brain. So one can argue that being self-aware, having a model of yourself, is useful for interacting with the world. But why do we actually feel things? And this idea of qualia: is my feeling of redness the same as your feeling of redness? So theories of consciousness are holistic theories of brain function, and some examples include global workspace theory. So I see these as inspiration for AI, but also the better AI models that have been developed in recent years can drive forward neuroscience research. And so we set up this synergy hoping that we can use the latest in AI to drive neuroscience research and neuroscience research to drive AI. And so yeah, we're looking for experienced AI researchers who want to look at this intersection. So how did you get into reinforcement learning? So when I finished my undergrad degree, which was computer science at Cambridge University, I became a web developer just because I enjoyed it. But half a year later, when the startup I joined failed to monetize and everyone got laid off, I thought about what I really wanted to do and remembered that as a kid, I loved robots. So I applied to the bioengineering master's at Imperial College with the hope of learning how to make androids. When I was there, I remembered I was terrible at maths, but I could program. So I joined a computer vision lab for my master's project. There, the supervisor, Anil Bharath, worked on biologically inspired computer vision. And this was back in 2014 and he was ready to transition into deep learning. So he asked me to join and help with it. And given that DeepMind was showing demos of the DQN back then, it felt like it was a good time to try and do end-to-end control for robots directly from pixels.
So during the MSc, I actually did my first course in machine learning, but I switched off after the lecturer started talking about infinite dimensions and managed to just about scrape a C at the end of it. So before my PhD, I did Andrew Ng's Coursera course on machine learning, which is fantastic. And that sort of pragmatic view really helped me understand what was going on. And then once my PhD started, actually I did Geoff Hinton's Coursera course on deep learning. And then a few months in, I actually realized that what I wanted to do was a field that existed and it was called reinforcement learning, and started learning from there using Sutton and Barto's book. So yeah, even now I feel like heavily padded maths goes over my head, but I try and learn when I can. So I started studying, and then later, when I handed in my first-year plan, I told my supervisor I had bad news, and he said, don't tell me Google have announced that they're getting into applying deep reinforcement learning to robotics. And I was stunned, because the day before David Silver had announced that Google was interested in applying their techniques to robotics. So one of my first-year examiners, Marc Deisenroth, said that you'd have to be really stubborn to try and compete with DeepMind. So yeah, we still laugh about it, because I am really stubborn. But yeah, I'd really like to express my gratitude to my supervisor, Marc, and others at Imperial, who believed in someone who basically turned up wanting to do AGI without even knowing what optimal control was. So can you tell us a bit more about your early days with RL? Yeah, so back then there was far less research software available, especially very little for reinforcement learning. So I had to start from there, I felt. So first I investigated the frameworks out there: there was Theano, Caffe, Torch 7. They were somewhat painful to install and manage, especially Caffe, and CUDA was still a bit painful to deal with. So I used my web developer experience to leverage Docker, made CUDA Docker images for all of the main frameworks, and that turned out to be really popular and frameworks started adopting this, and eventually Nvidia told me, oh, we're going to make our own CUDA Docker support. So now things are in a much better position. So I chose Torch 7 because I liked that best, and then I realized I needed a way to manage experiments, and there was Sacred in Python but nothing language-agnostic, and Torch 7 was Lua. So I built my own called FGLab using Node.js, which uses this server-client architecture to dispatch jobs and collect data in MongoDB. So again, the scene is a lot better now, but I guess language-agnostic frameworks are still a bit rare. So then, going on to RL, I realized we lacked standard benchmark code. So inspired by this previous research project called RL-Glue, which was built in Java, I built this library called rlenvs, which is a set of classic reinforcement learning environments in Lua, also including Atari, and someone contributed a Minecraft wrapper. And then months later, OpenAI released OpenAI Gym in Python. So I had a good idea but chose the wrong language. So yeah, then we lacked RL baselines to do research with, so I made this Atari codebase which contained all of DeepMind's DQN variants up to that point. So those were actually most of the way to Rainbow; maybe I should have done that.
And yeah, actually later on in my PhD I made a clean reproduction of DeepMind's Rainbow agent, which took months of work, and though ironically I still haven't used it to do my own novel research, it's great to see it acknowledged in a lot of research papers from other labs such as Berkeley. So one question I often get is how did I manage to get so many research internships, and it actually came from doing all of this work. So Soumith Chintala and Alykhan Tejani at FAIR and Twitter liked my contributions to the Torch 7 ecosystem and they recommended me for internships, and similarly Tom Schaul at DeepMind liked my Atari library, and Adlet Saka at Microsoft Research recommended me for being able to work on deep RL in research engineering. So actually all of these opportunities came through my open source work, which is really awesome and totally unexpected. So yeah, I got to work on super-resolution with Wenzhe Shi, model-based RL with Katja Hofmann with a preprint out on arXiv, multi-agent RL with Suming Ling, and memory-augmented RNNs with David Reichert. So a lot of fun to branch out with those. And yeah, just because I had recommendations doesn't mean I didn't have to go through the normal recruitment process, so don't worry, I still had to sit through all the interviews. So yeah, well, this was sort of three years in. I had a lot of opportunities, actually, looking back. I spent almost a year of my PhD just re-implementing others' work and reading and studying before I felt confident enough to work on my own novel research, especially as nobody in my lab knew about reinforcement learning, and the only deep learning we had was on restricted Boltzmann machines. So I was largely self-taught. So yeah, in 2016, at that point the main recipe for research was: take something from classic RL and make it deep. So I was interested in hierarchical reinforcement learning and made the deep option Q-network with Nat Dilokthanakul from Murray Shanahan's lab. And this was just about the right timing. So simultaneously there was work on the hierarchical DQN and the hierarchical deep RL network (H-DRLN), but we only managed to run ours on this simple Catch domain. But this turned out to lead to some fruitful collaborations with Murray's lab. So this included helping Marta Garnelo on her work combining deep and symbolic approaches to RL to overcome poor generalization of standard deep reinforcement learning methods. So one advantage that I had of being largely self-taught was that I really had to amass a lot of knowledge myself instead of relying on others. So when my supervisor suggested writing a survey paper, which I did with the help of Marc and Miles Brundage, it took about half a year full time. It was really a lot of work. But it's something that I'm really proud of, and happily it was really well received by the community. Awesome, and it also received a good count of citations, I noticed, and we'll have a link to it on the episode page. Let's talk about one of your first-author papers, AlphaStar: An Evolutionary Computation Perspective, and trying to bridge the deep RL and evolutionary computation communities. So what's the basic idea here with this paper? Right, so in January 2019 DeepMind revealed AlphaStar to the world, and this was the first AI system to, let's say, fairly beat a professional player at StarCraft 2. And DeepMind has experts in reinforcement learning and game theory, so they used insights from those fields to develop AlphaStar.
But if you think about populations of agents interacting and learning, well, that's evolution. And I felt there was enough information from the blog post to actually try and form a link between AlphaStar and the field of evolutionary computation, which also has decades of research. So I'm not too familiar with the evolutionary computation world. I did one project using genetic algorithms for civil engineering, which worked out, but I don't know much about the rest. I gather it includes things like genetic algorithms, but can you tell us more about what EC entails? Sure, so EC is a family of global optimization algorithms inspired by biology. So essentially you have a population of solutions which slowly change over time, and they're subject to natural or artificial selection, as it were. So the fitness just increases gradually over time, and it can include approaches such as ant colony optimization and particle swarm optimization. So evolutionary algorithms use mechanisms that are inspired by biological evolution, which includes selection, so think of survival of the fittest; crossover, which is akin to sexual reproduction; and mutation. So for example, vanilla particle swarm optimization doesn't have crossover or mutation. So the most famous, the basic genetic algorithm, is: given your population of candidate solutions, you evaluate their fitness, select some of the best, use these to create the next population, and then just repeat. Some well-known classes include evolution strategies, which sort of perform approximate gradient descent; genetic programming, where you evolve programs; and neuroevolution, where you can evolve neural network architectures. So the pro and the con is that they're black-box and they don't need gradients, so they're good for exploring noisy fitness landscapes, but they don't normally exploit local knowledge like gradients. So they're not so competitive for supervised learning, but they can be useful for things like reinforcement learning, where evaluation is noisy, or say nested optimization, where it might be quite expensive to calculate gradients. Okay, so in the abstract of this paper it says we highlight some of its most interesting aspects: the use of Lamarckian evolution, competitive co-evolution and quality diversity. Can you help us understand these terms and how they map to the AlphaStar concepts? Sure, so firstly, population-based training was used for the AlphaStar league, and we can categorize that as a steady-state Lamarckian evolutionary algorithm. So Lamarckian evolution in biology was this idea that offspring can inherit characteristics that were obtained during the lifetime of the parents, so for example if your parents studied a lot then somehow you'd be born smart. So this doesn't happen, but it can be used algorithmically of course, so in population-based training the solutions, which are these agents, are trained using local optimization, so backpropagation on this reinforcement learning objective, but at the outer level the fitter networks are copied and the hyperparameters are mutated, and so these might find better solutions. And as opposed to generational algorithms, where only candidate solutions from one generation exist at once, steady-state algorithms have this continuously evolving population, which is much better suited for asynchronous and distributed computation.
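To make the generic recipe described above concrete, here is a minimal sketch of the basic genetic algorithm in Python with NumPy. The toy fitness function and all hyperparameters are illustrative placeholders and have nothing to do with AlphaStar or population-based training specifically; it just shows the evaluate, select, crossover, mutate, repeat loop.

import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Toy fitness: higher is better, with a peak at the origin.
    return -np.sum(x ** 2)

def basic_ga(pop_size=50, dim=10, generations=100, elite_frac=0.2, mutation_std=0.1):
    pop = rng.normal(size=(pop_size, dim))          # initial population of candidate solutions
    n_elite = max(2, int(elite_frac * pop_size))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[-n_elite:]]  # selection: keep the fittest
        children = []
        for _ in range(pop_size - n_elite):
            a, b = elite[rng.integers(n_elite, size=2)]
            mask = rng.random(dim) < 0.5            # crossover: mix two parents
            child = np.where(mask, a, b)
            child += rng.normal(scale=mutation_std, size=dim)  # mutation
            children.append(child)
        pop = np.vstack([elite, children])          # next generation, then repeat
    return pop[np.argmax([fitness(ind) for ind in pop])]

print(fitness(basic_ga()))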
Interestingly, there's a slightly different thing called Baldwinian evolution, where instead of the final parameters of the solution being copied, the initial parameters are reproduced, and this is actually a meta-learning algorithm, so it's the solutions that optimize well that actually survive. Baldwinian, is that a bit more like biological evolution? So I'm not an expert on biology, but as far as I know, yes, this is part of the modern theories of biological evolution. Cool, okay, and then what about the co-evolution? Right, so with co-evolution the fitness of agents is evaluated against other agents, so this is as opposed to a static fitness landscape. So you can actually see this as a superset of self-play: rather than evaluating against yourself, or even past selves, you evaluate against a diversity of solutions, and therefore you're less likely to overfit or get stuck in cycles, and like self-play this induces a curriculum, so things just get better over time. One of the interesting things with AlphaStar is that while in evolutionary computation these interactions, these pairings, are typically uniformly random, AlphaStar actually pairs agents with similar fitness, so this is something that's less common in the EC literature. And then the third part, which we talked about, is quality diversity. So there is a single main objective, which is optimizing the Elo rating, but in a complex game there's no single best strategy, so like rock-paper-scissors it's non-transitive, and from the game theory perspective DeepMind says that they want to find this complementary set of least exploitable strategies. But we can also look at this somewhat as a quality diversity algorithm. So in a quality diversity algorithm we define a set of niches, which is typically based on some behavior descriptor, and we keep the best solutions in each niche even if they're not globally optimal, and this allows us to basically collect a diverse set of solutions where some of them might be better in different settings. So for example, Antoine Cully, in his 2015 Nature paper, used this to find a set of locomotion gaits for a robot, and if it was damaged then it could quickly find one of these diverse gaits that would allow it to still walk. So in the case of AlphaStar, the sort of niches could be building more of a unit type, or beating another unit type, or even a mix of these, and so it's quite a complex quality diversity algorithm with niches that adapt over time, which is a bit like Uber AI's POET algorithm. So when you first encountered it, were you surprised by how much detail they put into the design of the AlphaStar league? There's just so many things going on in there. I wonder, would the evolutionary computation community kind of recognize some of this stuff, or were they really pushing the envelope in the AlphaStar league design?
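As a rough illustration of the quality-diversity idea just described, here is a minimal MAP-Elites-style sketch in Python: a grid of niches over a behavior descriptor, keeping the best solution per niche. The fitness function, descriptor, and grid resolution are toy stand-ins, not AlphaStar's actual niches.

import numpy as np

rng = np.random.default_rng(0)

def evaluate(x):
    quality = -np.sum((x - 1.0) ** 2)                # how good the solution is
    descriptor = (np.tanh(x[0]), np.tanh(x[1]))      # behavior descriptor in [-1, 1]^2
    return quality, descriptor

def niche_of(descriptor, bins=10):
    # Discretize the behavior space into a grid of niches.
    idx = ((np.array(descriptor) + 1) / 2 * bins).astype(int)
    return tuple(np.clip(idx, 0, bins - 1))

archive = {}                                         # niche -> (quality, solution)
for _ in range(5000):
    if archive and rng.random() < 0.9:
        parent = archive[list(archive)[rng.integers(len(archive))]][1]
        x = parent + rng.normal(scale=0.2, size=5)   # mutate an existing elite
    else:
        x = rng.normal(size=5)                       # or sample a fresh solution
    q, d = evaluate(x)
    key = niche_of(d)
    if key not in archive or q > archive[key][0]:    # keep the best solution per niche
        archive[key] = (q, x)

print(len(archive), "niches filled")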
Yeah, so I think they really did push the envelope with a lot of the ideas. I'm still much more of a deep learning / reinforcement learning researcher, so I can't say everything that's going on in the EC community, but from my perspective I think there were some novel things. And yeah, with writing this paper the goal was really to try and bridge the communities, so this was presented at GECCO, and so hopefully the EC community could really see what was pushing the envelope of AI in general, but also, on the other side, I'm hoping that people who are interested in multi-agent systems and diversity will be interested in checking out the literature in evolutionary computation, find out what's happening there, and then try and connect it back to their own interests. Cool, okay, let's move on to your next paper, where you're a co-author: Analysing Deep Reinforcement Learning Agents Trained with Domain Randomisation, and that was Dai et al., 2019. So can you tell us about the main idea in this paper? I was originally motivated by trying to apply deep reinforcement learning to robotics, and you know, we thought that we might need fundamental advances to do so, but domain randomization, which is essentially data augmentation applied to reinforcement learning, allowed quite a substantial amount of sim-to-real transfer, maybe only requiring a little bit of fine-tuning, and it's really simple, so what's going on here? So the idea with this paper was to try and open up the black box a little bit and find out what's going on with DR as opposed to not training with DR. So the setup is we train two different robots, the Fetch mobile manipulator and the Kinova Jaco arm, with or without proprioceptive sensors, with or without DR, and we train them on this simple vision-based target reaching task which is quite common, and then we test their out-of-distribution generalization and also use a range of interpretability methods to characterize what strategies they learned. So can we talk about some of the findings, and especially maybe what was or wasn't surprising to you? Right, so as a foreword I'll say that the results we found are specific to our setup, so we expect that these might generalize a little bit, but the sort of large takeaway is that if anyone's interested in looking at and understanding their agents, then you should really run a wide range of interpretability methods in order to help your understanding of them, especially because a lot of these can be subjective, and if we're doing proper science then our prior assumptions about what's going on may be wrong. And one of the most important things was having a sort of control element to make relative statements, so for example analyzing agents trained with DR versus no DR is much better than trying to make absolute statements about agents trained with DR. The DR that we did is we varied colors and textures, so this is a simple form of DR, and this provides some robustness to out-of-distribution inputs such as distractor objects, but it can fail with global changes like changes in illumination or translating the camera unless you explicitly train for this, so the question is: do we have to account for all of these things in advance, or do we need smarter solutions? So what's going on with the agents that we trained? So for one of the main techniques, saliency maps, we show that agents trained without DR respond to distractors, while those that were trained with DR generally don't.
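As a rough sketch of what the visual domain randomization described here looks like in code, the snippet below wraps a simulator and re-samples nuisance visual factors at the start of every episode. The env interface and the set_visuals hook are hypothetical stand-ins for whatever the simulator actually exposes (e.g. MuJoCo texture and material settings); this is not the actual setup from Dai et al.

import numpy as np

class VisualDomainRandomization:
    """Wraps a simulator and re-samples visual nuisance factors every episode.

    Assumes the wrapped env exposes reset()/step(action) returning image
    observations, plus a set_visuals(...) hook -- both are hypothetical."""

    def __init__(self, env, seed=0):
        self.env = env
        self.rng = np.random.default_rng(seed)

    def _randomize(self):
        self.env.set_visuals(
            arm_color=self.rng.uniform(0, 1, size=3),     # RGB of the arm
            table_texture=self.rng.integers(10),          # pick one of N textures
            target_color=self.rng.uniform(0, 1, size=3),  # RGB of the reaching target
        )

    def reset(self):
        self._randomize()              # new visual appearance each episode
        return self.env.reset()

    def step(self, action):
        return self.env.step(action)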
Interestingly, the Fetch agent actually uses vision to help localize the gripper even when it's provided with perfect proprioceptive information, so it seems like it's an easy enough solution to learn visual localization that the agent actually does this. An interesting point is that the Jaco agent without proprioception barely has any visible saliency around the arm, so actually you can't really see it unless you're expecting it, and this is something that really shows that saliency maps can be subjective, because if we didn't know that the agent solved the problem and had no access to proprioception, then we wouldn't have realized that it must be looking at the arm in order to solve the problem. Another common technique we use is activation maximization, and we see that without domain randomization the agents mainly learn red or blue color filters in the second convolutional layer, whereas the agents trained with DR actually learn these oriented vertical and horizontal red-blue grids, with actually a little bit of a ball motif, so you can really see the strategy that the agent is learning through these filters. And quantitatively, with domain randomization the layer-two filters have significantly higher overall norms, and the layer-one filters have lower power spectral entropy, which is a measure introduced by my supervisor that basically uses the Fourier domain to characterize the structure. So what do these tell you, when you find these things about the grids and what's happening at different layers and the norms? What's your takeaway from finding those? So this gives us an idea of how the agent is actually solving the problem. So for example, with the red or blue color filters, and doing that in tandem with, say, the distractor test, we can see that if we use distractors that are a different shape then the agent might still respond, because it's just the same color, right, whereas if the distractor is a different color then it might still be fine; whereas the agents that are trained with domain randomization, which are learning these filters that we can see really will localize and do kind of look for color and shape, this is how they're so much more robust to distractors, for example. Cool. So looking more broadly at the representations learned, we use this measure called entanglement, which is basically how much the representations between classes overlap, as a measure of invariance. So in RL, instead of classes we actually use the different out-of-distribution test scenarios, so this is distractors or illumination, for example, and we see that entanglement always increases with depth, from the convolutional layers to the fully connected layers, but for the DR agents the entanglement of each layer is always higher than the equivalent layer's entanglement in the non-DR agents. So this is something we might expect, but it's nice to have this quantitative confirmation that the agents are basically learning to be invariant to nuisance factors.
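For readers who haven't used saliency maps before, here is a minimal gradient-based saliency sketch in PyTorch. The tiny CNN policy is a placeholder rather than the architecture from the paper, and practical RL interpretability work often uses more involved (e.g. perturbation-based) saliency methods; this only shows the basic idea of attributing the chosen action back to input pixels.

import torch
import torch.nn as nn

# Placeholder policy network; the real agents in the paper are larger and recurrent.
policy = nn.Sequential(
    nn.Conv2d(3, 16, 8, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(4),       # 4 = number of actions, chosen arbitrarily
)

obs = torch.rand(1, 3, 64, 64, requires_grad=True)   # dummy 64x64 RGB observation
logits = policy(obs)
logits[0, logits.argmax()].backward()                # gradient of the chosen action's logit
saliency = obs.grad.abs().max(dim=1)[0]              # per-pixel importance map, shape (1, 64, 64)
print(saliency.shape)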
Now, moving towards the top of the network, we ablate the recurrent state by setting it constant, and technically this reaching problem is an MDP so it's not needed, and this ablation has a small effect on agents trained without DR but it really has a large effect on agents trained with it, and so we can't tell exactly what's going on, but we assume that the agent has learned some sort of state estimation implicitly in order to help it solve the problem. And lastly, we use re-initialization robustness as a measure of the actual network capacity that's used during training, and generally the convolutional layers take much longer to stabilize than the recurrent and fully connected layers, except for the Jaco agent trained with proprioception, where the fully connected layer is important and takes much longer to learn. One of the most surprising findings is that the Jaco agent trained with domain randomization but without proprioception learned to solve the task almost perfectly when the visuals were randomized, but only 65% of the time with the standard simulation visuals, which indicates some sort of overfitting, and it generalizes fine for the agent that was trained with proprioception, so it shows that it's not just the DR versus no-DR condition, but actually the input modalities and even the morphologies of the robots can influence the strategies that are learned. So yeah, in summary, this was I believe the largest set of interpretability techniques applied to deep reinforcement learning agents, and there's lots of results, so check the paper for a more thorough breakdown and references to all of these different methods. Cool, and a link to all these papers will be on the episode page. I wonder, do you think that domain randomization will become even more important going forward? I mean, is this a long-term strategy? Because I guess I've wondered about how it scales the more factors of randomization you add, and do we need to show it every single combination of everything that can vary? It seems as the number of factors grows, the number of combinations grows very quickly, so I wonder, what do you think about that? Right, so there's still a lot of interest, growing interest even, in using procedural content generation to apply domain randomization in different ways. So even though we looked very specifically at a small set of visual perturbations and showed that under this set there's some generalization, it's still limited, whereas you can randomize the dynamics and you can do much more clever things as well, and it seems like we do need some level of domain randomization or procedural content generation to really gather enough data. But it's also clear that if we want to solve problems, at least in this period of history, without having to rely on an enormous amount of compute or giant models, then we will also need to bake some more priors into our models and the training. Cool, okay, let's move to your next co-author paper, that's Training Agents using Upside-Down Reinforcement Learning, that's Srivastava et al., 2019. So you mentioned you were an intern at NNAISENSE and you worked on upside-down reinforcement learning there, and you're planning to do more on that topic. Can you tell us a bit about NNAISENSE?
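As a rough sketch of the re-initialization robustness probe mentioned above, the helper below resets one layer back to its weights at initialization and measures the drop in evaluation return; a large drop suggests that layer's learned weights matter. The evaluate_return callable, the checkpoint format, and the layer naming are hypothetical stand-ins, assumed for illustration rather than taken from the paper's code.

import copy
import torch

def reinit_robustness(agent, initial_state_dict, layer_prefix, evaluate_return):
    """Reset one layer's parameters to their values at initialization and
    measure the drop in return. evaluate_return(agent) is assumed to roll out
    the agent in the environment and return its average episode return."""
    probe = copy.deepcopy(agent)
    with torch.no_grad():
        for name, param in probe.named_parameters():
            if name.startswith(layer_prefix):
                param.copy_(initial_state_dict[name])   # back to the initial weights
    return evaluate_return(agent) - evaluate_return(probe)

# Hypothetical usage:
# drop = reinit_robustness(trained_agent, init_weights, "conv1", evaluate_return)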
So NNAISENSE was formed in 2014 out of IDSIA, which is Jürgen Schmidhuber's lab that has been at the forefront of AI research since the early 90s, so it has many top AI researchers working on both fundamental research and applications in industry, and the main focus is industrial automation, so using AI to help manufacturing and production. So I remember seeing this, I believe, at NeurIPS 2019 at the Deep RL workshop, but I don't think I understood it at the time, and maybe I'm just trying to understand it now, so can you help us understand what is going on with the paper and what is the main idea? What is upside-down RL? So what is the upside-down part? Well, instead of using the return as part of the objective, which you do in policy search or when you learn value functions, you use it directly as an input, and it turns out this turns reinforcement learning into a supervised learning problem, and you just maximize the likelihood of a behavior function on past experiences, which is conditioned on observed returns and the time horizon in which they were achieved. So for example, if a return of 10 was achieved in five time steps given a state and action, then you just train to maximize the probability of taking that action in that state in order to achieve that. So it's a bit like behavioral cloning, but it generalizes beyond rewards achieved by a demonstrator and goes beyond one-step deviations in the loss. So the pros of upside-down reinforcement learning are that it's completely supervised, we don't have any value functions and there's no bootstrapping involved; because it's really just supervised learning, it can benefit from scaling the architecture, and we have some unpublished results on that. You can use data augmentation quite easily, there's nothing special you have to do, and you can bring in all sorts of other tricks that have been used for supervised learning, and like imitation learning you can quickly mimic an expert if you have expert data, instead of having to take conservative policy steps like you might do with other methods. We can train it without a discount factor, so you just use the returns within a time horizon, so it's unbiased in that sense. One of the nice properties is that it has the flexibility to not just achieve high returns, but medium returns or low returns, based on what you condition on. So playing AlphaGo wouldn't be any fun, but if you can condition it on how well the policy should perform, then actually you could come up with an agent that plays at a range of difficulties. And finally, it still uses the Markov assumption, so it's not just black-box optimization. The sort of main con is that it's off the beaten track a bit, so we're still working on improving both the theory and the practice. It's hard to jump straight to state-of-the-art results with a different paradigm, and one of the things that we do have to deal with is that you'll notice it's missing an explicit reward maximization phase, so the training actually involves something like expectation-maximization: we alternate training with gathering improved data through exploration. And so is the behavior function here kind of like a parameterized policy? Can you tell us more about the behavior function and how it works?
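To make the supervised objective described above concrete, here is a minimal sketch of a behavior function in PyTorch: it takes the state plus a command (desired return, desired horizon) and is trained to maximize the likelihood of the action that actually achieved that return in replayed experience. Network sizes and the dummy batch are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class BehaviorFunction(nn.Module):
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 2, hidden), nn.ReLU(),   # +2 for (desired return, horizon)
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state, desired_return, horizon):
        command = torch.stack([desired_return, horizon], dim=-1)
        return self.net(torch.cat([state, command], dim=-1))  # action logits

bf = BehaviorFunction(state_dim=4, n_actions=2)
opt = torch.optim.Adam(bf.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One supervised update on a dummy batch of replayed experience, e.g.
# "a return of 10 was achieved over 5 steps from this state by this action".
states = torch.rand(32, 4)
returns = torch.full((32,), 10.0)
horizons = torch.full((32,), 5.0)
actions = torch.randint(0, 2, (32,))

loss = loss_fn(bf(states, returns, horizons), actions)  # maximize likelihood of taken actions
opt.zero_grad(); loss.backward(); opt.step()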
Yeah, that is exactly it, so it's a return-conditioned and time-conditioned policy, and one could argue that choosing the appropriate return or time horizon is difficult, and it is in the general case, but it's not so much the case in, say, constrained robotics environments. And yeah, so actually we have unpublished results, but it works well as an offline RL algorithm, where this is very relevant. So this is not in the empirical paper that Rupesh led, but Jürgen had a conceptual paper on upside-down RL which contains some really nifty ideas on how to extend it. So firstly, the return and the time are just very general goals, so it's really quite trivial to extend upside-down RL to goal-conditioned policies, so I think interesting research actually lies elsewhere. So to be clear, the commands in the paper include the desired return and the desired time horizon, but they could also include traditional goal specifications, but also more abstract commands, so if you're interested in where it could go then you can check out this conceptual paper for more details. So I guess I was a bit surprised in the paper that upside-down RL outperformed A2C in some of these environments, which I thought was kind of cool. Can you say anything about why it might be able to do that, even though it's doing something that seems simpler? Right, so I think the key difference here against more traditional reinforcement learning is the presence of value functions and how you actually learn these value functions. So value functions compute the expectation over the future returns, which is a potentially powerful, low-variance signal to learn from. So if you learn a good value function then you can learn quickly, but vice versa, if it's difficult then you might get stuck. So we know that one-step returns, for example in DQN or DDPG, are low variance but biased, whilst n-step or full Monte Carlo returns, like A2C or PPO, are high variance but less biased or even unbiased. On the other hand, upside-down RL doesn't use a value function, so it has different temporal credit assignment properties. So you can see that it works very well as is, even when the rewards are delayed, so these are the sparse delayed reward experiments in the paper. This is actually technically a POMDP, as the reward function is delayed, but because the transition dynamics are still fully observed we can solve this with a feed-forward net. If we want to extend upside-down RL to full POMDPs, then it's straightforward to do so by just adding an LSTM, for example. Okay, and could you tell us more about how exploration works in this setting? Like, are we trying to explore command space, or how does exploration work with upside-down RL? So in this paper the simple strategy is that you learn a stochastic policy, and so exploration is done through this, and at the beginning of the episode we basically find some commands we want to use, and this is based on the highest-return episodes from the replay buffer that's used to train the agent. So this is very simple and it seems to work reasonably well, but it could obviously be improved, and optimizing the command proposals to improve exploration is certainly an interesting direction for future work. Okay. And then, looking forward for you, what do the next few years look like? Are you going to work on more RL research or RL applications? So there's lots of things for me to do here.
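Here is a sketch of the exploratory command selection just described: pick the desired horizon and return from the highest-return episodes in the replay buffer, aiming slightly beyond what has been achieved so far. The exact sampling distribution and the (return, length) replay format are simplifications for illustration.

import numpy as np

def sample_command(replay_episodes, n_best=25, rng=np.random.default_rng(0)):
    """replay_episodes: list of (episode_return, episode_length) tuples for stored episodes."""
    best = sorted(replay_episodes, key=lambda ep: ep[0])[-n_best:]   # highest-return episodes
    returns = np.array([ep[0] for ep in best], dtype=float)
    lengths = np.array([ep[1] for ep in best], dtype=float)
    desired_horizon = lengths.mean()
    # Aim a bit beyond the best returns seen so far to drive improvement.
    desired_return = rng.uniform(returns.mean(), returns.mean() + returns.std() + 1e-6)
    return desired_return, desired_horizon

print(sample_command([(float(r), 200) for r in range(100)]))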
I'm going to continue pursuing fundamental RL research, such as working on upside-down RL, but also working at the intersection of neuroscience and deep learning, so implementing models of consciousness, for example, and I may or may not get involved in more applied research here at Araya. Another thing is that I'm in Tokyo to bolster the research community here, so there's lots of universities and startups, and there's a strong crowd working on machine learning theory, but relatively little research going into the more empirical side of deep learning, and Japan is known for robotics, neuroscience and artificial life, so I'm hoping I can interact more with those communities. Do you want to share an opinion on what you think is missing in RL today and what we can do about it? So having spent most of my PhD with access to only one GPU, I'm keenly aware of the sample inefficiency of deep RL, so I used to run an experiment on Atari for about a week and a half, check the results, tweak some hyperparameters, and repeat. So yeah, I think we definitely still need to work on sample efficiency. So non-parametric methods like k-nearest neighbours or Gaussian processes can learn very quickly, but they are trickier to scale than deep neural networks; but we can combine them and get the best of both worlds, so you might call these semi-parametric models. So I'm a big fan of DeepMind's neural episodic control, and actually had two papers extending it, one to use online clustering and the other maximum-entropy policies, which is work done with Andrea Agostinelli, Marta Sarrico and Pierre Richemond at Imperial College, and more recently episodic memory has cropped up in some of DeepMind's state-of-the-art agents like Never Give Up and Agent57, so I think that's an interesting avenue to investigate. Any other strong opinions on where you think we'll find progress in RL going forward? I've always believed that we need general-purpose representations, maybe even some sort of common sense, and I always thought that multimodal representations, which are exemplified really recently by OpenAI's CLIP model, are going to be vital to understanding the world in a human-like way. And relatedly, I also think it's important to work on embodied agents, and there's lots out there at the moment, but one of the more interesting to me is the Animal-AI testbed, which is based on animal cognition experiments. I should say that the intersection of neuroscience and AI will lead to developments, though it almost feels like AI is feeding a bit more into neuroscience than the other way around at the moment, but still, there's plenty of opportunities. I'm interested to see how Vicarious, who work on probabilistic graphical models and especially HMMs, and who are heavily inspired by neuroscience, actually manage to progress, but also maybe Rich Sutton's bitter lesson is correct and all we need is scale. Do you find anything else in the reinforcement learning world interesting lately, Kai? Yeah, so I'm encouraged to see more interest in offline RL, which is one way to improve sample efficiency in terms of environment interactions, and it's of practical value, so you've got benchmarks, algorithms, and a better understanding of the problem. A bit longer term, but there's also been some nice progress in model-based RL, which people generally think can result in better sample efficiency, more flexibility, and maybe even better generalization.
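As a toy illustration of the non-parametric idea behind episodic control mentioned above: store embeddings of visited states together with the returns observed from them, and estimate a value by averaging the returns of the k nearest stored neighbours. Real systems such as neural episodic control use learned embeddings and differentiable reads; this sketch only shows the kNN core, with made-up data.

import numpy as np

class EpisodicMemory:
    def __init__(self, k=5):
        self.k = k
        self.keys = []      # state embeddings
        self.values = []    # returns observed from those states

    def write(self, embedding, ret):
        self.keys.append(np.asarray(embedding, dtype=float))
        self.values.append(float(ret))

    def estimate(self, embedding):
        keys = np.stack(self.keys)
        dists = np.linalg.norm(keys - np.asarray(embedding, dtype=float), axis=1)
        nearest = np.argsort(dists)[: self.k]            # k nearest stored states
        return float(np.mean(np.array(self.values)[nearest]))

memory = EpisodicMemory(k=3)
for i in range(10):
    memory.write(np.random.rand(8), ret=i)
print(memory.estimate(np.random.rand(8)))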
Uber AI finally had their "First return, then explore" paper published in Nature, so that's a cool idea, and I look forward to more work from Ken Stanley, Jeff Clune and others who work on evolutionary computation but are also aware of the broader landscape of machine learning. And finally, there was a recent paper from DeepMind on tackling the temporal credit assignment problem, called Synthetic Returns for Long-Term Credit Assignment, which uses the predictability of rewards from past states to determine influence and assign synthetic rewards, and this is a really important problem. It's exciting, like RUDDER was back in the day, and I think making progress on this could really improve the state of RL. So Dr. Kai Arulkumaran, this has been great. You've given us lots of food for thought here, so thanks for sharing your time and insight with the TalkRL community. Thanks. Notes and links for this episode are at talkrl.com. If you like this show, I need your support. You can help in a few ways. One, subscribe on your favorite podcast platform; subscriptions make a big difference. Two, follow us on Twitter, at talkrl podcast; we love retweets. Three, give us a five-star rating on Apple Podcasts. If you don't think we deserve five stars, let us know on Twitter what we could do better.
[ { "end": 12.68, "start": 0, "text": " This is Taka Rail Podcast. All reinforcement learning, all the time." }, { "end": 15.8, "start": 12.68, "text": " Interviews at Brilliant Bokes across the world of RL." }, { "end": 19.12, "start": 15.8, "text": " I'm your host, Robin Chohan." }, { "end": 24.400000000000002, "start": 19.12, "text": " Dr. Kai Arul Kamaran is a researcher at Araya in Tokyo, Japan." }, { "end": 26.32, "start": 24.400000000000002, "text": " Kai, thanks for joining us today." }, { "end": 29.84, "start": 26.32, "text": " Thank you. Can you tell us a bit about your main area of focus?" }, { "end": 33.44, "start": 29.84, "text": " Right. So I don't really have a focus." }, { "end": 38.4, "start": 33.44, "text": " I think I'm known for kind of branching out and doing lots of different things." }, { "end": 46.480000000000004, "start": 38.4, "text": " But this sort of underlying thing that's interested me is to understand and replicate biological intelligence" }, { "end": 49.4, "start": 46.480000000000004, "text": " and apply this to real world tasks." }, { "end": 53.760000000000005, "start": 49.4, "text": " So my main topics are deep learning and reinforcement learning." }, { "end": 61.519999999999996, "start": 53.76, "text": " If you place researchers who are interested in general AI on a sort of spectrum between first principles" }, { "end": 70.64, "start": 61.519999999999996, "text": " or good old fashioned symbolic AI versus whole brain replication, then I'm interested in cognitive principles" }, { "end": 72.24, "start": 70.64, "text": " and less at the level of neurons." }, { "end": 80.32, "start": 72.24, "text": " They're sitting a little more in the neuroscience side than maybe some end to end deep learning people." }, { "end": 86.47999999999999, "start": 80.32, "text": " But I'm also not tied to this sort of thing. So I've worked on practical applications from robotics" }, { "end": 93.44, "start": 86.47999999999999, "text": " to medical imaging and even more fundamental deep learning things like architectures like adaptive neural trees." }, { "end": 95.75999999999999, "start": 93.44, "text": " Can you tell us a bit about Araya?" }, { "end": 105.35999999999999, "start": 95.75999999999999, "text": " Sure. So Araya was founded by Riyota Kanai about half a decade ago to work at the intersection of AI and neuroscience." }, { "end": 111.68, "start": 105.36, "text": " So the engineering department provides AI solutions, whereas the research department just goes ahead" }, { "end": 113.68, "start": 111.68, "text": " and does fundamental research." }, { "end": 119.92, "start": 113.68, "text": " So Riyota and a lot of the other researchers here specialize in consciousness, which is fascinating." }, { "end": 123.03999999999999, "start": 119.92, "text": " So we all believe we have it." }, { "end": 127.44, "start": 123.03999999999999, "text": " But we don't really know what it is, what it's useful." }, { "end": 133.6, "start": 127.44, "text": " If it's useful at all, or if it's just a side effect of other things that are happening in the brain." }, { "end": 141.2, "start": 133.6, "text": " So one can argue that being self-aware, having a model of yourself is useful for interacting with the world." }, { "end": 143.51999999999998, "start": 141.2, "text": " But why do we actually feel things?" }, { "end": 150.32, "start": 143.51999999999998, "text": " And this idea of quality is my feeling of redness the same as your feeling of redness." 
}, { "end": 159.28, "start": 150.32, "text": " So theories of consciousness are holistic theories of brain function and some examples include global workspace theory." }, { "end": 166.72, "start": 159.28, "text": " So I see these as inspiration for AI, but also like better AI models that have been developed in recent years" }, { "end": 169.12, "start": 166.72, "text": " can drive forward neuroscience research." }, { "end": 176.4, "start": 169.12, "text": " And so we set up the synergy hoping that we can use the latest in AI to drive neuroscience research" }, { "end": 178.8, "start": 176.4, "text": " and neuroscience research to drive AI." }, { "end": 183.76, "start": 178.8, "text": " And so yeah, we're looking for experienced AI researchers who want to look at this intersection." }, { "end": 187.76, "start": 183.76, "text": " So how did you get into reinforcement learning?" }, { "end": 194.32, "start": 187.76, "text": " So when I finished my undergrad degree, which was computer science at Cambridge University," }, { "end": 198, "start": 194.32, "text": " I became a web developer just because I enjoyed it." }, { "end": 204.48, "start": 199.04, "text": " But half a year later, when the startup I joined failed to monetize and everyone got laid off," }, { "end": 210.32, "start": 204.48, "text": " I thought about what I really wanted to do and remembered that as a kid, I love robots." }, { "end": 216.16, "start": 211.28, "text": " So I applied to the Bunginary Masters at Imperial College with the hope of learning how to make" }, { "end": 222, "start": 216.16, "text": " Androids. When I was there, I remembered I was terrible at maths, but I could program." }, { "end": 225.68, "start": 222, "text": " So I joined a computer vision lab for my master's project." }, { "end": 231.92, "start": 226.64, "text": " They are the supervisor and al-Berrath, worked on biologically inspired computer vision." }, { "end": 237.35999999999999, "start": 231.92, "text": " And this was back in 2014 and he was ready to transition into deep learning." }, { "end": 240.07999999999998, "start": 237.35999999999999, "text": " So he asked me to join and help with it." }, { "end": 245.44000000000003, "start": 240.08, "text": " And given that deep mind was showing demos of the DQN back then," }, { "end": 252.64000000000001, "start": 246.32000000000002, "text": " felt like it was the good time to try and do end-to-end control for robots directly from pixels." }, { "end": 260.16, "start": 253.92000000000002, "text": " So during the MSE, I actually did my first course in machine learning, but I switched off after" }, { "end": 266.40000000000003, "start": 260.16, "text": " the lecture, started talking about infinite dimensions and managed to just about scrape a sea at" }, { "end": 272.79999999999995, "start": 266.4, "text": " the end of it. So before my PhD, I did Androids Coursera course on machine learning, which is" }, { "end": 278.4, "start": 272.79999999999995, "text": " fantastic. And that sort of pragmatic view really helped me understand what was going on." }, { "end": 283.91999999999996, "start": 279.35999999999996, "text": " And then once my PhD started, actually I did Jeff Hinton's Coursera course in deep learning." }, { "end": 291.12, "start": 285.12, "text": " And then a few months in, I actually realized that what I wanted to do was a field that existed" }, { "end": 296.64, "start": 291.12, "text": " and it was called reinforcement learning and started learning from there using Sutton and Bottos book." 
}, { "end": 303.36, "start": 298, "text": " So yeah, even now I feel like I'm just health padded maths and it goes over my head, but I try and learn" }, { "end": 312.16, "start": 303.36, "text": " when I can. So I started studying and then once later when I handed in my first year plan," }, { "end": 318.96, "start": 312.16, "text": " I told my supervisor, I had bad news and he said, don't tell me Google have announced that they" }, { "end": 327.59999999999997, "start": 318.96, "text": " get into high deep reinforcement learning to robotics. And I was just sion because the day before" }, { "end": 332.96, "start": 327.59999999999997, "text": " David Silver had announced that Google was interested in them playing their techniques to robotics." }, { "end": 339.28, "start": 334.32, "text": " So one of my first year examiners, Mark Deeson, writes that it's tough to be really stubborn" }, { "end": 345.35999999999996, "start": 340.32, "text": " to try and compete with deep mind. So yeah, we still love about it because I am really stubborn." }, { "end": 353.44, "start": 345.36, "text": " But yeah, I'd really like to express my gratitude to my supervisor, Mark, and others in Imperial," }, { "end": 358.96000000000004, "start": 353.44, "text": " who believed in someone who's basically turned us off AGI without even knowing what optimal control" }, { "end": 365.44, "start": 358.96000000000004, "text": " was. So can you tell us a bit more about your early days with RRL?" }, { "end": 372.64, "start": 366, "text": " Yeah, so back then there was far less research software available, especially very little" }, { "end": 378.56, "start": 372.64, "text": " for reinforcement learning. So I had to start from there, I felt so first I investigated with" }, { "end": 386.32, "start": 378.56, "text": " frameworks there, there was the Anno, Cafe, Torch 7. They were somewhere paint install and manage" }, { "end": 393.59999999999997, "start": 386.32, "text": " especially Cafe and CUDA's still a bit painful to deal with. So I used my web developer experience" }, { "end": 400, "start": 393.59999999999997, "text": " to leverage Docker, made CUDA dock images for all of the main frameworks and that turned out to" }, { "end": 405.92, "start": 400, "text": " be really popular and frameworks have started adopting this and eventually Nvidia told me," }, { "end": 412.8, "start": 405.92, "text": " oh, we're going to make our own CUDA Docker support. So how many things are in a much better position?" }, { "end": 419.68, "start": 413.92, "text": " So I chose Torch 7 because I like that best and then I realized I needed a way to manage" }, { "end": 426.48, "start": 419.68, "text": " experiments and there was sacred in Python but nothing language agnostic in Torch 7 was lower. So" }, { "end": 433.52000000000004, "start": 426.48, "text": " I built my own called FGLAB using Node.js which uses this servant client architecture to dispatch" }, { "end": 440.48, "start": 433.52000000000004, "text": " jobs and collect data in MongoDB. So again, like the scene is a lot better now but I guess language" }, { "end": 448.16, "start": 440.48, "text": " agnostic frameworks is still a bit rare. So then going on to RRL, I realized we'd lack standard" }, { "end": 456.08000000000004, "start": 448.16, "text": " benchmark code. 
So inspired by this previous research project called RLGLU which has" }, { "end": 463.2, "start": 456.08, "text": " built in Java, I built this library called RLNs which is a set of classic reinforcement learning" }, { "end": 469.03999999999996, "start": 463.2, "text": " environments in Lua also including Atari and someone contributed a Minecraft wrapper." }, { "end": 478.24, "start": 470.32, "text": " And then months later, OpenAI released OpenAI Jim in Python. So I had a good idea but chose the" }, { "end": 489.44, "start": 478.24, "text": " wrong language. So yeah, so we like to get RLBase lines to do research with so I made this Atari" }, { "end": 495.04, "start": 489.44, "text": " code base which contain all of DeepMind's DQN variants up to that point. So those are actually" }, { "end": 501.52, "start": 495.04, "text": " most of the way to Rainbow, maybe I should have done that. And yeah, actually later on when my PhD" }, { "end": 508.64, "start": 501.52, "text": " I made a clean reproduction of DeepMind's Rainbow agent which took months of work and though ironically" }, { "end": 515.1999999999999, "start": 508.64, "text": " I still haven't used it to do my own novel research. It's great to see it acknowledged in a lot of" }, { "end": 523.92, "start": 515.1999999999999, "text": " research papers from other labs such as Berkeley. So one question I often get is how did I manage to" }, { "end": 529.04, "start": 523.92, "text": " get so many research internships and it actually came from doing all of this work. So" }, { "end": 537.1999999999999, "start": 529.04, "text": " Sumis, Chintala and Alicund, Chani at Farron Twitter liked my contributions to the Twitch 7 ecosystem" }, { "end": 544.0799999999999, "start": 537.1999999999999, "text": " and they recommended me for internships and similarly Tom Scholl at DeepMind like my Tari library" }, { "end": 550.88, "start": 544.0799999999999, "text": " and Adlet Saka at Microsoft Research recommended me for being able to work on DeepRL in research" }, { "end": 557.36, "start": 550.88, "text": " engineering. So actually all of these opportunities came through my open source work which is really" }, { "end": 563.92, "start": 557.36, "text": " awesome and totally unexpected. So yeah, I got to work on super resolution with Wenji Shee," }, { "end": 571.92, "start": 564.72, "text": " model based RL Katja Hoffman with a preprint out on Akove, multi agent RL with Suming Ling" }, { "end": 577.76, "start": 571.92, "text": " and Mermi augmented RNNs with David Reichert. So a lot of fun to branch out with those." }, { "end": 584.64, "start": 579.04, "text": " And yeah just because I had recommendations doesn't mean I didn't have to go through the normal" }, { "end": 588.48, "start": 584.64, "text": " recruitment process so don't worry, like I still had to sit through with the interviews." }, { "end": 597.52, "start": 590.88, "text": " So yeah, well this is sort of three years in. I had a lot of opportunities actually looking back." }, { "end": 605.6, "start": 597.52, "text": " I spent you know almost a year of my PhD just re-implementing other's work and reading" }, { "end": 612.64, "start": 606.24, "text": " and studying before I felt confident enough to work on my own novel research, especially as nobody" }, { "end": 618.64, "start": 612.64, "text": " in my lab knew about reinforcement learning and the only deep learning we had was on restricted" }, { "end": 627.6, "start": 618.64, "text": " balsamon machines. So I was largely self-taught. 
So yeah, in 2016 at that point the main recipe" }, { "end": 635.12, "start": 627.6, "text": " you for research was take something from classic RL and make it deep. So I was interested in" }, { "end": 641.52, "start": 635.12, "text": " hierarchical reinforcement learning and made the deep option Q network with Nat DelicTanical" }, { "end": 648.88, "start": 641.52, "text": " from Marisiana Hanslab. And this was just about the right timing. So simultaneously there's" }, { "end": 656.0799999999999, "start": 648.88, "text": " work on the hierarchical DQN and the hierarchical DPR on network but we only managed to run us" }, { "end": 662.8, "start": 656.0799999999999, "text": " on this simple catch domain. But this turned out to be the sort of some fruitful collaborations with" }, { "end": 670.56, "start": 662.8, "text": " Marislabs. So this including helping Mottagonello on her work combining deep and symbolic approaches to" }, { "end": 676.16, "start": 670.56, "text": " RL to overcome poor generalization of standard deep reinforcement learning methods." }, { "end": 684.4, "start": 677.68, "text": " So one advantage that I had of being naughty self-taught was that I really had to en masse a lot of" }, { "end": 691.04, "start": 684.4, "text": " knowledge myself instead of relying on others. So when my supervisor suggested writing a survey paper" }, { "end": 698.4799999999999, "start": 691.76, "text": " which I did with the help of Mark and Miles Brandage, it took about half a year full time. It was" }, { "end": 704.96, "start": 698.48, "text": " really a lot of work. But pretty something that I'm really proud of and happily it was really" }, { "end": 712.96, "start": 704.96, "text": " well received by community. Awesome man it also received a good count of citations I noticed and" }, { "end": 718.16, "start": 712.96, "text": " will have a link to it in the in the episode page. Let's talk about one of your first author" }, { "end": 725.36, "start": 718.16, "text": " papers Alpha Star an evolutionary computation perspective and trying to bridge DRL and the" }, { "end": 729.2, "start": 725.36, "text": " evolutionary computation communities. So what's the basic idea here with this paper?" }, { "end": 740.24, "start": 730.96, "text": " Right so in January 2019 DeepMind revealed Alpha Star to the world and this was the first AI system" }, { "end": 748.32, "start": 740.24, "text": " to let's say fairly beat a professional player at StarCraft 2. And DeepMind has experts in" }, { "end": 755.2, "start": 748.32, "text": " reinforcement learning and game theory so they used insights from those fields to develop Alpha Star." }, { "end": 761.9200000000001, "start": 755.2, "text": " But if you think about populations of agents interacting and learning well that's evolution." }, { "end": 768.24, "start": 762.48, "text": " And I felt there was enough information from the blood post to actually try and form a link between" }, { "end": 774.32, "start": 769.12, "text": " Alpha Star and the field of evolutionary computation which also has decades of research." }, { "end": 782.08, "start": 774.32, "text": " So I'm not too familiar with the evolutionary computation world. I did one project using" }, { "end": 789.0400000000001, "start": 782.08, "text": " genetic algorithms for civil engineering which worked out but I don't know much about the rest." }, { "end": 794.4000000000001, "start": 789.0400000000001, "text": " I gathered includes things like genetic algorithms but can you tell us more about what EC entails?" 
}, { "end": 801.44, "start": 794.4000000000001, "text": " Sure so EC is family of global optimization algorithms inspired by biology." }, { "end": 812.32, "start": 801.44, "text": " So essentially you have a population of solutions which slowly change over time and they're subject" }, { "end": 819.36, "start": 812.32, "text": " to natural or artificial selection as it were. So the fitness just increases gradually over time" }, { "end": 825.6800000000001, "start": 819.36, "text": " and it can include approaches such as ant colony optimization and particles from optimization." }, { "end": 834.7199999999999, "start": 825.68, "text": " So evolutionary algorithms use mechanisms that are inspired by biological evolution which includes" }, { "end": 841.04, "start": 834.7199999999999, "text": " selection so you know think of survival of the fitness crossover which is akin to sexual" }, { "end": 848.2399999999999, "start": 841.04, "text": " reproduction and mutation. So for example vanilla particles from optimization doesn't have" }, { "end": 855.84, "start": 848.24, "text": " crossover or mutation. So the most famous the basic genetic algorithm is given your population" }, { "end": 862.96, "start": 855.84, "text": " of candidate solutions you evaluate their fitness select some of the best use these to create" }, { "end": 869.84, "start": 862.96, "text": " the next population and then just repeat. Some well-known classes include evolution strategies which" }, { "end": 877.44, "start": 869.84, "text": " sort of performs approximately gradient descent. Genetic programming where you evolve programs and" }, { "end": 884.96, "start": 877.44, "text": " neural evolution where you can evolve neural network architectures. So the pro and the com is that" }, { "end": 891.9200000000001, "start": 884.96, "text": " they're black box and they don't eat gradients so they're good for exploring noisy fitness landscapes" }, { "end": 899.2, "start": 892.5600000000001, "text": " but they don't normally exploit local knowledge like gradients. So they're not so competitive for" }, { "end": 904.24, "start": 899.2, "text": " supervised learning but they can be useful for things like reinforcement learning where evaluation" }, { "end": 910.5600000000001, "start": 904.24, "text": " is noisy or say nested optimization where it might be quite expensive to calculate gradients." }, { "end": 919.2, "start": 912.88, "text": " Okay so in the abstract of this paper it says we highlight some of its most interesting aspects" }, { "end": 925.04, "start": 919.84, "text": " the use of Lamarkey and evolution, competitive co-evolution and quality diversity." }, { "end": 933.28, "start": 925.84, "text": " Can you help us understand these terms and how they map to to the alpha star concepts? Sure" }, { "end": 941.6, "start": 933.28, "text": " so firstly population based training was used to the alpha star league and can categorize that" }, { "end": 948.8, "start": 941.6, "text": " as a steady state Lamarkey and evolutionary algorithm. 
So Lamarkey and evolution in biology was" }, { "end": 956, "start": 948.8, "text": " this idea that osprey can inherit characteristics that were obtained during the lifetime with the" }, { "end": 961.1999999999999, "start": 956, "text": " parents so for example if your parents studied a lot then somehow you'd be born smart" }, { "end": 969.6, "start": 961.2, "text": " so this doesn't happen but it can be used algorithmically of course so in population based training" }, { "end": 977.0400000000001, "start": 969.6, "text": " the solutions which are these agents are trained using local optimization so back propagation" }, { "end": 983.2800000000001, "start": 977.0400000000001, "text": " on this reinforcement learning objective but at the outer level the fitters networks are" }, { "end": 988.96, "start": 983.84, "text": " copied and the hyper parameters are mutated and so these might find better solutions" }, { "end": 998, "start": 988.96, "text": " and as opposed to generational algorithms where only candidate solutions from one generation" }, { "end": 1003.9200000000001, "start": 998, "text": " exist at once steady state algorithms have this continuously evolving population" }, { "end": 1008.5600000000001, "start": 1003.9200000000001, "text": " which is much better suited for asynchronous and distributed computation." }, { "end": 1016.4000000000001, "start": 1008.5600000000001, "text": " Interestingly there's a slightly different thing called bold-winian evolution where instead of" }, { "end": 1022.64, "start": 1016.4, "text": " the final parameters of the solution being copied the initial parameters are reproduced" }, { "end": 1029.12, "start": 1023.4399999999999, "text": " and this is actually a meta-learning algorithm so it's the solutions that optimize well that" }, { "end": 1033.28, "start": 1029.12, "text": " actually survive. Bold-winian is that a bit more like biological evolution?" }, { "end": 1040.24, "start": 1033.92, "text": " So not an expert on biology but as far as I know yes this is part of the modern sort of theories" }, { "end": 1049.28, "start": 1040.24, "text": " of biological evolution. Cool okay and then what about the co-evolution? Right so with co-evolution" }, { "end": 1057.1200000000001, "start": 1050, "text": " the fitness of agents is evaluated against other agents so this is as opposed to a static fitness" }, { "end": 1064.48, "start": 1057.1200000000001, "text": " landscape so you can actually see this as a superset of self-play so rather than evaluating" }, { "end": 1070.56, "start": 1064.48, "text": " against yourself or even past selves you evaluate against a diversity of solutions and therefore" }, { "end": 1078.48, "start": 1070.56, "text": " you're less likely to overfit or get stuck in cycles and like self-play this induces a curriculum" }, { "end": 1084.64, "start": 1078.48, "text": " so things just get better over time. 
One of the interesting things with alpha-star is that while" }, { "end": 1091.68, "start": 1085.2, "text": " in evolutionary computation usually these interactions these pairings are typically randomly" }, { "end": 1097.76, "start": 1091.68, "text": " uniform alpha-star actually pairs agents with similar finisers so this is something that's less" }, { "end": 1106.96, "start": 1097.76, "text": " common in the EC literature and then the third spot which we talked about is quality diversity so" }, { "end": 1115.44, "start": 1107.92, "text": " there is a single main objective which is optimizing the ealerating but in a complex game there's" }, { "end": 1123.52, "start": 1115.44, "text": " no single best strategy so like rock paper scissors it's non-transitive and from the game theory" }, { "end": 1129.6000000000001, "start": 1123.52, "text": " perspective deep mind says that they want to find this complementary set of least exploitable strategies" }, { "end": 1138.48, "start": 1131.04, "text": " but we can also look at this somewhat as a quality diversity algorithm so in a quality diversity" }, { "end": 1147.28, "start": 1138.48, "text": " algorithm we define a set of niches which is typically based on some behavior descriptor and we keep" }, { "end": 1155.28, "start": 1147.28, "text": " the best solutions in each niche even if they're not globally optimal and this allows us to basically" }, { "end": 1162.08, "start": 1155.28, "text": " collect a diverse set of solutions where some of them might be better in different settings so" }, { "end": 1172.1599999999999, "start": 1162.08, "text": " for example Antoine in his 2015 nature paper used this to find a set of locomotions gates for a" }, { "end": 1178.1599999999999, "start": 1172.1599999999999, "text": " robot and if it was damaged then it could quickly find one of these diverse gates that would allow" }, { "end": 1186.32, "start": 1178.1599999999999, "text": " to still walk. So in the case of alpha-star the sort of niches could be building more of a unit type" }, { "end": 1193.6799999999998, "start": 1186.32, "text": " or beating another unit type or even a mix of this and so it's quite a complex quality diversity" }, { "end": 1199.04, "start": 1193.6799999999998, "text": " algorithm with niches that are dot over time which is a bit like Uber AI's pervade algorithm." }, { "end": 1205.6799999999998, "start": 1200, "text": " So when you first encountered it like were you surprised by how much detail they put into the design" }, { "end": 1210.8, "start": 1205.6799999999998, "text": " of alpha-star league like there's just so many things going on in there I wonder would the" }, { "end": 1218.24, "start": 1210.8, "text": " evolutionary computation community kind of recognize some of this stuff or were they really pushing" }, { "end": 1225.2, "start": 1218.24, "text": " the envelope in the alpha-star league design? 
Yeah so I think they really did push the envelope" }, { "end": 1232.32, "start": 1226.3999999999999, "text": " with with a lot of the ideas I'm not I'm still many more of a deep learning reinforcement" }, { "end": 1238.6399999999999, "start": 1232.32, "text": " learning researcher so I can't say everything that's going on in the EC community so from my perspective" }, { "end": 1246.16, "start": 1238.64, "text": " I think there were some novel things and yeah with writing this paper the goal was really to try" }, { "end": 1253.3600000000001, "start": 1246.16, "text": " and bridge the community so this was presented at Gecko and so hopefully the EC community could see" }, { "end": 1261.0400000000002, "start": 1253.3600000000001, "text": " really what was being pushing the envelope of AI in general but also on the other side I'm hoping" }, { "end": 1267.76, "start": 1261.0400000000002, "text": " that people who are interested in multi-agent systems and diversity will be interested in checking out" }, { "end": 1272.72, "start": 1267.76, "text": " the literature and evolutionary computation find out what's happening there and then try and connect" }, { "end": 1279.68, "start": 1272.72, "text": " it back to their own interests cool okay let's move on to your next paper where your co-author" }, { "end": 1285.44, "start": 1280.56, "text": " analyzing deep reinforcement learning agents trained with domain randomization and that was" }, { "end": 1293.76, "start": 1285.44, "text": " die at all 2019 so can you tell us about the main idea in this paper? I was originally motivated by" }, { "end": 1298.64, "start": 1293.76, "text": " trying to apply deeper reinforcement learning to robotics and you know we thought that we might" }, { "end": 1306.48, "start": 1298.64, "text": " need fundamental advances to do so but domain randomization which is essentially data augmentation" }, { "end": 1312.96, "start": 1306.48, "text": " applied to reinforcement learning allowed quite a substantial amount of semterial transfer" }, { "end": 1319.92, "start": 1313.52, "text": " maybe only requiring a little bit of fine tuning and it's it's really simple so what's going on" }, { "end": 1326.5600000000002, "start": 1319.92, "text": " here? So the idea with this paper was to try and open up the black box a little bit and find out" }, { "end": 1335.2, "start": 1326.5600000000002, "text": " what's going on with DR as opposed to not training with DR so the setup is we train two different robots" }, { "end": 1342.64, "start": 1335.2, "text": " the fetch mobile manipulator and the canova-jeko arm with or without proprioceptive sensors with or" }, { "end": 1349.1200000000001, "start": 1342.64, "text": " without DR and we train it on this simple vision-based target reaching task which is quite common" }, { "end": 1356.2399999999998, "start": 1349.12, "text": " and then we test its out of distribution generalization and also use a range of interpretability" }, { "end": 1362.7199999999998, "start": 1356.2399999999998, "text": " methods to characterize what strategies will learn. So can we talk about some of the findings" }, { "end": 1368.8, "start": 1362.7199999999998, "text": " and especially maybe what was what wasn't surprising to you or what was surprising to you?" 
}, { "end": 1377.4399999999998, "start": 1370.1599999999999, "text": " Right so as a forward I'll say that the results we found are specific to our setup so we" }, { "end": 1384.48, "start": 1377.44, "text": " expect that these might generalize a little bit but the sort of large takeaway is that if anyone's" }, { "end": 1390.4, "start": 1384.48, "text": " interested in looking and understanding their agents then you should really actually run a wide" }, { "end": 1396.64, "start": 1390.4, "text": " range of interpretive methods in order to help you understanding of them especially because a lot" }, { "end": 1402.72, "start": 1396.64, "text": " of these can be subjective and you know if we're doing proper science then our prior assumptions about" }, { "end": 1409.2, "start": 1402.72, "text": " what's going on may be wrong and one of the most important things was having a sort of control" }, { "end": 1418.08, "start": 1409.2, "text": " element to make relative statements so for example analyzing agents trained with DR versus no DR" }, { "end": 1422.64, "start": 1418.08, "text": " is much better than trying to make absolute statements about agents trained with DR." }, { "end": 1431.28, "start": 1423.28, "text": " With DR as she did is we varied colors and textures so this is simple form of DR" }, { "end": 1438, "start": 1431.28, "text": " and this provides some robustness to add of distribution inputs such as distractor objects" }, { "end": 1445.2, "start": 1438.72, "text": " but it can fail with global changes like changes in illumination or translating the camera" }, { "end": 1452, "start": 1446, "text": " unless you explicitly trained for this so the question is do we have to account for all of these" }, { "end": 1458.8, "start": 1452, "text": " things in advance or do we need smarter solutions? So what's going on with the agents that we trained?" }, { "end": 1467.2, "start": 1458.8, "text": " So one of the main techniques, saliency maps show that agents trained without DR respond to" }, { "end": 1475.52, "start": 1467.2, "text": " distractors while those that were trained with DR generally don't. Interestingly the fetch agent" }, { "end": 1483.44, "start": 1475.52, "text": " actually uses vision to help localize the gripper even when it's provided a perfect proprioceptive" }, { "end": 1490.0800000000002, "start": 1483.44, "text": " information so it seems like it's an easy enough solution to learn visual localization that the" }, { "end": 1498.0800000000002, "start": 1490.0800000000002, "text": " agent actually does this. An interesting point is that the JCo agent without proprioception" }, { "end": 1507.76, "start": 1499.1200000000001, "text": " it barely has any visible saliency around the arm so actually you can't really see it unless" }, { "end": 1513.1200000000001, "start": 1507.76, "text": " you're expecting it and this is something that really shows that saliency maps can be subjective" }, { "end": 1520.08, "start": 1513.12, "text": " because if the agent if we didn't know that the agent solved the problem and had no access" }, { "end": 1525.1999999999998, "start": 1520.08, "text": " to proprioception then we wouldn't have realized that it must be looking at the arm in order to" }, { "end": 1534.32, "start": 1525.1999999999998, "text": " solve the problem. 
Another common technique we use activation maximization and we see that without" }, { "end": 1540.9599999999998, "start": 1534.32, "text": " domain randomization the agents mainly learn red or blue color filters in the second convolutional" }, { "end": 1548.56, "start": 1540.96, "text": " layer whereas the agent trained with DR actually learn these oriented vertical horizontal red" }, { "end": 1554.88, "start": 1548.56, "text": " blue grids with actually a little bit of a ball motif so you can really see the strategy that" }, { "end": 1563.92, "start": 1554.88, "text": " the agent is learning through these filters and quantitatively with domain randomization the layer" }, { "end": 1573.44, "start": 1563.92, "text": " two filters have significantly higher over norms and the layer one filters have lower power spectral" }, { "end": 1578.16, "start": 1573.44, "text": " entropy which is a measure introduced by my supervisor and it basically uses the" }, { "end": 1585.04, "start": 1578.16, "text": " for you domain to characterize the structure. So what do these tell you when you find these things" }, { "end": 1590.8000000000002, "start": 1585.04, "text": " about the grids and what's happening to different layers and the norms what does your take away from" }, { "end": 1598.3999999999999, "start": 1590.8, "text": " from finding those. So this gives us an idea of like how is the agent actually solving the problem" }, { "end": 1605.9199999999998, "start": 1598.3999999999999, "text": " so for example with the red or blue color filters and doing that in tandem with say the" }, { "end": 1613.28, "start": 1606.56, "text": " destructor test we can see that if we change if we use destructors that are different shape" }, { "end": 1622.6399999999999, "start": 1613.28, "text": " then the agent might still respond because it's it's just the same color right whereas if the" }, { "end": 1631.36, "start": 1622.6399999999999, "text": " destructor is a different color then it might still be fine whereas the agents that are trained" }, { "end": 1636.6399999999999, "start": 1631.36, "text": " with domain randomization which are like learning these filters which we can see really will localize" }, { "end": 1643.68, "start": 1636.64, "text": " and do kind of look for color and shape then this is how they're so much more robust to distract us" }, { "end": 1652.4, "start": 1643.68, "text": " for example. Cool. So looking more broadly the looking at the representations learned we use" }, { "end": 1658.72, "start": 1652.4, "text": " this measure called entanglement which is basically how much the representations between classes" }, { "end": 1667.04, "start": 1658.72, "text": " overlap as a measure of invariance. So in RL instead of classes actually we use the different" }, { "end": 1672.08, "start": 1667.04, "text": " out of distribution test scenarios so this is distractors or illumination for example" }, { "end": 1677.76, "start": 1672.96, "text": " and we see that entanglement always increases with depth from the convolutional layers to the" }, { "end": 1686.08, "start": 1677.76, "text": " fully connected layers but for the Dior agents the entanglement of each layer is always higher than" }, { "end": 1693.9199999999998, "start": 1686.08, "text": " the equivalent layers entanglement in the Nundior agents. 
So this is a thumb thing we might expect" }, { "end": 1699.36, "start": 1693.9199999999998, "text": " but it's nice to have this quantitative confirmation that the agents are basically learning to be" }, { "end": 1706.8, "start": 1699.36, "text": " invariant to nuisance factors. Now moving towards the top of the network we ablate the recurrent" }, { "end": 1713.4399999999998, "start": 1706.8, "text": " state by setting it constant and technically this reaching problem is an MDP so it's not needed" }, { "end": 1720, "start": 1713.44, "text": " and this ablation has a small effect on agents trained without Dior but it really has a large" }, { "end": 1728.3200000000002, "start": 1720, "text": " effect on agents trained with it and so we can't tell exactly what's going on but we assume that" }, { "end": 1734.8, "start": 1728.3200000000002, "text": " the agent has learned some sort of state estimation implicitly in order to help it solve the problem" }, { "end": 1744.08, "start": 1734.8, "text": " and lastly we use re-initialization robustness as a measure of the actual network capacity that's" }, { "end": 1749.84, "start": 1744.08, "text": " used during training and generally the convolutional layers take much longer to stabilize than the" }, { "end": 1755.68, "start": 1749.84, "text": " recurrent fully connected layers except for the Jacob agent trained with proprioception where the" }, { "end": 1763.04, "start": 1755.68, "text": " fully connected layer is important and takes much longer to learn. One of the most surprising findings" }, { "end": 1769.84, "start": 1763.04, "text": " is that the Jacob agent trained with domain randomization but without proprioception" }, { "end": 1778.1599999999999, "start": 1770.8799999999999, "text": " learn to solve the task almost perfectly when the visuals were randomized but only 65% of the time" }, { "end": 1784.96, "start": 1778.1599999999999, "text": " with the standard simulation visuals which indicates some sort of a fitting and it generalizes fine" }, { "end": 1792.8, "start": 1785.76, "text": " for the agent that was trained with proprioception so it shows that it's not just this Dior versus" }, { "end": 1799.44, "start": 1792.8, "text": " no Dior condition but actually the input modalities and even the morphologies the robots can" }, { "end": 1807.36, "start": 1800.08, "text": " influence the strategies that are learned. So yeah in summary this was I believe the largest set" }, { "end": 1813.44, "start": 1807.36, "text": " of interpretability techniques supplied to deep room for some learning agents and there's lots" }, { "end": 1819.6, "start": 1813.44, "text": " of results so check the paper for a more thorough breakdown and references to all of these different" }, { "end": 1826.24, "start": 1819.6, "text": " methods. Cool and they get a link to all these papers will be in the episode page. I wonder" }, { "end": 1832.1599999999999, "start": 1826.24, "text": " do you think that domain randomization will become like even more important going forward? 
I mean" }, { "end": 1839.36, "start": 1832.1599999999999, "text": " will it is this a long-term strategy or because I guess I've wondered about how it scales the more" }, { "end": 1848.48, "start": 1839.36, "text": " factors of randomization you add and like do we need to you know do we need to show it every single" }, { "end": 1853.3600000000001, "start": 1848.48, "text": " combination of everything that can vary like it seems as the number of factors grows the number" }, { "end": 1861.44, "start": 1853.3600000000001, "text": " of combinations grow very quickly so I wonder what do you think about that? Right so there's still a" }, { "end": 1869.68, "start": 1861.44, "text": " lot of interest growing interest even on using procedural content generation to apply domain" }, { "end": 1876.88, "start": 1869.68, "text": " randomization in different ways so even so we look very specifically at a small set of visual" }, { "end": 1884.24, "start": 1876.88, "text": " perturbations and show that under this set then there's some generalization but it's still limited" }, { "end": 1891.1200000000001, "start": 1885.7600000000002, "text": " whereas you can randomize the dynamics and you can do much more cover things as well" }, { "end": 1898, "start": 1892.3200000000002, "text": " and it seems like we do need some level of domain randomization or procedural content generation" }, { "end": 1905.3600000000001, "start": 1898.96, "text": " to really gather enough data but it's also clear that if we want to solve problems" }, { "end": 1913.36, "start": 1905.36, "text": " at least in this sort of period of history without having to rely on an enormous amount of" }, { "end": 1920.8, "start": 1913.36, "text": " computer giant models then we will also need to bake some more prize into our models in the training." }, { "end": 1927.1999999999998, "start": 1921.6799999999998, "text": " Cool okay let's move to your next co-author paper that's training agents using upside down" }, { "end": 1935.8400000000001, "start": 1927.2, "text": " reinforcement learning that's Srivastava at all 2019 so you mentioned you were an intern at Naysense" }, { "end": 1941.6000000000001, "start": 1937.28, "text": " and you worked on upside down reinforcement learning there and you're planning to do more" }, { "end": 1950.32, "start": 1942.16, "text": " more on that topic can you tell us a bit about Naysense? So Naysense was formed in 2014 from" }, { "end": 1957.4399999999998, "start": 1950.32, "text": " Idsia which is Yogan Schmidt who was lab that has been at the forefront of AI research since the early 90s" }, { "end": 1964.24, "start": 1958.32, "text": " so it has many top AI researchers working on both fundamental research and applications industry" }, { "end": 1972.8799999999999, "start": 1965.04, "text": " and the main focus is industrial automation so using AI to help manufacturing and production." }, { "end": 1980.64, "start": 1972.88, "text": " So I remember seeing this I believe I remember seeing this in Europe 2019 at the Deep RL workshop" }, { "end": 1987.6000000000001, "start": 1980.64, "text": " but I don't think I understood at the time and and I maybe I'm just trying to understand it now so" }, { "end": 1993.92, "start": 1987.6000000000001, "text": " can you help us understand what is going on with the paper and what is the main idea what is upside down" }, { "end": 2002.8000000000002, "start": 1993.92, "text": " or all? So what is the upside down part? 
Well instead of using the return as part of the" }, { "end": 2009.52, "start": 2002.8, "text": " objective which you do in policy search or when you learn value functions you use it directly as an" }, { "end": 2015.44, "start": 2009.52, "text": " input and turns out this turns reinforcement learning into a supervised learning problem" }, { "end": 2021.6, "start": 2016.08, "text": " and you just maximize the likelihood of a behavior function on past experiences" }, { "end": 2027.9199999999998, "start": 2022.6399999999999, "text": " which is conditioned on observed returns and the time horizon in which it was achieved." }, { "end": 2037.44, "start": 2027.92, "text": " So for example if a return of 10 was achieved in five time steps given a statement action then you" }, { "end": 2043.2, "start": 2037.44, "text": " just train to maximize the probability of taking that action in that state in order to achieve that." }, { "end": 2051.6800000000003, "start": 2044.4, "text": " So it's a bit like behavioral cloning but it generalizes beyond rewards achieved by a demonstrator" }, { "end": 2060, "start": 2051.68, "text": " and goes beyond one step deviations in the loss. So the pros of upside down reinforcement learning" }, { "end": 2065.52, "start": 2060, "text": " is that it's completely supervised we don't have any value functions and there's no bootstrapping" }, { "end": 2073.2799999999997, "start": 2065.52, "text": " involved because it's really just supervised learning it can benefit from scaling the architecture" }, { "end": 2079.52, "start": 2073.2799999999997, "text": " and we have some unpublished results in that. You can use data augmentation quite easily this" }, { "end": 2085.68, "start": 2079.52, "text": " nothing special you have to do and you can bring in all sorts of other tricks that have been used" }, { "end": 2092.96, "start": 2085.68, "text": " for supervised learning and like imitation learning you can quickly mimic an expert if you have" }, { "end": 2099.04, "start": 2092.96, "text": " expert data instead of having to take conservative policy steps like you might do with other methods." }, { "end": 2106.56, "start": 2100, "text": " We can train it without a discount factor so you just say the returns within a time horizon" }, { "end": 2113.7599999999998, "start": 2106.56, "text": " so it's unbiased in that sense. One of the nice properties is that it has the flexibility to" }, { "end": 2120.56, "start": 2114.72, "text": " not just achieve high returns but medium returns or low returns based on what you condition on." }, { "end": 2128.72, "start": 2121.2, "text": " So playing alpha-go wouldn't be any fun but if you can condition it on how well the policy" }, { "end": 2134, "start": 2128.72, "text": " should perform then actually you could come up with an agent that plays at a range of difficulties" }, { "end": 2140.96, "start": 2134, "text": " and finally it still uses the mark of assumption so it's not like a black box optimization on this." }, { "end": 2149.84, "start": 2142.56, "text": " The sort of main con is that it's off the bin and track a bit so we're still working on improving" }, { "end": 2155.44, "start": 2149.84, "text": " both the theory and the practice. 
It's hard to jump straight to state of the art results with" }, { "end": 2163.52, "start": 2155.44, "text": " a different paradigm and so one of the things that we do have to do is that you'll notice it" }, { "end": 2169.6, "start": 2163.52, "text": " misses an explicit reward maximization phase so the training actually involves something like" }, { "end": 2176.64, "start": 2169.6, "text": " expectation maximization so we alternate training with gathering improved data through exploration." }, { "end": 2186.48, "start": 2179.52, "text": " And so is the behavior function here kind of like a parameterized policy? Can you tell us more about" }, { "end": 2194.48, "start": 2186.48, "text": " the behavior function and how it works? Yeah that is exactly it so it's a return condition" }, { "end": 2203.04, "start": 2194.48, "text": " and a time condition policy and one could argue that choosing the appropriate return or time horizon" }, { "end": 2210.16, "start": 2203.04, "text": " is difficult and it is in the general case but it's not the case so much in say constrained" }, { "end": 2216.8799999999997, "start": 2210.16, "text": " robotics environments and yeah so actually we have unpublished results but it works well as an" }, { "end": 2225.04, "start": 2216.8799999999997, "text": " offline RL algorithm where this is very relevant. So this is not in the empirical paper that Rupesh" }, { "end": 2232.96, "start": 2225.04, "text": " headed but Jürgen had a conceptual paper on upside down RL which contains some really nifty ideas" }, { "end": 2241.28, "start": 2232.96, "text": " and how to extend it so firstly the return and the time are just very general goals so it's" }, { "end": 2248.48, "start": 2241.28, "text": " really quite trivial to extend upside down RL to goal condition policies so I think interesting" }, { "end": 2255.92, "start": 2248.48, "text": " research actually lies elsewhere so to be clear the commands in the paper include the desired return" }, { "end": 2262.4, "start": 2255.92, "text": " and the desired time horizon but they could also include traditional goal specifications but also" }, { "end": 2269.04, "start": 2262.4, "text": " more abstract commands so if you're interested in where it could go then you can check out this" }, { "end": 2276.48, "start": 2269.04, "text": " conceptual paper for more details. So I guess I was a bit surprised in the paper that upside down RL" }, { "end": 2282.4, "start": 2276.48, "text": " outperformed A2C in some of these environments which I thought was kind of cool. Can you" }, { "end": 2287.2000000000003, "start": 2282.4, "text": " can you say anything about why why it might be able to do that even though it was it's doing something" }, { "end": 2295.04, "start": 2287.2, "text": " that seems simpler? Right so I think the key difference here against more traditional reinforcement" }, { "end": 2301.04, "start": 2295.04, "text": " learning is the presence of value functions and how you actually learn these value functions so" }, { "end": 2307.7599999999998, "start": 2301.04, "text": " value functions compute the expectation over the future returns which is a potentially powerful" }, { "end": 2313.68, "start": 2307.7599999999998, "text": " low variant signal to learn from. So if you learn a good value function then you can learn quickly" }, { "end": 2321.2799999999997, "start": 2313.68, "text": " but vice versa if it's difficult then you might get stuck. 
So we know that one step returns for" }, { "end": 2331.04, "start": 2321.2799999999997, "text": " example in the dqn or ddpg is low variance but biased whilst n step or full Monte Carlo returns" }, { "end": 2338.56, "start": 2331.04, "text": " like A2C or ppo are high variance but less bias or even unbiased. On the other hand upside down" }, { "end": 2344.08, "start": 2338.56, "text": " RL doesn't use a value function so it has different temporal credit assignment properties." }, { "end": 2350.96, "start": 2345.04, "text": " So you can see that it works very well as is even when the rewards are delayed so these are" }, { "end": 2358, "start": 2350.96, "text": " the sparse delayed reward experiments in the paper. This is actually technically a pom dp as the" }, { "end": 2366.56, "start": 2358, "text": " reward function is delayed but because the transition dynamics are still fully observed we can solve" }, { "end": 2373.52, "start": 2366.56, "text": " this with a feed forward net. If we want to extend upside down RL to full mdp's then it's" }, { "end": 2381.36, "start": 2373.52, "text": " true or to do so by just adding in LSTM for example. Okay and could you tell us more about how" }, { "end": 2387.2, "start": 2381.36, "text": " exploration works in this setting like are we trying to explore command space or what how does" }, { "end": 2394.48, "start": 2387.2, "text": " exploration work with upside down or all? So in this paper the simple strategy is that you learn" }, { "end": 2401.04, "start": 2394.48, "text": " a stochastic policy and so exploration is done through this and at the beginning of the" }, { "end": 2408.2400000000002, "start": 2401.04, "text": " episode we basically find some commands we want to use and this is basically we give it the highest" }, { "end": 2415.28, "start": 2408.2400000000002, "text": " return episodes from the replay buffer that's used to train the agent so this is very simple" }, { "end": 2422.32, "start": 2415.92, "text": " and it seems to work reasonably well but it could obviously be an improved and optimizing the" }, { "end": 2427.6800000000003, "start": 2422.32, "text": " command proposals to improve exploration is certainly an interesting direction for future work." }, { "end": 2433.1200000000003, "start": 2428.48, "text": " Okay now and then I'm looking forward for you what are the next few years look like are you" }, { "end": 2440.2400000000002, "start": 2433.1200000000003, "text": " going to work on more RL research or RL applications? So there's lots of things for me to do here." }, { "end": 2445.52, "start": 2440.2400000000002, "text": " I'm going to continue pursuing fundamental RL research such as working on upside down RL" }, { "end": 2449.6000000000004, "start": 2446.0800000000004, "text": " but also working at the intersection of neuroscience and deep learning so" }, { "end": 2455.8399999999997, "start": 2449.6, "text": " implementing models of consciousness for example and I may or may not get involved in more" }, { "end": 2461.52, "start": 2455.8399999999997, "text": " applied research here at Araya. 
Another thing is that I'm in Tokyo to bolster the research" }, { "end": 2468.24, "start": 2461.52, "text": " community here so there's lots of universities and startups and there's a strong crowd working" }, { "end": 2474.24, "start": 2468.24, "text": " on machine learning theory but relatively little research going into more empirical side of deep" }, { "end": 2481.9199999999996, "start": 2474.24, "text": " learning and Japan is known for robotics neuroscience and artificial life so hoping I can" }, { "end": 2487.7599999999998, "start": 2481.9199999999996, "text": " interact more with those communities. Do you want to share an opinion on what you think is missing" }, { "end": 2498.3199999999997, "start": 2487.7599999999998, "text": " in RL today and what we can do about it? So having spent most of my PhD with access to only one" }, { "end": 2507.1200000000003, "start": 2498.32, "text": " GPU I'm keenly aware of the sample inefficiency of DPRL so I used to run an experiment on Atari" }, { "end": 2514.6400000000003, "start": 2507.1200000000003, "text": " for about a week and a half check the results tweak some hyperparameters and repeat so yeah I think" }, { "end": 2521.84, "start": 2514.6400000000003, "text": " we definitely still need to work on sample efficiency. So nonparametric methods like" }, { "end": 2528.48, "start": 2521.84, "text": " Keanuurs neighbors or Gaussian processes can learn very quickly but they are trickier to scale than" }, { "end": 2535.52, "start": 2528.48, "text": " deep neural networks but we can combine and get the best of both worlds so might call these semi-parametric" }, { "end": 2541.84, "start": 2535.52, "text": " models. So I'm a big fan of DeepMind's neural episodic control and actually had two papers" }, { "end": 2547.44, "start": 2541.84, "text": " extending it to use online clustering and the other and maximum entropy policies which is" }, { "end": 2554.2400000000002, "start": 2547.44, "text": " work done with Andrea Agustinelli, Marta Soraco and Pierre Richmond at Imperial College" }, { "end": 2561.2000000000003, "start": 2555.44, "text": " and more recently episodic memory has cropped up in some of DeepMind's set of the art agents" }, { "end": 2567.68, "start": 2561.2000000000003, "text": " like never give up in agent 57 so I think that's an interesting avenue to investigate." }, { "end": 2575.2000000000003, "start": 2568.56, "text": " Any other strong opinions on where you think we'll find progress in RL going forward?" }, { "end": 2581.12, "start": 2575.2, "text": " I've always believed that we need general purpose representations maybe even some sort of" }, { "end": 2588.64, "start": 2582.16, "text": " common sense and I always thought that multimodal representations which has exemplified really" }, { "end": 2595.04, "start": 2588.64, "text": " recently by OpenEyes Clip Model are going to be vital to understanding the world in a human" }, { "end": 2602.56, "start": 2595.04, "text": " like way and relatively I also think it's important to work on embodied agents and there's lots" }, { "end": 2608.32, "start": 2602.56, "text": " out there at the moment but one of the more interesting to me is the animal AI test" }, { "end": 2616.16, "start": 2608.32, "text": " bed which is based on animal cognition experiments. I should say that the intersection of neuroscience" }, { "end": 2622.64, "start": 2616.16, "text": " and AI will lead to developments. 
Though it almost feels like AI is feeding a bit more into neuroscience" }, { "end": 2629.2799999999997, "start": 2622.64, "text": " than the other way around at the moment but still there's plenty of opportunities. I'm interested" }, { "end": 2636.1600000000003, "start": 2629.28, "text": " to see how Fycareus who work on probabilistic graphical models and especially HMMs" }, { "end": 2643.2000000000003, "start": 2637.1200000000003, "text": " who are heavily inspired by neuroscience actually managed to progress but also maybe" }, { "end": 2649.84, "start": 2643.2000000000003, "text": " British Sutton's bitter lesson is correct and all we need is scale. Do you find anything else" }, { "end": 2655.92, "start": 2649.84, "text": " in the reinforcement learning world interesting lately Kai? Yeah so I'm encouraged to see more" }, { "end": 2663.36, "start": 2655.92, "text": " interest in offline RL which is one way to improve sample efficiency in terms of environment" }, { "end": 2670, "start": 2663.36, "text": " interactions and it's of practical value so you've got benchmarks algorithms and better understanding" }, { "end": 2676.16, "start": 2670, "text": " of the problem. A bit longer term but there's also been some nice progress and model based RL" }, { "end": 2682.32, "start": 2676.16, "text": " which people generally think can result in better sample efficiency more flexibility and maybe" }, { "end": 2689.6800000000003, "start": 2682.32, "text": " even better generalization. Be about AI, finally had their first return then explore paper published" }, { "end": 2696.4, "start": 2689.6800000000003, "text": " in nature so that's a cool idea and I look forward to more work from Ken Stanley, Jeff Kloon and" }, { "end": 2702.32, "start": 2696.4, "text": " others who work on evolutionary computation but are also aware of the broader landscape of machine" }, { "end": 2708.48, "start": 2702.32, "text": " learning and finally there was a recent paper from DeepMind on tackling the temporal credit" }, { "end": 2714.96, "start": 2708.48, "text": " assignment problem called synthetic returns for long-term credit assignment which uses the" }, { "end": 2722, "start": 2714.96, "text": " predictability of rewards from past dates to determine influence in assigned synthetic rewards" }, { "end": 2728.56, "start": 2722, "text": " and this is a really important problem it's exciting like Rado was back in the day and I think" }, { "end": 2735.44, "start": 2728.56, "text": " making progress on this could really improve the state of RL. So Dr. Kai Aruh Kamarang this has been" }, { "end": 2739.52, "start": 2735.44, "text": " great you've given us lots of food for thought here so thanks for sharing your time and insight" }, { "end": 2750.16, "start": 2739.52, "text": " with the talk arrow community thanks" }, { "end": 2758.32, "start": 2751.12, "text": " notes and links for this episode are at talkrl.com if you like this show I need your support" }, { "end": 2763.84, "start": 2758.32, "text": " you can help in a few ways one subscribe on your favorite podcast platform subscriptions make a big" }, { "end": 2770.48, "start": 2763.84, "text": " difference two follow us on Twitter and talk our own podcast we love retweets" }, { "end": 2778, "start": 2772.8, "text": " three give us a five-star rating on Apple podcasts if you don't think we deserve five stars" }, { "end": 2803.28, "start": 2778, "text": " let us know on Twitter what we could do better" } ]
Michael Dennis
Michael Dennis on Human-Compatible AI, Game Theory, PAIRED, ARCTIC, EPIC, and lots more!
https://media.transistor…4a8.mp3?src=site
This is TalkRL Podcast. All reinforcement learning, all the time. Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan. Michael Dennis is a PhD student at the Center for Human-Compatible AI at UC Berkeley. Thanks for joining us, Michael. Thanks for having me. So how do you describe your area of interest? Yeah, so I'm mostly interested in robustness in RL and multi-agent RL, specifically as it applies to making the interactions between AI systems and society at large more beneficial. And how did you come upon that? Yeah, so it's a bit of a long journey. So I guess in undergrad I did an internship that involved a little bit of light penetration testing. And I was already interested in AI for a long time. And it got me thinking about sort of all the ways that AI systems could go wrong. So for instance, you could imagine an AI being used to sort of automatically do some sort of penetration testing through, like, maybe automated fuzzing. And this could cause hackers to be able to sort of drastically increase the amount of attacks they could do. Or you could imagine, I guess, sort of what we're seeing now, where content from, like, GPT-3 and GANs and stuff like that is making it, or is at risk of making it, harder to detect what's true and false online. And concerns about this sort of drove me to thinking about the interactions between AI and society at large more specifically. And I guess to me the multi-agent interaction is sort of the core of all these problems. These problems arise from how AI systems and people interact, and how the incentives of both of these, the AI systems and the people, sort of drive this interaction into places that we don't want it to go. So I've been thinking more about the notion of following one's interests, and obviously I share some of your interests because that's why I really wanted to have you on the show, and I'm so glad you came. The idea of how our interests evolve over time and why they evolve. And sometimes it feels like my curiosity has almost a mind of its own and I'm just along for the ride. How does that work for you? Can you say anything about the process of how your interests evolve over time? Do you ever think about that? I guess initially I was trying to figure out more how I wanted my work to impact the world. But more recently, now that I've been sort of focused more on multi-agent and AI sort of stuff, I find that my research is more driven by attempts to resolve my own confusion, and that I usually focus more on that than trying to figure out whether something is directly publishable. Because I guess, yeah, I think I just find being confused fairly annoying, and I find that this is a pretty good heuristic for me, where if I can become less confused myself, it feels sort of like the first step towards clarifying the issues I care about for the rest of the field. So can you tell us a bit about the Center for Human-Compatible AI, and how do you interpret human-compatible AI? Yeah, so the Center for Human-Compatible AI is a set of groups who work on research trying to make AI, I guess, more human compatible. And I guess I interpret this fairly broadly to mean that I want to make systems that make the interaction between AI systems and society more likely to be beneficial.
This could be anything from not increasing the prevalence of misinformation to ensuring that your personal assistant does what you actually wanted it to do. Okay, so let's segue to your co-authored paper, Adversarial Policies: Attacking Deep Reinforcement Learning, by Gleave et al., 2019. That seems emblematic of the kind of topics that CHAI focuses on, if that makes sense. Can you tell us a bit about that paper? Yeah, so that was a really fun paper. So adversarial policies is sort of a great example of how CHAI's research gets done in practice. So before adversarial policies, the known adversarial attacks on RL systems existed through, like, interventions on the observations. So you would add some sort of adversarial noise to the pixel observations, and this would get, like, an agent playing Pong to miss the ball. And from a security perspective, this is a bit of an unrealistic attack model for real-world systems: if an attacker can change the pixels of your observations, it probably has root access to your hardware, so you probably already lost the attack-defense game. But even from a robustness angle, it's sort of unclear whether we could expect to make our agents robust to these sorts of physically impossible inputs. So adversarial policies was hoping to find more realistic and physically possible attacks through learning the policies of other agents that are also in the environment. So we find that these policies can successfully degrade the performance of some target policy, and they do so in a way that humans would have been robust to. So we focused on some agents, for instance, in a soccer game where one agent is trying to score a goal by kicking a ball into a net, and the other agent is the goalie, trying to block the goal. And so we trained a policy for the goalie just using off-the-shelf RL and found that it could trick the kicker into, like, forgetting how to kick the ball, by just sort of squirming on the ground in a way that most humans would just ignore. So yeah, we found that the policies that came out of it looked particularly ill-fit for the environment, yet still perform really well against the RL policies. Yeah, I really enjoyed the videos with this one. There's some cool videos listeners may want to check out. And it seems to me like you could defeat this current-day AI with, you know, just finding the right dance move, and they're just so surprised they just fall over. Yeah, I definitely recommend checking out the videos, they're super fun to watch. It really feels like the target is trying to do the right thing and sort of knows what it's getting at, but it ends up tripping over itself. Actually, one of my favorite interactions from this thing is, I think last year at NeurIPS, there's some panel that Michael Littman was on where he tries to imitate what the agents are doing. Yeah, I think anything out of Michael Littman is entertaining to watch. I totally agree. And he was our second guest on the show and his episode's amazing. But I think I missed that, seeing him do that dance on stage, so I'll look for that.
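To make the attack model described above concrete, here is a minimal sketch of the adversarial-policies setup: the pre-trained victim (the kicker) is frozen and folded into the environment, so training the attacker (the goalie) reduces to ordinary single-agent RL. This is only an illustration of the idea; the two_player_env interface, the frozen_victim object, and the reward wiring below are hypothetical stand-ins, not the environments or code released with Gleave et al.

import gymnasium as gym

class VictimAsEnvironment(gym.Env):
    """Expose a two-player game as a single-agent env for the attacker."""

    def __init__(self, two_player_env, frozen_victim):
        self.game = two_player_env            # hypothetical two-player env
        self.victim = frozen_victim           # pre-trained policy, never updated
        self.observation_space = two_player_env.adversary_observation_space
        self.action_space = two_player_env.adversary_action_space

    def reset(self, seed=None, options=None):
        adversary_obs, self._victim_obs = self.game.reset(seed=seed)
        return adversary_obs, {}

    def step(self, adversary_action):
        victim_action = self.victim.act(self._victim_obs)   # frozen, no learning
        (adversary_obs, self._victim_obs), victim_reward, done, info = self.game.step(
            victim_action, adversary_action)
        # Zero-sum view: the attacker is rewarded whenever the victim fails.
        return adversary_obs, -victim_reward, done, False, info

# Training the attacker is then just off-the-shelf RL, e.g. with
# stable-baselines3 (if installed):
#   from stable_baselines3 import PPO
#   attacker = PPO("MlpPolicy", VictimAsEnvironment(game, victim))
#   attacker.learn(total_timesteps=20_000_000)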
If we can move on to game theory: I guess, like a lot of people, my first intro to game theory was hearing about the prisoner's dilemma, maybe in high school, and then the Axelrod tournament. When I first encountered it, it seemed really simple, and I guess now I'd say deceptively simple, so I couldn't really see how it could be of much practical use. Later on I encountered it in more complex settings, like DeepMind's work and AlphaStar, and Natasha Jaques' social influence paper, which was the first episode of the show, and I started to get a new appreciation for the whole concept. But as you can see, my exposure has been pretty shallow. My sense is that game theory is a pretty deep universe. Is that fair to say? Yeah, I think game theory is getting at something pretty fundamental to how the world works. Games are everywhere, right? Our society is really just a bunch of agents playing a bunch of different games that we've collectively agreed are the way we want to operate our society. And there's a lot you can learn about what behaviors those games motivate and what behaviors can exist in equilibrium when the games are set up the way they are. So can you give us a hint about the structure of game theory, like what kinds of major topics and common applications come up? It's a really wide field. The core intuitions behind game theory started with von Neumann and Nash; the Nash equilibrium is the thing most people point to, and it's one of the most productive concepts we have for understanding how social science and economics and fields like that really work. It's a really good predictive model for what sort of behavior you should expect out of multi-agent interactions. But in AI we're coming from a bit of a different perspective. We don't want predictive models of multi-agent systems that already exist so much as we want to know how to build AI systems that perform well in the presence of other agents. So we come from a different perspective than most of the game theory literature, and because of that I think it's somewhat hard to directly apply the tools of game theory in AI without doing a little bit of translation. One part of the field that requires less translation: Joe Halpern does a lot of really good work at the intersection of AI and game theory, specifically his work on reasoning about knowledge; he has a few textbooks written on this. Instead of the third-person perspective most of game theory takes, trying to figure out what two agents will do given that they're both rational, Halpern's work takes more of a first-person perspective and asks something like: if I have these given beliefs, what should I do?
And then it derives what a multi-agent system would do from there. That makes sense. So is that the area you're most interested in, in terms of game theory? I'm mostly interested in trying to come up with ways of thinking about multi-agent systems in the presence of AI that aren't confusing, or at least aren't as confusing as the ways I think we currently think of them, and I think Halpern's work is a good step in that direction. Do you think of game theory as a very practical thing, or more a theoretical tool? Well, I think it depends on how you use it. If you're trying to analyze a multi-agent system, then figuring out where the Nash equilibria are is a very practical first step to understanding what's going to happen in the long run. And if you're trying to get a general understanding of how these sorts of systems work more broadly, you can do a lot of good for yourself by learning the theory and trying to see how it applies to the real world. That second branch ends up being a lot more theoretical and a lot less directly applicable, but I think the way to think about a lot of this work is less about coming up with particular things to do in particular applications and more about building good intuitions for how to think about these systems. So you mentioned how we're playing games all the time in society and in social situations. Are those things worth modeling? Is there any hope of quantifying what we're really doing? Maybe that's clearer in economics, where you can attach dollar values to things, but when I think about it, everything seems so fuzzy: do people even know why they're doing what they're doing? Whereas these payoff matrices are so nice and neat. Yeah, for sure. There's an interesting thing in game theory where the better the agents get, the more game theory applies to them. One of its base assumptions is that the agents involved are rational and behaving well with respect to whatever their beliefs are, and the stronger or more capable the agents involved are, the more predictive game theory is of what they're going to do. So you should expect game theory to apply less to individual interpersonal interactions between two arbitrary humans, and more to the behavior of corporations, the behavior of governments, the behavior of really high-performing people in different fields. You mentioned economics; a lot of economics is built on top of game theory, but it also applies to other areas of society, for instance politics. You could analyze how the first-past-the-post voting system we have in a lot of countries influences the sorts of political parties that develop, and you find that with the sort of system we have in the US, if you analyze the equilibria, it seems almost inevitable that we would end up with a two-party system. It's interesting how, even though many of the framers didn't want us to end up with a two-party system, the structures they left behind sort of unintentionally made that inevitable. So I think there's a lot to learn about how we design institutions and how we design incentives, to make it more likely that the things we actually want our society to do are the natural outcomes of humans following their own incentives locally. That kind of sounds like a mechanism design thing. I think all of norms and institutional design and things like that is mechanism design. So I guess if von Neumann and Nash had been the founding fathers, do you think we might have had a slightly different outcome? Yeah, we definitely would have had a different outcome.
Though I'm not sure whether they would have gotten something else wrong that we don't yet understand. I think it was genuinely difficult to design a government at that time, because the reason first-past-the-post produces a two-party system wasn't really known back then, so it would have been very difficult for them to even anticipate it. I guess they didn't have millions of trajectories to learn from. Yeah, they didn't have millions of trajectories to learn from. So I sometimes wonder about these payoff matrices: where do they come from? They seem a bit handed down from on high, kind of like rewards in RL seem a little bit like that too. Can we deduce these matrices by observing behavior? Is that something like inverse game theory, or inverse RL? Yeah, the problem of where rewards come from is pretty deep, and there's a lot of decision-theoretic, economic, and philosophical work on it. Inverse game theory is actually a thing that exists; I think there's a paper under that title that does some work on it. It seems to actually be harder than inverse reinforcement learning, because agents have incentives not to be honest about their own intentions, so it's even more difficult than it otherwise would be. But yeah, I think where rewards come from, and what the definition of a good action even is, is a fairly fundamental problem, not just in AI but in economics and decision theory and game theory more broadly. I think more work should be done in that area, though it's hard to figure out which way is up. So I read that the RAND Corporation famously used game theory to plan strategy for nuclear war, and I don't know if it's causal, but we haven't had a nuclear war yet, so maybe it worked. But if we look at the biggest global problems we face today, tragedies of the commons like the oceans and carbon emissions, and competition for resources, can game theory provide insight into solving these kinds of big problems? It's funny you mention the RAND Corporation, because I've ended up reading a lot of their papers on different occasions; a lot of the fundamental work in game theory happened there. The hope is that you can use game theory to find solutions to these problems. I know game theory has been successful in a lot of these applications, but the ones I know more about are on the algorithmic side. Mechanism design has been successful in designing spectrum auctions, auctioning off different parts of the radio bands that the US government was selling to different radio stations to determine who could broadcast what on which bands, and that was pretty successful. There are also the auctions that happen whenever you see an ad when you search on Google, and those are usually designed through some sort of mechanism design. At the broader societal level, I know that people who think about these sorts of things often do so through the lens of game theory, though I'm not sure they go so far as to model the exact situation in terms of an explicit game.
I think it's more that they use games as a way of coming to an understanding of the dynamics of the system they're interacting with and how that system behaves, and they often use those intuitions to inform the decisions they have to make. From an AI perspective, I think it's going to be very difficult to make progress in human-robot interaction, or in multi-agent interaction generally, without understanding a bit more about how game theory works. With humans there's some innate knowledge about how to interact with other humans that we all have, that we're either born with or learn, in a way that seems to actually work in practice, and there doesn't seem to be any reason why that should come about naturally through the mechanisms we have in RL. So what human societies do naturally, we might have to do intentionally, and the easiest way I can see to build systems that do that well is to first figure out how humans do it and then see how we can replicate those sorts of interactions in AI systems. Would you say game theory has some kind of grand challenge, some giant goal it's working towards, or is it more a set of axioms? Is it kind of solved, or are we still working on it? I'm not too well versed in the traditional game theory community; I'm mostly self-taught and come from an AI background, or maybe a CS theory background, and I haven't interacted much with game theorists, mostly because they work in different areas than I do. So I don't know what grand challenges they're particularly looking towards, but I think in AI there are a lot of open problems we don't really know how to address. In particular, a lot of work on multi-agent interaction comes from the idea that we'll make an agent that computes a Nash equilibrium, put that agent in an environment with a human, and that should just work. This works well for games like Go and chess and poker, which are zero-sum, where solving for the Nash equilibrium gives you a policy that performs well by human standards. But in many other games the Nash equilibria don't correspond to the sorts of behaviors we actually want out of our systems. For instance, if we were trying to make a poker system that was even better than the Nash equilibrium poker systems, we could make one that actually tries to read the human, to see whether they're bluffing, and bases its strategy on that. A Nash equilibrium system wouldn't do that sort of thing; it would just behave in a way that isn't exploitable and, over time, get reward by being very consistent about that. But humans don't play poker optimally, so a system could do even better than the Nash equilibrium solution by exploiting the fact that humans are bad at bluffing, trying to read whether they're bluffing, and adjusting its strategy accordingly. So let's talk about your new paper ARCTIC, that's "Accumulating Risk Capital Through Investing in Cooperation," Roman et al., 2021. Can you give us the gist of this paper? Yeah, this is joint work with Charlotte Roman and myself.
The goal of this paper was to train agents that would be suitable to deploy into sequential social dilemmas in a zero-shot way, with the ability to cooperate while maintaining safety, so that you aren't going to be exploited too much by other agents. What we noticed is that there's a fundamental trade-off between cooperation and safety: whenever you cooperate, you risk being defected against, which means you take on some amount of safety risk. But what we show in this paper is that the trade-off isn't really that severe: by taking a very small amount of risk in trying to cooperate, you can get huge returns from the other agent cooperating back with you, at a very low cost to safety. Prior work in this direction showed that epsilon-safe agents can risk no more than what they have won in expectation. So if you're trying to be epsilon-safe, you can risk epsilon on the first step, and then whenever you have won anything above your worst-case baseline reward, you can risk that amount as well without actually losing any of your safety. We call what the agent is willing to risk its risk capital, and everything the agent wins in expectation is added to this pool of risk capital. Our agent invests this risk capital in cooperation every turn, cooperating only in proportion to how much risk capital it has actually accumulated, which maintains its safety so that it doesn't end up risking more over time. And if it's with an agent that reciprocates this cooperation, the probability that they reciprocate is proportional to the probability that we cooperate, which leads to a proportional return on our investment over time, and that results in an exponential increase in cooperation. So we call the method Accumulating Risk Capital Through Investing in Cooperation, because the idea is that you invest your risk capital in cooperation and it gives you these exponential returns. This is a different conclusion than you would reach if you analyzed it in the equilibrium frame we were talking about before, where, if you're just trying to find a policy that's an equilibrium, you would always defect and never risk cooperating, because cooperating only ever hurts you in equilibrium. In this paper we're trying to reveal how extreme the trade-off really is: if you move even an epsilon amount outside of the equilibrium, the risk you take on for doing that is returned to you as an exponentially growing reward, through your opponent deciding to cooperate more and more over time. So this idea of trying to be one hundred percent rational and stay exactly at the Nash equilibrium, which means defecting whenever you're in these prisoner's-dilemma-like settings, really hurts you a lot more than you would expect. So is there an aspect of tit-for-tat in ARCTIC, in terms of responding to a defection? Yeah, there's definitely a way in which it's similar to tit-for-tat. Early on it will be cautious about cooperating and won't cooperate that much. If you start cooperating with it, it will at some point start cooperating all the time, and at that point it starts behaving the way tit-for-tat does from the beginning.
But if it is defected against, then its risk capital starts going down; it has accumulated some harm, so it's less likely to take risks in the future, and thus it will defect. So in the long run it behaves a bit like tit-for-tat, and it has the same sort of incentive structure: if you know you're playing against an ARCTIC agent, you want to cooperate, because that will make the agent cooperate with you more in the long run. So what was the state of this area before the paper? We combined two different threads. One was safe policies in multi-agent learning; this was work by Ganzfried et al., who showed that a safe policy can risk what it has won in expectation. Our observation is that when you combine this with the dynamics of a sequential social dilemma, you get an exponential increase in what you can risk, because cooperation lets you win a significant amount of reward in expectation, and thus you can invest more and more over time. As for the sequential social dilemma literature, the prisoner's dilemma is a really old idea and a really broad field, so there's a lot of related work there. More specifically, in multi-agent learning and trying to make agents that cooperate well in sequential social dilemmas, the work that comes to mind is the work coming out of Joel Leibo's lab, particularly the social influence as intrinsic motivation work that I think Natasha Jaques talked to you about at some point, and there's a lot of other work in that area that we mention in the paper. For me, and for other people who might not be expert at reading papers like this, I wonder if you could step through the main sections of the paper and give us a one-liner about what is happening in each section and how it builds over the course of the paper. Would that be okay? Yeah, the paper is structured around these two extremes. In one section, we define what we mean by safety. Safety is about being robust to the worst case your opponent can throw at you: you have some baseline reward you can guarantee regardless of what your opponent does, and being safe, or approximately safe, means maintaining that level of reward, or approximately maintaining it. In the next section, we talk about the other extreme of the trade-off, which is cooperation-inducing beliefs. Many of the natural things you'd want to do in sequential social dilemmas, the things humans would find natural, are behaviors that promote cooperation in their opponent, things like cooperating only if the other player cooperates, or tit-for-tat. If you think your opponent is plausibly going to behave in one of these ways, we call those cooperation-inducing beliefs. So we point out this trade-off between, on one hand, trying to be safe against worst-case opponents, and on the other hand, trying to perform well against opponents that are structured to promote cooperation. And in the third section, we talk about how this trade-off works in practice and characterize the tension between these two ideas.
And the core of that section is the proof characterizing how bad this trade-off actually is. We assume there is some amount of epsilon risk we're willing to tolerate, and we show that, given that epsilon of risk, the amount of reward we can achieve against cooperation-promoting beliefs grows exponentially in it, until we hit the cap where both players cooperate all the time, at which point we just cooperate forever. What this shows is that the tension between these two ideas isn't actually that strong. The rest of the paper grounds that proof in an actual algorithm that behaves this way, and runs experiments to see how the algorithm performs in practice, both against itself and against some other natural agents. Yes, speaking of which, can you help us understand how it plays against itself and against other common agent types, like tit-for-tat, always-defect, or always-cooperate? If it's against an agent that always cooperates, then it accumulates risk capital very quickly, because it's beating its baseline in expectation basically every turn, so it will very quickly cooperate every turn. If it's against somebody who always defects, then on the first turn it will spend all of its risk capital, the defecting agent won't give any of it back, and so it will never invest any more, and they'll end up in defect-defect. If it's against itself, then it will risk a little bit of its capital at the beginning, this epsilon amount that it starts with, and the other copy will register that as beating its own baseline, because it was cooperated with when it was expecting to be defected against. So that agent is more likely to cooperate with the first agent on the next turn. This creates a feedback loop between the two agents where both become more and more cooperative over time, until eventually both are fully cooperating and they cooperate for the rest of time. So it sounds like the golden rule of ARCTIC is something like: be nice to others unless they're not nice to you too often, or something. How would you put it? In terms of the golden rule, it's a little more conditional; it's a conditional golden rule: at least try to be nice to others, and if they respond by being nice to you, then keep going. Cool, I like that as a rule for life. I guess a lot of people interpret game theory through a very zero-sum lens. A lot of people look at game theory and their main takeaway is that you should always defect in the prisoner's dilemma, that you should ruthlessly follow your own goals and not care too much about what other people are doing or how they're doing. This is sort of trying to push back against that: a lot of the work in game theory actually shows that, even though cooperation maybe isn't the first thing the theory points you to, it's justified in a lot of scenarios, and agents that are more cooperative tend to perform better, for selfish reasons, in the long term.
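The mechanics described here can be made concrete with a small toy. The sketch below is not the paper's algorithm: the capital update rule and the floor are guesses made for the sake of a short, runnable example, and the floor trades away the strict cumulative safety bound the real method maintains. It does, though, reproduce the qualitative behaviors described against always-cooperate, always-defect, and self-play in an iterated prisoner's dilemma.

```python
import random

# Iterated prisoner's dilemma, row-player payoffs:
#   (C,C)=3, (C,D)=0, (D,C)=4, (D,D)=1.
# Mutual defection is the stage-game Nash equilibrium; always-defect
# guarantees 1 per round, which we treat as the safe baseline.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 4, ("D", "D"): 1}
BASELINE = 1.0


class RiskCapitalAgent:
    """Toy 'accumulate risk capital, invest it in cooperation' agent.
    NOT the paper's algorithm: the update and floor are simplifications;
    the floor means it keeps probing at rate epsilon against a pure defector
    instead of settling into pure defection as the real method would."""

    def __init__(self, epsilon=0.05):
        self.epsilon = epsilon   # small standing willingness to take risk
        self.capital = epsilon   # accumulated risk capital

    def act(self):
        # Invest capital: cooperate with probability proportional to capital.
        return "C" if random.random() < min(1.0, self.capital) else "D"

    def update(self, my_reward):
        # Winnings above the safe baseline add to capital; losses spend it.
        self.capital = max(self.epsilon, self.capital + (my_reward - BASELINE))


class AlwaysCooperate:
    def act(self): return "C"
    def update(self, my_reward): pass


class AlwaysDefect:
    def act(self): return "D"
    def update(self, my_reward): pass


def play(make_a, make_b, rounds=200):
    a, b = make_a(), make_b()
    total = 0.0
    for _ in range(rounds):
        act_a, act_b = a.act(), b.act()
        a.update(PAYOFF[(act_a, act_b)])
        b.update(PAYOFF[(act_b, act_a)])
        total += PAYOFF[(act_a, act_b)]
    return total / rounds  # average per-round reward of agent a


if __name__ == "__main__":
    random.seed(0)
    print("vs always-cooperate:", play(RiskCapitalAgent, AlwaysCooperate))
    print("vs always-defect:   ", play(RiskCapitalAgent, AlwaysDefect))
    print("vs itself:          ", play(RiskCapitalAgent, RiskCapitalAgent))
```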
And so I guess this adds to the stack of papers that have been motivating selfish people to cooperate out of their own self-interest, which I think is a good path towards a better world. Nice. Okay, can you tell us how you evaluated it? What were the evaluation environments like? We started by evaluating in a few matrix game worlds. We evaluated two versions of the ARCTIC algorithm: one where everything is computed exactly, so it's a closed-form solution not really using any RL, and that robustly did what you would imagine, the behavior I just described, against all the opponents. Then, still in the matrix games, we tried doing the same thing with RL and found that it was able to learn the policy fairly well, such that it ended up cooperating with cooperative agents and defecting against defecting agents, but it was a bit unstable against itself, and we left scaling it up to more environments to future work. So I think where this is at now: the theory is pretty solid, and it seems fairly clear that this is a principle you can see applying to most cooperative domains regardless of how high-dimensional they are, but there's still some work needed to make the approach stable when you apply RL to it, and I think there's a lot of interesting work in figuring out how to scale it up in a stable way. So if that were done, could something like ARCTIC be used, for example, for Leibo-style sequential social dilemma games? Yeah, I'm hoping we can get it working for those. We were trying that out a bit; I'm not sure if we can continue with it just because of other commitments, but I don't see any reason why this couldn't be applied directly to those settings. We're also hoping it applies to real-world settings, for instance self-driving cars facing a sort of prisoner's dilemma when one of them decides whether or not to cut another one off: you could cut it off and save a little bit of time, but in doing so you take some risk of causing a traffic jam, which will slow down basically everybody. So yeah, the hope is that you could use these sorts of techniques in real-world domains, where we're trying to release AI in the presence of other humans or other AIs, and get more cooperative dynamics out of that. Okay, let's move on to your next paper, PAIRED, that was "Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design," Dennis et al., 2020. I really enjoyed your virtual poster session at NeurIPS 2020 for this, and for the audience, that's actually how we first met. When I saw it, right away I thought: wow, this brings together so many interesting things in such an elegant way, so it really got my attention. Thanks. You're welcome, and thank you so much again for being here, it's awesome. So can you tell us about this paper, what is going on here? First of all, this paper is joint work with my co-first-author Natasha Jaques and with Eugene Vinitsky, who were both really vital for getting it off the ground. The goal here is to automatically generate environments you can train RL agents in, both for the purpose of providing a curriculum and for the purpose of promoting transfer to other environments.
This is a very general framework, but as a running example we'll consider a maze environment where the agent must navigate to a goal by getting around blocks placed in the environment. The natural approach would be to sample random environments by placing blocks randomly, but we find this doesn't work very well: an agent trained this way has a hard time finding its way out of a room, and the intuition is that the agent has probably never seen a wall before, so it doesn't really know how to behave when it sees any sort of structure. So we want a way to generate complex, structured environments, and for me this brings to mind the idea of self-play, which has been successful in chess and Go at generating really complex, structured ways of moving pieces. In this setting we tried adversarial training, but that leads to an adversary that generates mazes which are just completely unsolvable, so that doesn't solve the problem either. So we were trying to find a way to motivate an adversary to generate difficult but solvable environments, and we found we could do this by adding another agent, which we call the antagonist, which is also trying to solve the generated environments. For clarity, we call the original agent we're trying to train the protagonist. The adversary, which generates the environments, is trying to generate environments the antagonist performs well in and the protagonist doesn't perform well in: the adversary gets the antagonist's reward minus the protagonist's reward. With this structure, the adversary is motivated to make environments the antagonist can solve, so that they're actually solvable, but it's also motivated to make them hard enough that the protagonist can't solve them. The adversary is also motivated to generate the simplest environments the protagonist can't solve, since the antagonist solves those faster and gets more reward. So as the protagonist gets better and better, this results in a natural curriculum of increasing complexity over time. It also promotes transfer, because the adversary is motivated to find the environments where the protagonist performs most poorly. So this idea of protagonist and antagonist, is this a new dichotomy you came up with for this work, and where did the idea come from? I guess originally I got the idea by trying to come up with an architecture that optimizes minimax regret, which is a solution concept from the decision theory literature, and the idea of having the antagonist solve the environment can be read almost directly out of the definition of minimax regret. And then Sergey came up with the naming convention, which definitely made it easier to communicate about. Can you tell us more about how the environment is generated? What does the action space of the adversary look like, what is it doing? The adversary is an LSTM which initially gets a random input and places a block each turn, and on subsequent moves it sees all of the previously placed blocks when it decides where to place the next one. If you want to see this in action, there are videos of the generation process and of agents solving the mazes on our YouTube channel. But we found there were definitely some tricks to getting this to work right; there were some architectures we tried that didn't quite work.
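The objective described here, an adversary rewarded with the antagonist's return minus the protagonist's return, can be illustrated with a toy far smaller than the maze setting. In the sketch below, the "environment" is just a goal cell in a 1-D world, all three players are softmax policies trained with plain REINFORCE, and there is no LSTM or PPO; it is only meant to show the structure of the three-player game under those simplifying assumptions, and in such a tiny setting the dynamics can cycle rather than converge cleanly.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 10    # number of possible goal positions / guesses
LR = 0.1  # learning rate shared by all three sets of logits

adversary_logits = np.zeros(N)    # which goal to propose (the "environment")
protagonist_logits = np.zeros(N)  # the agent we actually care about
antagonist_logits = np.zeros(N)   # the helper agent used to estimate regret


def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()


def sample(logits):
    p = softmax(logits)
    return rng.choice(len(p), p=p)


def reinforce(logits, action, reward):
    # One-sample policy gradient: reward times grad of log softmax.
    p = softmax(logits)
    grad = -p
    grad[action] += 1.0
    logits += LR * reward * grad


for step in range(5000):
    goal = sample(adversary_logits)          # adversary designs the environment
    pro_guess = sample(protagonist_logits)   # protagonist attempts it
    ant_guess = sample(antagonist_logits)    # antagonist attempts it

    pro_reward = float(pro_guess == goal)
    ant_reward = float(ant_guess == goal)
    regret = ant_reward - pro_reward         # PAIRED-style adversary reward

    reinforce(protagonist_logits, pro_guess, pro_reward)
    reinforce(antagonist_logits, ant_guess, ant_reward)
    reinforce(adversary_logits, goal, regret)

# The adversary is pushed toward goals the antagonist handles but the
# protagonist misses, which in turn pressures the protagonist to cover
# every goal; the prints let you inspect how far that has gotten.
print("protagonist policy:", np.round(softmax(protagonist_logits), 2))
print("adversary proposals:", np.round(softmax(adversary_logits), 2))
```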
And so I think there's definitely room for improvement in how to go about generating environments, and a lot of interesting work to happen there. So we touched on autocurricula back in episode one, again with Natasha Jaques, one of your co-first-authors on this paper. I gather that PAIRED is doing automatic curriculum generation; would you consider it autocurricula in the Leibo sense, or could you contrast PAIRED with that idea of autocurricula? We would definitely consider it autocurricula in the Leibo sense, and we were inspired by that paper. I think originally, when I was thinking of this architecture, this game between multiple agents, I didn't initially realize the sort of curriculum that would come out of it, and I think it was actually Natasha, with something like that paper in the back of her mind, who realized what would happen when we ran this. And then can we contrast PAIRED with something like OpenAI's Procgen? I guess Procgen is going more for generalization and maybe not so much focusing on a curriculum of increasing difficulty. Do you see them as related, or not at all? There are sort of two dimensions here. One is that PAIRED lets you get around having to describe this complex distribution of environments, whereas Procgen takes the approach of just specifying it, writing a procedure that actually generates such a distribution. So they're a bit different in that respect, because PAIRED is trying to get around the problem that Procgen is trying to solve. But PAIRED also does a bit to help generalization, and in a way Procgen doesn't, in the sense that it gets more experience from the environments our agent performs worst in; in doing that it promotes robustness more strongly than just randomly sampling a large distribution of environments, because we're focusing specifically on more worst-case environments. Cool. And can we also compare it to POET by Jeff Clune, the Paired Open-Ended Trailblazer system? How would you describe the similarities or differences between PAIRED and POET? We were also pretty inspired by POET when making this. They're actually fairly distinct; I guess the main similarity is that POET is also generating environments in which an agent can train, but the mechanism by which it makes new environments, and comes up with new agents to train in them, is more evolutionary. Jeff Clune's whole agenda is really interesting; I really recommend checking it out if you haven't. But, brushing a lot of the details under the rug, one difference is that POET is more focused on the actual worst case: to the extent it's optimizing something, the optimization for environment generation is more of a worst-case optimization, and in doing that it's motivated to generate environments which are unsolvable. To correct for that, there's a mechanism in POET which tries to target environments at a certain threshold of difficulty, with respect to how likely it is for the agent to solve them. That makes it work in their setting, so they do get the increasing complexity, and they actually get some really cool results.
And I think the need for that constant that maintains the difficulty, and tuning it well for the environment, makes POET more difficult to generalize to different environments; we had a hard time implementing something like that as a baseline in our environment. PAIRED gets away without those sorts of thresholds, so our hope is that PAIRED will be more easily applied to more environments. So if I understand this right, it seems like PAIRED is an important insight, and now that you have it, you can build these new types of curricula with this new method, and it kind of stands on its own. Is that the case, does it kind of solve the problem, or what might be left for future work here? I'm of two minds on this one. I think the observation that we can generate difficult but solvable environments with PAIRED is a fairly important step. At the same time, I'm sort of overwhelmed by the number of different extensions I can imagine for it, and a lot of them are fairly different from PAIRED. PAIRED opens the door for me to think about a lot of different architectures for using these sorts of multi-agent training regimes to produce this kind of increasing complexity. They fall into roughly two camps. One is ways of making PAIRED itself more stable; this could be different ways of parameterizing the adversary or different ways of setting up the PAIRED game. In the paper we also explored versions of symmetric PAIRED, where the antagonist and protagonist roles are swapped depending on how the agents are performing, and we tried some population variants; all of these had stable performance but somewhat different properties, and I think there's a lot that can be done in those regimes to make them more stable and make them work in more settings. Then there's another set of future work I'm also pretty excited about, where instead of solving for a minimax regret objective with these techniques, you could try to solve for more complicated, or just different, objectives. There's a lot of future work we could get into there, but suffice it to say I think there's a lot of follow-up work that can be done on PAIRED that isn't just a direct solidification of what's already there. So let's move on to another paper you co-authored, the EPIC paper, "Quantifying Differences in Reward Functions," that's Gleave et al., 2020. This seems like kind of a surprising result. It says in the abstract: a distance to quantify the difference between two reward functions directly, without training a policy. That seems magical. Can you tell us a bit about how that works? Adam Gleave is again the first author on this. The motivation comes from the fact that real-world applications don't come with a pre-specified reward function. This is again what we were talking about earlier, where in RL we often think of the reward function as coming from on high.
But in real-world applications somebody has to either write that reward function down or learn it from data, and that process can go wrong. So if we've done this multiple times, if a few people have written down their guesses or we've learned a few different reward functions, it would be good to check whether they're actually even training for the same thing, or for similar things. If you get reward functions from a bunch of different sources and they say different things, that's an indication you should go back to the drawing board and figure out what you actually want out of the situation. But the way of comparing reward functions right now is basically to check how a trained policy behaves in some test environment, and the fact that two reward functions induce similar behavior in one environment doesn't necessarily transfer to other environments. For an example of this: suppose we have one reward function that rewards a car for staying near the speed limit, and another that gives a high reward for getting to your destination and some penalty for crashing. These two could give very similar behaviors on a dry road in the middle of summer, where they both maintain the same speed for the whole trip, but very different behaviors in winter, when the first reward function would just maintain the speed limit and probably crash, while the second would be more careful. So in order to suss out these differences, and allow the developers to go back and figure out what they actually mean, we want to be able to compare the reward functions directly. The natural way of doing this would be a correlation between the reward functions: you can think of a reward function as a vector with a number for every state, and just compare the two vectors to see how close they are. But the problem is that reward functions shouldn't be thought of as arbitrary vectors; they have structure. In particular, two reward functions shouldn't really be considered different if they're just differently shaped versions of each other, and comparing them as raw vectors would make the distance very vulnerable to any sort of reward shaping. So what we do to fix this is canonicalize the reward functions, to find a sort of representative version of each one that is immune to these reward shaping terms, and then we compare the distances between the canonicalized reward functions. And we can show that this distance metric gives a linear regret bound on transfer performance when a policy is trained on one reward function and evaluated on the other.
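A simplified tabular sketch of that flavor of comparison is below: canonicalize each reward so that adding any potential-shaping term leaves it unchanged, then take a Pearson-style distance between the canonicalized rewards. The canonicalization here uses uniform distributions over states and actions and is one reading of the idea, not a drop-in for the paper's exact definition or code.

```python
import numpy as np


def canonicalize(R, gamma=0.99):
    """Canonicalize a tabular reward R[s, a, s'] so that adding a shaping term
    gamma * phi(s') - phi(s) leaves the result unchanged. Uses uniform
    distributions over states and actions; a rough sketch, not the paper's code."""
    m = R.mean(axis=(1, 2))   # m[s] = E_{A,S'}[R(s, A, S')] under uniform sampling
    mean_all = m.mean()       # E_S[m(S)]
    return R + gamma * m[None, None, :] - m[:, None, None] - gamma * mean_all


def pearson_distance(x, y):
    rho = np.corrcoef(x.ravel(), y.ravel())[0, 1]
    return np.sqrt(max(0.0, 1.0 - rho) / 2.0)  # clamp guards tiny float error


def epic_like_distance(R_a, R_b, gamma=0.99):
    return pearson_distance(canonicalize(R_a, gamma), canonicalize(R_b, gamma))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_states, n_actions = 5, 3
    R = rng.normal(size=(n_states, n_actions, n_states))

    # A potential-shaped copy of R should be at (near) zero distance from R ...
    phi = rng.normal(size=n_states)
    shaped = R + 0.99 * phi[None, None, :] - phi[:, None, None]
    print("distance to shaped copy:     ", epic_like_distance(R, shaped))

    # ... while an unrelated reward should be far away.
    other = rng.normal(size=(n_states, n_actions, n_states))
    print("distance to unrelated reward:", epic_like_distance(R, other))
```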
So any hints on what you're up to next, Michael? Yeah, in the same vein as the PAIRED work, where PAIRED avoids the need to specify a hard distribution of mazes the way Procgen-style approaches do, I'm thinking about other ways of avoiding having to specify the hard parts of the problems we want to solve, and hopefully finding ways of doing this without the system's performance degrading. Cool. And besides what you're working on or planning to work on, can you share anything about other recent work in RL that you find really interesting? Yeah, in the vein of the things I'm thinking about working on, I've recently been thinking about how to specify problems. Two approaches that have been released recently in this area are "Consequences of Misaligned AI" by Zhuang et al., which studies the effects of leaving features out of a reward function, and "Conservative Agency" by Turner et al., which proposes a way of making systems that mitigate the unintended side effects of leaving things out of your reward function. I think both of these are good steps toward making our methods less vulnerable to misspecification of the problems we want to solve, because the problems we actually want to solve, like climate change and poverty and economic issues, are often really difficult to specify. If we make systems that are less vulnerable to misspecification of the problem specification, then I think we'll be able to apply our really good AI techniques to more important, pressing problems. Michael Dennis, this has been fantastic, super fascinating for me. It hasn't been the shortest interview we've done, and I really appreciate your patience with that. Thanks so much for sharing your time and your insight with me and our audience today. Thank you, Michael Dennis. Yeah, thanks. It's been great. Notes and links for this episode are at talkrl.com. If you like this show, I need your support. You can help in a few ways. One, subscribe on your favorite podcast platform; subscriptions make a big difference. Two, follow us on Twitter at talkrl podcast; we love retweets. Three, give us a five-star rating on Apple Podcasts. If you don't think we deserve five stars, let us know on Twitter what we could do better. TalkRL.
[ { "end": 12, "start": 0, "text": " This is TalkArail Podcast. All reinforcement learning, all the time." }, { "end": 20, "start": 12, "text": " Interviews at Brilliant folks across the world of RL. I'm your host, Rob and Chohan." }, { "end": 25, "start": 20, "text": " Michael Dennis is a PhD student at the Center for Human Compatible AI at UC Berkeley." }, { "end": 29, "start": 25, "text": " Thanks for joining us, Michael. Thanks for having me." }, { "end": 33, "start": 29, "text": " So how do you describe your area of interest?" }, { "end": 45, "start": 33, "text": " Yeah, so I'm mostly interested in robustness in RL and multi-agent RL, specifically as it applies to making the interactions between AI systems and society at large more beneficial." }, { "end": 49, "start": 45, "text": " And how did you come upon that?" }, { "end": 60, "start": 49, "text": " Yeah, so it's a bit of a long journey. So I guess in undergrad I did an internship that involved a little bit of light penetration testing." }, { "end": 67, "start": 60, "text": " And I was already interested in AI for a long time. And it got me thinking about sort of all the ways that AI systems could go wrong." }, { "end": 77, "start": 67, "text": " So for instance, you could imagine an AI being used to sort of automatically do some sort of penetration testing through like maybe automated fuzzing." }, { "end": 85, "start": 77, "text": " And this could cause like hackers to be able to sort of drastically increase the amounts of attacks they could do." }, { "end": 101, "start": 85, "text": " Or you could imagine, I guess sort of what we're saying now, where like content from like GPT 3 and like, again, and stuff like that is making it, or as risk of making it seem harder to detect what's true and false online." }, { "end": 112, "start": 101, "text": " And this concerns about this sort of drove me to thinking about like the interactions between AI and society at large more specifically." }, { "end": 130, "start": 112, "text": " And I guess to me the multi-agent interaction is sort of the core of all these problems. It's sort of these problems arise from how AI systems and people interact and how the incentives of both of these systems are both like the AI systems and the people sort of drive this interaction into places that either we wanted to do." }, { "end": 133, "start": 130, "text": " Or we don't want that to go." }, { "end": 143, "start": 133, "text": " So I've been thinking more about the notion of like following once interests and obviously I share some of your interests because that's why I wanted really wanted to have you on the show and I'm so glad you came." }, { "end": 148, "start": 143, "text": " The idea of how our interests evolve over time and why they evolve." }, { "end": 154, "start": 148, "text": " And sometimes it feels like my curiosity is like almost like a mind of its own and I'm just along for the ride." }, { "end": 161, "start": 154, "text": " Like how does that work for you? Can you say anything about the process of how your interests like evolve over time?" }, { "end": 162, "start": 161, "text": " You ever think about that?" }, { "end": 168, "start": 162, "text": " I guess initially I was trying to figure out more how I wanted my work to impact the world." }, { "end": 177, "start": 168, "text": " But more recently I now that I've been like sort of focused more on like multi-agent and AI sort of stuff." 
}, { "end": 193, "start": 177, "text": " I find that my research is more driven by attempts to resolve my own confusion and that I usually focus more on that than trying to figure out whether something is directly publishable." }, { "end": 205, "start": 193, "text": " Because I guess yeah, I think I just find being confused fairly annoying and I find that this is a pretty good heuristic for me where if I can become less confused myself." }, { "end": 210, "start": 205, "text": " It feels sort of like the first step towards clarifying the issues I care about for the rest of the field." }, { "end": 218, "start": 210, "text": " So can you tell us a bit about the center for human compatible AI and how do you interpret human compatible AI?" }, { "end": 232, "start": 218, "text": " Yeah, so the center for human compatible AI is a set of groups who work on research trying to make AI I guess more human compatible." }, { "end": 249, "start": 232, "text": " And I guess I interpret this fairly broadly to mean that I want to make systems that make the interaction between AI systems and society more likely to be beneficial." }, { "end": 258, "start": 249, "text": " This could be anything from not increasing the prevalence of misinformation to ensuring that your personal assistant does what you actually wanted to do." }, { "end": 268, "start": 258, "text": " Okay, so let's segue to your co-author paper adversarial policies attacking deeper and forcing learning by Gleeve et al. 2019." }, { "end": 273, "start": 268, "text": " That seems emblematic of the kind of topics that try focus on if that makes sense." }, { "end": 275, "start": 273, "text": " Can you tell us a bit about that paper?" }, { "end": 278, "start": 275, "text": " Yeah, so that was a really fun paper." }, { "end": 293, "start": 278, "text": " So adversarial policies sort of a great example of how tries research get done in practice. So before adversarial policies be new adversarial attacks to our systems existed through like interventions on the observations." }, { "end": 296, "start": 293, "text": " So you would add some sort of adversarial noise to the pixel observations." }, { "end": 300, "start": 296, "text": " And this would get like an agent playing in pong to like miss the ball." }, { "end": 312, "start": 300, "text": " And from a security perspective, this is a bit of an unrealistic attack model for real world systems like if an attacker can change the pixels of your observations, it probably has root access to your hardware." }, { "end": 316, "start": 312, "text": " So you probably already like lost the attack defense game." }, { "end": 326, "start": 316, "text": " But even from a robustness angle, it's sort of unclear whether we could expect to make our agents robust to these sorts of like physically impossible inputs." }, { "end": 337, "start": 326, "text": " So adversarial policies was hoping to find more realistic and physically possible attacks through learning the policies of like other agents that are also in the environment." }, { "end": 343, "start": 337, "text": " So we find that these policies can successfully degrade the performance of some target policy." }, { "end": 347, "start": 343, "text": " And they do sort of a way that humans would have been robust to." }, { "end": 356, "start": 347, "text": " So we focused on like some agents, for instance, in like a soccer game where one agent is trying to score a goal by like kicking a ball into a net." 
}, { "end": 360, "start": 356, "text": " And the other agents trying to like as a goal and trying to block the goal." }, { "end": 374, "start": 360, "text": " And so we trained a policy for the goalie just using like off the shelf RL and found that it could trick the kicker into like forgetting how to kick the ball." }, { "end": 380, "start": 374, "text": " By just sort of like squirming on the ground in a way that most humans would just like ignore." }, { "end": 384, "start": 380, "text": " So yeah, we found that like this." }, { "end": 387, "start": 384, "text": " The policies that come." }, { "end": 392, "start": 387, "text": " Came out of it like looked particularly like ill fit for." }, { "end": 398, "start": 392, "text": " For the environment yet still perform really well against the like RL policies." }, { "end": 403, "start": 398, "text": " Yeah, I really enjoyed the videos with this one. There's some cool videos listeners may want to check out." }, { "end": 412, "start": 403, "text": " And it seems to me like you could defeat this current day AI with just you know just finding the right dance move." }, { "end": 415, "start": 412, "text": " And they're just so surprised they just fall over." }, { "end": 419, "start": 415, "text": " Yeah, I definitely recommend checking out the videos that they're super fun to watch." }, { "end": 425, "start": 419, "text": " It really feels like the target is trying to do the right thing and like sort of knows what's what it's getting at." }, { "end": 427, "start": 425, "text": " But like it ends up tripping over itself." }, { "end": 432, "start": 427, "text": " Actually one of my favorite interactions from this thing is I think last year at NRIPS." }, { "end": 439, "start": 432, "text": " There's some panel that Michael Leppman was on where he tries to imitate what the agents are doing." }, { "end": 444, "start": 439, "text": " Yeah, I think anything out of Michael Leppman is entertained to watch." }, { "end": 446, "start": 444, "text": " I totally agree." }, { "end": 449, "start": 446, "text": " And he was our second guest on the show and his episodes amazing." }, { "end": 451, "start": 449, "text": " And but I think I missed that." }, { "end": 453, "start": 451, "text": " I'm seeing him do that to dance on stage." }, { "end": 455, "start": 453, "text": " So I'll look for that." }, { "end": 457, "start": 455, "text": " If we can move on to Game Theory." }, { "end": 459, "start": 457, "text": " So." }, { "end": 464, "start": 459, "text": " I guess like a lot of people like my first intro to Game Theory was hearing about prisoners dilemma." }, { "end": 468, "start": 464, "text": " Maybe in high school and then in the Axelrod tournament." }, { "end": 473, "start": 468, "text": " And when I first encountered it seemed like a really simple and I guess now I think deceptively simple." }, { "end": 477, "start": 473, "text": " And so I couldn't really see how it could be even use or practical use." }, { "end": 482, "start": 477, "text": " And then and then later on I encountered it in more complex settings and like deep minds work and" }, { "end": 489, "start": 482, "text": " the office star Natasha Jakes social influence paper was the first episode on the show." }, { "end": 492, "start": 489, "text": " And I got I started to get a new appreciation for the whole concept." }, { "end": 495, "start": 492, "text": " But but you can see my am I exposure has been pretty shallow." 
}, { "end": 498, "start": 495, "text": " But my sense is there's like it's a pretty deep universe in in Game Theory." }, { "end": 500, "start": 498, "text": " Is that fair to say?" }, { "end": 508, "start": 500, "text": " Yeah, I think that Game Theory is sort of getting at something pretty fundamental to." }, { "end": 512, "start": 508, "text": " I guess how." }, { "end": 515, "start": 512, "text": " How the world works." }, { "end": 518, "start": 515, "text": " Like games are everywhere right?" }, { "end": 528, "start": 518, "text": " Like our society is really just a bunch of a bunch of agents playing a bunch of different different games that we sort of collectively agreed are." }, { "end": 531, "start": 528, "text": " Are the way that we want to operate our society." }, { "end": 548, "start": 531, "text": " And I guess there's a lot of interesting things you can learn about sort of what what behaviors those games motivate and what sort of behaviors can exist sort of in equilibrium when we have the game set up sort of the way that we do." }, { "end": 558, "start": 548, "text": " So can you give us a like a hints about the structure of Game Theory like what kinds of major topics and common applications come up in Game Theory." }, { "end": 563, "start": 558, "text": " Yeah, so it's a really wide field." }, { "end": 581, "start": 563, "text": " Yeah, so I guess the core intuitions behind Game Theory sort of started like with Von Neumann and Nash back with like with I guess the Nash equilibrium is the thing that most people point to." }, { "end": 600, "start": 581, "text": " And that's sort of one of the like most productive concepts that we have gotten for understanding how like social science and like economics and these sorts of like fields really really work." }, { "end": 610, "start": 600, "text": " It's like a really good predictive model for what how what sort of behavior you should expect out of multi agent interactions." }, { "end": 615, "start": 610, "text": " But I guess from AI we're sort of coming from a bit of a different perspective." }, { "end": 628, "start": 615, "text": " We don't as much what predictive models of multi systems that already exist, but we want to know how to build AI systems which perform well in the presence of other agents." }, { "end": 635, "start": 628, "text": " And so we sort of come from a different perspective than most of the literature and Game Theory sort of directed that." }, { "end": 645, "start": 635, "text": " And because of that, I think it's somewhat hard to directly apply the tools of Game Theory in AI without doing a little bit of a translation." }, { "end": 655, "start": 645, "text": " I guess some parts of the field that require less translation is so Joe Halpern actually does a lot of really good work at the intersection of AI and Game Theory." }, { "end": 660, "start": 655, "text": " Specifically his stuff on reason about knowledge." }, { "end": 665, "start": 660, "text": " Yeah, he is like a few textbooks written on this stuff." }, { "end": 677, "start": 665, "text": " And it sort of like takes more of the instead of like most games taking like a third person perspective of like trying to figure out what these two agents will do given that they're both rational." }, { "end": 686, "start": 677, "text": " Halpern's work more takes the first person perspective and says something more like if I have these sorts of given beliefs, what should I do?" 
}, { "end": 693, "start": 686, "text": " And then sort of derives like what a multi agent system would do coming that coming after that." }, { "end": 694, "start": 693, "text": " That makes sense." }, { "end": 698, "start": 694, "text": " So is that the area that you're most interested in in terms of Game Theory?" }, { "end": 709, "start": 698, "text": " Yeah, I guess I'm mostly interested in trying to come up with ways of thinking about multi systems in the presence of AI that aren't confusing." }, { "end": 720, "start": 709, "text": " Or aren't as confusing as the ways that I think we currently think of them. And I think Halpern's work is a good step in that direction." }, { "end": 725, "start": 720, "text": " Do you think of Game Theory as like a very practical thing or more a theoretical tool?" }, { "end": 728, "start": 725, "text": " Well, I think it depends on how you use it." }, { "end": 745, "start": 728, "text": " So I think that if you're trying to analyze a multi agent system, then trying to figure out where the Nash equilibrium are is like a very practical first step to understanding what's going to what's going to happen in the long run." }, { "end": 754, "start": 745, "text": " I think also if you're trying to just get a general understanding about how these sorts of systems work more broadly." }, { "end": 764, "start": 754, "text": " You can do a lot of good for yourself by learning a lot of this theory and trying to see how it applies to the real world." }, { "end": 774, "start": 764, "text": " And so I guess the second brand is ends up being a lot more theoretical and a lot less directly applicable." }, { "end": 792, "start": 774, "text": " But I think that the way that you ought to think about a lot of this work is less trying to less trying to like come up with particular things that you should do in particular applications and more trying to like build good intuitions about how to think about these systems." }, { "end": 804, "start": 792, "text": " So do you think like we like you mentioned how you know we're playing games all the time society and in social situations and like are those things worth modeling like." }, { "end": 809, "start": 804, "text": " Is there any hope of quantifying what we're really doing?" }, { "end": 815, "start": 809, "text": " I guess maybe that's clear in economics where you can attach dollar values to things but." }, { "end": 822, "start": 815, "text": " I guess I guess when I think of that I just think well everything so fuzzy like to people know why they're doing what they're doing." }, { "end": 826, "start": 822, "text": " Whereas these payoff matrices are so just nice and neat." }, { "end": 832, "start": 826, "text": " Yeah for sure like I think that game theory." }, { "end": 844, "start": 832, "text": " That there's an interesting thing in game theory where the better the agents get the more game theory applies to them like game theory sort of one of the base assumptions is that." }, { "end": 850, "start": 844, "text": " The agents involved are rational and they're behaving like well with respect to whatever their beliefs are." }, { "end": 862, "start": 850, "text": " And the stronger these agents gets or the more capable is the agents get that are involved the more game theory is predictive of what they're going to do." 
}, { "end": 881, "start": 862, "text": " So you should expect game theory to apply less to like individual interpersonal interactions between like two arbitrary humans and more to like behavior corporations the behavior of governments the behavior of like like really high performing people in different fields so I guess one of the." }, { "end": 901, "start": 881, "text": " So I guess you mentioned how this applies economics I think a lot of economics is built on top of game theory but I guess it also applies to other areas of society for instance politics you could analyze how like for instance the first past the post voting system that we have in a lot of countries." }, { "end": 914, "start": 901, "text": " Influence is the sorts of political parties that would develop and you can find that the sort of the sort of system that we have in the US." }, { "end": 919, "start": 914, "text": " If you analyze the equilibria seems almost inevitable that we would end up with a two party system." }, { "end": 934, "start": 919, "text": " And so it's sort of interesting how even though like I guess many of the framers didn't want us to end up in a two party system sort of the structures that they left behind sort of unintentionally made that inevitable." }, { "end": 957, "start": 934, "text": " So I think there's a lot lot to learn in terms of like how we desire institutions and how we design like incentives to make it more likely that the things that we actually want our society to do are sort of the natural outcomes of like humans following their own incentives locally." }, { "end": 960, "start": 957, "text": " So that kind of sounds like a mechanism design thing." }, { "end": 970, "start": 960, "text": " I think all of norms and like institutional design and stuff like this is all all mechanisms design." }, { "end": 979, "start": 970, "text": " So I guess if von Neumann and Nash were the founding fathers they think we might have had a slightly different outcome." }, { "end": 984, "start": 979, "text": " Yeah, I think that there's a we definitely would have had a different outcome." }, { "end": 989, "start": 984, "text": " I'm not sure if they would have forgotten something else that I'm not actually actually understood." }, { "end": 1004, "start": 989, "text": " Like I don't know I respect that I think it was really difficult to design a government at that time because that that was before we knew like like the reason that it causes a two party system wasn't really known at that time." }, { "end": 1014, "start": 1004, "text": " And so it would have been very difficult for them to even even anticipate that I guess they didn't have millions of trajectories to learn from I guess." }, { "end": 1017, "start": 1014, "text": " Yeah, they didn't have millions of trajectories to learn from." }, { "end": 1027, "start": 1017, "text": " So I wonder sometimes when these payoff matrices like where do they come from they seem a bit like handed down from on high kind of like the rewards in RL seem a little bit like that too." }, { "end": 1036, "start": 1027, "text": " Can we deduce these matrices by observing behavior is that kind of like a inverse game theory or inverse RL?" }, { "end": 1055, "start": 1036, "text": " Yeah, so I guess the problem of reward of like where rewards come from is like pretty pretty deep and there's a lot of there's a lot of like decision deer and a con philosophical work on that actually inverse game theory is a is a thing that exists." 
}, { "end": 1062, "start": 1055, "text": " I know there's a I think a paper under that title that that does some work in trying to do inverse game theory." }, { "end": 1075, "start": 1062, "text": " It seems to actually be harder than inverse reinforcement learning directly because of incentives to not be honest about your own intentions." }, { "end": 1082, "start": 1075, "text": " And so it seems to be even more difficult than we otherwise would have had it." }, { "end": 1107, "start": 1082, "text": " But yeah, I think like where where rewards come from and where yeah like like what the definition is of like what a good action is is something that's fairly fundamental like some some fundamental problem in I guess not just AI but like economics and decision theory and game theory more broadly." }, { "end": 1116, "start": 1107, "text": " I think that more work should be done in that area. But I guess it's sort of hard to figure figure out where which ways out." }, { "end": 1126, "start": 1116, "text": " So I read that a ran corporation famously use game theory to plan strategy for nuclear war and I don't know if it's causal but we haven't had a nuclear war yet." }, { "end": 1142, "start": 1126, "text": " So maybe it worked, but if we look at the biggest global problems that we face today in terms of tragedies of the comments like the ocean and carbon emissions competition for resources can game theory provide insight into solving these kind of big problems." }, { "end": 1159, "start": 1142, "text": " Yeah, so it's funny. The dimensions of the ran corporation because I've ended up reading a lot of their papers like like on on different different occasions. I found yeah like a lot of the fundamental working game theory happened in the ran corporation." }, { "end": 1177, "start": 1159, "text": " But yeah, the hope is that you can you can use game theory to sort of find find solutions to these problems. I know that game theory has has been successful in a lot of these applications but the ones that I know more about are more on the algorithmic side." }, { "end": 1189, "start": 1177, "text": " So I know that like mechanism design has been successful in like designing spectrum options for like auctioning off different parts of like the radio waves bands." }, { "end": 1201, "start": 1189, "text": " The US government was like selling like to different radio stations to determine like who could play what songs on what what bands." }, { "end": 1217, "start": 1201, "text": " And that was pretty successful. There's also auctions that happen whenever you see an ad when you search on Google. And those are usually designed through some sort of mechanism design function." }, { "end": 1235, "start": 1217, "text": " And I'm more broad societal level. I know that people who think about these sorts of things do so oftentimes in the lens of game theory. They're not sure if they end up doing so like actually trying to get to the point where they modeled exact situation in terms of an explicit game." }, { "end": 1248, "start": 1235, "text": " I think it's more that they use games as a way of coming to an understanding about like what this sorts of dynamics of the system they are interacting with how that system behaves." }, { "end": 1258, "start": 1248, "text": " And I think that they often use this intuitions to better help like the decisions they have to make." 
}, { "end": 1280, "start": 1258, "text": " Yeah, I guess from an AI perspective, we I think it's going to be very difficult to make progress in like human robot interaction or in like multi agent interaction without understanding a bit more about how game theory works." }, { "end": 1306, "start": 1280, "text": " Because I guess with humans like there's there's some sort of innate knowledge about how to interact with other humans that that we like all have and sort of I guess either born with or like learning a way that like seems to actually work in practice where there doesn't seem like there's any reason why that should come about naturally through like the mechanisms that we have an RL." }, { "end": 1330, "start": 1306, "text": " And so I guess what human society sort of do naturally we might have to do intentionally and I guess to do that well when I guess the easiest way I can tell to making systems that do that well would be to first try to figure out how humans do it and then see how we can replicate those sorts of the directions in AI systems." }, { "end": 1336, "start": 1330, "text": " Would you say game theory has some kind of grand challenge." }, { "end": 1349, "start": 1336, "text": " Here's some giant goal that we're working towards or is it is it more a set of axioms and like is it kind of solved or we're working on it still." }, { "end": 1364, "start": 1349, "text": " So I'm not to well versed in like the traditional game through tradition so I'm mostly self taught I come from an AI background or like a CS theory background maybe." }, { "end": 1373, "start": 1364, "text": " And I haven't interacted too much with game theorists mostly because they like work in other areas and I do." }, { "end": 1386, "start": 1373, "text": " So I don't know what what grand challenges there they're particularly looking towards but I think that in AI there are a lot of open problems that we don't really know how to address." }, { "end": 1402, "start": 1386, "text": " So so in particular it seems that so a lot of the multi agents interaction sort of work come I guess sort of comes from the idea that we're going to make an agent that solves like a Nash equilibrium." }, { "end": 1426, "start": 1402, "text": " And then we're going to put that agent in an environment with a human and that should just work well and this works well for games like go and chess and poker where there's sort of zero sum and solving the sort of Nash equilibrium gives you a policy that performance like well by human standards." }, { "end": 1437, "start": 1426, "text": " But in many other games the Nash equilibrium sort of don't correspond to the sorts of behaviors that we would actually want out of our systems." }, { "end": 1457, "start": 1437, "text": " For instance if if we were trying to make a poker system that was even better than the Nash equilibrium poker systems we could make ones that actually tried to read the human like read your points to see if they were bluffing and base your strategy off of that." }, { "end": 1474, "start": 1457, "text": " Now a Nash equilibrium system wouldn't do that sort of thing it would just behave in a way that is not exploitable and over time gets reward through just being very consistent about that." }, { "end": 1491, "start": 1474, "text": " But humans don't play poker optimally and so a system could do even better than the Nash equilibrium solution by using the fact that humans are bad at bluffing and trying to read whether or not they're bluffing and base the strategy accordingly." 
}, { "end": 1498, "start": 1491, "text": " So let's talk about your new paper Arctic that's accumulating risk capital through investing in cooperation 2021 Roman at all." }, { "end": 1505, "start": 1498, "text": " Can you give us the gist of this paper? Yeah so this is joint work with Charlotte Roman and myself." }, { "end": 1521, "start": 1505, "text": " The goal of this paper was to train agents that would be suitable to deploy into sequential search with lumas in sort of zero shot way with the ability to hopefully cooperate while maintaining safety so that you aren't going to be exploited too much by other agents." }, { "end": 1528, "start": 1521, "text": " And so what we notice is that there's sort of a fundamental trade off between cooperation and safety." }, { "end": 1539, "start": 1528, "text": " So whenever you cooperate you risk being defected against which like lowers the like causes you to have some amount of safety risk." }, { "end": 1558, "start": 1539, "text": " But what we show in this papers that this trade off isn't really severe and that in taking a very small amount of risk in terms of trying to cooperate you can get huge returns in the other person cooperating back with you with like a very low amount of risk to safety." }, { "end": 1571, "start": 1558, "text": " So prior work in this direction showed that epsilon safe agent agents risk no more than they have one expectation. So if you're trying to be epsilon safe." }, { "end": 1583, "start": 1571, "text": " You can risk like epsilon on the first step and then if you ever have one anything better than your baseline reward would be in like a worst case way." }, { "end": 1594, "start": 1583, "text": " Then you can risk all of that reward as well without actually losing any any of your like safety." }, { "end": 1605, "start": 1594, "text": " And so we call what the agent is willing to risk their risk capital and we say that everything in agent wins an expectation is added to this pool of risk capital." }, { "end": 1615, "start": 1605, "text": " So our agent invests this risk capital in cooperation every turn and by only cooperating proportional to how much risk capital they actually have accumulated." }, { "end": 1620, "start": 1615, "text": " This maintains their safety so that they actually don't end up risking more over time." }, { "end": 1626, "start": 1620, "text": " And so if they're with an agent that actually reciprocates this sort of cooperation." }, { "end": 1635, "start": 1626, "text": " Then the probability that they actually reciprocates proportion of the probability that we cooperate. So this leads to sort of a proportional return in our investment over time." }, { "end": 1639, "start": 1635, "text": " And this results in sort of an exponential increase in cooperation." }, { "end": 1655, "start": 1639, "text": " And so we call this method a accumulating risk capital through investing in cooperation because the idea is that you just invest your cooperation and it gives you sort of these exponential returns." }, { "end": 1664, "start": 1655, "text": " And so this is sort of a different conclusion than you would reach if you analyze this in sort of this equilibrium friend that we were talking about before." }, { "end": 1670, "start": 1664, "text": " Where in sort of just like trying to find a policy that's an equilibrium." 
}, { "end": 1683, "start": 1670, "text": " You would always you would always defect every time and you would never risk cooperating because cooperating would only ever hurt you in equilibrium." }, { "end": 1691, "start": 1683, "text": " But in this paper, we are just trying to reveal how extreme the trade off is here." }, { "end": 1701, "start": 1691, "text": " So that if you actually even move like an epsilon amount outside of the equilibrium that that risk that you've done for doing that." }, { "end": 1713, "start": 1701, "text": " That that risk is returned back to you in terms of like an exponential reward in by your opponent deciding to cooperate more and more over time." }, { "end": 1729, "start": 1713, "text": " And so really this idea of trying to like be 100% rational and like stay exactly at the Nash equilibrium, which means just like defect whenever you're in these sorts of it like prison dilemma settings really hurts you a lot more than you would expect." }, { "end": 1738, "start": 1729, "text": " So it seems like is there an aspect of tit for tat in there in our tick in terms of responding to a defection." }, { "end": 1744, "start": 1738, "text": " Yeah, so there's definitely a way that it is similar to tit for tat." }, { "end": 1751, "start": 1744, "text": " So early on it will it will like be cautious in cooperating and like not cooperate that much." }, { "end": 1760, "start": 1751, "text": " If you start cooperating with it, it will at some point start cooperating all the time. And at that point, it'll start behaving like tit for tat does at the beginning." }, { "end": 1771, "start": 1760, "text": " But if it is defected against, then it starts, it's risk capital starts going down. It's accumulated some some harm to it." }, { "end": 1779, "start": 1771, "text": " And so it's less likely to take risks in the future and thus it will defect. So in the long run, it sort of behaves a bit like tit for tat." }, { "end": 1790, "start": 1779, "text": " And it has the same sort of incentive structure where if you know that you're against an architect agent, you want to cooperate because that will make the agent in a long run cooperate with you more." }, { "end": 1795, "start": 1790, "text": " So what was the state of this area before this this paper?" }, { "end": 1804, "start": 1795, "text": " Yeah, so I guess in terms of so we sort of combined two different threads. One was safe policies and multi agent learning." }, { "end": 1818, "start": 1804, "text": " And so this was work by Genspreet at all, who showed that a safe policy will risk what they want an expectation." }, { "end": 1836, "start": 1818, "text": " And our observation is that when you combine this with like the dynamics of a sequential social dilemma, then you get the sort of exponential increase in what you can risk because cooperation allows you to" }, { "end": 1855, "start": 1836, "text": " get a significant amount of rewards and expectation and thus you can like invest more and more over time. And the sequential social elements literature, it's a really the idea of like the prison Islam is a really old idea and really broad fields." }, { "end": 1866, "start": 1855, "text": " There's a lot of related work over there. I guess more specifically in multi agent learning and trying to make agents that will cooperate well in sequential social slums." 
}, { "end": 1881, "start": 1866, "text": " The work that comes to mind is the work coming out of Joe Lieber's lab specifically, I guess the social influence and interest motivation work that I think Natasha Jakes talked to you about at some point." }, { "end": 1885, "start": 1881, "text": " And yeah, I guess there's a lot of other work in that area as well that we mentioned the paper." }, { "end": 1902, "start": 1885, "text": " To for me and other people who might not be expert at reading papers like this, I wonder if you could just maybe step through this main sections of the paper and give us a liner to about what is happening in each section and kind of how it builds over the course of the paper." }, { "end": 1904, "start": 1902, "text": " Would that be okay?" }, { "end": 1915, "start": 1904, "text": " Yeah, so the paper is sort of structured around these these two extremes. So in one section, we define what we sort of mean by safety." }, { "end": 1924, "start": 1915, "text": " So safety is like trying to be robust to the worst case that your opponent can can throw at you." }, { "end": 1944, "start": 1924, "text": " And so you have some sort of baseline reward that you can guarantee that you get regardless of what your opponent does. And being safe or being approximately safe is maintaining that you keep that level of reward or maintain that you approximately keep that level of reward." }, { "end": 1955, "start": 1944, "text": " So in the next section, we sort of like talk about the other extreme of this trade off, which is the cooperation inducing beliefs." }, { "end": 1965, "start": 1955, "text": " So you, many of the natural things that you want to do in sequential social lemma's like that humans would find natural to do." }, { "end": 1976, "start": 1965, "text": " Are behaviors that would promote cooperation in their opponent. So these are things like cooperating only of the other person cooperates or like tip for tat." }, { "end": 1988, "start": 1976, "text": " And if you think that your opponent is is plausibly going to behave in one of these ways, then we call these things cooperation inducing beliefs." }, { "end": 1999, "start": 1988, "text": " And so we sort of make this point out this trade off between on one hand trying to be safe against the worst case sorts of opponents." }, { "end": 2011, "start": 1999, "text": " And on the other hand, trying to have good performance against opponents who are trying to promote cooperation in sorts of the way in the way that they're structured." }, { "end": 2023, "start": 2011, "text": " And so in the third section, we sort of talk about how this trade off works in practice and like characterize the attention between these two ideas." }, { "end": 2031, "start": 2023, "text": " And the core of that section is the proof about trying to characterize how bad this trade off is." }, { "end": 2056, "start": 2031, "text": " So we assume that we have some amount of like epsilon risk that we're willing to tolerate. And we show that given the sort of epsilon risk that the amount of cooperation or the amount of reward that we can achieve against cooperation promoting beliefs is exponentially growing in that." }, { "end": 2066, "start": 2056, "text": " And so we can tell we hit the cap of like both players cooperating all the time in which point like we just cooperate forever." }, { "end": 2074, "start": 2066, "text": " And so what this shows is that the attention between these two these two ideas isn't actually that strong." 
}, { "end": 2088, "start": 2074, "text": " And the rest of the paper is trying to make the ground that proof out in an actual algorithm that behaves that way." }, { "end": 2098, "start": 2088, "text": " And running experiments to see how that algorithm performs in practice both against itself and against some other natural agents." }, { "end": 2112, "start": 2098, "text": " Yes, speaking of which, so can you can you help us understand how it how it plays against itself and and other common agent types like like tit for tat or are always I guess always defect always cooperating." }, { "end": 2124, "start": 2112, "text": " If it's against an agent that always cooperates, then it will accumulate risk capital very quickly because it's basically beating its baseline and expectation basically every turn." }, { "end": 2134, "start": 2124, "text": " And so it'll very quickly cooperate every turn if it's against somebody who always defects, then on the first turn, it will spend all its risk capital." }, { "end": 2142, "start": 2134, "text": " The defecting agent won't give it any of it back. And so it will never invest anymore. And so they'll end up in defect effect." }, { "end": 2166, "start": 2142, "text": " If it's against itself, then it will risk a little bit of its capital at the beginning, this epsilon amount that it starts with. And then the other version of it will sort of get that in as like more like that corresponds to the other agent sort of beating its baseline." }, { "end": 2177, "start": 2166, "text": " Because the other agent got some like was cooperated with when it was expecting to be defected against. And so now that agent is more likely to cooperate with the first agent on the next turn." }, { "end": 2190, "start": 2177, "text": " And so this sort of creates a feedback loop between the two agents where both are both become more and more cooperative over time until eventually both in cooperated and it just cooperated for the rest of time." }, { "end": 2204, "start": 2190, "text": " So it sounds like the golden rule of Arctic is something like, let me see, is it something like be nice to others unless they're not nice to you too often or something. How would you put it?" }, { "end": 2208, "start": 2204, "text": " In terms of the golden rule." }, { "end": 2227, "start": 2208, "text": " It seems like a conditional golden rule, right? It's a little more conditional. It's a conditional golden rule. At least try to be nice to others. And if if they respond by being ICU, then keep keep going." }, { "end": 2245, "start": 2227, "text": " Cool. I like that. As a, as a rule for life. So it's it's fine. I guess generally a lot of people interpret game theory and like very in a very like zero some lens." }, { "end": 2263, "start": 2245, "text": " Like a lot of people look at game theory and their main takeaways are like, yeah, you should always defect in the person's llama like you should sort of like ruthlessly follow your own goals and like not not care too much about what other people are doing or how they're how they're doing, I guess." }, { "end": 2280, "start": 2263, "text": " And I guess this is sort of trying to push back against that that a lot of the the working game theory is actually showing that though cooperation isn't like maybe the natural first thing you would go to in terms of like what the theory says." 
}, { "end": 2293, "start": 2280, "text": " It's actually justified in a lot of in a lot of scenarios and that agents that are more cooperative tend to perform better for selfish reasons in the long term. And so I guess this is." }, { "end": 2306, "start": 2293, "text": " I guess adding to the stack of of papers who have been trying to or have been motivating selfish people to cooperate out of their own self interests, which I think is like a good a good path towards a better world." }, { "end": 2312, "start": 2306, "text": " Nice. Okay, can you tell us how how you evaluate it? What do you evaluate evaluation environments like?" }, { "end": 2316, "start": 2312, "text": " Yeah, so we started by evaluating in a few." }, { "end": 2332, "start": 2316, "text": " Make sure this game worlds you so we evaluated sort of two versions of the Arctic algorithm one where everything is computed exactly so it's just like a closed form solution not really using any sort of RL." }, { "end": 2341, "start": 2332, "text": " And that sort of did robustly what you would imagine against like the same thing that I described against all the opponents." }, { "end": 2358, "start": 2341, "text": " And then still in the major games we tried doing the same thing with RL and found that it was able to learn the policy fairly well such that it ended up cooperating with cooperative agents and defecting against effective agents." }, { "end": 2364, "start": 2358, "text": " But it was a bit unstable against itself." }, { "end": 2370, "start": 2364, "text": " And we sort of left it to future work trying to scale us up into more environments." }, { "end": 2374, "start": 2370, "text": " So I think where this is at now the theory is pretty solid." }, { "end": 2386, "start": 2374, "text": " It seems that it's fairly clear that this is a principle that you can see applying to most cooperative domains regardless of like how high the mental they are." }, { "end": 2396, "start": 2386, "text": " But the there's still some work needed in terms of making the approach stable when you apply RL to it." }, { "end": 2403, "start": 2396, "text": " And I think there's a lot of interesting work in terms of trying to figure out how to how to scale this up in a scale of like a stable way." }, { "end": 2411, "start": 2403, "text": " So if that was done then could something like Arctic be used for for example for libos social lemma games?" }, { "end": 2416, "start": 2411, "text": " Yeah, so I'm hoping that we can get it working for the dose there." }, { "end": 2424, "start": 2416, "text": " We were trying that out a bit. I'm not sure if we can continue with it just because of other other commitments." }, { "end": 2430, "start": 2424, "text": " But yeah, I don't see any reason why this wouldn't be able to be applied directly to those settings." }, { "end": 2439, "start": 2430, "text": " We're hoping that it also applies to real world settings like for instance self-driving cars trying to determine like" }, { "end": 2449, "start": 2439, "text": " a sort of a preaches lemma win one of them decides whether or not to cut another one off like you could cut it off and like save a little bit of time." }, { "end": 2458, "start": 2449, "text": " But in doing so it has some risk of causing a traffic jam which is going to like slow down basically everybody." 
}, { "end": 2472, "start": 2458, "text": " So yeah, I was hoping that you could use these sorts of techniques sort of in the real world domain to where we're trying to release AI either in the presence of other humans or in the presence of other AI and make sort of more cooperative dynamics out of that." }, { "end": 2481, "start": 2472, "text": " Okay, let's move on to your your next paper paired that was emergent complexity and zero shot transfer via unsupervised environment design Dennis at all." }, { "end": 2493, "start": 2481, "text": " 2020 so I really enjoyed your virtual poster session at nureps 2020 for this and as for the audience that's actually how we first met." }, { "end": 2503, "start": 2493, "text": " And when I saw that I was like right away I was like wow this brings together so many interesting things and in such an elegant way so this is really got my attention." }, { "end": 2513, "start": 2503, "text": " Thanks so this is yeah you're welcome and thank you so much again for being here is awesome so can you tell us about this paper what is going on here." }, { "end": 2524, "start": 2513, "text": " Yeah so so first of all this paper is joint work with my co first author Natasha jakes and you Jim Niskey who are both really vital for getting this off the ground." }, { "end": 2537, "start": 2524, "text": " The goal here is to automatically generate environments which you can train our all agents in both for the purposes of providing a curricula and for purposes of promoting transfer to other environments." }, { "end": 2551, "start": 2537, "text": " So this is a very general framework but as a running example we'll just consider a maze environment where the agent must navigate the goal by navigating around blocks placed in the environment." }, { "end": 2563, "start": 2551, "text": " The natural approach to this would be to to sample a random environments by placing blocks randomly but we find this doesn't work very well an agent trained in this way will have a hard time finding its way out of a room." }, { "end": 2572, "start": 2563, "text": " And the intuition here is that this agent has probably never seen a wall before and so it doesn't really know how to behave when it sees any sort of structure." }, { "end": 2587, "start": 2572, "text": " So we sort of want to weigh to generate complex structure environments and so for me this brings to mind the idea of self play which is successful in chess and go for generating really complex structured ways of moving pieces." }, { "end": 2599, "start": 2587, "text": " And so in this setting we tried adversarial training but this leads to an adversary that generates maces which are just completely unsolvable and so that doesn't really solve a problem either." }, { "end": 2613, "start": 2599, "text": " And so we trying to find a way to motivate an adversary to generate difficult solvable environments and we found that we could do this by adding another agent which we call the antagonist which is also trying to solve the general environments." }, { "end": 2627, "start": 2613, "text": " So for clarity we'll call the original agent that we're trying to train the protagnist then the adversary which is generating the environments is trying to generate environments that the antagonist performs well in and the protagonist doesn't perform well in." }, { "end": 2632, "start": 2627, "text": " So the adversary gets the antagonist reward minus the protagonist reward." 
}, { "end": 2646, "start": 2632, "text": " And so in this structure the adversary is then motivated to make environments and the antagonist solves so that they are actually solvable but it's also motivated to make them hard enough that the protagonist doesn't solve them." }, { "end": 2657, "start": 2646, "text": " And so the adversary is motivated to generate also the simplest environments that the protagonist can't solve since the adversary would solve them faster and get more reward." }, { "end": 2666, "start": 2657, "text": " So as the protagonist then gets better and better through this this sort of results in a natural curriculum of increasing complexity over time." }, { "end": 2673, "start": 2666, "text": " And it also promotes transfer because the adversaries motivated to find environments for the protagonist would perform the most poorly." }, { "end": 2683, "start": 2673, "text": " So this idea of protagonist and antagonist is this a new dichotomy that you came up with for this work and where did this idea come from?" }, { "end": 2696, "start": 2683, "text": " Yeah so I guess originally I got the idea through trying to come up the architecture to optimize minimax regret which is a solution concept from the decisions of the recommend slow to chair." }, { "end": 2703, "start": 2696, "text": " And so the idea of having to install the environment can sort of directly be read out of the definition of minimax regret." }, { "end": 2708, "start": 2703, "text": " And then I guess Sergei came up with the name convention which definitely hit it easier to communicate about." }, { "end": 2716, "start": 2708, "text": " Can you tell us more about how the environment is generated like what is the action space of the adversary look like? What is it doing?" }, { "end": 2734, "start": 2716, "text": " Yeah so the the adversary is is an LSTM which initially gets a random input and places the block on each turn and on subsequent moves it sees all of the previous block that is placed when it decides to place another one." }, { "end": 2742, "start": 2734, "text": " And so if you want to see this in action there's some videos of this generation process and solving the meses on our YouTube channel." }, { "end": 2749, "start": 2742, "text": " But yeah we found that there were definitely some tricks to get this to work right there were some architectures we tried they didn't quite work." }, { "end": 2758, "start": 2749, "text": " And so I think there's like definitely room for improvement in terms of how how to go about generating environments and I think there's a lot of interesting work to happen there." }, { "end": 2767, "start": 2758, "text": " So we touched on auto curricula back in episode one again with Natasha jakes one of your first authors on this paper." }, { "end": 2781, "start": 2767, "text": " I gather that pair is doing automatic curricula generation would you consider it auto curricula in the in the libosense or could you can craft a contrast paired with that idea of auto curricula." }, { "end": 2793, "start": 2781, "text": " Yeah well we definitely would consider it to be an auto curricula in like the libosense and yeah we were we were been inspired by that paper actually we." }, { "end": 2800, "start": 2793, "text": " I think originally when I was thinking of this architecture of like this game between multiple agents." 
}, { "end": 2816, "start": 2800, "text": " I didn't initially realize the sort of curricula that would come out of it and I think it was actually Natasha after like having something like this paper in the back of her mind that realized sort of like what can happen when we run this." }, { "end": 2831, "start": 2816, "text": " And then can we contrast paired with something like open AI proxgen I guess proxgen is going more for some more for generalization and maybe not so much like focusing on the curriculum of increasing difficulty." }, { "end": 2835, "start": 2831, "text": " Do you see them as related or or totally not." }, { "end": 2847, "start": 2835, "text": " Yeah so I guess there's sort of two dimensions here one is that that paired sort of lets you get around having to describe this complex distribution of environments." }, { "end": 2858, "start": 2847, "text": " And so proxgen is sort of taking the approach of just specifying this right like making a procedure that will actually generate such a distribution." }, { "end": 2867, "start": 2858, "text": " So they're a bit different in that respect because paired is sort of trying to get around the problem that's proxgen is trying to solve." }, { "end": 2887, "start": 2867, "text": " I guess they paired also does a bit to help generalization and I guess sort of in a way that proxgen isn't in the sense that it tries to get more experiments experience from environments that's." }, { "end": 2905, "start": 2887, "text": " Our agent performs less well in right and so in doing that it promotes some sort of robustness more more strongly than just randomly sampling a large distribution of environments because we're focusing on like specifically more worst case environments." }, { "end": 2921, "start": 2905, "text": " Cool and then can we also compare it to like poet by Jeff Klun that was the paired open ended treblaser system. How would you describe like similarities or differences between paired and poet." }, { "end": 2926, "start": 2921, "text": " Yeah so we were also pretty inspired by poet when making this." }, { "end": 2937, "start": 2926, "text": " So they're actually fairly distinct I guess the main similarity is that poets is also generating environments in which an agent can train." }, { "end": 2949, "start": 2937, "text": " But the mechanism by which it makes new environments and like comes over with new agents to train in them is like a bit more evolutionary." }, { "end": 2957, "start": 2949, "text": " I mean like Jeff Klun's whole agenda is like really interesting I really recommend checking it out if you haven't." }, { "end": 2978, "start": 2957, "text": " But yeah it's I guess brushing a lot of the details under the rug I guess one one difference is that poet is more focused on actual worst case." }, { "end": 2988, "start": 2978, "text": " Like if it's optimizing for something the optimization for the environment generation is more of a worst case sort of optimization." }, { "end": 2994, "start": 2988, "text": " And in doing that is sort of is motivates to generate environments which are unsolvable." }, { "end": 3010, "start": 2994, "text": " And sort of to correct for that there's a thing in in poet which tries to target environments to like be at a certain threshold of difficultness with respect to how likely it is for the the previsation to solve it." 
}, { "end": 3027, "start": 3010, "text": " And so this makes it works in their setting so that they do get the sort of increasing complexity and they actually get some really cool results." }, { "end": 3046, "start": 3027, "text": " And I think that like the need to have this sort of constant that like maintains like the difficulty right and like tuning that well for the environment makes poet like more difficult to generalize to different environments." }, { "end": 3059, "start": 3046, "text": " And it's hard time implementing something that was sort of like that baseline in our environment. Whereas paired sort of gets away without these sorts of these sorts of thresholds." }, { "end": 3066, "start": 3059, "text": " So I guess I'm our hope is that paired is going to be able to be more easily applied to more environments." }, { "end": 3077, "start": 3066, "text": " So if I understand this right it seems like paired is sort of like an important insight that now that you have it you can do these you can do these make these new types of curricula with this new method." }, { "end": 3086, "start": 3077, "text": " And it kind of stands on its own is that is that the case it kind of solves the problem or what might be left for future work here." }, { "end": 3103, "start": 3086, "text": " Yeah, so I guess I'm two minds at this one here. I think that the observation that we can generate difficult but solveable environments with paired is like a fairly like important step." }, { "end": 3114, "start": 3103, "text": " At the same time I am sort of overwhelmed by the number of different like extensions I can imagine for this and a lot of them are like fairly different from paired." }, { "end": 3128, "start": 3114, "text": " I think that paired I guess sort of opens the door for me to think about a lot of different architectures for using these sorts of like multi agent training regimes to come up with these sorts of increasing complexities." }, { "end": 3157, "start": 3128, "text": " I guess they sort of fall into two camps one is like ways of making paired itself more stable so this could be like different ways of parameterizing the adversary or different different ways of setting up the paired game so I guess in the paper we we also explored versions of symmetric paired where the adversary or the antagonist and protagonist roles are swapped depending on how agents are performing." }, { "end": 3182, "start": 3157, "text": " We also tried some population variants and all these had like stable performance but like a bit different properties and I think there's a lot that can be done in those regimes making them more stable and making like coming up with ways of of yeah I guess just making them more stable and working more settings." }, { "end": 3203, "start": 3182, "text": " I guess there's this other set of future works that I am pretty also pretty excited about where instead of trying to solve for like a min and max regret so objective like those techniques are you could try to solve for like more complicated things or just different things I think that's." 
}, { "end": 3232, "start": 3203, "text": " Yeah I guess there's a lot of different future works we could get into there but yeah I guess it's a fact is to say that I think there's a lot of follow up work that that can be done in paired that isn't just like a direct like a direct solidification of what is already there so let's move on to another paper you co-authored the epic paper quantifying differences in reward functions that's believe at all 2020." }, { "end": 3246, "start": 3232, "text": " And so this seems like a kind of a surprising result it says here in the abstract a distance to quantify the difference between two reward functions directly without training a policy that seems magical." }, { "end": 3250, "start": 3246, "text": " Can you tell us a bit how about how that works." }, { "end": 3273, "start": 3250, "text": " Yeah so Adam Gleves is again the first author on this the motivation for this comes from the fact that real world applications are such that like they don't come with a pre specified reward function I guess this is again what we were talking about earlier where like the reward function in RL we often think of it as like coming from on high." }, { "end": 3302, "start": 3273, "text": " But in real world applications like somebody has to like either write that down or like learn it from data and often this process could go wrong and so if we have done this like multiple times like a few people have written down like their guesses or like we've learned a few different ones would be good to check to see whether they're actually even saying like saying to train for this same thing or training for similar things because if like you you get some reward function for a bunch of different sources." }, { "end": 3314, "start": 3302, "text": " And they say different things then maybe that's a indication you should go back to drawn board and see like trying to figure out what what you actually want out of the situation." }, { "end": 3331, "start": 3314, "text": " But the way of comparing reward functions right now is sort of checking how the policy works on some test environment and that the performance of reward function on test environment doesn't necessarily transfer to the training environment." }, { "end": 3360, "start": 3331, "text": " So I guess for an example of this so if we have a reward function that rewards you like rewards a car for staying near the speed limit and we have another reward function which rewards you for like a high reward for getting here to your destination and like some penalty for crashing these two could give very similar behaviors in on like a dry road in the middle of the summer where they just like both maintain the same speed for the whole whole trip." }, { "end": 3370, "start": 3360, "text": " But very different things in winter when the first reward function would just like maintain the speed limit and probably crash for the second one would be more careful." }, { "end": 3379, "start": 3370, "text": " And so in order to like trying to like sus out these differences in order to like allow the developers to go back and figure out what they actually mean." }, { "end": 3383, "start": 3379, "text": " We would want to be able to compare the reward functions directly." }, { "end": 3394, "start": 3383, "text": " And I guess the natural way of doing this were just be to do the correlation between the reward functions like these two reward functions are just like a vector of numbers over states." 
}, { "end": 3397, "start": 3394, "text": " And so you can just like try to compare the two vectors." }, { "end": 3404, "start": 3397, "text": " Yeah, one way to compare two reward functions would be to do the correlation between two reward functions over states." }, { "end": 3412, "start": 3404, "text": " So like you can think of reward functions just like a vector where like every state just has some sort of number and you can just compare those two vectors and see how close they are." }, { "end": 3422, "start": 3412, "text": " But the problem is that like vector like reward functions just aren't like shouldn't just be thought of as arbitrary vectors. They have some sort of structure with them." }, { "end": 3433, "start": 3422, "text": " So particularly like reward functions don't really like shouldn't be thought of different if they're just like different shaping this of each other." }, { "end": 3443, "start": 3433, "text": " And so like comparing them in just like the like as if they're two vectors would make them very vulnerable to like any sort of reward shaping." }, { "end": 3460, "start": 3443, "text": " And so what we do to fix this is we just canonicalize the reward functions to find like sort of a representative sample of what the like this reward function is that sort of immune to these sorts of reward shaping terms." }, { "end": 3464, "start": 3460, "text": " And then we compare the distances between reward functions there." }, { "end": 3476, "start": 3464, "text": " And so we can show that doing this distance metric gives us a linear regret bound on the transfer form performance when a policy is trained by one reward function and test it on another." }, { "end": 3480, "start": 3476, "text": " So any hints on on what you're up to next Michael?" }, { "end": 3500, "start": 3480, "text": " Yeah, so sort of in the same vein of the the period work where paired sort of avoid the need to specify this like hard distribution of Mase's sort of like proxgen approaches what I'm working on thinking about other ways of avoiding specifying like hard parts of the problems that we want to solve." }, { "end": 3505, "start": 3500, "text": " And hopefully finding ways of doing this without the systems performance or we degrading." }, { "end": 3515, "start": 3505, "text": " Cool. And besides what you're working on or playing to work on can you share anything about other stuff in or all lately that you find really interesting." }, { "end": 3524, "start": 3515, "text": " Yeah, so I guess sort of in the vein of the stuff I'm thinking about working on that I've been recently thinking about how to specify problems." }, { "end": 3552, "start": 3524, "text": " And I guess two approaches that have been recently released in this like area are the consequences of the line, like miss a line day I by showing it all which studies the effects of leaving features out of a reward function and conservative agency by Turner at all, which proposes a way of making assistance which may get to get mitigate these unintended side effects of like leaving stuff out of your own word function." }, { "end": 3572, "start": 3552, "text": " And I think both of these are like sort of good steps in terms of trying to make our methods less vulnerable to misspecification of the problems that we want to solve because I think the problems that we actually want to solve like climate change and like poverty and like economic issues and stuff like this." 
}, { "end": 3584, "start": 3572, "text": " Oftentimes these are really difficult to actually specify and so if we make problem specification or make systems that are less vulnerable to misspecification of the problems, but the patients." }, { "end": 3591, "start": 3584, "text": " Then I think we'll be able to apply our like really good AI techniques to more important pressing problems." }, { "end": 3605, "start": 3591, "text": " Michael Dennis, this has been fantastic super fascinating for me. It hasn't been the shortest interview we've done. I really appreciate your patience with that. And thanks so much for sharing your time and your insight with with me in our audience today. Thank you Michael Dennis." }, { "end": 3607, "start": 3605, "text": " Yeah, thanks. It's been great." }, { "end": 3614, "start": 3607, "text": " Thanks." }, { "end": 3619, "start": 3614, "text": " Notes and links for this episode are at talkrl.com." }, { "end": 3624, "start": 3619, "text": " If you like this show, I need your support. You can help in a few ways." }, { "end": 3637, "start": 3624, "text": " One, subscribe on your favorite podcast platform. Subscriptions make a big difference. Two, follow us on Twitter and talkrl podcast. We love retweets." }, { "end": 3648, "start": 3637, "text": " Three, give us a five-star rating on Apple podcasts. If you don't think we deserve five stars, let us know on Twitter what we could do better." }, { "end": 3655, "start": 3648, "text": " Talkrl." } ]
Roman Ring
Roman Ring discusses the Research Engineer role at DeepMind, StarCraft II, AlphaStar, his bachelor's thesis, JAX, Julia, IMPALA and more!
https://media.transistor…6e1.mp3?src=site
This is the TalkRL Podcast. All reinforcement learning, all the time. Interviews with brilliant folks from across the world of RL. I'm your host, Robin Chohan. Roman Ring is a research engineer at DeepMind. Roman, I want to thank you for taking the time to be here today. Thanks so much. Thank you so much for having me. Can you tell us a bit about the research engineer role at DeepMind? Yeah, so research engineering in general is a somewhat recent concept. I think it came to be as a response to the increasing software complexity of machine learning projects. Research scientists just couldn't do them on their own. But then even if software engineers were hired, there was this gap in background knowledge between the two. So you would have this situation where you would have really knowledgeable research scientists and talented software engineers, but nobody to bridge the gap between the two, so to say. So being an RE is somewhat of a jack-of-all-trades kind of deal, or maybe even more of a spectrum where you have pure research on one side and engineering on the other, and individual REs can choose where to stand. I'll be honest, even at DeepMind the difference between REs and software engineers, and sometimes even research scientists, is somewhat blurry. There are research scientists who are really talented software engineers and vice versa. But at least during hiring, I think the difference comes down to expectations. So an RE would be expected to know why you need importance sampling in IMPALA, whereas software engineers should know how various commonly used algorithms stack up. I think the cool thing about research engineering that maybe a lot of people don't realize is that you don't need a PhD to get a job at an industry lab. It's enough to have a master's for an RE position. And yeah, there are a lot of discussions on Reddit where a person who clearly indicates he has no interest in academia asks what his potential career trajectories are, and then people come in and say, well, if you want to do research, you have to have a PhD. But in reality that's not the case, even for private industry labs like DeepMind. So for you personally, what do you feel are the most interesting parts of the research engineer role? So in a lot of ways, research engineering right now is like the Wild West. There are a lot of things that are still not figured out, and people are trying all sorts of wacky ideas, from the low-level compiler side to huge software architecture projects. I came into machine learning from web development, where I think a lot of these things are already figured out, and so at a certain point in your career you're basically going through the motions, even if you're starting a project from the ground up. In ML, basically nobody knows what the best practices are, so you have this freedom of choice, freedom to explore things. And I think not many people realize that some of the major research breakthroughs, like AlexNet or the recent GPT-3, were driven mainly by engineering efforts. AlexNet actually used an architecture from the 90s, LeNet, with some modifications of course, but the core is still LeNet, and what drove the achievement was the custom CUDA kernels. And GPT-3 is literally just GPT-2 with some minor changes, and what obviously drove the results was the enormous scale that they had.
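To make the IMPALA remark above concrete: in IMPALA the actors that generate trajectories run with slightly stale copies of the policy, so the learner must correct for that off-policyness with truncated importance weights, which is what the V-trace target does. Below is a minimal NumPy sketch of the V-trace computation for a single trajectory; the function and argument names are my own, and the clipping constants are the defaults suggested in the IMPALA paper.

```python
import numpy as np

def vtrace_targets(rewards, values, bootstrap_value, rhos,
                   gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """V-trace value targets for one trajectory of length T.

    rhos[t] = pi(a_t | x_t) / mu(a_t | x_t): ratio of the learner policy pi
    to the (slightly stale) actor policy mu that generated the data.
    """
    rewards, values, rhos = map(np.asarray, (rewards, values, rhos))
    clipped_rhos = np.minimum(rho_bar, rhos)   # truncated IS weights (rho_t)
    cs = np.minimum(c_bar, rhos)               # trace-cutting coefficients (c_t)
    next_values = np.append(values[1:], bootstrap_value)
    deltas = clipped_rhos * (rewards + gamma * next_values - values)

    # Backward recursion: v_t = V(x_t) + delta_t + gamma * c_t * (v_{t+1} - V(x_{t+1}))
    acc = 0.0
    vs = np.zeros(len(rewards))
    for t in reversed(range(len(rewards))):
        acc = deltas[t] + gamma * cs[t] * acc
        vs[t] = values[t] + acc
    return vs
```

If the ratios were left unclipped, a few unlucky samples could blow up the targets; clipping at rho_bar trades a little bias for much lower variance, which is the "why" behind the interview remark.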
So what do you feel are the biggest challenges for engineering at this point? Are there any other bottlenecks in the progress of engineering? Yeah, so the Wild West aspect of it can also act as a double-edged sword. Basically, since nobody knows the best way to do things, there's some duplication or even waste of effort happening. You can spend quite a bit of time going in a direction that's basically fruitless. So you submitted an impressive bachelor's thesis on StarCraft agents when you were at the University of Tartu in Estonia, is that right? Yeah. Can you tell us a bit about that thesis? Yeah, so this goes back to the early 2000s when I first discovered StarCraft 1, and I immediately fell in love with the complexity of the game. Even though at the time I had no idea I would be working in AI, I was actually initially planning to become a translator from English to Russian. Anyway, when I discovered the game, I was already back then thinking about what it would take to have a computer program play it. And I also saw that in the first version of StarCraft, even the built-in AI, not the one you play against, but the one that drives units, things like pathfinding, was really inefficient. So that was an indication to me that this game would probably be a major challenge for AI. And then, when I went back to university and arrived at the point where I had to do a thesis, my mind immediately went back to StarCraft. And I guess I was just really lucky that right around the deadline when I had to decide on a project, DeepMind released their PySC2 API library. And yeah, the choice was obvious then. At first I had really ambitious ideas where I would replicate the initial results DeepMind released and then build some research on top of that. But since the initial paper that came out with the library didn't have any source code, and you had to replicate the architecture yourself, that was quite difficult for me as a junior researcher. So in the end, I settled on just being as close to the DeepMind results and architecture as possible. One side challenge was that my undergrad was actually in statistics. So a major stress for me was whether I would be able to prove to a room full of hardcore mathematics and statistics people that what I did wasn't just playing video games for half a year, and I had to be really careful with every mathematical definition I brought up. I think that actually ended up playing a major role in my fundamental understanding of RL, since I had to be really careful and know my stuff. Okay, I really enjoyed going through it. I think it's an incredible effort for anybody, but especially at the bachelor's level. So StarCraft has some interesting properties as an environment. Can you talk us through what makes StarCraft especially interesting and challenging? Yeah, all right. So the game itself is a real-time strategy video game. What that means is that you have to combine long-term and short-term planning to build up your army and then figure out the best timing to attack your opponent and win the game. A layer of complexity on top of that is that there are three distinct races in the game, each with its own unique mechanics. So from an RL perspective, the obvious challenge is that the state and action spaces are basically infinite for practical purposes, and this dwarfs typical board games like chess and Go.
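For readers who haven't seen it, here is roughly what interacting with that PySC2 library looks like. This is a hedged sketch from memory of the pysc2 2.x-style API, so constructor arguments and flag handling may differ between versions; the point is the structured "function plus arguments" action space, which is a big part of why the action space is effectively enormous.

```python
# Rough sketch of a PySC2 interaction loop (pysc2 2.x-style API; details may
# vary by version, so treat this as illustrative rather than definitive).
from absl import flags
from pysc2.env import sc2_env
from pysc2.lib import actions, features

flags.FLAGS(["sketch"])  # pysc2 expects absl flags to be parsed

def noop_episode():
    with sc2_env.SC2Env(
        map_name="MoveToBeacon",                       # one of the mini-games
        players=[sc2_env.Agent(sc2_env.Race.terran)],
        agent_interface_format=features.AgentInterfaceFormat(
            feature_dimensions=features.Dimensions(screen=84, minimap=64)),
        step_mul=8,                                    # act every 8 game frames
    ) as env:
        timestep = env.reset()[0]
        total_reward = 0.0
        while not timestep.last():
            # Every action is a function id plus arguments (no_op, select_point,
            # Move_screen, ...), and only a subset is legal in any given state.
            timestep = env.step([actions.FUNCTIONS.no_op()])[0]
            total_reward += timestep.reward
        return total_reward
```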
In fact, I think you could probably embed a full game of chess inside StarCraft and it wouldn't really affect things much at this point. Yeah, then there's the Fog of War, which is a big part of the game. Basically, it hides information about the opponent's units and buildings from the player unless he has his own unit nearby. This makes information a really critical resource in the game, and tournament games are often won with, so to say, meta-gaming, where players do all sorts of tricks to avoid giving up information or to give wrong information to the opponent. Also, games last for thousands of steps, so if you assign a reward only on the last step, as is done for AlphaStar, then the environment has really long horizons with very sparse rewards. And finally, one major challenge when playing versus humans is that the winning strategies are non-transitive. Basically, there is no single optimal policy that works best; it's more like a really high-dimensional rock-paper-scissors. So I think there were two DeepMind StarCraft papers before the 2019 Nature paper: one introducing the StarCraft environment, and then the relational RL one, is that right? Yes, so the first one, as you said, was released alongside the PySC2 environment. And then, yeah, the other one was relational reinforcement learning, but the funny thing about that one is that it was done basically in parallel to AlphaStar, and it was led by a separate group of people with some advisory help from the AlphaStar team. So while it did showcase StarCraft as a research environment, it didn't really build up to the final agent in the end. Okay, so before we get into that Nature paper, can you help us set the stage for where StarCraft RL agents were, at a high level, before that Nature paper came out? Sure, so outside of StarCraft 2 there's also a really active community doing StarCraft 1 research, but at least at the time when AlphaStar started gaining traction, they were still using fairly basic machine learning approaches. I'm not even sure if anybody had used RL successfully. There's also the StarCraft 2 AI community, which focuses mainly on handcrafted bots, which are then pitted against each other in regular tournaments. They're actually quite good, and some could even win versus human players, but of course they suffer from the usual drawbacks of handcrafted engineering. As for AlphaStar itself, it went really quickly from playing the self-contained mini-games to pro level. I think the initial paper was released in something like 2017, and then for quite a long time DeepMind tried to do things from scratch, tabula rasa so to say, that is with pure RL. That didn't quite work out in the end. But anyway, by December 2018 they already had agents playing at a pro level, and in January 2019 they had the show match versus pro players. While AlphaStar won, many people still thought it wasn't quite fair, since the agent used way too many actions, and also it was playing just one race on one map. So that was in, say, January 2019, and then the agents described in the Nature paper were basically locked in around June or July, so that's about half a year of difference. So you co-authored that Nature article titled Grandmaster level in StarCraft II using multi-agent reinforcement learning, by Vinyals et al. So can you give us a short overview of that paper?
So this paper describes a machine learning based agent which reached a fairly high level of play in StarCraft, though unfortunately it didn't dominate the scene like AlphaGo. It's initialized with supervised pre-training and fine-tuning on human data, followed by a multi-agent RL training loop, and this loop produced a set of agents that were hard to exploit. The agent's neural network architecture was built to handle multi-modal inputs and outputs, and at the time I think it was the first RL agent to combine convolutional, recurrent, and transformer layers in one big network. The training was done through a league training system to deal with this rock-paper-scissors effect I mentioned. Basically the league contains a set of main agents, agents whose only purpose is to beat these main agents, and agents that are encouraged to follow some strategy that often leads to quick wins, which are called cheeses in the community. Then over time this league is populated with older checkpoints of these agents, and the goal of the main agent is of course to win against the whole league. That eventually leads to somewhat generalizable agents. Another major focus, as I mentioned, was ensuring that playing conditions are as fair as possible. I mean, sure, you could say that the only fair test would be if you had a robot sitting next to a monitor, looking at the game from raw pixels and controlling a keyboard and mouse. But aside from that, I think AlphaStar has a relatively fair setup, and I think most of the regular players agree with that. The paper itself is actually a combination of several individual research ideas that could probably easily have made it into top conference papers on their own. These are ideas like UPGO, upgoing policy gradients, which is basically a really simple modification on top of the usual actor-critic loss that helps quite a bit with learning from the right actions. I'm actually surprised that people still haven't decoupled these ideas from the paper and tried them out in their own environments or agents. Can you say a bit about your own role in the paper? Yeah, well, the project lasted something like four years, so by the time I joined it was basically right at the end. Obviously I couldn't have a major impact on something fundamental like making architectural changes or contributing novel research ideas, but I helped with general engineering in the AlphaStar league that I described, I helped analyze the results of the millions of games the agents played, and I added a couple of small paragraphs to the paper itself. So while my impact maybe wasn't that big, I did learn a lot during this internship, and yeah, it helped me a lot to start my full-time position on the right foot. Do you want to share what parts of the agent you find personally most interesting? Honestly, that it learned anything at all is mind-blowing to me. I would never have thought that a single, relatively small neural network could handle such an environment at a high level of play. I would always say that, yeah, sure, AlphaGo can beat Go, but to play StarCraft something major would have to be introduced. And here we have a single network playing three different races across four different maps, across many, many variations of strategies. But if I had to point to something specific, I would say it would be forcing the agent to use the camera to attend to game events.
It maybe doesn't sound like much, but seeing it play in action and actually make these choices to attend to different events and make decisions based on that, I'm not saying it's AGI or anything, and I know that under the hood it's basically just matrix multiplications, but still, that was a really weird feeling. It's almost like looking at something that's doing reasoning on some level. So do you think that a model-based approach for a future StarCraft agent would ever be possible, or could be a good idea? Sure, that could be promising, but maybe not with a full dynamics model, rather some latent representation. Then you would have to consider why you're doing it. I mean, it definitely won't be worse, so it would probably improve AlphaStar's performance, but it would take even more resources to train. Personally, it would be fun to see AlphaStar clearly beat the top players and achieve this AlphaGo style of victory, but I don't see how that could be the only goal. You would have to have some research reason to do it, and I don't really see one. So I remember when the StarCraft environment first came out, and from the description in the paper it sounded like a lofty, distant goal to build agents for the full StarCraft game. And then it seemed like a very short time later that the grandmaster-level paper came out. So can I ask, was that timeline expected? Were you surprised by how quickly your team was able to get to that level? And is StarCraft going to be a challenge going forward for a long time, like Atari was? Yeah, so I was definitely surprised to see that result achieved. I do have to clarify that nobody is claiming that StarCraft is solved, so even though AlphaStar beats 99.8% of players, the last 0.2% are probably the hardest to beat, and it would take quite a bit of effort to reach 100%. So in that sense, the environment itself is definitely still an open research question. But realistically speaking, it does take a lot of resources to work on, so outside of big industry labs I'm not sure how feasible it is to do research on it. That's from an RL perspective. From something like an imitation learning perspective, there's definitely room for research for both big and small labs. I would personally be really interested to see other people try to replicate at least the AlphaStar imitation parts; that alone would still beat something like 80 to 90% of human players. So in some ways, the OpenAI Dota 2 work seems to be a bit of a parallel world to the StarCraft work from DeepMind. Do you have any thoughts on comparing and contrasting these two projects? Yeah, sure. Both were really interesting projects from an engineering perspective. As a research engineer, I think OpenAI Five was definitely a really cool achievement, and even though I don't play Dota myself, I can appreciate that the complexity of the game is probably on par with StarCraft. So I have to clarify that I do think it's a really cool result. But they did do some things differently. They didn't focus on making the game fair between the agent and the humans, and they also did a lot of reward shaping, whereas AlphaStar was trained purely on the win-loss signal. And I guess this goes for both results, but neither AlphaStar nor OpenAI Five actually achieved the clear milestone of decisively beating the top humans. So in that sense, both environments are still open research problems.
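To make the reward shaping versus pure win-loss distinction concrete, here is a minimal sketch in Python. The toy environment, the shaping term, and the weights are all made up for illustration; this is not code from either project.

```python
# A toy contrast (not from AlphaStar or OpenAI Five) between a sparse win/loss
# reward and a shaped reward. `ToyBattleEnv` and its dynamics are hypothetical.

class ToyBattleEnv:
    """Minimal stand-in environment: a scalar 'score' accumulates each step."""
    def __init__(self, horizon=100):
        self.horizon = horizon

    def reset(self):
        self.t = 0
        self.score = 0.0
        return self.score

    def step(self, action):
        # action in {-1, 0, +1}; the score just accumulates the action here
        self.score += action
        self.t += 1
        done = self.t >= self.horizon
        return self.score, done


def sparse_win_loss_reward(score, done):
    # AlphaStar-style signal: nothing until the episode ends, then +1 / -1.
    if not done:
        return 0.0
    return 1.0 if score > 0 else -1.0


def shaped_reward(prev_score, score, done):
    # Reward-shaping idea: a dense intermediate signal (here, the score delta)
    # on top of the terminal outcome. The shaping weight 0.1 is made up.
    r = 0.1 * (score - prev_score)
    if done:
        r += 1.0 if score > 0 else -1.0
    return r


# Usage: the shaped signal gives feedback every step, the sparse one only at the end.
env = ToyBattleEnv(horizon=3)
prev = env.reset()
score, done = env.step(+1)
print(sparse_win_loss_reward(score, done))   # 0.0 until the episode ends
print(shaped_reward(prev, score, done))      # small dense signal each step
```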
So if we step back a bit and look at the earlier generation of DeepMind agents like DQN and AlphaGo, and compare them to the current generation of agents like Agent57, AlphaStar and MuZero, I guess the new ones are a lot more complex. Do you think that trend will just continue over the next few years? Are these agents just getting way more complex? What do you think agents might look like a few years from now? To be fair, the modern agents tackle way more difficult tasks than something like the original DQN. I mean, yeah, sure, Agent57 is still playing the same Atari games, but, so to say, the league it's playing in is completely different. Agent57 has to beat the human baseline by a wide margin on all of the 57 games, whereas the original DQN failed miserably at half of them. So as far as research direction, I can see it going either way. Obviously, having simpler agents achieve similar results on current environments is worthwhile, and at the same time, having more complex agents achieve good results on more complex environments is also a good direction. So as you mentioned, GPT-3 was basically just a scaled-up version of GPT-2. Could there be a lot of progress to be made just by scaling up existing methods, or do we really need new methods? Yeah, again, either direction could work. There's still this problem in RL where deep networks don't really work. Up to a couple of years ago, the best network you could use was basically a three-layer ConvNet. Then ResNets came and somewhat improved that. And even the most complex architectures like AlphaStar and OpenAI Five, they're complex from an architecture perspective, but they're really shallow compared to something like GPT-3. So I'm not sure this could be improved just by scaling up really, really a lot. I would probably guess that something more fundamental has to happen, either a new neural network layer or something in the RL agents themselves. So we talked about a few different families of agents. Do you think these different families will continue bifurcating and splitting into different sub-families, or do you see these lines of work converging at some point? Well, I mean, I'd love to see a unifying, string-theory type of thing that would combine all of these algorithms into one. But at this point in time, I think we're barely in the Newtonian physics era. So yeah, I think right now it's just worthwhile to explore different directions. Then again, we do advance much quicker than physics did from the 1600s, so who knows. Also, maybe we'll get AGI from some different direction and then just ask it to solve RL for us. So do you think that people will be coding agents in TensorFlow or JAX or PyTorch for the foreseeable future? Yeah, so in short, JAX is a library that exposes a set of so-called program transformations, like grad or jit, on top of a NumPy-like API, and this all runs really efficiently on GPU and TPU. So if you like NumPy, you would love JAX. And in that sense, I think JAX shines the further off the beaten path your research goes, because you can just prototype it like NumPy code. Basically, the weirder your ideas get, the better JAX will suit you, and in that sense I think it would be beneficial in an RL context. Whether it will be JAX, TensorFlow, or PyTorch down the line, I'm not sure, maybe something like Swift for TensorFlow finally comes through.
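As a minimal illustration of the grad and jit program transformations mentioned above: you write an ordinary NumPy-style function and JAX turns it into a compiled gradient function. The toy loss and data here are made up purely for illustration.

```python
# Minimal JAX sketch: differentiate and compile a plain NumPy-style function.
import jax
import jax.numpy as jnp

def loss(params, x, y):
    # Simple linear model with squared error, written like ordinary NumPy code.
    w, b = params
    pred = jnp.dot(x, w) + b
    return jnp.mean((pred - y) ** 2)

# Transformations compose: differentiate with respect to `params`, then compile.
grad_loss = jax.jit(jax.grad(loss))

# Hypothetical toy data just to show the call.
params = (jnp.ones(3), 0.0)
x = jnp.ones((8, 3))
y = jnp.zeros(8)
grads = grad_loss(params, x, y)  # returns gradients with the same structure as `params`
```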
But yeah, I think this goes back to the initial question of why I like engineering, and that's that nobody has it figured out. So yeah, I actually don't know what will happen in two years, maybe we'll have another framework, and that's what keeps it exciting for me. By the way, I have to say I see a lot of discussion on Reddit that TensorFlow is basically dead and Google is moving everyone to JAX, and that's just not true, because TensorFlow has a lot of things figured out that the JAX core developers just don't have the time for. I mean, I'm not speaking for Google obviously, but as far as things go, I don't think TensorFlow will go away anytime soon in that sense. Okay, so maybe aside from which specific framework is used, do you think people will continue to make agents by hand for the foreseeable future? I mean, maybe AGI will solve this for us, but aside from that, I think there's no harm in having inductive biases, and I don't see a reason why we should avoid using them, especially right now. There are also directions like neural architecture search, but that takes even more compute resources than normal RL. So yeah, I think for the foreseeable future we'll be coding these agent algorithms by hand. Do you think that Python will remain the de facto standard for ML and RL, or do you think maybe Julia or something else will dominate before too long? Yeah, so this is kind of tricky. Languages like Julia clearly do many things right, and I would love to use them professionally, but Python just has so much momentum right now. Especially when we're talking about large companies with pre-existing internal infrastructure, the cost of switching to another language just might not be worth it. So whatever it is that everyone switches to, it would have to have, I don't know, some features that we can't even imagine right now. So yeah, while personally I would love to use Julia, I don't see people switching to it globally. So besides these things, how do you see the role of research engineering changing over the next few years? What types of changes might we expect? Yeah, so basically, as I said, the line between research engineering and software engineering is blurry as it is. So I think these two job families will merge back into one eventually, once the next generation of students, who have the relevant machine learning background, comes onto the job market. But you could say that's just terminology, and whatever you call it, it's the same job. The other thing is that in research scientist positions I see a trend where domain expertise is really valued. So instead of having pure machine learning research positions, you have machine learning plus chemistry, or machine learning plus bio. And I think we will have the same for research engineering or software engineering, where you have machine learning plus compilers, or machine learning plus distributed systems. And I mean, that's basically already happening. You have general positions, but then once you get hired, you start specializing, because you just can't know everything. But I think it will become explicitly expected of people to have this dual expertise. Okay, so some of the agents we talked about today definitely burn a lot of compute, as you mentioned. Do you have any opinions about what we can still do in RL with small compute? Or do you feel like most of the interesting things require a lot of compute?
Do you think there's still interesting progress that we can achieve with small compute? Well, I mean, I think people overestimate how much compute even industry labs use. Aside from large projects like AlphaStar and OpenAI Five, it's still clearly more beneficial to use smaller environments, which use a smaller amount of resources, just because you get results from your experiments faster and you can iterate faster. So basically, I think any fundamental research, like improving sample efficiency, can be done on basically toy environments that would be accessible to any researcher. And then another thing is, I think a lot of people aren't really utilizing their resources fully. I remember when the original DQN came out, having something like a thousand samples per second on a single machine meant that you were basically a god of performance. Whereas now there are papers that squeeze out hundreds of thousands of samples per second on a single machine, with some clever tricks to hide the bottleneck of communication between CPU and GPU. Basically, I think understanding all of these tricky bits, and how Python itself works under the hood, would definitely help. That's also assuming that you have no control over the environment, like with StarCraft. But if you're designing your own research environment, then you can just re-implement the environment with vectorization, like with SIMD on CPUs or with CUDA kernels. In fact, I think I recently stumbled on a paper from a couple of years ago by Nvidia where they basically re-implemented most of the Atari environments on CUDA, but for some reason not many people are using it. And I think maybe there's some inherent inertia in academia against these engineering improvements, which is kind of ironic considering cases like AlexNet and GPT-3. So in preparing for this episode, you mentioned another set of papers that you found interesting, regarding Q(λ), Retrace, ACER, V-trace, and IMPALA. And as you mentioned, these are pretty big topics. Do you want to share with us some major ideas in this sequence and what makes this set of papers of particular interest to you? Yeah, so in short, the goal of those papers is to incorporate return-based approaches into off-policy RL. The initial Q(λ) work extends multi-step Q-learning theory to support that. Then the Retrace idea extends this using truncated importance sampling to correct for off-policyness. The ACER paper then adapts the Retrace approach to a full actor-critic algorithm, but ACER still relies on the state-action value estimator, the Q function. And finally, in IMPALA, Retrace is adapted to the state value estimator, the V function, which is the V-trace estimator. Basically, the reason I brought up these papers is that I just really like the story they paint. They start off with a really theoretical idea and eventually arrive at a system that drove probably the largest-scale RL project, AlphaStar. And yeah, I remember when I first found out about IMPALA, I thought it just came out of nowhere, but then I slowly went through the references and discovered how it was all built up over the years. I found that quite inspirational as a junior researcher. So to be clear, AlphaStar is using V-trace and some other things. It talks about off-policy, but is it really, truly off-policy? Can AlphaStar learn from batch data? Well, not quite, it's off-policy-ish. It can't wander off too far; it's basically learning on experience that's only a couple of batches old.
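For readers who want to see what the truncated importance sampling correction looks like in code, here is a minimal single-trajectory sketch of a V-trace-style target computation, following the recursion described in the IMPALA paper; the function name, array shapes, and hyperparameter values are illustrative, not taken from any production implementation.

```python
# Sketch of V-trace-style value targets for one trajectory of length T.
import numpy as np

def vtrace_targets(rewards, values, bootstrap_value, rhos, gamma=0.99,
                   rho_bar=1.0, c_bar=1.0):
    """rewards, rhos: length-T arrays; rhos are pi/mu likelihood ratios.
    values: length-T array of V(x_t); bootstrap_value: V(x_T) after the last step."""
    T = len(rewards)
    clipped_rhos = np.minimum(rho_bar, rhos)   # rho_t, clips the TD-error weight
    cs = np.minimum(c_bar, rhos)               # c_t, clips the trace coefficient

    values_tp1 = np.append(values[1:], bootstrap_value)
    deltas = clipped_rhos * (rewards + gamma * values_tp1 - values)

    # Backward recursion: vs_t - V(x_t) = delta_t + gamma * c_t * (vs_{t+1} - V(x_{t+1}))
    vs_minus_v = np.zeros(T)
    acc = 0.0
    for t in reversed(range(T)):
        acc = deltas[t] + gamma * cs[t] * acc
        vs_minus_v[t] = acc
    return values + vs_minus_v

# Hypothetical numbers just to show the call.
targets = vtrace_targets(
    rewards=np.array([0.0, 0.0, 1.0]),
    values=np.array([0.1, 0.2, 0.3]),
    bootstrap_value=0.0,
    rhos=np.array([1.3, 0.7, 2.0]),
)
```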
So it's definitely not off-policy like you'd think of DQN or something, but it does have this off-policy-ish element. Another reason for this element is that the RL setup itself is highly distributed, so you can often have situations where experience from one agent comes in way after the network has been updated. Then you have this situation where you either throw it out or you somehow correct for it, and that's where things like V-trace come in; importance sampling is basically what drives this correction. So besides what we've mentioned here so far, do you have any comments on directions in the RL world that you find interesting lately? Yeah, so there's a funny video from like 10 years ago by Simon Peyton Jones, who's a lead designer of Haskell, and the video is titled Haskell is useless. In short, he puts a bunch of languages on a grid of useful versus useless, and he puts languages like C++ and Java in the useful pile, and then Haskell into the useless pile. The reason he gives is that even though languages like C++ could potentially blow up your machine if you're not careful, it doesn't really matter because they're just so practically useful. Where I'm going with this is that I think RL right now is the Haskell of the machine learning world. It has really strong and beautiful foundational ideas, but it still has some way to go before becoming really practical for mainstream use. So one direction that could take us there, I think, is offline RL, which basically detaches the process of gathering the samples from the learning process. And yeah, I think offline RL is something that could open many doors in practice. Roman Ring, thank you so much for sharing your time and insight today. It's been a real pleasure to speak with you, and I know our audience will really appreciate it too. Thanks again. Yeah, thank you so much for having me. Notes and links for this episode are at talkrl.com. If you like this show, I need your support. You can help in a few ways. One, subscribe on your favorite podcast platform. Subscriptions make a big difference. Two, follow us on Twitter at TalkRL podcast. We love retweets. Three, give us a five-star rating on Apple Podcasts. If you don't think we deserve five stars, let us know on Twitter what we could do better.
[ { "end": 12, "start": 0, "text": " This is TalkArail Podcast. All reinforcement learning, all the time." }, { "end": 18, "start": 12, "text": " Interviews at Brilliant folks across the world of RL. I'm your host, Robin Chohan." }, { "end": 25, "start": 21, "text": " Roman Ring is a research engineer that deep-mind." }, { "end": 29, "start": 25, "text": " Roman, I want to thank you for taking the time to be here today. Thanks so much." }, { "end": 35, "start": 29, "text": " Thank you so much for having me. Can you tell us a bit about the research engineer role that deep-mind?" }, { "end": 40, "start": 35, "text": " Yeah, so research engineering in general is a somewhat recent concept." }, { "end": 48, "start": 40, "text": " I think it came to be as a response to increasing software complexity of machine learning projects." }, { "end": 52, "start": 48, "text": " Research scientists just couldn't do them on their own." }, { "end": 60, "start": 52, "text": " But then even if software engineers were hard, there was this gap in background knowledge between the two." }, { "end": 68, "start": 60, "text": " So you would have this situation where you would have really knowledgeable research scientists and talented software engineers." }, { "end": 74, "start": 68, "text": " But nobody to bridge the gap between the two, so to say." }, { "end": 87, "start": 74, "text": " So then being an RE is somewhat like a jack of all trace kind of deal or maybe even more of a spectrum where you have pure research on one side and engineering on the other." }, { "end": 92, "start": 87, "text": " And then individual REs can choose where to stand." }, { "end": 102, "start": 92, "text": " I'll be honest, even a deep-mind, the difference between REs and software engineers and sometimes even research scientists is somewhat blurry." }, { "end": 108, "start": 102, "text": " So there are research scientists that are really talented software engineers and vice versa." }, { "end": 113, "start": 108, "text": " But at least during hiring, I think the difference comes down to expectations." }, { "end": 120, "start": 113, "text": " So an RE would be expected to know why do you need important sampling in Impala." }, { "end": 127, "start": 120, "text": " Whereas software engineers should know how various already-used algorithms stack up." }, { "end": 138, "start": 127, "text": " I think the cool thing about research engineering and but maybe a lot of people don't realize is that you don't need a PhD to get a job at an industry lab." }, { "end": 144, "start": 138, "text": " So it's enough to have a masters for an RE position." }, { "end": 155, "start": 144, "text": " And yeah, there's just a lot of discussions on Reddit where a person who clearly indicates he has no interest in academia." }, { "end": 167, "start": 155, "text": " And asks if what are his potential career trajectories and then people would come in and say, well, if you want to do research, you have to have a PhD." }, { "end": 173, "start": 167, "text": " But in reality, that's not the case, even for private industry labs like deep-mind." }, { "end": 179, "start": 173, "text": " So for you personally, what do you feel are the most interesting parts of the research engineer role?" }, { "end": 185, "start": 179, "text": " So in a lot of ways, engineering, research engineering right now is like deviled West." 
}, { "end": 201, "start": 185, "text": " There's a lot of things that are still not figured out and people are trying all sorts of wacky ideas from low-level compiler side to huge software architectural projects." }, { "end": 209, "start": 201, "text": " And I came into machine learning from web development where I think a lot of these things are already figured out." }, { "end": 218, "start": 209, "text": " And so at a certain point in your career, you're just basically going through the motions, even if you're starting a project from the ground up." }, { "end": 223, "start": 218, "text": " And in ML, basically nobody knows what are the best things." }, { "end": 229, "start": 223, "text": " So you have this freedom of choice, freedom to explore things." }, { "end": 241, "start": 229, "text": " And I think not many people realize that some of the major research breakthroughs like Alex Net or recent GPT-3 were done mainly through engineering efforts." }, { "end": 251, "start": 241, "text": " So in Alex Net, they actually used the architecture that was like from the 90s, Lynette with some modifications, of course, but still the core is Lynette." }, { "end": 260, "start": 251, "text": " And what drove the achievement was the custom CUDA kernels. And then GPT-3 is literally just GPT-2 with some minor changes." }, { "end": 265, "start": 260, "text": " And what obviously drove the results was the enormous scale that they had." }, { "end": 269, "start": 265, "text": " So what do you feel are the biggest challenges for engineering at this point?" }, { "end": 275, "start": 269, "text": " Are there any other bottlenecks in progress of engineering?" }, { "end": 281, "start": 275, "text": " Yeah, so the Wild West aspect of it can also act as a double-aged sword." }, { "end": 290, "start": 281, "text": " Basically since nobody knows what's the best way to do things, then there's some applicator or even waste of efforts happening." }, { "end": 298, "start": 290, "text": " You can spend quite a bit of time going in higher direction that's basically fruitless." }, { "end": 306, "start": 298, "text": " So you submitted an impressive bachelor's thesis on Starcraft agents when you were at University of Tartune Estonia, is that right?" }, { "end": 307, "start": 306, "text": " Yeah." }, { "end": 310, "start": 307, "text": " Can you tell us a bit about that thesis?" }, { "end": 316, "start": 310, "text": " Yeah, so this goes back to like early 2000s when I first discovered Starcraft 1." }, { "end": 321, "start": 316, "text": " And I immediately fell in love with the complexity of the game." }, { "end": 332, "start": 321, "text": " Even though at the time I had no idea I would be working AI, I actually was initially planning to work on to be a translator from English to Russian." }, { "end": 343, "start": 332, "text": " Anyway, so when I discovered the game, I already back then was thinking what it would take to have a computer program play the game." }, { "end": 359, "start": 343, "text": " And I also saw that in the first version of Starcraft, even the built-in AI, not even the one that you play against, but the one that drives units, things like Pathfying, it was really inefficient." }, { "end": 367, "start": 359, "text": " So that was an indication to me that this game would probably be a major challenge for AI." }, { "end": 379, "start": 367, "text": " And then, so when I went back to university and arrived at a point where I would have to do a thesis, my mind immediately went back to Starcraft." 
}, { "end": 393, "start": 379, "text": " And I guess this was just really lucky that specifically at the deadline where I had to decide on a project, DeepMind released their PC2 API library." }, { "end": 397, "start": 393, "text": " And yeah, the choice was obvious then." }, { "end": 411, "start": 397, "text": " At first I had really ambitious ideas where I would replicate the results, the DeepMind released some initial results, and then built some research on top of that." }, { "end": 427, "start": 411, "text": " But basically since the initial paper that came out with the library didn't have any source code, and you had to replicate the architecture yourself, that was quite difficult for me as a junior researcher." }, { "end": 439, "start": 427, "text": " So in the end, I settled on just being as close to the DeepMind results and architecture as possible." }, { "end": 447, "start": 439, "text": " One side challenge with my undergrad was that it was actually in statistics." }, { "end": 461, "start": 447, "text": " So a major stress for me was whether I would be able to prove to a room full of like hardcore mathematics and statistics people that what I did wasn't just play video games for half a year." }, { "end": 469, "start": 461, "text": " And I had to be, yeah, and I had to be really careful with every mathematical definition I bring up." }, { "end": 475, "start": 469, "text": " And I think that actually in the end played a major role in my fundamental understanding of RL." }, { "end": 481, "start": 475, "text": " And yes, since I had to be really careful and know my things." }, { "end": 489, "start": 481, "text": " Okay, I really enjoyed going through these. I think it's an incredible, incredible effort, especially for anybody, but especially at the bachelor level." }, { "end": 501, "start": 489, "text": " So Starcraft has some interesting properties as an environment. Can you talk us through what makes Starcraft especially interesting and challenging?" }, { "end": 507, "start": 501, "text": " Yeah, all right. So the game itself is a real-time strategy video game." }, { "end": 527, "start": 507, "text": " What that means is that you have to combine long-term and short-term planning to basically build up your army and then figure out what's the best timing to attack your opponent and win the game." }, { "end": 537, "start": 527, "text": " A layer of complexity on top of that is that there are three distinct races in the game, each with its own unique mechanics." }, { "end": 547, "start": 537, "text": " So from an RL perspective, the obvious challenge is that the state and action spaces are basically infinite for practical purposes." }, { "end": 563, "start": 547, "text": " And this dwarfs the typical board games like Chess and Go. In fact, I think you could probably add a chess game as part of Starcraft, and it wouldn't really affect things much at this point." }, { "end": 578, "start": 563, "text": " Yeah, then there's a Fog of War, which is a big part of the game. Basically, what it does is it hides information about opponent's unions and buildings from the player, unless he has his own union nearby." }, { "end": 596, "start": 578, "text": " This makes information a really critical resource in the game, and tournament games are often won with, so to say, meta-gaming where players would do all sorts of drinks to avoid giving up on information or giving wrong information to the opponent." 
}, { "end": 612, "start": 596, "text": " Also, the game's lasts for thousands of steps, so if you assign a reward only on the last step, as it's done for Alpha Star, then this makes the environment have really, really long sparse rewards." }, { "end": 624, "start": 612, "text": " And finally, one major challenge when playing Quirce's humans is that the winning strategies are not non-transitive." }, { "end": 634, "start": 624, "text": " Basically, there is no single optimal policy that would work best. It's more like a really high-dimensional rock paper scissors." }, { "end": 647, "start": 634, "text": " So I think there were two deep-mind Starcraft papers before the 2019 Nature Paper won, introducing the Starcraft environment, and then the relational R01, is that right?" }, { "end": 655, "start": 647, "text": " Yes, so the first one, as you said, this one was really alongside the PiC2 environment." }, { "end": 673, "start": 655, "text": " And then, yeah, the other one was relational reinforcement learning, but the finding thing is about that one is that it was done basically in parallel to Alpha Star, and it was led by a separate group of people with some advisory help from the Alpha Star team." }, { "end": 681, "start": 673, "text": " But, well, it did showcase Starcraft as a research environment. It didn't really build up to the final agent in the end." }, { "end": 693, "start": 681, "text": " Okay, so before we get into that Nature Paper, can you help us set the stage for where Starcraft RL agents were at high level before that Nature Paper came out?" }, { "end": 715, "start": 693, "text": " Sure, so outside of Starcraft 2, there's also really active community of Starcraft 1 research, but at least at the time of when Alpha Star started gaining traction, they were still using fairly basic machine learning approaches." }, { "end": 730, "start": 715, "text": " I'm not sure even if anybody used RL successfully. There's also the Starcraft 2 AI community, which focuses mainly on handcrafted bots, which are then pitted against each other in regular tournaments." }, { "end": 741, "start": 730, "text": " They're actually quite good, and some could even win versus human players, but of course they suffer from the usual drawbacks of handcrafted engineering." }, { "end": 750, "start": 741, "text": " For the Alpha Star itself, it went really quickly from basically playing on the self-contained mini games to Pro level." }, { "end": 767, "start": 750, "text": " I think the paper itself was the initial paper was released in something like 2017, and then for quite a long time, DeepMind tried to do things from scratch, Tableau Rasa, so to say, that is with pure RL." }, { "end": 785, "start": 767, "text": " That didn't quite work out in the end. But anyway, basically by December 2018, they already had a RL Rasa January 2019, they had the show match versus pro players." }, { "end": 803, "start": 785, "text": " While the Alpha Star won, many people still thought that it wasn't quite fair, so the agent used way too many actions, and also it was playing just one race on one map." }, { "end": 819, "start": 803, "text": " But then, so that was in the say January 2018, and then the agents described in the Nature paper were basically locked in like June or July, so that's basically half a year of difference." }, { "end": 829, "start": 819, "text": " So you co-authored that Nature article titled Grandmaster Level in Starcraft 2 using multi-agent reinforcement learning by Vinyal Siddhal." 
}, { "end": 833, "start": 829, "text": " So can you give us a short overview of that paper?" }, { "end": 861, "start": 833, "text": " So this paper describes a machine learning based agent, which reached a fairly high level of play in Starcraft, though unfortunately it didn't dominate the scene like AlphaGo. It's initialized with supervised pre-training and fine-training from human data, and then followed by multi-agent RL training group, and this loop prior to as a support that were hard to stupid." }, { "end": 882, "start": 861, "text": " The agent, a neural network architecture, was built to handle multi-model inputs and outputs, and at the time I think it was the first RL agent that combines convolutional recurrent and transformer layers in like one big chunk." }, { "end": 896, "start": 882, "text": " The training was done through a leak training system to deal with this Rock Paper Scissors effect I mentioned. Basically the leak contains a set of main agents." }, { "end": 908, "start": 896, "text": " The agents who's only person is to beat these main agents, and the agents that are encouraged to follow some strategy that often leads to quick wins." }, { "end": 925, "start": 908, "text": " They're called cheeses in the community. Then over time this leak is populated with checkpoints of older checkpoints of these agents, and basically the goal of the main agent is of course to win against the whole leak." }, { "end": 940, "start": 925, "text": " That eventually leads to somewhat generalizable agents. Another major focus, as I mentioned, was ensuring that playing conditions are as fair as possible." }, { "end": 957, "start": 940, "text": " I mean, sure, you could say that the only fear test would be if you had a robot that's sitting next to a monitor, looks at the game from raw pixels, and then controls keyboard amounts." }, { "end": 969, "start": 957, "text": " But aside from that, I think Alpha Star has a relatively fair setup, and I think most of the regular players agree to that." }, { "end": 981, "start": 969, "text": " The paper itself is actually a combination of several individual research ideas that could probably easily get into top conference papers on their own." }, { "end": 998, "start": 981, "text": " These ideas are like APCO, APCO-ing post-agredients, which basically is a really simple modification on top of the usual critique laws that helps quite a bit with learning from the right acting." }, { "end": 1013, "start": 998, "text": " I'm actually surprised that people still didn't decouple these ideas out of the paper and haven't tried them out in their own environments or agents." }, { "end": 1016, "start": 1013, "text": " Can you say a bit about your own role in the paper?" }, { "end": 1036, "start": 1016, "text": " Yeah, well, I mean, the project lasted something like four years, so by the time I joined, it was basically right at the end of it. And obviously I couldn't have a major impact on something fundamental like make architectural changes or do novel research ideas." }, { "end": 1048, "start": 1036, "text": " But I've helped with general engineering in the Alpha Star League that I described. I've also helped analyze the results of the millions of games, the agents played." }, { "end": 1053, "start": 1048, "text": " And I added a couple of small paragraphs in the paper itself." }, { "end": 1071, "start": 1053, "text": " So while my impact maybe wasn't that big, I did learn a lot during this process and during this internship. 
And yeah, it helped me a lot to start off my full-time project on the right foot." }, { "end": 1076, "start": 1071, "text": " Do you want to share about what parts of the agent you find personally most interesting?" }, { "end": 1090, "start": 1076, "text": " Honestly, that it learned anything at all is mind blowing to me. I would never have thought that a single relatively small neural network could handle such an environment on a high level of play." }, { "end": 1111, "start": 1090, "text": " I would always say that, yeah, sure, Alpha Go, a bit go, but to play Starcraft, something major would have to be introduced. And here we have a single network playing three different traces across four different maps across many, many variations of strategies." }, { "end": 1122, "start": 1111, "text": " But if I had to point to something specific, I would say that would be forcing the agent to use camera to attend to game events." }, { "end": 1134, "start": 1122, "text": " It maybe doesn't sound like much, but seeing it play in action and actually make these choices to attend different events and make decisions based on that." }, { "end": 1144, "start": 1134, "text": " I'm not saying it's a GI or anything, and I know that under the hood it's basically just metrics, multiplications, but still that that was like a really weird feeling." }, { "end": 1150, "start": 1144, "text": " It's almost like looking at something that's doing reasoning on some level." }, { "end": 1160, "start": 1150, "text": " So do you think that a model-based approach for a future agent for Starcraft would ever be possible or could be a good idea?" }, { "end": 1169, "start": 1160, "text": " Sure, that could be promising, but maybe not the full dynamics model, rather some latent representations." }, { "end": 1174, "start": 1169, "text": " Then you would have to consider why you're doing that." }, { "end": 1184, "start": 1174, "text": " I mean, it can definitely, it won't be worse, so it would definitely improve Alpha Star's performance, but that would take even more resources to train." }, { "end": 1194, "start": 1184, "text": " Well, personally, it would be fun to see Alpha Star clearly beat the top players and achieve this Alpha-Go style of victory." }, { "end": 1200, "start": 1194, "text": " I don't see how this could be the only goal." }, { "end": 1208, "start": 1200, "text": " So you would have to have some research reason to do it, and I don't really see one." }, { "end": 1223, "start": 1208, "text": " So I remember when the Starcraft environment first came out, and it was in the description in the paper, it sounded like a lofty, distant goal to build agents for the full Starcraft game." }, { "end": 1230, "start": 1223, "text": " And then it was seemed like a very short time later that the Grandmaster level paper came out." }, { "end": 1245, "start": 1230, "text": " So can I ask, was that timeline expected? Were you surprised by how quickly they got to that level or your team was able to get to that level?" }, { "end": 1253, "start": 1245, "text": " And yeah, and is Starcraft going to be a challenge for going forward for a long time, like Atari was?" }, { "end": 1262, "start": 1253, "text": " Yeah, so I was definitely surprised to see that result achieved." }, { "end": 1281, "start": 1262, "text": " I do have to clarify that nobody is claiming that Starcraft is solved, so even though Alpha Star beats 99.8% of players, the last 0.002% are probably the hardest to beat." 
}, { "end": 1288, "start": 1281, "text": " And it would take quite a bit of effort to reach 100%." }, { "end": 1298, "start": 1288, "text": " So in that sense, the environment itself is definitely still an open research question." }, { "end": 1314, "start": 1298, "text": " But realistically speaking, it does take a lot of resources to solve. So outside of big industry labs, I'm not sure how feasible it is to do research on it." }, { "end": 1328, "start": 1314, "text": " That's from our own perspective. From something like a meditation learning perspective, there's definitely room for research for both big and small research labs." }, { "end": 1339, "start": 1328, "text": " So I would be personally really interested to see other people try to replicate at least the Alpha Star imitation parts." }, { "end": 1346, "start": 1339, "text": " That would still reach top 80% to 90% of the human players." }, { "end": 1355, "start": 1346, "text": " So in some ways, the OpenAI Dota 2 work seems to be a bit of a parallel world to the Starcraft work from DeepMind." }, { "end": 1359, "start": 1355, "text": " Do you have any thoughts on comparing and contrasting these two projects?" }, { "end": 1370, "start": 1359, "text": " Yeah, sure. Both were really interesting projects from engineering perspective." }, { "end": 1378, "start": 1370, "text": " So as a research engineer, I think OpenAI 5 was definitely really cool achievement." }, { "end": 1388, "start": 1378, "text": " And even though I don't play Dota myself, I can appreciate that the level of complexity of the game is probably on par with Starcraft." }, { "end": 1399, "start": 1388, "text": " So I have to clarify that I do think that it's a really cool result." }, { "end": 1407, "start": 1399, "text": " But they did use some things differently. They didn't focus on making the game fair between the agent and the humans." }, { "end": 1422, "start": 1407, "text": " They also did a lot of reward shaping. So where was Alpha Star was trained purely on the win-loss signal?" }, { "end": 1437, "start": 1422, "text": " Yeah, and I guess this goes for both results, but neither Alpha Star nor OpenAI 5 actually achieved this clear milestone of beating humans." }, { "end": 1442, "start": 1437, "text": " So in that sense, both environments are still open research problems." }, { "end": 1460, "start": 1442, "text": " So if we step back a bit and look at the earlier generation of D-Mind agents like the DQN and AlphaGo, and then if we compare them to the current generation of agents like Agent 57, Alpha Star and Mu0, I guess the new ones are a lot more complex." }, { "end": 1466, "start": 1460, "text": " Do you think that trend will just continue over the next few years?" }, { "end": 1476, "start": 1466, "text": " Are these agents getting just way more complex? What do you think agents might look like in a few years from now?" }, { "end": 1484, "start": 1476, "text": " To be fair, the modern agents tackle way more difficult tasks than something like original DQN." }, { "end": 1508, "start": 1484, "text": " I mean, yeah, sure, Agent 57 is still playing the same material, but the sort of say the leak they're playing is completely different. Agent 57 has to be human baseline by a wide margin on all of the 57 games, whereas the original DQN failed miserably at half of them." }, { "end": 1522, "start": 1508, "text": " So as far as research direction, I can see it go either way. 
Like obviously having simpler agents achieve similar results on current environments is worthwhile." }, { "end": 1534, "start": 1522, "text": " And at the same time, having more complex agents achieve good results on more complex environments is also a good direction." }, { "end": 1552, "start": 1534, "text": " So as you mentioned, like with GPT-3 was just basically a scaled up version of GPT-2. Could there be a lot of progress to be made just by scaling up existing methods or do we really need new methods?" }, { "end": 1572, "start": 1552, "text": " Yeah, again, either direction could work. There's still this problem in RL where deep networks don't really work. Up to like a couple of years ago, the best network that you could use is the three layer CONFNET basically." }, { "end": 1592, "start": 1572, "text": " Then resonance came and somewhat improved that. And even like the most complex architectures like Alpha Star and OpenEA5, they're complex from architecture perspective, but they're really shallow compared to something like GPT-3." }, { "end": 1610, "start": 1592, "text": " So I'm not sure if this could be improved if you just scale it really, really by a lot. But I would probably guess that something more fundamental has to happen." }, { "end": 1618, "start": 1610, "text": " Either a new neural network layer or something in the RL agents themselves." }, { "end": 1638, "start": 1618, "text": " So we talked about a few different families of agents. Do you think these different families of agents will continue by forgetting and splitting into different sub-families or do you see maybe these lines of work converging at some point?" }, { "end": 1654, "start": 1638, "text": " Well, I mean, I'd love to see it like a unifying string theory type of thing that would combine all of these algorithms into one. But at this point in time, I think we're barely in the Newton's physics era." }, { "end": 1678, "start": 1654, "text": " So yeah, I think right now it's just worthwhile to explore different directions. Then again, we do advance much quicker than physics from 1600s. So who knows. Also maybe we'll get an AGI from some different direction and then just ask it to solve for RL for us." }, { "end": 1686, "start": 1678, "text": " So do you think that people will be coding agents in like TensorFlow or Jax or or PyTorch for the foreseeable future?" }, { "end": 1700, "start": 1686, "text": " Yeah, so in short, Jax is a library that exposes a set of so-called program transformations like grad or jid on top of NAMPAI like API." }, { "end": 1720, "start": 1700, "text": " And this all runs really efficiently on GPU and GPU. So if you like NAMPAI, you would love Jax. And in that sense, I think Jax would shine the more of the beaten path your research goes because you can just try it with NAMPAI." }, { "end": 1732, "start": 1720, "text": " And yeah, basically the where your ideas get, the better Jax would suit you. And in that sense, I think it would be beneficial for our all context." }, { "end": 1744, "start": 1732, "text": " Whether it would be Jax TensorFlow or PyTorch down the line, I'm not sure maybe it's something like Swift for TensorFlow finally comes through." }, { "end": 1754, "start": 1744, "text": " But yeah, I think this goes back to the initial question of why I like engineering and that's nobody has it figured out." 
}, { "end": 1766, "start": 1754, "text": " So yeah, I actually don't know what will happen in like two years maybe we'll have another framework and that's what's keeping it exciting for me." }, { "end": 1776, "start": 1766, "text": " By the way, I have to say I see a lot of discussion and read it that TensorFlow is basically dead and Google is moving everyone to Jax." }, { "end": 1786, "start": 1776, "text": " And that's just not true because TensorFlow has a lot of things figured out that Jax Core does just don't have the time for." }, { "end": 1796, "start": 1786, "text": " I think I mean, I'm not speaking for Google obviously, but I think as far as things go, I don't think TensorFlow will go away anywhere in time soon in that sense." }, { "end": 1808, "start": 1796, "text": " Okay, so maybe aside from which specific framework is used, but do you think that that people will continue to make agents by hand for the foreseeable future?" }, { "end": 1818, "start": 1808, "text": " I mean, maybe AJ will solve this for us, but aside from that, I think there's no harm in having conductive biases." }, { "end": 1828, "start": 1818, "text": " And I don't see the reason why we should avoid using them, especially right now." }, { "end": 1842, "start": 1828, "text": " So there's also directions like neural architecture search, but that takes even more computer resources than normal RL." }, { "end": 1850, "start": 1842, "text": " So yeah, I think for the foreseeable future we'll be causing these agents algorithms by hand." }, { "end": 1859, "start": 1850, "text": " Do you think that Python will remain de facto standard for ML and RL, or do you think maybe Julia or something else will dominate before too long?" }, { "end": 1862, "start": 1859, "text": " Yeah, so this is kind of tricky." }, { "end": 1867, "start": 1862, "text": " Languages like Julia clearly do many things right?" }, { "end": 1877, "start": 1867, "text": " And I would love to like use them professionally, but Python just has so much momentum right now that." }, { "end": 1888, "start": 1877, "text": " Especially when we're talking about large companies with pre-existing internal infrastructure, the cost of switching to another language just might not be worth it." }, { "end": 1895, "start": 1888, "text": " So whatever it will be that everyone switches to it has to." }, { "end": 1901, "start": 1895, "text": " I don't know, I have some features that we don't even we can't even imagine right now." }, { "end": 1909, "start": 1901, "text": " So yeah, well personally, I would love to use Julia. I don't see people switching like globally to it." }, { "end": 1915, "start": 1909, "text": " So besides these things, how do you see the role of research engineering changing over the next few years?" }, { "end": 1919, "start": 1915, "text": " What do you think that types of changes we might expect?" }, { "end": 1931, "start": 1919, "text": " Yeah, so basically, as I said, the the the line between research engineering and software engineering was blurry as it is." }, { "end": 1946, "start": 1931, "text": " So I think these two job families will merge back into one eventually once the next generation of students comes to the job market who." }, { "end": 1951, "start": 1946, "text": " We have the relevant machine learning background." }, { "end": 1958, "start": 1951, "text": " But but you could say that's just terminology and whatever you call it, it's the same job." 
}, { "end": 1967, "start": 1958, "text": " The other thing is in research, scientist positions, I see a trend for the main expertise is really valued." }, { "end": 1977, "start": 1967, "text": " So instead of having like pure machine learning research positions, you have machine learning plus chemistry or machine learning plus BIO." }, { "end": 1989, "start": 1977, "text": " And I think we will have the same for research and engineering or software engineering where you have like machine learning plus compilers or machine learning plus distributed systems." }, { "end": 2000, "start": 1989, "text": " And I mean, that's basically already happening. Like you have general positions, but then once you get hired, you start specializing because you just can't know anything." }, { "end": 2007, "start": 2000, "text": " And but I think that will be like explicit explicitly expected from people to have these dual expertise." }, { "end": 2014, "start": 2007, "text": " Okay, so some some of the agents we talked about today definitely burn a lot of compute, as you mentioned." }, { "end": 2020, "start": 2014, "text": " Do you have any opinions about like what we can still do in RL with small compute?" }, { "end": 2025, "start": 2020, "text": " Or do you feel like most of the interesting things are require a lot of compute?" }, { "end": 2031, "start": 2025, "text": " Do you think there's still interesting progress that we can achieve with small compute?" }, { "end": 2038, "start": 2031, "text": " Well, I mean, I think people over estimate how much compute even industry labs use." }, { "end": 2061, "start": 2038, "text": " So aside from like large projects like Alpha Star and open A A5, it's still clearly more beneficial to use smaller environments, which use smaller amount of resources, just because you have faster results in your experiments and you can iterate faster." }, { "end": 2078, "start": 2061, "text": " So basically, I think any fundamental research like improving sample efficiency can be done on basically toy environments that would be accessible to any like researcher." }, { "end": 2087, "start": 2078, "text": " And then again, another thing is I think a lot of people aren't really utilizing their resources fully." }, { "end": 2100, "start": 2087, "text": " So I remember when the original DKN came out having something like thousands samples per second on a single machine, meant that you're basically a god of performance." }, { "end": 2116, "start": 2100, "text": " Whereas now there are papers that squeeze out like hundreds of thousands of samples per second on a single machine with some clever tricks to hide the bottleneck of communication between CPU and GPU." }, { "end": 2127, "start": 2116, "text": " Basically, I think understanding all of these tricky bits and how Python itself works under a hood would definitely help." }, { "end": 2134, "start": 2127, "text": " That's also assuming that you have no control over the environment like with Starcraft." }, { "end": 2151, "start": 2134, "text": " But if you're establishing no research, then you can just re-implement the environment with vectorization like with SIMD on CPUs or with CUDA kernels." }, { "end": 2166, "start": 2151, "text": " In fact, I think I recently stumbled on a paper from a couple of years ago by Nvidia where they basically re-implemented most of the Atari environment on CUDA. But for some reason, not many people are using it." 
}, { "end": 2180, "start": 2166, "text": " And I think maybe there's some inherent inertia in academia against these engineering improvements, which is kind of ironic considering cases like CalaxNet and G3." }, { "end": 2190, "start": 2180, "text": " So in preparing for this episode, you mentioned another set of papers that you found interesting regarding QLAMDA, Retrace, Acer, VTrace, and Pala." }, { "end": 2194, "start": 2190, "text": " And as you mentioned, these are pretty big topics." }, { "end": 2206, "start": 2194, "text": " Do you want to share with us some major ideas in this sequence and what makes this set of papers of particular interest to you?" }, { "end": 2216, "start": 2206, "text": " Yeah, so in short, the goal of those papers is to incorporate a return-based approach into off-policy RL." }, { "end": 2223, "start": 2216, "text": " So the initial QLAMDA extends multi-step QLarning theory to support it." }, { "end": 2238, "start": 2223, "text": " Then the Retrace idea is extending policy gradients using truncated important sampling to correct for off-policiness." }, { "end": 2255, "start": 2238, "text": " And then the Acer paper adapts Retrace approach to the full acto-critical algorithm, but Acer still relies on the state-taction value estimator, the Q function." }, { "end": 2266, "start": 2255, "text": " So finally, in Pala, the Retrace is adapted to the value estimator, the V function." }, { "end": 2275, "start": 2266, "text": " And basically, the reason I brought up these papers is I just really liked the story that these papers paint." }, { "end": 2288, "start": 2275, "text": " So they start off with a really highly theoretical idea and eventually I arrived at a system that drove probably the largest scale RL project, Alpha Star." }, { "end": 2302, "start": 2288, "text": " And yes, I remember when I first found out about in Pala, I thought it just came out of nowhere, but then I would slowly go through the references and discover how altars built up through it through the years." }, { "end": 2306, "start": 2302, "text": " And I found it quite inspirational as a junior researcher." }, { "end": 2319, "start": 2306, "text": " So to be clear, Alpha Star is using V Trace and some other things. It talks about off-policy, but is it really truly off-policy? Can Alpha Star learn from batch data?" }, { "end": 2337, "start": 2319, "text": " Well, it's not quite, it's off-policy-ish. So it can't wander off too far. It's basically learning on like two batches of the experience." }, { "end": 2343, "start": 2337, "text": " So it's definitely not off-policy, like you think of Dekuane or something." }, { "end": 2357, "start": 2343, "text": " But it does have this off-policy-ish element. Another reason there's this element is because the RL setup itself is highly distributed." }, { "end": 2375, "start": 2357, "text": " So you can often have situations where one experience from one agent comes way after the network updated. And so you have this situation where either you throw it out or you have to somehow adapt for it." }, { "end": 2387, "start": 2375, "text": " And that's where things like V Trace come in that, well, important something because basically what drives this correction." }, { "end": 2396, "start": 2387, "text": " So besides what we've mentioned here so far, do you have any comments or on directions in the RL world that you find interesting lately?" 
}, { "end": 2410, "start": 2396, "text": " Yeah, so there's a funny video from like 10 years ago by Simon Peyton Jones, who's a lead designer of Haskell, and the video is titled Haskell's useless." }, { "end": 2426, "start": 2410, "text": " Basically, in short, he puts a bunch of languages on like a grid of useful useless. And he puts languages like C++ and Java in the useful pile, and then Haskell into the useless pile." }, { "end": 2442, "start": 2426, "text": " And the reason he gives is that even though languages like C++ could potentially blow up your machine if you're not careful, it doesn't really matter because it's just so practically useless, useful." }, { "end": 2462, "start": 2442, "text": " So where I'm getting at with this is that I think RL right now is the Haskell of machine learning world. It has really strong and beautiful foundational ideas, but it still has some ways to go before becoming like really practical for mainstream use." }, { "end": 2480, "start": 2462, "text": " So one direction that could take us there, I think is offline RL, which basically detaches the process of gathering the samples from the learning process." }, { "end": 2489, "start": 2480, "text": " And yeah, I think offline RL is something that could like open me doors in practice." }, { "end": 2498, "start": 2489, "text": " Roman Ring, thank you so much for sharing your time and insight today. It's been a real pleasure to speak with you. And I know our audience will really appreciate it too. Thanks again." }, { "end": 2501, "start": 2498, "text": " Yeah, thank you so much for having me." }, { "end": 2516, "start": 2501, "text": " Notes and links for this episode are at talkrl.com. If you like this show, I need your support. You can help in a few ways." }, { "end": 2537, "start": 2516, "text": " One, subscribe on your favorite podcast platform. Subscriptions make a big difference. Two, follow us on Twitter and talk RL podcast. We love retweets. Three, give us a five-star rating on Apple podcasts. If you don't think we deserve five stars, let us know on Twitter what we could do better." } ]
Shimon Whiteson
Shimon Whiteson on his WhiRL lab, his work at Waymo UK, variBAD, QMIX, co-operative multi-agent RL, StarCraft Multi-Agent Challenge, advice to grad students, and much ...
https://media.transistor…073.mp3?src=site
This is TalkRL Podcast. All reinforcement learning, all the time. Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan. I'm super excited to introduce our guest today. Shimon Whiteson is a professor of computer science at Oxford University, head of WhiRL, the Whiteson Research Lab at Oxford, and head of research at Waymo UK. Professor Whiteson, thank you so much for joining us today. Thanks for having me. It's a pleasure to be here. So how do you describe your personal research interests? Well, pretty much all of my research is about figuring out how to control autonomous systems, like robots, but also software agents in video games and other applications. And we take a data-driven approach, so that means primarily reinforcement learning. That's sort of the primary mechanism by which we derive control policies from data for autonomous systems, but also related tools like learning from demonstration. And in recent years, I've been focusing a lot on multi-agent reinforcement learning and also meta reinforcement learning. So can you say a bit about what's happening at your WhiRL lab lately? Well, a lot is happening. There are, I think, too many projects going on to name them all. But over the past few years we have kind of cohered around a couple of subgroups. So one is about multi-agent reinforcement learning and the other one is about meta reinforcement learning. On the multi-agent reinforcement learning side, there are still a lot of core questions that aren't settled. We actually have some exciting new results now using PPO that are kind of upending the conventional wisdom about what does and doesn't work in multi-agent reinforcement learning. And we're trying to extend multi-agent reinforcement learning to continuous domains, domains with continuous action spaces, and solve problems like how to transfer from one task to related tasks, which might have different numbers of agents and different entities in the world. And on the meta reinforcement learning side, we're looking at Bayesian approaches primarily. So Bayesian reinforcement learning is a topic I've been interested in for a long time, but it always seemed kind of too good to be true, something that could not be made practical. But it's starting to feel like the time for Bayesian reinforcement learning has finally come. So that's another exciting topic we're working on. Can you share a bit with us about how you think about the roadmap for a lab like WhiRL? Do you plan far ahead, or does the fast pace of ML mean there are very short planning horizons? It's not just machine learning: I think in research in general, planning is kind of an impossible task. I think there's an expression like planning is essential, but plans are useless, which I think is very, very much true. Planning is a useful exercise, but any plan you make is almost immediately obsolete. The first set of experiments you run comes out differently than you expect, and all your plans go out the window. So my strategy focuses a lot more on people, just trying to recruit the best people and give them what they need to succeed. I find that people always assume I have some grand research ambition, some overarching plan for my whole career, but actually things are very student-driven. The lesson I've learned over the years is that the best results are obtained when students are given the freedom to work on the thing that they're really passionate about.
So I do try to gently steer the students, but I don't really set the agenda, because I find it counterproductive. I'd like to talk a little bit about your work at Latent Logic, your company that Waymo acquired in 2019. I understand your team was developing imitation learning to simulate human behavior on the road. Can you tell us a bit about that? Sure. As I mentioned before, in addition to reinforcement learning, a big topic I work on is learning from demonstration. Learning from demonstration is a synonym for imitation learning. So rather than assuming access to some reward signal, we learn from demonstrations, from example trajectories provided by an expert who knows how to solve the task. And both at Latent Logic and now in its new incarnation as Waymo UK, the mission is basically the same, which is to provide a crucial piece of the simulation puzzle for self-driving. So simulation is extremely important to achieving self-driving. There's basically no viable path to full autonomy that doesn't go through simulation. Even an industry leader like Waymo, which has a huge data advantage, relies heavily on simulation in order to iterate quickly and to meet extremely high safety standards. Simulation is a really important part of the safety evaluation process that's used to determine when a new version of the software can be safely upgraded or deployed to a new domain. But simulation is only useful if the simulations are realistic. And to make the simulations realistic, we need to have realistic models of the behavior of the other agents that the self-driving car might interact with. The human drivers, cyclists and pedestrians that are also on the road: we need to know how they'll respond to behavior from the self-driving car. So we need to learn realistic behavior models, realistic policies for those agents, to put in the simulation, or the simulation will be useless. And that's where imitation learning, learning from demonstration, comes in. So we're building new imitation learning tools that derive such behavior models from the data that's collected by Waymo's own cars on the road, to try to learn realistic behavior models to make those simulators more worthwhile. So we have a couple of papers to discuss today, the first being variBAD. And I remember this one at ICLR, and the first author, Luisa Zintgraf, had a memorable line to me in the poster session. She said, variBAD is a very good method. So let's talk about that. Can you share with us the general gist of this paper? What's going on in variBAD? Yeah, so this is a great example of what I was referring to about Bayesian reinforcement learning. So the starting point for variBAD is an existing formalism called the Bayes-adaptive Markov decision process, which is the fully Bayesian formulation of the reinforcement learning problem. So the idea behind the Bayes-adaptive MDP is that we model the problem of reinforcement learning itself as a problem of partial observability. Like, why are we doing reinforcement learning? Why are we learning instead of planning? Because we don't know what Markov decision process we're in. So we model that as a latent state: basically the transition and reward functions, the unknown parts of the Markov decision process, are treated as some kind of latent state. And we have some observations that are correlated with them, but that don't disambiguate it.
So this treats the problem as a partially observable Markov decision process where the hidden variables correspond to the transition and reward functions of the unknown MDP that we're trying to do RL in. So what we have to do is we have an inference problem. We have to do inference about this latent state, to maintain a posterior about what MDP we're in as we act in the world. And then the really important part is we need to plan in belief space. So this posterior over what MDP we're in, this is a belief. And as we take actions, we travel around from one belief to another. And if we can figure out the optimal policy for traveling through belief space, then we will have an optimal solution to the exploration problem in reinforcement learning. We will trade off information-gathering and reward-gathering actions in exactly the right way, so as to maximize our expected return over some planning horizon. So this is a known, existing idea, the Bayes-adaptive MDP. The trick is to make it practical: planning in belief space is really difficult, the inference itself can be difficult, and here we have an additional challenge, which is that we don't even know how to describe this hidden state space. So what we do with variBAD is we basically try to take this Bayes-adaptive formalism and combine it with some modern tools to come up with an approximate way to achieve Bayes-optimal behavior. So we use a variational autoencoder to approximate the inference. And then we use deep reinforcement learning methods, policy gradient methods, to approximate the planning step. So basically we have a network which does the inference, in the form of this variational autoencoder, and then the approximate posterior is fed as input to a policy downstream, which learns how to take actions conditioned on its posterior, that is, to learn a policy for moving around in belief space. And because we take this approximate approach, rather than doing some analytical inference where you would compute your posterior directly from the prior, instead we're sampling from this prior. And by sampling from the prior, what we basically do is turn it into a meta reinforcement learning problem, because every sample you draw from this prior is like a training task that you can use to figure out the mapping between beliefs and actions that you should take when you're then deployed in some new task, which you assume was sampled from the same prior. So from a meta reinforcement learning perspective, the key insight here is that this policy that acts in some new unknown task should condition on your posterior, on the whole posterior, not on some point estimate or some sample from the posterior, which is what a lot of other methods have done. Because when you condition on this posterior, you have the chance to approximate the Bayes-optimal behavior that, in principle, a solution to the Bayes-adaptive MDP would give you. So in this setting, does the agent observe the reward at test time? Yeah, but it doesn't know what reward function generated that reward. It can observe rewards, and that's like a clue about what the reward function is. So we had Taylor Killian on the show a few episodes back, and he did some related work with hidden parameter MDPs, HiP-MDPs, which the variBAD paper cites, and in his case it was medical patients.
So the transition function had some variation, but the reward function didn't vary: we wanted the patients to get better. So if I understand correctly, in variBAD the task could differ in the reward function as well as the transitions. Can you just help me understand, in what kinds of settings would we want the reward function to vary? Sure. So a good example of this is when a task corresponds to a user. If you think about, for example, the agent being some kind of recommendation system or some ad-serving system, then the reward function corresponds to when the user accepts the recommendation or clicks on the ad or views the recommended document or whatever. And the situations in which that would happen will be different for each user. So we can think of that as a reward function changing from task to task as it changes from user to user. But what's part of the transition function and what's part of the reward function is a bit arbitrary. You can move back and forth from one to the other just by changing the way you model the state space, so it's not really a fundamental distinction. But from the perspective of the Bayes-adaptive MDP, everything that's unknown about what task you're in, any part of the description of the MDP, whether it's the transition function or the reward function that's unknown to you, that's what you maintain a posterior about. I see. Okay. And then I wanted to ask you about this one line that said, and I'm quoting, a main difference of variBAD to many existing Bayesian RL methods is that we meta-learn the inference procedure, i.e. how to do a posterior update. Was there any other option than to meta-learn that, or does meta-learning help there in some way? So, in principle, you can do the inference exactly, but that's not going to be practical for the kind of tasks that we're interested in solving. The main issue is that we don't have, a priori, a good low-dimensional representation of this hidden state. We can write down a big table to describe the transition and reward functions and then say, okay, we need to fill in all the values in this table, but that's going to be really high-dimensional, and if you have continuous states, it's not even going to be possible. So if we represent it as these tables, the inference will be easy: we can use Dirichlet priors and just do inference by updating counts. But then this Bayes-optimal policy is going to have to explore every state separately. We won't learn about new tasks, we won't figure out what new task we're in nearly fast enough, to be an effective agent. So what we have to do is find some low-dimensional latent representation that lets us generalize, so we have to fit only a handful of parameters to disambiguate what state we're in. And we have to come up with this latent representation at exactly the same time that we learn a procedure for doing inference on that latent space. And that's exactly what the variational autoencoder can do; it's one tool for solving exactly that problem. And then, once we solve that problem, we just have to learn a policy that conditions on the approximate posterior to approximate this Bayes-optimal behavior.
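As a rough sketch of the architecture shape described above (an approximate inference network whose posterior is fed to a downstream policy), here is a minimal, hypothetical PyTorch outline. The module names, the GRU encoder, and all sizes are my own illustrative assumptions; the decoder and the ELBO plus RL training objectives of the actual variBAD method are omitted.

```python
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    """Approximates the posterior q(m | trajectory so far) over a latent task
    variable m, as a GRU over (state, action, reward) transitions."""
    def __init__(self, obs_dim, act_dim, latent_dim, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(obs_dim + act_dim + 1, hidden_dim, batch_first=True)
        self.mu_head = nn.Linear(hidden_dim, latent_dim)
        self.logvar_head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, transitions):               # [batch, t, obs+act+1]
        _, h = self.gru(transitions)
        h = h.squeeze(0)                           # final hidden state per trajectory
        return self.mu_head(h), self.logvar_head(h)

class BeliefConditionedPolicy(nn.Module):
    """Policy that conditions on the current state and the posterior parameters
    themselves (not a sample), so it can in principle trade off exploration and
    exploitation the way a Bayes-optimal policy would."""
    def __init__(self, obs_dim, latent_dim, n_actions, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + 2 * latent_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, n_actions),
        )

    def forward(self, obs, mu, logvar):
        return self.net(torch.cat([obs, mu, logvar], dim=-1))   # action logits
```

The key point from the discussion shows up in the policy's inputs: it conditions on the whole approximate posterior (mean and log-variance), not on a point estimate or a single sample from it.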
Cool, thanks. All right, so let's move on to multi-agent RL. You have such a depth of experience and knowledge in multi-agent RL, so I'm looking forward to asking this. What are the main challenges with multi-agent RL at this point? So I'll restrict myself to the cooperative setting, because that's the one I've focused the most on and there's enough to talk about there. So one thing is representational. There are still core questions about how we should be representing the complex value functions that we need to learn in order to solve a cooperative multi-agent RL problem. So factorization of a complex value function has been a big focus, and a lot of the progress that has occurred has come from factorization. But we still don't really know what the right factorizations are, and what kind of factorizations will be needed for the even harder tasks we'd like to solve. Another one is robustness. We need to be able to generalize better from one task to another, to learn policies that can operate in situations where there might be different numbers of agents, where agents enter and leave the scene, where there might be different numbers of other entities in the world, non-agent entities. And then I guess the other thing I would mention would just be algorithmic questions. So how centralized should these algorithms be? Should the agents be learning independently? Should they be totally centralized? Should they be somewhere in between? As I mentioned before, we have some new results, and those results are kind of upending the conventional wisdom about that. So it seems like maybe centralization of these algorithms is not as important as we had thought, and as the results from the past couple of years had been suggesting. So my own introduction to multi-agent RL was building RL agents for the Pommerman competition at NeurIPS 2018, the Pommerman environment from a couple of years back, and I noticed that there are just so many different ways that things can be arranged in multi-agent RL. So I wonder, is there any question at this point of whether we have the right frameworks for thinking about these problems? And maybe you partially touched on this before, but is it more a matter of solving technical issues where the problems have already been framed and well defined? Yeah, I think there are definitional problems. So I think in any field, half the battle is asking the right questions, and that's certainly true in multi-agent RL. In fact, maybe it's especially true in multi-agent RL, because as you said, once you move from the single-agent setting to the multi-agent setting, there's an explosion of possible settings and all these different assumptions you can make. And that's a practical problem, because it leads to a lot of confusion, where people assume one thing about your formalism and actually the formalism works differently. So just even communicating about what we're doing is tricky, but it also makes it harder to figure out what the right questions are, because there's this overwhelming bounty of choices of formalisms you can focus on. I think a fair amount of the success, the progress that we've made in my lab on solving multi-agent RL problems, has come from identifying the right assumptions, the right setting to focus on. So finding a sweet spot, in particular focusing on this CTDE setting, this centralized training, decentralized execution, which seems to be kind of a key assumption. It's restrictive in a way that gives you leverage to solve the problem, but not so restrictive that it's unrealistic or the algorithms are not actually useful.
And so I do think there needs to be more focus on finding those sweet-spot settings in order to make progress. In non-cooperative settings, it's even more difficult, because just pinning down what it is that we're trying to do, what the point of multi-agent learning is, what the definition of success would be, that itself becomes non-trivial in non-cooperative settings. So in Pommerman, in some versions, there was a communication channel. And I noticed when we talk about decentralized policies, we're assuming that agents can't see each other's observations or they can't communicate. I wonder how much of the complexity of multi-agent RL is due to that inability to communicate? And if cooperative agents had high-capacity communication channels, does that make the problem really simple, or does that just move the complexity around? So if they have high-capacity communication channels, like if communication is basically free, then there's a sense in which the problem isn't really multi-agent. So basically, if every agent can broadcast all of its observations to the other agents, then we don't really need to think about there being multiple agents. We're in what's called the multi-agent MDP, which is not a very good name, because it's not really inherently multi-agent. We can think of it as if there's one puppeteer agent that's deciding what everyone has to do, and that agent just happens to face a multi-dimensional action space where you can think of each dimension as being an agent, but that's just semantics. So without some constraint on communication, it isn't fundamentally multi-agent. That doesn't mean that it's easier to solve. So there's a distinction between natural and artificial decentralization. Natural decentralization is imposed by the world, because, say, your sensors don't allow you to communicate. And then there's artificial decentralization, where we as designers say, okay, we're going to treat each agent in some semi-independent or local way, because having every agent condition on everything that every agent observed is just not a tractable learning problem. So even if it's not fundamentally multi-agent, it may still be very difficult. If the agents can communicate but the bandwidth is limited, then they face a different challenge, which is that they have to invent a language or a protocol by which they're going to communicate and use that channel efficiently. And if they can't communicate at all, well, then coordination becomes difficult, because they don't know what their teammates have observed and therefore what they're going to do. But the problem may be simpler in the sense that at least they don't have all those observations coming in that they have to process, and they don't have to think of a clever protocol to use to communicate with their teammates. So people are talking about what it's going to take to use RL in the real world, and it seems like there's a lot of work left to do there. There's a NeurIPS 2020 workshop on this topic. Is there a parallel discussion on what it'll take to use multi-agent RL in the real world? Or is multi-agent RL just too early on to think about that? I don't think it's too early on. I think certainly there's a lot left to do; there are a lot of open challenges. But I think we actually already have algorithms that can be of practical use in many real-world applications.
If you think about robots in warehouses that need to coordinate their behavior to, say, avoid colliding with each other when they're fetching packages, or video game AI where you have to learn agents that play effectively with human teammates and against human opponents. And of course, self-driving cars: they share the road with other agents. So I think there are a number of real-world applications where the existing tools are already of use, even if there may still be open questions. So can you comment on the use of deep RL and multi-agent RL for autonomous vehicles? You said that the current methods might be useful for that. Is that relevant now, or is that something for the future? Yeah. So, to give just one of many examples, one core problem in self-driving is behavior prediction. Before you even get to thinking about how you're going to act, you need to be able to make predictions about how other agents are going to act. And that already implies a kind of game-theoretic reasoning. The actions of the other agents will be conditioned on some assumptions they make about how you are going to act: if I try to merge, the car behind me will hit the brakes in order to make room for me. So that's the kind of reasoning in which agents have a model of how the other agents on the road are going to act, and their behavior conditions on that model. So if we're going to make good predictions about what other agents are going to do, we need to think game-theoretically. We need to have models of them, and we need to think about the models that they have of us. So that almost sounds like a theory of mind thing. Can we just stop at the one level? It is a theory of mind thing. I think we don't know what level we need to go to. Okay, so in our first episode, we talked about multi-agent RL research from Natasha Jaques, who had the social influence paper for solving social dilemma environments, where standard RL agents couldn't solve the collective task, like tragedy-of-the-commons type tasks. So are the types of methods you're talking about, cooperative multi-agent RL, suitable for these types of tasks? Well, the tragedy of the commons is something that arises in a mixed setting. Okay, so I guess the answer is no. I see, it's a different setting. Yeah, so there's a related challenge which arises in the cooperative setting. So in the cooperative setting, you have the multi-agent credit assignment problem. If you know RL, you probably know the temporal credit assignment problem, which is like: I win a game of chess, and then I look back at move 17 and say, how much credit do I give to move 17 for my ultimate victory? Well, we have the same idea in the multi-agent setting, but it's across agents instead of time. So the football team scores a goal, and then you have to ask, how much credit does each player on the team get for that goal? If you don't solve multi-agent credit assignment properly, then you can have lazy agents. If you don't tease apart what each agent's contribution is and make sure to reward them only if they actually made a difference, then they might end up being lazy. So it's not the same as the tragedy of the commons, which arises from the fact that interests are not aligned. This is just a learning difficulty among cooperative agents, but it is kind of conceptually similar. Cool. Okay.
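As an aside on the multi-agent credit assignment problem just described: one classic way to tease apart an individual agent's contribution is a counterfactual baseline that marginalizes out that agent's action while holding its teammates' actions fixed, the idea behind COMA-style counterfactual policy gradients. The sketch below is only an illustration of that idea with made-up toy numbers; it is not a method discussed in this episode.

```python
import numpy as np

def counterfactual_advantage(q_joint, pi_a, joint_action, agent):
    """Counterfactual advantage for one agent (illustrative sketch).

    q_joint: dict mapping a joint action (tuple of per-agent actions) to the
             centralized Q-value in the current state.
    pi_a: the agent's own policy over its actions, shape [n_actions].
    joint_action: tuple of the actions actually taken by all agents.
    agent: index of the agent whose credit we want to isolate.
    """
    baseline = 0.0
    for alt_action, prob in enumerate(pi_a):
        # Replace only this agent's action, keep everyone else's fixed.
        counterfactual = tuple(alt_action if i == agent else a
                               for i, a in enumerate(joint_action))
        baseline += prob * q_joint[counterfactual]
    return q_joint[joint_action] - baseline

# Toy example: two agents, two actions each.
q = {(0, 0): 1.0, (0, 1): 0.2, (1, 0): 1.1, (1, 1): 0.3}
adv = counterfactual_advantage(q, pi_a=np.array([0.5, 0.5]),
                               joint_action=(0, 0), agent=1)
print(adv)   # 0.4: agent 1's choice genuinely improved the team's value
```

If an agent's action makes no difference to the joint value, its advantage is zero, which is one way to avoid rewarding the lazy agents mentioned above.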
And then there's this issue of how to break symmetry in decentralized multi-agent RL: like, if two agents are approaching a doorway, which goes first so they don't bump into each other? Is that a trivial problem or is that a deep problem in multi-agent RL? It can be a deep problem. So in the multi-agent literature, this is typically addressed with what are called social conventions. The idea is that in advance you have some agreement, like: if we come to an intersection, the person who sees the red light stops and the person who sees the green light goes. This is an arbitrary social convention, but because we've agreed on it, we're able to avoid collisions. So in the cooperative setting, when we have this CTDE assumption, this centralized training, decentralized execution setting, then we have the opportunity to form social conventions during training. The training process is centralized, so we all agree together on the set of policies we're going to use later when we're deployed, but we won't be able to communicate any more after deployment. And typically, to speed up the learning process, we do parameter sharing. So during the centralized phase, we're learning policies for all the agents, but really we're just learning one policy, which all the agents can use. But those policies typically condition on the agent index, and you could also, if you wanted, condition on a richer description of the agent's type. And this allows the agents to be as heterogeneous as they like, even though they completely share a policy, share parameters. So in this way, you can break ties by conditioning on the agent index. But if you don't have the CTDE setting, if the agents don't get to plan together, if it's an ad hoc teamwork setting or a fully decentralized setting where they learn from scratch and are already decentralized when they start learning, or if the agents are not cooperative, so this pre-planning phase doesn't even make any sense because they're not on the same team, in all of those settings this symmetry-breaking problem can be more fundamental. Well, that was super interesting. Okay, let's move on to QMIX, the second paper we planned to discuss. The title of that paper is Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning, by Rashid et al. in 2020, with yourself as a co-author. Can you tell us briefly, what is this paper about? Yeah, so this paper is a great example of trying to leverage this CTDE setting. We want to do centralized training, the result of which will yield decentralized policies that we can execute without further communication. And what we're trying to do is learn a centralized value function. So we're trying to learn a big value function that all the agents will use, which estimates the expected return conditioned on the actions of all the agents. So this is a joint value function that takes all the agents into account. But because that value function is really complex and difficult to learn, we're going to learn a factored value function. So we're going to have a factorization that exploits conditional independence properties between the agents in order to simplify the value function and make it easier to learn. And we're going to choose a particular form of factorization that allows us to achieve decentralizability. So what this value function does is it's going to be like a mixture of local value functions.
So each agent will have its own kind of local estimate of value, and then all those local estimates will get mixed together into an estimate of the joint centralized value. And if we're clever about how we do this mixing, then we'll achieve this decentralizability property, which basically says that rather than having to do some really expensive maximization over the joint action space every time we want to select actions or do a learning update, which not only would be expensive but would not be possible in the decentralized setting, instead we're just going to do a bunch of local argmaxes, where every agent will just individually maximize with respect to its local value function. And if we choose the mixing in the right way, then these two come out the same. So we have this decentralizability property if we can do these local argmaxes and get the same result as if we did the expensive global argmax. And it turns out that if the mixing function has this monotonicity property, if the joint value function has a monotonic relationship to each of the local value functions, then this decentralizability property holds, and we can trivially decentralize the value function once we've learned it. So for the listeners, I just want to point out that Professor Whiteson gave a lecture on QMIX at the MIT Embodied Intelligence series earlier this year, which I liked a lot, and where you also touched on SMAC and MAVEN; I encourage listeners to check that out, and the link will be in the show notes. So SMAC, the StarCraft Multi-Agent Challenge testbed, that was managing units in StarCraft II. From what I understand, that's different from what AlphaStar was doing. AlphaStar was trying to control everything with one centralized agent, and SMAC is separate agents for each unit. Is that correct? Yeah, that's exactly right. I mean, AlphaStar is an amazing, amazing achievement, of course. The similarities between their setting and what we are doing are actually quite superficial. So we do both focus on StarCraft II, but in AlphaStar, they're solving the whole problem, micro and macro, and they're doing it with a single puppeteer agent. So we can think of this as the multi-agent MDP setting that I mentioned earlier: there's a single agent that decides what every unit is going to do. Whereas what we're trying to do is model the Dec-POMDP, the decentralized partially observable Markov decision process. The multi-agent aspect is inherent, because the agents have only a partial view of the world and they can't communicate directly with each other. So in StarCraft, that means the agents have a limited field of view, and we're trying to learn how to coordinate their behavior just in micromanagement. So when should we think about centralized training? When would that be the right or wrong way to frame things in, say, real-world multi-agent RL? So the setting makes sense when, on deployment, it's no longer possible for the agents to communicate directly, or we don't want to rely on that communication. So anytime you want solutions that are really robust, you might not want to rely on communication channels that might go down or that might just be unreliable. So then you can learn a policy where the agents don't condition on, don't rely on, getting input directly from the other agents, and in that way you can learn more robust policies. So those are the settings where you want this decentralized policy.
The centralization of the training, that is possible anytime the agents can train together in advance, and this is, I think, almost all of the time, because even in a single-agent setting, much less a multi-agent one, we don't deploy agents in the real world tabula rasa. We don't say, okay, we'll just put this robot out in the world and have it learn what MDP it's in from scratch. We train them in advance and then we fine-tune them on deployment. So that training together almost always happens. The other assumption that I think is important to mention is that this setting does assume that the world is stationary. We're in some Dec-POMDP, but that Dec-POMDP is not changing, so that the plan we make during the centralized training phase is still relevant on deployment. Now, that doesn't mean that if things are non-stationary, if the world is changing over time, this setting is inapplicable, because it could still be the starting point. So if you have a world where you aren't sure that the laws of physics won't change over time, you might still want to do centralized training and decentralized execution, but then on top of that, you would want to have some decentralized learning algorithms, so the agents can continue to adapt as the world changes. So if I understood correctly, this paper builds on the value decomposition network by Sunehag et al., 2017, that's VDN, which I think I heard you say avoids the extremes of independent Q functions on the one hand and fully centralized Q functions on the other hand. Can you say a bit about how QMIX extends VDN, or how it is different from VDN? Yeah, so both VDN and QMIX are, I would say, on the extreme of that spectrum, in the sense that they're learning centralized value functions, value functions that condition on the actions of all the agents and in fact all of their observations, but they just factor that centralized value function in a different way. So VDN and QMIX are both designed so as to factor the value function in a way that allows this decentralizability property I mentioned, so that after the value function is learned, every agent can just take its local component of it and use that to decide how to act, just maximize with respect to its local component. But VDN achieves this by just mixing these local value functions as a simple summation, so the joint value function is just a sum of the local value functions. And that's enough, that achieves this decentralizability, but in an unnecessarily restrictive way. So the idea behind QMIX was: if what we're after is decentralizability, the property we need is just monotonicity, and we can have a richer monotonic way of mixing these local value functions that isn't just a summation. Okay, and then QMIX has a mixing network, and if I understand correctly, the weights are generated by a hypernetwork. Could you maybe comment on the hypernetwork here? That's something that I haven't seen very often. How does that help? Yeah, so the hypernetwork is actually a key ingredient, and we've done some ablation experiments that confirm that this is actually quite important to performance. So a hypernetwork is just a neural network, the outputs of which specify the weights of another neural network. So in QMIX we have these agent networks; these are the local value functions that each agent can compute based on its local observations.
And then we have a mixing network; it takes the output of all of those agent networks as input, and it produces as output an estimate of the joint value. But the mixing network's weights are not expressed directly; they are expressed as the output of a hypernetwork. So when we train, we optimize the parameters of the agent network, which are shared among all the agents, and we optimize the parameters of the hypernetwork, and the hypernetwork then in turn produces the weights of the mixing network. And the reason to do that is because the mixing network is only used during the training phase. So we have centralized training and decentralized execution, and anything that you're going to throw away during the execution phase is something that can take advantage of things that are only available during centralized training. So this mixing network takes as input not just the outputs of the agent networks; it also takes as input the global state, the state which will be hidden from the agents in the execution phase, but which during centralized training we can assume access to. And that's fine, this centralized state can be input to the mixing network, because we will not need the mixing network during the execution phase, so the fact that the state will be hidden during execution is no problem. But the mixing network is constrained to adhere to this monotonicity property, so the weights of the mixing network have to be non-negative so that monotonicity is preserved. And we don't want to restrict the relationship between the state and the mixing network: we want the mixing network to be able to change flexibly as the centralized state changes, and we want to be able to mix these local value functions in a different way for every state. So by having the state be input to the hypernetwork rather than to the mixing network, we get around this monotonicity constraint and enable more flexible conditioning on the global state. And then, by global state, does that mean the set of all the agent observations together? So actually, in the Dec-POMDP in general, even if you concatenate all of the agents' observations together, that still does not disambiguate the global state. There's a related formalism called the Dec-MDP, where every agent has partial observability, but if you put all the agents' observations together, then you know the state. In the Dec-POMDP, you still don't know the state even when you put all the observations together. But the centralized training happens in a simulator or something, where we assume we can just cheat and look under the hood and see what the global state is. The only thing is we're not allowed to use it: the policies we learn are not allowed to condition on it in the decentralized execution phase. But we have it while we're training and we want to exploit it, and we exploit it by having the mixing network condition on it via the hypernetwork. Okay, so we're assuming a simulator that can reveal the global state, which I guess wouldn't be the case if this is off-policy data from deployed robots or something like that. I mean, that totally depends on the situation, but yeah, there are some situations where you might during training have access to the joint observation but still not have access to the state; that can happen.
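Here is a rough PyTorch sketch of the kind of hypernetwork-based monotonic mixing just described. The layer sizes and names are my own illustrative assumptions, not the reference QMIX implementation; VDN corresponds to replacing this whole mixer with a plain sum of the agents' Q-values.

```python
import torch
import torch.nn as nn

class MonotonicMixer(nn.Module):
    """Mixes per-agent Q-values into Q_tot, QMIX-style (illustrative sketch).
    Hypernetworks map the global state s to the mixing weights; taking abs()
    keeps those weights non-negative, so Q_tot is monotonic in each agent's Q."""
    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        self.n_agents, self.embed_dim = n_agents, embed_dim
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(nn.Linear(state_dim, embed_dim), nn.ReLU(),
                                      nn.Linear(embed_dim, 1))

    def forward(self, agent_qs, state):
        # agent_qs: [batch, n_agents] chosen-action Q-values; state: [batch, state_dim]
        b = agent_qs.size(0)
        w1 = torch.abs(self.hyper_w1(state)).view(b, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(b, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs.view(b, 1, self.n_agents), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(b, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(b, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(b, 1)   # Q_tot
```

Because the mixing weights are forced to be non-negative, Q_tot increases monotonically with each agent's local Q-value, which is what lets every agent act greedily with respect to its own local Q-function at execution time and still recover the same joint action as a global argmax. The global state only enters through the hypernetworks, whose own weights are unconstrained, so the mixing can still vary freely from state to state.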
Cool. Okay, so the monotonicity, if I understand it correctly: does that mean that all the agents' value functions have to be aligned in a sense? Like we were talking about earlier, I had in the notes an example where one agent had a slightly different value function, they liked something that wasn't good for the team. Would that violate the monotonicity assumption? No, that's a different issue. If the agents have different value functions, then we're in a mixed setting, so their preferences are not aligned. And what that means is that the value function is vector-valued; there's actually a different value for every agent. Like, if you write down a normal-form game, in each cell of that normal-form game there's more than one number, because there's a payoff for every agent. So the value is actually a vector. Here we still have a fully cooperative setting, so the value is a scalar. The output of this mixing function, this estimate of the joint value function, is a scalar value, just like in single-agent RL, but it's composed by mixing a bunch of local value estimates. So when we say that it's monotonic, what we mean is that our estimate of this scalar joint value function has the property that it increases monotonically with respect to each agent's local value. What this means, in effect, is that the action each agent wants to take does not depend on the action choices of the other agents, because if it changes its action so as to increase its local value, it knows that that will also increase the joint value. In the MIT lecture that we mentioned earlier, you talked about the Google Research Football environment. Can you comment on how the StarCraft Multi-Agent Challenge and the Google Research Football environment present different types of multi-agent challenges, and what kind of dimensions do you think about when comparing environments like that? So I can speculate about that. I don't know the answer, because we haven't done enough with Google Research Football yet. But I can speculate, because we do know a lot from the related problem of the RoboCup simulation league, which is just another football simulator that's been around for a long time. Google Research Football is kind of the same idea, but in a new framework that makes learning easy and that connects nicely to deep learning packages and stuff like that. But what we know from the RoboCup simulation league is that in a task like this, hierarchy is very important. It's going to be very hard to learn a flat, monolithic policy that plays world-class football. It's a naturally hierarchical task, and you need to leverage that hierarchy in order to make the learning problem tractable. So you need to think about high-level skills like passing the ball, shooting on goal, playing defense, that kind of thing. Hierarchical reinforcement learning is an active topic of research. It's a very difficult, challenging area of research, where I think there still hasn't been a wow result yet, but at the same time we know that it's an essential ingredient that we will eventually need to have. From the perspective of multi-agent reinforcement learning, when we add hierarchy into the equation, then things get interesting, because hierarchy increases the level of simultaneity. So basically, if you're acting with a monolithic policy, you might be acting at very fine-grained time steps: you act every time step, and a time step might be a fraction of a second.
But if you have a hierarchical policy, at the higher levels of the hierarchy things are abstracted away. So taking an action means deciding to shoot on goal, an action which might last several seconds. And from a multi-agent perspective, this increases your simultaneity with the other agents. So when you commit to a high-level thing like shooting on goal, a lot happens while that's being executed, and the other agents are doing a lot in the meantime. And this greatly increases the importance of coordination among the agents. So if simultaneity is low, if a time step is only a fraction of a second, it's not that important to coordinate with your teammates, because if you choose the wrong action, one time step later you'll observe what the other agents did, you'll see it wasn't right, and you can change your response. So instead of coordinating, you can just react to what the other agents do. If simultaneity is high, because you're choosing high-level abstract skills that take several seconds to execute, then coordination may be crucially important. By the time you realize that you should have passed instead of shooting on goal, you've already lost the game. Cool, okay, just to be clear, can you define simultaneity for me? I can define it in a hand-wavy way. Yeah, perfect. We can think of it as how much happens in one time step: how much is the world going to change between now and when you get to act again? If it's only a tiny bit, then you can just wait and see what the other guy did and react. Like, including the changes in the environment and the other agents? Yeah, yeah. Okay. But if a lot is going to change before you act again, then it's really important that your choice is coordinated with your teammates; it will be too late to fix it later. Okay, that makes sense. All right, so we have a fair number of grad students and future grad students in the audience; I know that from the Twitter account. Do you have any advice for students who might want you, or someone like you, as an advisor, or generally for those who want to pursue post-graduate studies in ML and RL? Sure, I can say a couple of things. I would say there's no substitute for a strong foundation, and that's as much true now as it was before deep learning became a big thing. Especially important in machine learning are topics like calculus, linear algebra, probability and statistics; a really good foundation in those topics is absolutely essential. I think it's also really helpful to get some research experience before you start something like a PhD. I think a lot of people think research is something that you're either good at or you aren't, but actually it's a very learnable skill; it's just a skill that can only be learned by doing it. So I think it's really helpful to get some experience, not only so that if you do start a PhD you have some of the tools you need to actually make a success of it, but so that you find out whether you like it, and so that you're able to demonstrate on your application that you have developed some of those skills, that you have the capacity to complete the degree you're applying for. So sometimes I wonder about people who did, say, a PhD on support vector machines when they were big, before ML turned to deep learning, and those aren't that relevant anymore, so you might think, well, was that a good choice?
How should people pick a topic that's going to stay relevant, one that they're confident will still matter? So my opinion might be a bit unconventional about this sort of thing, but I would say: don't worry about it. I see a lot of people doing a lot of hand-wringing about whether they're working on the right topic, and I think it's a waste of energy. I think research impact is really difficult to predict and almost entirely out of your control. It depends not only on what happens with the development of science and technology itself, but also on a lot of political stuff: what happens to be popular, what happens with trends and dynamics within the machine learning community that have nothing to do with science and just have to do with people. So I think it's a mistake to get too invested in the idea that your happiness and success depend on whether some arbitrary thing becomes popular or not. I think it's important to work on something that you're passionate about, that you find exciting, because otherwise it will go nowhere; if you aren't really enthusiastic about it, nothing good will come out of it. But whether it's going to have impact, who knows. I don't think anybody can really predict that, and I don't think that if you pick a topic and it turns out to be unfashionable, that's a huge problem. If you look at a lot of the really big figures in the machine learning community today, the ones that were around before deep learning became popular, they were working on something else. They were working on support vector machines, or they were working on graphical models, or they were working on Bayesian optimization, and when the deep learning revolution happened, they adapted. They learned from these new tools, they revised their beliefs and their views and their priorities, they figured out how to use these new tools effectively on the problems that they were interested in, they changed their research focus as a result of the changing situation, and then they made important, valuable contributions to deep learning. So one of the exciting things about doing academic work is that it never gets old, because there is always this new stuff coming down the pipeline that you get to learn about and explore and do things with. That's also the challenge of it, because you can never afford to rest on your laurels or get stale, and as you get older it gets harder and harder to wrap your head around each new revolution and make a contribution to it. But that's the fun of it. Can you fill us in on conferences: are there multi-agent learning conferences? I guess our audience might be familiar with NeurIPS and ICML and ICLR, but are there other conferences outside of those that are good to pay attention to for multi-agent work? Yeah, so those that you mentioned are the primary ones, and the ones where we primarily submit our publications in my lab. The other one worth mentioning is AAMAS, Autonomous Agents and Multi-Agent Systems; that was actually the first conference I ever published at, and a lot of multi-agent stuff happens there.
I guess the other one would be ICAPS, which is the planning conference. So it's not multi-agent reinforcement learning, but, for example, in the Dec-POMDP formalism a lot of the work is on the planning side, not the learning side, so a lot of multi-agent stuff is published there too. So besides your own work and the things we've mentioned here today already, are there other things happening in RL that you find interesting lately? A lot of interesting stuff is happening in RL; the main difficulty is keeping up with it, it's such a firehose of papers being published. The things that interest me tend to be the things that contradict my prior beliefs, because that's when I really learn something new. Fortunately, I'm wrong a lot, so I'm always learning something. I would say a recent example of that is what's been happening with unsupervised learning in reinforcement learning, so methods like CURL that use contrastive losses to learn, in an unsupervised way, good representations for reinforcement learning. I'm on record as a skeptic of unsupervised approaches for reinforcement learning; I've made public comments about it on a number of occasions. So I think it's fair to say I underestimated their success: I would not have predicted these recent methods and how successful they've been. I would say I've only partially revised my opinion on the subject, because I am still concerned about cases where the choice of features, what makes a good feature, depends crucially on the reward function. I mean, to me it's self-evident that in general the way to represent the world, the way to process your observations into some higher-level representation, depends very much on the task that you're in. So I don't think that we can do this in an unsupervised way, but unsupervised methods can help, and I was surprised to learn how much they could help. So I think we came in just under the hour. This has been a fascinating interview, Professor Shimon Whiteson. You're so kind to take the time out to speak with us; I know you want to tuck your kids into bed tonight, so I'm glad to end it there. Thank you so much on behalf of myself and our audience, Professor Whiteson. Yeah, thanks for having me, it was a pleasure. Notes and links for this episode are at talkrl.com. If you like this show, I need your support. You can help in a few ways. One, subscribe on your favorite podcast platform; subscriptions make a big difference. Two, follow us on Twitter at TalkRL Podcast; we love retweets. Three, give us a five-star rating on Apple Podcasts. If you don't think we deserve five stars, let us know on Twitter what we could do better. TalkRL.
[ { "end": 12.8, "start": 0, "text": " This is TalkAreal Podcast. All reinforcement learning, all the time." }, { "end": 19.84, "start": 12.8, "text": " Interviews at Brilliant Bokes across the world of RL. I'm your host, Robin Chohan." }, { "end": 24.48, "start": 19.84, "text": " I'm super excited to introduce our guest today. Shimon Whiteson is a professor of computer" }, { "end": 29.560000000000002, "start": 24.48, "text": " science at Oxford University. The head of the world, the Whiteson Research Lab at Oxford," }, { "end": 33.32, "start": 29.56, "text": " head of research at Waymo UK. Professor Whiteson, thank you so much for joining us today." }, { "end": 39.32, "start": 33.32, "text": " Thanks for having me. It's a pleasure to be here. So how do you describe your personal research interests?" }, { "end": 46.2, "start": 39.32, "text": " Well, pretty much all of my research is about figuring out how to control autonomous systems," }, { "end": 51.4, "start": 46.2, "text": " like robots, but also software agents like video games and other applications." }, { "end": 58.120000000000005, "start": 51.4, "text": " And we take a day-to-day driven approach, so that means primarily reinforcement learning. That's" }, { "end": 64.2, "start": 58.12, "text": " sort of the primary mechanism by which we derive control policies from data for autonomous systems," }, { "end": 70.2, "start": 64.2, "text": " but also related tools like learning from demonstration. And well, in recent years, I've been" }, { "end": 74.52, "start": 70.2, "text": " focusing a lot on multi-agent reinforcement learning and also met a reinforcement learning." }, { "end": 79.24, "start": 75.24, "text": " So can you say a bit about what's happening at your RL lab lately?" }, { "end": 85, "start": 80.03999999999999, "text": " Well, a lot is happening. There's I think too many projects going on to name them all." }, { "end": 90.92, "start": 85, "text": " But we have kind of over the past few years coheared around a couple subgroups." }, { "end": 96.28, "start": 90.92, "text": " So one is about multi-agent reinforcement learning and the other one is about metar reinforcement learning." }, { "end": 102.68, "start": 96.28, "text": " So on the multi-agent reinforcement learning side, there's still a lot of core questions that aren't settled." }, { "end": 111, "start": 102.68, "text": " We actually have some exciting new results now using PPO, that's kind of upending the conventional" }, { "end": 115.88, "start": 111, "text": " wisdom about what doesn't doesn't work in multi-agent reinforcement learning. And we're trying to" }, { "end": 121.48, "start": 115.88, "text": " extend multi-agent reinforcement learning to continuous domains, domains with continuous" }, { "end": 127.32, "start": 121.48, "text": " action spaces, and solve problems like how to transfer from one task to related tasks." }, { "end": 131.08, "start": 127.32, "text": " They might have different numbers of agents and different entities in the world." }, { "end": 138.52, "start": 133.56, "text": " And on the metar reinforcement learning side, we're looking at at Bayesian approaches" }, { "end": 142.60000000000002, "start": 138.52, "text": " primarily. So Bayesian reinforcement learning is a topic I've been interested in for a long time." 
}, { "end": 151.56, "start": 143.16, "text": " But although we seem kind of too good to be true, something that could not be made practical," }, { "end": 155.4, "start": 151.56, "text": " but it's starting to feel like the time for Bayesian reinforcement learning has finally come." }, { "end": 158.12, "start": 155.4, "text": " So that's another exciting topic we're working on." }, { "end": 163.4, "start": 159.24, "text": " Can you share a bit with us about how do you think about the roadmap for a lab like" }, { "end": 170.84, "start": 163.4, "text": " world? Do you plan for a head or is the fast-paced of ML mean there's very short planning horizons?" }, { "end": 176.04000000000002, "start": 173, "text": " It's not just machine learning. I think in research in general," }, { "end": 184.6, "start": 177.72, "text": " planning is kind of an impossible task. I think there's an expression like planning is" }, { "end": 189.16, "start": 184.6, "text": " essential, but plans are useless, which I think is very, very much true. Planning is a useful" }, { "end": 194.92, "start": 189.16, "text": " exercise, but any plan you make is almost immediately obsolete. The first set of experiments you run" }, { "end": 197.48, "start": 194.92, "text": " comes out differently than you expect, and all your plans go out the window." }, { "end": 203.72, "start": 199.4, "text": " So my strategy focuses a lot more on people, just trying to recruit the best people and give" }, { "end": 210.28, "start": 203.72, "text": " them what they need to succeed. I find that people always assume I have some grand like research" }, { "end": 215.16, "start": 210.28, "text": " ambition, some overarching plan for my whole career, but actually things are very student driven." }, { "end": 220.68, "start": 215.16, "text": " The thing the lesson I've learned over the years is that the best results are obtained when" }, { "end": 224.51999999999998, "start": 220.68, "text": " students are given the freedom to work on the thing that they're really passionate about. So" }, { "end": 231.07999999999998, "start": 225.24, "text": " I do try to gently steer the students, but I don't really set the agenda because I find it counterproductive." }, { "end": 237.4, "start": 232.44, "text": " I'd like to talk a little bit about your work at latent logic, your company that Waymo acquired" }, { "end": 243.4, "start": 237.4, "text": " in 2019. I understand your team was developing imitation learning to simulate human behavior on the" }, { "end": 251, "start": 243.4, "text": " road. Can you tell us a bit about that? Sure. As I mentioned before, in addition to reinforcement" }, { "end": 255, "start": 251, "text": " learning, a big topic I work on is learning from demonstration. Learning from demonstration is a" }, { "end": 261.24, "start": 255, "text": " synonym for imitation learning. So rather than assuming access to some reward signal, we learn from" }, { "end": 266.92, "start": 262.04, "text": " some demonstrations, from some example trajectories provided by an expert who knows how to solve" }, { "end": 275.32, "start": 266.92, "text": " task. And both at latent logic and now in its new incarnation as Waymo UK, the mission is basically" }, { "end": 283.48, "start": 275.32, "text": " the same, which is to provide a crucial piece of the simulation puzzle for self-driving. So simulation" }, { "end": 289.24, "start": 283.48, "text": " is extremely important to achieving self-driving. 
There's basically no viable path to full autonomy" }, { "end": 295.32, "start": 289.24, "text": " that doesn't go through simulation. Even an industry leader like Waymo that has a huge data" }, { "end": 301.88, "start": 295.32, "text": " advantage relies heavily on simulation in order to iterate quickly and to meet like extremely" }, { "end": 307.24, "start": 301.88, "text": " high safety standards. Simulation is a really important part of the safety evaluation process" }, { "end": 312.44, "start": 307.24, "text": " that's used to determine when a new version of the software can be safely upgraded or deployed" }, { "end": 320.6, "start": 312.44, "text": " to a new domain. But that's only the simulation is only useful if the simulations are realistic." }, { "end": 325.64000000000004, "start": 320.6, "text": " And to make the simulations realistic, we need to have realistic models of the behavior of the" }, { "end": 330.04, "start": 325.64000000000004, "text": " other agents that the self-driving car might interact with. The human drivers, cyclists and pedestrians" }, { "end": 334.52000000000004, "start": 330.04, "text": " that are also on the road, we need to know how they'll respond to behavior from the self-driving car." }, { "end": 340.6, "start": 335.16, "text": " So we need to learn realistic behavior models, realistic policies for those agents to put in" }, { "end": 345.24, "start": 340.6, "text": " the simulation or the simulation will be useless. And that's where imitation learning," }, { "end": 349.8, "start": 345.24, "text": " learning demonstration comes in. So we're building new imitation learning tools that" }, { "end": 354.2, "start": 349.8, "text": " derive such behavior from the data that's collected by Waymo's own cars on the road" }, { "end": 359.72, "start": 355, "text": " to try to learn realistic behavior models to make those simulators more worthwhile." }, { "end": 366.2, "start": 360.68, "text": " So we have a couple papers to discuss today, the first being very bad. And I remember this one at" }, { "end": 374.68, "start": 366.2, "text": " ICLR and the first author, Wiza Zincraf, had a memorable line to me in the poster session. She said," }, { "end": 381, "start": 374.68, "text": " very bad is a very good method. So let's talk about that. Can you share with us the general" }, { "end": 385.64, "start": 381, "text": " just of this paper? What's going on in very bad? Yeah, so this is a great example of what I was" }, { "end": 390.92, "start": 385.64, "text": " referring to about Bayesian reinforcement learning. So the starting point for very bad is" }, { "end": 397.16, "start": 391.96000000000004, "text": " like an existing formalism called the Bayes Adaptive Markov decision process, which is like" }, { "end": 401.48, "start": 398.12, "text": " the sort of fully Bayesian formulation of the reinforcement learning problem." }, { "end": 407.56, "start": 401.48, "text": " So the idea behind the Bayes Adaptive MVP is that we model the problem of reinforcement learning" }, { "end": 413, "start": 407.56, "text": " itself as a problem of partial observability. Like why are we doing reinforcement learning?" }, { "end": 418.04, "start": 413, "text": " Why aren't we just planning when why are we learning instead of planning? Because we don't know" }, { "end": 426.6, "start": 418.04, "text": " what Markov decision process we're in. 
So we we model that as a latent state, basically the" }, { "end": 431.16, "start": 426.6, "text": " transition and reward functions, the unknown parts of the Markov decision process. They're treated" }, { "end": 435.8, "start": 431.16, "text": " as some kind of latent state. And we have some observations that are correlated with them," }, { "end": 441.72, "start": 435.8, "text": " but that don't disambiguate it. So this treats the problem as a partially observable Markov decision" }, { "end": 448.36, "start": 441.72, "text": " process where the hidden variables correspond to the transition reward functions of the unknown" }, { "end": 453.16, "start": 448.36, "text": " MVP that we're trying to do RLN. So what we have to do is we have an inference problem. We have" }, { "end": 458.20000000000005, "start": 453.16, "text": " to do inference about this latency with to maintain a posterior about what MVP were in as we act in" }, { "end": 465.4, "start": 458.2, "text": " the world. And then the really important part is we need to plan in belief space. So this posterior" }, { "end": 471.4, "start": 465.4, "text": " over the over what MVP were in this is a belief. And as we take actions, we travel around from one" }, { "end": 476.44, "start": 471.4, "text": " belief to another. And if we can figure out the optimal policy for for traveling through belief" }, { "end": 480.91999999999996, "start": 476.44, "text": " space, then we will have an optimal solution to the expiration problem in reinforcement learning." }, { "end": 487.24, "start": 480.91999999999996, "text": " We will trade off information gathering and reward gathering actions in exactly the right way." }, { "end": 491.32, "start": 487.24, "text": " So is to maximize our expected return over some some planning horizon." }, { "end": 496.68, "start": 492.76, "text": " So that this is like a known existing idea, the base adaptive MVP." }, { "end": 502.92, "start": 498.28000000000003, "text": " The trick is to make it practical like planning and belief space is really difficult. The inference" }, { "end": 509, "start": 502.92, "text": " itself can be difficult. And you know, here we haven't even we have an additional challenge," }, { "end": 513.64, "start": 509, "text": " which is that we don't even know how to describe the this hidden state space." }, { "end": 519.72, "start": 513.64, "text": " So what we do with very bad is we basically try to to take this base adaptive" }, { "end": 525.16, "start": 520.36, "text": " formalism and combine it with some modern tools to come up with an approximate way to achieve" }, { "end": 530.84, "start": 525.16, "text": " base optimal behaviors. So we use a variational autoencoder to approximate the inference. And then" }, { "end": 535.08, "start": 530.84, "text": " we use reinforcement learning method deep, deep reinforcement learning methods policy gradient" }, { "end": 541, "start": 535.08, "text": " methods to approximate the planning step. So basically we have a network which does the inference" }, { "end": 546.68, "start": 541, "text": " in the form of this variational autoencoder. And then the posterior, the approximate posterior is" }, { "end": 552.68, "start": 546.68, "text": " then fed as input to a policy downstream, which learns how to take actions conditioned on its" }, { "end": 559.88, "start": 552.68, "text": " posterior. So to learn a policy for moving around in belief space. 
And you know, because we take" }, { "end": 564.52, "start": 559.88, "text": " this approximate approach, rather than doing some analytical inference where you would compute your" }, { "end": 572.52, "start": 564.52, "text": " posterior directly from the prior, instead we're sampling from this prior. And by sampling from the" }, { "end": 578.6, "start": 572.52, "text": " prior, what we basically do is turn it into a meta reinforcement learning problem. Because every" }, { "end": 582.92, "start": 578.6, "text": " sample you draw from this prior, it's like a training task that you can use to figure out," }, { "end": 587, "start": 583.72, "text": " you know, what is the mapping between beliefs and actions that you should take when you're" }, { "end": 591.0799999999999, "start": 587, "text": " then deployed in some new task, which you assume was sampled from the same prior." }, { "end": 596.6800000000001, "start": 591.08, "text": " So from a meta reinforcement learning perspective, the key insight here is that this policy that" }, { "end": 602.0400000000001, "start": 596.6800000000001, "text": " acts in some new unknown task, it should condition on your posterior, on the whole posterior," }, { "end": 607, "start": 602.6, "text": " not on some point estimate or some sample from the posterior, which is what a lot of other methods" }, { "end": 612.0400000000001, "start": 607, "text": " have done. Because when you condition on this posterior, you have the chance to approximate the" }, { "end": 617.08, "start": 612.0400000000001, "text": " base optimal behavior that, you know, in principle, a solution to the base adaptive mdp would give you." }, { "end": 621.5600000000001, "start": 617.08, "text": " So in this setting, does the agent observe the reward at test time?" }, { "end": 628.76, "start": 622.36, "text": " Yeah, but it doesn't know what reward function generated that reward, but it can observe rewards," }, { "end": 631.88, "start": 628.76, "text": " and that's like a clue about what the reward function is." }, { "end": 636.44, "start": 631.88, "text": " So, so we had Taylor Killian on the show a few episodes back and he did some related work with" }, { "end": 642.76, "start": 636.84, "text": " hidden parameter mdp's, hip mdp's, which the very bad paper sites and in his case," }, { "end": 648.4399999999999, "start": 642.76, "text": " it was it was medical patients. So the transition function had some variation, but the reward function" }, { "end": 653.16, "start": 648.4399999999999, "text": " didn't vary. We wanted the patients to get better. So if I understand correctly, in very bad," }, { "end": 657.8, "start": 653.16, "text": " the task could differ in the reward function as well as the transitions. Can you just help me," }, { "end": 665.8, "start": 657.8, "text": " help me understand like what's kind of settings would we, would we want the reward function to vary?" }, { "end": 671.24, "start": 665.8, "text": " Sure. So, I mean, a good example of this is when a task corresponds to a user." }, { "end": 676.6, "start": 671.24, "text": " So, if you think about, for example, if the agent is some kind of recommendation system or some ad" }, { "end": 681.8, "start": 676.6, "text": " serving system, then the reward function corresponds to when the user accepts the recommendation or" }, { "end": 687.48, "start": 681.8, "text": " clicks on the hat or, you know, views the recommended document or whatever. 
And the situations in which" }, { "end": 691.48, "start": 687.48, "text": " that would happen will be different for each user. So we can think of that as a reward function" }, { "end": 697.24, "start": 691.48, "text": " changing from task to task as it changes from user to user. But, you know, what's part of the transition" }, { "end": 704.12, "start": 697.24, "text": " function, what's part of the reward function is a bit arbitrary. So you can move back and forth" }, { "end": 709.32, "start": 704.12, "text": " from one to the other just by changing the way you model the state space. So it's not really a" }, { "end": 714.28, "start": 709.32, "text": " fundamental distinction. But, you know, from the perspective of the base adaptive MVP," }, { "end": 719.08, "start": 714.28, "text": " everything that's unknown about what task you're in any part of the description of the MVP," }, { "end": 724.12, "start": 719.08, "text": " whether it's the transition function or the reward function that's unknown to you, that's what you" }, { "end": 730.28, "start": 724.12, "text": " maintain a posterior belt. I see. Okay. And then I wanted to ask you about this one line that's said," }, { "end": 734.6, "start": 730.28, "text": " and I'm quoting, I mean, difference of very bad to many existing Bayesian RL methods is that we" }, { "end": 740.76, "start": 734.6, "text": " metal-learn the inference procedure, i.e. how to do a posterior update. Is it, was there any other" }, { "end": 745.48, "start": 740.76, "text": " option than to metal-learn that or was that does metal-learning there help in some way?" }, { "end": 751.5600000000001, "start": 746.84, "text": " So, I mean, in principle, you can do the inference exactly, but that's not going to be practical" }, { "end": 757.4, "start": 751.56, "text": " for the kind of tasks that we're interested in solving. So the main issue is that we don't have a" }, { "end": 763.56, "start": 757.4, "text": " priori a good low-dimensional representation of this hidden state. You know, we can write down a" }, { "end": 767.8, "start": 763.56, "text": " big table to design the transition and reward functions and then say, okay, we need to fill in all" }, { "end": 772.1999999999999, "start": 767.8, "text": " the values in this table, but that's going to be really high-dimensional. And, you know, if you have" }, { "end": 778.1199999999999, "start": 772.1999999999999, "text": " continuous states, it's not even going to be possible. So, you know, if we represent it as these" }, { "end": 783.16, "start": 778.12, "text": " tables, the inference will be easy. You know, we can use Dirichlet priors and just do inference by" }, { "end": 789.16, "start": 783.16, "text": " updating counts. But then, you know, this Bay's optimal policy, it's going to have to explore every" }, { "end": 794.36, "start": 789.16, "text": " state separately. We won't learn about new tasks. We won't figure out what new tasks we're in," }, { "end": 801.48, "start": 794.36, "text": " nearly quickly enough to be like an effective agent. So, what we have to do is we have to find some" }, { "end": 805.88, "start": 801.48, "text": " low-dimensional latent representation that lets us generalize. So, we have to fit, you know," }, { "end": 811.72, "start": 805.88, "text": " only a handful of parameters to disempeguate what state we're in. 
And we have to come up with" }, { "end": 815.48, "start": 811.72, "text": " this latent representation that exactly the same time that we learn a procedure for doing inference" }, { "end": 820.04, "start": 815.48, "text": " on that latent space. And, you know, that's exactly what the variational autoencoder can do. It's" }, { "end": 825.8, "start": 820.04, "text": " one tool for solving exactly that problem. And then, you know, once we solve that problem, we just" }, { "end": 832.36, "start": 825.8, "text": " have to learn a policy that conditions on the approximate posterior to approximate this Bay's" }, { "end": 838.04, "start": 832.36, "text": " optimal behavior. Cool. Thanks. All right. So, let's move on to multi-agent RL. So, you have such" }, { "end": 842.76, "start": 838.04, "text": " adept of experiencing knowledge and multi-agent RL. So, I'm looking forward to asking this." }, { "end": 846.6800000000001, "start": 842.76, "text": " What are the main challenges with multi-agent RL at this point?" }, { "end": 852.52, "start": 848.2, "text": " So, I'll restrict myself to the cooperative setting because that's the one I've focused the most" }, { "end": 858.6, "start": 852.52, "text": " on and there's enough to talk about there. So, one thing is representational. There are still," }, { "end": 865.32, "start": 858.6, "text": " sort of core questions about how we should be representing the complex value functions that we" }, { "end": 871.4, "start": 865.32, "text": " need to learn in order to solve a cooperative multi-agent RL problem. So, factorization of a" }, { "end": 878.2, "start": 871.4, "text": " complex value function has been a big focus and a lot of the progress that has occurred has come" }, { "end": 882.6800000000001, "start": 878.2, "text": " from factorization. But we don't, we still don't know really what are the right factorizations" }, { "end": 886.36, "start": 882.6800000000001, "text": " and what kind of factorizations will be needed for the even harder tasks we like to solve." }, { "end": 893.8000000000001, "start": 886.36, "text": " Another one is robustness. We need to be able to generalize better from one task to another to" }, { "end": 899.72, "start": 893.8000000000001, "text": " learn policies that can operate in situations where there might be different numbers of agents," }, { "end": 905.5600000000001, "start": 899.72, "text": " when an agent enters and leave the scene, where there might be different numbers of other entities" }, { "end": 911.08, "start": 905.5600000000001, "text": " in the world, non-agent entities. And then I guess the other thing I would mention would just be" }, { "end": 916.76, "start": 911.08, "text": " algorithmic questions. So, how centralized should these algorithms be? Should the agents be learning" }, { "end": 919.8000000000001, "start": 916.76, "text": " independently? Should they be totally centralized? Should they be somewhere in between?" }, { "end": 927.64, "start": 922.2800000000001, "text": " As I mentioned before, we have some new results and those results are kind of upending the" }, { "end": 931.88, "start": 927.64, "text": " conventional wisdom about that. So, it seems like maybe centralization of these algorithms is not as" }, { "end": 938.36, "start": 931.88, "text": " important as we had thought and as the results from the past couple years had been suggesting." 
}, { "end": 943.64, "start": 938.36, "text": " So, my own introduction to Multi-Asian RL was building RL agents for the Palmerman" }, { "end": 947.88, "start": 943.64, "text": " competition in Europe's 2018. The Palmerman environment a couple years back and I noticed that" }, { "end": 952.84, "start": 947.88, "text": " there are just so many different ways that things can be arranged in Multi-Asian RL. So," }, { "end": 958.44, "start": 952.84, "text": " I wonder, is there any question at this point of whether we have the right frameworks for thinking" }, { "end": 964.36, "start": 958.44, "text": " about these problems? Like, is it more, and maybe partially touched on this before, but is it more" }, { "end": 970.6, "start": 964.36, "text": " a matter of solving technical issues where the problems have already been framed and well-defined?" }, { "end": 975.96, "start": 971.72, "text": " Yeah, I think there are definitional problems. So, I think in any field," }, { "end": 981.32, "start": 975.96, "text": " so that half the battle is asking the right questions and that that's certainly true in Multi-Asian" }, { "end": 987.72, "start": 981.32, "text": " RL. In fact, maybe it's especially true in Multi-Asian RL because as you said, once you move from" }, { "end": 991.4, "start": 987.72, "text": " the single agent setting to the Multi-Asian setting, there's an explosion of possible settings" }, { "end": 996.1999999999999, "start": 991.4, "text": " and all these different assumptions you can make. And that's a practical problem because it leads" }, { "end": 1001.72, "start": 996.1999999999999, "text": " to a lot of confusion, where people assume one thing about your formalism and actually the" }, { "end": 1007.0799999999999, "start": 1001.72, "text": " formalism works differently. So, just even communicating about what we're doing is tricky," }, { "end": 1010.68, "start": 1007.0799999999999, "text": " but it also makes it harder to figure out what the right questions are because there's this" }, { "end": 1018.76, "start": 1010.68, "text": " overwhelming like, like, a bounty of choices of formalisms you can focus on. I think a fair amount" }, { "end": 1025.16, "start": 1018.76, "text": " of the success, the progress that we've made in my lab on solving Multi-Asian RL problems has come" }, { "end": 1030.12, "start": 1025.16, "text": " from identifying sort of the right assumptions. What is the right setting to focus on? So," }, { "end": 1035.4, "start": 1030.12, "text": " finding a sweet spot in particular, focusing on this CTDE setting, this essentialized training," }, { "end": 1041.16, "start": 1035.4, "text": " a decentralized execution, which seems to be kind of a key assumption. It's restrictive in a way" }, { "end": 1045.32, "start": 1041.16, "text": " that gives you leverage to solve the problem, but not so restrictive that it's unrealistic or the" }, { "end": 1051.1599999999999, "start": 1045.32, "text": " algorithms are not actually useful. And so, I do think there needs to be more focus on finding" }, { "end": 1055.8799999999999, "start": 1051.1599999999999, "text": " those sweet spot settings in order to make progress. 
In non-corporative settings, it's even more" }, { "end": 1059.8, "start": 1055.8799999999999, "text": " difficult because just even pinning down what it is that we're trying to do, what is the point" }, { "end": 1065.3999999999999, "start": 1059.8, "text": " of Multi-Asian learning, what would the definition of success be that itself becomes non-trivial" }, { "end": 1070.4399999999998, "start": 1065.3999999999999, "text": " in non-corporative settings? So, in Pomeranian, in some versions, there was a communication channel." }, { "end": 1077.72, "start": 1070.44, "text": " And I noticed when we talk about decentralized policies, we're assuming that agents can't see" }, { "end": 1082.76, "start": 1077.72, "text": " each other's observations or they can't communicate. I wonder how much of the complexity of Multi-Asian RL" }, { "end": 1088.2, "start": 1082.76, "text": " is due to that inability to communicate? And if agents had cooperative agents had like" }, { "end": 1094.28, "start": 1088.2, "text": " a high capacity communication channels, does that make the problem just really simple or does" }, { "end": 1102.44, "start": 1094.28, "text": " that just move the complexity around? So, if they have high capacity communication channels," }, { "end": 1107, "start": 1102.44, "text": " like if communication is basically free, then there's a sense in which the problem isn't really" }, { "end": 1113, "start": 1107, "text": " multi-agent. So, basically, if every agent can broadcast all of its observations to the other agents," }, { "end": 1117.8, "start": 1113.6399999999999, "text": " then we don't really need to think about there being multiple agents. We're in what's called the" }, { "end": 1123.56, "start": 1117.8, "text": " multi-agent MDP, which is not a very good name because it's not really inherently multi-agent." }, { "end": 1127.96, "start": 1123.56, "text": " We can think of it as if there's one puppeteer agent that's deciding what everyone has to do," }, { "end": 1132.76, "start": 1127.96, "text": " and that agent just happens to face a multi-dimensional action space where you can think of each" }, { "end": 1141.24, "start": 1132.76, "text": " dimension as being an agent, but that's just semantics. So, without some constraint on communication," }, { "end": 1147.1599999999999, "start": 1141.96, "text": " it isn't fundamentally multi-agent. That doesn't mean that it's easier to solve. So, you know," }, { "end": 1153.5600000000002, "start": 1147.16, "text": " there's a distinction between the natural and the artificial decentralization. The natural" }, { "end": 1157.88, "start": 1153.5600000000002, "text": " decentralization is imposed by the world because there's some like your sensors don't allow you to" }, { "end": 1163.0800000000002, "start": 1157.88, "text": " communicate. And then there's artificial decentralization where we as designers say, okay, we're going to" }, { "end": 1169.24, "start": 1163.0800000000002, "text": " treat each agent in some semi-independent or local way because having every agent condition on" }, { "end": 1173.96, "start": 1169.24, "text": " everything that every agent observed is just not a tractable learning problem. So, even if it's" }, { "end": 1179.72, "start": 1173.96, "text": " not fundamentally multi-agent, it may still be very difficult. 
If the agents can communicate," }, { "end": 1184.8400000000001, "start": 1179.72, "text": " but the bandwidth is limited, then they face a different challenge, which is that they have to" }, { "end": 1189.24, "start": 1184.8400000000001, "text": " like invent a language or a protocol by which they're going to communicate, use that channel efficiently." }, { "end": 1193.88, "start": 1190.3600000000001, "text": " And if they can't communicate at all, well, then coordination becomes difficult because they don't" }, { "end": 1198.8400000000001, "start": 1193.88, "text": " know what their teammates have observed and therefore what they're going to do. But the problem" }, { "end": 1203.16, "start": 1198.8400000000001, "text": " may be simpler in the sense that at least they don't have all those observations coming in that" }, { "end": 1207.96, "start": 1203.16, "text": " they have to process and they don't have to think of a clever protocol to use to communicate" }, { "end": 1212.44, "start": 1207.96, "text": " with their teammates. So, people are talking about what it's what's going to take to use RL in" }, { "end": 1217.16, "start": 1212.44, "text": " the real world. And it seems like there's a lot of work left to do there. There's a New Reps 2020" }, { "end": 1222.2, "start": 1217.16, "text": " workshop on this topic. Is there a parallel discussion on what it'll take to use multi-agent RL" }, { "end": 1229.64, "start": 1222.2, "text": " in the real world? Or is it a multi-agent just too early on to think about that? I don't think" }, { "end": 1233.8000000000002, "start": 1229.64, "text": " it's too early on. I think certainly there's a lot left to do. There are a lot of open challenges." }, { "end": 1239.16, "start": 1233.8000000000002, "text": " But I think we actually already have algorithms that can be of practical use in many real world" }, { "end": 1244.1200000000001, "start": 1239.16, "text": " applications. If you think about robots and warehouses that need to coordinate their behavior" }, { "end": 1250.76, "start": 1244.1200000000001, "text": " tool like avoid colliding with each other when they're fetching packages or a video game AI where" }, { "end": 1255.64, "start": 1250.76, "text": " you have to learn agents that play effectively with human teammates and against human opponents." }, { "end": 1264.0400000000002, "start": 1255.64, "text": " And of course, self-driving cars. They share the road with other agents. So, I think there are" }, { "end": 1269.5600000000002, "start": 1264.0400000000002, "text": " a number of real world applications where the existing tools are already of use even if there may" }, { "end": 1276.0400000000002, "start": 1269.5600000000002, "text": " be still open questions. So, can you comment on the use of DEPORL and multi-agent RL for autonomous" }, { "end": 1282.8400000000001, "start": 1276.0400000000002, "text": " vehicles? You said that the current methods might be useful for that. Is that relevant now or" }, { "end": 1288.6, "start": 1282.84, "text": " is that something for the future? Yeah. So, I mean, you can think about, so to give just one of many" }, { "end": 1295.3999999999999, "start": 1288.6, "text": " examples, one core problem in self-driving is behavior prediction. So, before you even get to" }, { "end": 1298.76, "start": 1295.3999999999999, "text": " thinking about how you're going to act, you need to be able to make predictions about how other" }, { "end": 1304.1999999999998, "start": 1298.76, "text": " agents are going to act. 
And that already implies a kind of game theoretic reasoning." }, { "end": 1310.12, "start": 1305, "text": " The actions of the other agents will be conditioned on some assumptions they make about how you" }, { "end": 1319.2399999999998, "start": 1310.12, "text": " are going to act. If I try to merge the car behind me will hit the brakes in order to make room for me." }, { "end": 1327.08, "start": 1320.4399999999998, "text": " So, that kind of reasoning in which agents have a model of how the other agents on the road" }, { "end": 1331.6399999999999, "start": 1327.08, "text": " are going to act and their behavior conditions on that model. So, if we're going to make good" }, { "end": 1335.4799999999998, "start": 1331.6399999999999, "text": " predictions about what other agents are going to do, we need to think game theoretically. We need" }, { "end": 1339.88, "start": 1335.48, "text": " to have models of them and we need to think about the models that they have with us. So, that" }, { "end": 1346.44, "start": 1339.88, "text": " almost sounds like a theory of mind thing. Can we just stop at the one level? It is a theory of mind" }, { "end": 1355.48, "start": 1346.44, "text": " thing. I think we don't know what level we need to go to. Okay, so, in our first episode," }, { "end": 1360.68, "start": 1356.44, "text": " we talked about multi-agent oral research from Natasha Jakes who had the social influence" }, { "end": 1365.64, "start": 1360.68, "text": " paper for solving social dilemma environments where standard oral agents couldn't solve the collective" }, { "end": 1371.4, "start": 1365.64, "text": " task like a tragedy of the commons type tasks. So, are the types of methods you're talking about," }, { "end": 1377.16, "start": 1371.4, "text": " cooperative oral, cooperative multi-agent oral, would they be suitable for these types of tasks?" }, { "end": 1383.16, "start": 1379.72, "text": " Well, the tragedy of commons is something that arises in a mixed setting." }, { "end": 1388.92, "start": 1383.16, "text": " Okay, so, I guess the answer is no. I see, it's a different setting." }, { "end": 1395.48, "start": 1388.92, "text": " Yeah, so, there's a related challenge which arises in the cooperative setting." }, { "end": 1399.64, "start": 1396.44, "text": " So, in the cooperative setting, you have the multi-agent credit assignment problem." }, { "end": 1405.96, "start": 1400.1200000000001, "text": " So, if you know RL, you probably know the temporal credit assignment problem, which is like," }, { "end": 1410.8400000000001, "start": 1406.52, "text": " you know, I win a game of chess and then I look back and move 17 and say, how much credit do I give" }, { "end": 1417.1599999999999, "start": 1410.84, "text": " to move 17 for my ultimate victory? Well, we have the same idea in the multi-agent setting," }, { "end": 1422.84, "start": 1417.1599999999999, "text": " but it's across agents instead of time. So, you know, the football team scores a goal and then" }, { "end": 1427.48, "start": 1423.8799999999999, "text": " you have to ask, like, how much credit does each player on the team get for that goal?" }, { "end": 1434.4399999999998, "start": 1429.48, "text": " So, if you don't solve the multi-agent credit assignment properly, then you can have like lazy agents." 
}, { "end": 1447.16, "start": 1434.44, "text": " So, if you don't tease apart what that agent's contribution is and make sure to reward them only" }, { "end": 1450.92, "start": 1447.16, "text": " if they actually made a difference, then they might end up being lazy. So, it's not the same as the" }, { "end": 1455.4, "start": 1450.92, "text": " tragedy of the commons, which arises from the fact that interests are not aligned. This is just a," }, { "end": 1461.72, "start": 1456.3600000000001, "text": " like, a learning difficulty among cooperative agents, but it is kind of conceptually similar." }, { "end": 1469, "start": 1461.72, "text": " Cool. Okay. And then this issue of how to break symmetry in decentralized multi-agent," }, { "end": 1473.88, "start": 1469, "text": " like if two agents are approaching a doorway, you know, which goes first so they don't bump into" }, { "end": 1478.6000000000001, "start": 1473.88, "text": " each other. How does, is that a trivial problem or is that a deep problem in moral?" }, { "end": 1486.44, "start": 1478.6000000000001, "text": " It can be a deep problem. So, in the multi-agent literature, this is typically addressed with" }, { "end": 1491.56, "start": 1486.44, "text": " what are called social conventions. So, the idea is that in advance you have some agreement, like," }, { "end": 1495.72, "start": 1491.56, "text": " if we come to an intersection, the person who sees the red light stops and the person who sees the" }, { "end": 1500.28, "start": 1495.72, "text": " green light goes, this is an arbitrary social convention, but because we've agreed on it, we're" }, { "end": 1506.6000000000001, "start": 1500.28, "text": " able to, you know, avoid collisions. So, in this, in the cooperative setting, when we have this" }, { "end": 1510.76, "start": 1506.6000000000001, "text": " CTDE assumption, when we have this centralized training, decentralized execution setting," }, { "end": 1515, "start": 1510.76, "text": " then we have the opportunity to form social conventions during training. The training process is" }, { "end": 1520.12, "start": 1515, "text": " centralized, so we all agree together. What are the set of policies we're going to use later" }, { "end": 1526.12, "start": 1520.12, "text": " when we're deployed? But we won't be able to communicate any more after deployment." }, { "end": 1532.04, "start": 1528.28, "text": " And, you know, typically to speed up the learning process, we do parameter sharing. So," }, { "end": 1536.52, "start": 1532.04, "text": " during the centralized phase, we're learning policies for all the agents, but really we're just" }, { "end": 1543.4, "start": 1536.52, "text": " learning one policy, which all the agents can use. But those policies typically condition on the," }, { "end": 1549, "start": 1543.4, "text": " like, agent index. And you could also, if you wanted to condition on like a richer description of" }, { "end": 1555.96, "start": 1549, "text": " the agent's type. And this allows the agents to be as heterogeneous as they like, even though they" }, { "end": 1562.52, "start": 1555.96, "text": " completely share policy, share parameters. So, in this way, you can break ties by conditioning" }, { "end": 1567.96, "start": 1562.52, "text": " on the agent index. 
But if you don't have the CTDE setting, if the agents don't get to plan" }, { "end": 1572.52, "start": 1567.96, "text": " together, if it's like an ad hoc teamwork setting or a fully decentralized setting where they learn" }, { "end": 1578.12, "start": 1572.52, "text": " from scratch, and they're already decentralized when they start learning, or if the agents are not" }, { "end": 1582.52, "start": 1578.12, "text": " cooperative, so this pre-planning phase doesn't even make any sense because they're not on the same" }, { "end": 1587.8, "start": 1582.52, "text": " team. In all of those settings, then this symmetry breaking problem can be more fundamental." }, { "end": 1592.6, "start": 1588.36, "text": " Well, that was super interesting. Okay, let's move on to Q-Mex. That's the second paper we" }, { "end": 1598.44, "start": 1592.6, "text": " plan to discuss. The title of that paper is monotonic value function factorization for deep" }, { "end": 1604.2, "start": 1598.44, "text": " multi-agent reinforcement learning by Rashid at all in 2020. And with yourself as a co-author," }, { "end": 1608.52, "start": 1605.64, "text": " can you tell us briefly what is this paper about?" }, { "end": 1616.52, "start": 1610.04, "text": " Yeah, so this paper is a great example of trying to leverage this CTDE setting. So we want to do" }, { "end": 1620.68, "start": 1616.52, "text": " this centralized training, the result of which will yield decentralized policies that we can" }, { "end": 1630.3600000000001, "start": 1620.68, "text": " execute without further communication. And what we're trying to do is learn a centralized value" }, { "end": 1635.64, "start": 1630.3600000000001, "text": " function. So we're trying to learn a big value function that all the agents will use that estimates" }, { "end": 1642.04, "start": 1635.64, "text": " the expected return conditioned on the actions of all the agents. So this is a joint value function" }, { "end": 1649.4, "start": 1642.04, "text": " that takes all the agents into account. But because that value function is really complex and" }, { "end": 1653.24, "start": 1649.4, "text": " difficult to learn, we're going to learn a factor value function. So we're going to have a" }, { "end": 1659.72, "start": 1654.1200000000001, "text": " factorization that's going to exploit conditional independence properties between the agents in" }, { "end": 1666.92, "start": 1659.72, "text": " order to simplify the value function, make it easier to learn. And we're going to choose a particular" }, { "end": 1673.24, "start": 1666.92, "text": " form of factorization that's going to allow us to achieve decentralized ability. So what this value" }, { "end": 1679.72, "start": 1673.24, "text": " function does is it's going to be like a mixture of local value functions. So for each agent, each" }, { "end": 1684.6, "start": 1679.72, "text": " agent will have its own kind of local estimate of value. And then all those local estimates will get" }, { "end": 1692.6, "start": 1684.6, "text": " mixed together into an estimate of the joint centralized value. 
And if we if we're clever about" }, { "end": 1697.88, "start": 1692.6, "text": " how we do this mixing, then we'll achieve this decentralized ability property, which basically says" }, { "end": 1704.68, "start": 1697.88, "text": " that rather than having to do some really expensive maximization over the joint action space," }, { "end": 1709.48, "start": 1704.68, "text": " every time we want to select actions or do a learning update, which not only would be expensive," }, { "end": 1714.6000000000001, "start": 1709.48, "text": " but which would not be possible in the decentralized setting, instead we're just going to do a bunch" }, { "end": 1719.48, "start": 1714.6000000000001, "text": " of local argmaxes where every agent will just individually maximize with respect to its local" }, { "end": 1725, "start": 1719.48, "text": " value function. And if we choose the mixing in the right way, then these two come out the same." }, { "end": 1729.8, "start": 1725, "text": " So we have this decentralized ability property. If we can do these local argmaxes and get the" }, { "end": 1736.84, "start": 1729.8, "text": " same result as if we did the expensive global argmax. And it turns out that, you know, if the mixing" }, { "end": 1744.2, "start": 1736.84, "text": " function has this monotonic entity property, if the joint value function has a monotonic relationship" }, { "end": 1750.52, "start": 1744.2, "text": " to each of the local value functions, then this decentralized ability property holds. And, you know," }, { "end": 1755.4, "start": 1750.52, "text": " we can easily, we can trivially decentralize the value function once we've learned it. So for the" }, { "end": 1760.76, "start": 1755.4, "text": " listeners, I just want to point out that Professor Whiteson gave a lecture on Cumix at the MIT" }, { "end": 1767.16, "start": 1760.76, "text": " embodied intelligence series earlier this year, which I liked a lot, and where you also touched on" }, { "end": 1772.68, "start": 1768.68, "text": " SMAC and Maven and encourage listeners, check that out and the link will be in the show notes." }, { "end": 1780.36, "start": 1773.4, "text": " So the SMAC, this dark graph, multi-agents challenge test bed here, that was managing units" }, { "end": 1787, "start": 1780.36, "text": " and starcraft too. From what I understand, that's different than what Alpha Star was doing." }, { "end": 1794.6, "start": 1787, "text": " Alpha Star was trying to control everything with one centralized agent and SMAC is separate agents" }, { "end": 1801.1599999999999, "start": 1794.6, "text": " for each unit. Is that correct? Yeah, that's exactly right. I mean, Alpha Star is an amazing," }, { "end": 1808.52, "start": 1801.1599999999999, "text": " amazing achievement, of course. The similarities between their setting and what we are doing" }, { "end": 1813.72, "start": 1808.52, "text": " are actually quite superficial. So they do both focus on on starcraft too. But in Alpha Star," }, { "end": 1820.04, "start": 1813.72, "text": " they're solving the whole problem, micro and macro. And they're doing it with like a single" }, { "end": 1824.44, "start": 1820.04, "text": " puppeteer agent. So we can think of this as this multi-agent MVP setting that I mentioned earlier." }, { "end": 1831, "start": 1825.8, "text": " So there's a single agent that decides what every unit is going to do. 
Whereas what we're trying" }, { "end": 1835.6399999999999, "start": 1831, "text": " to do is model the deck pump VP, the decentralized partially observable markup decision process." }, { "end": 1840.2800000000002, "start": 1835.64, "text": " The multi-agent aspect is inherent because the agents have only a partial view of the world and" }, { "end": 1845.48, "start": 1840.2800000000002, "text": " they can't communicate directly with each other. So in Starcraft, that means the agents have a" }, { "end": 1852.3600000000001, "start": 1845.48, "text": " limited field of view. And we're trying to learn how to coordinate their behavior just in micro management." }, { "end": 1858.3600000000001, "start": 1853.24, "text": " So when should we think about centralized training? When would that be the right or wrong" }, { "end": 1870.36, "start": 1858.36, "text": " way to frame things in say real world multi-agent RL? So the setting makes sense when on deployment," }, { "end": 1877.3999999999999, "start": 1870.36, "text": " it's no longer possible for the agents to communicate directly. Or we don't want to rely on that" }, { "end": 1884.12, "start": 1877.3999999999999, "text": " communication. So anytime you want solutions that are really robust, you might not want to rely on" }, { "end": 1893.9599999999998, "start": 1884.12, "text": " communication channels that might go down or that might just be unreliable. So then you can learn" }, { "end": 1900.36, "start": 1893.9599999999998, "text": " a policy where the agents don't condition. They don't rely on getting input directly from the other" }, { "end": 1908.4399999999998, "start": 1900.36, "text": " agents. And in that way, you can learn more robust policies. So those are the settings when you" }, { "end": 1916.76, "start": 1908.44, "text": " want this decentralized policy. The centralization of the training, that is possible anytime the agents" }, { "end": 1922.68, "start": 1916.76, "text": " can train together in advance. And this is I think almost all of the time because even in a single" }, { "end": 1926.68, "start": 1922.68, "text": " agent setting, much less a multi-agent one, we don't deploy agents in the real world, type of" }, { "end": 1932.1200000000001, "start": 1926.68, "text": " lorosa, we don't say, okay, we'll just put this robot out in the world and have it learn what" }, { "end": 1938.52, "start": 1932.12, "text": " MDP is in from scratch. We train them in advance and then we fine tune them on deployment. So that" }, { "end": 1943.8799999999999, "start": 1938.52, "text": " training together, it almost always happens. The other assumption that I think is important to" }, { "end": 1952.36, "start": 1944.9199999999998, "text": " mention is that this setting does assume that the world is stationary. Like we're in some" }, { "end": 1959.7199999999998, "start": 1952.36, "text": " deckbomb Dp, but that deckbomb Dp is not changing so that the plan that we make during the centralized" }, { "end": 1968.84, "start": 1959.72, "text": " training setting is still relevant on deployment. Now that doesn't mean that things are non-stationary" }, { "end": 1973.24, "start": 1968.84, "text": " if the world is changing over time that this setting is in applicable because it could still be" }, { "end": 1978.52, "start": 1973.24, "text": " the starting point. 
So if you have a world where you aren't sure that the lorosa physics won't" }, { "end": 1983.32, "start": 1978.52, "text": " change over time, you might still want to do centralized training and decentralized execution," }, { "end": 1987.16, "start": 1983.32, "text": " but then on top of that, you would want to have some decentralized learning algorithms so the" }, { "end": 1993, "start": 1987.16, "text": " agents can continue to adapt as the world changes. So if I understood correctly, this paper builds" }, { "end": 2001.4, "start": 1993, "text": " on the value decomposition network by Sunhek at all 2017, that's VDN, which I think I heard you" }, { "end": 2007.88, "start": 2001.4, "text": " say avoids the extremes of the independent Q functions on the one hand and the fully centralized" }, { "end": 2015.96, "start": 2007.88, "text": " Q functions on the other hand. Can you say a bit about how QMX extends VDN or is it different" }, { "end": 2025.08, "start": 2015.96, "text": " from VDN? Yeah, so both VDN and QMX are, I would say, on the extreme of that spectrum in the" }, { "end": 2029, "start": 2025.08, "text": " sense that they're learning centralized value functions, so learning value functions at condition" }, { "end": 2039.56, "start": 2030.28, "text": " on the actions of all the agents and in fact all of their observations, but they just factor that" }, { "end": 2048.52, "start": 2039.56, "text": " centralized value function in a different way. So VDN and QMX, they both are designed so as to" }, { "end": 2054.7599999999998, "start": 2049.7999999999997, "text": " factor the value function in a way that allows this decentralized ability property I mentioned," }, { "end": 2059.24, "start": 2054.7599999999998, "text": " so that every agent after the value function is learned, every agent can just take its local" }, { "end": 2063.96, "start": 2059.24, "text": " component of it and use that to decide how to act, just maximize with respect to its local component." }, { "end": 2071.08, "start": 2063.96, "text": " But VDN achieves this by just mixing these local value functions as a simple summation," }, { "end": 2076.44, "start": 2071.56, "text": " so the joint value function is just a sum of the local value functions, and that's enough" }, { "end": 2081, "start": 2076.44, "text": " that achieves this decentralized ability, but in an unnecessarily restrictive way." }, { "end": 2087.4, "start": 2081.8, "text": " So the idea behind QMX was if what we're after is decentralized ability, the property we need is" }, { "end": 2092.68, "start": 2087.4, "text": " just monotonicity and we can have a richer monotonic way of mixing these local value functions" }, { "end": 2094.3599999999997, "start": 2092.68, "text": " that isn't just a summation." }, { "end": 2104.68, "start": 2095.72, "text": " Okay, and then QMX has a mixing network. If I understand correctly the weights are generated" }, { "end": 2110.12, "start": 2104.68, "text": " by a hyper network, could you maybe comment on the hyper network here? That's something that I" }, { "end": 2116.7599999999998, "start": 2110.12, "text": " haven't seen very often on how does that help? Yeah, so the hyper network is actually a key ingredient," }, { "end": 2121, "start": 2116.7599999999998, "text": " and we've done some ablation experiments that confirm that this is actually quite important to" }, { "end": 2126.92, "start": 2121, "text": " performance. 
So a hyper network is just a neural network, the outputs of which specify the weights" }, { "end": 2132.84, "start": 2126.92, "text": " of another neural network. So in QMX we have these agent networks, these are the local value functions" }, { "end": 2139.56, "start": 2132.84, "text": " that each agent can compute based on its local observations. And then we have a mixing function," }, { "end": 2144.2, "start": 2139.56, "text": " it takes the output of all of those agent networks as input, and it produces output and estimate" }, { "end": 2151.3999999999996, "start": 2144.2, "text": " of the joint value. But the mixing network, its weights are not expressed directly, they are" }, { "end": 2156.68, "start": 2151.3999999999996, "text": " expressed as the output of a hyper network. So when we train, we optimize the parameters of the" }, { "end": 2162.2799999999997, "start": 2156.68, "text": " agent network, those parameters are shared among all the agents, and we optimize the parameters" }, { "end": 2168.2, "start": 2162.2799999999997, "text": " of the hyper network. And the hyper network then in turn produces the weights of the mixing network." }, { "end": 2177.64, "start": 2168.2, "text": " And the reason to do that is because the mixing network is only used during the training phase." }, { "end": 2182.9199999999996, "start": 2177.64, "text": " So we have the centralized training, the decentralized execution, anything that you're going to throw away" }, { "end": 2191.64, "start": 2182.9199999999996, "text": " during the execution phase is something that can take advantage of things that are only available" }, { "end": 2197.64, "start": 2191.64, "text": " during centralized training. So this mixing network, it takes as input, not just the outputs of the" }, { "end": 2203.48, "start": 2197.64, "text": " agent networks, it takes as input also the global state, the state which will be hidden from the" }, { "end": 2208.2799999999997, "start": 2203.48, "text": " agents in the execution phase, but which during this centralized training we can assume access to." }, { "end": 2215.96, "start": 2210.68, "text": " So and that's fine, this centralized state can be input to the mixing network because we will" }, { "end": 2220.3599999999997, "start": 2215.96, "text": " need the mixing network during the execution phase. So the fact that it will be hidden during" }, { "end": 2228.2000000000003, "start": 2220.36, "text": " execution is no problem. But the mixing network is constrained to adhere to this monotonicity" }, { "end": 2233.1600000000003, "start": 2228.2000000000003, "text": " property. So then the weights of the mixing network have to be non-negative so that this" }, { "end": 2240.6, "start": 2233.1600000000003, "text": " monotonicity property is preserved. And we don't want to restrict the relationship between the state" }, { "end": 2246.92, "start": 2240.6, "text": " and the mixing network. We want the mixing network to be able to change flexibly as the centralized" }, { "end": 2250.44, "start": 2246.92, "text": " state changes and we want to be able to mix these local value functions in a different way for every" }, { "end": 2255.48, "start": 2250.44, "text": " state. So by having the state be input to the hyper network rather than to the mixing network," }, { "end": 2262.2000000000003, "start": 2255.48, "text": " we get around this monotonicity constraint and enable a more flexible conditioning on the" }, { "end": 2271.32, "start": 2262.2000000000003, "text": " human state. 
And then by global state does that mean like the the set of all the agent observations" }, { "end": 2279.7200000000003, "start": 2271.32, "text": " together? So actually in the deck pondip in general even if you concatenate all of the agents" }, { "end": 2285.32, "start": 2279.7200000000003, "text": " observations together that still does not disempeguate the global state. So there's a related" }, { "end": 2291.7200000000003, "start": 2285.32, "text": " formula called the deck mdp where every agent has partial observability but if you put all the" }, { "end": 2296.28, "start": 2291.7200000000003, "text": " agents observations together then you know the state. But in the deck pondip you still don't know" }, { "end": 2301.2400000000002, "start": 2296.28, "text": " the state even when you put all the observations together. But the centralized training it has" }, { "end": 2305.24, "start": 2301.24, "text": " happens in a simulator or something where we assume we can just cheat and look under the hood" }, { "end": 2312.04, "start": 2305.24, "text": " and see what the global state is. The only thing is we're not allowed to use it. The policies we" }, { "end": 2316.8399999999997, "start": 2312.04, "text": " learn are not allowed to condition on it in the decentralized execution phase. But we have it while" }, { "end": 2322.12, "start": 2316.8399999999997, "text": " we're training and we want to exploit it and we exploit it by having the mixing network condition" }, { "end": 2329.72, "start": 2322.12, "text": " on it via the hyper network. Okay so we're assuming a simulator that can reveal the global state" }, { "end": 2334.6, "start": 2329.72, "text": " which I guess wouldn't be the case if this is an off policy deployed robots or something like that." }, { "end": 2341.16, "start": 2335.48, "text": " I mean that totally depends on the situation but yeah there are some situations where you might" }, { "end": 2347.8799999999997, "start": 2341.16, "text": " during training be have access to the to the joint observation but still not have access to the" }, { "end": 2354.6, "start": 2347.8799999999997, "text": " state that can happen. Cool. Okay so so the the monotonicity if I understand that correctly means that" }, { "end": 2362.52, "start": 2354.6, "text": " does that mean that all the agents value functions have to be aligned in a sense. If we had like I" }, { "end": 2367.96, "start": 2362.52, "text": " we were talking earlier I had in the notes about an example of you know if one agent had a" }, { "end": 2374.6, "start": 2367.96, "text": " slightly different value function they liked something that wasn't good for the team. Would that" }, { "end": 2382.68, "start": 2374.6, "text": " violate the monotonicity assumption? No so that's a different issue so if the agents have different" }, { "end": 2389, "start": 2382.68, "text": " value functions then we're in a mix setting. So their preferences are not aligned." }, { "end": 2395.08, "start": 2390.6, "text": " And what that means is that the value function is vector value there's actually a different value" }, { "end": 2400.12, "start": 2395.08, "text": " for every agent. Like if you write down a normal form game in each cell in that normal form game" }, { "end": 2405.72, "start": 2400.12, "text": " there's more than one number because there's a payoff for every agent. So the value is actually a" }, { "end": 2412.9199999999996, "start": 2405.72, "text": " vector. 
Here we still have a fully cooperative setting so the value is a scalar the output of" }, { "end": 2417.48, "start": 2412.9199999999996, "text": " this mixing function this estimate of the joint value function is a scalar value just like in single" }, { "end": 2426.3599999999997, "start": 2417.48, "text": " agent RL. But it's composed of a bunch of by mixing a bunch of local value estimates. So when we say" }, { "end": 2432.52, "start": 2426.3599999999997, "text": " that it's monotonic what we mean is that our estimate of this of this scalar value to join value" }, { "end": 2440.44, "start": 2432.52, "text": " function has the property that it increases monotonically with respect to each of each agents" }, { "end": 2449.48, "start": 2440.44, "text": " local value. So what in effect this means is that you know each agent the action that it wants to" }, { "end": 2457.64, "start": 2449.48, "text": " take does not depend on the action choices of the other agents because it's value you know as" }, { "end": 2461.64, "start": 2457.64, "text": " it's local if it changes its action so as to increase its local value it knows that that will" }, { "end": 2468.44, "start": 2461.64, "text": " also increase the joint value. In the MIT lecture that that we mentioned earlier you talked about the" }, { "end": 2476.2799999999997, "start": 2468.44, "text": " Google Research Football environment. Can you comment on how the StarCraft multi-agent challenge" }, { "end": 2482.52, "start": 2476.2799999999997, "text": " and the Google Research Football environment present different types of multi-agent challenges" }, { "end": 2488.04, "start": 2482.52, "text": " and like what kind of dimensions do you think about when comparing environments like that?" }, { "end": 2493.4, "start": 2488.04, "text": " So I can speculate about that. I don't know the answer because we haven't done enough with" }, { "end": 2500.7599999999998, "start": 2493.4, "text": " Google Research Football yet. But I can speculate because we do know a lot from the related problem" }, { "end": 2508.68, "start": 2500.7599999999998, "text": " of the Robocup simulation league. You know it's just like another football simulator that's been" }, { "end": 2513.56, "start": 2508.68, "text": " around for a long time. Google Research Football is kind of the same idea but you know in a new" }, { "end": 2518.44, "start": 2513.56, "text": " framework that makes learning easy and that connects nicely to deep learning packages and stuff" }, { "end": 2526.6, "start": 2518.44, "text": " like that. But you know what we know from the Robocup simulation league is it in a task like this" }, { "end": 2531.32, "start": 2526.6, "text": " hierarchy is very important. It's going to be very hard to learn like a flat monolithic policy that" }, { "end": 2538.92, "start": 2531.32, "text": " plays you know world world comfortable. It's a naturally hierarchical task and you need to leverage" }, { "end": 2545, "start": 2538.92, "text": " that hierarchy in order to make the learning problem tractable. So you need to think about you know" }, { "end": 2549.88, "start": 2545, "text": " high level skills like passing the ball shooting on goal playing defense that kind of thing." }, { "end": 2555.7200000000003, "start": 2551.96, "text": " Hierarchable reinforcement learning is an active topic of research. 
It's a very difficult" }, { "end": 2560.84, "start": 2556.6800000000003, "text": " you know sort of challenging area of research where I think there's still there still hasn't been" }, { "end": 2567.48, "start": 2560.84, "text": " like a wow result yet but we we at the same time know that it's an essential ingredient that we" }, { "end": 2575, "start": 2567.48, "text": " will eventually need to have. From the perspective of multi agent reinforcement learning when we add" }, { "end": 2581.4, "start": 2575, "text": " hierarchy into the equation then things get interesting because hierarchy increases the level" }, { "end": 2587.56, "start": 2581.4, "text": " of simultaneous. So basically if you're acting with a monolithic policy you might be acting at" }, { "end": 2593.8, "start": 2587.56, "text": " very fine grain time steps like you act every time step and time step might be you know like a" }, { "end": 2600.04, "start": 2593.8, "text": " fraction of a second. But if you have a hierarchical policy at the higher levels of the hierarchy" }, { "end": 2605.5600000000004, "start": 2600.04, "text": " things are abstracted away. So taking an action means deciding to shoot on goal an action which might" }, { "end": 2611.6400000000003, "start": 2605.5600000000004, "text": " last like several seconds. And from a multi agent perspective this increases your cymbal tenetty" }, { "end": 2616.52, "start": 2611.6400000000003, "text": " with the other agents. So when you commit to a high level thing like shoot on goal a lot happens" }, { "end": 2622.92, "start": 2616.52, "text": " while that's being executed and you know the other agents are doing a lot in the meantime. And this" }, { "end": 2629.16, "start": 2622.92, "text": " greatly increases the importance of coordination among the agents. So if cymbal tenetty is low if" }, { "end": 2634.28, "start": 2629.16, "text": " a time step is only a fraction of a second it's not that important to coordinate with your teammates" }, { "end": 2639.64, "start": 2634.28, "text": " because if you choose the wrong action you know one time step later you'll see you'll observe" }, { "end": 2644.2000000000003, "start": 2639.64, "text": " what the other agents did you'll see it wasn't right you can change your response. So instead of" }, { "end": 2649.56, "start": 2644.2000000000003, "text": " coordinating you can just react to what the other agents do. If cymbal tenetty is high because you're" }, { "end": 2657.32, "start": 2649.56, "text": " choosing high level abstract skills that take several seconds to execute then coordination may" }, { "end": 2660.92, "start": 2657.32, "text": " be crucially important. By the time you realize that you know you should have passed instead of" }, { "end": 2667.08, "start": 2660.92, "text": " shooting on goal you know you've already you've already lost the game. Cool okay just to be clear" }, { "end": 2676.2, "start": 2667.08, "text": " can you can you define cymbal tenetty for me? I can define it in a handway way. Yeah perfect." }, { "end": 2683.24, "start": 2676.2, "text": " We can think of it as like how much happens in one time step. So like how much is the world" }, { "end": 2689.72, "start": 2683.24, "text": " going to change between now and when you get to act again. If it's only a tiny bit then you can" }, { "end": 2694.6, "start": 2689.72, "text": " just wait and see what the other guy didn't react. Like including the changes in the environment" }, { "end": 2701, "start": 2694.6, "text": " and the other agents? Yeah yeah. 
Okay okay. But if a lot is going to change before you act again" }, { "end": 2705.72, "start": 2701, "text": " then it's really important that you that your choice is coordinated with your teammates. There will" }, { "end": 2711.7999999999997, "start": 2705.72, "text": " be too late to fix it later. Okay that makes sense. All right so we have a fair number of grad" }, { "end": 2716.04, "start": 2711.7999999999997, "text": " students and future grad students in the audience. I know that from the from the Twitter account." }, { "end": 2722.12, "start": 2716.7599999999998, "text": " Do you have any advice for students who might want you or someone like you as an advisor or" }, { "end": 2733.9599999999996, "start": 2722.12, "text": " generally for those who want to pursue post-graduate studies in in ML and RL? Sure I can say a couple" }, { "end": 2741.2400000000002, "start": 2733.96, "text": " things. I would say there's no substitute for a strong foundation. You know and that's" }, { "end": 2747.8, "start": 2741.2400000000002, "text": " that's as much true now as it was before deep learning became a big thing. You know especially" }, { "end": 2752.92, "start": 2747.8, "text": " important in machine learning are topics like calculus linear algebra probability and statistics" }, { "end": 2760.44, "start": 2753.88, "text": " a really good foundation in those topics is is absolutely essential. I think it's also really" }, { "end": 2767.4, "start": 2760.44, "text": " important to it's really helpful to get some research experience before you start something like" }, { "end": 2774.12, "start": 2767.4, "text": " a PhD. I think a lot of people think research is something that you're either good at or you aren't" }, { "end": 2780.52, "start": 2774.12, "text": " but actually it's a very learnable skill but it's a skill that can only be learned by doing it." }, { "end": 2787.96, "start": 2781.56, "text": " So I think it's really helpful to get some experience not only so that if you do start a PhD you" }, { "end": 2792.76, "start": 2787.96, "text": " have some of the tools you need to actually make a success of it but so that you find out you" }, { "end": 2797.56, "start": 2792.76, "text": " know whether you like it and that you're able to demonstrate on you know like on application" }, { "end": 2802.52, "start": 2797.56, "text": " that you that you have developed some of those skills that you have the capacity you know" }, { "end": 2810.6, "start": 2803.7200000000003, "text": " to complete the degree you're applying for. So sometimes I wonder about people who did say a PhD on" }, { "end": 2818.36, "start": 2810.6, "text": " support vector machines when they were big before ML turned to deep learning and so those aren't" }, { "end": 2823.56, "start": 2818.36, "text": " that relevant anymore and so you might think well was that a good choice? How do how do" }, { "end": 2829.16, "start": 2825, "text": " how do people how should people pick a topic that's that's going to stay relevant that they" }, { "end": 2836.7599999999998, "start": 2829.16, "text": " that they're confident will still be relevant. So my my opinion might be a bit unconventional" }, { "end": 2842.0400000000004, "start": 2836.76, "text": " about this sort of thing but I would say don't worry about it. 
I I see a lot of people like" }, { "end": 2849.4, "start": 2842.76, "text": " um doing a lot of hand-wringing about whether they're working on the right topic um and and I think" }, { "end": 2856.5200000000004, "start": 2849.4, "text": " it's a waste of energy. I think I think research impact is really difficult to predict and and almost" }, { "end": 2861.8, "start": 2856.5200000000004, "text": " entirely out of your control uh you know it depends not only on you know what happens with the" }, { "end": 2867.1600000000003, "start": 2861.8, "text": " development of science and technology itself but also a lot of political stuff just like what happens" }, { "end": 2874.1200000000003, "start": 2867.88, "text": " uh you know what stuff happens to be popular uh you know what happens with you know trends and" }, { "end": 2878.6800000000003, "start": 2874.1200000000003, "text": " dynamics within the machine learning community that that you know have nothing to do with science" }, { "end": 2883.8, "start": 2878.6800000000003, "text": " just have to do with people. So I think it's a mistake to get too invested in like you know my" }, { "end": 2888.76, "start": 2883.8, "text": " happiness and success depends on you know whether you know whether some arbitrary thing becomes" }, { "end": 2894.0400000000004, "start": 2888.76, "text": " popular or not. I think you know it's important to work on something that you're passionate about" }, { "end": 2899.2400000000002, "start": 2894.0400000000004, "text": " that you find exciting because otherwise it will go nowhere um if you aren't really enthusiastic" }, { "end": 2904.5200000000004, "start": 2899.2400000000002, "text": " about it nothing good will come out of it but you know whether it's going to have impact" }, { "end": 2910.2000000000003, "start": 2905.5600000000004, "text": " who knows uh I don't think anybody can really predict that and I don't think you know if you" }, { "end": 2914.0400000000004, "start": 2910.2000000000003, "text": " pick a topic and it turns out to be unfashionable I don't think that's a huge problem. 
I think if you" }, { "end": 2919.48, "start": 2914.04, "text": " look at a lot of the you know the really big figures in the machine learning community today um" }, { "end": 2925.72, "start": 2920.68, "text": " you know the ones that were around before deep learning became popular they were working on something" }, { "end": 2930.2799999999997, "start": 2925.72, "text": " else they were working on you know support vector machines or they were working on graphical models" }, { "end": 2935.96, "start": 2930.2799999999997, "text": " so they were working on uh patient optimization and you know when the deep learning revolution" }, { "end": 2940.6, "start": 2935.96, "text": " happened they adapted they learned from these new tools they revised their beliefs and their view" }, { "end": 2946.36, "start": 2940.6, "text": " and their priorities they figured out how to how to use these new tools effectively on the problems" }, { "end": 2953.24, "start": 2946.36, "text": " that they were interested in um they changed their research focus as a result of you know the changing" }, { "end": 2958.52, "start": 2953.24, "text": " situation and then they you know made important valuable contributions to deep learning so" }, { "end": 2964.92, "start": 2961.08, "text": " one of the exciting things about doing academic work is that it never gets old because there always" }, { "end": 2971.16, "start": 2964.92, "text": " is this new stuff coming down the pipeline that you uh uh get to learn about and and explore and do" }, { "end": 2975.4, "start": 2971.16, "text": " things with that's also the challenge of it because you can never you can never afford to rest on" }, { "end": 2980.12, "start": 2975.4, "text": " your laurels you know we're get stale and as you get older it gets harder and harder to sort of like" }, { "end": 2985.64, "start": 2980.12, "text": " wrap your head around each new revolution and um and make a contribution to it but um you know" }, { "end": 2993.96, "start": 2985.64, "text": " that that's the fun of it um can you um can you fill us in on what what conferences like are there" }, { "end": 2999.08, "start": 2993.96, "text": " are there multi-agent uh learning conferences I guess our audience might be familiar with" }, { "end": 3005.56, "start": 2999.8, "text": " uh the new reps and ICML and ICLR but are there um are there other conferences outside of that" }, { "end": 3010.76, "start": 3005.56, "text": " that are that are good to pay attention to for for multi-agent work yeah so I mean those that you" }, { "end": 3017.56, "start": 3010.76, "text": " mentioned are the the primary ones and um the ones where we primarily submit our publications in my" }, { "end": 3022.76, "start": 3017.56, "text": " lab the other one worth mentioning is is Amos autonomous agents and multi-agent systems that" }, { "end": 3028.6800000000003, "start": 3022.76, "text": " was actually the first conference I ever published that um and uh so yeah that I mean there's a" }, { "end": 3032.84, "start": 3029.32, "text": " a lot of multi-agent stuff happens there the I guess the other one would be iCAPS which is the" }, { "end": 3038.0400000000004, "start": 3032.84, "text": " planning conference um so it's not multi-agent reinforcement learning but but like for example in" }, { "end": 3043.1600000000003, "start": 3038.0400000000004, "text": " the deck on DP formalism a lot of the work is on the planning side got the learning side so a lot" }, { "end": 3049.88, "start": 3043.1600000000003, "text": " of multi-agent stuff is 
published there too so besides um your own work and the things we mentioned" }, { "end": 3055.1600000000003, "start": 3049.88, "text": " uh here today already are there other other things happening in in RL that you find interesting lately" }, { "end": 3060.52, "start": 3056.36, "text": " um a lot of interesting stuff is happening in RL the the main difficulty is keeping up with it" }, { "end": 3068.28, "start": 3060.52, "text": " is such like firehose of papers being published um the things that that that interest me tend to be the" }, { "end": 3073.4, "start": 3068.28, "text": " things that contradict my prior beliefs because that's when I like really learn something new" }, { "end": 3081.4, "start": 3073.4, "text": " fortunately i'm wrong a lot so i'm always learning something um i would say like a recent example of" }, { "end": 3087, "start": 3081.4, "text": " that is what's been happening with unsupervised learning in reinforcement learning so methods like" }, { "end": 3095.08, "start": 3087, "text": " curl that use contrastive losses to um to in an unsupervised way learn good representations for" }, { "end": 3101.2400000000002, "start": 3095.08, "text": " reinforcement learning um so i'm i'm on record as as a skeptic of unsupervised approaches for" }, { "end": 3107.56, "start": 3101.24, "text": " reinforcement learning um i've uh i've made public uh comments about it on the number of occasions" }, { "end": 3113.16, "start": 3107.8799999999997, "text": " so i think it's you know i think it's fair to say on the estimated their success um i would not" }, { "end": 3116.4399999999996, "start": 3113.16, "text": " have predicted the you know these recent methods and how successful they've been" }, { "end": 3122.8399999999997, "start": 3117.7999999999997, "text": " i do i i would say i've only partially revised my opinion on the subject because i am still" }, { "end": 3128.3599999999997, "start": 3122.8399999999997, "text": " concerned about cases where um the choice of features what makes a good feature depends" }, { "end": 3133.08, "start": 3128.36, "text": " crucially on the reward function i mean to me it's self evident that in general you know the way to" }, { "end": 3138.36, "start": 3133.08, "text": " represent the world the way to process your observations into some higher level representation" }, { "end": 3145.56, "start": 3138.36, "text": " depends very much on the task that you're in um so i don't think that that we can do this in" }, { "end": 3151.56, "start": 3145.56, "text": " an unsupervised way but unsupervised methods can help and um uh you know i was i was surprised to" }, { "end": 3157.1600000000003, "start": 3151.56, "text": " learn how much they could help so i think we came in just under the hour um this has been a" }, { "end": 3161.3199999999997, "start": 3157.16, "text": " fascinating interview uh professor shimon whiteson uh you're so kind to take the time out to speak" }, { "end": 3166.68, "start": 3161.3199999999997, "text": " with us i know you want to uh tuck your kids into bed tonight so um i'm glad to end it there" }, { "end": 3171.3199999999997, "start": 3166.68, "text": " thank you so much on behalf of myself and our audience uh professor whiteson yeah thanks for having" }, { "end": 3173.16, "start": 3171.3199999999997, "text": " me it was a pleasure" }, { "end": 3182.7599999999998, "start": 3179.72, "text": " notes and links for this episode are at talk rl.com" }, { "end": 3187.8, "start": 3182.76, "text": " if you like this show i need your support you can 
help in a few ways" }, { "end": 3192.76, "start": 3188.6000000000004, "text": " one subscribe on your favorite podcast platform subscriptions make a big difference" }, { "end": 3199, "start": 3194.6800000000003, "text": " two follow us on twitter and talk rl podcast we love retweets" }, { "end": 3206.5200000000004, "start": 3201.32, "text": " three give us a five star rating on apple podcasts if you don't think we deserve five stars" }, { "end": 3213.32, "start": 3206.52, "text": " let us know on twitter what we could do better" }, { "end": 3239.32, "start": 3213.32, "text": " talk rl" } ]
Aravind Srinivas
Aravind Srinivas on his work including CPC v2, RAD, CURL, and SUNRISE, unsupervised learning, teaching a Berkeley course, and more!
https://media.transistor…696.mp3?src=site
This is TalkRL Podcast. All reinforcement learning, all the time. Interviews of brilliant folks across the world of RL. I'm your host, Robin Chohan. Aravind Srinivas is a third-year PhD student at UC Berkeley, advised by Professor Pieter Abbeel. He co-created and co-taught a grad course on deep unsupervised learning at Berkeley. Aravind, thanks so much for being on the show. Thank you for having me. How do you describe your area of interest? I'm broadly interested in how we can learn better representations of data in a way that improves the performance of our current ML systems. This may seem really general and vague, but the reason I say it that way is because that's how deep learning has worked out, in my opinion. One or two good ideas tend to transcend different topics. There's the Transformer idea, which is doing self-attention: it was originally applied in NLP, but it's now pretty much everywhere. All of these ideas are basically aimed at getting better representations of data, either through the architecture, through the learning objective functions, or through how you feed in the data and what kind of data you train on. That's my interest: improving these systems by figuring out better ways to learn representations. I'm interested in doing this both by engineering the architecture and by figuring out the right learning objectives; both are very interesting to me. Things like contrastive learning are more at the objective function level, while engineering efforts like CPC version 2, where the objective function was already there but we were figuring out the right architecture, fall more into the former category. I'm also working on some things yet to be released, in terms of how to improve vision architectures beyond ResNets, how to use Transformers and ResNets together, and so forth. I'm not very tied to any particular problem, not just reinforcement learning, but that's obviously one of the main focus problems. Computer vision is also pretty interesting, and so is language processing. One of my goals is to make reinforcement learning more like whatever is happening in deep learning for vision and NLP, where there's a lot of focus on architecture engineering, data augmentation, and unsupervised learning. Somehow deep reinforcement learning has stayed more like reinforcement learning than deep learning. I don't know for what reasons, but it's been that way for the last few years, and now it's slowly changing. So that's also a pretty important topic to me: how to make deep RL borrow a lot of the successful principles we've seen work over time in canonical deep learning, and try to unify all these things. So the first paper of yours that we're going to talk about today isn't really an RL paper, but I think it sets the stage for what came next with CURL, and that is Data-Efficient Image Recognition with Contrastive Predictive Coding. Can you tell us what is going on with CPC in this paper? Sure. So firstly, CPC is basically a self-supervised learning objective, centered on this paradigm of predicting the future in a latent space. And I want to first briefly explain why that's an important and interesting framework to think about unsupervised learning.
So firstly, unsupervised learning is a paradigm where you're trying to learn representations of data without any annotated labels from humans. You just want to learn from raw data, which is available in way more quantity than supervised learning datasets, and it's obviously inspired by how humans and animals learn in general, without actual annotations. Within unsupervised learning there are many ways you can learn representations, and this dates back to Bengio's work on autoencoders and Hinton's work on restricted Boltzmann machines and so forth. But those things didn't really pan out back then, mostly because the computation wasn't there and people were working on really tiny datasets like MNIST. Then there was a flurry of work in the computer vision community on creating these pretext tasks: tasks that you creatively set up yourself. For example, you take an image, you rotate it, and you try to predict the angle of rotation. That becomes a task you can solve without any labels on the images; you give the image its own labels based on some transformations you perform. All of this had some reasonable results in terms of performance on downstream tasks, whether you can take those features and train classifiers on top of them, but it was still lagging behind the kind of classifiers you could build by directly doing supervised learning if you had a lot of labels. So people mostly thought this unsupervised learning, or self-supervised learning as some people call it (Yann LeCun calls it self-supervised learning), was just not worth the time: it was always going to lag behind supervised learning, and you would rather just collect labels. CPC, or contrastive predictive coding, is one of the first papers that tries to move away from these very ad hoc, hand-engineered tasks to something a little more principled. But it's not a particularly new idea: it's inspired by a very famous and very impactful earlier work by Mikolov called word2vec, which I'm sure a lot of people are familiar with in terms of word vectors. It was basically the best word embedding method people were using before BERT came out. Word2vec is this idea where you try to predict a missing word from the surrounding words in a contrastive fashion. What does that mean? You don't actually need to predict the actual word, because back then people couldn't do a softmax over a really large vocabulary. Rather, you predict an embedding, and then you contrast it with the real positive embedding and a bunch of negative embeddings. So you make a prediction in the latent space, and you say that the prediction has to correlate maximally with the embedding of the correct missing word, and not correlate too much with the embeddings of words that are not the correct missing word. You can build these losses in lots of different ways, and Mikolov had a very elegant formulation for this. CPC revisits just that framework in a more general fashion that can apply to any modality, not just text: you want to have this kind of learning framework for audio, for video, for images, for reinforcement learning, for text, everything.
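A minimal sketch of the word2vec-style contrastive idea described above: predict an embedding from the surrounding context, then score it against the embedding of the true word (the positive) and a few randomly drawn words (the negatives). The vocabulary size, dimensions, and the simple averaging context encoder are illustrative assumptions, not Mikolov's exact setup.

```python
import torch
import torch.nn.functional as F

vocab_size, dim = 10_000, 128
in_embed = torch.nn.Embedding(vocab_size, dim)   # embeddings for context words
out_embed = torch.nn.Embedding(vocab_size, dim)  # embeddings for target words

def negative_sampling_loss(context_ids, target_id, num_negatives=5):
    # The context is encoded by simply averaging its word embeddings
    # (no position information, no temporal model).
    context = in_embed(context_ids).mean(dim=0)                         # (dim,)
    positive = out_embed(torch.tensor([target_id]))                     # (1, dim)
    negatives = out_embed(torch.randint(vocab_size, (num_negatives,)))  # (k, dim)

    pos_score = context @ positive.t()   # prediction should line up with the true word
    neg_score = context @ negatives.t()  # and not with randomly drawn words
    return -F.logsigmoid(pos_score).mean() - F.logsigmoid(-neg_score).mean()

loss = negative_sampling_loss(torch.tensor([3, 17, 256, 42]), target_id=99)
loss.backward()
```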
Back then, when Mikolov did word2vec, the way you encode the context you're using to predict the missing word was very simple: you just average the embeddings of the surrounding words, with no notion of position and no temporal model. Those kinds of things weren't there back then, so CPC adds all of that. That's why it's called contrastive predictive coding: it predicts something in a contrastive fashion, and it uses a context to predict something that's missing. That context can be modeled using an autoregressive model, like PixelCNN and so forth, that looks at the past few words, or the past few frames, or the past few patches in an image, and then tries to predict the future frames, the future image patch, the future word, or the future audio chunk. But it does this in an embedding space, similar to word2vec. It doesn't do it in the raw input space, because doing it in the raw input space would mean modeling a lot of things you don't care about. For example, right now when I'm talking to you, your mind roughly models the next word I'm going to say, and it doesn't do this at the level of my actual audio waveform: you're focusing on the phonemes, you're focusing on the words I'm speaking, and you have a language model in your head. You focus on more abstract things when you try to predict future outcomes, and this is true for any sensory stream: when you see a ball falling, you focus on more abstract entities, like the object being a ball and the physics of the ball, and so forth. So that's another motivation for CPC: you predict the future in a latent space, and you do it with contrastive losses, which are a very efficient way of solving the degeneracy problem in the latent space. This particular paper, CPC, proposes one method that works with a lot of modalities, and it presented really promising results in 2018 on the image unsupervised learning benchmarks. Not that it was any better than supervised learning at the time, but it made at least a 10% jump over what the computer vision community had by then. And here you're talking about the van den Oord paper? Yeah, yeah, precisely. And then the next summer I interned with Aaron van den Oord, and we worked on version two of the paper, where we basically said, okay, the trends in language and vision suggest that just revisiting the old ideas and making the models larger ends up getting really amazing results. 2018 was the year of GPT and BERT, GPT-1 if you want to put it that way, where they basically took a transformer, trained language models or masked language models on massive amounts of data with really large models, and showed that pre-training really works. So a result like that for computer vision would be really cool, because the amount of unlabeled image data on the web is just massive. And so we thought, okay, let's get a really good result on ImageNet with the CPC framework by making the models larger.
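A rough sketch of what predicting the future in a latent space with a contrastive (InfoNCE-style) loss can look like. The MLP encoder, the GRU context network, the prediction horizon, and the sizes here are illustrative assumptions rather than the CPC paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim = 256
encoder = nn.Sequential(nn.Linear(64, dim), nn.ReLU(), nn.Linear(dim, dim))  # z_t from raw chunk x_t
context_rnn = nn.GRU(dim, dim, batch_first=True)                             # c_t summarizes z_1..z_t
predictor = nn.Linear(dim, dim)                                              # predicts z_{t+k} from c_t

def cpc_infonce_loss(x, k=4):
    # x: (batch, time, 64) chunks of raw features (audio frames, image patches, ...)
    z = encoder(x)                    # (B, T, dim) latent codes
    c, _ = context_rnn(z)             # (B, T, dim) autoregressive context
    pred = predictor(c[:, :-k])       # predictions for the latents k steps ahead
    target = z[:, k:]                 # the true future latents

    b, t, d = pred.shape
    pred = pred.reshape(b * t, d)
    target = target.reshape(b * t, d)
    logits = pred @ target.t()        # score every prediction against every candidate future
    labels = torch.arange(b * t)      # the matching future is the positive,
    return F.cross_entropy(logits, labels)  # everything else in the batch acts as a negative

loss = cpc_infonce_loss(torch.randn(8, 32, 64))
loss.backward()
```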
And so we took the ResNet that's used to encode the patches in an image, we were using ResNet-50s or 101s, and we made it really wide and large. We also added a lot more data augmentations, which was sort of a new thing at the time, but now everyone does it, and we tuned a lot of hyperparameters just to see what's very important and what's not. Just doing these things, no new idea, just taking the CPC idea and doing the engineering, took us from an accuracy of around 48% to 72%. It's crazy to say, but that's kind of how it was: you just keep adding new tricks and the performance keeps going up and up. There was a special result in the paper where we pre-trained on all the unlabeled data in ImageNet. Imagine somebody gave you ImageNet, both the images and the labels. You would think it's best to just train a supervised learning model directly and deploy the classifier; why would you do unsupervised learning? But we had this really cool result in the paper: if the model is really large, if it has a lot of capacity, it's better to first do the unsupervised training and then fine-tune on the supervised objective, just like BERT does. You do the CPC training, you get a really good feature space, and then you fine-tune it for image classification, and we ended up getting a solid 2 to 3% improvement in the top-1 accuracy on ImageNet classification, something like 83% compared to 80%, just by doing this unsupervised pre-training. And this was true across various regimes of labeled data. For example, say I give you a million images but only the labels for 10,000 of them, or only for 100,000 of them, and ask you to deal with it. That's really hard; supervised learning alone wouldn't be able to do much. But unsupervised pre-training and then fine-tuning on the labeled data is way more data efficient: it can do classification with just 10% of the labels and get you the kind of classifier people used to get with 100% of the labels. To us all of this was really exciting: larger models were working much better for unsupervised learning, and unsupervised learning was finally relevant, because it could surpass or be competitive with supervised learning on downstream tasks, or even just improve classification. It was finally delivering on the promise. And then a lot of follow-up work came from other big companies like Facebook and Google. Kaiming He, the inventor of ResNets, came up with this paper called MoCo, momentum contrast, which simplified the CPC model significantly and also improved the results, and then SimCLR, a Google Brain paper from Geoff Hinton's group, improved on top of MoCo and really pushed up the results. So this is now really one of the hottest topics in the field, contrastive self-supervised learning. That's kind of the history of these contrastive learning papers. It seems like recently it's really been taking off, but when I was working on it last year it wasn't that hot; we were still trying to get to numbers that just weren't there yet in the field.
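A toy sketch of the pre-train-then-fine-tune recipe described above: first train the encoder with only an unsupervised contrastive loss on all the images, then attach a classifier head and fine-tune on the small labeled subset. The tiny MLP encoder, the noise-based "views", and the stand-in data here are illustrative assumptions, not CPC v2's actual setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())
classifier = nn.Linear(512, 1000)

def toy_contrastive_loss(images):
    # Stand-in instance-discrimination loss: two noisy "views" of each image should
    # match each other and not the other images in the batch.
    q = encoder(images + 0.1 * torch.randn_like(images))
    k = encoder(images + 0.1 * torch.randn_like(images)).detach()
    logits = q @ k.t()
    return F.cross_entropy(logits, torch.arange(images.shape[0]))

unlabeled = [torch.randn(64, 3, 32, 32) for _ in range(10)]            # all images, no labels used
labeled = [(torch.randn(64, 3, 32, 32), torch.randint(1000, (64,)))]   # the small labeled subset

# Phase 1: unsupervised pre-training on all the unlabeled images.
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for images in unlabeled:
    loss = toy_contrastive_loss(images)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Phase 2: fine-tune the encoder plus a classifier head on the labeled subset.
opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-4)
for images, labels in labeled:
    loss = F.cross_entropy(classifier(encoder(images)), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```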
And so is CPC considered a type of metric learning? You can say so. You can say that CPC is both a latent-space generative model trained with contrastive losses, and also a way to do metric learning, where you get a latent space that can give you really good distance metrics. That's one nice thing about the generative framing: CPC has this notion of predicting the future in the latents, so you can definitely use it as a latent-space model. If you used it in a reinforcement learning setup, you could use it as a world model in the latent space, or for audio you could predict the future audio chunks in the latent space. You don't have a decoder, so if you want to actually hear what you're predicting, you would also have to train a decoder. But in general you can think of it as modeling the future in the latent space, and that's why it's more general than the other methods like SimCLR or MoCo, which are purely intended as metric learning. CPC is not just a metric learning framework. I see, so it kind of spans between metric learning and generative models. Yeah. So let's talk about CURL, that was Contrastive Unsupervised Representations for Reinforcement Learning, your first-author paper from this year. Can you tell us what was the idea with this paper? Yeah, so the idea of this paper is that reinforcement learning from pixels is really, really sample inefficient, particularly on these robotic control environments where you go from pixels to torques, and it takes millions of steps, or a hundred million steps, to get any reasonable policy working even on a task like just reaching with a three-link creature or something like that. So what is a way to actually make it more efficient without complicating the RL algorithm, without trying to do anything fancy like predicting the future, learning a world model, and so forth? That's what motivated us to think about CURL. We thought, okay, contrastive learning is working really, really well for image recognition; it's making image recognition a lot more data efficient. Would something similar happen in reinforcement learning: if you train with both the contrastive loss and the reinforcement learning losses, would you be a lot more data efficient, and therefore solve tasks that were earlier solvable in 10 to 100 million time steps in something like 100,000 or 500,000 time steps, at most a million? That was the idea. And I learned from some of the mistakes we made in the CPC version 2 paper, in terms of going for a simpler instance-discrimination-style objective rather than predicting patches and things like that. I had already looked at this MoCo paper from Kaiming and realized it's a much simpler framework to think about contrastive losses than CPC. It was sort of counterintuitive at the time: my professor, Pieter, thought that you should predict the future.
Because reinforcement learning is a time-based thing, you would rather predict the future in the latent space in an explicit way, by taking a frame at time step t and predicting the future at time step t plus k in a contrastive way, just like CPC does. But I was pretty dogmatic about instance discrimination, where you take an image, take another augmented view of the same image, and just say that these two have to correlate well in the latent space compared to any other image. I just felt that this would be the simplest auxiliary loss you could add to reinforcement learning, and it should just work. That in turn led to the question of how you do instance discrimination for reinforcement learning models that train from pixels. One thing we noticed was that all these instance discrimination models in MoCo and SimCLR use data-augmented views of the same image, and in reinforcement learning no one had ever tried data augmentations. So we thought, okay, if that works, then that's an added novelty point for the paper, where CURL becomes the first framework to explore data augmentations in the reinforcement learning setup. That's how it came about: you want an auxiliary task that can hopefully speed up your learning significantly, and this auxiliary loss is super simple. It's not trying to predict the future in pixels, it's not trying to do any autoencoding at the pixel level, because those things are unlikely to scale to really large or complex environments with very high-fidelity inputs. Rather, it's just doing these contrastive losses in the latent space between augmented views of the same image. We started with random crops, and that worked out really well. We got a significant benefit from just using this contrastive objective compared to the baselines, both the naive baseline of not using any auxiliary loss and all these really over-engineered baselines from the reinforcement learning community, things like PlaNet, Dreamer, SAC+AE, the stochastic latent actor-critic, and so forth. We beat all those methods by a wide margin, and we also had really good results on Atari. Basically, whatever results Google Brain had gotten with a really high-compute model-based method that predicted the future with a video prediction model and did rollouts and planning with it, we were able to get with just a very lightweight model that added this auxiliary loss. So that was basically the paper. I didn't really expect it to be that big, but apparently people were very surprised by the results, and I think now there's a lot of follow-up work on different auxiliary losses in this contrastive setting, and a lot more complicated things, so there's a lot more scope for future work here. So I'm looking at figure two in the CURL paper that shows the architecture, and it shows two different batches, one going to the query encoder and one going to the key encoder, and feeding into the reinforcement learning loss and the contrastive loss. Can you help me understand what those two streams are doing, and what the two different batches of data are?
Yeah, sure. So typically the way you do off-policy reinforcement learning is you have a replay buffer, you load observations from it, and then you have your actor and critic losses going into the model. Now, when you add an auxiliary loss, you have another unsupervised objective in tandem with the reinforcement learning objective. The contrastive model basically tries to make sure that data-augmented views of the same observation are closer in the latent space. So you take your observation, the frame stack, and you create two different augmented views: one is the query and the other is the key. Both of them go through separate encoders, the query through the query encoder and the key through the key encoder, and in practice the key encoder is just a time-delayed version of the query encoder, basically a Polyak-averaged version of it. You just try to make sure that q = f_q(o_q) and k = f_k(o_k) are much closer in the latent space, because these are just two different augmented views, so they shouldn't be too far apart in the latent space. That particular loss, which ensures that the dot product of q and k is high relative to the other keys you can get from other frame stacks not related to this particular image, is called the contrastive loss. On the other hand, you can still perform reinforcement learning on the input you've already been feeding in, so the reinforcement learning loss is also backpropagating through the query encoder, and there is no backpropagation through the key encoder. So the contrastive loss only affects the query encoder, the reinforcement learning loss also affects the query encoder, and the key encoder is just a time-delayed version of the query encoder; this is similar to the momentum contrast mechanism. And there's absolutely no engineering in terms of coefficients: usually, whenever people add auxiliary losses in reinforcement learning, they have to tune the coefficient of the auxiliary loss to get good results, but in this paper the really lucky thing was we had to do none of that. We just added the losses together and it worked really well. So that's how the learning objective, how this framework, works. So some of these types of losses I think I've seen present negative and positive examples. Is that a thing here? Yeah, yeah. You have a query and you have a key. The query is sort of an anchor, the key is one of the positives for the anchor, and then there are all the other samples in your mini-batch. When you load a mini-batch from your replay buffer, you create two different views for every sample: one of the views is the anchor, the other view is the positive, and every other sample in your mini-batch becomes a negative, and then you can compute the contrastive loss against the negatives in a very computationally efficient fashion using dot products. So yeah, you're right: we use every other image in the mini-batch as a negative for the contrastive objective. I see, cool.
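A minimal sketch of the query/key scheme just described: two augmented views of the same frame stack, a query encoder trained by both the contrastive loss and a stand-in RL loss, and a key encoder that receives no gradients and is only updated as a momentum (Polyak-averaged) copy. The crop helper, the linear encoder, the dot-product similarity, and the stand-in RL loss are illustrative assumptions, not the CURL paper's exact implementation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder_q = nn.Sequential(nn.Flatten(), nn.Linear(9 * 64 * 64, 128))  # query encoder (gets gradients)
encoder_k = copy.deepcopy(encoder_q)                                  # key encoder (momentum copy only)
for p in encoder_k.parameters():
    p.requires_grad = False

def random_crop(obs, size=64):
    # One random crop applied to the whole (B, C, 84, 84) frame stack.
    _, _, h, w = obs.shape
    top = torch.randint(0, h - size + 1, (1,)).item()
    left = torch.randint(0, w - size + 1, (1,)).item()
    return obs[:, :, top:top + size, left:left + size]

def curl_style_update(obs, rl_loss_fn, momentum=0.99):
    q = encoder_q(random_crop(obs))          # anchor view, through the query encoder
    with torch.no_grad():
        k = encoder_k(random_crop(obs))      # positive view, through the key encoder, no gradient
    logits = q @ k.t()                       # every other item in the batch acts as a negative
    contrastive = F.cross_entropy(logits, torch.arange(q.shape[0]))

    total = rl_loss_fn(q) + contrastive      # the two losses are simply added, no tuned coefficient
    total.backward()                         # (optimizer step omitted in this sketch)

    # Polyak / momentum update: the key encoder slowly tracks the query encoder.
    with torch.no_grad():
        for pk, pq in zip(encoder_k.parameters(), encoder_q.parameters()):
            pk.mul_(momentum).add_((1 - momentum) * pq)
    return total

obs = torch.randn(32, 9, 84, 84)                                     # a batch of stacked frames
loss = curl_style_update(obs, rl_loss_fn=lambda z: z.pow(2).mean())  # stand-in for actor/critic losses
```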
Okay, and can you tell us more about the experience of planning and running the experiments for this? Oh, sure. There were a few papers at the time trying to show improvements on the DeepMind Control benchmark, which was becoming popular, in terms of sample efficiency: the stochastic latent actor-critic, PlaNet, Dreamer, and so forth. We picked those benchmarks because it's always good to show results on the stuff other people are working on, and we just tried the six basic environments that were in the soft actor-critic with autoencoders paper. We got really good improvements on top of them without doing much work, and since this was so much simpler, we went ahead toward publishing it. That's kind of how it happened: we coded up the contrastive model, tried it on a bunch of environments, and it worked pretty well. Then we started iterating more; for example, the results on one or two environments like cheetah were not that good, and we figured out we had to increase the batch size and it worked better. So there were a few engineering tricks, but mostly it was an idea that just wanted to work. And you mentioned the van den Oord paper where they did CPC for RL, but they didn't get great results at the time. Can you help us understand why that maybe didn't work as well compared to CURL? I think firstly, van den Oord's paper used the DeepMind Lab environments, which are pretty different from the environments presented here, but I would say it's more an aspect of not spending too much time on the reinforcement learning setup. It was a paper with like five or six benchmarks, so the amount of time you can spend on just one part of it is much lower. If you look at CPC version 1, the original paper, even the results on ImageNet are not that great, and when we went into full depth on ImageNet for version 2, we got way better results. So I think it's more like that: probably not sufficiently engineered, or not sufficient time spent on it, compared to the CURL paper. So what kind of domains do you think benefit from this type of loss? It seems to me maybe the requirements are domains that have high-dimensional observations and maybe some way to do meaningful data augmentation. Is that correct? That's correct, yeah. So let's say you were trying to apply this to tabular data, if that makes any sense; would you say that's just too low-dimensional, or direct state data? Yeah, so I think this idea is not particularly useful there, but it depends. If you're specifically talking about the exact implementation of CURL, you do need data augmentation for it to work, because it fundamentally centers itself on this instance discrimination. If you want to do instance discrimination, you need a way to say that two things are similar compared to any other thing, and if it's at the instance level, you need a data augmentation for that. But you could try to move more towards the predicting-the-future style: I have the current state and I'm going to try to predict the future, but I won't actually predict it in the raw input, I'll do it in a latent space with contrastive losses.
I think there you might be able to move away from data augmentation and might be able to do it even with low-level state, but I would still assume you need data augmentation to get really good performance. Even in CPC, just because we're trying to predict the future doesn't mean we don't use data augmentation; we still use it. So it might not be applicable to tabular data, but it might be applicable to continuous control; people need to try that. And I think it has really good potential for any image-based RL or image-based imitation learning, or even if somebody wanted to combine reinforcement learning with language and use contrastive losses to learn language representations in tandem with the reinforcement objectives, those kinds of things might be pretty useful. So there are a few things that come up in, say, Atari, like the bullet problem, where people have criticized reconstruction methods because the reconstruction might ignore a small pixel like a bullet, but the bullet is actually very important to the outcome. Can you say anything about how CURL would perceive a very small feature that turns out to be really important? Yeah, so let's say you perform these random crops, and between the random crops the bullet was present in both of them, and every other frame in your mini-batch didn't have the bullet, or at least some of the frames didn't have the bullet.
Then you would use the bullet as a feature to make sure the two random crops are encoded nearby each other compared to the other frames in your mini-batch. So if the bullet ends up becoming something the instance objective can use to relate the two augmented views, then you're definitely going to encode it, compared to reconstruction methods, which focus on everything equally and might not get the things you care about. That said, it is likely you could still miss the bullet if every single frame in your mini-batch has the bullet, so that it doesn't become a discriminative feature anymore for the instance discrimination objective. Then you might have to try things like contrastively predicting the future, where the bullet might have moved, so you focus on the fact that the true future is the one where the bullet is actually at the bottom, because in the past it was at the top and so it probably moved down, and all the other frames seem to have the bullet somewhere else, so they're not the right future; in this many time steps it must have moved by this much. So you start focusing on those things, and you encode the fact that the bullet is there, and its motion, and things like that. So it's definitely a more powerful and better framework for learning features than reconstruction. Okay, and then another problem we hear about sometimes is this noisy TV problem, which I'd maybe summarize as an environment in which part of the observation is just random noise. How would this type of loss deal with that type of randomness in the environment? Yeah, so that's a really good question, and that's one of the reasons why I actually started working on these kinds of contrastive learning methods: they will basically ignore anything they can't predict. In contrastive learning you're only going to encode things that are useful for predicting the future, or the fundamental invariances across your augmentations that you're encoding in the instance setup. So if noise is not something you can use to relate the two augmented views, or to identify the future given the past, you would just ignore it and not encode it. So it's better from that perspective as well. Actually, if it helps, Yoshua Bengio has a talk explaining this idea, in some presentation he gave at Microsoft Research a few years ago, it's there on YouTube, where he precisely explains why we should not be working on unsupervised learning in the reconstruction sense, because it tries to encode all the noise, and you don't want to encode the noise. Thanks for the tip. So we've seen that RL can have problems with generalization, with Atari and MuJoCo kind of being memorized, and that's led to things like OpenAI's procedural generation environments, which I see were used in the RAD paper. But how confident do you feel that the nice properties of these contrastive representations would hold for out-of-distribution trajectories, data that's never been seen before? So firstly, there are two types of generalization, right? One is generalization across states in the same environment, which is what we focus on in reinforcement learning. It's not that we're overfitting; it's just that we're generalizing within a narrow subset of what we think generalization is. If your model is not generalizing across the state space, if it's not able to do well on an unseen image, then it won't be very sample-efficient. But the second thing is: can I learn something, put it on another thing, and would it learn really fast there? I think right now we don't have the right benchmarks to say anything at all. I would expect a method like contrastive learning to do better than plain reinforcement learning if it's about generalization, because it definitely learns more about the environment than just trying to optimize a reward function, and the latter is very task-specific while the former is not. In an ideal world, what I would really like is something like what happened with ImageNet: there are a lot of trajectories from a lot of different environments, and you learn a really good contrastive model on those trajectories. Then you have a really good encoder, and you just take that, put an MLP on top of it, and do the reinforcement learning specific to a task. You could even do multitask learning, where the encoder is learned purely from contrastive learning and not the reinforcement objective, and you have separate heads emerging out of these encoders doing their separate losses. That's what, in an ideal world, I think is the right approach. And if you look at the CURL paper, we actually have an ablation doing something like this, which we call the detached encoder ablation, where we ask, what if you could do exactly that?
It's in figure nine of the paper: the convolutional encoder is trained only with the contrastive loss, and only the MLPs on top of it, which hold the policy and value function parameters and are tiny compared to the convolutional encoder, are trained with the reinforcement learning objective. That worked really well on a lot of environments, but it was still lagging behind on the harder tasks like Cheetah Run and so forth. In the future I'm sure this is going to work, and once it does, you could imagine training, say, a centralized learner across lots of different tasks. You're getting data from all these tasks, and they can all feed into the convolutional encoder, which just performs contrastive learning, so there's no issue combining multiple losses, which is what people usually struggle with when they try to multitask. Then you can have heads emerging out of this encoder specializing to the separate tasks, and whether you want to share any parameters there is up to you. That's ideally where I'd like to see contrastive learning move in the future. And if that happens, then we could generalize really well, because if we learn on a lot of different tasks, we learn visual features that are common across them, and it's very likely that in a new environment the encoder will provide useful features, so you can just build an MLP on top of it and you don't really have to train the encoder.

Do you want to talk about RAD? That was your paper, Reinforcement Learning with Augmented Data. Yeah, Laskin et al., 2020. Can you give us the general idea of this paper?

Oh, sure. After we wrote CURL, one of the authors on RAD, who was a postdoc in our lab, asked us a very interesting question: hey, this thing is really cool, it works, but how do we know what's really helping in the framework? Is it the data augmentations or is it the contrastive loss? And even if the contrastive loss is helping, is what helps you more simply the fact that the reinforcement learning objective now looks at augmented views of the data? That's more like what happens in supervised learning: when you're training an ImageNet or CIFAR classifier, you feed random crops of the image to the model, and you're saying that all these different random crops should have the same output. In CURL you're trying to do that in a more explicit way, by explicitly saying that the two augmented views should correlate well in the latent space; you're explicitly enforcing this consistency constraint with a contrastive objective. But you could just feed multiple augmented views of an input to the reinforcement learning model, with no explicit consistency objective, and implicitly it should still happen: just as you predict the same output for multiple augmented inputs in supervised learning, you would predict the same value function and policy for the multiple differently cropped versions of the input.
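A rough sketch of how little that recipe changes in code: sample a batch, augment the observations, and run the usual off-policy update on them, with no auxiliary loss. The agent interface and function names below are placeholders, not the actual RAD implementation; `random_crop` is the same kind of helper as in the sketch above.

```python
# Minimal sketch of the RAD recipe: augment the replay batch, then apply the
# unmodified RL losses to the augmented views. Names are illustrative.
def rad_update(agent, replay_buffer, batch_size=128):
    obs, action, reward, next_obs, done = replay_buffer.sample(batch_size)

    # The only change versus the base algorithm: augment the image observations.
    obs = random_crop(obs)
    next_obs = random_crop(next_obs)

    # Standard actor-critic (e.g. SAC-style) losses on the augmented inputs;
    # no contrastive term, no architectural change.
    critic_loss = agent.critic_loss(obs, action, reward, next_obs, done)
    actor_loss = agent.actor_loss(obs)
    agent.optimize(critic_loss, actor_loss)
```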
So when he asked us to try that, we thought, okay, it's totally worth trying and it would be good to know. We tried it and, to our surprise, it actually worked even better than CURL on a lot of these DeepMind Control environments. So that ended up becoming RAD. It was much simpler, much more "radical" if you want to put it that way, in terms of being the simplest possible thing you could try with data augmentations and reinforcement learning. And nobody had really tried it: even though CURL was the first to combine data augmentations with RL, it did so through an auxiliary loss, which is something people were already used to in RL. Nobody was used to pure model-free RL with just augmented input data, no algorithmic change, no extra loss function, working really well and beating every other method in the field. To our surprise, we found it also worked on the procedural generation environments, ProcGen, so it wasn't specific to DeepMind Control, and people also found it to work on Atari, like the parallel paper from NYU on this. So yeah, it was a very surprising result. And, like I said, the RAD paper doesn't mean that CURL is not useful or something like that; that's a misconception among a lot of people, like, oh, does this mean we should just use RAD? Because, like I said, CURL is a more general framework, and you can think about using it even without any reward function available. Whereas with RAD, if you don't get rewards, you're not going to have any objective at the end, right? You can feed augmented views as your inputs, but if you don't have the loss function coming in with fairly dense rewards and making your encoder learn, it's not going to work. With something like CURL you can just keep the auxiliary objective and use it on sparse-reward tasks, or use it as a way to learn representations from data without any reward function and fine-tune once you start getting rewards and so forth.

Okay, so do you think of the specific augmentations in RAD as somehow fundamental, or are they just a starting set? Should we expect a growing set of augmentations?

Yeah, I think you should expect a growing set of augmentations, because what we started with is what happened in image recognition. If you look at AlexNet, it's famous for this; it's right there on the first page of the paper. They mention how they resize the images to 256 by 256 and then take random 224 by 224 crops, and that increases the size of the dataset by 2000x or so. So random crops are something very fundamental to image recognition, and they're also used in training generative models, like GANs or VAEs, things like that. So it was the simplest thing we could start with. That said, contrastive learning uses a lot more augmentations: color-based augmentations, random grayscale, and also a lot more distortions like rotation, shearing, and translation. These are things we haven't really pushed on much yet, and I think there's more to come.
Like random translate: people have already figured out that random translate works even better than random crop as an augmentation, both in RAD and in related follow-up work, so that's another upgrade, and the two are very similar in spirit. Depending on the environment, you might need new things. For example, think about robotics, manipulation tasks rather than locomotion. If you keep doing random crops, you might miss the object in the scene, or you might miss the arm, and that might make the task hard. But what if you have two or three or four different cameras, just like how a Tesla car has eight different cameras? Then you could leave out a few of the camera views and feed in only some of them, so you can think of that as random cropping across cameras. That could force the model to be robust to missing certain views and to implicitly extrapolate them, and I think that kind of augmentation could work really well in robotic manipulation. Also, once you start performing tasks that are more long-horizon in nature, where you want to look at 20 or 30 frames in context together, you might want to drop a few frames across time as well. Right now we're doing spatial random crops, dropping a few pixels. But what if you also drop a few frames: even though your policy can look at 20 frames in the past, your conv net might only be fed 10 or 15 of them, and the model might learn to be robust to the missing 5 or 10 frames. Think of it as a kind of input dropout, or as very different random crops across time, just like in video recognition, where people do both spatial and temporal random crops and it has worked out well. So that's a kind of augmentation I'm hoping really works out in the future. We have a few projects in our lab with people working on these things; it hasn't really panned out yet. And multi-view is something I'm hearing a lot of people are trying with RAD and CURL and seeing some good results, so that might really lead to a lot of data-efficiency gains in robotic grasping, or robotic manipulation in general.
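As a speculative illustration of the temporal "frame dropout" idea described above (this is not an augmentation from the RAD paper), one might randomly blank out a few time steps in the stacked history before it reaches the encoder:

```python
# Speculative sketch of a temporal augmentation: zero out a few randomly chosen
# frames in a frame stack of shape (B, T, C, H, W).
import torch

def frame_dropout(frame_stack, max_dropped=2):
    b, t = frame_stack.shape[:2]
    out = frame_stack.clone()
    for i in range(b):
        n_drop = torch.randint(0, max_dropped + 1, (1,)).item()
        if n_drop > 0:
            drop_idx = torch.randperm(t)[:n_drop]
            out[i, drop_idx] = 0.0   # the encoder must learn to cope with missing frames
    return out
```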
Part of the promise of deep learning is avoiding handcrafted features. I guess at this point we're kind of hand-designing some of these augmentations, although it seems a lot simpler than designing features. Do you think that eventually the augmentations will be automatically designed somehow?

Let me first give a very balanced answer to this. Number one, what you said is indeed true in the broader sense that deep learning moves away from handcrafted features. But I really think data augmentation with some domain knowledge is something people have always been using in deep learning. It's probably something people focus on and acknowledge a lot more now than in the past, but it's always been there; it was a huge part of what made AlexNet great. And as an interesting data point that was never published anywhere: if you take a ResNet, or any state-of-the-art image classifier, go to the codebase, and just comment out the random crop, so you only take center crops or you just resize the image to the proper shape, your accuracy would probably drop by at least 7 or 8 percent. That's huge: you have a classifier at 76% top-1 accuracy, and the accuracy just drops to something like 68%. Doesn't that already tell you how much image recognition, arguably the success story of deep learning that started the whole revolution, depended on domain knowledge? Namely, the fact that if you move the camera around in an image, which is what a random crop simulates, the object you identify in the image shouldn't change. The other augmentation we all use is flipping, where you flip an image left to right and it should be the same class; that's another piece of domain knowledge people have been using ever since Krizhevsky used it, and even the old LeNet classifiers trained on MNIST used data augmentation, like translating the digits and so forth. In NLP, people have been using the back-translation idea: you take a sentence, translate it to another language, and then translate back to the original language, and you get the same sentence with the same meaning but constructed a different way. People have used this as an augmentation for training translation models and gotten significant gains; the technique is even used in production. So I would say all the success stories of deep learning, in audio, NLP, and vision, have always used domain knowledge, and it's not a bad thing.

Do I see this becoming more learned rather than engineered? It's hard to say. In fact, learned augmentations haven't really worked that well so far. Quoc Le's group has this work on AutoAugment, which is sort of the equivalent of neural architecture search for augmentations: can you construct a vocabulary of augmentations and then automatically learn the right augmentation, or sequence of augmentations? But, like I said, just as NAS is fundamentally built on top of human-engineered ops, you have the three-by-three conv, the residual skip connection, batch norm, layer norm, and then you figure out how to combine them, AutoAugment is built on top of a similar idea. So unless you have the vocabulary of random crop, random translate, color distortions and so on, you're not going to get something fundamentally new out of these AutoML-style approaches. So yeah, I think augmentation is going to be a big part of future systems, and we definitely get a lot of benefit from these tricks. And think about it practically: robotics is a very practical field, and it's not going to stay purely academic. A lot of startups are already beginning to focus on just getting robots to work in industry rather than on writing papers. For them it's like: okay, I train a robot at my company, but then the robot has to go and work in a factory, so the lighting conditions will be different, the camera might be different, and the objects in the scene might be very different.
So obviously you're not going to be able to train at test time, so you should do all these augmentations, the ones that fundamentally prepare you for deployment, while you're training the robot in your own facility: you randomize a lot of lighting conditions, you randomize the objects. There's this idea in robotics called domain randomization, where you train on a lot of different variants in simulation so that it can work in the real world. These ideas have always been there. It's just that CURL and RAD are trying to explicitly raise the importance of focusing on them, more than on the algorithms, and showing that such simple changes can give you a massive improvement in performance.

So does RAD end up using a lot more mini-batches than a standard model-free algorithm?

Not really; it uses a batch size of 128, which is what other people train with. But it effectively sees a lot more data, a lot more diverse data, than other methods, because it's looking at these augmented views of the images. It's kind of like how AlexNet increased the size of the dataset by 2000x by doing augmentations: RAD increases the size of the dataset implicitly by never providing just one version of the same observation, but lots of different versions. In terms of computation, we're not increasing the amount of computation the model does; it's the same amount of computation, it's just seeing far more diverse inputs each time it computes, so it stays fast.

Okay, that was a surprising answer to me, I didn't expect that. That's really cool. Okay, so let's move on to SUNRISE. You've got so many great papers here to talk about. SUNRISE, this is A Simple Unified Framework for Ensemble Learning in Deep Reinforcement Learning. Can you tell us about this paper? What is the general idea here?

Sure. Firstly, I can't take too much credit for this paper, because while I can say I was primarily leading CURL and RAD, SUNRISE was mostly done by Kimin, the first author, and my advisor Pieter Abbeel; it was largely his idea. The basic idea is that CURL and RAD focus on the data augmentation side, but you can also improve reinforcement learning by focusing on the algorithmic side. On the algorithmic side, value-based methods are really finicky and hard to train, but they're often the best methods in terms of sample efficiency compared to policy gradient methods. In value-based methods you typically have this thing called a target network that provides the target for the Bellman equation, and you backpropagate the Bellman error. The target network can be finicky because the updates change a lot: the current network changes much faster than the target network, and that might not be the most stable thing to work with. The idea here is: what if you ensemble models? If you have multiple target networks, or multiple value functions, you can use the statistics across them to stabilize your updates. Say I had ten different models to pick my target from; I could take the mean and the standard deviation across those estimates and use them as an uncertainty-aware target instead of a single point estimate.
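As a rough illustration of using ensemble statistics for the Bellman target, here is a minimal sketch with hypothetical names. The actual SUNRISE method uses a weighted Bellman backup and UCB-style exploration and differs in its details; this only shows the general "mean and spread instead of a point estimate" idea.

```python
# Rough sketch: combine an ensemble of target Q-networks into a single,
# uncertainty-aware Bellman target. Hypothetical names and weighting.
import torch

def ensemble_target(target_q_nets, reward, next_obs, next_action, done,
                    gamma=0.99, penalty=1.0):
    with torch.no_grad():
        q_values = torch.stack(
            [q(next_obs, next_action) for q in target_q_nets], dim=0
        )                                    # (num_ensemble, batch)
        q_mean = q_values.mean(dim=0)
        q_std = q_values.std(dim=0)
        # Shrink the bootstrap value where the ensemble disagrees.
        target = reward + gamma * (1.0 - done) * (q_mean - penalty * q_std)
    return target, q_std                     # q_std could also be used to weight the loss
```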
That makes training a lot more stable and faster. So that's the idea in SUNRISE, and it's heavily inspired by Osband's work on Bootstrapped DQN, except that it uses the mean and variance of these estimates more explicitly and is able to handle the error propagation much better. That's really all I can say about it, because a lot more of the work was done by Kimin on this setup. But one thing that was really surprising to me about the paper was its really good results on state-based RL, where you're learning not from pixels but from the actual state. There, the state-of-the-art results typically came from model-based methods that try to predict the future state and then use it to simulate imagined rollouts and so on. This is a pure model-free method with ensembles, and it caught up with those results from model-based methods. To me that was really surprising; it's sort of the CURL story again, but for states: with CURL we saw it happen on images, and with SUNRISE we saw that a simple idea on top of existing model-free RL, nothing else, can be very competitive with or even outperform the state-of-the-art model-based methods.

Awesome. Let's talk about the course that you co-created and co-taught on deep unsupervised learning.

Sure, the Berkeley course CS294-158.

Can you tell us a bit about this course and its contents?

We started the first version of the class in 2019, I think. Basically, in 2018 I interned at OpenAI over the summer and I was working on reinforcement learning at the time. But that was the same time Ilya Sutskever and Alec Radford did the GPT paper, and we saw that happen; I saw how it basically changed the NLP field, and within four months the BERT paper came out and it was basically a revolution. So we saw right in front of our eyes that unsupervised learning was really taking off, and when I went to ICML that summer to present a paper on my research at Berkeley, the CPC paper came out during the conference. The writing was on the wall that we should be working on unsupervised learning. On the other hand, our lab was a reinforcement learning lab; Pieter Abbeel is better known for reinforcement learning. So we figured the best way to learn is to teach: let's start a class, learn the content ourselves by teaching it to other people, and by learning the content well we can do more research on it. That's how it started in 2019. There were also a couple of people in our lab who were very interested in generative models, PixelCNNs, VAEs, flow models, GANs and so forth, and I was very interested in representation learning and how it can be applied to a lot of different problems. So we put everything together, taught the first version in spring 2019, and followed it up with the second version in spring 2020. That's really the story of the class, and it was probably one of the first few
classes on this topic, very focused on the latest and greatest in the field, because it's a moving target. For example, when we first taught the class, the state of the art in image representation learning was CPC version one, at something like 48% accuracy, but by the time we taught it this year, MoCo version 2 had come out and the numbers were already close to 80%. That's how fast the field was changing. Similarly, when we designed the class in 2019, only GPT and BERT existed; now you know how it is, there's basically a flurry of BERT papers, plus GPT-2, GPT-3 and so forth. Likewise, CURL didn't exist when we started the class this semester, and I'm sure next year there will be a lot more people working on this topic in reinforcement learning as well. So it's a very fast-moving topic, and we just thought it would be very interesting to have a class that moves with it and helps us instructors learn along with the students; we learn together.

So do you feel like these are still early days for unsupervised learning?

Yeah, definitely, I think so. It's way better than a couple of years ago, that's for sure. It has definitely gotten very compute intensive, and there are some principles we've figured out about what it takes to be good at unsupervised learning: you really need a lot of computational power, you need to train really large models, and you need to train on really large datasets. Those are fundamental principles that are just objectively true now, based on what's happening in language and in images. That kind of volume of training, that scale, hasn't been reached in reinforcement learning or imitation learning yet, so I think it would be pretty interesting to see. Most probably we're not able to do it because we don't have the benchmarks; it might be best done in an industry setting, like Autopilot or a robotics company, but that kind of scale demonstrated on some control or navigation tasks would be really interesting. That said, I do think the right objective hasn't been nailed down. In NLP it's fundamentally centered on language modeling or masked language modeling, in vision it's fundamentally centered on contrastive learning between augmented views, and in reinforcement learning it seems like contrastive learning, or using some kind of augmentation, works really well. So each specific area has its own formula that's working for it, and whether we'll ever be able to unify everything into one framework is an open question; it's still too early to give a definitive answer. It may also not be necessary. I'm not a big fan of insisting that everything should be the same class of models with the same objective and just work on any data. I think that kind of universality is really cute, but it's not a must-have; it's fine if we can engineer really good AI systems that work even if they're very specific to the domain they're trained on. But in terms of what objectives to use and how to improve further,
with images we still haven't convincingly scaled beyond ImageNet. In NLP, OpenAI took it to a totally different level by training on extremely large datasets, but in vision it still feels like we're in the million-image or hundred-million-image regime. Truly scaling to billions or trillions of images on the internet, training on videos, training on all of YouTube, these are still out of reach at the moment, and the reason is simply computational power. To train a good unsupervised image model you need to train a really large model for at least one or two weeks, for a really long time, thousands of epochs, and that's for a million-image dataset. If you really want to go to a billion images, that's a thousand times more computation; imagine having to wait a thousand weeks. That's not feasible, so you might only be able to do a single pass over the entire dataset to train in a similar time frame, and you'd need a lot more cores, much bigger pods, a lot more GPUs. So I do think it's a topic that's more relevant, or more doable, for a large industry company than for academia. The best way for academia to work on unsupervised learning is to try it in reinforcement learning setups, because it's computationally much cheaper and not something industry is currently super focused on in terms of getting the best numbers, so you can reach the state of the art and get a lot of visibility for your work. That was another motivating factor for CURL, and I hope more people try it out in other settings: navigation, manipulation, or even combining text and reinforcement learning.

So this pattern of doing massive unsupervised pre-training and then fine-tuning for specific tasks, like we see with BERT: does that point to a future where very few organizations have the capacity to do the massive pre-training, and the rest of us are basically fine-tuning on the result?

Yeah, I think that's very likely, and I kind of think that's how it should be. A lot of people may not have the compute to do the pre-training, but if industry keeps releasing these pre-trained checkpoints, then we can use them in academic settings: take a BERT checkpoint and do something cool with it, or take a CPC or MoCo or similar checkpoint and try to use it in a downstream task of academic interest. The only catch so far is that in RL or imitation learning it still hasn't panned out: taking something trained on ImageNet or YouTube and putting it on an RL environment doesn't help much, and that's kind of a sad thing, right? Ideally it should work: if we have really good visual features, they should work on Atari, or on DeepMind Lab or DeepMind Control and so forth, but somehow it doesn't seem to work that well; it doesn't seem to give us much benefit over training from scratch on these environments.
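For reference, the transfer recipe being discussed, which so far has not given much benefit in RL, would look roughly like this. The checkpoint path, the feature dimension, and the action count are hypothetical placeholders.

```python
# Hypothetical sketch: load a frozen, unsupervised pre-trained vision encoder and
# train only a small MLP head on top of it with the RL objective.
import torch
import torch.nn as nn

encoder = torch.load("pretrained_contrastive_encoder.pt")   # hypothetical checkpoint
for p in encoder.parameters():
    p.requires_grad = False                                  # keep the backbone frozen

feature_dim, num_actions = 512, 6                            # assumed sizes
policy_head = nn.Sequential(
    nn.Linear(feature_dim, 256), nn.ReLU(),
    nn.Linear(256, num_actions),
)

def act(obs):
    with torch.no_grad():
        features = encoder(obs)        # pre-trained features, never fine-tuned here
    return policy_head(features)       # only these parameters see the RL loss
```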
But once that kind of thing works, once we move to much harder environments that are really time-consuming to render, or once we're in the real world, where any kind of reinforcement learning is going to be super compute-intensive, data-intensive, and wall-clock-intensive, then using a really good pre-trained checkpoint provided by industry as your backbone architecture, bootstrapping from it and fine-tuning it, would be the way to go, I think.

So any hints on what you plan to work on next? Do you plan on doing follow-up work on the kinds of things you've been doing?

Yeah, there are a lot of follow-up projects on CURL and RAD that I'm not directly working on but that are happening in our lab, some of which I'm just advising on. Currently I'm more focused on deep learning architectures for vision; that's another thing I'm very excited about, and I hope it has a similar impact to CURL. We're trying things like putting self-attention into reinforcement learning: how do you do that, what are the right deep learning architectures for vision and RL, things like that. We have a few projects in our lab on those questions, and also on using domain knowledge in other forms: what if you wanted to use optical flow in reinforcement learning, because you want to model motion or solve more temporal tasks? Some people are also trying temporal versions of CURL. So there are a lot of projects along those lines, and also work in driving simulators like CARLA, because they're more relevant to the real world; not that they're truly real-world either, but there's not much you can do sitting at home during COVID to do something real-world in reinforcement learning, so it's all going to be simulation, and it might as well be a more realistic one. Those are some of the interesting projects, and I also continue to work on deep learning architectures for computer vision and so forth. So it's multi-pronged, a lot of different projects.

Sounds like you have quite a road map in mind.

Yeah, I'm not sure how long-term it is, but I just think it would be pretty interesting to see if we can move away from the primitive architectures that have been standard. Just like how, after the data augmentation work, people who were not taking augmentations seriously in RL now take them seriously, I think architecture design should be similar, and I think we'll see a lot more people on that, not just us but others as well. And like I said a bit earlier, when you asked about my research focus, I think it would be really useful to make deep RL more like deep learning, more centered on engineering and less always on new ideas. You'll notice there are hundreds of papers in deep reinforcement learning proposing new ideas, new learning algorithms, new value functions, new policies or exploration methods, but somehow it's been really hard to keep track of the progress, because they all evaluate on their own special set of environments, and
it's really hard to track what's actually helping. On the other hand, something very simple like CURL or RAD or SUNRISE is applied on standardized benchmarks but is heavily centered on a single idea, and is often very transferable across multiple setups, both in general and specific ways. So I want to focus more on those things, because that's how deep learning has generally progressed over the years, and it's very likely that's how reinforcement learning and imitation learning will progress as well.

Yeah, I have to say, many of the papers you were involved in or first-authored sit in this extreme quadrant of extremely simple and extremely effective at improving performance, which is probably a pretty great place to be.

Yeah, thanks a lot for your kind words. I'm very inspired by the work that OpenAI does, almost all the time, in terms of pushing the boundaries on what simplicity and engineering can buy versus complexity and new ideas. If you look at their results in reinforcement learning, it's crazy: the results on the Dactyl hand and the Rubik's Cube came just from pushing the simple idea of domain randomization, which, by the way, is also an inspiration for data augmentations, because in domain randomization you just train on various different simulators and various different image renderings and so forth; I think it's all fundamentally the same idea. So simplicity can give you a lot. There are two different things here: one is that you want to enjoy your research, and sometimes doing new things gives you more enjoyment; the other is that you also want to make sure you don't meander too much into working on non-problems, things that don't actually pose real problems but are problems you invented just to have fun. So I tend to make sure I don't lean too much toward the fun side; I would rather do the useful-but-not-super-novel work than the really-interesting-but-maybe-not-useful kind.

Besides the things you've mentioned already in this interview, are there things happening in RL lately that you find really interesting?

Let's see. John Schulman's papers are usually very interesting to me; he pushes on the fundamental algorithm side, and recently he released a paper called Phasic Policy Gradient, which was pretty interesting. The Image Augmentation Is All You Need paper from NYU was very similar to ours; it was pretty much the same, except they also applied augmentations to the target networks in the value function and focused purely on value-based methods, so I think they're doing great work too. And then there's this paper from Montreal which was a follow-up to CURL, called Momentum Predictive Representations, which takes another idea from unsupervised learning called BYOL, Bootstrap Your Own Latent, from DeepMind, and applies it to reinforcement learning. They do these temporal predictions, which give them a lot more gains over CURL, and they improved the data efficiency on Atari even further.
So those are pretty much in the same vein as CURL and RAD, but done by other people and other groups, and it's always satisfying when multiple people are thinking about the same thing and pushing the numbers as hard. There's also a lot of interesting work happening in robot learning generally, and in companies, which doesn't always show up in academic papers. In general I'm a fan of what's happening in industry as well: how people are pushing on pick-and-place and on replacing humans in logistics and factories. For example, if we make 10x progress in grasping and pick-and-place, we might be able to have single-day delivery instead of two-day delivery on Amazon Prime. Those things are high-impact in terms of economic value, but it's not like you take the latest state-of-the-art RL algorithm, put it on these robots, and hope it happens; it's going to be more domain-specific engineering, a lot of augmentations, good object detection and so forth, more engineering than research. But it definitely uses a lot of the ideas we publish in the field and tries to get them into practice and make them a reality, and I think those kinds of things will have a lot of impact.

Aravind Srinivas, this has been fascinating, very enlightening for me and I'm sure for our audience. I want to thank you for sharing your insight and your time with all of us today, and we look forward to watching your work in the future. Thanks so much, Aravind.

Thank you, Robin, thanks for having me.

Notes and links for this episode are at talkrl.com. If you like this show, I need your support. You can help in a few ways. Subscribe on your favorite podcast platform; subscriptions make a big difference. Give us a five-star rating on Apple Podcasts. If you don't think we deserve five stars, let us know on Twitter what we could do better.
[ { "end": 12, "start": 0, "text": " This is Talk Our Real Podcast. All reinforcement learning, all the time." }, { "end": 18, "start": 12, "text": " Interviews of brilliant folks across the world of our realm. I'm your host, Robin Chohan." }, { "end": 32, "start": 18, "text": " Arvin Trinivas is a third-year PhD student at UC Berkeley, advised by Professor Rebiel. He co-created and co-taught a grad course on deep unsupervised learning at Berkeley." }, { "end": 34, "start": 32, "text": " Arvin, thanks so much for being on the show." }, { "end": 38, "start": 34, "text": " Thank you for having me. How do you describe your area of interest?" }, { "end": 54, "start": 38, "text": " I'm broadly interested in how can we learn better representations of data in a way that we can improve the performance of our current ML systems." }, { "end": 68, "start": 54, "text": " This may seem really general and vague, but the reason I say it in such a way is because that's how deep learning has worked out in my opinion." }, { "end": 77, "start": 68, "text": " It's sort of like one or two good ideas, kind of transcend different topics." }, { "end": 93, "start": 77, "text": " There's a Transformers idea which is sort of doing stuff attention. It was originally applied in L.P., but it's sort of now pretty much everywhere." }, { "end": 112, "start": 93, "text": " All of these ideas are basically intended at getting better representations of data, either through the architecture or through the learning loss objective functions, or how do you feed in the data, what kind of data do you train on and so forth." }, { "end": 132, "start": 112, "text": " That's kind of my interest, like just kind of improving the systems by figuring out better ways to learn representations. I'm interested in doing this both from engineering the architecture or figuring out what is the right learning objectives." }, { "end": 157, "start": 132, "text": " Both are very interesting to me. Things like contrast of learning or more at the objective function level, while the engineering efforts like CPC version 2, where the objective function was already there, but we were just figuring out the right engineering in terms of architecture, fall more into the former category." }, { "end": 171, "start": 157, "text": " But I'm also working on some things yet to be released, in terms of how to better improve the vision architectures beyond rest nets, how to use Transformers and rest nets together and so forth." }, { "end": 191, "start": 171, "text": " I'm not very tied to any particular problem, like not just reinforcement learning, but that's obviously one of the main focus problems. So computer vision is also pretty interesting and so is language processing." }, { "end": 212, "start": 191, "text": " And one of my goals is to make reinforcement learning more like computer, more like whatever is happening in deep learning for vision and NLP, where there's a lot of focus on architecture engineering, a lot of focus on data augmentation, a lot of focus on unsupervised learning." }, { "end": 222, "start": 212, "text": " But somehow reinforcement learning is sort of like more deep reinforcement learning, sort of state more like reinforcement learning than deep learning." }, { "end": 230, "start": 222, "text": " I don't know for what reasons, but it's sort of been that way for the last few years and now it's slowly changing." 
}, { "end": 249, "start": 230, "text": " And so that's also like a pretty important topic to me, like how to make deep oral, sort of borrow a lot of the successful principles that we've seen work over time in, in like canonical deep learning and try to unify all these things." }, { "end": 263, "start": 249, "text": " So the first paper of yours that we're going to talk about today isn't really an RL paper, but I think it sets the stage for time what curl next. So and that is the data efficient image recognition with contrastive predictive coding." }, { "end": 268, "start": 263, "text": " So can you tell us what is going on with CPC in this paper?" }, { "end": 287, "start": 268, "text": " Sure. So firstly, CPC is basically a self-supervised learning objective, where it's centered on this paradigm of predicting the future in a latent space." }, { "end": 297, "start": 287, "text": " And so I want to first briefly explain why that's like an important and interesting framework to think about unsupervised learning." }, { "end": 311, "start": 297, "text": " So firstly unsupervised learning is a paradigm where you're trying to learn representations of data in a without any annotated annotated labels from humans." }, { "end": 330, "start": 311, "text": " So you just want to learn from raw data that's just available in way more quantity than supervised learning data sets. And it's obviously inspired from how humans and animals learn in general without having actual annotations." }, { "end": 348, "start": 330, "text": " And within unsupervised learning there are so many ways in which you can learn representations and this dates back to like Ben Geo's work on audit encoders and Hinton's work on restricted balls machines and so forth." }, { "end": 361, "start": 348, "text": " But those things didn't really pan out back then mostly because the computation was not there and like people were working really tiny data sets like MNIST." }, { "end": 377, "start": 361, "text": " And then there was a flurry of work in the computer vision community on trying to sort of create these pretext tasks like you create tasks that creatively yourself." }, { "end": 389, "start": 377, "text": " Like for example you take an image and you rotate it and you try to predict the angle of rotation. And so that becomes a task that you can solve without any label labels on the images." }, { "end": 396, "start": 389, "text": " You give the image its own labels based on like some transformations you perform." }, { "end": 410, "start": 396, "text": " And all this kind of like had some reasonable results in terms of performance on downstream tasks whether you can take those features and train classifiers on top of them." }, { "end": 420, "start": 410, "text": " But it was still lagging behind the kind of classifiers you could just build by directly doing supervised learning if you had like a lot of labels." }, { "end": 429, "start": 420, "text": " So people mostly thought this unsuppvised learning or some people call it self supervised learning like Yanle Koon calls itself for us learning." }, { "end": 442, "start": 429, "text": " So people mostly thought this class of techniques was just not like worth the time. It was just you know always going to lag behind supervised learning and you would rather just collect labels." }, { "end": 459, "start": 442, "text": " So CPC or contrasted predictive coding is one of the first people that tries to sort of go away from these very ad hoc hand engineer tasks to something little more principal." 
}, { "end": 478, "start": 459, "text": " But it's not particularly new idea. It's more like inspired from a very famous and very impactful earlier work done by Mikhailov called WordWack which I'm sure a lot of people are familiar with in terms of word vectors." }, { "end": 484, "start": 478, "text": " It was basically the best word embeddings people were using before word came out." }, { "end": 495, "start": 484, "text": " So word to back is this idea of where you're trying to predict the surrounding words of you're trying to predict a missing word from the surrounding words in contrast of fashion." }, { "end": 505, "start": 495, "text": " So what what does it mean is you don't actually need to predict the actual word because back then people were not able to do softmax over a really large vocabulary." }, { "end": 514, "start": 505, "text": " But rather you would just predict an embedding and then you would contrast it with what is the real positive embedding and a bunch of negative embeddings." }, { "end": 533, "start": 514, "text": " So you would make a prediction in the latent space and then you would say that that prediction has to correlate maximum with the embedding of the correct word that that's missing and has to not not correlate too much with the embeddings of words that are not the correct missing words." }, { "end": 541, "start": 533, "text": " So you can build these losses in a lots of different ways and Mikhailov had a very elegant formulation for this." }, { "end": 550, "start": 541, "text": " So CPC sort of read with just that framework in a more general fashion that can apply to any morality not just text." }, { "end": 563, "start": 550, "text": " So you want to have this kind of a learning framework for audio or you want to have this kind of learning framework for video or images or reinforcement learning text everything." }, { "end": 581, "start": 563, "text": " Back then when Mikhailov did word to back the the vein which you include the context which you're using to predict the missing word was very simple." }, { "end": 593, "start": 581, "text": " So surrounding the missing word you just average the embedding of this words there's no notion of you know the position or like trying to have like a temporal model." }, { "end": 605, "start": 593, "text": " These kind of things were not for send back then so CPC adds all those things so that's why it's called contrast to predict according it predicts something in a contrast of fashion." }, { "end": 634, "start": 605, "text": " And it uses a context in order to predict some something that's missing and and and that context can be modeled using an order aggressive model like like you know like pixel CNN and so forth that just looks at the past few words or the past few frames or the past few patches in an image and then price to predict the future frames of future." }, { "end": 655, "start": 634, "text": " Image patch of future word or future audio chunk so and but it does this in an embedding space similar to word to that doesn't do it in the raw input space because doing it in the raw input space would would mean you know you're trying to model a lot of things that you don't care about for example right now when I'm talking to you." 
}, { "end": 675, "start": 655, "text": " Your mind roughly models the next word I'm going to say and it doesn't do this at the level of like my actual audio waveform right like you're trying to sort of focus on the phonemes you're trying to focus on like the words time speaking and you have a language model in your head." }, { "end": 704, "start": 675, "text": " So you're focusing on more abstract things we are trying to expect to predict the future outcomes and this is true for any sensory stream like when you see a ball falling you're like focusing on more abstract entities like the object being the ball and the physics of the ball and so forth so that's what that's another motivation for CPC you predict the future in a lane space and you're trying to do this contrast of losses which are very efficient way of." }, { "end": 732, "start": 704, "text": " Solving the degeneracy problem in the lane space and and and so this particular paper CPC proposes like one method that works with a lot of modalities and it presented really promising results in 2018 on on the image unsupervised learning benchmarks not that like was any any better than supervised learning at the time but may be a lot of things that are going to be a lot more important." }, { "end": 743, "start": 732, "text": " I think at the time but made like at least like a 10% jump from what the computer vision community I had by then and here you're talking about the the Van Den word paper." }, { "end": 745, "start": 743, "text": " Yeah, yeah precisely." }, { "end": 755, "start": 745, "text": " And yeah, so then the next summer I intern with Aaron van den Njord and we worked on the version two of the paper." }, { "end": 770, "start": 755, "text": " So where we basically said okay like the trends and language and it'll be suggest that you know just re-visioning the old ideas and making models larger and up getting like really amazing results like." }, { "end": 784, "start": 770, "text": " 2018 was the year of GPT and bird like the GPT one if you if you make all of that way so basically where they just took a transformer train it like train language models are mass language models but." }, { "end": 798, "start": 784, "text": " Train it on like last months of data with a lot really large models and and show that pre training really works so a result like that for a vision computer vision would be really cool because." }, { "end": 812, "start": 798, "text": " The amount of unlabeled image data on the web is just massive and and so we thought okay let's get a really good result on image net like what with the CPC framework by making the models larger." }, { "end": 832, "start": 812, "text": " And so we made the like whatever the rest that's used to encode the patches in an image you make you we were using rest net 50s or one of once and we just made it like really wide and large and we also just added lot more data augmentations which was like sort of a new thing at the time but now everyone does it." }, { "end": 847, "start": 832, "text": " And we tune a lot of hyper parameters just to see like what's very important and what's not important and just just doing these things no new idea just taking the CPC idea but with sort of doing the engineering." }, { "end": 867, "start": 847, "text": " Give us like a lot is to go from like a accuracy of like 48% to 72% you know it's it's crazy to think say that you know but that's kind of how it was you just keep doing new tricks and then the performance just kept going up and up and up and up and." 
}, { "end": 896, "start": 867, "text": " So there was a special result we had in the paper where we could pre train there is we take all the unlabeled data and image net so imagine somebody gave you image net they gave you both the images and the labels you would you would think that it's best to just first train a direct supervised learning model and and deploy the classifier like why would you do on supply learning but we had this really cool result in the paper where if the most." }, { "end": 925, "start": 896, "text": " Where if the models really large if it has a lot of capacity it's better to first do the unsuppressed training and then fine tune onto the supervised objective just like our bird doesn't so we you do the CPC training you you get a really good feature space and then you find unit to image classification and we ended up getting like solid like to 2 to 3% improvement in the top and accuracy on image net classification." }, { "end": 952, "start": 925, "text": " So and like 83% compared 80% just by doing this unsuppwise pre training and this was true when you had all like label data and various regimes so for example if I'm a tell you that I give you a million images but I only give you the labels for 10,000 images or any give you the labels for 100,000 images and ask you to deal with it." }, { "end": 974, "start": 952, "text": " It's really hard like just supervised learning wouldn't be able to do anything much but but the sort of unsuppwise pre training and then fine tuning it to the label label data is is way more data efficient it can it can do classification with just like 10% of the labels and get you like really good classifier that people used to get with 100% of the labels." }, { "end": 997, "start": 974, "text": " So to us all these things were really exciting like the fact that larger models were working much better for unsuppwise learning and unsuppwise learning is finally like a relevant thing now because it can now surpass or be competitive with supervised learning on like downstream tasks or just even improving classification and it was sort of finally delivering the promise." }, { "end": 1006, "start": 997, "text": " And then a lot of follow up work came from like other other other big companies like Facebook and Google like Facebook." }, { "end": 1031, "start": 1006, "text": " And then came in the inventory of rest nets he came up with this paper called moco momentum contrast with simplified the CPC model like significantly and like also improve the results and then simpler from Google brain paper from Jeff Hinton improved on top of moco and really really pushed up the results so much." }, { "end": 1040, "start": 1031, "text": " Like you know this is like really one of the hardest topics in in the in the field right now like contrast of self supervised learning." }, { "end": 1046, "start": 1040, "text": " So so that's kind of the history of like these contrasts of learning papers it seems it seems like recently." }, { "end": 1052, "start": 1046, "text": " It's really been picking off but but when I was working on it last year it wasn't that hot." }, { "end": 1058, "start": 1052, "text": " We were still trying to get the numbers that were not not that yet in the field." }, { "end": 1064, "start": 1058, "text": " And so it's is CPC considered a type of metric learning." }, { "end": 1074, "start": 1064, "text": " You can say so so so you can say that CPC is both like a latent space generative model trained with contrast of losses." 
}, { "end": 1082, "start": 1074, "text": " You can also say that CPC is a way to do metric learning where you get a lane space that can give you really good distance metrics." }, { "end": 1092, "start": 1082, "text": " So that that's one nice thing about the generative framework so CPC has this notion of predicting the future and the latens." }, { "end": 1110, "start": 1092, "text": " And so you can definitely use it as a latent space model that could be used say if you if you use it in reinforcement learning setup you could use it as a as a world model in the lane space or if you use it for audio you could predict the future audio chunks in the lane space." }, { "end": 1119, "start": 1110, "text": " You don't have a decoder so you can if you want to actually like like a here what you're predicting you should you should also train a decoder." }, { "end": 1133, "start": 1119, "text": " But in general like you can think about it as like modeling the future of in the lane space and and so that's a that's why it's more general than like the other met sex sim clear or moco which is purely in friend intended metric learning." }, { "end": 1137, "start": 1133, "text": " CPC is not just a metric learning framework." }, { "end": 1142, "start": 1137, "text": " I see so it kind of spans between metric learning and and generative models." }, { "end": 1143, "start": 1142, "text": " Yeah." }, { "end": 1152, "start": 1143, "text": " So let's talk about curl that was the contrastive unsupervised representations for reinforcement learning your first author paper that goes from this year." }, { "end": 1156, "start": 1152, "text": " Can you tell us what was the what was the idea here with this paper?" }, { "end": 1168, "start": 1156, "text": " Yeah, so the idea of this paper is a lot of free reinforcement learning from pixels is really really sample inefficient." }, { "end": 1185, "start": 1168, "text": " Particularly on these robotic control environments where you go from pixels to darks and it takes like millions of steps or 100 millions of steps to get any reasonable policy working on even a task like just reaching in." }, { "end": 1191, "start": 1185, "text": " Do the with it with like a three link creature or something like that." }, { "end": 1205, "start": 1191, "text": " So what is the way in which you can actually make it more efficient without complicating the RL algorithm without trying to do anything fancy like predicting the future like learning a world model and so forth." }, { "end": 1213, "start": 1205, "text": " That's what motivated us think about curl which is we thought okay." }, { "end": 1218, "start": 1213, "text": " There's contrast of learning is working really really well for image recognition." }, { "end": 1228, "start": 1218, "text": " It's making image recognition a lot more data efficient by trying to use these contrastive losses." }, { "end": 1233, "start": 1228, "text": " And trying to learn really good representations that allow you to be labelled efficient." }, { "end": 1244, "start": 1233, "text": " So with with something similar happen in reinforcement learning where if you start training with both the contrast of loss and the reinforcement learning losses." }, { "end": 1259, "start": 1244, "text": " Would you be able to be a lot more data efficient and therefore solid task that were earlier solvable in like 10 to 100 million time steps like you can you solve it like 100,000 time steps or 500,000 time steps like at least like maximum of a million time steps." 
}, { "end": 1283, "start": 1259, "text": " So that was the idea and I learned from some of my mistakes that we did in the CPC version do paper in terms of like you know not like kind of like going for a simpler instance discrimination style objective compared to predicting in the like you know patches and things like that." }, { "end": 1294, "start": 1283, "text": " So we I already looked at this moco paper from timing and realize it's it's a much simpler framework to think about contrast of losses than CPC." }, { "end": 1305, "start": 1294, "text": " And so it's it's like it was sort of counterintuitive at the time like my professor Peter or you thought that like you should predict the future." }, { "end": 1322, "start": 1305, "text": " Because you know reinforcement learning is like a time based thing and you would rather predict the future in the lane space in an explicit way by by taking a frame and time step T and put the future at times to T plus K in contrast to way just like CPC does." }, { "end": 1338, "start": 1322, "text": " But I was pretty dogmatic on this thing that okay this instance discrimination which is you take an image and you take another augmented view of the same image and you just say that these to have to correlate well in the lane space compared to any other image." }, { "end": 1346, "start": 1338, "text": " I just felt that this would be the simplest auxiliary loss that you can add to reinforcement learning and it should just work." }, { "end": 1360, "start": 1346, "text": " And so that that that internal led to this question of like how do you do instant discrimination in the like for reinforcement learning models that that train from pixels." }, { "end": 1376, "start": 1360, "text": " And so one thing that we started doing was all these instant discrimination models in moco and simpler they they use these data augmented views of the same image." }, { "end": 1380, "start": 1376, "text": " So in reinforcement learning no one has ever tried data augmentations." }, { "end": 1392, "start": 1380, "text": " And so we thought okay so if that works then that's like an added novelty point to the paper where the girl comes the first framework to explore data augmentations in in the reinforcement learning setup." }, { "end": 1401, "start": 1392, "text": " And so that that's how it came about like you you sort of want to have an auxiliary task that can speed up your learning significantly hopefully." }, { "end": 1420, "start": 1401, "text": " And then this doxlery last like laws like super simple not trying to predict the future and the pixels are not trying to do any auto encoding at the pixel level because those things are unlikely to scale to really large or complex environments that have very high fidelity in inputs." }, { "end": 1431, "start": 1420, "text": " But rather just trying to do these contrasts of losses in the lane space between augmented views of the same image and we started trying using random crops and that worked out really well." }, { "end": 1447, "start": 1431, "text": " We got a significant benefit from just using this contrast to objective compared to the baselines like like the both the the the naive baseline of not using any auxiliary loss as for this." }, { "end": 1457, "start": 1447, "text": " You know we had we had all these really really over engineered baselines from the reinforced line community on like things like planet dreamer." 
}, { "end": 1464, "start": 1457, "text": " So are like soft as they stochastically enacted predict and so forth." }, { "end": 1471, "start": 1464, "text": " So so we beat all those methods by white margin and and we also had really good results on Atari." }, { "end": 1488, "start": 1471, "text": " So so we pushed the so basically whatever results Google brain and got with a really high compute model based method that predicted the future like a video prediction model and did rollouts and you know and did planning with it." }, { "end": 1494, "start": 1488, "text": " We were a bit of a bit of a bit just a very lightweight model that did this auxiliary loss." }, { "end": 1516, "start": 1494, "text": " And so I mean I so that was basically the paper and I didn't really expect it to be that that big but you know the apparently people were very surprised with the results and I think now like there's a lot of follow up work on different auxiliary losses and this contrast of setting." }, { "end": 1523, "start": 1516, "text": " So that and you know a lot more complicated things right so so that's lot more scope of future work here." }, { "end": 1530, "start": 1523, "text": " So I'm looking at figure two in the curl paper that shows the architecture and it shows the two different batches." }, { "end": 1539, "start": 1530, "text": " One going to the Korean coder and one going to the key encoder and feeding into the reinforcement learning loss and the contrast of loss." }, { "end": 1547, "start": 1539, "text": " Can you help me understand what are the those two streams doing and what are the two different the different two different batches of data." }, { "end": 1564, "start": 1547, "text": " Yeah sure so typically the way you do off policy reinforcement learning is you have a replay buffer and you load observations from it and then you have a read like your actor predict laws going into the model right." }, { "end": 1573, "start": 1564, "text": " So now when you add an auxiliary loss you have another unsupervised objective in tandem with the reinforcement learning objective." }, { "end": 1585, "start": 1573, "text": " So the contrast of model basically tries to make sure that data augmented views of the same observations have are closer in the latent space." }, { "end": 1594, "start": 1585, "text": " And so you have your observations the frame stack and you created to you create two different augmented views." }, { "end": 1611, "start": 1594, "text": " One is the query and the other is the key and both of them go through the separate encoders the query goes to the query encoder and the key goes to the key encoder in practice the key encoder is just a time to the version of the query encoder basically a poly average version of it." }, { "end": 1629, "start": 1611, "text": " And you just try to make sure that this queue is just Ft the queue of OQ and K Fd the key of the queue and K are much closer in the lane space because you say that they are these are just two different augmented views they shouldn't be too far away in the latent space." }, { "end": 1647, "start": 1629, "text": " And so that particular loss that ensures that the dark product of Q and K is high relative to like other keys that you can get from other frames tax not related to this particular image that loss is called contrast to loss." 
}, { "end": 1675, "start": 1647, "text": " And so that is the contrast of the object on the other hand you can still perform reinforcement learning on the original input that you already have been sending so you you so that reinforcement learning losses is also back propagating through the query encoder and there is no back propagation through the key encoder so the contrast of learning loss is just going to affect the query encoder and the reinforcement learning loss also is going to affect the query encoder." }, { "end": 1683, "start": 1675, "text": " And the key encoder is just like a time delayed version of the query encoder so this is similar to the momentum contrast mechanism." }, { "end": 1702, "start": 1683, "text": " And and and so you just do this that's absolutely no engineering in terms of how how usually in whenever people are auxiliary losses in reinforcement learning they have to engineer the coefficient of the auxiliary loss that they are adding to get good results." }, { "end": 1720, "start": 1702, "text": " But in this paper the really lucky thing that happened was we had to do none of that which is added with the losses and together it's work really well and yeah so that's how the learning objective or that's all like this framework works." }, { "end": 1728, "start": 1720, "text": " So some of these types of losses I think I've seen have present negative and positive examples. Is that a thing here?" }, { "end": 1743, "start": 1728, "text": " Yeah, yeah, so you you you have a query and you have a key so the query sort of like an anchor and and the key is one of the positives for the anchor and all the other samples in your mini batch." }, { "end": 1750, "start": 1743, "text": " So you when you load a mini batch from your replay buffer you do this for every single mini batch you created two different views." }, { "end": 1773, "start": 1750, "text": " So for every sample the the like one of the views is the anchor the other views the positive and every other sample in your mini batch becomes a negative for you and then you can perform the contrast of loss with the negative with the negatives in a very very competition efficient fashion using dot products and so so yeah you're right." }, { "end": 1785, "start": 1773, "text": " We use every other image and mini batches and negative for the contrast objective. I see cool. Okay and can you tell us more about maybe the experience of planning the experiments and running the experiments for this." }, { "end": 1811, "start": 1785, "text": " Oh sure. I mean like we there were a few papers at the time trying to show my like improvements at the like the deep mind control benchmark which was sort of becoming popular at the time." }, { "end": 1833, "start": 1811, "text": " Like in terms of sample efficiency like this stochastically in actor critics planet dreamers of what so we picked those benchmarks because it's it's always good to show results on stuff the other people are working on and and and we just tried these six basic environments that were there in the soft" }, { "end": 1847, "start": 1833, "text": " category with auto encoder is paper and so then we got really good improvements on top of them or or just without doing much work actually and and since this was like way," }, { "end": 1854, "start": 1847, "text": " very simpler. We just start like this word publishing so that's kind of how it happened." 
}, { "end": 1865, "start": 1854, "text": " We we coded the contrast model and just tried on bunch of environments and it worked pretty well and and so then we started like iterating more like for example," }, { "end": 1873, "start": 1865, "text": " the results on one or two environments like cheetah was not that good and so we just figured out that we had to increase the bad size and it worked better." }, { "end": 1883, "start": 1873, "text": " So there were like a few engineering tricks but more of us it was an idea that just wanted to work and you said it's a band and words paper where they did." }, { "end": 1895, "start": 1883, "text": " You see B.C. for R.L. but they didn't get great results at the time. Can you can you help us understand why maybe that didn't work as well compared to coral." }, { "end": 1903, "start": 1895, "text": " I think that firstly like a band in New York's paper used the deep mine lab environments which are which are pretty different from the" }, { "end": 1909, "start": 1903, "text": " environments presented here but I would say it's more more an aspect of." }, { "end": 1929, "start": 1909, "text": " I are not like spending too much time on the reinforcement learning setup but you know like it was a paper like five or six benchmarks and so the amount of time you go spend like on just one part of it is much lower like like if you look at CPC version one," }, { "end": 1939, "start": 1929, "text": " the original paper even the results on image net are not that great and then when we just went in full depth on image net for the version two." }, { "end": 1951, "start": 1939, "text": " You got like way better results so I think it's more like that like well probably not sufficiently engineered as or sufficient time spent on it compared to like the curl paper." }, { "end": 1968, "start": 1951, "text": " So what kind of domains do you think benefit from this type of loss like it seems to me maybe the requirements are domains that have high dimensional observations and maybe some way to do meaningful data augmentation is that is that correct." }, { "end": 1969, "start": 1968, "text": " That's correct." }, { "end": 1981, "start": 1969, "text": " Yeah. So like let's say you were trying to apply this to tabular data without making any sense or would it would you say that that's just too low dimensional or like direct state data." }, { "end": 1998, "start": 1981, "text": " Yeah so I didn't I think this idea is not particularly useful for so so depends like if you are specifically talking about the exact implementation of curl you do need data augmentation for it to work because it it fundamentally centers itself on this instance is not." }, { "end": 2016, "start": 1998, "text": " So if you want to do instance discrimination you want to be able to have a way to say two things are similar to compared any other thing and that that's going to be if it's at the instance level you need a data augmentation for that." }, { "end": 2033, "start": 2016, "text": " But if you're trying to move more towards like the predicting the future style like I have current state I'm going to try to predict the future but I won't actually try to predict it I would I would use it and I would do it in a latent space with contrast losses." 
}, { "end": 2056, "start": 2033, "text": " I think there you might be able to move away from data augmentation and might be able to do it with even like low level state and but but I would still assume you need data augmentation to have like a really good performance so even in CPC just because we are trying to predict the future doesn't mean we don't use data augmentation we still use data augmentation." }, { "end": 2068, "start": 2056, "text": " And so might not be applicable to tabular data but might be applicable to continuous control like people need to try that." }, { "end": 2085, "start": 2068, "text": " And I think it has a really good potential for you know like any image based RL or image based imitation learning or even like if somebody wanted to combine reinforcement learning with language and you wanted to use." }, { "end": 2096, "start": 2085, "text": " And you want to use some contrasts of losses to learn language representations in tandem with like the reinforcement objectives I think these kind of things might be pretty useful." }, { "end": 2114, "start": 2096, "text": " So there's a few things that come up in say Atari like like the bullet problem where you might say people have criticized reconstruct reconstruction methods because they say well the reconstruction might ignore a small pixel like a bullet but the bullet is actually very important to the outcome." }, { "end": 2124, "start": 2114, "text": " So can you say anything about how a curl would perceive a very small feature that turns out to be really important?" }, { "end": 2143, "start": 2124, "text": " Yeah so let's say like you perform these random crops and you know like like between the random crops the bullet was presented both of the random crops and every other frame in your mini batch didn't have the bullet or at least some of the frames didn't have the bullet." }, { "end": 2156, "start": 2143, "text": " Then you would use the bullet as the as a feature to sort of make sure that the two random crops are encoded in the same like nearby compared to other other frames near many batch right." }, { "end": 2170, "start": 2156, "text": " So if if the bullet ends up becoming a access point for your instance of computer to make like like related to separate augment reviews then then you're definitely going to encode it." }, { "end": 2178, "start": 2170, "text": " Compared to like reconstruction methods which is sort of like focus on everything equally and might not get the things that you care about." }, { "end": 2197, "start": 2178, "text": " That said it is likely that you could still miss the bullet if you're if every single mini batch in your in your close setup has has the bullet and so that doesn't become a discriminator feature anymore for doing the instance instance discrimination objective." }, { "end": 2223, "start": 2197, "text": " So so then like you might have to try things like contrastively predicting the future where where the bullet might have moved and so you focus on like oh the right true future is where the bullet is actually in the bottom and because in the past it was at the top and so it probably moved down and all the other frames seem to be having like the bullets somewhere else and so that's not the right thing." }, { "end": 2236, "start": 2223, "text": " Because in these many times it must have moved on by this much so you start focusing on those things and you encode the aspect of the word being there and it's motion and things like that." 
}, { "end": 2242, "start": 2236, "text": " So it's definitely a more powerful and better framework for learning features that reconstruction." }, { "end": 2253, "start": 2242, "text": " Okay and then another problem we hear about sometimes is this noisy TV problem or which I maybe you could summarize this like an environment in which part of the observation is very is just no random." }, { "end": 2259, "start": 2253, "text": " And so how how would this type of loss deal with that type of randomness in the environment." }, { "end": 2268, "start": 2259, "text": " Yeah so so that's that's a really good question and that's one of the reasons why I actually started working on." }, { "end": 2282, "start": 2268, "text": " On you know this kind of like a contrast to learning methods because you it'll basically ignore anything that it can't predict right so." }, { "end": 2289, "start": 2282, "text": " In contrast to learning are only going to encode things that are useful for you to like predict the future like." }, { "end": 2306, "start": 2289, "text": " This still like your fundamental in variances in your augmentations that you're encoding in the instance setup so if noise is not something that you can use to identify to augment abuse or identify like the future given the past." }, { "end": 2314, "start": 2306, "text": " You would just ignore it and you would not encode it and so it's it's it's better from that perspective as well." }, { "end": 2339, "start": 2314, "text": " Actually if if it may help I think Yashua Benzio has has a talk kind of explaining this idea in the custom some presentation he gave it Microsoft research a few years ago on it's there on YouTube where he precisely explains why you know we should not be working on splice learning from the reconstruction sense." }, { "end": 2361, "start": 2339, "text": " Because it's trying to go to all the noise and you don't want to include the noise thanks for the tip so we've seen that that RL can have problems with generalization and with the Tari and Michico being kind of memorized and that's led to things like opening a procedural generation environments which which I see they used in the red paper but how." }, { "end": 2372, "start": 2361, "text": " How confident do you feel that the that the nice properties of these contrast contrast of representations would hold for out of distribution trajectories data that's never seen before." }, { "end": 2382, "start": 2372, "text": " So firstly there are two types of generalizations right like one is generalization across states in the same environment which is what we focus on in reinforcement." }, { "end": 2392, "start": 2382, "text": " So it's not like we are overfitting it's just that we are generalizing within a narrow subset of like like what we think generalization is." }, { "end": 2403, "start": 2392, "text": " So if your model is not generalizing whether across a state space like if it's not able to do well on an unseen image then it won't be very sub-efficient." }, { "end": 2412, "start": 2403, "text": " But the second thing which is like can it if I can I learn something and then put it on another thing and would it learn really fast there." }, { "end": 2417, "start": 2412, "text": " I think right now we don't have the right benchmarks to say anything at all." 
}, { "end": 2432, "start": 2417, "text": " I would expect a method like contrast of learning to do better than reinforcement learning if it's if it's about generalization because it definitely learns more about the environment than just just trying to optimize a reward function." }, { "end": 2449, "start": 2432, "text": " And the latter is very task specific the formula is not so and in an idea will like what I would really like is something like what happened in image net where there's a lot of trajectories from a lot of different environments and you learn a really good contrast of model on those trajectories." }, { "end": 2457, "start": 2449, "text": " And then you you have an it really good encoder and you just take that and you put an MLP on top of that and do the reinforcement learning specific for a task." }, { "end": 2470, "start": 2457, "text": " You could even do multitask learning where you say that you do a contrast of learning that the encoder was just learned purely from that and not reinforcement objective." }, { "end": 2478, "start": 2470, "text": " And you have some separate heads emerging out of these encoders and doing the separate contrast to losses." }, { "end": 2487, "start": 2478, "text": " And so that's that's that's what is like in an ideal world like that's the right like right approach in my opinion." }, { "end": 2490, "start": 2487, "text": " And if you look at the curl paper." }, { "end": 2503, "start": 2490, "text": " We actually have an ablation on like doing something like this where we call it like the detached encoder ablation where we say that you know what what if you could." }, { "end": 2516, "start": 2503, "text": " It's actually there in figure nine of the paper where we just say that the convolutional encoder is only trained with contrast loss and the only the MLPs that are on top of them." }, { "end": 2529, "start": 2516, "text": " Which which which have the policy and the value function parameters are so tiny compared to the conditional encoder and they would just do the reinforcement objective." }, { "end": 2538, "start": 2529, "text": " And this sort of work really well on like lot of environments, but was still lagging behind on the harder task like cheater run and so forth." }, { "end": 2553, "start": 2538, "text": " But in future, I'm sure this is this is going to work and once this works, we could imagine training like a you know like you let's see you have like a centralized learner across lots of different tasks." }, { "end": 2573, "start": 2553, "text": " You you're getting data from all these tasks and the the different tasks can all feed into the convolutional encoder just perform contrast of learning so there is no issue in terms of you know combining multiple losses, which is what people usually struggle with when they try to multitask." }, { "end": 2586, "start": 2573, "text": " And then they can have the heads emerging out of these out of this encoder specializing to the separate tasks based whether you want to share any parameters or not is up to you." }, { "end": 2596, "start": 2586, "text": " And and and so this is ideally where I want to like get contrast learning like you know I hope to see it like sort of moving towards in future." }, { "end": 2608, "start": 2596, "text": " And if that happens, then we could definitely generalize really well because if we learn on a lot of different tasks, we learn visual features that are common across a lot of different tasks." 
}, { "end": 2619, "start": 2608, "text": " And it's very likely that you put in a new environment. It's going to be providing you some useful features so that you can just build an MLP on top of it and you don't really have to train the encoder." }, { "end": 2631, "start": 2619, "text": " Do you want to talk about rad that was your paper reinforce and learning with augmented data. Yeah, I'm asking at all 2020. So what can you get us give us the general idea of this paper." }, { "end": 2655, "start": 2631, "text": " Oh, sure. So after we wrote girl like one of the authors in rad like he was and he was opposed to our lab. So he actually asked us like a very interesting question like hey, this thing is really cool. It works, but you know, like what if you don't what what if like how do we know like what's really helping in the" }, { "end": 2675, "start": 2655, "text": " framework is it the data augmentations or is it the contrast of loss and even if the contrast of loss is helping like what is helping you more is it the fact that the reinforcement learning objective now looks at like augmented views of the data and so that's sort of more like, you know, in supervised learning." }, { "end": 2685, "start": 2675, "text": " You're like saving your training and image net or see for our classifier. You're feeding in random crops of the image to the model, right." }, { "end": 2689, "start": 2685, "text": " And you're saying that all these different random crops should have the same output." }, { "end": 2699, "start": 2689, "text": " And in Curl, you're trying to do that in a more explicit way by say you by explicitly saying that the two augments abuse should correlate well in the lane space." }, { "end": 2725, "start": 2699, "text": " So you're you're you're explicitly ensuring this consistency constraints using this contrast objective, but just sort of trying to feed into augmented views like multiple augmented views of an input to the reinforcement learning model having no explicit consistency objective, but implicitly trying to ensure it's going to happen because just like how you predict the same output for multiple inputs." }, { "end": 2733, "start": 2725, "text": " You would do the same thing in the reinforcement setup where you predict the same value functional policy for the multiple different crop versions of the input." }, { "end": 2743, "start": 2733, "text": " And so so one of our core is asked us to try that and we thought, okay, it's totally worth trying and it would be good to know." }, { "end": 2750, "start": 2743, "text": " And we tried it and it was a surprise it actually worked like even better than Curl once a lot of these different deep mind control environments." }, { "end": 2766, "start": 2750, "text": " So so that ended up becoming rat. It was like, you know, much simpler, much more radical. If you may put it that way, like, you know, in terms of being the simplest possible thing that you could try with data augmentations and reinforcement learning." }, { "end": 2783, "start": 2766, "text": " And it was also that nobody had really tried this even though Curl was also the first to, you know, combine data logs in RL. It did it in a more auxiliary loss fashion that that people were already like sort of used to in RL." }, { "end": 2798, "start": 2783, "text": " So nobody was used to like just having pure mod of the RL just input data or change no algorithmic change or no extra loss function working really well and, you know, beating every other method in the in the field." 
}, { "end": 2805, "start": 2798, "text": " So so and and and to a surprise we found it also worked on these procedural generation environments about me." }, { "end": 2819, "start": 2805, "text": " So so it wasn't just specific to deep mind control. And people also found it to work on Atari like other parallel paper from NYU on this." }, { "end": 2834, "start": 2819, "text": " So so so yeah, it was it was a very surprising result. And and like I like discuss the rat paper doesn't mean that Curl is not useful or something like that is like a misconception among a lot of people." }, { "end": 2844, "start": 2834, "text": " Like, oh, does this mean we just use rad because like I said, you know, Curl has a more general framework and you can think about it using it even without any reward functions available." }, { "end": 2852, "start": 2844, "text": " Whereas in a rad like if you don't get rewards like you're not going to be able to like have any objective at the end, right?" }, { "end": 2864, "start": 2852, "text": " Like you would have you you can feed augmented views as your inputs, but if you don't have the the loss function coming in and making your encoder learn with like very dense rewards. This is not going to work." }, { "end": 2879, "start": 2864, "text": " Whereas something like Curl you can just have the auxiliary objective and you can sort of like like use it with sparse reward task or you can use it as a way to list learn representations from data without any reward function." }, { "end": 2884, "start": 2879, "text": " And find unit when you have any start getting reward functions and so forth." }, { "end": 2898, "start": 2884, "text": " Okay, so do you think of the specific augmentations in rad as somehow fundamental or or they just some starting set like should we expect a growing set of augmentations." }, { "end": 2909, "start": 2898, "text": " Yeah, I think you should expect a growing set of augmentations because what we start with is what happened in image recognition. Like if you look at Alex net." }, { "end": 2918, "start": 2909, "text": " It's very famous for like there is like if you read the paper like you know for first page of the paper or something like that." }, { "end": 2929, "start": 2918, "text": " It like they mentioned how they basically take resize the images 256 by 26 and then take 224 by 224 crops of that random crops." }, { "end": 2935, "start": 2929, "text": " And that increases the size of the data set by 2000 X or something like that." }, { "end": 2941, "start": 2935, "text": " So so you know random crops is something very, very fundamental to image recognition." }, { "end": 2949, "start": 2941, "text": " Or and it's also used in training, generative models like like you know making grants or if it less things like that." }, { "end": 2961, "start": 2949, "text": " So it was the simplest thing we could start with that said like you know contrast of learning uses a lot more augmentations like colored based augmentations." }, { "end": 2973, "start": 2961, "text": " Or random grayscale and things like that and also a lot more distortions like rotation, sharing, translations." }, { "end": 2982, "start": 2973, "text": " These are things that we haven't really pushed much on like yet and like you know I think I think that's more more to come." }, { "end": 2991, "start": 2982, "text": " Like random translate people already figured out random translate as an augmentation works better than random crop and both focal and rad." 
}, { "end": 2994, "start": 2991, "text": " So so that that's like another upgrade." }, { "end": 3000, "start": 2994, "text": " Those are very similar in spirit and depending on environment you might need new things." }, { "end": 3005, "start": 3000, "text": " For example think about robotics like manipulation task like not local motion." }, { "end": 3016, "start": 3005, "text": " If you keep doing random crops you might miss the object in the scene or you might miss the arm and that it might be hard for doing the task." }, { "end": 3025, "start": 3016, "text": " But what if you have two different cameras or three or four different cameras just like you know how Tesla car has eight different cameras right so." }, { "end": 3035, "start": 3025, "text": " Then you could sort of like not feed in few camera images and just feed in certain camera images so you can think of that think of that as like random cropping across cameras." }, { "end": 3043, "start": 3035, "text": " And that could force the model to learn to be robust to like not having certain views and try and extrapolate it implicitly." }, { "end": 3049, "start": 3043, "text": " And I think that that that kind of an augmentation could work really well in robotic manipulation." }, { "end": 3057, "start": 3049, "text": " And also a few start performing tasks that are more long horizon in nature like 20 or 30 different frames." }, { "end": 3061, "start": 3057, "text": " You want to look at it in in in context together." }, { "end": 3070, "start": 3061, "text": " Then you might want to drop few frames across time as well. So right now we're doing spatial running crops. We are dropping few pixels." }, { "end": 3079, "start": 3070, "text": " But what if you just want to like drop a few frames instead of feeding even though you're your policy can look at like 20 frames in the past." }, { "end": 3083, "start": 3079, "text": " Your calm net might might just be fed like 10 or 15 frames." }, { "end": 3094, "start": 3083, "text": " And you would miss 5 or 10 frames but your model might learn to be robust to it. So think about this is some kind of like an input dropout." }, { "end": 3106, "start": 3094, "text": " Or you perform like very different random crops across time just like how things have worked out well in video recognition where in video recognition people do both spatial and temporal random crops." }, { "end": 3108, "start": 3106, "text": " So." }, { "end": 3117, "start": 3108, "text": " So that's a kind of augmentation that I'm hoping really works out in future. We have we have a few projects in our lab like people working on these things." }, { "end": 3130, "start": 3117, "text": " It hasn't really panned out yet and and like is it multi views like something that is like I'm hearing a lot of people are trying trying things on it with grad and girl and seeing some good results." }, { "end": 3141, "start": 3130, "text": " So that might really lead to a lot of you know data efficiency gains and robotic grasping or in general robotic manipulation." }, { "end": 3152, "start": 3141, "text": " Part of the promise of deep learning is avoiding handcrafted features. I guess we're at this point we're kind of hand designing some of these augmentations although it seems a lot simpler than designing features." }, { "end": 3155, "start": 3152, "text": " But do you think that eventually." }, { "end": 3159, "start": 3155, "text": " The augmentations would be automatically designed somehow." 
}, { "end": 3174, "start": 3159, "text": " So, so let me let me first sort of like give a very very balanced answer to this number one like what what you said is indeed true or like a more broader sense that deep learning sort of goes away from handcrafted features." }, { "end": 3187, "start": 3174, "text": " I really think that data augmentations with some domain knowledge has been something that people have always been using deep learning like it it it's probably something that people." }, { "end": 3205, "start": 3187, "text": " Focus on our acknowledge a lot more now than in the past but it's always been there was a huge part of Alex net being great and as just like a data point that was never published anywhere but sort of like an interesting data point." }, { "end": 3221, "start": 3205, "text": " If you take a resident or any any kind of state of the image classifier and you go to the code base and you just comment out the random crop you know just just just say that you only take the center crops or you resize the image to a property shape." }, { "end": 3232, "start": 3221, "text": " Your accuracy would probably at least 78% and that's that's a huge you have a classifier that 76% top on accuracy." }, { "end": 3245, "start": 3232, "text": " The accuracy would just drop like 68% or something like that so doesn't that already tell you like how image recognition is just like one of the generally given us like one of the success stories of deep learning." }, { "end": 3249, "start": 3245, "text": " Or whatever started the whole revolution." }, { "end": 3260, "start": 3249, "text": " Dependent so much on like domain knowledge which is the fact that you know in an image you if you move the camera around which is what is equal to the simulating a random crop and capture the image." }, { "end": 3266, "start": 3260, "text": " The object person in the image the object you identify as an image shouldn't really change." }, { "end": 3273, "start": 3266, "text": " And the other augmentation which we all use is this flipping augmentation which is you flip an image from right to left or left right." }, { "end": 3288, "start": 3273, "text": " You should see the same same class so that that that's another domain knowledge thing which which people have been using ever since like cruisetsk used it and even the old like Lenet classifiers that would train on amnesty use data augmentation." }, { "end": 3292, "start": 3288, "text": " Like like translating the digits and so forth." }, { "end": 3310, "start": 3292, "text": " And in NLP people have been using it with like the back translation idea which is like you you take a sentence you translate it to another language and then you come back to your original language and you get like the same sentence with a different way in which it's being constructed with the same meaning." }, { "end": 3320, "start": 3310, "text": " And people have used this as an augmentation for training translation models and getting a lot of gain significant gains that this technique is even used in production." }, { "end": 3335, "start": 3320, "text": " So yeah so I would say that you know all the success stories of deep learning audio NLP vision have always been using domain knowledge and it's not a bad thing." }, { "end": 3348, "start": 3335, "text": " And do I see this becoming more like learned rather than engineered it's hard to say in fact like learned augmentation so far not really worked well." 
}, { "end": 3358, "start": 3348, "text": " Like coaxially has to speak about auto augmentation which is sort of using the equivalent of neural architecture search for augmentation." }, { "end": 3368, "start": 3358, "text": " Like can we just automatically you construct over cavity space and then automatically learn the right augmentation or sequence of augmentations." }, { "end": 3384, "start": 3368, "text": " Like I said like you know just like how NASA sort of fundamentally built on top of human engineered ops like you have the three by three con you have the risk skip connection and you have bash norm you have layered on and then you figure out how to combine all these things." }, { "end": 3398, "start": 3384, "text": " Similarly augment like auto augmentation is also built on top of a similar idea so unless you have the vocabulary of like random crop or and translate stuff like color distortions and things like that." }, { "end": 3405, "start": 3398, "text": " You're not going to be able to do something fundamentally new out of these auto ml style approaches." }, { "end": 3417, "start": 3405, "text": " So yeah I think it's going to be I think it's going to be a big part of a future systems and we definitely get a lot of benefit from doing augmentation tricks." }, { "end": 3434, "start": 3417, "text": " So just like like think about I think we're practically as any like like robotics is a very practical field and it's not going to be academic in definitely like you know a lot of startups already beginning to focus on just getting robust to work in industry and not focusing too much on writing a paper." }, { "end": 3443, "start": 3434, "text": " And for them it's like okay I train I train a robot in my company but then robot has to go and work in a factory." }, { "end": 3452, "start": 3443, "text": " So the lighting conditions will be different the camera might be different or you know like the object scene might be very different." }, { "end": 3462, "start": 3452, "text": " So obviously like you're not going to be able to train at test time so you shouldn't be able to do all these augmentations that you should fundamentally prepare for while deploying." }, { "end": 3471, "start": 3462, "text": " Just by you're training the robot in your factory right so you would randomize a lot of lighting conditions you would randomize the objects." }, { "end": 3481, "start": 3471, "text": " There is this idea and robotics called domain randomization where you train a lot of different variants and simulations so that it can work in the real world." }, { "end": 3484, "start": 3481, "text": " So the these ideas are always been there." }, { "end": 3500, "start": 3484, "text": " It's just that curl and rad are trying to explicitly like increase the importance of focusing on that more than the algorithms and showing the power that just the simple changes can give you a massive improvement in performance." }, { "end": 3506, "start": 3500, "text": " So does rad end up using a lot more mini batches than standard model for the algorithm." }, { "end": 3515, "start": 3506, "text": " Not a lot like it uses a bad size of 128 which is what other people are training with." }, { "end": 3526, "start": 3515, "text": " But it effectively sees a lot more data a lot diverse data than other methods because it's looking at these augmented views of the images." 
}, { "end": 3546, "start": 3526, "text": " So it's sort of like kind of like you know like how Alex net basically increases size of the data set by 2000 x by doing the augmentations rad is increasing the size of the data set implicitly by never providing like one version of the same thing with a lot of multiple different versions." }, { "end": 3559, "start": 3546, "text": " In terms of computation we are not increasing the amount of computation the model goes through it's the same amount of computation it's just that it's seeing a lot of diverse inputs each time it does any computation so it's way faster." }, { "end": 3563, "start": 3559, "text": " Okay, that was surprising answer to me. I didn't expect that that's really cool." }, { "end": 3571, "start": 3563, "text": " Okay, so let's move on to sunrise. You got you got so many great papers here to talk about and so sunrise." }, { "end": 3576, "start": 3571, "text": " This is a simple unified framework for ensemble learning and deeper in force and learning." }, { "end": 3580, "start": 3576, "text": " Can you tell us about this paper? What is the general idea here?" }, { "end": 3594, "start": 3580, "text": " Sure. So firstly this paper I can't take too much credit for us because like you know curl and drag I can say I was primarily leading these efforts but" }, { "end": 3604, "start": 3594, "text": " sunrise was a lot of mostly done by by Kim and the first author and my advisor Peter Abil actually like was sort of his ideas." }, { "end": 3618, "start": 3604, "text": " So basically the ideas like like if you want to just like say curl and drag focus on the data augmentation and those aspects but but you can also improve reinforcement anyway focusing on the algorithm aspect." }, { "end": 3632, "start": 3618, "text": " And in the algorithm aspect value based methods are really finicky and hard to train but but they are often the best methods in terms of sample efficiency compared to policy gradient methods and in value based methods." }, { "end": 3643, "start": 3632, "text": " You you typically have this thing called a target net that provides you the target for the Bellman equation and you back propagate the Bellman error." }, { "end": 3664, "start": 3643, "text": " So usually the target net is finicky because the updates change a lot and therefore you know the the current network is changing much faster than the target net and might not might not actually be a you know like a great great like like most stable thing that you can work with." }, { "end": 3679, "start": 3664, "text": " So the idea here is like what if ensemble models like if you have multiple target nets or multiple value functions and you could sort of use the statistics across them to stabilize your updates." }, { "end": 3693, "start": 3679, "text": " So let's say I had I had 10 different methods to pick my target from and I could take the mean and the standard deviation across these different models." }, { "end": 3708, "start": 3693, "text": " And use that to sort of provide an uncertainty estimate from a target instead of just using a point estimate and thereby being a lot more you know stable and fast training." }, { "end": 3727, "start": 3708, "text": " So that's the idea and sunrise and it's heavily inspired by ospans work in ospans work on bootstrapped EQN except that it's it's using the mean and the and the variance of these estimates more or explicitly and able to like do the error propagation much better." 
}, { "end": 3746, "start": 3727, "text": " And so that's really all I you know I can say about it because it was a lot more work done by came in on this on this setup but one thing I would say one thing that to me was really surprising about the paper was was it's really good results on state based" }, { "end": 3766, "start": 3746, "text": " or like like where the state of the art results on state based or like where you're not learning from pixels but the actual state were typically from model based methods that try to predict the future state and then you know used it to simulate some fake or lots or whatever." }, { "end": 3795, "start": 3766, "text": " But this one is a pure model free method with ensembles has sort of like caught up with those results from model based methods and and to me that was really surprising that sort of like the curl story again for states because in curl we saw that happening on images and with sunrise we saw that like os a simple idea with existing model free or nothing else can can can be very competitive or even out perform the state." }, { "end": 3805, "start": 3795, "text": " The art model based methods awesome let's let's talk about the course that you co created in co talk deep on just unsupervised learning." }, { "end": 3813, "start": 3805, "text": " Sure Berkeley course CS294158 so can you tell us a bit about this course and the contents." }, { "end": 3840, "start": 3813, "text": " So yeah in in in in 20 so we we started the first version of the class in 2019 I think so like basically in 2018 like after so I interned open year in 2018 and in the summer and I was working on reinforcement learning at the time." }, { "end": 3859, "start": 3840, "text": " But that was the same time like Ilya Sutskiver and Alekratford to the GPT paper and we we saw that happened and I saw how it basically changed the NLP field and in four months like the bird paper came out and and basically was a revolution and." }, { "end": 3879, "start": 3859, "text": " So so we saw like we saw a right in front of our eyes that how like you know this unsupervised learning was really picking off and then the when I had gone to ICML that summer to present my like like a paper on like my research at Berkeley I the CPC paper come out during the conference so it was sort of like pretty." }, { "end": 3896, "start": 3879, "text": " The writing was on the wall that you know we should be working on super best learning and so on the other hand our lab was sort of like a reinforcement learning lab like Peter appeals more known for reinforcement learning so." }, { "end": 3925, "start": 3896, "text": " We sort of thought like okay the best way to learn is to teach it so let's sort of start a class and then we can learn the content ourselves as to just like teach it to other people and and thereby by learning the content well we can do more research on it so that's how it started in 2019 like there was like a couple of people in our lab who were also very interested in generative models like pixel C and NSBA is flow models and gans and so forth so." }, { "end": 3954, "start": 3925, "text": " And I was very interested in like representation learning and and like how it can be applied a lot of different problems so so we kind of put everything together and like and and and thought of first version in like 2019 spring and then we follow it up this year with the second version and 20 spring so that's really the story of the class and it was probably the one of the first few." 
}, { "end": 3967, "start": 3954, "text": " Classes on this topic very focused on like all the latest and greatest in the field because it's it's kind of like a moving feed right so for example when we first thought the class." }, { "end": 3983, "start": 3967, "text": " The state of the art on image representation learning was CPC version one which are like 48% accuracy but now this year we thought the class like you know was some moco version 2 had come out and the number of." }, { "end": 4011, "start": 3983, "text": " Like you know already like a close to 80% so that's how fast the field was changing and like similarly like when we designed the class in 2019 we only GPT and bird existed at the time but now you know how it is right and and I'll be like it's basically a flurry of bird papers now and GPT to GPT 3 and so forth so." }, { "end": 4024, "start": 4011, "text": " Similarly like you know curl didn't curl wasn't there when we thought the class like when we started the class this semester and I'm sure next year there will be a lot of people in the topic and reinforcement learning as well so." }, { "end": 4031, "start": 4024, "text": " It's a very fast moving topic and we just thought there'll be very interesting to have like a." }, { "end": 4038, "start": 4031, "text": " Class that is move at the same time helps us in instructors learn as well as the students learn we learn together." }, { "end": 4043, "start": 4038, "text": " So do you feel like this is early days still for unsupervised learning." }, { "end": 4046, "start": 4043, "text": " Yeah definitely I think so." }, { "end": 4066, "start": 4046, "text": " I think we I think it's way better than a couple years ago that's for sure like I think yeah it it it it definitely has got a very compute compute intensive and so there are some principles we figure out that to do to be good at unsupervised learning." }, { "end": 4080, "start": 4066, "text": " We do you really need a lot of computation power and you need to train really large models and you need to train on really large data sets so there are some fundamental principles that are just objectively true now." }, { "end": 4089, "start": 4080, "text": " Based on like what's happening a language and what's happening in images and and that kind of volume of training that can scale." }, { "end": 4117, "start": 4089, "text": " Hasn't been done in reinforcement learning yet or imitation learning and so I think that would be pretty interesting to see like most probably we not able to do it because we don't have the benchmarks and might might be best done in an industry or setting like autopilot or like a robotics company but I think I think that kind of a scale shown for some control tasks or some navigation tasks we really interesting." }, { "end": 4143, "start": 4117, "text": " But that said like I do think that like the right objective has hasn't been nailed like you know it's still sort of in NLP it's fundamentally centered on language modeling or mass language modeling in vision it's fundamentally centered on contrast of learning between augmented views with the air and and and reinforcement learning it seems like contrast learning or." }, { "end": 4155, "start": 4143, "text": " And using some kind of augmentation seems to work really well so so it seems a bit like you know each each each specific topic has its own." 
}, { "end": 4170, "start": 4155, "text": " Formula that's going for it and whether we will ever be able to unify everything into one framework is is is like an open question and it's still early days to sort of give a very definitive answer on that." }, { "end": 4181, "start": 4170, "text": " And it may also not be necessary I'm not I'm not like like you know a big fan of like saying over everything should be the same." }, { "end": 4188, "start": 4181, "text": " Class of models same objective and it should just work on any day I think that university things really cute but." }, { "end": 4208, "start": 4188, "text": " So it's not like a must have you it's fine if we can engineer really good AI systems that work even if they're very specific to the domain that you're training on but but in terms of yeah like what objectives to use like how to kind of improve further." }, { "end": 4222, "start": 4208, "text": " Like images we haven't still scale beyond image net convincingly well like NLP open ash kind of to take into a totally different level by training it on like extremely large data sets." }, { "end": 4237, "start": 4222, "text": " But in vision it still feels like we are still got the million images are 100 million images regime and kind of truly scaling it to like billions or trillions of images on the internet like training it on videos training on entire YouTube." }, { "end": 4248, "start": 4237, "text": " I think these are still like out of reach of the moment and the reason is simply because of the computation power like for training a good image super image unsupervised model." }, { "end": 4256, "start": 4248, "text": " You need to train for at least like one week or one or two weeks we have really large model and you're training it for really long like thousands of epochs." }, { "end": 4266, "start": 4256, "text": " But that's like for like a million lab million million image data set so what if you really want to go to billion images and that's like thousand X movie computation right." }, { "end": 4282, "start": 4266, "text": " So imagine having to wait for a thousand weeks this is not feasible and so you will have to like sort of you might be able to just do like a one pass over the entire data set." }, { "end": 4295, "start": 4282, "text": " To train in a similar time frame and you would need a lot more course lot lot bigger parts a lot more GPUs and so I do think like it's not going to be a lot more." }, { "end": 4307, "start": 4295, "text": " Like it's sort of like a topic that's more on like more relevant or more doable for large industry company than academia." }, { "end": 4323, "start": 4307, "text": " And the best way academia can kind of work on unsupervised learnings to try to do it and in the reinforcement learning setups because it's a lot more computation cheaper and like not something that industry is currently super focused on in terms of getting the best numbers." }, { "end": 4341, "start": 4323, "text": " And so you get to have a state of the art and a lot of visibility for your work so that was another motivating factor to do curl and yeah I hope more people tried out in other environments like navigation manipulation or even like text and reinforcement learning things." }, { "end": 4350, "start": 4341, "text": " So this pattern of doing massive unsupervised learning or pre training and then and then fine tuning for certain tasks like we see with bird." 
}, { "end": 4364, "start": 4350, "text": " I wonder does that point to a future where very few organizations have the capacity to do the massive pre training and then on the rest of us are basically doing fine tuning on the result." }, { "end": 4367, "start": 4364, "text": " Yeah, I think that's very likely." }, { "end": 4386, "start": 4367, "text": " And I kind of think that's how it should be where like a lot of people may not have the compute part to do the pre training but if if industries keep releasing these pre train checkpoints," }, { "end": 4409, "start": 4386, "text": " then we might be able to use them in academic settings like take a bird check point and do something cool with it or take a CPC or mocha or similar check point and try to like use it in a downstream task that is of academic interest." }, { "end": 4435, "start": 4409, "text": " So the only thing so far is in RL or imitation learning it's still not pan-dod in terms of like taking something that's trained on image net or YouTube and putting it on an RL environment and that's kind of like a sad thing right like ideally it should work like if we are really good visual features." }, { "end": 4451, "start": 4435, "text": " You should be able to like work on a diary or you should be able to work on deep my lab or deep mind control and so forth but somehow doesn't seem to work that great doesn't seem to give us much benefit from as like training from scratch on these environments." }, { "end": 4480, "start": 4451, "text": " So but once that kind of works like once we move to like much harder environments which are really time consuming to render or like once we are in like the real world and any kind of reinforcement learning is going to be super compute intense like super data intensive and walk lock intensive using a really good pre train checkpoint provided by industry on as your backbone architecture and woodstrapping from it and fine tuning it would be the." }, { "end": 4483, "start": 4480, "text": " Way to go I think yeah." }, { "end": 4490, "start": 4483, "text": " So any hints on on what types of things you plan to work on next you plan on doing fall and work on the type of things you've been doing." }, { "end": 4508, "start": 4490, "text": " Yeah yeah so there are a lot of follow like there are a lot of follow projects on on on curl and rad that I'm not directly working on but what's sort of happening in our lab and some projects I'm just cannot advising on them." }, { "end": 4520, "start": 4508, "text": " And currently I'm more focused on architectures deep learning architectures for like vision so so that's another thing that I'm very excited about like I hope it has similar." }, { "end": 4528, "start": 4520, "text": " Like impact as curled which is we are trying to do things like putting putting self attention and reinforcement learning." }, { "end": 4537, "start": 4528, "text": " How do you how do you do that like what is there what what are the right deep learning architectures for vision and RL things like that." }, { "end": 4555, "start": 4537, "text": " So so we have we have a few projects in our lab on those things also you know trying to use domain knowledge and other forms like what if you wanted to use optical flow in reinforcement learning like like you want to model motion." }, { "end": 4563, "start": 4555, "text": " Want to use like what want to solve like more temporal task so there are some people also trying like temporal versions of." 
}, { "end": 4579, "start": 4563, "text": " Curl so so there are a lot of projects on those things and also like trying to do something in like driving simulators car like Carla you know because they are more relevant to the real world not that it's very real world either but." }, { "end": 4604, "start": 4579, "text": " There's not much you can do sitting at home during cold and like do do something real world in reinforcement learning right so it's it's all going to be simulation so why not be why not it be on like some more more realistic one and so so that those are some interesting projects and I also continue to keep working on like deep learning like." }, { "end": 4612, "start": 4604, "text": " Deep learning architectures for computer vision and so forth so it's like a multi prong lot lot of different projects." }, { "end": 4616, "start": 4612, "text": " Sounds like you have a road map in mind a lot of road map." }, { "end": 4632, "start": 4616, "text": " Yeah I'm not sure if it's to long term but these are I mean I just think it'd be pretty interesting to see if we can sort of move away from the primitive architectures that have been." }, { "end": 4660, "start": 4632, "text": " So just like how when we did the data Augsburg like like people people who were not taking Augs seriously in our like are now taking it seriously I think architecture design should also be kind of similar and so I think we'll see a lot more people on that not not just from us but other people as well so." }, { "end": 4678, "start": 4660, "text": " And like I said a bit earlier like when you asked me about the research focus for like I think you're really useful to make deep are more like deep learning like sort of more centered on engineering that and less always on new ideas because." }, { "end": 4690, "start": 4678, "text": " You notice like you know there are like hundreds of papers and deeper enforcement learning that have proposed new ideas new new learning algorithms new value function new policies or exploration methods." }, { "end": 4699, "start": 4690, "text": " But somehow it's been really hard to keep track of the progress because they all do it on their own special set of environments and." }, { "end": 4728, "start": 4699, "text": " It's really hard to track the like like what's really helping you right and on the other hand something very simple like curl or rad or sunrise these are applied on a standardized benchmarks but are heavily centered on just like one single idea and and and very transferable often across multiple setups both in general and specific ways so I'm kind of want to focus more on those things because." }, { "end": 4741, "start": 4728, "text": " That's kind of how deep learning has generally progress in like you know over the years and it's very likely that that's how reinforcement learning or imitation learning will also progress over the years." }, { "end": 4755, "start": 4741, "text": " Yeah I have to say like many of these papers that you were involved in or first authored or in this kind of extreme quadrant of extremely simple and extremely effective in improving performance which is probably a pretty great place to be." 
}, { "end": 4780, "start": 4755, "text": " Yeah thanks a lot for your kind words and and you know like I'm very inspired by the work that open it is almost like all the time in terms of pushing the boundaries on what what simplicity and engineering can buy or complexity and new ideas and you know like the if you look at the results and reinforcement learning it's crazy." }, { "end": 4809, "start": 4780, "text": " Like the results on the dactyl hand and the Rubik's cube just by pushing the simple idea of domain randomization which which is also like an inspiration for data augmentations by the way because in domain randomization you just train on like various different simulators and various different image renderings and so forth and I think it's all like fundamentally the same idea." }, { "end": 4830, "start": 4809, "text": " So I think simplicity can always give you a lot that there is some benefit from doing that or more more complex or so so that there are like two different things one is you want to like enjoy your research and sometimes doing new things gives you more enjoyment." }, { "end": 4859, "start": 4830, "text": " But the other things you also want to make sure that you don't me and or too much into working on non problems like things that don't actually that don't actually pose you problems but might actually be problems you invented just to have fun you know and and and so I think I tend to focus more on like making sure it's not too much on the fun." }, { "end": 4877, "start": 4859, "text": " The fun side it like and not like under the stick but kind of really interesting papers was is like very useful but may not be super novel I think I would rather lean towards doing the useful but not super now." }, { "end": 4894, "start": 4877, "text": " Besides the things that you mentioned already in this interview are there things happening in our lately that you find really interesting other things let's see so I mean I mean John Schumann's papers are usually very interesting to me." }, { "end": 4906, "start": 4894, "text": " So he I think he pushes on the fundamental algorithm side and recently he released a paper called basic policy gradient was pretty interesting like the image augmentation is all you need from NYU was very similar to that." }, { "end": 4918, "start": 4906, "text": " She was pretty much the same except they had like more they also had augmentations to the target nets in the value function and and focus purely on value based methods." }, { "end": 4935, "start": 4918, "text": " So I think I think they also are doing great work and and then there is this paper from Montreal which was a follow up to curl called momentum predictive representations which uses another idea on unsupervised learning called BYOL would strap your own latent." }, { "end": 4949, "start": 4935, "text": " From from deep mine and applies it to reinforcement learning and and and they do this temporal predictions which give them lot more gains over curl and and they really improved the data efficiency and Atari even further." }, { "end": 4964, "start": 4949, "text": " So that those are pretty like pretty much in the same vein as curl and rad but being done by other people and other other groups and it's always like you know satisfying when when." }, { "end": 4978, "start": 4964, "text": " Multiple people are thinking of the same thing and pushing pushing the numbers as hard and like yeah so that that that there's also like you know." 
}, { "end": 4990, "start": 4978, "text": " Like a lot of interesting work done and like like generally like in robot learning and like companies which don't really come out and like academic papers but." }, { "end": 5010, "start": 4990, "text": " In general I'm I'm a fan of like what's happening in industry as well like how people do how do how people basically are pushing on pick and place and kind of like you know like pushing on replacing humans in these logistics and factories and like like for example." }, { "end": 5024, "start": 5010, "text": " Even if we make like 10x progress in grasping and pick and place we might be able to have like single data every instead of today delivery and Amazon right so Amazon Prime so." }, { "end": 5033, "start": 5024, "text": " Like those are those are high impact in terms of economic value and like like but may not necessarily be the most like." }, { "end": 5054, "start": 5033, "text": " It's not like you take the latest soda or the algorithm and put it on put it on these robots and hope that that happens it's going to be more of these domain specific engineering a lot of like you know augmentations good object detection and so for like it's it's going to be more of engineering than research but." }, { "end": 5064, "start": 5054, "text": " But you know it definitely uses a lot of the ideas we publish in the in the field and and tries to get it into practice and make it a reality." }, { "end": 5069, "start": 5064, "text": " And I think that those those kind of things without a lot of impact." }, { "end": 5082, "start": 5069, "text": " Arvin Trinivas this has been fascinating very enlightening for me and I'm sure for our audience so I want to thank you for sharing your your insight and your time with all of us today and we look forward to watching and working the future thanks so much Arvin." }, { "end": 5085, "start": 5082, "text": " Thank you Robyn thanks for having me." }, { "end": 5096, "start": 5091, "text": " Notes and links for this episode are at talkrl.com" }, { "end": 5100, "start": 5096, "text": " If you like this show I need your support you can help in a few ways." }, { "end": 5113, "start": 5100, "text": " Subscribe on your favorite podcast platform subscriptions make a big difference." }, { "end": 5116, "start": 5113, "text": " 3. Give us a five star rating on Apple podcasts." }, { "end": 5131, "start": 5116, "text": " If you don't think we deserve five stars let us know on Twitter what we could do better." } ]
Taylor Killian
Taylor Killian on the latest in RL for Health, including Hidden Parameter MDPs, Mimic III and Sepsis, Counterfactually Guided Policy Transfer and lots more!
https://media.transistor…b5d.mp3?src=site
This is TalkRL Podcast. All reinforcement learning, all the time. Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan. Taylor Killian is a PhD student at the University of Toronto and the Vector Institute. He works as an intern at Google Brain and, in his own words, is an aspiring researcher slash scientist. Taylor Killian, thanks so much for joining us today. Hey, I'm really excited to have the opportunity to talk and share what I'm working on and also how I've gotten to where I am. Super excited to chat with you. So how do you describe your research interests? It's a great question. It's been under constant evolution, but in a directed fashion. I had the opportunity quite early in my adult life to serve as a religious representative for my church, where I spent a full two years talking with people about my beliefs and sharing what they mean to me. I was always fascinated by how people received that information and what they did with it. Some people would act in ways I felt were counter to what they professed to be their beliefs, versus those who acted in line with their beliefs. There's a lot of uncertainty in people's decision making. After finishing that time, I returned to my undergraduate institution and thought I wanted to do behavioral science, but in an analytical way, because I was very interested in math and felt like I was good at it. Probably fortunately for me, they didn't have a behavioral science track in the psychology department at my university, so I was forced at the time to put decision making on the back burner as I progressed through my undergrad. But after graduating, getting a job, and working as a computational scientist, that question kept coming back: how do we make decisions in situations where there is a high level of uncertainty, or where we might have some prior context? A lot of those questions in my own mind came from a neuroscience or behavioral science background, but I'm quite analytical in my thinking, and given my limited exposure to the world, I thought that had to live within applied math. What is there within applied math to study that has to do with decision making? I was fortunate to get the opportunity to pursue a master's degree at Harvard while I was working. I approached a faculty member and said, hey, I'm really interested in applied math, but about decision making, and she said, oh, that sounds like reinforcement learning, and I have some projects along those lines; are you interested in healthcare? My father is a doctor, and I had sworn never to be a medical doctor in my life, just given the stress that I observed in his life; it didn't seem like that was the path for me. But I said, yeah, I'm interested in healthcare. I think it's a valuable area to be pursuing research solutions to some of the major problems, and that became the introduction to where I am now as a researcher who is trying to develop stable and robust methods within reinforcement learning, as motivated by, or applied to, healthcare problems. So all of that was, I think, preamble to a quick answer. I apologize for editorializing a little bit, but the quick answer about what my particular research interests are is: within the construct of decision making under uncertainty, are there ways that we can develop robust, reliable, or generalizable approaches within reinforcement learning
In highly uncertain or partially observed settings awesome. So this is your episode. I encourage you to editorialize all you want. That's totally bonus for us as listeners. We want to know what you're thinking. This is great. From looking at your Google scholar page, you you have some some work in physics related to fluid dynamics and then machine learning with radio sensors and some core ML stuff like capsule networks. So did you have like different chapters in the past or you focused on these different things and is it current direction the future for you. I really struggle with the word chapters because that kind of there's a connotation that the doors closed in some of these circumstances, the doors definitely closed like I'm probably never going to return to working in experimental fluid dynamics and a lot of which I did during my undergraduate as a research in research assistant in the supplied in computational math program that I was designed for myself. I had the fortune of working with Ted Truska who's now Utah State University who pioneered really fascinating imaging techniques for experimental fluid dynamics and he needed people to help develop computational models. And I had the interest but also ability to join him and that prepared me in a way to take the job that I was ultimately offered at MIT Lincoln Laboratory, which is where I did more radar processing because that is the heritage of Lincoln laboratories that it was one of the early inventors and adopters of radio magnetic frequency for sensing purposes. So I think that the idea of the process is that they have a heritage in the that comes from the MIT radiation laboratory that spun out shortly after World War II into what is now known as Lincoln Laboratory. I was not fully aware of the type of work that I'd be getting myself into coming into electrical engineering predominant business, but it was great and I learned a lot and that stage of my career really taught me a lot about what I was interested in and what I wanted to do. I was fortunate that I was given the opportunity as part of my employment to return to school to really flesh out what those research and professional interests are. And after I finished my degree, I needed to return to work full time to fulfill my obligations to them. And that's where we kind of were forced to do more low hanging fruit from a government perspective is that they didn't quite have this appetite for sequential decision making in the way that I was proposing. And so we were looking at model robustness for vision type applications and that's where the capsule network work came from. Okay, so looking at your health-related papers, some of the ones that I looked at, I get the sense that you really dig really deep into some diverse angles on this topic and machine learning for health. Can you tell us how you think about your research roadmap going forward? Do you have a very specific path in mind or are you doing a lot of exploration? I think that the way that I at least have diagnosed my inability to get on a very specific path is that there's too many good ideas out there that need to be solved. Or it's just like there's fascinating problems that I see. Let me backtrack a little bit that my training from the earliest days of my aspirational scientific career have been in interdisciplinary settings where I come with a set of expertise or growing expertise. And I'm working with experts from a different area and we come together to solve a common problem. 
That's been a standard for my entire career from back when I was in undergrad through my employment today as that I find it unfortunate when people refuse to work in interdisciplinary fashions and I think naturally machine learning and AI in general is an intercessed plenary field. I'm really grateful to be a part of it. That is probably not to say that I don't have specific directions in mind. A lot of the diversity in my research has come through just taking advantage of the opportunities right ahead of me or working on that good idea that just can't be put away. More specifically within a healthcare context. I mentioned earlier one of my core research interests is in generalization and robustness and currently machine learning models applied to healthcare problems are not robust and they are not reliable. I'm much less generalizable and one of the core research focuses that I have throughout my PhD but I think it will it's a big enough problem that I think it's going to be a hopefully hallmark of my career is developing you suitable methods or model frameworks that allow for distributed processing but also model transfer between hospitals. I have family that live in very rural settings where their access to healthcare technology is quite limited and my professional career has brought me to large urban settings where we have research hospitals and fantastic opportunities for our healthcare. I would hate for any technology I developed to not be accessible and available to my family that live in less opportune areas. That is one of the major directions that I'm going for in my career is can we develop things that can operate reliably outside of the environment that they were trained in. Along the lines there's little little battles or fires that you have to put out along the way you have to develop a way to handle uncertainty you have to handle partials of ability or changing actions sets or changing feature sets depending on the particular application within healthcare. You might get very diverse types of distribution shift between these environments and so along the way there's always going to be some really good idea in a collaborative fashion that I'm going to be working on. But ultimately the direction is making reinforcement learning reliable and functional within a off policy or partially observed setting. So from a technical standpoint that's probably where I sit within RL but I'm pretty open to lots of different things. So from my point of view you seem to be able to you have this amazing ability to innovate with a real breadth of different methods kind of the opposite of the one trick pony. So how do you balance learning new things versus applying what you already know? How did you come to this breadth and I'm talking both on the ML side and maybe the health side too. Yeah, you know the first it's very generous for you to say that I've been innovative. I think it's more desperate than anything is that you know you come to a problem and you have an idea of how it should work. And since I've been relatively new to this field like I didn't know anything about machine learning until I started by master's degree. And so that's thinking back now four years ago and I had very rudimental skills in programming at that time. And so I've approached research in a sponge like manner, you're sort of just trying to draw insight from lots of different areas. 
And you know I think that in order to solve some of these more challenging problems we need to look at the ways that things have worked other in other places. From the health care perspective and I think that this is important for anybody who's trying to apply any machine learning much less reinforcement learning to the real world is that you have to talk with experts. You have to understand what they've done and what the relevant problems are. It's an unfortunate crutch of ours in the research community to sort of play pay lip service to an application to motivate our work and give it some meaning. And I do appreciate the efforts by my colleagues within the reinforcement learning community that when they talk about off policy reinforcement learning in particular and they they motivate it by oh this could be useful for health care. That's good and that's important and we need to make strides in some of these important technical problems and the challenges that we face with them. But if we're doing that in a vacuum and in isolation without knowledge of what the actual practices of the doctors who would be using the technology then we're wasting our time and we're wasting their time and we're developing solutions and putting hype around them that if adopted would potentially be harmful and quite dangerous. And I think that I think it's important to recognize our own limitations, but then also pick up the expertise and the best practices of those who we want to work with. And I think by synthesizing the best practices of various fields, you know, I struggle with imposter syndrome like anybody and it's probably made worse by the fact that I try to do this synthesis is that I don't feel like I'm getting to be good at it. Any one thing, but rather you know, and this is in my mind by doubts telling me that I'm becoming mediocre at a lot of things are at least knowledgeable about what might be out there without having any dedicated experience, but that's that's partially why I chose to get a PhD is to be able to slow down a little bit and do an in depth focus on a particular area of research so that I could become proficient and good at that area of research and then expand as I move forward. So can we talk about the health and clinical setting in a little more depth for people who may have you may maybe understand RL but have focused on Atari or opening I Jim can you help us understand and appreciate what is really different here about RL in health in general and in a clinical setting. Yeah, I've been asked this question a lot just by colleagues and friends and I think it's really important to preface the comments that my comments that I'll give in response to the question by the motivation I have for doing health care research as a reinforcement learning researcher is that the majority of open problems within reinforcement learning such as sparse rewards or credit assignment or the exploration exploitation trade off off policy RL and the list kind of goes on and on in terms of these big open challenges or problems within the reinforcement and community all of those are present in states in the health care problems and health care is characterized at least in the way that I observe it as an inherently sequential process. 
An expert decision maker, with their best understanding of the environment and of the patient, or the multiple patients they're seeing with their various confounding conditions and symptoms, takes that information and makes the best decision they can. Then they observe the response, adjust, adapt, and try something again, and they keep doing this, hopefully toward the patient improving and leaving the hospital, or having a stable and healthy life if it's a less acute setting. Or, in the unfortunate circumstances where the clinician is unable to provide care adequate for the patient to survive, and in some cases it's unavoidable, a patient's health decays to a point where there's not much that can be done, and the standard of care at that point changes quite drastically. It's remarkable to see the types of approaches clinicians take when they hit these sort of dead-end situations within health care. But to answer your question more directly, how does it differ from traditional, simulator-based research: the major difference is that we have a fixed data set. We're unable to explore, and we're unable to do online evaluation of policies that we learn from this fixed data, and that opens up a big can of worms in terms of off-policy evaluation and off-policy reinforcement learning in general. This is a largely unsolved area of research, and there are some fantastic efforts going on from a variety of groups throughout the world looking at off-policy evaluation and ways to improve it. I'd particularly highlight the work coming out of Finale Doshi-Velez's group, with Emma Brunskill as a collaborator, as well as Nathan Kallus from Cornell Tech. These two groups, among others, plus significant effort from David Sontag, for example, and I could list a lot of names, are looking at off-policy evaluation and making it stable, so that we know, in these settings where we cannot explore and cannot evaluate new policies online, how reliable the outcomes are that we're suggesting we can achieve with reinforcement learning. That's the traditional sense, where we are learning policies to suggest actions. There's an alternative approach, however, that I've been investigating in collaboration with Mehdi Fatemi from Microsoft Research based in Montreal, looking at it in sort of an inverse direction: using this sequential framework where we can distill long-term outcomes into our current decision, can we choose which actions to avoid instead? We have some preliminary work under review along these lines right now. It's sort of making the statement that this is how RL in health care is different: we can't take the traditional approach because we can't explore, we can't experiment with actions, but we can use the same framework to describe what's going on and maybe identify optimal behavior from the observed experts, the clinicians who have helped generate the data that we use. I feel like I'm laboring the point a little bit here, but the major difference is just in data access, as well as being able to test and evaluate the solutions that you find.
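To make the off-policy evaluation problem described above concrete, here is a minimal sketch, not from the episode, of per-trajectory weighted importance sampling: estimating what a proposed treatment policy would achieve using only logged clinician trajectories. The data format and the assumption that the behavior policy's action probabilities are available are simplifications; in real clinical data those probabilities are unknown and must themselves be estimated.

```python
import numpy as np

def wis_estimate(trajectories, target_policy, gamma=0.99):
    """Weighted importance sampling estimate of a target policy's value
    from a fixed set of previously collected trajectories (no exploration,
    no online evaluation -- the setting described above).

    trajectories: list of episodes; each episode is a list of
        (state, action, reward, behavior_prob) tuples, where behavior_prob
        is the probability the logging (clinician) policy assigned to the
        action actually taken.
    target_policy: function mapping a state to a dict of action -> probability.
    """
    weights, returns = [], []
    for episode in trajectories:
        rho, g, discount = 1.0, 0.0, 1.0
        for state, action, reward, behavior_prob in episode:
            pi = target_policy(state).get(action, 0.0)
            rho *= pi / max(behavior_prob, 1e-8)   # cumulative importance ratio
            g += discount * reward                 # discounted return of the episode
            discount *= gamma
        weights.append(rho)
        returns.append(g)
    weights, returns = np.array(weights), np.array(returns)
    # Self-normalized (weighted) importance sampling: slightly biased but
    # much lower variance than ordinary importance sampling.
    return float(np.sum(weights * returns) / max(np.sum(weights), 1e-8))
```

The importance ratios multiply across every step of a trajectory, so variance grows quickly over ICU-length horizons, which is one reason off-policy evaluation in healthcare remains far from solved.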
So in Atari or OpenAI Gym, reward is really simple, but how do we think of reward here in the health setting? Are we trying to get the best expected, like average, health outcome across the population, or should we be trying to avoid the worst? That opens up a really interesting can of worms when you talk about expected health outcomes, because there's a plethora of work within the ML fairness community showing that expected outcomes are incredibly biased and unfair towards marginalized or minority communities, and this is particularly challenging in health care. There's some work from a couple of years ago that Irene Chen published with David Sontag out of MIT, where she looked at where the discrimination within a health care decision framework would be coming from. She looked at a cross-section of different groups of people within the MIMIC data set based on this expected positive outcome and found that women and racial minorities were provided with much worse expected outcomes because they are not adequately accounted for in the training. So it's difficult to say that a reward in health care, from an RL perspective, could be this mean or median type of performance, and this is where I think the holy grail we're all striving for within the machine learning for health care community is personalized medicine: looking at it on an individual-by-individual basis, can we provide adequate care and treatment selection that is tailored to a particular situation or a particular patient's condition? As for how that informs the design of the rewards we use: it's better to use hard and fast binary rewards for hospital acute treatment. In a hospital acute care setting, that's a pretty easy thing to define, whether somebody survives and is discharged, allowed to leave the hospital, or they unfortunately expire and succumb to their symptoms, so that binary plus-one, minus-one reward is pretty easy to define. But for the other types of reward definition you might need, if you're looking at a long-term care scenario, or somebody trying to manage diabetes for example, that reward design is going to be largely informed by the particular problem you're working on. Back to this diabetes example, you might want to maximize the time that the patient is in a healthy glucose range, or minimize the times that they go hyperglycemic, where they have too much blood sugar, which is quite dangerous. So you design your rewards based on the particular problem. A good example of somebody who did this, and who has done significant work looking at defining rewards, is Niranjani Prasad, who just recently graduated from Princeton, in her work with her advisor Barbara Engelhardt. One paper I have in mind is where Niranjani looked at removing a patient from a ventilator, something we're all very aware of right now in this age of the coronavirus pandemic.
The question is when is the appropriate time to remove somebody from a ventilator, and she designed a very specific reward function that helps describe the clinical problem of removing somebody too early or too late from a ventilator. She has some follow-up work that she published this summer looking at what an admissible reward in a health care problem is: does it cover the right physiological characteristics of the problem, is it attainable from a clinical practice perspective, et cetera. So I think the short answer is that it's nuanced; the reward definition within health care can be as simple as a binary outcome or some continuous measure of a long-term process. Okay, cool, but just a little bit more on that. I guess what's not clear to me is, if you have a distribution of outcomes, let's say in the long-term care setting, your policy could be shifting the mean of that distribution, but it could also be changing the variance, so different policies might have different types of tails. I just wonder if that's something that you think about, in terms of doing a maximin kind of thing, trying to make the best possible worst-case outcome for the population, versus the more expected, more average outcome. I get your point about fairness across the different subgroups, and I think that question applies to them too, no matter how you split it. Yeah, I'm not super aware of this approach being undertaken within a health care context yet. I know there is some work within the causal inference literature applied to health care with a machine learning focus that has been doing this; some of the work from Susan Murphy has been thinking about this. But I would also point to some interesting work that came out this summer from Sergey Levine's group that takes this sort of worst-case approach to Q-learning, where their algorithm is titled Conservative Q-Learning, which is very creative and I appreciate that. Then there's another paper that just came out a couple of weeks ago from Kamyar Ghasemipour, a friend of mine who's been working as a student researcher at Google with Shane Gu, and EMaQ is the name of their approach, where they take this expected return in an off-policy sense and then marginalize against some of these subgroup-type challenges. Both Sergey Levine's paper and this EMaQ paper are looking at robotics specifically, but some of the characteristics could potentially be applied to these long-term, continuous-type problems within a health care setting for sure. But there really hasn't been a whole lot that I'm aware of that has explicitly looked at domain shift within the expected rewards in response to the optimized policy. Right, thanks, those are great tips. Okay, so what about model-based RL in this setting? My impression is that a lot of model-based RL is looking at domains that are really quite deterministic, and either they have no noise or maybe they have very simple noise. So how do models differ in these health settings? Are they still useful, maybe useful in different ways? Is it possible to do planning with models in noisy environments like this?
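As a concrete illustration of the two reward styles contrasted above, a sparse terminal outcome for acute care and a dense time-in-range signal for chronic care, here is a small sketch; the function names, thresholds, and scaling are illustrative assumptions, not taken from any of the papers mentioned.

```python
def acute_care_reward(done, survived_to_discharge):
    """Sparse terminal reward for an ICU-style episode: +1 if the patient
    is discharged alive, -1 if they expire, 0 at intermediate steps."""
    if not done:
        return 0.0
    return 1.0 if survived_to_discharge else -1.0


def glucose_range_reward(glucose_mg_dl, low=70.0, high=180.0):
    """Dense per-step reward for a chronic-care (e.g. diabetes) setting:
    +1 while the reading stays in a healthy band, with a penalty that grows
    the further it drifts hypo- or hyperglycemic. The band and scaling here
    are illustrative placeholders, not clinically validated choices."""
    if low <= glucose_mg_dl <= high:
        return 1.0
    distance = (low - glucose_mg_dl) if glucose_mg_dl < low else (glucose_mg_dl - high)
    return -distance / 100.0
```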
I think that's an open research question that's yet to be determined. Some of my prior work has been within model-based RL in health care, where, and we're going to talk about this later so I won't get too far into it, we try to adapt the model based on local features of the task or the environment. But in general, I think there's a danger in thinking that model-based RL is the solution; I have continually found myself thinking this. It has its use cases for sure, but like you pointed out, a lot of those use cases are in more simplistic settings where you have deterministic transition behavior and very low-noise environments. Extrapolating to a health care scenario, what is your model of, right? How well calibrated can a model of the human body be when we know so little about it? Even with the centuries of medical research that have produced a lot of great understanding and insight about medical practice, and also just our physiology, there's still a lot we don't know. In model-based RL, the performance of your policy is largely driven by the accuracy of your model, or at least how well it can describe the environment around you. There have been papers in the recent past, under the imitation learning framework, that look at what happens if you have a suboptimal model or a suboptimal demonstrator, but when you add additional layers of complexity, such as non-determinism in the transition statistics or going fully off-policy, a lot of those solutions don't really work that well. So let's move on to hidden parameter MDPs; this topic is related to your master's thesis, is that right? Yeah, the HiP-MDP was the core foundation of my master's thesis. I was fortunate to be able to repurpose the paper we published on it as my master's thesis, with additional explanatory introduction chapters about Gaussian processes and Bayesian neural networks. But yeah, the HiP-MDP is something that I really enjoy talking about because, one, it means a lot to me; it's the first real research project that I was able to start and finish as a machine learning researcher, and the fact that it got published makes me feel at least that it was successful, and that other people have been building on top of it is another way, I guess, to deem it successful in my eyes. So what is a HiP-MDP, and why is it useful and interesting? George Konidaris is our collaborator on this project and one of the originators of the HiP-MDP, along with Finale Doshi-Velez, who was my advisor at Harvard. He might not like me saying this, but I view the HiP-MDP as an abstraction of the POMDP where, given a family of related MDPs or tasks whose major differentiation is a perturbation in how the transition dynamics are observed, the HiP-MDP parameterizes that variation in the transition dynamics. Because if you can accurately or correctly prescribe what the individual MDP's or individual task's transition dynamics are, you should be able to optimally solve that problem given prior observed behavior from other tasks within that same family. As an illustrative example, if you've learned to write with a pen, you can most likely write with any other pen that you pick up, no matter how heavy it is, no matter what the tip is like; as long as it has ink, you will likely be able to pick it up and write with it.
And that's because our body and our mind have been trained to handle these types of variations, where a reinforcement learning agent hasn't necessarily, right? It's not necessarily robust to slight perturbations in the weight of an object, or to how the mechanics of a moving arm might change if the tolerances on a motor are off by a little bit. What was originally proposed in their original paper, which was put on arXiv in 2013 and finally published in 2016, was that if you can filter or estimate among all of the prior observations of this family of tasks, use them to find something that's similar to what you're observing now, and parameterize it that way, you should be able to accelerate your learning in the current task. My work during my master's degree was trying to make that approach scalable and more robust because, as I said, they used a filtering procedure whose prior was seeded through an Indian buffet process, which is really difficult to scale, at least in the setting they were using, to establish basis functions over the transition dynamics they were observing. One of the insights that Finale and George came up with and proposed to me when I was starting the project was: can we take these filtering weights that are being used to linearly combine these learned basis functions of our transition statistics, and use them as input to a parametric model? In the original setting it was, can we use them as input to a Gaussian process, because they were still interested in these non-parametric statistical basis functions at the time. We found that GPs aren't a really great fit, or at least with the understanding we had of them at that time, this is late 2016, and that it was better to move to a probabilistic framework where we could still do inference over these estimated hidden parameters that connect the family of tasks together, but be somewhat scalable to higher-dimensional problems and to more data. That's where we replaced the Gaussian process with a Bayesian neural network, to be a stand-in transition model that we could then optimize for the individual tasks we're observing, as a function of these hidden parameters. I feel like I've been meandering a little bit, so in summary, the HiP-MDP is a method by which we can describe small perturbations in observed dynamics between related tasks. From a health care perspective, a task would be treating a patient from a cohort that all have HIV, for example; that was a simulated problem that we addressed in our paper. When you observe some new patient, what about their physiology can you learn from their observed response to the medication that you give, and can that then be used to help inform the type of medication that you want to give them in the future? This was done, at least within this construct of hidden parameter Markov decision processes, by estimating and optimizing the hidden parameters for that individual patient. After we solve the problem for that one patient, we take the observed statistics and these hidden parameters and keep them, along with our updated transition model, the Bayesian neural network, to be prepared for the next patient that would come in.
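A minimal sketch of the HiP-MDP machinery as described above: one shared transition model for the whole family of tasks, conditioned on a low-dimensional hidden parameter vector that is re-estimated for each new task or patient. The published work uses a Bayesian neural network and proper posterior inference; this sketch substitutes an ordinary network and a point estimate fit by gradient descent, so treat it as an illustration of the structure rather than the actual method.

```python
import torch
import torch.nn as nn

class HiPTransitionModel(nn.Module):
    """Shared dynamics model T(s' | s, a, theta): a single network for the
    whole family of tasks, with a per-task hidden parameter vector theta
    capturing how one task's (one patient's) dynamics deviate from the rest."""

    def __init__(self, state_dim, action_dim, theta_dim=5, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + theta_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action, theta):
        return self.net(torch.cat([state, action, theta], dim=-1))


def infer_hidden_parameters(model, s, a, s_next, theta_dim=5, steps=200, lr=1e-2):
    """Given a handful of observed (s, a, s') transitions from a new task
    (e.g. a newly admitted patient), freeze the shared model and fit only
    theta by minimizing next-state prediction error -- a point-estimate
    stand-in for the posterior inference used in the actual papers."""
    for p in model.parameters():
        p.requires_grad_(False)
    theta = torch.zeros(1, theta_dim, requires_grad=True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        pred = model(s, a, theta.expand(s.shape[0], -1))
        loss = ((pred - s_next) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return theta.detach()
```

During training, the shared network and one theta per source task would be fit jointly on all observed transitions; at deployment, only theta is re-estimated for the new patient.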
The hope here is that if you find somebody who's similar to what you've observed in the past, it will be easier to update and optimize their hidden parameters and then get a quicker, more efficient, and effective policy downstream. So this sounds a bit related to the contextual MDP, but that's a slightly different concept, right? Could you help us compare and contrast those two? Yeah, it definitely does sit within the same idea. I view the contextual MDP as a specialized use case of what current research has termed a generalized hidden parameter Markov decision process. The contextual MDP has largely been used in multi-armed bandit settings, where the reward is fixed per task, and that specific context, the reward being fixed or the particular user being different, is known to the learning algorithm. Where the HiP-MDP differs is that it doesn't assume that knowledge; it just observes that there's been a change. We do assume in the construction of the MDP that you will know when you are approached with a new task, and it's the algorithm's job to figure out and learn an approximation for that context. The generalized hidden parameter MDP paper that I referenced is from Christian Perez and Theofanis Karaletsos, who were at Uber AI at the time, and it was presented at AAAI this past winter. So can observables give us hints about the hidden parameters, like the demographics of the patient maybe, or are we assuming that we don't get any hints about these hidden parameters except for what happens with the transitions? Yeah, I think in practice, if this were scaled and improved to be usable in a real setting, demographics would absolutely be a part of that contextual process of learning the underlying or hidden parameters that you can't observe. Demographics such as race or gender, height, weight, age, et cetera, you can go down the list, do help give some understanding or context, but there's still broad variance within these demographic groups, so I would view demographic information as a head start toward learning some actual physiological context. Ultimately it has to be about the data, about the observed transitions and how the patient responds to medication. In an ideal setting, that's how health care works: doctors come with their training and their understanding of the medical literature, as well as just the practice of medicine, and they use that to inform their initial diagnoses and treatments, but they adjust and they adapt, or at least the best ones do, and they adapt in a hopefully compassionate way. I think that's what we're trying to develop machine learning methods for: to have this built-in, at least conceptual, understanding of a problem and develop a solution that adapts, and this might be overthinking it, but in a compassionate way, in a fair way, in a way that is equitable across the cross-section of demographics. So you talked a little bit about how you improved the HiP-MDP solution, and maybe the setting, with your first paper. I wonder if you could walk us through the set of HiP-MDP papers, like what different types of progress were made in terms of both the setting and the solution. Yeah, I'm happy to do that. The original paper by Finale and George just set up the problem and introduced the framework.
Their early work did bear some similarities to a few other prior pieces of literature that I'm spacing on right now, but it was very slow, and it couldn't scale to problems of more than four dimensions. I kind of chuckle when I say that, because in our updated paper we didn't look at anything greater than six dimensions, but we did add two factors of variation in the state space. What we did in my first paper on hidden parameter Markov decision processes was develop a scalable, or at least functional, approach to learning these hidden parameters, and we did that by virtue of inference through a Bayesian neural network. What we found, or at least what was pretty apparent to us as we were doing that research, is that it was still computationally inefficient and really expensive, because we would need to simulate thousands of episodes using that model in order to infer what those hidden parameters were. It worked for what we were trying to do, but there's no way that approach would work in a real setting. After I finished my master's degree, I had to go back to work full-time, so I didn't get a chance to really participate in the next step of this, but luckily Finale had a brand new PhD student, named Jiayu Yao, start right at the same time I graduated. Jiayu was fascinated by the idea of the HiP-MDP and was interested in making it more computationally feasible, without needing to run thousands of simulated episodes of an environment in order to estimate these hidden parameters. Her idea was to distill the optimal policies from prior tasks into a generalized policy class, in the same way that we were distilling all the transition functions into this learned Bayesian neural network parameterized by the hidden parameters, which is what gives you the change in behavior. She said, okay, we can learn those hidden parameters using our transition model, but we don't need to rely on that transition model being absolutely correct; it just needs to be good enough to get us a stable set of hidden parameters, and then we use those hidden parameters to parameterize the policy class and get the differentiated behavior in this general policy based on those hidden parameters. Unfortunately, and it is great work and it worked really well, we have not yet been able to convince reviewers that we did a good enough job, and we haven't put the paper on arXiv yet; we still have some things in the works to hopefully improve it, looking at more of a theoretical bent. Finale has had some undergraduate mathematics students looking at more of the theory behind these hidden parameter Markov decision processes, specifically with this direct policy transfer framework. But I do have a version of the paper that we presented at an ICML workshop two years ago on my website, and it has been cited by other researchers, so at least it's making some contribution in that fashion. This seems like it's going to be huge, basically, for real-world RL; I can't imagine it being limited to the healthcare setting. It seems like it would touch everything. Yeah, I have similar thoughts about it. I think this approach to adaptation and generalization in RL is really appealing, and we see that with the meta-learning community within RL, who have been doing fantastic work looking at ways to adapt.
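The direct policy transfer idea described above, distilling the per-task optimal policies into one policy class indexed by the hidden parameters, might look like the following sketch; the architecture and sizes are illustrative assumptions, not the published model.

```python
import torch
import torch.nn as nn

class ThetaConditionedPolicy(nn.Module):
    """One policy for the whole task family: the hidden parameters theta,
    inferred from the transition model, select which behavior the shared
    policy expresses, instead of retraining a policy for every new task."""

    def __init__(self, state_dim, theta_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + theta_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state, theta):
        logits = self.net(torch.cat([state, theta], dim=-1))
        return torch.distributions.Categorical(logits=logits)
```

Such a policy would be trained to imitate the per-task policies from the source tasks; on a new task, only theta needs to be re-estimated (for example with the inference sketch earlier) before acting.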
And so I think that the way we look at it is looking at ways to do adaptation online you know as you are learning in a new task and you adapt a policy class to work optimally however I do stress at least in my own mind thinking that you metal learning and even my own work is fitting to single distribution is that it's really difficult to get any of these things to work outside of the work. Outside of the observed task class that you have in your training set is that there has been some efforts in the metal RL community looking at out of distribution adaptation but I haven't found any of the papers to be overly convincing. One additional limitation of our work is that we only looked at the transition of the perturbation of the transition dynamics there is additional factors of variation in RL problem that you can account for and this was the major focus of the generalized human parameter process sorry the generalized HIPMDP paper from Christian Perez and his collaborators was that they factored the hidden parameters to account for different forms of variation so variation in the reward structure variation the transition structure and I think they had another factor variation but it's escaping me right now. And that has also been a feature of some additional follow on work one one particular paper that I have yet to read but I've had I've been in a lot of discussions with Amy Zhang about who's the lead author on is that she took the HIPMDP framework along with the block MDP paper. framework which is something that she has inherited from John Langford and has been looking looking out on her own for quick adaptation but then also synthesization of policies and you know that they're addressing different factors of variation that you might observe among a family of tasks so there there's a lot of really exciting and fun work in the days to come of looking at outside of a meta RL perspective because I'm still not overly convinced that it's the right approach because we're using a lot of computation to fit to the same distribution there but I think that the insights that we're gaining in that line of research is really informing creative modeling strategies and approaches within a more traditional RL framework. So it sounds like this area has a lot of potential and it's not fully solved yet. Yeah that's right there's a lot that can be done and I'm excited that there are a lot of researchers looking at it. I shouldn't say a lot there's there there have been efforts in the in the near past that indicate that people are interested in this type of problem. I'm going to move to another recent paper viewers counterfactually guided policy transfer in clinical settings what's going on this paper you're tackling domain shift in RL using cause of models is that the idea. I that's the major technical direction of the paper I think it's a little easier to stomach by describing the motivation and as I referenced earlier. There is a lot to do in order to make models within machine learning and healthcare transferable and generalizable between medical institutions and one of the major challenges of this model transfer is that practices vary between hospitals the type of measurements that they take the type of instrumentation that they have at these different hospitals confound the transfer problem. 
But the major confounding factor that limits the ability to transfer models between hospitals is that the patient population is completely different, and it can vary quite widely in the conditions and underlying symptoms, or at least syndromes, that the patient population has. You can think, for example, about the overall structure of a transfer learning problem: you have some source setting or source data set that you use to train your model, and you want to apply it somewhere else with minimal adaptation, some adaptation, or no adaptation, depending on how confident you are. In the healthcare setting, that large source environment could feasibly be a research hospital in a large urban environment, where you do have some population diversity, but the patient cohort you have in your data set will be pretty different from a regional diabetes clinic, for example. You might have had a minority of patients within your source setting that had diabetes, with particular practice and care taken to accommodate them, but when you go to a diabetes clinic, that's the majority of the population all of a sudden, and this patient population might also skew older, and there might be other demographic differences. By blindly applying a model from a large research hospital to a regional clinic, you're going to miss a lot of that variation and, as I said earlier, potentially do a lot of harm and be overconfident in the policy or the treatment strategy learned from the major hospital when applying it to the smaller setting. That was the primary motivation for our work: looking at a way to address this form of domain shift within the underlying data distribution. We did this with a simulated cohort of patients that had sepsis, and one of the factors of variation that you can set in defining these simulated patient cohorts is the percentage, or the proportion, of the population that is diabetic. We used the simulator that was developed by David Sontag's group out of MIT, which was featured in a paper that Michael Oberst and he published at ICML last summer. We took their simulator and built a wrapper around it that allowed us to vary the proportion of patients within it as being more or less diabetic than the source environment, and then studied algorithmic solutions or improvements in off-policy settings with counterfactual inference, to address this type of domain shift just in the patient population itself. It seems like we're very early on in combining causal models and reinforcement learning, and I think some people still don't even think that's important to do. But I think it's really exciting to see you with one of the early papers combining these two fields. Do you see this combination being a big deal going forward? Yeah, I think there's a really good history of people, specifically within the healthcare context of machine learning research, who have been looking at causal inference. Professors who come to mind are Suchi Saria, Susan Athey, and Susan Murphy, to name a few.
And the list goes on and on; David Sontag has been looking at this, and Elias Bareinboim has been looking specifically at the fundamental theoretical underpinnings of reinforcement learning and causal inference and the connection between them. I believe quite ardently, actually, that any future solution we have for generalization within RL needs to account for causal structure, especially in an off-policy or offline setting where you have a fixed data set, and that we need to learn a lot from our colleagues in the statistics, public health, and epidemiology worlds about how to do good causal inference. I think Judea Pearl, Bernhard Schölkopf, et al., that was the name I was trying to say, these researchers, among all of the ones I've named, have been doing a really great job of introducing some of these concepts within machine learning. Now a lot of the effort is in drawing the coherent connections for usability: is it feasible to make the assumptions that we do make in order to make these things work? People have their bones to pick with the way that machine learning researchers use causal language and causal frameworks, and I think they're valid in raising those concerns, and it's upon us, as a community that wants to use these tools, to listen and to learn. That's been something that Shalmali Joshi, my primary collaborator on this paper, and I, as well as my advisor, Marzyeh Ghassemi, have been doing: listening and talking with experts in this field to try to get a better sense of what we're doing right and what we could be doing better. I think it's an exciting future, assuming we can be successful in scaling the approaches we present in the paper we're highlighting right now to more realistic scenarios. Right now, most of the causal inference and reinforcement learning literature, at least the work bridging these two areas, has been in more controllable, discrete settings, and I think the only work I know of that has looked at somewhat continuous settings has been Susan Murphy's work developing mobile interventions delivered throughout somebody's day: you're wearing a smartwatch, for example, and it says, oh hey, you should get up and walk, or, oh hey, your heart rate's too high, slow down. Her project, the fully funded study that she's been running, is known as HeartSteps, and they're probably one of the only projects, or at least sets of research, out there that's been looking outside of the more controllable discrete settings. I think there's a lot of development that needs to be done, both on the statistics side and on the modeling side from a machine learning perspective, about how to expand and adapt to more continuous and realistic settings, and that's actually some work that I'm quite excited to get started on later this year. It sounds like I have a lot of background work to do; there's a lot that I don't understand yet, and I'm trying to learn from my collaborators who know far more than I do. I want to just add, I love HeartSteps. I think Susan Murphy's work is so fascinating, and I learned a lot from reading about that. I want to move on to talk more about MIMIC, MIMIC-III, and sepsis. Okay. So MIMIC-III and the sepsis problem seem to come up a lot in ML for health.
I think you made a comment that it's kind of like the MNIST for ML for health. I understand this is ICU data from a teaching hospital, is that right? Can you tell us more about the problem and the data set? Yeah, the data is collected from Beth Israel Deaconess Medical Center in Boston, which is part of the Harvard Medical School system of teaching and research hospitals. Leo Celi and his collaborators at MIT thought, we have this really rich data set of electronic medical records that we can use to inform better decision making and also improve medical practice. Leo is a practicing acute care doctor and saw within his own workplace, the intensive care unit, the potential benefits of developing this type of data set to be used by the community. They've gone through substantial efforts to de-identify it, clean it, and present it so it can be used by anybody, as long as they go through adequate ethics training and follow the protocols defined by the consortium that hosts and prepares the data set. They've actually just finished a new version of MIMIC, version four, which is being rolled out this summer to include a much larger set of patients. Another improvement is that they now have chest X-rays fully integrated for all the patients that have them, they have increased availability of the notes from doctors and clinical teams, and another thing some of my colleagues are quite excited about is that they're also including pharmacology and medication reports, something the MIMIC data set hasn't historically had. As for why MIMIC and why sepsis have become such a focus: sepsis is a really poorly understood problem, so there are a lot of potential gains, but it also introduces a lot of difficulty. We fall into the trap as machine learning researchers of saying we've solved it, we've done it, but then a doctor looks at the solutions and says, well, we knew all that already. It's just a harder problem than you thought. Why it's been used so widely within the machine learning for healthcare community is, one, the availability of MIMIC, but also that sepsis is one of the conditions within the hospital that gets really dedicated monitoring. There's a richness to the data, along with consistent measurements, so you don't have as much missingness or unobserved information about a patient, such as their heart rate, their respiratory rate, their blood levels, and the list goes on as you consider the vitals, because these patients are at the greatest danger of dying within the hospital. In fact, sepsis is one of the leading causes of in-hospital death. Sepsis itself isn't a single diagnosable condition; it's an umbrella term for a large-scale systemic shutdown of the body's organs in response to infection and pain. It can be detected by a variety of measures, one of which is rising lactic acid levels in the blood, a physiological response our bodies have to infection and pain, and it can manifest in multiple different ways. If you have access to the MIMIC data set and the notes, you can look through the patient cohort who have sepsis.
Unfortunately, or sadly, for those that succumb to their sepsis, there's a variety of scenarios or conditions that may have led to the patient becoming septic. The ones that stick out to me: one individual had surgery after an accident, their sutures got infected, that infected their blood, and they became septic. Another that comes to mind is someone whose infection came from chemotherapy. These are really rare and unsettling situations that pop up when you look in aggregate at this hospital data. It's not a happy time to read case reports about somebody who passed away, and it's even more difficult when you look at the clinical decisions that were made and say, oh, in retrospect, they could have seen this and they could have changed that. Ziad Obermeyer has a really nice way of describing this phenomenon: in retrospect, we can be the best at anything. The challenge is diagnosing, or at least identifying, the signal as it's happening, or even better, before it happens. I think that's the large motivation behind a lot of machine learning for healthcare research. But solving the sepsis problem in particular is only one really small piece of the overall healthcare puzzle. It just happens that, thanks to the efforts of the MIMIC team, we have this data set available, and it has unfortunately influenced a lot of larger data collection practices. Recently a team in Europe published a large health data set for intensive care, but it's focused on sepsis. And when we talk to our clinical collaborators at local hospitals here in Toronto, they kind of back away and say, oh, we don't have the data to support sepsis. And we're like, no, no, we don't want to focus on sepsis; we want to focus on problems that you're facing, but we're going to benchmark against this sepsis condition in this open data set. Once we have those types of conversations with our clinical collaborators, I think that, one, we learn what they're really interested in, and two, they see the limitations of current practice within machine learning. It helps bring us to equal terms, where they see that the problem hasn't been solved and that we're not just there to be engineers; we're there to help them in actual clinical research, which opens a lot of really great partnerships and doors when you have this common understanding. From my brief readings, I only ever encounter the phrase MIMIC-III in the context of sepsis, but is MIMIC-III really all about sepsis, or is it a much broader data set? It's definitely a broader data set. Like I was saying, the frequency of records for a septic patient makes it an easier problem to look at, and the community has defined really good data extraction code to pull out the sepsis patients, but there's a large variety of conditions and people within the MIMIC data set, all constrained to the intensive care unit. So it's acute care and the challenges that come with that: these are short timelines, these are people in very dire circumstances, and some of the recording for these patients is quite sporadic because doctors are working feverishly to treat and care for people who are on the brink of dying.
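To give a flavor of what working with this kind of extracted ICU data involves, here is a small pandas sketch. The file `vitals.csv` and all column names are hypothetical, and the rising-lactate flag is a toy heuristic, not a sepsis definition (real cohorts are built with the community's extraction code and criteria such as Sepsis-3); the sketch just measures how sporadically each vital is charted per ICU stay.

```python
import pandas as pd

# Hypothetical extract: one row per ICU stay per charted hour, with vitals left
# as NaN when they were not measured.  File and column names are made up.
vitals = pd.read_csv("vitals.csv", parse_dates=["charttime"])

# How consistently is each vital actually recorded per ICU stay?  Sepsis patients
# tend to have dense charting; other cohorts can be far more sporadic.
coverage = (
    vitals.groupby("icustay_id")[["heart_rate", "resp_rate", "lactate"]]
          .agg(lambda s: s.notna().mean())
)
print(coverage.describe())

# Toy heuristic only: flag stays whose lactate rises over the stay.  This is NOT
# a sepsis definition -- real cohorts use criteria such as Sepsis-3.
def net_lactate_change(group):
    vals = group.dropna(subset=["lactate"]).sort_values("charttime")["lactate"]
    return vals.iloc[-1] - vals.iloc[0] if len(vals) >= 2 else float("nan")

rising = vitals.groupby("icustay_id").apply(net_lactate_change)
print((rising > 1.0).sum(), "stays with a net rise in lactate")
```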
Sepsis has become a little bit easier to work with because it has very systematic protocols for measuring and monitoring patients, and I think that's why the majority of the papers we see from the community that use MIMIC utilize the sepsis framework. But that doesn't mean you can't use this data if you're interested in solving something else. The mechanical ventilation weaning paper from Niranjani Prasad that I referenced earlier looks at the septic cohort, but they don't look at treating sepsis; they're looking at a sub-problem within that cohort. I'm also aware of research using the same septic cohort for diabetes management and recognition within a clinical setting, and there's mental health type research that has been done within the context of the MIMIC or septic cohort as well. There are a lot of interesting parallels that can be drawn within the data that don't focus on sepsis, but at its core, sepsis is the most low-hanging fruit of the problem. So when we look at, say, when deep RL started with Atari, how DQN did back then and how agents today are doing with Agent57 and MuZero, some people are saying, or I sometimes wonder, have we solved Atari? Is Atari just solved and not that useful anymore? How would you comment on where we are on that journey with MIMIC-III and sepsis? I guess we're a long way; are we a long way from solving it? What would it even mean to solve this problem? Yeah, to be completely honest, I don't know if it's even possible, clinically, to describe a solution in the language we're used to using; is a "solution" something that's attainable? I think there's always going to be some context-driven exception to any one clinical practice, given the patient characteristics and the situation. For example, there have been some published medical papers from China looking at the coronavirus pandemic where 100% of the patients who died in their hospitals were observed to be septic, whereas among those that recovered it was something like 40 to 60% who were septic at some point. So sepsis takes on different context and form: if you're treating a patient in the hospital right now with the COVID-19 virus, sepsis is going to be a factor you consider, but largely you're focused on treating the current symptoms of the virus. That largely changes the texture of the problem. There have been efforts to make generalizable deep RL solutions to the sepsis problem, and I think they're ill-guided in a lot of ways. I don't want to delve too deeply into them because I really respect the effort and the researchers who did this work: Aniruddh Raghu's set of papers from 2017 and 2018, and then the AI Clinician paper that was published in Nature Medicine in 2018. They did great work introducing deep RL into the question, or at least getting the attention of the reinforcement learning community looking at sepsis. But I think we do ourselves a disservice when we take a traditionalist RL approach to healthcare, and that's what a lot of people have been trying to do by applying a deep Q-network
just to learn optimal action selection strategies. Finale Doshi-Velez and Omer Gottesman, one of her now-graduated students, wrote a really great opinion piece for Nature Medicine giving guidelines for using reinforcement learning in healthcare, and they identify all of the pitfalls of using a traditionalist approach on these fixed health data sets. All this is to say, I don't know what a solution looks like, especially without running clinical trials. In the best-case scenario, we convince either the NIH or some other health regulatory body that we have done a good enough job, and we ourselves feel confident and assured that we've done a good enough job capturing all of these factors of variation in medical practice, that we could run a full clinical trial. We cannot assume that we've solved anything in healthcare, or really in the real world, without actually experimenting. That's really dangerous territory for machine learning within healthcare: we need to be sure that decisions are reversible in case our models are wrong, and the current status of a lot of this is that we are not there; we're nowhere close. Part of my motivation for coming to Toronto is that there are doctors here who are willing to work with us to develop clinical trials for algorithmically driven decision tools, not for full treatment, which would preclude a solution to the sepsis problem, but for smaller problems that might free up a doctor's mental capacity and time to be a little more attentive to their patients, or provide the opportunity to develop new or novel treatment solutions. That is the goal I think is realizable within the next five to ten years, depending on the use case and the problem. For example, there's a group here in Toronto at the children's hospital that is looking at triage for the child patients that come in, identifying the right wings or departments of the hospital to put them in, and that itself is a reinforcement learning problem, in that you want to look at the long-term effects or outcomes of that initial triage decision. That's some exciting work that some colleagues of mine are getting started on, and I think it's a really feasible, or at least ethical, approach to trying to develop algorithmic aids for a healthcare setting when it comes down to health and recovery. I get a little nervous about thinking we're close, or even putting a prediction on how close we may be, but I think we're hopefully getting there within my lifetime, and I'm excited to be a part of the efforts to make that somewhat realizable. I'm glad you raised the AI Clinician; there was quite a bit of back and forth about how close we were to using a solution like that, and critiques about handling uncertainty. The criticisms of that work are both fair and unfair. I think there were some sour grapes from some of the critics of that work, because they wanted to be the first to do this, but a lot of their criticisms about modeling uncertainty and the robustness of the approach, as well as the types of treatments that the AI Clinician suggested, were very, very good.
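To make the "traditionalist" deep Q-network setup and its pitfalls a little more concrete, here is a small sketch assuming a hypothetical offline treatment log (the column names are made up). The AI Clinician discretized IV fluid and vasopressor doses into a 5x5 grid of 25 actions; the sketch builds that kind of grid and counts how much support each action has in the data, which is one simple way to see why value estimates for rarely observed, aggressive treatments are the least trustworthy.

```python
import numpy as np
import pandas as pd

# Hypothetical offline treatment log: one row per patient per 4-hour window,
# with total IV fluid volume and maximum vasopressor dose in that window.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "iv_fluid_ml": rng.exponential(200, 5000) * (rng.random(5000) < 0.7),
    "vasopressor": rng.exponential(0.05, 5000) * (rng.random(5000) < 0.3),
})

def to_bin(x, edges):
    """Map a dose to one of five bins: 0 = no treatment, 1-4 = quartiles of non-zero doses."""
    return 0 if x <= 0 else 1 + int(np.searchsorted(edges, x, side="right"))

fluid_edges = np.quantile(df.loc[df.iv_fluid_ml > 0, "iv_fluid_ml"], [0.25, 0.5, 0.75])
vaso_edges = np.quantile(df.loc[df.vasopressor > 0, "vasopressor"], [0.25, 0.5, 0.75])

# 5 fluid bins x 5 vasopressor bins = 25 discrete actions.
df["action"] = [
    to_bin(f, fluid_edges) * 5 + to_bin(v, vaso_edges)
    for f, v in zip(df.iv_fluid_ml, df.vasopressor)
]

# Count how much support each action has in the data.  Rarely taken, aggressive
# combinations are exactly where offline Q-value estimates are least trustworthy.
support = df["action"].value_counts().reindex(range(25), fill_value=0)
print(support.sort_values().head(10))
```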
I think that is the large challenge with doing RL in healthcare: you, or at least the agents, get fixated on the actions that cause the most change, and you end up suggesting those really serious actions immediately, when they're not necessary. That's something we've been trying to diagnose in some of the work I alluded to with Mehdi Fatemi: why is it choosing those types of actions, and can we identify when those actions are useful and when they are potentially detrimental? So you presented a workshop spotlight related to this topic, entitled "Learning Representations for Prediction of Next Patient State," at the ACM Conference on Health, Inference, and Learning 2020. Can you tell us a little bit about that? Yeah, so this conference is brand new. It was headed up by my advisor Marzyeh and her close collaborators, and the reason I'm prefacing this is that I just want to say we have probably the coolest conference acronym out there: CHIL. I tried, with no success, to convince Marzyeh to contact Netflix to get a sponsorship, especially because Ben and Jerry's now has an ice cream flavor called Netflix and Chill'd; it would have been fantastic to have that. We don't have to take ourselves overly seriously all the time. But this paper that I presented as a workshop spotlight was looking at, within the sequential framework of decision making in healthcare, what's the right way to learn a representation. A lot of reinforcement learning methods in healthcare just take the data as an isolated sequence and say, okay, this is time step t, we have all of these observations, we'll just use that data as raw input to our agent. That's fine, and it's worked well given the results that we do have in the literature, but it's not necessarily realistic, because a clinician will use some historical understanding of, for example, a patient's blood pressure in providing treatment. There have only been two papers in the literature, at least that I'm aware of, that have done any sort of recurrent modeling to construct a time-dependent history, a hidden state of what the patient's condition is, and even then those two papers just blindly use a recurrent neural network. There hasn't been a systematic or rigorous study of what the appropriate representation for a patient is. What our work tried to do, and we're hoping to get this up on arXiv within the next couple of months because we got good comments from a failed submission of this paper to a conference this summer on where to improve it, is take several different models that embed sequential observations in a recurrent fashion and then diagnose, or at least investigate, what those representations look like and what they allow us to do. The auxiliary task we use with these representations is predicting the next observation: given a sequence up to time t, can you predict the observation at time t+1?
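Here is a minimal sketch, in Python with PyTorch, of the kind of recurrent representation learning setup being described: a GRU encodes the observation history, one head predicts the next observation, and an optional term encourages the learned state to correlate with a measured acuity score, as Taylor describes next. This is an illustration under assumed shapes and names, not the authors' actual model or code.

```python
import torch
import torch.nn as nn

class PatientStateEncoder(nn.Module):
    """GRU over per-timestep vitals/labs; the hidden state is the learned patient representation."""
    def __init__(self, obs_dim, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.next_obs_head = nn.Linear(hidden_dim, obs_dim)
        self.acuity_head = nn.Linear(hidden_dim, 1)

    def forward(self, obs_seq):                 # obs_seq: (batch, T, obs_dim)
        h_seq, _ = self.rnn(obs_seq)            # (batch, T, hidden_dim)
        return h_seq

def pearson_corr(x, y, eps=1e-8):
    x = x - x.mean()
    y = y - y.mean()
    return (x * y).sum() / (x.norm() * y.norm() + eps)

def loss_fn(model, obs_seq, acuity, corr_weight=0.1):
    """Next-observation prediction plus a term encouraging correlation with acuity scores."""
    h = model(obs_seq)
    pred_next = model.next_obs_head(h[:, :-1])        # predict o_{t+1} from history up to t
    mse = ((pred_next - obs_seq[:, 1:]) ** 2).mean()
    acuity_pred = model.acuity_head(h).squeeze(-1)    # (batch, T)
    corr = pearson_corr(acuity_pred.flatten(), acuity.flatten())
    return mse - corr_weight * corr                   # maximizing correlation => subtract it

# Toy usage with made-up shapes: 8 patients, 24 timesteps, 10 observed features.
model = PatientStateEncoder(obs_dim=10)
obs = torch.randn(8, 24, 10)
acuity = torch.randn(8, 24)                           # stand-in for a measured acuity score
print(loss_fn(model, obs, acuity).item())
```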
What we wanted to do when learning these representations was to constrain the representation learning to be clinically meaningful, so rather than just learning some latent variable that happens to achieve good accuracy in your test setting, we want it to maintain some semantically meaningful information. What we did is constrain the learning process by maximizing the Pearson correlation between the learned representation and known, measured acuity scores, which are just a measure of how severe the patient's condition is. This is just an analysis study; we're not developing any new model or presenting any new way of doing these embeddings. But by learning representations with this constraining process, the simpler RNN-based methods are able to more or less separate out the types of patients we would see: patients who survive, with less severe conditions, versus those with more severe conditions who ultimately succumb to their symptoms. Why this is important is that if we're thinking about applying a reinforcement learning agent on top of these learned representations, we want to maintain some of this clinically meaningful information, but also give the agent a head start by seeing that the representations are separable between the two patient cohorts. We're excited to start applying learned policies on top of this type of learned representation. This workshop spotlight, as well as the paper we're going to be putting on arXiv soon, is largely a first step at saying, hey, all of this state construction business we've been doing in RL for healthcare, much less machine learning for healthcare, is probably not well informed and we can do better. It's trying to start a conversation about what the appropriate way is to represent a patient's condition given sequential observations, and how we can use that to do better reinforcement learning downstream. So you've developed a range of innovative solutions for using RL in these clinical settings. Can you say more about what the path would be to getting them out there and helping real people? Is this decades away, or could it be around the corner? What does that path look like? Yeah, in terms of reinforcement learning I think we've got quite a ways to go, but, and I'm probably not speaking perfectly precisely here, for a lot of the standard or more traditional machine learning approaches we have evidence of them working really well in some healthcare settings, and those have been in direct collaboration with clinicians and hospitals. There's Katherine Heller, who's now at Google but was at Duke; she and her students were able to develop a really great causal-inference-based solution to managing patients in hospital. And Bret Nestor, who is a student with me at the University of Toronto, has been working with St. Michael's Hospital here in Toronto on developing prediction methods in the general internal medicine department.
Can you predict whether or not a patient will need to be transferred to the intensive care unit? That's a very difficult process that takes a lot of coordination, and if they can know a day in advance that somebody is going to be coming to the ICU, they can make that transition better and maintain the patient's health. Another example has been Susan Murphy's work, where she's probably the only researcher that has had an actual clinical trial with machine learning approaches under the hood. Suchi Saria has been working on this too. But in each of these success stories of applying machine learning to healthcare in practice and production, it's always been in collaboration. Like I said earlier, we can't operate in a vacuum within healthcare, and having invested clinicians who understand the methodology is really important. There are some really great doctors and radiologists that we're affiliated with and collaborate with who are helping us always see the big picture of what they're doing, specifically Judy Gichoya, who's at Emory in Atlanta, Georgia, and Luke Oakden-Rayner, who's based out of Australia. They are really great critics of everything that's being done, making sure we're doing it in an appropriate fashion, and I have friends and colleagues at Harvard Medical School who constantly help us remember that there is established practice we're walking into. We're making strides in technology within healthcare, but they need to be motivated and informed by what can actually be done. So there are good reasons why the medical establishment probably doesn't follow the philosophy of move fast and break things. I think there are good reasons, and probably some bad reasons too, to be completely honest. The challenge with regulation boards is that they're humans, and these humans are also doctors who have their own individual biases and preferences. That's the reality we need to deal with, and it's upon us as researchers to convince them that we are being careful and that we're thinking through the challenges they care about too. This is kind of the excitement of being at the leading edge of any type of problem in any type of industry: you get to develop a lot of patience, but you also learn a lot at the same time, and I think that's why we're here in general, to develop and learn and become better versions of ourselves. When we work in interdisciplinary settings, we are exposed to more opportunities to improve ourselves. Taylor, do you have any comments on other things happening in the world of RL that you find really interesting or important these days, outside of your own work? There's a lot, and I feel like I've probably taken too much time describing the feelings and thoughts I have. One really quick thing I'm excited about is that I'm really grateful there has been added interest in applying reinforcement learning to the real world, with the challenges in modeling, architecture, and learning that come with that.
I wouldn't say we're in a renaissance of offline RL yet, but what we're seeing coming from various groups and labs throughout the world is a dedicated interest in making these things work in the real world. There are some success stories I've been made aware of, which I know are not public, where reinforcement learning has actually been used in the real world to great effect, and done so in a robust, stable, and safe manner. It's really exciting to envision, or at least hypothesize, how much more progress we're going to make in the near term. Taylor Killian, this has been an absolute pleasure, and thank you so much for giving us a window into the really important and fascinating work you've been doing. Thanks so much for joining us. I appreciate the invitation. I think what you do with this podcast is fascinating, in that you balance between young researchers and established experts. Speaking as a consumer of your podcast, and now as a guest, I really appreciate that balance, because I think it's important for young and new ideas to get exposure, and for people to get the experience of being out there. I'm really grateful for the opportunity. Notes and links for this episode are at talkrl.com. If you like this show, I need your support. You can help in a few ways. One, subscribe on your favorite podcast platform; subscriptions make a big difference. And if you don't think we deserve five stars, let us know on Twitter what we could do better.
[ { "end": 12, "start": 0, "text": " This is TalkAreal Podcast. All reinforcement learning, all the time." }, { "end": 20, "start": 12, "text": " Interviews at Brilliant folks across the world of RL. I'm your host, Rob and Chohan." }, { "end": 25, "start": 20, "text": " Taylor Killian is a PhD student at the University of Toronto and the Vector Institute." }, { "end": 31, "start": 25, "text": " He works as an intern at Google Brain and in his own words is an aspiring researcher slash scientist." }, { "end": 34, "start": 31, "text": " Taylor Killian, thanks so much for joining us today." }, { "end": 42, "start": 34, "text": " Hey, I'm really excited to have the opportunity to talk and share what I'm working on and also how I've gotten to where I am." }, { "end": 47, "start": 42, "text": " Super excited to chat with you. So how do you describe your research interests?" }, { "end": 55, "start": 47, "text": " It's a great question. It's been under constant evolution but in a directed fashion." }, { "end": 73, "start": 55, "text": " I had the opportunity quite early in my adult life to serve as a religious representative for my church where I spent a full two years talking with people about my beliefs and sharing what they mean to me." }, { "end": 79, "start": 73, "text": " I was always fascinated by how people received that information and what they did with it." }, { "end": 91, "start": 79, "text": " And some people would act what I felt like were counter to what they proposed to be their beliefs versus those who acted in line with their beliefs." }, { "end": 109, "start": 91, "text": " There's a lot of uncertainty in people's decision making and after finishing that time, I returned to my undergraduate institution and thought I wanted to do behavioral science but in an analytical way because I was very interested in math and I felt like I was good at it." }, { "end": 127, "start": 109, "text": " But probably fortunately for me, they didn't have a behavioral science track in the psychology department at my university. And so I was forced at the time to put sort of decision making on the back burner just as I progressed through my undergrad." }, { "end": 149, "start": 127, "text": " But after that and graduating and getting a job and being a computational scientist that that question kept coming back, how do we make decisions in opportunities or in situations where there is a high level of uncertainty or where we might have some prior context." }, { "end": 166, "start": 149, "text": " A lot of those questions in my own mind came from sort of a neuroscience or behavioral science background. But I'm quite analytical in thinking and given my limited exposure to the world, I thought that that had to be within applied math." }, { "end": 181, "start": 166, "text": " What is there within applied math to study that has to do with decision making and I was fortunate to get the opportunity to pursue a master's degree at Harvard while I was working." }, { "end": 196, "start": 181, "text": " And I approached a faculty member and said, hey, I'm really interested in applied math but about decision making and she says, oh, that sounds like a reinforcement learning. And I have some projects along those lines, are you interested in healthcare." }, { "end": 211, "start": 196, "text": " And my father is a doctor and I had sworn to never be a medical doctor in my life, just given the stress that I observed in his life and it didn't seem like that was the path for me." 
}, { "end": 240, "start": 211, "text": " I said, yeah, I'm interested in healthcare. I think that it's a valuable area to be pursuing research solutions to some of the major problems and that kind of became the introduction to where I am now as a researcher who is trying to develop stable and robust methods within reinforcement learning as motivated by or applied to healthcare problems." }, { "end": 269, "start": 240, "text": " And so all of that was just I think prepare a quick answer. I apologize for kind of editorializing a little bit but a quick answer is about what my particular research interests are is, you know, within the construct of decision making under uncertainty, are there ways that we can develop robust, reliable or generalizable approaches within reinforcement learning in." }, { "end": 282, "start": 269, "text": " In highly uncertain or partially observed settings awesome. So this is your episode. I encourage you to editorialize all you want. That's totally bonus for us as listeners. We want to know what you're thinking. This is great." }, { "end": 299, "start": 282, "text": " From looking at your Google scholar page, you you have some some work in physics related to fluid dynamics and then machine learning with radio sensors and some core ML stuff like capsule networks. So did you have like different chapters in the past or you focused on these different things and is it current direction the future for you." }, { "end": 325, "start": 299, "text": " I really struggle with the word chapters because that kind of there's a connotation that the doors closed in some of these circumstances, the doors definitely closed like I'm probably never going to return to working in experimental fluid dynamics and a lot of which I did during my undergraduate as a research in research assistant in the supplied in computational math program that I was designed for myself." }, { "end": 340, "start": 325, "text": " I had the fortune of working with Ted Truska who's now Utah State University who pioneered really fascinating imaging techniques for experimental fluid dynamics and he needed people to help develop computational models." }, { "end": 369, "start": 340, "text": " And I had the interest but also ability to join him and that prepared me in a way to take the job that I was ultimately offered at MIT Lincoln Laboratory, which is where I did more radar processing because that is the heritage of Lincoln laboratories that it was one of the early inventors and adopters of radio magnetic frequency for sensing purposes." }, { "end": 382, "start": 369, "text": " So I think that the idea of the process is that they have a heritage in the that comes from the MIT radiation laboratory that spun out shortly after World War II into what is now known as Lincoln Laboratory." }, { "end": 402, "start": 382, "text": " I was not fully aware of the type of work that I'd be getting myself into coming into electrical engineering predominant business, but it was great and I learned a lot and that stage of my career really taught me a lot about what I was interested in and what I wanted to do." }, { "end": 414, "start": 402, "text": " I was fortunate that I was given the opportunity as part of my employment to return to school to really flesh out what those research and professional interests are." }, { "end": 421, "start": 414, "text": " And after I finished my degree, I needed to return to work full time to fulfill my obligations to them." 
}, { "end": 435, "start": 421, "text": " And that's where we kind of were forced to do more low hanging fruit from a government perspective is that they didn't quite have this appetite for sequential decision making in the way that I was proposing." }, { "end": 443, "start": 435, "text": " And so we were looking at model robustness for vision type applications and that's where the capsule network work came from." }, { "end": 455, "start": 443, "text": " Okay, so looking at your health-related papers, some of the ones that I looked at, I get the sense that you really dig really deep into some diverse angles on this topic and machine learning for health." }, { "end": 466, "start": 455, "text": " Can you tell us how you think about your research roadmap going forward? Do you have a very specific path in mind or are you doing a lot of exploration?" }, { "end": 479, "start": 466, "text": " I think that the way that I at least have diagnosed my inability to get on a very specific path is that there's too many good ideas out there that need to be solved." }, { "end": 485, "start": 479, "text": " Or it's just like there's fascinating problems that I see." }, { "end": 502, "start": 485, "text": " Let me backtrack a little bit that my training from the earliest days of my aspirational scientific career have been in interdisciplinary settings where I come with a set of expertise or growing expertise." }, { "end": 508, "start": 502, "text": " And I'm working with experts from a different area and we come together to solve a common problem." }, { "end": 530, "start": 508, "text": " That's been a standard for my entire career from back when I was in undergrad through my employment today as that I find it unfortunate when people refuse to work in interdisciplinary fashions and I think naturally machine learning and AI in general is an intercessed plenary field." }, { "end": 549, "start": 530, "text": " I'm really grateful to be a part of it. That is probably not to say that I don't have specific directions in mind. A lot of the diversity in my research has come through just taking advantage of the opportunities right ahead of me or working on that good idea that just can't be put away." }, { "end": 553, "start": 549, "text": " More specifically within a healthcare context." }, { "end": 567, "start": 553, "text": " I mentioned earlier one of my core research interests is in generalization and robustness and currently machine learning models applied to healthcare problems are not robust and they are not reliable." }, { "end": 595, "start": 567, "text": " I'm much less generalizable and one of the core research focuses that I have throughout my PhD but I think it will it's a big enough problem that I think it's going to be a hopefully hallmark of my career is developing you suitable methods or model frameworks that allow for distributed processing but also model transfer between hospitals." }, { "end": 617, "start": 595, "text": " I have family that live in very rural settings where their access to healthcare technology is quite limited and my professional career has brought me to large urban settings where we have research hospitals and fantastic opportunities for our healthcare." }, { "end": 627, "start": 617, "text": " I would hate for any technology I developed to not be accessible and available to my family that live in less opportune areas." 
}, { "end": 642, "start": 627, "text": " That is one of the major directions that I'm going for in my career is can we develop things that can operate reliably outside of the environment that they were trained in." }, { "end": 664, "start": 642, "text": " Along the lines there's little little battles or fires that you have to put out along the way you have to develop a way to handle uncertainty you have to handle partials of ability or changing actions sets or changing feature sets depending on the particular application within healthcare." }, { "end": 679, "start": 664, "text": " You might get very diverse types of distribution shift between these environments and so along the way there's always going to be some really good idea in a collaborative fashion that I'm going to be working on." }, { "end": 694, "start": 679, "text": " But ultimately the direction is making reinforcement learning reliable and functional within a off policy or partially observed setting." }, { "end": 702, "start": 694, "text": " So from a technical standpoint that's probably where I sit within RL but I'm pretty open to lots of different things." }, { "end": 711, "start": 702, "text": " So from my point of view you seem to be able to you have this amazing ability to innovate with a real breadth of different methods kind of the opposite of the one trick pony." }, { "end": 724, "start": 711, "text": " So how do you balance learning new things versus applying what you already know? How did you come to this breadth and I'm talking both on the ML side and maybe the health side too." }, { "end": 738, "start": 724, "text": " Yeah, you know the first it's very generous for you to say that I've been innovative. I think it's more desperate than anything is that you know you come to a problem and you have an idea of how it should work." }, { "end": 749, "start": 738, "text": " And since I've been relatively new to this field like I didn't know anything about machine learning until I started by master's degree." }, { "end": 760, "start": 749, "text": " And so that's thinking back now four years ago and I had very rudimental skills in programming at that time." }, { "end": 769, "start": 760, "text": " And so I've approached research in a sponge like manner, you're sort of just trying to draw insight from lots of different areas." }, { "end": 782, "start": 769, "text": " And you know I think that in order to solve some of these more challenging problems we need to look at the ways that things have worked other in other places." }, { "end": 795, "start": 782, "text": " From the health care perspective and I think that this is important for anybody who's trying to apply any machine learning much less reinforcement learning to the real world is that you have to talk with experts." }, { "end": 800, "start": 795, "text": " You have to understand what they've done and what the relevant problems are." }, { "end": 814, "start": 800, "text": " It's an unfortunate crutch of ours in the research community to sort of play pay lip service to an application to motivate our work and give it some meaning." }, { "end": 829, "start": 814, "text": " And I do appreciate the efforts by my colleagues within the reinforcement learning community that when they talk about off policy reinforcement learning in particular and they they motivate it by oh this could be useful for health care." 
}, { "end": 838, "start": 829, "text": " That's good and that's important and we need to make strides in some of these important technical problems and the challenges that we face with them." }, { "end": 860, "start": 838, "text": " But if we're doing that in a vacuum and in isolation without knowledge of what the actual practices of the doctors who would be using the technology then we're wasting our time and we're wasting their time and we're developing solutions and putting hype around them that if adopted would potentially be harmful and quite dangerous." }, { "end": 871, "start": 860, "text": " And I think that I think it's important to recognize our own limitations, but then also pick up the expertise and the best practices of those who we want to work with." }, { "end": 889, "start": 871, "text": " And I think by synthesizing the best practices of various fields, you know, I struggle with imposter syndrome like anybody and it's probably made worse by the fact that I try to do this synthesis is that I don't feel like I'm getting to be good at it." }, { "end": 918, "start": 889, "text": " Any one thing, but rather you know, and this is in my mind by doubts telling me that I'm becoming mediocre at a lot of things are at least knowledgeable about what might be out there without having any dedicated experience, but that's that's partially why I chose to get a PhD is to be able to slow down a little bit and do an in depth focus on a particular area of research so that I could become proficient and good at that area of research and then expand as I move forward." }, { "end": 938, "start": 918, "text": " So can we talk about the health and clinical setting in a little more depth for people who may have you may maybe understand RL but have focused on Atari or opening I Jim can you help us understand and appreciate what is really different here about RL in health in general and in a clinical setting." }, { "end": 967, "start": 938, "text": " Yeah, I've been asked this question a lot just by colleagues and friends and I think it's really important to preface the comments that my comments that I'll give in response to the question by the motivation I have for doing health care research as a reinforcement learning researcher is that the majority of open problems within reinforcement learning such as" }, { "end": 996, "start": 967, "text": " sparse rewards or credit assignment or the exploration exploitation trade off off policy RL and the list kind of goes on and on in terms of these big open challenges or problems within the reinforcement and community all of those are present in states in the health care problems and health care is characterized at least in the way that I observe it as an inherently sequential process." }, { "end": 1024, "start": 996, "text": " Where an expert decision maker with their best understanding of the setting of the environment, the patient of the multiple patients that they're seeing with their various come confounding conditions and symptoms they take that information and make the best decision they can and then they observe the response and then they adjust adopt a try something again and they do this." 
}, { "end": 1051, "start": 1024, "text": " Hopefully toward the patient improving and leaving the hospital or having a stable and healthy life if it's in a less acute setting or in and and sort of the unfortunate circumstances where you know the the clinician is unable to provide the care adequate to have the patient survive and in some cases it's unavoidable right is that." }, { "end": 1078, "start": 1051, "text": " Patients and individuals help decays to a point where there's not much they can be done and the standard of care at that point changes quite drastically and it's remarkable to see the types of approaches that clinicians take when they hit these the sort of dead end type situations within health care." }, { "end": 1107, "start": 1078, "text": " But in terms of more directly answering your question, how does it differ from your traditional or you know simulator based research and the major differences that we have a fixed data set we're unable to explore and we're unable to do online evaluation of policies that we learn from this fixed data and so that opens up a big can of worms in terms of off policy evaluation and off policy." }, { "end": 1127, "start": 1107, "text": " Reinforcement learning in general and this is a largely unsolved area of research and there's some fantastic efforts that have been going on from a variety of groups throughout the world looking at off policy evaluation and ways to improve it." }, { "end": 1139, "start": 1127, "text": " There's particularly like the highlight like the work that's been coming out from finale doschivalizes group with Emma Brunskill as a collaborator as well as Nathan callus from Cornell Tech." }, { "end": 1161, "start": 1139, "text": " Like these two groups among others and their significant effort from David Sontag for example and other I can list a lot of names who are looking at off policy evaluation and making it stable so that we know in these settings where we cannot explore we cannot evaluate new policies online." }, { "end": 1186, "start": 1161, "text": " How reliable are the outcomes that we are suggesting that we can achieve with reinforcement learning and that's under traditional sense of you we are learning policies to suggest actions there's an alternative approach however that I've been investigating in collaboration with Maddie Fatemi from Microsoft research based in Montreal." }, { "end": 1215, "start": 1186, "text": " Looking at it in sort of a inverse direction is using this sequential framework where we can distill long term outcomes to our current decision can we choose which actions to avoid instead and we have some preliminary work that is under review along these lines right now and it's sort of making the statement of this is how RL and health care is different is that we can't take this traditional" }, { "end": 1235, "start": 1215, "text": " sense because we can't explore we we can't experiment with actions but we can use the same framework to describe what's going on and to maybe identify optimal behavior from the observed experts you know the clinicians who have helped generate the data that we use." }, { "end": 1249, "start": 1235, "text": " And so I feel like in kind of the laboring the point a little bit here but the major difference is just in data access as well as being able to test and evaluate the solutions that you find." 
}, { "end": 1267, "start": 1249, "text": " So in Atari or open a gem reward is really simple but here how do yeah how do we think of reward here in the health setting like are we trying to get the best expected like average health outcome across the population or should we be trying to avoid the worst." }, { "end": 1294, "start": 1267, "text": " That opens up a really interesting can of worms when you talk about expected health outcomes because there's a plethora of work within the ML fairness community that shows the expected outcomes is incredibly biased and unfair towards marginalized or minority communities and this is particularly challenging in health care so there's some work a couple years ago that Irene Chan published with." }, { "end": 1321, "start": 1294, "text": " Data Tantag out of MIT where she looked at you where the discrimination within a health care decision framework would be coming from and she looked at a cross section of different groups and people's within the mimic data set based on this expected positive outcome and found that women and." }, { "end": 1350, "start": 1321, "text": " Minorities the racial minorities were provided with much worse expected outcomes because they are not adequately counted for in the training and so it's difficult to say that a reward in health care from an RL perspective could be sort of this mean or median type performance and this is where I think the holy grail that we're all striving for within the machine learning for health care community is personalized medicine." }, { "end": 1379, "start": 1350, "text": " And looking at an individual by individual basis you can we provide advocate care and treatment selection that is tailored to a particular situation or particular patient condition and the motivation or at least how that informs the design of rewards that we approach is you know it's it's better to use hard and fast binary rewards you for for a hospital acute treatment." }, { "end": 1384, "start": 1379, "text": " A hospital acute care setting that's a pretty easy thing to define." }, { "end": 1408, "start": 1384, "text": " You whether somebody survives in his discharge shows allowed to leave the hospital or they unfortunately expire and succumb to their symptoms and you know so that binary plus one minus one reward is pretty easy to define but the other types of reward definition that you might find you if you're looking at a long term care scenario or you somebody is trying to manage." }, { "end": 1437, "start": 1408, "text": " Diabetes for example you know that reward design is going to be largely informed by the particular problem that they're working on so like back to this diabetes example you might want to maximize the time that the patient is in a healthy glucose range or minimize the times that they go hypotenic hypotonic in their in their glucose levels where they're they're having you know too much blood sugar which is quite dangerous right and so you" }, { "end": 1466, "start": 1437, "text": " will design your rewards based on the particular problem in a good example of somebody who did this and it has done significant work looking at defining rewards is nirangini prasad who just recently graduated from Princeton in her work with her advisor Barbara Engelhart is that one paper that I have in mind is nirangini looked at removing a patient from a ventilator something that we're all very aware of right now in this age of the coronavirus pandemic." 
}, { "end": 1490, "start": 1466, "text": " Is that you win is the appropriate time to remove somebody from a ventilator and she designed a very specific reward function that helps describe the clinical problem of removing somebody too early or too late from a ventilator and you know that that she has some follow up work that she recently published the summer" }, { "end": 1508, "start": 1490, "text": " looking at you what is an admissible reward in a health care problem and does it cover the right physiological characteristics of the problem is it attainable from a clinical practice perspective etc." }, { "end": 1521, "start": 1508, "text": " And so it I think the short answers it's nuanced you know the reward definition within health care can be as simple as binary outcome versus some continuous measure of a long term process." }, { "end": 1537, "start": 1521, "text": " Okay cool and then but just a little bit more on that I was I guess what's not clear to me is if you have a distribution of outcomes like like let's say in the long term care setting you know your policy could be shifting the mean of that distribution" }, { "end": 1546, "start": 1537, "text": " but it also could be shifting the changing the variants so different policies might have different types of tales." }, { "end": 1564, "start": 1546, "text": " So I just wonder if there's if that's something that that you think about in terms of do we do him like a maximum thing like trying to make the worst the best possible worst case outcome for the for the population versus the more expected the more average." }, { "end": 1573, "start": 1564, "text": " I get to your point about fairness of different the different sub groups and I think that question applies to them to no matter how you split it." }, { "end": 1584, "start": 1573, "text": " Yeah you know I have I'm not like super aware of this approach being undertaken within a health care context yet." }, { "end": 1600, "start": 1584, "text": " I know that there is some work within the causal causal inference literature applied to health care with a machine learning focus that have been doing this you know so like some of the work from Susan Murphy has been thinking about this." }, { "end": 1621, "start": 1600, "text": " But I would also point to some interesting work that's come out this summer from Cirque 11's group that is doing this maximum approach to q learning where the their algorithm is is titled conservative q learning which is very creative and I appreciate that." }, { "end": 1650, "start": 1621, "text": " But then there's another there's another there's another paper that just just came out like a couple weeks ago from K Marga semi pour is a friend of mine who's been working as a student researcher at Google with the shengu and its e-mac is the name of their approach where they take this expected positive reward in this off policy sense and then they marginalize again some of the subgroup type challenges and there." }, { "end": 1669, "start": 1650, "text": " Their setting is will both surge 11's paper and this e-mac paper is looking at robotics specifically but some of the characteristics could potentially be applied to some of these long term." }, { "end": 1688, "start": 1669, "text": " You know continuous type problems within a health care setting for sure but they they're really hasn't been a whole lot that I'm aware of that has been explicitly looking at domain shift within the expected rewards as a response to." 
}, { "end": 1708, "start": 1688, "text": " Optimize in policy right thanks those are great tips. Okay so so what about model based RL in the setting my impression is that a lot of model based RL is looking at domains that are really quite determined to sick and either they have no noise or maybe they have very simple noise." }, { "end": 1724, "start": 1708, "text": " So how do models differ in in these health settings like are they still useful maybe they're useful in different ways like is it possible to do planning with models in settings like this in noisy environments like this." }, { "end": 1737, "start": 1724, "text": " I think that that's an open research question that's yet to be determined is some of my prior work has been within model based RL within health care where we're trying to." }, { "end": 1760, "start": 1737, "text": " You adapt we're going to talk about this later so I won't get too far into it but we try to adapt the model based on local features of the task or the environment. But in general I think that there's a danger in thinking that model based RL is the solution to I you know I have." }, { "end": 1779, "start": 1760, "text": " Continually found myself thinking this and I think that it has its its use cases for sure but I like you pointed out a lot of those use cases are in more simplistic settings where you have deterministic transition behavior." }, { "end": 1798, "start": 1779, "text": " You know very low noise environments and extrapolating to a health care scenario you what what is your model of right like how well calibrated can a model be of the human body and you know so little about it." }, { "end": 1826, "start": 1798, "text": " Even with the centuries of medical research that have produced a lot of great understanding insight about you medical practice but then also just our physiology there's still a lot we don't know and you know in model based RL the performance of your policy is largely driven by the accuracy of your model or at least how well it can describe the environment around you and there has been." }, { "end": 1834, "start": 1826, "text": " So papers in the recent past you under the imitation learning framework." }, { "end": 1854, "start": 1834, "text": " That look at you know what happens if you have a sub optimal model or sub optimal demonstrator but in like when you add additional layers of complexity such as non determinism in the transition statistics or you going full off policy you that a lot of those solutions don't really work that well." }, { "end": 1861, "start": 1854, "text": " So let's move on to hidden parameter mdp's this topic is related to your master's thesis is that right." }, { "end": 1883, "start": 1861, "text": " Yeah yeah so the hip mdp was the core foundation of my master's thesis you know I was fortunate to be able to repurpose the paper we published on it as my master's thesis which is with additional explanatory introduction chapters about you know Gaussian processes." }, { "end": 1912, "start": 1883, "text": " And Bayesian neural networks but yeah the hip mdp it's it's something that I really enjoy talking about because it one means a lot to me is the first like real research project that I was able to start and finish as a machine learning researcher and it's been I mean the fact that we got published it makes me feel at least that it was successful and the other people have been building on top of it is." 
}, { "end": 1918, "start": 1912, "text": " Another way to I guess in Dean in my eyes that it's been successful." }, { "end": 1930, "start": 1918, "text": " So what is a hip mdp and and why is it a useful and interesting you George Cunodero as soon as our collaborator on this project and one of the originators of the hip mdp with finale." }, { "end": 1943, "start": 1930, "text": " Do she was my advisor Harvard yeah he might not like me saying this but I view the hip mdp is an abstraction of the a palm D. P. where given a family of related." }, { "end": 1959, "start": 1943, "text": " M D P's or tasks where the major differentiation is the perturbation and how the transition dynamics are observed the hip mdp parameterizes that that variation in the transition dynamics because if you can." }, { "end": 1979, "start": 1959, "text": " Can accurately or correctly prescribe what the individual mdp or individual tasks transition dynamics are you should be able to optimally solve that problem given prior observed behavior from other tasks within that same family." }, { "end": 1996, "start": 1979, "text": " And so without as a illustrative example if you've learned to write with a pen you can most likely write with any other pen that you pick up no matter how heavy it is no matter what the tip is like you as long as it has ink you will likely be able to pick it up and write it." }, { "end": 2022, "start": 1996, "text": " And that's because our body and our mind have been trained among like to handle these types of different variations where a reinforcement learning agent hasn't necessarily right and it's not necessarily robust to slight perturbations in the weight of an object or how the mechanics of a moving arm might change if the tolerance is on a motor or off by a little bit." }, { "end": 2051, "start": 2022, "text": " And so what we proposed or at least what was originally proposed in their original paper that was put on archive in 2013 and finally published in 2016 was that if you can filter or estimate among all of the prior observations of this family of tasks and use them to find something that's similar to what you're observing now." }, { "end": 2060, "start": 2051, "text": " And parameterize it that way that you should be able to accelerate your learning in the current task of the current framework and so." }, { "end": 2080, "start": 2060, "text": " By work during my master's degree was trying to make that approach scalable and more robust because as I said they they used a filtering procedure that was provided at least the prior over that filtering procedure was seeded through an Indian buffet process which is." }, { "end": 2106, "start": 2080, "text": " Really difficult to scale and at least in the setting that they were using to establish basis functions over the transition dynamics that they're observing and so one of the insights that finale in George came up with and proposed to me when I was starting the project was you can we take these filtering weights that are being used to linearly combine these learned." }, { "end": 2115, "start": 2106, "text": " Your basis functions of our transition statistics can we take those weights and use them as input to a parametric model." }, { "end": 2123, "start": 2115, "text": " Or in the original setting was can we use them as input to a Gaussian process because they are still interested in sort of these." 
}, { "end": 2138, "start": 2123, "text": " And non parametric statistical basis functions at the time we found that GPs aren't a really great setting or at least at the understanding that we had of them at that time you know this is late 2016." }, { "end": 2159, "start": 2138, "text": " And that it was better to maybe move into you know probabilistic framework that we still wanted to be able to do inference over these estimated you know hidden parameters that would connect the family of tasks together but still be somewhat scalable to higher dimension problems fell so with more data." }, { "end": 2178, "start": 2159, "text": " And so that's where we replaced the Gaussian process with the Bayesian neural network to help be a stand in transition model that we could then optimize based on the individual tasks that we're observing by function of these hidden parameters." }, { "end": 2192, "start": 2178, "text": " And so I feel like I've been meandering a little bit so they hit the MDP in summary is a method by which we can describe small perturbations and observed dynamics between related tasks." }, { "end": 2220, "start": 2192, "text": " And so from a health care perspective this is a task would be you treating a patient from a cohort that all have you know AIDS for example you know that was a simulated problem that we addressed in our paper is that when you observe some new patient you what about their physiology can you learn from their observed response to the medication that you give." }, { "end": 2239, "start": 2220, "text": " And can that be used then to help inform the type of medication that you want to give them in the future and this was done at least within this constructed hidden parameter markup decision processes by estimating and optimizing the hidden parameters for that individual patient." }, { "end": 2257, "start": 2239, "text": " And then after you we solve the problem for that one patient we would take the observed statistics and these hidden parameters and keep them along with our updated transition model the Bayesian neural network to be prepared for the next patient that would come in." }, { "end": 2276, "start": 2257, "text": " And then the hope here is that if you find somebody who's similar to what you've observed in the past it will be easier to update and optimize their hidden parameters and then get a quicker or more efficient and effective policy downstream and so." }, { "end": 2286, "start": 2276, "text": " So this sounds a bit related to contextual MDP but that's a slightly different concept right could you help us compare contrast those to." }, { "end": 2305, "start": 2286, "text": " Yeah it definitely does sit within the same idea you so I view contextual MDP as a specialized use case of what you think to current research has been termed a generalized hidden parameter markup decision process is that the contextual MDP." }, { "end": 2331, "start": 2305, "text": " Has largely been used in multi on band it settings where the reward is fixed per task and that specific context of the reward being fixed or the particular user being different is known to the learning algorithm and where the hip MDP differs is that it doesn't assume that knowledge it just." }, { "end": 2342, "start": 2331, "text": " Observes that there's been a change in and we assume in the construction of the MDP that you will know when you are approached with a new task and it's." 
}, { "end": 2353, "start": 2342, "text": " Upon the algorithms job to figure out and learn a approximation for that context and so this generalize in parameter markup decision paper that I." }, { "end": 2364, "start": 2353, "text": " The reference does from Christian Perez and the offense Carolets us who were at Uber a I at the time and it was presented at triply I this past winter." }, { "end": 2369, "start": 2364, "text": " So can the can observables give us hints about the hidden parameters like." }, { "end": 2378, "start": 2369, "text": " The demographics of the patient maybe or are we assuming that we don't get any hints about these hidden parameters except for what happens with the transitions." }, { "end": 2396, "start": 2378, "text": " Yeah so I think I think a practice if this were scaled and improved to be able to be used in a real setting is that demographics would absolutely be a part of that contextual process of learning the underlying or hidden parameters that you can't observe." }, { "end": 2401, "start": 2396, "text": " You know the demographics you such as race or gender." }, { "end": 2404, "start": 2401, "text": " You know height weight age." }, { "end": 2426, "start": 2404, "text": " You know et cetera et cetera you can go down this list those those things do help give some understanding or context but they're still broad variance within these demographic groups and so I would view demographic information as a head start of like learning some actual physiological context." }, { "end": 2435, "start": 2426, "text": " But ultimately it's just has to be about the data and it has to be about the observed transitions and how they respond to medication and." }, { "end": 2449, "start": 2435, "text": " You know in an ideal setting that's how healthcare works is that you know doctors come with their training and their understanding of the medical literature as well as just the practice of medicine but." }, { "end": 2478, "start": 2449, "text": " And they use that to inform their initial diagnoses and treatments but they adjust and they adapt or at least the best ones do and they adapt in a hopefully compassionate way and I think that's the way that we're trying to develop machine learning methods for is to have this built in at least conceptual understanding of a problem and develop a solution that adapts and you know this is might be over thinking it but in a compassionate way in a fair way." }, { "end": 2484, "start": 2478, "text": " In a way that is equitable across the cross section of the demographic." }, { "end": 2504, "start": 2484, "text": " So you talked a little bit about how you improved the hip MDP solution and maybe the setting with your first paper I wonder if you can could you walk us through kind of the set of MDP papers like what the different types of progress that was made in terms of both the setting and and the solution." }, { "end": 2514, "start": 2504, "text": " Yeah I'm happy to do that so the original paper by finale and George just set up the problem and introduced the framework." }, { "end": 2543, "start": 2514, "text": " So they they're early work did bear some similarities to a few other prior pieces of literature that I'm kind of spacing on right now but it's very slow like what they did was very slow and it couldn't scale to problems of higher than four dimensions and I kind of chocolate when I say that because we in our updated paper didn't look at anything greater than six dimensions but you know we added to." 
}, { "end": 2568, "start": 2543, "text": " To factors of variation in the state space but what we what we did in that my first paper looking at in primary market position processes was you develop a scalable or at least a functional approach to learning these hidden parameters and we did that by virtue of inference through a patient neural network." }, { "end": 2597, "start": 2568, "text": " What we found or at least there's pretty apparent to us as we're doing that research is that it was still computationally inefficient and really expensive because we would need to simulate thousands of episodes using that model in order to infer what those hidden parameters were and you worked for what we were trying to do but there's no way that that approach would work in a real setting and" }, { "end": 2626, "start": 2597, "text": " after I finished my master's degree I had to go back to work full time so I didn't get a chance to really participate in the next step of this but luckily finale had a brand new PhD student start right at the same time I graduated named Jiu Yao and Jiu was fascinated by the idea initially of the hip MVP and was interested in making it more computationally feasible." }, { "end": 2655, "start": 2626, "text": " Without needing to run you know thousands of simulated episodes of an environment in order to estimate these hidden parameters and her idea was to distill the optimal policies from prior tasks into a generalized policy class in the same way that we were distilling all the transition functions into this learned patient neural network that would be parameterized by these hidden parameters." }, { "end": 2684, "start": 2655, "text": " Which would give you the change of behavior she says okay mean learn those hidden parameters using our transition model but we don't need to rely on that transition model being like absolutely correct we just need to be good enough to get us a stable set of hidden parameters and then use those hidden parameters to parameterize the policy class and then get the differentiated behavior in this you know general policy based on those hidden parameters." }, { "end": 2713, "start": 2684, "text": " And unfortunately it is great work and it worked really well but we have yet been able to convince reviewers that we did a good enough job but we did put the paper we didn't put the paper in archive yet we still have some things in the works to hopefully improve it looking at more of a theoretical bent and finalities had some undergraduate mathematics students looking at more of the theory behind these hidden parameter markup." }, { "end": 2737, "start": 2713, "text": " Decision processes and specifically with this direct policy transfer framework but we do have like I have a version of the paper that we presented at an ICML workshop two years ago on my website and it has been cited by other researchers and so at least it's making some contribution in that fashion." }, { "end": 2746, "start": 2737, "text": " This seems like it's going to be huge basically for real world RL I can't imagine it just being limited to healthcare setting it seems like it would have touched everything." }, { "end": 2766, "start": 2746, "text": " Yeah I have similar thoughts about it I think that I think that this approach to adaptation and generalization in RL is really appealing we see that with the metal learning community within RL that have been doing fantastic work looking at ways to adapt." 
}, { "end": 2794, "start": 2766, "text": " And so I think that the way we look at it is looking at ways to do adaptation online you know as you are learning in a new task and you adapt a policy class to work optimally however I do stress at least in my own mind thinking that you metal learning and even my own work is fitting to single distribution is that it's really difficult to get any of these things to work outside of the work." }, { "end": 2810, "start": 2794, "text": " Outside of the observed task class that you have in your training set is that there has been some efforts in the metal RL community looking at out of distribution adaptation but I haven't found any of the papers to be overly convincing." }, { "end": 2839, "start": 2810, "text": " One additional limitation of our work is that we only looked at the transition of the perturbation of the transition dynamics there is additional factors of variation in RL problem that you can account for and this was the major focus of the generalized human parameter process sorry the generalized HIPMDP paper from Christian Perez and his collaborators was that they factored the" }, { "end": 2851, "start": 2839, "text": " hidden parameters to account for different forms of variation so variation in the reward structure variation the transition structure and I think they had another factor variation but it's escaping me right now." }, { "end": 2868, "start": 2851, "text": " And that has also been a feature of some additional follow on work one one particular paper that I have yet to read but I've had I've been in a lot of discussions with Amy Zhang about who's the lead author on is that she took the HIPMDP framework along with the block MDP paper." }, { "end": 2896, "start": 2868, "text": " framework which is something that she has inherited from John Langford and has been looking looking out on her own for quick adaptation but then also synthesization of policies and you know that they're addressing different factors of variation that you might observe among a family of tasks so there there's a lot of really exciting and fun work in the days to come of looking at" }, { "end": 2923, "start": 2896, "text": " outside of a meta RL perspective because I'm still not overly convinced that it's the right approach because we're using a lot of computation to fit to the same distribution there but I think that the insights that we're gaining in that line of research is really informing creative modeling strategies and approaches within a more traditional RL framework." }, { "end": 2928, "start": 2923, "text": " So it sounds like this area has a lot of potential and it's not fully solved yet." }, { "end": 2935, "start": 2928, "text": " Yeah that's right there's a lot that can be done and I'm excited that there are a lot of researchers looking at it." }, { "end": 2945, "start": 2935, "text": " I shouldn't say a lot there's there there have been efforts in the in the near past that indicate that people are interested in this type of problem." }, { "end": 2958, "start": 2945, "text": " I'm going to move to another recent paper viewers counterfactually guided policy transfer in clinical settings what's going on this paper you're tackling domain shift in RL using cause of models is that the idea." }, { "end": 2970, "start": 2958, "text": " I that's the major technical direction of the paper I think it's a little easier to stomach by describing the motivation and as I referenced earlier." 
}, { "end": 2998, "start": 2970, "text": " There is a lot to do in order to make models within machine learning and healthcare transferable and generalizable between medical institutions and one of the major challenges of this model transfer is that practices vary between hospitals the type of measurements that they take the type of instrumentation that they have at these different hospitals confound the transfer problem." }, { "end": 3020, "start": 2998, "text": " But the major confounding factor that limits the ability to transfer models between hospitals is that you the patient just the patient population is completely different and it can vary quite widely with various conditions are underlying" }, { "end": 3049, "start": 3020, "text": " symptoms or at least syndromes that that patient population has you can think for example looking at the overall structure of what a transfer learning problem is is that you have some source setting or source data set that you use to train your model from and you want to apply that somewhere else with minimal adaptation or some adaptation or no adaptation depending on how confident you are." }, { "end": 3078, "start": 3049, "text": " In the healthcare setting so that large source environment could feasibly be a research hospital in a large urban environment where you do have some population diversity but that patient cohort that you might have in your data set will be pretty different from a regional diabetes clinic for example where you might have how" }, { "end": 3104, "start": 3078, "text": " you might have had a minority of patients within your source setting that had diabetes and you know their particular practice and care taken to accommodate them but when you go to a diabetes clinic that's the majority of the population all of a sudden and you know this this patient population might also have skewed to be the older there might be other demographic differences" }, { "end": 3133, "start": 3104, "text": " and without with blindly applying a model from a large research hospital to a regional clinic you're going to miss a lot of that variation and as I said earlier potentially do a lot of harm and be overconfident in the policy or the treatment strategy learned from the major hospital when applying it to the smaller setting and so that that was the primary motivation for our work." }, { "end": 3162, "start": 3133, "text": " Looking at a way to address this this form of domain shift within the underlying data distribution and we did this with a simulated cohort of you again simulated patients that had sepsis and one of the factors of variation that you could set in defining these you simulated patient cohorts is the percentage or the proportion of that population as diabetic." }, { "end": 3191, "start": 3162, "text": " And we used the simulator that was developed by David Sontag's group out of MIT and was featured in a paper that Michael Overs and he published an ICNL last summer and so we took their simulator and sort of built sort of a wrapper around it that allowed us to vary the proportion of patients within it as being more or less diabetic than the source environment and then some of that." }, { "end": 3207, "start": 3191, "text": " And then studied algorithmic solutions or improvements to some off policy or settings with counterfactual inference to address this type of domain shift just in the patient population itself." 
}, { "end": 3226, "start": 3207, "text": " And it seems like we're very so early on in combining the causal models and the reinforcement learning side and I think there's still some people still don't even think that that's important to do. But I think it's it's really exciting to see you with one of the early papers in this in the in combining these two fields." }, { "end": 3229, "start": 3226, "text": " Do you see this combination being a big deal going forward?" }, { "end": 3239, "start": 3229, "text": " Yeah, I think that there's a really good history of people with specifically within the healthcare context of machine learning research that have been looking at causal inference." }, { "end": 3249, "start": 3239, "text": " You know, professors that come to mind or see you're suci sari susan athe, you know, Susan Murphy being one." }, { "end": 3262, "start": 3249, "text": " And the list kind of goes on and on David's on tag has been looking at this Elias Baron, Bollum has been looking specifically at the fundamental theoretical underpinnings of reinforcement learning and causal inference and the connection between them." }, { "end": 3278, "start": 3262, "text": " But you know, I I believe quite ardently actually that any future solution that we have for generalization within RL needs to account for causal structure, especially in an off policy or offline where you have a lot of information." }, { "end": 3304, "start": 3278, "text": " And so we're offline where you have a fixed data set is that we need to learn a lot from our colleagues in the statistics department and public health and epidemiology world about how to do good causal inference and, you know, I think Judea Pearl, burn on show call of, you know, at all, have been doing a really good job." }, { "end": 3315, "start": 3304, "text": " That was the name that I was trying to say, you know, these three researchers among all of the ones I've named have been doing a really great job of introducing some of these concepts within machine learning." }, { "end": 3323, "start": 3315, "text": " And now a lot of the effort is you're drawing the coherent connections for usability." }, { "end": 3332, "start": 3323, "text": " And you know, is it feasible to make the assumptions that we do make in order to make these things work?" }, { "end": 3347, "start": 3332, "text": " You know, people have their bones to pick with the way that machine learning researchers use causal language and causal frameworks and I think they're valid in raising those concerns." }, { "end": 3353, "start": 3347, "text": " And it's upon us as a community who want to use these tools to listen and to learn." }, { "end": 3374, "start": 3353, "text": " And that's been something that Sean Lee, Joshi and I, you know, my primary collaborator on this paper and then as well as my advisor, magic is me, we've been listening and we've been talking with experts in this field to try to get a better sense of what we're doing right and what we can be doing better." }, { "end": 3388, "start": 3374, "text": " And I think that it's an exciting future, assuming that we can be successful in scaling the approaches that we present in this paper that we're highlighting right now to more realistic scenarios." }, { "end": 3402, "start": 3388, "text": " Right now, most of the causal inference and reinforcement learning literature that's at least the bridging between these two areas has been." 
}, { "end": 3419, "start": 3402, "text": " And I think that the only only work that's been done that has looked at slightly continuous settings has been with Susan Murphy's work developing a monitor." }, { "end": 3433, "start": 3419, "text": " And providing mobile interruptions to somebody's day, you're wearing like a smartwatch, for example, like, oh, hey, you should get up and walk or, oh, hey, you know, your heart rates too high slow down." }, { "end": 3452, "start": 3433, "text": " And her project in the sort of the fully funded study that she's been on is known as heart steps there, they're probably one of the only projects or at least sets of research out there that's been looking outside of the more controllable discrete settings." }, { "end": 3465, "start": 3452, "text": " And I think that there's a lot of development that needs to be done both in the statistics side, but then also in the modeling side from a machine learning perspective about how to expand and adapt to more continuous and realistic settings." }, { "end": 3471, "start": 3465, "text": " And that's actually some work that I'm quite excited to get started on you later this year." }, { "end": 3483, "start": 3471, "text": " And it sounds like I have a lot of background. We need to do. There's a lot that I don't understand yet, and I'm trying to learn from my collaborators who know no far more than I do." }, { "end": 3489, "start": 3483, "text": " I want to just add, I love hard steps. I think Susan Murphy's work is so fascinating and I learned a lot from reading about that." }, { "end": 3507, "start": 3489, "text": " I want to move on to talk more about mimic mimic three and sepsis. Okay. So mimic three and the sepsis problem seems to come up a lot in ML for health. I think you made a comment that it's kind of like the amnest for for for ML for health." }, { "end": 3515, "start": 3507, "text": " And so I understand this is ICU data from a teaching hospital. Is that right? Can you tell us more about the problem and the data set?" }, { "end": 3525, "start": 3515, "text": " Yeah, I mean, so the data is collected from Beth Israel Deaconess Medical Center in Boston, which is part of the Harvard Medical School." }, { "end": 3544, "start": 3525, "text": " You know, the system of teaching hospitals and research hospitals. So Leo Sally and his collaborators at MIT thought, you know, we have this really rich data set of electronic medical records that we can use to inform better decision making." }, { "end": 3558, "start": 3544, "text": " But then also improve medical practice and you also Leo is a practicing acute care doctor and saw within his own workplace, you know, in the intensive care unit." }, { "end": 3572, "start": 3558, "text": " The potential benefits for developing this type of data are data set to be used by the community and they've gone through substantial efforts to privatize it to clean it and to present it to be used." }, { "end": 3580, "start": 3572, "text": " And by anybody as long as they go through adequate ethics training and they follow the protocols." }, { "end": 3589, "start": 3580, "text": " The defined by the consortium that has hosted and also prepared the data set for use." }, { "end": 3604, "start": 3589, "text": " They actually have just finished a new version of mimic so version four and it's being rolled out this summer to include and in a much larger set of patients." 
}, { "end": 3612, "start": 3604, "text": " Another improvement to the data is they now have chest x-rays fully integrated for all the patients that have them." }, { "end": 3619, "start": 3612, "text": " They have improved or increased availability of the notes from doctors and clinical teams." }, { "end": 3634, "start": 3619, "text": " And another thing that some of my colleagues are quite excited about is that they also are including pharmacology and medication reports, which has been something that they haven't had historically within the mimic data set." }, { "end": 3647, "start": 3634, "text": " And why mimic and why sepsis or at least why that has become sort of this focus is that sepsis is a really poorly understood problem." }, { "end": 3660, "start": 3647, "text": " So it has a lot of potential gains, but it also introduces a lot of difficulty where, you know, we fall into the trap as machine learning researchers saying we've solved it, we've done it, but a doctor looks at the solutions as well." }, { "end": 3677, "start": 3660, "text": " So we knew all that already. It's just a harder problem than you thought. Why it's been used so widely within the machine learning for healthcare community is one the availability of mimic, but it also is one of the conditions within the hospital that gets really dedicated monitoring." }, { "end": 3693, "start": 3677, "text": " And so there is a richness to the data that's there as well as the like consistent measurements. And so you don't have as much missing this or at least unobserved information about a patient such as their heart rate or their respiratory rate." }, { "end": 3709, "start": 3693, "text": " You know, their blood levels, the list goes on and on as you consider the vitals because these patients are at the most danger of dying within the hospital. In fact, sepsis is one of the leading causes of in hospital death." }, { "end": 3725, "start": 3709, "text": " And sepsis itself isn't a, you know, diagnosable condition, but it is a umbrella term for large scale systemic shutdown of bodies organ in response to infection and pain." }, { "end": 3740, "start": 3725, "text": " And so sepsis can be detected by a variety of measures, one of which being, being rising, lactic acid levels within the blood, which is a physiological response that our bodies have to infection and pain." }, { "end": 3755, "start": 3740, "text": " And it can be manifest in multiple different ways. And so if you have access to the mimic data set and the notes, you look through the patient cohort who have sepsis." }, { "end": 3770, "start": 3755, "text": " Unfortunately, or sadly, those that succumb to their sepsis, there's a variety of your scenarios or conditions that may lead to a patient becoming septic." }, { "end": 3785, "start": 3770, "text": " The ones that stick out to me is individual had surgery after an accident. Their sutures got infected and that infected their blood. And, you know, they became septic." }, { "end": 3802, "start": 3785, "text": " And I have a mind, I got infected from chemotherapy. And, you know, like these, these are really rare and unsettling situations that when you look at aggregate or in aggregate at this hospital data is that they pop up." }, { "end": 3820, "start": 3802, "text": " It's not a happy time to read case reports about somebody who passed away. And it's even more difficult when you look at the clinical decisions that have been made. 
And you say, oh, you know, in retrospect, they could have seen this and they could have changed that." }, { "end": 3837, "start": 3820, "text": " Ziett Obermeier has a really nice way of describing this phenomenon. And in retrospect, we can be the best at anything. The challenge is diagnosing or at least identifying the signal as it's happening or even better before it happens." }, { "end": 3855, "start": 3837, "text": " And I think that that is the large, large motivation behind a lot of the machine learning for healthcare research. But in particular, solving the septus problem is only one really small piece of, you know, this overall healthcare puzzle that we have." }, { "end": 3884, "start": 3855, "text": " It just happens that, you know, thanks to the efforts of, you know, the mimic team, we have this data set available and it's unfortunately influenced a lot of larger data collection practices is that so like recently a team in Europe just published a large help data set for intensive care, but it's focused on septus or, you know, when we talk to our clinical collaborators at local hospitals here in Toronto." }, { "end": 3893, "start": 3884, "text": " They kind of back away, you say like, oh, we don't have the data to support septus. And we're like, no, no, we don't want to focus on sepsis." }, { "end": 3913, "start": 3893, "text": " We want to focus on problems that you're facing, but we're going to benchmark against this sepsis condition in this open data set. And that you know, once we have those types of conversations with our clinical collaborators, I think that one, we learn what they're really interested in and two, they see the limitations of." }, { "end": 3933, "start": 3913, "text": " You know, current practice within machine learning and it helps kind of bring us to equal terms where they see that you the problem hasn't been solved and we're not just there to be engineers, but we're there to help them in actual clinical research, which opens a lot of really great partnerships and doors when you have this common understanding." }, { "end": 3944, "start": 3933, "text": " So from my brief readings, I only ever encounter the phrase mimic three in the context of sepsis, but is mimic three really all about sepsis or is it a much broader data set?" }, { "end": 3956, "start": 3944, "text": " It's definitely a broader data set like I was sort of saying because of the frequency of records for a septic patient, it makes it an easier problem to look at." }, { "end": 3976, "start": 3956, "text": " And the community has defined really good data extraction code to pull out the sepsis patients, but there's a large variety of conditions and people within the mimic data set, but all constraints to the intensive care unit." }, { "end": 4000, "start": 3976, "text": " And so it's a cute care and the challenges that come with that, you know, given that these are short timelines, these are people in very dire circumstances and some of the recording for these patients is quite sporadic because of that, you know, doctors are working feverishly to treat and care for people who are on the brink of." }, { "end": 4028, "start": 4000, "text": " Of dying and so sepsis has become a little bit easier because it has very systematic protocols of measuring and monitoring the patients and so I think that's just why the majority of the papers that we see from the community that use mimic utilize the sepsis framework, but that doesn't mean that you don't use this data if you're interested in." 
}, { "end": 4056, "start": 4028, "text": " In in in solving something else so the mechanical ventilation, weaning paper from Neuroengineering facade that I referenced earlier that looks at the septic cohort, but they don't look at treating sepsis right they're looking at a sub problem within that cohort, but I am aware of research that people looking at the same septic cohort to do diabetes," }, { "end": 4085, "start": 4056, "text": " management and recognition within a clinical setting, you know, there's mental help type research that has been done with like within the context of sort of the the mimic or septic cohort as well, right, like there's a lot of interesting parallels that can be drawn within the data that doesn't focus on sepsis, but at its core, I think the most low hanging fruit of the problem, just to be able to do that." }, { "end": 4108, "start": 4085, "text": " So when we look at say when deep oral for started with Atari and how DQN did with with Atari back then and and how agents today are doing like with agent 57 and Mu zero, some people are saying or I sometimes wonder, you know, how we solve the Tari is the Tari just solved it's not that useful anymore." }, { "end": 4120, "start": 4108, "text": " How would you comment in terms of where we are on that journey with with mimic three and sepsis I guess we're away a long ways are we long ways from from solving it what would mean to solve this problem." }, { "end": 4148, "start": 4120, "text": " Yeah, I don't know I don't you know to be completely honest, I don't know if it's possible clinically to describe a solution like you is is a solution like with the language that you were used to using you is a solution something that is attainable and I think that there's always going to be some context driven exception to any one clinical practice." }, { "end": 4172, "start": 4148, "text": " Given the patient characteristics the situation and right so what we've seen at least there was there there have been some published medical papers from China looking at the coronavirus pandemic and 100% of the patients who died in their hospitals were observed to be septic whereas those that recovered." }, { "end": 4199, "start": 4172, "text": " How many it was like 40 to 60% or septic at one point right so like it takes on different contexts in form because if you're treating a patient in the hospital currently with you the COVID 19 virus the sepsis is going to be a factor that you consider but largely you're just focused on treating the current symptoms of the virus." }, { "end": 4228, "start": 4199, "text": " And so that largely changes I think the the texture of the problem and there there have been efforts to make generalizable deep RL solutions to the septic septic problem and I think that they're ill guided in in a lot of ways and I don't want to really delve too deeply into them because I really respect the effort and the researchers who who did this you know so this is on a route Raghous paper" }, { "end": 4237, "start": 4228, "text": " set of papers from you know 2017 2018 and then the AI clinician paper that was published in nature medicine in 2018." }, { "end": 4248, "start": 4237, "text": " It like they did great work introducing deep RL into the question at least getting the attention of the reinforcement learning community looking at sepsis." 
}, { "end": 4266, "start": 4248, "text": " But I think that we do ourselves a disservice when we take a traditionalist RL approach to healthcare and I think that's what a lot of people have been trying to do by applying a deep Q network." }, { "end": 4292, "start": 4266, "text": " Just to learn optimal action selection strategies and you know finale doscivilize over go this man who was one of her now graduated students wrote this really great opinion piece for nature medicine giving guidelines of using reinforcement learning and healthcare and they identify all of the pitfalls of using a traditionalist approach for this fixed health data set." }, { "end": 4321, "start": 4292, "text": " And I all this is to say I don't know what a solution looks like especially without running clinical trials you know I think that in the best best world or the best case scenario we convince either the NIH or some other health regulation body that we have done a good enough job and I hope that we would feel confident and assured that we've done a good enough job capturing all of the best." }, { "end": 4340, "start": 4321, "text": " And I think that we can have a good job capturing all of these factors of variation in medical practice that we could run a full clinical trial like we cannot assume that we've solved anything in healthcare or really in the real world without actually experimenting." }, { "end": 4362, "start": 4340, "text": " That's really dangerous territory for machine learning within healthcare is that we need to be sure that there are reversible decisions in case our models are wrong and the current status of a lot of this is that we are not there and we're nowhere close." }, { "end": 4391, "start": 4362, "text": " Part of my motivation of coming to Toronto is that there are doctors here who are willing to work with us to develop clinical trials for algorithmically driven decision tools not for full treatment that would preclude a solution to the sepsis problem but might help us in a smaller smaller problem that will free up the doctors mental capacity in time to be a little bit more attentive to their patients or provide." }, { "end": 4407, "start": 4391, "text": " Opportunity to develop new or novel treatment solutions and that is the goal that I think is realizable within the next five to 10 years depending on the use case and depending on the problem." }, { "end": 4436, "start": 4407, "text": " For example, there is a group here in Toronto at the children's hospital that is looking at triage for the children patients that come in of identifying the right wings or departments in the hospital to put them in and that itself is a reinforcement learning problem as that you want to look at the long term effects or outcomes of that initial triage decision." }, { "end": 4459, "start": 4436, "text": " That is some exciting work that some colleagues at minor getting started on and I think that that is a really feasible or at least ethical approach to trying to develop some algorithmic aid for a healthcare setting when it comes down to help and recovery." }, { "end": 4480, "start": 4459, "text": " I get a little nervous about thinking that we are close or even putting a prediction of how close we may be but I think that we are hopefully getting there within my lifetime and I am excited to be a part of the efforts to at least make that somewhat realizable." 
}, { "end": 4492, "start": 4480, "text": " I am glad you raised AI clinician and there is quite a bit of back and forth about how close we were using a solution like that and critiques about handling uncertainty." }, { "end": 4521, "start": 4492, "text": " The criticisms of that work are both fair and unfair. I think there are some sour grapes by some of the critics of that work because they wanted to be the first to do this but I think a lot of their criticisms about modeling uncertainty and the robustness of the approach as well as even the types of treatments that the AI clinician suggested or very, very good." }, { "end": 4548, "start": 4521, "text": " I think that that is the large challenge with doing oral and healthcare is that you kind of get fixated or at least the agents get fixated on the actions that cause the most change and you develop into suggesting those really serious actions immediately when they are not necessary." }, { "end": 4567, "start": 4548, "text": " That has been something that we have been trying to diagnose in some of this work that I alluded to with Medi Fatemi is why is it choosing those types of actions and can we identify when those actions are useful or when those actions are potentially detrimental." }, { "end": 4580, "start": 4567, "text": " So you presented a workshop spotlight related to this topic entitled learning representations for prediction of next patient state at the ACM conference on health inference and learning 2020." }, { "end": 4582, "start": 4580, "text": " Can you tell us a little bit about that?" }, { "end": 4602, "start": 4582, "text": " Yeah, so this conference is brand new. It was headed up by my advisor Marzia and her close collaborators and the reason why I'm sort of prefacing this is I just want to say that we do have probably the coolest conference acronym out there chill." }, { "end": 4629, "start": 4602, "text": " So we're I tried with no success to convince Marzia to contact Netflix to get a sponsorship and so that we like this is especially because Ben and Jerry's now has a ice cream flavor Netflix and chilled right so it would have been fantastic to have that you know we don't have to take ourselves overly seriously all the time." }, { "end": 4647, "start": 4629, "text": " So but this this paper that I you know presented as a workshop spotlight was looking at the way that within sort of the sequential framework of decision making and healthcare you know what's the right way to learn a representation." }, { "end": 4666, "start": 4647, "text": " A lot of reinforcement learning methods in healthcare just take the data in a isolated sequence and just say like okay this is time step T we have all of these observations will just use that data as raw input into our agent." }, { "end": 4690, "start": 4666, "text": " That is fine and it's worked well you know given the results that we do have in the literature but it's not necessarily realistic because a clinician will use some historical understanding of the you know a patient's you know blood pressure for example in providing some treatment." }, { "end": 4719, "start": 4690, "text": " And there have only been two papers in the literature at least that I'm aware of that have done any sort of recurrent modeling to construct a you know time dependent history you know in sort of a hidden state of what the patient condition is and even then those two papers just sort of blindly use a recurrent neural network and there hasn't been a systematic or rigorous." 
}, { "end": 4747, "start": 4719, "text": " Study on what the appropriate representation is for a patient and so what our work was trying to to do and we're hoping to get this up on archive within the next couple months just because we got good comments from you know failed submission of this paper to a conference this summer of where to improve it and we're going to do that but what we did is we took several different models that would embed." }, { "end": 4772, "start": 4747, "text": " In a current fashion sequential observations and then diagnose or at least investigated what those representations look like and what they allow us to do in the auxiliary task that we use with these representations is in predicting the next observation so given a sequence up to time T can you predict the observation of your time varying." }, { "end": 4801, "start": 4772, "text": " And so we wanted to make sure we did when learning these representations was ensure at least constrained the representation learning to be clinically meaningful so rather than just learn some latent variable that you know happens to achieve good accuracy on your your test setting we want it to at least maintain some semantically meaningful information." }, { "end": 4830, "start": 4801, "text": " And so what we did is we constrained the learning process by making sure that the Pearson correlation with the learned representation and the you know known and measured a QD scores which is just a measure of how severe patient condition is we wanted to maximize that correlation while learning these representations and what we were able to find you this is just sort of a analysis study we're not developing any new model we're not." }, { "end": 4859, "start": 4830, "text": " Presenting any new way of doing these embeddings but by learning these representations with this constraining processes that the simpler RNN based methods are able to more or less separate out the types of patients that we would see right a patients that survive who have less severe conditions versus those that have more severe conditions and who do ultimately succumb to their to their treat." }, { "end": 4887, "start": 4859, "text": " Or to their symptoms and that why this is important is that if we're thinking about applying reinforcement learning agent to these learned representations we want to kind of help maintain some of this clinically meaningful information but then also give it some head start and seeing that these representations are separable between the two patient cohorts." }, { "end": 4914, "start": 4887, "text": " We're excited to start applying learned policies to this type of learned representation from and so you know this workshop spotlight that I did as well as the paper that we're going to be publishing or at least putting on the archives over soon as just largely a first step at saying hey you know all of this state construction business that we've been doing in." }, { "end": 4937, "start": 4914, "text": " You know RL for healthcare you much less machine learning for health care is probably not well informed and we can do better and so that's just starting trying to start this conversation of you what's the appropriate way to represent a patient condition given sequential observations and how can we use that to do better reinforcement learning downstream." }, { "end": 4955, "start": 4937, "text": " So you develop these you know a range of innovative solutions for using RL in these clinical settings. 
Can you say more about what that path would be to to getting them out there to helping real people like with this is this decades away is it could be around the corner." }, { "end": 4975, "start": 4955, "text": " It was a path of life. Yeah so in terms of reinforcement learning I think we've got quite a ways to go but in terms of I would you know I probably in not speaking perfectly precisely here but you know a lot of these standard or more traditional machine learning approaches." }, { "end": 4995, "start": 4975, "text": " You know we have evidence of them working really well in some healthcare settings and those have been in direct collaboration with clinicians and hospitals you know so there's cat heller who's now at Google but was at Duke she and her students were able to develop a really great." }, { "end": 5001, "start": 4995, "text": " Sort of causal inference based solution to managing patients in hospital." }, { "end": 5017, "start": 5001, "text": " So Brett Nestor who is a student with me at University of Toronto he's been working with St. Michael's Hospital here in Toronto about developing prediction methods over you know in the general internal medicine department." }, { "end": 5037, "start": 5017, "text": " Can you predict whether or not a patient will need to be transferred to the intensive care unit because that is a very difficult process that takes a lot of coordination and if they can know a day in advance that somebody's going to be coming to the ICU they can make that transition better and maintain the patient health." }, { "end": 5052, "start": 5037, "text": " So it's much easier in that transition to the intensive care unit another further example has been you know Susan Murphy's work where she's probably the only researcher that has had a like actual clinical trial with machine learning." }, { "end": 5055, "start": 5052, "text": " Approaches under the hood." }, { "end": 5084, "start": 5055, "text": " So Gisaria has been working at this but in each one of these cases of these success stories with applying machine learning to healthcare in practice and production is that it's always been in collaboration is that we like I said earlier we can't operate in a vacuum within the healthcare and by having you invested clinicians who understand you know the methodology." }, { "end": 5108, "start": 5084, "text": " And is really important and you know there are some really great doctors and radiologists that we're affiliated with and collaborate with that are helping us always see the big picture of what they're doing and so specifically talking about Judy Kichoya who's at Emory Hospital in Atlanta Georgia and then Luke Oakden Raider who's based out of Australia." }, { "end": 5133, "start": 5108, "text": " They are really great critics of everything that's being done to make sure that we're doing it in an appropriate fashion and you know I have friends and colleagues out of Harvard Medical School who are constantly helping us remember that you know that there is understood practice when we approach." }, { "end": 5151, "start": 5133, "text": " And we're making strides in you know technology within healthcare but they need to be motivated and informed by what can actually be done so there's good reasons why probably the medical establishment doesn't really follow the philosophy of move fast and break things which is maybe." 
}, { "end": 5180, "start": 5151, "text": " I think there's good reasons why there's probably some bad reasons why to it for going to be completely honest the challenge with regulation boards is that they're humans right and these humans are also doctors that have their own individual biases and preferences and you know it's it's the reality that we need to deal with and it's upon us as researchers to convince them that we are being careful and that we're thinking through the challenges that they care about too." }, { "end": 5209, "start": 5180, "text": " And so it's you know this is kind of the excitement of being at the leading edge of any type of problem and any type of industry is that you get to you get to develop a lot of patients but you also learn a lot in the same thing and I think that that's why we're here on earth in general is to develop and learn and to to become better versions of ourselves." }, { "end": 5222, "start": 5209, "text": " And I think that when we work into disciplinary or in it'll be correct when we work in interdisciplinary settings we are exposed to more opportunities to improve ourselves." }, { "end": 5232, "start": 5222, "text": " Taylor, do you have any comments on other things that are happening in the world of RL that you find really interesting or important these days outside of your own work?" }, { "end": 5240, "start": 5232, "text": " There's a lot and I feel like I've probably taken too much time to describe like feelings that I have and thoughts I have." }, { "end": 5261, "start": 5240, "text": " One really quick thing that I'm excited about is that I am really grateful that there has been an added interest in applying reinforcement learning to the real world and with the challenges in modeling and you know architecture and learning that comes with that." }, { "end": 5279, "start": 5261, "text": " And so I think that we're I wouldn't say we're in a renaissance yet of offline RL but I think that what we're seeing coming from various groups and labs throughout the world is that there is a dedicated interest in making these things work in the real world." }, { "end": 5294, "start": 5279, "text": " And you know there are some success stories that I've been made aware of that I know are not public where reinforcement learning has actually been used in the real world to great effect and it done so in a robust and stable and safe manner." }, { "end": 5305, "start": 5294, "text": " And I think it's it's really exciting to envision or at least hypothesize how much more progress we're going to be making in the near term." }, { "end": 5314, "start": 5305, "text": " Taylor Kylian this has been an absolute pleasure and thank you so much for giving us a window into your you're really important and fascinating work that you've been doing." }, { "end": 5316, "start": 5314, "text": " Thanks so much for joining us." }, { "end": 5334, "start": 5316, "text": " You know I appreciate the invitation. I think that what you do with this podcast is fascinating and that you balance between young researchers as well as established experts and I think that you're speaking as a consumer of your podcast." }, { "end": 5349, "start": 5334, "text": " But now as a guest is that I really appreciate that balance because I think that it's important for young and new ideas to get exposure as well as to just to get the experience to be out there." }, { "end": 5352, "start": 5349, "text": " And so I am really grateful for the opportunity." 
}, { "end": 5363, "start": 5352, "text": " Notes and links for this episode are at talkrl.com." }, { "end": 5368, "start": 5363, "text": " If you like this show, I need your support. You can help in a few ways." }, { "end": 5383, "start": 5368, "text": " Once subscribe on your favorite podcast platform subscriptions make a big difference." }, { "end": 5398, "start": 5383, "text": " And if you don't think we deserve five stars, let us know on Twitter what we could do better." } ]
Nan Jiang
Nan Jiang takes us deep into Model-based vs Model-free RL, Sim vs Real, Evaluation & Overfitting, RL Theory vs Practice and much more!
https://media.transistor…ff3.mp3?src=site
This is Talk by Rail Podcast. All reinforcement learning all the time. Interviews of brilliant folks across the world of RL. I'm your host, Robin Chohan. Nan Jiang is an assistant professor of computer science at University of Illinois. He was a postdoc at Microsoft Research and did his PhD at University of Michigan under Professor Sithindr Singh. Welcome, Professor Nan Jiang. It's very kind of you to join us today. Thanks for having me here. So how do you describe your area of interest? Yeah, so in general, I work in RL theory. But there are many types of theory you can do in RL. And one particular or some of the things I particularly focus on is in terms of, you know, how can we give performance guarantees and design a principled algorithm for the function approximation case? And what kind of assumptions do we need to achieve those guarantees? And what are the fundamental limits of reinforcement learning under various settings? Among all of these, you know, typically I look for sample complexity resource. That is, can we achieve, learn our policy, evaluate our policy using as little data as possible? I see from your website that you are co-authoring a monograph titled Reinforcement Learning, Theory and Algorithms. And it's currently a working draft. Can you tell us a bit about this book? Yeah, so really, you know, this all started when I, actually before I started in UIEC, I knew that I'm well, I would teach a PhD seminar course when I start as a assistant professor. So I started to prepare course notes for that. And the reason I need to write down my own course notes is because for the kind of RL theory analysis that I want to teach my students, I couldn't really find any material like that in, you know, existing, you know, classical text. So I just ended up like writing a lot of things. And then, you know, as I taught the course, I expanded on that. And my collaborators at Agro and Shankarate at Microsoft Research and UW, they found the set of notes and thought it was good. And they used that when they co-taught a similar course in University of Washington, and further expanded on that, and included a lot of topics that I didn't really touch on before, like, policy gradient and imitation learning. And now here we are. We have a set of course notes, a little bit, or like, I'll organize, but the eventual goal is that we may just, at some point, we'll sit down and reorganize everything and press book at the end of the day. So maybe you partly answered my next question, but how would you situate or compare it to books like Sutton and Bartos, reinforcement learning introduction or Chabas, specialties, algorithms for reinforcement learning, or others in the genre? Yeah, I mean, those are great textbooks, and I started in order to like reading these textbooks myself. I think they're very great books as introductory material. And specifically, like, the Sutton and Bartos book provides a lot of wonderful insights from the more kind of AI perspective, where Chabas book takes more of like a machine learning perspective, right? So if you have more of like a learning theory background, and want to know what RL is about, Chabas book would be a great choice, right? But the problem with both books is that you don't really find that much of an theoretical analysis, or performance guarantees in these books, because these are introductory books, they're not supposed to touch heavily on these kind of materials. And that goes back to what I was saying a minute ago. 
If I really want a set of materials that teach students how to perform theoretical analysis, especially sample complexity analysis in reinforcement learning, I really just need to write up my own thing. Can you help us understand more about the relationship between theory and practice in RL today? Like, is the current state of RL theory largely limited to simpler problems and simpler function approximations? Yeah, that's a, you know, that's a great question, and that's a question that's always going to be there, right? So I think to a good extent, yes, you know, we have a very mature theory for RL in the tabular setting, right? I think this audience is familiar with this concept, but when we say tabular, we may have a finite small state and action space, and you can afford a sample and a computational complexities that scale polynomially with the number of states and actions. Right? So there the, our understanding is very, very sharp. We still make progress in that setting, but, you know, the gap is really close. However, as we all know, in the real world, in more challenging practical problems, all of them, most of them are not in the tabular setting, right? So I think previously the amount of sample complexity results, sorry, the amount of theoretical results for the general function approximation setting is quite scarce. But in recent years, we're seeing a growing number of papers and quite fast progress in some sub-arrows. For example, when we think of linear function approximation in some special structured environments, we have a, we have a lot of theoretical understanding now compared to, say, five years ago. The other sort of extreme is when you just say, I have a function estimator that has limited statistical capacity, and otherwise, I don't want to assume anything on that. What we can do with that in RL. So that, I mean, I myself have worked on papers of that flavor, like, you know, for a while. So we also have some good understanding for that regime. But, you know, for many practitioners, the really interesting setting is probably, like, when you use function approximators, they are not unstructured, but something like a neural nets, right? And we know neural nets have many amazing and sometimes confusing behaviors or properties, even in supervised learning, right? And we've probably heard of all these, like, over-premetization. How can we learn neural nets that has more parameters than the amount of data? So if you want to study RL with neural function approximators, you would need to bring all those kind of understandings from supervised learning and combine them organically with the unique challenges of RL. And I think this is really, like, very, very understudied area, and we're just about to start, you know, as a community working on that. And actually, to that point, I'm co-organizing a workshop. I think I... It's almost approved, so I won't say too much about it, but it will happen in summer 2021, where the focus is exclusively on the RL theory. So I'm really looking forward to, you know, the future progress on this topic from the community. So would you say that the gap between theory and practice is getting wider or is starting to narrow? Yeah, so that's another interesting question. So in terms of, you know, so one way that I think about theory and practice, especially in the context for RL, is that it's kind of theory gives you the baseline where the empirical work gives you the skyline, right? 
So when you do empirical work, you always evaluate on a set of environments like the Atari games or MuJoCo or whatever; you'll never be able to test your algorithm on all the benchmarks you would like to. On the other hand, in theory you often get worst-case guarantees that hold for all environments that can be cast, say, as MDPs, but that's a very large set of environments, and many of the problem instances in this family are not problems you would really care about. So theory is always pessimistic and empirical work is always sort of optimistic. And bridging that gap is... I mean, in a sense, I think they're working on different things: empirical work shows you promises, or skylines, of what we could possibly do, whereas theory usually catches up from behind and tries to tell you, oh, for this kind of problem, we surely know that we can do something here. So that's the relationship between theory and practice that I think of. Now, in terms of whether the gap is expanding or closing, I think it really depends on the topic. For example, in terms of understanding the role of function approximation in RL, my general feeling is that we're getting closer. If you look at some recent empirical papers that try to diagnose what's really happening when you run deep RL algorithms, from time to time they will refer to theoretical papers for the theoretical underpinnings of some of the empirical behaviors they see those algorithms exhibiting. So in planning this episode, you mentioned three major areas of interest. They were model-based versus model-free RL, simulation versus real, and evaluation of RL algorithms and overfitting. I was hoping to start with the first one, model-based versus model-free RL. Can you share a bit of your perspective on this dichotomy? So, you know, I think model-free versus model-based is probably one of the most overloaded and confused concepts in all of RL. Really, when different people say model-based versus model-free, they sometimes mean very different things, and I think that's part of where the confusion comes from. For example, in the tabular case, there's this very classical notion that model-based is more sample-efficient than model-free RL. But when you actually say that, what you mean, for example, is that if you give me a stream of data, like (s, a, r, s') tuples, you can use them to build an empirical model of the world and then compute the optimal policy from it. Or, conversely, you can just pass the stream of data to something like Q-learning and let it tell you what the optimal policy is. And if we compare these two algorithms, of course model-based will be much more sample-efficient. But if you really look into this example, you realize that maybe the difference is not really model-based versus model-free; it's one-pass versus multi-pass algorithms, because your Q-learning algorithm is only allowed to read and use every data point once. So there's this very interesting result, I think it's almost folklore, that if you turn the stream of data into a replay buffer and allow Q-learning to make multiple passes, actually infinitely many passes over this dataset, it eventually just converges to the same solution as the model-based solution.
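To make that folklore result concrete, here is a minimal sketch, in Python, of the two procedures on a small tabular problem: certainty-equivalence planning on the empirical model versus repeated Q-learning sweeps over a replay buffer of the same transitions. Everything here (the state and action sizes, the synthetic batch, the step size and iteration counts) is a hypothetical stand-in chosen for illustration; with a small step size and enough passes the two Q tables should roughly agree, up to the constant learning rate.

```python
import numpy as np

# Hypothetical small tabular problem: S states, A actions, discount gamma.
S, A, gamma = 5, 2, 0.9
rng = np.random.default_rng(0)

# A fixed batch of (s, a, r, s') transitions; synthetic here, but think of it as logged data.
batch = [(rng.integers(S), rng.integers(A), rng.random(), rng.integers(S)) for _ in range(2000)]

# --- Model-based: build the empirical (certainty-equivalence) model, then do value iteration.
counts = np.zeros((S, A, S))
rew_sum = np.zeros((S, A))
for s, a, r, s2 in batch:
    counts[s, a, s2] += 1
    rew_sum[s, a] += r
n_sa = counts.sum(axis=2, keepdims=True).clip(min=1)
P_hat = counts / n_sa                      # empirical transition probabilities
R_hat = rew_sum / n_sa[:, :, 0]            # empirical mean rewards

Q_mb = np.zeros((S, A))
for _ in range(1000):                      # value iteration on the empirical model
    Q_mb = R_hat + gamma * P_hat @ Q_mb.max(axis=1)

# --- Model-free: Q-learning making many passes over the same batch (a replay buffer).
Q_ql = np.zeros((S, A))
alpha = 0.05
for _ in range(500):                       # "infinitely many" passes, approximately
    for s, a, r, s2 in batch:
        target = r + gamma * Q_ql[s2].max()
        Q_ql[s, a] += alpha * (target - Q_ql[s, a])

print(np.max(np.abs(Q_mb - Q_ql)))         # small: the two solutions essentially coincide
```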
So in this tabular case, if you remove the distinction between one-pass and multi-pass, I would say maybe there's no real distinction between model-based and model-free RL. So when I say model-based versus model-free, I'm thinking more of the function approximation setting and some of the fundamental representational differences between these families of RL algorithms. And my view has been heavily inspired by this classical paper, Towards a Unified Theory of State Abstraction for MDPs, by Lihong Li and others in 2006. The general message is that if you look at the different families of RL algorithms, like model-based, value-function-based, and policy-based algorithms, there's a very natural hierarchy and trade-off here. If you run a model-based algorithm, you're implicitly assuming that your representation is rich enough to capture the dynamics of the world. And if you can capture the dynamics of the world, you can use that representation to also express, say, the Q function of the world, or even a near-optimal policy for this environment. So your representation must be powerful enough to express all these objects. That's a big assumption; it's a lot of burden in terms of designing, say, your feature representation. On the other hand, with that strong assumption come great guarantees: almost any RL algorithm, if you hook it up with such a nice representation, will work, and all the guarantees will just go through. If instead you move down this hierarchy towards the policy-based side, you now have a very light representational burden. All you need is to be able to express, say, a near-optimal policy. You don't need to worry about the reward function, the transition dynamics, or any of that, so you may end up with a very simple representation. But the bad news is that the kinds of algorithms you can work with using this kind of representation will be relatively limited. Maybe you can do policy search, maybe you can do policy gradient type algorithms, but if you try to run something like a model-based algorithm on this kind of simplified representation, things can completely break down. So that's the trade-off I usually think of in terms of model-based versus model-free RL. One thing that comes to mind for me with model-based RL is that we might have a chance to learn something different from the reward function that we're given at first. Maybe we can learn other types of behaviors, which doesn't seem to be as feasible with value- or policy-based RL. So it seems like we have one extreme with model-based RL, where maybe we can learn any policy in the best case, and in value- or policy-based RL we're really limited to just working with that one reward function. Is there a continuum there? Is there anything in the middle, where we can learn something related to the original task, but a bit different? Oh, definitely. I think that's a great point. So you're saying that if I can model the dynamics of the world, I can just plug in any reward function I like and get a near-optimal policy for that reward function just by planning. That's actually a great point. In fact, building on that, there's a very natural middle ground between model-free RL and model-based RL.
What you can possibly do is specify the space of reward functions you might be interested in working with, and then try to model just enough, not the entire dynamics of the world, so that you will be able to learn a near-optimal policy for every reward function in that family. I think this is somewhat related to the notion of reward-free RL, or reward-free exploration, things like that. There are quite a few recent papers on this topic, so it's a growing direction. You mentioned a paper that you co-authored with Wen Sun, Model-Based RL in Contextual Decision Processes: PAC Bounds and Exponential Improvements over Model-Free Approaches. That was from 2018. Can you help us understand what this paper is saying? In this paper, we consider the problem of how to do systematic exploration in large RL problems. Really, just think of it as a big MDP where you must use some form of function approximation. Previously, in 2017, with some of the same co-authors, we had a paper that does the value-based version of roughly the same idea, where all you have is a space of candidate value functions, and you want to do good exploration with that. In this paper, what we do is assume that we actually have a class of candidate models of the environment, and as I mentioned before, when you have stronger representation power, enough to represent the dynamics of the world, you naturally expect to be able to do more with it. That's precisely what we show: there are some special cases where our model-based algorithm can achieve polynomial sample complexity, whereas any value-based algorithm, where the notion of value-based or model-free is defined in a particular manner, has to suffer exponential sample complexity. That's the "exponential improvements" part of the title. This paper introduces something called witness rank and contrasts it with Bellman rank. Can you help us understand what it is that these ranks measure, and what witness rank is doing? Yes, sure. When you talk about rank, I think many people would think of matrix rank, and it's actually pretty closely related in this case. One canonical example where both witness rank and Bellman rank are low is when you write down the transition dynamics of your MDP as a (state, action)-by-state transition matrix: if this matrix has low rank, then your environment will naturally have both low Bellman rank and low witness rank. The reason we care about these ranks comes from the fundamental nature of systematic exploration in RL. What is the core difficulty here? Why is exploration so hard in RL, when you don't have this challenge in supervised learning? The reason is that in supervised learning, at least in the most standard model, you have data from the training distribution, and you test your classifier on test data drawn from the same distribution. You don't have distribution shift. The biggest issue in RL is that there is sometimes severe distribution shift: when you execute different policies, you can see very different distributions of data. In fact, you can show that if you don't regulate this, if you allow an environment where exponentially many candidate policies can generate exponentially many distributions that are drastically different from each other, then there is no hope of learning anything with polynomial sample complexity.
In this context, what you really need to assume, in order to make any polynomial sample complexity claim at all, is that all those distributions share something: either they have good overlap, or they have some shared common structure, for example that all of them can be embedded in a lower-dimensional space. That's where the notion of rank comes from. Both Bellman rank and witness rank characterize how low-dimensional a space you can embed all the distributions you can possibly generate in this environment into. The difference between the two notions is that, as I mentioned, Bellman rank, which is related to my previous paper, is closely tied to value-based RL, whereas witness rank is specifically designed for model-based RL. And because of this difference, there are some environments that witness rank can handle but Bellman rank cannot. A canonical example is what we call a factored MDP, sometimes known as an MDP described by a dynamic Bayesian network, where your state is represented by a number of state variables, and as the state evolves, the value of each variable at the next time step only depends on the values of a small number of variables from the previous time step. It turns out that in this kind of model you can actually show that the witness rank is small, but in some special cases no value-based algorithm can solve the exploration problem efficiently. So are these types of ranks applicable in real-world or common scenarios? Like, can we talk about the rank of Atari or MuJoCo or things like that? Or is it limited to cases where we have a linear transition matrix? Yeah, I think that's a great question. I think the real situation is somewhere in between. We do have some actual benchmarks used in deep RL today that can be seen as having low Bellman rank or witness rank. One canonical example is what we call the visual grid world. I'm sure you've probably seen these environments before: you have a grid world environment, but you render it in a 3D game engine like Minecraft, and instead of telling the agent which grid cell it is in, you give it the first-person view of the environment. And assuming the raw pixel image you see is approximately Markovian, this would be an environment that literally has low Bellman rank and low witness rank, where the rank is bounded by the number of grid cells. And as you mentioned, MuJoCo gives you another example: if you have a linear control problem, say LQR or something like that, that also yields low Bellman rank and low witness rank. However, I would say that for Atari games, especially some of the more complicated ones, it's very hard to characterize them using these very nice, clean structures proposed in theory. I think it's still a big open problem as of today: what kind of very general structural assumptions can we use to capture games like the Atari games, so that we can develop nice guarantees for environments with those structures? I think that's still a big and interesting open problem. I attended your RL theory seminar last week, entitled Information-Theoretic Considerations in Batch Reinforcement Learning. Can you tell us briefly what you were talking about there? Yeah, so the topic there is related to the notion of model-based versus model-free.
The question I'm thinking about is literally batch RL, where you just have a set of data points and you want to compute a near-optimal policy, and you want to use a value-based RL algorithm, say trying to approximate the Q-star function. What are the fundamental limitations there? In particular, in supervised learning, the strongest representation assumption we usually make is called realizability: that your function approximator can actually capture the target function you're aiming to learn, which here is Q-star. But it turns out that many of us believe that in the RL case, especially the batch RL case, that's not going to be sufficient. We often need expressivity assumptions that are much stronger than that. At the same time, we don't really know whether we can just get away with realizability alone. So it's a discussion around this idea of where the true limit of batch RL is in terms of representation. In the seminar, you commented at one point, and I'm paraphrasing, if that's okay, you said roughly that distributional RL is somewhere between model-free RL and model-based RL. Is that right? Yeah, kind of. Again, it depends on what you mean, but there's a specific sense in which that is correct. Think about it this way: if I'm able to model the true dynamics of the world, then using the same representation, and similar reasoning to what I used earlier, I will be able to express the object of interest in distributional RL. I don't remember what it is called, but basically what you try to do is, for any given state-action pair, predict the distribution of returns you would obtain from that state-action pair under a certain policy. Now, if you can represent the entire dynamics, you can sort of represent that. And if you can represent that, it also means you can represent the usual expected value function, because all it takes is applying an expectation operator. So in that sense, distributional RL models slightly more than value-based RL, but also a little bit less than model-based RL. In that particular sense, I would say yes, distributional RL is in the middle between model-based and value-based. And back to the topic of that seminar: if it is true that in the batch setting, value-based RL faces some fundamental representational hardness, then maybe, or maybe not, distributional RL can be a way to lift it without going all the way to model-based RL. Do you see other approaches between model-free and model-based RL, and is that a rich area? Yeah, I think there are definitely some ideas floating around, especially in the empirical deep RL work, that fit nicely in between value-based and model-based. One example is value-based RL with auxiliary tasks. Let's say you do something like DQN, but instead of just training it with the TD type of loss, you also ask your network to predict some other things happening in the environment, and use that to help you learn a better representation. That would be a very good example of something in between model-free, or I would say value-based, and model-based RL. Another thing to think about is that some of the hardness results I've been deriving in my papers assume a kind of minimal representation power for value-based RL: I have a function class that can express Q-star, but nothing else.
I can't express Q-pi; I can't express other functions of interest in this environment. So if the hardness is associated with that, maybe one way to circumvent it is to introduce a notion of over-parameterization: try to have a network, or a representation, that can predict more than just the function you are eventually interested in learning. So this is also related to auxiliary tasks. Another interesting idea that I've been fascinated by for years, but where I have not seen a very good development, is the idea of learning incomplete models for reinforcement learning. One big issue with model-based RL, especially if you do it in the raw state space, is that you're just predicting so many things. There are lots of unimportant details in the real world that you probably don't ever need to care about, but if you run a vanilla model-based RL algorithm, you're trying to model all of it. Can you do less than that? Just pick bits of the world, predict the dynamics on those fragments, and somehow use these incomplete models for prediction and planning. Some of these ideas saw some nice early investigation in the PhD thesis of Erik Talvitie, who happens to be my academic brother, but I really look forward to seeing, for example, a modernization of those ideas in the deep RL setting. Maybe that could be considered a way to combine causal reasoning with RL as well, because if you choose which parts of the model you want to include, you can use your domain knowledge of causality to only model the parts that are relevant to your problem at hand and discard the rest. Yeah, I think a lot of ideas can come into this framework of incomplete models, including causality, as you said. And also some of the state abstraction concepts, like bisimulation and homomorphism, are rigorous mathematical frameworks that define what can and cannot be discarded as unimportant detail. So I think it will take some combination of all these ideas to come to a relatively complete solution to this problem. So let's move on to the second topic, sim versus real. You refer to simulation-based RL as solving large planning problems with learning methods, and RL from data as solving learning problems. Can you help us understand the distinction here? Yeah, I think this is another common confusion in RL, in the sense that when RL people write a paper, you don't always know what problem he or she is really interested in solving. Some people are really just interested in solving planning problems: I have a large black-box simulator and I just want to compute a near-optimal policy for it, and I just happen to be using sampling-based methods, which sometimes look very similar or identical to learning methods. Whereas other people try to solve RL problems that are really defined by data. Let's say you want to do automated dialogue systems, online customer service, or use RL to improve decisions in healthcare scenarios. In these kinds of scenarios you just don't have a simulator, and you deal with data. So I think there's a very big difference there.
For example, there are some algorithms, or some sub-areas of RL, that are dedicated to the simulator setting. One example is Monte Carlo tree search. In the early days of MCTS, if you look at the papers, they specifically say sample-based planning instead of learning. Although MCTS comes from the RL community, it's really a planning algorithm. On the other hand, you have problems like off-policy evaluation: how do you use your historical data to estimate the performance of a policy? And this really only makes sense when you don't have a simulator, because if you have a simulator and you want to know the performance of a policy, the easiest way is just to run that policy, which we do all the time in deep RL today. So I think it's pretty important to make a distinction between these two settings. So when it comes to learning from real-world data, I've been thinking more about the type of noise we see in real-world data. It seems like the type of stochasticity in the real world is more complex than what you can easily model in a sim by adding any simple type of noise. That makes it hard to model, and that makes building world models and off-policy evaluation more challenging. And then it's expensive to deploy these policies in the real world to evaluate how they actually do in a production setting. So it seems like these things combine to make it really hard to iterate on RL with real-world data. On one hand, we have this very advanced simulation-based RL, like you were saying with Monte Carlo tree search, and now we have MuZero and Dota and Agent57, and that stuff is all really far along. But on the other side, with real-life RL, it seems like we're maybe still working on the very basics. Is that how you see it right now? I think I agree with a lot of the points you make here, although I would say that for some of the simulation-based RL, they actually have a serious goal, and their goal is real. For example, when you try to build an agent that can play Go or Dota, those have their real-world benefits or value. So in these cases, solving the planning problem defined by the simulator can be your goal, and there are various grand challenges there, and we've seen very impressive advances. As you mentioned, we have AlphaGo, AlphaZero, and this amazing Dota-playing agent. On the other hand, it really depends: for some people, solving the simulator problem is not the final goal. The reason we use simulators in RL research is as benchmarks, or as a way to emulate what would happen if we were to apply RL in the real world. In that case, I would say yes, there are a lot of the difficulties you mentioned earlier, for example sample complexity issues, the consequences and risks of taking real decisions, and the difficulty of running a policy in the real world. And there are actually many more of these kinds of difficulties associated with real-world RL. For many of these aspects, it is very hard to study them in the simulator setting, or, as of now, we pay much less attention to them in our simulator-centered RL research. So just to add a few other examples, right?
If you actually learn from real-world data in scenarios like healthcare, more likely than not you will be given some passive data that arises, for example, from previous historical medical records. In that case, thinking about confoundedness and introducing something like causal inference could be crucial, which we're not doing much at all in simulator-based RL research. So what is missing that keeps us from seeing more RL in the real world? And I guess, based on your answer, improvements in simulations won't be enough? I mean, as I mentioned, we can always use a simulator as a benchmark or as an emulator of what happens in the real world. I think part of what we really need is to take this view seriously: use the emulator in a way that really tries to mimic what happens in the real world. And sometimes it's surprisingly hard to do this; I'll give you one example. I've been working on off-policy evaluation for quite a while, and as we always do, we use simulators as benchmarks to test, evaluate, and compare different OPE algorithms. In this case, when you show off the performance of your algorithm on the simulator, it's very, very tempting to do hyperparameter tuning, just as everyone else does in deep RL. But if you think about when you actually apply OPE in a real-world task, you realize that you just can't do hyperparameter tuning at all, because what you usually tune against is the ground-truth value of a policy, which is precisely what you're trying to estimate and don't have access to. It's pretty funny: there was one time when we submitted a paper about empirical benchmarks, and one of the reviewers said you're just not doing enough hyperparameter tuning. I think that's a reflection of the mindset that we just need to tune hyperparameters to make things work in the simulator. Whereas if you seriously use the simulator to emulate the real-world situation, you should put a lot more restrictions on yourself when it comes to measuring the performance of your algorithm, among other things. So let's move to the third topic now, evaluation of RL algorithms and overfitting, which you started to touch on with off-policy evaluation. First, can you just remind us of the relationship between the topics of evaluation and overfitting? Yeah, I guess when we set this up, I was really talking about it in the context of many people criticizing empirical RL research in particular, saying something like: RL is the only machine learning paradigm where you test on training data. So I think, again, there's some conflation of different ideas and concepts here, but ultimately the question is: when you have an RL algorithm that is trained on some data, or trained on some environments, how can you evaluate this algorithm? What kind of evaluation protocol do you use, so that if the evaluation outcome says this algorithm is very good, you're actually confident that the algorithm is generalizing properly, for whatever generalization means, and that it's not overfitting to the data or the environment you trained it on? So you suggested on Twitter that we might look to meta-learning and transfer learning for generalization in RL, is that right?
Yes and no. Again, it really depends on what type of generalization you're talking about. I think when people criticize RL for testing on training data, what they really mean is that in RL, you train on relatively simplified or simple environments and you test on the same environments. So that's kind of like testing on your training data. And sometimes what people really look for is a kind of transfer learning behavior. For example, you learn to, I don't know, pick up a hammer in this particular environment, and what people really want is that you actually learn how to pick up a hammer, so you can do the same thing when you're put in a different environment. What they don't want is for the agent to overfit to a particular environment, for example by using environment-specific visual cues to help it pick up the hammer, cues which may be absent in a different environment. What people really want is: can the agent really just learn to pick up a hammer? But my reaction to that is, in the standard mathematical framework of RL, what we really have is a single environment; you give the learner data generated from this environment, and it will succeed in this environment, period. In the standard framework, nothing is said about how the learner can transfer abilities learned in one environment to another, unless you present the learner with a whole distribution of diverse environments, which you can typically think of as one big environment with a diverse set of random initial starting states. So that's why I said: if you really look for these kinds of transfer learning effects, then invoke a more appropriate mathematical framework to study them, instead of blaming the lack of transfer ability of RL algorithms that are designed for a single environment. So to what extent can we blame overfitting and poor generalization on the function approximator versus the reinforcement learning side? I guess with deep RL, it seems to me that we can make a lot of progress just by waiting for supervised learning to get better, but it seems like there's more to it here. Is that right? If I understand your question correctly, part of the question is: if we use something like deep neural nets, which are very powerful function approximators, do we run the risk of fitting too many things, or fitting too precisely to the environment? I don't really have a good answer to this question, although I suspect that, for example, a simpler function approximator may help with this particular kind of generalization. For example, in 2015 there's this paper, State of the Art Control of Atari Games Using Shallow Reinforcement Learning, by Liang, Machado, Talvitie, and Bowling. What they show is that, at least as of 2015, the state-of-the-art Atari results could largely be matched with, say, linear function approximation. So if that gives you the same kind of performance on the environment you train on, maybe it will generalize better to slightly different environments.
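For readers who haven't seen "shallow" value-based RL in code, here is a minimal sketch of semi-gradient Q-learning with a fixed, linear-in-features Q function. The random-projection feature map, the dimensions, and the constants below are all hypothetical stand-ins for the kind of handcrafted features used in that line of work; the point is only that the learning rule stays this simple once the features are fixed.

```python
import numpy as np

class LinearQ:
    """Semi-gradient Q-learning with a fixed feature map: Q(s, a) = w . phi(s, a).
    The random projection below is purely illustrative; in practice phi would be
    a handcrafted, domain-specific feature map."""

    def __init__(self, obs_dim=128, n_features=256, n_actions=4, gamma=0.99, alpha=0.01):
        rng = np.random.default_rng(0)
        self.proj = rng.normal(size=(n_features, obs_dim)) / np.sqrt(obs_dim)
        self.n_features, self.n_actions = n_features, n_actions
        self.gamma, self.alpha = gamma, alpha
        self.w = np.zeros(n_features * n_actions)

    def phi(self, obs, a):
        # One feature block per action, so each action gets its own weights.
        x = np.zeros(self.n_features * self.n_actions)
        x[a * self.n_features:(a + 1) * self.n_features] = np.tanh(self.proj @ obs)
        return x

    def q(self, obs, a):
        return self.w @ self.phi(obs, a)

    def update(self, obs, a, r, next_obs, done):
        """One Q-learning step on a single (s, a, r, s') transition."""
        best_next = 0.0 if done else max(self.q(next_obs, b) for b in range(self.n_actions))
        td_error = r + self.gamma * best_next - self.q(obs, a)
        self.w += self.alpha * td_error * self.phi(obs, a)
```

You would call `update` inside whatever environment loop you have; swapping in better features is where all the domain knowledge goes, while the update rule itself does not change.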
And on the other hand, as I mentioned before, I think the function approximator cannot be the sole answer to this question, because if you really look for those kinds of transfer learning behaviors, there must be a way to communicate to the learner what you're really hoping it learns. Why is it supposed to know that picking up the hammer is what matters on its own, without relying on visual cues? If you just run a very standard algorithm on this environment directly, you're not letting the learner know what you really care about. So if you care about that behavior, there must be a way to inject that kind of knowledge into your learning algorithm, or your data, or somewhere in the execution of the algorithm. I think that's where we probably need to think more. I'm reminded of the OpenAI work with the Shadow Hand and the Rubik's Cube, the dexterity paper. My understanding is that they used domain randomization in simulation, varying the parameters so that the agent didn't overfit to the specific values of things like gravity constants and friction constants in the simulation. But what that meant is that they had to train the agent on so many different environments, which I think is maybe only feasible if you have a small number of parameters to diversify your environments with. And I can't help but think that approach doesn't seem very scalable. So I wonder if there's some way to get that effect without actually having to sample from so many different environments, because they're basically saying we don't care too much about these parameters, but the number of things we don't care about is so large that I don't expect we could ever enumerate them in simulation. Yeah, I don't know. I think domain randomization is definitely one of those ideas out there that helps you overcome overfitting to a specific environment. The other thing people do, for example, is inject some adversarial or even just random noise into the state dynamics, so that you don't overfit to the precise dynamics of the environment you train in. So yeah, as you said, some of these approaches are computationally difficult or very challenging; domain randomization typically needs to sample lots and lots of environments. I don't really have a good answer here, but yes, I think we probably need better, more computationally and sample-efficient ways to overcome this issue. So, you mentioned some of your work in off-policy evaluation. You've authored some very influential papers on off-policy evaluation, including Doubly Robust Off-Policy Value Evaluation for Reinforcement Learning. Yeah, a side story here: the distinction we wanted to draw there is the notion of off-policy learning of an entire value function versus just learning the scalar expected return of a policy. We meant the latter, but there has always been confusion between the two. I think the terminology has evolved since then, and my co-author Lihong has probably settled on the notion of off-policy estimation, but even now people use different names for that concept.
So retrospectively, this "value evaluation" phrase was a bad idea and hasn't really caught on. Then you have Minimax Confidence Interval for Off-Policy Evaluation and Policy Optimization. You also co-authored a 2019 paper comparing OPE methods, Empirical Study of Off-Policy Policy Evaluation for Reinforcement Learning, by Voloshin et al. So given that you have so much knowledge of this area, can you share some advice for practitioners on approaches to off-policy evaluation? How should we look at it in settings where we don't have a simulator? Yeah, I think that's a good question. First of all, I'm not really a practitioner, so the first thing I would say at a high level is that you really need to talk to the domain experts and understand the unique properties and challenges of your particular application. For example, think about OPE in healthcare versus one of these online recommendation systems. The kinds of challenges and the kind of data you deal with can be drastically different in these two scenarios. In healthcare, as I mentioned, you probably get non-exploratory historical data generated by human decision makers, and you face causal inference issues like confoundedness and all of that. Whereas if you're a company like Microsoft, Google, Amazon, and so on, you may try to use RL to improve your online services or your online interactions with your customers. And in that case, if you've been set up to do these kinds of things, for example Microsoft has this Decision Service platform designed for contextual bandit applications, then you may have very well-logged data, where not only do you log state, action, reward, and all of that, but for each action you've recorded, you've also recorded the probability with which you intended to sample that action when you generated the data. That piece of information turns out to be crucial if you want to apply something like importance sampling, whereas that kind of information is typically missing in other scenarios like healthcare, if not outright ill-defined. And on this topic, just this past weekend there was a very nice virtual conference, RL for Real Life, with a dedicated panel of speakers on RL for healthcare. I haven't been able to check out those videos myself, but I definitely will, and I encourage those who are interested in applying RL and OPE in these related applications to check out those videos as well. I did actually spend part of my weekend with some of those videos, and I can say that especially the healthcare ones were really fascinating and very informative to me. Cool. So does that mean that the off-policy evaluation problem can't really be solved by just improving world models using, say, deep learning or other supervised learning methods? It sounds like there is much more to solving that problem than building better world models to be used for off-policy evaluation? Yeah, I think that's a good question. One of the approaches you would immediately think of for off-policy evaluation is model-based, as you mentioned: if I can model the dynamics of the world, then of course I can use that to evaluate anything I want. The problem with that is that the full dynamics of the world is too powerful.
It's overly powerful, in the sense that you would be capable of doing basically anything with it, which typically means you're making a kind of unrealistic assumption. Down at a technical level, what really happens, especially if you have a large state space and you think of model learning over that space, is that you're trying to learn the transition dynamics: state and action map to a distribution over next states. So unlike the standard classification and regression scenarios of supervised learning, here you're trying to learn a function whose output lives in a rich label space. The label is not even just a state, which is already high-dimensional, but actually a distribution over states. So some of the difficulties we touched on earlier, like which aspects of the state are important versus unimportant, come in here. Typically, when people try to learn these raw world models today, what kind of loss function do they use? Well, first of all, they often assume the world is deterministic, which is approximately true in some of the control benchmarks. But for real-world scenarios, as you mentioned, some of them are highly noisy, so you can't pretend the world is deterministic. And furthermore, even if it is deterministic, you still have to define an informative loss function over states: are two states close to each other or not? Think about what you can do if you're building a model for Atari games: you're given two pixel screens, how do you compare them? If you use something like an L1 or L2 loss, that's not going to be very informative. You can try something like a perceptual loss, basically using a neural net to distinguish between them, but again, that kind of discriminator is very generic. It doesn't really speak to your precise need of doing off-policy evaluation; it is completely generic and just helps you learn a model. And there must be a trade-off here: if you apply a very generic approach to learn a very complex object like the full model of the world, then you lose the ability to focus and concentrate your sample and computational resources on the parts of the world that really matter for your off-policy evaluation task. So that's why, and I may be wrong, I think the model-based approach as a solution to OPE is probably not the best way to go. I've worked on OPE for a while, and in recent years we've also seen very fast progress on new ideas that can give you reliable OPE estimates with relatively mild representation assumptions, much weaker than assuming you can capture the world dynamics. I'll bet on that route, where we continuously weaken the representation assumptions needed for OPE, so that we get more and more reliable OPE procedures that use fewer and fewer assumptions, to the point where people are comfortable applying them in real-world scenarios. That's very interesting. Would you care to mention any specific works along that line? Yeah, sure. Before I point to any specific method: when you apply OPE, you should first be thinking about what regime of the problem, what regime of OPE, you're in. People have probably heard that in RL, OPE can be very difficult if you have a long horizon. That's partially true.
Really, what you should care about is how long the horizon is and how different your behavior policy and your target policy are. If they are very different from each other, and/or the horizon is very long, you don't want to use something like importance sampling, which is great otherwise. If your two policies are very close to each other, or the horizon is relatively short, importance sampling will give you an unbiased estimate that does not rely on any function approximation assumptions, and you can also do some nice variance reduction, as we did in the doubly robust OPE paper, to further improve this kind of method in that regime. However, in other scenarios you will find yourself in the much more challenging situation where either the two policies differ significantly and/or the horizon is very long. If you try to apply importance sampling in this regime, you'll find that your importance weights, which you need to multiply together over many time steps, quickly blow up, with a variance that explodes exponentially in the horizon. In this case you need something else, something closer to value-based algorithms that make use of Bellman equations, to overcome this so-called curse of horizon. There's a very nice paper from NeurIPS 2018 called Breaking the Curse of Horizon, which introduces the idea of marginalized importance sampling: instead of trying to correct the distribution of the entire sequence of actions you've taken, you just correct the mismatch between the marginal distribution of states you've seen and the marginal state distribution that would be induced by the target policy. That's a topic I've worked on extensively recently, and I think it's a very promising idea in the regime where ordinary importance sampling really doesn't work. Thank you for clarifying those regimes. That was actually a really key insight I was missing, because I couldn't see how importance sampling could solve problems from the other regime, but I couldn't put my finger on the reason. Thank you for clarifying that. Yeah, and I'll just add another fun fact. Some people think importance sampling is good: it gives you unbiased estimates. Others think importance sampling is just bad: it gives you exponential variance everywhere. That's also not true. Have you ever applied or implemented policy gradient methods? I actually haven't; I'm focused on value-based methods. Okay, but many people have implemented policy gradient, and if you have ever used policy gradient, you have essentially used importance sampling. There's this very nice connection that policy gradient is essentially using importance sampling to estimate the return of your policy in a small neighborhood of your current policy and then differentiating that. And you don't see exponential variance in policy gradient precisely because, in this case, your behavior policy and target policy are infinitesimally close to each other. Which means you can extend this a little bit: if your behavior and target policies are only slightly different from each other, importance sampling will still work very well. And this particular connection between OPE and PG was mentioned by Jie Tang and Pieter Abbeel in a 2010 paper.
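As a rough illustration of the two regimes Nan describes above, here is a small Python sketch, under an assumed finite-horizon setting with logged behavior propensities, of ordinary (trajectory-wise) importance sampling next to the per-step correction used by the marginalized variant. The function names, data layout, and the `state_ratio` oracle are all hypothetical; in particular, the marginalized estimator is shown only up to normalization, and estimating the state-occupancy ratio is the hard part that the referenced line of work actually addresses.

```python
import numpy as np

def trajectory_is_estimate(trajectories, pi_e, pi_b, gamma=0.99):
    """Ordinary (trajectory-wise) importance sampling for OPE.

    trajectories: list of episodes, each a list of (s, a, r) tuples.
    pi_e(a, s), pi_b(a, s): action probabilities under the target and behavior
    policies; pi_b would come from logged propensities.
    The cumulative product of ratios is what blows up as the horizon grows.
    """
    estimates = []
    for episode in trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(episode):
            weight *= pi_e(a, s) / pi_b(a, s)   # grows or shrinks multiplicatively in t
            ret += (gamma ** t) * r
        estimates.append(weight * ret)
    return float(np.mean(estimates))

def marginalized_is_estimate(transitions, state_ratio, pi_e, pi_b):
    """Marginalized importance sampling, sketched up to normalization.

    transitions: flat list of (s, a, r) drawn from the behavior data.
    state_ratio(s): ratio of the target policy's state occupancy to the behavior
    policy's at s, which must be estimated separately (omitted here).
    Each sample carries only a per-step weight, so no product over the horizon.
    """
    terms = [state_ratio(s) * (pi_e(a, s) / pi_b(a, s)) * r for s, a, r in transitions]
    return float(np.mean(terms))
```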
And recently we've extended that OPE-PG connection further and established many more connections between the whole family of variance reduction techniques for importance sampling and variance reduction for PG. So I think that's another piece of evidence, or another fact, to keep in mind when you think about when importance sampling works and when it doesn't. Do you have opinions on how causal models should apply to RL, and are they a promising direction for OPE under distribution shift? Yeah, I think that's a good question. Especially in the context of OPE, causal inference will definitely play a big role. The way you ask the question is: can we use ideas from causal inference to improve current OPE methods, maybe in the current setup? The way I think about it is that when you actually apply OPE in the real world, especially where there is confoundedness, that's where you really need causal inference methods, because the majority of standard OPE methods in the reinforcement learning literature assume unconfounded data. Many in the RL audience may not know precisely what confoundedness means. Really, it means that in the historical data you've collected, the decisions, the actions taken in that data, were not taken by you, and those actions may have depended on information that is not accessible to you. For example, in a healthcare scenario, you may have data generated by past decisions made by human doctors, and now you try to use it to improve your automated healthcare decision-making system, which, say, featurizes patients using certain features. But back when that data was collected, the human doctors may have based their decisions on, for example, the facial expression of the patient and much more subtle information that is simply not recorded in your current dataset. So that's where confoundedness comes into play, and you really need causality, the tools from causal inference, to combat that. The way I think about it is that the issue of confoundedness makes the problem even more complex and challenging than OPE already is, and we will need causal inference to deal with those issues. Can you tell us a bit about your research directions going forward? Yeah, in general, the typical way I find research problems is that there are several of these big-ish problems that just stay in my mind all the time. Several of them I've mentioned earlier, like: what is the fundamental representational limit of the various modes of reinforcement learning? And in some of my papers, I try to address bits of these big open problems, little by little. On the other hand, every time you finish a paper, you usually cannot solve all of the open problems. You always leave some problems open, some loose ends you had overlooked before. And after you've done a paper, you usually sit down and reflect on what you have done, and on the questions that were lingering in your mind while you were writing it. And then you realize maybe there are some brand-new questions out there that need to be addressed.
And that naturally leads to the next research topic. And outside of your own work, are there things happening in RL these days that you're particularly interested in? Yeah, back in the day, RL theory used to be a very small field. But in recent years we've all seen very rapid growth of interest and attention in the field, and there are tons of papers on arXiv almost every week, if not every day, on various directions in RL theory. Just like everyone else, I can barely keep up with all these latest results. And of course, from time to time there are papers that are just very, very interesting and immediately catch my attention. For example, I mentioned that I've recently worked on OPE with marginalized importance sampling, and that was really inspired by this 2018 work, which definitely surprised me when I saw it for the first time. And other than RL theory, I also keep an eye on what's happening in empirical RL research, like the deep RL work. As I mentioned, the empirical RL work is sort of the optimistic estimate of what is plausible, what we can possibly achieve, the skyline of RL in various situations. When you do theory, you always need to make certain assumptions, and I would actually say that in some situations statisticians have a very poor idea of which assumptions are realistic and which are not, because whether an assumption is realistic really depends on whether it can be satisfied in practical scenarios. And to get an idea of which assumptions are plausible and which are not, you really need to pay some attention to what's happening in the empirical community and see what kinds of methods have been successful and which have not. Professor Nan Jiang, I've learned so much from speaking with you today, and I know our audience is grateful. I look forward to following your research going forward, and thank you so much for sharing your time with all of us today. Thanks again for having me here; it was a great pleasure talking to you. That's our episode for today, folks. Be sure to check talkrl.com for more great episodes.
[ { "end": 13, "start": 0, "text": " This is Talk by Rail Podcast. All reinforcement learning all the time." }, { "end": 21, "start": 13, "text": " Interviews of brilliant folks across the world of RL. I'm your host, Robin Chohan." }, { "end": 25, "start": 21, "text": " Nan Jiang is an assistant professor of computer science at University of Illinois." }, { "end": 33, "start": 25, "text": " He was a postdoc at Microsoft Research and did his PhD at University of Michigan under Professor Sithindr Singh." }, { "end": 36, "start": 33, "text": " Welcome, Professor Nan Jiang. It's very kind of you to join us today." }, { "end": 38, "start": 36, "text": " Thanks for having me here." }, { "end": 41, "start": 38, "text": " So how do you describe your area of interest?" }, { "end": 45, "start": 41, "text": " Yeah, so in general, I work in RL theory." }, { "end": 50, "start": 45, "text": " But there are many types of theory you can do in RL." }, { "end": 57, "start": 50, "text": " And one particular or some of the things I particularly focus on is in terms of, you know," }, { "end": 67, "start": 57, "text": " how can we give performance guarantees and design a principled algorithm for the function approximation case?" }, { "end": 73, "start": 67, "text": " And what kind of assumptions do we need to achieve those guarantees?" }, { "end": 79, "start": 73, "text": " And what are the fundamental limits of reinforcement learning under various settings?" }, { "end": 86, "start": 79, "text": " Among all of these, you know, typically I look for sample complexity resource." }, { "end": 96, "start": 86, "text": " That is, can we achieve, learn our policy, evaluate our policy using as little data as possible?" }, { "end": 103, "start": 96, "text": " I see from your website that you are co-authoring a monograph titled Reinforcement Learning, Theory and Algorithms." }, { "end": 105, "start": 103, "text": " And it's currently a working draft." }, { "end": 107, "start": 105, "text": " Can you tell us a bit about this book?" }, { "end": 114, "start": 107, "text": " Yeah, so really, you know, this all started when I, actually before I started in UIEC," }, { "end": 122, "start": 114, "text": " I knew that I'm well, I would teach a PhD seminar course when I start as a assistant professor." }, { "end": 125, "start": 122, "text": " So I started to prepare course notes for that." }, { "end": 135, "start": 125, "text": " And the reason I need to write down my own course notes is because for the kind of RL theory analysis that I want to teach my students," }, { "end": 144, "start": 135, "text": " I couldn't really find any material like that in, you know, existing, you know, classical text." }, { "end": 148, "start": 144, "text": " So I just ended up like writing a lot of things." }, { "end": 154, "start": 148, "text": " And then, you know, as I taught the course, I expanded on that." }, { "end": 163, "start": 154, "text": " And my collaborators at Agro and Shankarate at Microsoft Research and UW," }, { "end": 168, "start": 163, "text": " they found the set of notes and thought it was good." }, { "end": 176, "start": 168, "text": " And they used that when they co-taught a similar course in University of Washington," }, { "end": 185, "start": 176, "text": " and further expanded on that, and included a lot of topics that I didn't really touch on before, like, policy gradient and imitation learning." }, { "end": 186, "start": 185, "text": " And now here we are." 
}, { "end": 195, "start": 186, "text": " We have a set of course notes, a little bit, or like, I'll organize, but the eventual goal is that we may just, at some point," }, { "end": 203, "start": 195, "text": " we'll sit down and reorganize everything and press book at the end of the day." }, { "end": 210, "start": 203, "text": " So maybe you partly answered my next question, but how would you situate or compare it to books like Sutton and Bartos," }, { "end": 218, "start": 210, "text": " reinforcement learning introduction or Chabas, specialties, algorithms for reinforcement learning, or others in the genre?" }, { "end": 223, "start": 218, "text": " Yeah, I mean, those are great textbooks, and I started in order to like reading these textbooks myself." }, { "end": 229, "start": 223, "text": " I think they're very great books as introductory material." }, { "end": 238, "start": 229, "text": " And specifically, like, the Sutton and Bartos book provides a lot of wonderful insights from the more kind of AI perspective," }, { "end": 243, "start": 238, "text": " where Chabas book takes more of like a machine learning perspective, right?" }, { "end": 249, "start": 243, "text": " So if you have more of like a learning theory background, and want to know what RL is about," }, { "end": 252, "start": 249, "text": " Chabas book would be a great choice, right?" }, { "end": 259, "start": 252, "text": " But the problem with both books is that you don't really find that much of an theoretical analysis," }, { "end": 272, "start": 259, "text": " or performance guarantees in these books, because these are introductory books, they're not supposed to touch heavily on these kind of materials." }, { "end": 277, "start": 272, "text": " And that goes back to what I was saying a minute ago." }, { "end": 285, "start": 277, "text": " If I really want a set of materials that teach students how to perform theoretical analysis," }, { "end": 291, "start": 285, "text": " especially sample complexity analysis in reinforcement learning, I really just need to write up my own thing." }, { "end": 298, "start": 291, "text": " Can you help us understand more about the relationship between theory and practice in RL today?" }, { "end": 305, "start": 298, "text": " Like, is the current state of RL theory largely limited to simpler problems and simpler function approximations?" }, { "end": 312, "start": 305, "text": " Yeah, that's a, you know, that's a great question, and that's a question that's always going to be there, right?" }, { "end": 323, "start": 312, "text": " So I think to a good extent, yes, you know, we have a very mature theory for RL in the tabular setting, right?" }, { "end": 333, "start": 323, "text": " I think this audience is familiar with this concept, but when we say tabular, we may have a finite small state and action space," }, { "end": 341, "start": 333, "text": " and you can afford a sample and a computational complexities that scale polynomially with the number of states and actions." }, { "end": 346, "start": 341, "text": " Right? So there the, our understanding is very, very sharp." }, { "end": 353, "start": 346, "text": " We still make progress in that setting, but, you know, the gap is really close." }, { "end": 359, "start": 353, "text": " However, as we all know, in the real world, in more challenging practical problems," }, { "end": 364, "start": 359, "text": " all of them, most of them are not in the tabular setting, right?" 
}, { "end": 377, "start": 364, "text": " So I think previously the amount of sample complexity results, sorry, the amount of theoretical results for the general function approximation setting is quite scarce." }, { "end": 392, "start": 377, "text": " But in recent years, we're seeing a growing number of papers and quite fast progress in some sub-arrows." }, { "end": 402, "start": 392, "text": " For example, when we think of linear function approximation in some special structured environments," }, { "end": 410, "start": 402, "text": " we have a, we have a lot of theoretical understanding now compared to, say, five years ago." }, { "end": 423, "start": 410, "text": " The other sort of extreme is when you just say, I have a function estimator that has limited statistical capacity, and otherwise, I don't want to assume anything on that." }, { "end": 435, "start": 423, "text": " What we can do with that in RL. So that, I mean, I myself have worked on papers of that flavor, like, you know, for a while." }, { "end": 448, "start": 435, "text": " So we also have some good understanding for that regime. But, you know, for many practitioners, the really interesting setting is probably, like, when you use function approximators," }, { "end": 452, "start": 448, "text": " they are not unstructured, but something like a neural nets, right?" }, { "end": 461, "start": 452, "text": " And we know neural nets have many amazing and sometimes confusing behaviors or properties, even in supervised learning, right?" }, { "end": 470, "start": 461, "text": " And we've probably heard of all these, like, over-premetization. How can we learn neural nets that has more parameters than the amount of data?" }, { "end": 486, "start": 470, "text": " So if you want to study RL with neural function approximators, you would need to bring all those kind of understandings from supervised learning and combine them organically with the unique challenges of RL." }, { "end": 496, "start": 486, "text": " And I think this is really, like, very, very understudied area, and we're just about to start, you know, as a community working on that." }, { "end": 514, "start": 496, "text": " And actually, to that point, I'm co-organizing a workshop. I think I... It's almost approved, so I won't say too much about it, but it will happen in summer 2021, where the focus is exclusively on the RL theory." }, { "end": 521, "start": 514, "text": " So I'm really looking forward to, you know, the future progress on this topic from the community." }, { "end": 527, "start": 521, "text": " So would you say that the gap between theory and practice is getting wider or is starting to narrow?" }, { "end": 542, "start": 527, "text": " Yeah, so that's another interesting question. So in terms of, you know, so one way that I think about theory and practice, especially in the context for RL," }, { "end": 551, "start": 542, "text": " is that it's kind of theory gives you the baseline where the empirical work gives you the skyline, right?" }, { "end": 567, "start": 551, "text": " So when you're empirical working, always evaluate it on a set of environments like the Atari Games or Mojoukou or whatever, you'll never be able to test your algorithm on all the benchmarks that you would like to." }, { "end": 583, "start": 567, "text": " On the other hand, in theory, you get, oftentimes you get worst case guarantees that holds for like all environments that can be casted, say, as MDPs, but that's a very large set of environments, and many of them are not..." 
}, { "end": 596, "start": 583, "text": " Many of the problem instances in this family are not problems that you would really care about. So, you know, theory is always pessimistic and empirical is always sort of optimistic." }, { "end": 601, "start": 596, "text": " And, you know, bridging their gap is..." }, { "end": 629, "start": 601, "text": " I mean, in a sense, I think they're sort of like working on different things where empirical work shows you promises or where or skylines where like what we could possibly do, where theory usually catch up from behind and trying to tell you, oh, like for this kind of problem, we surely know that we can do something here, right?" }, { "end": 640, "start": 629, "text": " So that's the relationship between theory and practice that I think of. Now, in terms of whether their gap is expanding or, you know, closing, I think it really depends on the topic, right?" }, { "end": 651, "start": 640, "text": " So, I think, for example, in the function approximation, in terms of understanding the role of function approximation in RL, I think we're..." }, { "end": 678, "start": 651, "text": " In general, my feeling is that we're getting closer. For example, if you see some recent empirical papers that try to diagnose what's really happening when you run deep RL algorithms, they from time to time, they will refer to theoretical papers for, you know, the theoretical underpinnings for some of the empirical behaviors that they see those algorithms are doing." }, { "end": 691, "start": 678, "text": " So in planning this episode, you mentioned three major areas of interest. They were model-based versus model-free RL, simulation versus real, and evaluation of RL algorithms and overfitting." }, { "end": 702, "start": 691, "text": " So, I was hoping to start with the first one, model-based versus model-free RL. Can you share with us a bit of perspective your perspective on this dichotomy?" }, { "end": 711, "start": 702, "text": " So, you know, I think model-free versus model-based is probably one of the most overloaded and confused ideas or concepts in all of RL." }, { "end": 722, "start": 711, "text": " Really, like when different people say model-based versus model-free, they sometimes make very different things, and I think that's part of where the confusion comes from." }, { "end": 740, "start": 722, "text": " So, for example, in the tabular case, there's this very classical notion that model-based is more sample-efficient than model-free RL. But when you actually say that, what you mean, for example, is that, you know, if you give me a stream of data, like, as ARS prime, two pulls," }, { "end": 755, "start": 740, "text": " you can use them to build an empirical model of the world and then complete the optimal policy from it. Or, conversely, you can just pass the stream data to something like Q-learning and let it tell you what the optimal policy is." }, { "end": 760, "start": 755, "text": " And if we compare these two algorithms, of course, model-based will be much more sample-efficient." }, { "end": 771, "start": 760, "text": " But if you really look into this example, you realize that maybe the difference is not really model-based versus model-free, it's say one-pass versus multi-pass algorithms." }, { "end": 778, "start": 771, "text": " Because your Q-learning algorithm is only allowed to read and use every data point, like once." 
}, { "end": 800, "start": 778, "text": " So, there's this very interesting result, I think it's almost a folklore, that if you turn the stream data into a replay-bapper and allow Q-learning to make multiple pass, actually infinite many passes over this dataset, it eventually just converts to the same solution as the model-based solution." }, { "end": 812, "start": 800, "text": " So, in this case, if you remove this distinction between one-pass versus multi-pass, I would say that in a tabler setting, maybe there's no distinction between model-based and model-free RL." }, { "end": 827, "start": 812, "text": " So, when I say model-based versus model-free, I'm thinking more of a function-procimate setting and some of the fundamental representation difference between these family of RL algorithm." }, { "end": 839, "start": 827, "text": " And my view has been heavily inspired by this classical paper towards unified theory of state abstractions for NDPs by Lee Hongli and others in 2006." }, { "end": 856, "start": 839, "text": " And the overall general message of the idea is that if you look at a different family of RL algorithms, like model-based value function-based and policy-based algorithms, there's a very natural hierarchy and trade-off here." }, { "end": 868, "start": 856, "text": " So, if you run a model-based algorithms, you're implicitly assuming that your representation is rich enough to capture the dynamics of the world." }, { "end": 880, "start": 868, "text": " And if you can capture the dynamics of the world, you can use that representation to also express, say, the Q function of the world or even the near-optical policy of this environment." }, { "end": 890, "start": 880, "text": " So, your representation must be powerful enough to allow you to express all these objects. So, that's a big assumption." }, { "end": 895, "start": 890, "text": " It's a lot of burden in terms of designing your, say, feature representation." }, { "end": 912, "start": 895, "text": " On the other hand, with the strong assumption, it comes with great guarantees that almost all RL algorithms, if you hook up with such kind of nice representation, they'll work just very like all the guarantees will just pass through and you get all the nice guarantees." }, { "end": 929, "start": 912, "text": " On the other hand, if you move down this hierarchy towards the policy-based algorithm side, you now have very light representation burden. All you need is to be able to, say, express a near-optical policy." }, { "end": 936, "start": 929, "text": " You don't need to worry about reward function, transition dynamics, all of that. So, you may end up with a very simple representation." }, { "end": 945, "start": 936, "text": " But the bad news is that here, the kind of algorithms you can work with in, like, using these kind of representation will be relatively limited." }, { "end": 960, "start": 945, "text": " Maybe you can do policy search, maybe you can do policy gradient type of algorithms, but if you try to run something like a model-based algorithm on these kind of simplified representation, things can completely break down." }, { "end": 967, "start": 960, "text": " So, that's the kind of trade-off that I usually think of in terms of model-based versus model-free RL." }, { "end": 980, "start": 967, "text": " One thing that comes to mind for me with model-based RL is that we might have a chance to learn something different than the reward function that we're given at first." 
}, { "end": 992, "start": 980, "text": " Maybe we can learn other types of behaviors, which doesn't seem to be as feasible with value or policy-based RL. So, it seems like we have one one extreme with model-based RL." }, { "end": 1004, "start": 992, "text": " Maybe we can learn any policy in the best case, and in value or policy-based RL, we're really limited to just working with that one reward function." }, { "end": 1015, "start": 1004, "text": " Is there a continuum there? Is there anything in the middle? Oh, definitely. We can learn something related to the original task, but a bit different." }, { "end": 1026, "start": 1015, "text": " I think that's a great point. So, you're saying that if I can model the dynamics of the world, I can just plug in any reward function I like and get the near-often policy for that reward function just by planning." }, { "end": 1038, "start": 1026, "text": " That's actually a great point. In fact, just out of that, there's a very natural middle ground between model-free RL and model-based RL." }, { "end": 1055, "start": 1038, "text": " What you can possibly do is to specify the space of reward function that you might be interested in working with, and then try to model enough, but not to model the entire dynamics of the entire world." }, { "end": 1064, "start": 1055, "text": " So that you will be able to learn the near-often policy for all the reward functions in that family." }, { "end": 1076, "start": 1064, "text": " I think this is somewhat related to the notion of reward-free RL or reward-free exploration, so things like that." }, { "end": 1084, "start": 1076, "text": " There are a couple of quite a few recent papers on this topic, so it's kind of like a growing step direction." }, { "end": 1095, "start": 1084, "text": " Do you mention a paper that you co-authored with when-son model-based RL in contextual decision processes, pack-bounds and exponential improvements over model-free approaches?" }, { "end": 1101, "start": 1095, "text": " That was from 2018. Can you help us understand what is this paper saying?" }, { "end": 1121, "start": 1101, "text": " In this paper, we consider the problem of how do you do systematic exploration in large RL problems. So really just think of it as a big MDP where you must use some form of function passmation." }, { "end": 1142, "start": 1121, "text": " Previously, in 2017, with some of the same co-authors, we had a paper that does the value-based version of, basically roughly the same idea, where all you have is a space of candidate value functions, and you want to do good exploration with that." }, { "end": 1164, "start": 1142, "text": " In this paper, what we do is to assume that we actually have a class of candidate models of the environment, and as I mentioned before, when you have stronger representation power to represent the dynamics of the world, you naturally expect to be able to do more with it." }, { "end": 1191, "start": 1164, "text": " That's precisely what we show, that in some special cases, where our model-based algorithm can achieve polynomial sample complexity, whereas any kind of value-based algorithms where the notion of value-based or model-free is defined in a particular manner, just has to suffer exponential sample complexity." }, { "end": 1197, "start": 1191, "text": " That's the part of exponential improvements that is suggested in the title." }, { "end": 1213, "start": 1197, "text": " This paper introduces a witness rank, something called witness rank, and can trust it to Belman rank. 
Can you help us understand what it is that these ranks measure, and what witness rank is doing?" }, { "end": 1227, "start": 1213, "text": " Yes, sure. When you talk about rank, I think many people would actually think of matrix rank. It's actually pretty closely related in this case." }, { "end": 1251, "start": 1227, "text": " One canonical example where both witness rank and Bellman rank are low is when you write down the transition dynamics of your MDP as a state-action by next-state transition matrix; if this matrix has low rank, then your environment naturally will have both low Bellman rank and low witness rank." }, { "end": 1269, "start": 1251, "text": " The reason we care about these ranks is because when you really think of the fundamental nature of systematic exploration in RL, what is the core difficulty here? Why is exploration so hard in RL, when you don't have this challenge in supervised learning?" }, { "end": 1285, "start": 1269, "text": " The reason is because in supervised learning, at least in the most standard model, you have data from the training distribution, and you test your classifier using test data drawn from the same distribution." }, { "end": 1301, "start": 1285, "text": " You don't have distribution shift. The biggest issue in RL is that there is sometimes severe distribution shift. When you execute different policies, you can see very different distributions of data." }, { "end": 1325, "start": 1301, "text": " In fact, you can show that if you don't regulate this, if you allow an environment where exponentially many different candidate policies can generate exponentially many distributions that are drastically different from each other, then there is no hope to actually learn anything using polynomial sample complexity." }, { "end": 1343, "start": 1325, "text": " In this context, what you really need to assume in order to make any polynomial sample complexity claim at all is to say that all those distributions should share something similar." }, { "end": 1353, "start": 1343, "text": " Either they have some good overlap, or they have some shared common structure, such that, for example, all of them can be embedded in a lower-dimensional space." }, { "end": 1375, "start": 1353, "text": " That's where this notion of rank comes from. Both the notion of Bellman rank and witness rank somehow characterize how low a dimension, what kind of low-dimensional space, you can embed all those distributions that you can possibly generate in this environment into." }, { "end": 1393, "start": 1375, "text": " The difference between the two notions of rank is that, as I mentioned, Bellman rank, which is related to my previous paper, is closely tied to value-based RL, whereas witness rank is specifically designed for model-based RL." }, { "end": 1417, "start": 1393, "text": " And because of this difference, there are some environments that witness rank can handle, but Bellman rank cannot. A canonical example of that is what we call a factored MDP, sometimes known as an MDP described by a dynamic Bayesian network, where you just have your state represented by a number of state variables." }, { "end": 1431, "start": 1417, "text": " And as the state evolves, the value of each variable for the next time step only depends on the value of a small number of nodes from the previous episode, sorry, from the previous time step."
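As a small numerical illustration of the low-rank structure just described (a toy construction with assumed dimensions, not from the episode): if the state-action by next-state transition matrix factorizes through a d-dimensional embedding, its matrix rank is at most d, and this is the kind of structure under which quantities like Bellman rank and witness rank stay small.

```python
import numpy as np

# Toy low-rank MDP: P[(s, a), s'] = phi(s, a) . mu(s') with a d-dimensional embedding.
rng = np.random.default_rng(1)
S, A, d = 20, 4, 3
phi = rng.dirichlet(np.ones(d), size=S * A)   # phi[(s, a)] is a distribution over d latent factors
mu = rng.dirichlet(np.ones(S), size=d)        # mu[k] is a distribution over next states
P = phi @ mu                                  # (S*A) x S transition matrix

print("rows are valid distributions:", np.allclose(P.sum(axis=1), 1.0))
print("rank of the transition matrix:", np.linalg.matrix_rank(P))  # at most d, far below min(S*A, S)
```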
}, { "end": 1446, "start": 1431, "text": " And turns out that in this kind of model, you can actually show that the Wunders rank is small, but in some special cases, no value based algorithms can solve the, can explore its problems efficiently." }, { "end": 1460, "start": 1446, "text": " So are these types of ranks applicable in real world or common scenarios? Like, can we talk about rank of Atari or Mojoco or things like that? Or is it limited to cases where we have a linear transition matrix?" }, { "end": 1479, "start": 1460, "text": " Yeah, I think that's a great question. So I think the real situation is somewhere in between. So we do have several, some of the actual benchmarks that we use in DPRL today that can be seen as having low Belmer rank or Wunders rank." }, { "end": 1508, "start": 1479, "text": " One canonical example is what we call the visual grid world, right? I'm sure you probably have seen these environments before where you have, you actually have a grid world environment, but you actually render it in a 3D say game engine like Minecraft, and instead of telling the agent which grid it is in, you give it the first person view of the whole environment." }, { "end": 1527, "start": 1508, "text": " And assuming that you're the role pixel image that you see is approximately Markovian, then this would be an environment that literally has low Belmer rank or low Wunders rank where the rank is bounded by the number of grids." }, { "end": 1542, "start": 1527, "text": " And as you mentioned, Mojoco, that gives you another example. If you actually have a linear control problem, like say, Alqr or something like that, that also will yield a low Belmer rank and low Wunders rank." }, { "end": 1560, "start": 1542, "text": " However, I would say that for Atari games, especially some of the more complicated games, it's very hard to characterize them using these very nice, clean structures that are proposed in theory." }, { "end": 1578, "start": 1560, "text": " I think it's due a big open problem as of today that what kind of structures, very general structure assumptions can we use to capture games like Atari games that can develop some nice guarantees for environments of those structures." }, { "end": 1581, "start": 1578, "text": " I think that's due a big and interesting open problem." }, { "end": 1591, "start": 1581, "text": " I attended your RL theory seminar earlier last week, entitled information theoretic considerations in battery enforcement learning." }, { "end": 1596, "start": 1591, "text": " Can you tell us just briefly about what that you're talking about?" }, { "end": 1618, "start": 1596, "text": " Yeah, so the topic there is related to the notion of model based versus model for it. So the question I'm thinking about is literally in Bash RL where you just have a set of data points and you want to compute near off from policy, and you want to use a value based RL algorithm and say trying to approximate the Q star function." }, { "end": 1637, "start": 1618, "text": " What's the fundamental limitations there? So in particular, in supervised learning, the strongest representation assumption we usually make is called realized ability that your function approximator can actually just capture the target function you're aiming to learn." }, { "end": 1654, "start": 1637, "text": " And here this is Q star, but turns out that many of us believe that in the RL case, especially in the Bash RL case, that's not going to be sufficient. 
We often need expressivity assumptions that are much stronger than that." }, { "end": 1673, "start": 1654, "text": " But at the same time, we don't really know if we can just get away with realizability alone. So it's some discussion around this idea of where the true limit of batch RL is in terms of representation." }, { "end": 1685, "start": 1673, "text": " In the seminar, you commented at one point, and I'm paraphrasing, if that's okay: you said roughly that distributional RL is somewhere between model-free RL and model-based RL. Is that right?" }, { "end": 1696, "start": 1685, "text": " Yeah, kind of. You can say that, you know, again, depending on what you mean, there's a specific sense in which that is correct." }, { "end": 1716, "start": 1696, "text": " Think about this, right? If I'm able to model the true dynamics of the world, then using the same representation, and using similar reasoning to what I used earlier, you will be able to express the object of interest in distributional RL." }, { "end": 1731, "start": 1716, "text": " I don't remember what it is called, but basically what you try to do is, for any given state-action pair, you want to be able to predict the distribution of returns you would obtain from this state-action pair under a certain policy." }, { "end": 1749, "start": 1731, "text": " Now, if you can represent the entire dynamics, you can sort of represent that. And if you can represent that, it also means that you will be able to represent the usual expected value function, because all it takes is to, you know, apply an expectation operator." }, { "end": 1763, "start": 1749, "text": " So in that sense, you know, distributional RL models slightly more than value-based RL, but also a little bit less than model-based RL. So in that particular sense, I would say yes." }, { "end": 1790, "start": 1763, "text": " Distributional RL is in the middle between model-based and value-based. And back to the topic of that seminar: if it is true that in the batch setting value-based RL faces some fundamental representation hardness, then maybe, or maybe not, distributional RL can be a way to lift it without going all the way to model-based RL." }, { "end": 1796, "start": 1790, "text": " Do you see other approaches between model-free and model-based RL, and is that a rich area?" }, { "end": 1811, "start": 1796, "text": " Yeah, I think there are definitely some ideas that float around, especially some ideas that you can find in the empirical work in deep RL, that nicely fit in between value-based and model-based." }, { "end": 1840, "start": 1811, "text": " One example is value-based RL with auxiliary tasks. So let's say you do something like DQN, but instead of just training it with, you know, the TD type of loss, you also ask your network to predict some other things that are happening in the environment, and use that to help you learn a better representation." }, { "end": 1848, "start": 1840, "text": " So that would be a very good example of something in between model-free, or I would say value-based, and model-based RL." }, { "end": 1862, "start": 1848, "text": " Another thing to think about is, you know, some of the hardness results that I've been developing or deriving in my papers assume some kind of minimal representation power for value-based RL." }, { "end": 1873, "start": 1862, "text": " So I have a function class that can express Q-star, but nothing else. I can't express Q-pi. 
I can't express, you know, other functions of interest in this environment." }, { "end": 1883, "start": 1873, "text": " So if the hardness is associated with that, maybe one way you can circumvent it is to, you know, introduce the notion of over-parameterization, right?" }, { "end": 1897, "start": 1883, "text": " Try to have a network, or a representation, that can predict more than just, say, the function that you are eventually interested in learning, but something else, right? So this is also related to, you know, auxiliary tasks." }, { "end": 1913, "start": 1897, "text": " Another interesting idea that I've been fascinated by for years, but where I have not seen very much development, is this idea of learning incomplete models for reinforcement learning, right?" }, { "end": 1931, "start": 1913, "text": " So one big issue with model-based RL, especially if you do it in the raw state space, is that you're just predicting so many things, right? There are lots of unimportant details in the real world that you probably don't ever need to care about." }, { "end": 1954, "start": 1931, "text": " But if you just run a vanilla model-based RL algorithm, you're trying to model all of it, right? Can you just do less than that? Just try to pick bits of the world, predict the dynamics on those fragments, and somehow use these incomplete models for prediction and planning." }, { "end": 1976, "start": 1954, "text": " You know, some of these ideas have seen some nice early investigation and exploration in the PhD thesis of Erik Talvitie, who happens to be my academic brother, but I really look forward to seeing, for example, some modernization of those ideas in the deep RL setting." }, { "end": 1995, "start": 1976, "text": " Maybe that could be considered a way to combine causal reasoning with RL as well, because if you choose which parts of the model you want to include, you can use your domain knowledge of causality to only model those parts that are relevant to your problem at hand and discard the rest." }, { "end": 2023, "start": 1995, "text": " Yeah, I think, you know, there are a lot of ideas that can come into this framework of incomplete models, including causality, as you said. And also, you know, some of the state abstraction concepts like bisimulation and homomorphism are also rigorous mathematical frameworks that define what can and what cannot be discarded as unimportant details." }, { "end": 2035, "start": 2023, "text": " So I think it would need some combination of all these ideas to come to, you know, a relatively complete solution to this problem." }, { "end": 2047, "start": 2035, "text": " So let's move on to the second topic, sim versus real. You refer to simulation-based RL as solving large planning problems with learning methods, and RL from data as solving learning problems." }, { "end": 2050, "start": 2047, "text": " Can you help us understand the distinction here?" }, { "end": 2066, "start": 2050, "text": " Yeah, so, you know, I think this is another common confusion in the area of RL, in the sense that when an RL person writes a paper, you don't know what problem he or she is really interested in solving, right?" }, { "end": 2070, "start": 2066, "text": " So some people are really just interested in solving planning problems, right?" }, { "end": 2086, "start": 2070, "text": " So I have a large black-box simulator and I just want to compute a near-optimal policy for that. 
And I just happen to be using sampling-based methods, which sometimes look very similar or identical to learning methods." }, { "end": 2096, "start": 2086, "text": " Whereas other people try to solve RL problems that are really just defined by data, right?" }, { "end": 2109, "start": 2096, "text": " So let's say you want to do automated dialogue systems, online customer service, or use RL to improve decisions in healthcare scenarios." }, { "end": 2114, "start": 2109, "text": " So in these kinds of scenarios, you just don't have a simulator, and you deal with data." }, { "end": 2119, "start": 2114, "text": " So I think there's a very big difference there, right?" }, { "end": 2128, "start": 2119, "text": " So for example, there are some algorithms or some sub-areas of RL that are dedicated to the simulator setting." }, { "end": 2133, "start": 2128, "text": " One example that I think of is Monte Carlo tree search, right?" }, { "end": 2143, "start": 2133, "text": " So in the early days of MCTS, if you look at the papers, they would specifically say sample-based planning in there, instead of learning, right?" }, { "end": 2151, "start": 2143, "text": " Although MCTS comes from the RL community, it's really a planning algorithm." }, { "end": 2155, "start": 2151, "text": " And on the other hand, you have some problems like off-policy evaluation, right?" }, { "end": 2159, "start": 2155, "text": " How do you use your logged data to estimate the performance of a policy?" }, { "end": 2171, "start": 2159, "text": " And this really only makes sense when you don't have a simulator, because if you have a simulator and you want to learn the performance of a policy, the easiest way is just to run that policy, which we do all the time in deep RL today, right?" }, { "end": 2177, "start": 2171, "text": " So I think it's pretty important to make a distinction between these two settings." }, { "end": 2184, "start": 2177, "text": " So when it comes to learning from real-world data, I've been thinking more about the type of noise that we see in real-world data." }, { "end": 2198, "start": 2184, "text": " It seems like the type of stochasticity in the real world is more complex than what you can easily model in a sim by adding any simple type of noise." }, { "end": 2206, "start": 2198, "text": " And so that makes it hard to model, and that makes building world models and off-policy evaluation more challenging." }, { "end": 2213, "start": 2206, "text": " And then it's expensive to deploy these policies in the real world to evaluate how they actually do in a production setting." }, { "end": 2220, "start": 2213, "text": " And so it seems like these things combine to make it really hard to iterate on RL with real-world data." }, { "end": 2232, "start": 2220, "text": " And so on one hand, we have this very advanced simulation-based RL, like you were saying, with Monte Carlo tree search. And so now we have MuZero and Dota and Agent57." }, { "end": 2241, "start": 2232, "text": " And that stuff's all really far along, but on the other side, with real-life RL, it seems like we're maybe working more on the very basics." }, { "end": 2244, "start": 2241, "text": " Is that how you see it right now?" }, { "end": 2256, "start": 2244, "text": " I think I agree with a lot of the points you make here, although I would say, you know, for some of the simulation-based RL, they actually have a serious goal, right?" }, { "end": 2270, "start": 2256, "text": " And their goal is for real. 
For example, when you try to build an agent that can play Go or Dota, they have their real-world benefits or value, right?" }, { "end": 2282, "start": 2270, "text": " So actually in these cases, solving the planning problem defined by the simulator can be your goal. And there are various grand challenges there." }, { "end": 2294, "start": 2282, "text": " And we've seen very impressive advances. As you mentioned, you know, we have AlphaGo, AlphaZero, and this amazing, you know, Dota-playing agent." }, { "end": 2304, "start": 2294, "text": " On the other hand, it really depends; for some people, solving the simulator problem is not their final goal, right?" }, { "end": 2318, "start": 2304, "text": " The reason we use simulators in RL research is because we use them as benchmarks, or as a way to emulate what would happen if we were to apply RL in the real world." }, { "end": 2332, "start": 2318, "text": " So in that case, I would say yes, there are a lot of the difficulties that you mentioned earlier; for example, you have sample efficiency issues, and there are consequences and risks of taking real decisions." }, { "end": 2344, "start": 2332, "text": " It is difficult to run a policy in the real world. And there are actually many more of these kinds of, you know, difficulties associated with real-world RL." }, { "end": 2364, "start": 2344, "text": " And for many of these aspects, it is very hard to study them in the simulator setting, or, as of now, we pay much less attention to them in our simulator-centered RL research, right?" }, { "end": 2386, "start": 2364, "text": " So just to add a few other examples, right? If you actually learn from real-world data in scenarios like healthcare, more likely than not, you will be given some passive data that arises from, you know, for example, previous historical medical records." }, { "end": 2401, "start": 2386, "text": " And in that case, you know, thinking about confoundedness and introducing something like causal inference could be crucial, which we're not doing a lot of at all in simulator-based RL research." }, { "end": 2411, "start": 2401, "text": " So what is missing that keeps us from seeing more RL in the real world? And I guess, based on your answer, improvements in simulations won't be enough?" }, { "end": 2420, "start": 2411, "text": " I mean, as I mentioned, you know, we can always use a simulator as a benchmark or as an emulator of what happens in the real world." }, { "end": 2432, "start": 2420, "text": " I think part of what we really need is to take this view seriously, right? Use the emulator in a way that really tries to mimic what happens in the real world." }, { "end": 2444, "start": 2432, "text": " And sometimes it's surprisingly hard to do this, and I'll give you one example of this, right? For example, you know, I've been working on policy evaluation for quite a while." }, { "end": 2456, "start": 2444, "text": " And as we always do, we use simulators as benchmarks to test and evaluate and compare different OPE algorithms." }, { "end": 2471, "start": 2456, "text": " And in this case, you know, when you show off the performance of your algorithm on the simulator, it's very, very tempting to do hyperparameter tuning, just as everyone else does in deep RL." }, { "end": 2482, "start": 2471, "text": " But on the other hand, if you think about when you actually apply OPE in a real-world task, you realize that you just can't do hyperparameter tuning at all."
}, { "end": 2495, "start": 2482, "text": " Because what you usually tune against is the ground choose value of a policy, which is precisely what you're trying to estimate here and you don't have access to, right?" }, { "end": 2503, "start": 2495, "text": " It's pretty funny that there's one time where we submit a submitted paper and about empirical benchmarks." }, { "end": 2520, "start": 2503, "text": " And one of the reviewers says that you're just not doing enough hyper parameter tuning. I think that's kind of like the reflection of how people's mindset of, you know, we just need to, you know, turn hyper parameters to make this thing work in the simulator." }, { "end": 2538, "start": 2520, "text": " Whereas if you seriously use simulator as a to emulate the real world situation, you should put a lot more restrictions on yourself when it comes to, you know, measure the performance of your algorithm among other things." }, { "end": 2547, "start": 2538, "text": " So let's move to the third topic now that is evaluation of our algorithms and overfitting, which you started to touch on with the off policy valuation." }, { "end": 2553, "start": 2547, "text": " First can, can you just remind us what is the relationship between the topics of evaluation and overfitting?" }, { "end": 2570, "start": 2553, "text": " Yeah, so I guess when we said this, I'm really talking about this in the context of, you know, many people have been criticizing, especially empirical RL research as something like RL is the only machine learning paradigm where you test on training data." }, { "end": 2579, "start": 2570, "text": " So I think again, like there's some confusion and confusion of different ideas and concepts here." }, { "end": 2596, "start": 2579, "text": " But all the way, I think eventually the question is when you have an RL algorithms that are trained on some data or trained on some environments, how do you want to, how can you evaluate this algorithm?" }, { "end": 2619, "start": 2596, "text": " Right, so what kind of evaluation protocol do you use so that if the evaluation outcome is, is that this algorithm is very good, you're actually confident that this algorithm is, you know, generalizing properly for whatever generalization means and that it's not overfitting to the data or the environment that you trained it up." }, { "end": 2626, "start": 2619, "text": " So you suggested on Twitter that we might look to meta learning and transfer learning for generalization in RL, is that right?" }, { "end": 2633, "start": 2626, "text": " Yes and no, right. So again, it really depends on what type of generalization are you talking about, right?" }, { "end": 2647, "start": 2633, "text": " So I think when people criticize RL for like test on training data, what they really mean is that in RL, you train on relatively simple simplified or simple environments and you test on the same environments." }, { "end": 2652, "start": 2647, "text": " So that's kind of like test on your training data." }, { "end": 2659, "start": 2652, "text": " And sometimes what people really look for is actually kind of a transfer learning behavior, right?" }, { "end": 2676, "start": 2659, "text": " So for example, you learn to, I don't know, like pick up a hammer in this particular environment and let's say what people really want is that you actually learn how to pick up a hammer that you will be able to do the same thing." }, { "end": 2697, "start": 2676, "text": " So you put in a different environment, right? 
So what they don't want the agent to do is, for example, sometimes maybe the agent overfits to a particular environment, in that it uses some environment-specific visual cues to help it pick up a hammer, and such cues may be absent in a different environment." }, { "end": 2706, "start": 2697, "text": " So what people really, really want is, oh, can I just have the agent really just learn to pick up a hammer, right?" }, { "end": 2724, "start": 2706, "text": " But my reaction to that is, you know, in the standard mathematical framework of RL, what we really have is that, you know, there's a single environment, and you give data to the learner that are generated from this environment." }, { "end": 2752, "start": 2724, "text": " And it will succeed in this environment, period, right? In the standard framework, nothing is said about how the learner can transfer some of the abilities learned from one environment to another, unless you, you know, present the learner with a whole distribution of diverse environments, where typically you can think of it as a big environment with a diverse set of random initial starting states." }, { "end": 2774, "start": 2752, "text": " So that's why I said, if you really look for these kinds of transfer learning effects, then invoke a more appropriate mathematical framework to study that, instead of blaming the lack of transfer ability of RL algorithms that are designed for a single environment." }, { "end": 2784, "start": 2774, "text": " So to what extent can we blame overfitting and poor generalization on the function approximator versus the reinforcement learning side?" }, { "end": 2791, "start": 2784, "text": " Like, I guess with deep RL, it seems to me that we can make a lot of progress just by waiting for supervised learning to get better." }, { "end": 2794, "start": 2791, "text": " It seems like there's more to it here. Is that right?" }, { "end": 2811, "start": 2794, "text": " I think, if I understand your question correctly, part of the question is, you know, if we use something like deep neural nets, which are very powerful function approximators, we run the risk of, you know, fitting too many things, or fitting too precisely to the environment." }, { "end": 2826, "start": 2811, "text": " Right. So I don't really have a good answer to this question, although I suspect that, for example, maybe if you use a simpler function approximator, that may help with this particular kind of generalization." }, { "end": 2850, "start": 2826, "text": " So for example, in 2015, you know, there's this paper, State of the Art Control of Atari Games Using Shallow Reinforcement Learning, by Liang, Machado, Talvitie, and Bowling. What they show is that, at least as of 2015, the state-of-the-art Atari results could basically be achieved with, say, linear function approximation." }, { "end": 2862, "start": 2850, "text": " So maybe if that gives you the same kind of performance on the environment that you train on, maybe it will generalize better to, you know, slightly different environments."
}, { "end": 2891, "start": 2862, "text": " And on the other hand, as I mentioned before, really, really like, I think function approximator cannot be the, you know, cannot be the so answer to this question, right, because if you really look for those kinds of transfer learning behavior, basically, there must be a way where you communicate to the learner, what you're really looking to, you know, what you're really hoping that she learns, right." }, { "end": 2903, "start": 2891, "text": " Why does she, why is she supposed to know that picking up the hammer is so important on its own without relying on visual cues, right." }, { "end": 2913, "start": 2903, "text": " So if you're just around like a very standard algorithm in this, you know, information directly, you're just not letting the learner know what you really care about." }, { "end": 2927, "start": 2913, "text": " So if you care about that behavior, there must be a way to inject that kind of knowledge that kind of go into your learning algorithm or your data or anywhere in the execution of the algorithm." }, { "end": 2932, "start": 2927, "text": " So I think that's where we probably need to think more about." }, { "end": 2958, "start": 2932, "text": " I'm reminded of an open, I did that work with the shadow hand and the Rubik's cube, the dexterity paper and my understanding is that they used domain randomization and simulation to adjust the parameters so that the agent didn't overfit to the specific parameters of certain things like gravity constant friction constants in the simulation." }, { "end": 2972, "start": 2958, "text": " But what that meant is that they had to train the agent on so many different environments, which I think maybe is only feasible if you have a small number of parameters to diversify your exploration with." }, { "end": 2978, "start": 2972, "text": " And I can't help but think that that that approach doesn't seem very scalable." }, { "end": 2986, "start": 2978, "text": " So I wonder if there's some way to get that effect without actually having to sample from so many different environments." }, { "end": 2997, "start": 2986, "text": " Because the space of things that you could, they're basically saying we don't care too much about these parameters but the number of things we don't care about is so large." }, { "end": 3000, "start": 2997, "text": " I don't expect that we could ever enumerate them with simulation." }, { "end": 3002, "start": 3000, "text": " Yeah, so I mean, I don't know." }, { "end": 3010, "start": 3002, "text": " I think the memorization is definitely like one of these ideas out there that helps you like overcome overfitting to a specific environment." }, { "end": 3022, "start": 3010, "text": " The other thing that people do, for example, is inject some adverse real or even just random noise to the state dynamics so that you don't." }, { "end": 3029, "start": 3022, "text": " That's another way to just avoid overfitting to the precise dynamics of the environment that you train now." }, { "end": 3042, "start": 3029, "text": " So yeah, I don't know like there are as you said, like some of these approaches are computational difficult or very challenging like domain memorization, typically need to sample like lots and lots of environments." }, { "end": 3054, "start": 3042, "text": " And yeah, I don't really have a good answer here, but yes, I think we need probably need a better ways more computationally and simple efficient ways to to overcome this issue." 
}, { "end": 3068, "start": 3054, "text": " So you mentioned some of your work in off policy evaluation, you've authored some very influential papers on an off policy evaluation, including doubly robust off policy value value value value evaluation for reinforcement learning." }, { "end": 3083, "start": 3068, "text": " Yeah, yeah, I mean, the side story here, I think the distinction we wanted to draw there is the notion of off policy learn a entire value function versus just the learning." }, { "end": 3091, "start": 3083, "text": " The scalar expected return of a policy we made the latter, but there has always been a confusion between the two." }, { "end": 3105, "start": 3091, "text": " I think the terminology has evolved since then and my co author Lee only probably has settled on this notion of off policy estimation, but you know, even to not like people use different names for that concept." }, { "end": 3120, "start": 3105, "text": " So retrospectively this value evaluation phrase has been a bad idea and hasn't has been really propelled and then you have a minimax confidence interval for off policy evaluation policy optimization." }, { "end": 3131, "start": 3120, "text": " You also co authored a 2019 paper comparing OBE methods empirical study of off policy policy evaluation for reinforcement learning by the lotion at all." }, { "end": 3144, "start": 3131, "text": " So given that you have so much knowledge of this area, can you maybe share some advice for practitioners on approaches to off policy evaluation?" }, { "end": 3149, "start": 3144, "text": " How should we look at that in settings where we don't have a simulator?" }, { "end": 3173, "start": 3149, "text": " Yeah, I think that's a good question. So I think that they're, you know, first of all, I'm not a really a practitioner and so the first thing I would say at a high level is that you really need to talk to the domain experts and really understand the unique properties and challenges of your particular application." }, { "end": 3182, "start": 3173, "text": " So for example, think about OBE in healthcare versus some of these like online recommendation systems." }, { "end": 3202, "start": 3182, "text": " The kind of challenges the kind of data you do with can be drastically different in these two scenarios, right? For example, in healthcare, as I mentioned, you probably get not exploratory historical data that are obtained by human decision makers and you face causal inference issues that I confound in this." }, { "end": 3220, "start": 3202, "text": " And all of that, whereas if you're a company like Microsoft, Google, Amazon, so on, you may try to use RL to improve your, your online services or your online interactions with your customers." }, { "end": 3248, "start": 3220, "text": " And in that case, if you've been used to do these kind of things, for example, Microsoft has this decision service platform that are designed for doing contextual banded stuff, then you may have very well locked data where not only you locked like state action, reward and all of that, but also for each action that you've recorded, you've also recorded the probability that" }, { "end": 3261, "start": 3248, "text": " that you wanted to sample that action when you actually like generated that data. And that piece of information turns out to be very crucial if you want to apply something like important sampling, right?" 
}, { "end": 3273, "start": 3261, "text": " Whereas those kind of information is typically missing in other scenarios like healthcare, if not even like the yield defined." }, { "end": 3290, "start": 3273, "text": " And on this topic really like, I think just the past weekend, there's a very nice virtual conference RL for real life where there's a dedicated panel of speakers about RL for healthcare environments." }, { "end": 3305, "start": 3290, "text": " I haven't been able to check out that those videos myself and I'll definitely do and I encourage those who are interested in applying RL and OPE in some of these related applications and networks to check out those videos as well." }, { "end": 3314, "start": 3305, "text": " I did actually spend part of my weekend with some of those videos and I can say that especially the healthcare ones were really fascinating and very informative to me." }, { "end": 3316, "start": 3314, "text": " Cool." }, { "end": 3328, "start": 3316, "text": " So does that mean that the off policy evaluation problem can't really be solved by just improving the world models using say deep learning or some other supervised learning methods." }, { "end": 3337, "start": 3328, "text": " Does it sound like there is much more to solving that problem than then building better, better world models that to use to be used for off policy evaluation?" }, { "end": 3347, "start": 3337, "text": " Yeah, I think that's a good question. Right. So one of the approaches that you would immediately think of for our policy evaluation is a model is our as you mentioned." }, { "end": 3354, "start": 3347, "text": " So if I can possibly model the dynamics of the world, then of course I can use that to evaluate anything I want." }, { "end": 3370, "start": 3354, "text": " The problem with that is that the food dynamics of the world is two powerful. It's overly powerful that you can basically do anything. You'll be able to keep a little of doing anything with it." }, { "end": 3385, "start": 3370, "text": " And which means typically means that you're making kind of unrealistic assumption. And down to a technical level, what really happens is that especially if you have a large stage space and you think of a model learning over a large stage space." }, { "end": 3395, "start": 3385, "text": " The problem you face is that you're you're trying to learn the transition dynamics right state action maps to distribution over next state." }, { "end": 3408, "start": 3395, "text": " So unlike other standard classification regression scenarios of supervised learning here, you're trying to learn a function that has a out for rich label space." }, { "end": 3415, "start": 3408, "text": " So the label is not even just a state, which is already high dimensional, but it's actually distribution over states." }, { "end": 3425, "start": 3415, "text": " So some of the difficulties that we've touched on earlier, like what aspects of the state is important versus own important, what will come in here, right?" }, { "end": 3434, "start": 3425, "text": " So typically when people try to learn these raw world models now, what kind of loss function do they use?" }, { "end": 3447, "start": 3434, "text": " Well, first of all, they often assume the world is deterministic, which is approximately true in some of the control benchmarks. But for real world scenarios, as you mentioned, some of them are highly noisy." }, { "end": 3450, "start": 3447, "text": " So you can't pretend that the world is deterministic." 
}, { "end": 3464, "start": 3450, "text": " And furthermore, even if it is deterministic, you still have to define a informative loss function for your state, right? Are two states close to each other or not?" }, { "end": 3474, "start": 3464, "text": " But if you think about what you can do, for example, in the if you're building a model for a terrogames, well, you're given two pixel screens. How can compare them?" }, { "end": 3486, "start": 3474, "text": " If you use something like L1 or L2 loss, that's not that's going to be highly informative. You can try something like a perception loss, basically like using a neural net to distinguish between them." }, { "end": 3496, "start": 3486, "text": " But again, that kind of discriminator is going to be is very generic. It doesn't really speak to your precise need of doing a fallacy valuation." }, { "end": 3523, "start": 3496, "text": " It is completely generic just to help you learn a model. And there must be a trade-off here, right? If you apply a very, very generic approach to learn a very complex object like the full model of the world, then you lose the ability to focus and concentrate your sample and computational resources on the part of the world that are really just important for your fallacy valuation task." }, { "end": 3534, "start": 3523, "text": " So that's why I think I may be wrong, but I think model based approach as a solution to OPE is probably not the best way to go." }, { "end": 3557, "start": 3534, "text": " And I've worked in OPE for a while, and in recent years we've also seen a very fast progress in some of the new ideas that can give you reliable OPE estimations with relatively mild representation assumptions." }, { "end": 3586, "start": 3557, "text": " Much weaker than assuming that you can capture the world dynamics. I think I'll bet on that route, where we continuously weaken the representation assumptions we need for OPE so that we get more and more reliable OPE procedures that uses less and less assumptions to the point that people are comfortable applying them in the world of scenarios." }, { "end": 3594, "start": 3586, "text": " That's very interesting. Could you, would you care to mention any specific works along that line that you're pointing to?" }, { "end": 3611, "start": 3594, "text": " Yeah, sure. So I mean, before I point to any specific method, really when you apply OPE, you should be thinking first what a regime of a problem, what regime of OPE you're in, right?" }, { "end": 3621, "start": 3611, "text": " So people probably have heard of, you know, in RL, OPE can be very difficult if you have a long horizon. That's partially true, right?" }, { "end": 3631, "start": 3621, "text": " So really what you should care about is how long the horizon is and how different your behavior policy and your target policy is." }, { "end": 3644, "start": 3631, "text": " If they are very different from each other and or the horizon is very long, you don't want to use something like important sampling, which is great for otherwise, right?" }, { "end": 3656, "start": 3644, "text": " So if your two policies are very close to each other or the horizon is relatively short, important sampling will give you unbiased estimation, which does not rely on any of the," }, { "end": 3670, "start": 3656, "text": " function approximation assumptions. And you can also do some nice variance reduction as we did in the W robust OPE paper to further improve these kind of method in this regime." 
}, { "end": 3697, "start": 3670, "text": " However, in other scenarios, you will find yourself in the much more challenging situation where either the two policies differ significantly from each other and or the horizon is very long. And if you try to apply important sampling in this regime, you'll find that your importance ways, which you need to multiply together over multiple time steps, will quickly have an a balance that explodes exponentially with the horizon." }, { "end": 3715, "start": 3697, "text": " So in this case, you need something else, right? So you need something that's closer to say other value based algorithms that makes uses of Bellman equations to overcome this so called Curse of Horizon." }, { "end": 3735, "start": 3715, "text": " And I've there's this very nice paper from 20, 2018, Europe's called breaking the Curse of Horizon, which introduces this idea of marginalized important sampling were instead of trying to correct the distribution of an entire sequence of actions you've taken." }, { "end": 3747, "start": 3735, "text": " It just tried to correct the mismatch between the marginal distributions of states that you've seen to the marginal state distribution that should be induced by the party policy." }, { "end": 3762, "start": 3747, "text": " So that's where I've that's a topic that I've worked on extensively recently. And I think it's a very promising idea in the regime where, you know, important sampling really doesn't work." }, { "end": 3779, "start": 3762, "text": " Thank you for clarifying those regimes. And that was actually I think a really key insight that I was missing because I couldn't see how important sampling could solve problems from the other regime, but I didn't have I couldn't put my finger on it describing the reason. Thank you for clarifying that." }, { "end": 3797, "start": 3779, "text": " Yeah, if you're laughing, I'll just add another phone fact, right. So some people think important sampling is good. It gives you unbiased estimation. Whereas other thing important sampling is just so bad. It just gives you exponential variance everywhere. That's also not true." }, { "end": 3803, "start": 3797, "text": " So if you have ever applied or implemented policy gradient methods." }, { "end": 3817, "start": 3803, "text": " I actually have an I'm focused on value based methods. OK, but you know, like many people have implemented policy gradient. And you know, like if you have ever used the policy gradient, you'll be essentially used important sampling." }, { "end": 3835, "start": 3817, "text": " So there's this very nice connection that policy gradient is essentially using important sampling to estimate the return of your policy around the small neighborhood of your current policy and then just differentiate that." }, { "end": 3847, "start": 3835, "text": " Right. So and you don't see exponential variance in policy gradient precisely because in this case, your behavior policy and target policy are infinitesimally close to each other." }, { "end": 3855, "start": 3847, "text": " So, you know, which means that you can expand this a little bit if your behavior target policy are slightly different from each other." }, { "end": 3867, "start": 3855, "text": " So, this sampling will still work very well, right. And you know, this particular connection between OPE and PG has been mentioned by Geotan and Peter Biel in 2010 paper." 
}, { "end": 3881, "start": 3867, "text": " And we recently in the size of now we've extended to further it is established many more connections between the entire family of variance reduction for important sampling and various reduction for PG." }, { "end": 3899, "start": 3881, "text": " So, you know, I think that that's another piece of evidence or a piece of facts you should keep in mind when you think about when and when important sampling works versus it doesn't it doesn't." }, { "end": 3907, "start": 3899, "text": " Do you have opinions on how cause a model should apply to RL and are they a promising direction for OPE under distributional shift?" }, { "end": 3917, "start": 3907, "text": " Yeah, I think that's a good question. So, especially in the context of OPE, you know, cause of inference will definitely play a big role here." }, { "end": 3931, "start": 3917, "text": " But the way that I think about it, I think that the way you ask the question is can we use ideas from cause of inference to improve the current OPE methods maybe in the current setup." }, { "end": 3941, "start": 3931, "text": " The way I think about it is that, you know, when you actually apply OPE in the real world, especially where this there is confonding this, right." }, { "end": 3953, "start": 3941, "text": " So, this is where you really need cause inference methods because all our standard OPE methods in the reinforcement learning literature, the majority of them are assuming on confonded data." }, { "end": 3973, "start": 3953, "text": " Right. So, many RL audience may not be familiar or know precisely what confonding this means really it means that the historical data you've collected are the decisions that actions taken in those in those data are not taken by you." }, { "end": 3990, "start": 3973, "text": " And those actions may have depended on information that are not accessible to you. For example, if you're in a healthcare scenario, you may have data that are generated by past decisions made by human doctors." }, { "end": 4017, "start": 3990, "text": " And now you try to use it to improve your automated healthcare decision making system, which for example, featureized the patients using some certain features. But back in that data set, when the human doctor makes a decision, you actually may have depended their decisions based on, for example, the facial expression of the patient and many more subtle information that is just not recorded in your current data set." }, { "end": 4028, "start": 4017, "text": " So, that's where confonding is comes into play and you really need causality that the tools from causal inference to combat that." }, { "end": 4044, "start": 4028, "text": " So, the way that I think about it is that really, it's the issue of confonding is that makes the problem more complex and even more challenging than it is already is in terms of OPE." }, { "end": 4050, "start": 4044, "text": " And we will need a causal inference to solve to deal with those issues." }, { "end": 4054, "start": 4050, "text": " Can you tell us a bit about your research directions going forward?" }, { "end": 4073, "start": 4054, "text": " Yeah, I mean, so in general, you know, so the typical way I find research problems is that, you know, there are several of these big-ish problems that just stay in my mind like all the time." }, { "end": 4084, "start": 4073, "text": " For example, several of them I've mentioned earlier, like what is the fundamental representation limit of various modes of reinforcement learning?" 
}, { "end": 4093, "start": 4084, "text": " And, you know, in every, in some of the papers, I try to address bits of these big open problems like a little by little." }, { "end": 4106, "start": 4093, "text": " And on the other hand, you know, every time you've done a paper, usually you just cannot open or you cannot solve all the problems." }, { "end": 4112, "start": 4106, "text": " Right? You always leave some problems open where there are some loose ends that you've been overlooked before." }, { "end": 4124, "start": 4112, "text": " And after you've done a paper, you usually just sit down and reflect on what you have done. And what are the questions that have always been like just lingering in your mind when you just write a paper." }, { "end": 4134, "start": 4124, "text": " And then you realize maybe there's some like brand new like questions out there that needs to be addressed. And that naturally leads to the next research topic." }, { "end": 4141, "start": 4134, "text": " And outside of your own work, are there things happening in RL these days that you're particularly interested in?" }, { "end": 4149, "start": 4141, "text": " Yeah, I mean, so back in the days, you know, RL theory used to be a very small field." }, { "end": 4157, "start": 4149, "text": " But in recent years, you know, we've all seen a very rapid growth of interest and attention that's filled." }, { "end": 4167, "start": 4157, "text": " And there are tons of papers on our archive almost every day, if not every, almost every week, if not every day, on various directions in RL theory." }, { "end": 4173, "start": 4167, "text": " And just as everyone else, I can barely like keep up with all these latest results." }, { "end": 4181, "start": 4173, "text": " And of course, like from time to time, there are some papers that are just very, very interesting that just immediately like caught my attention." }, { "end": 4189, "start": 4181, "text": " For example, I've mentioned that I've worked recently worked on OPE with marginalizing for an example." }, { "end": 4198, "start": 4189, "text": " And that's really inspired by this 2018 work, which was definitely surprised when I saw it for the first time." }, { "end": 4209, "start": 4198, "text": " And other than RL theory, you know, I also keenly was happening in empirical RL research like the DPRL works." }, { "end": 4225, "start": 4209, "text": " As I mentioned, you know, the improvised RL works is sort of like the optimistic estimation of what is plausible, what can we possibly achieve, what is the skyline of RL in various situations." }, { "end": 4231, "start": 4225, "text": " So, you know, when you do theory, you always need to make certain assumptions." }, { "end": 4242, "start": 4231, "text": " And I would actually say that, you know, in some situations, statisticians have a very poor idea of what assumptions are realistic and what are not." }, { "end": 4249, "start": 4242, "text": " Because whether they are realistic assumptions really depend on whether they can be satisfied in a practical scenarios." }, { "end": 4267, "start": 4249, "text": " And to get a brief idea of what assumptions are plausible versus what are not, you really need to pay some attention to what's happening in the empirical community and see what kind of methods have been successful and what have been not." }, { "end": 4280, "start": 4267, "text": " Professor Nanjian, I've learned so much from speaking with you today and I know our audience is grateful. 
I look forward to following your research going forward and thank you so much for sharing your time with all of us today, Professor Nan Jiang." }, { "end": 4285, "start": 4280, "text": " Thanks again for having me here and it was a great pleasure talking to you." }, { "end": 4303, "start": 4285, "text": " That's our episode for today folks. Be sure to check talkrl.com for more great episodes." } ]
Danijar Hafner
Danijar Hafner takes us on an odyssey through deep learning & neuroscience, PlaNet, Dreamer, world models, latent dynamics, curious agents, and more!
https://media.transistor…30e.mp3?src=site
This is TalkRL Podcast. All reinforcement learning, all the time. Interviews of brilliant folks across the world of RL. I'm your host, Robin Chohan. Danijar Hafner is a PhD student at the University of Toronto, a student researcher at Google Brain and the Vector Institute, and holds a Master of Research from University College London. Danijar, thanks so much for speaking with me. Hi, Robin. Thanks a lot. So how do you describe your area of focus? Yeah, I work broadly in artificial intelligence, and that's really where the motivation for me comes from. Not so much building better applications, but more really understanding the concepts behind intelligent thinking. And I think machine learning actually gives us pretty powerful tools that we can use to study at least some of these questions that we couldn't study directly on a person or in the brain, because it's just so hard to make measurements. So that motivation led me to machine learning and then most specifically to reinforcement learning. So a lot of my work is in reinforcement learning, in generative modeling, learning world models, and exploration. Can you share with us what your PhD advisors focus on? Sure. So my main advisor is Jimmy Ba, and I'm also advised by Geoffrey Hinton. And they both focus on a lot of questions around deep neural networks. So about architectures of deep neural networks, about optimization, and things you can do with it. So in some sense, it's great to have an advisor like that, or two advisors like that, because they have quite broad interests and broad knowledge as well. So I can basically do whatever I want and I get good feedback and good advice on those topics. So the first paper we're going to discuss today, you're a contributing author to A Deep Learning Framework for Neuroscience by Richards et al. So I really don't know anything about neuroscience. My apologies in advance as I try to follow along with this. But what is the main idea here? The reason I think this is a good paper to start off with is that it really gives the general framework for us to think about understanding the brain and what it can do in connection to machine learning. The general idea is that neuroscience has existed for a really long time and there's lots of data around and there are also some theories. But it's almost at the point where there are lots of small kinds of data sets and measurements that have been made. But for one, we're really limited by the types of experiments we can run on real subjects, just because it's so hard to look into the brain and basically make measurements. There's a skull and then there's so much going on. It's really hard to kind of target specific, let's say, neurons that you would want to measure. And so that's one thing, and the other thing is that there are some kind of general themes missing. And of course there are some ideas of general theories that put together all these experimental results. But it seems like we need some more guiding principles to really make sense of all of that data and get some frameworks that we can think within. And so the idea of this paper is that we kind of have a similar situation in deep learning, where we have all these crazy architectures and different loss functions that you can optimize, and different ways to optimize these loss functions. And so this has served us really well in the deep learning community. There's a loss function, there's a way to optimize this loss function, and then there's an architecture, a model, to optimize this function.
And so in this paper we propose this framework as a way to make sense of data in neuroscience. So how can we draw connections between the two disciplines here? So this paper talks about these three components: objective functions, which are equivalent to loss functions, learning rules, and architectures. Can you say just a little bit about these three things and maybe contrast how they work in neuroscience and how we define them in machine learning? So I'm very much on the machine learning side, and I'm really interested in neuroscience, but I can speak much better for the machine learning side of things here. And so for example, let's say you just train, you know, some deep neural network on some image classification task. And so there's some data, which often you don't have control over. And then there is an architecture, that would be how many layers you use in your neural network, whether you use any skip connections, what activation function you want to use, and so on. And then there's a loss function, which in the case of supervised learning is quite simple. It's just maximize the probability of your supervised outputs that you want the network to predict. But that could be much more complicated in other scenarios, for example, in unsupervised learning. It's really a field that's about trying to find out what is a good loss function, if you don't know exactly what you want the model to output precisely. So that's the second component. We have the architectures, we have the loss functions. And then once you have these two, you've defined an optimization problem. So find the parameters in this model or in this architecture to optimize this objective, given some data. And then you have to optimize it. So how do you actually find the right parameters? In machine learning we call that an optimizer, like stochastic gradient descent or Adam and so on. But in neuroscience, that would be a learning rule, where you write down the dynamics of how the weights change from one step to the next, or maybe even continuously over time, to make progress on finding better parameters that optimize the loss function. So you said unsupervised learning is a lot about figuring out what the loss should be. And that's obviously still an open question. But would you, do you feel like in general in machine learning, we kind of have these three things figured out to some degree? That's a really good question. I think we have really good ways to optimize our networks. So I think the learning rule part is figured out to at least the level where you can do a lot of things with it, and it's often not the bottleneck anymore. Of course, there are a lot of people working on developing better optimizers, and actually Jimmy works a lot on that as well. And it's like an interesting field, because when you come up with a better optimizer, then you've made the lives of thousands of people easier, because now they can all just switch over to that optimizer and they will get better results with their machine learning projects. And that's really the power that comes from a logical framework like this. So the idea is, if we find good building blocks that we can separate a problem into, then people can work on them to some extent independently of the other building blocks.
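To make the three building blocks concrete, here is a minimal sketch (not from the episode) of a supervised training step, assuming a toy image classifier in PyTorch; the layer sizes and the task are arbitrary placeholders. The point is that the architecture, the objective, and the learning rule are separate pieces that can each be swapped independently.

```python
# A minimal sketch of the three components: architecture, objective (loss
# function), and learning rule (optimizer). Sizes and task are placeholders.
import torch
import torch.nn as nn

# 1) Architecture: how the computation is wired up.
architecture = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# 2) Objective function: what counts as a good output.
objective = nn.CrossEntropyLoss()

# 3) Learning rule: how the weights change to improve the objective.
learning_rule = torch.optim.Adam(architecture.parameters(), lr=1e-3)

def training_step(images, labels):
    logits = architecture(images)      # run the architecture
    loss = objective(logits, labels)   # evaluate the objective
    learning_rule.zero_grad()
    loss.backward()                    # gradients of the objective w.r.t. the weights
    learning_rule.step()               # apply the learning rule
    return loss.item()
```

Swapping Adam for SGD, or the cross-entropy objective for a reconstruction loss, changes one block without touching the other two, which is the factorization being described.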
So if I want to find a better architecture for specific tasks, I don't have to also do research on finding a better optimizer at the same time, or on finding a better objective function at the same time. So to answer your question, I think we're in a decent position in terms of the learning rules. I think we're also in a decent position in terms of the architectures, even though it's probably not as clear yet, just because it's such a giant design space of how you can build a neural network. One thing we figured out is that we have a tool bank of different neural modules that you can stack together. And that's a really, really powerful way of thinking about building an architecture. You can have dense layers, fully connected layers, and convolutional layers and attention layers and recurrent layers and so on. You put them all together and they kind of work in any order, more or less. So I think we can still design much better architectures, especially for precise tasks. So one big benefit of deep learning is that it can apply to everything; whatever your prediction problem is, you can use deep learning and you can probably do a pretty good job at making predictions. But especially when there is very little data, then we have to be more careful about what architecture we use. And so you basically have to build priors about the data into the architecture. I think we can still do a much better job there. For one, for very specific problems, we can find better priors. An example here is that convolutions work well for images. But then there's still a lot of knowledge that we intuitively have about natural images that is not captured by a convolutional network. So for example, there are objects in the world. And so, you know, objects tend to be consistent in time. Right. They move slowly. It's like some piece of information in my sensory input that is correlated in space and time. And it can move in time or it can move in space. And we don't really put these priors into our networks yet. And that's what Jeff has been working on for a really long time with the capsule networks. So there's a spectrum of how precisely you want to tailor something to a task, get really good results on that task, but then lose some generality. And I think object priors are general enough that they will be useful for a lot of things. But there are probably some other priors that we haven't really incorporated well into our architectures yet, like smoothness, for example. And there is lots of interesting work on Lipschitz neural networks and so on. So I think there's very active development on the architecture side. And to come to the last component, of objectives, I think that's where we have to do the most work and where we're kind of really early in the process. So that's what I think is probably the biggest bottleneck of machine learning and also of understanding intelligent systems better. Finding the right objective functions. As I said, to me, that's basically what unsupervised learning means as a field at the moment, because some people say, well, it's not really, you know, a rigorous, clearly defined task that you're trying to solve. But to me, that's really the beauty of it. We don't know yet what is the right mathematical objective that you want to optimize, and we're searching for it.
And if you find better objective functions, you can learn better representations, you can describe systems better, and it becomes especially interesting not just if you're trying to learn representations, but in the reinforcement learning setting, where you're not just doing perception, but you're also interacting with the world. I think it's not at all clear yet what our agents should optimize for if there are no rewards around. That's super interesting. And I've always thought of the deep learning architectures as very well factored. As you say, we have all these libraries of layers that we can just drop in. But you helped me appreciate the extent to which the other components are also well factored, which is I think a great insight. So for the brain, do we have any idea if we should expect to find a single type of objective function and like a single learning rule, or could we imagine there could be many different types of objective functions and learning rules in different parts of the brain? Is that still a completely open question? That's a really good question. The theoretical answer is that it doesn't really matter. So yes, for any system that you can observe, there exists theoretically exactly one objective function that describes all the behavior of that system. Actually, that's not quite true: it describes all the behavior of that system that can be described through an objective function. So in the paper, we talk a bit about this, and it's basically the idea of the fundamental theorem of vector calculus, or the Helmholtz decomposition. And so the idea is the following. Let's say you're describing a system. It could be a neural network where the weights change over time in some gigantic space of all the possible configurations of the weight vector. Or it could be a very simple system like a thermostat that just has a sensor and then controls the heating unit. Or it could be even more complex than a deep neural network, like a system like the brain. And so all these systems you can view from like a dynamical systems perspective. There's some state space, and every point in that space describes a possible configuration of the system. And at the current time, it's at one point, and then it kind of moves around over time as the brain is doing its computing, or as the thermostat is reading new temperature values and storing some new internal state about them, like a moving average maybe. And as the weights of our deep neural networks change with every optimization step, they also move around in the state space. And so when you describe a system from this angle, you can view it as a vector field in the state space. Right, if your state description is complete, then from every state, there is a direction in which you go to get to the next state. And if you couple that with the external system, then you really have a closed system where everything is captured by your description, and basically everything becomes more or less predictable. And for every point in the configuration space, there is a direction, and that gives you the next point in configuration space. And so when you describe systems like this, you actually get a vector field: at every point in state space there is a direction, that's the vector field, and you can decompose it into two simpler vector fields, and that works in any case.
Except for some maybe degeneracies that are just of theoretical interest. And you can you can decompose it into one part that is optimizing something and one part that's not optimizing anything. So think of the configuration space again. And now plot the heat map over it, which is the objective function. So some points in weight space, give you better value. Mean that you're on that work is better predicting the labels, let's say. And some points mean that the neural network is worse at predicting the labels. And we can write down our own cost function there. And then we can implement our own learning rules so that we end up with the system that seeks out the better regions in the configuration space. And we can use the same mental picture to describe an existing system that we don't change anymore that we don't have control over the dynamics. And so there is still this potential function or energy function or cost function. Those are all the same, the same things. Just different fields call them differently. And so when you look at the system, you can wait for a while, you can observe it and it's moving around in this configuration space. And it will be more often in some places and less often than other places. And from that, you can derive a cost function. So what has cost function is the system optimizing for? Well, you just look at what it's doing and you over time, you will get an idea of what parts of the state space it likes. And what parts it tries to avoid. And then that's your cost function. It's just the stationary distribution. The visitation frequency basically. And so once you have the visitation frequency of a system, you can describe all of its optimizing behavior. So you can say, now that I have the cost function, maybe a very simple example is a person, maybe you have a daily routine and you can be in different rooms of your house, you can be at work, maybe not at the moment, but at least there are different rooms at home that you can switch between. And there is some probability of going from that room to the other room and so on. And if you observe somebody for, or maybe you write down every day, what room you've been for in for how long. And then you get this kind of cost function that describes you. It's like, oh, the living room is the best. For example, you spend the most time there. And so once you have this cost function, you can describe the dynamics. If you give me the cost function, I can basically reverse engineer you to some extent. To based on like what state space you chose, it's probably not like the state space always uses some abstraction because you can't go to like the kind of particle level. But let's say it's different rooms and then I can build something that also seeks out the same, seeks out the rooms with the same preference distribution. Okay, so that's, that's the optimizing part. And then there is a part to every system that is independent of the stationary, well, it's orthogonal to the gradient on the stationary distribution. So if you give me the distribution over rooms, I can build a new agent that follows the gradient on this preference distribution always tries to go towards what is better under the cost function. But then there is some maybe external perturbation that keep it away from there. So it has to kind of keep going towards towards the optimum. But then there is also always, or there's also potentially a direction that doesn't change the probability. 
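The room example above can be made concrete in a few lines. This is a small illustrative sketch (not from the episode): given an observed trajectory over a handful of made-up rooms, estimate the stationary visitation distribution from counts, and read off an implied objective as the negative log of that preference distribution.

```python
# Sketch: recover an objective from observed behavior by estimating the
# stationary (visitation) distribution over states. The rooms and the log
# of visits are made up for illustration.
import math
from collections import Counter

trajectory = ["living_room", "kitchen", "living_room", "living_room",
              "bedroom", "living_room", "kitchen", "living_room"]

counts = Counter(trajectory)
total = sum(counts.values())

# Preference distribution: how often each state is visited.
preference = {room: n / total for room, n in counts.items()}

# One implied cost function: frequently visited states are "cheap",
# rarely visited states are "expensive".
cost = {room: -math.log(p) for room, p in preference.items()}

print(preference)  # {'living_room': 0.625, 'kitchen': 0.25, 'bedroom': 0.125}
print(cost)
```

An agent built to descend this cost, seeking out the high-preference states, reproduces the optimizing part of the observed behavior; the part that only cycles along level sets of the cost, discussed next, is exactly what such a description leaves out.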
And so that's the direction that's orthogonal to the gradient on the cost function. So if you think of the cost function as a surface, like as a, as a hill surface over your configuration space, then you can either go up or you can walk around the control lines of this cost function. And so that's the difference between the divergence part of the vector field that goes up on the cost function and it tries to concentrate on the, on the optimal points. Or I guess if it's a cost function goes down if it's an objective function to maximize it goes up. And then there's the curl part that just walks around control lines. And so it's never optimizing for anything. It always cycles back after, after a long time. So this is all, explain why when you're talking about something as an optimization problem, or you're just trying to describe intelligence as an optimization, then you will lose this part that doesn't optimize anything. So you'll not be able to describe that part. And that's probably fine. Like maybe we have evolved to be quite efficient. And so maybe we don't do a lot of unnecessary things that don't actually optimize any object to function. But what else, right? Maybe that's on some level of abstraction that you choose to describe the system. Maybe that's really important to get something that has the behavior that shows the behavior is that we think offense maybe connected to intelligence. So is this paper saying that we should look for the components similar to those that we use in deep learning in the brain. And then maybe vice versa, figure out how to adjust deep learning to match more closely match what we see in brains to help us understand to use deep learning to understand brains. Is that, is that close to the message? Yeah, yeah. So it goes in that direction. I don't think machine learning and neuroscience have to converge to one thing. We can use different models in machine learning. Then, then the models that might be useful for explaining the brain because there are biological constraints on the brain. And it's interesting to understand them and understand what kind of ways nature found around those. But just conceptually speaking, the best models for the type of computer hardware that we have are probably different. So if your goal is to to build an algorithm that's very good at predicting labels on some data set, then probably like the very long term solution will be different from the biological solution. Now, that said, at the moment, we're still quite far away from getting anything close to the intelligence of a brain, of course. And so I think neuroscience has a lot of potential for helping us with building better models in machine learning. But it doesn't have to be the goal doesn't have to be to end up in the same place for both disciplines, although that I think that would be interesting. But that's not necessary. And what the paper is saying is we should use the same framework to break down the problem. And that will help us share insights in both directions. And as I said earlier, it's really difficult to make measurements in the brain. And there are a couple of papers from the last few years where people have studied the learning models in a similar way in terms of like analyzing their activations, then then neuroscientists would study brain. And found that there are actually really surprisingly strong connections between how the deep neural network processes some input to solve a prediction task. 
And how the activations and the brain look like that try to solve the same prediction task. And so there is definitely exchange in both directions. And I think both disciplines can learn from the other and use tools from there. Because on the other hand, we also have no idea really how deep neural networks work and why they work. And so maybe some ideas from neuroscience will help there. And I think the reason you can find these similarities between models in machine learning and measurements in the brain is that even though the models are very different in some way, they still, both systems are still trying to solve the same task. And a lot of our solving a task is a lot of the computation needed to solve the task is actually more about your input data than the architecture you're using to process it. So that's why I think nobody really knows, but my intuition is that probably there are some constraints on computation in general on what information do you need to extract from your input so that later on you can solve a task. So if you have any comments on how the insights of this paper might relate to reinforcement learning more specifically than learning in general, this wasn't an RL paper, right? Of course, it was not an RL paper. For me, the biggest takeaway of this kind of perspective on understanding intelligence is that for the biggest takeaway for reinforcement learning is that we have to think a lot about what objective functions we should use in reinforcement learning. Because I mean, it's always been bothering me that we have a reward signal that comes from the environment. And that's actually not how reinforcement learning used to be defined in some of the earlier work where you would usually, you know, there are some, some early papers on the question of where rewards come from. And the way to think about it really is that there's an environment that gives you sensory inputs, you give it actions, and it doesn't care about what you're doing, right? So I would be with the environment cap and then there's an agent and then agent. You can choose to break that agent down into two components and one component gives you the reward as a function of, you know, the past sequence of inputs and the past sequence of actions. And then there's another component that tries to maximize this reward. And so that's the kind of classical reinforcement learning component where maybe you learn a value function or, you know, there are many things that you could be doing. And so I think we haven't really spent a lot of time yet or enough time to understand the first component where actually the reward is being generated. And then we want to build something that is more intelligent or closer to maybe an intelligent being than the current agents we use in reinforcement learning. Then we have to make progress on that part because there's, there's not really a reward function in the world. So it's that we can think of maybe, you know, optimizing for survival is good. But then that doesn't really give you a good idea of the system. I want to understand. So I think this optimizing for survival in some world with like a giant simulation, like an artificial life approach to building intelligence might work to build something. Like I mean, we're quite far away from that, but in principle, it could work. But I and it might be easier to study the resulting system than to study in like biological system. But it doesn't really answer the question of how it's doing that. And maybe you don't care about that. 
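The split described above, between a part of the agent that produces the reward and a part that maximizes it, can be sketched as an interface. This is a hypothetical structure for illustration, not code from any of the papers discussed; the class and method names are made up.

```python
# Sketch: an agent split into (1) a reward module that computes reward from
# the agent's own history of inputs and actions, and (2) a maximizer that
# tries to optimize that reward. Names are hypothetical.

class RewardModule:
    """Maps the history of observations and actions to a reward signal.

    For an intrinsic objective this depends only on sensory inputs and past
    actions, never on privileged environment state."""
    def reward(self, observations, actions):
        raise NotImplementedError

class Maximizer:
    """The classical RL part: value learning, policy gradients, planning."""
    def act(self, observation):
        raise NotImplementedError
    def update(self, observations, actions, rewards):
        raise NotImplementedError

class Agent:
    def __init__(self, reward_module, maximizer):
        self.reward_module = reward_module
        self.maximizer = maximizer
        self.observations, self.actions = [], []

    def step(self, observation):
        self.observations.append(observation)
        action = self.maximizer.act(observation)
        self.actions.append(action)
        # The reward is generated inside the agent, not handed down by the
        # environment, which only provides observations.
        rewards = self.reward_module.reward(self.observations, self.actions)
        self.maximizer.update(self.observations, self.actions, rewards)
        return action
```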
You just want to build something that replicates some aspects of behavior that we see in people. But to me, I actually want to know what are the components that we're optimizing for, like within one lifetime. And to get that additional insight, we have to try out different objective functions, different implementations of this module one in the agent that provides the objective function to the optimization component. And we have to try them out, and we have to do it in an environment that probably has to be very complex. And then we can look at the behavior and we can see if that's similar in some way to the behaviors we're trying to replicate. And we're very general, like people are very general, in the sense that there are many different environments in which we can do something. And so the objective function should also be general in the sense that it doesn't depend on some underlying environment state. Like if you want to move the glass from one side of the table to the other, then maybe if you have a physics simulator and you know the object ID of the glass and so on, you can, you know, compute the squared distance between the position and the goal position. But that's not the sensory input that the agent gets. And so that's not available if you want a general implementation of the first component of the agent. So it has to be something that's only a function of the sensory inputs and past actions, and still accounts for interesting behavior across many different environments. So are you pointing to intrinsic motivation as a key here? Yes, yes, that's what the field is often called, intrinsic motivation. I think there are many different ways of how to really evaluate intrinsic motivation, and it's very difficult, and I think it's a good challenge to make progress on. There are parts of intrinsic motivation where you're basically trying to be better at solving a particular task. And so maybe you sum up the intrinsic reward with the extrinsic reward and you get something that makes faster learning progress on the task than without the intrinsic motivation. Another evaluation setting that I really like, that I think we'll come to a bit later in the podcast, is that you explore without any task in mind. And then maybe you can use the data set that results from that to later on train a new agent on it to solve specific tasks. You can see how useful this exploration was. So now let's turn to a set of four papers that are tightly related with each other, starting with PlaNet, that's Learning Latent Dynamics for Planning from Pixels. Can you tell us what's the main idea of the PlaNet paper? The main idea was to learn a dynamics model of the environment that's accurate enough that you can do reinforcement learning with it. And people have been trying to get model-based RL to work in various incarnations for a long time, and there has been lots of progress as well. But it was really almost like a bottleneck, where it kind of worked on simple tasks, but then it didn't really work on harder tasks. And so in practice people were still using model-free methods most of the time, even though model-based methods are appealing in different ways, because for one, it's kind of like a really intuitive thing that you have a model of the world that lets you predict into the future. I mean, we know that people can do that. So probably our agents should as well.
But then having a world model also lets you do a lot of things that you couldn't do with a model-free agent. So there's almost this backlog of research ideas in my head and other people's heads that were blocked by not having accurate enough world models to implement them. So that was really the goal, because I wanted to work on intrinsic motivation. Yeah, you can do better exploration if you have a world model. And I think we'll talk about this when we get to the disagreement paper, about the retrospective versus expected exploration. And so, to do that, I knew I really needed world models to work on some tasks, some kind of tasks that I would be happy with, with high dimensional inputs and so on. And that's why I started working on learning dynamics models from pixels. So that's so interesting. So you really are planning multiple papers ahead for where you want to get to, being strategic about it. Yes. And maybe not so much a chain of papers, but I always had this goal of building autonomous agents with intrinsic motivation. And then whenever I started a new project, I reflect on that and think about what is the limitation. Like, can we do this now? Or is there anything that's still necessary to solve before we can build that? And it was almost a bit frustrating in the beginning, when I started my masters in London, that I wanted to do this active exploration, but there was just no accurate dynamics model that I could use for it. And then people told me, you know, yeah, we all know that this would be cool to have, but we've been trying for a really long time and it just doesn't work and we don't really know why. And I thought, okay, well, you know, I'll try it for a year. And Tim Lillicrap was really helpful to me when he advised the project. And my manager at Google at the time, James Davidson, was very helpful as well. And we just went through it quite systematically, and we kind of tried for a really long time, and eventually it worked. And I think there isn't even like a single thing that I could point to that was the point where it clicked, where it suddenly started to work. I mean, those were mostly bugs in the implementation where, oh, like, you know, we normalized the input twice, and then during evaluation you do a different normalization than during training, and of course your model doesn't make good predictions. So mainly we had a pretty clear idea of what we wanted to do. I wanted to build this latent dynamics model, because I think a lot of RL work with low dimensional inputs is a bit too toy. I actually don't even read those papers anymore in most cases, and you can do quite well with random search and so on. So to me, there needs to be some high dimensional input where representation learning is part of the challenge. And then if you predict forward, it doesn't really make sense to do that in pixel space from one image to the next, because that gets very expensive and errors can accumulate very quickly. And it's definitely not what we do. Like, when I plan my day, I don't plan how the activations on my retina change hours from now. It's all in an abstract space, and it both abstracts from space into concepts and so on, and it also abstracts in time. So far we focused on the first aspect, and we're also trying to do some work on the temporal abstraction, but I think that's still quite unsolved.
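A deliberately simplified sketch of the idea just described: predict forward in a compact latent space instead of pixel space. This is illustrative only, with arbitrary sizes and a purely deterministic transition, whereas PlaNet's actual model mixes stochastic and deterministic parts, as discussed below.

```python
# Sketch: roll a learned dynamics model forward in a compact latent space
# instead of predicting images step by step. Sizes are arbitrary; the real
# model is recurrent and has stochastic and deterministic state components.
import torch
import torch.nn as nn

latent_size, action_size = 30, 4

encoder = nn.Sequential(                     # image -> latent state
    nn.Linear(64 * 64 * 3, 256), nn.ReLU(), nn.Linear(256, latent_size))
dynamics = nn.Sequential(                    # (latent, action) -> next latent
    nn.Linear(latent_size + action_size, 256), nn.ReLU(), nn.Linear(256, latent_size))
reward_head = nn.Linear(latent_size, 1)      # latent -> predicted reward

def imagine(first_image, action_sequence):
    """Predict the total reward of a candidate plan from one observed image,
    without ever decoding back to pixels."""
    state = encoder(first_image.flatten(start_dim=1))
    total_reward = 0.0
    for action in action_sequence:
        state = dynamics(torch.cat([state, action], dim=-1))
        total_reward = total_reward + reward_head(state)
    return total_reward
```

Scoring many candidate action sequences this way and executing the first action of the best one is, roughly, the model predictive control loop that comes up later in the conversation.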
Yeah, so at the end, we had this kind of clear picture of what we wanted to do and we didn't actually deviate much from it throughout the project, we just visualized a lot of metrics and and try to really understand what was going on. And then we found a lot of bugs that we fixed over time and then at the end, it just worked. And we work quite surprised. So that must have been really satisfying. You worked on this for a year. And the first thing that jumped at me from this paper was the efficiency gain. It said there was a line that said, data efficiency gain of planet over a D4 PG, a factor of 250 times, which is just huge. So was that was that surprising to you? I guess you've been working on it for a year. So by that time you're used to it. But did you expect anything of that level when you went into this? To be honest, I didn't really care about data efficiency at all because I just needed a world model to do exploration with. I didn't really care about it being so much more data efficient, but it turned out to be more data efficient. And of course, that's useful for a lot of applications. Like if you want to use world models for robotics, for example, where environment steps are much more expensive than in a simulation. Then it really matters. So of course we put it in the paper. But I wasn't actually like it didn't actually matter to me. And it still doesn't. To add to this. I think the reason that it's more data efficient, there are multiple reasons. We don't exactly know how much each of them contributes. But one reason is that a lot of model free methods just don't use any specific representation learning. They learn representations just through the reinforcement learning loss for maybe value learning or policy radians. And so the only signal that goes in is like the action that you chose and the reward that you got. And that's that just if you think about. Like let's say I throw you in an unknown environment. Do well in that environment in some way maybe try and get food or you trying to solve specific tasks you want to solve. And if you just imagine that everything you would learn about this world would just come from the reward and the actions you chose. That's just insane. That means like I'm not trying to find any correlations in my input, for example, I'm not trying to explain what I'm seeing. And that just what more mathematically speaking there is a lot of information in in the images or in the sensory inputs that you get about the environment. So you should use that in some explicit way using representation learning, I think. And it's this can be quite separate actually from the RL algorithm. So there's a lot of work showing a lot of applications application papers showing that you can you have your RL agent. And then in addition to the policy gradient boss, you just have a reconstruction loss from maybe the features of your some height of high out representation within the network. And then you just try to reconstruct your input and that really helps a lot even though it's a very simple thing to do. Especially when when you have high dimensional inputs. And so it's I think it's perfectly fine to do research on representation learning for control and like, or RL separately. But if you want something that's data efficient, you should definitely make use of your inputs in some way. And to add to that, the same is true for world models as well. If you have a specific task, because in principle, you only need to make accurate predictions of future rewards. 
And that's enough to get maximum performance on the task. And in principle, you don't even need to reconstruct your inputs in the world model. It's just that then you're back to only learning from a very limited learning signal. And I think there is still some benefit in learning a world model, even without any explicit representation learning. You still incorporate some other useful priors into the world model such that, for example, that there is a compact compact activation vector that explains the state of the world at one point in time. That's that's a useful prior, right? It means that we have this high dimensional input. And for the age of that's this gigantic pixel grid. And it means that there is a much smaller representation that has to, has to describe everything that the agent needs to know about the input. And so, and then if you have a, if you have a dynamics model, then there needs to be function that takes this description of one point in time to the description of the next point in time. And then that has to be enough to predict the good action at that point in time or predict the value or the reward. And so, this idea of a hidden Markov model structure is also useful. It's a useful prior. I don't know exactly how much the representation learning contributes to the data efficiency compared to the just learning and latent space compact representation of the environment state. But of the sequence of past inputs to the agent, but for example, that's what mu zero does. It's not learning a global world model where the agent learns everything about its inputs. It's just learning what is necessary to solve with specific tasks because all the learning signal comes from the reward and the value and the policy gradients. So, but you're still incorporating this at least as one prior of having a compact representation. So, in, in the planet paper, I think you, you separate the stochastic and deterministic components of the state. And can you help us understand why you want to separate those and then how that separation works? Yes. So, we, when we came up with the model, we basically just tried random things and we had no idea what we were doing. And this particular combination seemed to work well. And so afterwards, I, I will try a lot of other designs and they did not work. And I think by now I have a bit of a better understanding. Of course, we had some hypotheses of why maybe stochastic part helps and deterministic part helps. But then later on doing other projects building on top of this model, we got some more insights of why this is, this might be particularly useful way of designing the, the latent transition function. And so, one point is that if you, if you want to latent dynamics model, where given the sequence of states, you can predict all the images individually. So, there is no skip connection from one image to the next, let's say. Then, then your sequence of latent states has to be stochastic in an environment where the agent can't make deterministic predictions. So, that could be either because maybe there is actually noise injected in the simulator and how the simulator works. Or it could be because the agent doesn't know everything about the world. So, it's a partially observable environment and that makes it stochastic from the perspective of the agent. And so, to predict multiple possible futures, you need stochasticity in your, in your latent state sequence. 
But if you make it fully stochastic, then you get a typical state space model where the hidden state at one step is just the, let's say a Gaussian, where the mean is predicted through some neural network from the last state and the last action. And, and the variance is also predicted by the neural network. Then, there is a lot of noise during training and, and that noise, technically speaking, it adds information to your state at every type of state, but it's not information about the environment. So, it's not useful information that it kind of hides the information that the model has extracted already. So, if you think about maybe the agent has seen some images and then it has inferred the position of objects and put that into the latent state. And now, you predict forward for five time steps, but at every time step you're adding noise to the state, then it becomes really hard for the model for the agent to preserve information over multiple time steps. It's just a raised after a couple of steps. And here you're talking about the conditional VAE formulation, is that right? What is the conditional VAE formulation? Sorry, I meant, when you're talking about a stochastic model like you are right now, are you speaking about like a VAE? Yes, so it's, it's a latent variable model, the way of VAE is a latent variable model. And they're, and we train it the same way of VAE is being trained. So, it's the same elbow objective function or free energy objective function. But you don't call it a VAE. And it has a lot of similarities. So, you could, you could see it as a, as a very kind of specific case of a VAE where instead of having one kind of fixed size representation as your latent variable, you instead have a sequence, a mark of chain of latent variables. And then your data is also a sequence of images rather than a single image. So, you can think of it as a sequential VAE. So, you were describing how the, the stochastic component cannot capture all the information. And so, that's why you need the deterministic component as well. So, theoretically speaking, it could. The stochastic version, the fully stochastic model is general. So, it could learn to set the variance to close to zero for some of the state components. And that way it would preserve information over many time steps without getting erased by noise. It's just hard to learn. And you don't really get good gradients for learning that because optimization process is so noisy. And so, you would basically end up with a model that doesn't learn long term dependencies in the data well. And so, having a deterministic component is, is in principle, just like setting the variance to zero for, for some of the stochastic components in the state. So, that you put in the prior that there are some things that should be preserved over a long time. So, is the idea that in certain areas of the environment, things could be fully or more so deterministic or more so stochastic? Like, do these two components kind of become more influential or less in certain areas as appropriate? That's an interesting question. So, I like, I think that's, it's basically the same question. But I like to think about, I like to, I like to not think about the implementation of the environment. So, this comes up for exploration as well. But in this case, whether the environment is more stochastic or less stochastic in some states, doesn't matter. What matters is whether it's more or less predictable for the agent. 
Right, because the agent doesn't really know more about the environment than the sequence of its inputs. And it can't make more sense of them than what its model architecture lets the agent make sense of the data. So, most stochastic, practically what it actually means is that the agent can't model it well. The agent doesn't know exactly what's going to happen with things that, you know, many possible things could happen. And that could be because we inject like, pseudo random noise into the simulation, or it could be just because there are so many visual details, let's say, or the model is too small to really make an accurate prediction for some of the more complex parts of the world. And now to answer your question, the way I think about this latent model now with the stochastic and the deterministic part is that there's another big benefit of having a stochastic part. And it's not so much about stochasticity in the data, but it's more about allowing you to control how much information goes into the deterministic state. So, you can think of this as a deterministic model where at every time step, you have a stochastic variable that lets you add information about the current image. And there's a KL regularizer that encourages the model to not incorporate that much new information into the hidden state. But he's still training it to reconstruct all the images. So, what this reconstruction arrow does together with the KL regularizer is when you want to reconstruct the image from some particular state, then the model is allowed to look at the image through the stochastic bottleneck, but it's encouraged not to because of the KL regularizer. So, instead, it would look at all the input information that it has already extracted from past time steps, because there's no KL regularizer for those. Or there is, but it already paid for it. So, the model is better off using the deterministic path to look back by time to get the information from there, as long as it's something that can be predicted from the past. And I think that encourages the model to learn long-term dependencies. Okay, so maybe I'm misunderstanding a little bit here, but is this model not Markovian? Like, does it not look back at the only the one step previous state? Or you're saying it's looking back in time through implicitly through the deterministic latent. Is that what you're saying? Yes, yes, exactly. So, it's actually, it's good that you're bringing up this point because there are different ways to think about this stochastic deterministic parts in the model. You can either think of it as the Markovian model, where just some elements in the state are not stochastic. And your state is basically the concatenation of deterministic and stochastic state at every time step. Or you can think of it as the non-Markovian model of only the stochastic state. So, if you don't, if you can ignore the deterministic part from your model description from, like, when you write down a probabilistic graphical model, and you only write down the stochastic sequence of states, then this deterministic RNN actually lets this stochastic state at some time step T, depend on all the past stochastic states through this deterministic kind of shortcut. But that, yeah, so those are both valid views. You can say it's a non-Markovian stochastic model, or you can say it's Markovian hybrid stochastic deterministic model. 
But the second perspective is useful for the implementation, because it means that when you observe a new image, you don't have to go back in time. You only need the last stochastic and deterministic state, and the new image, to compute the next stochastic and deterministic state. So, I was looking a little bit at the code for the RSSM component, and there was a comment saying that if an observation is present, the posterior latent is computed from both the hidden state and the observation. So, does that mean that when it's imagining the future, the observation is not available? Is that what that line means? Yes, yes, exactly. So, you can think of this as the prior and the approximate posterior in a VAE, or the prior and the encoder in a VAE. They both give you a distribution over the latent variable. They are both a belief over the code. But one is a more accurate belief, because it got some context information, in this case the whole image. So, one is the prior, one is the posterior, or approximate posterior. And this principle is more general than that. You could have additional context information. You could have the whole context, like just give it the whole image as you do in a VAE, to try to get the most accurate belief. But you could also give it only some of the information. You could either give it part of the image, like a patch maybe, or you could give it some additional context information about the image, like a label, like a class label for the image. And, you know, what's the belief over the code? If I only know it's a dog, then that's going to be a narrower distribution than the prior belief that doesn't know any context. But it's still going to be a wider distribution than the belief I get when I condition on the whole image. And so, in a temporal model, something similar happens, where for the prior belief over the code at some time step t, there are multiple beliefs you could have over that. If you don't know anything, then that could just be a standard Gaussian, let's say. But in RL, or in a sequence model in general, there is a lot of context, you know, and that context is basically all the past inputs, but just not the current one, and of course not the future ones yet. And so, that's the prior that you need to use, at least when you just write down the standard ELBO objective: the prior over the code at time step t, the distribution, the belief that doesn't depend on the current image, should still have access to all the past images. And another way to view this is as a Kalman filter, because basically the model is just a nonlinear, learned Kalman filter. So, in a Kalman filter, you also have this temporal prior, which is called the prediction step, that tries to predict the hidden variables without knowing the current image. And then there's an update step that takes this prior belief, this temporal prior belief, and updates it to a more precise distribution by looking at the new input, by looking at the new image. And so, we do the same in a sequential VAE. So, is the model aware that when it's imagining future time steps, that it's less certain about those in some sense? Yes, yes. So, those are two neural network components. You actually have to learn two transition functions. One where you give it the past state and the past action, and you train it to predict a distribution over the next state.
And then another one where you give it the past state, and the past action, and the current image, and then try to predict another distribution. And that will be more precise than narrower distribution, and it actually is when you look at the entropy, because it has information to more context, or access to more context information. And the way those two are trained is that during training, you always use the one that can see the data, but the KL regularizer is from that second distribution to the first, so to the prior transition function. And so, that trains the prior transition function, you basically try and predict what the posterior, the better belief is going to be, but without seeing the new image, so won't be able to do perfect job, unless the sequence of inputs is fully deterministic. And so that is the only thing that trains this KL regularizer, is actually the only lost term that trains the prior transition function. And the prior transition function is what you use for forward imagination when you're just planning into the future, but you don't know the actual inputs for those time steps. And at the same time, the KL regularizer regularizes the posterior belief, saying that, you know, even though you got to look at the image, don't be like overconfident, try to still be close to what you would have predicted without seeing this data point, try to still be close to the temporal prior. Can you talk about what range of environments this type of approach is best suited for, or the limits in one environment would, this could be applied too well. Does it have something to do with how much stochasticity they have, or I mean, it seems like the environment is a user really pixel large dimension, large dimensional pixels state space. But is that the, is it, is that the main area where this method is useful or is it go beyond that? Yes. So I think the approach is generally useful for, for a lot of reinforcement learning setups. There are some applications of reinforcement learning where you not really have an agent in that sense, but just trying to solve some discrete optimization problem or some black box optimization problem where you don't get radians. So in those cases, I don't know, like when you're trying to, I don't know, like maybe try to predict the proof for a mathematical statement. I don't know, I haven't really thought about those problems. Like when you have an agent in an environment, and the, especially if the environment is partially observed. So you have to integrate information over time. So for example, an image won't tell you velocities of objects that just tells you positions. And then if it's a, if the field of view is limited because you're only looking in one direction and you don't see the object in the other direction, you also have to integrate information over time. And so then this is a very useful, very useful general approach. Because you're, you're making use of the factorization of, of a partial observable environment. So in some sense, the latent states that you're learning can be thought of as a replacement of the hidden states of the environment that the agent doesn't have access to. Now, this is important. The latent states learned by the agent are not an approximation of the environment state. Right, there's no reason whatsoever to believe that they will become similar in value to whatever the environment state is. 
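The two transition beliefs and the KL term just described can be sketched as follows. This is an illustrative reconstruction, not the actual PlaNet or Dreamer code; the shapes, the simple Gaussian parameterization, and the function names are assumptions.

```python
# Sketch: a prior transition that predicts the next latent without the new
# image, a posterior transition that also sees an embedding of the image,
# and the KL term that trains the prior to match the informed posterior
# while regularizing the posterior toward what was predictable in advance.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.distributions as td

latent_size, action_size, embed_size = 30, 4, 128

prior_net = nn.Linear(latent_size + action_size, 2 * latent_size)                   # no image
posterior_net = nn.Linear(latent_size + action_size + embed_size, 2 * latent_size)  # sees image

def gaussian(params):
    mean, stddev = params.chunk(2, dim=-1)
    return td.Normal(mean, F.softplus(stddev) + 1e-4)

def transition_step(prev_state, prev_action, image_embedding):
    prior = gaussian(prior_net(torch.cat([prev_state, prev_action], -1)))
    posterior = gaussian(posterior_net(
        torch.cat([prev_state, prev_action, image_embedding], -1)))
    kl = td.kl_divergence(posterior, prior).sum(-1).mean()
    state = posterior.rsample()   # fed to reconstruction and reward losses elsewhere
    return state, kl
```

During imagination there is no image, so only the prior is rolled forward, which is why the imagined beliefs are wider, that is, less certain, than the filtered ones.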
Can you talk about what range of environments this type of approach is best suited for, or the limits of where it can be applied well? Does it have something to do with how much stochasticity they have? I mean, it seems like the environments you use have really large-dimensional pixel state spaces. Is that the main area where this method is useful, or does it go beyond that? Yes. So I think the approach is generally useful for a lot of reinforcement learning setups. There are some applications of reinforcement learning where you don't really have an agent in that sense, but you're just trying to solve some discrete optimization problem or some black-box optimization problem where you don't get gradients. In those cases, maybe trying to predict the proof for a mathematical statement, I don't know, I haven't really thought about those problems. But when you have an agent in an environment, and especially if the environment is partially observed, you have to integrate information over time. For example, an image won't tell you the velocities of objects; it just tells you positions. And if the field of view is limited, because you're only looking in one direction and you don't see the object in the other direction, you also have to integrate information over time. And then this is a very useful general approach, because you're making use of the factorization of a partially observable environment. So in some sense, the latent states that you're learning can be thought of as a replacement for the hidden states of the environment that the agent doesn't have access to. Now, this is important: the latent states learned by the agent are not an approximation of the environment state. There's no reason whatsoever to believe that they will become similar in value to whatever the environment state is. But they are an alternative representation that, if the model is trained well, also explains the same sequence of observations given the same sequence of actions. And then you have an alternative implementation of the environment, if you want. And that's really powerful, because now you've got a Markov system. Once you have this representation, you can make predictions into the future given actions. You don't need a recurrent policy anymore, because the latent state is already sufficient. And I think your question also hinted a bit in the direction of: could we do this for low-dimensional inputs, like is more typical for these MuJoCo tasks? And the answer is yes, we have tried that at some point. It does work, and it is a bit faster than learning from pixels, but actually not that much faster. And I think Brandon Amos has a paper on differentiable model predictive control where he does that, and he also found that it worked quite well. We had one project where we tried it on low-dimensional states and it worked, but it didn't go anywhere. So yeah, I'm interested in the pixel space, and right now I'm trying to scale up these models to work on more complex domains. Some of that we did in the follow-up paper, Dreamer. All right, let's turn to another recent paper of yours, Dream to Control: Learning Behaviors by Latent Imagination. We got to hear you describe this paper in our December NeurIPS episode. Can you remind us of the main idea of this paper? Sure. So one limitation that PlaNet has is that it learns a quite powerful world model, but it doesn't make use of it in the most efficient way to derive behaviors. PlaNet uses online search at every time step when it interacts with the environment, and that can be really expensive, because you predict forward many trajectories, then you select the one action that you like the best and execute it, and you throw away all this effort and have to do another search at the next time step. And that becomes quite expensive. So it's doing model predictive control. Exactly, yeah.
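As a rough illustration of why that per-step search is expensive, here is a sketch of a cross-entropy-method style MPC loop of the kind PlaNet uses. The horizon, population sizes, and the `imagine_return` interface are all made up for illustration, not taken from the actual codebase.

```python
import numpy as np

def plan_with_cem(model, state, action_dim, horizon=12, candidates=1000,
                  elites=100, iterations=10):
    """One environment step of MPC: search over action sequences in imagination,
    execute only the first action, then discard the rest of the plan."""
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(iterations):
        # Sample candidate action sequences around the current search distribution.
        plans = mean + std * np.random.randn(candidates, horizon, action_dim)
        # Roll every plan forward in the learned model and sum the predicted rewards.
        returns = np.array([model.imagine_return(state, plan) for plan in plans])
        # Refit the search distribution to the best-scoring plans.
        elite = plans[np.argsort(returns)[-elites:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean[0]  # only the first action is executed; everything else is thrown away

class ToyModel:
    """Stand-in for a learned latent dynamics and reward model."""
    def imagine_return(self, state, plan):
        return -np.sum((plan - 0.3) ** 2)  # pretend the reward peaks at action value 0.3

print(plan_with_cem(ToyModel(), state=None, action_dim=2))  # approaches [0.3, 0.3]
```

The cost is candidates times horizon model steps at every single environment step, which is the expense described above; Dreamer instead amortizes this effort into an actor network that is trained once and reused.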
And the second limitation is that in the original PlaNet agent, we don't learn a value function and there is no temporal abstraction, so the agent is only going to consider rewards within the planning horizon. And you can't increase the planning horizon indefinitely, because for one, eventually your model is going to make less accurate predictions, and also, if you're searching for a longer plan, it's going to take you longer to find a good plan, because the search space gets so much bigger; there are so many more long plans than there are short plans. So it's not really computationally tractable, and we can't consider very far-delayed rewards that are, say, hundreds of time steps into the future. One way that I initially thought you could get around that is through temporal abstraction, and I still think that's really the long-term way to go. But we have value functions in reinforcement learning and they work quite well, so for now we can solve it that way. And so Dreamer is really a follow-up on PlaNet where we use the same dynamics model, the same world model, but we're using it in a more clever way to learn how to predict good actions. And there is a substantial increase in computational performance: we went down from maybe one day for a million time steps to something like four to five hours. And there's a substantial improvement in the horizon of how many future rewards the agent considers, and that leads to much higher empirical performance as well. And the way we do that is, we throw away the model predictive control part, and instead we have a neural network to predict actions, an actor network that takes the latent state of the world model and predicts a distribution over the action that hopefully is best for that state. And we have a second neural network in the latent space, which predicts the value, the expected sum of future rewards with some discount factor, that the current actor network is thought to achieve from the particular state that is input to the value network. And with the value function and the actor, you can do an efficient actor-critic algorithm in latent space, and you can train that from model predictions, independently of the data collection. And you don't have to do any online planning anymore: once you have a good actor, to collect data you just run the world model at every step to update the latent state from the last step to the next one, to incorporate the new input, and then you just send that to the actor, predict an action, and execute it. So all the model predictions, or planning, if you still want to call it planning, happen offline, independently of the current episode. So in principle, you could also distribute this and run it asynchronously very efficiently. And the way you learn these two components now: one thing you could do is, you have a world model, and it basically defines a new RL problem. It's an imagination MDP where, instead of environment states, you have these model states and so on, and you predict the rewards as well. So you could throw any model-free RL algorithm at it now, and you can solve it without causing additional environment interaction, so you get a very data-efficient algorithm. But if you do that, you're not really making full use of the world model, because we have a neural network world model, so we can actually compute gradients through it. All the model-free RL algorithms are designed for real environments that you can't differentiate through, so they don't make use of these gradients. And that's why we can do better by developing an actor-critic algorithm that's specific to world models. And the algorithm is actually quite simple. You encode some past data from the replay buffer to get some initial model states, and then you imagine forward a sequence with some imagination horizon, let's say 20 steps, using actions not from the replay buffer, but from the actor network. So the actor is just trying out something in the model world. And then you predict all the corresponding rewards for those states, and you predict all the corresponding values as well, based on your current value estimate. And you want to maximize that with respect to the actions, that is, with respect to the parameters of the actor network. So you can very elegantly compute the gradient of the sum of future rewards and future values, which you can weigh in some way if you want, and you can compute the derivative of that with respect to the actor parameters just by propagating through the multi-step predictions of your model, because they're all neural network components.
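Here is a minimal sketch of that idea, backpropagating an imagined return through a differentiable latent model into the actor. Everything is simplified and the names are made up: the latent is a single tensor, the dynamics are deterministic MLPs, and the actor objective is just the sum of imagined rewards plus a final value, rather than the weighted lambda-returns the actual algorithm uses.

```python
import torch
import torch.nn as nn

latent_dim, act_dim, horizon = 16, 2, 15

# Toy differentiable "world model" pieces and the two new networks Dreamer adds.
dynamics = nn.Sequential(nn.Linear(latent_dim + act_dim, 64), nn.ELU(), nn.Linear(64, latent_dim))
reward_head = nn.Sequential(nn.Linear(latent_dim, 64), nn.ELU(), nn.Linear(64, 1))
actor = nn.Sequential(nn.Linear(latent_dim, 64), nn.ELU(), nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(latent_dim, 64), nn.ELU(), nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)

state = torch.randn(32, latent_dim)     # imagined start states, encoded from replay-buffer data
rewards, values = [], []
for _ in range(horizon):
    action = actor(state)               # actions come from the actor, not from the replay buffer
    state = dynamics(torch.cat([state, action], -1))
    rewards.append(reward_head(state))
    values.append(critic(state))

# Maximize imagined rewards plus a bootstrap value, with gradients flowing back
# through the model's multi-step predictions into the actor parameters.
actor_return = torch.stack(rewards).sum(0) + values[-1]
actor_loss = -actor_return.mean()
actor_opt.zero_grad()
actor_loss.backward()                   # only the actor is updated here; in practice the
actor_opt.step()                        # world model and critic have their own training steps
```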
And there are some stochastic nodes in there, because the model state has a stochastic component, and the actions are also sampled from the actor's distribution. So there are two ways you can deal with that. If the action is continuous and can be reparameterized, like a Gaussian, for example, then you just use the reparameterization trick to compute the gradients through all these steps. And if it's discrete, then you can use straight-through estimation, which is not really the exact gradient, but it still works very well. And once you do that, you know exactly, if you change the actor parameters a little bit, at what rate that is going to increase or decrease the future rewards, so you know how to change the actor network. And then the only thing that's left is optimizing the value network, and that's done through simple temporal difference learning. So the value at one step should just correspond to, say, the reward plus the value at the next step, or you could do a multi-step return, so the value should correspond to the next ten rewards plus the eleventh value. What we actually do in the paper is the lambda return, which means we take all of these n-step returns for different values of n, so one reward plus the value, two rewards plus the following value, and so on, and we weigh them. But that's just so we don't have to choose a hyperparameter for it; it doesn't really matter that much.
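For reference, the lambda return mentioned here can be computed with a short backwards recursion. This is a generic stand-alone sketch with made-up numbers, not code from the paper.

```python
def lambda_returns(rewards, next_values, discount=0.99, lam=0.95):
    """R_t = r_t + discount * ((1 - lam) * V(s_{t+1}) + lam * R_{t+1}),
    i.e. a weighted mix of every n-step return; the last step bootstraps from V."""
    horizon = len(rewards)
    returns = [0.0] * horizon
    returns[-1] = rewards[-1] + discount * next_values[-1]
    for t in reversed(range(horizon - 1)):
        returns[t] = rewards[t] + discount * ((1 - lam) * next_values[t] + lam * returns[t + 1])
    return returns

# Toy imagined trajectory: five predicted rewards and the critic's value of each next state.
rewards = [0.0, 0.0, 1.0, 0.0, 0.5]
next_values = [0.1, 0.2, 0.9, 0.3, 0.4]          # hypothetical critic outputs
print(lambda_returns(rewards, next_values))      # regression targets for the value network
```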
So on a high level, this sounds similar to Sutton's Dyna architecture, but Dyna didn't have this notion of gradients, or it was independent of what kind of function approximator was used, right? Yes. Sutton's Dyna, I think, basically includes almost all of model-based RL. It's a very general, high-level perspective where you have some data from the real environment and use that to learn some model of the environment, or of the data that you got from the environment, and then you use that model to somehow select an action, which you can then execute in the real world. And I think the Dyna paper even talks about online planning as well, but maybe that's a follow-up paper. But yeah, in principle, these are all within the category of Dyna-style algorithms. So you're building on the work you did in PlaNet, and you used the same RSSM deterministic-plus-stochastic model here; was the world model the same? Yes, the world model is exactly the same. For continuous control, we found the world model still works across all twenty continuous control tasks. There are a few more, but we chose the ones for which the best model-free algorithm got non-zero performance, because some of the tasks don't really make sense from pixels: you can't see the things that are necessary for solving the task. So yeah, the world model just worked for all of these, and the improvement comes from the value function, and also from the actor network, which can actually learn a better policy than an online planning algorithm can, because it doesn't assume that the actions are independent in time, for example. The actor network also gets a lot more optimization steps in total, because for the online MPC in PlaNet, you can do maybe ten optimization steps, but then you have to produce an action at the end of the day, because if you do too many optimization steps, it becomes way too slow to really interact with the environment. Whereas the actor network in Dreamer is shared: there's just one actor network throughout the whole training process of the agent, so over time it gets trained much more. And later on, in addition to the continuous tasks, we did some discrete tasks in Atari and DeepMind Lab, and we found that the same world model just works, though we did increase the size of the stochastic and deterministic states, so we gave the model more capacity. I was actually really surprised by that. But what it showed is that the PlaNet agent was bottlenecked not by the model, but by the planning part. Was that surprising to you when you saw the final performance of the Dreamer agent, or was it what you expected? No, I was actually quite surprised. I knew that for some of the more interesting tasks I wanted to solve, where I wanted to do exploration, we would eventually need to consider rewards further into the future than twenty steps, so we couldn't use PlaNet out of the box. And I almost thought, oh, there are probably much bigger problems, and we'll probably have to find a better world model; is it even worth focusing on the horizon problem, or are there much bigger bottlenecks at the moment? But it turned out to be an almost easy problem to tackle, because there are already solutions for it with temporal difference learning, and we just applied that to the setting we were in, where you have a differentiable world model and want to make efficient use of it. And I was really surprised by how well it worked. I was also really surprised that an agent that doesn't do any lookahead while interacting with the environment can do better, and even be as data-efficient as online model predictive control. Do you think Dreamer would do pretty well even if it didn't differentiate through the model? Or maybe that's something in between PlaNet and Dreamer, the idea of just distilling PlaNet's planning into a policy network, kind of like what AlphaZero would do. That's different from what you did here, though, right? Because you differentiated through the model. Would that be a reasonable thing to do? Do you think that would work well here? Yeah, there's a design space of different algorithms that operate within the world model to derive long-term behavior, to learn a value function and an actor. The way AlphaGo does it is to learn the value function from the returns of roughly on-policy data. And you can't really do that with a big replay buffer, because the returns you got in the past depend on the actions that your actor chose in the past, but now your actor is already better, so those returns won't reflect the value of the actor in that state right now. But if you make the replay buffer small enough, you're approximately on-policy, and then if you train it on a lot of data, that can work well. It's just that in the low-data regime that we're in, making your replay buffer small is a bad idea, and it pretty clearly always hurts performance. So we couldn't really go with this approximate on-policy approach to learn the value function; we needed to do TD learning. And we needed to do it on imagined rollouts, because we can't use the past replay buffer data, it's too different. So now, to do imagined rollouts, you need a policy to select actions.
And as you said, you could in principle use a search to select actions there, like a CEM search, let's say, and then distill that: learn a value function from it and then learn an actor network from that. Or you wouldn't even need to learn an actor network anymore; if you have a value function, you can just use that during planning and that would be fine. But the problem is you can't really afford to do the CEM search at every time step in imagination, for so many imagination trajectories. That's why we ended up abandoning the explicit search and switched to using an actor network. And I think your question was also whether it could work similarly well if we ignored the gradients, and I'm not a hundred percent sure. What I do know is that once you have the world model, all the training inside the world model just costs you wall-clock time; it doesn't cost you environment interaction. So you could use a less efficient optimization algorithm in imagination and you would get the same data efficiency in the real world. And I don't see a reason why a normal model-free algorithm inside the world model couldn't get to the same final performance as well. But I think it would be computationally more expensive, because you would need more updates. I haven't tried it, though. So let's turn to another very recent paper of yours, Planning to Explore via Latent Disagreement. Can you tell us the main idea of this paper? Yes. I was really excited about this paper, because I finally got to the point where I wanted to be about two and a half years ago when I started to work on PlaNet, which is to do forward-looking exploration. We solved the world-model problem to a sufficient degree, and then we solved the horizon problem to a sufficient degree; that was PlaNet and Dreamer. And then we could finally do exploration with it, and that's the key point of this paper. There are a couple of ideas in it. One is that when you do exploration, you need some measure of novelty that you can optimize for as your intrinsic reward. We use ensemble disagreement for that, which Deepak Pathak, who was a collaborator on the project, has done a lot of work with, and there are a couple of papers from other people as well showing that ensemble disagreement works really well as a novelty signal. I would even include random network distillation in the category of ensemble disagreement. So that's the source of novelty that gives you the intrinsic reward. But then there's another aspect to the project, which is that when you do exploration to learn about the environment, and you have novelty as your objective function, that's a non-stationary objective function, because every time you interact with the world, you see new data, and that changes your knowledge, and so it changes what you think is novel, what future inputs will be novel. And so there's a conceptual problem with model-free exploration, because model-free optimization works by training the policy from samples of the real environment. You have some novelty objective that you want to maximize with your exploration policy, and to improve the policy for that novelty objective, you need to draw samples from the environment. But while you're training the policy, the novelty objective has already changed, because you needed all these samples to train your policy, and those samples tell you more about the environment.
So in some sense, it doesn't really make that much sense conceptually. Sorry, is that why a lot of the curiosity formulations just take an incredibly long time, like billions of samples? Yes, I think that's an important part of it, and I think you can be much more data-efficient by doing forward-looking exploration. And to do forward-looking exploration, you really need a world model; at least, I don't see another way of doing it, because you need to train the policy to maximize the novelty reward without changing the knowledge of the agent, so without causing any additional environment interaction. That way you can actually find the best policy for your current reward and then execute it, for maybe one step or maybe for multiple steps, then gather some more data, update the model and your novelty reward, and then optimize the policy again. So you're really using a lot of compute to decide what is the best action to choose next, rather than the model-free approach, where the policy will always lag behind, because it hasn't converged on the novelty reward yet but you're already changing the novelty reward. Okay, cool. So could you maybe make crystal clear for us again this distinction between retrospective novelty and expected surprise? And which is the more common case here? I guess the retrospective novelty is the more common case, looking at the past literature. Yes, I would say so. These are the two terms that I like to use to describe these two ways of doing exploration, although both have been done for a long time. Retrospective surprise is what a model-free agent maximizes if it has an intrinsic reward. What it basically does is: in the beginning, you don't know anything, so you do random actions, and then you find something that's novel. Say you run an episode and compute all the intrinsic rewards for that episode. In the beginning it will all be novel, because you don't know anything yet. And so then you train your policy on it; it basically tells your policy, oh, this was a really good trajectory, because it was very novel. So you're reinforcing the same behavior. And if you were really good at optimizing your policy, and the environment isn't too random, then it would go and realize the same trajectory again. But that's exactly not what you want, because you just went there, so it's not novel anymore. It was novel when you tried it for the first time. And so you do it again, and this time you get a low reward, and so then you encourage the policy not to go there anymore. So then what does the policy do? It has no idea; it just knows, don't go there. And then it does another random exploration somewhere else, and goes there a second time to find out it's not novel anymore. In practice, there is more generalization in the network going on and so on, so it's not exactly this, but I think it's a useful mental picture for understanding what's really wrong with retrospective exploration. And in contrast to that, there is expected exploration, or planning to explore, forward-looking exploration, where you use a predictive model of the future to optimize your policy in imagination, so that the policy gets really good at choosing whatever, at the time you're training it, is novel to the agent. But since you're training it from imagined rollouts, the training doesn't tell the agent anything new about the environment, and so the intrinsic reward doesn't change. You can optimize this for much longer, in principle even until your policy converges fully. And then, in the most extreme case, you would just execute one action of that policy, then retrain your world model and so on, retrain your policy in imagination, and then you really get at what is most promising to explore next. You can look into the future and think, oh, if I go here, I don't really know what's going to happen, but for the things that I think might happen, some of them are really interesting, because they're really different from everything I've seen so far, while others are not so different from what I've seen so far. And then you can go, in a really directed way, to the parts that your model expects to be the most interesting parts of the world, to maximize the expected information that you imagine you could gain about the environment.
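Here is a deliberately tiny toy, not the actual algorithm, that contrasts the two loops. A count-based novelty signal stands in for ensemble disagreement, and a one-dimensional random walk stands in for the environment; all of it is invented for illustration.

```python
import random

class CountNovelty:
    """Novelty is high for states we have rarely visited."""
    def __init__(self):
        self.counts = {}
    def reward(self, state):
        return 1.0 / (1 + self.counts.get(state, 0))
    def update(self, states):
        for s in states:
            self.counts[s] = self.counts.get(s, 0) + 1

def rollout(policy, length=5):
    states, s = [], 0
    for _ in range(length):
        s += policy()                       # the policy returns a step of -1 or +1
        states.append(s)
    return states

novelty = CountNovelty()

# Retrospective: score an episode AFTER it has been experienced. By the time the
# policy is reinforced toward those states, they are no longer novel.
episode = rollout(lambda: random.choice([-1, 1]))
scores_then = [novelty.reward(s) for s in episode]
novelty.update(episode)
scores_now = [novelty.reward(s) for s in episode]
print("looked novel when collected:", scores_then)
print("novelty of the same states now:", scores_now)

# Forward-looking: evaluate candidate behaviors in imagination against the CURRENT
# novelty estimate, without collecting any new data, and only then act.
def imagined_novelty(direction):
    return sum(novelty.reward(s) for s in rollout(lambda: direction))

best_direction = max([-1, +1], key=imagined_novelty)
print("forward-looking exploration picks direction:", best_direction)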
There was a cool paper called Model-Based Active Exploration where they do something quite similar, but on much simpler environments and without any high-dimensional inputs. They basically learn an ensemble of ten environment models, and then the disagreement between their predictions is the reward, and then they train basically Soft Actor-Critic, or some other model-free algorithm, to maximize this imagined reward on the imagined predictions. So it's also implementing this forward-looking exploration. Now, the challenge we had in addition to that is that we have high-dimensional image inputs, so we can't really afford to do the policy optimization in image space; we have to do it in the latent space, and so we need some way of defining the novelty reward there. What we did for that is: from every latent state during training, we train an ensemble of predictors to regress the observation embedding for the next time step, whatever the convnet produces in terms of features before it goes into the model at the next step. So we have a couple of one-step predictors, which is more efficient than actually training multiple RSSM architectures; it's just some feed-forward layers. And that turned out to work really well. For training them, you of course need a target for the next observation embedding, but for imagination training you only need the variance of these ensemble predictors, so you don't need the future observations. You can do it all in the latent space of the world model: you predict a trajectory of states, and then for every state you feed it to all the ensemble predictors and just compute the disagreement between them.
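A minimal sketch of that disagreement reward, with assumed sizes and names; the real agent trains these heads alongside the world model, but the structure is roughly this.

```python
import torch
import torch.nn as nn

latent_dim, embed_dim, ensemble_size = 32, 64, 5

# An ensemble of small one-step heads, each regressing the next observation embedding
# from the current latent state (much cheaper than training several full RSSMs).
ensemble = nn.ModuleList(
    nn.Sequential(nn.Linear(latent_dim, 128), nn.ELU(), nn.Linear(128, embed_dim))
    for _ in range(ensemble_size)
)

def training_loss(latents, next_embeds):
    # Supervised during world-model training: every head regresses the real target.
    return sum(((head(latents) - next_embeds) ** 2).mean() for head in ensemble)

def intrinsic_reward(latents):
    # During imagination there is no observation, so the reward is just the
    # disagreement (variance) of the ensemble predictions, averaged over embedding dims.
    preds = torch.stack([head(latents) for head in ensemble])   # (ensemble, batch, embed)
    return preds.var(dim=0).mean(dim=-1)                        # (batch,)

imagined_latents = torch.randn(16, latent_dim)
print(intrinsic_reward(imagined_latents))   # one intrinsic reward per imagined state
```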
How does this formulation respond to the noisy TV problem, where world models get confused by sources of random noise? Yeah, I like to connect this to the earlier point that it's not so much about whether the environment is stochastic or not. Aleatoric uncertainty, or irreducible uncertainty, is not just a property of the environment, of whether the screen is unpredictable or not; it's also a property of your agent and the modeling capacity of your agent. So even if something is in principle perfectly predictable, if your model is too weak, then it will never learn it, and you don't want to get stuck trying to learn about it forever, when you could actually move on to other parts of the world where there are lots of things you can learn. So the question of the noisy TV really becomes the question of: how do I know when I should give up on learning something and move on to the next thing? And conceptually, I think the answer is that you don't ever really know, but the best you can do is learn things in order of increasing difficulty, learn the easiest things first, the things that are easiest for you. And so eventually you will have learned everything that you can learn, and then you will be stuck on the next hardest thing, but there is not really a way to avoid that. So to have an idea of what you can't learn, you need a noise model. If you have a deterministic model, then you have two problems: for one, it has to explain everything perfectly, and second, you can't really consider multiple hypotheses over models. You just have this one model, one point in the weight space of all possible models, and you don't really know how much certainty you have in that model, so you don't know how much the uncertainty was reduced after you see some new data. If you have a distribution over models, like a Bayesian neural network or an ensemble, that gives you a bit of an idea of how much you know: what's the disagreement in your ensemble? But then you also want a way to allow noise in your predictions. For example, say you try to just predict the next observation, to keep it simple, from maybe the last input and the action, and you do that with an ensemble of Gaussian models. Then you're allowing some error in the prediction. You're saying, each model tries to really predict the next input, but with a Gaussian distribution, so it doesn't have to be perfect. It's trying to get the mean to be the right mean, but if the observation lands somewhere else, that's okay, because we're predicting this Gaussian, so we assign some probability to all the next inputs we could get. And then the variance in the output of this Gaussian is basically the amount of noise that you assume there is in the data. And the more noise there is in the data, maybe you should avoid those parts of the environment, and that's what the expected information gain also tells you mathematically. Intuitively, this works out really nicely, because you have this ensemble of models that all predict a Gaussian over something in the future, let's say the next image. And even though the next image is a bit random, maybe inherently stochastic, the means of your ensemble, over time, when you get enough data, will still converge to the mean of whatever the distribution of the next input is. And so the ensemble disagreement will go to zero, even though there is randomness in your inputs, and so you will not be interested in them anymore. So it's able to model the stochasticity in a way that makes it not curious about it anymore? Actually, it's not clear to me how that works. So let's say the agent comes across two displays, and one is showing just random Go boards, 30x30 Go, or a smaller one, let's say a tic-tac-toe board. And the other one is the same board, but it's being played by experts. And we know they're different, right? We know these two cases are totally different.
And we might think that, at least with a simple game, if we watched it long enough, we could figure it out, but we don't know that at first. Right. So you have a model that tries to predict the next move in the game, or really just tries to predict the next input to the agent, what it's going to see next. And you need multiple models so that you can have multiple hypotheses about the rules of the environment. You try to learn the rules of the environment by having a model that, from one image of a Go position, predicts the next image of a Go position. And to get uncertainty, to do exploration, you need some way of representing your uncertainty, either explicitly, or any other algorithm will do it in some implicit form. So one way to do that is to train multiple environment models, and then you get an idea of: well, if they all predict the same thing, then I'm quite certain about what the next outcome is going to be; if they all predict different things, I probably don't have that good of an idea. If you train these in both scenarios, for the random Go board and for the expert Go board, then on the random Go board, the dynamics models are initialized differently in the beginning, so they will predict different things, so your agent will go there for a while. And then, over time, all of the models will just learn to predict the mean, and maybe the variance, of the next image. And the mean image, or the average over the next moves, is probably going to be uniform. So if it's in pixel space, if you're actually looking at the Go board, it would basically be: the stones that are already there will stay there, and all the other empty fields will have an equal chance of getting the next stone, so they will all be a little bit darker or a little bit lighter based on which player is next. Whereas if there were something to predict about the next move, there would be some fields that are clearly still empty and some fields that have some chance of the stone ending up there. And if you have multiple predictors, they can all predict this average image. In the case of the random board, after a while they will all predict exactly this uniform distribution over possibilities. And once they all predict the uniform distribution over next possibilities, you know that, first of all, your models all agree, they all predict the uniform distribution, so probably the next move actually is uniform. And then you know that there's nothing more to learn, because your ensemble members agree, even though they are not certain about what the next outcome, the next move, is. Whereas in the expert case, it will take much longer for them to agree on what the next move is going to be, and they will only agree by the time they've actually perfectly reverse-engineered the expert players, to the degree that the model allows them to. Can you tell us a bit about the process of writing these papers? For example, do the experiments generally work out how you expected them to? Are there often dead ends that aren't reflected in the final papers? The experiments rarely work out the way you want them to, so you need to run a lot of experiments. And I also want to be very confident in my own algorithm when I write about it.
Because, for one, it takes some time and effort to write a paper, and that's time where you can't do research. So I only want to do that if I have a result that I'm happy enough with that I'm willing to spend all this time writing the paper, and then writing rebuttals for the conference, and then doing a poster and maybe a talk and so on. And if you don't really believe in the method, all of these steps are painful, so I don't want to do that. I didn't think that way before grad school, because before grad school you kind of just need to get a paper so you get into a PhD program. But once you're in a PhD program, you have several years and you can think much more long-term and actually follow your interests. So I want to be sure that I have something that I also believe in, and that just takes a long time; you have to run a lot of experiments. Whatever problem you're studying in the paper, whether world modeling or exploration and so on, there's usually a big design space of ideas you can explore, and I want to, as much as possible, strategically break down this space and test out all these ideas, get an understanding of which of them are better or worse, or better in one situation but worse in another, and why. And it's not always easy. For example, we didn't do that much of that for PlaNet, just because we tried a lot of things and they all just didn't work at all. But I think it would actually be interesting to go back and try to really understand why, for example, this separation into stochastic and deterministic states seems to be so important. So there is a lot of tuning necessary, and it takes a long time, and I think it's worth putting in that time: it's better to have one paper a year that you're really happy with than four papers that don't really help anybody. Does that answer your question? Yeah, that was great. So do you have any comments on what you think the future of world models looks like? Yeah. So I think we still need to scale up a bit, because reconstructing accurate images doesn't seem to be the long-term solution for representation learning, neither in model-free RL nor in model-based RL. I think there are better ways of learning representations, learning latent states, than by reconstructing images, because if you think about it, there might be a lot of things in the image that the agent doesn't really need, and there may also be a lot of things in the image that are just really difficult to predict. My experience so far is that if you can get reconstruction to work on an environment, then it does really well, because you're basically solving a harder task than you have to: you're trying to predict all your sensory inputs, and if you can do that, then you know everything about the world there is to know. But if you can't, because the world is too complex to predict everything accurately in input space, then the agent tends not to learn a good representation, so it's not a graceful failure. And I think contrastive representation learning is really interesting. There are a couple of empirically very successful methods for static images that I think we can apply to video sequences for RL, and we're trying some of that.
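As a reference point for what that could look like, here is a generic InfoNCE-style contrastive loss, where two views of the same observation (an augmentation, or a temporally nearby frame) form a positive pair and the rest of the batch serves as negatives. This is a standard sketch, not the specific method being hinted at.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor_features, positive_features, temperature=0.1):
    """Each anchor should match its own positive against every other item in the batch."""
    a = F.normalize(anchor_features, dim=-1)
    p = F.normalize(positive_features, dim=-1)
    logits = a @ p.t() / temperature          # (batch, batch) cosine-similarity matrix
    labels = torch.arange(a.shape[0])         # the diagonal entries are the true pairs
    return F.cross_entropy(logits, labels)

# Toy usage: pretend these are encoder features of a frame and of an augmented or
# temporally adjacent frame from the same trajectory.
anchor = torch.randn(32, 128)
positive = anchor + 0.1 * torch.randn(32, 128)
print(info_nce(anchor, positive))
```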
And I think another aspect that a lot of RL is still bottlenecked by is temporal abstraction. I said earlier that value functions give you some of that, because they let you consider rewards into the long-term future. But in a really complex environment, I think it will become intractable to learn a good value function for everything, and you'll probably need to do some kind of online planning, just because there are too many possible scenarios that you could imagine to really be able to learn about all of them. So what you want to do is do the planning online, so you only have to do it for the situations that you actually encounter. And to still consider long horizons, you then need temporal abstraction in your world model. That's another thing we're trying. And besides that, there is a big open space of objective functions that are enabled by learning accurate world models, and some of them will benefit from having uncertainty estimates that are more accurate than maybe ensembles over parts of the world model. Empowerment, for example, is another interesting objective function that we're studying, which becomes much easier to compute once you have a world model. So in summary, it's scaling up, learning better representations, and finding better objective functions, because eventually exploration will become really important as well to learn a good world model. So back at the NeurIPS 2019 RL workshop poster sessions, I was at David Silver's poster for MuZero, and I asked him how MuZero handled stochasticity. He told me that it didn't, it used a deterministic model, but he said it could be extended to handle the stochastic case. And I think MuZero builds on the Predictron paper, which does some kind of temporal abstraction, so maybe there's progress being made on the temporal abstraction side. Yeah, I'm actually not sure whether the original Predictron has temporal abstraction in it. But for the stochasticity aspect, I think it may be more necessary when you're trying to explain more complex data. So if you're trying to explain your inputs, stochasticity becomes more important than if you're just trying to explain future rewards. That's my guess. Also, you have to learn a lot more, of course, if you're trying to model the world rather than the task, and the result is that you get a model that can be useful for a lot of different tasks, and that can be useful for exploration, where you don't have a task at all. But there are some recent papers on doing temporal abstraction, and some old ones as well, both in model-free and model-based RL. I think there are lots of great ideas there, and my guess is that we don't have to invent a crazy fancy method for almost anything in machine learning; we just have to take something reasonable, something that seems intuitively correct, and then push it until it either works or we find a reason why it doesn't work. And that hasn't really happened for temporal abstraction in RL yet. Can you say anything about the research directions that you are pursuing going forward? Yeah, that overlaps a lot with what I said in response to your question about next steps for world models. But for me, I'm trying to systematically go through different objective functions for intrinsic motivation now, and besides that, we also want to work on harder tasks.
So I need to scale up world models further, so that we can, let's say, train an agent with only intrinsic motivation to play Minecraft from pixels. That would be great: it builds a house and it survives, and maybe fights the monsters at night, because there's such complexity there and so many things you can do. A lot of games are actually easier to explore than you might think. For example, in Mario you can only walk forward, so it's not that difficult to explore: basically, either you're making progress and you go forward, or you don't. But in an open-world game, there are so many things you can do, and then you get an additional challenge, because once you've explored something, you kind of have to go back and see if there's something else you could also have tried from there. And that's why I like thinking about doing intrinsic motivation in Minecraft: you have to build tools and then use these tools to get better materials, and then you can build better tools, and bring yourself into a better state for surviving. And so if an agent can actually do all these things, then it must be a very general objective function that can explain all of this. Besides your own work, are there other angles in RL that you find very interesting lately that you might not have mentioned? Yeah, there's one that I've been thinking about a bit but not really done anything in, which is external memory, to give agents long-term memory. Temporal abstraction is one part of the puzzle: you do want to plan into the future on a temporally abstract level, and that gives you a long context from the past as well. But I think you can't keep everything in your working memory at a time, and so it's very natural to think that there could be this external memory module that you can write things into, and then later query to get back the facts that you need at the moment. There are a couple of interesting papers on training these modules for RL. And another direction that's not directly reinforcement learning is brain-inspired architectures. I think it would be cool to develop an unsupervised learning algorithm that works in an online setting on high-dimensional inputs, so it can't really do backprop through time; it has to find some other way, because it keeps getting new input. I think it would be cool to move away from the static-image setting to the online streaming setting for representation learning, and potentially explore the very basic ideas that people know about computation in the brain, like sparse distributed representations, hierarchies, and so on. Danijar Hafner, it's been a real treat, and thanks for taking the time and having the patience to teach us so much. I've learned so much in this episode; I'm going to listen to it many times. It's been great hearing about your fascinating research, and I can't wait to hear or read about what you come up with next. Thanks for sharing your time and your insight with us. Thanks, Robin. That was a great chat, and I'm looking forward to hearing the episode. That's our episode for today, folks. Be sure to check talkrl.com for more great episodes.
[ { "end": 13, "start": 0, "text": " This is Talk by Rail Podcast. All reinforcement learning, all the time." }, { "end": 22, "start": 13, "text": " Interviews of brilliant folks across the world of RL. I'm your host, Robin Chohan." }, { "end": 27, "start": 22, "text": " Denny Jar-Haffner is a PhD student at the University of Toronto, a student researcher" }, { "end": 33, "start": 27, "text": " at Google Brain and the Vector Institute and holds a Master's at Research from University College London." }, { "end": 36, "start": 33, "text": " Denny Jar, thanks so much for speaking with me." }, { "end": 38, "start": 36, "text": " Hi, Robin. Thanks a lot." }, { "end": 40, "start": 38, "text": " So how do you describe your area of focus?" }, { "end": 48, "start": 40, "text": " Yeah, I work in broadly and artificial intelligence, and that's really where the motivation for me comes from." }, { "end": 56, "start": 48, "text": " Not so much building better applications, but more really understanding the concepts" }, { "end": 63, "start": 56, "text": " behind intelligent thinking. And I think machine learning actually gives us pretty powerful tools" }, { "end": 70, "start": 63, "text": " that we can use to study at least some of these questions that we couldn't study directly on a person or in the brain" }, { "end": 74, "start": 70, "text": " because it's just so hard to make measurements." }, { "end": 82, "start": 74, "text": " So that motivation led me to machine learning and then most specifically to reinforcement learning." }, { "end": 92, "start": 82, "text": " So a lot of my work is in reinforcement learning, in generative modeling, learning world models, and exploration." }, { "end": 96, "start": 92, "text": " Can you share with us what your PhD advisors focus on?" }, { "end": 104, "start": 96, "text": " Sure. So my main advisor is Jimmy Bhatt, and I'm also advised by Jeffrey Hinton." }, { "end": 111, "start": 104, "text": " And they both focus on a lot of questions around deep neural networks." }, { "end": 118, "start": 111, "text": " So about architectures of deep neural networks, about optimization, and things you can do with it." }, { "end": 129, "start": 118, "text": " So in some sense, it's great to have an advisor like that or true advisors like that because they have quite broad interests and broad knowledge as well." }, { "end": 138, "start": 129, "text": " So I can basically do whatever I want and I get good feedback and good advice on those topics." }, { "end": 146, "start": 138, "text": " So the first paper we're going to discuss today, you're a contributing author to a deep learning framework for neuroscience by Richards et al." }, { "end": 151, "start": 146, "text": " So I really don't know anything about neuroscience. My apologies in advance as I try to follow along with this." }, { "end": 153, "start": 151, "text": " But what is the main idea here?" }, { "end": 167, "start": 153, "text": " The reason I think this is a good paper to start off with is that it really gives the general framework for us to think about understanding the brain and what it can do in connection to machine learning." }, { "end": 177, "start": 167, "text": " The general idea is that neuroscience has existed for a really long time and there's lots of data around and there are also some theories." }, { "end": 187, "start": 177, "text": " But it's almost at the point where there are lots of small kind of data sets and measurements that have been made." 
}, { "end": 201, "start": 187, "text": " But we're really for one, we're limited by the types of experiments we can run on real on real subjects just because it's so hard to look into the brain basically make measurements." }, { "end": 209, "start": 201, "text": " There's a school and then there's so much going on. It's really hard to kind of target specific, let's say neurons that you would want to measure." }, { "end": 220, "start": 209, "text": " And so that's one thing and the other thing is that there are some kind of general themes missing." }, { "end": 229, "start": 220, "text": " And of course there are some ideas of general theories that put together all these experimental results." }, { "end": 241, "start": 229, "text": " But it seems like we need some more guiding principles to really make sense of all of that data and get some frameworks that we can think within." }, { "end": 258, "start": 241, "text": " And so the idea of this paper is that we kind of have a similar situation in deep learning where we have all these crazy architectures and different lost functions that you can optimize." }, { "end": 263, "start": 258, "text": " And different ways to optimize these lost functions." }, { "end": 270, "start": 263, "text": " And so this has served us really well in the deep learning community." }, { "end": 278, "start": 270, "text": " There's a lost function. There's way to optimize this lost function. And then there's an architectural model." }, { "end": 293, "start": 278, "text": " So to optimize this function. And so in this paper we propose this framework as a way to make sense of data and neuroscience." }, { "end": 297, "start": 293, "text": " So how can we draw connections between the two disciplines here?" }, { "end": 308, "start": 297, "text": " So this paper talks about these three components, objective functions, which together are equivalent to lost functions, learning rules and architectures." }, { "end": 320, "start": 308, "text": " Can you say just a brief a little bit about these three things and maybe contrast how they work in neuroscience and how we define them in machine learning?" }, { "end": 331, "start": 320, "text": " So I'm very much on the machine learning site. And I'm really interested in neuroscience." }, { "end": 344, "start": 331, "text": " But I can speak much better for the machine learning side of things here. And so for example, let's say you just try and you know some deep neural network on some image classification task." }, { "end": 354, "start": 344, "text": " And so there's some data which often you don't have control over." }, { "end": 366, "start": 354, "text": " And then there is an architecture that would be how many layers you use in your neural network, whether you use any skip connections, what activation function you want to use and so on." }, { "end": 380, "start": 366, "text": " And then there's a loss function which in the case of supervised learning is quite simple. It's just maximize the probability of your supervised outputs that you want the network to predict." }, { "end": 385, "start": 380, "text": " But that could be much more complicated in other scenarios, for example, in unsupervised learning." }, { "end": 389, "start": 385, "text": " It's really a field that's about trying to find out what is a good loss function." }, { "end": 399, "start": 389, "text": " If you don't know exactly what you want the model to output precisely. So that's the second component." 
}, { "end": 408, "start": 399, "text": " We have the architectures, we have the loss functions. And then once you have these two, you've defined an optimization problem." }, { "end": 417, "start": 408, "text": " So find the parameters in this model or in this architecture to optimize this objective, given some data." }, { "end": 431, "start": 417, "text": " And then you have to optimize it. So how do you actually find the right parameters and machine learning we call that an optimizer like stochastic gradient descent or Adam and so on." }, { "end": 452, "start": 431, "text": " But in neuroscience, that would be a learning rule where you write down the dynamics of how do you actually, how do the ways change from one step to the next or maybe even continuously over time to make progress on finding better parameters that maximize the loss function." }, { "end": 466, "start": 452, "text": " So you said unsupervised learning is a lot about figuring out what the loss should be. And that's obviously still an open question. But would you, do you feel like in general in machine learning, we kind of have these three things figured out to some degree." }, { "end": 475, "start": 466, "text": " That's a really good question. I think we have, we have really good ways to optimize our networks." }, { "end": 489, "start": 475, "text": " So I think the learning rule part is figured out to, to at least the level where you can do a lot of things with it. And it's often not the bottleneck anymore." }, { "end": 498, "start": 489, "text": " Of course, there are a lot of people working on developing better optimizers and actually Jimmy works a lot on that as well." }, { "end": 517, "start": 498, "text": " And it's like an interesting field because when you come up with a better optimizer, then you've made the lives of thousands of people easier because now they can all just switch over to that optimizer and they will get better results with their machine learning projects." }, { "end": 541, "start": 517, "text": " And that's really the power that comes from the logical framework like this. So the ideas, if we find good building blocks that we can separate a project, a problem into, then people can work on them to some extent independently of the other building blocks." }, { "end": 557, "start": 541, "text": " So if I want to solve some, if I want to find a better architecture for specific tasks, I don't have to also be researched on finding a better optimizer at the same time or on finding a better objective function at the same time." }, { "end": 581, "start": 557, "text": " So to answer your question, I think when a decent position in terms of the learning rules, I think we're also in a decent position in terms of the architectures, even though it's probably not as clear yet, just because it's such a giant design space of how can you build in your network." }, { "end": 598, "start": 581, "text": " One thing we figured out is that we have a tool bank of different neural modules that you can stack together. And that's a really, really powerful way of thinking about building an architecture." }, { "end": 611, "start": 598, "text": " You can have dense layers, fully connected layers and convolutional layers and attention layers and recurrent layers and so on. You put them all together and they kind of work in any order or more or less." }, { "end": 638, "start": 611, "text": " So I think we can still design much better architectures, especially for precise tasks. 
So one big benefit of deep learning is that it can apply to everything, whatever your prediction problem is, you can use deep learning and you can probably do a pretty good job at making predictions." }, { "end": 653, "start": 638, "text": " But especially when there is very little data, then we have to be more careful about what architecture we use. And so you basically have to build priors about the data into the architecture." }, { "end": 670, "start": 653, "text": " I think we can still do much better job there. For one, we can for very specific problems, we can find better priors. And then an example here is that for convolutions work well for images." }, { "end": 681, "start": 670, "text": " But then there's still a lot of knowledge that we intuitively have about natural images that are not captured by convolutional network." }, { "end": 699, "start": 681, "text": " So for example, there are objects in the world. And so, you know, objects tend to be consistent in time. Right. They move slowly. It's like some piece of information in my sensory input that is correlated in space and time." }, { "end": 714, "start": 699, "text": " And it can move in time or it can move in space. And we don't really put these priors into our networks yet. And that's what Jeff has been working on for a really long time with the capsule networks." }, { "end": 735, "start": 714, "text": " So there's a spectrum of how precise you want to tailor something to a task, get really good results on what task, but then lose some generality. And I think object priors are general enough that they will be useful for a lot of things." }, { "end": 745, "start": 735, "text": " But probably some other priors that we haven't really incorporated well into our architectures yet like smoothness, for example." }, { "end": 760, "start": 745, "text": " So and there is lots of interesting work on on live sheets neural networks and so on. So I think there's a very active development on the architecture side." }, { "end": 775, "start": 760, "text": " And to come to the last component of objectives, I think I think that's where we have to do the most, the most work and where we're kind of really early in the process." }, { "end": 787, "start": 775, "text": " So that's what I think is probably the biggest bottleneck of machine learning and also of understanding and understanding intelligent systems better." }, { "end": 790, "start": 787, "text": " Finding the right objective functions." }, { "end": 806, "start": 790, "text": " As I said, that's basically, to me, that's basically what unsupervised learning means as a field at the moment because some people say, well, it's not really, you know, a rigorous, clearly defined task that you're trying to solve." }, { "end": 816, "start": 806, "text": " But to me, that's really the beauty of it. We don't know yet what is the right mathematical objective that you want to optimize and research for it." }, { "end": 839, "start": 816, "text": " And if you find better objective functions, you can learn better representations, you can describe systems better and becomes especially interesting, not just if you're trying to learn representations, but in the reinforcement learning setting, where you're not just doing perception, but you're also detecting with the world." }, { "end": 851, "start": 839, "text": " I think it's not at all key yet, what are agents should optimize for if there are no rewards around that's super interesting. 
And I've always thought of the depotting architectures as very well factored." }, { "end": 855, "start": 851, "text": " As you say, we can you we have all this libraries of layers that we can just drop in." }, { "end": 864, "start": 855, "text": " But you help me appreciate the extent to which the other components are are also well factored, which is I think a great insight." }, { "end": 881, "start": 864, "text": " So for the brain, do we have any idea if we should expect to find a single type of objective function and like a single learning rule, or could we imagine there could be many different types of objective functions on learning rules and different parts of the brain." }, { "end": 884, "start": 881, "text": " Is that still a completely open question?" }, { "end": 906, "start": 884, "text": " That's a really good question. The theoretical answer is that it doesn't really matter. So yes, for any system, any system that you can observe, you can there exists theoretically exactly one objective function that describes all the behavior of that system." }, { "end": 914, "start": 906, "text": " And actually not quite true, it describes all the behavior of that system that can be described through an objective function." }, { "end": 926, "start": 914, "text": " So in the paper, we talk a bit about this and it's basically the idea of the fundamental theorem of vector calculus or the halm holds the composition." }, { "end": 945, "start": 926, "text": " And so the idea is the following. Let's say you're describing a system could be, it could be a neural network where the weights, weights change over time in some gigantic space of all the possible combinations or configurations of the weight, weight vector." }, { "end": 958, "start": 945, "text": " Or it could be, it could be very simple system like a thermostat that just has a sensor and then controls the heating unit." }, { "end": 967, "start": 958, "text": " Or it could be even more complex than a neural network or deep neural network like a system like the brain." }, { "end": 981, "start": 967, "text": " And so all these systems you can view from like a dynamical systems perspective. There's some state space and every point in that space describes possible configuration of the system." }, { "end": 1000, "start": 981, "text": " And at the current time, it's at one point and then it kind of moves around over time as the brain is doing its computing or as the thermostat is reading new temperature values and storing some new internal state about them like a moving average maybe." }, { "end": 1018, "start": 1000, "text": " And as the weights of our deep neural networks change with every optimization step, they also move around in the state space. And so when you when you describe a system like from this angle, you can view it as a vector field in the state space." }, { "end": 1031, "start": 1018, "text": " Right, if you if your state description is complete, then from every state, there is a direction in which you go to get to the next state. And if you include the." }, { "end": 1036, "start": 1031, "text": " If you couple that with an external system, you really kind of." }, { "end": 1050, "start": 1036, "text": " Then you really have a like a closed state and like a closed system where everything is captured by your description and and basically everything becomes more or less predictable." 
}, { "end": 1059, "start": 1050, "text": " And for every point in the configuration space, there is a direction and that gives you the next point in configuration space." }, { "end": 1079, "start": 1059, "text": " And so when you describe systems systems like this, you can actually you get a vector field every point in state space is a direction that's that's the vector field and you can decompose it in two simpler vector fields and that works in any case." }, { "end": 1094, "start": 1079, "text": " Except for some maybe degeneracies that are just of theoretical interest. And you can you can decompose it into one part that is optimizing something and one part that's not optimizing anything." }, { "end": 1099, "start": 1094, "text": " So think of the configuration space again." }, { "end": 1110, "start": 1099, "text": " And now plot the heat map over it, which is the objective function. So some points in weight space, give you better value." }, { "end": 1117, "start": 1110, "text": " Mean that you're on that work is better predicting the labels, let's say." }, { "end": 1130, "start": 1117, "text": " And some points mean that the neural network is worse at predicting the labels. And we can write down our own cost function there." }, { "end": 1139, "start": 1130, "text": " And then we can implement our own learning rules so that we end up with the system that seeks out the better regions in the configuration space." }, { "end": 1149, "start": 1139, "text": " And we can use the same mental picture to describe an existing system that we don't change anymore that we don't have control over the dynamics." }, { "end": 1156, "start": 1149, "text": " And so there is still this potential function or energy function or cost function." }, { "end": 1160, "start": 1156, "text": " Those are all the same, the same things." }, { "end": 1170, "start": 1160, "text": " Just different fields call them differently. And so when you look at the system, you can wait for a while, you can observe it and it's moving around in this configuration space." }, { "end": 1178, "start": 1170, "text": " And it will be more often in some places and less often than other places. And from that, you can derive a cost function." }, { "end": 1189, "start": 1178, "text": " So what has cost function is the system optimizing for? Well, you just look at what it's doing and you over time, you will get an idea of what parts of the state space it likes." }, { "end": 1198, "start": 1189, "text": " And what parts it tries to avoid. And then that's your cost function. It's just the stationary distribution." }, { "end": 1210, "start": 1198, "text": " The visitation frequency basically. And so once you have the visitation frequency of a system, you can describe all of its optimizing behavior." }, { "end": 1234, "start": 1210, "text": " So you can say, now that I have the cost function, maybe a very simple example is a person, maybe you have a daily routine and you can be in different rooms of your house, you can be at work, maybe not at the moment, but at least there are different rooms at home that you can switch between." }, { "end": 1248, "start": 1234, "text": " And there is some probability of going from that room to the other room and so on. And if you observe somebody for, or maybe you write down every day, what room you've been for in for how long." }, { "end": 1258, "start": 1248, "text": " And then you get this kind of cost function that describes you. It's like, oh, the living room is the best. 
You spend the most time there, for example. And once you have this cost function, you can describe the dynamics. If you give me the cost function, I can basically reverse-engineer you to some extent, based on what state space you chose. The state space always uses some abstraction, because you can't go down to the particle level. But let's say it's different rooms; then I can build something that seeks out the rooms with the same preference distribution.

Okay, so that's the optimizing part. And then there is a part of every system that is independent of the stationary distribution, or rather orthogonal to the gradient on the stationary distribution. If you give me the distribution over rooms, I can build a new agent that follows the gradient on this preference distribution and always tries to go towards what is better under the cost function. There may be some external perturbations that keep it away from the optimum, so it has to keep going towards it. But there is also, potentially, a direction of movement that doesn't change the probability, and that's the direction orthogonal to the gradient on the cost function. If you think of the cost function as a surface, a hilly surface over your configuration space, then you can either go up, or you can walk along the contour lines of this surface. That's the difference between the divergence part of the vector field, which goes up on an objective function you want to maximize (or down on a cost function you want to minimize) and tries to concentrate on the optimal points, and the curl part, which just walks along contour lines. The curl part is never optimizing anything; it always cycles back after a long time.

This all explains why, when you describe something as an optimization problem, or you try to describe intelligence purely as optimization, you lose the part that doesn't optimize anything. You won't be able to describe that part. And that's probably fine: maybe we have evolved to be quite efficient, so maybe we don't do a lot of unnecessary things that don't actually optimize any objective function. But who knows, right? Maybe, at the level of abstraction you choose to describe the system, that part is really important for getting something that shows the behaviors we think of as connected to intelligence.
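As a rough sketch of the decomposition being described here (the notation below is introduced only for illustration and is not from the conversation): write the system's dynamics as a vector field over its state space and split it into a gradient part and a rotational part,

$$\dot{x} = F(x) = -\nabla V(x) + C(x), \qquad \nabla V(x) \cdot C(x) = 0,$$

where $V$ plays the role of the cost or potential function and $C$ is the curl-like component that moves along the contour lines of $V$ without changing its value. Under suitable assumptions the stationary visitation distribution then takes a form like $p^{*}(x) \propto \exp(-V(x))$, which is the sense in which watching where a system spends its time lets you read off the objective it appears to be optimizing.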
So is this paper saying that we should look for components similar to those that we use in deep learning in the brain, and then maybe vice versa, figure out how to adjust deep learning to more closely match what we see in brains, so that we can use deep learning to understand brains? Is that close to the message?

Yeah, it goes in that direction. I don't think machine learning and neuroscience have to converge to one thing. We can use different models in machine learning than the models that might be useful for explaining the brain, because there are biological constraints on the brain. It's interesting to understand those constraints and what ways nature found around them. But conceptually speaking, the best models for the type of computer hardware that we have are probably different. So if your goal is to build an algorithm that's very good at predicting labels on some data set, then the very long-term solution will probably be different from the biological solution.

That said, at the moment we're still quite far away from getting anything close to the intelligence of a brain, of course, and so I think neuroscience has a lot of potential for helping us build better models in machine learning. But the goal doesn't have to be for both disciplines to end up in the same place, although I think that would be interesting; it's not necessary. What the paper is saying is that we should use the same framework to break down the problem, and that will help us share insights in both directions. As I said earlier, it's really difficult to make measurements in the brain. And there are a couple of papers from the last few years where people have studied deep learning models in a similar way to how neuroscientists would study the brain, in terms of analyzing their activations, and found surprisingly strong connections between how a deep neural network processes some input to solve a prediction task and what the activations in the brain look like when it tries to solve the same prediction task.

So there is definitely exchange in both directions, and I think both disciplines can learn from each other and use tools from the other side. On the other hand, we also have no real idea how deep neural networks work and why they work, and maybe some ideas from neuroscience will help there. I think the reason you can find these similarities between models in machine learning and measurements in the brain is that even though the models are very different in some ways, both systems are still trying to solve the same task. And a lot of the computation needed to solve a task is actually determined more by your input data than by the architecture you're using to process it.
}, { "end": 1712, "start": 1690, "text": " So that's why I think nobody really knows, but my intuition is that probably there are some constraints on computation in general on what information do you need to extract from your input so that later on you can solve a task." }, { "end": 1727, "start": 1712, "text": " So if you have any comments on how the insights of this paper might relate to reinforcement learning more specifically than learning in general, this wasn't an RL paper, right?" }, { "end": 1752, "start": 1727, "text": " Of course, it was not an RL paper. For me, the biggest takeaway of this kind of perspective on understanding intelligence is that for the biggest takeaway for reinforcement learning is that we have to think a lot about what objective functions we should use in reinforcement learning." }, { "end": 1778, "start": 1752, "text": " Because I mean, it's always been bothering me that we have a reward signal that comes from the environment. And that's actually not how reinforcement learning used to be defined in some of the earlier work where you would usually, you know, there are some," }, { "end": 1795, "start": 1778, "text": " some early papers on the question of where rewards come from. And the way to think about it really is that there's an environment that gives you sensory inputs, you give it actions, and it doesn't care about what you're doing, right?" }, { "end": 1815, "start": 1795, "text": " So I would be with the environment cap and then there's an agent and then agent. You can choose to break that agent down into two components and one component gives you the reward as a function of, you know, the past sequence of inputs and the past sequence of actions." }, { "end": 1830, "start": 1815, "text": " And then there's another component that tries to maximize this reward. And so that's the kind of classical reinforcement learning component where maybe you learn a value function or, you know, there are many things that you could be doing." }, { "end": 1841, "start": 1830, "text": " And so I think we haven't really spent a lot of time yet or enough time to understand the first component where actually the reward is being generated." }, { "end": 1854, "start": 1841, "text": " And then we want to build something that is more intelligent or closer to maybe an intelligent being than the current agents we use in reinforcement learning." }, { "end": 1863, "start": 1854, "text": " Then we have to make progress on that part because there's, there's not really a reward function in the world." }, { "end": 1873, "start": 1863, "text": " So it's that we can think of maybe, you know, optimizing for survival is good. But then that doesn't really give you a good idea of the system." }, { "end": 1888, "start": 1873, "text": " I want to understand. So I think this optimizing for survival in some world with like a giant simulation, like an artificial life approach to building intelligence might work to build something." }, { "end": 1900, "start": 1888, "text": " Like I mean, we're quite far away from that, but in principle, it could work. But I and it might be easier to study the resulting system than to study in like biological system." }, { "end": 1915, "start": 1900, "text": " But it doesn't really answer the question of how it's doing that. And maybe you don't care about that. You just want to build something that replicates some aspects of behavior that we see in people." 
}, { "end": 1928, "start": 1915, "text": " But to me, I actually want to know what are the components that we're optimizing for like within one lifetime." }, { "end": 1944, "start": 1928, "text": " And to get that additional insight, we have to try out different objective functions, different implementations of this module one in the agent that provides the objective function to the optimization component." }, { "end": 1950, "start": 1944, "text": " And we have to try them out and we have to do it in an environment that probably has to be very complex." }, { "end": 1960, "start": 1950, "text": " And then we can look at the behavior and we can see if that's similar in some way to the behaviors we're trying to replicate." }, { "end": 1970, "start": 1960, "text": " And we're very general, like people are very general in the sense that there are many different environments in which we can do something." }, { "end": 1979, "start": 1970, "text": " And so the objective function should also be general in the sense that it doesn't depend on some underlying environment state." }, { "end": 1992, "start": 1979, "text": " Like if you want to move the glass from one side of the table to the other, then maybe if you have a physics simulator and you know the object idea of the glass and so on, you can, you know, come here to square distance between the position and the goal position." }, { "end": 2001, "start": 1992, "text": " But that's not the sensory input that the agent gets. And so that's not available if you want to general implementation of the first component of the agent." }, { "end": 2007, "start": 2001, "text": " So it has to be something that's only a function of the sensory inputs and past actions." }, { "end": 2011, "start": 2007, "text": " And still accounts for interesting behavior across many different environments." }, { "end": 2017, "start": 2011, "text": " So are you pointing to intrinsic motivation as a key here?" }, { "end": 2023, "start": 2017, "text": " Yes, yes, that's how the field is often called." }, { "end": 2028, "start": 2023, "text": " And often intrinsic motivation." }, { "end": 2036, "start": 2028, "text": " I think there are there are many different ways of how to really evaluate intrinsic motivation." }, { "end": 2040, "start": 2036, "text": " And it's it's very difficult." }, { "end": 2044, "start": 2040, "text": " And I think it's a good challenge to make progress up." }, { "end": 2054, "start": 2044, "text": " And there are parts of intrinsic motivation where you're basically trying to be better at solving a particular task." }, { "end": 2070, "start": 2054, "text": " And so maybe you like sum up the intrinsic reward with the extrinsic reward and you get something that makes past learning progress on the task, then without the extrinsic motivation." }, { "end": 2083, "start": 2070, "text": " Another evaluation setting that I really like that I think will come to you a bit later in the podcast is that you explore without any task in mind." }, { "end": 2092, "start": 2083, "text": " And then maybe you can use the data set that results from that to later on train a new agent on it to solve specific tasks." }, { "end": 2095, "start": 2092, "text": " You can see how useful this exploration was." }, { "end": 2107, "start": 2095, "text": " So now let's turn to a set of four papers that are tightly related with each other, starting with with planet that's learning late dynamics for planning from pixels." 
}, { "end": 2110, "start": 2107, "text": " Can you tell us what what's the main idea of this the planet paper?" }, { "end": 2123, "start": 2110, "text": " The main idea was to learn and learn an dynamics model of the environment that's accurate enough that you can do a rain post with learning with it." }, { "end": 2137, "start": 2123, "text": " And people have been trying to get model based are all to work in various and sensations fall out time and there has been lots of progress as well." }, { "end": 2147, "start": 2137, "text": " But it was really almost like a bottleneck where it kind of worked on simple tasks, but then it didn't really work on harder tasks." }, { "end": 2167, "start": 2147, "text": " And so practice people were still using model free methods most of the time, even though model based methods are feeling in different ways because for one day it's kind of like a really intuitive thing that you have a model of the world that lets you predict into the future." }, { "end": 2182, "start": 2167, "text": " I mean, we know that people can do that. So probably our agents should as well. And but then having a world model also lets you do a lot of things that you couldn't do with the model for the agent." }, { "end": 2198, "start": 2182, "text": " So it's almost this backlog of research ideas in my head and other people's heads that were blocked by not having accurate enough world models to implement them." }, { "end": 2218, "start": 2198, "text": " So that was really the goal because I wanted to work on intrinsic motivation. Yeah, you can do better exploration if you have a world model. And I think we'll talk about this when we get to the disagreement paper about the retrospective versus expected exploration." }, { "end": 2236, "start": 2218, "text": " And so I knew to do that. I really needed world models to work on some tasks. Some kind of tasks that I would be happy with with high dimensional inputs and so on." }, { "end": 2242, "start": 2236, "text": " And that's why I started working on learning dynamics models from pixels." }, { "end": 2250, "start": 2242, "text": " So that's so interesting. So you really are planning multiple papers ahead for where you want to get to being strategic about it." }, { "end": 2265, "start": 2250, "text": " Yes. And maybe not so much a chain of papers, but I always had this goal of building autonomous agents with intrinsic motivation." }, { "end": 2277, "start": 2265, "text": " And then whenever I started a new project, I reflect on that and think about what is the limitation. Like, can we do this now?" }, { "end": 2283, "start": 2277, "text": " Or is there anything that's still necessary to solve before we can build that?" }, { "end": 2299, "start": 2283, "text": " And it was almost a bit frustrating in the beginning when I started my masters in London that I wanted to do this like active exploration. But there was just no, no accurate dynamics model that I've used for it." }, { "end": 2306, "start": 2299, "text": " And then people told me, you know, yeah, we all know that this would be cool to have, but we've been trying for a really long time." }, { "end": 2320, "start": 2306, "text": " And it just doesn't work and we don't really know why. And I thought, okay, well, you know, I'll try and it's a year." }, { "end": 2327, "start": 2320, "text": " And Tim was really helpful to me, Lily Crab when he advised the project." }, { "end": 2341, "start": 2327, "text": " And my manager at Google at the time, James Davidson was very helpful as well. 
We went through it quite systematically, we kept trying for a really long time, and eventually it worked. I don't think there was even a single thing I could point to as the moment where it clicked and suddenly started working. Mostly it was bugs in the implementation: oh, we normalized the input twice, or during evaluation you do a different normalization than during training, and then of course your model doesn't make good predictions.

We had a pretty clear idea of what we wanted to do. I wanted to build this latent dynamics model because I think a lot of our work with low-dimensional inputs is a bit too toy-like; I actually don't even read those papers anymore in most cases, and you can do quite well there with random search and so on. So to me there needed to be some high-dimensional input where representation learning is part of the challenge. And then, when predicting forward, it doesn't really make sense to do that in pixel space from one image to the next, because that gets very expensive and errors can accumulate very quickly. It's also definitely not what we do: when I plan my day, I don't plan how the activations on my retina will change hours from now. It's all in an abstract space, and it abstracts both in space, into concepts and so on, and in time. So far we've focused on the first aspect; we're also trying to do some work on temporal abstraction, but I think that's still quite unsolved.

So at the end, we had this clear picture of what we wanted to do, and we didn't actually deviate much from it throughout the project. We just visualized a lot of metrics and tried to really understand what was going on, we found a lot of bugs that we fixed over time, and then at the end it just worked. We were quite surprised.

That must have been really satisfying; you worked on this for a year. The first thing that jumped out at me from this paper was the efficiency gain. There's a line that says the data-efficiency gain of PlaNet over D4PG is a factor of 250 times, which is just huge. Was that surprising to you? I guess you'd been working on it for a year, so by that time you were used to it, but did you expect anything of that level when you went into this?

To be honest, I didn't really care about data efficiency at all, because I just needed a world model to do exploration with. I didn't care about it being so much more data efficient, but it turned out to be more data efficient, and of course that's useful for a lot of applications.
Like if you want to use world models for robotics, for example, where environment steps are much more expensive than in a simulation, then it really matters. So of course we put it in the paper, but it didn't actually matter to me, and it still doesn't.

To add to this: I think there are multiple reasons it's more data efficient, and we don't know exactly how much each of them contributes. One reason is that a lot of model-free methods don't use any explicit representation learning. They learn representations only through the reinforcement learning loss, maybe for value learning or policy gradients, so the only signal that goes in is the action that you chose and the reward that you got. Think about it: let's say I throw you into an unknown environment and you have to do well in it in some way, maybe find food or solve some specific task. If everything you learned about this world came only from the reward and the actions you chose, that would be insane. It would mean I'm not trying to find any correlations in my input, I'm not trying to explain what I'm seeing. More mathematically speaking, there is a lot of information about the environment in the images, in the sensory inputs you get, so you should use it in some explicit way through representation learning, I think. And this can be quite separate from the RL algorithm. There's a lot of work, a lot of application papers, showing that you can take your RL agent and, in addition to the policy-gradient loss, just add a reconstruction loss, maybe from the features of some higher-up representation within the network, and try to reconstruct your input. That helps a lot, even though it's a very simple thing to do, especially when you have high-dimensional inputs. So I think it's perfectly fine to do research on representation learning for control and on RL separately, but if you want something that's data efficient, you should definitely make use of your inputs in some way.
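A minimal sketch of that kind of auxiliary reconstruction loss, here with flattened image observations and made-up layer sizes (the module names and the loss weighting are mine, not taken from any specific agent):

```python
import torch
import torch.nn as nn

class AgentWithReconstruction(nn.Module):
    """Shared encoder with RL heads plus an auxiliary reconstruction head,
    so the representation is trained by more than reward and actions alone."""

    def __init__(self, obs_dim=64 * 64 * 3, action_dim=6, feature_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 512), nn.ReLU(),
                                     nn.Linear(512, feature_dim), nn.ReLU())
        self.policy = nn.Linear(feature_dim, action_dim)
        self.value = nn.Linear(feature_dim, 1)
        self.decoder = nn.Sequential(nn.Linear(feature_dim, 512), nn.ReLU(),
                                     nn.Linear(512, obs_dim))

    def forward(self, obs_flat):
        features = self.encoder(obs_flat)
        recon_loss = ((self.decoder(features) - obs_flat) ** 2).mean()
        return self.policy(features), self.value(features), recon_loss

# Usage sketch: total_loss = rl_loss(policy_out, value_out, batch) + beta * recon_loss
```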
}, { "end": 2765, "start": 2749, "text": " You still incorporate some other useful priors into the world model such that, for example, that there is a compact compact activation vector that explains the state of the world at one point in time." }, { "end": 2775, "start": 2765, "text": " That's that's a useful prior, right? It means that we have this high dimensional input. And for the age of that's this gigantic pixel grid." }, { "end": 2785, "start": 2775, "text": " And it means that there is a much smaller representation that has to, has to describe everything that the agent needs to know about the input." }, { "end": 2797, "start": 2785, "text": " And so, and then if you have a, if you have a dynamics model, then there needs to be function that takes this description of one point in time to the description of the next point in time." }, { "end": 2803, "start": 2797, "text": " And then that has to be enough to predict the good action at that point in time or predict the value or the reward." }, { "end": 2822, "start": 2803, "text": " And so, this idea of a hidden Markov model structure is also useful. It's a useful prior. I don't know exactly how much the representation learning contributes to the data efficiency compared to the just learning and latent space compact representation of the environment state." }, { "end": 2836, "start": 2822, "text": " But of the sequence of past inputs to the agent, but for example, that's what mu zero does. It's not learning a global world model where the agent learns everything about its inputs." }, { "end": 2848, "start": 2836, "text": " It's just learning what is necessary to solve with specific tasks because all the learning signal comes from the reward and the value and the policy gradients." }, { "end": 2854, "start": 2848, "text": " So, but you're still incorporating this at least as one prior of having a compact representation." }, { "end": 2863, "start": 2854, "text": " So, in, in the planet paper, I think you, you separate the stochastic and deterministic components of the state." }, { "end": 2870, "start": 2863, "text": " And can you help us understand why you want to separate those and then how that separation works?" }, { "end": 2879, "start": 2870, "text": " Yes. So, we, when we came up with the model, we basically just tried random things and we had no idea what we were doing." }, { "end": 2889, "start": 2879, "text": " And this particular combination seemed to work well. And so afterwards, I, I will try a lot of other designs and they did not work." }, { "end": 2899, "start": 2889, "text": " And I think by now I have a bit of a better understanding. Of course, we had some hypotheses of why maybe stochastic part helps and deterministic part helps." }, { "end": 2912, "start": 2899, "text": " But then later on doing other projects building on top of this model, we got some more insights of why this is, this might be particularly useful way of designing the, the latent transition function." }, { "end": 2928, "start": 2912, "text": " And so, one point is that if you, if you want to latent dynamics model, where given the sequence of states, you can predict all the images individually." }, { "end": 2944, "start": 2928, "text": " So, there is no skip connection from one image to the next, let's say. Then, then your sequence of latent states has to be stochastic in an environment where the agent can't make deterministic predictions." 
}, { "end": 2952, "start": 2944, "text": " So, that could be either because maybe there is actually noise injected in the simulator and how the simulator works." }, { "end": 2961, "start": 2952, "text": " Or it could be because the agent doesn't know everything about the world. So, it's a partially observable environment and that makes it stochastic from the perspective of the agent." }, { "end": 2970, "start": 2961, "text": " And so, to predict multiple possible futures, you need stochasticity in your, in your latent state sequence." }, { "end": 2987, "start": 2970, "text": " But if you make it fully stochastic, then you get a typical state space model where the hidden state at one step is just the, let's say a Gaussian, where the mean is predicted through some neural network from the last state and the last action." }, { "end": 2991, "start": 2987, "text": " And, and the variance is also predicted by the neural network." }, { "end": 3005, "start": 2991, "text": " Then, there is a lot of noise during training and, and that noise, technically speaking, it adds information to your state at every type of state, but it's not information about the environment." }, { "end": 3011, "start": 3005, "text": " So, it's not useful information that it kind of hides the information that the model has extracted already." }, { "end": 3022, "start": 3011, "text": " So, if you think about maybe the agent has seen some images and then it has inferred the position of objects and put that into the latent state." }, { "end": 3036, "start": 3022, "text": " And now, you predict forward for five time steps, but at every time step you're adding noise to the state, then it becomes really hard for the model for the agent to preserve information over multiple time steps." }, { "end": 3039, "start": 3036, "text": " It's just a raised after a couple of steps." }, { "end": 3044, "start": 3039, "text": " And here you're talking about the conditional VAE formulation, is that right?" }, { "end": 3047, "start": 3044, "text": " What is the conditional VAE formulation?" }, { "end": 3055, "start": 3047, "text": " Sorry, I meant, when you're talking about a stochastic model like you are right now, are you speaking about like a VAE?" }, { "end": 3065, "start": 3055, "text": " Yes, so it's, it's a latent variable model, the way of VAE is a latent variable model." }, { "end": 3075, "start": 3065, "text": " And they're, and we train it the same way of VAE is being trained. So, it's the same elbow objective function or free energy objective function." }, { "end": 3078, "start": 3075, "text": " But you don't call it a VAE." }, { "end": 3080, "start": 3078, "text": " And it has a lot of similarities." }, { "end": 3093, "start": 3080, "text": " So, you could, you could see it as a, as a very kind of specific case of a VAE where instead of having one kind of fixed size representation as your latent variable," }, { "end": 3097, "start": 3093, "text": " you instead have a sequence, a mark of chain of latent variables." }, { "end": 3102, "start": 3097, "text": " And then your data is also a sequence of images rather than a single image." }, { "end": 3106, "start": 3102, "text": " So, you can think of it as a sequential VAE." }, { "end": 3113, "start": 3106, "text": " So, you were describing how the, the stochastic component cannot capture all the information." }, { "end": 3116, "start": 3113, "text": " And so, that's why you need the deterministic component as well." 
}, { "end": 3120, "start": 3116, "text": " So, theoretically speaking, it could." }, { "end": 3124, "start": 3120, "text": " The stochastic version, the fully stochastic model is general." }, { "end": 3131, "start": 3124, "text": " So, it could learn to set the variance to close to zero for some of the state components." }, { "end": 3137, "start": 3131, "text": " And that way it would preserve information over many time steps without getting erased by noise." }, { "end": 3139, "start": 3137, "text": " It's just hard to learn." }, { "end": 3146, "start": 3139, "text": " And you don't really get good gradients for learning that because optimization process is so noisy." }, { "end": 3155, "start": 3146, "text": " And so, you would basically end up with a model that doesn't learn long term dependencies in the data well." }, { "end": 3163, "start": 3155, "text": " And so, having a deterministic component is, is in principle," }, { "end": 3170, "start": 3163, "text": " just like setting the variance to zero for, for some of the stochastic components in the state." }, { "end": 3177, "start": 3170, "text": " So, that you put in the prior that there are some things that should be preserved over a long time." }, { "end": 3188, "start": 3177, "text": " So, is the idea that in certain areas of the environment, things could be fully or more so deterministic or more so stochastic?" }, { "end": 3198, "start": 3188, "text": " Like, do these two components kind of become more influential or less in certain areas as appropriate?" }, { "end": 3201, "start": 3198, "text": " That's an interesting question." }, { "end": 3211, "start": 3201, "text": " So, I like, I think that's, it's basically the same question." }, { "end": 3220, "start": 3211, "text": " But I like to think about, I like to, I like to not think about the implementation of the environment." }, { "end": 3225, "start": 3220, "text": " So, this comes up for exploration as well." }, { "end": 3234, "start": 3225, "text": " But in this case, whether the environment is more stochastic or less stochastic in some states, doesn't matter." }, { "end": 3238, "start": 3234, "text": " What matters is whether it's more or less predictable for the agent." }, { "end": 3244, "start": 3238, "text": " Right, because the agent doesn't really know more about the environment than the sequence of its inputs." }, { "end": 3252, "start": 3244, "text": " And it can't make more sense of them than what its model architecture lets the agent make sense of the data." }, { "end": 3262, "start": 3252, "text": " So, most stochastic, practically what it actually means is that the agent can't model it well." }, { "end": 3268, "start": 3262, "text": " The agent doesn't know exactly what's going to happen with things that, you know, many possible things could happen." }, { "end": 3279, "start": 3268, "text": " And that could be because we inject like, pseudo random noise into the simulation, or it could be just because there are so many visual details, let's say," }, { "end": 3288, "start": 3279, "text": " or the model is too small to really make an accurate prediction for some of the more complex parts of the world." }, { "end": 3306, "start": 3288, "text": " And now to answer your question, the way I think about this latent model now with the stochastic and the deterministic part is that there's another big benefit of having a stochastic part." 
}, { "end": 3322, "start": 3306, "text": " And it's not so much about stochasticity in the data, but it's more about allowing you to control how much information goes into the deterministic state." }, { "end": 3335, "start": 3322, "text": " So, you can think of this as a deterministic model where at every time step, you have a stochastic variable that lets you add information about the current image." }, { "end": 3346, "start": 3335, "text": " And there's a KL regularizer that encourages the model to not incorporate that much new information into the hidden state." }, { "end": 3349, "start": 3346, "text": " But he's still training it to reconstruct all the images." }, { "end": 3362, "start": 3349, "text": " So, what this reconstruction arrow does together with the KL regularizer is when you want to reconstruct the image from some particular state," }, { "end": 3371, "start": 3362, "text": " then the model is allowed to look at the image through the stochastic bottleneck, but it's encouraged not to because of the KL regularizer." }, { "end": 3383, "start": 3371, "text": " So, instead, it would look at all the input information that it has already extracted from past time steps, because there's no KL regularizer for those." }, { "end": 3399, "start": 3383, "text": " Or there is, but it already paid for it. So, the model is better off using the deterministic path to look back by time to get the information from there, as long as it's something that can be predicted from the past." }, { "end": 3404, "start": 3399, "text": " And I think that encourages the model to learn long-term dependencies." }, { "end": 3415, "start": 3404, "text": " Okay, so maybe I'm misunderstanding a little bit here, but is this model not Markovian? Like, does it not look back at the only the one step previous state?" }, { "end": 3423, "start": 3415, "text": " Or you're saying it's looking back in time through implicitly through the deterministic latent. Is that what you're saying?" }, { "end": 3425, "start": 3423, "text": " Yes, yes, exactly." }, { "end": 3440, "start": 3425, "text": " So, it's actually, it's good that you're bringing up this point because there are different ways to think about this stochastic deterministic parts in the model." }, { "end": 3449, "start": 3440, "text": " You can either think of it as the Markovian model, where just some elements in the state are not stochastic." }, { "end": 3456, "start": 3449, "text": " And your state is basically the concatenation of deterministic and stochastic state at every time step." }, { "end": 3462, "start": 3456, "text": " Or you can think of it as the non-Markovian model of only the stochastic state." }, { "end": 3474, "start": 3462, "text": " So, if you don't, if you can ignore the deterministic part from your model description from, like, when you write down a probabilistic graphical model, and you only write down the stochastic sequence of states," }, { "end": 3489, "start": 3474, "text": " then this deterministic RNN actually lets this stochastic state at some time step T, depend on all the past stochastic states through this deterministic kind of shortcut." }, { "end": 3501, "start": 3489, "text": " But that, yeah, so those are both valid views. You can say it's a non-Markovian stochastic model, or you can say it's Markovian hybrid stochastic deterministic model." }, { "end": 3509, "start": 3501, "text": " But the second perspective is useful for the implementation, because it means that when you observe a new image, you don't have to go back in time." 
}, { "end": 3516, "start": 3509, "text": " You only need the last stochastic and deterministic state, and the new image to compute the next stochastic and deterministic state." }, { "end": 3530, "start": 3516, "text": " So, I was looking at a little bit at the code for the RSSM component, and there was a comment saying that if an observation is present, the posterior latent is computed from both the hidden state and the observation." }, { "end": 3540, "start": 3530, "text": " So, is that, does that mean that when it's imagining, is that because when it's imagining the future, the observation is not available? Is that what that line means?" }, { "end": 3554, "start": 3540, "text": " Yes, yes, exactly. So, you can think of this as the prior and the approximate posterior n of Ae, or the prior and the encoder n of Ae." }, { "end": 3561, "start": 3554, "text": " They both give you distribution over the latent variable. They are both the belief over the code." }, { "end": 3568, "start": 3561, "text": " But one is a more accurate belief, because it got some context information, in this case the whole image." }, { "end": 3574, "start": 3568, "text": " So, one is the prior one is the posterior or approximate posterior." }, { "end": 3591, "start": 3574, "text": " And this principle is more general than that. You could have additional context information. You could have the whole context, like just give it the whole image as you do in a B-A-E, to try to get the most accurately." }, { "end": 3604, "start": 3591, "text": " But you could give it some information as well. You could either give it part of the image, like a patch maybe, or you could give it some additional context information about the image, like a label, like a class label for the image." }, { "end": 3617, "start": 3604, "text": " And, you know, what's the belief over the code? If I only know it's a doc, and then that's going to be a narrower distribution, then the prior belief that doesn't know any context." }, { "end": 3626, "start": 3617, "text": " But it's still going to be wider distribution, then the belief I get when I condition on the whole image." }, { "end": 3641, "start": 3626, "text": " And so, in a temporal model, something similar happens where the prior belief over the code had sometimes step T, there are multiple beliefs you could have over that." }, { "end": 3647, "start": 3641, "text": " If you don't know anything, then that could just be standard Gaussian, let's say." }, { "end": 3662, "start": 3647, "text": " But in a Rally on a sequence model in general, there is a lot of context, you know, and that context is basically all the past inputs, but just not the current one, and of course not the future ones yet." }, { "end": 3690, "start": 3662, "text": " And so, that's the prior that you need to use, at least when you just write down the standard set of elbow objective, the prior over the code at times step T, the distribution, the belief that doesn't depend on the current image, should still have access to all the past images." }, { "end": 3702, "start": 3690, "text": " And another way to view this as a common filter, because basically the model is just a nonlinear learned common filter." }, { "end": 3716, "start": 3702, "text": " So, in a common filter, you also have this temporal prior, which is called the prediction step that tries to predict the hidden variables without knowing the current image." 
}, { "end": 3728, "start": 3716, "text": " And then there's an update step that takes this prior belief, this temporal prior belief, and updates it to a more precise distribution by looking at the new input, by looking at the new image." }, { "end": 3732, "start": 3728, "text": " And so, we do the same in a sequential VIE." }, { "end": 3740, "start": 3732, "text": " So, is the model aware that when it's imagining future time steps, that it's less certain about those in some sense?" }, { "end": 3760, "start": 3740, "text": " Yes, yes. So, those are two neural network components. You actually have to learn two transition functions. One where you give it the past state, the past state, and the past action, and you train it to predict a distribution over the next state." }, { "end": 3770, "start": 3760, "text": " And then another one where you give it the past state, and the past action, and the current image, and then try to predict another distribution." }, { "end": 3781, "start": 3770, "text": " And that will be more precise than narrower distribution, and it actually is when you look at the entropy, because it has information to more context, or access to more context information." }, { "end": 3799, "start": 3781, "text": " And the way those two are trained is that during training, you always use the one that can see the data, but the KL regularizer is from that second distribution to the first, so to the prior transition function." }, { "end": 3814, "start": 3799, "text": " And so, that trains the prior transition function, you basically try and predict what the posterior, the better belief is going to be, but without seeing the new image, so won't be able to do perfect job, unless the sequence of inputs is fully deterministic." }, { "end": 3835, "start": 3814, "text": " And so that is the only thing that trains this KL regularizer, is actually the only lost term that trains the prior transition function. And the prior transition function is what you use for forward imagination when you're just planning into the future, but you don't know the actual inputs for those time steps." }, { "end": 3860, "start": 3835, "text": " And at the same time, the KL regularizer regularizes the posterior belief, saying that, you know, even though you got to look at the image, don't be like overconfident, try to still be close to what you would have predicted without seeing this data point, try to still be close to the temporal prior." }, { "end": 3873, "start": 3860, "text": " Can you talk about what range of environments this type of approach is best suited for, or the limits in one environment would, this could be applied too well." }, { "end": 3886, "start": 3873, "text": " Does it have something to do with how much stochasticity they have, or I mean, it seems like the environment is a user really pixel large dimension, large dimensional pixels state space." }, { "end": 3894, "start": 3886, "text": " But is that the, is it, is that the main area where this method is useful or is it go beyond that?" }, { "end": 3905, "start": 3894, "text": " Yes. So I think the approach is generally useful for, for a lot of reinforcement learning setups." }, { "end": 3921, "start": 3905, "text": " There are some applications of reinforcement learning where you not really have an agent in that sense, but just trying to solve some discrete optimization problem or some black box optimization problem where you don't get radians." 
}, { "end": 3937, "start": 3921, "text": " So in those cases, I don't know, like when you're trying to, I don't know, like maybe try to predict the proof for a mathematical statement. I don't know, I haven't really thought about those problems." }, { "end": 3945, "start": 3937, "text": " Like when you have an agent in an environment, and the, especially if the environment is partially observed." }, { "end": 3953, "start": 3945, "text": " So you have to integrate information over time. So for example, an image won't tell you velocities of objects that just tells you positions." }, { "end": 3965, "start": 3953, "text": " And then if it's a, if the field of view is limited because you're only looking in one direction and you don't see the object in the other direction, you also have to integrate information over time." }, { "end": 3981, "start": 3965, "text": " And so then this is a very useful, very useful general approach. Because you're, you're making use of the factorization of, of a partial observable environment." }, { "end": 3995, "start": 3981, "text": " So in some sense, the latent states that you're learning can be thought of as a replacement of the hidden states of the environment that the agent doesn't have access to." }, { "end": 4003, "start": 3995, "text": " Now, this is important. The latent states learned by the agent are not an approximation of the environment state." }, { "end": 4013, "start": 4003, "text": " Right, there's no reason whatsoever to believe that they will become similar in value to whatever the environment state is." }, { "end": 4023, "start": 4013, "text": " But they are an alternative representation that if the model is trained well also explains the same sequence of observations given the same sequence of, of actions." }, { "end": 4035, "start": 4023, "text": " And then you have a alternative implementation of the environment if you want. And so that, that's really powerful because now you've got a Markov system." }, { "end": 4041, "start": 4035, "text": " So once you have this representation, then you can even make predictions into the future given actions. You don't need a recurrent policy anymore." }, { "end": 4055, "start": 4041, "text": " But the state is already sufficient. And I think your question also hinted a bit in the direction of could we do this for low dimensional inputs, like more typical for these muzzle court tasks." }, { "end": 4065, "start": 4055, "text": " And the answer is yes, we have tried that at some point. It does work. And it is a bit faster than learning from pixels, but actually not that much." }, { "end": 4079, "start": 4065, "text": " Yeah, it works well. And I think Brendan Amos said a paper on differentiable model predictive control where he does that." }, { "end": 4087, "start": 4079, "text": " And also found that it worked quite well. But I haven't done any any." }, { "end": 4096, "start": 4087, "text": " Yeah, we had one project where we tried it on low dimensional states and it worked, but it didn't go anywhere. So yeah, I'm interested in the pixel space." }, { "end": 4101, "start": 4096, "text": " And right now I'm trying to scale up these models to walk on flexed diamonds." }, { "end": 4107, "start": 4101, "text": " Some of that we had in the follow up paper for Dreamer." }, { "end": 4119, "start": 4107, "text": " All right, let's turn to another recent paper of yours dream to control learning behaviors by latent imagination. Can you know, we got to hear you describe this paper in our December and your episode." 
}, { "end": 4123, "start": 4119, "text": " Can you remind us of the main idea with this paper?" }, { "end": 4143, "start": 4123, "text": " Sure. So one limitation that planet has is so it does learn a quite powerful world model, but it doesn't make use of it in the most efficient way to the right behaviors." }, { "end": 4161, "start": 4143, "text": " Planet uses online online search at every time step when it attacks with the environment. And that can be really expensive because you do many, you predict forward many trajectories and then you select the one action that you like the best and you execute it." }, { "end": 4167, "start": 4161, "text": " And you throw away all this all this effort and you would like to do another search at the next time step." }, { "end": 4175, "start": 4167, "text": " And that thing data becomes quite expensive. So it's doing model predictive control. Exactly. Yeah." }, { "end": 4189, "start": 4175, "text": " And the second limitation is that in the original planet agent, we don't learn a value function and there is no temporal abstraction." }, { "end": 4207, "start": 4189, "text": " And the agent is only going to consider rewards within the planning horizon. And you can't increase the planning horizon infinitely because for one eventually your model is going to make more, it's going to make less accurate predictions." }, { "end": 4219, "start": 4207, "text": " So if you're searching for a longer plan, it's going to take you longer to find a bit plan because the search space got so much bigger. There's so much more longer plans than there are shorter plans." }, { "end": 4225, "start": 4219, "text": " So it's not really computationally tractable." }, { "end": 4239, "start": 4225, "text": " So we consider a very far delayed rewards that are like 100 to 100 time steps into the future. And that's a one way that I thought initially you could get around that is through temporal abstraction." }, { "end": 4245, "start": 4239, "text": " And I still think that's really the long term way to go." }, { "end": 4260, "start": 4245, "text": " We have value functions in reinforcement learning and they work quite well. So for now, we can solve it that way. And so Dreamer is really a follow up on planet where we use the same dynamics model, the same world model." }, { "end": 4285, "start": 4260, "text": " But we're using it in a more clever way to learn how to predict good actions. And there is a substantial increase in computational performance. So we went down from maybe one day for a million time steps to like four to five hours." }, { "end": 4302, "start": 4285, "text": " And there's a substantial improvement in the horizon of how how many future rewards the agent considers. And so that leads to much higher empirical performance as well." }, { "end": 4323, "start": 4302, "text": " And the way we do that is we throw away the model predictive control part. And instead we have a neural network to predict actions and act a network that takes the date and state of the well model and predicts a distribution over over the action that hopefully is best for the state." }, { "end": 4345, "start": 4323, "text": " And we have a second neural network in the late in space, which predicts the value that expected some of future rewards with some discount factor that the current act network is thought to achieve from this particular state that is input to the value network." 
}, { "end": 4360, "start": 4345, "text": " And with the value function and the actor, you can do an efficient actor critic algorithm in late in space. And you can train that from model predictions independently of the data collection." }, { "end": 4375, "start": 4360, "text": " And you don't have to do any online planning anymore once you have a good actor to collect data, you just run the world model at every step to get the latest state. And then or to update the latest state from the last step to the next one to incorporate the new input." }, { "end": 4389, "start": 4375, "text": " And then you just send that to the actor, you predict an action and execute that. And so all the model predictions or planning, if you want to call it still want to call it planning happens offline independently of the current episode." }, { "end": 4395, "start": 4389, "text": " So in principle, you could also distribute this and run it asynchronously very efficiently." }, { "end": 4401, "start": 4395, "text": " And the way you learn these two components now." }, { "end": 4408, "start": 4401, "text": " Like one thing you could do is you have a world model, you know, it basically defines a new or a problem." }, { "end": 4420, "start": 4408, "text": " It's an imagination MDP where instead of environment states, you have these model states and so on and you predict through watts as well. So you could throw any model free or algorithm at it now." }, { "end": 4427, "start": 4420, "text": " And you can solve it without actually causing additional environment interaction. So we get get a very data efficient algorithm." }, { "end": 4440, "start": 4427, "text": " But you can actually if you're doing that, you're not really making full use of the world model because we have a neural network world model. So we can actually compute gradients through it." }, { "end": 4449, "start": 4440, "text": " But all the model free around algorithms, they are designed for real environments where you can't differentiate through it." }, { "end": 4459, "start": 4449, "text": " So they don't make use of these gradients. And that's why we can do better by developing an act of critic algorithm that's specific for world models." }, { "end": 4462, "start": 4459, "text": " And the algorithm actually quite simple." }, { "end": 4469, "start": 4462, "text": " You encode some like past data from the replay buffer to get some initials model states." }, { "end": 4478, "start": 4469, "text": " And then you imagine forward a sequence with some imagination for rising let's say 20 steps using the actions." }, { "end": 4482, "start": 4478, "text": " Not from the replay buffer, but from the act network." }, { "end": 4487, "start": 4482, "text": " So you're just like the actors just trying out something in the model world." }, { "end": 4491, "start": 4487, "text": " And then you predict all the corresponding with watts for those states." }, { "end": 4496, "start": 4491, "text": " You predict all the corresponding values as well based on your current value estimate." }, { "end": 4505, "start": 4496, "text": " And you want to maximize that with respect to the actions that are with respect to the act network, the parameters of the act network." }, { "end": 4518, "start": 4505, "text": " So you can actually compute very elegantly compute the gradient of the sub of future rewards and future values that you can like weigh in some way if you want." 
}, { "end": 4529, "start": 4518, "text": " And you can compute the derivative of that with respect to the act parameters just by propagating through the multi step predictions of your model because it's all the network components." }, { "end": 4535, "start": 4529, "text": " And there are some stochastic notes in there because the model state has a stochastic component." }, { "end": 4540, "start": 4535, "text": " And the actions are also sampled from the act distribution." }, { "end": 4544, "start": 4540, "text": " So there are two ways to you can deal with it." }, { "end": 4556, "start": 4544, "text": " If it's continuous and can be reprimed rise like a Gaussian, for example, then you just use a reprimed violation trick to compute the gradients through all these steps." }, { "end": 4566, "start": 4556, "text": " And if it's discrete, then you can use straight through estimation, which is not really the exact rating, but it still works very well." }, { "end": 4577, "start": 4566, "text": " And once you do that, you know exactly if you change the active parameters a little bit, how is that going to at what rate is that going to increase the future reward or decrease the future rewards?" }, { "end": 4580, "start": 4577, "text": " You know how to change the act network." }, { "end": 4587, "start": 4580, "text": " And then the only thing that's left is optimizing the value network and that should stand through simple temporal difference learning." }, { "end": 4598, "start": 4587, "text": " So the value at one step just should correspond to maybe the reward plus the value at the next step or you could do you could actually do a multi step return." }, { "end": 4603, "start": 4598, "text": " So the value should correspond to the next 10 rewards plus the 11th value." }, { "end": 4614, "start": 4603, "text": " What we actually do in the papers, we do lambda return, which means we take all of these and step returns for different values of n." }, { "end": 4620, "start": 4614, "text": " So one reward plus the value to rewards plus the following value and so on and we weigh them." }, { "end": 4627, "start": 4620, "text": " But yeah, that's just so we don't have to choose a hyper parameter for it and it doesn't really matter that much." }, { "end": 4642, "start": 4627, "text": " So on a high level, is this sounds similar to Sutton's Dynar architecture, but then Dynar didn't have this notion of gradients or it was independent of what kind of function approximator I think was a used right?" }, { "end": 4653, "start": 4642, "text": " Yes. Sutton's Dynar, I think basically includes almost all of model based around." }, { "end": 4670, "start": 4653, "text": " It's a very kind of very general high level perspective where you have some data from the real environment and use that to learn some model of the environment or of the of the data that you got from the environment." }, { "end": 4676, "start": 4670, "text": " And then you use that model to somehow select an action and then you can execute that in the real world." }, { "end": 4684, "start": 4676, "text": " And I think the Dynar paper even talks about online planning as well, but maybe that's a follow-up paper." }, { "end": 4690, "start": 4684, "text": " But yeah, in principle, these are all within the category of Dynar style algorithms." }, { "end": 4701, "start": 4690, "text": " So you're building on the work you did in Planet and you used to use the same RSSM deterministic plus stochastic type model here, was this the model the same?" 
}, { "end": 4719, "start": 4701, "text": " Yes, the world model is exactly the same. And we for continuous control, we found the world model still works across like older 20 continuous control tasks." }, { "end": 4732, "start": 4719, "text": " There are a few more, but we chose the ones for which the best model for the algorithm got non-zero performance because some of the tasks don't really make sense from pixels. You can't see the things that are necessary for solving the task." }, { "end": 4748, "start": 4732, "text": " So yeah, the world model just worked for all these. And the improvement comes from the value function and also comes from the act network, which can actually learn a better policy than an online planning algorithm." }, { "end": 4757, "start": 4748, "text": " Can potentially do because it doesn't assume that the actions are independent independent in time, for example." }, { "end": 4777, "start": 4757, "text": " And the act network also has a lot more optimization steps in total because for the online NPC and planet, you can do maybe 10 optimization steps, but then you have to have an action at the end of the day because otherwise, if you do too many too many optimization steps, then it becomes way too slow to really interact with the environment." }, { "end": 4790, "start": 4777, "text": " Whereas the act network in Dreamer is shared, there's just one act network throughout the whole training process of the agent. So over time, it will get trained much more." }, { "end": 4798, "start": 4790, "text": " And later on, in addition to the continuous tasks, we did some discrete tasks in the Tari and deep mind lab." }, { "end": 4807, "start": 4798, "text": " And we also found that the same world model just works. But we did increase the size of the stochastic and deterministic states." }, { "end": 4812, "start": 4807, "text": " So we just gave the model more capacity." }, { "end": 4824, "start": 4812, "text": " And so I was actually really surprised by that. But what it said is that the planet agent was bottlenecked not by the model, but by the planning part." }, { "end": 4833, "start": 4824, "text": " Was that surprising to you to when you determined the final performance of the Dreamer agent, or was that how have what you expected?" }, { "end": 4851, "start": 4833, "text": " No, I was actually quite surprised. So I knew that to do some of the more interesting tasks that I wanted to solve that I wanted to do exploration and eventually we needed to consider reports further into the future than 20 steps." }, { "end": 4868, "start": 4851, "text": " So we couldn't use planet out of the box. And I almost thought that, oh, there are probably much bigger problems. And we probably have to find a better world model. And like, you know, is it even worth focusing on on the horizon problem?" }, { "end": 4871, "start": 4868, "text": " Or they're much bigger bottlenecks at the moment." }, { "end": 4888, "start": 4871, "text": " But it was a kind of almost easy problem to tackle because there are already solutions for that with temporal difference learning. And we just kind of applied that to the setting we were in where you have a differentiable world model to make efficient use of that." }, { "end": 4908, "start": 4888, "text": " And I was really surprised how well it worked. And I was also really surprised how that that that that doesn't do any look ahead while interacting with the environment can do better and even be as they type efficient as an online model predictive control." 
}, { "end": 4927, "start": 4908, "text": " Do you think that dreamer would do pretty well, even if it didn't differentiate through the model, like if you were just, or maybe that's something in between planet and dreamer, like the idea of just distilling planets planning into a policy network, kind of like maybe like at what alpha zero would do." }, { "end": 4933, "start": 4927, "text": " That's different than what you did here, though, right? Because you you differentiate it through the model. Yeah." }, { "end": 4938, "start": 4933, "text": " Would that be a reasonable thing to do? You think that would work well here or." }, { "end": 4943, "start": 4938, "text": " Yeah, there are there's a there's a design space of different." }, { "end": 4952, "start": 4943, "text": " Of different algorithms that operate within the normal model to a derive long term behavior to learn value function and an actor." }, { "end": 4965, "start": 4952, "text": " And so the alpha goal does it is." }, { "end": 4982, "start": 4965, "text": " And so we can't really you can't really do that with a with a big reply buffer because the returns you got in the past, they've dependent on the actions that you that your actor chose in the past, but now your actors, your actors already better." }, { "end": 4987, "start": 4982, "text": " So those returns won't reflect the value of the actor in the state right now." }, { "end": 4995, "start": 4987, "text": " But if you make the replay buffer small enough, it's approximately, you're approximately on policy." }, { "end": 5002, "start": 4995, "text": " And then if you just train it on a lot of data, then, then that can work well." }, { "end": 5010, "start": 5002, "text": " It's just that in the low data regime that we're in, making your replay buffer small is a bad idea." }, { "end": 5022, "start": 5010, "text": " And and just pretty clearly always hurts performance. So so we couldn't really go with this like approximate on policy approach to learn the value function." }, { "end": 5027, "start": 5022, "text": " We needed to we needed to do TD learning." }, { "end": 5031, "start": 5027, "text": " And we needed to do it on imagined roll outs." }, { "end": 5036, "start": 5031, "text": " Because we can't use the past replay buffer data because it's too different." }, { "end": 5044, "start": 5036, "text": " So now to do online to do imagined roll outs, you need a policy to select actions." }, { "end": 5050, "start": 5044, "text": " And as you said, you couldn't principle use a search to select actions there." }, { "end": 5059, "start": 5050, "text": " Like a like like a Cm search, let's say, and then distill that." }, { "end": 5064, "start": 5059, "text": " But like learn a value from it and then and then learn an actor network from that." }, { "end": 5072, "start": 5064, "text": " But or you not learn an actor network anymore if you have a value function, you can just use that during planning and that will be fine." }, { "end": 5077, "start": 5072, "text": " But the problem is you can't really afford to do the Cm search." }, { "end": 5083, "start": 5077, "text": " And every time step in imagination for like, you know, so many imagination trajectories." }, { "end": 5093, "start": 5083, "text": " So that's why we actually ended up abandoning the explicit search and switch to using an actor network." }, { "end": 5105, "start": 5093, "text": " Yeah. And I think your question was also whether it could work similarly well if we ignore the gradients." 
}, { "end": 5110, "start": 5105, "text": " And I'm not 100% sure." }, { "end": 5119, "start": 5110, "text": " So what I do know is that once you have the world model, all the environment, all the training inside the world model," }, { "end": 5124, "start": 5119, "text": " just cost you walk off time, it doesn't cost you environment interaction." }, { "end": 5130, "start": 5124, "text": " So you could use a less efficient optimization algorithm in imagination." }, { "end": 5133, "start": 5130, "text": " And you would get the same data efficiency in the real world." }, { "end": 5137, "start": 5133, "text": " And I don't see a reason why." }, { "end": 5139, "start": 5137, "text": " Why." }, { "end": 5143, "start": 5139, "text": " The normal model free algorithm inside the world model." }, { "end": 5146, "start": 5143, "text": " Couldn't get to the same final performance as well." }, { "end": 5151, "start": 5146, "text": " But I think it would be computationally more expensive because you would need more updates." }, { "end": 5154, "start": 5151, "text": " But I haven't tried it." }, { "end": 5160, "start": 5154, "text": " So let's turn to another very recent paper yours planning to explore via latent disagreement." }, { "end": 5164, "start": 5160, "text": " Can you tell us what the main idea here is with this paper?" }, { "end": 5175, "start": 5164, "text": " Yes. So I was really excited about the paper because I finally got to the point where I wanted to be" }, { "end": 5185, "start": 5175, "text": " about two and a half years ago when I started to work on planet, which is to do forward looking exploration." }, { "end": 5189, "start": 5185, "text": " And so we solved the world model problem to sufficient degree." }, { "end": 5193, "start": 5189, "text": " And then we solved the horizon problem to sufficient degrees." }, { "end": 5195, "start": 5193, "text": " So that was planet and dreamer." }, { "end": 5201, "start": 5195, "text": " And then we could finally do exploration with it." }, { "end": 5207, "start": 5201, "text": " And that's the key point of this paper." }, { "end": 5211, "start": 5207, "text": " And there are a couple of ideas in the one this." }, { "end": 5214, "start": 5211, "text": " When you do exploration." }, { "end": 5221, "start": 5214, "text": " You need some measure of novelty that you can optimize for as you intrinsic reward." }, { "end": 5226, "start": 5221, "text": " So we use an ensemble disagreement for that, which." }, { "end": 5233, "start": 5226, "text": " Deepak Patak was a collaborator in the project has done a lot of work with and there are a couple of papers also from other people who." }, { "end": 5235, "start": 5233, "text": " Show that." }, { "end": 5237, "start": 5235, "text": " A sombal disagreement works." }, { "end": 5242, "start": 5237, "text": " Works really well as a novelty signal." }, { "end": 5248, "start": 5242, "text": " And I would even include random network distillation into the category of ensemble disagreement." }, { "end": 5250, "start": 5248, "text": " And." }, { "end": 5253, "start": 5250, "text": " So so that's the kind of." }, { "end": 5257, "start": 5253, "text": " The source of novelty that gives you the intrinsic reward." }, { "end": 5260, "start": 5257, "text": " But then there's another aspect to." }, { "end": 5263, "start": 5260, "text": " To the project, which is." }, { "end": 5267, "start": 5263, "text": " When you do exploration to learn about the environment." 
}, { "end": 5270, "start": 5267, "text": " And you have novelty as some objective function." }, { "end": 5274, "start": 5270, "text": " Then that's a non stationary objective function." }, { "end": 5278, "start": 5274, "text": " Because every time you attack with the world, you see new data." }, { "end": 5279, "start": 5278, "text": " And." }, { "end": 5285, "start": 5279, "text": " And then that changes your knowledge and so that changes what you think is novel about." }, { "end": 5289, "start": 5285, "text": " Like what future inputs will be novel." }, { "end": 5295, "start": 5289, "text": " And so there's conceptual problem with model free exploration." }, { "end": 5297, "start": 5295, "text": " Because." }, { "end": 5304, "start": 5297, "text": " Model free optimization works by training the policy from samples of the real environment." }, { "end": 5310, "start": 5304, "text": " And so you have some novelty objective that you want to maximize with your exploration policy." }, { "end": 5314, "start": 5310, "text": " And to do that, you need to draw samples from the environment." }, { "end": 5318, "start": 5314, "text": " To improve the policy for that novelty objective." }, { "end": 5323, "start": 5318, "text": " But while you're training the policy, the novelty objective has already changed because you've." }, { "end": 5328, "start": 5323, "text": " You needed all these samples to train your policy and those samples tell you more about the environment." }, { "end": 5330, "start": 5328, "text": " So." }, { "end": 5336, "start": 5330, "text": " In some sense, it doesn't really make it doesn't really make that much sense conceptually." }, { "end": 5345, "start": 5336, "text": " Sorry, is that why a lot of the curiosity formulations just taken incredibly long time like a huge billions of samples?" }, { "end": 5349, "start": 5345, "text": " Yes, I think that's an important part of it." }, { "end": 5353, "start": 5349, "text": " And I think that you can be much more data efficient." }, { "end": 5356, "start": 5353, "text": " By doing forward looking exploration." }, { "end": 5360, "start": 5356, "text": " And to do forward exploration forward looking exploration." }, { "end": 5364, "start": 5360, "text": " You really need a world model." }, { "end": 5367, "start": 5364, "text": " At least I don't see another way of doing it." }, { "end": 5373, "start": 5367, "text": " Because you need to train the policy to maximize the novelty reward." }, { "end": 5376, "start": 5373, "text": " Without changing the knowledge of the agent." }, { "end": 5379, "start": 5376, "text": " So without causing any additional environment to action." }, { "end": 5384, "start": 5379, "text": " And that way you can actually find the best policy for your current reward and then execute that." }, { "end": 5388, "start": 5384, "text": " For maybe one step or maybe for multiple steps." }, { "end": 5393, "start": 5388, "text": " And then gather some more data and then update the model update your novelty reward." }, { "end": 5396, "start": 5393, "text": " And then optimize the policy again." }, { "end": 5403, "start": 5396, "text": " So you really like doing a lot of compute to decide what is the best action I can choose next." }, { "end": 5409, "start": 5403, "text": " Rather than the model free approach where the policy will always lag behind because it's." }, { "end": 5414, "start": 5409, "text": " It hasn't converged on the novelty reward. But you're already changing the novelty reward." 
}, { "end": 5426, "start": 5414, "text": " Okay, cool. So could you maybe just make crystal clear for us again this distinction between retro-spective novelty and expected surprise." }, { "end": 5429, "start": 5426, "text": " And so in what is the more common case here?" }, { "end": 5438, "start": 5429, "text": " I guess the retrospective novelty is more called is the more common case looking at the at the at the at the past literature." }, { "end": 5443, "start": 5438, "text": " Yes, yes, I would say that's yeah, that's better to say." }, { "end": 5455, "start": 5443, "text": " So these are the two terms that I like to use to describe these two ways of using exploration, although both have been done for a long time." }, { "end": 5462, "start": 5455, "text": " But yeah, so the retrospective." }, { "end": 5470, "start": 5462, "text": " Retrospective surprise is what a model free agent maximizes if it has an intrinsic reward." }, { "end": 5474, "start": 5470, "text": " What it basically is doing is, you know, in the beginning, you don't know anything." }, { "end": 5479, "start": 5474, "text": " So you do random actions and then you find something that's novel." }, { "end": 5483, "start": 5479, "text": " And then see." }, { "end": 5488, "start": 5483, "text": " He's simulate an episode and he predict all the intrinsic rewards for that episode." }, { "end": 5491, "start": 5488, "text": " And in the beginning, it will all be novel because you don't know anything yet." }, { "end": 5496, "start": 5491, "text": " And so then you train your policy to." }, { "end": 5501, "start": 5496, "text": " It basically tells you policy, oh, this was a really good trajectory because it was very novel." }, { "end": 5504, "start": 5501, "text": " So you're reinforcing the same behavior." }, { "end": 5510, "start": 5504, "text": " And if you were really good at optimizing your policy, then it would and the environment isn't to random." }, { "end": 5516, "start": 5510, "text": " Then it would go and realize the same trajectory again." }, { "end": 5521, "start": 5516, "text": " But that's exactly not what you want because you just went there so it's not novel anymore." }, { "end": 5525, "start": 5521, "text": " It was novel by the time you tried it for the first time." }, { "end": 5527, "start": 5525, "text": " And so you do it again." }, { "end": 5530, "start": 5527, "text": " And this time you get a low reward." }, { "end": 5534, "start": 5530, "text": " And so then you encourage the policy to not go there again anymore." }, { "end": 5536, "start": 5534, "text": " So then what does the policy do?" }, { "end": 5537, "start": 5536, "text": " It has no idea." }, { "end": 5539, "start": 5537, "text": " It just knows don't go there." }, { "end": 5542, "start": 5539, "text": " And then it's doing another random exploration somewhere else." }, { "end": 5545, "start": 5542, "text": " Going there a second time to find out it's not novel anymore." }, { "end": 5550, "start": 5545, "text": " Like in practice, there is more generalization in the network going on and so on." }, { "end": 5552, "start": 5550, "text": " So it's not exactly this." }, { "end": 5555, "start": 5552, "text": " But I think it's a useful mental picture." }, { "end": 5561, "start": 5555, "text": " To understand what's really wrong with retrospective exploration." }, { "end": 5568, "start": 5561, "text": " And in contrast to that, there is expected exploration or planning to explore forward looking exploration." 
}, { "end": 5576, "start": 5568, "text": " Where you use a predictive model of the future to optimize your policy in imagination." }, { "end": 5587, "start": 5576, "text": " So that the policy gets really good at choosing whatever at the time you're training it is novel to the agent." }, { "end": 5595, "start": 5587, "text": " But since you're training it from imagined rollouts, the training doesn't tell the agent anything new about the environment." }, { "end": 5598, "start": 5595, "text": " And so the intrinsic reward doesn't change." }, { "end": 5605, "start": 5598, "text": " You can really optimize this for my longer and principle, even until your policy converges fully." }, { "end": 5610, "start": 5605, "text": " And then in the most extreme case, you would just execute one action of that policy." }, { "end": 5613, "start": 5610, "text": " And then I retrain your world model and so on." }, { "end": 5616, "start": 5613, "text": " Retrain your policy in imagination." }, { "end": 5621, "start": 5616, "text": " And then you really get what is most promising to explore next." }, { "end": 5626, "start": 5621, "text": " And then you can look into the future and think, oh, if I go here, I don't really know what's going to happen here." }, { "end": 5631, "start": 5626, "text": " But for the things that I think might be happening, some of them are like really interesting." }, { "end": 5636, "start": 5631, "text": " Because they're really different from everything I've seen so far others have not so." }, { "end": 5639, "start": 5636, "text": " Not so different from what I've seen so far." }, { "end": 5644, "start": 5639, "text": " And then you can go in a really directed way." }, { "end": 5655, "start": 5644, "text": " And to the parts that your model expects the most interesting parts of the world to maximize the information." }, { "end": 5662, "start": 5655, "text": " The expected information that you imagine you could gain about the environment." }, { "end": 5673, "start": 5662, "text": " There was a cool paper called model based active exploration where they do something quite similar." }, { "end": 5681, "start": 5673, "text": " But on much simpler environments and without any high dimensional inputs." }, { "end": 5687, "start": 5681, "text": " But they learn an ensemble of they basically learn." }, { "end": 5690, "start": 5687, "text": " Ten environment models." }, { "end": 5694, "start": 5690, "text": " And then the disagreement between their predictions." }, { "end": 5696, "start": 5694, "text": " Is the reward." }, { "end": 5702, "start": 5696, "text": " And then they train like basically soft active critic or some other model of free algorithm to maximize this imagined reward." }, { "end": 5704, "start": 5702, "text": " On the imagined predictions." }, { "end": 5710, "start": 5704, "text": " So it's it's also implementing this forward looking exploration." }, { "end": 5716, "start": 5710, "text": " Now the challenge we had in addition to that is that we have high dimensional image inputs." }, { "end": 5721, "start": 5716, "text": " So we can't really afford to do the." }, { "end": 5726, "start": 5721, "text": " The policy optimization in image space we have to do it in the late and space." }, { "end": 5732, "start": 5726, "text": " And so we need some way of defining the novelty reward there." }, { "end": 5734, "start": 5732, "text": " And what we did for that is." }, { "end": 5736, "start": 5734, "text": " From every late in state." 
}, { "end": 5740, "start": 5736, "text": " During training we predict an ensemble to try and." }, { "end": 5743, "start": 5740, "text": " Regress the observation embedding for the next time step." }, { "end": 5749, "start": 5743, "text": " Whatever the confnet produces in terms of features before it goes into the model at the next step." }, { "end": 5751, "start": 5749, "text": " As you get the." }, { "end": 5757, "start": 5751, "text": " And then we have a couple of one step predictors." }, { "end": 5762, "start": 5757, "text": " That's more efficient than actually like replicated like training multiple RSS and architectures." }, { "end": 5765, "start": 5762, "text": " It's just like some feed forward lags." }, { "end": 5766, "start": 5765, "text": " And." }, { "end": 5769, "start": 5766, "text": " And that turned out to work really well." }, { "end": 5775, "start": 5769, "text": " And once you have this trained for training on you of course needed target for the next observation embedding." }, { "end": 5778, "start": 5775, "text": " But for imagination training you only need the variance." }, { "end": 5781, "start": 5778, "text": " Of these ensemble predictors." }, { "end": 5783, "start": 5781, "text": " So you don't need the future observations." }, { "end": 5788, "start": 5783, "text": " You can do it all in the late in space of the world model to predict the." }, { "end": 5790, "start": 5788, "text": " Prodigated trajectory of states." }, { "end": 5792, "start": 5790, "text": " And then for every state you." }, { "end": 5795, "start": 5792, "text": " Feed it to all the ensemble predictors." }, { "end": 5798, "start": 5795, "text": " And you just compute the disagreement between them." }, { "end": 5805, "start": 5798, "text": " How does this formulation respond to the noisy TV problem where model world models." }, { "end": 5811, "start": 5805, "text": " Get confused by random noise sources of random noise." }, { "end": 5812, "start": 5811, "text": " Yeah." }, { "end": 5817, "start": 5812, "text": " And I like to connect this to the earlier point where." }, { "end": 5825, "start": 5817, "text": " It's not so much about whether the environment is stochastic or random or not." }, { "end": 5827, "start": 5825, "text": " So." }, { "end": 5831, "start": 5827, "text": " Anatoric uncertainty or reducible uncertainty." }, { "end": 5839, "start": 5831, "text": " It's not just the property of the environment whether the screen is unpredictable or not." }, { "end": 5845, "start": 5839, "text": " It's also a property of your agent and the modeling capacities of your agent." }, { "end": 5849, "start": 5845, "text": " So even if something in principle is perfectly predictable." }, { "end": 5852, "start": 5849, "text": " If your model is too weak." }, { "end": 5858, "start": 5852, "text": " Then it will never learn it and you don't want to get stuck trying to learn about that forever." }, { "end": 5863, "start": 5858, "text": " Where you could actually move on to other parts of the world where there is lots of things that you can learn." }, { "end": 5869, "start": 5863, "text": " So the question question of the noisy TV really becomes the question of." }, { "end": 5873, "start": 5869, "text": " How do I know when I should give up." }, { "end": 5876, "start": 5873, "text": " On learning something and move on to the next thing." }, { "end": 5882, "start": 5876, "text": " And conceptually I think the answer is you don't really ever know." 
}, { "end": 5888, "start": 5882, "text": " But the best you can do is learn things in order of increasing difficulty." }, { "end": 5891, "start": 5888, "text": " Learn the easiest things first." }, { "end": 5893, "start": 5891, "text": " The things that are easiest for you." }, { "end": 5899, "start": 5893, "text": " And so eventually you will have learned everything that you can learn and then you will be stuck on the next hardest thing." }, { "end": 5903, "start": 5899, "text": " But there is not really a way to avoid that." }, { "end": 5905, "start": 5903, "text": " So." }, { "end": 5908, "start": 5905, "text": " So to do that." }, { "end": 5914, "start": 5908, "text": " To know to have an idea of what you can't learn." }, { "end": 5916, "start": 5914, "text": " You need a noise model." }, { "end": 5918, "start": 5916, "text": " So you need." }, { "end": 5924, "start": 5918, "text": " You need a way to if you have a deterministic model." }, { "end": 5928, "start": 5924, "text": " Then you have two problems for one." }, { "end": 5931, "start": 5928, "text": " It kind of has to explain everything perfectly." }, { "end": 5937, "start": 5931, "text": " And the second is you don't really you can't really consider multiple hypotheses." }, { "end": 5940, "start": 5937, "text": " Over the models." }, { "end": 5942, "start": 5940, "text": " You just like this one model." }, { "end": 5946, "start": 5942, "text": " Some like one point in the weight space of all possible models." }, { "end": 5950, "start": 5946, "text": " And you don't really know how much certainty you're having that model." }, { "end": 5955, "start": 5950, "text": " So you don't know how much the certainty reduced after you see some new data." }, { "end": 5960, "start": 5955, "text": " So if you have a distribution of our models like a Bayesian neural network on ensemble." }, { "end": 5964, "start": 5960, "text": " Then you can that gives you a bit of an idea of how much you know." }, { "end": 5967, "start": 5964, "text": " What's the disagreement in your ensemble." }, { "end": 5970, "start": 5967, "text": " But then you also." }, { "end": 5975, "start": 5970, "text": " You also want a way to allow noise in your predictions." }, { "end": 5981, "start": 5975, "text": " For example, if you're if you try to let's say just predict the next observation to keep it simple." }, { "end": 5985, "start": 5981, "text": " And from like maybe the last input and the action." }, { "end": 5988, "start": 5985, "text": " And you do that." }, { "end": 5991, "start": 5988, "text": " You do that within ensemble of Gaussian models." }, { "end": 5994, "start": 5991, "text": " Then you're allowing some error in the prediction." }, { "end": 6001, "start": 5994, "text": " You're saying, you know, each model tries to really predict the next input." }, { "end": 6004, "start": 6001, "text": " But with the Gaussian distribution, so doesn't have to be perfect." }, { "end": 6006, "start": 6004, "text": " It's trying to get the mean to be the right mean." }, { "end": 6012, "start": 6006, "text": " But then also if the observation is somewhere else, it's okay because we're predicting this Gaussian." }, { "end": 6017, "start": 6012, "text": " So we signed some possibility to all the next inputs we could get." }, { "end": 6022, "start": 6017, "text": " And so then." }, { "end": 6030, "start": 6022, "text": " The variance in your output of this Gaussian is basically the amount of noise that you assume there is in the data." 
}, { "end": 6036, "start": 6030, "text": " And so the more noise there is in the data, maybe you should avoid those parts of the environment." }, { "end": 6041, "start": 6036, "text": " And that's what the expert." }, { "end": 6049, "start": 6041, "text": " Basically information game also tells you mathematically and intuitively this works out really nicely because you have this ensemble of models." }, { "end": 6052, "start": 6049, "text": " They all predict the Gaussian over something in the future." }, { "end": 6057, "start": 6052, "text": " Let's say the next image and even though the next image is is a bit random." }, { "end": 6060, "start": 6057, "text": " And maybe in hierarchies to pass." }, { "end": 6070, "start": 6060, "text": " The means of your ensemble over time when you get enough data, they will still converge to the mean of whatever is the distribution of the next input." }, { "end": 6078, "start": 6070, "text": " And so the ensemble disagreement will go to zero, even though there is randomness in your inputs." }, { "end": 6081, "start": 6078, "text": " And so you will not be interested in them anymore." }, { "end": 6089, "start": 6081, "text": " So it's able to model the stochasticity in a way that makes it not curious about it anymore." }, { "end": 6091, "start": 6089, "text": " Actually, it's not clear to me how that works." }, { "end": 6098, "start": 6091, "text": " So if let's say the agent comes across two two displays or let's say two displays." }, { "end": 6103, "start": 6098, "text": " And one is showing just random goboards 30x30 go." }, { "end": 6107, "start": 6103, "text": " Or a smaller one, let's say tick-tock board." }, { "end": 6112, "start": 6107, "text": " And the other one is the same board but it's being played by experts." }, { "end": 6115, "start": 6112, "text": " And we know they're different, right?" }, { "end": 6117, "start": 6115, "text": " We know these two cases are totally different." }, { "end": 6125, "start": 6117, "text": " And we know we might think that if we could at least with a simple game, if we watch it long enough, we could figure it out." }, { "end": 6128, "start": 6125, "text": " But we don't know that at first." }, { "end": 6136, "start": 6128, "text": " Right. So you have a model that tries to predict the next move in the game." }, { "end": 6139, "start": 6136, "text": " Like just tries to predict the next input to the agent." }, { "end": 6142, "start": 6139, "text": " What it's going to see next." }, { "end": 6154, "start": 6142, "text": " And then you need multiple models so that you can get an idea of multiple hypotheses of the rules of the environment." }, { "end": 6164, "start": 6154, "text": " And you try to learn the rules of the environment by having a model that from one go position or image of a go position predicts the next image of a go position." }, { "end": 6167, "start": 6164, "text": " And." }, { "end": 6173, "start": 6167, "text": " And so to get uncertainty about your to do exploration." }, { "end": 6182, "start": 6173, "text": " You need some way of representing your uncertainty either explicitly or any other algorithm will do it in some implicit form." }, { "end": 6187, "start": 6182, "text": " So one way to do that is to train multiple environment models." }, { "end": 6194, "start": 6187, "text": " And so then you get an idea of well, if they are all the same, then I'm quite certain about what the next outcome is going to be." }, { "end": 6198, "start": 6194, "text": " They're all different. 
I probably have not that not that good of an idea." }, { "end": 6200, "start": 6198, "text": " So." }, { "end": 6218, "start": 6200, "text": " If you train these in both scenarios for the random go board and for the expert go board, then in the random go board, the dynamics models in the beginning, they are initialized differently so they will predict different things." }, { "end": 6221, "start": 6218, "text": " So your agent will go there for a while." }, { "end": 6225, "start": 6221, "text": " And then over time." }, { "end": 6231, "start": 6225, "text": " All of the models will just learn to predict the mean. And maybe the variance of the next image." }, { "end": 6239, "start": 6231, "text": " And so the mean image or the average over the next moves is is going to be uniform probably." }, { "end": 6245, "start": 6239, "text": " So if it's in pixel space, if you're actually looking at the go board, it would be basically." }, { "end": 6255, "start": 6245, "text": " You know, the stones that are already there, they will stay there and all the other. All the other empty fields, they will have an equal chance of." }, { "end": 6264, "start": 6255, "text": " Of getting the next stone. So they will all be like a little bit darker, a little bit lighter based on what players next." }, { "end": 6266, "start": 6264, "text": " And so." }, { "end": 6279, "start": 6266, "text": " If there is nothing to predict, if there were something to predict about the next move, then, you know, there would be some fields that are clearly still empty and some fields that have some chance of the stone ending up there." }, { "end": 6281, "start": 6279, "text": " And." }, { "end": 6289, "start": 6281, "text": " And if you have multiple predictors, then they can all predict this average image." }, { "end": 6298, "start": 6289, "text": " But in case of the random policy or in case of the random bought out there while they will all predict the exact next." }, { "end": 6301, "start": 6298, "text": " Kind of uniform distribution over possibilities." }, { "end": 6305, "start": 6301, "text": " And so they all predict the uniform distribution over next possibilities." }, { "end": 6308, "start": 6305, "text": " You know that." }, { "end": 6311, "start": 6308, "text": " First of all, your models all agree." }, { "end": 6316, "start": 6311, "text": " They all predict the uniform distribution. So probably the next move is actually uniform." }, { "end": 6322, "start": 6316, "text": " And then you know that there's there's nothing more to learn because your, your song, the members have agreed." }, { "end": 6331, "start": 6322, "text": " Or agree, even though they are not certain in what the next outcome is, where is in the next move and." }, { "end": 6335, "start": 6331, "text": " It will get it will take much longer for them to agree." }, { "end": 6337, "start": 6335, "text": " On what the next move is going to be." }, { "end": 6344, "start": 6337, "text": " And they will only agree by the time that they've actually like perfectly reverse engineer." }, { "end": 6349, "start": 6344, "text": " The expert players to the degree that the model allows them to." }, { "end": 6354, "start": 6349, "text": " Can you tell us a bit about the process of of writing these papers." }, { "end": 6360, "start": 6354, "text": " Like for example, to the experiments in general work at the experiments workout." }, { "end": 6368, "start": 6360, "text": " Often how you expected them to are there often dead ends that are reflected in the final papers." 
}, { "end": 6374, "start": 6368, "text": " The experiments rarely work out the way you want them to work out." }, { "end": 6377, "start": 6374, "text": " So you need to run a lot of experiments." }, { "end": 6386, "start": 6377, "text": " And I also want to be very confident in my own algorithm when I write about it." }, { "end": 6394, "start": 6386, "text": " Because it for one, it takes some time and effort to write a paper." }, { "end": 6398, "start": 6394, "text": " And that's time where you can't do research." }, { "end": 6403, "start": 6398, "text": " And so I only want to do that if I have a result that's." }, { "end": 6411, "start": 6403, "text": " That I'm happy enough with that I'm willing to spend all this time for writing the paper and then writing rebuttals for the conference." }, { "end": 6415, "start": 6411, "text": " And then you have to do poster and maybe a talk or so and so on." }, { "end": 6421, "start": 6415, "text": " And if you're not really, if you don't really believe in the method, then all of these steps are painful." }, { "end": 6424, "start": 6421, "text": " So I don't want to do that." }, { "end": 6431, "start": 6424, "text": " And I didn't think that way before grad school because before grad school, you kind of." }, { "end": 6434, "start": 6431, "text": " You just need to get a paper so you get into a PhD program." }, { "end": 6442, "start": 6434, "text": " But once you're in a PhD program, you have several years and you can think much more long term." }, { "end": 6446, "start": 6442, "text": " And much more actually follow follow your interests." }, { "end": 6451, "start": 6446, "text": " So I want to be sure that I have something that." }, { "end": 6453, "start": 6451, "text": " That I also believe in." }, { "end": 6459, "start": 6453, "text": " And so that just takes a long time and you have to like run a lot of experiments." }, { "end": 6467, "start": 6459, "text": " Whatever problem you're studying in the paper, either world modeling or exploration and so on." }, { "end": 6471, "start": 6467, "text": " There's usually a big design space of ideas you can explore." }, { "end": 6478, "start": 6471, "text": " And I want to kind of as much as possible strategically break down this space and test out all these ideas," }, { "end": 6486, "start": 6478, "text": " get an understanding of which of them are better words or what better in some situation, but worse in another why." }, { "end": 6497, "start": 6486, "text": " And it's not always easy because for example, we didn't do that much of that for plan, for example, just because we tried a lot of things and they didn't they all just didn't work at all." }, { "end": 6503, "start": 6497, "text": " But I think we would actually be interesting to go back and try out." }, { "end": 6511, "start": 6503, "text": " Like try to really understand why, for example, this stochastic and deterministic state separation seem to be so important." }, { "end": 6516, "start": 6511, "text": " So so there is a lot of tuning necessary and it takes a long time." }, { "end": 6521, "start": 6516, "text": " And I think it's worth putting in that time when it's better to have." }, { "end": 6528, "start": 6521, "text": " One paper here that you're really happy with, then for papers that nobody." }, { "end": 6531, "start": 6528, "text": " That don't really help anybody." }, { "end": 6533, "start": 6531, "text": " Does that answer your question?" }, { "end": 6535, "start": 6533, "text": " Yeah, that was great." 
}, { "end": 6540, "start": 6535, "text": " So do you have any comments on what you think the future of world models looks like?" }, { "end": 6542, "start": 6540, "text": " Yeah." }, { "end": 6546, "start": 6542, "text": " So I think we still need to scale up a bit." }, { "end": 6558, "start": 6546, "text": " Because reconstructing accurate images doesn't seem to be the solution, the long term solution for representation learning." }, { "end": 6562, "start": 6558, "text": " Neither in in model 3RL, no model base RL." }, { "end": 6569, "start": 6562, "text": " So I think there are better ways of learning representations, learning latent states than by reconstructing images." }, { "end": 6576, "start": 6569, "text": " Because if you think about it, there might be a lot of things in the image that the agent doesn't really have." }, { "end": 6583, "start": 6576, "text": " And there may also be a lot of things in the image that are just kind of really difficult to predict." }, { "end": 6595, "start": 6583, "text": " And my experience so far is that if you can get reconstruction to work on an environment, then it does really well because you're basically solving a harder task than you have to." }, { "end": 6602, "start": 6595, "text": " You're trying to predict all your sensory inputs. If you can do that, then you know everything about the world there is." }, { "end": 6613, "start": 6602, "text": " But if you can't, because the world is too complex to predict everything accurately in input space, then the agent tends to not learn about representation." }, { "end": 6616, "start": 6613, "text": " And so it's not like a graceful failure." }, { "end": 6621, "start": 6616, "text": " And I think contrastive representation learning is really interesting." }, { "end": 6636, "start": 6621, "text": " So I think that's a couple of very successful, empirical successful methods for static images that I think we can apply to video sequences for RL." }, { "end": 6638, "start": 6636, "text": " And so we're trying some of that." }, { "end": 6654, "start": 6638, "text": " And I think another aspect that I think a lot of RL is still kind of bottleneck by is temporal abstraction. And I said earlier value functions give you some of that because they had you consider rewards into the long term future." }, { "end": 6665, "start": 6654, "text": " But in a really complex environment, I think it will become intractable to learn the good value function for everything." }, { "end": 6669, "start": 6665, "text": " And you probably need to do some kind of online planning." }, { "end": 6678, "start": 6669, "text": " Just because there are too many possible scenarios that you could imagine to really be able to learn about all of them." }, { "end": 6687, "start": 6678, "text": " And so what you want to do is do the planning online. So you only have to do it for the situations that you actually encounter." }, { "end": 6699, "start": 6687, "text": " And to them still consider long horizons you need to have temporal abstraction in your world model. So that's another thing we're trying." }, { "end": 6706, "start": 6699, "text": " And then besides that, I think we, I mean, there is a lot of." }, { "end": 6726, "start": 6706, "text": " There is a big open space for objective functions that are enabled through learning accurate about laws and some of them will benefit from having uncertainty estimates that are more accurate than maybe ensembles about parts of the world model." 
}, { "end": 6738, "start": 6726, "text": " And then we have a small better empowerment is another interesting objective function that we're studying that becomes much easier to compute once you have a world model." }, { "end": 6741, "start": 6738, "text": " So in summary, it's scaling up." }, { "end": 6744, "start": 6741, "text": " Learning better representations." }, { "end": 6753, "start": 6744, "text": " And finding better objective functions because eventually exploration will become really important as well to learn a good part model." }, { "end": 6762, "start": 6753, "text": " So back at the Neureps 2019 RL workshop poster sessions, I was at David Silver's poster for Muse 0." }, { "end": 6771, "start": 6762, "text": " And I asked him about how Muse 0 handled Stokeasticity. And he told me that it didn't. It used a deterministic model." }, { "end": 6782, "start": 6771, "text": " But that he, but he said it could be extended to handle Stokeastic case. And I think I think Muse 0 builds on the predictron." }, { "end": 6792, "start": 6782, "text": " And paper which which does some kind of temporal abstraction. So maybe there's progress being made in that in the temporal abstraction side." }, { "end": 6802, "start": 6792, "text": " Yeah, I'm, I'm actually not sure if the original predictron has temporal abstraction in it." }, { "end": 6814, "start": 6802, "text": " But yeah, so I think for the Stokeasticity aspect, it may be more necessary when you're trying to explain more complex data." }, { "end": 6823, "start": 6814, "text": " So if you're trying to explain your inputs, Stokeasticity becomes more important than if you're just trying to explain future rewards." }, { "end": 6826, "start": 6823, "text": " That's my guess." }, { "end": 6834, "start": 6826, "text": " Yeah, also you have to learn a lot more of course if you, if you're trying to model the world rather than the task." }, { "end": 6841, "start": 6834, "text": " And, and with the result is that you get a model that can be useful for a lot of different tasks." }, { "end": 6846, "start": 6841, "text": " And that can be useful for exploration where you don't have a task at all." }, { "end": 6856, "start": 6846, "text": " But there, I mean, there are some, there are some recent papers on doing temporal abstraction and some old ones as well, both in model 3 and model based." }, { "end": 6859, "start": 6856, "text": " Arrell, it's just that." }, { "end": 6866, "start": 6859, "text": " And I think there are lots of great ideas and a lot of these ideas can probably." }, { "end": 6874, "start": 6866, "text": " My guess is that we don't have to invent like a crazy fancy method for like almost everything in machine learning." }, { "end": 6889, "start": 6874, "text": " We just have to take like a reasonable kind of something that seems intuitively correct and then push it, push it until it either works or we." }, { "end": 6892, "start": 6889, "text": " You find a reason for why it doesn't work." }, { "end": 6897, "start": 6892, "text": " And that hasn't really happened for a temporal abstraction yet at Arrell." }, { "end": 6902, "start": 6897, "text": " Can you say anything about the research directions that that you are pursuing going forward?" }, { "end": 6914, "start": 6902, "text": " Yeah, I mean, that overlaps a lot with your with what I said in response to your question about next steps for world models." 
}, { "end": 6926, "start": 6914, "text": " But yeah, for me, I'm trying to systematically go through different objective functions for intrinsic motivation now." }, { "end": 6932, "start": 6926, "text": " And besides that, we also want to work on harder tasks." }, { "end": 6937, "start": 6932, "text": " So I need to scale up world models further so that we can do." }, { "end": 6949, "start": 6937, "text": " Let's say train like an agent with only within intrinsic motivation to play Minecraft from pixels." }, { "end": 6953, "start": 6949, "text": " That would be great." }, { "end": 6965, "start": 6953, "text": " And besides the house and it survives and maybe fights them once this and ignite and you know, because there's such a complexity and kind of." }, { "end": 6976, "start": 6965, "text": " There are there are so many things you can do because it's not a lot of games are actually easier to explore than you might think." }, { "end": 6984, "start": 6976, "text": " For example, in Mario, you can only walk forward. So it's not that difficult to explain to explore." }, { "end": 6991, "start": 6984, "text": " It's basically either you're you're making progress, you go forward or you don't." }, { "end": 6997, "start": 6991, "text": " But in an open world game, there are so many things you can do." }, { "end": 7004, "start": 6997, "text": " And then you have to then you get an additional challenge because once you've explored something, you kind of have to go back and." }, { "end": 7009, "start": 7004, "text": " And see if there's something else that I could have also tried from here." }, { "end": 7020, "start": 7009, "text": " And and so that's why I like thinking about training it, doing intrinsic motivation Minecraft because." }, { "end": 7028, "start": 7020, "text": " You know, you have to build tools and then use these tools to get better materials and they can be better tools and then you can." }, { "end": 7037, "start": 7028, "text": " You know, build more like like bring yourself into like a better." }, { "end": 7040, "start": 7037, "text": " Into a better state for surviving." }, { "end": 7046, "start": 7040, "text": " And so even agent can actually do all these things, then it must be very general." }, { "end": 7053, "start": 7046, "text": " Very general objective function that that can explain all of this." }, { "end": 7060, "start": 7053, "text": " Besides your own work, is there other angles in RL that you find very interesting lately that you might not have mentioned?" }, { "end": 7071, "start": 7060, "text": " Yeah, there's one that I've been thinking about a bit, but not really not done anything in which is external memory." }, { "end": 7076, "start": 7071, "text": " For to give agents long term memory, which is I think." }, { "end": 7086, "start": 7076, "text": " Temporal abstraction is there's one part of the puzzle. You do want to plan into the future on a temporary abstract level." }, { "end": 7093, "start": 7086, "text": " But and that gives you a long context into the from the past as well." }, { "end": 7097, "start": 7093, "text": " But I think you can't keep everything in memory in your working memory at a time." }, { "end": 7105, "start": 7097, "text": " And so it's very natural to think that that could be this external memory module that you can write things into." }, { "end": 7110, "start": 7105, "text": " And then you can later query it to get back the facts that you need at the moment." 
}, { "end": 7119, "start": 7110, "text": " So there, yeah, there are a couple of interesting interesting papers on training these modules for RL." }, { "end": 7127, "start": 7119, "text": " And another direction that's not directly reinforcement learning is." }, { "end": 7145, "start": 7127, "text": " It is the like brain and slide architectures. So I think it would be cool to to develop an unsupervised learning algorithm that works in an online setting on high dimensional inputs." }, { "end": 7147, "start": 7145, "text": " So it can't really do backprop through time." }, { "end": 7163, "start": 7147, "text": " It has to find some other way because it keeps getting new new input. So I think we have to be cool to kind of go away from the static image setting into the online streaming setting for representation learning." }, { "end": 7175, "start": 7163, "text": " And potentially explore ideas people like just kind of the very basic ideas that people will know about computation in the brain, which is like sparse distributed representations." }, { "end": 7179, "start": 7175, "text": " And the hierarchy is so on." }, { "end": 7187, "start": 7179, "text": " Danisher Haffner, it's been a real treat. And thanks for taking this time and your patience to teach us so much." }, { "end": 7191, "start": 7187, "text": " Actually, I've learned so much in this episode. I'm going to listen to it many times." }, { "end": 7195, "start": 7191, "text": " And it's been great hearing about your fascinating research." }, { "end": 7201, "start": 7195, "text": " I can't wait to hear or read about what you come up next. Thanks for sharing your time and your insight with us." }, { "end": 7209, "start": 7201, "text": " Thanks, Robin. That was a great chat and looking forward to hearing the episode." }, { "end": 7227, "start": 7209, "text": " That's our episode for today folks. Be sure to check talkrl.com for more great episodes." } ]
Csaba Szepesvari
Csaba Szepesvari of DeepMind shares his views on Bandits, Adversaries, PUCT in AlphaGo / AlphaZero / MuZero, AGI and RL, what is timeless, and more!
https://media.transistor…a86.mp3?src=site
This is TalkRL Podcast. All reinforcement learning, all the time. Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan. Professor Csaba Szepesvari is head of the Foundations team at DeepMind, Professor of Computer Science at the University of Alberta, Canada CIFAR AI Chair, Fellow at the Alberta Machine Intelligence Institute, co-author of the book Bandit Algorithms along with Tor Lattimore, and author of the book Algorithms for Reinforcement Learning. Thank you, Csaba, for being here. Thank you for having me. You've described your research interest as interactive online learning. Can you tell us when you decided to dedicate yourself to this field, and how did you come to that decision? Yeah, I guess this goes back to my PhD years when I first discovered reinforcement learning. I've been interested in AI, and I saw that reinforcement learning is a very good way to study how to create intelligent agents. So after this, I started working on this, and since then I've been working on this. Would you say your career turned out as you planned, or are you surprised by how it turned out? Well, I didn't always plan things. So after my PhD I was working in industry for quite some time. And after that I came back to academia. So I cannot say that I was planning everything that happened to me, but things worked out pretty well, I would say. So in your professional life, what accomplishments do you feel have meant the most to you? And has that idea of what's most important changed over your career? I guess from the very beginning I was curious about what is possible to achieve with algorithms, and with what sort of algorithms. So the limits of performance, and creating understanding about problems, achieving a better understanding of problems. And so results of that kind have mattered the most to me, and that has always been the case. Can you tell us a bit about the Foundations team at DeepMind? What is the goal of that team? Yeah, so the Foundations team at DeepMind was created a little bit more than two years ago. And, as the name suggests, the goal of the team is to study the foundations of AI, machine learning, reinforcement learning, to create a better theoretical understanding of what's possible to achieve in these fields. Can you say anything about the difference between doing this type of work in industry and academia, since you've seen both sides? I guess industry means a lot of different things. I am currently at DeepMind, and I think DeepMind is very special within industry. It's almost like an ideal academic environment, but industry in general doesn't necessarily mean this. So from this perspective, DeepMind is maybe even more academic than an academic environment, because you can just focus on research. When you're designing a new algorithm, or when you're doing the type of research you do, what does it look like to do that type of work? What does your day to day look like? What kind of milestones are there in the middle of a project like that? Yeah, so the type of work I do is of a theoretical nature. So it's a lot of bouncing back and forth between problems, models, algorithms, and trying to figure out what is the next problem to look at, in what model, and with what algorithms to approach a given problem, or even just trying to understand whether there exists a method that is able to break through some barrier. So a lot of bouncing back and forth between these different aspects of the work.
So you have advised a lot of accomplished researchers for their masters and PhDs. On your homepage, you mention what you look for in students. I wonder if you can say anything about what students should look for in an advisor. I guess it's pretty important that the advisor is able to act as a good mentor in addition to being technically proficient. Something that is maybe sometimes overlooked is whether the advisor is going to have the time, for example, to work directly with the students. I think that's pretty important. Are there stages of maturity in machine learning and reinforcement learning that researchers go through, and in your mind, what does that type of ladder or progression look like? I guess it's probably not that different than in other fields, in that you first have to pick up certain technical skills before you're able to move on to the next level of the ladder, which could be to pick your own problems, to design your own problems, your own challenges. So I guess that must be the same in every field. So when we talk about reinforcement learning, there are these different problem settings and subfields that come up: bandits and game theory and the full RL problem and control systems and optimization. Are all of these fields brothers and sisters of each other? Are they all subfields of something, or how should we think about these different aspects? Are they different perspectives? Do we need to understand them all to be good at decision-making under uncertainty? I guess the more you understand, the better off you will be. To start with, what's necessary to understand, I don't know. I try to learn about all these different perspectives. But a lot of times what you discover is that these perspectives come from a certain time, when a certain type of problem was important for people. As time moves on, some of the results and achievements of previous eras become less important, or less important from the perspective of the type of problems that you're trying to solve. Nevertheless, it happens a lot of times that people before us have thought about the same problems, maybe approached them slightly differently, but had very valuable thoughts. So it's really worthwhile to study all these different viewpoints. And yes, these fields are by and large studying the same problem. These settings have been around for many decades. Is it a fair question to ask: do we have all the fields right now that we need, or are there still some missing from this list that don't exist yet? Well, that's really hard to answer. I guess we are creating fields as we go. So I expect new fields are going to emerge, but I don't have a clue about what they could be, how they're going to look, or how they are going to be different from the ones that we currently have. So you said that we're creating fields as we go. Could you mention maybe a more recent subfield? I mean, if you just think about the buzzwords of today that weren't buzzwords yesterday, as things progress these buzzwords become fields of their own. So a buzzword of not so long ago was data science, right? I guess maybe it's still popular in certain circles. A newer buzzword is deep learning. Are these their own fields or not? I don't know. They are, or they will be. So this is exactly what I mean. As interest and focus shifts, a lot of people start to work on the same topic, and then it may become a field of its own.
So in your bandit algorithms book, you note in the introduction that bandit problems were introduced by Thompson back in 1933 in the context of medical trials. So now it's, I guess, 87 years later, and I've read that the FDA discourages the use of bandits in clinical trials. Do you feel strongly that that should change, and what do you think it would take to change that? I'm pretty sure that things are going to change. There is usually a back and forth between technological pushes and regulation. The regulators are rightly thinking very carefully about the pros and cons of different approaches. And I think what changes things is that in biology as well you can see a lot of advances. Just yesterday I read an article in a scientific journal saying that today it is possible to create medications for a single patient, and people are debating whether we should do that. So when things change so drastically, I don't see why the FDA, which previously had an opposing opinion about this particular topic, wouldn't change its mind about bandits. I haven't read this discouragement myself, but as far as I know there are actually some trials that are using bandit-type algorithms. In your book, you mention the EXP4 algorithm, which learns to delegate to a set of experts, if I understand correctly, and which could themselves be bandit algorithms. So I was wondering, is EXP4 kind of like the bandit version of hierarchical reinforcement learning, with the experts being like options in RL? Or is that mapping not very good? Well, the mapping is good to some extent, but it's missing a lot of elements. In bandits you don't have long trajectories, there are no transitions, and there is no real aspect of planning, at least in bandits; well, there is planning for reducing uncertainty, but that's all. So this framework is meant to study information sharing in a hierarchical context, and it's good for that. Hierarchical RL has a lot of other aspects that this framework just cannot capture. I wonder if automated methods will ever be used to find improved fundamental bandit algorithms. Is that a sensible thing? Oh yes, for sure. Why not? With my colleagues at Google Brain, we are actually looking at some of these automated methods to construct bandit algorithms. The thing that you need to understand is that these have different goals. If you have a set of different bandit environments that you can sample from, then it makes a lot of sense to specialize to this set of environments, and an automated learning algorithm can perhaps do this more efficiently than a human would be able to, because for a human it may be very opaque or hard to extract the knowledge required to specialize the bandit algorithm. So it makes a lot of sense to me to do this.
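For readers who want to see what the EXP4 algorithm mentioned above looks like in code, here is a minimal sketch of the classic exponential-weights version, in which each expert supplies a probability distribution over arms every round. The environment interface (`get_advice`, `pull`), the exploration rate, and the constants are illustrative assumptions made for this example, not the book's exact presentation.

```python
import numpy as np

def exp4(n_rounds, n_arms, n_experts, get_advice, pull, gamma=0.1, rng=None):
    """Minimal EXP4 sketch: exponential weights over experts that each
    recommend a probability distribution over arms every round.

    get_advice(t) -> array of shape (n_experts, n_arms), rows sum to 1.
    pull(t, arm)  -> reward in [0, 1].
    Both callables are assumptions made for this illustration.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    weights = np.ones(n_experts)
    total_reward = 0.0
    for t in range(n_rounds):
        advice = get_advice(t)                       # (n_experts, n_arms)
        mix = weights @ advice / weights.sum()       # combine expert advice by weight
        probs = (1 - gamma) * mix + gamma / n_arms   # mix in uniform exploration
        arm = int(rng.choice(n_arms, p=probs))
        reward = pull(t, arm)
        total_reward += reward
        # Importance-weighted reward estimate: only the pulled arm is nonzero.
        reward_hat = np.zeros(n_arms)
        reward_hat[arm] = reward / probs[arm]
        # Each expert is credited with the estimated reward of its own recommendation.
        expert_hat = advice @ reward_hat
        weights *= np.exp(gamma * expert_hat / n_arms)
    return total_reward
```

With, say, two constant experts that always recommend arm 0 or arm 1, the weight of the expert pointing at the better arm grows and the policy concentrates on it, which is the "delegating to experts" behaviour discussed above.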
And you've said, if I can paraphrase, that machine learning research should reduce its emphasis on competitive testing, and that the purpose of science is to generate knowledge. Do you think of the bandit algorithms that you've produced more as inventions or more as discoveries? Is it science or is it engineering? So this question has two parts; I'm not sure how the two parts relate to each other. So the first part was, I was reflecting something you said, that ML research should reduce its emphasis on competitive testing and that the purpose of science is to generate knowledge, as opposed to just optimizing on a leaderboard, maybe. Right. Yeah. So I still maintain that. I think the purpose of science really is to generate knowledge, or not just that, but ultimately that's the goal. And it's a different type of activity. If you care about solving a particular problem, then of course you're not necessarily interested in understanding what works or what doesn't work, and you can just try many different things. These are complementary approaches. And the second part was, okay, sorry, it's not the best question, but I wanted to know whether you felt that producing these bandit algorithms is more like an engineering invention or more like a scientific discovery. Oh, I see. I guess it depends, right? If you use an automated meta-learning tool to discover bandit algorithms, then the study of the automated algorithm, whether it works, how it works, to what extent it can work, could be studied as a scientific or mathematical question; it could be approached as a formal question. But you can also decide to start with a problem setting and then try to find the best algorithm for that setting, let it be a bandit algorithm or anything else, and that would also be a scientific approach. But if you care about a practical problem, then you can just try different things, as long as your inference practice is sound, meaning that there is usually some uncertainty about the application, and you're making inferences about what's going to work and what's not going to work. As long as you have a sound approach to this, you can try many different things. For your UCT algorithm, I think from the 2006 paper Bandit Based Monte Carlo Planning, is that at the heart of AlphaGo, AlphaZero and MuZero, is that right? Some version, or maybe I should say an algorithm that was inspired by UCT, called PUCT, is at the heart. So it's a modification of UCT which is being used by these algorithms. So did you foresee that type of use when you first came up with it? Definitely some kind of use is always on your mind when you're trying to come up with an algorithm, and we were looking at game domains, and specifically Go. You're always optimistic, right, about the future of any of your inventions, if you want. But I don't think we were hoping at the time that in about a decade these algorithms would contribute in some way to a major success like the success of AlphaGo. Would you say that this algorithm is timeless? Or would you say that at some point it could maybe be superseded? And I mean PUCT. Yeah, right. Well, I don't think anything is timeless. For some time it can stand as the default algorithm, I hope, that others need to supersede, and then it fulfills its purpose. I think that's a sense in which it can be timeless, but just for a while. So it's not timeless. Was it a major effort to develop UCT, or in retrospect, when you think about it, was it a big project for you or something minor? It was a fun project. It happened when I was still in Hungary, working with a colleague, Levente Kocsis, and we were learning about bandits. My colleague's specialty was learning in games. He went to a conference on the topic and he came back and told me, hey, Csaba, there is all this buzz about these Monte Carlo tree search algorithms, but they are doing it all wrong, and we should do something about this by merging it with bandit algorithms. And it's been quite fun to do this.
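As a rough illustration of the difference between UCT's selection rule and the PUCT variant used in AlphaGo-style systems, here is a sketch of the two child-scoring formulas used during the selection phase of Monte Carlo tree search. The node structure and the exploration constants are assumptions made for this example, not the original papers' exact notation.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    visit_count: int = 0
    total_value: float = 0.0
    prior: float = 0.0          # used only by PUCT (e.g. a policy network's probability)
    children: dict = field(default_factory=dict)

    @property
    def mean_value(self) -> float:
        return self.total_value / self.visit_count if self.visit_count else 0.0

def uct_score(parent: Node, child: Node, c: float = 1.4) -> float:
    # UCT: UCB1 applied inside the tree, mean value plus an exploration bonus
    # that shrinks as the child is visited more often.
    if child.visit_count == 0:
        return float("inf")     # try unvisited children first
    return child.mean_value + c * math.sqrt(
        math.log(max(1, parent.visit_count)) / child.visit_count)

def puct_score(parent: Node, child: Node, c: float = 1.0) -> float:
    # PUCT-style rule: the exploration bonus is scaled by a prior probability,
    # which is how AlphaGo-style systems fold a learned policy into the search.
    return child.mean_value + c * child.prior * math.sqrt(parent.visit_count) / (1 + child.visit_count)

def select_child(parent: Node, score_fn=uct_score):
    # Descend to the highest-scoring (action, child) pair during selection.
    return max(parent.children.items(), key=lambda kv: score_fn(parent, kv[1]))
```

The PUCT formula here follows the commonly cited AlphaGo Zero form; real implementations differ in details such as how the exploration constant is scheduled.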
So I wanted to ask you about the notion of adversary in bandits that comes up often. If I understand correctly, the adversary in bandits is not typically a learning agent that's designed to minimize our reward in a game-theory sense. Is that correct? Yeah, that's correct. It's an oblivious adversary. So how do you contrast adversarial bandits with a real competitive agent, like in a multi-agent or game-theoretic RL sense? Yeah, so the adversarial bandits framework comes from adversarial online learning, where people studied a simpler setting, sequence prediction problems. You have a sequence that you need to predict, and you're not making many assumptions about it. You just want to lift the statistical assumptions and see whether you can define a notion of successful learning in a meaningful way. It turns out that you can, and then you can utilize this and create the adversarial bandit setting. So this is an attempt to understand what is important about the statistical assumptions that standard learning theory makes very often, and which of these assumptions are essential, and to what degree, for learning. Okay, but if there was a learning competitor that was trying to minimize your reward, would you say that was also an adversary in this sense? Well, you could say that it would be an adversary, but the algorithms that are developed for this adversarial setting are not necessarily good algorithms for that case. Nevertheless, it happens that very often they are, maybe with some extra care and extra modifications. Actually, my colleagues in Alberta and elsewhere have been using these adversarial learning algorithms in the context of games to, for example, compute approximate Nash equilibria. Can you tell us more about the special case of nature as an adversary and how that differs? So nature as an adversary doesn't care about what I do, right? If I have a true adversary, then the adversary is going to watch me, and if it notices some regularity, some pattern in the way I choose my actions, then it will try to take advantage of that. Whereas if I have to compete with nature, nature doesn't care about me, and I can try to take advantage of nature. So it's kind of like a really dumb adversary, if you wish. It could be very challenging to compete with, but it's not reacting to you. When it comes to uncertainty, in the bandit setting and the RL setting, it seems there are so many different places where uncertainty can arise, and we have different types of uncertainty. It seems like many methods just kind of bundle all of this up together. How important is it for us to distinguish between, say, epistemic and aleatoric uncertainty in these settings? I think that this distinction is pretty important and quite fundamental. You have to reason about both types of uncertainty, right? One is concerned with the uncertainty of future events, the other is concerned with your uncertainty about the nature of the environment that you're exposed to. So you have to reason about both levels of uncertainty, and I think that this is pretty well understood now.
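As a small illustration of the epistemic-versus-aleatoric distinction just described, here is a sketch of Thompson sampling on a Bernoulli bandit: the Beta posterior over each arm's unknown success probability captures epistemic uncertainty, which shrinks with data, while the coin-flip outcome of each pull is aleatoric noise that never goes away. The arm probabilities and horizon below are made up for the example.

```python
import numpy as np

def thompson_bernoulli(true_means, n_rounds=2000, seed=0):
    """Thompson sampling on a Bernoulli bandit.

    Epistemic uncertainty: the Beta(successes+1, failures+1) posterior over each
    arm's unknown mean, which concentrates as we gather data.
    Aleatoric uncertainty: the randomness of each individual pull, which remains
    even if the means were known exactly.
    """
    rng = np.random.default_rng(seed)
    n_arms = len(true_means)
    successes = np.zeros(n_arms)
    failures = np.zeros(n_arms)
    for _ in range(n_rounds):
        # Sample one plausible mean per arm from the posterior, act greedily on the sample.
        sampled_means = rng.beta(successes + 1, failures + 1)
        arm = int(np.argmax(sampled_means))
        reward = rng.random() < true_means[arm]   # aleatoric coin flip
        successes[arm] += reward
        failures[arm] += 1 - reward
    return successes, failures

# Example with assumed success probabilities 0.4 and 0.6: after enough rounds,
# most pulls go to the better arm.
if __name__ == "__main__":
    s, f = thompson_bernoulli([0.4, 0.6])
    print("pulls per arm:", s + f)
```

This is also the one-step ancestor of the posterior sampling idea that comes up next: replace the Beta posterior over arm means with a posterior over whole environments.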
So at NeurIPS 2019, Emtiyaz Khan gave a talk on deep learning with Bayesian principles. It seems like we're just starting to figure out how to combine these two paradigms in a meaningful way, just in terms of supervised learning. Can you comment on that combination in terms of RL? Where are we in reinforcement learning in terms of bringing together the deep learning side and the Bayesian side? Yeah, there is just a lot to reconcile. I guess I know more about how the Bayesian side interacts with RL. There are some very interesting ideas there. Thompson's name came up previously, and he was proposing a very simple idea, which is that you should sample. You maintain a posterior distribution over the possible environments that you might be in, and whenever it comes to making a decision about what actions to take, you just sample a particular environment from this posterior and then you run your planning algorithm on it. So this could be a full RL environment. Then you figure out the optimal policy for it, and you start to follow that policy. This led to an algorithm called posterior sampling reinforcement learning, PSRL, which is a very interesting and highly competitive idea, a highly competitive algorithm. Still, there are a lot of things that are unknown. Sampling from a posterior may not be easy; usually this is plagued with huge computational challenges, so addressing those challenges is one important topic. Then you can go even further and ask yourself, is it really important that I sample from the posterior distribution, or what type of distribution should I sample from? What is important about these distributions? So there are many, many interesting questions here, and I expect to see a lot more to come on this front. And of course, at any point in time, if you need function approximation, then you can throw in some neural networks here and there and make everything more powerful in that way, and maybe more flexible. So yes, there are indeed many interesting ways you can combine these ideas, and we have just started to explore them.
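To make the PSRL recipe above concrete, here is a minimal sketch for a small tabular MDP with known rewards and a Dirichlet posterior over transition probabilities: sample an MDP from the posterior at the start of each episode, plan in the sample, then act with the resulting policy for the episode. The Dirichlet prior, the value-iteration planner, the fixed start state, and the `env_step` interface are simplifying assumptions for illustration, not the only way to instantiate PSRL.

```python
import numpy as np

def psrl_episode(counts, rewards, n_states, n_actions, horizon, env_step, rng):
    """One PSRL episode on a tabular MDP with known rewards.

    counts[s, a, s'] : observed transition counts (sufficient statistics of a
                       Dirichlet posterior over transition probabilities).
    rewards[s, a]    : expected rewards, assumed known to keep the sketch short.
    env_step(s, a)   : assumed environment interface returning the next state.
    """
    # 1. Sample a plausible MDP from the posterior (epistemic uncertainty).
    sampled_P = np.zeros((n_states, n_actions, n_states))
    for s in range(n_states):
        for a in range(n_actions):
            sampled_P[s, a] = rng.dirichlet(counts[s, a] + 1.0)  # Dirichlet(1) prior

    # 2. Plan in the sampled MDP with finite-horizon value iteration.
    V = np.zeros(n_states)
    policy = np.zeros((horizon, n_states), dtype=int)
    for h in reversed(range(horizon)):
        Q = rewards + sampled_P @ V          # Q[s, a] = r(s, a) + E_{s'}[V(s')]
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)

    # 3. Act with the sampled MDP's policy for the whole episode, updating counts.
    s = 0                                    # start state assumed to be 0 here
    for h in range(horizon):
        a = int(policy[h, s])
        s_next = env_step(s, a)
        counts[s, a, s_next] += 1
        s = s_next
    return counts
```

In practice the rewards would also get a posterior, and for large problems the sampling and planning steps are exactly where the computational challenges mentioned above show up.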
If you had to describe what is missing from RL today to allow it to fulfill the complete vision of what it could potentially become, why is it still limited right now? Why is it still mostly in the lab? I guess RL deals with a scenario which is more complex than the typical scenario where machine learning has been used: sequential decision making under uncertainty. And because of that, deployment is harder. So that's one aspect. Another aspect of the problem is that, of course, the space of RL problems is so huge that you can always find really, really difficult instances in it, which may require, if you care about those specific instances, some special approaches. We have a number of performance lower bounds for different RL settings that clearly tell us that without structural assumptions, further assumptions, you can't hope to do very well. So at times it's even unclear to me whether we should blame the current algorithms, or we should just blame our bad luck that the space of RL problems is so large. So is that why we keep seeing new algorithms coming out all the time? I guess that could be one reason. We won't stop, right, just because something is really hard. So people keep inventing newer and newer things. Is much in RL fundamental and timeless? If we go into the distant future, or I was thinking of an advanced alien civilization, would they agree with us that these are the basic building blocks of decision-making and intelligence? Or is some of this maybe accidental? Or does nobody know right now? I guess I don't know about the aliens, but I feel that this is kind of the same as in mathematics in general. The closer you stay to simple problems, the higher the chances are that you're going to find something fundamental that is borderline timeless. So the Cauchy-Schwarz inequality, which is something really simple and core to many of the things that we do, is not going to go away anytime soon. So the question is whether, for example, Markov decision processes, as we know them today, are fundamental or not, whether they're going to be viewed as foundational building blocks. I guess they are still, I would say, pretty simple, and as such, there could be some really valuable ideas there in their simplicity. So my hunch is that there is something core and foundational about these frameworks that we are currently studying. I was present at the NeurIPS 2019 RL social where there was a small group of people that you were part of, and you were congratulating an OpenAI researcher on their Shadow Hand and Rubik's Cube work, that very impressive project. And I also heard you say that you felt disappointed that they needed to use markers on the hands. I wonder if you can share a little bit more about that comment with our listeners, why that was important to you, why you felt that way about it? Yeah, I guess my disappointment is not really specific to that project. It's more that I was reflecting on the fact that in this problem there is a perceptual element, which is that you need to sense the configuration of the cube and how the hand is holding it and all that. And it's a pretty limited perceptual problem compared to, if you look around you, what you see and the perceptual problems that humans have to solve. And yet, I guess for practical reasons or whatnot, these researchers, these great people at OpenAI, were forced, I guess, to instrument the environment so that this perceptual problem didn't need to be dealt with in its full complexity. Whereas I find, or maybe I had expectations, that we should be able to deal with it by now. Yeah, so that's my disappointment. So it sounds like more of a comment on the state of the field? Yeah, absolutely. And clearly bandit algorithms have been applied in practical applications for some time and have been very successful. It seems maybe that's quite a lot less true for the full RL setting, yet it seems to have so much promise. Our first guest on this podcast was Natasha Jaques, and she spoke about her paper Tackling Climate Change with Machine Learning, and that paper mentions multiple areas where emissions could potentially be reduced by applying RL in the future. But this hasn't really come to pass yet. And then we see a recent paper from Dulac-Arnold et al., Challenges of Real-World Reinforcement Learning, where they try to isolate the challenges keeping this out of practice. I wonder how you feel about where we are in terms of bringing RL into real-world applications on a large scale. Is it a distant dream? Is it just around the corner? How do you feel about that? I guess it's going to be a gradual, multi-year process. We'll see more and more RL applications, I expect that. But as with control, there are certain risks involved, right? And so people are conservative when it comes to applying learning algorithms that would be learning online. If you're learning offline, then you have a batch of data, and learning with a batch of data is really complicated.
Not only really complicated, it could be just impossible, because the data may not have the information that you actually need to come up with really good policies. So RL is, again, riddled with all of these challenges when it comes to applications. The easiest applications could be when you have some internet-based systems, where everything happens virtually, so to speak, and maybe the impact of one not-so-great decision is not so high, right? So I expect more applications to come, but I don't expect this to happen all at once or anything like that. Do you think that model-based RL is critical to getting real-world systems working and safe? I don't know about that. It depends on the application, I guess. Model-based RL is a great idea, but it's not without its problems. Yeah, I don't know the answer. Do you have any comments about time in model-based RL? It's always struck me as restrictive that the standard model is a one-step, one-time-step model. And if your time scale is very small, then it could be modeling such a small amount of time that it becomes hard to model anything over the longer term. I guess people have been looking at multi-step predictions and at compressing time or abstracting time in various ways, so hierarchical RL and various approaches to that. You can easily imagine such models, and people have been looking at building models that make multi-step predictions at a time. It's not trivial to do, right, because it depends on the policy, like, what is the thing that you're trying to predict, right? I guess we'll keep trying. I agree that it's an important aspect, and maybe it's receiving less attention, but maybe that's because it's more complicated. It's more complicated as a problem, and as a result it takes more time to come up with the right set of ideas to study it. I guess we have the Predictron paper from David Silver and company that seemed to be abstracting time away in some sense. I actually didn't follow that entire paper, but I read it and tried to follow it. It seems like they overcame the single-time-step issue and found some way to make time abstract, which seems very appealing. I guess there are a bunch of papers even before that one where people have been talking about compressing time and abstracting time. Do you have any comments on explainability in RL? Do you feel like explainability is absolutely required for practical systems? It basically depends on the system. For some systems it's going to be required, but for others, not so much. So I think explainability is an important topic, but there will be lots of problems where explainability is not strictly required. And also, explainability is relative to who you are explaining to. What does it mean to explain something, some decision that was made? It means that whoever you're explaining it to is going to accept the explanation as the explanation. With humans, when they are explaining things to each other, they take into account what the other person knows, and so on and so forth. So it's also true that the explanations may be high-level and maybe not super precise. Oftentimes we just give each other an intuition about a decision that was made, and if the explanation we receive matches our expectations, then we accept it. So it's a tricky question when humans are involved, I guess. Do you think that safe RL systems have to be explainable?
Like, is that a required property of a safe system, or is it maybe sometimes optional? I'm pretty sure that it's going to be optional a lot of the time. But safety is also going to be pretty crucial. For RL systems, I guess, it's even more complicated than for other systems, because you have a sequence of decisions that you make, and maybe those lead down a garden path that you'll never ever enter except for once in a lifetime, but that could be highly costly and could have bad consequences. How do you know that that's not going to happen? It's a very important topic, and it's good that a lot of people are looking at it. Do you think AGI is worth discussing these days? Why not? So do you think of AGI as something very abstract that can never be fully attained, like an asymptote or something? Or do you think it's something that can really exist at some point? I guess discussions are important no matter what, at a very high level. So the idea that you should increase generality is a very appealing idea. Of course, at the same time, we also know that generality could come at some price. So the whole idea that you want to be as general as possible and not compromise performance is already intriguing. I think that what we see from the history of AI is that people at times were trying too hard to over-specialize too early. So it was an instance of premature optimization, and that's a mistake that is easy to fall into. As such, it's really interesting to see that these days you have all these learning algorithms that are achieving things we couldn't imagine achieving a long time ago, and they're ever more general. So by increasing generality, you can be better. That's a very intriguing idea. But of course, it doesn't come for free. So you have to find the right way to navigate this space, and I find this quite interesting. Can you comment on the role of reinforcement learning on the path to AGI? What is the relationship between these two? Reinforcement learning is just a learning approach for the specific situation where you need to make decisions sequentially and there is some uncertainty involved. To me, that's really fundamental to intelligence. And if you want to abstract away a lot of what goes into intelligence, then maybe one simple model is just to say that you want to solve as many reinforcement learning problems with a single algorithm as possible. So it's quite foundational to AGI, or whatever you want to call it. So AI research is very open these days, and we publish openly in general. Then we saw OpenAI limiting the publishing of GPT-2, and they released it only after some time, saying it was risky. Do you feel like publishing in RL should continue openly for the foreseeable future? Or do you see a future in which, at some point, it's not safe to do that? Right. So it's my personal opinion that we should keep science open, and that's our safest bet. Humanity has a history of being able to deal with new information that is generated by scientists, not without any problems, but things work out well. And so I'm quite optimistic. I would take an optimistic stance on this, and I would say that we should keep publishing openly. Of course, there are some risks, and in specific cases you might override this rule, but I think in general we should aim for an open publishing model.
And then, I once heard you quote Voltaire, and maybe Spider-Man's uncle Ben, who both said, with great power comes great responsibility. Do you ever worry that RL might be used for some unsavory purposes, like maybe StarCraft algorithms being used by the military, or machine learning being used to sway the electorate? Is there anything the RL community can do to help ensure that the progress is good for humanity on balance? I think the RL community has a responsibility to share information about the progress it makes and to inform the public about the potential ups and downs of all the advances that are being made. At the same time, as a researcher, in your everyday work, you have to decide whether you're going to continue on your career and pursue research, or whether you're going to maybe contribute in other ways by working on keeping everyone safe. So I think these are both valuable roles, and everyone should decide for themselves what they think their role should be, and you can of course go back and forth as well if you wish. So these bandit models with their rewards are probably running a large part of e-commerce. If we thought about these bandit rewards, and if we included some notion of externalities, which often get ignored, like environmental or social impact, maybe all of global commerce could become a little more green, or a little more clean, or a little more ethical. I wonder if you think there's a place for an idea like that. Oh, absolutely. I think that companies who are employing algorithms like this are already thinking about this. I think it's already happening. Great. There's been a lot of discussion about failure modes for very simple rewards, like optimizing for engagement on social media leading to extreme content. I wonder if you have any opinions on how we could avoid these unwanted outcomes from those systems? I'm not an expert on these specific topics, right. But clearly, when there are feedback loops, and this is about feedback loops, right, you design an algorithm that is linked to some consequences, and as a result things change and the algorithm adapts, and things can go really bad. So we should always think about the feedback loops in these systems. I think this is again happening already. I think it's in the interest of the companies as well to take care of all of these risks, right, if they want to survive in the long term, and it's to their benefit not to cause this kind of pretty unfortunate unfolding of events. Can you maybe contrast approaches to RL at DeepMind versus what other institutions or labs are doing? Is there a certain DeepMind way, or perspective, or approach? I think DeepMind is representing, maybe not evenly, but quite a bit of every aspect of how you can approach reinforcement learning and machine learning problems. It's a big company, so I don't think of DeepMind as just an entity that has some very particular angle. Of course, DeepMind, being part of Alphabet, has access to huge compute and resources, and just this fact is going to mean that you will keep seeing papers that heavily use these resources. But I think that's okay, because it's all about exploring the space of possibilities, exploring what's possible, and once you have access to these resources, then you should try to take advantage of them. So can I ask you, what are you focused on these days, personally and work-wise?
I'm trying to focus more on exploration in reinforcement learning. We just finished this book with my colleague Tor Lattimore, and so I'm trying to move a little bit more towards the RL space. What is exploration in reinforcement learning? How to use side information in reinforcement learning? How to deal with misspecification in reinforcement learning? These are the topics I look at currently. So aside from your current work, are there a few specific trends in RL that you find super interesting at this moment? Yeah, I really like the works where people have started to look at simple POMDPs. A POMDP is a partially observable MDP. In these simple models, you're not observing the state directly. So maybe there are a few states, a hundred states or whatnot, but you make observations of the state, and you observe maybe an image, which is a very high-dimensional quantity. And you try to design algorithms that don't break down even though the possible observation space is humongous, right? The underlying state space is small, and you try to discover the underlying regularities and take advantage of this. There have been a number of papers that came out on this, which are really nice. I have one last question for you. DeepMind has been around now for 10 years. Can you help us imagine what reinforcement learning might be like in another 10 years? I have no idea. I hope that it's going to be as popular, even more popular, than today. But we need a lot more new ideas. I guess hierarchical RL, dealing with partial observability, taking actions just to collect information. We need a lot more research. So I found that your name appears as Prince Csaba in Hungarian mythology, meaning a gift from the heavens. I want to say this has been a real gift to myself and our listeners. Thank you so much, Csaba. Thank you. That's our episode for today, folks. Be sure to check talkrl.com for more great episodes.
[ { "end": 12.8, "start": 0, "text": " This is Talk by Rail Podcast. All reinforcement learning, all the time." }, { "end": 20.12, "start": 12.8, "text": " Interviews at Brilliant folks across the world of RL. I'm your host, Rob and Chauhan." }, { "end": 25, "start": 20.12, "text": " Professor Chabot Sushvari is head of the foundation's team at DeepMind. Professor of Computer" }, { "end": 30.48, "start": 25, "text": " Science at the University of Alberta, Canada C-Far AI Chair, Fellow at the Alberta Machine Intelligence" }, { "end": 35.72, "start": 30.48, "text": " Institute, co-author of the book Bandit Algorithms along with Toru Latamore, and author of the" }, { "end": 39.2, "start": 35.72, "text": " book Algorithms for Reinforcement Learning. Thank you Chabot for being here." }, { "end": 40.4, "start": 39.2, "text": " Thank you for having me." }, { "end": 44.760000000000005, "start": 40.4, "text": " You've described your research interest as interactive online learning. Can you tell us when" }, { "end": 48.92, "start": 44.760000000000005, "text": " did you decide to dedicate yourself to this field? And how did you come to that decision?" }, { "end": 58.400000000000006, "start": 48.92, "text": " Yeah, I guess this goes back to my PhD years when I first discovered reinforcement learning." }, { "end": 64.4, "start": 58.400000000000006, "text": " And I've been interested in AI and I saw that reinforcement learning is a very good" }, { "end": 73.6, "start": 64.4, "text": " way to study how to create intelligent agents. So after this, I started working on this" }, { "end": 76.68, "start": 73.6, "text": " and then since then I'd be working on this." }, { "end": 80.36000000000001, "start": 76.68, "text": " Did you say your career turned out as you planned or are you surprised by how it turned" }, { "end": 81.36000000000001, "start": 80.36000000000001, "text": " out?" }, { "end": 88.16000000000001, "start": 81.36000000000001, "text": " Well, I didn't always plan things. So after my PhD I was working for industry for quite" }, { "end": 95.60000000000001, "start": 88.16000000000001, "text": " some time. And after that I came back to academia. And so I cannot say that I was planning" }, { "end": 101.28, "start": 95.60000000000001, "text": " everything that happened to me, but things worked out pretty well, I would say." }, { "end": 105.4, "start": 101.28, "text": " So in your professional life, what accomplishments do you feel has meant the most to you? And has" }, { "end": 109.24000000000001, "start": 105.4, "text": " that idea of what's most important changed over your career?" }, { "end": 118.28, "start": 109.24000000000001, "text": " I guess from the very beginning I was curious about what is possible to achieve as algorithms" }, { "end": 124.36000000000001, "start": 118.28, "text": " and with what sort of algorithms. And so it limits off-performance and like creating" }, { "end": 131.36, "start": 124.36000000000001, "text": " understanding about problems, achieving better understanding of problems. And so it" }, { "end": 138.4, "start": 131.36, "text": " desires that kind of show that how the most important to me and that has always been" }, { "end": 139.72000000000003, "start": 138.4, "text": " a case." }, { "end": 143.8, "start": 139.72000000000003, "text": " Can you tell us a bit about the foundations team at DeepMind? What is the goal of that" }, { "end": 144.8, "start": 143.8, "text": " team?" 
}, { "end": 151.16000000000003, "start": 144.8, "text": " Yeah, so the foundations team at DeepMind has been created about a little bit more than" }, { "end": 160.32000000000002, "start": 151.16000000000003, "text": " two years ago. And it's as the name suggests, the goal of the team is to study the foundations" }, { "end": 168.95999999999998, "start": 160.32, "text": " of AI, machine learning, reinforcement learning, create a better theoretical understanding" }, { "end": 173.88, "start": 168.95999999999998, "text": " of what's possible to achieve in these fields." }, { "end": 178.28, "start": 173.88, "text": " Can you say anything about the difference between doing this type of work in industry and" }, { "end": 180.92, "start": 178.28, "text": " academia since you've seen both sides?" }, { "end": 186.32, "start": 180.92, "text": " I guess industry means a lot of different things. So that I am currently at DeepMind. I" }, { "end": 193.84, "start": 186.32, "text": " think DeepMind is very specializing in industry. It's almost like an ideal academic environment," }, { "end": 201.4, "start": 193.84, "text": " but industry in general doesn't necessarily mean this. So from this perspective, DeepMind" }, { "end": 209.32, "start": 201.4, "text": " is maybe even more academic than an academic environment because you can just like focus" }, { "end": 211, "start": 209.32, "text": " on research." }, { "end": 216.16, "start": 211, "text": " When you're designing a new algorithm or when you're doing the type of research you do," }, { "end": 221.28, "start": 216.16, "text": " what does it look like to do that type of work? What does your day to day look like?" }, { "end": 225.72, "start": 221.28, "text": " What kind of milestones are there in the middle of a project like that?" }, { "end": 232.96, "start": 225.72, "text": " Yeah, so the type of work I do is of theoretical nature. So it's a lot of bouncing back and" }, { "end": 242.28, "start": 232.96, "text": " force between problems, models, algorithms, and trying to figure out what is the next" }, { "end": 248.92000000000002, "start": 242.28, "text": " problem to look at in what model and with what algorithms to approach a given problem" }, { "end": 254, "start": 248.92000000000002, "text": " or even just trying to understand whether there exists a marathon that is able to break" }, { "end": 260.84, "start": 254, "text": " through some barrier. So a lot of bouncing back and forth between these different aspects" }, { "end": 261.92, "start": 260.84, "text": " of the work." }, { "end": 267.16, "start": 261.92, "text": " So you have advised a lot of accomplished researchers for their masters and PhDs. On your" }, { "end": 270.64, "start": 267.16, "text": " homepage, you mentioned what you look for in students. I wonder if you can say anything" }, { "end": 275.32, "start": 270.64, "text": " about what you think the students should look for in an advisor." }, { "end": 284.03999999999996, "start": 275.32, "text": " I guess it's pretty important that advisor should be able to act as a good mentor in addition" }, { "end": 290.8, "start": 284.03999999999996, "text": " to be technically proficient. Things that are maybe sometimes overlooked is whether" }, { "end": 297.52, "start": 290.8, "text": " the advisor is going to have the time, for example, to work directly with the students." }, { "end": 303.59999999999997, "start": 297.52, "text": " I think that that's pretty important. 
Is there some stages of maturity in machine learning" }, { "end": 308.2, "start": 303.59999999999997, "text": " and reinforcement learning that researchers go through and what in your mind, what does" }, { "end": 311.47999999999996, "start": 308.2, "text": " that type of ladder look like or progression?" }, { "end": 317.56, "start": 311.47999999999996, "text": " I guess it's probably not that different than in other fields in that you first have" }, { "end": 323.96, "start": 317.56, "text": " to pick up certain technical skills before you're able to move on to the next level of" }, { "end": 329.56, "start": 323.96, "text": " the ladder, which could be to pick your own problems, to design your own problems, your" }, { "end": 335.52, "start": 329.56, "text": " own challenges. So I guess that must be the same in every field." }, { "end": 339.44, "start": 335.52, "text": " So when we talk about reinforcement learning, there's these different problem settings" }, { "end": 345.84, "start": 339.44, "text": " and subfields that come up bandits and game theory and the full RL problem and control" }, { "end": 351.35999999999996, "start": 345.84, "text": " systems optimization. Are all of these fields brothers and sisters of each other? Are" }, { "end": 355.52000000000004, "start": 351.36, "text": " they all subfields of something or how should we think about these different aspects?" }, { "end": 360.64, "start": 355.52000000000004, "text": " Are they different perspectives? Do we need to understand them all to be good at decision-making" }, { "end": 362.44, "start": 360.64, "text": " under uncertainty?" }, { "end": 368.6, "start": 362.44, "text": " I guess the more you understand the better you would be off to start with that, what's" }, { "end": 375.12, "start": 368.6, "text": " necessary to understand, I don't know. I try to learn about all these different perspectives." }, { "end": 383, "start": 375.12, "text": " But a lot of times what you discover that these perspectives come from a certain time," }, { "end": 391.88, "start": 383, "text": " certain type of problems have been important for people. As times move on, some of the" }, { "end": 399.04, "start": 391.88, "text": " results and achievements of previous results, achievements become less important or less" }, { "end": 404.44, "start": 399.04, "text": " important from the perspective of the type of problems that you're trying to solve." }, { "end": 411.08, "start": 404.44, "text": " Nevertheless, it happens a lot of times that people before us have thought about the same" }, { "end": 416.76, "start": 411.08, "text": " problems, maybe approaches slightly differently, but had very valuable thoughts. So it's" }, { "end": 425.72, "start": 416.76, "text": " really versatile to study all these different viewpoints. And yes, these fields are by and" }, { "end": 429.12, "start": 425.72, "text": " large studying the same problem." }, { "end": 432.76, "start": 429.12, "text": " These settings have been around for many decades. Is it a fair question to ask, like," }, { "end": 437.08, "start": 432.76, "text": " do we have all the fields right now that we need or are there still some missing in" }, { "end": 439.52, "start": 437.08, "text": " this list that don't exist yet?" }, { "end": 448.84, "start": 439.52, "text": " Well, that's really hard to answer. I guess we are creating fields as we go. 
So I expect" }, { "end": 456.92, "start": 448.84, "text": " new fields are going to emerge, but I have not seen this clue about what they could be" }, { "end": 462.44, "start": 456.92, "text": " or how they're going to look like or how they are going to be different than the ones" }, { "end": 463.92, "start": 462.44, "text": " that we currently have." }, { "end": 468.2, "start": 463.92, "text": " So you said that we're creating fields as we go. Can you, could you mention maybe a more" }, { "end": 470.2, "start": 468.2, "text": " recent, a more recent subfield?" }, { "end": 479.15999999999997, "start": 470.2, "text": " I mean, like, if you just like think about the buzzwords of today that haven't been buzzwords" }, { "end": 486.68, "start": 479.15999999999997, "text": " yesterday and then SV programs, these buzzwords become fields on their own. So a buzzword" }, { "end": 494.52, "start": 486.68, "text": " of not a long time ago has been data science, right? I guess maybe it's still popular in" }, { "end": 501.68, "start": 494.52, "text": " certain circles. And your buzzword is deep learning. Are there own fields or not? Like," }, { "end": 511.96000000000004, "start": 501.68, "text": " I don't know. They are or they will be. So this is exactly what I mean. As we are interested" }, { "end": 518.56, "start": 511.96, "text": " in focus shifts, a lot of people start to work on the same topic and then it may become" }, { "end": 520.1999999999999, "start": 518.56, "text": " on its own field." }, { "end": 525.52, "start": 520.1999999999999, "text": " So in your bandit algorithms book, you note into the introduction that bandit problems" }, { "end": 532.88, "start": 525.52, "text": " were introduced by Thompson in back in 1933 in the context of medical trials. So now it's" }, { "end": 538.0799999999999, "start": 532.88, "text": " I guess 87 years later and I've read that the FDA discourages the use of bandits in" }, { "end": 542.24, "start": 538.08, "text": " clinical trials. Do you feel strongly that should change and what do you think would" }, { "end": 543.24, "start": 542.24, "text": " take to change that?" }, { "end": 550.44, "start": 543.24, "text": " I'm pretty sure that things are going to change. That is usually a back end force between" }, { "end": 559.5600000000001, "start": 550.44, "text": " you know, technological pushes and regulation. The regulators are rightly thinking very carefully" }, { "end": 570.64, "start": 559.56, "text": " about the pros and cons of different approaches. And I think what changes is that in biology," }, { "end": 576.8, "start": 570.64, "text": " as well you can see a lot of advances just yesterday. I read an article in a scientific" }, { "end": 583.9599999999999, "start": 576.8, "text": " journal that was talking about that today, it is possible to create medications for a" }, { "end": 590.2800000000001, "start": 583.96, "text": " single patient and people are debating whether we should do that. So when things change so" }, { "end": 597.96, "start": 590.2800000000001, "text": " drastically, I don't see why FDA who previously had an opposing opinion about this particle" }, { "end": 605.5600000000001, "start": 597.96, "text": " topic but don't change its mind about planets. I think that so I haven't read this discouragement" }, { "end": 614.3199999999999, "start": 605.56, "text": " but I as far as I know, there are actually some trials that are using bandit type offs" }, { "end": 615.3199999999999, "start": 614.3199999999999, "text": " algorithms." 
}, { "end": 622.4, "start": 615.3199999999999, "text": " In your book, you mentioned the EXP for algorithm which learns to delegate to a set of experts" }, { "end": 628.1999999999999, "start": 622.4, "text": " to find understand correctly and which of themselves could be bandit algorithms. So I was just" }, { "end": 634.3599999999999, "start": 628.1999999999999, "text": " wondering is is EXP for kind of like the bandit version of hierarchical reinforcement learning" }, { "end": 639.96, "start": 634.36, "text": " and other experts like options in RL? Or is that mapping not very good?" }, { "end": 648.16, "start": 639.96, "text": " Well, the mapping is good to some extent but it's missing a lot of elements. As in bandits," }, { "end": 656.52, "start": 648.16, "text": " you don't have long tributes, there are transitions, there is no reglass back of planning instead" }, { "end": 662.64, "start": 656.52, "text": " of bandits at least. I'm like there is planning for reducing uncertainty but that's all." }, { "end": 674.04, "start": 662.64, "text": " So this framework is meant to study information sharing in a hierarchical context and it's" }, { "end": 681.08, "start": 674.04, "text": " good for that. What hierarchical RL has a lot of other aspects that this framework" }, { "end": 682.08, "start": 681.08, "text": " just cannot have." }, { "end": 689.52, "start": 682.08, "text": " I wonder if automated methods will ever be used to find improved fundamental bandit algorithms." }, { "end": 691.12, "start": 689.52, "text": " Is that a sensible thing?" }, { "end": 694, "start": 691.12, "text": " Oh yes, for sure. Why not?" }, { "end": 700.72, "start": 694, "text": " With my colleagues at Google Brain, we are actually looking at some of these automated" }, { "end": 706.4, "start": 700.72, "text": " methods to construct bandit algorithms. It's like the thing that you need to understand" }, { "end": 715.84, "start": 706.4, "text": " is that these have different goals. So if you have a set of different bandit environments" }, { "end": 721.84, "start": 715.84, "text": " that you can sample from, then that is a lot of sense to specializing to these set of" }, { "end": 729.64, "start": 721.84, "text": " environments. And an automated learning algorithm, perhaps more efficiently than what a human" }, { "end": 736.9200000000001, "start": 729.64, "text": " would be able to do because for a human it may be very obliqued or packed to extract" }, { "end": 741.76, "start": 736.9200000000001, "text": " the knowledge required to specialize the bandit algorithms. So it makes a lot of sense" }, { "end": 744.36, "start": 741.76, "text": " to me to do this." }, { "end": 749.12, "start": 744.36, "text": " And you've said if I compare phrase that machine learning research should reduce emphasis" }, { "end": 753.88, "start": 749.12, "text": " on competitive testing and that the purpose of science is to generate knowledge." }, { "end": 758.76, "start": 753.88, "text": " Do you think of bandit algorithms that you produced more as inventions or more like" }, { "end": 764.48, "start": 758.76, "text": " their discoveries? Like is the science or is it engineering?" }, { "end": 770.2, "start": 764.48, "text": " So this question has two parts. I'm not sure how the two parts have related to each" }, { "end": 771.4, "start": 770.2, "text": " other." 
}, { "end": 778.68, "start": 771.4, "text": " So the first part was, I was reflecting something you said that ML research should reduce emphasis" }, { "end": 782.9599999999999, "start": 778.68, "text": " on competitive testing and that the purpose of science is to generate knowledge. So as" }, { "end": 785.84, "start": 782.9599999999999, "text": " opposed to just optimizing on a leaderboard maybe." }, { "end": 791.4399999999999, "start": 785.84, "text": " Right. Yeah. So I still maintain that. I think that purpose of science is really just" }, { "end": 798.72, "start": 791.4399999999999, "text": " to generate knowledge or not just but like ultimately that's the goal. And it's a different" }, { "end": 804.6800000000001, "start": 798.72, "text": " type of activity. If you care about solving a particular problem then of course you're" }, { "end": 810.0400000000001, "start": 804.6800000000001, "text": " not necessarily interested in understanding what works or what doesn't work and like" }, { "end": 815.32, "start": 810.0400000000001, "text": " and try many different things. And these are complementary approaches. And the second" }, { "end": 823.2, "start": 815.32, "text": " part was, okay, sorry, it's not the best question. But I was I was one I wanted to know if" }, { "end": 828.6, "start": 823.2, "text": " you felt that bandit algorithms are producing these bandit algorithms is more like an" }, { "end": 832.64, "start": 828.6, "text": " engineering invention or more like a scientific discovery." }, { "end": 841.84, "start": 832.64, "text": " Oh, I see. Like I guess it depends, right? So if you use an automated metal tool, discover" }, { "end": 850.2, "start": 841.84, "text": " bandit algorithms than the study of the automated algorithm, then does it work? How did it work?" }, { "end": 857.88, "start": 850.2, "text": " To what extent can it work? Could be studied as a scientific or mathematical question." }, { "end": 865.68, "start": 857.88, "text": " It could it could be approaches a former question. But you can also decide to start with" }, { "end": 871.04, "start": 865.68, "text": " a problem setting and then try to find the best algorithm for that setting. Let it be" }, { "end": 879.6, "start": 871.04, "text": " a bandit algorithm or anything else that would also be a scientific approach. But if you" }, { "end": 887.28, "start": 879.6, "text": " care about a practical problem, then you know, you can just try different things. And as long" }, { "end": 894.88, "start": 887.28, "text": " as you are entering practices sound, meeting that the inference is usually there is some" }, { "end": 899.3199999999999, "start": 894.88, "text": " uncertainty about like, you know, like the application. And so you're you're making" }, { "end": 904.9599999999999, "start": 899.3199999999999, "text": " inferences about what's going to work, what's not going to work. And as long as you have" }, { "end": 910.28, "start": 904.9599999999999, "text": " a sound approach to this, then you can try many different things." }, { "end": 917.28, "start": 910.28, "text": " For your UCT algorithm, I think from a 2006 paper bandit based Monte Carlo planning, is" }, { "end": 923.76, "start": 917.28, "text": " that the heart of AlphaGo, Alpha0 and Mu0, is that right? So some version or maybe I should" }, { "end": 930.68, "start": 923.76, "text": " say an algorithm that was inspired by UCT, that's called PUC, is at the heart. 
So it's a" }, { "end": 935.1999999999999, "start": 930.68, "text": " modification of UCD, which is being used by these algorithms." }, { "end": 941.5600000000001, "start": 935.2, "text": " So did you foresee that type of use when you first came up with that? Definitely. Some kind" }, { "end": 948.08, "start": 941.5600000000001, "text": " of use always on your mind when you're trying to come up with an algorithm. And we were" }, { "end": 954.4000000000001, "start": 948.08, "text": " looking at the game domains and specifically goal. You're always optimistic, right? About" }, { "end": 962.4000000000001, "start": 954.4000000000001, "text": " the future of any of your dimensions, if you want. But I don't think that we were hoping" }, { "end": 970.92, "start": 962.4, "text": " at a time that in about a decade, these algorithms is going to contribute in some way to a major" }, { "end": 977.76, "start": 970.92, "text": " success like the success of AlphaGo. Would you say that this algorithm is timeless? Or would" }, { "end": 984.12, "start": 977.76, "text": " you say that at some point it could maybe be superseded? And I mean the PUCT. Yeah, right." }, { "end": 996.4, "start": 984.12, "text": " I guess it's maybe, well, I don't think anything is timeless. For some time, it can stand" }, { "end": 1006.32, "start": 996.4, "text": " as the default algorithm, I hope, that others need to supersede. And then it fulfills his" }, { "end": 1014.04, "start": 1006.32, "text": " purpose. I think that's a sense in which it can be timeless. But just for a while." }, { "end": 1021.9599999999999, "start": 1014.04, "text": " So it's not timeless. Was it a major effort to develop UCT or like in retrospect when" }, { "end": 1029.1599999999999, "start": 1021.9599999999999, "text": " you think about it? Was it a big project for you or something minor? It was a fun project." }, { "end": 1039, "start": 1029.1599999999999, "text": " It happened when I've been still in Hungary working with a colleague, Levant Kochish, and" }, { "end": 1046.12, "start": 1039, "text": " we were learning about panets. And my colleague, specialties, has been learning in games. And" }, { "end": 1055.44, "start": 1046.12, "text": " he went to a conference on the topic and he came back and told me that, hey, Chava, there" }, { "end": 1061.52, "start": 1055.44, "text": " is always buzz about this Monte Carlo 3-search algorithms, but they are doing it all wrong." }, { "end": 1069.68, "start": 1061.52, "text": " And we do something about this by merging with Spanid Argot Thumps. And it's been quite fun" }, { "end": 1072.44, "start": 1069.68, "text": " to do this." }, { "end": 1077.52, "start": 1072.44, "text": " So I wanted to ask you about the notion of adversary in bandits that comes up often." }, { "end": 1082.92, "start": 1077.52, "text": " If I understand correctly, the adversary in bandits is not typically like a learning agent" }, { "end": 1086.6, "start": 1082.92, "text": " that's designed to minimize our reward in a game theory sense. Is that correct?" }, { "end": 1089.08, "start": 1086.6, "text": " Yeah, that's correct. It's a big adversary." }, { "end": 1095.96, "start": 1089.08, "text": " So how do you contrast the adversarial bandits to a real competitive agent like a multi-agent" }, { "end": 1097.4399999999998, "start": 1095.96, "text": " or a game theory RL sense?" 
}, { "end": 1107, "start": 1097.4399999999998, "text": " Yeah, so the adversary of panets framework comes from adjusting online learning where" }, { "end": 1117.28, "start": 1107, "text": " people study each other setting into a sequence prediction problems. You have a sequence" }, { "end": 1122.56, "start": 1117.28, "text": " that you need to predict that you're not making much assumption about. And you just want" }, { "end": 1129.68, "start": 1122.56, "text": " to lift the statistical assumption and see whether you can define a notion of full learning" }, { "end": 1135.24, "start": 1129.68, "text": " in a meaningful way. And it turns out that you can and then you can utilize this and create" }, { "end": 1137.96, "start": 1135.24, "text": " this adverse bandit setting." }, { "end": 1145.84, "start": 1137.96, "text": " So this is an attempt to understand what is important about the statistical assumptions" }, { "end": 1154.36, "start": 1145.84, "text": " that standard learning theory makes very often, which of these assumptions are essential" }, { "end": 1157.1599999999999, "start": 1154.36, "text": " to what degree to learning." }, { "end": 1161.84, "start": 1157.1599999999999, "text": " Okay, so but if there was a learning competitor that was trying to minimize your reward," }, { "end": 1165.24, "start": 1161.84, "text": " would you say that was also an adversary in this sense?" }, { "end": 1172.3999999999999, "start": 1165.24, "text": " Well, you could say that it would be an adversary, but the algorithms that are debitled for" }, { "end": 1178.92, "start": 1172.4, "text": " this adverse setting are not necessarily good algorithms. Nevertheless, it happens that" }, { "end": 1187.3200000000002, "start": 1178.92, "text": " very often they are maybe with some extra care and extra modifications. Actually, my colleagues" }, { "end": 1196.0800000000002, "start": 1187.3200000000002, "text": " in a better and as far as well have been using this adverse learning algorithms in the" }, { "end": 1201.96, "start": 1196.0800000000002, "text": " context of games to, for example, compute approximate Nashack-Librarian." }, { "end": 1206.72, "start": 1201.96, "text": " Can you tell us more about the special case of nature as an adversary and how that differs?" }, { "end": 1213.72, "start": 1206.72, "text": " So nature as an adversary doesn't care about what I do, right? If I have a true adversary" }, { "end": 1222.1200000000001, "start": 1213.72, "text": " then the adversary is going to watch me and if it notices some regularity, some pattern" }, { "end": 1229.3600000000001, "start": 1222.1200000000001, "text": " in the way I choose my actions, then it will try to take advantage of that. But as if" }, { "end": 1235.9199999999998, "start": 1229.36, "text": " I have to compete with nature, nature doesn't care about me and I can try to take advantage" }, { "end": 1242.6799999999998, "start": 1235.9199999999998, "text": " of nature. So it's kind of like a really dumb adversary, if you wish. It could be a very" }, { "end": 1246.6799999999998, "start": 1242.6799999999998, "text": " challenging to compete with, but it's not reacting to you." }, { "end": 1250.1999999999998, "start": 1246.6799999999998, "text": " When it comes to uncertainty, in the bandit setting and the RL setting, it seems there's" }, { "end": 1257.1599999999999, "start": 1250.1999999999998, "text": " so many different places then uncertainty can arise and we have different types of uncertainty." 
}, { "end": 1262.96, "start": 1257.16, "text": " It seems like many methods just kind of bundle all this up together. How important is it" }, { "end": 1269.52, "start": 1262.96, "text": " for us to distinguish between like, epistemic and allotoric uncertainty in these settings?" }, { "end": 1279.64, "start": 1269.52, "text": " I think that this distinction is pretty important and quite fundamental. You have to reason" }, { "end": 1286.16, "start": 1279.64, "text": " about both types of uncertainty, right? So it's one is concerned with the uncertainty" }, { "end": 1290.64, "start": 1286.16, "text": " of future events, the other is concerned with your uncertainty about like the nature" }, { "end": 1297.3200000000002, "start": 1290.64, "text": " of the environment that you're exposed to. So you have to reason about both levels of" }, { "end": 1303.44, "start": 1297.3200000000002, "text": " uncertainty. And I think that this is pretty well understood now." }, { "end": 1310.8400000000001, "start": 1303.44, "text": " So at Nureps 2019, MTS Khan gave a talk on deep learning with Bayesian principles. Seems" }, { "end": 1316.3999999999999, "start": 1310.84, "text": " like we're just starting to figure out how to combine these two paradigms in a meaningful" }, { "end": 1322.3999999999999, "start": 1316.3999999999999, "text": " way, just in terms of supervised learning. Can you comment on that combination in terms" }, { "end": 1327.3999999999999, "start": 1322.3999999999999, "text": " of RL? Like where are we in reinforcement learning in terms of bringing together the deep" }, { "end": 1329.1999999999998, "start": 1327.3999999999999, "text": " learning side and the Bayesian side?" }, { "end": 1341.6000000000001, "start": 1329.2, "text": " Yeah, that's just a lot to reconcile. So I guess I know more about how the Bayesian side" }, { "end": 1352.4, "start": 1341.6000000000001, "text": " interacts with RL. There are some very interesting ideas there. So Tomson's name came up previously" }, { "end": 1364.1200000000001, "start": 1352.4, "text": " and he was proposing a very simple idea, which is that you should sample. So you maintain" }, { "end": 1369.16, "start": 1364.1200000000001, "text": " a posterior distribution of what a possible environment that you might be in. And whenever" }, { "end": 1375.1200000000001, "start": 1369.16, "text": " it comes to make a decision about what actions to take, what you could do is that you just" }, { "end": 1383.32, "start": 1375.12, "text": " mean you just sample from this posterior aparticular environment. And then you run your planning" }, { "end": 1388.76, "start": 1383.32, "text": " algorithm on this. So this could be like a full RL environment. And then you figure out" }, { "end": 1392.9599999999998, "start": 1388.76, "text": " the body center and you could start to follow that body center and you would follow that" }, { "end": 1399.4799999999998, "start": 1392.9599999999998, "text": " body center. So this led to an algorithm called posterior sample sampling reinforcement" }, { "end": 1407.16, "start": 1399.48, "text": " learning PSRL, which is a very interesting and highly competitive idea, highly competitive" }, { "end": 1416, "start": 1407.16, "text": " algorithms. Sorry. Still there is a lot of things that are unknown. So sampling from" }, { "end": 1426.96, "start": 1416, "text": " a posterior may not be easy. Usually this is pre-dogged with huge, huge computational" }, { "end": 1436.96, "start": 1426.96, "text": " challenges. 
So addressing those challenges is one important topic. Then you can even go" }, { "end": 1445.16, "start": 1436.96, "text": " further and you ask yourself, is it really important that I sample from a posterior distribution?" }, { "end": 1451.28, "start": 1445.16, "text": " Or what type of distribution should I sample from? What is important about these distributions?" }, { "end": 1460.6, "start": 1451.28, "text": " So there are many, many interviewing questions here. And I expect to see a lot more to come" }, { "end": 1467.96, "start": 1460.6, "text": " on this integration from. And of course, at any point in time, if you need functional" }, { "end": 1474.84, "start": 1467.96, "text": " approximation, then you can throw in some neural networks, eat and there and make everything" }, { "end": 1485.24, "start": 1474.84, "text": " more complete in that way and maybe more flexible. So yeah, there indeed, many interesting" }, { "end": 1491.24, "start": 1485.24, "text": " ways you can combine these ideas and we just started to explore them." }, { "end": 1498, "start": 1491.24, "text": " If you had to describe what is missing from RL today to allow it to fulfill the complete" }, { "end": 1503.48, "start": 1498, "text": " vision of what it could potentially become, why is it are eliminated right now? Why is" }, { "end": 1514.24, "start": 1503.48, "text": " it still mostly in the lab? I guess RL deals with a scenario which is more complex than" }, { "end": 1520.08, "start": 1514.24, "text": " the typical scenario where machine learning has been used. So sequential decision making" }, { "end": 1526.32, "start": 1520.08, "text": " other uncertainty. And because of that, deployment is harder. So that's one aspect. Another" }, { "end": 1533.4399999999998, "start": 1526.32, "text": " aspect of the problem is that of course, the space of RL problems is so huge that you" }, { "end": 1539.9199999999998, "start": 1533.4399999999998, "text": " can always find really, really difficult instances in it, which may require, if you" }, { "end": 1551.04, "start": 1539.9199999999998, "text": " care about those specific instances, some special approaches. So we have a number of" }, { "end": 1560.44, "start": 1551.04, "text": " performance limit lower bounds for different RL settings that clearly tell us that without" }, { "end": 1568.1599999999999, "start": 1560.44, "text": " structural assumptions for the assumptions, you can't hope to do very well. So at times," }, { "end": 1574.92, "start": 1568.1599999999999, "text": " it's even unclear to me whether we should blame the current algorithms or we should just" }, { "end": 1581.92, "start": 1574.92, "text": " blame or bad luck that the space of our problems is so large." }, { "end": 1587.5600000000002, "start": 1581.92, "text": " So is that why we keep seeing new algorithms coming out all the time? I guess that could" }, { "end": 1595, "start": 1587.5600000000002, "text": " be one reason. We won't stop right? Just because something is really hard. So people are" }, { "end": 1598.3600000000001, "start": 1595, "text": " inventing new and newer things." }, { "end": 1604.24, "start": 1598.3600000000001, "text": " Is much in RL fundamental and timeless? If we go into the distant future or I was thinking" }, { "end": 1609.08, "start": 1604.24, "text": " in an advanced alien civilization, would they agree with us that these are the basic" }, { "end": 1615.32, "start": 1609.08, "text": " building blocks of decision-making and intelligence? 
Or are some of this maybe accidental?" }, { "end": 1616.92, "start": 1615.32, "text": " Or is there nobody now?" }, { "end": 1625.76, "start": 1616.92, "text": " I guess I don't know about the aliens, but I feel that this is kind of the same as" }, { "end": 1635.24, "start": 1625.76, "text": " in mathematics in general. The closer you stay to simple problems, the higher the chances" }, { "end": 1640.92, "start": 1635.24, "text": " are going to be that you're going to find something fundamental that is borderline" }, { "end": 1648.04, "start": 1640.92, "text": " timeless. So the koshi schwarz inequality hurt what might something really simple and" }, { "end": 1654, "start": 1648.04, "text": " core to many of the things that we do is not going to go away anytime soon." }, { "end": 1661.44, "start": 1654, "text": " So the question is whether, for example, Markovian decision processes, as we know of them today" }, { "end": 1669.68, "start": 1661.44, "text": " are fundamental or not, whether they're going to be viewed as foundational building blocks." }, { "end": 1674.88, "start": 1669.68, "text": " I guess they are still, I would say, pretty simple. And as such, there could be some" }, { "end": 1681.88, "start": 1674.88, "text": " really valuable ideas there in their simplicity. So I guess my hunch is that there is something" }, { "end": 1689.92, "start": 1681.88, "text": " that is core and foundation about these frameworks that we are studying currently." }, { "end": 1696.6000000000001, "start": 1689.92, "text": " I was present at the Neurips 2019 RL social where there was a small group of people that" }, { "end": 1701.8400000000001, "start": 1696.6000000000001, "text": " you were part of and you were congratulating an open AI researcher on their shadow hand" }, { "end": 1708.16, "start": 1701.8400000000001, "text": " and Rubik's cube work, that very impressive project. And I also heard you say that you" }, { "end": 1712.5600000000002, "start": 1708.16, "text": " felt disappointed that they needed to use markers on the hands. I wonder if you can share" }, { "end": 1717.0800000000002, "start": 1712.5600000000002, "text": " a little bit more about that comment with our listeners of why that was important to" }, { "end": 1718.92, "start": 1717.0800000000002, "text": " you, why you felt that way about it?" }, { "end": 1726.92, "start": 1718.92, "text": " Yeah, I guess it is not really specific to that project, my disappointment. It's more" }, { "end": 1735.6000000000001, "start": 1726.92, "text": " about that I was reflecting on that in this problem, there is a perceptual element which" }, { "end": 1741.9599999999998, "start": 1735.6, "text": " is that you need to sense the configuration of the cube and how the hand is holding it" }, { "end": 1751.48, "start": 1741.9599999999998, "text": " and all that. And it's pretty limited perceptual problem compared to if you look at round" }, { "end": 1760.9599999999998, "start": 1751.48, "text": " you like what you see and the perceptual problems that humans have to do that. And yet, I" }, { "end": 1770.44, "start": 1760.96, "text": " guess for practical reasons or whatnot, like these researchers, these great people at Open" }, { "end": 1779.68, "start": 1770.44, "text": " AI were forced, I guess, to instrument the environment, instrument that this perceptual" }, { "end": 1789.3600000000001, "start": 1779.68, "text": " problem didn't need to be dealt with in its complexity. 
Whereas I find it like maybe" }, { "end": 1796.36, "start": 1789.36, "text": " I had expectations that we should be able to deal with it by now. Yeah, so that's my" }, { "end": 1797.6799999999998, "start": 1796.36, "text": " disappointment." }, { "end": 1800.36, "start": 1797.6799999999998, "text": " So it sounds like more comment on the state of the field?" }, { "end": 1802.7199999999998, "start": 1800.36, "text": " Yeah, absolutely. Yeah." }, { "end": 1807.12, "start": 1802.7199999999998, "text": " And clearly bandit algorithms have been applied in practical applications for some time" }, { "end": 1812.32, "start": 1807.12, "text": " and been very successful. It seems maybe that's quite a lot less true for the full RL" }, { "end": 1818.8799999999999, "start": 1812.32, "text": " setting. Yet it seems to have so much promise. Our first guest on this podcast was Natasha" }, { "end": 1825.2800000000002, "start": 1818.88, "text": " Jigs and she spoke about her paper tackling climate change with machine learning and" }, { "end": 1832.44, "start": 1825.2800000000002, "text": " that paper mentions multiple areas where emissions could be improved by applying potentially" }, { "end": 1839.2800000000002, "start": 1832.44, "text": " by applying RL in the future. But this hasn't really come to pass yet. And then we see a paper," }, { "end": 1846.0400000000002, "start": 1839.2800000000002, "text": " a recent paper from Doolock Arnold at all challenges of real world reinforcement learning" }, { "end": 1852.12, "start": 1846.04, "text": " where they try to isolate what are the challenges keeping this out of the out of practice." }, { "end": 1861.24, "start": 1852.12, "text": " I wonder how you feel about where we are in terms of bringing RL into real world applications" }, { "end": 1866.04, "start": 1861.24, "text": " on a large scale. Is it very distant? Is it a distant dream? Is it just around the corner?" }, { "end": 1867.04, "start": 1866.04, "text": " How do you feel about that?" }, { "end": 1876.04, "start": 1867.04, "text": " I guess it's going to be a matter of year process. We'll see more and more of our applications." }, { "end": 1884.68, "start": 1876.04, "text": " I expect that. But as with control, there are certain risks in what, right? And so people" }, { "end": 1893.72, "start": 1884.68, "text": " are conservative and when it comes to applying learning algorithms that would be learning online." }, { "end": 1898.08, "start": 1893.72, "text": " If you're learning offline, then you have a batch of data and learning with a batch of data" }, { "end": 1904.96, "start": 1898.08, "text": " is really complicated. Not only really complicated, it could be just impossible because the data" }, { "end": 1914.08, "start": 1904.96, "text": " may not have the information that you actually need to come up with really good policies." }, { "end": 1921.76, "start": 1914.08, "text": " So Arad is again, read out with all of these challenges when it comes to applications." }, { "end": 1934.56, "start": 1921.76, "text": " So the easiest applications could be when you have some internet, I don't know, various" }, { "end": 1945.12, "start": 1934.56, "text": " systems where everything happens virtually. So to say, and maybe the impact of one, not" }, { "end": 1952.6, "start": 1945.12, "text": " so great decision is not so high, right? 
So I expect more applications to come, but I" }, { "end": 1961.4799999999998, "start": 1952.6, "text": " don't expect that this is going to be happening like as a beanfall or something like that." }, { "end": 1967.8, "start": 1961.4799999999998, "text": " Do you think that model-based RL is critical to getting real-world systems working and" }, { "end": 1977.28, "start": 1967.8, "text": " safe? I don't know about that. It depends on the application, I guess. Model-based RL is" }, { "end": 1984, "start": 1977.28, "text": " a great idea, but it's not without any problems. Yeah, I don't know the answer." }, { "end": 1989.52, "start": 1984, "text": " Do you have any comments about time in model-based RL? Like it's always struck me as restrictive" }, { "end": 1995.52, "start": 1989.52, "text": " that the standard model is a one-step, one-time-step model. And if you have your time scale very small," }, { "end": 2000.44, "start": 1995.52, "text": " then it could be modeling such a small amount of time that makes it hard to model in a longer" }, { "end": 2003.2, "start": 2000.44, "text": " term. I guess people have been looking at the" }, { "end": 2012.16, "start": 2003.2, "text": " modest-step predictions and compressing time or abstracting time in various ways. So" }, { "end": 2019.8799999999999, "start": 2012.16, "text": " Harika RL and various approaches to that, you can easily imagine models and people have" }, { "end": 2027.2800000000002, "start": 2019.88, "text": " been looking at building models that may modest-step predictions at a time. It's not" }, { "end": 2035.6000000000001, "start": 2027.2800000000002, "text": " trivial to do it, right? Because I use stang and policy, like for what is the thing that" }, { "end": 2043.0400000000002, "start": 2035.6000000000001, "text": " you're trying to predict, right? I guess we'll keep trying. I agree that it's an important" }, { "end": 2050.36, "start": 2043.04, "text": " aspect and maybe it's receiving less tension, but maybe it's because it's more complicated" }, { "end": 2057.6, "start": 2050.36, "text": " to come up. So it's more complicated as a problem and as a result, it takes more time" }, { "end": 2061.8, "start": 2057.6, "text": " to come up as the set of right ideas for this study." }, { "end": 2067.24, "start": 2061.8, "text": " I guess we have the prediction on paper from David Silver and Company that seemed to be" }, { "end": 2072.56, "start": 2067.24, "text": " abstracting time away in some sense. I actually didn't follow that entire paper, but I read" }, { "end": 2078.72, "start": 2072.56, "text": " it and tried to follow it. It seems like they overcame the single-time-step issue and" }, { "end": 2082.7599999999998, "start": 2078.72, "text": " found some way to make time abstract, which seems very appealing." }, { "end": 2088, "start": 2082.7599999999998, "text": " I guess there are a bunch of papers even before that paper, people have been talking about" }, { "end": 2090.52, "start": 2088, "text": " compressing time and abstracting time." }, { "end": 2095.96, "start": 2090.52, "text": " Do you have any comments on explainability in RL? Do you feel like explainability is absolutely" }, { "end": 2098.44, "start": 2095.96, "text": " required for practical systems?" }, { "end": 2106.04, "start": 2098.44, "text": " It basically depends on the system. For subsystems, it's going to be required, but for others," }, { "end": 2111.16, "start": 2106.04, "text": " not so much. 
So I think explainability is an important topic, but there will be lots" }, { "end": 2115.04, "start": 2111.16, "text": " of problems where explainability is not strictly required." }, { "end": 2120.64, "start": 2115.04, "text": " And also explainability is, you know, like it's relative to who you are explaining to." }, { "end": 2127.36, "start": 2120.64, "text": " So what does it mean that you explain something? Some decision that was going on." }, { "end": 2133.92, "start": 2127.36, "text": " It means that whoever you're explaining to is going to accept the explanation as the" }, { "end": 2140.6, "start": 2133.92, "text": " explanation. So with humans, when they are explaining things to each other, they take" }, { "end": 2145.56, "start": 2140.6, "text": " into account what the other person knows and so on and so forth. So it's also true" }, { "end": 2152.7200000000003, "start": 2145.56, "text": " that the explanations may be, you know, like high-level and maybe, you know, not super" }, { "end": 2158.16, "start": 2152.72, "text": " precise. It's just like, you know, like oftentimes we tell each other, tell a million" }, { "end": 2166.12, "start": 2158.16, "text": " intuition about this decision that you made. And if the explanation receives matches or" }, { "end": 2172.52, "start": 2166.12, "text": " expectations, then we accept that. So it's a tricky question when humans are involved," }, { "end": 2173.52, "start": 2172.52, "text": " I guess." }, { "end": 2179.7999999999997, "start": 2173.52, "text": " Do you think that safe RL systems have to be explainable? Like is that a required property" }, { "end": 2183.8, "start": 2179.8, "text": " of a safe system or is it maybe sometimes optional?" }, { "end": 2193.2400000000002, "start": 2183.8, "text": " Pretty sure that it's going to be optional a lot of times. But safety is going to be also" }, { "end": 2199.36, "start": 2193.2400000000002, "text": " pretty cool. Whether it's an RL system, I guess for RL systems, it's even more complicated" }, { "end": 2206.76, "start": 2199.36, "text": " than for any other systems. Because you have a sequence of decisions that you make and" }, { "end": 2212.84, "start": 2206.76, "text": " then maybe those lead to a garden pass that you'll never ever enter, except for once" }, { "end": 2221.32, "start": 2212.84, "text": " in lifetime, but that could be highly costly and could have bad consequences. How do you" }, { "end": 2225.5600000000004, "start": 2221.32, "text": " know that that's not going to happen?" }, { "end": 2234.2400000000002, "start": 2225.5600000000004, "text": " It's a very important topic and it's good that a lot of people are looking at it." }, { "end": 2237.24, "start": 2234.24, "text": " Do you think AGI is worth discussing these days?" }, { "end": 2240.24, "start": 2237.24, "text": " Why not?" }, { "end": 2246.4799999999996, "start": 2240.24, "text": " So do you think of AGI as something very abstract that can never be fully attained," }, { "end": 2249.4399999999996, "start": 2246.4799999999996, "text": " like an asymptote or something? Or do you think it's something that can really exist" }, { "end": 2250.4399999999996, "start": 2249.4399999999996, "text": " at some point?" }, { "end": 2258.6, "start": 2250.4399999999996, "text": " I guess discussions are important no matter what at a very high level. So the idea that" }, { "end": 2270.08, "start": 2258.6, "text": " you should increase generality is a very appealing idea. 
Of course, at the same time, we also" }, { "end": 2281.24, "start": 2270.08, "text": " know that generality could come at some price. So the whole idea of that, you want to be as" }, { "end": 2289.4399999999996, "start": 2281.24, "text": " generality as possible and not compromise, performance is already intriguing. I think that what" }, { "end": 2299.9199999999996, "start": 2289.4399999999996, "text": " we see from the history of AGI is that people at times were trying too hard to over-specialize" }, { "end": 2306.8799999999997, "start": 2299.9199999999996, "text": " too early. So it was an instance of premature optimization and that's a mistake that it" }, { "end": 2316.28, "start": 2306.88, "text": " is easy to fall into. As such, it's really interesting to see that these days you have all" }, { "end": 2324.6800000000003, "start": 2316.28, "text": " these learning algorithms that are achieving things that we couldn't imagine achieving" }, { "end": 2331.6400000000003, "start": 2324.6800000000003, "text": " a lot of a long time ago and they're even more generative. So by increasing generality," }, { "end": 2338.72, "start": 2331.64, "text": " you can be better. That's a very intriguing idea. But of course, it's not true in the" }, { "end": 2346.8399999999997, "start": 2338.72, "text": " first case. So you have to find the right way to navigate this space and I find this" }, { "end": 2347.8399999999997, "start": 2346.8399999999997, "text": " quite interesting." }, { "end": 2354.72, "start": 2347.8399999999997, "text": " Can you comment on the role of reinforcement learning on the path to AGI? What is a relationship" }, { "end": 2362, "start": 2354.72, "text": " between these two? Reinforcement learning is just a learning algorithm for a specific situation" }, { "end": 2367.04, "start": 2362, "text": " that you need to make decisions and sequential. And I wonder if there is some uncertainty" }, { "end": 2375.3199999999997, "start": 2367.04, "text": " involved. To me, that's really fundamental to intelligence. And if you want to abstract" }, { "end": 2381.8799999999997, "start": 2375.3199999999997, "text": " away a lot of, it has to be in the intelligence then maybe one simple model is just to say" }, { "end": 2389.6800000000003, "start": 2381.88, "text": " that you want to solve as many reinforcement learning problems with a single algorithm" }, { "end": 2401.04, "start": 2389.6800000000003, "text": " as possible. So it's quite foundational to AGI or whatever you want to call it." }, { "end": 2407.04, "start": 2401.04, "text": " So AI research is very open these days and we openly published in general. Then we saw" }, { "end": 2414, "start": 2407.04, "text": " an opening, I'm limiting the publishing of GPT 2 and they released it over only after" }, { "end": 2419.12, "start": 2414, "text": " some time and they said it risked. Do you feel like publishing in RL should continue" }, { "end": 2424.88, "start": 2419.12, "text": " openly for the foreseeable future? Or do you see a future in which at some point it's" }, { "end": 2425.88, "start": 2424.88, "text": " not safe to do that?" }, { "end": 2432.48, "start": 2425.88, "text": " Right. So it's my personal opinion that we should keep science open and that's our" }, { "end": 2444.52, "start": 2432.48, "text": " safeest bat. And humanity has a history of being able to deal with new information that" }, { "end": 2452.72, "start": 2444.52, "text": " is generated by scientists and not without any problems, but the things work out well." 
}, { "end": 2458.16, "start": 2452.72, "text": " And so I'm quite optimistic. I would take an optimistic stance on this and I would" }, { "end": 2463.8799999999997, "start": 2458.16, "text": " say that we should keep publishing openly. Of course, there are some risks. Then like" }, { "end": 2470.8799999999997, "start": 2463.8799999999997, "text": " in specific cases, you might override this rule, but I think in general, we should aim" }, { "end": 2474.7999999999997, "start": 2470.8799999999997, "text": " for an open publishing model." }, { "end": 2480.92, "start": 2474.7999999999997, "text": " And then you, I once heard you quote Voltaire and maybe Spider-Man's uncle Ben who both" }, { "end": 2486.7999999999997, "start": 2480.92, "text": " said, with great power comes great responsibility. Do you ever worry that RL might be used for" }, { "end": 2492.04, "start": 2486.8, "text": " some unsavory purposes like maybe you know, Starcraft algorithms being used in military" }, { "end": 2496.6000000000004, "start": 2492.04, "text": " or machine learning being used to sway the electorate? Do you think that any of the RL" }, { "end": 2502.0800000000004, "start": 2496.6000000000004, "text": " community can do to help ensure that the progress is good for humanity on the balance?" }, { "end": 2509.5600000000004, "start": 2502.0800000000004, "text": " I think the RL community has responsibility of sharing information about the progress" }, { "end": 2518.84, "start": 2509.56, "text": " it makes and in from the public about the potential ups and downs of older advances that" }, { "end": 2525.16, "start": 2518.84, "text": " are being made at the same time as a researcher, you know, like your everyday work, like you" }, { "end": 2530.7599999999998, "start": 2525.16, "text": " have to decide whether you're going to like continue on your career and then pursue" }, { "end": 2541.32, "start": 2530.76, "text": " research or you're going to maybe contribute in other ways by working on keeping everyone" }, { "end": 2551.0400000000004, "start": 2541.32, "text": " as safe. So I think that we have roles for like these are both a lot of nice roles and" }, { "end": 2556.6400000000003, "start": 2551.0400000000004, "text": " everyone should decide for themselves about what they think they should be and you can" }, { "end": 2561.56, "start": 2556.64, "text": " of course go back and force as well if you wish." }, { "end": 2568.3599999999997, "start": 2561.56, "text": " So these bandit models with the rewards are probably running a large part of e-commerce." }, { "end": 2573.8399999999997, "start": 2568.3599999999997, "text": " If we thought about these bandit rewards, if we included some notion of externalities" }, { "end": 2578.8799999999997, "start": 2573.8399999999997, "text": " which often get ignored like environmental or social impact, maybe you know, all of" }, { "end": 2582.3199999999997, "start": 2578.8799999999997, "text": " global commerce could kind of become a little more green or a little more clean or a little" }, { "end": 2587.4, "start": 2582.32, "text": " more ethical. I wonder if you think that there's a place for an idea like that." }, { "end": 2594.48, "start": 2587.4, "text": " Oh, absolutely. I think that companies who are employing algorithms like this are already" }, { "end": 2600.76, "start": 2594.48, "text": " thinking about this. I think that it's already happening." }, { "end": 2605.6000000000004, "start": 2600.76, "text": " Great. 
There's been a lot of discussion about failure modes for very simple rewards like" }, { "end": 2611.04, "start": 2605.6000000000004, "text": " optimizing for engagement on social media, leading to extreme content. I wonder if you could" }, { "end": 2616.32, "start": 2611.04, "text": " have any opinions on how we could avoid these unwanted outcomes from those systems?" }, { "end": 2626.96, "start": 2616.32, "text": " I'm not an expert on these specific topics, right. But clearly when there are feedback loops" }, { "end": 2633.56, "start": 2626.96, "text": " and this is about feedback loops, right. So you design the algorithm that is linked to" }, { "end": 2642.08, "start": 2633.56, "text": " some consequences and the result, things change and the algorithm tariffs and things can" }, { "end": 2650.48, "start": 2642.08, "text": " go really bad. So we should always think about the feedback loops in the systems. I think" }, { "end": 2659.6, "start": 2650.48, "text": " this is again happening already. I think it's the interest of the companies as far to take" }, { "end": 2665.04, "start": 2659.6, "text": " care of all of these risks, right. So if they want to survive in the long term and it's" }, { "end": 2675.52, "start": 2665.04, "text": " in their benefit of not causing like this pretty unfortunate unfolding of events." }, { "end": 2681.56, "start": 2675.52, "text": " Can you maybe contrast approaches to RL at DeepMind versus maybe what other institutions" }, { "end": 2687.7999999999997, "start": 2681.56, "text": " or labs are doing? Is there like a certain deep-mind way or a perspective or approach?" }, { "end": 2700.76, "start": 2687.8, "text": " I think DeepMind is representing maybe not evenly but quite a bit of every aspect of how" }, { "end": 2707.6000000000004, "start": 2700.76, "text": " you can approach reinforcement learning and machine learning problems. It's a big company." }, { "end": 2719.2, "start": 2707.6, "text": " So I don't think of DeepMind as just an entity that has some very particle angle. Of course," }, { "end": 2728.72, "start": 2719.2, "text": " DeepMind being part of all of the bad has access to huge compute and resources. And just" }, { "end": 2739.2, "start": 2728.72, "text": " this fact is going to mean that you will see keep seeing papers that heavily use these" }, { "end": 2747.68, "start": 2739.2, "text": " resources. But I think that that's okay because it's all about exploring the space of" }, { "end": 2752.9199999999996, "start": 2747.68, "text": " possibilities, exploring what's possible and once you have access to these resources," }, { "end": 2758.92, "start": 2752.92, "text": " then you should try to take advantage of that. So can I ask you what are you focused on" }, { "end": 2766.4, "start": 2758.92, "text": " these days personally and work wide? I'm trying to focus more on exploration in reinforcement" }, { "end": 2773.76, "start": 2766.4, "text": " learning. We just finished with my colleague Thor, multi-more, dis-booking, and so trying" }, { "end": 2780.8, "start": 2773.76, "text": " to move a little bit more towards the RS space. And what is reinforcement learning? How" }, { "end": 2787.96, "start": 2780.8, "text": " to use side information in reinforcement learning? How to do a spot on this specification" }, { "end": 2792.1600000000003, "start": 2787.96, "text": " in reinforcement learning? These are the topics I look at currently." 
}, { "end": 2797.44, "start": 2792.1600000000003, "text": " So aside from your current work, are there a few specific trends in RRL that you find" }, { "end": 2806.04, "start": 2797.44, "text": " super interesting at this moment very recently? Yeah, I really like the works where people" }, { "end": 2815.16, "start": 2806.04, "text": " start to look at simple pumpy-p's. So pumpy-p is the spatial observable MDP. And in these" }, { "end": 2821.16, "start": 2815.16, "text": " simple models, you're not observing the state directly. So maybe there are a few states," }, { "end": 2826.72, "start": 2821.16, "text": " a hundred states or whatnot, but you make observations in the state and then you observe maybe" }, { "end": 2832.16, "start": 2826.72, "text": " an image, which is a very high dimensional quality. And you try to design algorithms" }, { "end": 2840.6, "start": 2832.16, "text": " that don't break down, even though the observation space, the possible observation space is human" }, { "end": 2846.3999999999996, "start": 2840.6, "text": " juice, right? The handling state space is also you try to discover underlying regularities" }, { "end": 2851.92, "start": 2846.3999999999996, "text": " and take advantage of this. So there's been a number of papers that came out on this," }, { "end": 2853.6, "start": 2851.92, "text": " which are really nice." }, { "end": 2857.92, "start": 2853.6, "text": " I have one last question for you. DeepMind has been around now for 10 years. Can you help" }, { "end": 2861.72, "start": 2857.92, "text": " us imagine what reinforcement learning might be like in another 10 years?" }, { "end": 2873.2799999999997, "start": 2861.72, "text": " I have no idea. I hope that it's going to be as popular, even more popular than today." }, { "end": 2880.56, "start": 2873.2799999999997, "text": " But we need a lot more new ideas. I guess, Harika, LaGre, dealing with spatial observability," }, { "end": 2888.3199999999997, "start": 2880.56, "text": " taking those actions to just collect information. We need a lot more research." }, { "end": 2894.2000000000003, "start": 2888.32, "text": " So I found your name appears as Prince Chaba in Hungarian mythology, meaning a gift from" }, { "end": 2898.4, "start": 2894.2000000000003, "text": " the heavens. I want to say this has been a real gift to myself and our listeners. Thank" }, { "end": 2899.4, "start": 2898.4, "text": " you so much, Chaba." }, { "end": 2907.6800000000003, "start": 2899.4, "text": " Hi, thank you." }, { "end": 2918.2000000000003, "start": 2907.6800000000003, "text": " That's our episode for today, folks. Be sure to check talkrl.com for more great" }, { "end": 2919.2, "start": 2918.2, "text": " episodes." } ]
Ben Eysenbach
Ben Eysenbach schools us on human supervision, SORB, DIAYN, techniques for exploration, teaching RL, virtual conferences, and much more!
https://media.transistor…76a.mp3?src=site
This is TalkRL Podcast. All reinforcement learning, all the time. Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan. Ben Eysenbach is a PhD student in the Machine Learning Department at Carnegie Mellon University. He was a resident at Google Brain and studied math and computer science at MIT. He co-founded the ICML Exploration in Reinforcement Learning Workshop. Ben, thanks for joining us today. Of course, I'm glad to be here. So how do you describe your area of focus? I'm interested in a number of areas of reinforcement learning. The question I'm most interested in is the dependence of reinforcement learning on human supervision. When we want to get a robot or a self-driving car or some autonomous agent to perform some task, we need to tell it what to do. We have a number of tools for telling it what we want: we can design some reward function, we can constrain its actions, we can provide it some demonstrations. And all of these types of supervision cost time. Iterating on experiments, trying to modify and tweak our reward function, trying to tweak the observation space, all of these things take a lot of time. So the problem I'm most excited about is figuring out how we can reduce the number of human hours that go into getting our robots and our self-driving cars and other machines to learn the tasks we want them to learn. I'd be really excited to see plots in papers showing not the amount of time it took the robot to learn the task, but the amount of time it took a human to teach the robot to perform the task. So that's probably what I'm most interested in. So that topic doesn't come up that often. I'm trying to think of a paper where I've seen records of how many hours were spent by humans. Is it a metric that you think could be a widespread metric? I hope so. I think it's a hard metric to use sometimes because you have to normalize for different factors. If in industry someone has 10x the compute of some other company, then the amount of researcher time, of human time, needed to get the robots to do the thing will differ. But I do think that, even if we can't actually show that plot in a paper, it's something useful to be aiming for. Would the holy grail for you be RL that requires no human time, or RL that is very judicious with human time? Exactly, yeah. I would love it if I could go into the lab one day, take the robot, show it a couple of demonstrations, or send it a couple of YouTube videos ahead of time, and have it fairly quickly learn the task. Even if that means I show the robot a couple of demonstrations, or give it some pieces of a reward function, lock it in a closet, and then come back a month later, and only then has it learned the task. Because to me, the time that the robot spends learning by itself, locked in the closet, is very cheap. So you're doing your PhD now. I always wondered what it was like as a PhD student in terms of the relationship with your advisor. Can you share with us a bit about how that relationship works? Sure. So I'm co-advised. I have two advisors, Ruslan Salakhutdinov here at CMU and Sergey Levine at UC Berkeley. One thing I really like about my relationship with my advisors is that they provide complementary sets of skills and expertise. If I had to pick one phrase to describe my area of research, I'd say deep reinforcement learning.
A very crude approximation of my advisors is that Ruslan Salakhutdinov does deep learning and Sergey Levine does reinforcement learning, and so the intersection of them is deep reinforcement learning, which is exactly where I lie. Cool. So let's talk about some of your recent papers. Great. So the first one is Search on the Replay Buffer: Bridging Motion Planning and Reinforcement Learning. Can you describe the general idea of this paper? Absolutely. Let me take a historical perspective for a second. The control community and the robotics community and the reinforcement learning community have wanted robots to perform long-horizon tasks for a really long time. Classically, there have been sort of two ways of getting robots to solve problems. The first set of techniques are symbolic approaches or planning-based approaches. They say we have some certain number of discrete states, and we're going to try to find a path that goes from our current state to some target state. Algorithms for doing this include Dijkstra's algorithm, A*, probabilistic roadmaps, and things of that nature. The other school, learning methods, connectionist methods, views states in a more continuous fashion and says we'll just throw function approximators at the problem and hope they solve it. From this we get algorithms like DQN, like REINFORCE, and most of modern reinforcement learning. The goal of this project was to try to figure out how we can take a number of those tools from the planning community and make them applicable to deep reinforcement learning algorithms. And the way we went about doing that was by noting that the planning algorithms reason over long horizons very, very well. Graph search is a remarkably competitive and fast algorithm. But on the flip side, these planning approaches don't scale to high-dimensional observations very well. And this manifests itself in a couple of ways. For one, given say a couple of images, it's hard to determine what action you could take to get from one image to another. And the second is, it's often hard to figure out how far apart two images are, how far apart two observations are. For example, maybe you have a robot in your kitchen and it's looking at the bottom part of a cupboard, and maybe you have an image of your bathroom, and there is a similar-looking cupboard. Now you know, as a human, that your kitchen and your bathroom are fairly far away from each other. But the robot is just looking at an image of a door. It has to have a rather nuanced sense of perception to be able to detect that this is the kitchen cupboard versus this is the bathroom cupboard. And this is exactly where the tools of function approximation can be helpful. So I love this combination of classical planning on one side and RL on the other side. In a previous life, I actually built A* algorithms for transportation. How did the idea for this paper come about, and what was the journey like from the initial conception to what we saw published? That's a really good question. I guess we started exploring a couple of different areas. One area I'm interested in is this general notion of multitask reinforcement learning, or goal-conditioned RL. The idea of goal-conditioned RL is that you have some agent that takes as input not only an observation of the world, but some notion of a goal, or an image of what the world should look like. And the robot has to take actions to convert the world from its current state into the goal state.
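To make the combination Ben is describing concrete, here is a minimal, hypothetical sketch (not the paper's code): treat states stored in the replay buffer as nodes of a graph, connect pairs of states whose learned distance is small, and run Dijkstra's algorithm over that graph to get a sequence of waypoints for the goal-conditioned policy to chase one at a time. The `learned_distance` callable and the `max_edge_cost` threshold are assumptions for illustration; in SoRB the distance estimate comes from a goal-conditioned value function, which the conversation returns to below.

```python
import heapq
import itertools

def dijkstra(num_nodes, edges, source, target):
    """Plain Dijkstra over a weighted digraph; edges is {u: [(v, cost), ...]}."""
    dist = [float("inf")] * num_nodes
    prev = {}
    dist[source] = 0.0
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in edges.get(u, []):
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))
    if dist[target] == float("inf"):
        return []          # goal not reachable through the buffer
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return path[::-1]

def plan_waypoints(buffer_states, start, goal, learned_distance, max_edge_cost=15.0):
    """Build a graph over replay-buffer states and search it for waypoints.

    learned_distance(s1, s2) is assumed to estimate how many steps the
    low-level goal-conditioned policy needs to get from s1 to s2
    (hypothetical helper). Edges longer than max_edge_cost are dropped so
    the plan only contains short hops the local policy can reliably execute.
    """
    states = list(buffer_states) + [start, goal]
    start_id, goal_id = len(states) - 2, len(states) - 1
    edges = {u: [] for u in range(len(states))}
    for u, v in itertools.permutations(range(len(states)), 2):
        cost = learned_distance(states[u], states[v])
        if cost <= max_edge_cost:
            edges[u].append((v, cost))
    node_path = dijkstra(len(states), edges, start_id, goal_id)
    return [states[i] for i in node_path]
```

A real implementation would subsample the buffer and batch or cache the pairwise distance queries, since the graph construction above is quadratic in the number of stored states.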
So for example, in a navigation setting, this might involve walking from one room to another room. Or in a manipulation setting, this might mean stacking a whole bunch of blocks on top of each other such that they look like the desired tower. This idea of goal-conditioned reinforcement learning has been around for a really long time. Over the past five years or so, there's been a lot of progress in making more robust goal-conditioned RL algorithms. And so one of the starting points for this project was thinking about what we could build if we assume goal-conditioned RL works, if we have this tool in our toolbox. And one of the things that came to mind is that if we have a procedure that can navigate from one state to some other state, maybe we can somehow use this tool many, many times to solve complex tasks. Okay, so that was the setting where it seemed like something like this would be appropriate. How did you get started? And were your first ideas about how to do this the same as what you ended up with, or were there changes and iterations along the way? Good question. So this is one of those sort of weird projects where the first thing we tried basically worked, but then things got harder as we scaled. When we first had this idea kicking around, I implemented it on a very simple 2D navigation task. On this task, learning how to reach, how to navigate from one state to a nearby state, worked very well, and estimating the distance between two states also worked very well. And this meant that within about a week or so we could show fairly large improvements over current state-of-the-art methods on this simple task. The challenge, however, came in scaling up to more complex environments. I'll highlight one of the challenges there, and that was this so-called wormhole problem. Imagine that you have two states that look visually similar but are actually very far apart. For example, if we return to the kitchen cabinet versus bathroom cabinet that we talked about earlier, if the robot thinks that the kitchen cabinet and bathroom cabinet are actually close together, then when it's doing planning it may assume that when it's in the kitchen, it can magically teleport to the bathroom, because the bathroom cabinet and the kitchen cabinet look similar. And this sort of wormhole problem is disastrous for planning, because the plans that are produced don't make any sense. And while this wormhole problem wasn't an issue for some of the simple experiments we ran, when we started looking at more complicated environments with image-based observations, this problem did pop up. And then what about the reverse? I guess any error in the distance metric is going to cause problems; I'm just thinking of this from a classical planning point of view. So does it ever happen that it thinks two states are really far apart when they're actually close together, or was that problem not much of an issue? Yeah, so we definitely had problems with both overestimating distances and underestimating distances. However, for the purpose of planning, underestimation is a much, much bigger problem. And to see this, we can think about the number of paths from one state to another state. In most reasonable tasks, there are an exponential number of paths from one state to another state. And there might even be an exponential number of relatively short paths.
And so if we spuriously predict that two states are further away than they actually are, this may mean that we ignore some of those paths. But there are still many, many other short paths that we could consider to get from one state to another state. Because you have so many paths, as opposed to a sparse network, in which case an overestimated distance might be a problem; in this very dense network, it just goes around it. Exactly. For example, you could imagine navigating from the southeast corner of New York City to the northwest corner of New York City. Someone might go and tell you, oh, there's a traffic jam on this block, and there are still many ways that you can navigate from one side of Manhattan to the other side of Manhattan. But if someone tells you, oh, if you go to this intersection, there's a helicopter that will fly you to your destination, and you go to the intersection and the helicopter isn't there, then you've made a big error. In the paper, you talk about risk awareness. Can you say anything about how that works? Is that related to the ensemble? Yes. So the risk awareness was introduced to help deal with this underestimating and overestimating distance problem. In particular, we were worried that the agent would underestimate certain distances. And so what we said is that rather than learning a single estimate of distances, we're going to learn many estimates of all of the distances. So we learned an ensemble of distance functions. And then to get our final estimate for the distance between two states, we asked each member of the ensemble how far away these two states are, and we took the most pessimistic estimate from the ensemble. So if any member of the ensemble thought that these two states were far away, then we, for the purpose of planning, pretended that these two states were far away. Okay. And were all the members of the ensemble trained on the same data, or was it like a bootstrap thing? Yeah. So the proper way to do this would have been to train each on a different subset of the data. In practice, when bootstraps are used in much of deep learning today, the randomization in the initial weights is sufficient to lead to different predictions later on. And so we trained them on the same data, but with different weight initializations. One thing that we did find fairly important there was that we couldn't share weights between members of the ensemble. It's really tempting to say, we're training these five neural networks, why don't we just have a single encoder shared between all of them and then separate heads for each member of the ensemble? It would have been much faster from a computational perspective. But the problem with this is that the ensemble members are supposed to give us independent views of how far away two states are, and when they share an encoder, when they share weights, their predictions become correlated. And we found that this significantly degraded performance. So can you share with us what size of ensemble we're talking about? Was it huge or just a few networks? We used three members in our ensemble. We included an ablation at the end of the paper where we actually studied how important the size of the ensemble was. We found that two was much better than one, and three was only very slightly better than two. So we stopped at three. Cool. And then your paper mentions distributional RL. I think that's like C51, is that right? Exactly. Yes.
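A tiny sketch of the pessimistic ensemble Ben just described, assuming PyTorch; the network sizes and names are made up for illustration. Each member is an entirely separate network with no shared encoder, matching the point about keeping predictions decorrelated; each would be trained on the same transitions with its own random initialization, and planning uses the largest, most pessimistic, predicted distance.

```python
import torch
import torch.nn as nn

class DistanceNet(nn.Module):
    """One ensemble member: maps a (state, goal) pair to a predicted step count."""
    def __init__(self, state_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, goal):
        return self.net(torch.cat([state, goal], dim=-1)).squeeze(-1)

class PessimisticDistanceEnsemble(nn.Module):
    """Independent members (no shared encoder). Planning uses the most
    pessimistic (largest) predicted distance, so a single optimistic member
    cannot create a 'wormhole' edge in the graph."""
    def __init__(self, state_dim, num_members=3):
        super().__init__()
        self.members = nn.ModuleList(
            [DistanceNet(state_dim) for _ in range(num_members)]
        )

    def forward(self, state, goal):
        preds = torch.stack([m(state, goal) for m in self.members], dim=0)
        return preds.max(dim=0).values   # pessimistic aggregate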
Can you help us understand how you used distributional RL? I think you did something specific here in terms of how you use the atoms. Yeah. So the distributional RL we used in our paper was actually a special case of C51. Let's start with what distributional RL is. Distributional RL says that for a given state-action pair, rather than predicting your expected future return, we're instead going to predict a distribution over your future returns. That is, we're going to predict some probability that you get two reward, some probability that you get five reward, some probability that you get ten reward, and so on. In normal distributional RL, there's a rather complicated way to do bootstrapping with this entire distribution, which involves squashing the distribution by the discount factor and then splitting up and discretizing the distribution for the Bellman update. In our paper, we were using a rather special reward function that was minus one at every time step, and we were using a discount factor of one. Both of these choices meant that distributional RL was significantly simpler to implement for our setting. I don't want to go into too many of the details, but it basically just corresponded to a shift of the predictions. And in our experiments, we found that distributional RL was much more stable than using standard reinforcement learning. Can you say anything about exploration in this work? Were you using standard exploration, or was that a challenge? Yeah, so we mostly punted on the exploration problem. We used environments where the initial state distribution was uniform over all states. This both helped make learning easier, and it also meant that, for the purpose of planning, the states we were planning over were uniformly distributed. One direction that I'm pretty excited about is figuring out how you could couple these sorts of planning algorithms with smarter exploration techniques. One work in this direction was done by Nikolay Savinov about a year ago, and I think that's an interesting step in this direction. So you mentioned some interesting angles for future work in the paper. Do you plan to follow up on any of these? Can you share that with us? Yes. Perhaps the biggest bottleneck in SORB was actually learning the local policy. Goal-conditioned RL, despite being much better today than it was 10 years ago, is still in its infancy. And figuring out how we can make goal-conditioned RL algorithms that work even over very small scales is a pretty hard problem. But what Search on the Replay Buffer shows is that you actually don't need more than that. If you can get goal-conditioned RL working on a length scale of 10 or 20 steps, then planning will be able to solve most of the rest of the problem. And so the future direction I'm perhaps most excited about is just figuring out better ways of getting goal-conditioned RL to work. One of the reasons why goal-conditioned RL is an exciting problem to work on is not only that it has the potential to be used in combination with planning, but also that when we're in the multitask setting, there is much more supervision that we can leverage. A failure to reach one goal or to solve one task might be a success for some other task. This is the intuition in the Hindsight Experience Replay paper, and it's also been explored in a number of other papers.
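A rough numpy illustration of the simplification described above, under my reading of it rather than the paper's exact implementation: with a reward of minus one per step and a discount of one, the categorical value distribution can be read directly as a distribution over "number of steps to the goal", and the Bellman backup for a transition that has not yet reached the goal just shifts the successor's predicted distribution over by one bin. The bin count, the clamping at the last bin, and the handling of the goal-reached case are my assumptions for illustration.

```python
import numpy as np

NUM_BINS = 16  # bin i ~ "the goal is i steps away"; the last bin is a catch-all

def distance_target(next_state_probs: np.ndarray, goal_reached: bool) -> np.ndarray:
    """Categorical target for the distance distribution of (state, action).

    next_state_probs: predicted distance distribution at the next state
                      (e.g. from a target network), shape (NUM_BINS,).
    goal_reached:     whether this transition ended at the goal.
    """
    target = np.zeros(NUM_BINS)
    if goal_reached:
        target[1] = 1.0                      # the goal was one step away
        return target
    # One more step was taken, so shift the successor's distribution by one bin.
    target[1:] = next_state_probs[:-1]
    target[-1] += next_state_probs[-1]       # mass clamps at the last bin
    return target

def expected_distance(probs: np.ndarray) -> float:
    """Expected step count, usable as an edge cost during graph search."""
    return float(np.dot(probs, np.arange(NUM_BINS)))
```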
And I'm currently working on better ways of using that insight to learn goal-conditioned RL algorithms. Cool. Okay. Do you think that our brains could be doing something like this? Like, in the SORB paper, you're showing that two very different methods can be combined to solve this problem. When we approach a problem like that, do you have any comments on that? Are we using a different method to think about states that are nearby versus long-horizon kind of planning? Or is that just totally unknown at this point? I like that question. I am definitely not an expert in human or animal neuroscience, so everything I say is speculation. There's definitely been some work that says that we have two modes of thinking. Famously, Daniel Kahneman claims that we have System 1 and System 2 thinking. Other folks refer to our insect brains and our monkey brains to differentiate high-level reasoning versus low-level reasoning. And I definitely could see that something of that nature could be going on inside the brain. I think it's also possible that while we have some sort of hierarchical structure in our brain, it's not discrete. It's not like we have the planning level and the reactive level; rather, it might be continuous. I think an exciting area of research would be figuring out how we can design control algorithms that are continuous with respect to their level in some hierarchy. That does sound amazing. Okay, I can't wait to hear what you come up with in that department. Let's move to another paper of yours, Diversity is All You Need: Learning Diverse Skills Without a Reward Function, and that was at ICLR 2019. So I remember noticing this paper back when it first came out, I think it was on Professor Levine's Twitter feed, and I remember looking at the half-cheetah acrobatics and just finding that so entertaining. I couldn't wait to read the paper after looking at that. So I was excited to meet you and to realize that I could talk to the author of that paper; it's kind of a great way to close the loop. Can you share with our listeners what is the main idea of this Diversity is All You Need paper? We want robots to be able to do all sorts of things in the environments where they operate. Often it's challenging to figure out what the meaningful behaviors in an environment are. To draw an analogy to principal component analysis, what is the basis of behaviors that exist in some environment? And the motivation for doing this is that if we could somehow look at a robot interacting in an environment and say there are, say, ten principal behaviors that it can do, ten motion primitives, to draw an analogy to some of the older robotics literature, we could then assemble these ingredients, these primitive behaviors, into more complex behaviors to solve new tasks much more rapidly. To look at it from a slightly different angle, while the number of parameters in our neural networks that are used for control might be on the order of millions, the number of useful or meaningful motion primitives might be on the order of dozens. And so if we can learn to control by composing these motion primitives, or these skills, we should be able to learn significantly faster than if we have to directly tune the parameters of some neural network.
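Ben's point about composing a small set of primitives instead of directly tuning millions of parameters is easy to picture as code. Below is a hypothetical sketch of a meta-controller that picks one frozen, task-agnostic skill at a time and would itself be trained on the task reward; `env`, `meta_policy`, and `skill_policy` are placeholders rather than anything from the paper's code, and the 100-step skill horizon mirrors the hierarchical experiment Ben describes a little later.

```python
def run_hierarchical_episode(env, meta_policy, skill_policy,
                             skill_horizon=100, max_steps=1000):
    """Roll out one episode where a high-level policy picks which frozen,
    task-agnostic skill to execute for `skill_horizon` steps at a time.

    meta_policy(obs) -> skill index (the only part trained on the task reward)
    skill_policy(obs, skill) -> low-level action from the frozen skill
    """
    obs = env.reset()
    total_reward, step = 0.0, 0
    while step < max_steps:
        skill = meta_policy(obs)              # high-level decision
        for _ in range(skill_horizon):        # commit to the skill for a while
            action = skill_policy(obs, skill)
            obs, reward, done, _ = env.step(action)
            total_reward += reward            # task reward, used to train meta_policy
            step += 1
            if done or step >= max_steps:
                return total_reward
    return total_reward
```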
So the motivation for this project was: given some environment, return some set of motion primitives, some set of skills, that can be assembled to solve arbitrary tasks more quickly. How is diversity measured and defined in this context? I'll explain it with a game, sort of like a team-building game for humans. The setup is that you have two players on the same team, and they stand at opposite ends of a football field, and the goal is for them to communicate a message from one end of the football field to the other. And the only thing they can do is jump up and down, wave their hands around, or try to spell out letters with their arms. So let's say that one player is trying to send a message to the other player. We measure diversity as how well those players can communicate a message across the field, where the first player is trying their best to act out whatever message they're trying to send, and the other player is trying their best to discern: what exactly is my teammate trying to spell out? What exactly are they trying to convey? So that's the game that they're playing. It's like a communication game, but, if I'm understanding this correctly, is there something that encourages them to have many signals, as opposed to just figuring out one signal and communicating that effectively? Yeah, in this game, the player that is sending messages is given different messages to send. They might be given hundreds of different messages that they have to send. And if the player that's sending the message always does exactly the same thing, then they'll only be able to convey one message. For example, if the only thing that this player does is jump up and down, then the other player will have no idea what message they're sending. But if they have many different ways of jumping up and down, or they know how to do some cartwheels or spell out a couple of letters with their arms, spell out YMCA, for example, then there are many more messages that they could send across the field. The messages that are being sent in terms of the waving of the arms, are these the states that the agents are visiting? Yeah, so to make the analogy, or sort of complete the analogy, in the reinforcement learning setting, we have some agent interacting with an environment. And the agent is going to play this game with itself. So the agent is going to take some actions to visit some states. And then internally, there's a part of the agent we called the discriminator that looks at these states and tries to infer what message I was trying to send. And so the behaviors are the sequence of actions that the agent takes, and the messages are some sort of codes. Because the robot is just talking to itself, these codes don't have to correspond to English sentences. And in our setting, we actually just used random one-hot vectors as these messages. Okay, and this sounds kind of related to hierarchical RL and options, but it's different, right? These behaviors are not options, right? Or are they? So the behaviors that we learned are similar in the sense that they can be composed hierarchically to solve more complex tasks. We included one experiment at the end of our paper where we showed how, after learning the set of skills, we could learn some high-level policy that every, say, 100 time steps told us which skill to use to try to maximize some reward.
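As a rough sketch of what that last experiment might look like in code, here is an illustrative high-level controller that picks one of several frozen skills every so many steps. The environment interface, the skill policies, and the toy example are placeholder assumptions, not the paper's implementation.

```python
def run_hierarchical_episode(env, skill_policies, high_level_policy,
                             skill_horizon=100, max_steps=1000):
    """Roll out an episode where a high-level policy picks one of several
    frozen skills every `skill_horizon` steps (illustrative sketch only).

    env: object with reset() -> obs and step(action) -> (obs, reward, done).
    skill_policies: list of callables obs -> action (the learned skills).
    high_level_policy: callable obs -> int index into skill_policies.
    """
    obs = env.reset()
    total_reward, steps, done = 0.0, 0, False
    while not done and steps < max_steps:
        skill_id = high_level_policy(obs)          # choose a skill
        for _ in range(skill_horizon):             # execute it for a while
            action = skill_policies[skill_id](obs)
            obs, reward, done = env.step(action)
            total_reward += reward
            steps += 1
            if done or steps >= max_steps:
                break
    return total_reward

if __name__ == "__main__":
    class ToyEnv:  # 1-D chain: reward for moving right, episode ends at x >= 10
        def reset(self): self.x = 0; return self.x
        def step(self, a): self.x += a; return self.x, float(a), self.x >= 10
    skills = [lambda obs: +1, lambda obs: -1]      # "go right", "go left"
    meta = lambda obs: 0                           # always pick "go right"
    print(run_hierarchical_episode(ToyEnv(), skills, meta, skill_horizon=5))
```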
One of the key differences, though, is that in the options framework, or in most other hierarchical RL, the low-level primitive skills are learned to maximize the reward function. And this is useful in some settings because it provides a reward signal for learning these low-level skills. But it's also challenging, because it means that the low-level skills you learn on one task cannot necessarily be used to solve some other task. And so the key difference is that in Diversity is All You Need, the skills that we were learning were not learned with respect to a single reward function; rather, they were task-agnostic. And that meant that you could use them to solve many downstream tasks. Of course, the downside is that they attempted to cover all possible behaviors, and if you really only cared about one type of behavior, then many of the skills you learned would be useless. For example, if you only care about running forward, then learning how to jump up and down and learning how to do backflips aren't particularly useful. We have a section of the paper where we show how you can bias the skills toward certain types of behaviors, so there is some way around this if you really want to do that. So you mentioned how this agent is playing a cooperative game. Most of the time, when we encounter game-type setups in RL or machine learning, at least from my perspective, it seems like they're mostly adversarial games. Can you say anything about the difference between adversarial and cooperative games, and why a cooperative game makes sense in this particular instance? And I wonder, is there some intuition we can get about when we want to play a cooperative game versus an adversarial one? That's a good question. And to be honest, I don't have a fantastic answer. One thing to note is that cooperative games are usually stable, in the sense that once you've found a solution, the two players of your game will tend to stay at that solution. So for example, in the communication game that we were talking about before, once the two players have worked out some strategy for communicating messages across the football field, neither has an incentive to deviate from that strategy. Whereas in contrast, if you look at an adversarial game, the sort that's played in generative adversarial networks, or to take a simpler example, rock paper scissors, players do have an incentive to deviate from their current strategy. For example, if we're playing rock paper scissors and you play rock, I'm going to play paper, and then you're going to play scissors, and then I'm going to play rock, and we'll keep cycling like this. So one of the benefits of dealing with cooperative games rather than competitive games is that the optimization problem might be easier. Cool, thanks for helping us understand that. The paper says that the discriminator works on the level of states and not trajectories. But there was also a line that said our method is not limited to learning skills that visit entirely disjoint sets of states. Can you help us understand how that works? So the trajectories don't have to be completely distinct, but they hone in on those few distinct states, is that what's happening? Exactly, yeah. So you could imagine that maybe the agent always starts in the same state, and maybe there isn't that much you can do at the beginning of an episode.
For example, maybe the agent always starts in some narrow hallway, and the only thing it can do in this narrow hallway is walk to the end of the hallway. And so while the agent is in this hallway, it's really hard to tell what message it's trying to send, or what skill is being executed. But let's say at the end of the hallway there's a big open field, and there are many things the agent can do once it gets to this field. It can jump up and down, it can do backflips, it can play a game of soccer. The point we were trying to make in that part of the paper was that it's okay if there are certain states where you can't tell what skill the agent is doing, as long as there are other states, say states in the future, where you can tell which skill the agent is using. Now, we also had that point about discriminating on the level of states rather than trajectories. The point there was mostly an implementation detail: we said that when we're going to infer what skill we're using, we're going to make a prediction for every state-action pair, and then we're going to ensemble all of these predictions together. An alternative might be something like using an LSTM to read in every state and try to make a prediction. But it's much harder to train recurrent models than these sorts of bag-of-states models, so we ended up using something simpler there. That said, there has been follow-up work, I believe the VALOR paper, that looks at learning skills conditioned on entire trajectories. But how did it get to that position? There could have been a set of different states one step before. So maybe something like a one-step state transition? Yeah, absolutely. And I think it's fun to think about the family of algorithms that condition on some aspect of the behavior, and from that aspect of the behavior try to infer what skill is being used. In our paper, this aspect of the behavior was just a bag of states: looking at each state and predicting what skill is being used. You could look at an entire trajectory of states and try to infer what skill is being used, and this would allow you to discern the Michael Jordan dunk from, say, the Kobe Bryant dunk. I don't know, maybe they have different dunk techniques. You also could look at, say, the initial state and the current state, and what this would allow you to do is see how the skill changes the state. You can imagine this might be useful if you're hoping to chain many skills together in sequence. And you could imagine discriminating skills based on many other aspects of trajectories. You could look at cumulants, you could look at actions, you could look at running averages or other functions. So I think it's a fairly exciting area to try to sit down and enumerate all these different ways of discriminating skills and think about when each would be most appropriate. Sounds like the hallmark of a seminal paper: there are just so many directions to go from here, which is awesome. Any follow-up work planned in this direction from you, Ben? One thing that I've been looking into a little bit recently is figuring out how we can more intelligently use these sorts of discriminability ideas in service of maximizing a reward. That is, how could we use something like DIAYN in the inner loop of a current state-of-the-art RL algorithm? Could we somehow use this for better exploration or for better policy improvement? This is still very much in its early stages, but I think it has some promise.
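To tie together the discriminator and per-state prediction details above, here is a rough sketch of a DIAYN-style pseudo-reward. The discriminator here is a stand-in, the uniform skill prior and function names are assumptions, and the entropy regularization used in the actual paper is omitted; this is not the authors' code.

```python
import numpy as np

def diayn_intrinsic_rewards(states, skill_id, discriminator_log_probs, num_skills):
    """Sketch of a DIAYN-style pseudo-reward (illustrative only).

    states: array of shape (T, state_dim), states visited while running skill `skill_id`.
    discriminator_log_probs: callable states -> (T, num_skills) array of log q(z | s).
    The per-state reward is log q(z|s) - log p(z) with a uniform prior p(z) = 1/num_skills,
    so the agent is rewarded for visiting states from which the skill is easy to infer.
    """
    log_q = discriminator_log_probs(states)      # (T, num_skills)
    log_p_z = -np.log(num_skills)                # uniform prior over skills
    return log_q[:, skill_id] - log_p_z          # (T,) per-state rewards

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, num_skills = 5, 4
    states = rng.normal(size=(T, 3))

    def fake_discriminator(s):
        # Stand-in discriminator: random but properly normalized log-probabilities.
        logits = rng.normal(size=(len(s), num_skills))
        return logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    print(diayn_intrinsic_rewards(states, skill_id=2,
                                  discriminator_log_probs=fake_discriminator,
                                  num_skills=num_skills))
```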
So I got to meet you in Vancouver at your NeurIPS 2019 poster for the SoRB paper. And I remember thinking that you should be a teacher. Yeah. Because I thought you explained it so well. You explained as if you actually wanted us to understand, not just like you were checking the box, like, yeah, the explanation has been delivered. You actually wanted us to understand, which I love, and it comes through so clearly. So thank you for that. Well, thank you. So it makes total sense that you were head TA for the Deep RL course at CMU. What's that like? Can you share a little bit about the course? I'm looking at the syllabus and it looks like it covers a lot. Yeah. So CMU has two reinforcement learning courses. That was the graduate offering, and there's an undergraduate offering of a very similar course in the spring. And so as the head TA, I helped design part of the syllabus, I gave one or two of the lectures, and I organized most of the assignments and grading. We had a team of fantastic TAs that helped with many of the day-to-day logistics of running office hours and helping with the grading. Do you have any advice for people who are trying to teach this stuff? I think one thing that's a bit challenging about many seminar courses, or many courses that survey a number of recent algorithms, is that when we write research papers, we write them to highlight novelty. That is, we highlight all of the ways in which our work is different from prior work. But for the purpose of teaching, it makes a lot more sense to emphasize the similarities. And so one of the things that I tried to do in recitations and lectures and assignments was to highlight that many of the algorithms that we learn in this course are built from the same building blocks. And I think that this mindset helps cope with the enormous number of papers that are published on RL almost every day. If we can discern what the underlying ingredients of each paper are, at least for me, that makes it much easier to understand what the core contribution of the paper is. That is, instead of saying this is a really complex paper, I can see that it's this other paper plus two or three tweaks. Sort of mathematically, if the number of ingredients from which we build algorithms grows linearly, the number of possible algorithms, the number of possible ways of combining these ingredients in new ways, grows exponentially. And so being able to infer the ingredients from which algorithms are built seems like a fairly powerful way of understanding them. That makes a lot of sense. That's kind of related to one of the reasons why I wanted to do this podcast: I want to understand RL in more and more depth, and I was finding that resources that connect the dots between the different subfields and the different papers and the different perspectives were really hard to find. It just seemed like there was so much that went unsaid if you only looked at written material or lectures. Absolutely. And I'm not saying that I'm an expert on this at all. But I do think that it's helpful to figure out how we connect all these dots, because most of the dots are often closer than we think. So you co-founded the ICML Exploration in Reinforcement Learning workshop. Can you say a bit about that? How did you come to co-found that workshop? Yeah, so that was with Surya Bhupatiraju. We started that when we were both part of the Google Brain residency. I guess that was early 2018.
And the motivation for doing it was simply that there's a fair amount of work on exploration in RL, but it is often fairly disjoint. We were hoping to gather together a whole bunch of the folks working on exploration to have a conversation and exchange ideas, to figure out how we move the field forward. One of the primary aims of the workshop was to figure out how we even measure success. What is the right metric for exploration? And it was fun to see, over the two years that we had the workshop, what different metrics various people proposed. So do you feel like we are closer to having that figured out now? Closer, but still a long way off from a solution. Well, it's not the simplest problem, I guess. So from DeepMind we had bsuite, which came out a while back, and I guess part of that was about exploration. Absolutely, yeah. One of the things I liked about bsuite is that they propose a number of different metrics. And I think that sort of highlights that it is often unclear exactly what we want from exploration. And so maybe it does make sense to not be optimizing for a single metric, but for a set of, say, eight metrics. Do you have any comments on general approaches to exploration in RL these days? Yeah. So I think there are probably three or four different broad classes of techniques, and in any technique there are definitely areas for improvement. So one main class is adding noise, adding noise in one of many ways. Epsilon-greedy just adds noise to the action. In algorithms like DDPG, we add noise that's correlated across time to the actions. There were two papers, in I think 2015 maybe, noisy nets and parameter space noise, that add noise not to the actions but to the parameters of the actor. And so I think this is one class of methods, and it's fun to think about where else we might add noise or how we could tune the noise automatically. There was one paper a year or two ago, by Obtek Dupte, that looked at automatically tuning the noise added for exploration. A second class of techniques looks at trying to capture uncertainty. These often work by trying to figure out where our policy or Q-function is most uncertain, and then rewarding the policy for going to states where this uncertainty is high. And then maybe a third set of approaches are those that learn some density model of where the agent has already been. These include not only count-based methods, but also methods that use a VAE or a normalizing flow or something similar to model the distribution over states and actions that the agent has visited before. And then we can use this density to form some sort of exploration bonus, either by directly rewarding the agent for going to unseen states or by looking at how the density changes as the agent visits new states. And then I guess a fourth type of exploration method are those based on posterior sampling. There, rather than trying to learn a single policy or a single value function, you might learn a distribution over policies or a distribution over value functions. Things like bootstrapped DQN are the typical example of these sorts of exploration methods. And while all these exploration methods seem kind of disjoint, some of them might in fact be the same. One area that would be fun to explore would be figuring out the connections between each of these four methods, or these four classes of methods. Cool, thanks for laying that out.
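As a tiny, concrete example of the count- and density-based class just described, here is a sketch of a count-based exploration bonus for a small discrete state space. The beta over square-root-of-count form is a common choice in the literature; the class name and interface are my own, not from any specific paper mentioned here.

```python
from collections import defaultdict
import math

class CountBonus:
    """Count-based exploration bonus sketch for small, discrete state spaces.

    Adds beta / sqrt(N(s)) to the environment reward, so rarely visited states
    look more attractive. For large or continuous state spaces you would replace
    the table with a learned density model (VAE, normalizing flow, ...) or
    pseudo-counts, as discussed above.
    """
    def __init__(self, beta=0.1):
        self.beta = beta
        self.counts = defaultdict(int)

    def reward(self, state, env_reward):
        self.counts[state] += 1
        bonus = self.beta / math.sqrt(self.counts[state])
        return env_reward + bonus

if __name__ == "__main__":
    bonus = CountBonus(beta=1.0)
    for s in ["A", "A", "A", "B"]:
        print(s, round(bonus.reward(s, env_reward=0.0), 3))
    # "A" gets a shrinking bonus (1.0, 0.707, 0.577); the novel "B" gets 1.0 again.
```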
So it sounds like we're still looking for the holy grail, the grand unified theory of exploration in RL. Yeah, one thing to note is that optimal exploration is well defined but is completely intractable for most problems that we care about. Do you mean that in a posterior sampling sense? I guess I mean that posterior sampling is an approximation to what's known as the Bayes-optimal exploration strategy. So if we want to maximize cumulative return, there is some way to do it optimally: given an MDP, there is some optimal exploration strategy. But actually computing this exploration strategy is extremely, extremely hard. It's hard because Bayesian inference is hard. Exact Bayesian inference is hard. Is that why it's hard? It's hard because it requires reasoning about all possible belief states, and the number of belief states grows exponentially with the number of states in your MDP. I see. So your belief-space MDP is much larger than your actual MDP. Is that what you're saying? Yes. Like a lot. Way, way, way, way bigger. That makes sense. Okay, thanks so much for laying that out for us. Do you have any tips for us on keeping up with the deluge of papers? There are just so many new papers and it's hard; everyone's excited about them all. I don't have the perfect solution, but I can tell you what I do. I have a giant spreadsheet of papers, and whenever anyone recommends a paper or I see an interesting paper referenced somewhere, I add it to the list. And then every day I pop one or two papers off the list and read them, and I sample the papers uniformly at random. So it ends up being a mix of very new papers, very old papers, and papers in between. And then I guess I hope that if a paper is important enough, it will eventually get popped off the list. But yeah, there are way, way, way too many papers to read all of them. So random search is powerful, folks. Yep. Are there researchers you really admire and look up to? Yeah, I think that there are folks both on the theory side and the application side that do some pretty cool work. On the application side, anyone who gets any sort of reinforcement learning to work in the real world is amazing. There are a couple of folks that have done some reinforcement learning for healthcare, folks like Finale Doshi-Velez and Emma Brunskill. Emma Brunskill has also done some pretty neat work on reinforcement learning for education, that is, figuring out what assignments to give students so they can learn most quickly. I guess there's also been some work on doing reinforcement learning for optimizing batteries, and I think that was done by Stefano Ermon and one of his students, Aditya Grover. I think that was also very cool. But then on the theory side, I think a lot of folks have done some pretty neat work, including Brian Ziebart, who has done some nice work on maximum entropy reinforcement learning. Tom Chow has done some very nice work on just pushing regular RL algorithms forward. And of course, I look up to both my advisors too. Do you have any advice for people who look up to you? One thing I'd recommend is not being afraid to ask for help. Very often, folks want to be helpful, but they don't know how to. And if you can tell someone, can you recommend three papers I can read to learn about reinforcement learning? Or, what algorithm did you use for implementing that paper? Or, can you send me the code you used for that environment? Very often, people will be happy to say yes.
And so just asking for help, I think, is one of the most useful skills. I'm sure there are other things too, but that's the first thing that comes to mind. So besides the things that you mentioned already in our chat, are there papers or trends in RL generally that you think are really interesting more recently? One trend that I'm pretty excited about is using VR setups to collect human demonstrations. There have been maybe a half dozen papers over the past year or two. One that comes to mind is Corey Lynch's learning-from-play-data paper, and I think there are a couple of others as well. The reason why I think this is exciting is that using VR and motion capture to provide demonstrations seems a lot easier than, say, designing reward functions. That is, we can provide many more bits per second of human interaction. And I suspect this trend will continue over the next couple of years, which means that very soon we'll have fairly large datasets of human demonstrations, maybe for robotic manipulation, for self-driving, or for some other sorts of navigation tasks. And then the question will be how do we design algorithms that can effectively learn from these large sets of unlabeled human demonstrations? This data is interesting because it's not random data, but it's also often not labeled with the human's intentions. And so figuring out the right way to merge inverse reinforcement learning, that is, inferring what the intention was, with reinforcement learning, that is, trying to maximize whatever reward the user was intending, might be sort of cool. This intersection of reinforcement learning and inverse reinforcement learning might provide a way forward to handle all this motion capture data. That sounds really cool. This interview is happening in mid-March 2020, and of course we're all facing COVID and we're hearing about conferences being canceled or moved online. Apparently ICLR is going to be a virtual conference. What do you think about these virtual conferences? I think it's an opportunity to figure out how to have conferences when people are remote. Realistically, I expect the number of machine learning conferences to grow over the next couple of years. At the same time, increasing concerns about climate change and increasing demands on people's time are probably going to make it harder to travel to all these conferences. Figuring out how we make conferences still feel exciting and engaging, how we still have the spontaneous running into friends and collaborators and the conversations in the hallways, is definitely going to be a challenge. But it is also an opportunity, to sort of be forced to solve this problem, and I think that will probably improve conferences even after COVID is gone. Yeah, I totally agree. So at NeurIPS 2019 I just couldn't believe how packed it was, and this is after they turned down, you know, a huge chunk of the people who wanted to be there. And that's aside from considering emissions; the demand for being there is just so huge. On the other hand, the online tools that I've seen so far, I mean, as much as I like SlidesLive, I definitely wouldn't want to be relegated to that experience. So I'm trying to imagine what a rich interactive experience could be like. For me, I guess, some people at NeurIPS were like, hey, you know, the poster sessions are crazy.
Some people are even saying we should have fewer poster sessions and more talks. And I was like, that's crazy, because the poster session is where I met you and could hear in depth about your work, and to me that's the most beneficial part, even though it seems like the most challenging part to scale up. Absolutely, yeah. I don't have a solution. But I think forcing 5,000 or 10,000 or more ML folks to think hard about this problem over the next six months will result in an enormous number of human hours thought about this problem. And I'm fairly confident some folks will think of some clever solutions. Ben Eisenbach, I can't wait to hear what you come up with next, and thanks so much for joining us here today. Thanks again, Ben. Absolutely. Thank you for having me, Robin. That's our episode for today, folks. Be sure to check talkrl.com for more great episodes.
[ { "end": 12.8, "start": 0, "text": " This is TalkAreal Podcast. All reinforcement learning all the time." }, { "end": 22.8, "start": 12.8, "text": " Interviews with brilliant folks across the world of RL. I'm your host, Rob and Chauhan." }, { "end": 28.240000000000002, "start": 22.8, "text": " Ben Eisenbach is a PhD student in the Machine Learning Department at Carnegie Mellon University." }, { "end": 33.6, "start": 28.24, "text": " He was a resident at Google Brain and studied math and computer science at MIT." }, { "end": 40, "start": 33.6, "text": " He co-founded the ICML Exploration in Reinforcement Learning Workshop. Ben, thanks for joining us today." }, { "end": 41.76, "start": 40, "text": " Of course, I'm glad to be here." }, { "end": 44.08, "start": 41.76, "text": " So how do you describe your area of focus?" }, { "end": 48.92, "start": 44.08, "text": " I'm interested in a number of areas of reinforcement learning." }, { "end": 54.12, "start": 48.92, "text": " The question I'm most interested about is the dependence of reinforcement learning on human" }, { "end": 61.16, "start": 54.12, "text": " supervision. So when we want to get a robot or a self-relevant car or some autonomous agent" }, { "end": 65.24, "start": 61.16, "text": " to perform some task, we need to tell it what to do." }, { "end": 68.2, "start": 65.24, "text": " We have a number of tools for telling it what we want to do." }, { "end": 72.67999999999999, "start": 68.2, "text": " We can design some reward function. We can constrain its actions." }, { "end": 75.88, "start": 72.67999999999999, "text": " We can provide it some demonstration." }, { "end": 79.64, "start": 75.88, "text": " And all of these types of supervision cost time." }, { "end": 84.52, "start": 79.64, "text": " iterating on experiments trying to modify and tweak our reward function, trying to keen key" }, { "end": 86.12, "start": 84.52, "text": " observation space." }, { "end": 87.88, "start": 86.12, "text": " All of these things take a lot of time." }, { "end": 92.28, "start": 87.88, "text": " As the problem, and most excited about is figuring out how we can reduce the number of human" }, { "end": 98.84, "start": 92.28, "text": " hours that go into getting our robots and getting our self-relevant cars and other machines" }, { "end": 101.96000000000001, "start": 98.84, "text": " to learn tasks that we want them to learn." }, { "end": 105.16, "start": 101.96000000000001, "text": " And be really excited to see plots and papers." }, { "end": 110.11999999999999, "start": 105.16, "text": " You're not a amount of time that took the robot to learn the task, but a amount of time" }, { "end": 113.39999999999999, "start": 110.11999999999999, "text": " that took a human to teach the robot to perform the task." }, { "end": 115.64, "start": 114.2, "text": " So that's probably what I'm interested in." }, { "end": 118.36, "start": 116.28, "text": " So that topic doesn't come up that often." }, { "end": 122.92, "start": 118.36, "text": " I'm trying to think of a paper where I've seen records of how many hours were spent" }, { "end": 128.51999999999998, "start": 122.92, "text": " by humans. Is it a metric that you think could be a widespread metric?" }, { "end": 132.76, "start": 128.51999999999998, "text": " I hope so. I think it's a hard metric to use sometimes because you have to normalize" }, { "end": 137.95999999999998, "start": 132.76, "text": " for different factors. 
If an industry of someone has 10XS compute as some other company," }, { "end": 141.48, "start": 137.95999999999998, "text": " then maybe you need to spend more research your time, human time," }, { "end": 143.16, "start": 141.48, "text": " to get the robots to do the thing." }, { "end": 147.23999999999998, "start": 143.16, "text": " But I do think that it's even if we can't actually show that plot and paper is" }, { "end": 148.84, "start": 147.23999999999998, "text": " something useful to be aiming for." }, { "end": 155.32, "start": 148.84, "text": " Would the Holy Grail for you be RL that requires no human time, or just very," }, { "end": 158.35999999999999, "start": 156.6, "text": " is very judicious with human time?" }, { "end": 160.6, "start": 158.35999999999999, "text": " Exactly. Yeah. I would love it." }, { "end": 166.84, "start": 160.6, "text": " I could go into the lab one day, take the robot, show it a couple of demonstrations, or send it" }, { "end": 171.4, "start": 166.84, "text": " a couple of YouTube videos ahead of time, and have a field very quickly learn your tasks." }, { "end": 177.88, "start": 172.04, "text": " Even if that means I show the robot a couple of demonstrations, or give it some pieces of reward" }, { "end": 182.51999999999998, "start": 177.88, "text": " function, lock it in a closet, and then come back a month later, and only then it's learned the" }, { "end": 187.72, "start": 182.51999999999998, "text": " task. Because to me, the time that the robot spends learning by itself locked in the closet" }, { "end": 193.32, "start": 187.72, "text": " is very cheap. So you're doing your PhD. Now I always wondered what it was like as a PhD student" }, { "end": 198.44, "start": 193.32, "text": " in terms of the relationship with your advisor. Can you tell us share with us a bit about how" }, { "end": 204.68, "start": 198.44, "text": " that relationship works? Sure. So I'm co-advised. I have two advisors, Russell and Solikutinov here at CMU," }, { "end": 209.96, "start": 204.68, "text": " and Sergey Leffen at UC Berkeley. One thing I really like about my relic and keep with my advisors" }, { "end": 215.32, "start": 209.96, "text": " is that they provide complimentary sets of skills and expertise. If I just pick one word to" }, { "end": 219.88, "start": 215.32, "text": " describe my area of research, I'd say deep reinforcement learning. A very crude approximation of" }, { "end": 225.72, "start": 219.88, "text": " my advisors is Russell Kutinov does deep learning, and Sergey Leffen does reinforcement learning," }, { "end": 230.68, "start": 225.72, "text": " and so together the intersection of them is deep reinforcement learning, because exactly where I lie." }, { "end": 236.04, "start": 230.68, "text": " Cool. So let's talk about some of your recent papers. Great. So the first one is search on the" }, { "end": 240.6, "start": 236.04, "text": " replay buffer, bridging motion planning, and reinforcement learning. Can you describe the general" }, { "end": 246.6, "start": 240.6, "text": " idea of this paper? Absolutely. And we take a historical perspective for a second. The control" }, { "end": 250.6, "start": 246.6, "text": " community and the robotics community and the reinforcement learning community have won robots to" }, { "end": 256.04, "start": 250.6, "text": " perform long horizon tasks for a really long time. Classically there have been sort of two ways of" }, { "end": 263.24, "start": 256.04, "text": " getting robots to solve problems. 
The first step techniques are symbolic approaches or planning" }, { "end": 268.6, "start": 263.24, "text": " based approaches. They say we have some certain number of states discrete, and we're going to try" }, { "end": 275.48, "start": 268.6, "text": " to find a path that goes from our current state to some target state. And outcomes for doing this" }, { "end": 283.32000000000005, "start": 275.48, "text": " include Dijkstra's algorithm, A-Star, probabilistic road maps, and things of that nature." }, { "end": 287.64000000000004, "start": 283.32000000000005, "text": " The other school of learning methods, connect cleanness methods, take a more can" }, { "end": 294.92, "start": 288.36, "text": " view states in a more continuous fashion, and say we'll just throw function approximators" }, { "end": 300.68, "start": 294.92, "text": " at the problem and hope they solve that. So from this we get algorithms like DQN, like" }, { "end": 305.88, "start": 300.68, "text": " reinforce, and most of modern reinforcement learning. The goal of this project was to try to figure" }, { "end": 310.36, "start": 305.88, "text": " how it can take a number of those tools from the planning community and make them applicable" }, { "end": 317.48, "start": 310.36, "text": " to deep reinforcement learning algorithms. And so the way we went about doing that was by noting that" }, { "end": 324.6, "start": 317.48, "text": " the reinforcement planning algorithms reasoned over long horizons very, very well. Graphs, work is a" }, { "end": 329.64000000000004, "start": 324.6, "text": " remarkably competitive and fast algorithm. But on the flip side, these planning approaches" }, { "end": 335.16, "start": 329.64000000000004, "text": " don't scale to high-dimensional observations very well. And this manifests itself in a couple" }, { "end": 343.08000000000004, "start": 335.16, "text": " ways. For one, given say a couple images, it's hard to determine what action you could take to" }, { "end": 350.44, "start": 343.08, "text": " another image. And the second is, it's often hard to figure out whether how far apart two images" }, { "end": 356.12, "start": 350.44, "text": " are, how far apart two observations are. For example, maybe you have a robot in your kitchen and" }, { "end": 363.15999999999997, "start": 356.12, "text": " it's looking at the bottom part of a cupboard, and maybe you have an image of your bathroom," }, { "end": 368.36, "start": 363.15999999999997, "text": " and there is a similar looking cupboard. Now you know, as a human, that your kitchen and your" }, { "end": 374.12, "start": 368.36, "text": " bathroom are fairly far away from each other. But for the robot, it's just looking at an image of" }, { "end": 380.84000000000003, "start": 374.12, "text": " a door. It has to have a rather nuanced sense of perception to be able to detect this is the" }, { "end": 385.32, "start": 380.84000000000003, "text": " kitchen cupboard versus this is the bathroom cupboard. And this is exactly where the tools of" }, { "end": 392.04, "start": 385.32, "text": " function or approximation can be helpful. So I love this combination of classical planning on one" }, { "end": 397.8, "start": 392.04, "text": " side and RL on the other side. In a previous life, I actually did built a star algorithms for" }, { "end": 404.68, "start": 397.8, "text": " transportation. 
How did this idea for this paper come about and what was kind of the journey" }, { "end": 411.32, "start": 404.68, "text": " like from the initial conception to what we saw published? That's a really good question." }, { "end": 419.64, "start": 411.96000000000004, "text": " I guess we started exploring a couple different areas. So one area I interested in is this general" }, { "end": 427.56, "start": 419.64, "text": " notion of multitask reinforcement learning or goal-conditioned RL. And the idea of goal-conditioned" }, { "end": 433.48, "start": 427.56, "text": " RL is that you have some agent that takes is input not only in observation of the world," }, { "end": 440.44, "start": 434.04, "text": " but some notion of a goal or an image of what the world should look like. And the robot has to" }, { "end": 446.52, "start": 440.44, "text": " take actions to convert the world from its current state into the state into the goal state." }, { "end": 452.84000000000003, "start": 447.16, "text": " So for example, in an navigation example, this might involve walking from one room to another room." }, { "end": 458.59999999999997, "start": 452.84, "text": " Or in a manipulation example, this might mean stacking a whole bunch of blocks on top of each" }, { "end": 464.67999999999995, "start": 458.59999999999997, "text": " other, such that they look like the desired tower. So this idea of goal-conditioned reinforcement" }, { "end": 469.96, "start": 464.67999999999995, "text": " learning has been around for a really long time. Over the past, it was five years or so," }, { "end": 474.2, "start": 469.96, "text": " there's been a lot of progress in making more robust goal-conditioned RL algorithms." }, { "end": 480.84, "start": 474.2, "text": " And so one of the starting points for this project was thinking about if we assume goal-conditioned" }, { "end": 487.88, "start": 480.84, "text": " RL works. If we have this tool in our toolbox, what can we build? And one of the things that came" }, { "end": 494.59999999999997, "start": 487.88, "text": " to mind is if we have procedure that can navigate from one state to some other state, maybe we can" }, { "end": 501.08, "start": 494.59999999999997, "text": " somehow use this tool many, many times to solve complex tasks. Okay, so that was the kind of the" }, { "end": 505.32, "start": 501.08, "text": " setting where it seemed like this, something like this would be appropriate. How did you get started?" }, { "end": 510.59999999999997, "start": 505.32, "text": " And did you your first ideas about how to do this? Was it the same as what you ended up with?" }, { "end": 516.76, "start": 510.59999999999997, "text": " Or was there some changes in iterations along the way? Good question. So this is one of those" }, { "end": 521.08, "start": 516.76, "text": " sort of weird projects where the first thing we tried basically worked, but then things got" }, { "end": 529.64, "start": 521.08, "text": " harder as we scaled. So when we first had this idea kicking around, I implemented it on a very" }, { "end": 537.64, "start": 529.64, "text": " simple 2D navigation task. On this task, learning how to reach, how to navigate from one state to" }, { "end": 543.4, "start": 537.64, "text": " nearby state worked very well, and estimating the distance between two states also worked very well." 
}, { "end": 551.08, "start": 544.36, "text": " And this meant that within about a week or so we could show fairly large improvements over" }, { "end": 555.72, "start": 551.08, "text": " current state-of-the-art methods on the simple task. The challenge, however, came in scaling" }, { "end": 560.76, "start": 555.72, "text": " up to more complex environments. I'll highlight one of the challenges there, and that was this" }, { "end": 569.1600000000001, "start": 560.76, "text": " so-called wormhole problem. Imagine that you have two states that look visually similar," }, { "end": 576.36, "start": 569.1600000000001, "text": " but are actually very far away. For example, if we return to the kitchen cabinet versus bathroom" }, { "end": 582.0400000000001, "start": 576.36, "text": " cabinet that we talked about earlier, if the robot thinks that the kitchen cabinet and bathroom" }, { "end": 590.28, "start": 582.04, "text": " cabinet are actually close together, then when it's doing planning, it may assume that when it's" }, { "end": 596.4399999999999, "start": 590.28, "text": " in the kitchen, it can magically teleport to the bathroom because the bathroom cabinet and the" }, { "end": 604.28, "start": 596.4399999999999, "text": " kitchen cabinet look similar. And this sort of wormhole problem is disastrous for planning," }, { "end": 609.88, "start": 604.28, "text": " because the plans that are produced don't make any sense. And while this wormhole problem wasn't" }, { "end": 615.32, "start": 609.88, "text": " a problem for some of the simple experiments we ran, when we started looking at more complicated" }, { "end": 618.84, "start": 615.32, "text": " environments with image-based observations, this problem did pop up." }, { "end": 625.16, "start": 619.64, "text": " And then what about the reverse? I guess any error in the distance metric is going to cause problems" }, { "end": 629.96, "start": 625.16, "text": " for I'm just thinking of this from a classical planning point of view. So if does it ever happen" }, { "end": 634.52, "start": 629.96, "text": " that it thinks that two states are really far apart when they're actually close together or" }, { "end": 640.84, "start": 634.52, "text": " is that was that problem not much of an issue? Yeah, so we definitely had problems with both overestimating" }, { "end": 647.48, "start": 640.84, "text": " distances and underestimating distances. However, for the purpose of planning, underestimation is" }, { "end": 654.52, "start": 647.48, "text": " a much much bigger problem. And to see this, we can think about the number of paths from one state" }, { "end": 661.48, "start": 654.52, "text": " to another state. In most reasonable tasks, there are an exponential number of paths from one state" }, { "end": 666.04, "start": 661.48, "text": " to another state. And there might be even an exponential number of relatively short paths." }, { "end": 673.88, "start": 667.32, "text": " And so if we spurously predict that two states are further away than they actually are," }, { "end": 681.08, "start": 674.76, "text": " this may mean that we ignore some of those paths. But there's still many, many other short paths" }, { "end": 685.08, "start": 681.08, "text": " that we could consider to get from one state to another state. 
Because you have so many paths as" }, { "end": 690.6, "start": 685.08, "text": " opposed to a sparse network, in which case an overestimation distance might be a problem, but in" }, { "end": 695.4, "start": 690.6, "text": " this very dense network, it just goes around it. Exactly. For example, you could imagine navigating from" }, { "end": 703, "start": 696.12, "text": " the southeast corner of New York City to the northwest corner of New York City. And someone might" }, { "end": 708.6800000000001, "start": 703, "text": " go and tell you, oh, there's a traffic jam on this block. And there's still many ways that you" }, { "end": 713.24, "start": 708.6800000000001, "text": " can navigate from one side of this man of Manhattan to the other side of Manhattan. But if someone" }, { "end": 720.36, "start": 713.24, "text": " tells you, oh, if you go to this intersection, there's a helicopter that will fly you to your destination." }, { "end": 726.12, "start": 721.08, "text": " And do you go to the intersection, the helicopter isn't there, then you've made a big error." }, { "end": 732.44, "start": 726.12, "text": " In the paper, you talk about risk awareness. Can you say anything about how that works? Is that" }, { "end": 738.2, "start": 732.44, "text": " is that related to the ensemble? Yes. So the risk awareness was introduced to help deal with" }, { "end": 744.6800000000001, "start": 738.2, "text": " this underestimating overestimating distance problem. So in particular, we were worried that the" }, { "end": 752.36, "start": 746.0400000000001, "text": " agent would underestimate certain distances. And so what we said is that rather than learning" }, { "end": 759.1600000000001, "start": 752.36, "text": " a single estimate of distances, we're going to learn many estimates of all of the distances." }, { "end": 768.4399999999999, "start": 759.16, "text": " So we learned some ensemble distance functions. And then to get our final estimate for the distance" }, { "end": 774.28, "start": 768.4399999999999, "text": " between two states, we asked each member of the ensemble how far away these two states are." }, { "end": 782.6, "start": 775.4, "text": " And then we took the most pessimistic estimate from the ensemble. So if any member of the ensemble" }, { "end": 789.08, "start": 782.6, "text": " thought that these two states were far away, then we, for the purpose of planning, pretended that" }, { "end": 794.36, "start": 789.08, "text": " these two states were far away. Okay. And were they all trained, they were all trained, all the members" }, { "end": 800.0400000000001, "start": 794.36, "text": " of the ensemble were trained on the same data, or was it like a bootstrap thing? Yeah. So the" }, { "end": 804.12, "start": 800.0400000000001, "text": " proper way to do this would have been to train each kind of different subset of the data." }, { "end": 809.88, "start": 804.84, "text": " In practice, when bootstraps are used in most, a bunch of deep learning today," }, { "end": 818.04, "start": 809.88, "text": " the randomization and their initial weights is efficient to lead to different predictions later on." }, { "end": 821.8, "start": 818.76, "text": " And so we trained them on the same data, but with different weight initialization." }, { "end": 827.88, "start": 822.4399999999999, "text": " One thing that we did find fairly important there was that we couldn't share weights" }, { "end": 834.2, "start": 827.88, "text": " between members of the ensemble. 
So it's really tempting to say, we're training these five" }, { "end": 840.84, "start": 834.2, "text": " neural networks. Why don't we just have a single encoder carried between all of them and then" }, { "end": 845.5600000000001, "start": 840.84, "text": " separate heads for each member of the ensemble? It would have been much faster from a computational" }, { "end": 851.1600000000001, "start": 845.5600000000001, "text": " perspective. But the problem with this is that the ensembles are supposed to give us independent" }, { "end": 857.24, "start": 851.1600000000001, "text": " views of how far away two states are. And when they share an encoder, when they hear weights," }, { "end": 862.84, "start": 857.24, "text": " their predictions now become correlated. And we found that this significantly degraded performance." }, { "end": 869.1600000000001, "start": 862.84, "text": " So can you share with us like what size ensembles are we talking about? Were they huge or just a" }, { "end": 875.1600000000001, "start": 869.1600000000001, "text": " few networks? We used three members in our ensemble. We included an oblition at the end of the" }, { "end": 881, "start": 875.1600000000001, "text": " paper where we actually studied how important the size the ensemble was. And we found that" }, { "end": 888.76, "start": 882.12, "text": " two was much better than one. And three was only very slightly better than two. As we stopped" }, { "end": 895.24, "start": 888.76, "text": " it three. Cool. And then your paper mentions distribution RL. I think that's like C51, is that right?" }, { "end": 902.2, "start": 895.24, "text": " Exactly. Yes. Can you help us understand how you used the distribution RL? I think you did" }, { "end": 910.84, "start": 902.2, "text": " something specific here in terms of how you use the atoms. Yeah. So the distribution RL we used" }, { "end": 919.48, "start": 910.84, "text": " in our paper was actually a special case of C51. And so as you might recall, our first" }, { "end": 925.1600000000001, "start": 919.48, "text": " let's start with what distribution RL is. Distribution RL says that for a given state action pair," }, { "end": 932.36, "start": 925.96, "text": " rather than predicting your expected return, your expected feature return, we're instead going to" }, { "end": 938.52, "start": 932.36, "text": " predict a distribution over your feature returns. That is, we're going to predict some probability" }, { "end": 944.12, "start": 938.52, "text": " you get two rewards, some probability five rewards, some probability you get ten reward, and so on." }, { "end": 953.4, "start": 945, "text": " And in normal distribution RL, there's a rather complicated way to do bootstrapping with this" }, { "end": 958.52, "start": 953.4, "text": " entire distribution, which involves squawking the distribution by some discount factor," }, { "end": 964.84, "start": 959.16, "text": " and then splitting up the distribution, discretizing the distribution for the Belman update." }, { "end": 975.24, "start": 964.84, "text": " The, in our paper, we were using a rather special reward function that was minus one at every time" }, { "end": 981.88, "start": 975.24, "text": " step, and we were using a discount factor of one. And both of these choices meant that distribution" }, { "end": 988.84, "start": 981.88, "text": " RL was significantly simplified to implement for our setting. 
And I don't want to go into" }, { "end": 994.6800000000001, "start": 988.84, "text": " two molecular details, but it basically just corresponded to a bithift of the predictions." }, { "end": 1002.0400000000001, "start": 995.32, "text": " And in our experiments, we found that distribution RL was much more stable than using standard" }, { "end": 1007.88, "start": 1003.24, "text": " reinforcement learning. Can you say anything about exploration in this work?" }, { "end": 1013, "start": 1008.6800000000001, "text": " Was it, were you using a standard exploration, or was that a challenge?" }, { "end": 1019.72, "start": 1013, "text": " Yeah, so we mostly punted on the exploration problem. We assumed that the initial state distribution," }, { "end": 1024.44, "start": 1019.72, "text": " or we used environments with the initial state distribution, was uniform over all states." }, { "end": 1032.04, "start": 1025.24, "text": " And so this both helped make learning easier, and it also meant that for the purpose of planning," }, { "end": 1036.36, "start": 1032.04, "text": " the state is that we were planning over, were uniformly distributed. One direction that I'm" }, { "end": 1041.48, "start": 1036.36, "text": " pretty excited about is figuring out how you could couple these sorts of planning algorithms" }, { "end": 1047.32, "start": 1041.48, "text": " with smarter exploration techniques. One work in this direction was done by Nikolai Savanov," }, { "end": 1052.44, "start": 1048.44, "text": " but a year ago. And I think that's an interesting step in this direction." }, { "end": 1058.28, "start": 1053.08, "text": " So you mentioned some interesting angles for future work in the paper." }, { "end": 1062.44, "start": 1058.28, "text": " Do you plan to follow up any of these? Can you share that with us?" }, { "end": 1070.6, "start": 1062.44, "text": " Yes, perhaps the biggest bottleneck in SORP was actually learning the local policy." }, { "end": 1076.9199999999998, "start": 1070.6, "text": " And so, Gulk-Nik-N-R-L, despite being much better today than it was 10 years ago," }, { "end": 1082.6799999999998, "start": 1076.9199999999998, "text": " is still in its infancy. And figuring out how we can make Gulk-Nik-N-R-L algorithms" }, { "end": 1089.3999999999999, "start": 1082.6799999999998, "text": " that work even over very small scales is a pretty hard problem. But what" }, { "end": 1093.8799999999999, "start": 1089.3999999999999, "text": " search on the repo of offer shows is that you actually don't need more than that." }, { "end": 1101.64, "start": 1093.88, "text": " If you can get Gulk-Nik-N-R-L working on a length scale of 10, 20 steps," }, { "end": 1104.7600000000002, "start": 1102.2800000000002, "text": " then planning will be able to solve most of the rest of the problem." }, { "end": 1111.48, "start": 1105.72, "text": " And so, the future direction I'm perhaps most excited about is just figuring out" }, { "end": 1115.48, "start": 1111.48, "text": " better ways of getting Gulk-Nik-N-R-L to work. And one of the reasons why" }, { "end": 1120.1200000000001, "start": 1115.48, "text": " Gulk-Nik-N-R-L is an exciting problem to work on is not only because that's the potential" }, { "end": 1125.8, "start": 1120.12, "text": " to be used in a combination with planning, but also because when we're in the multitask setting," }, { "end": 1132.6799999999998, "start": 1126.4399999999998, "text": " there is much more supervision that we can leverage. 
A failure to solve to reach one goal or" }, { "end": 1137, "start": 1132.6799999999998, "text": " to solve one task might be a success for solving some other task. This is the intuition" }, { "end": 1140.6799999999998, "start": 1137, "text": " in the hindsight experience. Replay paper is also been explored in a number of other papers." }, { "end": 1147.9599999999998, "start": 1141.2399999999998, "text": " And I'm currently working on better ways of using that insight to learn Gulk-Nik-N-R-L algorithms." }, { "end": 1152.68, "start": 1147.96, "text": " Cool. Okay. Do you think that our brains could be doing something like this? Like," }, { "end": 1157.24, "start": 1152.68, "text": " in the sword paper, you're showing that two very different methods could be combined to solve" }, { "end": 1162.1200000000001, "start": 1157.24, "text": " this problem. So, when we approach a problem like that, do you have any comments about that?" }, { "end": 1168.8400000000001, "start": 1162.1200000000001, "text": " Like, are we using a different method to think about states that are nearby versus long horizon" }, { "end": 1171.56, "start": 1168.8400000000001, "text": " kind of planning? Or is that just totally unknown at this point?" }, { "end": 1179.72, "start": 1171.56, "text": " I like that question. I am definitely not an expert in a human or animal-related neuroscience," }, { "end": 1186.12, "start": 1179.72, "text": " so everything I say is speculation. There's definitely been some work that says that we have two" }, { "end": 1192.2, "start": 1186.12, "text": " modes of thinking. There's sort of famously Daniel Kahneman claims that we have system one and" }, { "end": 1198.04, "start": 1192.2, "text": " system two thinking. Other folks refer to our insect brains and our monkey brains to sort of" }, { "end": 1203.3999999999999, "start": 1198.04, "text": " differentiate high-level reasoning versus low-level reasoning. And I definitely could see that" }, { "end": 1208.12, "start": 1203.3999999999999, "text": " something of that nature could be going on inside their brain. I think it's also possible that" }, { "end": 1214.6, "start": 1208.84, "text": " what we have some sort of hierarchical structure in our brain is not discrete. It's not like we have" }, { "end": 1220.68, "start": 1214.6, "text": " the planning level and the reactive level, but rather might be continuous. I think an exciting" }, { "end": 1226.68, "start": 1220.68, "text": " area of research would be figuring out how we can design control algorithms that are continuous" }, { "end": 1231.88, "start": 1226.68, "text": " with respect to their level and some hierarchy. That does sound amazing. Okay, I can't wait to" }, { "end": 1238.8400000000001, "start": 1232.68, "text": " to hear what you come up with in that department. Let's move to another paper of yours." }, { "end": 1244.44, "start": 1238.8400000000001, "text": " Diversity is all you need, learning diverse skills without a reward function. And that was at" }, { "end": 1252.2, "start": 1244.44, "text": " ICLR 2019. So I remember noticing this paper, I think back when it first came out, and I think it" }, { "end": 1260.8400000000001, "start": 1252.2, "text": " was on Professor Levin's Twitter feed. And I remember looking at the half-cheetah acrobatics and just" }, { "end": 1266.6000000000001, "start": 1261.8, "text": " finding that so entertaining. And I couldn't wait to read the paper just after looking at that." 
}, { "end": 1272.44, "start": 1268.6000000000001, "text": " So I was excited to meet you and to realize that I could talk to the author of that paper that's" }, { "end": 1276.76, "start": 1272.44, "text": " kind of a great way to close the loop. So what was, can you share with our listeners what is the" }, { "end": 1282.28, "start": 1276.76, "text": " main idea of this diversity is all you need paper? We want robots to be able to do all sorts of" }, { "end": 1287.48, "start": 1282.28, "text": " things in the environments and where they operate. Often it's challenging to figure out what are" }, { "end": 1296.12, "start": 1287.48, "text": " meaningful behaviors in their environment. What are the sort of to draw an analogy to principle" }, { "end": 1303.08, "start": 1296.12, "text": " component analysis? What is the basis of behaviors that exist in some environment? And the" }, { "end": 1309.48, "start": 1303.08, "text": " motivating for doing this is that if we could somehow look at a robot interacting in an environment" }, { "end": 1317.24, "start": 1309.48, "text": " and say there are, say, ten principle behaviors that it can do. Ten motion primitives to draw an" }, { "end": 1322.76, "start": 1317.24, "text": " analogy to some of the older robotics literature. We could then assemble these ingredients, these" }, { "end": 1328.6, "start": 1322.76, "text": " primitive behaviors into more complex behaviors, dissolve new tasks much more rapidly. To" }, { "end": 1336.36, "start": 1328.6, "text": " look at it from a slightly different angle, while the number of parameters in our neural networks" }, { "end": 1344.9199999999998, "start": 1336.36, "text": " that are used for control might be on the order of millions, the number of useful or meaningful" }, { "end": 1352.76, "start": 1346.04, "text": " motion primitives might be on the order of dozens. And so if we can learn to control by" }, { "end": 1358.04, "start": 1352.76, "text": " composing these motion primitives or these skills, we should be able to learn significantly faster" }, { "end": 1363.32, "start": 1358.04, "text": " than if we have to directly tune the parameters of some neural network. As the motivation for this" }, { "end": 1370.04, "start": 1363.32, "text": " project was given some environment, return some set of motion primitive or some set of skills" }, { "end": 1379, "start": 1370.04, "text": " that can be used and ensemble to solve arbitrary tasks more quickly. How is diversity measured and" }, { "end": 1385.3999999999999, "start": 1379, "text": " defined in this context? We'll explain it by a game. So there's often used to the team building game" }, { "end": 1392.2800000000002, "start": 1385.4, "text": " for humans. So was it back in that you have two players in the running team and they stand at" }, { "end": 1398.44, "start": 1392.2800000000002, "text": " opposite ends of a football field and the goal is for them to communicate a message from one end of" }, { "end": 1402.52, "start": 1398.44, "text": " the football field to the other end of the football team. And the only thing they can do is they can" }, { "end": 1408.1200000000001, "start": 1402.52, "text": " jump up it down, they can wave their hands around, they can try to spell out letters with their arms." }, { "end": 1414.2800000000002, "start": 1408.1200000000001, "text": " So let's say that one player is trying to send a message to the other player. 
We measure diversity" }, { "end": 1420.12, "start": 1414.28, "text": " as how well those players can communicate a message across the field where the first player is" }, { "end": 1425.8, "start": 1420.12, "text": " trying their best to act out whatever message they're trying to send. And the other players trying" }, { "end": 1431.32, "start": 1425.8, "text": " their best to discern what exactly is my teammate trying to spell out? What exactly are they trying to" }, { "end": 1437, "start": 1431.32, "text": " convey? So that's a game that they're playing. It's like a communication game, but what is there some" }, { "end": 1441.8, "start": 1437, "text": " if I'm understanding this correctly? Is there something that's encouraging to have them to have" }, { "end": 1446.84, "start": 1441.8, "text": " many signals as opposed to just having the oh they figured out how to do that one signal and to" }, { "end": 1453.3999999999999, "start": 1446.84, "text": " communicate that effect? Yeah, in this game, the player that is sending messages is given different" }, { "end": 1458.44, "start": 1453.3999999999999, "text": " messages to send. There might be given hundreds of different messages that they have to send." }, { "end": 1464.84, "start": 1459.24, "text": " And if that player that's sending the message always does exactly the same thing, then they'll" }, { "end": 1469.96, "start": 1464.84, "text": " only be able to convey one message. For example, if the only thing that this player does is jump up" }, { "end": 1475.64, "start": 1469.96, "text": " and down, then the other player will have no idea what message they're sending. But if they have many" }, { "end": 1480.52, "start": 1475.64, "text": " different ways of jumping up and down or they know how to do some cartwheels or spell out a couple" }, { "end": 1486.52, "start": 1480.52, "text": " letters with their arms, spell out YMCA, for example, then there are many more messages that they" }, { "end": 1492.92, "start": 1486.52, "text": " could send across the field. The messages that are being sent in terms of the waving of the" }, { "end": 1498.28, "start": 1492.92, "text": " arms are these the states that the agents are visiting? Yeah, so to make the analogy or" }, { "end": 1503.16, "start": 1498.28, "text": " sort of complete the analogy, in the reinforcement learning setting, we have some agent interaction" }, { "end": 1510.52, "start": 1503.16, "text": " with an environment. And the AGM is going to play this game with itself. So the AGM is going to take" }, { "end": 1516.76, "start": 1510.52, "text": " some actions to visit some states. And then internally, there's a part of the agent we called" }, { "end": 1522.92, "start": 1516.76, "text": " it the discriminator that looks at these states and tries to infer what message was I trying to send." }, { "end": 1533.16, "start": 1522.92, "text": " And so the behaviors are the sequence of actions that the agent takes. And the messages are" }, { "end": 1539.4, "start": 1533.88, "text": " some sort of codes. Because the robot's just talking to itself, these codes don't have to correspond" }, { "end": 1545.88, "start": 1539.4, "text": " to English sentences. And in our setting, we actually just used random one-hot vectors as these" }, { "end": 1553.0800000000002, "start": 1545.88, "text": " messages. Okay, and this sounds kind of related to like hierarchical RL and options, but it's different," }, { "end": 1558.7600000000002, "start": 1553.0800000000002, "text": " right? 
These these behaviors are not options, right? And where are they? So the behaviors that we" }, { "end": 1566.7600000000002, "start": 1558.7600000000002, "text": " learned are similar in the sense that they can be composed hierarchically to solve more complex tasks." }, { "end": 1572.6000000000001, "start": 1567.8000000000002, "text": " So we included one experiment at the end of our paper where we heard how, after learning the" }, { "end": 1582.1999999999998, "start": 1572.6, "text": " set of skills, we could learn some high-level policy that every, say, 100-time steps told us" }, { "end": 1589.56, "start": 1582.1999999999998, "text": " which of your skills to use to try to maximize some reward. One of the key differences, though," }, { "end": 1598.4399999999998, "start": 1589.56, "text": " is that in the options framework, or in most other hierarchical RL, the low-level primitive skills" }, { "end": 1606.68, "start": 1598.44, "text": " are learned to maximize the reward function. And this is useful in some settings because it provides" }, { "end": 1613.4, "start": 1606.68, "text": " a reward, some reward signal for learning these low-level skills. But it's also challenging," }, { "end": 1618.52, "start": 1613.4, "text": " because it means that the low-level skills you learn on one task cannot necessarily be used" }, { "end": 1624.28, "start": 1618.52, "text": " to solve some other task. And so the key difference is that in diverse days, all you need, the skills" }, { "end": 1628.92, "start": 1624.28, "text": " that we were learning, were not learned with respect to a single reward function, rather they were" }, { "end": 1633.8799999999999, "start": 1628.92, "text": " task-ignostic. And then meant that you could use them to solve many downstream tasks. Of course," }, { "end": 1639.72, "start": 1633.8799999999999, "text": " the downside is that they attempted to cover all possible behaviors, and if you really only" }, { "end": 1645.48, "start": 1639.72, "text": " cared about one type of behavior, then many of the skills you learned would be useless. For example," }, { "end": 1650.12, "start": 1645.48, "text": " if you only care about running forward, then learning how to jump up and down and learning how" }, { "end": 1654.9199999999998, "start": 1650.12, "text": " to do backflips aren't particularly useful. We have some second of the paper where we show how you" }, { "end": 1660.84, "start": 1654.9199999999998, "text": " can bias the skills to accomplish certain types of behaviors. So there is some way around this if" }, { "end": 1666.4399999999998, "start": 1660.84, "text": " you really want to do that. So you mentioned how this agent is playing a cooperative game." }, { "end": 1673.1599999999999, "start": 1666.4399999999998, "text": " Most of the times, when I think when we encounter game type playing in RL or machine learning," }, { "end": 1678.76, "start": 1673.1599999999999, "text": " at least from my perspective, it seems like they're mostly adversarial games. Can you say anything" }, { "end": 1683.96, "start": 1678.76, "text": " about the difference between adversarial and cooperative games and why the cooperative game?" }, { "end": 1690.52, "start": 1684.68, "text": " Why this particular instance cooperative makes sense? And I wonder, is there some intuition that" }, { "end": 1696.12, "start": 1690.52, "text": " we can get about when we want to play a cooperative game versus adversarial? That's a good question." 
}, { "end": 1702.76, "start": 1696.12, "text": " And to be honest, I don't have a fantastic answer. One thing to notice that cooperative games" }, { "end": 1709.48, "start": 1702.76, "text": " are usually stable. In the sense that once you've found a solution, the two players of your game" }, { "end": 1715.08, "start": 1709.48, "text": " will tend to stay at that solution. So for example, in this communication game that we were talking about" }, { "end": 1719.96, "start": 1715.08, "text": " before, once the two players have worked out some strategy for communicating messages across the" }, { "end": 1725.08, "start": 1719.96, "text": " football field, now that has an incentive to deviate from that strategy. Whereas in contrast," }, { "end": 1730.12, "start": 1725.08, "text": " if you look at an adversarial game, the sort that's played in generative adversarial networks," }, { "end": 1736.1999999999998, "start": 1730.12, "text": " or to take a more simple example, rock paper scissors, players do have an incentive to deviate" }, { "end": 1741.6399999999999, "start": 1736.1999999999998, "text": " from their current strategy. For example, we're playing rock paper scissors, and you play rock," }, { "end": 1746.28, "start": 1741.6399999999999, "text": " I'm going to play paper, and then you're going to play scissors. And then I'm going to play rock," }, { "end": 1752.28, "start": 1746.28, "text": " and we'll keep cycling like this. So one of the benefits of dealing with cooperative games rather" }, { "end": 1758.04, "start": 1752.28, "text": " than competitive games is that the optimization problem might be easier. Cool, thanks for helping" }, { "end": 1763.3999999999999, "start": 1758.04, "text": " us understand that. The paper says that the discriminator works on the levels of states and not" }, { "end": 1769.08, "start": 1763.3999999999999, "text": " trajectories. And then, but also there was a line that said our method is not limited to learning" }, { "end": 1775.1599999999999, "start": 1769.08, "text": " skills that visit entirely disjoint sets of states. Can you help us understand how that works? So" }, { "end": 1781.32, "start": 1775.1599999999999, "text": " the trajectories don't have to be completely distinct, but hone in on those few distinct states," }, { "end": 1788.52, "start": 1781.32, "text": " is that what's happening? Exactly, yeah. So you could imagine that maybe the agent always starts" }, { "end": 1793.6399999999999, "start": 1788.52, "text": " in the same state, and maybe there isn't that much you can do at the beginning of an episode." }, { "end": 1798.4399999999998, "start": 1793.6399999999999, "text": " For example, maybe the agent always starts in some narrow hallway, and the only thing it can do" }, { "end": 1803, "start": 1798.4399999999998, "text": " with this narrow hallway is walk to the end of the hallway. And so while the agent is in this hallway," }, { "end": 1807.96, "start": 1803, "text": " it's really hard to tell what message it's trying to send, or what skill is being executed." }, { "end": 1811.96, "start": 1807.96, "text": " But let's say at the end of the hallway, there's a big open field, and there are many things the agent" }, { "end": 1816.92, "start": 1811.96, "text": " can do once it gets to this field. It can jump up and down, it can do backflips, it can play a game" }, { "end": 1822.3600000000001, "start": 1816.92, "text": " of soccer. 
The part that we were trying to explain in that part of the paper was saying that" }, { "end": 1828.04, "start": 1822.3600000000001, "text": " it's okay if there's certain states where you can't tell what skill the agent is doing," }, { "end": 1835.64, "start": 1828.76, "text": " as long as there are other states, say states in the future, where you can tell which skill the agent" }, { "end": 1841.16, "start": 1835.64, "text": " is using. Now we also had that point about discriminating on the level of states rather than" }, { "end": 1848.68, "start": 1841.16, "text": " trajectories. And the point there was mostly in implementation detail, we said that when we're going" }, { "end": 1854.3600000000001, "start": 1848.68, "text": " to infer what skill we're using, we're going to make a prediction for every state acting pair," }, { "end": 1859.24, "start": 1854.3600000000001, "text": " and then we're going to ensemble all of these predictions together. An alternative might be" }, { "end": 1866.76, "start": 1859.24, "text": " something like using a LSTM to read in every state and try to make a prediction. But as much" }, { "end": 1873.4, "start": 1866.76, "text": " harder to train recurrent models than these sort of backing models. And so we ended up using" }, { "end": 1879.48, "start": 1873.4, "text": " something simpler there. That said, there has been follow-up work that believes the valor paper" }, { "end": 1883.24, "start": 1879.48, "text": " that looks at learning skills conditioned on entire trajectories." }, { "end": 1889.24, "start": 1883.24, "text": " But how did he get to that position? There could have been a set of different states" }, { "end": 1894.04, "start": 1890.76, "text": " one step before. So maybe like a one step state transition." }, { "end": 1902.92, "start": 1895.8, "text": " Yeah, absolutely. And I think there's sort of fun to think about the sort of family of algorithms" }, { "end": 1909.88, "start": 1903.48, "text": " that are conditioned on some aspect of the behavior. And from that aspect of the behavior," }, { "end": 1918.0400000000002, "start": 1909.88, "text": " they try to infer what skill is being used. So in our paper, this aspect of the environment was just" }, { "end": 1924.5200000000002, "start": 1919.24, "text": " bag of states looking at each state and predicting what skill is being used. You could look at" }, { "end": 1929.8000000000002, "start": 1924.5200000000002, "text": " an entire trajectory of states and try to infer what skill is being used. And this would allow you" }, { "end": 1935.96, "start": 1929.8000000000002, "text": " to discern the Michael Jordan dunk from other sorts of the Kobe Bryant dunk. I don't," }, { "end": 1942.2, "start": 1935.96, "text": " maybe they have different dunk techniques. You also could look at say the initial state and the" }, { "end": 1950.52, "start": 1942.2, "text": " current state. And what this would allow you to do is see how the skill changes the state." }, { "end": 1956.76, "start": 1951.4, "text": " And you can imagine this might be useful if you're hoping to gain many skills together in sequence." }, { "end": 1964.6000000000001, "start": 1957.88, "text": " And in many other ways, you could imagine discriminating skills based on other aspects of trajectories." }, { "end": 1971, "start": 1964.6, "text": " You could look at cumulants, you could look at actions, you could look at running averages or other" }, { "end": 1975.7199999999998, "start": 1971, "text": " functions. 
So I think it's a fairly exciting area to try to sit down and enumerate all these" }, { "end": 1981.1599999999999, "start": 1975.7199999999998, "text": " different ways of discriminating skills and thinking about when you could be most appropriate." }, { "end": 1984.6799999999998, "start": 1981.1599999999999, "text": " Sounds like the properties of a seminal paper. Like there's just so many directions to go from" }, { "end": 1989.9599999999998, "start": 1984.6799999999998, "text": " here, which is awesome. Any follow-up work plans in this direction from you Ben?" }, { "end": 1994.6000000000001, "start": 1989.96, "text": " One thing that I've been looking into a little bit recently is figuring out how we can more" }, { "end": 2001.88, "start": 1994.6000000000001, "text": " intelligently use these sorts of discriminability ideas in service of maximizing a reward." }, { "end": 2007.64, "start": 2001.88, "text": " That is, how could we use something like Diane in the inner loop of" }, { "end": 2014.6000000000001, "start": 2009.32, "text": " current state of the art or algorithm? Could we somehow use this for better exploration or for" }, { "end": 2021, "start": 2014.6, "text": " better policy improvement? This is still very much in its early stages, but I think it has some" }, { "end": 2027.24, "start": 2021, "text": " promise. So I got to meet you in Vancouver at your in Europe's 2019 poster for the SOAR paper." }, { "end": 2033.3999999999999, "start": 2027.24, "text": " And I remember thinking that you should be teacher. Yeah. Because I thought you explained so well." }, { "end": 2038.76, "start": 2033.9599999999998, "text": " You explained as if you actually want us to understand. Not just like you're just checking the" }, { "end": 2042.84, "start": 2038.76, "text": " box. It's like, yeah, the explanation has been sent. But you actually want us to understand," }, { "end": 2046.1999999999998, "start": 2042.84, "text": " which I actually love and it comes through so clear. So thank you for that." }, { "end": 2051.56, "start": 2046.1999999999998, "text": " Well, thank you. So it makes total sense that you were head TA for the Deep RL course at CMU." }, { "end": 2055.88, "start": 2051.56, "text": " What's that like? Can you share a little bit about the course? I think it's what I'm looking at" }, { "end": 2064.12, "start": 2055.88, "text": " this. Love is, it looks like covers a lot. Yeah. So CMU is two reinforcement learning courses." }, { "end": 2069.3199999999997, "start": 2064.12, "text": " That was the graduate offering. And there's an undergraduate offering of a very similar course" }, { "end": 2078.1200000000003, "start": 2069.32, "text": " in the spring. And so as the TA, I helped design part of the syllabus. I gave one or two of lectures" }, { "end": 2084.44, "start": 2078.1200000000003, "text": " and organized most of the assignments in grading. We had a team of fantastic TA is that helped" }, { "end": 2090.04, "start": 2085.8, "text": " with many of the day-to-day logistics of running office hours and helping with the grading." }, { "end": 2094.84, "start": 2090.92, "text": " Do you have any advice for people who are trying to teach this stuff? I think one thing that's" }, { "end": 2103.48, "start": 2094.84, "text": " a bit challenging about many seminar courses and or many courses that survey a number of" }, { "end": 2110.52, "start": 2103.48, "text": " recent algorithms is that when we write research papers, we write them to highlight novelty." 
}, { "end": 2116.52, "start": 2111.32, "text": " That is, we highlight all of the ways in which our work is different from prior work. But for the" }, { "end": 2123.08, "start": 2116.52, "text": " purpose of teaching, it makes a lot more sense to emphasize the similarities. And so one of the things" }, { "end": 2130.6, "start": 2123.08, "text": " that I tried to do in recitations and lectures and assignments was to highlight that many of the" }, { "end": 2137.64, "start": 2130.6, "text": " algorithms that we learn in this course are built from the same building blocks. And I think that" }, { "end": 2145.4, "start": 2137.64, "text": " this mindset helps cope with the enormous number of papers that are published on RRL almost every day." }, { "end": 2152.36, "start": 2146.12, "text": " If we can discern what the underlying ingredients of each paper are, at least for me, that makes" }, { "end": 2157.08, "start": 2152.36, "text": " it much easier to understand what the core contribution of the paper is. That is, instead of saying" }, { "end": 2163.1600000000003, "start": 2157.08, "text": " this is a really complex paper, I can see it's, oh, it was this other paper plus two or three tweaks." }, { "end": 2170.04, "start": 2163.96, "text": " Sort of mathematically, if the number of ingredients from what we build algorithms grows linearly," }, { "end": 2174.04, "start": 2170.04, "text": " the number of possible algorithms, the number of possible ways of combining these ingredients" }, { "end": 2180.1200000000003, "start": 2174.04, "text": " in new ways would grow exponentially. And so being able to infer the ingredients from what" }, { "end": 2184.7599999999998, "start": 2180.12, "text": " algorithms are built seems like a fairly powerful way of understanding algorithms." }, { "end": 2189.24, "start": 2184.7599999999998, "text": " That makes a lot of sense. That's kind of related to one of the reasons why I wanted to do this" }, { "end": 2196.04, "start": 2189.24, "text": " podcast is because I want to understand RL and more and more depth. And I was finding that resources" }, { "end": 2200.44, "start": 2196.04, "text": " to connect the dots between the different subfields and the different papers and the different" }, { "end": 2204.7599999999998, "start": 2200.44, "text": " perspectives were really hard to find. It just seemed like there was so much that that went" }, { "end": 2210.1200000000003, "start": 2204.76, "text": " on said if you only looked at written material or lectures. Absolutely. And not saying that I'm an" }, { "end": 2215.32, "start": 2210.1200000000003, "text": " expert on this at all. But I do think that it's helpful to figure out how do we connect all these" }, { "end": 2221.4, "start": 2215.32, "text": " dots? Because most of the dots are often closer than we think. So you co-founded the ICML" }, { "end": 2226.6800000000003, "start": 2221.4, "text": " exploration in reinforced and learning workshop. Can you say a bit about that? How did you come to" }, { "end": 2232.76, "start": 2226.6800000000003, "text": " to co-found that workshop? Yeah, so that was with Storia Bupati-Raku. I started that when we were" }, { "end": 2241.4, "start": 2232.76, "text": " both part of the Google Brain residency. I guess that was early 2018. And the motivation for doing" }, { "end": 2248.5200000000004, "start": 2241.4, "text": " it was simply that there's a fair amount of work on exploration in RL. But it often is fairly" }, { "end": 2253.96, "start": 2248.5200000000004, "text": " disjoint. 
And we're hoping to gather together a whole bunch of the folks working on exploration" }, { "end": 2259.48, "start": 2253.96, "text": " to have a conversation and to exchange ideas to figure out how do we move the field forward?" }, { "end": 2265.72, "start": 2259.48, "text": " One of the primary aims of the workshop was to figure out how do we even measure success?" }, { "end": 2270.84, "start": 2265.72, "text": " What is the right metric for exploration? And it was fun to see over the two years that we had" }, { "end": 2276.68, "start": 2270.84, "text": " the workshop, what different metrics various people proposed? So do you feel like we are closer to" }, { "end": 2281.08, "start": 2276.68, "text": " having that figured out now? Closer is still a long way off from the solution. Well, it's not" }, { "end": 2285.8, "start": 2281.08, "text": " it's not the simplest problem, I guess. So we had from from DeepMind, we had the B Suite that came out" }, { "end": 2291.32, "start": 2285.8, "text": " a while back. And I guess that part of that was about exploration. Absolutely, yeah. One of the" }, { "end": 2296.76, "start": 2291.32, "text": " things I liked about the B Suite is that they propose a number of different metrics. And I think" }, { "end": 2301.5600000000004, "start": 2296.76, "text": " that sort of highlights that is often unclear exactly what we want from exploration. And so maybe it" }, { "end": 2305.8, "start": 2301.5600000000004, "text": " does make sense to not be optimizing for a single metric, but for a set of eight metrics." }, { "end": 2311.96, "start": 2306.52, "text": " Do you have any comments on general approaches to exploration in RL these days?" }, { "end": 2317.08, "start": 2311.96, "text": " Yeah. So I think there are probably three or four different broad classes of techniques. And" }, { "end": 2322.92, "start": 2317.08, "text": " any technique, there's definitely areas for improvement. So I think so one main area is adding noise." }, { "end": 2329.56, "start": 2323.7200000000003, "text": " Adding noise in one of many ways. So epsilon greedy just adds noise to the action." }, { "end": 2334.84, "start": 2329.56, "text": " In algorithms like DDPG, we add noise that's correlated across time to the actions." }, { "end": 2342.6800000000003, "start": 2334.84, "text": " There were two papers in I think 2015 maybe, noisy nets and parameter space noise that add noise not" }, { "end": 2348.6000000000004, "start": 2342.6800000000003, "text": " to the actions, but to the parameters of the actor. And so I think this is sort of one class of" }, { "end": 2352.52, "start": 2348.6000000000004, "text": " methods and it's fun to think about where else might we add noise or how could we tune the noise" }, { "end": 2359.1600000000003, "start": 2352.52, "text": " automatically. There was one paper a year or two ago by Obtek Dupte, they looked automatically" }, { "end": 2365.8799999999997, "start": 2359.16, "text": " tuning the noise added for exploration. A second class of techniques look at trying to capture" }, { "end": 2371.72, "start": 2365.8799999999997, "text": " uncertainty. These are often done by trying to figure out where is our policy or a Q-function" }, { "end": 2376.92, "start": 2371.72, "text": " most uncertain. And then rewarding the policy for going to states where this uncertainty is high." 
}, { "end": 2382.3599999999997, "start": 2377.56, "text": " And then maybe a third set of approaches are those that learn some density model" }, { "end": 2387.08, "start": 2382.3599999999997, "text": " of where the agent has already been. These include not only compass methods, methods that" }, { "end": 2394.36, "start": 2387.08, "text": " learn some user VAE or a normalizing flow or something to model the distribution over states" }, { "end": 2399.4, "start": 2394.36, "text": " and actions that the agent has been to before. And then we can use this density to form some sort" }, { "end": 2407.16, "start": 2399.4, "text": " of exploration bonus, either in the form of by directly rewarding the agent for going to unseen states" }, { "end": 2412.52, "start": 2408.04, "text": " or by trying to look at how the density changes as the agent visits new states." }, { "end": 2419.24, "start": 2412.52, "text": " And then I guess maybe one fourth type of exploration method are those based on posterior sampling." }, { "end": 2424.92, "start": 2419.24, "text": " And they are rather than trying to learn a single policy or a single value function. You might" }, { "end": 2430.2, "start": 2424.92, "text": " learn the distribution over policies or the distribution over value functions. And things like" }, { "end": 2436.04, "start": 2430.2, "text": " bootstrap DQN are the typical example of these sorts of exploration methods. And while all these" }, { "end": 2441.8, "start": 2436.04, "text": " exploration methods seems kind of disjoint, I think that there actually might be some of them might" }, { "end": 2447.2400000000002, "start": 2441.8, "text": " in fact be the same. And one area that would be fun to explore would be figuring out the" }, { "end": 2451.7200000000003, "start": 2447.2400000000002, "text": " connections between each of these four methods or these four classes and methods." }, { "end": 2455.88, "start": 2451.7200000000003, "text": " Cool, thanks for laying that out. So it sounds like we're still looking for the holy grail of" }, { "end": 2461.4, "start": 2455.88, "text": " the granunified theory of exploration in our health. Yeah, one thing to note is that optimal" }, { "end": 2468.76, "start": 2461.4, "text": " exploration is well defined but is completely intractable for most problems that we care about." }, { "end": 2473.4, "start": 2468.76, "text": " Do you mean that like in a posterior sampling sense? I guess I mean that posterior sampling is" }, { "end": 2479.0800000000004, "start": 2473.4, "text": " an approximation to what's known as the Bayes optimal exploration strategy. So if we want to" }, { "end": 2485.0800000000004, "start": 2479.0800000000004, "text": " maximize cumulative return, there is some way to do it optimally for any MD given an MDP, there is" }, { "end": 2491.4, "start": 2485.0800000000004, "text": " some optimal exploration strategy. But actually computing this exploration strategy is extremely," }, { "end": 2497.1600000000003, "start": 2491.4, "text": " extremely hard. It's hard because Bayesian inference is hard. Exact Bayesian inference is hard." }, { "end": 2504.2799999999997, "start": 2497.16, "text": " Is that why it's hard? It's hard because it requires reasoning about all possible belief states," }, { "end": 2510.8399999999997, "start": 2504.2799999999997, "text": " which grows exponentially in the size of the number of states in your MDP. I see. 
So your belief" }, { "end": 2517.3199999999997, "start": 2510.8399999999997, "text": " space MDP is much larger than your actual MDP. Is that what you're saying? Yes. Like a lot." }, { "end": 2522.8399999999997, "start": 2518.2799999999997, "text": " Way, way, way, way bigger. That makes sense. Okay, thanks, thanks so much for laying that out for us." }, { "end": 2527.2400000000002, "start": 2522.84, "text": " Here. Do you have any tips for us on keeping up with the day loose of papers? There's just so" }, { "end": 2532.04, "start": 2527.2400000000002, "text": " many new papers and it's hard. Everyone's excited about them all. I don't have the perfect solution," }, { "end": 2537.48, "start": 2532.04, "text": " but I can tell you what I do. I have a giant spread key to papers and whenever anyone recommends a" }, { "end": 2544.76, "start": 2537.48, "text": " paper or I see an interesting paper reference somewhere, I add the list and then every day I pop" }, { "end": 2549.7200000000003, "start": 2544.76, "text": " one or two papers off the list and read them and I sample the papers uniformly at random. So it" }, { "end": 2555.24, "start": 2549.72, "text": " ends up being a mix of very new papers, very old papers, papers in between. And then I guess hope that" }, { "end": 2560.12, "start": 2556.2799999999997, "text": " if the paper is important enough, it will eventually come to the pop with the list. But yeah," }, { "end": 2567.3199999999997, "start": 2560.12, "text": " there are way, way, way too many papers to read all of them. So random search is powerful folks." }, { "end": 2573.3999999999996, "start": 2567.3199999999997, "text": " Yep. Are there researchers you really admire and look up to? Yeah, I think that there are folks" }, { "end": 2579.8, "start": 2573.4, "text": " both on the theory and application side that do some pretty cool work. So on the application side," }, { "end": 2584.6, "start": 2579.8, "text": " anyone who gets any sort of reinforcement learning to work in the real world is amazing." }, { "end": 2590.04, "start": 2585.7200000000003, "text": " So there are a couple of folks that have done some reinforcement learning for healthcare." }, { "end": 2595.56, "start": 2590.36, "text": " Folks like finale dohivalaz and Emma Brunskyl. Emma Brunskyl has also done some pretty neat work" }, { "end": 2601.08, "start": 2596.28, "text": " into a reinforcement learning for education. That is figuring out what assignments do we give students" }, { "end": 2608.68, "start": 2601.08, "text": " so they can learn most quickly. I guess there's also been some work on doing reinforcement learning" }, { "end": 2614.92, "start": 2608.68, "text": " for optimizing batteries. And I think that was done by Stefano Armand and one of the students" }, { "end": 2620.04, "start": 2614.92, "text": " at the TITU Grover. I think that was also very cool. But then on the theory side, I think a lot of" }, { "end": 2626.04, "start": 2620.04, "text": " folks have done some pretty neat work, including Brian Zebert's done some nice work on" }, { "end": 2631.88, "start": 2626.04, "text": " actual entropy reinforcement learning. Tom Chow has done some very nice work on just pushing" }, { "end": 2637.56, "start": 2631.88, "text": " regular RL algorithms forward. And of course, I look up to both my advisors too. Do you have any advice" }, { "end": 2643.8, "start": 2637.56, "text": " for people who look up to you? One thing I'd recommend is not being afraid to ask for help." 
}, { "end": 2650.92, "start": 2644.44, "text": " Very often, folks want to be helpful, but they don't know how to. And if you can tell someone," }, { "end": 2657.32, "start": 2650.92, "text": " oh, can you recommend three papers I can read to learn about reinforcement learning? Or, oh," }, { "end": 2664.44, "start": 2657.32, "text": " what was what algorithm did you use for implementing that paper? Or can you send me the code you" }, { "end": 2672.52, "start": 2664.44, "text": " used for that environment? Very often, people will be happy to say yes. And so just asking for help," }, { "end": 2677.48, "start": 2672.52, "text": " I think, is one of the most useful skills. I'm sure there are other things too, but that's the" }, { "end": 2682.76, "start": 2677.48, "text": " first thing that comes to mind. So besides the things that you mentioned already in our chat," }, { "end": 2689.2400000000002, "start": 2682.76, "text": " are there papers or trends in RL generally that you think are really interesting more recently?" }, { "end": 2695.8, "start": 2689.2400000000002, "text": " One trend that I'm pretty excited about is using VR setups to collect human demonstrations." }, { "end": 2703.08, "start": 2696.36, "text": " There's been maybe a half dozen papers over the past year or two. One that comes to mind is" }, { "end": 2708.68, "start": 2703.08, "text": " Corey Lynch is learning from play data paper. I think there are a couple others as well. And the" }, { "end": 2717.64, "start": 2708.68, "text": " reason why I think this is exciting is that this using VR in motion capture to provide demonstration" }, { "end": 2725.16, "start": 2717.64, "text": " seems a lot easier than, say, designing reward functions. That is, we can provide many more bits per" }, { "end": 2734.04, "start": 2725.16, "text": " second of human interaction. And I think given that I suspect this trend will continue over the" }, { "end": 2738.52, "start": 2734.04, "text": " next couple of years, which I mean that very soon we'll have fairly large data sets of human" }, { "end": 2747, "start": 2738.52, "text": " demonstrations for maybe robotic mobilization for self-driving for maybe some other sorts of navigation" }, { "end": 2752.44, "start": 2747, "text": " tasks. And then the question will be how do we design algorithms that can effectively learn from" }, { "end": 2759.2400000000002, "start": 2752.44, "text": " these large sets of unlabeled human demonstrations? And this data is interesting because it's not random" }, { "end": 2765.7200000000003, "start": 2759.2400000000002, "text": " data, but it's also often not labeled with the human's intentions. And so it may be the case that" }, { "end": 2771.32, "start": 2765.7200000000003, "text": " figuring out the right way to merge in reverse reinforcement learning, that is inferring what the" }, { "end": 2779.64, "start": 2771.32, "text": " intention was and reinforcement learning, trying to maximize whatever reward the user was intending to" }, { "end": 2785, "start": 2779.64, "text": " do might be sort of cool. And so this intersection of reinforcement learning and worst reinforcement" }, { "end": 2791, "start": 2785, "text": " learning might provide a way forward to solve the handle all this motion capture data. That's" }, { "end": 2799.3199999999997, "start": 2791, "text": " sounds really cool. 
This this interview is happening in March mid March 2020, which we're of course" }, { "end": 2804.2, "start": 2799.3199999999997, "text": " we're all facing COVID and we're hearing about conferences being canceled or moved online." }, { "end": 2810.8399999999997, "start": 2804.2, "text": " Apparently ICLR is going to be a virtual conference. What do you think about these virtual conferences?" }, { "end": 2816.4399999999996, "start": 2811.56, "text": " I think it's an opportunity to figure out how we can how to have conferences when people" }, { "end": 2820.8399999999997, "start": 2816.4399999999996, "text": " are remote. So realistically I expect the number of machine learning conferences to grow over the" }, { "end": 2826.2799999999997, "start": 2820.8399999999997, "text": " next couple of years. At the same time increasing concerns about climate change and increasing" }, { "end": 2830.68, "start": 2826.2799999999997, "text": " demands on people's times are probably going to be a harder to travel to all these conferences." }, { "end": 2837.7999999999997, "start": 2830.68, "text": " It's a figuring out how do we make conferences still feel exciting and engaging? How do we still have" }, { "end": 2845.48, "start": 2838.52, "text": " the spontaneous run into friends and collaborators and conversations in the hallway of conferences" }, { "end": 2848.9199999999996, "start": 2845.48, "text": " is definitely going to be a challenge. But it is also an opportunity to figure out" }, { "end": 2854.8399999999997, "start": 2849.48, "text": " to sort of be forced to solve this problem. I think they'll probably improve conferences even" }, { "end": 2861.7200000000003, "start": 2854.84, "text": " after COVID is gone. Yeah, I totally agree. So at Neurup 2019 I just couldn't believe how packed it was" }, { "end": 2865.96, "start": 2862.28, "text": " and this is after they turned down, you know, huge chunk of the people who wanted to be there." }, { "end": 2871, "start": 2867.6400000000003, "text": " And that's aside from from considering emissions but just the demand for" }, { "end": 2877.48, "start": 2872.6000000000004, "text": " for being there is just so huge. On the other hand the online tools that I've seen so far," }, { "end": 2883.1600000000003, "start": 2878.04, "text": " I mean as much as I like SLI's live, I definitely wouldn't want to be relegated to that experience." }, { "end": 2889.48, "start": 2883.16, "text": " So trying to imagine what a rich interactive experience could be like. Like for me," }, { "end": 2893.96, "start": 2890.6, "text": " I guess some people at Neurup's were like, hey, you know, the poster sessions are crazy." }, { "end": 2898.3599999999997, "start": 2894.3599999999997, "text": " We should have some people are even saying we should have less poster sessions and more talks." }, { "end": 2901.96, "start": 2898.3599999999997, "text": " And I was like, that's crazy because like the poster session is where I met you and I could hear" }, { "end": 2907.08, "start": 2901.96, "text": " you know in depth about your work and to me that's the most beneficial part is these poster sessions" }, { "end": 2911.3199999999997, "start": 2907.08, "text": " which seem like the most challenging part to scale up. Absolutely, yeah. I don't have a solution." 
}, { "end": 2918.6800000000003, "start": 2911.32, "text": " But I think forcing 5,000 or 10,000 or more of my folks to think hard about this problem over the next six months" }, { "end": 2924.92, "start": 2919.32, "text": " will result in an enormous number of human hours thought about this problem." }, { "end": 2928.6800000000003, "start": 2924.92, "text": " And I'm fairly confident some folks will think of some clever solutions. Ben Eisenbach," }, { "end": 2933.7200000000003, "start": 2928.6800000000003, "text": " I can't wait to hear what you came up with next and thanks so much for joining us here today." }, { "end": 2945.16, "start": 2933.72, "text": " Thanks again, Ben. Absolutely. Thank you for having me, Robin." }, { "end": 2961.08, "start": 2945.16, "text": " That's our episode for today folks. Be sure to check talkrl.com for more great episodes." } ]
NeurIPS 2019 Deep RL Workshop
Hear directly from presenters at the NeurIPS 2019 Deep RL Workshop on their work!
https://media.transistor…6c6.mp3?src=site
This is TalkRL, all reinforcement learning all the time. Subscribe at talkrl.com slash subscribe. We're here at NeurIPS 2019 in rainy Vancouver, and we're now going to hear directly from a number of presenters at the NeurIPS Deep RL workshop.

So Matthia Sabatelli here from the University of Liège, big fan of your podcast by the way. I'm here to present a poster about a new family of model-free deep reinforcement learning algorithms. The paper is called Approximating Two Value Functions Instead of One: Towards Characterizing a New Family of Deep Reinforcement Learning Algorithms, and this is joint work between the University of Liège in Belgium and the University of Groningen in the Netherlands. So the main idea that I'm trying to pitch today is that we should actually care about approximating the state-action value function, which is usually known as Q, alongside the approximation of the state value function V in model-free reinforcement learning. Current state-of-the-art algorithms usually only care about approximating Q, but I introduce a new family of algorithms which also approximate V, and which seem to work much better. I basically studied the properties of these algorithms and I show multiple interesting facts. For the training scheme of approximating two value functions instead of one, I showed that it's actually beneficial both when we are learning on-policy and when we are learning off-policy. So what I do is introduce two new model-free reinforcement learning algorithms which follow this specific learning scheme and which seem to outperform algorithms like DQN and DDQN. These algorithms also have interesting properties in terms of the quality of the Q function we're learning. I show that my algorithms, called DQV and DQV-Max, suffer less from the overestimation bias of the Q function, which is a known problem in model-free deep reinforcement learning. When we port RL algorithms to neural networks, we know that training can become very, very unstable, and one of the causes of this divergence is the overestimation bias of the Q function, and I show that the algorithms I introduce suffer less from this phenomenon. So they don't just learn much faster, they also learn more accurately, which is something nice. And in terms of quantitative results it's kind of cool, because this specific training dynamic of approximating two value functions instead of one allows the algorithms of the DQV family to outperform DQN and DDQN on a set of benchmarks on which those algorithms failed at the beginning. We've seen a lot of progress when it comes to model-free deep RL, but some environments, for example from the Atari benchmark, are still challenging, and DQV and DQV-Max are able to achieve superhuman performance where algorithms like DQN and DDQN failed. So can you comment on the relationship between your new algorithms and maybe established approaches that separate the V out, like dueling? Right, so one key component of the algorithms that I'm introducing is that we require two separate sets of weights, which means that two independent neural networks are needed in order to approximate the state value function and the state-action value function. When it comes to the dueling architecture, we have these big convolutional blocks which are shared across the architecture, and then different heads; one of these heads, for example, also estimates the advantage function, which is something that the algorithms of my family do not do.
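A rough tabular sketch of the two-value-function training scheme described above: both V and Q are learned, and the Q update bootstraps from V(s') rather than from max_a Q(s', a), which is where the reduced overestimation is said to come from. The chain environment and the exact form of the targets here are simplifications based on the description above and my reading of the DQV paper, not the published pseudocode.

```python
import numpy as np

# Tabular sketch of a DQV-style update on a toy chain MDP (hypothetical setup):
# both V(s) and Q(s, a) are learned, and the Q target bootstraps from V(s')
# instead of max_a Q(s', a).
n_states, n_actions = 6, 2
V = np.zeros(n_states + 1)                 # extra entry = terminal state
Q = np.zeros((n_states + 1, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.3
rng = np.random.default_rng(0)

def step(s, a):
    """Toy chain: action 1 moves right, action 0 stays. Reward 1 at the right end."""
    s2 = min(s + 1, n_states) if a == 1 else s
    done = s2 == n_states
    return s2, (1.0 if done else 0.0), done

for episode in range(500):
    s, done = 0, False
    for t in range(100):
        a = rng.integers(n_actions) if rng.random() < eps else Q[s].argmax()
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * V[s2])
        # Both estimators regress toward the same V-based bootstrap target.
        V[s] += alpha * (target - V[s])
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
        if done:
            break

print("V:", np.round(V[:-1], 2))
print("greedy actions:", Q[:-1].argmax(axis=1))   # should all prefer action 1
```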
But I've tried some approaches which are inspired, from an architecture point of view, by the dueling architecture, and these are algorithms which I call dueling-DQV. So in some sense I again fix these convolutional layers and try to share them among the neural architecture before approximating the state value function and the state-action value function, but I actually show that this is not really beneficial in my case. So one key component of the algorithms of the DQV family is to really use two different sets of weights and two separately parameterized neural networks.

I'm Eric Steinberger, I'm an undergrad student at the University of Cambridge and now part-time at Facebook AI Research. I worked on Single Deep Counterfactual Regret Minimization, which is an algorithm that solves two-player zero-sum imperfect-information games that might be too big to solve with tabular algorithms, and is better than the previous state of the art, which was Deep CFR. What I did is I found a way to simplify Deep CFR to require less neural network approximation, and in fact less training in total, while yielding better, that is more accurate, results both theoretically and in practice. So you started working on deep RL in high school, that can't be too common. Yes, I started working on this while I was in my last year of high school and finished it in my first year of university. This was just kind of a project where I thought it would be fun if it existed. Back when I started it there was no good algorithm that used neural networks to play imperfect-information games, and things like PPO, you know, they work in many games, but there wasn't really much work on them converging to Nash equilibria. There are some algorithms that do, but not really well, and I thought I'd take the state-of-the-art tabular algorithm, CFR, counterfactual regret minimization, and try to apply neural networks to it and basically just sort of approximate the shit out of it. That worked.

My name is Jose Arjona-Medina, I'm working at the Johannes Kepler University in Linz. We are presenting RUDDER, a new method for reinforcement learning to decompose the return and to perform reward redistribution, in a way that the learned agent will learn much, much faster than using TD or using Monte Carlo. Can you tell me more about how RUDDER works? Yes, so traditional methods take a very long time to learn, or to solve an MDP, when the rewards are delayed. A delayed reward means that you get the reward only at the end of the episode, or that the reward is sparse. Either way, every time you take an action which is very relevant to the reward, you see this reward very, very late. So in our method we perform return decomposition, so we decompose the return, and then, using contribution analysis, we do reward redistribution. So we change how the agent sees the reward from the environment. This new reward redistribution leads to a new MDP which is much, much easier to solve, because in the optimal case all the expected future reward that you might expect from one state-action is given immediately. So you can forget about the future.

My name is Hochreiter. I'm the head of the LIT AI Lab in Linz. I'm the last author of this paper, RUDDER: Return Decomposition for Delayed Rewards. What we present here is a paradigm shift. It's completely different from previous reinforcement learning algorithms.
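A minimal sketch of the return-decomposition and reward-redistribution idea just described: fit a model that predicts the episode's return from the trajectory prefix, then hand out the per-step change of that prediction as the new immediate reward. RUDDER uses an LSTM and contribution analysis; the ridge-regression predictor and toy delayed-reward task below are stand-ins to illustrate only the redistribution step.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_episodes = 20, 500

# Delayed-reward toy task: only the action at t = 0 matters; the return
# (0 or 1) is paid out at the final step. Everything here is a stand-in.
actions = rng.integers(0, 2, size=(n_episodes, T))
returns = actions[:, 0].astype(float)          # G = a_0, observed only at the end

# Return predictor g(prefix): ridge regression on prefix features
# (the actions seen so far, zero-padded). The paper uses an LSTM instead.
def prefix_features(a_seq, t):
    f = np.zeros(T)
    f[: t + 1] = a_seq[: t + 1]
    return f

X = np.stack([prefix_features(a, T - 1) for a in actions])
w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(T), X.T @ returns)

def g(a_seq, t):
    return prefix_features(a_seq, t) @ w

# Redistributed reward: r'_t = g(prefix up to t) - g(prefix up to t-1).
episode = actions[returns == 1.0][0]           # pick an episode with return 1
redistributed = [g(episode, 0)] + [g(episode, t) - g(episode, t - 1) for t in range(1, T)]

print("sum of redistributed rewards:", round(float(sum(redistributed)), 3))  # ~ the return
print("reward assigned to t=0:", round(float(redistributed[0]), 3))          # ~ all of it
```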
We use supervised learning for identifying key events, which correspond to steps in the value function. And by doing this we are, in experiments, exponentially faster than TD methods. We are exponentially faster than Monte Carlo methods. We are exponentially faster than Monte Carlo tree search; all these previous methods do not work. And our assumption is that you have delayed reward and that it has to be model-free. As soon as you have a model, use model-based methods like Monte Carlo tree search. But for the hard challenges, use our method. For example, if you want to train an agent for a game, you have to make strategic decisions at the very beginning of the game, and at the end of the game it turns out whether you win or lose. And standard reinforcement learning methods cannot capture this. They cannot capture that a decision you made at the beginning is important at the end. But we can. We can do that, because we can identify reward-changing actions at the very beginning. So your claims sound quite dramatic. Can you help me understand, are there some limitations to this approach? So, limitations of our approach. One limitation is when the sequences are extremely long, since the method we are using is the LSTM, actually invented by myself. We use the LSTM, and we have problems with very long sequences. Another problem would be if there is no delayed reward; then we have an overhead and we introduce noise, which can hamper learning. But if you have delayed reward and you don't have a model, please use this, and don't use all the other stuff that is in the textbooks, forget about it.

Hi, my name is Nathan Lambert and I'm a PhD student at UC Berkeley, and from my internship at Facebook AI working under Roberto Calandra, we were studying what we call objective mismatch in model-based reinforcement learning. In model-based RL you have this dual optimization problem between training a dynamics model to be accurate and trying to optimize reward when you're doing control, and there is a missing link, because the dynamics model that we train is not being optimized to maximize reward. Sometimes we've seen exploitation, and this missing link in the process could be a limitation on the sample efficiency and max reward that you can get with these model-based RL algorithms. So how does your work address this? We started to address this by changing the weighting on the state, action, next-state pairs when we're training dynamics models, to start with a more optimal trajectory and reweight towards that, which focuses the dynamics model on one task, and we showed that you can get a bit better sample efficiency by reweighting there. But most of the work is showing how this issue emerges in control, and the ways people should be thinking about training their models.

So my name is Akhil Bagaria, and this is joint work with my advisor George Konidaris; we're both at Brown University. This poster is really about hierarchical reinforcement learning, and the idea is that usually reinforcement learning agents are doomed to make decisions at very fine timescales. Maybe they have to make a decision every nanosecond to decide how to move each of their muscles. What we'd like to do is lower the decision-making burden on the RL agent, and the way we do that is by learning some higher-level actions. So maybe as a human being you have some higher-level skills, for example opening the door, or holding the mic, or giving a presentation, rather than each muscular decision that you make.
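A small sketch of the re-weighting fix mentioned above: when fitting the dynamics model, weight each (state, action, next state) sample by its proximity to a reference, roughly optimal trajectory, so the model's capacity is spent where the controller needs it. The synthetic data, the kernel, and its width below are illustrative guesses, not the paper's recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D dynamics data (stand-in for a real replay buffer).
S = rng.uniform(-1, 1, size=(1000, 2))          # states
A = rng.uniform(-1, 1, size=(1000, 1))          # actions
true_W = np.array([[0.9, 0.1, 0.05],
                   [0.0, 0.95, 0.10]])
S_next = (np.hstack([S, A]) @ true_W.T) + 0.01 * rng.normal(size=S.shape)

# A reference "near-optimal" trajectory the controller cares about.
ref_traj = np.stack([np.linspace(-1, 1, 50), np.linspace(-1, 1, 50)], axis=1)

# Weight each sample by proximity to the reference trajectory (illustrative kernel).
dists = np.min(np.linalg.norm(S[:, None, :] - ref_traj[None, :, :], axis=-1), axis=1)
weights = np.exp(-dists / 0.2)

# Weighted least-squares fit of a linear dynamics model s' = W [s; a].
X = np.hstack([S, A])
WX = X * weights[:, None]
W_fit = np.linalg.solve(X.T @ WX + 1e-6 * np.eye(3), WX.T @ S_next).T

print("mean sample weight:", round(float(weights.mean()), 3))
print("fitted dynamics matrix:\n", np.round(W_fit, 3))
```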
We know that these skills can be formulated using the options framework that was proposed in 1999, and ever since then we've known that these options are useful, but we haven't had a good way to discover these options autonomously, just from interacting with the environment. So this work is an attempt at doing that autonomously for RL agents. Here, in this task, consider that there's a robot, and it has some start state and some goal that it needs to get to, and it needs to find some optimal way of getting to that goal. The way we create higher-level actions, or skills, in this scenario is that the agent will try to do this task again and again, and then it'll find some small modular skill that with very high probability can succeed in getting to the goal, and it turns out that this higher-level action is one that initiates somewhere near the goal. So in that way we recursively define what a useful skill is, which means that a skill should either solve the problem for you, or it should take you to a place from where you have high confidence of solving the problem. And if you do this recursively, it turns out you can construct a chain data structure that goes all the way from your goal state back to your initial state, and we've reduced the whole problem of making decisions over thousands of time steps into one which only requires four decisions in this case. The rest of the poster is basically our experiments and results with this algorithm, and we see that in complicated tasks which require decision making over long horizons, as well as those that have sparse rewards, you can get a huge speedup compared to flat reinforcement learning agents. In this case we compare our performance against DDPG, which is a popular state-of-the-art actor-critic method, and we see that in all of these cases DDPG has a really hard time even learning anything useful in the tasks we consider, whereas our method, by breaking down the problem into a series of subproblems, is able to gain a lot of sample efficiency.

I'm Kirill. This work was done with Blue River Technology. They're an agricultural AI startup, I guess not so much a startup anymore. Right now I'm working with the University of Calgary and part-time with Microsoft, kind of continuing the same work, and essentially the idea here is that we want to start using reinforcement learning on actual hardware rather than just in simulation. One of the problems that one of the speakers at yesterday's workshop, the biological and artificial RL workshop, was talking about is that all these control environments like MuJoCo or Bullet are very deterministic, and you're missing some information. Real sensors are noisy. Real actuators are imperfect, in the sense that they're not just square wave functions. So essentially what we want to do is actually get people to start trying their algorithms on real hardware, and that's kind of what we built.

Hi, my name is Leonard Adolphs and I'm from ETH Zurich, and I'm presenting LeDeepChef, a deep reinforcement learning agent for families of text-based games. This is joint work with my supervisor Thomas Hofmann, also from ETH Zurich. What we did in this work is develop an agent that can play these text-based games, and text-based games are an interesting area of research because they give us a small environment in which we can train reinforcement learning agents in the natural language domain.
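A toy illustration of the skill-chaining construction described a bit earlier (before the hardware and text-game segments): options are built backward from the goal, each new option's job being to reach the initiation region of the previously built option, so the high-level problem collapses to a handful of decisions. The corridor, the fixed option length, and the trivial option policies below are stand-ins for the learned components.

```python
import numpy as np

# Simplified backward skill chaining on a 1-D corridor (illustrative only):
# each "option" drives the agent K states to the right, and its initiation set
# is the K states just before its termination state, so options chain from
# the start state all the way to the goal.
n_states, K = 20, 5
goal = n_states - 1

options = []                       # each option: (initiation set, termination state)
target = goal
while target > 0:
    init_lo = max(0, target - K)
    options.append((range(init_lo, target), target))
    target = init_lo
options.reverse()                  # first option starts near the start state

def run_option(state, option):
    """Execute a (trivial) option policy: walk right until the termination state."""
    _, term = option
    while state < term:
        state += 1                 # a learned low-level policy would choose actions here
    return state

state, decisions = 0, 0
for opt in options:
    if state in opt[0]:
        state = run_option(state, opt)
        decisions += 1

print(f"reached state {state} (goal={goal}) with {decisions} high-level decisions "
      f"instead of {goal} primitive ones")
```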
And this is super interesting, because we can restrict the environment, and the commands that we can issue, and also the state that we get, to develop agents that then maybe eventually also perform on larger tasks. Exactly, and that's in one particular domain, which is the domain of cooking in a modern house environment. So this was the setting, and we had several different games, 500 different games, all from the same family somehow, but all different tasks and different environments, and everything was just given by natural language, and the agent could only interact through natural language. We developed an agent for this, and we outperformed all the vanilla baselines, and also competed in the First TextWorld Problems competition this year and ranked second among more than 20 competitors.

Hello, I'm Hardik Meisheri, I'm from TCS Research Mumbai, and I'm presenting my work here on accelerating training in Pommerman with imitation and reinforcement learning. I basically work on applying reinforcement learning to multi-agent scenarios and industrial optimization problems such as supply chain optimization, container loading into ships, bin packing, etc. The work that I'm presenting here is about the Pommerman game. We have developed a curriculum plus imitation learning framework where we are reducing the training time by an order of 10, and we are looking at something which can be generalized to other environments, given you have a noisy expert, or noisy expert samples, with you.

Hey, I'm Danijar and I'm a PhD student at the University of Toronto and a student researcher at Google Brain, with a paper here at NeurIPS titled Dream to Control: Learning Behaviors by Latent Imagination, where the goal is to learn a world model from experience over time, and then an RL agent uses this world model to learn long-horizon behaviors in imagination. Using this approach we can solve a range of new tasks and exceed the performance of both model-free and model-based methods. For this to work, we first learn a dynamics model from experience, and since the inputs are high-dimensional images in our case, we embed them into a compact sequence of latent states, which makes it much more efficient to predict forward. In this latent space, we then predict actions and values that are trained against each other to learn long-horizon behaviors. Do you want to comment on how this work differs from previous work from David Ha and Schmidhuber, or other related work? Sure. So David Ha was learning a world model one step at a time. He collects some random data and then trains a variational autoencoder to abstract the images. Then this is fixed, and you learn a recurrent neural network on top to model the transitions. And he then uses evolutionary optimization to find a linear policy in this latent space of the model. In contrast, we're here learning the whole model end to end, so the features encoded from the images can help with the long-term predictions of the model. We also learn larger policy networks, and we learn value functions, which lets us look beyond the planning horizon, or beyond the imagination horizon, and solve long-horizon tasks. And instead of the evolutionary optimization, we backpropagate through the differentiable dynamics, so we make use of the fact that the model is implemented as a neural network to efficiently optimize the parameters of the action model.
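A compressed sketch of the "learning behaviors by backpropagating through a differentiable latent model" idea described above. The latent dynamics and reward networks below are randomly initialized stand-ins (in the real method they are trained from experience, and a value function extends the horizon); the only point shown is the actor being improved by differentiating an imagined return through the model.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, action_dim, horizon = 8, 2, 15

# Stand-in world model: latent dynamics and reward predictor.
dynamics = nn.Sequential(nn.Linear(latent_dim + action_dim, 64), nn.ELU(),
                         nn.Linear(64, latent_dim))
reward_head = nn.Sequential(nn.Linear(latent_dim, 64), nn.ELU(), nn.Linear(64, 1))
actor = nn.Sequential(nn.Linear(latent_dim, 64), nn.ELU(),
                      nn.Linear(64, action_dim), nn.Tanh())

# Freeze the "world model"; only the actor is optimized in imagination.
for p in list(dynamics.parameters()) + list(reward_head.parameters()):
    p.requires_grad_(False)
opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

for step in range(200):
    z = torch.randn(16, latent_dim)          # imagined starting latents
    imagined_return = 0.0
    for t in range(horizon):
        a = actor(z)
        z = dynamics(torch.cat([z, a], dim=-1))
        imagined_return = imagined_return + 0.99 ** t * reward_head(z).squeeze(-1)
    loss = -imagined_return.mean()           # ascend the imagined return
    opt.zero_grad()
    loss.backward()                          # gradients flow through the dynamics
    opt.step()

print("final imagined return:", round(-loss.item(), 3))
```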
We evaluated Dreamer on 20 visual control tasks, where it outperforms both previous model-free and model-based methods, using less data, achieving higher final performance, and reducing wall-clock time. We also evaluated different representation learning methods for learning the dynamics model, and realized that pixel reconstruction still works best, but contrastive estimation is getting quite close to it. And the differences in performance really lead us to believe that further work on new representation learning methods would translate to higher performance of the agent as well. Can you remind us what contrastive estimation is? Yes, so the easiest way to learn the dynamics model would be to use pixel reconstruction, so it's basically learned as a sequential variational autoencoder. But pixel reconstruction can become difficult in environments with enough visual complexity. Contrastive estimation instead tries to classify, by summarizing the history of past images, whether the next image is a valid image in the same sequence or comes from a different sequence. The representation is then just the representation that this classifier learns.

Hello, my name is Seungchan Kim and I'm a master's student working with Professor George Konidaris at Brown University. My work here is adaptive temperature tuning for mellowmax in deep reinforcement learning. Mellowmax is a recently proposed alternative softmax operator in deep reinforcement learning, and there have been many previous works that combined mellowmax and deep Q-networks. One of the limitations of these previous algorithms is that mellowmax has an additional temperature hyperparameter that requires the use of exhaustive grid search to find optimal values. In this work we proposed a new adaptive online algorithm that can tune the temperature hyperparameter using a technique called meta-gradient reinforcement learning. The idea is that a meta-gradient reinforcement learning algorithm optimizes the return function itself by tuning the meta-parameters, and our idea is to set the temperature as a meta-parameter and update those parameters along with the updates of the original Q functions. We have presented some preliminary results in a simple domain, Acrobot, and showed that our new adaptive algorithm performs better than the previous ones, and we are planning to test this algorithm on larger-scale domains like Atari games. Thank you.

Hi. So this work is Meta-Learning Curiosity Algorithms. This is with Ferran Alet, and I'm Martin Schneider, and we're from MIT. We're interested in the paradigm in reinforcement learning where you can get an agent to explore by feeding it proxy rewards that encourage it to explore the environment. There's been a variety of work on hand-designing these exploration policies, but they often tend not to generalize very well between different types of environments. So we're interested in using meta-learning, with meta-learned programs and algorithms, to find new curiosity policies that will help agents generalize between drastically different environments. In our setup we define a space of curiosity algorithms, defined through a DSL and a program synthesis process. Our search algorithm then looks through the space of programs and finds new, potentially interesting curiosity algorithms out of that. We start by meta-training in a simple grid-world environment and then see that the good curiosity algorithms in that simple environment actually, maybe surprisingly, generalize well to more complicated environments like MuJoCo.
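For reference, the mellowmax operator mentioned above is commonly written as mm_ω(x) = log((1/n) Σ_i exp(ω x_i)) / ω, which interpolates between the mean of the action values (small ω) and their max (large ω). A small sketch of the operator follows; the adaptive, meta-gradient tuning of ω itself is not reproduced here.

```python
import numpy as np

def mellowmax(q_values, omega):
    """mm_omega(x) = log(mean(exp(omega * x))) / omega, computed stably."""
    x = np.asarray(q_values, dtype=float)
    m = (omega * x).max()                      # log-sum-exp trick
    return (m + np.log(np.mean(np.exp(omega * x - m)))) / omega

q = [1.0, 2.0, 3.0]
for omega in [0.1, 1.0, 10.0, 100.0]:
    print(f"omega={omega:>6}: mellowmax={mellowmax(q, omega):.3f}")
# Small omega -> close to the mean of q; large omega -> close to max(q).
print("mean:", np.mean(q), " max:", max(q))
```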
And in fact in MuJoCo our meta-learned algorithms are statistically on par with some of the hand-designed curiosity algorithms in the literature. And if you are interested in using our work and our code, it is online at bit.ly slash meta-algorithms. Can you give us an example of what types of things it looks for in a curiosity algorithm, like what are they in concrete terms? Yeah. So one of the advantages of learning algorithms here, rather than learning weights, is that you can sort of interpret a little bit the types of rewards that the agent is getting. One of the popular works from the literature, for example, tries to encourage the agent to get to new parts of the state space that it hasn't seen before. One of the meta-learned algorithms that emerged from our search process is one that we believe hasn't been explored before. This algorithm tries to encourage the agent to get to parts of the state space where it will start taking different actions. So it is encouraged to take one action in the current state and then, in the next state, take a different type of action. So my name is Patrick. I'm at the University of California, Berkeley. My poster is about predictive coding for boosting deep RL with sparse rewards. We are basically trying to apply contrastive predictive coding to the task of RL, where predictive coding tries to find an encoding of raw states into a form such that the mutual information between past states and future states is maximized. And we basically found that after applying this technique to sequences of state trajectories, the encoding is actually capable of understanding the environment dynamics and focuses on features that are most useful for learning. So we basically use this method for two ways of reward shaping, the first being clustering and the second being optimizing the negative distance. For the first one, we apply this on the grid world environment, where we try to basically group the different states in the grid world into clusters. And we found that after applying predictive coding, the clusters actually correspond to the natural positions in the grid, and so rewarding the agent for going to the cluster that contains the goal actually allows us to boost the learning. And for the second part, the negative distance, we applied it on a variety of MuJoCo continuous environments. And we basically found that applying this predictive coding allowed us to find the features that are most important for learning and actually flattened the environment structure, so that applying a simple negative distance reward is able to create a learning boost compared to just using the sparse reward. So that's the big picture. Hello, I'm Zhang Lehong. This is my work during an internship at Preferred Networks. My work is about swarm-inspired reinforcement learning via collaborative inter-agent knowledge distillation. This work is inspired by swarm intelligence. We implement swarm intelligence in a reinforcement learning framework. It can be imagined that a swarm of RL agents searches for the optimal policy to solve the same task. We use knowledge distillation to share the knowledge between the agents in the swarm. And in the experimental results, we improved on the state-of-the-art performance of Soft Actor-Critic in MuJoCo benchmarks. Thanks. My name is Nick Petosa. I'm with the Georgia Tech School of Interactive Computing and my research was on multiplayer AlphaZero.
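One toy way to read the meta-learned curiosity reward the MIT presenters describe, rewarding the agent for reaching states where it switches to a different action, is sketched below. The exact program their search produced is in their paper; the scale parameter and the comparison on raw actions here are illustrative assumptions.

```python
# Toy intrinsic reward: pay a small bonus when consecutive actions differ.
def action_change_bonus(prev_action, current_action, scale=0.1):
    """Returns a bonus added to the environment reward during a rollout."""
    return scale if current_action != prev_action else 0.0

# Hypothetical usage inside a rollout loop:
# r_total = r_env + action_change_bonus(a_prev, a_t)
print(action_change_bonus(0, 1), action_change_bonus(1, 1))  # 0.1 0.0
```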
So the original AlphaZero algorithm, as proposed and implemented by DeepMind, was focused on two-player games such as chess and Go. So the question that I wanted to explore was what happens when we bring this up to n players. I did this by making a few simple modifications to AlphaZero, specifically modifying the output of the deep network from outputting a value scalar for each state to instead outputting a value vector, so that each entry in the vector is the value for each player. And then, having done this, I change up the Monte Carlo tree search a little bit to take into account the index into the vector, and do the whole algorithm that way. With these two simple changes, I explore two very simple environments, which were three-player versions of Tic-Tac-Toe and Connect Four. And I find that multiplayer AlphaZero is successfully able to learn strong policies that can beat humans and beat uninformed MCTS agents at these games. This suggests that future research in this area might be promising. So my name is Mark Britton. I'm a PhD student at Iowa State University and the poster that we're presenting here is Prioritized Sequence Experience Replay. The main idea here is that we're trying to improve credit assignment in deep reinforcement learning settings, particularly off-policy. We follow a similar motivation to Prioritized Experience Replay: when we reach a surprising transition, which means that there's a state with a high TD error, we want to actually decay a portion of this priority along the sequence that we took to reach there. In doing this, we can use the sequential information in the replay buffer to improve the sample efficiency of the model. So what we show here is that in the blind cliff walk environment, as we track the mean squared error between the true Q value and the predicted Q value, we find that Prioritized Sequence Experience Replay drastically improves upon both PER and uniform sampling and gets closer to an oracle that has perfect information. We then evaluate this result on Atari against DQN with Prioritized Experience Replay and DQN with a uniform sampling strategy. And we find that the relative performance between Prioritized Sequence Experience Replay and Prioritized Experience Replay is very drastic: PSER improves upon PER in the majority of the games and outperforms uniform in all the games as well. So this is recurrent neural-linear posterior sampling for non-stationary bandits. This is work done at IDSIA, together with Jürgen Schmidhuber, Aditya Ramesh and Paulo Rauber. So the idea is very simple. We are trying to use a recurrent neural network to predict the reward for each given arm in a non-stationary multi-armed bandit problem. And in order to deal with exploration, we apply posterior sampling. OK, so I'm Asier Mujika, and this is the Improving Gradient Estimation in Evolutionary Strategies With Past Descent Directions paper. The idea is, essentially, that in evolutionary strategies we make random perturbations to our current search point to try to estimate the gradient and go in the descent direction of the estimated gradient. Recently, the idea of using biased gradients to estimate this gradient better was introduced in the paper Guided Evolutionary Strategies. And essentially we build on that work and we try to find better ways of using biased gradient estimates, by essentially just taking a sample in the positive and negative direction of this biased gradient.
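A minimal sketch of that idea, probing in the positive and negative direction of a reused gradient estimate alongside random antithetic probes, is below. The way the probes are weighted here is a plain finite-difference combination chosen for illustration, not necessarily the paper's exact estimator.

```python
# ES-style gradient estimate that also probes along the previous gradient direction.
import numpy as np

def es_gradient(f, x, prev_grad, sigma=0.1, n_random=8):
    directions = [np.random.randn(x.size) for _ in range(n_random)]
    norm = np.linalg.norm(prev_grad)
    if norm > 1e-8:
        directions.append(prev_grad / norm)          # past descent direction as an extra probe
    grad = np.zeros_like(x)
    for d in directions:
        diff = f(x + sigma * d) - f(x - sigma * d)   # antithetic evaluation
        grad += diff / (2.0 * sigma) * d
    return grad / len(directions)

# Toy usage: minimize a quadratic, feeding back the previous estimate as momentum.
f = lambda x: float(np.sum(x ** 2))
x, g = np.ones(5), np.zeros(5)
for _ in range(200):
    g = es_gradient(f, x, g)
    x -= 0.05 * g
print(round(f(x), 4))   # should be near zero
```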
And what we show is that this is essentially better than what was done before in the literature. Additionally, one of the main caveats of the previous work was that you needed to know how good this biased gradient was. In our case, this is not necessary: if the biased gradient is bad, our algorithm will discard it automatically. And this lets us use some kind of momentum in the search space. So as the biased gradient estimate, we use the previous gradient estimate. And this works really well on linear functions, where of course the gradient never changes. In the case of general functions, it depends on higher order terms. And then, essentially, we run a bunch of toy experiments on quadratic functions and on MNIST, where we show that what we predicted in the theory kind of happens: using biased gradients is good until they become too biased, and using this momentum actually helps, especially for smaller learning rates. We tried to run some experiments in RL in the OpenAI Roboschool environment, but unfortunately this doesn't work so well so far. We believe that the noise in the gradient estimation kind of guides exploration in ES. There are a lot of moving parts going on in reinforcement learning, and it may be that just better gradient estimates do not necessarily result in better performance in RL. And that's about it. Hi, my name is Daniel Seita and I'm a PhD student in computer science at the University of California, Berkeley. I work in the area of machine learning and robotics. This particular project is about, given a set of teachers that we have at our disposal for a given learner, how do we pick the right teacher at the right time? The different teachers for any given environment are saved from a sequence of snapshots at equally spaced intervals throughout a training run. The teachers from earlier snapshots had very low reward and teachers from later snapshots had higher reward. We're investigating, for a Q-learning-based agent where each minibatch is a mix of student self-generated data and data from the teacher, which teacher selection function should be used. At any given iteration, we have the option of keeping our teacher or using a different teacher, that kind of thing. The main highlight of our work is that it is generally not ideal to always pick the very best teacher, the very highest-reward teacher. Sometimes it's better to pick teachers that are only a little bit better than the student in terms of reward, and some hypotheses are that we may want to avoid overfitting to a teacher, or that we may want a teacher that has a data distribution similar to the learner's. So thank you for your attention. My name is Luckeciano. I'm from Brazil, from the Aeronautics Institute of Technology. My work is Bottom-Up Meta-Policy Search, or BUMPS. In this work we basically use a few expert policies to conduct meta-training, so that we are able to sample policies from this meta-policy and then solve unseen tasks. It's basically a meta-learning algorithm that uses imitation learning to conduct the whole process. So I'm Yannis Flet-Berliac from SequeL, Inria, and the University of Lille in France. I will shortly give you an overview of MERL, Multi-Head Reinforcement Learning. So in this work we want to facilitate representation learning for better sample efficiency and improved final performance.
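A toy version of the teacher-selection finding Daniel Seita describes above, preferring a snapshot only a little better than the student over the strongest one, might look like the following; the margin parameter and the fallback choice are illustrative assumptions rather than the paper's selection function.

```python
# Pick the weakest teacher snapshot that is still better than the student.
def pick_teacher(teacher_rewards, student_reward, margin=0.0):
    """teacher_rewards: reward of each saved snapshot, in any order."""
    better = [r for r in teacher_rewards if r > student_reward + margin]
    return min(better) if better else max(teacher_rewards)

print(pick_teacher([10, 50, 120, 300], student_reward=45))   # -> 50, not 300
```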
So we want to maximally use each of the agent's interactions with the environment. So we propose a method that is technically applicable to any policy-gradient method and environment, so it's a real plug-and-play method. Instead of using prior knowledge, which is task-dependent, we want to use problem knowledge, so self-performance assessment and accurate expectations. MERL incorporates the fraction of variance explained, Vex, as an auxiliary task: a measure of the discrepancy between the estimated state-value function and the observed returns. It is formalized by the fraction of variance explained from a paper from 1985. And we observe better performance on nine continuous control MuJoCo tasks. We also wanted to see if the method could better transfer the learning. So we chose five Atari games with the same number of actions, and we again observe the same better results on these tasks. So one piece of future work we are interested in is to find even more quantities for accurate expectations or self-performance assessment, and to better study the correlation and the effect of their combinations. Hi, I'm Bowen Baker. I'm a research scientist at OpenAI working on the multi-agent team, and I'm going to briefly describe our recent work on emergent autocurricula and agents emergently learning to use tools. We looked at the game of hide-and-seek in a physically grounded and embodied environment. And the reason that we look at this game is that we've seen recently the power of these arms races in video games to produce wildly complex behavior. We've seen it in Go and StarCraft and Dota. However, it's really unlikely, or it's hard to believe, that an agent in those games would eventually pop out and solve a task that's actually useful to humans. And so we set out to see if we could induce those arms race dynamics in a more physically grounded and maybe human-analogous environment. And what we found is that when we put agents into this physical world, just playing the game of hide-and-seek, they go through six distinct, semantically different rounds of strategy, each one very different from the last. And you see that because once one team learns a skill, it creates a new pressure for the other team to adapt. And we're also excited about that because of the analogies to our own evolution on Earth. And so the general hope for this type of work, and I think all works like this, is moving towards environments and games where agents are creating tasks for each other, rather than the RL experimenters designing the suite of tasks. And maybe we'll actually be able to have something truly intelligent emerge from these. I am Daniel Graves from Huawei Technologies Canada, based in Edmonton. And I'm presenting learning an off-policy predictive state representation with deep reinforcement learning for vision-based steering in autonomous driving. So what this project is really all about is that we want to come up with a way to learn better representations for deep reinforcement learning. So we have a task, which is to steer a vehicle based on images only. Given just a front image from a vehicle, how can we decide on the right steering angle to choose to ensure that the vehicle remains centered in the lane? So the approach that we took is actually to build a very compact predictive state representation based off of something called general value functions, which is a way of predicting the future with reinforcement learning, actually.
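The auxiliary quantity mentioned in the MERL segment above, the fraction of variance explained, can be computed from observed returns and the critic's value estimates as in the small sketch below. This is the standard formula; how it is wired into the policy-gradient loss is left to the paper.

```python
# Fraction of variance explained: 1 - Var(returns - values) / Var(returns).
import numpy as np

def fraction_of_variance_explained(returns, values):
    returns, values = np.asarray(returns), np.asarray(values)
    residual_var = np.sum((returns - values) ** 2)
    total_var = np.sum((returns - returns.mean()) ** 2)
    return 1.0 - residual_var / total_var

print(fraction_of_variance_explained([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))  # 0.98
```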
And this approach of general value functions actually was originally developed by Richard Sutton, and the predictive state representation idea is also an idea that's decades old. But what we're doing here is applying it to the deep reinforcement learning setting, where we learn a very compact set of predictions. Now these predictions are actually based off of the lane centeredness of the vehicle and the orientation, the angle orientation or offset, of the vehicle with respect to the road. And so if we predict these at different temporal horizons, we actually capture the lane information of the road, which we can use as essentially an error of how far off our vehicle would be if it kept driving in a straight line from where it's currently at. So with this, we can actually form a very simple policy, which is to try to minimize that error. And so we just form a linear policy on top of that, using reinforcement learning in a very shallow sense, to learn very quickly instead of having to learn the entire policy right from the image. It learns faster and it tends to steer more comfortably, with a lot less jitter. It's more robust and more general to unseen tracks when we tested it. So overall it's very promising. And just last week, before I came here, we applied it to a real robot, trained from real robotic data. It's not on the poster here, but it actually works surprisingly well. So it's very good. So our paper is called Multi-Task Reinforcement Learning without Interference. This is joint work with collaborators including Karol and Chelsea, from Stanford, Berkeley and Google. So the goal of this work is to build more general-purpose robots such that they can solve multiple tasks. Deep reinforcement learning makes it possible to train robots with different skills, but multi-task training also suffers from optimization challenges. We hypothesize that this may be caused by conflicting gradients, which is basically characterized by negative cosine similarity between two task gradients, and that it might also lead to unexpected degradation due to high curvature. So we propose the projecting conflicting gradients, or PCGrad, method, which projects one of the conflicting gradients onto the normal plane of the other one. And if the two gradients are not conflicting, we leave them as they are. We applied this to multi-task supervised learning and multi-task reinforcement learning benchmarks, and observed that our method can learn much more efficiently than training tasks independently, and we can also outperform several multi-task learning baselines. So my name is George Tucker and I work at Google Brain. We're presenting our work on behavior-regularized offline reinforcement learning, which was done primarily by Yifan Wu, who interned with us this past summer. This work was primarily an empirical evaluation of different offline RL algorithms. So we're focusing on the setting where you just get a batch of offline data. You don't get to interact with the environment, but you want to do better than the behavior policy. And we evaluated a couple of recent papers, primarily one called batch-constrained Q-learning and another one, BEAR. And we compared them to a simple baseline that just regularizes with KL divergence to the behavior policy. And what we find is that when we hyperparameter-tune the baseline carefully, we can get similar, and in some cases better, performance than either of the two recent papers across the benchmark domains.
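To make the baseline George Tucker mentions concrete, here is a minimal sketch of a KL-regularized actor objective for a single state with discrete actions. The alpha weight and the toy numbers are illustrative assumptions; the actual evaluation works with continuous-control benchmarks and a behavior policy estimated from data.

```python
# Trade off estimated Q-value against staying close to the behavior policy.
import numpy as np

def regularized_actor_objective(pi, behavior_pi, q_values, alpha=0.1):
    """Maximize E_pi[Q(s, a)] - alpha * KL(pi || behavior_pi) for one state."""
    pi, behavior_pi, q_values = map(np.asarray, (pi, behavior_pi, q_values))
    expected_q = np.sum(pi * q_values)
    kl = np.sum(pi * np.log(pi / behavior_pi))
    return expected_q - alpha * kl

print(regularized_actor_objective([0.7, 0.2, 0.1], [0.5, 0.3, 0.2], [1.0, 0.5, 0.2]))
```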
And I think the main takeaway from this paper is not that there's something wrong with the previously proposed works; it's really about our baselines and benchmarks. Our benchmarks are too easy. Basically all the methods perform very similarly, and we're not able to tell apart the algorithmic innovations and what actually matters on these easy benchmarks. So that's a call to action, really, to improve our benchmarks and also to improve our evaluation protocol. We need to be very careful about making sure that we're giving the baseline algorithm a fair chance. Hi, my name's Ben Eysenbach and I'm going to talk about a poster called If MaxEnt Reinforcement Learning Is the Answer, What Is the Question? The motivation for this work is that maximum entropy reinforcement learning is an algorithm that's very popular in reinforcement learning today. And it's popular not only in the reinforcement learning community, but has also been observed in nature. And so the question we try to answer is: when is MaxEnt reinforcement learning the optimal thing to do? One thing that's clear is that MaxEnt reinforcement learning is not the optimal thing to do if we want to maximize reward. So when is it the optimal thing to do? Is nature just irrational? In this paper we show that MaxEnt reinforcement learning is the optimal thing to do in two certain cases that involve variability or uncertainty in our reward function. My name is Felipe Leno da Silva. I'm a researcher from the University of São Paulo and my paper is called Uncertainty-Aware Action Advising for Deep Reinforcement Learning Agents. So the main idea of my paper is that agents applying reinforcement learning usually take a very long time to learn a very good policy. But in some situations you might already have a competent policy available, like for example a human who is able to provide guidance to the agent, or you might have a legacy system or something like this, and your learning agent might ask this policy for action suggestions in order to learn faster. The main problem is when the agent should ask for those suggestions, because you want your agent to ask for suggestions only when it doesn't yet have a good policy for performing the task. So we have proposed an algorithm in which the agent has a confidence function, and when the uncertainty of the agent about applying an action in a given state is high, then the agent will ask for suggestions. When the uncertainty is low, it means that the agent already has a good policy for that state, so it doesn't need to ask for suggestions. And we have compared our algorithm with other similar teacher-student frameworks and we have shown that in general our algorithm improves the learning performance. Hi, I'm Rishabh and I'm presenting the poster Striving for Simplicity in Off-Policy Deep Reinforcement Learning. In this work we show that if you do offline deep reinforcement learning on a dataset collected from a DQN agent, you can actually outperform some of the recent state-of-the-art agents, including online C51. That is, if you collect data from the DQN from 2013 and just do a good job of optimization using something simple like random ensembles, you can actually match the performance of online C51. So I'm Raj, I'm a PhD student at Georgia Tech, and I developed Jericho, which is what this paper is about. It's talking about interactive fiction games as a domain for exploring the mix of natural language processing and reinforcement learning.
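A toy sketch of the advising rule Felipe describes above is given below, using the variance across an ensemble of Q-heads as the uncertainty estimate. That particular estimator and the threshold are assumptions made for illustration, not necessarily the paper's confidence function.

```python
# Ask the teacher only when the learner's uncertainty about the state is high.
import numpy as np

def choose_action(q_ensemble, teacher_action, uncertainty_threshold=0.5):
    """q_ensemble: array of shape (n_heads, n_actions) for the current state."""
    q_ensemble = np.asarray(q_ensemble)
    uncertainty = q_ensemble.var(axis=0).mean()          # disagreement between heads
    if uncertainty > uncertainty_threshold:
        return teacher_action                            # defer to the teacher
    return int(q_ensemble.mean(axis=0).argmax())         # act on the agent's own policy

print(choose_action([[1.0, 0.2], [0.9, 0.3], [1.1, 0.1]], teacher_action=1))  # confident -> 0
```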
And so the overall idea here is that we wanted to have a framework which is kind of like OpenAI Gym, but for text games. And the main reason why you want to study text games is that they let you explore these challenges at the intersection of RL and NLP without the sort of costly interaction that you get, say, if you're trying to train something like a chatbot. And so a few of the challenges we've outlined here on this poster: the first is that of knowledge representation, so they're partially observable environments. An agent sees and talks to the world purely through textual natural language, and the descriptions that it gets from the world can potentially be incomplete. It doesn't have access to the world state at any given point. The second is sort of that of commonsense reasoning, and it boils down to the question of how the agent figures out how to do things with the objects that it has, or how it can interact with commonplace objects. So say the agent comes across a mailbox in the world. We as humans know that a normal way to interact with a mailbox would be to try and open it. But how do we get this to the agent? The agent could try and eat the mailbox, you know, instead of opening it, try to chew the lid off, but just opening it normally would probably be more effective straight up. And then the final thing is that of the combinatorial action space, and this is sort of what we really focus on here. So in terms of the combinatorial action space: in a popular text game, you're required to generate actions that are four words in length. And so your action space is your vocabulary to the power of four, which in a popular text game like Zork means that you have about 240 billion actions at every single step. And so the question really becomes how you adapt RL algorithms to deal with this sort of ridiculously large action space. And so to help with this, that's why we came up with this framework called Jericho. We introduce a set of handicaps in this framework to help agents ease into the task and more intelligently interact with this environment. A couple of the handicaps that we have: the first handicap is that of the template-based action space. So the template action space is, you know how I said you have to generate four to five word actions, well, a lot of combinations of four to five words just don't make sense, right, they're ungrammatical. And so it turns out what you can do is group verbs and prepositions together to basically generate a series of action templates. So a template would be something like take blank from blank, or, you know, push something, take something, open something, and so on. And so the problem goes from having to generate a bunch of words in a row to the problem of having to generate a template and then fill in the template with the correct objects that you want to interact with. And so this reduces your action space down from this 240 billion per step to about a hundred million or so. So it's still kind of big, but it's a little bit more manageable. And it turns out we have another handicap, so you can actually go one step further than that. This concept of a template action space eliminates some of the ungrammatical actions, but there are still some combinations that just don't make sense.
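A small illustration of the template action space just described is below: the agent picks a verb template and fills its blanks with candidate objects instead of generating free-form word strings. The templates and objects here are made up for the example; Jericho extracts the real ones from each game.

```python
# Expand a few action templates against a small set of candidate objects.
from itertools import product

templates = ["open {}", "take {}", "take {} from {}", "push {}"]
objects = ["mailbox", "leaflet", "door"]

def expand(template, objs):
    n_blanks = template.count("{}")
    return [template.format(*combo) for combo in product(objs, repeat=n_blanks)]

actions = [a for t in templates for a in expand(t, objects)]
print(len(actions))      # 3 + 3 + 9 + 3 = 18 candidate actions for this tiny state
print(actions[:4])
```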
So if you try to say, you know, "take gothic from door", what does that even mean? So we introduced the concept of a valid action. In this case a valid action is an action that is grammatically correct, contextually relevant, and guaranteed to produce change in the world in any given state. And Jericho has the ability to detect the set of valid actions in any given game state. In terms of the size of the action space, the number of valid actions is really only about a hundred or so per step. So with this handicap you've gone from this hundred-million action space to a hundred-action space. And on top of this, we introduce a bunch of baseline agents that are designed to use each of these handicaps. The first baseline agent is the DRRN, which is essentially Q-learning over the valid actions; it's learning to score the correct valid actions. The other one is the Template-DQN, which produces independent Q-value estimates over both the templates and the objects. So this is the one using the template action space vocabulary, the hundred-million-size one. And then on top of that we have two additional agents, one of which is a random agent, which basically picks from a common set of actions that can be performed in any text game, so actions like moving around, navigating, picking up objects, putting down objects, and so on. And then we have another agent called NAIL, which is a heuristic-based agent. This heuristic-based agent isn't trained on any one game per se, but is designed to play all sorts of interactive fiction games in general. And this framework, this Jericho framework as a whole, has a lot of games. We've got about 30 games in total that you see here, so it's a wide variety. We support a lot of different games, everything from slice-of-life, home, salaryman walking-simulator type things to Lovecraftian horror games. So it's a wide variety of genres, game structures, reward functions and so on. And the gist of the results is that when we tested these agents on these games, the best performing agent is the DRRN, because of course it uses all the handicaps that we have. But even this agent really only gets to a normalized completion of about 11 percent across all of this. And so what this is really telling us is that there's a lot that current RL algorithms cannot do in terms of the challenges presented by interactive fiction games in general, and that there is a lot of space for improvement in this area. So we really hope that more people will work in this area to explore ideas at the intersection of NLP and RL. Hello, I'm Adam Stooke. I'm a PhD student with Pieter Abbeel at UC Berkeley and I'm showing off a poster here on rlpyt, a deep reinforcement learning library that we've recently come out with in PyTorch. It's intended to be a high-throughput, research-oriented code base for doing reinforcement learning. Some interesting things about it: it's one of the first code bases to incorporate reinforcement learning algorithms from all three of the major families, from policy gradient to deep Q-learning to Q-value-based policy gradient. Whereas historically they've kind of been implemented maybe one-off in separate repositories, finally they're all in one place here together. And it turns out they share a lot of common infrastructure code that has to do with the reinforcement learning workflow.
So what we've done is we've optimized that infrastructure code and built in many convenient options for parallelizing the workflow, either across multiple CPUs or multiple GPUs. And on top of this we've written a very modular implementation of all the algorithms, so that it should be easy for researchers to pick up, make modifications, and put in their own enhancements to push the field forward. And the code is open source and available now at github.com/astooke/rlpyt, and we also have a white paper you can look for on arXiv that gives a conceptual overview of the design decisions that we put into the code base. Thanks. My name is Tanmay Agarwal, and my colleague Hitesh and I are students at Carnegie Mellon University. Today we are going to talk about our paper on learning to drive using waypoints, which combines low-level navigational markers called waypoints with high-dimensional images using deep reinforcement learning. Currently, most traditional autonomous driving pipelines are highly modularized, with different subsystems for localization, perception, prediction, planning and control. However, these modules require much hand engineering and are highly prone to generalization errors, which raises an interesting research direction: to study deep reinforcement learning for autonomous driving, with its potential to generalize to unseen scenarios. In this work we propose an architecture that comprises a convolutional autoencoder and a policy network. The convolutional autoencoder learns a latent representation of a semantically segmented input image, which is then combined with waypoint features to form the input to our policy network. The policy network comprises a two-layer multilayer perceptron, which, along with the autoencoder network, is trained simultaneously and independently. We demonstrate this using the CARLA simulator, wherein we train our RL agents using the model-free, on-policy proximal policy optimization algorithm. Our learned RL agents are then evaluated on the benchmark tasks that have four increasingly difficult scenarios, right from driving straight to driving in town between any two points with other vehicle actors. We show that our agents learn to drive well from scratch, without any pre-training or expert demonstrations. We also show comparable performance to the imitation-learning-initialized baseline for the most complex task with other vehicle actors. This work thus demonstrates preliminary results on how deep reinforcement learning can scale to autonomous driving. We plan to extend this work further by learning better state representations that encode other vehicle actors' dynamics, as well as comparing this work with other model-free RL algorithms, towards improving the sample efficiency of our proposed method. This is TalkRL, all reinforcement learning all the time. Subscribe at talkrl.com slash subscribe.
}, { "end": 1287.4, "start": 1282.64, "text": " And in this work we proposed a new adaptive online algorithm that can tune the temperature" }, { "end": 1291.48, "start": 1287.4, "text": " hyperparameter using a technique called meta graded reinforcement learning." }, { "end": 1296.6000000000001, "start": 1291.48, "text": " The idea is that meta graded reinforcement learning algorithm optimizes the return function" }, { "end": 1299.3600000000001, "start": 1296.6000000000001, "text": " itself by tuning the meta parameter." }, { "end": 1304.0800000000002, "start": 1299.3600000000001, "text": " And our idea is to set the temperature as a meta parameter and update those parameters" }, { "end": 1307.16, "start": 1304.0800000000002, "text": " along the updates of the original Q functions." }, { "end": 1314.48, "start": 1307.16, "text": " So we have presented some pre-liminator results in a simple domain acrobot and showed" }, { "end": 1318.68, "start": 1314.48, "text": " that our new adaptive algorithm performs better than the previous ones." }, { "end": 1323.0800000000002, "start": 1318.68, "text": " And we are planning to test this algorithm on largest scale domains like Atari Games." }, { "end": 1324.0800000000002, "start": 1323.0800000000002, "text": " Thank you." }, { "end": 1326.0800000000002, "start": 1324.0800000000002, "text": " Hi." }, { "end": 1328.44, "start": 1326.0800000000002, "text": " So this work is meta learning curiosity algorithm." }, { "end": 1331.8400000000001, "start": 1328.44, "text": " So we're for Anilette and I'm Martin Schneider and we're from MIT." }, { "end": 1336.0400000000002, "start": 1331.8400000000001, "text": " We're interested in the paradigm in reinforcement learning where you can get an agent to explore" }, { "end": 1340.32, "start": 1336.04, "text": " by feeding it proxy rewards that encourage it to explore the environment." }, { "end": 1344.52, "start": 1340.32, "text": " There's been a variety of work on hand designing these exploration policies but they often" }, { "end": 1348.28, "start": 1344.52, "text": " tend to not generalize very well between different types of environments." }, { "end": 1352.72, "start": 1348.28, "text": " So we're interested in using meta learning with meta learning programs and algorithms" }, { "end": 1357.52, "start": 1352.72, "text": " to find new curiosity policies that will help it generalize between drastically different" }, { "end": 1358.84, "start": 1357.52, "text": " environments." }, { "end": 1365.28, "start": 1358.84, "text": " So in our setup we define space of curiosity algorithms to find through a DSL and a program" }, { "end": 1366.76, "start": 1365.28, "text": " synthesis process." }, { "end": 1371.28, "start": 1366.76, "text": " Our search algorithm then looks through the space of programs and finds new potentially interesting" }, { "end": 1373.32, "start": 1371.28, "text": " curiosity algorithms out of that." }, { "end": 1379.52, "start": 1373.32, "text": " We start by meta training in a simple grid-world environment and then see that the good curiosity" }, { "end": 1383.8, "start": 1379.52, "text": " algorithms in that simple environment actually may be surprisingly generalize well to more" }, { "end": 1386.28, "start": 1383.8, "text": " complicated environments like Mojoco." 
}, { "end": 1391.12, "start": 1386.28, "text": " And in fact in Mojoco our meta learned algorithms are statistically on par with some of the hand" }, { "end": 1394.04, "start": 1391.12, "text": " design, curiosity algorithms in the literature." }, { "end": 1399.72, "start": 1394.04, "text": " And if you are interested in using our work and using our code it is online at bit.ly slash" }, { "end": 1402.48, "start": 1399.72, "text": " meta-algrathems." }, { "end": 1407.72, "start": 1402.48, "text": " Can you give us an example of what types of things it looks for in a curiosity algorithm" }, { "end": 1410.76, "start": 1407.72, "text": " like Warrout in concrete terms?" }, { "end": 1411.76, "start": 1410.76, "text": " Yeah." }, { "end": 1416.48, "start": 1411.76, "text": " So one of the advantages of learning algorithms here rather than learning weights is that you" }, { "end": 1421.1599999999999, "start": 1416.48, "text": " can sort of interpret a little bit the types of rewards that the agent is getting." }, { "end": 1425.3200000000002, "start": 1421.16, "text": " One of the popular works from the literature for example tries to encourage the agent to" }, { "end": 1428.88, "start": 1425.3200000000002, "text": " get to new parts of the state space that it hasn't seen before." }, { "end": 1433.8400000000001, "start": 1428.88, "text": " One of the meta learned algorithms that emerged from our search process is one that we believe" }, { "end": 1436.3600000000001, "start": 1433.8400000000001, "text": " hasn't been explored before." }, { "end": 1441.0400000000002, "start": 1436.3600000000001, "text": " And our this algorithm tries to encourage the agent to get to parts of the state space" }, { "end": 1443.16, "start": 1441.0400000000002, "text": " while it will start making different actions." }, { "end": 1448.48, "start": 1443.16, "text": " So it encourage to take one action in the current state and then in the next state take a different" }, { "end": 1450.0400000000002, "start": 1448.48, "text": " type of action." }, { "end": 1451.6399999999999, "start": 1450.04, "text": " So my name is Patrick." }, { "end": 1454.32, "start": 1451.6399999999999, "text": " I'm in the University of California Berkeley." }, { "end": 1459.84, "start": 1454.32, "text": " So my poster is about predictive coding for boosting deep RL with sparse rewards." }, { "end": 1464.96, "start": 1459.84, "text": " So we basically are trying to apply contrastive predictive coding to the task of RL." }, { "end": 1469.68, "start": 1464.96, "text": " Where predictive coding tries to find encoding of raw states into a form such that the past" }, { "end": 1472.6, "start": 1469.68, "text": " states and future states mutual information is maximized." }, { "end": 1477.76, "start": 1472.6, "text": " And we basically found that after applying this technique to sequence of state trajectories" }, { "end": 1482, "start": 1477.76, "text": " the encoding is actually clear of all of understanding the environment dynamics and focus" }, { "end": 1484.76, "start": 1482, "text": " on features that are most useful for learning." }, { "end": 1488.96, "start": 1484.76, "text": " So we basically use this method for two ways of rewards shaping." }, { "end": 1493.52, "start": 1488.96, "text": " The first way being classic and the second way being optimized the negative distance." 
}, { "end": 1499.08, "start": 1493.52, "text": " So for the first one we apply this on the grid world environment where we try to basically" }, { "end": 1502.6, "start": 1499.08, "text": " class the different states in the grid work into clusters." }, { "end": 1506.04, "start": 1502.6, "text": " And we found that after applying predictive coding the classes actually correspond to the" }, { "end": 1508.32, "start": 1506.04, "text": " natural positions in the grid." }, { "end": 1512.76, "start": 1508.32, "text": " And so awarding the agent to go to the class that comes in to go actually allow us to" }, { "end": 1516.48, "start": 1512.76, "text": " boost the learning by wherever our chair goes." }, { "end": 1520.8, "start": 1516.48, "text": " And for the second part for the negative distance we applied on variety of geocode continuous" }, { "end": 1522.12, "start": 1520.8, "text": " environments." }, { "end": 1529.1599999999999, "start": 1522.12, "text": " And we basically found that applying this predictive coding allowed us to find features that" }, { "end": 1534.6, "start": 1529.1599999999999, "text": " are most important for learning and actually flattened the environment structure so that" }, { "end": 1540.9199999999998, "start": 1534.6, "text": " applying simple negative distance is able to create learning boost compared to just using" }, { "end": 1542.3999999999999, "start": 1540.9199999999998, "text": " as far as the work case." }, { "end": 1544.8, "start": 1542.3999999999999, "text": " So that's like the big picture." }, { "end": 1546.6399999999999, "start": 1544.8, "text": " Hello, I'm Zhang Lehong." }, { "end": 1552, "start": 1546.6399999999999, "text": " This is my work during the internship in preferred networks." }, { "end": 1556.8, "start": 1552, "text": " My work is about swarmed inspired reinforcement learning via collaborative inter-egent knowledge" }, { "end": 1558.6799999999998, "start": 1556.8, "text": " installation." }, { "end": 1562.32, "start": 1558.6799999999998, "text": " This work is inspired by swarmed intelligence." }, { "end": 1566.72, "start": 1562.32, "text": " We implement swarmed intelligence in reinforcement learning framework." }, { "end": 1574.36, "start": 1566.72, "text": " It can be imagined that a swarm of aerial agents search for the optimal policy to solve the" }, { "end": 1575.36, "start": 1574.36, "text": " same task." }, { "end": 1581.48, "start": 1575.36, "text": " We use the knowledge knowledge installation to share the knowledge between each agent in" }, { "end": 1583.28, "start": 1581.48, "text": " the swarm." }, { "end": 1590.8, "start": 1583.28, "text": " And in the experiment result we achieved, we improved the state of the R. The performance" }, { "end": 1594.32, "start": 1590.8, "text": " of a software critical in musical benchmarks." }, { "end": 1596.08, "start": 1594.32, "text": " Thanks." }, { "end": 1597.08, "start": 1596.08, "text": " My name is Nick Patosa." }, { "end": 1602.6, "start": 1597.08, "text": " I'm with the Georgia Tech School of Interactive Computing and my research was on multiplayer Alpha" }, { "end": 1603.6, "start": 1602.6, "text": " Zero." }, { "end": 1608.44, "start": 1603.6, "text": " So the original Alpha Zero algorithm as proposed and implemented by D-Mind was focusing" }, { "end": 1610.9199999999998, "start": 1608.44, "text": " on two player games such as Chess and Go." 
}, { "end": 1614.68, "start": 1610.9199999999998, "text": " So the question that I wanted to explore was what happens when we bring this up to end" }, { "end": 1616.28, "start": 1614.68, "text": " players." }, { "end": 1622.08, "start": 1616.28, "text": " So I did this by making a few simple modifications to Alpha Zero, specifically modifying the output" }, { "end": 1626.48, "start": 1622.08, "text": " of the D-Network from outputting a value vector for each, sorry, value scale for each state" }, { "end": 1628.24, "start": 1626.48, "text": " to instead output a value vector." }, { "end": 1631.12, "start": 1628.24, "text": " That way each entry in the vector is the value for each player." }, { "end": 1635.92, "start": 1631.12, "text": " And then doing this, I change up the Monte Carlo research a little bit to take into account" }, { "end": 1639.76, "start": 1635.92, "text": " the index into the vector and do the whole algorithm that way." }, { "end": 1644.6, "start": 1639.76, "text": " And with these two simple changes, I explore two very simple environments, which were three" }, { "end": 1647.9199999999998, "start": 1644.6, "text": " player versions of Tic Tac Toe and Connect For." }, { "end": 1654.1599999999999, "start": 1647.9199999999998, "text": " And I find that the multiplayer Alpha Zero successfully able to learn strong policies" }, { "end": 1659.32, "start": 1654.1599999999999, "text": " that can be humans and be uninformed MCTS agents at these games." }, { "end": 1663.3999999999999, "start": 1659.32, "text": " So suggest that future research in the series might be promising." }, { "end": 1664.3999999999999, "start": 1663.3999999999999, "text": " So my name is Mark Britton." }, { "end": 1668.76, "start": 1664.3999999999999, "text": " I'm a PhD student in Iowa State University and our poster that we're presenting here" }, { "end": 1670.9599999999998, "start": 1668.76, "text": " is Priority Sequence Experience Replay." }, { "end": 1676.16, "start": 1670.96, "text": " So the main idea here is we're trying to improve the credit assignment and deep reinforcement" }, { "end": 1678.6000000000001, "start": 1676.16, "text": " learning settings, particularly off policy." }, { "end": 1684.72, "start": 1678.6000000000001, "text": " So we follow a similar motivation with Priority Experience Replay, whereas when we reach a" }, { "end": 1689.8, "start": 1684.72, "text": " surprise transition, which means that there's a state with a high TD error, we want to actually" }, { "end": 1694.3600000000001, "start": 1689.8, "text": " decay a portion of this priority along the sequence that we took to reach there." }, { "end": 1699.44, "start": 1694.3600000000001, "text": " And in doing this, we can use the sequential replay information in the replay buffer to" }, { "end": 1704.64, "start": 1699.44, "text": " improve the sample efficiency of the model." }, { "end": 1711.04, "start": 1704.64, "text": " So what we show here is that in the blind cliff walk environment, as we track the mean" }, { "end": 1717.28, "start": 1711.04, "text": " squared error between the true Q value and the predicted Q value, we find that Priority" }, { "end": 1723.6000000000001, "start": 1717.28, "text": " Sequence Experience Replay drastically improves upon both P, R, and uniform sampling and gets" }, { "end": 1726.48, "start": 1723.6000000000001, "text": " closer to an oracle that has perfect information." 
}, { "end": 1732.2, "start": 1726.48, "text": " So then we then evaluate this result on Atari against DQN with Priority Experience Replay" }, { "end": 1735.04, "start": 1732.2, "text": " and DQN with uniform sampling strategy." }, { "end": 1740.44, "start": 1735.04, "text": " And we find that the relative performance between Priority Sequence Experience Replay and" }, { "end": 1743.44, "start": 1740.44, "text": " Priority Experience Replay is very drastic." }, { "end": 1752.08, "start": 1743.44, "text": " We find that PSCR improves upon P, R, and the majority of the games and outperforms uniform" }, { "end": 1754.16, "start": 1752.08, "text": " in all the games as well." }, { "end": 1757.96, "start": 1754.16, "text": " So this is recurrent neural linear posterior sampling for non-stationary bandits." }, { "end": 1762.0800000000002, "start": 1757.96, "text": " This is worked on Alicia, together with Yurg and Schmidt, Hooper, Aditya Ramash and Paulo" }, { "end": 1763.4, "start": 1762.0800000000002, "text": " Halben." }, { "end": 1765.5600000000002, "start": 1763.4, "text": " So the idea is very simple." }, { "end": 1769.64, "start": 1765.5600000000002, "text": " We are trying to use a recurrent neural network to predict the reward for each given arm" }, { "end": 1773, "start": 1769.64, "text": " in a non-stationary multi-arm bandit problem." }, { "end": 1776.3200000000002, "start": 1773, "text": " And in order to deal with exploration, we apply posterior sampling." }, { "end": 1778.8400000000001, "start": 1776.3200000000002, "text": " OK, so I'm Monsieur Mujiga." }, { "end": 1782.6000000000001, "start": 1778.8400000000001, "text": " And this is the improving gradient estimation in evolutionary strategies with past descend" }, { "end": 1784.6799999999998, "start": 1782.6, "text": " directions paper." }, { "end": 1790.36, "start": 1784.6799999999998, "text": " And the idea is essentially in evolutionary strategies, we try to make random perturbations" }, { "end": 1795.36, "start": 1790.36, "text": " to our current search point to try to estimate the gradients and go in that descend direction" }, { "end": 1797.36, "start": 1795.36, "text": " of the estimated gradient." }, { "end": 1804.6799999999998, "start": 1797.36, "text": " Recently, the idea of using biased gradients to estimate this gradient better was introduced" }, { "end": 1809.12, "start": 1804.6799999999998, "text": " by in the paper guided evolutionary strategies." }, { "end": 1812.9599999999998, "start": 1809.12, "text": " And essentially, we build in that work and we try to find better ways of using biased" }, { "end": 1817.56, "start": 1812.9599999999998, "text": " gradient estimates by essentially just taking a sample in the positive and negative direction" }, { "end": 1819.1999999999998, "start": 1817.56, "text": " of this biased gradient." }, { "end": 1822.56, "start": 1819.1999999999998, "text": " And what we show is that this essentially is better than what was done before in the" }, { "end": 1824.2399999999998, "start": 1822.56, "text": " literature." }, { "end": 1829, "start": 1824.2399999999998, "text": " And additionally, one of the main caveats of the previous work was that you needed to" }, { "end": 1831.28, "start": 1829, "text": " know how good this biased gradient was." }, { "end": 1835.8, "start": 1831.28, "text": " In our case, this is not necessary as if the biased gradient is bad, our algorithm will" }, { "end": 1837.7199999999998, "start": 1835.8, "text": " discard the automatic alleys." 
}, { "end": 1841.64, "start": 1837.72, "text": " And this lets us use some kind of momentum in the search space." }, { "end": 1846.88, "start": 1841.64, "text": " So as biased gradient estimate, we use the previous gradient estimate." }, { "end": 1850.2, "start": 1846.88, "text": " And this kind of works really well in linear functions, which of course the gradient never" }, { "end": 1851.2, "start": 1850.2, "text": " changes." }, { "end": 1855.1200000000001, "start": 1851.2, "text": " In the case of general functions, it depends on higher order terms." }, { "end": 1858.4, "start": 1855.1200000000001, "text": " And then essentially, we run a bunch of toy experiments in quadratic function and in" }, { "end": 1863.68, "start": 1858.4, "text": " MNIST where we show really that what we predicted in the theory kind of happens were like using" }, { "end": 1868.0800000000002, "start": 1863.68, "text": " biased radiance is good until they become too biased or like using this momentum actually" }, { "end": 1871.2, "start": 1868.0800000000002, "text": " helps especially for smaller learning rates." }, { "end": 1876.1200000000001, "start": 1871.2, "text": " We try to run some experiments in RL in the OpenA-A RoboSchool environment, but unfortunately" }, { "end": 1879.52, "start": 1876.1200000000001, "text": " this doesn't work so well so far." }, { "end": 1885.16, "start": 1879.52, "text": " We believe that the noise in the gradient estimation kind of guides exploration in ES." }, { "end": 1889.4, "start": 1885.16, "text": " So there's a lot of moving parts going on in reinforcement learning and it may be possible" }, { "end": 1895.48, "start": 1889.4, "text": " that just better gradient estimates are not necessarily like do not necessarily result" }, { "end": 1898.52, "start": 1895.48, "text": " in better performance in RL." }, { "end": 1900.3200000000002, "start": 1898.52, "text": " And that's about it." }, { "end": 1906.3600000000001, "start": 1900.3200000000002, "text": " Hi, my name is Daniel Saita and I'm a PhD student in computer science at the University of" }, { "end": 1907.3600000000001, "start": 1906.3600000000001, "text": " California Berkeley." }, { "end": 1910.8000000000002, "start": 1907.3600000000001, "text": " I work in the area of the machine learning and robotics." }, { "end": 1916.3200000000002, "start": 1910.8000000000002, "text": " So this particular project is about giving a set of teachers that we have at our disposal" }, { "end": 1917.6000000000001, "start": 1916.3200000000002, "text": " for a given learner." }, { "end": 1920.52, "start": 1917.6, "text": " How do we pick the right teacher at the right time?" }, { "end": 1926.04, "start": 1920.52, "text": " So the different teachers for any given environment are saved from the sequence of snapshot at" }, { "end": 1928.8, "start": 1926.04, "text": " equally safe intervals throughout a training run." }, { "end": 1933.6399999999999, "start": 1928.8, "text": " The teachers earlier in the snapshot had very low award and teachers later in the snapshot" }, { "end": 1940.6, "start": 1933.6399999999999, "text": " had a higher award but were investigating at any given time for a Q-learning-based agent" }, { "end": 1946.76, "start": 1940.6, "text": " where each mini batch is a mix of student, self-generated data and data from the teacher." }, { "end": 1949, "start": 1946.76, "text": " The teacher selection function should be used though." }, { "end": 1950.24, "start": 1949, "text": " I didn't give an iteration." 
}, { "end": 1956, "start": 1950.24, "text": " We have the option of keeping our teacher or using a different teacher or that kind of thing" }, { "end": 1961.08, "start": 1956, "text": " and then the main highlight of our work is that it is generally not ideal to always pick" }, { "end": 1964, "start": 1961.08, "text": " the very best teacher, the very highest-roading teacher." }, { "end": 1967.92, "start": 1964, "text": " Sometimes it's better to pick teachers that are only a little bit better than the student" }, { "end": 1974.2, "start": 1967.92, "text": " in terms of award and some hypotheses are due to either we may want to avoid overfitting" }, { "end": 1980.0800000000002, "start": 1974.2, "text": " to a teacher or we may also want a teacher that has a similar distribution, a similarity" }, { "end": 1982.0800000000002, "start": 1980.0800000000002, "text": " with that with the learner." }, { "end": 1987.24, "start": 1982.0800000000002, "text": " So thank you for your attention." }, { "end": 1988.24, "start": 1987.24, "text": " My name is Luqueciano." }, { "end": 1992.2, "start": 1988.24, "text": " I'm from Brazil and from I don't know it's a institute of technology." }, { "end": 1997.52, "start": 1992.2, "text": " My work is bottom up Metapolicy Search in Bumps and this work we basically feel some" }, { "end": 2005.08, "start": 1997.52, "text": " expert policies to conduct a meta-training to be able to sample some policies from this" }, { "end": 2009.44, "start": 2005.08, "text": " meta-polices and then solve some unsing-tasked during training." }, { "end": 2014.76, "start": 2009.44, "text": " It's basically a meta-learning algorithm that you use imitation learning to conduct the" }, { "end": 2018.72, "start": 2014.76, "text": " whole process." }, { "end": 2024.24, "start": 2018.72, "text": " So I'm Janis Plebaliag from Sequelinria and University of Lille in France." }, { "end": 2030.76, "start": 2024.24, "text": " So I will shortly give you an overview of Merl, Multil, Ed, reinforcement learning." }, { "end": 2035.64, "start": 2030.76, "text": " So in this work we want to facilitate representation learning for better, simple, easy efficiency" }, { "end": 2038.56, "start": 2035.64, "text": " and improved final performance." }, { "end": 2044.8, "start": 2038.56, "text": " So we want to maximally use each of the agents' interrogation into the environment." }, { "end": 2051.2, "start": 2044.8, "text": " So we propose a method that is technically applicable to any policy-gradient method" }, { "end": 2056.3199999999997, "start": 2051.2, "text": " and environment, so it's a real plug-and-play method." }, { "end": 2062, "start": 2056.3199999999997, "text": " Instead of using prior knowledge, which is task-dependent, we want to use problem knowledge." }, { "end": 2067.2799999999997, "start": 2062, "text": " So self-performance assessment and accurate expectations." }, { "end": 2072.8399999999997, "start": 2067.2799999999997, "text": " Merl incorporates the fraction of variance explained Vx as an auxiliary task, a measure" }, { "end": 2079.7599999999998, "start": 2072.8399999999997, "text": " of the discrepancy between the estimated state-value function and the observed returns." }, { "end": 2088.28, "start": 2079.76, "text": " It is formalized by the fraction of variance explained from Valset, paper from 1985." }, { "end": 2097.1200000000003, "start": 2088.28, "text": " And we observe better performance in nine continuous control tasks, mugeaux-co-tasks." 
}, { "end": 2101.5200000000004, "start": 2097.1200000000003, "text": " We also want it to see if the method could better transfer the learning." }, { "end": 2105.5600000000004, "start": 2101.5200000000004, "text": " So we chose five Atari games with the same number of actions." }, { "end": 2111.4, "start": 2105.56, "text": " And we observe again the same better results on the stats." }, { "end": 2120.7999999999997, "start": 2111.4, "text": " So one future work we are interested in is find even more quantity of accurate expectations" }, { "end": 2128.64, "start": 2120.7999999999997, "text": " or self-performance assessments and to better study the correlation and the effect of their" }, { "end": 2129.64, "start": 2128.64, "text": " combinations." }, { "end": 2133.56, "start": 2129.64, "text": " Hi, I'm Boen Baker." }, { "end": 2138.64, "start": 2133.56, "text": " I'm a research scientist at OpenAI working on the multi-agent team and I'm going to" }, { "end": 2145.72, "start": 2138.64, "text": " briefly describe our recent work on emergent autocricula in agents emerging, emergently" }, { "end": 2147.84, "start": 2145.72, "text": " using tools." }, { "end": 2153.08, "start": 2147.84, "text": " We looked at the game of hide-and-seek in a physically grounded and embodied environment." }, { "end": 2158.2799999999997, "start": 2153.08, "text": " And the reason that we look at this game is that we've seen recently the power of these" }, { "end": 2162.04, "start": 2158.2799999999997, "text": " arms races in video games to produce wildly complex behavior." }, { "end": 2164.52, "start": 2162.04, "text": " We've seen it in Go and StarCraft and Dota." }, { "end": 2169.2, "start": 2164.52, "text": " However, it's really unlikely or it's hard to believe that an agent in those games would" }, { "end": 2173.04, "start": 2169.2, "text": " eventually pop out and solve a task that's actually useful to humans." }, { "end": 2179.48, "start": 2173.04, "text": " And so we set out to see if we could induce those arms race dynamics in a more physically" }, { "end": 2182.52, "start": 2179.48, "text": " grounded and maybe human analogous environment." }, { "end": 2186.6, "start": 2182.52, "text": " And so what we found is actually that when we put agents into this physical world, just" }, { "end": 2191.64, "start": 2186.6, "text": " playing the game of hide-and-seek, that they go through six distinct, semantically different" }, { "end": 2195.7999999999997, "start": 2191.64, "text": " rounds of strategy, each one very different than the last." }, { "end": 2201.12, "start": 2195.7999999999997, "text": " And you see that because once one team learns a skill, it creates a new pressure for the" }, { "end": 2202.96, "start": 2201.12, "text": " other team to adapt." }, { "end": 2207.52, "start": 2202.96, "text": " And we're also excited about that because of the analogs to our own evolution on Earth." }, { "end": 2212.52, "start": 2207.52, "text": " And so the general hope for this type of work, and I think all works like this, is moving" }, { "end": 2220.12, "start": 2212.52, "text": " towards environments and games where agents are creating tasks for each other, rather than" }, { "end": 2224.36, "start": 2220.12, "text": " the RL experimenters designing the suite of tasks." }, { "end": 2230.48, "start": 2224.36, "text": " And maybe we'll actually be able to have something truly intelligent emerge from these." 
}, { "end": 2234.68, "start": 2230.48, "text": " I am Daniel Graves from Huawei Technologies Canada, based in Emoton." }, { "end": 2238.52, "start": 2234.68, "text": " And I'm doing Learning and Op-Policy predictive state representation with deep reinforcement" }, { "end": 2242.7599999999998, "start": 2238.52, "text": " learning for vision-based steering and autonomous driving." }, { "end": 2249.2, "start": 2242.7599999999998, "text": " So what this project is really all about is we want to come up with a way to learn better" }, { "end": 2251.4, "start": 2249.2, "text": " representations for deep reinforcement learning." }, { "end": 2257.04, "start": 2251.4, "text": " So we have a task which is to steer a vehicle based on images only." }, { "end": 2264.12, "start": 2257.04, "text": " So if we just apply a front image to a vehicle, how can we decide on the right steering angle" }, { "end": 2269.08, "start": 2264.12, "text": " to choose to ensure that the vehicle remains centered in the lane." }, { "end": 2274.7599999999998, "start": 2269.08, "text": " So the approach that we took is actually to build a very compact predictive state representation" }, { "end": 2278.44, "start": 2274.7599999999998, "text": " based off of something called general value functions, which is a way of predicting the" }, { "end": 2281.96, "start": 2278.44, "text": " future with reinforcement learning, actually." }, { "end": 2287.56, "start": 2281.96, "text": " And this approach of general value functions actually was originally developed by Richard" }, { "end": 2294.7599999999998, "start": 2287.56, "text": " Sutton and the predictive state representation idea was also an idea that's many decades" }, { "end": 2295.7599999999998, "start": 2294.7599999999998, "text": " old." }, { "end": 2300.16, "start": 2295.7599999999998, "text": " But what we're doing here is applying it to the deep reinforcement learning setting where" }, { "end": 2303.4, "start": 2300.16, "text": " we learn a very compact set of predictions." }, { "end": 2308.16, "start": 2303.4, "text": " Now these predictions are actually based off of the lane centeredness of the vehicle and" }, { "end": 2313.36, "start": 2308.16, "text": " the orientation, the angle orientation, or offset of the vehicle with respect to the road." }, { "end": 2317.88, "start": 2313.36, "text": " And so if we predict these at different temple horizons, we actually capture the lane" }, { "end": 2324.7200000000003, "start": 2317.88, "text": " information of the road, which we can use as essentially an error of how far off our" }, { "end": 2330.4, "start": 2324.7200000000003, "text": " vehicle would be if it kept driving in a straight line from where it's currently at." }, { "end": 2335.2000000000003, "start": 2330.4, "text": " So with this, we actually conform a very simple policy which is try to minimize that error." }, { "end": 2339.2400000000002, "start": 2335.2000000000003, "text": " And so we just form a linear policy based on top of that news reinforcement learning" }, { "end": 2347.24, "start": 2339.24, "text": " in a very shallow sense to learn very quickly instead of having to learn the entire policy" }, { "end": 2348.8799999999997, "start": 2347.24, "text": " right from the image." }, { "end": 2357.12, "start": 2348.8799999999997, "text": " It learns faster and it tends to steer more comfortably and a lot less jitter." }, { "end": 2362.2799999999997, "start": 2357.12, "text": " It's more robust, it's more general to unseen tracks when we tested it." 
}, { "end": 2364.2, "start": 2362.2799999999997, "text": " So overall it's very promising." }, { "end": 2367.8799999999997, "start": 2364.2, "text": " And just last week before I came here, we applied it to a real robot." }, { "end": 2372.48, "start": 2367.88, "text": " The train from real robotic data, it's not on the poster here, but actually it works quite" }, { "end": 2373.48, "start": 2372.48, "text": " well surprisingly." }, { "end": 2376.88, "start": 2373.48, "text": " So it's very good." }, { "end": 2380.84, "start": 2376.88, "text": " So our paper is called Monkey Task reinforcement learning with our interferences." }, { "end": 2382.6800000000003, "start": 2380.84, "text": " This is a drone work." }, { "end": 2384.32, "start": 2382.6800000000003, "text": " So I can't hear you." }, { "end": 2386.4, "start": 2384.32, "text": " And this is a drone work that is dropped." }, { "end": 2391.44, "start": 2386.4, "text": " I have a check circle at Carol and Chelsea from Stanford, Berkeley and Google." }, { "end": 2397.84, "start": 2391.44, "text": " So the goal of this work is to build more attention robots such that they can sum" }, { "end": 2398.84, "start": 2397.84, "text": " multiple tasks." }, { "end": 2403.88, "start": 2398.84, "text": " We are present like deep learning makes it possible for robots with different skills, but" }, { "end": 2408, "start": 2403.88, "text": " they also suffer from optimization challenges." }, { "end": 2413.32, "start": 2408, "text": " We have passed the size that may be caused by conflict ingredients, which is basically" }, { "end": 2418.44, "start": 2413.32, "text": " characterized by neck constant similarity between two task ingredients." }, { "end": 2423.32, "start": 2418.44, "text": " And also they might lead to unexpected damage due to high curvature." }, { "end": 2429.8, "start": 2423.32, "text": " So the purpose is to project conflict ingredients or PC grid method, which projects to conflict" }, { "end": 2432.96, "start": 2429.8, "text": " ingredients onto the normal plane of the other one." }, { "end": 2436.32, "start": 2432.96, "text": " And if the two grids are not conflicting, which is the reason that I'm done." }, { "end": 2440.52, "start": 2436.32, "text": " And we applied this to Monkey Task supervised learning and the Monkey Task reinforcement" }, { "end": 2441.52, "start": 2440.52, "text": " in benchmarks." }, { "end": 2447.2000000000003, "start": 2441.52, "text": " And I observed that our method can learn much more efficiently than tuning tasks independently." }, { "end": 2453.0800000000004, "start": 2447.2000000000003, "text": " And we can also up-up-perform several Monkey Task learning baselines." }, { "end": 2457.48, "start": 2453.08, "text": " So my name is George Tucker and I work at Google Brain." }, { "end": 2462.64, "start": 2457.48, "text": " And we're presenting our work on behavior-regularized offline reinforcement learning, which was" }, { "end": 2468.4, "start": 2462.64, "text": " done primarily by EFANWU who interned with us this past summer." }, { "end": 2475.44, "start": 2468.4, "text": " And where this work was primarily an empirical evaluation of different offline RL algorithms." }, { "end": 2480.7999999999997, "start": 2475.44, "text": " So we're focusing on the setting where you just get a batch of offline data." 
}, { "end": 2485.2000000000003, "start": 2480.8, "text": " And you don't get interact with the environment, but you want to do better than the behavior" }, { "end": 2486.92, "start": 2485.2000000000003, "text": " policy." }, { "end": 2492.32, "start": 2486.92, "text": " And we evaluated a couple recent papers, primarily this one called batch constraint queue learning" }, { "end": 2494.4, "start": 2492.32, "text": " and another one, Bair." }, { "end": 2501.04, "start": 2494.4, "text": " And we compared it to a simple baseline that just regularizes with KL divergence to the" }, { "end": 2503.4, "start": 2501.04, "text": " behavior policy." }, { "end": 2509.96, "start": 2503.4, "text": " And what we find is that when we hyper-primiter tune the baseline carefully, we can get similar" }, { "end": 2516.08, "start": 2509.96, "text": " and in some cases better performance than either of the two recent papers across the benchmark" }, { "end": 2517.6, "start": 2516.08, "text": " domains." }, { "end": 2521.6, "start": 2517.6, "text": " And I think the main takeaway from this paper is not that there's something wrong with" }, { "end": 2527.52, "start": 2521.6, "text": " the previously proposed works, it's really about our baselines and benchmarks." }, { "end": 2529.92, "start": 2527.52, "text": " Our benchmarks are too easy." }, { "end": 2536.76, "start": 2529.92, "text": " Basically all the methods do very similarly and we're not able to tell apart the algorithmic" }, { "end": 2541.8, "start": 2536.76, "text": " innovations and what actually matters on these easy benchmarks." }, { "end": 2548.36, "start": 2541.8, "text": " So that's a call to action really to improve our benchmarks and also to improve our evaluation" }, { "end": 2549.36, "start": 2548.36, "text": " protocol." }, { "end": 2554.2000000000003, "start": 2549.36, "text": " We need to be very careful about making sure that we're giving the baseline algorithm a" }, { "end": 2556.2000000000003, "start": 2554.2000000000003, "text": " fair chance." }, { "end": 2561.92, "start": 2556.2000000000003, "text": " Hi, my name's Ben Eisenbuck and I'm going to talk about a poster called if Max Enn reinforcement" }, { "end": 2564.96, "start": 2561.92, "text": " learning is the answer, what is the question?" }, { "end": 2570.84, "start": 2564.96, "text": " And the motivation for this work is that maximum entropy reinforcement learning is an algorithm" }, { "end": 2574.4, "start": 2570.84, "text": " that's very popular in reinforcement learning today." }, { "end": 2579.84, "start": 2574.4, "text": " And it's popular not only in the reinforcement learning community, but also has been observed" }, { "end": 2580.84, "start": 2579.84, "text": " in nature." }, { "end": 2585.52, "start": 2580.84, "text": " And so the question we try to answer is when is Max Enn reinforcement learning the optimal" }, { "end": 2587.68, "start": 2585.52, "text": " thing to do?" }, { "end": 2590.96, "start": 2587.68, "text": " And one thing that's clear is that Max Enn reinforcement learning is not the optimal" }, { "end": 2594.92, "start": 2590.96, "text": " thing to do if we want to maximize reward." }, { "end": 2597.2000000000003, "start": 2594.92, "text": " So when is it the optimal thing to do?" }, { "end": 2599.64, "start": 2597.2000000000003, "text": " Is nature just irrational?" }, { "end": 2604.2000000000003, "start": 2599.64, "text": " In this paper we show that Max Enn reinforcement learning is the optimal thing to do." 
}, { "end": 2613.32, "start": 2604.2000000000003, "text": " And two certain cases that involve variability or uncertainty in our reward function." }, { "end": 2615.56, "start": 2613.32, "text": " My name is Felipe Leno da Silva." }, { "end": 2621.88, "start": 2615.56, "text": " I'm a researcher from the University of São Paulo and my paper is called Uncertainty" }, { "end": 2625.84, "start": 2621.88, "text": " Aware Action Advising for Deep Reinforcement Learning Asians." }, { "end": 2631.2400000000002, "start": 2625.84, "text": " So the main idea of my paper is that agents supplying reinforcement learning usually take" }, { "end": 2635, "start": 2631.2400000000002, "text": " a very long time to learn a very good policy." }, { "end": 2642.36, "start": 2635, "text": " But in some situations you might have available already competent policy like for example a" }, { "end": 2647.7200000000003, "start": 2642.36, "text": " human who is able to provide guidance to the agent or you might have a legacy system" }, { "end": 2653.2799999999997, "start": 2647.72, "text": " or something like this and your learning agent might ask for action suggestions to this" }, { "end": 2655.16, "start": 2653.2799999999997, "text": " policy to learn faster." }, { "end": 2662.6, "start": 2655.16, "text": " The main problem is when the agent should ask for those suggestions because you want your" }, { "end": 2668.48, "start": 2662.6, "text": " agent just to ask for suggestions when it doesn't have a good policy yet for performing" }, { "end": 2669.48, "start": 2668.48, "text": " the tasks." }, { "end": 2676.72, "start": 2669.48, "text": " So we have proposed an algorithm in which the agent has a confidence function and when" }, { "end": 2682.16, "start": 2676.72, "text": " the uncertainty of the agent for applying an action for a given state is high then the" }, { "end": 2684.4399999999996, "start": 2682.16, "text": " agent will ask for suggestions." }, { "end": 2688.24, "start": 2684.4399999999996, "text": " When the uncertainty is low it means that the agent already has a good policy for that" }, { "end": 2691.8799999999997, "start": 2688.24, "text": " state so it doesn't need to ask for suggestions." }, { "end": 2697.7999999999997, "start": 2691.8799999999997, "text": " And we have compared our algorithm with other similar teachers to the frameworks and we" }, { "end": 2702.7999999999997, "start": 2697.7999999999997, "text": " have shown that in general our algorithm improves the learning performance." }, { "end": 2708.6000000000004, "start": 2702.8, "text": " Hi I'm Rishabh and I'm presenting the posters striving for simplicity in off policy" }, { "end": 2710.32, "start": 2708.6000000000004, "text": " deep reinforcement learning." }, { "end": 2714.0800000000004, "start": 2710.32, "text": " In this work we show that if you do offline deep reinforcement learning on dataset" }, { "end": 2718.28, "start": 2714.0800000000004, "text": " collected from a DQN agent you can actually outperform some of the recent state of the" }, { "end": 2720, "start": 2718.28, "text": " art agents including online c5." }, { "end": 2724.8, "start": 2720, "text": " Which is if you collect data from DQN from 2013 and just do a good job of optimization" }, { "end": 2728.52, "start": 2724.8, "text": " using something simple like random ensembles you can actually match the performance of the" }, { "end": 2729.52, "start": 2728.52, "text": " online c5." 
}, { "end": 2740.4, "start": 2729.52, "text": " So I'm Raj I'm a PhD student at Georgia Tech so I developed Jericho which is what this" }, { "end": 2741.4, "start": 2740.4, "text": " paper is." }, { "end": 2748.16, "start": 2741.4, "text": " It's talking about interactive fiction games as a domain for exploring the mix of natural" }, { "end": 2751.36, "start": 2748.16, "text": " language processing and reinforcement learning." }, { "end": 2756.12, "start": 2751.36, "text": " And so the overall idea here is that we wanted to have a framework which is kind of like" }, { "end": 2762.72, "start": 2756.12, "text": " open AI gym but for tech games and so the main idea of why you want to study tech games" }, { "end": 2768.96, "start": 2762.72, "text": " is that they let you explore these challenges at the intersection of RL and NLP without" }, { "end": 2772.72, "start": 2768.96, "text": " the sort of costly interaction that you get, say if you're trying to train something" }, { "end": 2775.72, "start": 2772.72, "text": " like a chatbot or whatever." }, { "end": 2779.44, "start": 2775.72, "text": " And so like a few of the challenges we've outlined here on this poster the first is" }, { "end": 2784.4, "start": 2779.44, "text": " that of knowledge representation so like they're partially observable environments." }, { "end": 2789.2400000000002, "start": 2784.4, "text": " So an agent sees and talks to the world purely through textural natural language and then" }, { "end": 2792.6800000000003, "start": 2789.2400000000002, "text": " the descriptions that it gets from the world can potentially be incomplete." }, { "end": 2796.52, "start": 2792.6800000000003, "text": " It doesn't have access to the world state at any given point." }, { "end": 2799.6, "start": 2796.52, "text": " The second is sort of that of common sense reasoning and it's like bows down to the" }, { "end": 2804.76, "start": 2799.6, "text": " question of like how does the agent figure out how to do things with objects that it has" }, { "end": 2808.32, "start": 2804.76, "text": " or how can it interact with commonplace objects." }, { "end": 2812.2400000000002, "start": 2808.32, "text": " So say the agent comes across a mailbox in the world." }, { "end": 2817.24, "start": 2812.24, "text": " So we as humans know that like a normal way to interact with a mailbox would be to try" }, { "end": 2818.24, "start": 2817.24, "text": " and open it." }, { "end": 2820.24, "start": 2818.24, "text": " But how do we get this to the agent?" }, { "end": 2823.2799999999997, "start": 2820.24, "text": " So like the agent could try and eat the mailbox you know like instead of opening it" }, { "end": 2828.6, "start": 2823.2799999999997, "text": " try to chew the lid off but just opening it like normally would probably be more effective" }, { "end": 2829.6, "start": 2828.6, "text": " straight up." }, { "end": 2833.16, "start": 2829.6, "text": " And then the final thing is that of like the combinatorial action space." }, { "end": 2836.4799999999996, "start": 2833.16, "text": " And so this is like sort of what we really focus on here." }, { "end": 2841.4799999999996, "start": 2836.4799999999996, "text": " And so in terms of the combinatorial action space so in a game a popular text game you're" }, { "end": 2847.44, "start": 2841.48, "text": " required to generate actions that are forwards in length." 
}, { "end": 2852.36, "start": 2847.44, "text": " And so your action space is your vocabulary to the power for which in a popular text game" }, { "end": 2857.96, "start": 2852.36, "text": " like Zork means that you have about 240 billion actions at every single step." }, { "end": 2863.84, "start": 2857.96, "text": " And so like the question really becomes is like how do you adapt RL algorithms to deal" }, { "end": 2870.8, "start": 2863.84, "text": " with the sort of continuous, like ridiculously large action space." }, { "end": 2876.1200000000003, "start": 2870.8, "text": " And so to help with this that's why we came up with this framework called Jericho." }, { "end": 2881.04, "start": 2876.1200000000003, "text": " And so we introduce a set of handicaps in this framework to help agents you know like" }, { "end": 2887.0800000000004, "start": 2881.04, "text": " ease into the task and like more intelligently interact with this environment." }, { "end": 2891.0800000000004, "start": 2887.0800000000004, "text": " And so like a couple of the handicaps that we have like the first handicapped that we have" }, { "end": 2894.04, "start": 2891.0800000000004, "text": " is that of like the template based action space." }, { "end": 2898.0800000000004, "start": 2894.04, "text": " So the template action space is so you know how I said you have to generate like four to" }, { "end": 2899.36, "start": 2898.0800000000004, "text": " five word actions." }, { "end": 2904.36, "start": 2899.36, "text": " So a lot of combinations of four to five words just don't make sense right they're ungrammatical." }, { "end": 2909.36, "start": 2904.36, "text": " And so it turns out what you could do is you can group verbs and prepositions together" }, { "end": 2912.4, "start": 2909.36, "text": " to basically generate a series of like action templates." }, { "end": 2918, "start": 2912.4, "text": " So like a template would be something like take blank from blank or you know push something," }, { "end": 2921.8, "start": 2918, "text": " take something, open something, so on." }, { "end": 2926.36, "start": 2921.8, "text": " And so the problem goes from having to generate a bunch of words in a row to the problem" }, { "end": 2930.52, "start": 2926.36, "text": " of having to generate a template and then fill in the template with the correct objects" }, { "end": 2933.44, "start": 2930.52, "text": " that you want to interact with it." }, { "end": 2937.92, "start": 2933.44, "text": " And so this reduces your action space down from this like 240 billion per step to about" }, { "end": 2939.6400000000003, "start": 2937.92, "text": " a hundred million or so." }, { "end": 2943.6400000000003, "start": 2939.6400000000003, "text": " So it's still kind of big but it's a little bit more manageable." }, { "end": 2947.36, "start": 2943.6400000000003, "text": " And so it turns out like in our we have another handicapped so it turns out you can actually" }, { "end": 2949.88, "start": 2947.36, "text": " go one step further than that." }, { "end": 2956.92, "start": 2949.88, "text": " And so this sort of concept of a template action space what it does it eliminates some" }, { "end": 2963.12, "start": 2956.92, "text": " of the ungrammatical actions but there's still some combinations that you know still just" }, { "end": 2964.12, "start": 2963.12, "text": " don't make sense." }, { "end": 2971.1600000000003, "start": 2964.12, "text": " So if you try to say you know take you know gothic from door like what does that mean." 
}, { "end": 2974.84, "start": 2971.1600000000003, "text": " So we introduced a concept of a valid action." }, { "end": 2980.2400000000002, "start": 2974.84, "text": " So in this case a valid action is an action that is grammatically correct, contextually" }, { "end": 2985.4, "start": 2980.2400000000002, "text": " relevant and guaranteed to produce change in the world in any given state." }, { "end": 2991.1600000000003, "start": 2985.4, "text": " And so Jericho has the ability to detect the set of valid actions in any given game state." }, { "end": 2996, "start": 2991.1600000000003, "text": " And so in terms of the size of the action space the number of valid actions is really" }, { "end": 2998.2000000000003, "start": 2996, "text": " only about a hundred or so per step." }, { "end": 3005.2, "start": 2998.2, "text": " So in this sort of handicapped you've gone from this hundred million actions space to a hundred" }, { "end": 3006.12, "start": 3005.2, "text": " actions space." }, { "end": 3012.48, "start": 3006.12, "text": " And so on top of this we introduce a bunch of baseline agents that are designed to use" }, { "end": 3013.68, "start": 3012.48, "text": " each of these handicaps." }, { "end": 3020.04, "start": 3013.68, "text": " And so the first baseline agent is the DRN which is essentially Q learning over the valid" }, { "end": 3021.04, "start": 3020.04, "text": " actions." }, { "end": 3025.08, "start": 3021.04, "text": " It's learning to score the correct valid actions." }, { "end": 3029.84, "start": 3025.08, "text": " The other one is the template dqn which produces independent q value estimates over both" }, { "end": 3030.84, "start": 3029.84, "text": " the template and objects." }, { "end": 3035.48, "start": 3030.84, "text": " So this is the one using the template action space vocabulary, the hundred million size" }, { "end": 3036.84, "start": 3035.48, "text": " one." }, { "end": 3040.92, "start": 3036.84, "text": " And then on top of that we have two other additional agents one of which is the random" }, { "end": 3046.7999999999997, "start": 3040.92, "text": " agent which basically picks from a random common set of actions that can be performed in any" }, { "end": 3048.36, "start": 3046.7999999999997, "text": " text game." }, { "end": 3053.52, "start": 3048.36, "text": " So actions like moving around, navigating, picking up objects, putting down objects and" }, { "end": 3054.52, "start": 3053.52, "text": " so on." }, { "end": 3059.04, "start": 3054.52, "text": " And then we have another agent which is called Nail which is a heuristic based agent." }, { "end": 3064.7599999999998, "start": 3059.04, "text": " So this heuristic based agent isn't trained on any one game per se but is designed to" }, { "end": 3069.08, "start": 3064.7599999999998, "text": " play all sorts of interactive fiction games in general." }, { "end": 3075.48, "start": 3069.08, "text": " And so this framework, this Jericho framework as a whole has a lot of games." }, { "end": 3078.84, "start": 3075.48, "text": " We've got like about 30 games in total that you see here." }, { "end": 3081.28, "start": 3078.84, "text": " So it's a wide variety." }, { "end": 3087.92, "start": 3081.28, "text": " So we support a lot of different games, everything from slice of life, home, salary man walking" }, { "end": 3092.88, "start": 3087.92, "text": " simulator type things to love crafty and horror games." 
}, { "end": 3099.1600000000003, "start": 3092.88, "text": " And so it's a wide variety of genres, game structures, reward functions and so on." }, { "end": 3105.0400000000004, "start": 3099.1600000000003, "text": " And so the just of the results is that when we tested these agents on this game, the best" }, { "end": 3109.52, "start": 3105.0400000000004, "text": " performing agent is the DRN because of course it uses all the handicaps that we have." }, { "end": 3117, "start": 3109.52, "text": " But even this agent really only gets to about a normalized completion of about 11 percent" }, { "end": 3118, "start": 3117, "text": " across all of this." }, { "end": 3124.2, "start": 3118, "text": " And so what this is really telling us is there's a lot current oral algorithms cannot do in" }, { "end": 3129.92, "start": 3124.2, "text": " terms of the challenges presented by interactive fiction games in general and that there is a" }, { "end": 3132.72, "start": 3129.92, "text": " lot of space for improvement in this area." }, { "end": 3138.16, "start": 3132.72, "text": " So we really hope that there's like more people that work in this area to explore ideas" }, { "end": 3140.56, "start": 3138.16, "text": " at the intersection of NLP and RL." }, { "end": 3142.56, "start": 3140.56, "text": " Hello, I'm Adam Stucke." }, { "end": 3148.68, "start": 3142.56, "text": " I'm a PhD student with Peter Beale at UC Berkeley and I'm showing off a poster here on RLPYT," }, { "end": 3154, "start": 3148.68, "text": " RLPIT, a deep reinforcement learning library that we've recently come out with in PyTorch." }, { "end": 3159.2, "start": 3154, "text": " That's intended to be a high throughput research oriented code base for doing reinforcement" }, { "end": 3160.2, "start": 3159.2, "text": " learning." }, { "end": 3164.8799999999997, "start": 3160.2, "text": " So some interesting things about it is that it's one of the first code bases to incorporate" }, { "end": 3169.2000000000003, "start": 3164.88, "text": " reinforcement learning algorithms from all three of the major families from policy gradient" }, { "end": 3173.92, "start": 3169.2000000000003, "text": " to deep queue learning to queue based policy gradient." }, { "end": 3178.04, "start": 3173.92, "text": " And whereas historically they've kind of been implemented maybe one off in separate repositories." }, { "end": 3181.04, "start": 3178.04, "text": " Finally they're all in one place here together." }, { "end": 3185.2400000000002, "start": 3181.04, "text": " And it turns out they share a lot of common infrastructure code that has to do with the" }, { "end": 3187.12, "start": 3185.2400000000002, "text": " reinforcement learning workflow." }, { "end": 3192.56, "start": 3187.12, "text": " So what we've done is we've optimized that infrastructure code and built in many convenient" }, { "end": 3198.2799999999997, "start": 3192.56, "text": " options for parallelizing the workflow either across multiple CPUs or multiple GPUs." }, { "end": 3201.56, "start": 3198.2799999999997, "text": " And on top of this we've written a very modular implementation of all the algorithms so that" }, { "end": 3207.72, "start": 3201.56, "text": " it should be easy for researchers to pick up and make modifications and put their own" }, { "end": 3213, "start": 3207.72, "text": " enhancements on pushing the field forward." }, { "end": 3218.4, "start": 3213, "text": " And we have the code is open source and available now on github.com slash a stuc." 
}, { "end": 3224.36, "start": 3218.4, "text": " A st o o k e slash rl p y t and we also have a white paper you can look for an archive" }, { "end": 3228.56, "start": 3224.36, "text": " that gives a conceptual overview of the design decisions that we put into the code base." }, { "end": 3230.06, "start": 3228.56, "text": " Thanks." }, { "end": 3234.32, "start": 3230.06, "text": " My name is Tanmay Agarwal and I along with my colleague Hadesh our students at Carnegie" }, { "end": 3235.48, "start": 3234.32, "text": " Mellon University." }, { "end": 3240.4, "start": 3235.48, "text": " Today we are going to talk about our paper on learning to drive using waypoints which" }, { "end": 3245.6800000000003, "start": 3240.4, "text": " combines low level navigational markers called waypoints with high dimensional images" }, { "end": 3248.48, "start": 3245.68, "text": " using deep reinforcement learning." }, { "end": 3253.8399999999997, "start": 3248.48, "text": " Currently most traditional autonomous driving pipelines are high-deem modularized with different" }, { "end": 3258.7999999999997, "start": 3253.8399999999997, "text": " subsystems for localization, perception, active prediction, planning and control." }, { "end": 3264.08, "start": 3258.7999999999997, "text": " However, these modules require much hand engineering and are highly prone to generalizable errors" }, { "end": 3268.64, "start": 3264.08, "text": " which raises an interesting research direction to study deep reinforcement learning for autonomous" }, { "end": 3273.56, "start": 3268.64, "text": " driving with the potential generalizability to unseen scenarios." }, { "end": 3278.44, "start": 3273.56, "text": " In this work we propose an architecture that comprises of a convolution autoencoder" }, { "end": 3280.4, "start": 3278.44, "text": " and policy network." }, { "end": 3285.08, "start": 3280.4, "text": " The convolution autoencoder learns us late in representation of a semantically segmented" }, { "end": 3291.64, "start": 3285.08, "text": " input image which is then combined with waypoint features to form the input to our policy network." }, { "end": 3296.32, "start": 3291.64, "text": " The policy network comprises of a two layer multilayer perceptron which along with the" }, { "end": 3301.2, "start": 3296.32, "text": " autoencoder network is trained simultaneously and independently." }, { "end": 3306.72, "start": 3301.2, "text": " We demonstrate this using the Kala simulator wherein we train our RL agents using model" }, { "end": 3312.64, "start": 3306.72, "text": " free on policy, proximal policy, optimization algorithm." }, { "end": 3317.7999999999997, "start": 3312.64, "text": " Our learned RL agents are then evaluated on the benchmark tasks that have four increasingly" }, { "end": 3323.3199999999997, "start": 3317.7999999999997, "text": " difficult scenarios right from driving straight to driving in town between any two points with" }, { "end": 3325.3999999999996, "start": 3323.3199999999997, "text": " other vehicle actors." }, { "end": 3330.64, "start": 3325.3999999999996, "text": " We show that our agents learn to drive well from scratch without any pre-training or expert" }, { "end": 3331.64, "start": 3330.64, "text": " demonstrations." }, { "end": 3337.2, "start": 3331.64, "text": " We also show comparable performance to the imitation learning initialized baseline for the" }, { "end": 3340.6, "start": 3337.2, "text": " most complex task with other vehicle actors." 
}, { "end": 3345.7599999999998, "start": 3340.6, "text": " This work thus demonstrates preliminary results on how deep reinforcement learning can scale" }, { "end": 3347.64, "start": 3345.7599999999998, "text": " to autonomous driving." }, { "end": 3352.24, "start": 3347.64, "text": " We plan to extend this work further by learning better state representations that encode" }, { "end": 3357.92, "start": 3352.24, "text": " other vehicle actors dynamics as well as compare this work with other model free RL algorithms" }, { "end": 3371.6, "start": 3357.92, "text": " towards improving the sample efficiency of our proposed method." }, { "end": 3375.04, "start": 3371.6, "text": " This is TalkRL, all reinforcement learning all the time." }, { "end": 3395.72, "start": 3375.04, "text": " Subscribe at TalkRL.com, slash Subscribe." } ]
Scott Fujimoto
Scott Fujimoto expounds on his TD3 and BCQ algorithms, DDPG, Benchmarking Batch RL, and more!
https://media.transistor…db0.mp3?src=site
This is TalkRL Podcast, all reinforcement learning, all the time. Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan. Scott Fujimoto is a PhD student at McGill University and Mila. He's the author of the TD3 algorithm, as well as some of the recent developments in batch deep reinforcement learning. Scott, I'm super stoked to have you on the show. Thanks for having me on, Robin. Great, so I wonder if we can start with TD3. So you have a paper from 2018, Addressing Function Approximation Error in Actor-Critic Methods. Can you tell us how this TD3 paper came about? Yeah, right. It's actually kind of a funny story. To some extent, TD3 was a fluke. It was actually my first paper as a master's student. We had been working on one of those radical ideas that you have as a master's student. It didn't really work, but we started by implementing DDPG with this idea built into it. We ran it on HalfCheetah, which is one of these MuJoCo simulated locomotion tasks. On this first run, we got these really crazy results. They were way better than anything that we had seen before at the time, two times better than anything else. My first thought was like, oh my god, we've solved reinforcement learning. This is the greatest thing ever, right? But of course, we're scientists. We started digging into it a little bit more. We started coming up with these ideas built around function approximation in value learning. The original idea was built on that, and it wasn't right, but it started us off in the right direction. It turns out that almost all of the improvement that we got in the beginning was actually just from these kinds of implementation-level details. But this excitement of a big fake breakthrough really pushed us in the right direction for actually making real improvements. Although we never really significantly improved over our initial results, we ended up with this collection of ideas that actually genuinely worked. All of that, put on top of DDPG, is what created TD3. Great. And then, so TD3, I think in your paper, you mentioned three major aspects that it adds to DDPG. Did all these ideas emerge at once, or did you pursue them one at a time? And maybe you could walk us through them. Right. So, yeah, TD3 is these three core concepts that are just added on top of DDPG. It's all centered around this idea that when you're dealing with deep reinforcement learning and actor-critic methods, you have this neural network, this function approximation, and that means we're relying on generalization for a lot of things, and there's going to be this function approximation error. And so, that means there are essentially two aspects that we wanted to address. There's the symptom of function approximation error, which is essentially overestimation bias, and then there's sort of this root cause, which is the function approximation error itself, and sort of the high variance that comes from that, and some of the error propagation that comes along with it. Fortunately, there were a lot of really nice papers on overestimation bias already, specifically double Q-learning. We were looking at double Q-learning, and we were asking, well, why isn't anyone using this for DDPG? Is overestimation bias not a real problem for these actor-critic methods? Like, what's going on here? So, the first thing, our first main improvement, is the twin in TD3. Well, I should say TD3 stands for Twin Delayed DDPG, TD3 for short.
So, twin comes from this idea that we have two Q-networks, and this is similar to double Q-learning, which is a common method for discrete RL. We take two Q-networks, and then when we're estimating the value, we'll actually just take the minimum over the two, which is very similar to what double Q-learning does, but it's a little bit more than that, I guess. The delayed has to do with this idea of value convergence, to some extent. So, in value-based learning, we have these target networks that we use in deep RL, and we were asking, what is the role of these target networks? Why are they important? And what it boils down to is they really relate a lot to this framework, I guess, of fitted Q-iteration, where we treat reinforcement learning something like a supervised learning problem: you set up your problem, and then you essentially have a nice value target, and you want your networks to converge towards this value target, doing one Bellman update, and then you update everything. And the target network sort of approximates that by letting you do that at portions of the state-action space, as opposed to doing everything in your replay buffer. And so, we were looking at that idea, and what we found is that, actually, using some mixture of ideas from target networks, we were able to improve learning and stability by just delaying the policy update, which really means we update the policy at a lower frequency than we update the critic, and that allows the value estimate to converge before you make an update to the policy, which improves stability. And then the final thing is something a little bit smaller. It's just a regularization strategy, where we do something similar to Sarsa: you add a little bit of noise to the target actions in the value-based update, and that reduces variance a little bit. So, yeah, all three of these ideas come together for TD3. The first one has to do with overestimation bias, and the next two are dealing with the variance and instability problems that come from function approximation error. Cool, and then could you tell us a little bit more about that third one, which is related to Sarsa? I was looking through the code and actually trying to understand that one, and that one I found a little harder. It's not too complicated. The idea is that when you're evaluating the value of an action, in continuous space we have a meaningful distance measurement between actions. So in a discrete action space, action A and action B could be radically different, but in a continuous action space we know that the action 0 and the action 0.1, or everything in between, is something similar. So, the idea is that although there might be error on action 0, and there will be some error on action 0.1, those are not necessarily correlated. Or maybe they will be a little bit, but at the very least, averaging the value between those actions should give you a lower variance estimate. So, what we're actually doing is adding a little bit of noise to the policy, and then over multiple updates, we're actually averaging over a small range around the action. So, rather than use just a deterministic policy, we're using a policy with a little bit of noise added to it in the target update. And the other small detail that we add is, because when you add noise, sometimes you'll get an action that's actually quite far away — you know, if you think about the Gaussian, some samples are close and some are far — we'll just clip the Gaussian.
So, we'll only look at actions that are within a small enough range, such that it's a meaningful measurement of the mean action that we're looking at. TD3 is, I think if I understand correctly, either state of the art or close to it on some problems right now, is that correct? When you're looking at these off-policy algorithms for continuous control, especially the MuJoCo tasks, you're really thinking about TD3 or SAC. And at that point, they're very similar algorithms. SAC, at least the most recent version, includes sort of our ideas on clipped double Q-learning in it. After the fact, the algorithms look quite similar as a result. So, yeah, the performance between the two is very similar. And they're definitely a step ahead of some of the other popular algorithms, like PPO or TRPO. Okay, and then SAC adds this concept of maximizing entropy. Is that a relevant concept in the world of TD3? Yeah, there's actually a weird relationship that just sort of happened where SAC is like a — or maybe I should say it the other way around — TD3 is like a deterministic version of SAC. They actually have somewhat similar ideas. For example, this sort of Sarsa-style update that we talked about arrives naturally in SAC because it's a stochastic policy. They both use clipped double Q-learning, which is the twin in TD3. And they're both off-policy RL algorithms. Is the maximum entropy relevant in SAC? It is, of course — the results in SAC are great. Empirically, the papers get very similar performance on most tasks. But I believe SAC usually edges out TD3 in the long run. This maximum entropy thing adds something for exploration. So, if you run the algorithm for, I don't know, five million time steps or something, you'll start to see SAC edge out TD3 a little bit more. But on the other hand, to make a pitch for my own algorithm, TD3 is much simpler to implement, and it tends to run a little bit faster just in wall-clock time. It is remarkably elegant. And I think that's definitely a beautiful thing, especially in this space, where there's just so much complexity everywhere you look. Yeah, exactly. There are a lot of algorithms that, when we first started getting into it, are just hard to get to work. PPO, for example, is an algorithm where, once you have it set up nicely, it runs on most problems without having to tune it. But getting it working in the first place is not an easy thing. So, with TD3, we wanted an algorithm that was straightforward and easily understood. So, another algorithm in this DDPG family is D4PG. And that adds a few things: distributed actors, a distributional critic, and these n-step returns. Are these things... Is D4PG still relevant? Are these additions also relevant still? Yeah, it's funny that you ask that, right? I think D4PG is a 2017 or 2018 paper. And it's true that it's less relevant now today, but it shows how fast the field moves, I guess, right? So a paper that recent is already possibly outdated. Because it's a distributed algorithm, it's somewhat... Yeah, it's definitely orthogonal. It's sort of in a different world. You know, they're using a lot of data, they're running things in parallel. Of course, they get really nice results. And all the improvements are totally orthogonal to TD3. TD3, the idea really was, here are some improvements to actor-critic algorithms with deep learning. And ideally, you could take those improvements and just throw them on top of any actor-critic algorithm.
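To make those three pieces concrete, here is a minimal, illustrative sketch of a TD3-style update in PyTorch: twin critics with a min over their targets (clipped double Q-learning), clipped Gaussian noise on the target action (the Sarsa-style smoothing just described), and the policy and target networks updated only every few critic steps. Network sizes, the noise scale and clip, and the delay are just commonly quoted defaults used for illustration, not a claim about the exact reference implementation.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=1))

class Actor(nn.Module):
    def __init__(self, state_dim, action_dim, max_action=1.0):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(),
                                 nn.Linear(256, action_dim), nn.Tanh())
        self.max_action = max_action
    def forward(self, s):
        return self.max_action * self.net(s)

def td3_update(batch, actor, actor_target, q1, q2, q1_target, q2_target,
               actor_opt, critic_opt, step, gamma=0.99, tau=0.005,
               policy_noise=0.2, noise_clip=0.5, policy_delay=2, max_action=1.0):
    # batch tensors assumed to be shaped [batch, dim]; critic_opt is assumed to be a
    # single optimizer over both critics' parameters.
    s, a, r, s2, not_done = batch

    with torch.no_grad():
        # Target policy smoothing: clipped Gaussian noise on the target action.
        noise = (torch.randn_like(a) * policy_noise).clamp(-noise_clip, noise_clip)
        a2 = (actor_target(s2) + noise).clamp(-max_action, max_action)
        # "Twin": take the minimum over the two target critics.
        target_q = torch.min(q1_target(s2, a2), q2_target(s2, a2))
        y = r + not_done * gamma * target_q

    critic_loss = ((q1(s, a) - y) ** 2).mean() + ((q2(s, a) - y) ** 2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # "Delayed": update the actor and all target networks less often than the critics.
    if step % policy_delay == 0:
        actor_loss = -q1(s, actor(s)).mean()
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
        for net, target in [(actor, actor_target), (q1, q1_target), (q2, q2_target)]:
            for p, tp in zip(net.parameters(), target.parameters()):
                tp.data.mul_(1 - tau).add_(tau * p.data)
```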
It just so happens that DDPG was the prime candidate. But there's no reason why you couldn't combine it with D4PG and then use the distributional critic with it — I mean, maybe it would take a little bit of thinking about how to combine that exactly. But it's definitely possible, and there's no reason why there's any conflict there. When we look at something like TD3, it's state of the art. Like, how far are we from squeezing the most possible out of these continuous action tasks? Is there any sense of how much further we can go? Yeah, it's interesting that you ask that, actually, because when we first came out with TD3, I thought, well, this has got to be the end. We just had PPO, and it was getting good performance, and now we've edged it out a little bit more. How much more can we really push these environments? And when you actually visualize the tasks too, they seem to be working really, really well. So it's like, how much more can we get? And then SAC came out, and in the long run it outperforms TD3, so we saw even more performance. And actually, recently, there was a paper, I believe from DeepMind, called V-MPO. And they're, again, looking at this sort of distributed setting. So they're using tons of parallel actors and a lot of data, like in the billions of data points. So a crazy amount. But one of the interesting things, at least for me, from that paper was that they were actually able to get even more performance. So they tested, at least, on Walker and Ant, and their performance was like 50% higher than the final performance of SAC. And I don't know what would happen if you ran SAC or TD3 for, you know, a billion time steps. I mean, in the paper, we're looking at one million time steps, for some perspective, so a billion is just a crazy amount. But there is, I guess, room for improvement there. And the other interesting thing, of course, is that there's definitely room for improvement in sample efficiency. For example, on some tasks, we're able to get good performance in the first 50,000 time steps, which seems like a very short amount of time. But on others, it takes much more. And so surely there must be a way that we could push these things a little bit further. That being said, we probably are nearing the end of the MuJoCo timeline. It's probably time to look at some more challenging benchmarks. But yeah, if your goal is performance, there's still a little bit more to get there. So do you have more work lined up in this space? Not necessarily directly, of course. You know, if we were to happen upon something that would improve actor-critic algorithms more, the first thing I'm going to do is throw it on top of TD3 and see what happens. It's not directly the goal. But of course, I will admit, I'm a competitive person. So it's always at the back of my mind, like, well, maybe we could get a little bit better and then reclaim the top spot as the number one algorithm or whatever. It's not the goal, but we're looking at things that are related, you know. Deep RL is definitely my research focus, and so it seems likely that eventually we'll come across something that could make a difference. And then the first thing I'm going to do is see how it works with TD3. I want to move to another important paper of yours: Off-Policy Deep Reinforcement Learning without Exploration. Yeah, so that was a paper I wrote with my supervisors, actually, David Meger and Doina Precup.
And it's sort of one of the first papers in recent times looking at batch deep RL. Batch deep RL, I guess, for those who might not know, is this problem where you're given a fixed batch of data, so like a fixed data set, and unlike other problems, there's no more interaction with the environment. So here's your data set — what can you do with it, basically? So it's similar to, I guess, imitation learning, but the data is maybe not from an expert. It could be arbitrarily bad. Maybe it's good. Who knows, right? And that's an interesting problem because it's very practical. There are a lot of scenarios you can imagine where you just have some data, and that's what you have. For example, I'm from the robotics group at McGill, and we have this nice aquatic robot. Occasionally we run field trials in Barbados, which is a lot of fun, of course. But it also means that once you've collected your data in Barbados and you come back to Canada, that's all you have, right? So if you want to run an RL algorithm on your data, it better be a batch RL algorithm. So I love this paper. I saw it early this year, and I shared it around with my friends who I thought would appreciate it, and some of them actually understood the importance of it, which I was super excited about. I enjoyed your ICML talk on this paper, including your art skills, by the way. Thank you. Yeah, not a lot of people commented on that, so it's nice to hear some appreciation. So how did this paper come about? What led you to focus on this? Right. So yeah, maybe I'll give a little bit more on the paper. We were looking at batch RL. But the interesting thing about batch RL is that we found that it doesn't work, you know, set up as an off-policy learning problem. The first thought, of course, is, oh, we'll just run something like DQN or DDPG, one of our nice off-policy algorithms, and I'm sure it will work well. And it turns out that it doesn't. And how we came across this was, I was looking at exploration. And I thought, okay, if we're going to do some exploration tasks with these MuJoCo environments, maybe I'll look at how quickly these things can learn if I were to give them the best data possible. So even though it was an exploration project, I was looking at: let's remove exploration. I'll give it some expert data. So I took a DDPG agent, trained it to completion, then collected some expert trajectories, and then passed that data set to a new agent. And, you know, I was unsure what to expect, but I thought it would learn something. And it turns out that it can't learn at all, which is sort of a shocking result. You know, here's expert data, now learn something — and you get literally nothing. It fails completely. So that was very, very surprising. And then we spent quite a bit of time poking and prodding, trying to figure out what the heck was going on, because this was so unexpected. But it turns out there's actually a very simple reason. We called it extrapolation error. And the idea is that, especially with these neural networks that are generalizing, if you have this finite data set, you have access to only part of the data. So for some state-action pairs, you might be very certain about what happens with them — the reward, the value, whatever — and then there are state-action pairs that you've never seen before.
But that means for a given state, there are actions you've seen and actions you haven't seen, and your critic, your value network, will give you an estimate for all of those actions. So for those unknown actions, you may extrapolate very negatively, and then you may just never take those actions, or you may extrapolate very positively, and then now your agent is selecting those actions. If your agent is selecting those actions, it has no idea how well it will actually perform. It thinks it's going to perform well, but the real performance is — who knows, right? So if you give your agent a bunch of expert trajectories, it will generalize from those expert trajectories to random trajectories or random actions, and it will think, oh, these will also be expert. You try to take those non-expert actions, and you find out very quickly that you don't get good performance. The other problem, which maybe ties back to the overestimation bias problem, is that not only are you overestimating the value when you actually take those actions, you're propagating those values back. So we found that when you do this batch problem, you end up with a situation where the value estimate tends to diverge, and then that totally destroys your learning process, and your agent just fails horribly. So if I understood this paper correctly, there are all these papers in the literature going back on doing applied RL off-policy, and I think this work calls into question the correctness of all those agents — a lot of them use FQI, NFQ, or DQN. So given the insights you brought in this paper, are all those agents probably just incorrect? Incorrect is a big word, I think. They definitely don't claim anything false. There's nothing wrong about any of those papers, but I think there's a misconception, a general misconception in the field, about what you can and can't do in off-policy learning. So our paper was about exposing this issue of extrapolation error when you're dealing with finite data sets on, like, modern problems, which tend to be quite large and, you know, need neural networks to solve them. But when you're looking at these classical algorithms and you're working with small domains, a small data set is not as much of a problem, because you can still get quite a bit of coverage over a small problem. If you're looking at CartPole, you don't need tons of trajectories and data points to have nice coverage over the state and action space. So I assume when you look at these smaller methods like FQI — which initially worked off, I believe, decision trees — if they didn't work with a small amount of data, the first thought would be, oh, okay, we'll just give it a little bit more data. And then, oh, look, it works. So this problem of extrapolation error is really only easily seen or found when you look at large problems with neural networks. That's probably why we're the first people to really write a paper about it. And it's possible, of course, that there's some senior prof out there going, oh, of course I knew this was a thing — but at the time, at least, I hadn't seen anything on it. But to go back to your original question, are they wrong? I don't think so, but people definitely thought that batch RL wasn't a separate problem that we needed to think about. If you looked up batch RL at the time — I found some slides, like from a university course, saying DQN works as a batch RL algorithm — it makes perfect sense. And intuitively it does.
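For readers who want to see the shape of the experiment being described, here is a minimal sketch of the "train from a fixed batch, no exploration" setup, plus a quick check for the symptom of extrapolation error. The gym-style environment API and the names `behavior_policy`, `offline_agent.update`, and `critic` are all assumptions made for illustration, not names from the paper.

```python
import numpy as np
import torch

def collect_batch(env, behavior_policy, num_steps=100_000):
    """Fill a fixed dataset of (s, a, r, s', not_done) transitions using some behavior
    policy, e.g. a fully trained DDPG actor (assumed gym-style reset/step API)."""
    data = []
    s, done = env.reset(), False
    for _ in range(num_steps):
        a = behavior_policy(s)
        s2, r, done, _ = env.step(a)
        data.append((s, a, r, s2, float(not done)))
        s = env.reset() if done else s2
    return data

def train_offline(offline_agent, data, iterations=500_000, batch_size=100):
    """The batch RL loop: no exploration, no new transitions, only the fixed dataset."""
    for _ in range(iterations):
        idx = np.random.randint(len(data), size=batch_size)
        offline_agent.update([data[i] for i in idx])  # any off-policy update (DQN/DDPG/TD3/...)
    return offline_agent

def extrapolation_check(critic, data, action_dim, n=1000):
    """Symptom check: the critic happily scores actions it has never seen for these states,
    and nothing in the data constrains those estimates."""
    idx = np.random.randint(len(data), size=n)
    s = torch.as_tensor(np.array([data[i][0] for i in idx]), dtype=torch.float32)
    a_batch = torch.as_tensor(np.array([data[i][1] for i in idx]), dtype=torch.float32)
    a_random = torch.rand(n, action_dim) * 2 - 1  # uniform actions in [-1, 1]
    print("mean Q on batch actions: ", critic(s, a_batch).mean().item())
    print("mean Q on unseen actions:", critic(s, a_random).mean().item())
```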
So the paper really is combating this misconception that you can just take these algorithms that work on small problems, and that have theoretical guarantees given infinite data, and scale them up naively with DQN, at least. So we're really just trying to combat that misconception, basically. So when I first read your paper, I thought, wow, here we are in 2018 — actually, I read it in early 2019 — and this is really fundamental stuff, and the field is just starting to figure it out. So at first I was a little surprised. I'm like, really? No one knows these fundamentals? And then I was like, wow, that's actually really exciting, to be here at this time when people like you are just figuring out these really fundamental points about how RL really works. Yeah, it's a great time to be in the field. I think we're at the point where, I guess, deep RL doesn't truly work yet on things that matter. But 20 years from now, maybe we'll be looking back and go, okay, that was the time when we figured it all out. So I think there was probably an initial burst of really exciting stuff from Rich Sutton's time in the 90s, where a whole bunch of cool algorithms came out all at once. And I think, or at least I hope, that we're on the verge of a whole new set of cool algorithms that will shape the next 20 years of RL. Well, I definitely think your batch work is historic. I don't usually say that, but I can't imagine that it won't be huge — you know, seminal. So I'm super stoked to have you here. Yeah, thanks for that. Yeah, totally. Thanks for being here. So can we talk about how BCQ works in a little more depth? It's based on DDPG, is that right? Right. So yeah, BCQ is our algorithm to deal with extrapolation error. And I'll say that when we were looking at this problem of extrapolation error, my goal in creating an algorithm was to show that we understood extrapolation error. And my thought was, if we can create an algorithm that solves these batch RL problems that no other algorithm can currently solve, regardless of how we get there, then it means that at least we've understood what the problem is, or what the core issue is with batch RL so far. So we came up with BCQ. BCQ is meant to deal with continuous action spaces. And I'll be the first to admit it's a bit of a messy algorithm — there are kind of a lot of moving parts going on. So it's a bit confusing, but maybe I'll take a step back and explain our more recent version of BCQ, which is the discrete-action version, which is quite simple. And then we can double back to what the original version looked like and why being in continuous spaces is confusing. Right. So the algorithm in the discrete action space is very simple. It looks essentially like DQN. The idea is that this extrapolation error is error introduced by out-of-distribution actions, or actions that we've never seen before for a given state. So what we'll do to combat that is train something that looks like an imitation module — something like behavior cloning — that will just estimate, for a given state, what's the probability of each of the actions being in the batch. And then we can just threshold, basically. So if an action has very low probability of being contained in the data set, then we probably can't generalize very well to it and we shouldn't take that action. So we'll just eliminate it.
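A minimal sketch of that eliminate-then-act-greedily step, assuming a Q-network and an imitation (behavior cloning) head over the same states; the 0.3 relative threshold is just a commonly used value, not necessarily what any particular implementation uses.

```python
import torch
import torch.nn.functional as F

def bcq_discrete_action(q_net, imitation_net, state, threshold=0.3):
    """Sketch of discrete-BCQ action selection (the same mask is also applied when forming
    the target value during training). Assumed interfaces:
    q_net(state) -> Q-values [B, num_actions]; imitation_net(state) -> BC logits [B, num_actions]."""
    q = q_net(state)
    probs = F.softmax(imitation_net(state), dim=1)
    # Relative threshold: keep actions whose probability under the behavior-cloning model
    # is within `threshold` of the most likely action for that state.
    allowed = (probs / probs.max(dim=1, keepdim=True).values) >= threshold
    # Eliminate unsupported actions, then act greedily over what is left, just like DQN.
    return q.masked_fill(~allowed, float("-inf")).argmax(dim=1)
```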
So the final algorithm looks like DQN with a few actions shaved off, and it turns out that really helps in limiting extrapolation error, and you can do batch RL. The problem is that in a continuous action space things get a little bit more hairy, and unfortunately we started in a continuous action space, which made things difficult. So the continuous algorithm — instead of DQN it's based on DDPG, but I would say it's still close to the same idea. But we have this problem where we need to eliminate actions that are, say, low probability of being in the batch, and rather than that sort of thresholding approach, we went with a sampling idea. So the idea was, if we can train a VAE, or any sort of generative model, to model the batch — a state-conditioned VAE — then if you take that state and sample actions from it, the actions we sample from the generative model will have high probability of being in the batch. Once you're at that point, you can then just select the highest-valued action. So there's now this learned generative model: you sample from it, then select the highest-valued action. Unfortunately, there's one more step to get there with the algorithm, and that is what we call the perturbation model. The idea was, suppose for some state you had seen a lot of actions — you had maybe covered the entire action space, roughly. Then sampling, I don't know, 10 actions randomly would give you very poor coverage, and you'd probably end up with something very sub-optimal. The obvious fix would be to sample even more actions — maybe hundreds or thousands — but this starts to become very much not scalable. So our solution was to allow a secondary model, called the perturbation model, which is essentially like an actor, to perturb any of the actions that we sample. So we'll sample a bunch of actions, we'll perturb them in a small range — just little adjustments so that we can get a bit more value out of them — and then we'll select the highest-valued action. And so that way we get more coverage, basically. So yeah, this is starting to get a little messy, and there are a few other little details here and there, but that's the core idea behind BCQ. Great. I mean, from my perspective, it's still super elegant considering all that it does and how concise it is. So was the VAE an obvious choice here? You mention other generative models could work — would any other models make sense here? Yeah, well, I'm glad you think it's elegant. A VAE was the easiest choice in the sense that, yeah, of course you could swap the VAE for a GAN or some normalizing flows or something like that. Interestingly enough, a VAE tends to work the best, at least in our experience, and this is just a vanilla VAE. So of course you could do something more intelligent, but a VAE worked nicely because, compared to a GAN, it tends to generalize worse, which is sort of counterintuitive, maybe. But if you want to sample things that are only very similar to what's in the batch — like, you're just trying to memorize what's in the batch — then you actually really only want to see things in the batch. You don't want to generalize to things that are kind of in the batch or similar to things in the batch. So a VAE worked nicely for that, and it was a little bit more straightforward to get working, because GANs are notoriously hard to tune and to get working properly.
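Putting the generative model, the perturbation model, and the critic together, action selection in the continuous version looks roughly like the sketch below; the `vae_decoder`, `perturb_net`, and `q_net` interfaces and the number of sampled candidates are assumptions for illustration, and the generative model is just a pluggable sampler.

```python
import torch

def bcq_continuous_action(vae_decoder, perturb_net, q_net, state,
                          num_samples=10, max_action=1.0):
    """Sketch of continuous-BCQ action selection for one state of shape [1, state_dim].
    Assumed interfaces: vae_decoder(states) samples actions the generative model believes
    are "in the batch"; perturb_net(states, actions) returns a small bounded adjustment;
    q_net(states, actions) is the learned critic."""
    with torch.no_grad():
        states = state.repeat(num_samples, 1)
        # 1) Sample several candidate actions that resemble the batch data.
        candidates = vae_decoder(states)
        # 2) Perturb each candidate slightly so a handful of samples still gives useful coverage.
        candidates = (candidates + perturb_net(states, candidates)).clamp(-max_action, max_action)
        # 3) Pick the candidate the critic values most.
        best = q_net(states, candidates).argmax()
        return candidates[best]
```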
So yeah, there's nothing stopping you from selecting a different generative model. And I think it's also a bit of a weak point in the continuous version of BCQ — getting the VAE to work properly has some challenges to it, and I suspect using a better generative model would make your life easier. There's been some follow-up work, for example BEAR, where they looked at a setting with only a single behavior policy. When you're doing that, it means you can now train something more similar to behavior cloning with just a unimodal policy, and that makes your life a lot easier — now you don't have to deal with this VAE thing. The VAE really is useful because it lets you handle multi-modality. So if you have a mixture of policies collecting the data in the batch, that will be handled nicely by a generative model. So depending on what assumptions you can make, you can maybe avoid it. Elegant? I don't know, but it works, and that's all we really wanted for the first version. So do you have to worry about sizing the VAE and tuning it specifically for the domain? It really needs to memorize the whole batch and have sensible generalization. I guess the hope is that it scales relative to the size of the batch as opposed to the size of the domain, because you're just memorizing the things that are in the batch. It's definitely, like I said, a weak point of the paper in terms of getting it to work properly. We did come up with a setting that worked across all the MuJoCo tasks, so it's not like you needed super specific tuning to get it to work. But it's fragile. Changing the hyperparameters is a nice way to get your algorithm to not learn properly. So yeah, I think if you were switching to a totally new domain, you'd probably have to tune that element of it. Fortunately, I guess it's also not necessarily our problem anymore. There's tons of great research out there on generative models, and we were using the most basic vanilla VAE, so I assume that just swapping it for something a little more sophisticated — which would be, of course, harder to program, harder to get working, and a bit messier, a larger component — would probably make your life easier when actually trying to run it. The hope, I guess, is that BCQ will naturally evolve as the field and nearby fields improve around it. So thinking about how this works internally, it seems like the VAE has this sense of state similarity, and then the Q-network might have a different sense of state similarity. Do you think they could be looking at state similarity differently, and could that be an issue, or is that a non-issue? Yeah, I love this question. This is a great question, actually. It's something we've been thinking about a lot. Is it an issue? I'll say apparently not, because it still works, right? But there is a discrepancy there, right? The VAE, or any generative model, is trying to measure the density of the batch — the probability space, or whatever. But on the other hand, what we're actually interested in is not the density. We're interested in the generalization properties of the Q-network. So, for a given state, what actions can we generalize properly to? And that usually will correspond to something similar to the density of the data set, or the distribution of the data set. But it might not, exactly, right? So there is a discrepancy there. We spent some time trying to build on that and see if we could improve it, use that sort of idea.
Like, maybe we could take more advantage of the Q-network rather than having this generative model. Yeah, we never really made any super exciting, meaningful progress. I think it's an important question that we need to be thinking about, but I don't have any great answers for it. But yeah, it's definitely an interesting thing. And the other thing I'll say is that for our discrete version of BCQ, we shared some layers. So the convolutional layers — this we used in Atari — are shared between the imitation network and the Q-network, and that helps, I guess, keep the similarities similar, to reuse the word. But yeah, not exactly what we're looking for. So I think it's a bit of an open question right now and an option for improving the algorithm, for sure. Cool. And then if our agent ever does get into states that are far from the states seen in the batch, is it true that all bets are just off completely? Like, is there anything we can do in that scenario? Maybe that's a little outside of the scope of this paper. But yeah, it's a question that people ask a lot, so you're not alone in that. I think it is out of the scope of the paper in the sense that it's not exactly the problem that we're trying to tackle. And the way I see it is, let's say you train a robot to play hockey or something, and then you ask it to bake a cake. This algorithm is definitely not going to be the one to save you. We really need better transfer learning methods, or better generalization, or something like that to really solve that type of problem. And then, I guess that goes back to your previous question — once we start generalizing really well, then we need to look at generalization more than we do the distribution of the data set. Yeah, BCQ doesn't solve it. I wish it did, but no, not quite. Yeah, that makes sense, because that wasn't what you were trying to do. I think, if I understand, BCQ is just trying to keep you out of those areas. Yeah, exactly, exactly. So rather than try to bake a cake — well, you know, robot, you don't know how to bake a cake, stick to playing hockey for now. That's really the core idea. So in your BCQ paper, BCQ was compared to a number of other algorithms: DDPG, a discretized DQN, and some versions of behavior cloning. We didn't see it compared to the top continuous action space algorithms like SAC and your own TD3. Then there was, of course, a closely related paper, Striving for Simplicity in Off-Policy Deep RL, from Agarwal et al. That showed your TD3 did really well in the batch setting, sometimes even beating BCQ in some cases. So do you have any comments about how TD3 performs in the batch setting, given that, from what I understand, it wasn't designed for batch use? Yeah, so it was kind of nice to see TD3 do well in a batch setting. Of course, we knew that in some cases TD3 did do well because, of course, I tried my own algorithm on these problems. But what it boils down to is this problem of overestimation bias. So when you have extrapolation error and you're generalizing poorly — generalizing actions to be higher value than they really are — you create this problem of overestimation bias. BCQ deals with that in a few ways. One, of course, is that it says: don't take those actions that only look high-value because we haven't seen them.
And then it also has some of our ideas around double Q-learning and dealing with overestimation bias that are in TD3 — we included them in BCQ, of course. And I think for some of our batch settings, just dealing with overestimation bias is enough for you to get good performance. So TD3 deals with it in the same way that BCQ does, and so they both do well. And so that wasn't necessarily surprising, and I don't think it in any way contradicts anything in our paper. I want to move to your more recent paper, Benchmarking Batch Deep Reinforcement Learning Algorithms. Yeah, so that was a paper I did during an internship at Facebook, actually, and we were working with a team there — Edoardo Conti, Mohammad Ghavamzadeh, and Joelle Pineau. And in that paper — I mean, this had come right after Agarwal et al.'s Striving for Simplicity in Off-Policy Deep Reinforcement Learning paper. And that paper had tested some of our understanding and some of the ideas that we had about extrapolation error and batch learning, so we wanted to retest some of these things. And I'll take a step back and say what they did in their paper. So in the Striving for Simplicity paper, they had this experiment where they trained DQN start to finish and collected all the data that it had gathered during the entire process. So over the 50 million time steps, they looked at all the state-action pairs that it had ever seen, put that into one giant data set, and trained DQN and a few other agents on this giant data set. And it turns out that in that setting, you can actually learn quite well, basically. That raises the question: does batch RL work in nice enough settings? Maybe batch RL is only really a hard thing to do in continuous action spaces — sort of, what's going on here? So the first thing we wanted to do was get a better understanding of that. The second thing we wanted to talk about was that there have been quite a few batch RL papers in the last few months — a short timeline we're looking at here. But the thing with batch RL is that it's very easy to come up with a new problem. There's no standard batch that you're supposed to look at. You can come up with a new batch for whatever paper you want, right? So you can have a batch where the data is a lot of expert trajectories. Maybe it's totally random. Maybe there are several behavior policies. Maybe it's a single behavior policy. So we wanted to say, okay, what happens? We'll just look at all of them, in one setting. And we're not saying that this is the setting that you should look at for batch RL. It's just a setting. Let's just see what happens. Let's put everything on even ground and see what happens. And finally, I guess the third thing is that this is also the paper where we introduced the discrete version of BCQ, which is sort of the cleanest version, the one I like the most. And the conclusion from the paper was that nothing works amazingly in the single behavior policy setting. So, all the algorithms that we tried — for example, we tested DQN and QR-DQN, which Agarwal et al. said would work pretty well — in this setting, with a single behavior policy and less diverse data, they didn't work so well, basically. We tested some other recent algorithms, and we also tested our BCQ algorithm. And we found that although it worked the best, it wasn't actually, like, super amazing. You'd hope that it would dramatically outperform the behavior policy.
In a lot of cases, it just matches the performance of the behavior policy. So it looks more like, say, robust imitation. It's sort of uncertain whether the reason is just that a single behavior policy doesn't give you enough data to generalize and get better performance, or whether the algorithm itself is just fundamentally not strong enough to really tackle these problems in a way that's truly satisfying. But either way, there are some interesting results there. And so we stuck it together and it's now a nice workshop paper. So I'm quoting from the paper. There's one line that says it's easier to sample from pi(a|s) than to model pi(a|s) exactly in a continuous action space. Can you say more about that line? Is that super obvious to you? Right. I would say it's super obvious, but I think it's more a property of just the tools that we have available. So in generative modeling, of course, there are ways to model the density or the distribution of the data set — normalizing flows or something like that. But we do have a lot of nice techniques that just let us sample, even without modeling the density exactly. So with a VAE, you can't necessarily recover the true distribution, but you can sample from the distribution. So that makes it easier, essentially, and that's the reason why we use the VAE in the continuous version of BCQ — because it just makes your life easier, basically. Whereas getting that exact density function is not easy. However, in a discrete action space, it's actually quite an easy problem. You can just train your standard cross-entropy loss kind of thing — a behavior cloning kind of model — and you'll get some kind of estimate of it. Okay, and then going back to Agarwal's paper, Striving for Simplicity. They have a line where they say "contrary to recent work", and they cite Zhang and Sutton 2017 and Fujimoto et al. 2019 — that was your paper — and they find the logged DQN data is sufficient. So now, you talked about this earlier, that different data sets have different properties. But does that fully explain why they got different results than you did? Right, yeah. So I think some people read their paper and thought, oh, one of these papers has to be wrong. But I don't think of the papers as contradictory in any way. I think their paper is very much complementary to ours, and I think our follow-up work, again, is complementary. So, to me, the story of the first paper was: batch RL is a really hard problem. We need to think about it very carefully. You can't just run your algorithms naively and solve batch RL. In their work, they're maybe saying batch RL works if things are set up nicely. If we have huge data sets, diverse data sets, this is now a solvable problem. Which means that if you're maybe a practitioner and you're thinking, I have this batch RL problem, what can I do? Well, there are two approaches, I guess. You could use maybe BCQ or some batch RL algorithm to carefully deal with this problem. Or maybe you could just collect a little bit more data in a diverse enough way. And so that's definitely an interesting result. And I don't think there's any contradiction there, which is nice. And I do think the diversity explains most of it, at least from my perspective.
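To ground that point about pi(a|s): in a discrete action space, estimating the behavior policy really is just supervised classification, which is what the discrete BCQ threshold leans on. A minimal sketch, with illustrative network sizes and a deliberately simple full-batch loop:

```python
import torch
import torch.nn as nn

# In a discrete action space, modeling pi(a|s) is ordinary classification: a cross-entropy /
# behavior-cloning model gives explicit probabilities you can threshold on. In continuous
# spaces you usually settle for something you can only *sample* from (e.g. a conditional VAE),
# which is why the continuous version of BCQ leans on sampling instead.

class BehaviorCloning(nn.Module):
    def __init__(self, state_dim, num_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(),
                                 nn.Linear(256, num_actions))

    def forward(self, state):
        return self.net(state)  # logits; softmax gives the estimate of pi(a|s)

def train_behavior_cloning(model, states, actions, epochs=100, lr=3e-4):
    """Full-batch training kept deliberately simple: `states` is a float tensor [N, state_dim],
    `actions` is a long tensor [N] of the integer actions taken in the batch."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        loss = loss_fn(model(states), actions)
        opt.zero_grad(); loss.backward(); opt.step()
    return model
```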
In the Agarwal et al. paper, they showed good results with REM — that's random ensemble mixture. It's mixing multiple DQNs with weights drawn from this simplex. Yeah. Yeah. And in your paper, REM didn't do that well. Generally, QR-DQN dominated it. So is that, again, due to the data set being more diverse in their case, or do you think there's something else going on there? Right. Yeah, I guess one of the things I want to say about our paper is that although we looked at a couple of algorithms and a lot of them didn't really work very well, it doesn't mean that their algorithms don't work, right? We were looking at a new setting. So REM, for example, was designed to work well in this sort of batch setting with very diverse data, and it totally does work in that setting — it works very well in that setting. We were looking at a different setting just to see what would happen, just to confirm some of our claims from the first paper in a new setting and make sure everything's true. Does their algorithm work? Yes, it definitely does. Is there a good reason for why, or what the difference is? I'm not sure, right? Yeah. We don't know enough about their algorithm — it's very new. And it's not something we tested exhaustively, of course, because that wasn't really the goal of the paper. So I can't say that I understand all the details of their algorithm, beyond implementation, of course, to really say why it would or wouldn't work. I suspect, though, that it mostly comes down to the fact that it's this new setting that REM just wasn't meant to handle. Natasha Jaques was our guest for episode one. You compared her KL-control algorithm, and she compared her version of your BCQ, modified for discrete actions, in her paper Way Off-Policy Batch Deep Reinforcement Learning of Implicit Human Preferences in Dialog. So, does her version of BCQ in that paper differ from yours? It does a little bit. So, actually, Natasha — Natasha is awesome. Basically, she reached out to me after reading the paper, said it's super cool, we're working on similar things, and we had a nice discussion about it. And one thing I really appreciated is she wanted to be very fair about comparing to BCQ. So she asked me, how can we do BCQ in a discrete setting? What would it look like, and what would you feel is a fair comparison? And based on that discussion, they came up with their version of discrete BCQ. The key difference is that their version still relies on the sampling property, where you sample from — I mean, they're looking at a prior, in somewhat of a different setting. So they were sampling from this prior, basically, and then selecting the actions that are highest valued. It turns out, of course, that if you just threshold using the probability distribution, or the density of the data, you end up with something much nicer and you get better performance. So it's not like the greatest comparison to BCQ, but at the time it was super fair, and I really appreciate what she did. And that's one of the reasons why we included her algorithm in our paper: because, okay, her algorithm is showing that it beats BCQ — does it really? Let me actually double-check that. And I will say, again, their results were really interesting, and again, for really different problems.
So, of course, it didn't work in our setting, but that's not to say at all that KL-control is some bad algorithm or something. It's a super cool paper. They have a really awesome demo, and people should read it and check it out because it's great. So, yeah, we're not saying anything bad about the paper, but maybe the lesson is that a lot of these batch RL algorithms are not general purpose like we might hope. I'm grateful to Natasha because she helped me kick off this podcast and set the bar really high for guests. That was unforgettable for me. Yeah, I'm happy to be following in her footsteps. So, regarding distributional agents: was the choice of QR-DQN obvious here? Like, would other distributional agents be suitable, like C51 or IQN? Or is it more because they were used in previous papers? Right. We looked at QR-DQN because that was the algorithm that they used in Striving for Simplicity. So that was really the only motivation — we just wanted to look at theirs. That being said, I do think C51 or IQN would work well. Of course, after seeing the results in both papers, one thought is, why do these distributional methods work better? And from what I understand, the hypothesis is that distributional RL works well because it learns a better feature representation. And under that hypothesis, it makes a lot of sense that it would work well in the batch RL problem. Because if this extrapolation error is a problem that comes from generalizing poorly, then a better feature representation would help you generalize better to more actions, and that would naturally improve batch RL. Hopefully that hypothesis is true, so things make sense in my head. But, yeah, it was just a clear choice based on the prior work. Are you working on more batch stuff right now? Like, can you share anything about what you're working on these days? Batch RL stuff — I'll say we gave it a good effort to take it to the next step. That was the summer, and we didn't really come up with anything super exciting that I can talk about with a lot of passion. But I will say there's been a lot of cool work from other people in batch RL. I mean, I don't have paper titles off the top of my head, but there's been a lot in recent months, and I think there will be a lot more in the future. So I'm really excited for what everyone else is doing, and I think we'll see a lot of cool developments in batch RL. As far as what I'm working on, I've always been really interested in understanding the relationships between all the components in deep RL. So this paper was about understanding what data we need and how that works into the errors and learning and things like that. But there are all kinds of different components in deep RL and all these little things that we don't understand — why does this work, why does that work, why is it hard to tune, how do target networks come in, and experience replay. There are all these kind of weird things, and it's this big, complex machine. And so what I've been working on is just trying to understand, piece by piece, a little bit more. And sometimes we learn something new and it's just not that interesting, and sometimes it leads to something really exciting, like opening up batch RL to new people. And so, yeah, that's the direction I've been working in, and hopefully we'll come up with some cool breakthroughs.
But yeah, it's an exciting time to be in RL, so hopefully you'll see something exciting from me in the future. I have no doubt. So, aside from batch RL and what you're doing now, do you have any comments on what you find exciting in RL that maybe other people are working on these days? Yeah, I've always been a bit of a secret admirer of model-based RL, maybe from a distance. I did some work in model-based RL at the start of my master's degree and thought, well, this is really hard and not working any time soon. So I moved on to model-free, and then TD3 happened, so that was fortunate. But I think model-based RL is really exciting, and there have been some really cool developments in combining model-free and model-based RL. Model-based RL has always been sort of the most intuitive and natural form of RL; it just seems obvious that it would be important. It's how we as humans often think about problems, with this planning component. So I feel like it's going to play a huge role in the future of RL. I don't intend to work on it in the near future, but I'm always really excited to see new model-based RL papers. Am I going to see you at NeurIPS 2019 in Vancouver, Talk RL's hometown? Yeah, you definitely will. We're presenting the Benchmarking Batch Deep Reinforcement Learning Algorithms paper at the Deep RL workshop. So if you or any of the listeners are around, definitely come by and say hello. I feel like if someone is listening right now and says, oh, I heard you on Talk RL, I should have some kind of secret prize for them or something. But I'm a poor grad student, so maybe I can just give out some high fives. But yeah, definitely come by and say hello. Well, I would take a high five from Scott Fujimoto any day, so that sounds great. Thank you. I'm super grateful to you for being here and sharing your time and your insight with all our listeners and with myself, and I'm totally looking forward to hearing from you at NeurIPS and whatever you come up with next. Scott, thanks so much. Yeah, thank you so much for having me. That's our episode for today, folks. Be sure to check talkrl.com for more great episodes.
[ { "end": 13, "start": 0, "text": " This is TalkArail Podcast, all reinforcement learning, all the time." }, { "end": 20, "start": 13, "text": " Interviews at Brilliant folks across the world of RL. I'm your host, Rob and Chohan." }, { "end": 24, "start": 20, "text": " Scott Fujimoto is a PhD student at McGill University and Miele." }, { "end": 29, "start": 24, "text": " He's the author of the TD-3 algorithm, as well as some of the recent developments in batch" }, { "end": 33, "start": 29, "text": " deep reinforcement learning. Scott, I'm super stoked to have you on the show." }, { "end": 35, "start": 33, "text": " Thanks for having me on, Robin." }, { "end": 38, "start": 35, "text": " Great, so I wonder if we can start with TD-3." }, { "end": 44, "start": 38, "text": " So you have a paper from 2018 addressing function approximation error in actor-critic methods." }, { "end": 47, "start": 44, "text": " Can you tell us how this TD-3 paper came about?" }, { "end": 49, "start": 47, "text": " Yeah, right. It's actually kind of a funny story." }, { "end": 54, "start": 49, "text": " To some extent, TD-3 was a fluke. It was actually my first paper as a master student." }, { "end": 59, "start": 54, "text": " We had been working on this radical idea that you have as a master student." }, { "end": 68, "start": 59, "text": " It didn't really work, but we started with implementing a DDPG with this idea built into it." }, { "end": 73, "start": 68, "text": " We ran it on Hapchita, which is one of these mejoko simulated locomotion tasks." }, { "end": 77, "start": 73, "text": " On this first run, we got these really crazy results." }, { "end": 82, "start": 77, "text": " They're way better than anything that we had seen before at the time, two times better than anything else." }, { "end": 87, "start": 82, "text": " My first thought was like, oh my god, we've solved reinforcement learning." }, { "end": 90, "start": 87, "text": " This is the greatest thing ever, right?" }, { "end": 92, "start": 90, "text": " But of course, we're scientists." }, { "end": 95, "start": 92, "text": " We started digging into a little bit more." }, { "end": 101, "start": 95, "text": " We started coming with these ideas built around function approximation in value learning." }, { "end": 105, "start": 101, "text": " The original idea was built on that, but it wasn't right." }, { "end": 109, "start": 105, "text": " It started us off in this right direction." }, { "end": 116, "start": 109, "text": " It turns out that almost all of the improvement that we got in the beginning was actually just like this kind of implementation level details." }, { "end": 123, "start": 116, "text": " But this excitement of a big fake breakthrough really pushed us in the right direction for actually making these real improvements." }, { "end": 132, "start": 123, "text": " Although we never really significantly improved over our initial results, we ended up with these collection of actual ideas that actually genuinely worked." }, { "end": 137, "start": 132, "text": " All that put on top of DDPG, and that sort of created TD-3." }, { "end": 144, "start": 137, "text": " Great. And then, so TD-3, I think in your paper, you mentioned three major aspects that it adds to DDPG." }, { "end": 152, "start": 144, "text": " Did you did all these ideas emerge at once, or did you pursue them one at a time, and maybe you could walk us through those?" }, { "end": 159, "start": 152, "text": " Right. 
So, yeah, TD-3 is these three core concepts that is just added on top of DDPG." }, { "end": 165, "start": 159, "text": " It's all centered around this idea that when you're dealing with a deep reinforcement learning and after critic methods," }, { "end": 171, "start": 165, "text": " you have this neural network, this function approximation, and that means we're relying on generalization for a lot of things," }, { "end": 174, "start": 171, "text": " and there's going to be this function approximation error." }, { "end": 178, "start": 174, "text": " And so, that means there's essentially two aspects that we wanted to address." }, { "end": 183, "start": 178, "text": " There's the symptoms of function approximation error, which is essentially overestimation bias," }, { "end": 189, "start": 183, "text": " and then there's sort of this root cause, which is the function approximation error itself, and sort of the high variance that comes from that," }, { "end": 192, "start": 189, "text": " and some of the propagation errors along the side." }, { "end": 199, "start": 192, "text": " Fortunately, there was a lot of really nice papers in overestimation bias already, specifically double-key learning." }, { "end": 205, "start": 199, "text": " We were looking at double-key learning, and we were asking, well, why isn't anyone using this for DDPG?" }, { "end": 210, "start": 205, "text": " Is overestimation bias not a real problem for these sector critic methods? Like, what's going on here?" }, { "end": 215, "start": 210, "text": " So, the first thing, our first main improvement, which is the twin in TD-3." }, { "end": 219, "start": 215, "text": " Well, TD-3, I should say, stands for twin delayed DDPG and TD-3 for short." }, { "end": 225, "start": 219, "text": " So, twin comes from this idea of we have two Q-networks, and this is similar to double-Q learning," }, { "end": 229, "start": 225, "text": " which is a common method for discrete RL." }, { "end": 232, "start": 229, "text": " We take two Q-networks, and then we're sort of estimating the value," }, { "end": 237, "start": 232, "text": " we'll actually just take the minimum over the two, which is very similar to what double-Q learning does," }, { "end": 239, "start": 237, "text": " but it's a little bit more, I guess." }, { "end": 246, "start": 239, "text": " The delayed has to do with this idea of value convergence, to some extent." }, { "end": 250, "start": 246, "text": " So, in value-based learning, we have these target networks that we use in T-PRL," }, { "end": 255, "start": 250, "text": " and we were asking, what are the role of these target networks? Why are they important?" }, { "end": 261, "start": 255, "text": " And what it boils down to is they really relate a lot to this framework, I guess, fitted Q iteration," }, { "end": 264, "start": 261, "text": " where we treat reinforcement learning something like a supervised learning problem," }, { "end": 270, "start": 264, "text": " where you set up your problem, and then you essentially have a nice value target," }, { "end": 273, "start": 270, "text": " and you want your networks to sort of converge towards this value target," }, { "end": 276, "start": 273, "text": " doing one-bellament update, and then we'll update everything." }, { "end": 279, "start": 276, "text": " And the target network sort of approximates that by letting you do that," }, { "end": 285, "start": 279, "text": " at portions of the state-action space, as opposed to doing everything in your replay buffer." 
}, { "end": 295, "start": 285, "text": " And so, we were looking at that idea, and what we found is that actually using some mixture of ideas from target networks," }, { "end": 300, "start": 295, "text": " we were able to improve learning and stability by actually just delaying the policy update," }, { "end": 303, "start": 300, "text": " which really means we update the policy at a lower frequency than we update the critic," }, { "end": 308, "start": 303, "text": " and that sort of helps allow the value estimate converged before you make an update to the policy," }, { "end": 310, "start": 308, "text": " which sort of improves stability." }, { "end": 313, "start": 310, "text": " And then the final thing is something a little bit smaller." }, { "end": 316, "start": 313, "text": " It's just a regularization strategy, where we do some things similar to Sarsa," }, { "end": 320, "start": 316, "text": " where you add a little bit of noise to the target actions in the value-based update," }, { "end": 323, "start": 320, "text": " and then that sort of reduces variance a little bit." }, { "end": 326, "start": 323, "text": " So, yeah, all these three ideas come together for TD3." }, { "end": 331, "start": 326, "text": " The first one is to do with over-estimation bias, and the next two are sort of dealing with sort of the variance" }, { "end": 335, "start": 331, "text": " and instability problems that come from function approximation error." }, { "end": 340, "start": 335, "text": " Cool, and then could you tell us a little bit more about that third one, which is related to Sarsa?" }, { "end": 344, "start": 340, "text": " How I was looking through the code and actually trying to understand that one," }, { "end": 346, "start": 344, "text": " that one I found a little harder." }, { "end": 348, "start": 346, "text": " It's not too complicated." }, { "end": 352, "start": 348, "text": " The idea is that when you're evaluating the value of an action," }, { "end": 357, "start": 352, "text": " and in continuous space we have a meaningful distance measurement between actions." }, { "end": 361, "start": 357, "text": " So in discrete action space, action A and action B could be radically different," }, { "end": 366, "start": 361, "text": " but in continuous action space we know that the action 0 and the action 0.1," }, { "end": 369, "start": 366, "text": " or everything in between, is something similar." }, { "end": 374, "start": 369, "text": " So, the idea is that although there might be error on action 0," }, { "end": 380, "start": 374, "text": " and there will be some error on action 0.1, those are not necessarily correlated." }, { "end": 382, "start": 380, "text": " Or maybe they will be a little bit, but at the very least," }, { "end": 387, "start": 382, "text": " averaging the value between those actions should give you a lower variance estimate." }, { "end": 389, "start": 387, "text": " So, what we're actually doing is," }, { "end": 391, "start": 389, "text": " we're adding a little bit of noise to the policy," }, { "end": 397, "start": 391, "text": " and then over multiple updates, we're actually averaging over a small range around the action." }, { "end": 401, "start": 397, "text": " So, rather than use just a deterministic policy, we're using a policy" }, { "end": 404, "start": 401, "text": " with a little bit of noise added to it in the target update." 
}, { "end": 406, "start": 404, "text": " And the other small little details that we do is," }, { "end": 409, "start": 406, "text": " because when you add noise, sometimes you'll get an action that's actually quite far away." }, { "end": 412, "start": 409, "text": " You know, if you think about the Gaussian, some are close and are far." }, { "end": 414, "start": 412, "text": " We'll just clip the Gaussian." }, { "end": 417, "start": 414, "text": " So, we'll only look at actions that are within a small enough range," }, { "end": 422, "start": 417, "text": " such that it's a meaningful measurement of the mean action that we're looking at." }, { "end": 425, "start": 422, "text": " TD3 is, I think if I understand correctly," }, { "end": 428, "start": 425, "text": " it's either state of the art on some problems right now, is that correct?" }, { "end": 432, "start": 428, "text": " When you're looking at these off-policy algorithms for continuous control," }, { "end": 436, "start": 432, "text": " especially the Majoko test, you're really thinking about TD3 or SAC." }, { "end": 439, "start": 436, "text": " And at that point, they're very similar algorithms." }, { "end": 447, "start": 439, "text": " SAC, at least the most recent version, includes sort of our ideas in double-Q learning into it." }, { "end": 450, "start": 447, "text": " After the fact, the algorithms look quite similar as a result." }, { "end": 453, "start": 450, "text": " So, yeah, the performance between the two are very similar." }, { "end": 457, "start": 453, "text": " And they're definitely a step ahead of some of the other popular algorithms," }, { "end": 459, "start": 457, "text": " like a PPO or a tier PEO." }, { "end": 463, "start": 459, "text": " Okay, and then SAC adds this concept of maximizing entropy." }, { "end": 466, "start": 463, "text": " Is that a relevant concept in the world of TD3?" }, { "end": 474, "start": 466, "text": " Yeah, it's actually some, there's like a weird relationship that just sort of happened where SAC is like a," }, { "end": 478, "start": 474, "text": " or maybe I should say the other way around, TD3 is like a deterministic version of SAC." }, { "end": 480, "start": 478, "text": " They actually have somewhat similar ideas." }, { "end": 485, "start": 480, "text": " For example, this sort of Sarsistile update that we talked about sort of arrives naturally in SAC" }, { "end": 487, "start": 485, "text": " because it's a stochastic policy." }, { "end": 491, "start": 487, "text": " They both use the Clip Double-Q learning, which is the twin in TD3." }, { "end": 495, "start": 491, "text": " And they're both off-policy RL algorithms." }, { "end": 497, "start": 495, "text": " Is the maximum entropy relevant in SAC?" }, { "end": 501, "start": 497, "text": " It is, of course, the results in SAC are great." }, { "end": 506, "start": 501, "text": " Inpirically, the papers get very similar performance on most tasks." }, { "end": 510, "start": 506, "text": " But I believe SAC usually edges out TD3 in the long run." }, { "end": 513, "start": 510, "text": " This maximum entry saying adds something for exploration." }, { "end": 516, "start": 513, "text": " So, if you run the algorithm for, I don't know, five million time steps or something," }, { "end": 518, "start": 516, "text": " you'll start to see SAC edge out TD3 a little bit more." 
}, { "end": 522, "start": 518, "text": " But on the other hand, to make a pitch for my own argument," }, { "end": 527, "start": 522, "text": " if I'm on an algorithm, TD3 is much simpler to implement." }, { "end": 530, "start": 527, "text": " And tends to run a little bit faster just on a walk-alc time." }, { "end": 533, "start": 530, "text": " It is remarkably elegant." }, { "end": 537, "start": 533, "text": " And I think that's definitely a beautiful thing, especially in this space," }, { "end": 540, "start": 537, "text": " where there's just so much complexity everywhere you look." }, { "end": 541, "start": 540, "text": " Yeah, exactly." }, { "end": 544, "start": 541, "text": " There's a lot of things, like when we first started getting into it," }, { "end": 546, "start": 544, "text": " there's a lot of algorithms that are just hard to get to work." }, { "end": 551, "start": 546, "text": " Basically, PPO, for example, is an example of an algorithm where," }, { "end": 555, "start": 551, "text": " once you have it set up nicely, it runs on most problems without having to tune it." }, { "end": 558, "start": 555, "text": " But getting it working in the first place is not an easy thing." }, { "end": 564, "start": 558, "text": " So, with TD3, we wanted an algorithm that was straightforward and easily understood." }, { "end": 570, "start": 564, "text": " So, another algorithm in this DDPG family is D4PG." }, { "end": 573, "start": 570, "text": " And that adds a few things, distributed actors," }, { "end": 577, "start": 573, "text": " distributional critic, and these ends-topper turns." }, { "end": 580, "start": 577, "text": " Are these things..." }, { "end": 582, "start": 580, "text": " Is D4PG still relevant?" }, { "end": 587, "start": 582, "text": " Are these additions also relevant still?" }, { "end": 589, "start": 587, "text": " Yeah, it's funny that you asked that, right?" }, { "end": 594, "start": 589, "text": " I think D4PG is like 2017 or 2018 paper." }, { "end": 597, "start": 594, "text": " And it's true that it's less relevant now today," }, { "end": 600, "start": 597, "text": " but it shows how fast the field moves, I guess, right?" }, { "end": 605, "start": 600, "text": " So, a paper so recent is already possibly outdated." }, { "end": 608, "start": 605, "text": " Because it's a distributed algorithm, it's somewhat..." }, { "end": 609, "start": 608, "text": " Yeah, it's definitely orthogonal." }, { "end": 611, "start": 609, "text": " It's sort of in a different world." }, { "end": 614, "start": 611, "text": " You know, they're using a lot of data, they're running things in parallel." }, { "end": 616, "start": 614, "text": " Of course, they get really nice results." }, { "end": 620, "start": 616, "text": " And all the improvements are totally orthogonal to TD3." }, { "end": 626, "start": 620, "text": " TD3, the idea really was, here's some improvements to actor critic algorithms with deep learning." }, { "end": 631, "start": 626, "text": " And ideally, you could take those improvements and just throw them on top of any actor critic algorithm." }, { "end": 634, "start": 631, "text": " It just so happens that DDPG was like the prime candidate." }, { "end": 636, "start": 634, "text": " But there's no reason why you can combine it with D4PG" }, { "end": 639, "start": 636, "text": " and then use the distributional critic with..." }, { "end": 642, "start": 639, "text": " I mean, maybe it would take a little bit of thinking about how to combine that exactly." 
}, { "end": 647, "start": 642, "text": " But it's definitely possible, and there's no reason why there's any conflict there." }, { "end": 650, "start": 647, "text": " When we look at something like TD3, it's state of the art." }, { "end": 655, "start": 650, "text": " Like, how far are we from squeezing the most possible out of these continuous" }, { "end": 657, "start": 655, "text": " action trajectories?" }, { "end": 661, "start": 657, "text": " Is there any sense of how much further we can go?" }, { "end": 663, "start": 661, "text": " Yeah, it's interesting that you asked that actually," }, { "end": 666, "start": 663, "text": " because when we first came out with TD3, I thought," }, { "end": 668, "start": 666, "text": " well, this has got to be the end." }, { "end": 670, "start": 668, "text": " We just had PPO, and it was getting good performance." }, { "end": 672, "start": 670, "text": " And now we've edited it a little bit more." }, { "end": 675, "start": 672, "text": " And how much more can we really push these environments?" }, { "end": 678, "start": 675, "text": " And when you actually visualize the tasks too," }, { "end": 680, "start": 678, "text": " they seem to be working really, really well." }, { "end": 682, "start": 680, "text": " So it's like, how much more can we get?" }, { "end": 686, "start": 682, "text": " And then SAC came out, and the long-run it out performs TD3." }, { "end": 688, "start": 686, "text": " So we saw even more performance." }, { "end": 690, "start": 688, "text": " And actually, recently, there was a paper, I believe," }, { "end": 692, "start": 690, "text": " from DeepMind called VMPO." }, { "end": 695, "start": 692, "text": " And they're, again, they're looking at the sort of distributed setting." }, { "end": 699, "start": 695, "text": " So they're using tons of parallel actors and a lot of data," }, { "end": 701, "start": 699, "text": " like in the billions of data points." }, { "end": 702, "start": 701, "text": " So a crazy amount." }, { "end": 705, "start": 702, "text": " But one of the interesting things, at least for me from that paper," }, { "end": 709, "start": 705, "text": " was that they were actually able to get even more performance." }, { "end": 712, "start": 709, "text": " So they tested, at least, on Walker and Ant." }, { "end": 716, "start": 712, "text": " And their performance, like 50% higher than the final performance of SAC." }, { "end": 721, "start": 716, "text": " And I don't know what would happen if you ran SAC or TD3 for, you know, a billion time steps." }, { "end": 724, "start": 721, "text": " I mean, in the paper, we're looking at one million time steps for some perspective." }, { "end": 726, "start": 724, "text": " So a billion is just a crazy amount." }, { "end": 729, "start": 726, "text": " But there is, I guess, room for improvement there." }, { "end": 732, "start": 729, "text": " And the other interesting thing, of course, is that there's definitely room for improvement" }, { "end": 733, "start": 732, "text": " in the sample efficiency." }, { "end": 736, "start": 733, "text": " For example, on some tasks, we're able to get a good performance" }, { "end": 741, "start": 736, "text": " in the first 50,000 time steps, which seems like, I mean, that's a very short amount of time." }, { "end": 744, "start": 741, "text": " But on the others, it takes much more." }, { "end": 750, "start": 744, "text": " And so surely there must be a way that we could get, you know, push these things a little bit further." 
}, { "end": 754, "start": 750, "text": " That being said, we probably are nearing the end of the Mejoko timeline." }, { "end": 757, "start": 754, "text": " It's probably time to look at some more challenging benchmarks." }, { "end": 761, "start": 757, "text": " But yeah, if your goal was performance, there's still a little bit more to get there." }, { "end": 764, "start": 761, "text": " So do you have more work lined up in this space?" }, { "end": 768, "start": 764, "text": " Not necessarily directly, of course." }, { "end": 774, "start": 768, "text": " You know, if we were to happen upon something that would improve after critic algorithms more," }, { "end": 778, "start": 774, "text": " the first thing I'm going to do is throw it on top of TD3 and see what happens." }, { "end": 781, "start": 778, "text": " It's not directly the goal." }, { "end": 784, "start": 781, "text": " But of course, like, I will admit, I'm a competitive person." }, { "end": 787, "start": 784, "text": " So it's always at the back of my mind like, well, maybe we could get a little bit better" }, { "end": 791, "start": 787, "text": " and then reclaim the top spot as the number one algorithm or whatever." }, { "end": 795, "start": 791, "text": " It's not the goal, but we're looking at things that are related, you know." }, { "end": 797, "start": 795, "text": " Deep RL is definitely my research focus." }, { "end": 801, "start": 797, "text": " And so it seems likely that eventually we'll come across something that could make a difference." }, { "end": 805, "start": 801, "text": " And then the first thing I'm going to do is how does it work with TD3?" }, { "end": 809, "start": 805, "text": " I want to move to another important paper viewers." }, { "end": 812, "start": 809, "text": " Off policy, deeper, enforcement learning, without exploration." }, { "end": 818, "start": 812, "text": " Yeah, so that was a paper I wrote with my supervisors, actually David Meager and Dwayna Preka." }, { "end": 824, "start": 818, "text": " And it's sort of this one of the first papers in recent time looking at a batch deep RL." }, { "end": 828, "start": 824, "text": " A batch deep RL, I guess, for those who might not know, is sort of this problem" }, { "end": 832, "start": 828, "text": " where you're given a fixed batch of data, so like a fixed data set." }, { "end": 836, "start": 832, "text": " And unlike other problems, there's no more interaction with the environment." }, { "end": 839, "start": 836, "text": " So here's your data set. What can you do with it, basically?" }, { "end": 844, "start": 839, "text": " So it's similar to, I guess, imitation learning, but the data is maybe not from an expert." }, { "end": 847, "start": 844, "text": " It could be arbitrarily bad. Maybe it's good. Who knows, right?" }, { "end": 850, "start": 847, "text": " And so that's an interesting problem because it's very practical." }, { "end": 853, "start": 850, "text": " There's a lot of scenarios that you can imagine that you just have some data." }, { "end": 857, "start": 853, "text": " And that's what you have. For example, I'm from the robotics group at McGill." }, { "end": 859, "start": 857, "text": " And we have this nice aquatic robot." }, { "end": 863, "start": 859, "text": " And so occasionally we run field trials in Barbados, which is a lot of fun, of course." 
}, { "end": 868, "start": 863, "text": " But it also means if you, once you've collected your data in Barbados and you come back to Canada," }, { "end": 873, "start": 868, "text": " that's all you have, right? So if you want to run an RL algorithm on your data," }, { "end": 876, "start": 873, "text": " it better be a batch RL algorithm." }, { "end": 879, "start": 876, "text": " So I love this paper. I saw it early this year." }, { "end": 882, "start": 879, "text": " And I shared it around with my friends who I thought would appreciate it." }, { "end": 885, "start": 882, "text": " And some of them actually understood the importance of that." }, { "end": 887, "start": 885, "text": " That was that I was super excited about." }, { "end": 892, "start": 887, "text": " I enjoyed your ICML talk on this paper, including your art skills, by the way." }, { "end": 896, "start": 892, "text": " Thank you. Yeah, not a lot of people commented on that." }, { "end": 899, "start": 896, "text": " So it's nice to hear some appreciation." }, { "end": 902, "start": 899, "text": " So how did this paper come about? What led you to focus on this?" }, { "end": 906, "start": 902, "text": " Right. So yeah, maybe I'll give a little bit more on the paper. We were looking at batch RL." }, { "end": 911, "start": 906, "text": " But the interesting thing about batch RL is that we found that it doesn't work." }, { "end": 914, "start": 911, "text": " You know, set up as an off-pulsie learning problem." }, { "end": 918, "start": 914, "text": " And the first thought, of course, is, oh, we'll just run something like DQN or DDPG," }, { "end": 922, "start": 918, "text": " whenever a nice off-pulsie algorithms. And, you know, I'm sure it will work well." }, { "end": 924, "start": 922, "text": " And it turns how fit it doesn't." }, { "end": 928, "start": 924, "text": " And so how we came across this was I was looking at exploration." }, { "end": 934, "start": 928, "text": " And I thought, okay, if we were going to do some exploration tasks on with these sort of major equipment environments," }, { "end": 940, "start": 934, "text": " maybe I'll look at sort of how quickly can these things learn if I were to give it the best data possible." }, { "end": 943, "start": 940, "text": " So it's like, even though it's an exploration thing, I was looking at, let's remove exploration." }, { "end": 945, "start": 943, "text": " So I'll give it some expert data." }, { "end": 950, "start": 945, "text": " So I took a DDPG agent, trained it to completion, then collected some sort of expert trajectories," }, { "end": 953, "start": 950, "text": " and then passed that data set to a new agent." }, { "end": 958, "start": 953, "text": " And, you know, I was unsure what to expect, but I thought it would learn something." }, { "end": 962, "start": 958, "text": " And it turns out that it can't learn at all, which is sort of a shocking result." }, { "end": 967, "start": 962, "text": " You know, here's expert data, now learn something, and you get literally nothing." }, { "end": 970, "start": 967, "text": " It fails completely. So that was very, very surprising." }, { "end": 974, "start": 970, "text": " And then we spent quite a bit of time sort of poking and prodding, trying to figure out what the hack was going on," }, { "end": 976, "start": 974, "text": " because this was so unexpected." }, { "end": 978, "start": 976, "text": " But it turns out it's actually a very simple reason." }, { "end": 980, "start": 978, "text": " We called it extrapolation error." 
}, { "end": 985, "start": 980, "text": " And the idea is that, especially with these neural networks that are generalizing," }, { "end": 989, "start": 985, "text": " if you have this finite data set, you have access to only part of the data." }, { "end": 993, "start": 989, "text": " So for some state action pairs, you might be very certain about, you know, what happens with them," }, { "end": 997, "start": 993, "text": " the reward, the value, whatever, and the state action pairs that you've never seen before." }, { "end": 1001, "start": 997, "text": " But that means for a given state, there's actions you've seen, you haven't seen," }, { "end": 1006, "start": 1001, "text": " but your critic, your value network will give you an estimate for all of those actions." }, { "end": 1011, "start": 1006, "text": " So for those unknown actions, you may extrapolate very negatively," }, { "end": 1015, "start": 1011, "text": " and you may just never take those actions, or you may extrapolate very positively." }, { "end": 1018, "start": 1015, "text": " And then now your agent is selecting those actions." }, { "end": 1022, "start": 1018, "text": " If you're agent selecting those actions, it has no idea how well it actually performs." }, { "end": 1026, "start": 1022, "text": " It thinks it's going to perform well, but the performance is who knows, right?" }, { "end": 1030, "start": 1026, "text": " So if you give your agent a bunch of expert trajectories," }, { "end": 1035, "start": 1030, "text": " it will generalize those expert trajectories to random trajectories or random actions," }, { "end": 1037, "start": 1035, "text": " and it will think, oh, this will also be expert." }, { "end": 1042, "start": 1037, "text": " You try to take those non-expert actions, and you find out very quickly that you don't get a good performance." }, { "end": 1046, "start": 1042, "text": " The other problem, of course, maybe ties back to the overestimation bias problem," }, { "end": 1050, "start": 1046, "text": " is that not only are you overestimating the value when you actually take it," }, { "end": 1052, "start": 1050, "text": " you're propagating those values back." }, { "end": 1055, "start": 1052, "text": " So we found that when you do this batch problem, you end up with a situation" }, { "end": 1059, "start": 1055, "text": " where the value estimate tends to diverge, and then that totally destroys your learning process," }, { "end": 1062, "start": 1059, "text": " and your agent just fails horribly." }, { "end": 1067, "start": 1062, "text": " So if I understood this paper correctly, like there's all these papers in the literature going back" }, { "end": 1073, "start": 1067, "text": " on doing applied RL off-policy, and I think this work calls into question" }, { "end": 1078, "start": 1073, "text": " the correctness of all those agents, like a lot of them use FQI, NFQ, or DQN." }, { "end": 1082, "start": 1078, "text": " So given the insights you brought in this paper," }, { "end": 1086, "start": 1082, "text": " like are all those agents probably just incorrect?" }, { "end": 1092, "start": 1086, "text": " Incorrects is a big word, I think. They definitely don't claim anything false." }, { "end": 1098, "start": 1092, "text": " There's nothing wrong about any of those papers, but I think there's a misconception," }, { "end": 1103, "start": 1098, "text": " like a general misconception about the field about what you can and can't do in off-policy learning." 
}, { "end": 1108, "start": 1103, "text": " So our paper was about sort of exposing this issue of extrapolation error" }, { "end": 1112, "start": 1108, "text": " when you're dealing with finite data sets, unlike modern problems," }, { "end": 1115, "start": 1112, "text": " which tend to be quite large, you know, e-neural networks to solve them." }, { "end": 1118, "start": 1115, "text": " But when you're working, you know, you're looking at these classical algorithms" }, { "end": 1123, "start": 1118, "text": " and you're working with small domains, a small data set is not as much of a problem" }, { "end": 1126, "start": 1123, "text": " because you can still get quite a bit of coverage over like a small problem." }, { "end": 1129, "start": 1126, "text": " Like if you're looking at cart pull, you don't need tons of trajectories and data points" }, { "end": 1133, "start": 1129, "text": " to sort of have a nice coverage over the state and action space." }, { "end": 1138, "start": 1133, "text": " So I assume when you look at these smaller methods like FQI that initially worked off," }, { "end": 1143, "start": 1138, "text": " I think believe it's decision tree, you know, if maybe if they didn't scale with a small amount of data," }, { "end": 1146, "start": 1143, "text": " the first slot would be, oh, okay, we'll just give it a little bit more data." }, { "end": 1148, "start": 1146, "text": " And then, oh, look, it works." }, { "end": 1156, "start": 1148, "text": " So this problem of extrapolation error is really only easily seen or found when you look at large problems with neural networks." }, { "end": 1159, "start": 1156, "text": " So that's probably why we're the first people to really write a paper about it." }, { "end": 1163, "start": 1159, "text": " And this is possible, of course, that there's some senior prof out there going off." }, { "end": 1167, "start": 1163, "text": " Of course, I knew this was a thing, but at the time, at least I hadn't seen anything on it." }, { "end": 1170, "start": 1167, "text": " But to go back to your original question, are they wrong?" }, { "end": 1179, "start": 1170, "text": " I don't think so, but people definitely thought that Bachelorette wasn't like a separate problem that we need to think about." }, { "end": 1185, "start": 1179, "text": " I thought that if you looked up Bachelorette, I found some slides saying, like from the University saying," }, { "end": 1188, "start": 1185, "text": " DQN works in a problem in Bachelorette, it makes perfect sense." }, { "end": 1190, "start": 1188, "text": " And intuitively it does." }, { "end": 1197, "start": 1190, "text": " So the paper really is sort of combating this sort of misconception that you can just take these algorithms that work on small problems" }, { "end": 1201, "start": 1197, "text": " and that have sort of theoretical guarantees given infinite data." }, { "end": 1204, "start": 1201, "text": " But you can't scale these up with just DQN naively, at least." }, { "end": 1207, "start": 1204, "text": " So we're just really trying to combat that misconception, basically." }, { "end": 1213, "start": 1207, "text": " So when I first read your paper, I thought, wow, here we are in 2018." }, { "end": 1215, "start": 1213, "text": " Actually, I read it in early 2019." }, { "end": 1217, "start": 1215, "text": " And this is really fundamental stuff." }, { "end": 1220, "start": 1217, "text": " And we're just the field is just starting to figure that out." 
}, { "end": 1222, "start": 1220, "text": " So at first I was like a little surprised." }, { "end": 1224, "start": 1222, "text": " I'm like, really? No one knows these fundamentals." }, { "end": 1234, "start": 1224, "text": " And then I was like, wow, that's actually really exciting to be here at this time when people like you are just figuring out these really fundamental points about how RL really works." }, { "end": 1237, "start": 1234, "text": " Yeah, it's a great time to be in the field." }, { "end": 1242, "start": 1237, "text": " I think we're at the point where I guess DeepRL doesn't truly work on things that matter." }, { "end": 1249, "start": 1242, "text": " But like 20 years from now, maybe we'll be looking back and go, OK, that was the time when we figured it all out." }, { "end": 1259, "start": 1249, "text": " So I think there was probably an initial burst of really exciting stuff from Rich Sutton's time in the 90s where a whole bunch of cool algorithms came out all at once." }, { "end": 1266, "start": 1259, "text": " And I think, or at least I hope, that we're on the verge of a whole new set of cool algorithms and that will sort of shape the next 20 years of RL." }, { "end": 1270, "start": 1266, "text": " Well, I definitely think your batch work is historic." }, { "end": 1276, "start": 1270, "text": " I don't usually say that, but I can't imagine that it won't be a huge, usually seminal." }, { "end": 1279, "start": 1276, "text": " So I'm super stoked to have you here." }, { "end": 1280, "start": 1279, "text": " Yeah, thanks for that." }, { "end": 1282, "start": 1280, "text": " Yeah, totally. Thanks for being here." }, { "end": 1286, "start": 1282, "text": " So can we talk about how BCQ works in a little more depth?" }, { "end": 1290, "start": 1286, "text": " It uses, it's based on DDPG, is that right?" }, { "end": 1294, "start": 1290, "text": " Right. So yeah, BCQ is our algorithm to deal with extrapolation error." }, { "end": 1305, "start": 1294, "text": " And I'll say that when we were looking at this problem extrapolation error, my goal in creating an algorithm was to show that we understood extrapolation error." }, { "end": 1317, "start": 1305, "text": " And my thought was, if we can create an algorithm that solves these batch or L problems that no other algorithm can currently solve regardless of how we get there, then it means that at least we've understood what the problem is or what the core issue is with batch or L so far." }, { "end": 1320, "start": 1317, "text": " So we came with BCQ." }, { "end": 1325, "start": 1320, "text": " BCQ is meant to dealt with or deal with continuous action spaces." }, { "end": 1328, "start": 1325, "text": " And I'll be the first to admit it's a bit of a messy algorithm." }, { "end": 1330, "start": 1328, "text": " There's kind of a lot of moving parts going on." }, { "end": 1339, "start": 1330, "text": " So it's a bit confusing, but maybe I'll take a step back and I'll explain the are more recent version of BCQ, which is sort of the discrete action version of BCQ, which is quite simple." }, { "end": 1346, "start": 1339, "text": " And then we can sort of double back to what the original version looked like and why being continuous spaces is confusing." }, { "end": 1347, "start": 1346, "text": " Right." }, { "end": 1350, "start": 1347, "text": " So the algorithm in the screen action space is very simple." }, { "end": 1352, "start": 1350, "text": " It looks essentially like DQN." 
}, { "end": 1361, "start": 1352, "text": " The idea is that this extrapolation area is sort of this error introduced from out of distribution actions or actions that we've never seen before for a given state." }, { "end": 1370, "start": 1361, "text": " So what we'll do to combat that is we'll train something that looks like an imitation module, something like behavior cloning will just estimate for a given state." }, { "end": 1373, "start": 1370, "text": " What's the probability of each of the actions being in the batch?" }, { "end": 1375, "start": 1373, "text": " And then we can just threshold basically." }, { "end": 1382, "start": 1375, "text": " So if an action is very low probability of being contained in the data set, then we probably can't generalize very well to it and we shouldn't take that action." }, { "end": 1384, "start": 1382, "text": " So we'll just eliminate it." }, { "end": 1394, "start": 1384, "text": " So the final algorithm looks like DQN with a few actions sort of shaved off and it turns out that sort of really helps a limiting extrapolation error and you can do batch or L." }, { "end": 1400, "start": 1394, "text": " The problem is in a discrete action space things get a little bit more hairy and sorry, a continuous action space." }, { "end": 1405, "start": 1400, "text": " And unfortunately we started in a continuous action space which made things difficult." }, { "end": 1410, "start": 1405, "text": " So the algorithm again instead of DQN is actually I would say it's still close to DQN." }, { "end": 1415, "start": 1410, "text": " But we have this problem where we need to eliminate actions that are say low probability of being in the batch." }, { "end": 1421, "start": 1415, "text": " And we took a separate a different approach of sort of thresholding and instead went with a sampling idea." }, { "end": 1437, "start": 1421, "text": " So the idea was if we can sort of train a VAE or any sort of generative model to sort of model the batch, a state condition VAE, if you take that state and you sample actions from it, those actions that we sample from the generative model will be sort of high probability of being in the batch." }, { "end": 1441, "start": 1437, "text": " Once you're at that point, you can then just select the highest valued action." }, { "end": 1447, "start": 1441, "text": " So there's now this sort of learned generative model and then sample from it, then select the highest action." }, { "end": 1453, "start": 1447, "text": " Unfortunately, there's one more step to get there with the algorithm and that is what we call the perturbation model." }, { "end": 1459, "start": 1453, "text": " And the idea was suppose for some action or some state you had seen a lot of actions." }, { "end": 1462, "start": 1459, "text": " You had maybe covered the entire action space roughly." }, { "end": 1470, "start": 1462, "text": " Then sampling, I don't know, 10 actions randomly would give you a very poor coverage and you'd probably end up with something very sub-optimal." }, { "end": 1478, "start": 1470, "text": " And then the solution of course would be to sample even more actions. Maybe we could sample hundreds or thousands or something, but this starts to become very like not scalable." }, { "end": 1489, "start": 1478, "text": " So our solution there was will allow a secondary model called the perturbation model, which is essentially like an actor, and allow it to perturb any of the actions that we sample." 
}, { "end": 1498, "start": 1489, "text": " So we'll sample a bunch of actions, we'll perturb them in a small range, so just add a little adjustments so that we can get a little bit more value out of them." }, { "end": 1503, "start": 1498, "text": " And then we'll select the highest value action. And so that way we get a more coverage basically." }, { "end": 1510, "start": 1503, "text": " So yeah, this is starting to get a little messy and there's a few other little details here and there, but that's the core idea behind BCQ." }, { "end": 1517, "start": 1510, "text": " Great. I mean, from my perspective, it's still super elegant considering all that it does and how concise it is." }, { "end": 1521, "start": 1517, "text": " So was it VIE at an obvious choice here?" }, { "end": 1526, "start": 1521, "text": " Would you mention other generative models could work? Would any other models make sense here?" }, { "end": 1538, "start": 1526, "text": " Yeah, well, I'm glad you think it's elegant. A VIE was the easiest choice in the sense that, yeah, of course you could swap the VIE for a GAN or some normalizing flows or something like that." }, { "end": 1544, "start": 1538, "text": " Interestingly enough, a VIE tends to work the best, or at least from our experience. And this is just a vanilla VIE." }, { "end": 1555, "start": 1544, "text": " So of course you could do something more intelligent, but a VIE worked nicely because compared to a GAN, it tend to generalize worse, which is sort of counterintuitive maybe." }, { "end": 1563, "start": 1555, "text": " But if you want to sample things that are only very similar to what's in the batch, like you're just trying to memorize what's in the batch, then you actually really only want to see things in the batch." }, { "end": 1567, "start": 1563, "text": " You don't want to sort of generalize things that are kind of in the batch or similar to things in the batch." }, { "end": 1575, "start": 1567, "text": " So a VIE worked nicely for that, and it was a little bit more straightforward to get working because GANs are kind of notoriously hard to tune and to get working properly." }, { "end": 1578, "start": 1575, "text": " So yeah, nothing stopping you from selecting a different generative model." }, { "end": 1586, "start": 1578, "text": " And I think it's also a bit of the weak point in the continuous version of PCQ. Getting the VIE to work properly has some challenges to it." }, { "end": 1597, "start": 1586, "text": " And I suspect using a better gender model would make your life easier. There's been some follow-up work, for example, BearQL, where they looked at a setting with only a single behavior policy." }, { "end": 1603, "start": 1597, "text": " When you're doing that, it means you can now train something more similar to behavior-cloning with just a uni-modal policy." }, { "end": 1611, "start": 1603, "text": " And then that makes your life a lot easier. Now you don't have to deal with this VIE thing. The VIE really is useful because it lets you handle multi-modality." }, { "end": 1619, "start": 1611, "text": " So if you have a mixture of policies collecting the data in the batch, then that will be handled nicely by a generative model." }, { "end": 1630, "start": 1619, "text": " So depending on what assumptions you can make, you can maybe avoid it elegant. I don't know, but it works. And that's all we really wanted for the first version." 
}, { "end": 1644, "start": 1630, "text": " So do you have to worry about sizing the VIE and tuning it specifically for the domain? It really needs to memorize the whole batch and have a sensible generalization." }, { "end": 1660, "start": 1644, "text": " I guess the hope is that it's almost relative to the size of the batch as opposed to the size of the domain because you're just memorizing the things that are in the batch. It's definitely, like I said, a weak point to the paper in terms of getting it to work properly." }, { "end": 1675, "start": 1660, "text": " We did come up with a setting that worked across all the Magyoko tasks, so it's not like you needed so specific tuning to get it to work. But it's fragile. Changing the hyper parameters is a nice way to get your algorithm to not learn properly." }, { "end": 1690, "start": 1675, "text": " So yeah, I think if you were switching to a totally new domain, you'd probably have to tune that element of it. Fortunately, I guess it's also not necessarily in our problem anymore. So there's tons of great research out there on generative models." }, { "end": 1706, "start": 1690, "text": " And we were using the most basic vanilla VIE, so I assume that just swapping it with something a little more sophisticated, which would be, of course, harder to program, harder to get working and a little bit messier, larger component. But it would probably make your life easier when actually trying to run it." }, { "end": 1714, "start": 1706, "text": " The hope I guess is that the BCK will sort of naturally evolve as the field and nearby fields just improve on the run." }, { "end": 1725, "start": 1714, "text": " So thinking about how this internal work, it seems like the VIE has this sense of state similarity and then the Q network might have a different sense of state similarity." }, { "end": 1733, "start": 1725, "text": " Do you think they could be looking at these state similarities differently and could that be an issue or is that an on an issue?" }, { "end": 1745, "start": 1733, "text": " Yeah, I love this question. This is a great question actually. It's something we've been thinking about a lot. Is it an issue? I'll say apparently not because it still works, right? But there is a discrepancy there, right?" }, { "end": 1753, "start": 1745, "text": " The VIE or any generative model is trying to measure the density of the of the batch, the probability space or whatever." }, { "end": 1765, "start": 1753, "text": " But on the other hand, what we're actually interested is not the density. We're interested in sort of the generalization properties of the Q network. So, you know, for a given state, what actions can we generalize properly to?" }, { "end": 1776, "start": 1765, "text": " And that usually will correspond to something similar to the density of the data set or the distribution of the data set. But it might not exactly, right? So there is a discrepancy there." }, { "end": 1788, "start": 1776, "text": " We spend some time trying to sort of build on that and see if we could improve it. Use that sort of idea. Like maybe we can take advantage more of the Q network rather than having this generative model." }, { "end": 1800, "start": 1788, "text": " Yeah, we never really made any super exciting meaningful progress. I think it's an important question that we need to be thinking about. But I don't have any great answers for it. But yeah, it's definitely an interesting thing." 
}, { "end": 1813, "start": 1800, "text": " And the other thing I'll say is that for our discrete version of BCQ is that we used, we shared some layers. So the convolutional layers, this we used in Atari, the convolutional layers are shared between the imitation network and sort of the Q network." }, { "end": 1828, "start": 1813, "text": " And then that helps, I guess, sort of keep the similarity similar to reuse words. But yeah, not exactly what we're looking for. So I think, yeah, it's a, it's a bit of an open question right now and an option for improving the algorithm for sure." }, { "end": 1841, "start": 1828, "text": " Cool. And then if, if our agent never do get into states that were far from states that were seen in the batch. Is it true that just all bets are off completely? Like is there, is there anything we can do in that scenario?" }, { "end": 1853, "start": 1841, "text": " Maybe that's a little outside of the scope of this paper. But yeah, it's a question that people ask a lot. So you're not alone in that. I think it is out of the scope of the paper in the sense that it's not exactly the problem that we're trying to tackle." }, { "end": 1863, "start": 1853, "text": " And the way I see it is, you know, let's say you train a robot to play hockey or something and then you ask it to bake a cake. You know, this algorithm is definitely not going to be the one to save you." }, { "end": 1880, "start": 1863, "text": " We really need better transfer learning methods or better generalization or something like that to really solve that type of problem. And then I guess that goes back to your previous question, of course, is once we start generalizing really well, then we need to really look at generalization more. So then we do the distribution of the data set." }, { "end": 1885, "start": 1880, "text": " Yeah, B.C.Q doesn't solve it. I wish it did, but no, not quite." }, { "end": 1892, "start": 1885, "text": " Yeah, that makes sense because that wasn't what you were trying to do. I think if I understand B.C.Q is just trying to keep you out of those areas." }, { "end": 1902, "start": 1892, "text": " Yeah, exactly, exactly. So rather than try to bake a cake, well, you know, robot, you don't know how to bake a cake stick to playing hockey for now. That's really the core idea." }, { "end": 1915, "start": 1902, "text": " So in your B.C.Q paper, B.C.Q was compared to a number of other algorithms, DDPG, a desktrized TQN and some versions of behavior cloning." }, { "end": 1925, "start": 1915, "text": " We didn't see it compared to the top continuous actions based algorithms like SAC and your own TD3." }, { "end": 1932, "start": 1925, "text": " Then there was, of course, a closely related paper striving for simplicity and off policy DPRL from Agarwal at all." }, { "end": 1941, "start": 1932, "text": " That showed your TD3 did really well in batch setting, some are even beating B.C.Q in some cases." }, { "end": 1952, "start": 1941, "text": " So do you have any comments about how TD3 performs in batch setting, given that from what I understand it wasn't designed for batch use?" }, { "end": 1964, "start": 1952, "text": " Yeah, so it was kind of nice to see TD3 do well in a batch setting. Of course, we knew that in some cases TD3 did do well because, of course, I tried my own algorithm on these problems." }, { "end": 1967, "start": 1964, "text": " But what it boils down to is sort of this problem of overestimation bias." 
}, { "end": 1978, "start": 1967, "text": " So when you have extrapolation error and you're sort of generalizing poorly and sort of generalizing actions to be higher value than they really are, you create this problem over estimation bias." }, { "end": 1987, "start": 1978, "text": " B.C.Q deals with that in a few ways. One, of course, is it says, don't take those actions that are higher value than we think they are." }, { "end": 1998, "start": 1987, "text": " And then it also has some of our ideas around double-Q learning and dealing over estimation bias that are in TD3, we included them in B.C.Q, of course." }, { "end": 2005, "start": 1998, "text": " And I think for some of our batch settings, just dealing with over estimation bias is enough for you to get a good performance." }, { "end": 2017, "start": 2005, "text": " So TD3 does it in the same way that B.C.Q does it and so they both do well. And so that wasn't necessarily surprising. And I don't think it in any way contradicts anything in our paper." }, { "end": 2023, "start": 2017, "text": " I want to move to your more recent paper, benchmarking batch, deeper, and more recent learning algorithms." }, { "end": 2033, "start": 2023, "text": " Yeah, so that was a paper I did during an internship at Facebook, actually, and we were working with Team Eduardo Conti, Muhammad Gavanca, found this today, and as you all know." }, { "end": 2041, "start": 2033, "text": " And in that paper, we were sort of, I mean, this had come right after a Garwal and I was striving for simplicity in off-pulsing learning paper." }, { "end": 2048, "start": 2041, "text": " And that paper had tested some of our understanding and some of the ideas that we had about extrapolation error and batch learning." }, { "end": 2053, "start": 2048, "text": " So we wanted to sort of retest some of these things. And I'll take a step back and I'll say what they did in their paper." }, { "end": 2063, "start": 2053, "text": " So in the striving for simplicity paper, they had this experiment where they trained DQN start to finish and collected all the data that it had gathered during the entire process." }, { "end": 2075, "start": 2063, "text": " So over the 50 million time steps, they looked at all the state action pairs that it had ever seen. They put that into one giant data set and they trained DQN a few other agents on this giant data set." }, { "end": 2085, "start": 2075, "text": " And it turns out in that setting, you can actually learn quite well, basically. That raises the question, does BatchRL work in nice enough settings?" }, { "end": 2090, "start": 2085, "text": " Maybe BatchRL is only really a hard thing to do in continuous action spaces, sort of what's going on here." }, { "end": 2093, "start": 2090, "text": " So the first thing we wanted to do was sort of get a better understanding of that." }, { "end": 2101, "start": 2093, "text": " The second thing that we wanted to talk about was there's a lot of, there's been quite a few BatchRL papers in the last few months, basically." }, { "end": 2108, "start": 2101, "text": " The short timeline we're looking at here. But the thing with BatchRL is it's very easy to come up with a new problem." }, { "end": 2114, "start": 2108, "text": " There's no sort of standard Batch that you're supposed to look at. You can come up with a new Batch for whatever paper you want, right?" }, { "end": 2122, "start": 2114, "text": " So you can write a Batch that the data is, you know, a lot of expert trajectories. Maybe it's a tool-of-random. 
Maybe there's several behavior policies. Maybe it's a single behavior policy." }, { "end": 2131, "start": 2122, "text": " So we wanted to say, okay, what happens? We'll just look at all of them. We'll look at one setting. And we're not saying that this is the setting that you should look at for batch RL." }, { "end": 2136, "start": 2131, "text": " It's just A setting. Let's just see what happens. Let's put everything on an even ground and see what happens." }, { "end": 2143, "start": 2136, "text": " And finally, I guess the third thing is to say this is also the paper where we introduced the discrete version of BCQ, which is sort of the cleanest version, the one I like the most." }, { "end": 2149, "start": 2143, "text": " And the conclusion from the paper was that nothing works amazingly on the single behavior policy setting." }, { "end": 2157, "start": 2149, "text": " So all the algorithms that we tried, for example, we tested, you know, DQN and QR-DQN, which Agarwal et al. said would work pretty well." }, { "end": 2162, "start": 2157, "text": " And on this setting, with a single behavior policy, less diverse data, they didn't work so well, basically." }, { "end": 2170, "start": 2162, "text": " We tested some other recent algorithms and we also tested our BCQ algorithm. And we found that although it worked the best, it wasn't actually like super amazing." }, { "end": 2177, "start": 2170, "text": " Like you'd hope that it would, you know, dramatically outperform the behavior policy. In a lot of cases, it just matches the performance of the behavior policy." }, { "end": 2190, "start": 2177, "text": " So it looks something more similar to, say, robust imitation. I'm sort of uncertain about whether or not the reason is just because a single behavior policy just doesn't give you enough data to generalize and get better performance." }, { "end": 2198, "start": 2190, "text": " Or, you know, maybe the algorithm itself is just fundamentally not strong enough to really tackle these problems in a way that's like truly satisfying." }, { "end": 2204, "start": 2198, "text": " But either way, there's some interesting results there. And so, you know, we stuck it together and it's now a nice workshop paper." }, { "end": 2218, "start": 2204, "text": " So I'm quoting from the paper. There's one line that says it's easier to sample from pi(a | s) than to model pi(a | s) exactly in a continuous action space." }, { "end": 2221, "start": 2218, "text": " Can you say more about that line? Is that super obvious to you?" }, { "end": 2229, "start": 2221, "text": " Right. I would say it's super obvious, but I think it's more of a property of just the tools that we have available." }, { "end": 2240, "start": 2229, "text": " So in generative modeling, of course, there are ways to model sort of the density or the distribution of the data set, normalizing flows or something like that." }, { "end": 2250, "start": 2240, "text": " But we do have a lot of nice techniques that just let us sample even without modeling exactly. So in a VAE, you can't necessarily recover the true distribution, but you can sample from the distribution." }, { "end": 2256, "start": 2250, "text": " So that makes it easier essentially, and that's the reason why we use the VAE in the sort of continuous version of B.C.Q." }, { "end": 2262, "start": 2256, "text": " Because it just makes your life easier basically. Whereas getting that exact density function is not easy." 
}, { "end": 2273, "start": 2262, "text": " However, in a discrete action space, it's actually quite an easy problem to do it. You can just train like your standard cross-century loss kind of thing, behavioral, glowing kind of model, and you'll get some kind of estimate of it." }, { "end": 2281, "start": 2273, "text": " Okay, then, and then going back to Uggarwals paper, striving for simplicity. They say they have a line where they say contrary to recent work." }, { "end": 2291, "start": 2281, "text": " And they say, Zang is Sutton 2017 and Fujimoto at all. That was your paper 2019. They find the log data, DQM data is sufficient." }, { "end": 2304, "start": 2291, "text": " So now you talked about this earlier that different data sets have different properties. But is that, um, does that fully explain why they got different results than you did?" }, { "end": 2314, "start": 2304, "text": " Right, yeah, so I think some people read their paper and thought, oh, you know, one of these papers has to be wrong. But I don't think of the papers as contradictory in any way." }, { "end": 2319, "start": 2314, "text": " I think their paper is very much complementary to ours. And I think our follow-up work, again, is complementary." }, { "end": 2324, "start": 2319, "text": " So like, to me, the story here of the first paper was, Bachelors is a really hard problem." }, { "end": 2335, "start": 2324, "text": " You know, we need to think about it very carefully. You can't just run your algorithms naively and solve Bachelorel. In their work, they're maybe saying, you know, Bachelorel works if things are sort of set up nicely." }, { "end": 2345, "start": 2335, "text": " If we have huge data sets, diverse data sets, this is now a solvable problem, which means that, you know, if you're maybe a practitioner and you're thinking, I'll have this Bachelorel problem, what can I do?" }, { "end": 2352, "start": 2345, "text": " Well, that means there's two approaches, I guess. You could, you could use maybe BCQ or some Bachelorel algorithm, you know, to carefully deal with this problem." }, { "end": 2359, "start": 2352, "text": " Or maybe you could just collect a little bit more data in a diverse enough way. And so that's definitely an interesting result." }, { "end": 2370, "start": 2359, "text": " And I don't think, yeah, there's, I don't think there's any contradiction there, which is nice. And I do think the diversity explains most of it, at least from my perspective." }, { "end": 2380, "start": 2370, "text": " In the Agarwal et al paper, they showed good results with REM. That's the random ensemble mixture. It's mixing DQNs with multiple DQNs with this, this simplex." }, { "end": 2400, "start": 2380, "text": " Yeah. Yeah. And in your paper, REM didn't do that well. Generally QRDQ dominated it. Is that, so is that again, do the data set, do you think being more diverse in their, in their case, or, or is there some, do you think there's some other thing going on there?" }, { "end": 2412, "start": 2400, "text": " Right. Yeah, I guess the one of the things I want to say about our paper is that although, you know, we looked at a couple of algorithms and a lot of them didn't really work very well. It doesn't mean that their algorithms don't work, right?" }, { "end": 2423, "start": 2412, "text": " We were looking at a new setting. So, you know, REM, for example, was designed to work well in this sort of patch setting with very diverse data. And it totally does work in that setting. And it works very well in that setting." 
}, { "end": 2433, "start": 2423, "text": " So, we were looking at a different setting just to see what would happen, just to see, you know, confirm some of our claims from the first paper in a new setting and see what happens and make sure everything's true." }, { "end": 2444, "start": 2433, "text": " Does their algorithm work? Yes, it definitely does. Is there a good reason for why or what the difference is? I'm not sure, right? Yeah. We don't know enough about their algorithm. It's very new." }, { "end": 2457, "start": 2444, "text": " So, and it's not something we tested dramatically, of course, because that wasn't really the goal of the paper. So, I can't say that I understand, you know, all the details of their algorithm, you know, beyond implementation, of course, to really say why or why wouldn't work." }, { "end": 2463, "start": 2457, "text": " I suspect though that it mostly comes down to the fact that it's this new setting that REM just wasn't meant to handle." }, { "end": 2473, "start": 2463, "text": " Natasha, Jake's was our guest for episode one. You compared her KL control algorithm and she compared her version of your BCQ." }, { "end": 2483, "start": 2473, "text": " Modified for discrete actions in her paper way off policy, batch deeper, enforcement learning of implicit human preferences in dialogue." }, { "end": 2497, "start": 2483, "text": " So, does her version of BCQ in that paper differ from yours? It does a little bit. So, actually Natasha, Natasha is awesome. Basically, she reached out to me after reading the paper, you know, said, it's super cool." }, { "end": 2512, "start": 2497, "text": " You know, we're working on similar things and we had a nice discussion about it. And one thing I really appreciated is she wanted to be very fair about comparing to BCQ. So, you know, she asked me, you know, how can we do BCQ in a discrete setting? What would look like and what would you feel like is a fair comparison?" }, { "end": 2517, "start": 2512, "text": " And so, you know, based on that discussion, they came up with their version of discrete BCQ." }, { "end": 2534, "start": 2517, "text": " The key difference is that their version still relies on the sort of sampling property where you sample from, I mean, they're looking at a prior and somewhat of a different setting. So, they were sampling from this prior basically and then selecting actions that are the highest valued." }, { "end": 2543, "start": 2534, "text": " It turns out, of course, that if you just threshold using the probability distribution or the density of the data, you end up with something much nicer and you get a better performance." }, { "end": 2551, "start": 2543, "text": " So, it's not like the greatest comparison to BCQ, but at the time, like, it's super fair. This is, you know, really appreciate what she did." }, { "end": 2560, "start": 2551, "text": " And that's one of the reasons why we included her algorithm in our papers because, okay, her algorithm's showing that it beats BCQ. Does it really like me to actually double check that?" }, { "end": 2572, "start": 2560, "text": " And I will say, again, like, their results were really interesting and again for really different problems. So, of course, it didn't work in our setting, but that's not to say at all that KL Control is like some bad algorithm or something." }, { "end": 2578, "start": 2572, "text": " It's a super cool paper. They have a really awesome demo and, you know, people should read it and check it out because it's great." 
}, { "end": 2587, "start": 2578, "text": " So, yeah, we're not saying anything bad about the paper, but maybe the lesson is that a lot of these bad trail algorithms are not general purpose like we might hope." }, { "end": 2596, "start": 2587, "text": " I'm grateful to Nostasha because she helped me kick off this podcast and set the bar really high for guests. So, that's that was unforgettable for me." }, { "end": 2599, "start": 2596, "text": " Yeah, I'm happy to be following in her footsteps." }, { "end": 2612, "start": 2599, "text": " So, regarding distributional agents was the choice of QR DQN obvious here, like would other distributional agents be suitable like C51 or IQN?" }, { "end": 2615, "start": 2612, "text": " Or is it more because they were used in previous papers?" }, { "end": 2623, "start": 2615, "text": " Right. We looked at QR DQN because that was the paper that they used or that was the algorithm that you used in striving for simplicity." }, { "end": 2631, "start": 2623, "text": " So, that was really like the only motivation. We just wanted to look at theirs. That being said, I do think C51, IQN would work well." }, { "end": 2639, "start": 2631, "text": " Of course, after seeing the results in both papers, one thought is, why does, you know, these distributional methods work better?" }, { "end": 2645, "start": 2639, "text": " And from what I understand, the hypothesis that distributional RL works well because it learns a better feature representation." }, { "end": 2660, "start": 2645, "text": " And under that hypothesis, it makes a lot of sense that it would work well in the BATURAL problem. Because if this, you know, error, extrapolation error is a problem that comes from generalizing poorly, then a better feature representation would help you generalize better into more actions." }, { "end": 2663, "start": 2660, "text": " And that sort of naturally would improve BATURAL." }, { "end": 2670, "start": 2663, "text": " Hopefully that hypothesis is true, so things make sense in my head. But, yeah, it was just a clear choice based on the prior work." }, { "end": 2676, "start": 2670, "text": " Are you working on more BATCH stuff right now? Like, can you share anything about what you're working on these days?" }, { "end": 2688, "start": 2676, "text": " BATURAL stuff, I'll say we gave it a good effort to sort of take it to the next step. That was the summer. And we didn't really come up with anything super exciting that I can talk about with a lot of passion or anything." }, { "end": 2697, "start": 2688, "text": " But I will say there's been a lot of cool work from other people in BATURAL. I mean, I don't have paper titles off the top of my head, but there's a lot in recent months." }, { "end": 2704, "start": 2697, "text": " And I think there will be a lot more in the future. So I'm really excited for whatever one else is doing. And I think we'll see a lot of cool developments in BATURAL." }, { "end": 2711, "start": 2704, "text": " As far as what I'm working on, I've always been really interested in sort of understanding the relationships in all the components in DPRL." }, { "end": 2722, "start": 2711, "text": " So, you know, this paper was about understanding, you know, what data do we need and how that sort of works into the errors and learning and things like that." 
}, { "end": 2733, "start": 2722, "text": " But there's all kinds of different components in DPRL and all these little things that we don't understand and, you know, why does this work, why does this work, why is it hard to tune and, you know, how does target networks come in and experience replay." }, { "end": 2741, "start": 2733, "text": " And there's all these kind of weird things. And this is big, complex machine. And so, you know, what I've been working on is just try to understand piece by piece a little bit more." }, { "end": 2750, "start": 2741, "text": " And sometimes, you know, we learn something new and it's just not that interesting. And sometimes, you know, it leads to something really exciting like opening up BATURAL to new people." }, { "end": 2761, "start": 2750, "text": " And so, yeah, that's the direction I've been working on and hopefully it will come up with some cool breakthroughs. But yeah, it's an exciting time to be in RL. So, hopefully you'll see something exciting from me in the future." }, { "end": 2772, "start": 2761, "text": " I have no doubt. So, aside from the BATURAL and what you're doing now, do you have any comments on what you find exciting in RL that maybe other people are working on these days?" }, { "end": 2784, "start": 2772, "text": " Yeah, I mean, I've always been a bit of a secret admirer of model based RL, maybe from the distance. I did some work in model based RL at the start of my master's degree and was like, well, this is really hard and not working any time soon." }, { "end": 2795, "start": 2784, "text": " So, I moved on to model free and then TD3 happened. So, that was fortunate. But yeah, I think model based RL is really exciting. And there's been some really cool developments in sort of combining model free in model based RL." }, { "end": 2806, "start": 2795, "text": " And I think model based RL has always been sort of the most intuitive and natural form of RL. It just seems obvious that it would be important. It's just how we think as humans about problems often was like this planning component." }, { "end": 2818, "start": 2806, "text": " And so, I feel like it's going to play a huge role in the future of RL. So, I don't intend on working on it in the near future, but I'm always really excited to see new model based RL papers." }, { "end": 2824, "start": 2818, "text": " Am I going to see you at Nureps 2019 in Vancouver? Let's talk RL's hometown." }, { "end": 2838, "start": 2824, "text": " Yeah, you definitely will. We're presenting Benchmarking deep batch RL papers at the Deep RL workshop. So, if you were any of the listeners, you know, are around definitely come by, say hello." }, { "end": 2851, "start": 2838, "text": " I feel like if someone is listening right now and they say, oh, you know, I heard you on talk RL. I should have some kind of like secret prize for them or something. But I'm a poor grad student. So, maybe I can just give out like some high fives or something." }, { "end": 2859, "start": 2851, "text": " But, yeah, definitely come by and say hello. Well, I would take a high five from Scott Fujimoto any day. So, that sounds great. Thank you." }, { "end": 2867, "start": 2859, "text": " Glad. So, I'm super grateful for you for being here and sharing your time and your insight with all our listeners and with myself." }, { "end": 2874, "start": 2867, "text": " And I'm totally looking forward to hearing from you at Nureps and whatever you come up with next. Scott, thanks so much." 
}, { "end": 2881, "start": 2874, "text": " Yeah, thank you so much for having me." }, { "end": 2905, "start": 2881, "text": " That's our episode for today, folks. Be sure to check talk RL.com for more great episodes." } ]
Jessica Hamrick
Jessica Hamrick sheds light on Model-based RL, Structured agents, Mental simulation, Metacontrol, Construction environments, Blueberries, and more!
https://media.transistor…236.mp3?src=site
This is TalkRL Podcast. All reinforcement learning, all the time. Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan. Dr. Jessica Hamrick is a research scientist at DeepMind. She holds a PhD in psychology from UC Berkeley. It's very kind of you to join us and thanks so much for being here, Dr. Hamrick. Thanks so much for having me on the podcast. So how do you describe the area that you focus on? So having done my PhD in psychology and sort of coming from this cognitive science background, my research is at the intersection of cognitive science and AI, so my goal is to take insights that we have about how we know that people think about things and then try to apply those to building better machine learning algorithms and in particular, I sort of am doing that in the context of model-based methods and model-based RL. Your background in psychology and now working on RL, is that becoming a more common combo or is that still quite rare? So I would say that's actually, well, it's interesting because it's in some cases it's not, or in some sense it's not that common, but it also, if you look at the history of AI and psychology, well, in cognitive science in particular, they're actually very closely related. So a lot of the ideas that are in RL have come out of a lot of the work in psychology on how humans learn and how animals learn and with a lot of the stuff that I work on in terms of say, like, you know, model-based methods and thinking about, like, you know, building models of the world is also like a topic that's been explored fairly extensively in psychology and like at the intersection of psychology and AI, particularly in the past, though, you know, the fields have sort of like separated a little bit, and so it's maybe less crosstalk now than there used to be. Though there is still like a large number of people who have expertise in both, particularly, there's perhaps even more people, like, on the neuroscience side, a lot of people studying how people make decisions coming out from the perspective of neuroscience and then applying RL methods to model that as well. We had Michael Littman on for the second episode and he was talking about the RLDM conference that draws people from different fields, so maybe there's some conferences that focus on that overlap. Yeah, absolutely. Yeah, I think it's super exciting to see conferences like RLDM now that are sort of trying to bridge the gap between the fields. I actually haven't been to RLDM yet, but I hope I'll be able to make it one of these days because it really sounds like a fantastic conference. Me too. So I really enjoyed your ICML talk. Thank you. Where you spoke about your structured agents paper as part of it. I wonder, could you help our listeners understand, what is the gist of that paper? Yeah, so the idea there was to sort of explore ways of achieving or using more structure in agents. So let me like, preface that a little bit by saying. So there's sort of like, you know, in deep RL especially, there's like a little bit of this tension between the end to end approach where you sort of use like a relatively well accepted architecture, like maybe just like a CNN or whatever, and then you just train on lots of data and you hope that the right representations and behavior will fall out of that. And then there's the more classical AI approach where you would build in all of this structure and basically make all the decisions for the agent. 
And so there wasn't really very much learning there at all. And I think that there's a lot of potential for exploring the space between those two ends of the spectrum. And building some amounts of structure into agents based on what we know about how the world works, but not building too much in and allowing that learning to sort of fill in the gaps. So that's kind of like the idea there. And so we were looking at that, particularly in the context of tasks where you really require reasoning about like the structure of the world, meaning like the fact that there are like objects in the world and those objects, you know, relate to each other in certain ways like objects can be on top of each other or next to each other or you know, they collide in certain ways. They have the physical interactions as well. And so we were interested in tasks that sort of were, you know, based in that type of structure in the world and then getting agents to be able to solve those tasks by giving them just the right structure in their architectures that they would, you know, sort of be able to do a good job there. So specifically, we looked at these block stacking tasks where the goal is to stack some blocks up in order to solve some sort of goal. So for example, in the easiest version of the task, we have like the silhouette task where the goal is to stack blocks just to sort of duplicate a structure that you're already given. So you sort of, you know, just have to place them, you look at where, look at the given structure and they just place the blocks so it matches the given structure. And then we have like harder versions of the task where you have to actually design a solution to a problem like, so if the goal is to stack blocks to say like create a shelter over another object, then you have to figure out, okay, how exactly do I place them so that they're stable and so that they'll, you know, appropriately like solve this problem of creating the shelter. And so the way that we approach doing this was by taking this idea that like, you know, these types of scenes in this task do have a lot of structure in them again, you know, sort of like what I was saying that this like structure in terms of the objects in a scene and the relationships to each other. And then allowing the agent to actually exploit that structure. And the way that we do that is using a type of neural network called a graph neural network, which processes graphs. And so we can represent the scene as a graph, allow the agent to process that graph. And then, you know, return an action on the basis of that structure. So the first time I encountered your paper, I, this structured agents paper, I had to do a double take because there was, there was this structure word was overloaded. Yeah. In a few different ways, there was, let's see if I can break it down there. Yeah, you mentioned structured representations and then structured computation. And then also your agents were building structures. So can you, can you just remind us of the distinction between the representation and the computation? Yeah. So I should say, I think if there is being basically like, when you're talking about the design of the agent, there's three types of structure that you could consider. So there's structure and the inputs, there's structure in the computations that are performed, and then there's structure in the outputs. 
And then, then there's also like, there's structure in the world, but that's like, when we refer to structure in the paper, we're referring more to structure in the agents themselves, which may or may not reflect some of the structure that's in the world. So the structure and the inputs is like things like, you know, they, you know, when the agent, so if you take like a typical RL agent, it probably receives this input like an image, right? And so that image is like a relatively unstructured representation because, you know, it's like, it's always the same dimensionality, it's relatively flat. And then, you know, it's very, very high dimensional. So you might be able to extract structure from that, like, you know, it's, it, in it, like, there, there is some function, some transformation of that input that would give you something that's more structured, but the representation itself is, it's just a grid, right? So there's, there's not too much structure there to exploit. However, something like say, like a graph has a lot more structure to it because there's sort of like these discrete, you know, entities like the nodes in the graph, and then you can have edges between different nodes that represent something like relationships between those nodes. And so now you're talking about, you know, there's like, you can represent different types of information in that way than you would be able to represent and say like an image. And so that's like, that's sort of like the structure of the input. And then, then you could talk about the structure of the computation, which would be like, so one, you know, possibility would be say, maybe you just take like, even if you have a structured input, like a graph representation, you could just have like an RNN, which like, you know, goes over all of the nodes in the graph and like processes each one of them. And then at the end gives you like a vector, some like latent representation of what the graph is. So that would sort of like convert it to an unstructured representation. And then you could just do like, again, you know, like MLPs or whatever on top of that vector. So that would be like an unstructured computation because you're again still like operating over this like internal unstructured representation. And then, but in contrast, if you have something like a graph neural network, which explicitly processes the graph, then that's doing a form of structured computation because you're making more assumptions about like, you know, the way that information is shared. So in a graph neural network, you basically assume that like every node gets processed the same way, every edge gets processed the same way. And that's a particular type of structure or, you know, or inducted bias that we're assuming in the computation of that algorithm. And then I think structured like outputs or structured actions are something we actually maybe talk even less about in RL. I think we don't talk about any of these things enough in RL, but actions, I think we talk about the least potentially, which is, you know, usually when we're talking about actions in RL, like say, if you're talking about in the context of a game, you would say, okay, well, the actions are like, you know, like move up, move right, move left, move down. They're sort of, you know, completely independent of what your states are. So, you know, at every state, you have the same actions that you can take. And it's like a relatively small amount of actions typically. 
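Since the "structured computation" idea can be hard to picture from words alone, here is a minimal sketch of one graph-network message-passing step, assuming the block properties have already been packed into node and edge feature vectors. The tiny random linear maps, feature sizes, and sum aggregation are placeholder assumptions for illustration, not the architecture used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
NODE_DIM, EDGE_DIM = 4, 4

def tiny_net(in_dim, out_dim):
    # Stand-in for a learned MLP: a fixed random linear map plus tanh (assumption).
    w = rng.normal(scale=0.1, size=(in_dim, out_dim))
    return lambda x: np.tanh(x @ w)

edge_fn = tiny_net(2 * NODE_DIM + EDGE_DIM, EDGE_DIM)  # shared by every edge
node_fn = tiny_net(NODE_DIM + EDGE_DIM, NODE_DIM)      # shared by every node

def message_passing_step(nodes, edges, senders, receivers):
    """One graph-network step: the same edge function updates every edge from its
    sender node, receiver node and edge feature; the same node function updates
    every node from its aggregated incoming messages."""
    new_edges = edge_fn(np.concatenate([nodes[senders], nodes[receivers], edges], axis=-1))
    incoming = np.zeros((nodes.shape[0], EDGE_DIM))
    np.add.at(incoming, receivers, new_edges)           # sum messages per receiving node
    new_nodes = node_fn(np.concatenate([nodes, incoming], axis=-1))
    return new_nodes, new_edges

# Three "blocks" as nodes, with directed edges 0->1, 1->2, 2->0.
nodes = rng.normal(size=(3, NODE_DIM))
edges = rng.normal(size=(3, EDGE_DIM))
senders, receivers = np.array([0, 1, 2]), np.array([1, 2, 0])
nodes, edges = message_passing_step(nodes, edges, senders, receivers)
```

The point of the sketch is only the sharing assumption: every edge and every node is processed by the same function, which is what distinguishes structured computation from, say, an RNN rolled over a flattened list of features.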
But you could also consider having, you know, larger forms of structure or more structure in your actions as well. So maybe you can take actions that are, you know, a function of your inputs. And that's what we do in this paper where the actions that we have the agents take are actually things like place a block on top of another block, rather than say like placing a block at position like x, y. And so having this structure in the actions is an additional, you know, way of, you know, sort of providing a particular type of inductive bias to the agent that allows it to more easily reason about things like objects in the world. Okay. And then so I guess we're presupposing that something else is breaking down the image into this graph representation before it's input into the agent. Yes. Yeah. So in this paper, we, most of our experiments were with assuming that we had access to like the underlying object properties. So things like, you know, their position, their size, the shape, so on and so forth. I think in these particular construction scenes, the perception problem is not really like all that interesting. Like the shapes are super simple. I think, you know, probably you could take some sort of like off-the-shelf segmentation algorithm and it would probably do a reasonably good job here. And so we actually did like, we did some additional experiments where we were like given the ground truth segmentations and then like pass the segmented images like through CNN to get the embedding and then still do like a structured graph computation in the agents policy and you get almost the same level of performance there. So as long as you have something that could actually, you know, do the segmentation, I think it would work pretty well. I think, so in this particular environment, I think, you know, that the question of like, how do you go from like perception to to the structured representations is maybe not quite so interesting. Of course, there's like other environments where that question is much more interesting. Like if you're saying like a 3D first-person perspective sort of environment. Yeah. But yeah, that's like an open question of like how to make that work well. It's something a lot of people are working on and we're starting to see some progress on it. So I think we'll continue to probably see a lot more over the next couple of years. And then on the on the back end, maybe we need a little bit of smart to turn that that relative object-based action into an absolute action that could actually be executed. Yeah, I think that that's something that is like there's less work on that. So I think that there should be more work on that sort of thing of, you know, actually having the action space that the agent is interacting with be something that's learned so that, you know, it's sort of is like, it's a more efficient space to learn in or whatever, rather than like the true action space. But I think people aren't like that there isn't a lot of work that is focused on even training agents in these sort of more type more structured types of action spaces. So I think probably what we'll see is like, you know, maybe that will become more of an area of focus. Like people will first start to explore, you know, other types of structured actions that you might possibly use for different types of environments. And then we'll sort of as a field move towards then like also learning what those that structured representation should be. 
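As a rough sketch of what the relative, object-based action space described here could look like, the snippet below enumerates "place block u on top of block v at offset k" actions and picks the highest-scoring one; in the paper those scores come from edge activations of a graph network (as described later in the conversation), but here they are just made-up values. The offset set, the function names, and the score dictionary are assumptions for illustration only.

```python
import numpy as np

OFFSETS = (-1.0, 0.0, 1.0)  # assumed: left / center / right of the target block

def relative_actions(num_blocks):
    """All 'place block u on top of block v at offset k' actions."""
    return [(u, v, k)
            for u in range(num_blocks)
            for v in range(num_blocks) if v != u
            for k in range(len(OFFSETS))]

def select_action(edge_scores):
    """edge_scores maps (u, v, k) -> score (e.g. a Q-value read off a graph edge).
    Returns the highest-scoring relative placement."""
    return max(edge_scores, key=edge_scores.get)

# Toy example with 3 movable blocks and made-up scores.
rng = np.random.default_rng(1)
actions = relative_actions(3)
edge_scores = {a: float(rng.normal()) for a in actions}
u, v, k = select_action(edge_scores)
print(f"place block {u} on top of block {v} at offset {OFFSETS[k]}")
```

Note how the number of available actions grows and shrinks with the number of objects in the scene, which is exactly what a fixed, absolute action vector cannot express.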
It seems to me as like as a human, like as a kid when we grab some new toy or something, we suddenly have a, it's almost like we have a new shaped arm and we have these different actions we can do we've never seen before and we're spending a little bit of time to learn this new action space. That doesn't correspond to our, what we're used to in terms of how our motor control works. Maybe there's something somewhere there. Yeah, so I guess you're talking a little bit about like the idea of like tool use. Yeah. Yeah. Yeah, I think, I think of this very, well, yeah, so I think, well, there's, I guess there's sort of like two questions. Like one is like, how, like I think we as humans are sort of like we're born with assumptions about like this sort of like, you know, the right sort of like abstract action representations that we might use to interact with the world. So like already like, you know, when babies are born, they have some sort of notion already that objects exist in the world. And so they have like this bias towards that. And and so there's sort of the question of like how, how do you get an agent to have like that kind of sort of abstract notion of like what types of things they could potentially interact with and like, you know, what types of action representations could be there out in the world. And then there's a question of like, given that you have this sort of like abstract representation or, you know, the sense of like in the abstract sense or in the general sense, like what types of actions can you take on things. Then how does that translate to then when you're you're put in a new environment with a new, say physical object you could interact with like, how do you sort of like fine tune what you already have to be more appropriate for this new tool or new environment that you're in. So this paper, I think for the continuous case, used this algorithm that was referred to as RS0, which I gather is like a descendant of SVG. Yes. So can you help me, was there a rationale for choosing that specific continuous action model for your algorithm? Not in particular. It's sort of like, more if you're coming at it from like kind of a more traditional like RL perspective, the more natural like choice of action would be like you place a block at a particular location. It doesn't make a lot of sense to try to like discretize like all of those locations into a grid and then like use something like DQN. We tried that but it didn't really work either. So we mostly just wanted to have like a solid comparison for you know some sort of like continuous action based agent. And you know, SVG can work reasonably well, and the R in RS0 stands for Retrace. So it adds this like Retrace correction to it. And that, that is just like you know, it works reasonably well on like other tasks. So we went ahead and used that. But I mean we could have just as easily used like DDPG or something else too. So this work has some very unique environments, the block stacking environments you mentioned. I found them very refreshing especially after seeing you know so much Atari. There was a bunch of papers where they did Doom and this stuff. So it was very refreshing to see these tasks. How did you settle on these tasks? Was it, was it easy to decide on these tasks or was there a lot of back and forth on the specifics of what tasks you'd focus on for this? 
Yeah the choice of the tasks kind of, so the overall like environment, of like the idea of doing block stacking, sort of like has been a bit of a long time coming. So Peter Battaglia, one of the other authors on the paper, he and I used to work together when we were both at MIT like a long time ago and we were both working in Josh Tenenbaum's lab and we worked on basically doing psychological modeling of how people reason about physical scenes where the scenes are things like you know stacks of blocks and then we asked people to make predictions about you know whether the towers will fall over or like what direction will they fall in and then we were doing modeling of people's behavior and those sorts of scenes. I think I saw your masters was on that topic is that right? Yeah exactly and yeah so that was like the work that I did both in undergrad and then leading into the master's thesis as well and so you know we've, so those tasks were kind of like looking at like how do you make physical predictions but we were always kind of motivated by the idea that like you know people are very good at actually building things and constructing things out of blocks and that's a very like, it's a natural human, it's a very naturalistic behavior like you know kids love to play with blocks and and so we've always kind of really wanted I think to like you know be able to write a program that would be able to like learn to stack blocks too and create things. So so that's sort of you know been something that both of us have been interested in for a very long time and then and you know it's nice because it has a lot of connections to like you know, it kind of feels like an ecologically valid like set of tasks to train agents on at least to some extent you know it's inspired by like you know behaviors that children produce and and they're very challenging for modern machine learning methods because they kind of have this like very you know combinatorial compositional flavor to them right so like there's like many many ways you can stack a set of blocks together it's not just like you know Atari where you have like you know basically the episode starts and it's the same like every time almost you know maybe there's a little bit of non-determinism but it's roughly always the same whereas like in these sort of like compositional environments you can very easily get into states that you've never really seen before at all. And so then how we settled on like the particular four, well the silhouette task so that's the one I mentioned is like you basically have to replicate a given structure, that's kind of more similar to a lot of existing block stacking tasks in the literature so often like if you see agents being trained to stack blocks they're sort of given an example tower and then they have to stack the blocks in the same way as the example that they were given. So that was sort of just based on like the type of thing that people have done before and then the other three tasks, which are connecting, covering and then another variant of covering called covering hard, we tried to get away from like this idea that you're sort of given the solution and you just have to replicate it, we wanted to actually have agents have to design their own solutions. 
And so you know the connecting task, that one is you have to stack the blocks so that they reach a point in the sky, or we actually had like three points so you have to actually construct maybe like up to three towers of blocks and then there's also obstacles that you have to stack the blocks around so if you, if any of the blocks collide with an obstacle then the episode will terminate so you have to avoid the obstacles while trying to stack the blocks to reach these points in the sky and that kind of felt a little bit like you know maybe like what you would do like if you're just playing with blocks, you're like okay I want to like stack the blocks up as high as possible or whatever, it's sort of similar to that. And then, what's one of the other things that we do when you know we build things, well we you know we build things so that we have you know like shelter or whatever, so that sort of inspired like the the covering and covering hard tasks where the idea is you want to stack the blocks so that they cover the obstacles from above, you can sort of think of it like so that if it were to rain the obstacles would say stay dry in the rain, but again you can't touch the obstacles so you have to stack around them, and so that's sort of like, yeah, it's like a very loose analog to like this idea of building shelter and building something that actually like plays a functional role, so that's kind of like how we came up with those tasks. Oh I should say like yeah so the covering hard is like a variant of covering just where you have like a limited number of blocks that you can use, and I didn't mention but you can make some of the blocks sticky so they will stick to any other object they come in contact with, you can sort of think of it as like the object is covered in glue or the blocks are covered in glue, and so in in the covering task, the regular covering task, to make a block sticky you pay a penalty for it and you pay a relatively high penalty for it so the agent basically just learns not to ever use the sticky blocks, it just you know has to stack things in a stable way, whereas in covering hard we lowered the penalty for the sticky blocks a little bit so the agent really kind of had to trade off between like okay should I use, like you know, should I use a sticky block here and pay a little bit but then be able to cover more stuff, or should I not use this sticky block and potentially be able to cover less stuff, because again it only has a small number of objects that it can actually use to solve the task, so it requires like a little bit more reasoning than the covering task. 
I think, and then, you know, we have been working with like graph neural networks for a while, like this is not the first paper that we've you know used them with, there's like, my collaborators in particular have been you know working with with them for a number of years and so you know we had some intuitions too about like you know what types of problems those things work well on, and in particular problems where you do have like a discrete set of entities that you need to reason about and reason about the relationships between those entities, like say a set of blocks and you know whether those blocks are like you know touching each other, on top of each other or whatever. So I think our intuition was that like, you know, using this sort of like structured representations would you know work well in these types of tasks. I think the thing that was potentially like the most surprising was like the impact of the structured actions, that that was sort of something that you know going into this project we we hadn't really thought of doing that yet and then we sort of came up with the idea. The relative placement, is that what you mean by that? Yeah exactly, so like placing a block on top of another block rather than placing a block at like an absolute location. That sort of, and oh I should mention, so this is actually an interesting aspect of this paper is we didn't really go into details of the architecture that much, but the way that it actually executes these relative actions is by, so what the graph neural network does is it takes in a graph as input, or so the graph is again like these block properties, like the position of the blocks, you know their orientation, their size, so on and so forth. The graph neural network then processes this graph and so it like you know it passes information all over the graph and then it produces a new graph with activations on the nodes and the edges, and so what we can do is take the activations on the edges to actually correspond to our actions or our Q values in this case because we're doing Q learning, and because they're on the edges of the graph they correspond to things that are about the relationship between two objects or two blocks, so specifically one edge might say correspond to like pick up one block, which is you know like the start node of the edge, and put it on top of a block which is the end node of the edge, and then we can have multiple activations on each edge which might correspond to particular offset locations, so say like you know not only do I want to put block A on top of block B but I want to put it on the left or the right or the center or so on. Having these, you know, actions actually on the edges of the graph itself, so rather than you know these sort of like you know global actions where you have like a fixed action size, that wasn't, you know, we weren't necessarily initially planning to do that but like it sort of ended up working out really well to do that and and that that was sort of I think a very pleasing result to find. So are your agents like the AlphaGo of block stacking, like did any of them get superhuman? I mean, I know that wasn't, from what I understood it wasn't the point of your paper, but were they getting really insanely good at this? 
So I don't think you can do like a direct comparison here because in some ways they're superhuman but in other ways they're not. So the way in which they're superhuman is, actually to do this task well you have to have like pretty precise placement, because again you have this issue where if you, you know, collide a block with any of the obstacles then the episode is over, like you lose, so so that sort of requires like very precise positioning, and I think humans would not be that good at being so precise, so in that sense the agent is probably much better than what humans can do. On the other hand though, I think humans would be able to build much more complex structures than our agents are able to do, so I think at least on the reasoning side of things we still haven't really gotten to the level of what humans can think about and come up with, which is the part that I'm more interested in anyways, so so there's still work to be done. And if I followed this right I don't think it was a huge training budget, is that right? Well it depends on, do you mean like the number of episodes that the agent experienced or the the search that it was performing? Yeah well good question, so that I guess that depends on which agent right, whether it's doing the Monte Carlo tree search. Yes, yeah, well I mean in both cases we always had, we always were training agents like sort of using your typical RL setup and then in that case I think, well, we always trained the agents to convergence so the amount of experience that they had depended a little bit, but I would say the amount of experience that they had was sort of like on par with other RL tasks, so we didn't, you know, I don't, I don't think we sort of like improved on data efficiency compared to other RL agents necessarily, but one aspect of our agents was that we, in addition to having this sort of like structured policy, we combined that with planning using Monte Carlo tree search, and then in that case we also used a very small amount of search, so only a search budget of 10, and found that you know having such a small search budget can in some cases still improve performance even though you're not actually doing that deep of planning. So this paper references Azizzadenesheli's paper, Surprising Negative Results for Generative Adversarial Tree Search. He was our guest back in episode four I think. He used a learned model, but is it, is it right that this paper used the environment directly so that there was no issue of model inaccuracy in the learned model? 
We did do some exploration, that we have in the appendix, of learning a model, though our results there were, I think, we still, the model that we learned basically just wasn't like good enough, it had too much inaccuracy, and and so we weren't able to really do that much better than the model free agent, though I think that that's sort of more just a question of like improving the model learning which wasn't really the main focus of the paper anyways, so yeah, but yeah we referenced the the other paper, the surprising negative results paper, just because they also did something a little bit similar to what we do, which is using tree search sort of in the training loop. So you know usually if you think of using Monte Carlo tree search you think of, say, like you know you sort of like apply tree search just at test time and you maybe like improve on your performance, but what we actually did is during the training loop, you know, when the agent executes an episode, for each action it runs some search, uses the search to like you know find a better action to take, and then executes that action in the environment and then adds the resulting experience into the replay buffer and then learns from that experience, so it's always using tree search during training as well as like at test time, um and uh yeah so we found that that like worked, uh it can work in some cases, so we found it worked particularly well like in our covering hard task, though in other cases we found the behavior of like including the the search during training time was maybe a little bit more unstable, potentially related to some of the same issues that they talked about in their paper where um the search sort of allows you to you know locally avoid taking bad actions but then um you're not actually learning from the experience of the bad actions because you never actually take them in the environment, um so uh yeah so it's an interesting question of like yeah uh thinking about how to do that a bit better. Are you able to share with us a little bit about what it was like working with the team on this paper? Um yeah, so it was really great actually, um I think that's one of the the cool things about my job here at DeepMind is I feel like it's very collaborative and I have an opportunity to work with a lot of people and um it's you know people who have like a really wide range of expertise. I think sometimes in academia and in grad school in particular like the focus is sort of on like you know you need to get enough like first author publications so that you can you know graduate and then get a job, um and so it's you know you might collaborate with people in some cases but in in other cases it can sometimes be I think a bit isolating um because you're really focused on like your own core research you know projects, um and so it was really nice to actually get to you know work very closely and collaboratively with a bunch of other people and um you know sort of each of us having our own different areas of expertise, like for me coming from like a a cognitive science psychology background, you know, like I have less like experience with sort of like um you know the nitty-gritty of like training agents, though I'm you know I have more of that experience now these days, but um but I I have um and so like you know like some of the people that I was collaborating with have like much more of that um whereas I have more experience say like you know thinking about like uh the connection to human cognition and one of the things that humans are good at, what 
can we you know what insights can we bring from there into our agents and and so having you know working on a team very closely and all all sort of like bringing our different areas of expertise is like you know it's very satisfying it was really a lot of fun and I had a question about follow-up work but I'm not going to ask you anything about that because I see you just published a paper and a follow-up paper um just a few days ago object oriented state editing for hrl uh oh yes yeah so that's uh uh work um from a victor who was the first author on their construction paper as well and uh yeah so we were looking there at um some of like the same types of tasks but thinking about like how can we sort of like leverage you know uh more notions of hierarchy um so having like a hierarchical agent and uh uh in particular in that work we were sort of exploring this idea that well like so most hierarchical agents um are sort of you know they're like the just like goal conditioned sort of hierarchical thing where you say you know you have the high level controller and it tells the low level agent you know oh you should go here or you should solve this problem or whatever but um like a feudal network so something like that yeah exactly um though they tend to be a little bit focused on um you know like parts of the state space that you could actually like uh reach as a low level agent or that you might possibly experience um like in a different type of episode um uh and things that are often you know very like position based um so we were interested in saying like okay well what if like the the actions or the goals that the high level agent was giving the low level agent were more like object based instead um and and maybe potentially things that you might not you know actually even experience in the world um and so you know we gave the high level controller access to things like being able to add objects into the scene or delete objects from the scene or modify the properties of objects in the scene um in order to condition the behavior of the low level agent um and so you know the the results that we have in that paper are kind of just like a preliminary set of results um kind of demonstrating that uh you know the doing this sort of thing might potentially um work if you scale it up um though you know our results are sort of also with like uh we we looked at having like a heuristic you know handcrafted high level controller and stuff and we we also tried training both the high level and the low level controller but there gets a lot of thornier because you have to train both of them um together and then there's a question of like you know uh what's the rate I would you update the high level agent versus the low level agent and sort of all of these implementation issues that were still sort of like trying to sort out so okay I want to ask you about another paper um analogs of mental simulation and imagination in deep learning that was a solo paper for you uh and I think it's related to your dissertation is that right? 
Okay, I want to ask you about another paper: Analogues of Mental Simulation and Imagination in Deep Learning. That was a solo paper for you, and I think it's related to your dissertation, is that right? Right. As I mentioned before, I did my PhD in psychology, and the topic of my PhD work was this idea of mental simulation, which you can think of as our ability to imagine what the world might be like, or what it was like in the past, or how it might have been different. There's a broad literature on mental simulation, and mental imagery is another term you might hear. I was doing computational modeling of how humans mentally simulate things, looking at questions like how people decide how many simulations to run, or how they extract the right information from their simulations, and so on. During my PhD I spent a lot of time thinking about how humans do this kind of simulation, and I was always very interested in the connections between that and some of the ideas in AI. I didn't have much time during my PhD to explore deep RL, planning, and model-based RL, but when I got to DeepMind I spent more time reading that literature and getting up to speed on it, and that's how this review paper was born: I wanted to make the connections between those two fields explicit. The paper is a review of a lot of the recent work in model-based deep RL, which then tries to categorize that work and build a taxonomy for it in the context of how humans use mental simulation.

Can you tell us a bit more about how humans do background planning versus decision-time planning? I think we often conflate these ideas. They used to be more clearly distinct before deep learning came along: Monte Carlo tree search is a decision-time planning method, and something like Dyna is a background planning method, because you're simulating data from your model, learning from that, and when you're done you have a policy you can simply run. Now we tend to blend them, and you see a lot of systems doing both some form of decision-time planning and some form of background planning. I think that's probably true of humans too, but it's still useful to think about the two separately. Background planning is something that can happen in downtime: you mentally imagine doing something and maybe learn something new from it, without actively trying to do anything. A nice example is mental practice. If you're an athlete or a musician, you can imagine taking some action, like throwing a ball or performing your musical piece, and later, when you actually go and do it, you find you're better at performing that action or playing that piece. Even though you're not really practicing it, you're practicing it using your mental model of the world, which is similar to the idea of background planning in AI. But I think the majority of mental simulation corresponds more to decision-time planning, where you're actively running a mental simulation in order to make a decision: I need to decide where to place the next block, so I imagine that if I put it here it will cause the tower to fall over, but if I put it there it will be stable, so I choose the second place. As an example our listeners might be familiar with, AlphaZero does both of these types of planning. Yes, that's right. And on the decision-time side, if we just used the raw network predictions of the Q values, then we wouldn't be doing decision-time planning, but that's a much weaker agent that way. Yes, right.
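To make the distinction concrete, here is a small sketch with a toy model: Dyna-style background planning improves a value table from imagined transitions between decisions, while decision-time planning rolls the model forward from the current state to choose the next action. The chain environment, TD update, and horizon are illustrative assumptions, not anything from the papers discussed.

```python
import random

N_STATES, ACTIONS, GAMMA = 6, (-1, +1), 0.9

def model(s, a):
    """Hand-coded stand-in for a learned model: returns next state and reward."""
    s2 = max(0, min(N_STATES - 1, s + a))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

V = [0.0] * N_STATES

def background_planning(n_updates=200):
    """Dyna-like: improve V from simulated experience, independent of any decision."""
    for _ in range(n_updates):
        s = random.randrange(N_STATES)
        a = random.choice(ACTIONS)
        s2, r = model(s, a)
        V[s] += 0.5 * (r + GAMMA * V[s2] - V[s])   # TD update on imagined data

def decision_time_planning(s, horizon=3):
    """At decision time, roll the model forward and score each candidate first action."""
    def rollout_value(s, a, h):
        s2, r = model(s, a)
        if h == 0:
            return r + GAMMA * V[s2]               # bootstrap with the learned values
        return r + GAMMA * max(rollout_value(s2, a2, h - 1) for a2 in ACTIONS)
    return max(ACTIONS, key=lambda a: rollout_value(s, a, horizon))

background_planning()
print("action chosen at state 2:", decision_time_planning(2))
```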
There seem to be various ways that RL agents can use a model: planning forward and planning backward, using it for exploration, using it for propagating credit, which I think would be the Dyna case, or direct backpropagation through the model dynamics. These are all very different approaches, and I wonder, do they have analogues in human cognition? Do we also use models in a variety of ways? Absolutely. I think humans use models in ways that are far more diverse than how agents use them. All of the things you listed are true of humans, but there are a lot of other ways you could use models too. Some of the things you listed are pretty broad umbrellas, say propagating credit, and there are lots of ways you could use a model to do that which we don't really explore much. Maybe you're using the model to do some form of counterfactual reasoning, or inference about certain aspects of the world, and then you use that to explain away certain reasons why you might have gotten a reward rather than others. That's definitely something humans do, but something you don't see so frequently in agents. For some of the other things you listed, like direct backprop through the model, it's hard to say whether that's something people do, because it's very specific to how you implement your model, in particular whether it's differentiable. Are the models in our minds differentiable? I have no idea, actually. I wonder how you would test that. To me, when I'm using a tool, we get into this mode of thinking, if I just do a little bit more of this, I get a little more of that out of the tool, and that feels like differentiation. Well, we definitely have very good ways of solving the credit assignment problem when we're using models. We can tell the difference between "I tried something and it didn't quite work, but I know that's the right way to do it, I just need to do it better next time," and "I tried something and I think that's probably not the way to do it at all, so I should change my strategy." That's the kind of model-based credit assignment humans can do. Being able to differentiate through your model gives you some aspect of that, in particular "maybe I did it the right way and just need to adjust it," but the judgment that "this is not the right thing to do at all, I should try something else entirely" is quite different from that. So there are definitely some parallels, but there are also a lot of ways in which the human use of models departs pretty strongly from the agent use of models. I think that's a very interesting area of research: how can we bring the ways that agents use models closer to the ways that people use models, in order to get more powerful model-based reasoning?
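As a toy illustration of the model-based, counterfactual credit assignment described above, one can replay an episode through a model with a single action changed and ask whether the outcome would have differed. The model, reward, and episode below are invented for illustration, not taken from any of the papers discussed.

```python
def model_step(state, action):
    """Toy deterministic model: the state is an integer and actions add to it."""
    return state + action

def rollout_return(start, actions):
    """Replay a sequence of actions through the model and total up the reward."""
    state, total = start, 0.0
    for a in actions:
        state = model_step(state, a)
        total += 1.0 if state == 3 else 0.0   # reward for being at state 3
    return total

actions = [+1, +1, -1]                         # the episode the agent actually took
baseline = rollout_return(0, actions)

# Counterfactual credit assignment: which single action, if changed,
# would have made the biggest difference to the outcome?
for i in range(len(actions)):
    alt = list(actions)
    alt[i] = -alt[i]                           # imagine having done the opposite
    delta = rollout_return(0, alt) - baseline
    print(f"flipping action {i} changes the return by {delta:+.1f}")
```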
I think in your work you refer to the models in our minds as being similar to POMDPs in some ways. Yeah, I think the POMDP framework is useful for talking about the types of mental simulation that people have explored in cognitive science, which are not always framed in the language of decision making. For example, one of the largest sub-areas of mental simulation is mental imagery: when you imagine something, you can sort of see it in your mind's eye, and there's a lot of work trying to explain what representations are used there, how those representations are useful, whether they're spatial or symbolic in nature, all of these sorts of questions. But that work is usually not framed in terms of, say, "these are transitions in your forward model," or "this is your observation function, going from a latent state representation to the actual observations you see." Framing the past research on mental simulation in terms of the POMDP framework can draw out a different way of thinking about it, and a different way of asking questions about those cognitive phenomena than we have used in the past.

Is there some notion of reward or value that's common in mental simulation? I would say that's probably the place where RL departs the most from human cognition. We certainly have some notion of which things are good and which goals we're directing ourselves toward, and those are things you can formulate in terms of reward, so in that sense there is a reward: we're optimizing for something. But I don't know whether in all cases you could pin down the precise reward scale or reward signal for the task you're doing. In some cases you definitely can, and there's a lot of neuroscience exploring the idea that the brain encodes reward. But for certain types of high-level cognitive phenomena there are lots of things you might be optimizing for at once: not just doing well on the particular task, but also how much effort the task is taking, and whether you should stop and go do something else right now, maybe because you're hungry and need to eat. So the reward signal we might be using when running mental simulations is potentially not quite the same as what we would see in a standard RL agent.

Maybe some of that is reflected in the model-based planning paper you mentioned, where there was another process thinking about how long to think. Sorry, which paper precisely? I think you mentioned one in your ICML talk, the model-based planning paper about the spaceships that decide how long to plan their trajectories. Oh yes. The reason I ask is that I actually have two papers on this, one from the cognitive science side and one from the AI side. On the cognitive science side, I have work looking at how many simulations you should run before you actually make a decision, the speed-accuracy trade-off, and how humans make that trade-off in the context of physical reasoning tasks. And then there's the other paper you mentioned, more from the AI side, which is about getting an agent to make those sorts of trade-offs in its decisions as well. It's almost like another meta level to the whole explore-exploit tension. Absolutely, and there's a literature on this field of study. It's called meta-reasoning. It has "meta" in the name, so it's sometimes confused with meta-learning, but it's about reasoning about your reasoning process: what computations should I perform so that I can get the best result, rather than what actions in the real world should I take to get the best result?
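A toy version of that speed-accuracy trade-off in code: keep drawing noisy simulations of each option and stop as soon as one looks clearly better, or when a budget runs out, rather than always using a fixed number of samples. The options, noise model, margin, and budget are assumptions chosen for illustration.

```python
import random
import statistics

def simulate(option):
    """A noisy mental simulation of how good an option is (toy noise model)."""
    true_value = {"place_left": 0.3, "place_right": 0.7}[option]
    return true_value + random.gauss(0, 0.2)

def decide(options, margin=0.15, max_sims=50):
    """Meta-reasoning as a stopping rule: simulate until one option's mean
    estimate beats the runner-up by `margin`, or the budget is spent."""
    samples = {o: [] for o in options}
    for n in range(1, max_sims + 1):
        for o in options:                       # pay for one more simulation of each option
            samples[o].append(simulate(o))
        means = {o: statistics.mean(v) for o, v in samples.items()}
        best, second = sorted(means, key=means.get, reverse=True)[:2]
        if means[best] - means[second] >= margin:
            break                               # confident enough: stop simulating early
    return best, n

choice, sims_used = decide(["place_left", "place_right"])
print(f"chose {choice} after {sims_used} simulations per option")
```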
I saw David Silver talk about that; he also talks about the predictron and this value-focused-models concept he has. It seems like he's moving away from high-fidelity, one-step transition functions and toward a more abstract notion of value. Do you think that line of work is more in the direction of how our cognition works? In some ways yes, in other ways no. I think having a purely value-based model is quite different from how humans use models, because our models are not for one particular task. They might be biased toward the particular task we're trying to achieve, so they're not completely dissociated from the task, but we definitely have actual state transition models, where we understand how the configuration of things in the world is going to evolve and why things occur, not just how likely it is for good things to happen. But when you say "high-fidelity transition models," in that sense I think it is more similar, because the transition models humans have are not trying to do pixel reconstruction or predict the next frame. The models we have are much more abstract than that: we predict things over longer time scales, make jumpier predictions, and reason about more abstract notions, like whether this thing is going to move from over here to over there, not whether it's going to move by 0.01 units.

Toward the end of your Analogues paper you mention some challenges for models to match human cognition, and I wonder how the field of RL approaches those challenges today. I'm quoting you here: models should be compositional and assembled on the fly; planning should work with only a handful of evaluations from noisy, incomplete models; and models must generalize far from their training sets, supporting creative exploration and a richer understanding of the world. These seem like really big challenges; where is the field with them? Let's start with the first one, compositional and assembled on the fly. On the compositional part, I think we're getting a better handle on it. With the graph neural network agents I talked about, in the paper with agents trained to stack blocks, those are very compositional agents; they have a very compositional understanding of the world. The assembled-on-the-fly part, though, I think we're pretty far from. What I mean by that is that your model of the world should be something you construct. Say you're an agent and you wake up in a new environment, and you think, okay, I have to solve a task, and you make a model of the world based on what you see. You don't just have some model of the world that you've trained over a million episodes; instead you think, I can see there are these objects in the scene, and I know how they probably interact with each other, so I can really quickly assemble a mental model of what's here and how those things work together, and maybe even make predictions about how two things might work together that you've never seen interacting before. So it's this idea of assembling a model on the fly, rather than having one gigantic, holistic model based on all of your previous experience.
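One way to picture "assembled on the fly" in code: instead of loading a single monolithic transition function, compose a per-scene model out of a generic pairwise interaction rule applied to whatever objects are currently observed, so rough predictions are possible even for object pairs never seen together before. The objects, interaction rule, and dynamics below are invented for illustration, loosely in the spirit of a graph-network-style model rather than an implementation of one.

```python
from itertools import combinations

def pairwise_effect(obj_a, obj_b):
    """Generic, reusable rule for how one object influences another
    (a stand-in for a learned interaction function)."""
    if abs(obj_a["x"] - obj_b["x"]) < 1.0:       # nearby objects push apart
        return 0.1 if obj_a["x"] > obj_b["x"] else -0.1
    return 0.0

def assemble_model(scene):
    """Build a transition function for *this* scene from the objects observed,
    rather than retrieving one monolithic model trained on every past episode."""
    names = list(scene)
    def step(state):
        nxt = {k: dict(v) for k, v in state.items()}
        for a, b in combinations(names, 2):
            nxt[a]["x"] += pairwise_effect(state[a], state[b])
            nxt[b]["x"] += pairwise_effect(state[b], state[a])
        return nxt
    return step

# A scene the agent has never seen before: the model is composed on the spot.
scene = {"cup": {"x": 0.0}, "block": {"x": 0.5}, "ball": {"x": 3.0}}
step = assemble_model(scene)
print(step(scene))
```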
Does that make sense? Totally, so you're really tackling part of this first one with that line of work, with your graph-based agents and the structure paper. Yeah, exactly. So the second one: planning with only a handful of evaluations from noisy, incomplete models. That one made me think of PILCO to start with, just because it handles uncertainty well, but more broadly, where do you think we are with planning from noisy, incomplete models? I think there are some instances where planning with a handful of evaluations, or a handful of episodes, can work reasonably well, but it still doesn't match the scope at which humans do it. Say you find yourself in a new place, maybe a new shopping mall, so the layout is totally different, all the people are different, and you need to find your way to the store you want. You don't have a good model of that environment. Maybe you have a good abstract model of how space works in general, but you might still reason, well, what if I went down over here: it looks like all the clothes stores are over there, so maybe the toy stores are on the other side. Based on only a very small amount of experience, you can quickly construct an approximate model, which ties into the idea of assembling a model on the fly, and then run forward simulations from this probably-wrong model and still get useful information out of them. The idea that the simulations from your model are not just a little bit wrong, they're probably very wrong, and yet you can still get somewhere with them, is something human cognition seems to be pretty robust to. I don't think we have anything that looks quite like that in our agents yet. I guess we have the advantage of an incredible amount of inductive bias built in, and all of the experience of evolution behind us. What would it even mean for a deep RL agent to have that behind it? Totally, I think that's the main question: how do you get to the point where you have an agent that can do that sort of thing? Getting away with only a few noisy evaluations from a probably-wrong model means you already have to have good intuitions about a lot of other things. Where do those intuitions come from, and in what way does the model supplement those intuitions? Those, I think, are very interesting and important research questions.
Okay, and then the third one: models must generalize far from their training sets, supporting creative exploration and a richer understanding of the world. You mentioned the Blueberry Earth paper, and I'm so glad you put that in there, because I did start reading it, and I'd recommend listeners read it. How would an agent ever write a paper like that? Exactly. I don't know, but I want to find out. For people who don't know the paper: someone asked, on Stack Overflow or Reddit, I forget which, what would happen if the Earth were replaced by blueberries, and then someone else came along and actually wrote a paper, taking the physical principles that would be relevant in that case and running them to their logical conclusion. The gravity of all the berries suddenly causes them to collapse in on themselves and become jelly, so the Earth immediately becomes smaller, and then the heat from that would probably cause the jelly to boil, and so on. It's a very fun thought experiment about what would happen if that were the case, but I think it's such a good demonstration that people think of all sorts of weird stuff. Why does someone even think that's an interesting question to ask, what would happen if the Earth were replaced by blueberries? It really underscores what makes human cognition so interesting and so powerful: we're able to come up with questions like that, and we're able to come up with at least plausible explanations for the answers. If you look at some kinds of agents, say from the literature on generative models, GANs, text generation, and so on, you have systems that also come up with weird stuff, but they don't come up with it for any reason in particular; it's just because they haven't properly fit the real distribution of the data. People come up with weird stuff not because they haven't fit the true distribution of the data, but because they know there are things they don't know, and they want to explore and understand the world around them. I think that's a really important difference, and the question of how you get agents anywhere close to that is a huge unanswered question. It seems like they would really need to understand causality, and not just correlation, which we haven't seen much of in deep RL. Totally. So, are our minds running tons of these simulations all the time without us being aware of it? I know there are moments when I'm thinking carefully, like how would this block or this object look if I rotated it, where I'm very conscious of running a simulation, but if we really knew what was happening, would we find thousands of these running all the time, or is it more of a conscious thing? How do you look at that: is simulation a big chunk of what's happening up there?
Yes, both are happening. We have models all over cognition, in lots of different aspects of it. One place where there are models, for example, is the motor system: there are very low-level models that assist with motor control, and those are constantly running; every time you make a motion, those models are running forward. So we're not just executing policies, we're running models? Yeah, exactly. The models in the motor system are used to account for the delay it physically takes a signal to go from the brain to the muscle and come back. Because of that delay, you can't get sensory feedback from the world until a short time later, so the model compensates by predicting what will happen. That allows the brain to come up with the next action to take before it has actually received the feedback from the world.
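A minimal sketch of that delay-compensation idea: when feedback arrives late, a forward model can predict the present state from the last observation plus the actions issued since then, and the controller acts on the prediction rather than the stale observation. The plant dynamics, delay length, and proportional controller here are toy assumptions.

```python
DELAY = 3            # feedback arrives this many steps late
TARGET = 10.0

def plant(state, action):
    """The 'body': true dynamics whose state cannot be observed instantly."""
    return state + action

def forward_model(delayed_obs, recent_actions):
    """Predict the present state by replaying the actions issued since the
    observation was made (the model compensates for the feedback delay)."""
    s = delayed_obs
    for a in recent_actions:
        s = plant(s, a)  # assume the internal model matches the true dynamics
    return s

true_state, history, actions_in_flight = 0.0, [0.0], []
for t in range(12):
    delayed_obs = history[max(0, len(history) - 1 - DELAY)]   # what can be seen now
    predicted = forward_model(delayed_obs, actions_in_flight[-DELAY:])
    action = 0.5 * (TARGET - predicted)                       # act on the prediction
    true_state = plant(true_state, action)
    history.append(true_state)
    actions_in_flight.append(action)
print(f"true state after control: {true_state:.2f} (target {TARGET})")
```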
On a super small note, I really loved the indicators on the references marking which ones were of special interest; that helped me prioritize the references. It's a small feature, but it was cool. That's actually a feature of the journal. I wish we had more journals like that in machine learning, ones that prioritize these half-opinion-piece, half-review articles; you don't see that so much. That would help, there are so many references to look at. I know.

Okay, so beyond your own work, could you comment on other things happening in the world of RL that you find really interesting these days? One trend I feel like I've noticed is that we're moving away a little bit from the end-to-end approach, which I think is healthy. I generally think having a diversity of research is important; it's good for people to work on things even if they're not the approach I would take. But I do think the field has been a little overly focused on end-to-end learning, where you don't care so much about the right inductive biases to incorporate into agents, or about problems that are more compositional. Thinking about those types of structure and inductive bias is super important, and the field is starting to move back and think about them a bit more, which is very encouraging and exciting to me. Maybe mixing a little bit of good old-fashioned AI back in there. Yeah, exactly.

Do you have strong opinions about what RL might look like in the coming years, say in three or twenty? Strong opinions, no. I hope it will look more like a blend: not just pushing the model-free approach, but also incorporating strong notions of planning and models, and not just next-step models, but the kinds of things I talked about in my review paper, models that are more compositional and that enable things like counterfactual reasoning and really creative problem solving. I hope RL will eventually have all of those elements in it. My last question: do you have any suggestions for us, for this podcast? Not in particular. It seems like a very nice podcast, and I've really enjoyed chatting with you today, so keep up the good work, I guess. Thanks so much, Dr. Hamrick, on behalf of our listeners and myself, for your valuable time and your insight. Thanks. And that's our episode for today, folks. Be sure to check talkrl.com for more great episodes.
[ { "end": 12.8, "start": 0, "text": " This is TalkAreal Podcast. All reinforcement learning, all the time." }, { "end": 22.400000000000002, "start": 12.8, "text": " Interviews at Brilliant folks across the world of RL. I'm your host, Rob and Chohan." }, { "end": 28.36, "start": 22.400000000000002, "text": " Dr. Jessica Hammrich is a research scientist at DeepMind. She holds a PhD in psychology" }, { "end": 33.96, "start": 28.36, "text": " from UC Berkeley. It's very kind of you to join us and thanks so much for being here, Dr. Hammrich." }, { "end": 41, "start": 33.96, "text": " Thanks so much for having me on the podcast. So how do you describe the area that you focus on?" }, { "end": 48.519999999999996, "start": 41, "text": " So having done my PhD in psychology and sort of coming from this cognitive science background," }, { "end": 56.120000000000005, "start": 50.28, "text": " my research is that the intersection of cognitive science in AI, so my goal is to take insights" }, { "end": 60.44, "start": 56.12, "text": " that we have about how we know that people think about things and then try to apply those to" }, { "end": 66.44, "start": 60.44, "text": " building better machine learning algorithms and in particular, I sort of am doing that in the" }, { "end": 73.24, "start": 66.44, "text": " context of model-based methods and model-based RL. Your background in psychology and now working" }, { "end": 80.44, "start": 73.24, "text": " on RL, is that becoming a more common combo or is that still quite rare? So I would say that's" }, { "end": 88.12, "start": 80.44, "text": " actually, well, it's interesting because it's in some cases it's not, or in some sense it's not" }, { "end": 94.75999999999999, "start": 88.12, "text": " that common, but it also, if you look at the history of AI and psychology, well, in cognitive" }, { "end": 100.12, "start": 94.75999999999999, "text": " science in particular, they're actually very closely related. So a lot of the ideas that are in RL" }, { "end": 105.96, "start": 100.12, "text": " have come out of a lot of the work in psychology on how humans learn and how animals learn" }, { "end": 113.16, "start": 105.96, "text": " and with a lot of the stuff that I work on in terms of say, like, you know, model-based methods" }, { "end": 118.67999999999999, "start": 113.16, "text": " and thinking about, like, you know, building models of the world is also like a topic that's been" }, { "end": 124.44, "start": 118.67999999999999, "text": " explored fairly extensively in psychology and like at the intersection of psychology and AI," }, { "end": 129.56, "start": 124.44, "text": " particularly in the past, though, you know, the fields have sort of like separated a little bit," }, { "end": 135.48, "start": 129.56, "text": " and so it's maybe less crosstalk now than there used to be. Though there is still like a large" }, { "end": 140.44, "start": 135.48, "text": " number of people who have expertise in both, particularly, there's perhaps even more people," }, { "end": 145.48, "start": 140.44, "text": " like, on the neuroscience side, a lot of people studying how people make decisions coming out" }, { "end": 150.2, "start": 145.48, "text": " from the perspective of neuroscience and then applying RL methods to model that as well." 
}, { "end": 155.72, "start": 150.92, "text": " We had Michael Litman on for the second episode and he was talking about the RL DM conference" }, { "end": 160.2, "start": 155.72, "text": " that draws people from different fields, so maybe there's some conferences that have that focus" }, { "end": 166.51999999999998, "start": 160.2, "text": " on as overlap. Yeah, absolutely. Yeah, I think it's super exciting to see conferences like RL DM now" }, { "end": 171.48, "start": 166.51999999999998, "text": " that are sort of trying to bridge the gap between the fields. I actually haven't been to RL DM yet," }, { "end": 175.79999999999998, "start": 171.48, "text": " but I hope I'll be able to make it one of these days because it really sounds like a fantastic" }, { "end": 184.83999999999997, "start": 175.79999999999998, "text": " conference. Me too. So I really enjoyed your ICML talk. Thank you. Where you spoke about your" }, { "end": 192.04, "start": 184.84, "text": " structured agents paper as part of it. I wonder, could you help our listeners understand," }, { "end": 200.84, "start": 192.04, "text": " what is the gist of that paper? Yeah, so the idea there was to sort of explore ways of achieving" }, { "end": 207.96, "start": 201.4, "text": " or using more structure in agents. So let me like, preface that a little bit by saying. So there's" }, { "end": 212.6, "start": 207.96, "text": " sort of like, you know, in deep RL especially, there's like a little bit of this tension between" }, { "end": 221.24, "start": 212.6, "text": " the end to end approach where you sort of use like a relatively well accepted architecture," }, { "end": 224.92, "start": 221.24, "text": " like maybe just like a CNN or whatever, and then you just train lots of data and you hope that" }, { "end": 229.79999999999998, "start": 224.92, "text": " the right representations and behavior will fall out of that. And then there's the more" }, { "end": 235.95999999999998, "start": 229.79999999999998, "text": " classical AI approach where you would build in all of this structure and basically make all the" }, { "end": 242.35999999999999, "start": 235.95999999999998, "text": " decisions for the agent. And so there wasn't really very much learning there at all. And I think" }, { "end": 250.76000000000002, "start": 242.36, "text": " that there's a lot of potential for exploring the space between those two ends of the spectrum." }, { "end": 257.24, "start": 251.96, "text": " And building some amounts of structure into agents based on what we know about how the world" }, { "end": 262.2, "start": 257.24, "text": " works, but not building too much in and allowing that learning to sort of fill in the gaps." }, { "end": 268.28000000000003, "start": 263.16, "text": " So that's kind of like the idea there. And so we were looking at that, particularly in the" }, { "end": 275.4, "start": 268.28, "text": " context of tasks where you really require reasoning about like the structure of the world," }, { "end": 279.32, "start": 275.4, "text": " meaning like the fact that there are like objects in the world and those objects, you know," }, { "end": 282.76, "start": 279.32, "text": " relate to each other in certain ways like objects can be on top of each other or next to each" }, { "end": 286.52, "start": 282.76, "text": " other or you know, they collide in certain ways. They have the physical interactions as well." 
}, { "end": 293.71999999999997, "start": 288.11999999999995, "text": " And so we were interested in tasks that sort of were, you know, based in that type of" }, { "end": 297.96, "start": 293.71999999999997, "text": " structure in the world and then getting agents to be able to solve those tasks by giving them just" }, { "end": 302.76, "start": 297.96, "text": " the right structure in their architectures that they would, you know, sort of be able to do a good" }, { "end": 307.79999999999995, "start": 302.76, "text": " job there. So specifically, we looked at these block stacking tasks where the goal is to stack" }, { "end": 314.2, "start": 307.79999999999995, "text": " some blocks up in order to solve some sort of goal. So for example, in the easiest version of the" }, { "end": 320.35999999999996, "start": 314.2, "text": " task, we have like the silhouette task where the goal is to stack blocks just to sort of duplicate" }, { "end": 325, "start": 320.35999999999996, "text": " a structure that you're already given. So you sort of, you know, just have to place them," }, { "end": 329.56, "start": 325, "text": " you look at where, look at the given structure and they just place the blocks so it matches the" }, { "end": 334.84, "start": 329.56, "text": " given structure. And then we have like harder versions of the task where you have to actually design" }, { "end": 340.2, "start": 334.84, "text": " a solution to a problem like, so if the goal is to stack blocks to say like create a shelter over" }, { "end": 344.36, "start": 340.2, "text": " another object, then you have to figure out, okay, how exactly do I place them so that they're stable" }, { "end": 348.68, "start": 344.36, "text": " and so that they'll, you know, appropriately like solve this problem of creating the shelter." }, { "end": 355.16, "start": 348.68, "text": " And so the way that we approach doing this was by taking this idea that like, you know," }, { "end": 359.8, "start": 355.16, "text": " these types of scenes in this task do have a lot of structure in them again, you know, sort of like" }, { "end": 363.8, "start": 359.8, "text": " what I was saying that this like structure in terms of the objects in a scene and the relationships" }, { "end": 368.92, "start": 363.8, "text": " to each other. And then allowing the agent to actually exploit that structure. And the way that" }, { "end": 374.12, "start": 368.92, "text": " we do that is using a type of neural network called a graph neural network, which processes" }, { "end": 379.48, "start": 374.12, "text": " graphs. And so we can represent the scene as a graph, allow the agent to process that graph." }, { "end": 383.48, "start": 380.2, "text": " And then, you know, return an action on the basis of that structure." }, { "end": 390.84000000000003, "start": 384.84000000000003, "text": " So the first time I encountered your paper, I, this structured agents paper, I had to do a" }, { "end": 395.64, "start": 390.84000000000003, "text": " double take because there was, there was this structure word was overloaded. Yeah." }, { "end": 399.56, "start": 395.64, "text": " In a few different ways, there was, let's see if I can break it down there. Yeah, you mentioned" }, { "end": 406.12, "start": 399.56, "text": " structured representations and then structured computation. And then also your agents were building" }, { "end": 413.08, "start": 406.12, "text": " structures. 
So can you, can you just remind us of the distinction between the representation and" }, { "end": 420.2, "start": 413.08, "text": " the computation? Yeah. So I should say, I think if there is being basically like, when you're" }, { "end": 424.92, "start": 420.2, "text": " talking about the design of the agent, there's three types of structure that you could consider." }, { "end": 429.32, "start": 424.92, "text": " So there's structure and the inputs, there's structure in the computations that are performed," }, { "end": 436.04, "start": 429.32, "text": " and then there's structure in the outputs. And then, then there's also like, there's structure" }, { "end": 441.48, "start": 436.04, "text": " in the world, but that's like, when we refer to structure in the paper, we're referring more" }, { "end": 446.68, "start": 441.48, "text": " to structure in the agents themselves, which may or may not reflect some of the structure that's" }, { "end": 451.8, "start": 446.68, "text": " in the world. So the structure and the inputs is like things like, you know, they, you know," }, { "end": 456.44, "start": 451.8, "text": " when the agent, so if you take like a typical RL agent, it probably receives this input" }, { "end": 460.92, "start": 456.44, "text": " like an image, right? And so that image is like a relatively unstructured representation because," }, { "end": 466.36, "start": 461.88, "text": " you know, it's like, it's always the same dimensionality, it's relatively flat." }, { "end": 471.8, "start": 467.32, "text": " And then, you know, it's very, very high dimensional. So you might be able to extract" }, { "end": 478.12, "start": 471.8, "text": " structure from that, like, you know, it's, it, in it, like, there, there is some function," }, { "end": 482.04, "start": 478.12, "text": " some transformation of that input that would give you something that's more structured, but the" }, { "end": 485.64, "start": 482.04, "text": " representation itself is, it's just a grid, right? So there's, there's not too much structure" }, { "end": 492.12, "start": 485.64, "text": " there to exploit. However, something like say, like a graph has a lot more structure to it because" }, { "end": 495.88, "start": 492.12, "text": " there's sort of like these discrete, you know, entities like the nodes in the graph," }, { "end": 499.48, "start": 495.88, "text": " and then you can have edges between different nodes that represent something like relationships" }, { "end": 504.36, "start": 499.48, "text": " between those nodes. And so now you're talking about, you know, there's like, you can represent" }, { "end": 508.12, "start": 504.36, "text": " different types of information in that way than you would be able to represent and say like an image." }, { "end": 515.08, "start": 510.12, "text": " And so that's like, that's sort of like the structure of the input. And then," }, { "end": 518.92, "start": 515.08, "text": " then you could talk about the structure of the computation, which would be like, so one," }, { "end": 523.32, "start": 519.4, "text": " you know, possibility would be say, maybe you just take like, even if you have a structured input," }, { "end": 528.2, "start": 523.32, "text": " like a graph representation, you could just have like an RNN, which like, you know, goes over all" }, { "end": 532.6, "start": 528.2, "text": " of the nodes in the graph and like processes each one of them. 
And then at the end gives you like" }, { "end": 537.4, "start": 532.6, "text": " a vector, some like latent representation of what the graph is. So that would sort of like convert" }, { "end": 542.0400000000001, "start": 537.4, "text": " it to an unstructured representation. And then you could just do like, again, you know, like MLPs" }, { "end": 547.72, "start": 542.0400000000001, "text": " or whatever on top of that vector. So that would be like an unstructured computation because you're" }, { "end": 553.96, "start": 547.72, "text": " again still like operating over this like internal unstructured representation. And then," }, { "end": 559.96, "start": 555.16, "text": " but in contrast, if you have something like a graph neural network, which explicitly processes" }, { "end": 564.84, "start": 559.96, "text": " the graph, then that's doing a form of structured computation because you're making more assumptions" }, { "end": 569.8000000000001, "start": 564.84, "text": " about like, you know, the way that information is shared. So in a graph neural network, you basically" }, { "end": 574.36, "start": 569.8000000000001, "text": " assume that like every node gets processed the same way, every edge gets processed the same way." }, { "end": 578.9200000000001, "start": 574.36, "text": " And that's a particular type of structure or, you know, or inducted bias that we're assuming" }, { "end": 586.44, "start": 579.72, "text": " in the computation of that algorithm. And then I think structured like outputs or structured" }, { "end": 590.5200000000001, "start": 586.44, "text": " actions are something we actually maybe talk even less about in RL. I think we don't talk about" }, { "end": 594.84, "start": 590.5200000000001, "text": " any of these things enough in RL, but actions, I think we talk about the least potentially," }, { "end": 599.32, "start": 595.8000000000001, "text": " which is, you know, usually when we're talking about actions in RL, like say, if you're talking" }, { "end": 603.1600000000001, "start": 599.32, "text": " about in the context of a game, you would say, okay, well, the actions are like, you know, like move up," }, { "end": 607.8800000000001, "start": 603.1600000000001, "text": " move right, move left, move down. They're sort of, you know, completely independent of what" }, { "end": 613.32, "start": 607.8800000000001, "text": " your states are. So, you know, at every state, you have the same actions that you can take. And it's" }, { "end": 619.5600000000001, "start": 613.32, "text": " like a relatively small amount of actions typically. But you could also consider having, you know," }, { "end": 623.96, "start": 619.5600000000001, "text": " larger forms of structure or more structure in your actions as well. So maybe you can take actions" }, { "end": 628.9200000000001, "start": 623.96, "text": " that are, you know, a function of your inputs. And that's what we do in this paper where the actions" }, { "end": 633.4000000000001, "start": 628.9200000000001, "text": " that we have the agents take are actually things like place a block on top of another block," }, { "end": 638.84, "start": 633.4000000000001, "text": " rather than say like placing a block at position like x, y. 
And so having this structure in the" }, { "end": 645.1600000000001, "start": 638.84, "text": " actions is an additional, you know, way of, you know, sort of providing a particular type of" }, { "end": 651.08, "start": 645.1600000000001, "text": " inductive bias to the agent that allows it to more easily reason about things like objects in" }, { "end": 659.1600000000001, "start": 651.08, "text": " the world. Okay. And then so I guess we're presupposing that something else is breaking down the" }, { "end": 666.6, "start": 659.1600000000001, "text": " image into this graph representation before it's input into the agent. Yes. Yeah. So in this paper," }, { "end": 672.84, "start": 666.6, "text": " we, most of our experiments were with assuming that we had access to like the underlying object" }, { "end": 677.88, "start": 672.84, "text": " properties. So things like, you know, their position, their size, the shape, so on and so forth." }, { "end": 685.96, "start": 680.2, "text": " I think in these particular construction scenes, the perception problem is not really like all" }, { "end": 691.72, "start": 685.96, "text": " that interesting. Like the shapes are super simple. I think, you know, probably you could take some" }, { "end": 696.6, "start": 691.72, "text": " sort of like off-the-shelf segmentation algorithm and it would probably do a reasonably good job here." }, { "end": 702.28, "start": 697.48, "text": " And so we actually did like, we did some additional experiments where we were like given the" }, { "end": 709.24, "start": 702.28, "text": " ground truth segmentations and then like pass the segmented images like through CNN to get the" }, { "end": 715, "start": 709.24, "text": " embedding and then still do like a structured graph computation in the agents policy and you get" }, { "end": 719.72, "start": 715, "text": " almost the same level of performance there. So as long as you have something that could actually," }, { "end": 724.9200000000001, "start": 719.72, "text": " you know, do the segmentation, I think it would work pretty well. I think, so in this particular" }, { "end": 729.96, "start": 724.9200000000001, "text": " environment, I think, you know, that the question of like, how do you go from like perception to" }, { "end": 735.88, "start": 732.0400000000001, "text": " to the structured representations is maybe not quite so interesting. Of course, there's like other" }, { "end": 741.08, "start": 735.88, "text": " environments where that question is much more interesting. Like if you're saying like a 3D first-person" }, { "end": 747, "start": 741.08, "text": " perspective sort of environment. Yeah. But yeah, that's like an open question of like how to make" }, { "end": 751.48, "start": 747, "text": " that work well. It's something a lot of people are working on and we're starting to see some progress" }, { "end": 757.16, "start": 751.48, "text": " on it. So I think we'll continue to probably see a lot more over the next couple of years." }, { "end": 764.6, "start": 758.84, "text": " And then on the on the back end, maybe we need a little bit of smart to turn that that relative" }, { "end": 771.96, "start": 764.6, "text": " object-based action into an absolute action that could actually be executed. Yeah, I think that" }, { "end": 778.0400000000001, "start": 771.96, "text": " that's something that is like there's less work on that. 
So I think that there should be more work" }, { "end": 784.12, "start": 778.0400000000001, "text": " on that sort of thing of, you know, actually having the action space that the agent is" }, { "end": 790.2, "start": 785.1600000000001, "text": " interacting with be something that's learned so that, you know, it's sort of is like," }, { "end": 794.2, "start": 790.2, "text": " it's a more efficient space to learn in or whatever, rather than like the true action space. But" }, { "end": 799.24, "start": 795.4000000000001, "text": " I think people aren't like that there isn't a lot of work that is focused on" }, { "end": 805, "start": 799.24, "text": " even training agents in these sort of more type more structured types of action spaces. So I" }, { "end": 810.12, "start": 805, "text": " think probably what we'll see is like, you know, maybe that will become more of an area of focus." }, { "end": 814.84, "start": 810.12, "text": " Like people will first start to explore, you know, other types of structured actions that you" }, { "end": 820.2, "start": 814.84, "text": " might possibly use for different types of environments. And then we'll sort of as a field move towards" }, { "end": 823.24, "start": 820.2, "text": " then like also learning what those that structured representation should be." }, { "end": 831, "start": 823.24, "text": " It seems to me as like as a human, like as a kid when we grab a some richer toy or something," }, { "end": 834.44, "start": 831, "text": " we suddenly have a, it's almost like we have a new shape arm and we have these different" }, { "end": 837.88, "start": 834.44, "text": " actions we can do we've never seen before and we're spending a little bit of time to learn" }, { "end": 844.76, "start": 837.88, "text": " this new action space. That doesn't correspond to our, what we're used to in terms of how our" }, { "end": 851.08, "start": 845.5600000000001, "text": " motor control works. Maybe there's something somewhere there. Yeah, so I guess you're talking a" }, { "end": 858.5200000000001, "start": 851.08, "text": " little bit about like the idea of like tool use. Yeah. Yeah. Yeah, I think, I think of this very," }, { "end": 863.32, "start": 859.32, "text": " well, yeah, so I think, well, there's, I guess there's sort of like two questions. Like one is like," }, { "end": 869.48, "start": 863.32, "text": " how, like I think we as humans are sort of like we're born with assumptions about like this sort of like," }, { "end": 876.5200000000001, "start": 869.48, "text": " you know, the right sort of like abstract action representations that we might use to interact" }, { "end": 881.24, "start": 876.52, "text": " with the world. So like already like, you know, when babies are born, they have some sort of notion" }, { "end": 888.52, "start": 881.24, "text": " already that objects exist in the world. And so they had like this bias towards that. And" }, { "end": 895.88, "start": 891.3199999999999, "text": " and so there's sort of the question of like how, how, what you get an agent to have like that kind" }, { "end": 900.92, "start": 895.88, "text": " of sort of abstract notion of like what types of things they could potentially interact with and" }, { "end": 906.1999999999999, "start": 900.92, "text": " like, you know, what types of action representations could be there out in the world. 
And then there's" }, { "end": 910.5200000000001, "start": 906.2, "text": " a question of like, given that you have this sort of like abstract representation or, you know," }, { "end": 917.6400000000001, "start": 911.32, "text": " the sense of like in the abstract sense or in the general sense, like what types of actions can you" }, { "end": 923.8000000000001, "start": 917.6400000000001, "text": " take on things. Then how does that translate to then when you're you're put in a new environment with" }, { "end": 928.9200000000001, "start": 923.8000000000001, "text": " a new, say physical object you could interact with like, how do you sort of like fine tune what you" }, { "end": 937, "start": 928.92, "text": " already have to to be more appropriate for this new tool or new environment that you're in. So" }, { "end": 944.36, "start": 937, "text": " when this paper, I think for the continuous case used this algorithm that was referred to as RS0," }, { "end": 952.52, "start": 944.36, "text": " which I gather is like a descendant of SVG. Yes. So is there can you help me is was there a rationale" }, { "end": 961.72, "start": 952.52, "text": " for choosing that specific continuous action model for you algorithm? Not in particular. This sort" }, { "end": 965.16, "start": 961.72, "text": " of like more if you're coming at it from like kind of a more traditional like our perspective," }, { "end": 969.96, "start": 965.16, "text": " the more natural like choice of action would be like you place a block at a particular location." }, { "end": 974.76, "start": 969.96, "text": " It doesn't make a lot of sense to try to like discretize like all of those locations in two grid" }, { "end": 980.52, "start": 974.76, "text": " and then like use something like DQN. We tried that but it didn't really work either. So we mostly" }, { "end": 987.96, "start": 980.52, "text": " just wanted to have like a solid comparison for you know some sort of like continuous based action" }, { "end": 997.88, "start": 987.96, "text": " agent. And you know so you know SVG can work reasonably well on the the R in RS0 is the" }, { "end": 1004.68, "start": 997.88, "text": " stands for retray. So it adds this like retrays correction to it. And that that is just like" }, { "end": 1009.64, "start": 1004.68, "text": " you know it works reasonably well on like other tasks. So we we run ahead and use that. But I mean" }, { "end": 1016.36, "start": 1009.64, "text": " we could have just as easily used like DDPG or something else too. So this work has some very" }, { "end": 1020.92, "start": 1016.36, "text": " unique environments the the ones that you meant the block stacking environments you mentioned." }, { "end": 1027.8799999999999, "start": 1021.96, "text": " I found them very refreshing especially after seeing you know so much Atari. There was a" }, { "end": 1032.2, "start": 1027.8799999999999, "text": " bunch of papers where they did doom and this stuff. So it was very refreshing to see these tasks." }, { "end": 1038.76, "start": 1033.24, "text": " How did you set on these tasks? Was it was it easy to decide on these tasks or was there a lot" }, { "end": 1042.84, "start": 1038.76, "text": " of back and forth on these specifics of what tasks you'd focus on for this?" }, { "end": 1052.6, "start": 1045.8799999999999, "text": " Yeah the choice of the tasks kind of so the overall like environment of like the idea of doing" }, { "end": 1060.84, "start": 1052.6, "text": " block stacking sort of like has been a bit of a long time coming. 
So Peter Vitale one of the" }, { "end": 1065.32, "start": 1060.84, "text": " other authors on the paper he and I used to work together when we were both at MIT like a long time" }, { "end": 1073.56, "start": 1065.32, "text": " ago and we were both working in Josh Tenenbaum's lab and we worked on basically doing psychological" }, { "end": 1078.28, "start": 1073.56, "text": " modeling of how people reason about physical scenes where the scenes are things like you know" }, { "end": 1082.52, "start": 1078.28, "text": " stacks of blocks and then we asked people to make predictions about you know whether the towers" }, { "end": 1087.3999999999999, "start": 1082.52, "text": " will fall over or like what direction will they fall in and then we were doing modeling of people's" }, { "end": 1091.32, "start": 1087.3999999999999, "text": " behavior and those sorts of scenes. I think I saw your masters was on that topic is that right?" }, { "end": 1100.12, "start": 1091.32, "text": " Yeah exactly and yeah so that was like the work that I did both in undergrad and then leading" }, { "end": 1106.84, "start": 1100.12, "text": " into the vastuesthesis as well and so you know we've so those tasks were kind of like looking at" }, { "end": 1111.24, "start": 1106.84, "text": " like how do you make physical predictions but we were always kind of motivated by the idea that like" }, { "end": 1116.52, "start": 1111.24, "text": " you know people are very good at actually building things and constructing things out of blocks" }, { "end": 1121.24, "start": 1116.52, "text": " and that's a very like it's a natural human it's a very naturalistic behavior like you know kids love" }, { "end": 1127.16, "start": 1121.24, "text": " to play with blocks and and so we've always kind of really wanted I think to like you know be able to" }, { "end": 1133.08, "start": 1127.16, "text": " write a program that would be able to like learn to stack blocks too and create things. So" }, { "end": 1139.16, "start": 1134.2, "text": " so that sort of you know bitten something that both of us have been interested in for a very long" }, { "end": 1145.08, "start": 1139.16, "text": " time and then and you know it's nice because it has a lot of connections to like you know it's" }, { "end": 1149.48, "start": 1145.08, "text": " it kind of feels like an eco logically valid like set of tasks to train agents on at least to" }, { "end": 1154.6799999999998, "start": 1149.48, "text": " some extent you know it's inspired by like you know behaviors that children produce and" }, { "end": 1160.12, "start": 1156.28, "text": " and they're very challenging for modern machine learning methods because they kind of have this" }, { "end": 1167.3999999999999, "start": 1160.12, "text": " like very you know combinatorial compositional flavor to them right so like there's like many" }, { "end": 1174.04, "start": 1167.3999999999999, "text": " many ways you can stack a set of blocks together it's not just like you know a Atari where you have" }, { "end": 1179.72, "start": 1174.04, "text": " like you know basically the episode starts in it's the same like every time almost you know maybe" }, { "end": 1183.48, "start": 1179.72, "text": " there's a little little bit of non-determinism but it's roughly always the same whereas like in" }, { "end": 1187.8, "start": 1183.48, "text": " these sort of like compositional environments you can very easily get into states that you've never" }, { "end": 1196.2, "start": 1187.8, "text": " really seen before at all. 
As for how we settled on the particular four tasks: the silhouette task, the one I mentioned where you basically have to replicate a given structure, is the most similar to existing block-stacking tasks in the literature. Often when you see agents trained to stack blocks, they are given an example tower and have to stack the blocks the same way, so that task follows the kind of thing people have done before. The other three tasks, connecting, covering, and a variant of covering called covering hard, try to get away from the idea that you are handed the solution and just have to replicate it; we wanted agents to have to design their own solutions. In the connecting task you have to stack the blocks so that they reach a point in the sky, actually up to three points, so you may have to build up to three towers. There are also obstacles: if any block collides with an obstacle, the episode terminates, so you have to stack around the obstacles while reaching those points. That felt a bit like what you might do when just playing with blocks, deciding to stack them up as high as possible. Another reason we build things is for shelter, and that inspired the covering and covering hard tasks, where the idea is to stack the blocks so that they cover the obstacles from above. You can think of it as making sure the obstacles would stay dry if it rained, but again you can't touch the obstacles, so you have to stack around them. It's a very loose analog to building shelter, building something that plays a functional role. That's how we came up with those tasks.

I should add that covering hard is a variant of covering where you only have a limited number of blocks to use. I also didn't mention that you can make some of the blocks sticky, so they stick to any object they come into contact with, as if the block were covered in glue. In the regular covering task, making a block sticky incurs a relatively high penalty, so the agent essentially learns never to use sticky blocks and has to stack things in a stable way. In covering hard we lowered the penalty for sticky blocks, so the agent really has to trade off: should I use a sticky block here, pay a little, and be able to cover more, or not use it and potentially cover less, given that only a small number of blocks are available to solve the task? So it requires a bit more reasoning than the covering task.
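As an editorial aside, here is a minimal sketch of how a covering-style reward with a sticky-block penalty could be scored, just to make the trade-off concrete. The function name, the coverage measure, and the penalty values are all hypothetical; this is not the paper's actual reward.

```python
def covering_reward(covered_area, sticky_blocks_used, collided, sticky_penalty=0.5):
    """Toy reward for a covering-style task (illustrative only).

    covered_area: how much of the obstacles is covered from above
    sticky_blocks_used: number of blocks the agent chose to glue in place
    collided: True if a placed block touched an obstacle (episode ends)
    sticky_penalty: cost per sticky block; high -> agent avoids glue entirely,
                    low (as in a "covering hard" setting) -> glue vs. coverage trade-off
    """
    if collided:
        return 0.0
    return covered_area - sticky_penalty * sticky_blocks_used

# With a high penalty, stable stacking with no glue wins; with a low penalty,
# paying a little for glue to cover more becomes worth considering.
print(covering_reward(3.0, sticky_blocks_used=1, collided=False, sticky_penalty=2.0))  # 1.0
print(covering_reward(3.0, sticky_blocks_used=1, collided=False, sticky_penalty=0.2))  # 2.8
```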
Was anything surprising to you in the results, or were they more what you expected to find?

I think they were roughly what we expected. Having worked with these kinds of physical block scenes for a long time, I had the intuition that, because of their combinatorial nature, they are very challenging and should be pretty challenging for the standard class of RL agents to solve. We had also been working with graph neural networks for a while; this is not the first paper where we've used them, and my collaborators in particular have worked with them for a number of years. So we had intuitions about the types of problems they work well on: problems where you have a discrete set of entities and need to reason about the relationships between those entities, such as a set of blocks and whether those blocks are touching each other or stacked on top of each other. Our intuition was that these structured representations would work well on these types of tasks. The thing that was perhaps most surprising was the impact of the structured actions. Going into the project we hadn't thought of doing that, and then we came up with the idea.

Relative placement, is that what you mean by that?

Yeah, exactly: placing a block on top of another block rather than placing a block at an absolute location. I should mention, and this is an interesting aspect of the paper even though we didn't go into the architecture in much detail, how it actually executes these relative actions. The graph neural network takes a graph as input, where the graph encodes block properties such as position, orientation, and size. It processes that graph, passing information across it, and produces a new graph with activations on the nodes and the edges. We take the activations on the edges to correspond to our actions, or in this case our Q-values, since we're doing Q-learning. Because they're on the edges of the graph, they correspond to relationships between two objects: one edge might correspond to picking up one block, the start node of the edge, and putting it on top of another block, the end node of the edge. We can also have multiple activations on each edge, corresponding to particular offset locations, so not only do I want to put block A on top of block B, I want to put it on the left, the right, or the center. Having the actions live on the edges of the graph itself, rather than using global actions with a fixed action size, wasn't something we initially planned, but it ended up working out really well, and that was a very pleasing result to find.
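To make the edge-action idea concrete, here is a minimal sketch of reading Q-values off the edges of a graph network's output and turning the best one into a relative placement action. This is not the paper's architecture; the edge list, the three offsets, and the random stand-in for the network output are assumptions for illustration.

```python
import numpy as np

# Suppose a graph network has already processed the scene and returned, for every
# directed edge (i, j) between blocks, a vector of Q-values, one per relative
# placement offset ("put block i on block j, shifted left / centered / right").
edges = [(0, 1), (1, 0), (0, 2), (2, 0), (1, 2), (2, 1)]   # directed block pairs
offsets = ["left", "center", "right"]
edge_q = np.random.randn(len(edges), len(offsets))          # stand-in for the GNN output

def greedy_relative_action(edges, edge_q, offsets):
    """Pick the (edge, offset) pair with the highest Q-value.

    Because Q-values live on edges, the action space grows and shrinks with the
    number of blocks in the scene; no fixed-size global action vector is needed.
    """
    edge_idx, offset_idx = np.unravel_index(np.argmax(edge_q), edge_q.shape)
    src, dst = edges[edge_idx]
    return {"pick_up": src, "place_on": dst, "offset": offsets[offset_idx]}

print(greedy_relative_action(edges, edge_q, offsets))
# e.g. {'pick_up': 2, 'place_on': 0, 'offset': 'left'}
```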
}, { "end": 1651.64, "start": 1648.04, "text": " On the other hand though I think humans would be able to build much more complex structures" }, { "end": 1656.04, "start": 1651.64, "text": " than our agents are able to do so I think at least on the reason inside of things we still" }, { "end": 1661.4, "start": 1656.04, "text": " haven't really gotten to the level at what humans can think about and come up with which is the part" }, { "end": 1667, "start": 1661.4, "text": " that I'm more interested in anyways so so there's still work to be done. And if I followed this" }, { "end": 1672.04, "start": 1667, "text": " right I don't think it was a huge training budget is that right? Well it depends on do you mean" }, { "end": 1677.08, "start": 1672.04, "text": " like the number of episodes with the agent experienced or the the search that it was performing?" }, { "end": 1681.8, "start": 1678.04, "text": " Yeah well good question so that I guess that depends on which agent right whether it's doing" }, { "end": 1688.36, "start": 1681.8, "text": " the Monte Carlo tree search. Yes yeah well I mean in both cases we always had we always were" }, { "end": 1695.72, "start": 1688.36, "text": " training agents like sort of using your typical RL setup and then in that case I think well we" }, { "end": 1699.88, "start": 1695.72, "text": " always trained the agents to convergence so the amount of experience that they had depended a" }, { "end": 1703.88, "start": 1699.88, "text": " little bit but I would say the amount of experience that they had was sort of like on par with other" }, { "end": 1708.92, "start": 1703.88, "text": " RL tasks so we didn't you know I don't I don't think we sort of like improved on data efficiency" }, { "end": 1715.96, "start": 1708.92, "text": " compared to other RL agents necessarily but one aspect of our agents was that we in addition to" }, { "end": 1723.08, "start": 1715.96, "text": " having this sort of like structured policy we combined that using planning with Monte Carlo tree" }, { "end": 1729.72, "start": 1723.08, "text": " search and then in that case we also used a very small amount of search so only a search budget of 10" }, { "end": 1736.04, "start": 1730.6799999999998, "text": " and found that you know having such a small search budget can in some cases still improve performance" }, { "end": 1742.84, "start": 1736.6799999999998, "text": " even though you're not actually doing like that deep of a planning you're not doing that deep" }, { "end": 1749.32, "start": 1742.84, "text": " of planning. So this paper references Azizadena Shelley's paper surprising negative result in" }, { "end": 1754.52, "start": 1749.32, "text": " generative adversarial tree search he was our guest back in episode four I think he looked so" }, { "end": 1761.96, "start": 1754.52, "text": " he used a learned model but is it is it right that this paper used the environment directly so that" }, { "end": 1768.84, "start": 1761.96, "text": " there was no issue of model inaccuracy in the learned model. Yeah so we yeah so all of the results" }, { "end": 1777.3999999999999, "start": 1768.84, "text": " reported in the main text are with the environment simulator. 
We did do some exploration of learning a model, which is in the appendix, but the model we learned basically wasn't good enough; it had too much inaccuracy, so we weren't able to do much better than the model-free agent. I think that's more a question of improving the model learning, which wasn't the main focus of the paper anyway. We referenced the surprising-negative-results paper because they did something similar to what we do, which is using tree search in the training loop. Usually when you think of Monte Carlo tree search, you think of applying it just at test time to improve performance. What we actually did is, during the training loop, when the agent executes an episode, for each action it runs some search, uses the search to find a better action to take, executes that action in the environment, adds the resulting experience to the replay buffer, and then learns from that experience. So it's always using tree search during training as well as at test time. We found that this can work in some cases; it worked particularly well in our covering hard task. In other cases, including the search during training was a bit more unstable, potentially related to some of the same issues they discuss in their paper: the search lets you locally avoid taking bad actions, but then you never learn from the experience of those bad actions, because you never actually take them in the environment. So it's an interesting question how to do that better.
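A rough sketch of the training-loop structure being described, with search inside data collection rather than only at test time, might look like the following. The env, q_network, search, and replay objects are placeholders, and the real agent's details differ; the point is only where the search sits relative to acting and learning.

```python
def collect_episode_with_search(env, q_network, search, replay, budget=10):
    """One episode in which a small tree search (a budget on the order of 10)
    proposes each action during training, and the resulting transitions go
    into the replay buffer for ordinary Q-learning updates.

    env, q_network, search, and replay are assumed interfaces, not a specific library.
    """
    obs = env.reset()
    done = False
    while not done:
        # Search from the current state, guided by the learned Q-values,
        # instead of acting greedily on q_network alone.
        action = search.plan(obs, q_network, budget=budget)
        next_obs, reward, done = env.step(action)
        # Only the action actually taken is stored; branches the search rejected
        # are never experienced, which is one source of the instability
        # mentioned above.
        replay.add(obs, action, reward, next_obs, done)
        obs = next_obs
```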
Are you able to share a little about what it was like working with the team on this paper?

Yeah, it was really great. One of the cool things about my job here at DeepMind is that it's very collaborative and I get to work with a lot of people who have a really wide range of expertise. In academia, and in grad school in particular, the focus is on getting enough first-author publications so that you can graduate and get a job, so while you might collaborate with people in some cases, it can sometimes be a bit isolating, because you're focused on your own core research projects. It was really nice to work very closely and collaboratively with a bunch of other people, each of us bringing different areas of expertise. Coming from a cognitive science and psychology background, I have less experience with the nitty-gritty of training agents, though I have more of that now, whereas some of the people I was collaborating with have much more of it, and I have more experience thinking about the connection to human cognition, what humans are good at, and what insights we can bring from there into our agents. Working closely as a team and combining those different areas of expertise was very satisfying and a lot of fun.

I had a question about follow-up work, but I'm not going to ask you about that, because I see you just published a follow-up paper a few days ago: Object-oriented State Editing for HRL.

Oh yes, that's work from Victor, who was the first author on the construction paper as well. We were looking at some of the same types of tasks, but thinking about how to leverage more notions of hierarchy, so having a hierarchical agent. In particular, we were exploring the idea that most hierarchical agents are the goal-conditioned sort, where a high-level controller tells the low-level agent, go here, or solve this problem.

Like a feudal network, something like that?

Yeah, exactly. But those tend to focus on parts of the state space that the low-level agent could actually reach, or might experience in a different kind of episode, and the goals are often very position-based. We were interested in asking: what if the actions, or goals, that the high-level agent gives the low-level agent were object-based instead, and potentially even things you might never actually experience in the world? So we gave the high-level controller the ability to add objects to the scene, delete objects from the scene, or modify the properties of objects in the scene, in order to condition the behavior of the low-level agent. The results in that paper are preliminary, demonstrating that this sort of thing might work if you scale it up. Our results use a heuristic, handcrafted high-level controller, and we also tried training both the high-level and the low-level controller, but that gets a lot thornier, because you have to train both together, and then there are questions like the rate at which you update the high-level agent versus the low-level agent, and all of these implementation issues that we're still trying to sort out.
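As a loose illustration of the idea, not the paper's implementation, the high-level controller's action space can be thought of as edits to an object-based description of the scene, which the goal-conditioned low-level agent then tries to realize. All names and the dictionary-based scene format below are made up for this sketch.

```python
import copy

def apply_state_edit(objects, edit):
    """Apply a high-level 'edit' to an object-based scene description.

    objects: list of dicts, e.g. {"id": 3, "x": 0.2, "y": 1.0, "sticky": False}
    edit: one of ("add", obj), ("delete", obj_id), ("modify", obj_id, changes)
    Returns the edited scene, which serves as the goal for the low-level agent,
    even if it describes a state the agent has never actually visited.
    """
    goal = copy.deepcopy(objects)
    kind = edit[0]
    if kind == "add":
        goal.append(edit[1])
    elif kind == "delete":
        goal = [o for o in goal if o["id"] != edit[1]]
    elif kind == "modify":
        obj_id, changes = edit[1], edit[2]
        for o in goal:
            if o["id"] == obj_id:
                o.update(changes)
    return goal

scene = [{"id": 0, "x": 0.0, "y": 0.0, "sticky": False}]
goal = apply_state_edit(scene, ("add", {"id": 1, "x": 0.0, "y": 1.0, "sticky": False}))
# A low-level policy would then be conditioned on `goal` and rewarded for reaching it.
```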
I want to ask you about another paper, Analogues of Mental Simulation and Imagination in Deep Learning. That was a solo paper for you, and I think it's related to your dissertation, is that right?

Right. As I mentioned, I did my PhD in psychology, and the topic of my PhD work was mental simulation, which you can think of as our ability to imagine what the world might be like, or what it was like in the past, or how it might have been different. There's a broad literature on mental simulation, and mental imagery is another term you might hear. I was doing computational modeling of how humans mentally simulate things, looking at questions like how people decide how many simulations to run, or how they extract the right information from their simulations. During my PhD I spent a lot of time thinking about how humans do this type of simulation, and I was always interested in the connections to ideas in AI. I didn't have much time during the PhD to explore deep RL, planning, and model-based RL, but when I got to DeepMind I spent more time reading that literature and getting up to speed on it, and that's how the review paper was born: I wanted to make the connections between those two fields explicit. The paper is a review of a lot of recent work in model-based deep RL, and it tries to categorize that work and come up with a taxonomy for it in the context of how humans use mental simulation.

Can you tell us a bit more about how humans do background planning versus decision-time planning?

I think we often conflate these ideas. They used to be more clearly separate before deep learning came along: Monte Carlo tree search is a decision-time planning method, while something like Dyna is a background planning method, because you're simulating data from your model and learning from it, and when you're done you have a policy you can just run. Now we tend to blend them: you see a lot of systems doing both some form of decision-time planning and some form of background planning. That's probably true in humans too, but it's still useful to think of the two as separate. Background planning is something that might happen in downtime: you mentally imagine doing something and maybe learn something new from it, without actively trying to do anything. A nice example is mental practice. If you're an athlete or a musician, you can imagine taking an action, like throwing a ball or performing your piece, and you find that later, when you actually do it, you're better at it. Even though you weren't really practicing, you were practicing with your mental model of the world, which is a bit like background planning in AI. But I think the majority of mental simulation corresponds more to decision-time planning, where you actively run a mental simulation to make a decision: I need to decide where to place the next block, so I imagine that if I put it here the tower will fall over, but if I put it there it will be stable, and so I choose the second place.
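For readers who want the distinction in code, here is a minimal Dyna-flavoured sketch: background planning updates the value function from imagined transitions between decisions, while decision-time planning would instead roll the model forward from the current state when choosing an action. The tabular setup and names are illustrative assumptions, not taken from the paper.

```python
import random
from collections import defaultdict

Q = defaultdict(float)          # tabular action values
model = {}                      # learned model: (state, action) -> (reward, next_state)
alpha, gamma = 0.1, 0.99

def real_step_update(s, a, r, s2, actions):
    """Ordinary Q-learning from a real transition; also record it in the model."""
    best_next = max(Q[(s2, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    model[(s, a)] = (r, s2)

def background_planning(n_updates, actions):
    """Dyna-style background planning: replay imagined transitions from the
    model to keep improving Q, without taking any action in the world."""
    for _ in range(n_updates):
        (s, a), (r, s2) = random.choice(list(model.items()))
        best_next = max(Q[(s2, a2)] for a2 in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# Decision-time planning, by contrast, would simulate forward from the *current*
# state with the model (e.g. a small tree search) just to pick the next action.
```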
"start": 2393.4, "text": " cause the tower to fall over but if I put it here um it'll be stable and so I you know I choose" }, { "end": 2402.92, "start": 2397.64, "text": " the second place okay so and as an example that maybe our listeners would be familiar with like" }, { "end": 2410.52, "start": 2402.92, "text": " alpha zero I think does both of these types of planning um yes that's right yeah so on and then" }, { "end": 2416.8399999999997, "start": 2410.52, "text": " decision time planning um I guess it doesn't have if if we just use the broad network predictions" }, { "end": 2422.2, "start": 2416.8399999999997, "text": " of the Q values then then we wouldn't be doing decision time planning but that's a much weaker" }, { "end": 2431.16, "start": 2422.2, "text": " agent that way uh yes right seems to be different like various ways that RL agents can use a model" }, { "end": 2437.48, "start": 2431.7999999999997, "text": " like like planning forward and planning backward um use it for exploration or for propagating" }, { "end": 2443.72, "start": 2437.48, "text": " credit I think that would be the dinocase or direct back propagation through the model dynamics" }, { "end": 2449.08, "start": 2444.3599999999997, "text": " like there's there's all these very different approaches and I wonder do these have analogs in" }, { "end": 2457, "start": 2449.08, "text": " human cognition but do we also use models in a variety of ways? Absolutely yeah so I think humans" }, { "end": 2463.88, "start": 2457, "text": " actually use models in a way that's like far more diverse than how agents use them um so you know" }, { "end": 2469.88, "start": 2464.52, "text": " like all of those things that you listed I think are are absolutely true um but there's a lot of" }, { "end": 2475.88, "start": 2469.88, "text": " other ways you could use models too um for example like um well I guess like in some of these cases" }, { "end": 2481.08, "start": 2475.88, "text": " like some of the things that you listed are I guess like pretty broad umbrellas like say like" }, { "end": 2485.08, "start": 2481.08, "text": " propagating credit but there's lots of ways that you could use a model to do that um but we don't" }, { "end": 2489.88, "start": 2485.08, "text": " really necessarily explore that much so like maybe you are actually using the model to do more like" }, { "end": 2495.8, "start": 2490.84, "text": " you know some form of like counterfactual reasoning or inference about like um you know certain" }, { "end": 2502.36, "start": 2496.6800000000003, "text": " aspects of the world and then you use that as a method for um you know say like explaining a way" }, { "end": 2507.2400000000002, "start": 2502.36, "text": " like certain reasons why you might have gotten a reward versus others um and that's definitely" }, { "end": 2516.1200000000003, "start": 2507.2400000000002, "text": " something that humans do but that you don't see like quite so frequently um in in agents um and uh" }, { "end": 2522.84, "start": 2517.08, "text": " yeah I think uh for some of the other things you listed like uh say like direct backprop through the" }, { "end": 2527.56, "start": 2522.84, "text": " model I mean it's a little hard to say like whether that is something that people do since that's kind" }, { "end": 2532.92, "start": 2527.56, "text": " of it's more of like um um this is very specific to the particular way that you're like" }, { "end": 2537.4, "start": 2532.92, "text": " implementing your model is like whether it's differentiable or not do 
are the models that we" }, { "end": 2545.4, "start": 2537.4, "text": " have in our minds differentiable uh I have no idea actually so um I'm not sure but uh I think" }, { "end": 2551.64, "start": 2545.4, "text": " I wonder how you would test that right yeah I mean I mean to me when I'm using a tool when" }, { "end": 2556.68, "start": 2551.64, "text": " I'm with a tool use thing we get into this mode where we're like oh if we just do a little bit more" }, { "end": 2561, "start": 2556.68, "text": " of that then we get a little more of that on the tool and and to that that's that that is like a" }, { "end": 2567.64, "start": 2561, "text": " differential differentiation it's well we definitely have very good ways I think of sort of" }, { "end": 2573.48, "start": 2567.64, "text": " identifying uh basically solving the credit assignment problem right um when we're using models" }, { "end": 2579.72, "start": 2573.48, "text": " like we know like if things happen one way then like you know you can tell like oh uh you know" }, { "end": 2583.48, "start": 2579.72, "text": " okay I tried something and it didn't quite work but I know that that's the right way to do it" }, { "end": 2587.96, "start": 2583.48, "text": " I just need to like you know do it better next time as opposed to like I tried something and I think" }, { "end": 2592.84, "start": 2587.96, "text": " that that's probably not the way to do it at all and it changed my strategy um that's like the" }, { "end": 2598.68, "start": 2592.84, "text": " sort of like you know model base like credit assignment that humans can do that like I don't I mean" }, { "end": 2603.16, "start": 2599.64, "text": " being able to differentiate through your model gives you some aspect of that like in particular" }, { "end": 2608.2, "start": 2603.16, "text": " like oh maybe I did it the right way and I need to just like adjust it um but you know say like" }, { "end": 2612.04, "start": 2608.2, "text": " the idea that this is not the right thing that I'm doing at all I should try something else" }, { "end": 2617.48, "start": 2612.04, "text": " entirely is as much more different from that so um so there's some yeah there's like definitely some" }, { "end": 2622.52, "start": 2618.84, "text": " some things that seem like there are parallels but there's also a lot of ways in which" }, { "end": 2628.2, "start": 2622.84, "text": " um the human use of models departs pretty strongly from the agent use of models but I think" }, { "end": 2633.64, "start": 2628.2, "text": " that that is like a very interesting area of research is how can we try to you know bring those" }, { "end": 2638.52, "start": 2633.64, "text": " um bring bring the the ways that agents use models closer to the way that people use models" }, { "end": 2645.08, "start": 2638.52, "text": " um in order to get more powerful model base reasoning so I think in your work you you refer to" }, { "end": 2653.64, "start": 2645.08, "text": " the models um in our minds that as being similar to pom dp's in some ways yeah well I think the" }, { "end": 2659.64, "start": 2653.64, "text": " pom dp framework is useful for sort of talking about the types of mental simulations that" }, { "end": 2666.7599999999998, "start": 2659.64, "text": " um people have explored in cognitive science um which are not always like sort of framed in" }, { "end": 2674.2000000000003, "start": 2666.76, "text": " the language of decision making um so for example like uh one of the the largest like sort of sub" }, { "end": 2679.7200000000003, "start": 
For example, one of the largest sub-areas of mental simulation is mental imagery: when you imagine something, you can see it in your mind's eye. There's a lot of work trying to explain what representations are used there and how they are useful, whether they are spatial or symbolic in nature, all of these sorts of questions. But they're usually not framed in the language of, say, these are transitions in your forward model, or this is your observation function, which goes from a latent state representation to the actual observations you see. Framing past research on mental simulation in terms of the POMDP framework can draw out a different way of thinking about it and a different way of asking questions about those cognitive phenomena than we have used in the past.
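A bare-bones way to write down the POMDP-style vocabulary being used here, a transition model over latent states plus an observation function from latent state to what you actually "see", might look like this. It is only a framing device under assumed names; the method bodies are placeholders, not a model of human imagery.

```python
from typing import Any, List, Tuple

class MentalSimulationModel:
    """POMDP-style framing: mental simulation as rolling a latent-state model
    forward and rendering observations from it (illustrative skeleton only)."""

    def transition(self, latent_state: Any, action: Any) -> Any:
        """Forward model: how the latent state of the world evolves."""
        raise NotImplementedError

    def observe(self, latent_state: Any) -> Any:
        """Observation function: from latent state to what the mind's eye 'sees'."""
        raise NotImplementedError

    def imagine(self, latent_state: Any, actions: List[Any]) -> Tuple[List[Any], Any]:
        """Run a mental simulation: roll the forward model over a plan and
        collect the imagined observations along the way."""
        observations = []
        for a in actions:
            latent_state = self.transition(latent_state, a)
            observations.append(self.observe(latent_state))
        return observations, latent_state
```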
Is there some notion of reward or value that's very common in mental simulation?

I would say that's probably the place where RL departs the most from human cognition. We certainly have some notion of which things are good and which goals we're directing ourselves toward, and those are things you can formulate in terms of reward, so in that sense there is a reward: we're optimizing for something. But I don't know whether in all cases you could find the precise reward scale or reward signal for the task being performed. In some cases you definitely can, and there's a lot of neuroscience exploring the idea that the brain encodes reward. But if you're talking about high-level cognitive phenomena, there are lots of things you might be optimizing for at once: not just doing well on the particular task you're trying to do, but also how much effort the task is taking, and whether you should stop and go do something else right now, maybe go eat something because you're hungry. So in some sense the reward signal we might be using when running mental simulations is potentially not quite the same as what we would see in a standard RL agent.

Maybe some of that is reflected in the paper you mentioned on model-based planning, where there was another process thinking about how long to think.

Sorry, which paper precisely?

I think you mentioned one in your ICML talk, on model-based planning, the one about the spaceships that would decide how long to plan their trajectories.

Oh yeah. The reason I asked is that I actually have two papers on this, one from the cognitive science side and one from the AI side. On the cognitive science side, I have work looking at how many simulations you should run before you actually make a decision, the speed-accuracy trade-off there, and how humans make that trade-off in the context of physical reasoning tasks. And then there's the other paper, the one you mentioned, more from the AI side, which is about getting an agent to also make those sorts of trade-offs in its decisions.

It's almost like another meta level on top of the whole explore-exploit tension.

Yeah, absolutely. There's actually a literature on this field of study; it's called meta-reasoning. It has "meta" in the name and is sometimes confused with meta-learning, but it's about reasoning about your reasoning process: what computations should I perform to get the best result, rather than what actions in the real world should I take to get the best result.
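One toy way to see the "how many simulations should I run" question is as an explicit speed-accuracy trade-off: keep sampling noisy rollouts until the expected benefit of one more sample no longer seems worth its cost. This sketch is a generic illustration of that idea under assumed names and a crude stopping rule; it is not either of the papers mentioned.

```python
import random
import statistics

def estimate_with_budgeted_simulations(simulate, max_sims=50, cost_per_sim=0.01, min_sims=3):
    """Run noisy simulations of an outcome until the remaining uncertainty is
    (heuristically) not worth the time cost of another sample.

    simulate: zero-argument function returning one noisy rollout value.
    """
    samples = [simulate() for _ in range(min_sims)]
    while len(samples) < max_sims:
        sem = statistics.stdev(samples) / len(samples) ** 0.5   # standard error of the mean
        # Crude value-of-computation test: stop when the uncertainty about the
        # estimate is smaller than the cost of thinking longer.
        if sem < cost_per_sim:
            break
        samples.append(simulate())
    return statistics.mean(samples), len(samples)

# Example with a noisy physics-style prediction:
value, n_used = estimate_with_budgeted_simulations(lambda: 1.0 + random.gauss(0, 0.2))
print(value, n_used)
```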
I saw David Silver talk about that; he also talks about the predictron and this value-focused models concept he has. He seems to be moving away from high-fidelity, one-step transition functions and toward a more abstract notion of value. Do you think that line of work is more in the direction of how our cognition works?

In some ways yes, in other ways no. I think having a purely value-based model is quite different from how humans use models, because our models are not for one particular task. They might be biased toward the particular task we're trying to achieve, so they're not completely dissociated from the task, but we definitely have actual state-transition models, where we understand how the configuration of things in the world is going to evolve and why things occur, not just how likely it is for good things to happen. But when you say high-fidelity transition models, in that sense I think it is more similar, because the transition models humans have are not trying to do pixel reconstruction or predict the next frame. Our models are much more abstract than that: we predict things over longer time scales, with jumpier kinds of predictions, and we reason about more abstract notions, like whether this thing is going to move from over here to over there, not whether it's going to move by 0.01 units.
"start": 3086.2799999999997, "text": " understanding of the world so these do you see seem like really big challenges where where is the" }, { "end": 3098.2, "start": 3092.12, "text": " field with these like let's say for the first one compositional and assembled on the fly so I think" }, { "end": 3104.4399999999996, "start": 3098.2, "text": " in terms of the compositional part is like I think we're like getting a better handle on that so like" }, { "end": 3109.48, "start": 3104.4399999999996, "text": " with um you know with the sort of like graph neural network agents that I talked about um like in the" }, { "end": 3114.2, "start": 3110.2, "text": " the paper with the you know the agents being trained to stack blocks like those are very compositional" }, { "end": 3119.64, "start": 3114.2, "text": " agents they have a very compositional understanding of the world um the assembled on the fly part though" }, { "end": 3127.24, "start": 3119.64, "text": " I think we're pretty far from like so what I mean by that is like when you have a model of the world" }, { "end": 3132.52, "start": 3127.24, "text": " like that model should be something that you construct like you know so you find your like say" }, { "end": 3136.2, "start": 3132.52, "text": " say you're an agent and you like you wake up in a new environment and you're like okay I got to" }, { "end": 3141.16, "start": 3136.2, "text": " solve a task and then you make a model of the world based on like what you see you don't just like" }, { "end": 3145.3999999999996, "start": 3141.16, "text": " have some model of the world that you've like trained over like you know a million episodes or whatever" }, { "end": 3150.3599999999997, "start": 3146.2799999999997, "text": " you have you sort of are like okay I can see that there's like these objects in the scene and I know" }, { "end": 3154.6, "start": 3150.3599999999997, "text": " like how they probably interacted with each other so I can sort of like really quickly assemble like a" }, { "end": 3161.24, "start": 3154.6, "text": " you know I'm basically a mental model of like what is what's here you know how those things work" }, { "end": 3166.2, "start": 3161.24, "text": " together um and you know be able to maybe even make predictions about how two things might work" }, { "end": 3172.6, "start": 3166.2, "text": " together that like you've never seen um you know interacting before um and so yeah this idea of" }, { "end": 3178.92, "start": 3172.6, "text": " assembling a model on the fly rather than sort of having like a this gigantic like uh sort of" }, { "end": 3182.44, "start": 3178.92, "text": " holistic model based on all of your previous experience does that sort of make sense" }, { "end": 3188.92, "start": 3182.44, "text": " totally so you're really tackling um part of this first one with with uh with this line of work with" }, { "end": 3197, "start": 3188.92, "text": " your um uh graph based agents and and the and the structure paper yeah yeah exactly so with the" }, { "end": 3201.96, "start": 3197, "text": " second one planning with only a handful of evaluations from noisy incomplete models I mean that" }, { "end": 3206.76, "start": 3201.96, "text": " one made me think of Pilko uh to start with just because it was good with uncertainty but I wonder" }, { "end": 3213, "start": 3206.76, "text": " what uh broader than that um how what do you where do you think we are with with item two planning" }, { "end": 3219.4, "start": 3213, "text": " with with noisy incomplete model models yeah so I I think 
I think there are certainly some instances where planning with a handful of evaluations, or a handful of episodes, can work reasonably well, but they still don't really match the scope at which humans do it. Say you find yourself in a new place, maybe a new shopping mall: the layout is totally different, all the people are different, and you need to find your way to the store you need. You don't have a good model of that environment. Maybe you have a good abstract model of how space works in general, but you might still reason: what if I went down over here; it looks like the clothes stores are over there and maybe the toy stores are on the other side. From only a very small amount of experience you can quickly construct an approximate model, which ties into the idea of assembling a model on the fly, and then run forward simulations from this probably wrong model and still get useful information out of that. The simulations from your model are not just a little bit wrong, they're probably very wrong, but you can still get somewhere with them; human cognition seems to be pretty robust to that, and I don't think we really have anything that looks quite like that in our agents yet.

I guess we have the advantage of an incredible amount of built-in inductive bias and experience, the whole of evolution behind us. What would it even mean for a deep RL agent to have that behind it?

Totally, and I think that's the main question: how do you get to the point where you have an agent that can do that sort of thing, that can get away with only a few noisy evaluations from a probably wrong model? It means the agent already has to have good intuitions about a lot of other things, so where do those intuitions come from, and in what way does the model supplement those intuitions? Those are very interesting and important research questions.
Okay, and the third one: models must generalize far from their training sets, supporting creative exploration and a richer understanding of the world. You mentioned the Blueberry Earth paper there, which I'm so glad you put in; I started reading it and would recommend listeners read it. How would an agent ever write a paper like that?

Exactly, I don't know, but I want to find out. For people who don't know the paper: someone on Stack Overflow or Reddit, I forget which, asked what would happen if the Earth were replaced by blueberries, and then someone else came along and actually wrote a paper taking the physical principles that would be relevant in that case and running them to their logical conclusion. The idea is that the gravity of all the berries suddenly causes them to collapse in on themselves and become jelly, so the Earth would immediately become smaller, and then the heat from that would probably cause the jelly to boil, and so on. It's a very fun thought experiment about what would happen if that were the case. But I think it's such a good demonstration that people think of all sorts of weird stuff. Why does someone think that's an interesting question to ask, what would happen if the Earth were replaced by blueberries? It really underscores what makes human cognition so interesting and so powerful: that we're able to come up with questions like that and come up with at least plausible explanations for the answers.
questions like that and we're able to come up with you know" }, { "end": 3480.76, "start": 3474.36, "text": " plausible at least explanations for the answers um and you know potentially if you look at like" }, { "end": 3485.48, "start": 3480.76, "text": " some types of RL agents that you know maybe like you know stuff looking at you know from the like" }, { "end": 3492.44, "start": 3485.48, "text": " literature on like generative models and like you know uh agains and like um you know uh like" }, { "end": 3496.6000000000004, "start": 3492.44, "text": " text generation and stuff like this like you have agents that also come up with kind of weird stuff" }, { "end": 3501.2400000000002, "start": 3496.6000000000004, "text": " but they they don't come up with the weird stuff in a way that's like for any reason in particular" }, { "end": 3506.6000000000004, "start": 3501.2400000000002, "text": " right it's just because they haven't like properly fit the actual like real distribution of the data" }, { "end": 3511.16, "start": 3506.6, "text": " um whereas people come up with weird stuff not because they haven't fit the true distribution of" }, { "end": 3515.48, "start": 3511.16, "text": " the data but because um they know that like there's things that they don't know and they want to explore" }, { "end": 3520.12, "start": 3516.04, "text": " um and understand the world around them and so I think that's a really like dig an important" }, { "end": 3525.24, "start": 3520.12, "text": " difference and I think that question of like how do you get agents to be anywhere close to that is" }, { "end": 3531.24, "start": 3525.24, "text": " like a huge on an answer question seems like they would really need to understand causality" }, { "end": 3536.04, "start": 3531.24, "text": " and not just correlation which we haven't really seen that much of in in DRO" }, { "end": 3545.32, "start": 3536.04, "text": " yeah uh totally so um so are our minds like running tons of these simulations all the time without" }, { "end": 3549.32, "start": 3545.32, "text": " us being aware of it like I know there's moments when I'm thinking carefully like how would this" }, { "end": 3555.48, "start": 3549.32, "text": " block or this object look if I rotated very conscious of running that but you know if I if we" }, { "end": 3560.2, "start": 3555.48, "text": " really knew what was happening would we find just thousands of these running all the time or is it" }, { "end": 3565.16, "start": 3560.2, "text": " more of a conscious thing how do you look at that like is it is simulation a big chunk of what's" }, { "end": 3572.52, "start": 3565.16, "text": " happening up there yes um so both is happening so we have models in like all over" }, { "end": 3577.56, "start": 3573.24, "text": " cognition so like in lots of different aspects of cognition so like one place where there are models" }, { "end": 3582.92, "start": 3577.56, "text": " for example is in the motor system there's very low level models that actually are assisting" }, { "end": 3588.2, "start": 3582.92, "text": " with motor control and those are like constantly happening you know every time you make a motion" }, { "end": 3593.64, "start": 3588.2, "text": " like those models are running forward on that point so it's not we're not just executing policies" }, { "end": 3600.04, "start": 3593.64, "text": " we're running models yeah are you saying okay cool yeah yeah the models in the in the motor system" }, { "end": 3607.72, "start": 3600.04, "text": " are used for um sort of accounting 
for the delay um that it actually physically takes like a" }, { "end": 3614.8399999999997, "start": 3607.72, "text": " signal to to go from the brain to the muscle um and then come back uh so because of that delay" }, { "end": 3618.8399999999997, "start": 3614.8399999999997, "text": " you know you can't actually get the sensory feedback from the world until a short period of time" }, { "end": 3623.3199999999997, "start": 3618.8399999999997, "text": " later and so the model actually compensates for that by being able to predict what will happen" }, { "end": 3628.2000000000003, "start": 3623.32, "text": " it allows um the mind or the brain to be able to come up with the next action to take before" }, { "end": 3632.76, "start": 3628.2000000000003, "text": " it's actually received the feedback from the world and it was super small note I just really love" }, { "end": 3638.52, "start": 3632.76, "text": " the indicators on the references um marking which ones were of a special interest that really helped" }, { "end": 3644.28, "start": 3638.52, "text": " me prioritize the references it's a small feature but that was cool yeah that's actually uh that's" }, { "end": 3649.56, "start": 3644.28, "text": " a feature of the journal um so I I wish that we had more journals like that in in machine learning" }, { "end": 3654.92, "start": 3649.56, "text": " um that sort of like you know prioritize these sort of like you know half opinion piece half review" }, { "end": 3659.24, "start": 3654.92, "text": " sort of things um you don't see that quite so much that would help there's so many references to" }, { "end": 3666.6, "start": 3659.24, "text": " look at yeah I know okay so um so if I from your own work um do you could you maybe comment on" }, { "end": 3670.2799999999997, "start": 3666.6, "text": " other things that are happening in the world of our role that that you find really interesting" }, { "end": 3675.56, "start": 3670.2799999999997, "text": " these days well one trend that I feel like I've noticed is just that I think we're moving away" }, { "end": 3682.36, "start": 3675.56, "text": " a little bit from the end to end um approach which I think is healthy um I mean so I I generally" }, { "end": 3686.68, "start": 3682.36, "text": " think that like having a diversity of research is important just in general like I think it's good" }, { "end": 3691.48, "start": 3686.68, "text": " if people are working on things even if I don't think that they're like you know the right way to do" }, { "end": 3695.16, "start": 3691.48, "text": " it or whatever or it's not the approach that I would take so I think it's good for everyone to" }, { "end": 3699.32, "start": 3695.16, "text": " you know be working on different stuff but I think I do think that the field has been maybe like a" }, { "end": 3705.56, "start": 3699.32, "text": " little bit like overly focused on end to end learning um where you don't like care quite so much about" }, { "end": 3710.2000000000003, "start": 3705.56, "text": " like what is the you know what are like the right inductive biases that we need to incorporate" }, { "end": 3715.56, "start": 3710.2000000000003, "text": " into our agents and you know what are like uh you know problems that are sort of like more compositional" }, { "end": 3719.32, "start": 3715.56, "text": " and these sorts of things and so thinking about like those types of structure or inductive bias" }, { "end": 3724.28, "start": 3719.32, "text": " I think it's super important and I think the field is starting to move a 
little bit back and" }, { "end": 3728.76, "start": 3724.28, "text": " starting to think about those a bit more so that's like that's very encouraging and exciting to me" }, { "end": 3734.76, "start": 3728.76, "text": " maybe mixing a little bit of the good old-fashioned AI back in there yeah exactly um do you have" }, { "end": 3741.1600000000003, "start": 3734.76, "text": " strong opinions about what RL might uh might look like in in the coming years like in three or 20" }, { "end": 3749.7200000000003, "start": 3741.1600000000003, "text": " years strong opinions uh no I mean I hope that it will look more of like a blend of you know like" }, { "end": 3754.5200000000004, "start": 3749.7200000000003, "text": " sort of not just pushing like the model free thing but also like incorporating strong notions of" }, { "end": 3760.04, "start": 3754.52, "text": " planning and and models and and not you know not just like you know the sort of like next step models" }, { "end": 3764.68, "start": 3760.04, "text": " but like the things that I I talked about in my review paper but models that you know are you know" }, { "end": 3768.92, "start": 3764.68, "text": " more compositional and that enable things like counterfactual reasoning and like the sort of like" }, { "end": 3774.44, "start": 3768.92, "text": " really creative like problem solving and stuff um so I hope that RL will sort of like have all of" }, { "end": 3780.92, "start": 3774.44, "text": " those elements in it um eventually but my last question is do you have any suggestions for us" }, { "end": 3788.2000000000003, "start": 3780.92, "text": " for this podcast uh not in particular I think it's uh it seems like a very nice podcast I've really" }, { "end": 3793.64, "start": 3788.2000000000003, "text": " enjoyed um chatting with you today so uh yeah keep up the good work I guess thanks so much Dr." }, { "end": 3813.7999999999997, "start": 3793.64, "text": " Hammerick on behalf of our listeners and myself um for your valuable time and your insight thanks" }, { "end": 3827.2400000000002, "start": 3813.8, "text": " our episode for today folks be sure to check talk rl dot com for more great episodes" } ]
Pablo Samuel Castro
Pablo Samuel Castro drops in and drops knowledge on distributional RL, bisimulation, the Dopamine RL Framework, TF-Agents, and much more!
https://media.transistor…8c9.mp3?src=site
This is TalkRL Podcast, all reinforcement learning, all the time. Interviews of brilliant folks across the world of RL. I'm your host, Robin Chohan. Dr. Pablo Samuel Castro is a staff research software engineer at Google Brain. He's the main author of the dopamine RL framework. Pablo, thanks so much for joining us today. Thank you for having me. It's a pleasure to be here. So can you tell us a bit about your job? What is the staff research software engineer role at Google Brain? Right. So in Google Brain, we have two types of roles, the software engineer and the research scientist. Software engineer predates research scientist; I don't think research scientist was a role until, I don't know, maybe ten years ago. So I joined Google as a software engineer after finishing my postdoc, and initially I was doing more applied machine learning in ads. Two years ago, after I had transferred to Montreal, the Brain team opened up and I was lucky enough to be able to transfer there. So officially I'm a software engineer, although what I do on a day-to-day basis, I'd say, is more like what a research scientist does. And I think that's really how everybody sees these roles: it's more of a spectrum than a binary thing. You have software engineers that are a bit more heavily on the engineering side, like, for instance, people that build TensorFlow; they're really working on hard engineering problems. And then you have people, maybe a bit like me, that are a bit more on the research side. Same on the research scientist side: there are people that are really just focused on the research and others that are a bit heavier on the engineering side, and you have everything in between. So really the name itself is more a question of expectations, like when you're trying to get promoted or figuring out compensation, that type of thing. You get evaluated on certain components of what your role represents. So for engineering, you have to write high quality code and demonstrate that you're solving challenging problems with elegant solutions, that type of thing. What are you working on these days at work? So I'm working on a few things. Half of my research is actually at the intersection of creativity and machine learning. I do a bunch of work with that: I have a project to generate lyrics, basically to help songwriters write more interesting lyrics, and I do a bunch of other stuff with music and music generation, specifically for use in live performance. So I do some work with that, and the other half, or probably more than half of my work, I'd say, is in reinforcement learning. Here I'm more focused on what I guess you'd call fundamental reinforcement learning, so looking at some of the core algorithms and some of the core theory behind these methods. More specifically, what I've been thinking about a lot lately is representations: the types of representations that are learned by reinforcement learning agents, what it even means to be a representation, what it means to learn a representation, and why you'd want to learn a good representation. I looked back at your master's thesis and your PhD dissertation quite briefly. They're very detailed. But I wanted to ask you about a couple of concepts that showed up in there. Your master's thesis involved Bayesian exploration, and you talked about hyper-MDPs. I wonder if you can help us understand: what is a hyper-MDP? Is it related to a belief MDP?
Is it a different thing? It's related to a belief MDP. I guess it's quite related to a belief MDP; I don't want to misspeak and say that they're the same, but it's quite possible that they're actually the same object. Essentially the idea is that you don't maintain a single MDP from which you do your planning or your learning, but a higher-level object from which you can sample MDPs. This is where the Bayesian part comes from. You maintain a distribution over the MDPs you can sample, and you use this to do exploration. If you're very confident about certain parts of the MDP, you'll be more prone to choose greedily in those areas, whereas in other areas where there's a bit more variance, when you sample multiple MDPs you're going to get different types of systems, and this can induce better exploration. This is what I was looking at in my master's thesis. If you had this hyper-MDP that you could sample MDPs from, what form would it take? How would you represent such an object? Is it a concrete thing or is it a more conceptual thing? No, no, it's a concrete thing. You maintain what are called information states. The way we were doing it was fairly simple, just with counts. For each state, you essentially maintain a set of counts, and from this you can derive a method; I think we were using Thompson sampling. This was over 10 years ago, and I haven't thought about this in a while, so it's possible my memory is failing me, but we were using Thompson sampling with these counts. For instance, if you have a lot of counts for a particular state, for the next-state transitions, the variance in the MDPs that you sample from those counts is going to be much lower than for other states where your counts aren't as high. For my master's thesis, we were approaching this as a linear programming problem. If I recall correctly, we were essentially drawing a bunch of samples, then constructing a rollout tree of the possible MDPs that you can sample from that, and solving this rollout tree using linear programming. It's fairly expensive, but this was at a time before deep nets were a thing for reinforcement learning. The problems we were tackling were a lot smaller, and we were approaching it more from a theoretical angle.
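(To make the sampling idea concrete, here is a minimal posterior-sampling sketch in the same spirit. This is not the thesis algorithm, which built rollout trees and solved them with linear programming; it is a generic Thompson-sampling-over-MDPs loop with Dirichlet transition counts, and the environment interface and all names are assumptions made for illustration.)

```python
# Thompson sampling over MDPs, maintained as per-(state, action) counts.
# States with many counts yield concentrated Dirichlet posteriors (low variance),
# so sampled MDPs agree there and the agent acts greedily; elsewhere the samples
# disagree, which induces exploration.
import numpy as np

def sample_mdp(trans_counts, reward_sums, visit_counts):
    """Sample a plausible MDP from per-(state, action) statistics."""
    S, A, _ = trans_counts.shape
    # Dirichlet posterior over next-state distributions (prior = 1 pseudo-count).
    P = np.array([[np.random.dirichlet(trans_counts[s, a] + 1.0)
                   for a in range(A)] for s in range(S)])
    # Crude reward model: empirical mean plus noise that shrinks with visits.
    R = reward_sums / np.maximum(visit_counts, 1)
    R = R + np.random.randn(S, A) / np.sqrt(np.maximum(visit_counts, 1))
    return P, R

def value_iteration(P, R, gamma=0.95, iters=200):
    S, A = R.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        Q = R + gamma * np.einsum("sat,t->sa", P, Q.max(axis=1))
    return Q

def thompson_episode(env, counts, reward_sums, visits, gamma=0.95, horizon=100):
    """One episode: sample an MDP, act greedily in it, update the counts."""
    P, R = sample_mdp(counts, reward_sums, visits)
    Q = value_iteration(P, R, gamma)
    s = env.reset()
    for _ in range(horizon):
        a = int(np.argmax(Q[s]))       # greedy with respect to the sampled MDP
        s2, r, done = env.step(a)      # assumed tabular environment interface
        counts[s, a, s2] += 1          # update posterior statistics
        reward_sums[s, a] += r
        visits[s, a] += 1
        s = s2
        if done:
            break
```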
Okay, thanks. Then your PhD dissertation involved bisimulation. Can you help us understand what that concept is? What is bisimulation? Sure. Bisimulation is a notion that comes from concurrency theory, and the notion of bisimulation came from the concept of simulation. The idea there was: if you have some complex system whose internals you don't really get to see, but you want to be able to say things about it, maybe you can construct a simulator of it. The way it would work is almost like a game: for whatever transition the real system makes, you can simulate that transition with your simulator. If you can demonstrate that this is the case, and by demonstrate I mean prove it mathematically, then whatever verification you'd want to do, for instance that you'll never reach some dangerous state, or that you'll only reach it with low probability, you can do on your simulator, because it exactly simulates your real system. Bisimulation is a bit stronger, in the sense that it tells you that two systems simulate each other, in essence. Initially in concurrency theory this was studied in systems that were deterministic, that is, non-stochastic. There you can have a notion of two-way simulation, but when you start adding stochastic transitions it becomes a little trickier. That's where the notion of bisimulation that we use in MDPs started coming about. Rather than looking at this two-way simulation, you're really looking at the two systems concurrently. Actually, there's a neat proof technique called coinduction, which is the dual of the induction that we all know about, and it's through these coinduction proof techniques that you can prove things about these bisimulation relations. When I say relation, I mean it's an equivalence relation. If you're now talking about MDPs, you can ask whether two states are bisimilar, and when they're bisimilar, that means they're behaviorally indistinguishable. You can think of it as a state aggregation methodology: if you were able to compute the bisimulation equivalence relation in an MDP, then you could potentially reduce the size of your MDP by collapsing all states that are within the same equivalence class. What does it mean to be bisimilar? Two states are bisimilar if, for all actions, they have the same immediate reward, and also, for all actions, they have the same probability of transitioning into each equivalence class of your bisimulation equivalence relation. You can see it has this recursive, or circular, definition, and the reason this works is via the notion of coinduction that I mentioned before. If you're able to compute this bisimulation equivalence relation, and we have algorithms for doing this using just standard dynamic programming, then you can collapse the state space of your MDP. A lot of my work in my PhD was looking at these equivalence relations and seeing how they relate to, for instance, just grouping states based on optimal value functions: if two states have the same optimal value, maybe we group them together, and how does that relate to the bisimulation equivalence relation? What if you group states that are equivalent under all policies, or under a special class of policies? I also looked at this in the case of POMDPs, which gets a bit more interesting because you have partial observability. There, there's a very close relation to predictive state representations, which is something a lot of people thought about; some people still think about it now, but it's not as prevalent as it used to be. These are equivalence relations, so they're zero-one, binary: either two states are bisimilar or they're not. You can consider a generalization of this which makes it a bit smoother, and this is what bisimulation metrics are. Rather than the zero-one relationship that you get with equivalence relations, you have a distance. The important property is that if two states have a distance of zero, that means they are equivalent according to the bisimulation equivalence relation, and the closer two states are, the closer they are to being truly bisimilar. What this allows you to do is, for instance, create epsilon balls: you group together all states that are within epsilon of each other according to this bisimulation metric. This is a lot more expensive and more difficult to compute. The equality of rewards is replaced by simply the absolute difference of their rewards, but the equality of transition probabilities is replaced using the Kantorovich metric, or what's better known as the Wasserstein-1 distance.
That Wasserstein distance is expensive to compute, and you have to compute it multiple times, for all pairs of states and all actions, so it gets really expensive. That's essentially the idea. It's this really nice theoretical tool, and what's really nice about it is that the distance between two states is an upper bound on the difference in their optimal values. That means that if you do group states that are, say, within epsilon of each other, you know that the approximation error you're going to get for the optimal value function is bounded by epsilon. It sounds like another type of generalization: we're generalizing the policy across similar states. If you didn't do that step and just fed in all these similar states... I mean, I was looking at a recent paper, DeepMDP, that showed two images from Asteroids. In one image the asteroids were blue, and in the other one they were a different color. The states were different, but it didn't really matter in terms of actions and rewards in the MDP, so it said those were bisimilar. We would hope that our RL algorithm would figure that out. Is what you're saying that you could use a different state representation before your model for RL even begins, so that it knows that those two are the same? Right, that's the hope. If you had an oracle that was able to give you this bisimulation metric for the MDP that you're trying to learn an optimal policy for, then you could presumably construct an embedding, or a representation, of your states such that, for instance, the Euclidean distance, or some type of distance on this manifold, is exactly the bisimulation metric. If you have that, then essentially you're collapsing together states where, as in the example you're mentioning, pixel differences that don't really play any role in terms of the dynamics get collapsed together. And this DeepMDP paper, that's kind of what they were arguing. The result they have there that relates it to bisimulation metrics says that, under their notions of Lipschitz continuity, if two ground states get mapped to the same latent state, that only happens when those two ground states are exactly bisimilar, or have bisimulation distance of zero. That means you don't want to collapse two states that are behaviorally distinguishable, because then, if you're collapsing them and making policy choices for this collapsed state, you might be making suboptimal choices for one of them. Whereas if they are bisimilar, you can be sure that choosing the same action for both of them is okay, because they are bisimilar. The problem is that I started by saying, say you have this oracle, and obviously you don't have this oracle. So that's actually one of the things I'm working on quite a lot as well, and it's somewhat related to the notion of representation I was mentioning: is there a way to take this nice theoretical object, the bisimulation metric, and incorporate it into the learning process such that it helps you build better representations that are able to generalize more? Thanks for explaining that to us. I don't really know, I hope that made sense. Well, I followed some of it, and what I'll do is listen to this again and go back and look at these papers, because this is really interesting.
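(For readers who want the recursion spelled out, here is a minimal tabular sketch of a Ferns-style bisimulation metric computation, with the Wasserstein-1 term solved as a small linear program over couplings. The weights cR and cT and all function names are illustrative assumptions; the all-pairs, all-actions cost being described above is exactly why this only makes sense for toy MDPs.)

```python
# Fixed point being iterated:
#   d(s, t) = max_a [ cR * |R(s,a) - R(t,a)| + cT * W1(P(.|s,a), P(.|t,a); d) ]
import numpy as np
from scipy.optimize import linprog

def wasserstein1(p, q, ground_dist):
    """W1 between two distributions over states, under the current metric d."""
    n = len(p)
    cost = ground_dist.flatten()              # minimize sum_ij coupling_ij * d_ij
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0      # row marginals must equal p
        A_eq[n + i, i::n] = 1.0               # column marginals must equal q
    b_eq = np.concatenate([p, q])
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun

def bisimulation_metric(R, P, cR=1.0, cT=0.9, iters=50):
    """R: (S, A) rewards, P: (S, A, S) transition probabilities."""
    S, A = R.shape
    d = np.zeros((S, S))
    for _ in range(iters):
        d_new = np.zeros_like(d)
        for s in range(S):
            for t in range(s + 1, S):
                gaps = [cR * abs(R[s, a] - R[t, a])
                        + cT * wasserstein1(P[s, a], P[t, a], d)
                        for a in range(A)]
                d_new[s, t] = d_new[t, s] = max(gaps)
        d = d_new
    return d   # d[s, t] == 0 exactly for bisimilar states
```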
So I wanted to ask you about a paper you co-authored called A Comparative Analysis of Expected and Distributional Reinforcement Learning, with Clare Lyle and Marc Bellemare. Could you help us understand the main idea of this paper? Right. Distributional reinforcement learning is this new way of thinking about reinforcement learning that Marc, Will Dabney, and Rémi Munos published in 2017. If you think about the Bellman backup, where you have the reward plus the discounted expected future rewards, that's what we've been using for many, many years: you have that expectation, which is the expected value of the return over future trajectories according to your policy. The neat thing they did was to say: what if we replace that expectation with the distribution? So rather than backing up single values, we're backing up distributions. They introduced an algorithm they called C51 in the paper, where they're essentially maintaining a finite support over the possible values and adjusting the distribution with each backup. And they were able to show that doing this gave some really significant advantages in Atari games, which is where they were running their experiments. As a mathematical notion this was really interesting and neat, but empirically, the fact that it works better was, I guess, somewhat surprising, because even though they had convergence guarantees, they didn't have guarantees of better performance. So the idea for this paper was to investigate where this advantage is coming from, and whether there are situations where there is no advantage. Clare did a lot of theoretical work starting from ground zero: take the simplest case, tabular representations of states, and compare these two ways of doing reinforcement learning. The way we did it was what she'd call a thought experiment: suppose you were to observe exactly the same trajectories, exactly the same samples, with both types of algorithm. You can imagine running the simulator in parallel, with the two copies exactly synchronized, and performing both the expectation backup, the traditional backup, and this distributional backup. What happens when you get to the end, is there any difference? It turns out that in the tabular case there's no difference, so you don't really gain anything from doing distributional RL. When you go to the linear case, if you're representing your distribution as a cumulative distribution function, a CDF, rather than a probability mass function, then you also get exactly the same thing; the performance of the two is the same in the end. If you're not representing it as a CDF, then you don't necessarily get the same thing. Not that one's better; they're just different, and she had some experiments in there that basically showed sometimes distributional wins, sometimes expectational wins. Now when you go into the non-linear setting, which is what we typically use with deep nets, then you really start seeing a difference with distributional.
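(To make the C51-style backup concrete, here is a minimal sketch of the categorical projection step it relies on, for a single transition. The support bounds, atom count, and names are illustrative assumptions, not the paper's code. As long as shifted atoms stay inside the support, the proportional split preserves the mean, which is consistent with the tabular equivalence result described above.)

```python
# One C51-style distributional backup: shift/scale the next-state distribution
# by the Bellman operator, then project it back onto a fixed categorical support.
import numpy as np

V_MIN, V_MAX, N_ATOMS = -10.0, 10.0, 51
SUPPORT = np.linspace(V_MIN, V_MAX, N_ATOMS)
DELTA_Z = SUPPORT[1] - SUPPORT[0]

def categorical_backup(reward, gamma, next_probs):
    """next_probs: probabilities over SUPPORT for the bootstrap state-action.
    Returns the projected target probabilities for the current state-action."""
    target = np.zeros(N_ATOMS)
    for p, z in zip(next_probs, SUPPORT):
        tz = np.clip(reward + gamma * z, V_MIN, V_MAX)        # shifted atom
        b = np.clip((tz - V_MIN) / DELTA_Z, 0, N_ATOMS - 1)   # fractional index
        lo, hi = int(np.floor(b)), int(np.ceil(b))
        if lo == hi:                       # lands exactly on an atom
            target[lo] += p
        else:                              # split mass proportionally
            target[lo] += p * (hi - b)
            target[hi] += p * (b - lo)
    return target

# Sanity check: for unclipped atoms the projection preserves the mean, so the
# mean of the backed-up distribution matches the ordinary expected backup.
next_probs = np.full(N_ATOMS, 1.0 / N_ATOMS)
t = categorical_backup(reward=1.0, gamma=0.9, next_probs=next_probs)
assert abs(t @ SUPPORT - (1.0 + 0.9 * next_probs @ SUPPORT)) < 1e-6
```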
And empirically, we showed that this difference really comes down to the expressivity of your representations. We were doing essentially linear function approximation using a Fourier basis, and you can increase the order of this Fourier basis. As you increase the order, distributional really started to shine more. So the point of the paper was essentially, it's almost a to-be-continued paper, because we demonstrated that it's really distributional combined with deep nets where you see this advantage. Since then, we've still been trying to answer this question with some follow-up work. Okay. And then, when you described representing the distribution using a CDF or a PMF, which ones are the main distributional RL algorithms using? Like C51 is using a PMF, is it? Yes. Yeah. So that was one thing that came out of this paper: that maybe we shouldn't be using PMFs when we do these backups. We had this other paper at AISTATS this year where we weren't enforcing that it be a proper distribution, so we got rid of the softmax, and we were still able to prove that this converges. It was still a PMF. The results we got were not state of the art, in the sense that we weren't able to win over what we had before. The AISTATS paper actually happened before this AAAI paper, the comparative analysis, and the comparative analysis kind of demonstrated that you do actually need the CDF to be able to perform better. And so something like quantile regression seems to work better than C51. Okay. And then would IQN fall into that category as well? Yes. Along with quantile regression? Yes. Okay, interesting. So the idea of looking for this expectation equivalence, is that just to help you understand what's happening, or do you really want to find that expectation equivalence to know that it's correct? Well, ultimately the way these algorithms behave is that you're still taking an argmax when choosing the action, right? Whether you're representing your value as a single number or as a distribution, you're going to be taking an argmax, and that argmax is essentially taking the first moment of your distribution, just taking that expectation that you were backing up. So in terms of analyzing the performance of these agents, you do kind of want to look at this expectation. Now obviously there can be other methods that look at other moments of the distribution, like the variance or the skewness or something like that, but we weren't looking at those methods. I think that's an interesting avenue to look at, but there's no kind of canonical algorithm for that yet, at least. So we focused on these expectations. Okay. And is it still unclear why distributional RL is helpful, or is this now more clear? Maybe it's more clear than before, but it's definitely not a solved problem. We've been doing some work where we have some evidence that suggests that it learns better representations. What it means to have better representations, what "better" means and what "representation" means, is something we still have debates about; some of us on the team have different ideas of what this means. But in general, it does seem like on average they have, quote unquote, better representations. And this might actually come from the fact that you can think of distributional RL as almost like having auxiliary tasks, which, if you think of papers like UNREAL, have been shown to really help with the learning process. Maybe they're serving as some type of regularization for your representations. That's still kind of open for discussion.
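(The quantile-regression family mentioned above, QR-DQN and IQN, flips the parameterization: fixed probabilities, learned value locations, trained with the pinball loss. A minimal sketch follows; all names are assumptions, and the Huberized variant that QR-DQN actually uses is omitted for brevity.)

```python
# Pinball (quantile-regression) loss: each output theta_i estimates the
# tau_i-quantile of the return distribution.
import numpy as np

def pinball_loss(quantile_estimates, target_samples, taus):
    """quantile_estimates: (N,) current quantile values theta_i.
    target_samples: (M,) samples of the Bellman target r + gamma * Z'.
    taus: (N,) quantile fractions, e.g. the midpoints (2i + 1) / (2N)."""
    u = target_samples[None, :] - quantile_estimates[:, None]   # u[i, j]
    loss = np.where(u >= 0, taus[:, None] * u, (taus[:, None] - 1.0) * u)
    return loss.mean()

# With enough target samples, the minimizer of this loss is exactly the set of
# tau_i-quantiles of the target distribution, which np.quantile computes directly.
taus = (2 * np.arange(5) + 1) / 10.0
targets = np.random.normal(loc=1.0, scale=2.0, size=10000)
theta = np.quantile(targets, taus)
print(pinball_loss(theta, targets, taus))
```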
And these distributional RL algorithms are performing really well, right? Like, they perform better than without them; in general that's why they're in Rainbow, and I think IQN is still state of the art on some Atari games. Yes, yeah. So that's an important thing: they don't win all the time, but on average, in the majority of games, they do seem to give quite a big advantage. As more people start to look into them, one of the difficulties is that they aren't as simple to implement as some of the previous algorithms people use. But I think more and more people are starting to really look into these and use them, because they do tend to perform better than just expectational RL. And then, aside from just their raw performance, I think they open up different types of policies, right? Once you have the full distribution, you could say, okay, I'm not going to take any risky moves that risk getting these low rewards. So you might have a different policy to choose across your distribution of value functions. Is that true? Absolutely, yeah. I think it does open up a lot more flexibility in terms of what you can do and what you can say about the behavior of your algorithms. As I was saying, most of the time, even though we're maintaining a distribution, we take the first moment when we're choosing our action, which is just the mean. But if you could, as you're suggesting, take higher moments and use those to inform the action choice, maybe for exploration or maybe for safe RL, it definitely opens the door for more possibilities. Okay, but in terms of exploration, I'm trying to understand how these could help with exploration, because, if I understand correctly, they're not capturing the uncertainty in your transitions, or what you don't know about the environment. I'm trying to imagine what these distributions look like when you start training as opposed to when you're done. Are they informing you in the early stages of training about where exploration is needed? They're not, right? Not really, not really. But they do inform... I guess it would be a bit more applicable towards safe RL, or safe exploration, if you will. We've generated a bunch of videos where, for instance, in Space Invaders, when you're close to dying, the distribution really shifts towards zero, because it's essentially saying there's not much hope in what you can do. I don't know, maybe that's a point where you want to explore as much as you can, because you might be able to find an escape hatch or something to escape that situation where it seems like all hope is lost. I don't think there's an existing algorithm, at least not that I know of, using these for exploration directly. But I do feel there is something there that could potentially aid these algorithms. I wish I could go back and tell my teenage self that we would be really seriously discussing Space Invaders at this point in 2019, and that this is serious work and serious business.
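(To illustrate the "different types of policies" point above: with a full return distribution per action, you can act on a risk-sensitive statistic such as CVaR, the mean of the worst alpha-fraction of outcomes, instead of the mean. This is a purely hypothetical sketch, not an algorithm from the papers discussed in this episode.)

```python
# Action selection from per-action return distributions over a common support.
import numpy as np

def select_action(atom_values, atom_probs, mode="mean", alpha=0.25):
    """atom_values: (N,) support; atom_probs: (A, N) per-action distributions."""
    if mode == "mean":
        scores = atom_probs @ atom_values
    else:  # "cvar": risk-averse, mean of the lowest alpha probability mass
        scores = []
        for p in atom_probs:
            order = np.argsort(atom_values)
            mass, acc = 0.0, 0.0
            for i in order:                      # accumulate worst outcomes first
                take = min(p[i], alpha - mass)
                acc += take * atom_values[i]
                mass += take
                if mass >= alpha - 1e-12:
                    break
            scores.append(acc / alpha)
        scores = np.asarray(scores)
    return int(np.argmax(scores))

# Example: a risk-averse agent may prefer a lower-mean but safer action.
support = np.linspace(-10, 10, 51)
probs = np.stack([np.full(51, 1 / 51),           # wide, risky distribution
                  np.eye(51)[30]])               # certain, moderate return
print(select_action(support, probs, "mean"), select_action(support, probs, "cvar"))
```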
So I was looking at another paper that you co-authored, called A Geometric Perspective on Optimal Representations for Reinforcement Learning. Can you help us understand the general idea of that paper? So this paper, just to be clear, is mostly Marc's work; I was assisting on this paper. Most of the ideas came from him, and I think he had a lot of really fruitful discussions with Dale Schuurmans about this. But the idea, again, came from this notion of trying to understand where the advantages of distributional reinforcement learning are coming from, and wondering whether they're really coming from the auxiliary task interpretation, and in general how auxiliary tasks help. Out of this idea and this work actually came two papers. The other one, which I'm not on, is the value function polytope paper at ICML this year; Robert Dadashi was the first author of that one. They essentially showed that you can view the set of value functions as a convex polytope, and that theoretical work was used in the geometric perspective paper, where you can essentially show that extreme vertices of this polytope correspond to deterministic policies. One way you can think of representations: if you think of a good representation as something that can represent many value functions, then you want to find a representation such that its distance to any possible value function is minimized. If there exists an optimal representation, which I guess, if you have ground states, would be exactly the ground states, then you would have an error of zero when you're trying to approximate any value function, because the representation is expressive enough to represent all of those value functions. So the idea of the geometric perspective paper is to learn these representations by minimizing this approximation error against multiple value functions, not just the optimal value function; it's not just trying to get to the optimal value function, where, as long as you can represent that one, it doesn't really matter how well you can express other value functions. By viewing it this way, he was able to demonstrate that you can rephrase the learning problem as a constrained programming problem, where you're essentially trying to find a representation such that, given a large set of value functions, it minimizes the approximation error concurrently for all of these value functions. Some of my favorite parts of this paper are the visualizations, where you have this four-room grid world. If you use this technique, you're able to essentially visualize the activations that each of the cells gives to the representation, and you get a much more comprehensive, I'd say, set of representations, able to cover the state space a lot more smoothly than what you would get if you were to just do regular RL without this technique. And so this is related to this notion of learning some type of basis, which you use as a representation for expressing any type of value function. Yeah, I love the diagrams too. They really helped me visualize what was going on. So they were showing these adversarial value functions, and I'm trying to really understand what these AVFs are about. Are they synthetic value functions that are trying to maximize some synthetic rewards that are not the one we actually care about? Yes. So the way they work is that you can think of a vector of length equal to the number of states. If this vector is all strictly positive, and you multiply it with your value function while you're doing learning, then this will still converge to the optimal policy; you still recover your policy.
However, if some of the elements in this delta vector are negative, then you can think of those as states that you want to avoid, in a sense; you want to try to avoid going into those states. This changes your value function and induces a different policy. So in this way, by sampling these delta vectors, where each element takes on one or negative one with equal probability, you're going to end up with some states where you're trying to maximize the value function and other states where you're trying to actually minimize the value function. From this, you end up with a different type of policy than the optimal policy, which is what you would end up with if the vector were all positive. Right, so by doing the sampling, where you're sampling between negative one and one, you essentially end up with a diverse set of policies. So essentially, he phrases the learning problem as sampling a bunch of these delta vectors, then using policy gradient to find the optimal policy for the value function induced when you multiply with these delta vectors, and then trying to find a representation that minimizes the approximation error for that set of value functions. So are we learning these AVFs at the same time as our main value function, is that right, or is it in advance? That's in advance; this is to learn the representation. So it's like, if we learn all these other synthetic value functions, then when we come to learn the one we care about, it becomes an easy linear task? That's the whole idea, yeah. And depending on what you want to do with these representations, you can either find the optimal policy or, if you want something that's interpretable, as figure three in the paper is trying to demonstrate, by using this technique you get this basis, essentially, this representation, that is a lot richer than what you would normally get by either just sampling random policies or just trying to compute the optimal value function. So I was curious about these delta functions. I think in the paper the delta function was just random, is that right? Yes. So I'm just imagining, if the state space got very large, with the delta function being a random plus-one, minus-one sample, the delta function would become kind of like static noise. I mean, in a small room, if you had some tiles with minus one and plus one and you kind of squint, you can kind of see, well, there's a blob of areas we should avoid over there and a blob of areas we should hit over there. But when the state space becomes larger and all the tiles become really small, then it just becomes this fuzz. I wonder if this method would scale, or whether it's independent of the size of the state space. Well, it's quite possible, yes. Part of the way the idea was phrased is that you really have a distribution over value functions, and so you could presumably have a distribution that tries to ignore uninteresting policies or uninteresting value functions. The delta idea is to try to get these extremal vertices of the polytope I mentioned, which are deterministic policies. But yes, it doesn't scale super gracefully with the size of the state space, because the number of policies, if you were to take all the delta values, is two to the n, which is still better than A to the n, but it's still somewhat restricting when you try to go to large state spaces. And it's quite possible that your intuition is right, that you end up with a lot of noise, in which case you might want to consider a different way of sampling the policies rather than this delta technique. But that's not something we really looked at in the paper. We did run some preliminary experiments on Atari with the same delta idea, and we weren't able to quite get it working, which is why it didn't make it into the paper. But that suggests that we need an improved way of sampling these policies when going to large state spaces.
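(A tiny tabular sketch of the delta-vector idea, under loudly stated assumptions: the maximizing policy for each sampled delta is found here by brute force over deterministic policies rather than the paper's policy-gradient procedure, and the final basis is taken from a truncated SVD, which minimizes the total squared approximation error over the collected value functions. All names are made up, and this only makes sense for very small MDPs.)

```python
import itertools
import numpy as np

def policy_value(P, R, policy, gamma=0.9):
    """V^pi via the linear system (I - gamma * P_pi) V = R_pi."""
    S = R.shape[0]
    P_pi = P[np.arange(S), policy]            # (S, S) transition under pi
    R_pi = R[np.arange(S), policy]            # (S,) reward under pi
    return np.linalg.solve(np.eye(S) - gamma * P_pi, R_pi)

def adversarial_value_functions(P, R, num_deltas=20, gamma=0.9, seed=0):
    """For each delta in {-1,+1}^S, keep V^pi of the policy maximizing delta . V^pi."""
    S, A = R.shape
    rng = np.random.default_rng(seed)
    all_policies = list(itertools.product(range(A), repeat=S))   # A**S of them
    avfs = []
    for _ in range(num_deltas):
        delta = rng.choice([-1.0, 1.0], size=S)
        best = max((policy_value(P, R, np.array(pi), gamma) for pi in all_policies),
                   key=lambda v: float(delta @ v))
        avfs.append(best)
    return np.stack(avfs)                     # (num_deltas, S)

def representation_from_avfs(avfs, k=4):
    """Rank-k state features minimizing squared approximation error over the AVFs."""
    _, _, vt = np.linalg.svd(avfs, full_matrices=False)
    return vt[:k].T                           # (S, k)
```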
Okay, so now I want to move to dopamine, which is the original reason that I reached out to you and how I first heard your name. You're the primary author of the dopamine RL framework; that's at github.com slash google slash dopamine. I really like this project. The repo describes it as a research framework for fast prototyping of reinforcement learning algorithms, and if I understand correctly it supports specific DQN variants, including parts of Rainbow and IQN. So I wanted to ask you, Pablo, how did you end up writing this framework? So it was really Marc's idea. When I joined Brain two years ago, I had assumed I had said goodbye to academia forever when I finished my postdoc, because this was at a time when it was very difficult to get a job in the type of work I was doing, more theoretical machine learning. So I thought I had said goodbye to academia, but then I was lucky enough to come back to it. And when I rejoined, Marc also joined the team; he switched from DeepMind in London to Brain in Montreal. We knew each other from our master's, because we did our master's degrees together at McGill, and he had obviously still been doing a lot of research. So we sat down to try to do some research together, and we said, okay, let's start with some implementation of DQN. I went looking around and there were a bunch on GitHub, but it wasn't clear which one was reliable. There was one from DeepMind, but I think it was written in Lua, which we didn't want to use. We found some internal implementations, but they were a bit more complicated than we wanted, because they were aimed at something different. And then finally we just said, why don't we build our own? We're going to be iterating on this a lot; if we get to know the code base really well, it'll be much better for us, and if we do something that's as simple as it can be, it will likely help other people as well who are doing the same type of research we're doing. So we set out to do this, and it was a fairly long design process. We had a bunch of meetings at the beginning where we were trying to scope out what we wanted to do: do we want to be comprehensive and have every single algorithm out there and every single environment, or do we want to restrict ourselves, at least initially, to Atari and DQN variants? We ultimately decided to do the latter, because we decided to just base our decisions on the research we wanted to do. So it was really: let's build something that is useful for us, under the assumption that we're not the only ones doing this research, so it will be useful for other people as well, and then we'll see how it goes.
So after making that decision, it became clear for us when we had to make calls on what algorithms to include and other, more technical, design decisions. So you just released dopamine 2.0 very recently. What can you tell us about that release? Right. Initially we wanted to do just Atari, because most of our research was in Atari and we just wanted to keep it simple and not get bogged down with trying to support other environments. And that worked great: the comparative analysis and the AISTATS paper that I mentioned were all run on early versions of dopamine. When we put it out, it got a really good response. We also wanted to make sure that people would find it useful; we didn't want to put all this work into it and then find that nobody ended up using it. If we weren't going to use it, we weren't going to put it out. But seeing how many people started using it, and the requests we started getting for supporting OpenAI Gym environments more generally, we decided that was probably the most natural next step. So dopamine 2.0 was meant to do that: to go beyond Atari and support discrete-domain environments from Gym. The idea with this was also to add an interface where it doesn't only support Gym. There's a nice wrapper where you can just pass in the environment name and it works, but it also allows you, if you have your own environment, to pretty easily plug it into this API. Awesome. Okay. So I used this framework a little bit last October, and I found, like you said, it was designed for Atari, so I made a simple fork to allow it to take arbitrary inputs. It sounds like that's not going to be needed anymore, because it supports that out of the box, right? Yeah, if it's a Gym environment, it should just work out of the box. You might need a little bit of code for specifying observation shapes and things like that, but it should be pretty easy; you shouldn't have to reinvent the wheel every time. And if it's a non-Gym environment, it still shouldn't be too bad. You just have to set up the right hooks, but it should be pretty clear how to do that.
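(For readers wondering what "your own environment" looks like on the environment side, here is a generic sketch of a Gym-compatible environment using the classic gym Env/spaces API and the old four-tuple step signature. The specific hooks or wrapper names that dopamine expects aren't shown here; the toy environment and all names are purely illustrative.)

```python
import gym
import numpy as np
from gym import spaces

class CorridorEnv(gym.Env):
    """Toy 1-D corridor: move left or right, reward 1 for reaching the end."""

    def __init__(self, length=10):
        super().__init__()
        self.length = length
        self.action_space = spaces.Discrete(2)        # 0: left, 1: right
        self.observation_space = spaces.Box(0.0, 1.0, shape=(1,), dtype=np.float32)
        self.pos = 0

    def reset(self):
        self.pos = 0
        return np.array([self.pos / self.length], dtype=np.float32)

    def step(self, action):
        self.pos = min(max(self.pos + (1 if action == 1 else -1), 0), self.length)
        done = self.pos == self.length
        reward = 1.0 if done else 0.0
        obs = np.array([self.pos / self.length], dtype=np.float32)
        return obs, reward, done, {}                  # classic Gym 4-tuple
```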
So what is your vision for dopamine going forward? Do you plan to grow it or change it in any way, or maintain it as it is? What's the vision? Yeah. One of the things about dopamine is that we're primarily researchers, all of us, so we didn't want to be in a state where we're just maintaining dopamine all the time. We wanted it to be useful for a bunch of people externally, but not to the point where it requires us to be constantly fixing bugs and maintaining it, which is part of the reason we wanted to keep it as nimble as possible. But going forward, some of the things we've talked about are just making sure that we always have an implementation of whatever algorithm is state of the art at the moment. Right now, for value-based agents, I still think that's Rainbow or IQN; I don't think there's any other that's clearly the new leader. And we obviously keep DQN, because that's the one these all came from. One thing I've been thinking about a lot is whether we can start supporting continuous control, or policy-gradient-type methods. That opens a whole new set of complexities, but I've been thinking more and more that maybe I should take on that challenge, because it also relates to some of the research I want to do. So again, back to the initial idea that we build things when we need them for research. I love the clarity of this framework, and I love knowing that we can rely on the implementations being correct. So I want to ask you about the gin configuration. Was that a big decision for you to use that? Is that a feature? Or I don't know if you'd call it a feature, but I really liked it. So when we started working on this, we were trying to figure out how to specify parameters that are needed in multiple places. Flags are kind of like the v0 of what you do: you pass in flags via the command line, but that means you have to pass these values as parameters all the way down. So if you have a calling function that calls some object that creates an agent, which creates a replay buffer, and the parameter you need is for the replay buffer, you have to add it as a parameter to all of these steps. That to me seemed kind of ugly. You could also think of maybe creating something like a protobuf, so you just create one protobuf that contains all of your parameters. I wasn't too keen on that either, because it seemed you were still passing around a bunch of things from one function to the next that you don't necessarily need. And so, Gin was being developed within Google; one of the main contributors to Gin is actually one of the main contributors to TF-Agents, and because I was doing some work with TF-Agents at the time as well, where they were using Gin, I got to know it. The reason I really like it is that you can specify, in a single config file, all the parameters for all of the objects in your experiment. So going back to the replay buffer example, which is created by the agent, which is created by the runner, which is created by the main calling function: you don't have to pass this parameter through all these calls, you can just specify it directly in this one Gin config file. And then you still have the flexibility to change these parameters on the command line. I do recognize it takes a little getting used to; even internally, a bunch of people do sometimes get confused by it. But I think it's worth the initial ramp-up. I use it for everything now, because I just find it super easy to keep track of the experiments I'm running and to try things with new hyperparameters really quickly. Okay. So I'm going to take a second look at Gin. I kind of worked around it, but given your recommendation, I'm going to check that out.
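(A minimal sketch of the gin-config pattern being described: decorate the things you want to configure, put the bindings in one config, and parse it at startup. The class and parameter names here are made up, not dopamine's actual configurables; a real setup would typically parse a .gin file, with command-line overrides passed as extra bindings, rather than the inline string used here to keep the sketch self-contained.)

```python
import gin

@gin.configurable
class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.capacity = capacity

@gin.configurable
def create_agent(learning_rate=1e-4, gamma=0.99):
    # No parameter threading: the buffer capacity arrives via the binding below.
    buffer = ReplayBuffer()
    print(f"lr={learning_rate} gamma={gamma} capacity={buffer.capacity}")

# These bindings would normally live in an experiment.gin file.
gin.parse_config("""
ReplayBuffer.capacity = 1000000
create_agent.learning_rate = 0.00025
""")
create_agent()
```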
And that was a nice segue to TF-Agents. You were involved in TF-Agents; can you help me understand what your role was on the TF-Agents project? So TF-Agents started around the same time we started building dopamine, and initially we had a lot of meetings to try to see if we would work together and just build one thing. But it became clear at the beginning that the scope of the projects was very different. As I said, we wanted to keep dopamine as nimble as possible and very closely tied to our research interests. TF-Agents was being a bit more ambitious: they wanted to support many different algorithms and many different environments, and have it be very modular, so you could swap different pieces without really breaking your system. Initially we still worked closely together, because we wanted to make sure we were aware of each other and sharing as much as we could. I was mostly involved, initially, in implementing the distributional RL stuff. So C51, and I don't even remember now if they have C51 out there, but if it's not there, there was at one point an implementation of C51 that I coded up. I was helping a lot with that, and just in general with whatever they were working on at the time; I was initially quite active with it. But then at one point it became clear that dopamine and TF-Agents were really going to be two distinct frameworks. But not competitive. We had many meetings about this: we wanted to make sure that it wasn't just Google putting out two competing frameworks and leaving people confused. I mean, maybe people still are kind of confused, but from the beginning it was very clear to all of us that these are complementary frameworks. TF-Agents, I think, is much better positioned to handle very large experiments, or more production-type experiments, or experiments where you're not really being disruptive with the algorithms too much but are more combining different aspects, different types of algorithms or different types of environments; I think TF-Agents is likely better suited for that. Whereas dopamine is really meant for what we call throwaway research. This is research where you have some crazy idea that's quite disruptive, and getting into the internals of the algorithm, we hope, is fairly easy with dopamine, so you can try these crazy ideas. Most of the time these crazy ideas don't work; that's why we call it throwaway research, because you throw it away. But at least the hope is that with dopamine you can get the answer of whether it's worth continuing down this route or not, get the answer quickly, and then either continue or go on to the next idea. Would you say TF-Agents is like the TensorFlow for RL, like the flagship RL framework for Google going forward, or is it maybe too early to say that? I think it's still too early. One of the big things with RL is that it's very sensitive to the type of problem you have. It still requires quite a lot of work up front, if you have a new problem that's not one of the standard benchmarks, to get everything running smoothly and correctly, everything from figuring out the scale of rewards to hyperparameter optimization. In my experience every problem is different, so internally we have some people using TF-Agents and some people using dopamine, and really the message is: whatever fits your particular problem, go with it. There's no notion of one framework being better than the other. I'm very proud of the TF-Agents team; they've done a fantastic job and I'm super supportive of them, and equally they're super supportive of the work we do. We have had meetings where people come and they want to use dopamine, and when they explain the problem to me, a few times I've told them I think TF-Agents is probably a better fit, so you should check that out. So it's hard to say; maybe there will someday be one reinforcement learning framework to rule them all, but at this point I don't see that happening in the near future. It almost seems like it's going the other way: new frameworks are popping up all the time, and the problem is just how to choose. Yeah, that's always a problem. You have N frameworks, none of them satisfy you, too many frameworks, so now you have N plus one frameworks. We did fear that a bit with dopamine and TF-Agents.
I think, as I say, it's a consequence of all of these problems having their own particularities, so people just want to go with whatever will allow them to solve their problem and iterate on it faster and more seamlessly. What do you find most interesting in the world of RL these days? As I mentioned earlier, I'm really interested in the notion of representations and what this means for reinforcement learning, and actually for learning in general. One thing that bothers me is that, for instance in Atari, which is one of the standard benchmarks, we have these agents that have to relearn how to see essentially every time they play the game. Ideally, what I want to have happen is that you have an agent that learns how to play Pong, and when you send it to Breakout it doesn't have to relearn what a ball and a paddle mean and how balls interact with paddles. I guess in Breakout you can break bricks, but there are still some shared dynamics across a lot of games, and it bothers me a bit that we have to relearn these every time. I think there's some room there for exploration. Aside from that, I'm also quite interested in actually seeing RL used for real problems. Forever we've seen papers that motivate the work with real problems, but you don't see RL actually being used on real problems all that much. There was a workshop at ICML this year, I think, that was looking at reinforcement learning for real problems, so I don't think I'm the only one interested in this. I think the community is starting to really think about this, and about how we can go beyond games and simulations and actually use RL at scale on impactful problems. It seems to me like when you start to consider deploying these policies, you right away have trouble answering questions about how safe it is, how we can be sure, how we understand what it's doing, and all these things, which often take a back seat to just raw performance on Atari. Yeah, and this comes back to the notion that each problem has its own particularities: if you need it to be interpretable, or you need it to be safe, that requires a certain type of algorithm. But if you really focus on one particular problem, then that should drive the type of research you do and whatever framework or software you end up building to really solve that problem. And I hope... I mean, it's a place where I'm lucky that I don't really have to care too much about publication numbers. I know for a lot of academics this is something quite stressful, because it's how you advance in your career. But I think the community is starting to realize that playing the numbers game isn't necessarily going to get us to these real-world problems, because these real-world problems are going to require a lot of time and work, with potentially no publications. So it is something I think about, and I have started doing a little bit of work, with some external collaborations, to try to get RL into the real world. Maybe related to that, what do you think RL is going to be like going forward? If you look forward five or ten years, will it be very recognizable to us, incremental changes, or do you think it will be completely different? Do you think some of these lines of research that we're following are going to be seminal and will sprout whole different dimensions of this problem? What do you see in the future? It's kind of hard to say. This field is changing so much from year to year.
I wouldn't call this a prediction, but I wouldn't be surprised if the lines between what we call RL and other machine learning fields become more blurred; you already see this happening. RL becomes more of a technique or tool that you use along with many other tools to solve a larger problem, so it isn't just that you're working on RL; you're working on problems that include RL as part of the solution. Awesome. Dr. Pablo Samuel Castro, thank you so much for your time today and for your insight. You taught us so much. I've learned a lot from reading your work, and I look forward to reading whatever you do next. Thank you again. Thank you so much for having me. This was really fun. That's our episode for today, folks. Be sure to check talkrl.com for more great episodes.
[ { "end": 12.8, "start": 0, "text": " This is TalkAeral Podcast, all reinforcement learning, all the time." }, { "end": 15.6, "start": 12.8, "text": " Interviews of brilliant folks across the world of RL." }, { "end": 21.44, "start": 15.6, "text": " I'm your host, Rob and Chohon." }, { "end": 26.72, "start": 21.44, "text": " Dr. Pablo Samuel Castro is a staff research software engineer at Google Bryant." }, { "end": 29.8, "start": 26.72, "text": " He's the main author of the dopamine RL framework." }, { "end": 32.04, "start": 29.8, "text": " Pablo, thanks so much for joining us today." }, { "end": 32.84, "start": 32.04, "text": " Thank you for having me." }, { "end": 34.68, "start": 32.84, "text": " It's a pleasure to be here." }, { "end": 37.84, "start": 34.68, "text": " So can you tell us a bit about your job?" }, { "end": 43.4, "start": 37.84, "text": " Like what is the staff research software engineer role at Google Bryant?" }, { "end": 43.72, "start": 43.4, "text": " Right." }, { "end": 49.480000000000004, "start": 43.72, "text": " So in Google Brain, we have two types of roles, the software engineer and the research" }, { "end": 52.16, "start": 49.480000000000004, "text": " scientist." }, { "end": 55.120000000000005, "start": 52.16, "text": " Software engineer proceeds research scientists." }, { "end": 62.36, "start": 55.12, "text": " I think I don't think research scientists was a role until, I don't know, maybe most" }, { "end": 63.92, "start": 62.36, "text": " 10 years ago." }, { "end": 68.72, "start": 63.92, "text": " So I joined Google as a software engineer after finishing my postdoc." }, { "end": 72.96, "start": 68.72, "text": " And initially I was doing more applied machine learning in ads." }, { "end": 76.92, "start": 72.96, "text": " So two years ago, after I had transferred to Montreal, the brain team opened up and I" }, { "end": 79.56, "start": 76.92, "text": " was lucky enough to be able to transfer there." }, { "end": 84.52, "start": 79.56, "text": " So officially a software engineer, although what I do on a day-to-day basis, I'd say," }, { "end": 87.6, "start": 84.52, "text": " is more like what a research scientist role is." }, { "end": 90.84, "start": 87.6, "text": " And I think really that's kind of how everybody sees these roles." }, { "end": 93.72, "start": 90.84, "text": " It's more of a spectrum rather than a binary thing." }, { "end": 99.03999999999999, "start": 93.72, "text": " So you have software engineers that are a bit more heavily on the engineering side." }, { "end": 102.88, "start": 99.03999999999999, "text": " Like for instance, people that build TensorFlow and things like that, they're really working" }, { "end": 105.92, "start": 102.88, "text": " on hard engineering problems." }, { "end": 110.67999999999999, "start": 105.92, "text": " And then you have people maybe a bit like me that are a bit more on the research side." }, { "end": 114.08, "start": 110.67999999999999, "text": " And same on the research scientist side, there's people that are really just focused" }, { "end": 119.32, "start": 114.08, "text": " on the research and others that are a bit more heavy on the engineering side." }, { "end": 121.16, "start": 119.32, "text": " And you have everything in between." }, { "end": 127.16, "start": 121.16, "text": " So it's really the name itself is more a question of expectations like when you're trying" }, { "end": 131.04, "start": 127.16, "text": " to get promoted or figuring out compensation, that type of thing." 
}, { "end": 136.07999999999998, "start": 131.04, "text": " You get evaluated on certain components of what your role represents." }, { "end": 142.96, "start": 136.07999999999998, "text": " So for engineering, you have to write high quality code and demonstrate that you're solving" }, { "end": 147.28, "start": 142.96, "text": " challenging problems with elegant solutions and that type of thing." }, { "end": 149.72, "start": 147.28, "text": " What are you working on these days at work?" }, { "end": 150.92000000000002, "start": 149.72, "text": " So I'm working on a few things." }, { "end": 155.52, "start": 150.92000000000002, "text": " So half of my research is actually in the intersection of creativity and machine learning." }, { "end": 160.60000000000002, "start": 155.52, "text": " So I do a bunch of work with that, have a project to generate lyrics." }, { "end": 167.36, "start": 160.60000000000002, "text": " So basically to help songwriters with lyrics to write more interesting lyrics, do a bunch" }, { "end": 172.92000000000002, "start": 167.36, "text": " of other stuff with music and music generation specifically with their use in live performance." }, { "end": 178.6, "start": 172.92, "text": " So I do some work with that and the other half or probably most of my work more than half" }, { "end": 181.27999999999997, "start": 178.6, "text": " I'd say is in reinforcement learning." }, { "end": 186, "start": 181.27999999999997, "text": " And here I'm more focused on, I guess what you call fundamental reinforcement learning." }, { "end": 191.35999999999999, "start": 186, "text": " So looking at some of the core algorithms and some of the core theories behind these" }, { "end": 193.44, "start": 191.35999999999999, "text": " methods." }, { "end": 197.83999999999997, "start": 193.44, "text": " More specifically what I've been thinking about a lot lately is representations and the" }, { "end": 201.88, "start": 197.83999999999997, "text": " types of representations that are learned by reinforcement learning agents and what it" }, { "end": 206.2, "start": 201.88, "text": " even means to be a representation, what it means to learn a representation and why you'd" }, { "end": 208.4, "start": 206.2, "text": " want to learn a good representation." }, { "end": 213.96, "start": 208.4, "text": " I looked back at your master's thesis and your PhD dissertation quite briefly." }, { "end": 215.72, "start": 213.96, "text": " They're very detailed." }, { "end": 220.6, "start": 215.72, "text": " But I wanted to ask you a couple, about a couple concepts that showed up in there." }, { "end": 226.04, "start": 220.6, "text": " Your master's thesis involved vision exploration and you talked about hyper-MDPs." }, { "end": 230.44, "start": 226.04, "text": " Order, if you can help us understand what is a hyper-MDP." }, { "end": 232.44, "start": 230.44, "text": " Is it related to a belief MDP?" }, { "end": 234.2, "start": 232.44, "text": " Is it a different thing?" }, { "end": 236.6, "start": 234.2, "text": " It's related to a belief MDP." }, { "end": 244.32, "start": 236.6, "text": " But this was, so it's essentially you maintain almost like, I guess it's quite related to" }, { "end": 245.32, "start": 244.32, "text": " a belief MDP." }, { "end": 249.28, "start": 245.32, "text": " I'm not sure I don't want to misspeak and say that they're the same." }, { "end": 252.52, "start": 249.28, "text": " It's quite possible that they're actually the same object." 
}, { "end": 259.08, "start": 252.52, "text": " But essentially the idea is that you don't maintain a single MDP from which you do your" }, { "end": 264.08, "start": 259.08, "text": " planning or your learning, but you maintain a higher level object from which you can" }, { "end": 265.91999999999996, "start": 264.08, "text": " sample MDPs." }, { "end": 267.76, "start": 265.91999999999996, "text": " This is where the Bayesian part comes from." }, { "end": 276.2, "start": 267.76, "text": " You maintain a variance over what you can sample and this you use to do exploration." }, { "end": 283, "start": 276.2, "text": " If you're very confident about certain parts of the MDP, you'll be more prone to choose" }, { "end": 287.36, "start": 283, "text": " greedily in those areas, whereas in other areas where there's a bit more variance," }, { "end": 293.68, "start": 287.36, "text": " then when you sample multiple MDPs, you're going to get different types of systems and" }, { "end": 297.28000000000003, "start": 293.68, "text": " this can induce better exploration." }, { "end": 301.44, "start": 297.28000000000003, "text": " This is what I was looking at in my master's thesis." }, { "end": 309.32, "start": 301.44, "text": " If you had this hyper-MDP that you could sample MDPs from, what form would it take?" }, { "end": 312.76, "start": 309.32, "text": " How would you represent such an object?" }, { "end": 315.52000000000004, "start": 312.76, "text": " Is it a concrete thing or is it a more conceptual thing?" }, { "end": 319.15999999999997, "start": 315.52, "text": " No, no, it's a concrete thing." }, { "end": 323.88, "start": 319.15999999999997, "text": " You maintain what are called information states." }, { "end": 329.03999999999996, "start": 323.88, "text": " The way we were doing it was fairly simple just with counts." }, { "end": 333.08, "start": 329.03999999999996, "text": " For each state, you essentially maintain a set of counts." }, { "end": 338.52, "start": 333.08, "text": " From this, you can derive a method for, I think we were using Thomson sampling." }, { "end": 342.08, "start": 338.52, "text": " This was over 10 years ago, so it's possible." }, { "end": 343.4, "start": 342.08, "text": " I haven't thought about this in a while." }, { "end": 345.2, "start": 343.4, "text": " It's possible my memory is failing me." }, { "end": 348.12, "start": 345.2, "text": " We were using Thomson sampling with these counts." }, { "end": 353.76, "start": 348.12, "text": " For instance, if you have a lot of counts for a particular state, for the next state transitions," }, { "end": 358.48, "start": 353.76, "text": " the variance in the MDPs that you sample from those counts is going to be much lower" }, { "end": 365.24, "start": 358.48, "text": " than for other states where your counts are aren't as high." }, { "end": 372.32, "start": 365.24, "text": " For my master's thesis, we were approaching this as a linear programming problem." }, { "end": 376.15999999999997, "start": 372.32, "text": " If I recall correctly, we were essentially drawing a bunch of samples, then we construct" }, { "end": 383.36, "start": 376.15999999999997, "text": " this rollout tree of the possible MDPs that you can sample from that and you solve this" }, { "end": 388.56, "start": 383.36, "text": " rollout tree using linear programming." 
}, { "end": 394.15999999999997, "start": 388.56, "text": " It's fairly expensive, but this was at a time before deep nets were a thing for reinforcement" }, { "end": 395.15999999999997, "start": 394.15999999999997, "text": " learning." }, { "end": 398.4, "start": 395.15999999999997, "text": " The problems we were tackling were a lot smaller and we were approaching it more from a" }, { "end": 399.84, "start": 398.4, "text": " theoretical angle." }, { "end": 402.67999999999995, "start": 399.84, "text": " Okay, thanks." }, { "end": 409.64, "start": 402.67999999999995, "text": " Then on your PhD dissertation, it was involved by simulation." }, { "end": 411.96, "start": 409.64, "text": " Can you help us understand what that concept is?" }, { "end": 413.71999999999997, "start": 411.96, "text": " What is by simulation?" }, { "end": 414.71999999999997, "start": 413.71999999999997, "text": " Sure." }, { "end": 419.67999999999995, "start": 414.71999999999997, "text": " By simulation is a notion that comes from concurrency theory." }, { "end": 424.35999999999996, "start": 419.67999999999995, "text": " That notion of by simulation came from the concept of simulation." }, { "end": 428.71999999999997, "start": 424.35999999999996, "text": " The idea here was if you have some complex system that you don't really get to see the" }, { "end": 433.12, "start": 428.72, "text": " internals of it, but you want to be able to say things about it, maybe you can construct" }, { "end": 435.36, "start": 433.12, "text": " a simulator of it." }, { "end": 437.24, "start": 435.36, "text": " The way it would work is almost like a game." }, { "end": 442.96000000000004, "start": 437.24, "text": " For whatever transition the real system makes, you can simulate that transition with your" }, { "end": 443.96000000000004, "start": 442.96000000000004, "text": " simulator." }, { "end": 448.64000000000004, "start": 443.96000000000004, "text": " If you can demonstrate that this is the case and by demonstrating, prove it mathematically," }, { "end": 453.28000000000003, "start": 448.64000000000004, "text": " then that means that whatever verification that you'd want to do, for instance, that" }, { "end": 459.47999999999996, "start": 453.28, "text": " you'll never reach some dangerous state or that you'll only reach it with low probability," }, { "end": 465.32, "start": 459.47999999999996, "text": " you can do on your simulator because it exactly simulates your real system." }, { "end": 471.11999999999995, "start": 465.32, "text": " By simulation is a bit stronger in the sense that it tells you that two systems simulate" }, { "end": 475.47999999999996, "start": 471.11999999999995, "text": " each other in essence." }, { "end": 482.2, "start": 475.47999999999996, "text": " Initially in concurrency theory, this was studied in systems that were deterministic." }, { "end": 484, "start": 482.2, "text": " No non-stochastic." }, { "end": 490.59999999999997, "start": 484, "text": " Here you can have say notion of two-way simulation, but when you start adding stochastic transitions," }, { "end": 492.84, "start": 490.59999999999997, "text": " it becomes a little trickier." }, { "end": 500.28, "start": 492.84, "text": " Here's where the notion of by simulation that we use in MDPs started coming about." }, { "end": 505.48, "start": 500.28, "text": " Here rather than looking at this two-way simulation, you're really looking at these two systems" }, { "end": 506.48, "start": 505.48, "text": " concurrently." 
}, { "end": 510.76, "start": 506.48, "text": " Actually, there's a neat proof technique that's called co-induction, which is the dual" }, { "end": 513.96, "start": 510.76, "text": " of induction that we all know about." }, { "end": 519.76, "start": 513.96, "text": " It's through these proof techniques of co-induction that you can prove things about these by simulation" }, { "end": 522.4399999999999, "start": 519.76, "text": " relations." }, { "end": 526.4, "start": 522.4399999999999, "text": " When I say relation, it means that it's an equivalence relation." }, { "end": 533.48, "start": 526.4, "text": " If you're now talking about MDPs, you can question whether two states are by similar and when" }, { "end": 537.68, "start": 533.48, "text": " they're by similar, that means that they're behaviorally indistinguishable." }, { "end": 542, "start": 537.68, "text": " You can think of it as sort of as a state aggregation methodology." }, { "end": 547.0799999999999, "start": 542, "text": " If you were able to compute the by simulation equivalence relations in an MDP, then you" }, { "end": 552.28, "start": 547.0799999999999, "text": " could potentially reduce the size of your MDP by collapsing all states that are within" }, { "end": 554.92, "start": 552.28, "text": " the same equivalence class." }, { "end": 557.76, "start": 554.92, "text": " What does it mean to be by similar?" }, { "end": 560.3199999999999, "start": 557.76, "text": " You say that two states are by similar." }, { "end": 565.8399999999999, "start": 560.3199999999999, "text": " If they, for all actions, they have the same immediate reward." }, { "end": 571.2800000000001, "start": 565.84, "text": " Then also for all actions, they have the same probability of transitioning into equivalence" }, { "end": 575.4, "start": 571.2800000000001, "text": " classes of your by simulation equivalence relation." }, { "end": 581.9200000000001, "start": 575.4, "text": " You can see it has this recursive definition to it or circular definition." }, { "end": 586.9200000000001, "start": 581.9200000000001, "text": " The reason this works is via this notion of co-induction that I was mentioning before." }, { "end": 591.2800000000001, "start": 586.9200000000001, "text": " If you're able to compute this by simulation equivalence relation and we have algorithms" }, { "end": 597.68, "start": 591.28, "text": " for doing this using just standard dynamic programming, then you can collapse the state" }, { "end": 598.68, "start": 597.68, "text": " space of your MDP." }, { "end": 603.92, "start": 598.68, "text": " A lot of my work in my PhD was looking at these equivalence relations and seeing how they" }, { "end": 608.52, "start": 603.92, "text": " relate to, for instance, just grouping states based on optimal value functions." }, { "end": 611.8399999999999, "start": 608.52, "text": " If two states have the same optimal value, maybe we grouped them together and how does that" }, { "end": 614.88, "start": 611.8399999999999, "text": " relate to by simulation equivalence relation?" }, { "end": 619.68, "start": 614.88, "text": " What if you group states that are equivalent under all policies or under a special class" }, { "end": 621.8399999999999, "start": 619.68, "text": " of policies?" }, { "end": 626.28, "start": 621.8399999999999, "text": " I also looked at that in the case for pom-dp's, which gets a bit more interesting because" }, { "end": 628.52, "start": 626.28, "text": " you have partial observability." 
}, { "end": 633.7199999999999, "start": 628.52, "text": " Here there's a very close relation to predictive state representations, which is something that" }, { "end": 635.56, "start": 633.7199999999999, "text": " a lot of people thought about." }, { "end": 640.0799999999999, "start": 635.56, "text": " Some people still think about it now, but it's not as prevalent as it used to be." }, { "end": 643.3599999999999, "start": 640.0799999999999, "text": " These are equivalence relations." }, { "end": 644.3599999999999, "start": 643.3599999999999, "text": " There are zero one." }, { "end": 645.3599999999999, "start": 644.3599999999999, "text": " They're binary." }, { "end": 648.3599999999999, "start": 645.3599999999999, "text": " Either two states are by similar or they're not." }, { "end": 653.44, "start": 648.36, "text": " You can consider a generalization of this, which makes it a bit smoother, and this is" }, { "end": 656.8000000000001, "start": 653.44, "text": " what by simulation metrics are." }, { "end": 662.36, "start": 656.8000000000001, "text": " Here it's rather than this zero one relationship that you get with equivalence relations, you" }, { "end": 664.12, "start": 662.36, "text": " have a distance." }, { "end": 669.64, "start": 664.12, "text": " The important property of them is that if two states have a distance of zero, that means" }, { "end": 675.72, "start": 669.64, "text": " that they are equivalent according to the notion of by simulation equivalence relations." }, { "end": 681.44, "start": 675.72, "text": " The closer two states are, the closer they are to being truly by similar." }, { "end": 687.08, "start": 681.44, "text": " What this allows you to do now is you can, for instance, say create epsilon balls." }, { "end": 691.9200000000001, "start": 687.08, "text": " You're going to group together all states that are within epsilon of each other according" }, { "end": 694.6800000000001, "start": 691.9200000000001, "text": " to this by simulation metric." }, { "end": 698.8000000000001, "start": 694.6800000000001, "text": " This is a lot more expensive and more difficult to compute." }, { "end": 704.48, "start": 698.8000000000001, "text": " You replace the equality of rewards is replaced by just simply the absolute difference of" }, { "end": 710.24, "start": 704.48, "text": " their rewards, but the equality of transition probabilities that's using the contour of" }, { "end": 715.48, "start": 710.24, "text": " it, or what's most known as the masterstein one." }, { "end": 719.08, "start": 715.48, "text": " That masterstein is expensive to compute and you have to compute it multiple times for" }, { "end": 722.76, "start": 719.08, "text": " all pairs of states and all actions, so it gets really expensive." }, { "end": 724.52, "start": 722.76, "text": " That's essentially the idea." }, { "end": 726.72, "start": 724.52, "text": " It's this really nice theoretical tool." }, { "end": 732.12, "start": 726.72, "text": " What's really nice about it is that the distance between two states is an upper bound on their" }, { "end": 734.36, "start": 732.12, "text": " difference in optimal value functions." }, { "end": 739.04, "start": 734.36, "text": " That means that if you do group states that are, say, within epsilon of each other, you" }, { "end": 743, "start": 739.04, "text": " know that the approximation error that you're going to get for the optimal value function" }, { "end": 746.84, "start": 743, "text": " is going to be bounded by epsilon." 
}, { "end": 751.64, "start": 746.84, "text": " It sounds like another type of generalization." }, { "end": 757.44, "start": 751.64, "text": " We're generalizing the policy across similar states." }, { "end": 762.8000000000001, "start": 757.44, "text": " If you didn't do that step and just fed all these similar states, I mean, I was looking" }, { "end": 768.76, "start": 762.8, "text": " at a recent paper, deep MDP, that showed two images from asteroids." }, { "end": 774, "start": 768.76, "text": " In one image, the asteroids were blue, and the other one is a different color." }, { "end": 778.1999999999999, "start": 774, "text": " The states were different, but it didn't really matter in terms of actions and rewards" }, { "end": 779.8, "start": 778.1999999999999, "text": " in terms of the NDP." }, { "end": 782.7199999999999, "start": 779.8, "text": " It said those were by similar." }, { "end": 786.28, "start": 782.7199999999999, "text": " We would hope that our RL algorithm would figure that out." }, { "end": 793.8399999999999, "start": 786.28, "text": " Is what you're saying that you could use a different state representation before your," }, { "end": 799.4399999999999, "start": 793.8399999999999, "text": " let's say, your model for your RL even begins so that it knows that those two are the same?" }, { "end": 800.4399999999999, "start": 799.4399999999999, "text": " Right." }, { "end": 801.4399999999999, "start": 800.4399999999999, "text": " That's the hope." }, { "end": 808, "start": 801.4399999999999, "text": " If you had an Oracle that was able to give you this bicemulation metric for the MDP" }, { "end": 813.9599999999999, "start": 808, "text": " that you're trying to learn about an optimal policy over, if you had this Oracle, then you" }, { "end": 818.64, "start": 813.96, "text": " could presumably construct an embedding or a representation of your states such that," }, { "end": 823.76, "start": 818.64, "text": " for instance, the Euclidean distance or some type of distance in this manifold is exactly" }, { "end": 825.6, "start": 823.76, "text": " the bicemulation metric." }, { "end": 832.4000000000001, "start": 825.6, "text": " If you have that, then essentially you're collapsing together states where, for instance, in the" }, { "end": 837.6800000000001, "start": 832.4000000000001, "text": " example that you're mentioning, pixel differences that really don't have any play any role in terms" }, { "end": 841.64, "start": 837.6800000000001, "text": " of the dynamics, they get collapsed together." }, { "end": 846.3199999999999, "start": 841.64, "text": " And this deep MDP paper, that's kind of what they were arguing." }, { "end": 851.76, "start": 846.3199999999999, "text": " So they're the result that they have there that relates it to bicemulation metrics is" }, { "end": 858, "start": 851.76, "text": " saying that if, according to their notions of lipsticks continuity, if two states, two" }, { "end": 863.48, "start": 858, "text": " ground states get mapped to the same latent state, that only happens when those two ground" }, { "end": 867.96, "start": 863.48, "text": " states are exactly by similar or have bicemulation distance of zero." 
}, { "end": 871.4399999999999, "start": 867.96, "text": " So that means that you're not, you don't want to collapse two states that are behaviorally" }, { "end": 876.72, "start": 871.44, "text": " distinguishable because then if you're collapsing them and you're making policy choices for" }, { "end": 881.48, "start": 876.72, "text": " this collapse state, then you might be making suboptimal choices for one of them." }, { "end": 887.6, "start": 881.48, "text": " Whereas if they are bicemular, then you can be sure that choosing the same action for" }, { "end": 892.24, "start": 887.6, "text": " both of them is okay because they are bicemular." }, { "end": 897.8000000000001, "start": 892.24, "text": " So the problem here is that I started with saying say you have this oracle and obviously" }, { "end": 899.5600000000001, "start": 897.8000000000001, "text": " you don't have this oracle." }, { "end": 903.64, "start": 899.56, "text": " So that's actually one of the things I'm working on quite a lot as well." }, { "end": 908.8399999999999, "start": 903.64, "text": " And it's somewhat related to this notion of representation that I was mentioning is how" }, { "end": 915.16, "start": 908.8399999999999, "text": " is there a way that you can take this nice theoretical object that's the bicemulation" }, { "end": 922, "start": 915.16, "text": " metric and incorporate it into the learning process such that it helps you with building" }, { "end": 925.04, "start": 922, "text": " better representations that are able to generalize more?" }, { "end": 926.3599999999999, "start": 925.04, "text": " Thanks for explaining that to us." }, { "end": 927.3599999999999, "start": 926.3599999999999, "text": " I don't really know." }, { "end": 929.16, "start": 927.3599999999999, "text": " I hope that made sense." }, { "end": 933.16, "start": 929.16, "text": " Well, I followed some of it and what I do, I'm going to listen to this again and I'm" }, { "end": 936.6, "start": 933.16, "text": " going to go back and look at these papers because this is really interesting." }, { "end": 941.28, "start": 936.6, "text": " So I wanted to ask you about a paper you co-authored called a comparative analysis of expected" }, { "end": 947.8399999999999, "start": 941.28, "text": " and distributional reinforcement learning that was with Claire Lyle and Mark Belmer." }, { "end": 952.4399999999999, "start": 947.8399999999999, "text": " So could you help us understand what's the main idea of this paper?" }, { "end": 953.4399999999999, "start": 952.4399999999999, "text": " Right." }, { "end": 959.5600000000001, "start": 953.44, "text": " The institutional reinforcement learning is this new way of thinking about reinforcement" }, { "end": 966.6400000000001, "start": 959.5600000000001, "text": " learning that Mark and Will Dabney and Remy Munis published in 2017, where if you think" }, { "end": 974.48, "start": 966.6400000000001, "text": " about the Belman backup where you have the reward plus the discounted sum of expected" }, { "end": 979.7600000000001, "start": 974.48, "text": " future rewards, that's what we've been using for many, many years." }, { "end": 985.72, "start": 979.76, "text": " You have that expectation, which is the expectation of the expected value of the future trajectories" }, { "end": 988.6, "start": 985.72, "text": " according to your policy." 
}, { "end": 992.2, "start": 988.6, "text": " So what the neat thing that they did is they said, what if we replace that expectation" }, { "end": 994.04, "start": 992.2, "text": " with the distribution?" }, { "end": 998.76, "start": 994.04, "text": " So rather than backing up single values, we're backing up distributions." }, { "end": 1000.92, "start": 998.76, "text": " And so they did this." }, { "end": 1005.16, "start": 1000.92, "text": " They introduced an algorithm they called C-51 in the paper where they're essentially maintaining" }, { "end": 1008.96, "start": 1005.16, "text": " a finite support over the possible values." }, { "end": 1013.52, "start": 1008.96, "text": " And so they're essentially adjusting the distribution with each backup." }, { "end": 1019.52, "start": 1013.52, "text": " And they were able to show that doing this gave some really significant advantages in Atari" }, { "end": 1024.08, "start": 1019.52, "text": " games, which is where they were running their experiments on." }, { "end": 1029.32, "start": 1024.08, "text": " So this was, I mean, as a mathematical notion, it was really interesting and really neat." }, { "end": 1035.3600000000001, "start": 1029.32, "text": " But also empirically, the fact that it works better was, I guess, somewhat surprising," }, { "end": 1039.84, "start": 1035.36, "text": " because it didn't necessarily, even though they had convergence guarantees, they didn't" }, { "end": 1044.6, "start": 1039.84, "text": " have guarantees for necessarily having better performance." }, { "end": 1050.3999999999999, "start": 1044.6, "text": " So the idea for this paper was to try to investigate where is this advantage coming from and" }, { "end": 1052.9599999999998, "start": 1050.3999999999999, "text": " are the situations where we don't have an advantage." }, { "end": 1057, "start": 1052.9599999999998, "text": " And so Claire did a lot of theoretical work starting from ground zero." }, { "end": 1063.3999999999999, "start": 1057, "text": " So let's take the simplest case, the tabular representations of states, and comparing" }, { "end": 1067.8400000000001, "start": 1063.4, "text": " these two ways of doing reinforcement learning." }, { "end": 1073.4, "start": 1067.8400000000001, "text": " And so the way we did it was by, she'd call it a thought experiment." }, { "end": 1079.6000000000001, "start": 1073.4, "text": " So if you were to observe the same, exactly the same trajectories, exactly the same samples" }, { "end": 1081, "start": 1079.6000000000001, "text": " with both types of algorithm." }, { "end": 1088.52, "start": 1081, "text": " So you can imagine running the simulator in parallel, but the two copies are exactly synchronized." }, { "end": 1093.2, "start": 1088.52, "text": " And you perform the backups both the expectation or the traditional backup and this distribution" }, { "end": 1094.88, "start": 1093.2, "text": " backup." }, { "end": 1096.64, "start": 1094.88, "text": " What happens when you get to the end?" }, { "end": 1097.64, "start": 1096.64, "text": " Is there any difference?" }, { "end": 1101, "start": 1097.64, "text": " So it turns out that in the tabular case, there's no difference so that you don't really" }, { "end": 1104.64, "start": 1101, "text": " gain anything from doing distributional RL." 
}, { "end": 1109.8400000000001, "start": 1104.64, "text": " When you go to the linear case, if you're doing representing your distribution as a cumulative" }, { "end": 1117.28, "start": 1109.8400000000001, "text": " distribution function, a CDF rather than probability mass function, then you also have exactly" }, { "end": 1118.28, "start": 1117.28, "text": " the same thing." }, { "end": 1123.3999999999999, "start": 1118.28, "text": " But essentially what you get in the end, the performance of these two is the same." }, { "end": 1128.44, "start": 1123.3999999999999, "text": " If you're not representing it as a CDF, then you don't necessarily get the same thing." }, { "end": 1130.84, "start": 1128.44, "text": " Not that one's better, they're just kind of different." }, { "end": 1135.08, "start": 1130.84, "text": " And she had some experiments in there that basically showed sometimes distributional wins," }, { "end": 1137.76, "start": 1135.08, "text": " sometimes expectational wins." }, { "end": 1139.2, "start": 1137.76, "text": " It's just different." }, { "end": 1143.68, "start": 1139.2, "text": " Now when you go into the non-linear setting, which is what we typically use with deep nets," }, { "end": 1147.16, "start": 1143.68, "text": " then you really start seeing a difference with distributional." }, { "end": 1152.52, "start": 1147.16, "text": " And empirically, we show that this difference really comes with the expressivity of your" }, { "end": 1153.6000000000001, "start": 1152.52, "text": " representations." }, { "end": 1159.16, "start": 1153.6000000000001, "text": " So we were taking linear, we were doing essentially linear function approximators by using a" }, { "end": 1160.96, "start": 1159.16, "text": " four-year basis." }, { "end": 1163.4, "start": 1160.96, "text": " And you can increase the order of this four-year basis." }, { "end": 1168.0800000000002, "start": 1163.4, "text": " And as you increase the order, distributional really started to shine more." }, { "end": 1175.16, "start": 1168.0800000000002, "text": " So the point of the paper was essentially to show that it's almost like a, to be continued" }, { "end": 1181.72, "start": 1175.16, "text": " paper because we demonstrated that it's really distributional combined with deep nets where" }, { "end": 1183.52, "start": 1181.72, "text": " you see this advantage." }, { "end": 1190.48, "start": 1183.52, "text": " So since then, we've still been trying to answer this question with some follow-up work." }, { "end": 1191.48, "start": 1190.48, "text": " Okay." }, { "end": 1199.24, "start": 1191.48, "text": " And then when you described representing the distribution using a CDF or the PMF." }, { "end": 1204.3600000000001, "start": 1199.24, "text": " So which ones are the main distribution, distributional RL algorithms using?" }, { "end": 1207.9599999999998, "start": 1204.36, "text": " Like C51 is using PMF, is it?" }, { "end": 1208.9599999999998, "start": 1207.9599999999998, "text": " Yes." }, { "end": 1209.9599999999998, "start": 1208.9599999999998, "text": " Yeah." }, { "end": 1210.9599999999998, "start": 1209.9599999999998, "text": " And then..." }, { "end": 1216.4399999999998, "start": 1210.9599999999998, "text": " So that was one thing that came out of this paper that maybe we shouldn't be using PMFs" }, { "end": 1218.9199999999998, "start": 1216.4399999999998, "text": " when we do these backups." 
}, { "end": 1228.52, "start": 1218.9199999999998, "text": " So we had this other paper at AIS, that's this year, where we were no longer using the," }, { "end": 1232.76, "start": 1228.52, "text": " so we weren't enforcing that it be a proper distribution." }, { "end": 1238.56, "start": 1232.76, "text": " So we got rid of the self-mex and we were still able to prove that this converges." }, { "end": 1241.6, "start": 1238.56, "text": " It still was a PMF." }, { "end": 1243.92, "start": 1241.6, "text": " So that kind of..." }, { "end": 1249.44, "start": 1243.92, "text": " The results we got were not state of the art in a sense, so we weren't able to win over" }, { "end": 1251.24, "start": 1249.44, "text": " what we had before." }, { "end": 1256.76, "start": 1251.24, "text": " So the AIS, that's where it happened before this AAA paper of the comparative analysis." }, { "end": 1263.44, "start": 1256.76, "text": " So the comparative analysis kind of demonstrated that you do actually need the CDF to be able" }, { "end": 1266.12, "start": 1263.44, "text": " to perform better." }, { "end": 1272.92, "start": 1266.12, "text": " And so something like Quantile Regression seems to work better than C51." }, { "end": 1274.24, "start": 1272.92, "text": " Okay." }, { "end": 1277.56, "start": 1274.24, "text": " And then would IQN fall into that category as well?" }, { "end": 1278.56, "start": 1277.56, "text": " Yes." }, { "end": 1281.8799999999999, "start": 1278.56, "text": " Along with Quantile Regression?" }, { "end": 1282.8799999999999, "start": 1281.8799999999999, "text": " Yes." }, { "end": 1283.8799999999999, "start": 1282.8799999999999, "text": " Okay." }, { "end": 1284.8799999999999, "start": 1283.8799999999999, "text": " And so interesting." }, { "end": 1293.48, "start": 1284.88, "text": " So the idea of looking for this expectation equivalence, is that just to help you understand" }, { "end": 1297.96, "start": 1293.48, "text": " what's happening or do you really want to find that expectation equivalence to know" }, { "end": 1299.6000000000001, "start": 1297.96, "text": " that it's correct?" }, { "end": 1303.64, "start": 1299.6000000000001, "text": " Well, ultimately the way these algorithms are behaving is you're still taking this" }, { "end": 1305.5600000000002, "start": 1303.64, "text": " argmax when choosing the action, right?" }, { "end": 1309.8400000000001, "start": 1305.5600000000002, "text": " So you have whether you're representing your value as a single number or as a distribution," }, { "end": 1312.3600000000001, "start": 1309.8400000000001, "text": " you're going to be taking an argmax." }, { "end": 1316.24, "start": 1312.36, "text": " And that argmax is essentially taking the first moment of your distribution or just taking" }, { "end": 1318.6, "start": 1316.24, "text": " that expectation that you were backing up." }, { "end": 1325.52, "start": 1318.6, "text": " So in terms of analyzing with respect to performance of these agents, you do kind of want to look" }, { "end": 1326.9199999999998, "start": 1325.52, "text": " at this expectation." }, { "end": 1330.28, "start": 1326.9199999999998, "text": " Now obviously there can be other methods that look at other moments of the distribution," }, { "end": 1333.52, "start": 1330.28, "text": " like the variance or the skewness or something like that." }, { "end": 1335.4399999999998, "start": 1333.52, "text": " But we weren't looking at those methods." 
}, { "end": 1337.6399999999999, "start": 1335.4399999999998, "text": " And I think that's an interesting avenue to look at." }, { "end": 1343.2, "start": 1337.64, "text": " But there's no kind of canonical algorithm for that yet at least." }, { "end": 1347.6000000000001, "start": 1343.2, "text": " So we focused on these expectations." }, { "end": 1348.6000000000001, "start": 1347.6000000000001, "text": " Okay." }, { "end": 1356.2, "start": 1348.6000000000001, "text": " And is it still unclear why distributional RL is helpful or is this now more clear?" }, { "end": 1357.6000000000001, "start": 1356.2, "text": " No, I wouldn't say." }, { "end": 1363.1200000000001, "start": 1357.6000000000001, "text": " I mean, maybe it's more clear than before, but it's definitely not a solved problem." }, { "end": 1369.12, "start": 1363.12, "text": " So we've been doing some work where we have some evidence that suggests that it learns" }, { "end": 1371.84, "start": 1369.12, "text": " better representations." }, { "end": 1378.08, "start": 1371.84, "text": " And what it means to have better representations, what better means and what representation" }, { "end": 1381.9599999999998, "start": 1378.08, "text": " means is still we kind of have debates about this." }, { "end": 1385.6, "start": 1381.9599999999998, "text": " Some of us in the team have different ideas for what this means." }, { "end": 1393.3999999999999, "start": 1385.6, "text": " But in general, it does seem like on average they have quote unquote better, quote unquote" }, { "end": 1394.48, "start": 1393.3999999999999, "text": " representations." }, { "end": 1400.8, "start": 1394.48, "text": " And this might actually come from the fact that you could think of distributional RL" }, { "end": 1406.32, "start": 1400.8, "text": " as almost like having auxiliary tasks, which if you think of papers like Unreal, they've" }, { "end": 1409.56, "start": 1406.32, "text": " been shown to really help with the learning process." }, { "end": 1417.32, "start": 1409.56, "text": " Maybe they're serving as some type of regularization for your representations." }, { "end": 1421.3999999999999, "start": 1417.32, "text": " That's still kind of open for discussion." }, { "end": 1426.6799999999998, "start": 1421.3999999999999, "text": " And these distributional RL algorithms, they're performing really well, right?" }, { "end": 1429.2, "start": 1426.6799999999998, "text": " Like they perform better than without them." }, { "end": 1433.3999999999999, "start": 1429.2, "text": " In general, that's why they're in rainbow and I think IQN is still state of the art on" }, { "end": 1434.3999999999999, "start": 1433.3999999999999, "text": " Summ Atari." }, { "end": 1435.6, "start": 1434.3999999999999, "text": " Yes, yeah." }, { "end": 1442.6799999999998, "start": 1435.6, "text": " So that's an important thing, they don't win all the time, but on average, in the majority" }, { "end": 1447.8799999999999, "start": 1442.6799999999998, "text": " of games, they do seem to give quite a big advantage." }, { "end": 1455.08, "start": 1447.8799999999999, "text": " I think the more people are starting to look into them, one of the difficulties is that" }, { "end": 1462.1599999999999, "start": 1455.08, "text": " they aren't as simple to implement as some of the previous algorithms that people use." 
}, { "end": 1466.3600000000001, "start": 1462.16, "text": " But I think more and more people are starting to really look into these and use them because" }, { "end": 1473.1200000000001, "start": 1466.3600000000001, "text": " they do tend to perform better than the next Spectational RL." }, { "end": 1480.92, "start": 1473.1200000000001, "text": " And then aside from just their raw performance, I think they open up different types of policies," }, { "end": 1481.92, "start": 1480.92, "text": " right?" }, { "end": 1484.72, "start": 1481.92, "text": " Like, could you not have, if you once you have the full distribution, you could say, okay," }, { "end": 1489.64, "start": 1484.72, "text": " I'm not going to take any risky moves that risk getting these low rewards." }, { "end": 1495.4, "start": 1489.64, "text": " So you might have a different policy to choose across your distribution of value functions." }, { "end": 1496.88, "start": 1495.4, "text": " Is that true?" }, { "end": 1498.1200000000001, "start": 1496.88, "text": " Absolutely, yeah, yeah." }, { "end": 1502.6000000000001, "start": 1498.1200000000001, "text": " So I think it does open up a lot more flexibility in terms of what you can do and what you can" }, { "end": 1505.92, "start": 1502.6000000000001, "text": " say about the behavior of your algorithms." }, { "end": 1509.92, "start": 1505.92, "text": " So as I was saying, most of the time, even though we're maintaining a distribution, we take" }, { "end": 1513.5600000000002, "start": 1509.92, "text": " the first moment when we're choosing our action, which is just the mean." }, { "end": 1519.2, "start": 1513.5600000000002, "text": " But if you could, as you're suggesting, take higher moments and make that and form the" }, { "end": 1526.52, "start": 1519.2, "text": " action choice, maybe for exploration or maybe for safe RL, it definitely opens up the door" }, { "end": 1528.04, "start": 1526.52, "text": " for more possibilities." }, { "end": 1536.64, "start": 1528.04, "text": " Okay, but in terms of exploration, so I'm trying to understand how these could help with" }, { "end": 1541.96, "start": 1536.64, "text": " exploration because I think they're not, if I understand correctly, they're not capturing" }, { "end": 1547.8400000000001, "start": 1541.96, "text": " the uncertainty in your transitions or what you don't know about the environment." }, { "end": 1552.8, "start": 1547.84, "text": " I'm trying to imagine what these distributions look like when you start training as opposed" }, { "end": 1554.1999999999998, "start": 1552.8, "text": " to when you're done." }, { "end": 1562.1999999999998, "start": 1554.1999999999998, "text": " And are they informing you in the early stages of training about where exploration is needed?" }, { "end": 1564.6799999999998, "start": 1562.1999999999998, "text": " They're not, right?" }, { "end": 1566.76, "start": 1564.6799999999998, "text": " Not really, not really." }, { "end": 1575.52, "start": 1566.76, "text": " But they do inform, I guess it would be a bit more applicable towards safe RL or safe" }, { "end": 1577.52, "start": 1575.52, "text": " exploration, if you will." }, { "end": 1582, "start": 1577.52, "text": " So we've generated a bunch of videos where you, for instance, in space invaders, when" }, { "end": 1587.84, "start": 1582, "text": " you're close to dying, the distribution really shifts towards zero because it's essentially" }, { "end": 1591.92, "start": 1587.84, "text": " saying there's not much hope in what you can do." 
}, { "end": 1594.76, "start": 1591.92, "text": " I don't know, maybe that's a point where you want to explore as much as you can because" }, { "end": 1601.56, "start": 1594.76, "text": " you might be able to find one in the scape hatch or something to escape that situation where" }, { "end": 1604.84, "start": 1601.56, "text": " it seems like all hope is lost." }, { "end": 1610.76, "start": 1604.84, "text": " Yes, I mean, I don't think there's at least not that I know of an existing algorithm" }, { "end": 1614.48, "start": 1610.76, "text": " using these four exploration directly." }, { "end": 1624.1599999999999, "start": 1614.48, "text": " But I do feel that there is something there that could potentially aid in these algorithms." }, { "end": 1628.6799999999998, "start": 1624.1599999999999, "text": " I wish I could have gone back and told my teenage self that we would be really seriously discussing" }, { "end": 1635.16, "start": 1628.68, "text": " space invaders at this point in 2019, and this is a serious work and serious business." }, { "end": 1641.44, "start": 1635.16, "text": " So I was looking at another paper that you co-authored named a geometric perspective on optimal" }, { "end": 1645.2, "start": 1641.44, "text": " representations for reinforcement learning." }, { "end": 1650, "start": 1645.2, "text": " Can you help us understand what was the general idea with that paper?" }, { "end": 1655.92, "start": 1650, "text": " So this paper, and I just to be clear, this is mostly Mark's work." }, { "end": 1659.96, "start": 1655.92, "text": " So I was assisting in this paper." }, { "end": 1665.96, "start": 1659.96, "text": " So he, like most of the ideas and everything came from him, and I think he had a lot of" }, { "end": 1670.04, "start": 1665.96, "text": " really fruitful discussions with Dale Schroement about this." }, { "end": 1674.68, "start": 1670.04, "text": " But the idea is he was, again, it came from this notion of trying to understand where the" }, { "end": 1679.3200000000002, "start": 1674.68, "text": " advantages of distributional reinforcement learning are coming from." }, { "end": 1686.52, "start": 1679.32, "text": " And wondering whether it's really coming from the auxiliary task interpretation, and in" }, { "end": 1691.32, "start": 1686.52, "text": " general, how do auxiliary tasks help with this work?" }, { "end": 1695.52, "start": 1691.32, "text": " So out of this idea and out of this work actually came two papers." }, { "end": 1700.6, "start": 1695.52, "text": " So the other one which I'm not on is the value function polytope paper that I see about" }, { "end": 1701.6, "start": 1700.6, "text": " this year." }, { "end": 1704.8, "start": 1701.6, "text": " So Robert Dadaschi was the first author of that one." }, { "end": 1712.8799999999999, "start": 1704.8, "text": " They essentially showed that you can view these value functions as a convex polytope." }, { "end": 1719.72, "start": 1712.8799999999999, "text": " And so that theoretical work was used in the geometric perspective paper where you could" }, { "end": 1726.24, "start": 1719.72, "text": " essentially show that that extreme vertices on this polytope correspond to deterministic" }, { "end": 1727.6, "start": 1726.24, "text": " policies." 
}, { "end": 1733.28, "start": 1727.6, "text": " And so one way you can think of representations, if you think of a good representation as" }, { "end": 1738.36, "start": 1733.28, "text": " something that can represent many value functions, then you want to find a representation such" }, { "end": 1744.36, "start": 1738.36, "text": " that the closest it can get to any possible value function is minimized." }, { "end": 1749.08, "start": 1744.36, "text": " So if there exists an optimal representation, which I guess would be if you have ground" }, { "end": 1755, "start": 1749.08, "text": " states, would be exactly the ground states, then you would have an approximation of zero" }, { "end": 1759.2, "start": 1755, "text": " or an error of zero when you're trying to approximate any value function." }, { "end": 1763.32, "start": 1759.2, "text": " Because the representation is expressive enough to be able to represent all of those value" }, { "end": 1764.8, "start": 1763.32, "text": " functions." }, { "end": 1771.0800000000002, "start": 1764.8, "text": " So the idea of the geometric perspective paper is trying to learn these representations" }, { "end": 1779.24, "start": 1771.0800000000002, "text": " by minimizing this approximation error with multiple value functions, not just the optimal" }, { "end": 1780.24, "start": 1779.24, "text": " value function." }, { "end": 1783, "start": 1780.24, "text": " So it's not just trying to get to the optimal value function." }, { "end": 1788, "start": 1783, "text": " And as long as you can represent that, then it doesn't really matter how well you can" }, { "end": 1790.08, "start": 1788, "text": " express other value functions." }, { "end": 1798.56, "start": 1790.08, "text": " And so by viewing it this way, he was able to demonstrate that you can rephrase the" }, { "end": 1804.2, "start": 1798.56, "text": " learning problem as a constraint programming problem, where you essentially are trying" }, { "end": 1810.4, "start": 1804.2, "text": " to find this representation such that giving a large set of value functions, it minimizes" }, { "end": 1815.48, "start": 1810.4, "text": " the approximation error concurrently for all of these value functions." }, { "end": 1821.48, "start": 1815.48, "text": " And so there's some of my favorite parts of this paper are visualizations where you" }, { "end": 1824.4, "start": 1821.48, "text": " have this forum grid world." }, { "end": 1830.48, "start": 1824.4, "text": " And if you use this technique, you're able to essentially visualize the activations of" }, { "end": 1832.76, "start": 1830.48, "text": " all the cells give to the representation." }, { "end": 1839.88, "start": 1832.76, "text": " And you have a much more comprehensive, I'd say, set of representations that they're" }, { "end": 1844.04, "start": 1839.88, "text": " able to cover the state space a lot more smoothly than what you would do if you were to" }, { "end": 1849.8799999999999, "start": 1844.04, "text": " do either just regular RL without this technique." }, { "end": 1857.2, "start": 1849.8799999999999, "text": " And so this is related to this notion of learning some type of basis, which you use as a representation" }, { "end": 1859.96, "start": 1857.2, "text": " for expressing any type of value function." }, { "end": 1862.2, "start": 1859.96, "text": " Yeah, I love the diagrams too." }, { "end": 1865.24, "start": 1862.2, "text": " They really helped me visualize what was going on." 
}, { "end": 1869.12, "start": 1865.24, "text": " So they were showing these adversarial value functions." }, { "end": 1873.9599999999998, "start": 1869.12, "text": " And I'm trying to really understand what these AVFs are about." }, { "end": 1881.6399999999999, "start": 1873.9599999999998, "text": " Are they either synthetic value functions that are trying to maximize some synthetic rewards" }, { "end": 1884.9199999999998, "start": 1881.6399999999999, "text": " that are not the one we actually care about?" }, { "end": 1885.9199999999998, "start": 1884.9199999999998, "text": " Yes." }, { "end": 1896.08, "start": 1885.9199999999998, "text": " So the way they work is that you can essentially think of a vector of length of equal to" }, { "end": 1897.84, "start": 1896.08, "text": " the number of states." }, { "end": 1904.6, "start": 1897.84, "text": " And so if you have this vector, if it's all strictly positive, and you multiply this" }, { "end": 1910.3999999999999, "start": 1904.6, "text": " vector by your value function when you're approximated value function where you're" }, { "end": 1914.84, "start": 1910.3999999999999, "text": " doing learning, then this will still converge to the optimal policy." }, { "end": 1916.48, "start": 1914.84, "text": " So you still recover your policy." }, { "end": 1923.76, "start": 1916.48, "text": " However, if some of these elements in this delta vector are negative, then it's essentially," }, { "end": 1927.3999999999999, "start": 1923.76, "text": " you can think of it as a state that you want to avoid in a sense." }, { "end": 1929.76, "start": 1927.4, "text": " So you want to try to avoid going into those states." }, { "end": 1934.0400000000002, "start": 1929.76, "text": " And this changes your value function and induces a different policy." }, { "end": 1940.92, "start": 1934.0400000000002, "text": " So in this way, by sampling these delta vectors where each element takes on one or negative" }, { "end": 1945.2800000000002, "start": 1940.92, "text": " one with equal probability, you're going to end up with some states where you're trying" }, { "end": 1947.6000000000001, "start": 1945.2800000000002, "text": " to maximize the value function." }, { "end": 1950.72, "start": 1947.6000000000001, "text": " And in other states where you're trying to actually minimize the value function." }, { "end": 1956.2, "start": 1950.72, "text": " And from this, you end up with a different type of policy than the optimal policy, which" }, { "end": 1961.32, "start": 1956.2, "text": " is what you would end up with if this vector were all positive." }, { "end": 1966.48, "start": 1961.32, "text": " Right, so the delta functions by doing the sampling, where you're sampling between negative" }, { "end": 1973.32, "start": 1966.48, "text": " one and one, you essentially end up with a diverse set of policies." }, { "end": 1980.8400000000001, "start": 1973.32, "text": " So essentially, he phrases the learning problem as sampling a bunch of these delta vectors." }, { "end": 1988.6399999999999, "start": 1980.84, "text": " And then we use policy gradient to find the optimal policy for the induced value function" }, { "end": 1991.32, "start": 1988.6399999999999, "text": " when you multiply with these delta vectors." }, { "end": 1996.76, "start": 1991.32, "text": " And then you try to find a representation that will minimize the approximation error for" }, { "end": 1999.8799999999999, "start": 1996.76, "text": " the set of value functions." 
}, { "end": 2005.48, "start": 1999.8799999999999, "text": " So and we're learning these AVFs at the same time as our main value function is that right" }, { "end": 2008, "start": 2005.48, "text": " or is it in advance?" }, { "end": 2011.76, "start": 2008, "text": " That's in advance, so this is to learn the representation." }, { "end": 2017.32, "start": 2011.76, "text": " So it's like if we learn all these other synthetic value functions, then when we come to" }, { "end": 2022.24, "start": 2017.32, "text": " learn our, the one we care about, then it becomes an easy linear task." }, { "end": 2025.32, "start": 2022.24, "text": " That's the whole, yeah." }, { "end": 2031.88, "start": 2025.32, "text": " And depending what you want to do with these representations, you can either find the" }, { "end": 2037.88, "start": 2031.88, "text": " optimal policy or if you want something that's interpretable, like the figure three and" }, { "end": 2044.0400000000002, "start": 2037.88, "text": " the paper is what it's trying to demonstrate is that by using this technique, you get this" }, { "end": 2048.6, "start": 2044.0400000000002, "text": " basis essentially, this representation that is a lot richer than what you would normally" }, { "end": 2056.08, "start": 2048.6, "text": " get with either just sampling random policies or by just trying to compute the optimal value" }, { "end": 2057.08, "start": 2056.08, "text": " function." }, { "end": 2060.28, "start": 2057.08, "text": " So I was curious about these delta functions." }, { "end": 2063.36, "start": 2060.28, "text": " I think in the paper, the delta function was just random." }, { "end": 2064.6800000000003, "start": 2063.36, "text": " Is that right?" }, { "end": 2065.6800000000003, "start": 2064.6800000000003, "text": " Yes." }, { "end": 2071.96, "start": 2065.68, "text": " So I'm just imagining if the state space got very large or larger and the delta function" }, { "end": 2076.64, "start": 2071.96, "text": " state as a random plus one minus one sample." }, { "end": 2081.7599999999998, "start": 2076.64, "text": " So I'm just imagining that the delta function would become like kind of like no static" }, { "end": 2087.24, "start": 2081.7599999999998, "text": " noise as it, I mean like in a small room, if you had some tiles with minus one and plus" }, { "end": 2090.96, "start": 2087.24, "text": " one and you kind of squint, you kind of see, well, there's a blob of like areas we should" }, { "end": 2093.3199999999997, "start": 2090.96, "text": " avoid over there and a blob of areas." }, { "end": 2098.8, "start": 2093.32, "text": " We should we should hit over there, but when it when the state space becomes larger and" }, { "end": 2102.84, "start": 2098.8, "text": " all the tiles become really small, then it just becomes this, this fuzz." }, { "end": 2108.7200000000003, "start": 2102.84, "text": " I wonder if that if this method is is would scale is independent of the scale of the state" }, { "end": 2109.7200000000003, "start": 2108.7200000000003, "text": " space." }, { "end": 2113.4, "start": 2109.7200000000003, "text": " Well, no, it is still delta in a different way." }, { "end": 2114.4, "start": 2113.4, "text": " It's quite possible." }, { "end": 2115.4, "start": 2114.4, "text": " Yes." 
}, { "end": 2121.1600000000003, "start": 2115.4, "text": " So there is part of the way the idea was phrased is that really you have a distribution" }, { "end": 2127.68, "start": 2121.16, "text": " over value functions and so you could presumably have a distribution that tries to ignore" }, { "end": 2131.44, "start": 2127.68, "text": " uninteresting policies or interesting value functions." }, { "end": 2138.2799999999997, "start": 2131.44, "text": " And the delta idea is to try to get these extremal vertices of the polytope that I mentioned" }, { "end": 2140.96, "start": 2138.2799999999997, "text": " that are deterministic policies." }, { "end": 2146.6, "start": 2140.96, "text": " But yes, it doesn't scale super gracefully with with the size of the state space because" }, { "end": 2151.96, "start": 2146.6, "text": " the number of policies you if you were to take all the delta values, it's to the end," }, { "end": 2155.88, "start": 2151.96, "text": " which is still better than than a to the end." }, { "end": 2164.44, "start": 2155.88, "text": " But it's still somewhat restricting when you try to go to large state spaces." }, { "end": 2169.64, "start": 2164.44, "text": " And it's quite possible that that your intuition is right that you end up with with a lot of noise," }, { "end": 2174.7599999999998, "start": 2169.64, "text": " in which case you might want to consider a different way of sampling the policies rather" }, { "end": 2177.5600000000004, "start": 2174.76, "text": " than doing this delta technique." }, { "end": 2182.2400000000002, "start": 2177.5600000000004, "text": " But that's not something we really looked at in the paper." }, { "end": 2188.7200000000003, "start": 2182.2400000000002, "text": " We did run some preliminary experiments on Atari with the same delta idea." }, { "end": 2192.8, "start": 2188.7200000000003, "text": " And we weren't able to quite get it working, which is why I didn't make it into the paper." }, { "end": 2202.0400000000004, "start": 2192.8, "text": " But that suggests that we need an improved way of sampling these policies when you're" }, { "end": 2204.0400000000004, "start": 2202.0400000000004, "text": " going into large state spaces." }, { "end": 2209.16, "start": 2204.04, "text": " Okay, so now I want to move to dopamine, which is the original reason that I reached out" }, { "end": 2211.7599999999998, "start": 2209.16, "text": " to you and how I first heard your name." }, { "end": 2215.44, "start": 2211.7599999999998, "text": " You're the primary author of the dopamine or al framework." }, { "end": 2218.92, "start": 2215.44, "text": " That's at github.com slash Google slash dopamine." }, { "end": 2222.32, "start": 2218.92, "text": " I really like this project." }, { "end": 2229, "start": 2222.32, "text": " And the repos describes the project as dopamine is a research framework for fast prototyping" }, { "end": 2231.56, "start": 2229, "text": " of reinforcement learning algorithms." }, { "end": 2238.44, "start": 2231.56, "text": " I find I understand correctly it supports specific DQN variants, including parts of rainbow" }, { "end": 2240.32, "start": 2238.44, "text": " and IQN." }, { "end": 2246.24, "start": 2240.32, "text": " So I wanted to ask you, Pablo, how did you end up writing this framework?" }, { "end": 2250.2799999999997, "start": 2246.24, "text": " So it was really Mark's idea." 
}, { "end": 2257.64, "start": 2250.2799999999997, "text": " So when I joined two years ago, when I joined brain, I had assumed I had said goodbye to" }, { "end": 2261.52, "start": 2257.64, "text": " academia forever when I finished my post art, because this was at a time when I was" }, { "end": 2267.12, "start": 2261.52, "text": " in, it was very difficult to get a job in, in the type of work I was doing in machine" }, { "end": 2270.6, "start": 2267.12, "text": " learning and more theoretical machine learning." }, { "end": 2275.6, "start": 2270.6, "text": " So I thought I had said goodbye to academia, but then I was lucky enough to rejoin." }, { "end": 2278.28, "start": 2275.6, "text": " And when I rejoin, Mark also joined the team." }, { "end": 2281.96, "start": 2278.28, "text": " He switched from deep mind in London to, to bring in Montreal." }, { "end": 2286.96, "start": 2281.96, "text": " And we knew each other from our masters, because we did our master's degree together at McGill." }, { "end": 2290.28, "start": 2286.96, "text": " And so we, he had obviously been doing a lot of research still." }, { "end": 2296.1200000000003, "start": 2290.28, "text": " So we sat down and, to try to do some research together and we said, okay, let's start with" }, { "end": 2297.84, "start": 2296.1200000000003, "text": " some implementation of DQN." }, { "end": 2302.44, "start": 2297.84, "text": " And I went looking around and there were a bunch on GitHub, but it wasn't clear which one" }, { "end": 2304.96, "start": 2302.44, "text": " was reliable." }, { "end": 2308.76, "start": 2304.96, "text": " There was one from DeepMind, but I think it was written in Lua, which we didn't want" }, { "end": 2310.32, "start": 2308.76, "text": " to use." }, { "end": 2315.2400000000002, "start": 2310.32, "text": " We found some internal implementations, but they were a bit more complicated than we" }, { "end": 2319.1200000000003, "start": 2315.2400000000002, "text": " wanted, because they were aimed for something different." }, { "end": 2321.12, "start": 2319.12, "text": " And then finally, we just said, why don't we build our own?" }, { "end": 2323.44, "start": 2321.12, "text": " I mean, we're going to be iterating on this a lot." }, { "end": 2329.4, "start": 2323.44, "text": " If we get to know the code base really well, then it'll be much better for us." }, { "end": 2334, "start": 2329.4, "text": " And if we do something that's as simple as it can be, it will likely help other people" }, { "end": 2338.3199999999997, "start": 2334, "text": " as well that are doing similar type of research that we're doing." }, { "end": 2340.2799999999997, "start": 2338.3199999999997, "text": " So we set out, set out to do this." }, { "end": 2342.44, "start": 2340.2799999999997, "text": " And so it was a fairly long design process." }, { "end": 2346.24, "start": 2342.44, "text": " I mean, we had a bunch of meetings at the beginning where we were trying to scope out what" }, { "end": 2347.64, "start": 2346.24, "text": " we wanted to do." }, { "end": 2351.16, "start": 2347.64, "text": " Like do we want to be comprehensive and have every single algorithm out there and every" }, { "end": 2356.2799999999997, "start": 2351.16, "text": " single environment, or do we want to restrict ourselves, at least initially, to Atari and" }, { "end": 2357.7599999999998, "start": 2356.2799999999997, "text": " DQ and Variance?" 
}, { "end": 2363.64, "start": 2357.7599999999998, "text": " And we ultimately decided to do that because we decided to just base our decisions on" }, { "end": 2365.48, "start": 2363.64, "text": " the research we wanted to do." }, { "end": 2370.3199999999997, "start": 2365.48, "text": " So it was really, let's build something that is useful for us." }, { "end": 2374.16, "start": 2370.3199999999997, "text": " And under the assumption that we're not the only ones doing this research, so it will" }, { "end": 2376.3199999999997, "start": 2374.16, "text": " be useful for other people as well." }, { "end": 2378.44, "start": 2376.32, "text": " And then we'll see how it goes." }, { "end": 2384.48, "start": 2378.44, "text": " So after making that decision, it kind of became clear for us when we had to make calls" }, { "end": 2390.96, "start": 2384.48, "text": " on what algorithms to include and other more technical design decisions." }, { "end": 2394.7200000000003, "start": 2390.96, "text": " So you just released dopamine 2.0 very recently?" }, { "end": 2396.52, "start": 2394.7200000000003, "text": " What can you tell us about that release?" }, { "end": 2397.52, "start": 2396.52, "text": " Right." }, { "end": 2402, "start": 2397.52, "text": " So initially, we wanted to do just Atari because most of our research was in Atari and" }, { "end": 2407.64, "start": 2402, "text": " we just wanted to keep it simple and not get bogged down with trying to support other" }, { "end": 2408.64, "start": 2407.64, "text": " environments." }, { "end": 2409.64, "start": 2408.64, "text": " And that was great." }, { "end": 2410.64, "start": 2409.64, "text": " That worked great." }, { "end": 2416.04, "start": 2410.64, "text": " I mean, the comparative analysis and the AIS stats paper that I mentioned, they were all" }, { "end": 2418.92, "start": 2416.04, "text": " run on early versions of dopamine." }, { "end": 2423.2, "start": 2418.92, "text": " So when we put it out, it got a really good response." }, { "end": 2425.44, "start": 2423.2, "text": " We also wanted to make sure that people would find it useful." }, { "end": 2430.44, "start": 2425.44, "text": " We didn't want to put all this work into it and then find that nobody ended up using it." }, { "end": 2433.56, "start": 2430.44, "text": " If we weren't going to use it, we weren't going to put it in." }, { "end": 2437.84, "start": 2433.56, "text": " But seeing how many people started using it and we started getting requests for supporting" }, { "end": 2444.32, "start": 2437.84, "text": " gym environments, generally open-ended gym environments, we decided that it was probably" }, { "end": 2446.4, "start": 2444.32, "text": " the most natural next step." }, { "end": 2452, "start": 2446.4, "text": " And so dopamine 2.0 was meant to do that, to go beyond Atari and support discrete domain" }, { "end": 2454.8, "start": 2452, "text": " environments from gym." }, { "end": 2459.44, "start": 2454.8, "text": " And so the idea with this was also to add an interface where it doesn't only support" }, { "end": 2460.44, "start": 2459.44, "text": " gym." }, { "end": 2465.04, "start": 2460.44, "text": " And there's a nice wrapper where you can just pass in the environment name and it works." }, { "end": 2469.56, "start": 2465.04, "text": " But it also allows you to, if you have your own environment, to pretty easily just plug" }, { "end": 2471.68, "start": 2469.56, "text": " it into this API." }, { "end": 2472.68, "start": 2471.68, "text": " Awesome." 
}, { "end": 2473.68, "start": 2472.68, "text": " Okay." }, { "end": 2479.12, "start": 2473.68, "text": " So I used this framework a little bit last October and I found, like you said, it was designed" }, { "end": 2480.52, "start": 2479.12, "text": " for Atari." }, { "end": 2485.12, "start": 2480.52, "text": " And so I just did, I made a simple fork to allow it to take arbitrary inputs." }, { "end": 2488.44, "start": 2485.12, "text": " It sounds like that's not going to be needed anymore because it supports that out of the" }, { "end": 2489.44, "start": 2488.44, "text": " box, right?" }, { "end": 2493.7200000000003, "start": 2489.44, "text": " Yeah, if it's a gym environment, it should just work out of the box." }, { "end": 2499.36, "start": 2493.7200000000003, "text": " There's, you might need a little bit of code for specifying observation shapes and things" }, { "end": 2501.88, "start": 2499.36, "text": " like that, but it should be pretty easy." }, { "end": 2505.08, "start": 2501.88, "text": " You shouldn't have to reinvent the wheel every time." }, { "end": 2508.92, "start": 2505.08, "text": " And if it's a non-gym environment, it still shouldn't be too bad." }, { "end": 2513.48, "start": 2508.92, "text": " You just have to set up the right hooks, but it should be pretty clear how to do that." }, { "end": 2517.44, "start": 2513.48, "text": " So what is your vision for dopamine going forward?" }, { "end": 2524.38, "start": 2517.44, "text": " Like do you point out to grow it or change it anyway or maintain it as it is and morph" }, { "end": 2525.38, "start": 2524.38, "text": " xbox?" }, { "end": 2526.38, "start": 2525.38, "text": " What's the vision?" }, { "end": 2527.38, "start": 2526.38, "text": " Yeah." }, { "end": 2532.76, "start": 2527.38, "text": " So one of the things we wanted to do with dopamine is not, so we're primarily researchers" }, { "end": 2533.76, "start": 2532.76, "text": " all of us." }, { "end": 2538.2000000000003, "start": 2533.76, "text": " So we didn't want to be in a state where we're just maintaining dopamine all the time." }, { "end": 2543.44, "start": 2538.2000000000003, "text": " We wanted to be useful for a bunch of people externally, but not to the point where it" }, { "end": 2549.68, "start": 2543.44, "text": " requires us to be constantly fixing bugs and maintaining it, which is part of the reason" }, { "end": 2554.36, "start": 2549.68, "text": " why we wanted to keep it as nimble as possible." }, { "end": 2559.16, "start": 2554.36, "text": " But going forward, I mean, some of the things we've talked about is just making sure that" }, { "end": 2564.8, "start": 2559.16, "text": " we always have a state of the art implementation whenever algorithm is state of the art at" }, { "end": 2565.8, "start": 2564.8, "text": " the moment." }, { "end": 2567.76, "start": 2565.8, "text": " So right now, I still think it's for value-based agents." }, { "end": 2570.76, "start": 2567.76, "text": " I still think it's rainbow or IQN." }, { "end": 2576.88, "start": 2570.76, "text": " I don't think there's any other that's clearly the new learner." }, { "end": 2581.2000000000003, "start": 2576.88, "text": " And we obviously keep the IQN because that's the one that were where these all came from." }, { "end": 2586.6000000000004, "start": 2581.2000000000003, "text": " One thing I've been thinking about a lot is whether we can start supporting continuous" }, { "end": 2590.28, "start": 2586.6000000000004, "text": " control or policy gradient type methods." 
}, { "end": 2597.1200000000003, "start": 2590.28, "text": " That opens a whole new set of complexities, but I've been thinking more that maybe I" }, { "end": 2599.88, "start": 2597.1200000000003, "text": " should take that challenge." }, { "end": 2602.2400000000002, "start": 2599.88, "text": " Because it also relates to some of the research I want to do." }, { "end": 2608.44, "start": 2602.2400000000002, "text": " So again, back to the initial idea that we built things when we needed for research." }, { "end": 2612.04, "start": 2608.44, "text": " I love the clarity of this framework." }, { "end": 2616.84, "start": 2612.04, "text": " I love knowing that we could rely on the implementations being correct." }, { "end": 2620.92, "start": 2616.84, "text": " So I want to ask you about the gen configuration." }, { "end": 2622.92, "start": 2620.92, "text": " Is that something?" }, { "end": 2625.56, "start": 2622.92, "text": " Was that a big decision for you to use that?" }, { "end": 2626.56, "start": 2625.56, "text": " Is that a future?" }, { "end": 2631.04, "start": 2626.56, "text": " Or I don't know if it's a future, I really liked it." }, { "end": 2638.08, "start": 2631.04, "text": " So when we started working on this, we were trying to figure out how to specify parameters" }, { "end": 2640.36, "start": 2638.08, "text": " that are needed in multiple places." }, { "end": 2644.84, "start": 2640.36, "text": " So flags is kind of like the v0 of what you do." }, { "end": 2648.52, "start": 2644.84, "text": " So you pass in flags via the command line, but that means that you have to pass these" }, { "end": 2652.04, "start": 2648.52, "text": " values as parameters to all." }, { "end": 2656.7599999999998, "start": 2652.04, "text": " So if you have a call it function that calls some object that creates an agent, that creates" }, { "end": 2659.36, "start": 2656.7599999999998, "text": " a replay buffer, and the parameter you need is a replay buffer." }, { "end": 2663.2799999999997, "start": 2659.36, "text": " You have to add it as a parameter to all of these steps." }, { "end": 2666.24, "start": 2663.2799999999997, "text": " So that to me seemed kind of ugly." }, { "end": 2670.16, "start": 2666.24, "text": " You could also think of maybe creating, I don't know, something like a proto-buff." }, { "end": 2675.68, "start": 2670.16, "text": " So you just create an initial proto-buff that contains all of your parameters." }, { "end": 2679.96, "start": 2675.68, "text": " It wasn't too keen on that either because it seemed you were still passing around a" }, { "end": 2685, "start": 2679.96, "text": " bunch of things from one function to the next that you maybe don't necessarily need." }, { "end": 2690.96, "start": 2685, "text": " And so, Gin was being developed within Google by one of the main contributors to Gin is" }, { "end": 2694.76, "start": 2690.96, "text": " actually the main or one of the main contributors in TF agents." }, { "end": 2699.9, "start": 2694.76, "text": " And because I was doing some work with TF agents at the time as well, they were using" }, { "end": 2700.9, "start": 2699.9, "text": " Gin." }, { "end": 2706.08, "start": 2700.9, "text": " The reason I really like it is because you can specify in a single config file all the parameters" }, { "end": 2708.84, "start": 2706.08, "text": " for all of the objects in your experiment." 
}, { "end": 2712.84, "start": 2708.84, "text": " So going back to the replay buffer example, which is created by the agent, which is created" }, { "end": 2716.4, "start": 2712.84, "text": " by the runner, which is created by the main calling function." }, { "end": 2718.96, "start": 2716.4, "text": " You don't have to pass this parameter throughout all these calls." }, { "end": 2722.6000000000004, "start": 2718.96, "text": " You can just specify directly in this one Gin config file." }, { "end": 2728.28, "start": 2722.6000000000004, "text": " And then you still have the flexibility to change these parameters on the command line." }, { "end": 2733.32, "start": 2728.28, "text": " So it does, I do recognize it does take a little getting used to, even internally, a" }, { "end": 2736.76, "start": 2733.32, "text": " bunch of people do sometimes get confused by it." }, { "end": 2740.0800000000004, "start": 2736.76, "text": " But I think it's worth the initial ramp up." }, { "end": 2746.5600000000004, "start": 2740.0800000000004, "text": " Once you, I mean, I use it for everything now because I just find it super easy to keep" }, { "end": 2752.0800000000004, "start": 2746.5600000000004, "text": " track of experiments I'm running and to try things with new hyper parameters really quickly." }, { "end": 2753.0800000000004, "start": 2752.0800000000004, "text": " Okay." }, { "end": 2754.7200000000003, "start": 2753.0800000000004, "text": " So I'm going to take a second look at Gin." }, { "end": 2759.1600000000003, "start": 2754.7200000000003, "text": " I kind of worked around it, but given your recommendation, I'm going to check that out." }, { "end": 2760.6800000000003, "start": 2759.1600000000003, "text": " And that was a nice segue to TF agents." }, { "end": 2764.5600000000004, "start": 2760.6800000000003, "text": " So you were involved in TF agents." }, { "end": 2769, "start": 2764.56, "text": " Can you help me understand what was your role on the TF agents project?" }, { "end": 2772.48, "start": 2769, "text": " So TF agents started it around the same time that we started building dopamine." }, { "end": 2776.88, "start": 2772.48, "text": " And so initially we had a lot of meanings to try to see if we would work together." }, { "end": 2779.12, "start": 2776.88, "text": " We would just build one thing." }, { "end": 2785.6, "start": 2779.12, "text": " But the scope of the projects became clear at the beginning that the scope was very different." }, { "end": 2790.32, "start": 2785.6, "text": " As I said, we wanted to keep dopamine as nimble as possible and really just very closely" }, { "end": 2793.12, "start": 2790.32, "text": " tied to our research interests." }, { "end": 2797.48, "start": 2793.12, "text": " TF agents was being a bit more ambitious and they wanted to really support many different" }, { "end": 2801.88, "start": 2797.48, "text": " algorithms, many different environments and have it be very modular so you could kind" }, { "end": 2809.3599999999997, "start": 2801.88, "text": " of swap different pieces without having to really break your system." }, { "end": 2814.48, "start": 2809.3599999999997, "text": " So initially we still work closely together because we didn't, we wanted to make sure" }, { "end": 2820.68, "start": 2814.48, "text": " that we were aware of each other and sharing as much as we can." }, { "end": 2825.56, "start": 2820.68, "text": " So I was mostly involved initially in implementing all the distributional RL stuff." 
}, { "end": 2833.52, "start": 2825.56, "text": " So C51 or I don't even remember now if they have C51 out there." }, { "end": 2840.04, "start": 2833.52, "text": " But there was, if it's not there, there was at one point an implementation of C51 that" }, { "end": 2841.04, "start": 2840.04, "text": " I coded up." }, { "end": 2842.2, "start": 2841.04, "text": " So I was helping a lot with that." }, { "end": 2848.08, "start": 2842.2, "text": " And just in general with whatever they were working on at the time, I was initially quite" }, { "end": 2849.3199999999997, "start": 2848.08, "text": " active with it." }, { "end": 2855.6400000000003, "start": 2849.32, "text": " But then at one point it became clear that dopamine and TF agents were really going to be" }, { "end": 2861.92, "start": 2855.6400000000003, "text": " too distinct frameworks." }, { "end": 2863.0800000000004, "start": 2861.92, "text": " But not competitive." }, { "end": 2865, "start": 2863.0800000000004, "text": " So we had many meetings about this." }, { "end": 2870, "start": 2865, "text": " We wanted to make sure that it wasn't just Google putting out to competing frameworks and" }, { "end": 2871.7200000000003, "start": 2870, "text": " leave people kind of confused." }, { "end": 2873.6400000000003, "start": 2871.7200000000003, "text": " I mean, maybe people still are kind of confused." }, { "end": 2878.92, "start": 2873.6400000000003, "text": " But from the beginning it was very clear to all of us that these are complementary frameworks." }, { "end": 2885.16, "start": 2878.92, "text": " So where TF agents, I think, is much better positioned to handle very large experiments" }, { "end": 2887.88, "start": 2885.16, "text": " or more production type of experiments." }, { "end": 2893.84, "start": 2887.88, "text": " Or for running experiments where you're not really being disruptive with the algorithms" }, { "end": 2894.84, "start": 2893.84, "text": " too much." }, { "end": 2899.32, "start": 2894.84, "text": " But more combining different aspects, different types of algorithms or different types of" }, { "end": 2904.6800000000003, "start": 2899.32, "text": " environments, I think TF agents is likely better suited for that." }, { "end": 2909.04, "start": 2904.68, "text": " And as dopamine is really meant for what we call throw away research." }, { "end": 2913.72, "start": 2909.04, "text": " So this is research where you have some crazy idea that's quite disruptive." }, { "end": 2917.8399999999997, "start": 2913.72, "text": " And so getting into the internals of the algorithm we hope with dopamine is fairly easy and" }, { "end": 2920.04, "start": 2917.8399999999997, "text": " you can try these crazy ideas." }, { "end": 2921.8799999999997, "start": 2920.04, "text": " Most of the time these crazy ideas don't work." }, { "end": 2925.8799999999997, "start": 2921.8799999999997, "text": " So that's what we call throw away research because you throw it away." }, { "end": 2932.12, "start": 2925.8799999999997, "text": " But at least the hope was that with dopamine you would be able to get this answer of whether" }, { "end": 2936.08, "start": 2932.12, "text": " it's worth continuing this route or not." }, { "end": 2943, "start": 2936.08, "text": " You could get the answer quickly and then either continue or go on to the next idea." }, { "end": 2946.44, "start": 2943, "text": " Would you say TF agents is like the TensorFlow for RRL." 
}, { "end": 2952.2799999999997, "start": 2946.44, "text": " Like it's like the flagship RRL framework for Google going forward or is it maybe too" }, { "end": 2954.4, "start": 2952.2799999999997, "text": " early to say that?" }, { "end": 2959.88, "start": 2954.4, "text": " I think it's still too early." }, { "end": 2967.12, "start": 2959.88, "text": " So one of the big things with RRL is it's very sensitive to the type of problem that you" }, { "end": 2970.48, "start": 2967.12, "text": " have." }, { "end": 2973.04, "start": 2970.48, "text": " It still requires quite a lot of work upfront." }, { "end": 2979.44, "start": 2973.04, "text": " If you have a new problem that's not one of the standard benchmarks to kind of get everything" }, { "end": 2986.04, "start": 2979.44, "text": " running smoothly and correctly everything from figuring out scale of rewards to hyperparameter" }, { "end": 2988.2000000000003, "start": 2986.04, "text": " optimization." }, { "end": 2994.3999999999996, "start": 2988.2, "text": " And so from my experience like every problem is different." }, { "end": 2997.04, "start": 2994.3999999999996, "text": " And so internally we have some people using TF agents." }, { "end": 2999, "start": 2997.04, "text": " We have some people using dopamine." }, { "end": 3004.8399999999997, "start": 2999, "text": " And it's really like whatever the sort of the message is whatever fits your particular" }, { "end": 3007.04, "start": 3004.8399999999997, "text": " problem go with it." }, { "end": 3011.7599999999998, "start": 3007.04, "text": " There's no notion of when framework is better than the other." }, { "end": 3015.6, "start": 3011.7599999999998, "text": " We're I'm very proud of the team, the TF agents team." }, { "end": 3020.36, "start": 3015.6, "text": " They've done a fantastic job and I'm super supportive of them and equally they're super" }, { "end": 3022.56, "start": 3020.36, "text": " supportive of the work we do." }, { "end": 3027.16, "start": 3022.56, "text": " And so we have had meetings where people come and they want to use dopamine." }, { "end": 3031.6, "start": 3027.16, "text": " And when they explain the problem to me a few times I've told them I think TF agents" }, { "end": 3032.8399999999997, "start": 3031.6, "text": " is probably a better fit." }, { "end": 3035.2, "start": 3032.8399999999997, "text": " So you should you should check that out." }, { "end": 3041.16, "start": 3035.2, "text": " So it's hard to say because I don't know if there maybe there will be someday one reinforcement" }, { "end": 3048.3199999999997, "start": 3041.16, "text": " learning framework to rule them all but at this point I don't see that happening in the" }, { "end": 3049.3199999999997, "start": 3048.3199999999997, "text": " near future." }, { "end": 3053.64, "start": 3049.3199999999997, "text": " It almost seems like it's going the other way like new frameworks are popping up all the" }, { "end": 3056.64, "start": 3053.64, "text": " time and the problem is like just how to choose." }, { "end": 3059.16, "start": 3056.64, "text": " Yeah, yeah that's always a problem." }, { "end": 3063.48, "start": 3059.16, "text": " I mean you have N frameworks none of them satisfy you too many frameworks so now you have" }, { "end": 3067.2, "start": 3063.48, "text": " N plus one frameworks." }, { "end": 3071.4399999999996, "start": 3067.2, "text": " We did fear that a bit with dopamine and TF agents." 
}, { "end": 3078.48, "start": 3071.4399999999996, "text": " I think as I say it's I think it's a consequence of all of these problems being having their" }, { "end": 3084.6, "start": 3078.48, "text": " own particularities and so people just want to go to whatever will allow them to solve" }, { "end": 3089.7999999999997, "start": 3084.6, "text": " their problem and iterate on the problem faster and more seamlessly." }, { "end": 3093.72, "start": 3089.7999999999997, "text": " What do you find most interesting in the world of RL these days?" }, { "end": 3098.48, "start": 3093.72, "text": " So as I mentioned earlier I'm really interested in the notion of representations and what" }, { "end": 3106.68, "start": 3098.48, "text": " this means for reinforcement learning and actually learning in general I think whipping" }, { "end": 3113.68, "start": 3106.68, "text": " that bothers me is that first is a notary which is one of the standard benchmarks." }, { "end": 3118.9599999999996, "start": 3113.68, "text": " We have these agents that have to relearn how to see essentially every time they play" }, { "end": 3120.2, "start": 3118.9599999999996, "text": " the game." }, { "end": 3127.2, "start": 3120.2, "text": " So ideally what I want to have happen is that you have an agent that learns how to play" }, { "end": 3133.56, "start": 3127.2, "text": " pong and when you send it to a breakout it doesn't have to relearn what a ball and a" }, { "end": 3139.68, "start": 3133.56, "text": " paddle means and how balls interact with paddles and I guess in breakout you can break" }, { "end": 3145.3599999999997, "start": 3139.68, "text": " bricks but there's still some shared dynamics across a lot of games and it's one of the things" }, { "end": 3149.12, "start": 3145.3599999999997, "text": " that bothers me a bit that we have to relearn these every time and I think there's some" }, { "end": 3152.2799999999997, "start": 3149.12, "text": " room there for exploration." }, { "end": 3159.04, "start": 3152.2799999999997, "text": " Then aside from that I'm also quite interested in actually seeing RL used for real problems." }, { "end": 3167.44, "start": 3159.04, "text": " So for forever we've seen papers that motivate the work with real problems but you don't" }, { "end": 3172.8399999999997, "start": 3167.44, "text": " see them actually being used in real problems, all that much." }, { "end": 3181.84, "start": 3172.84, "text": " And so there was a workshop in ICML I think this year that was looking at reinforcement" }, { "end": 3184.1600000000003, "start": 3181.84, "text": " learning for real problems." }, { "end": 3188.1600000000003, "start": 3184.1600000000003, "text": " I don't think I'm the only one that's interested in this." }, { "end": 3192.7200000000003, "start": 3188.1600000000003, "text": " I think the community is starting to really think about this and how we can go beyond" }, { "end": 3201.2000000000003, "start": 3192.7200000000003, "text": " games and simulations and actually use RL at scale with impactful problems." 
}, { "end": 3206.96, "start": 3201.2, "text": " It seems to me like when you start to consider deploying these policies you right away have" }, { "end": 3212.6, "start": 3206.96, "text": " trouble answering questions about how safe is it, how can we be sure, how do we understand" }, { "end": 3217.7999999999997, "start": 3212.6, "text": " what it's doing and all these things which are kind of often take a back seat to just" }, { "end": 3220.24, "start": 3217.7999999999997, "text": " raw performance on Atari." }, { "end": 3225.8799999999997, "start": 3220.24, "text": " Yeah, no, and this comes back to this notion of each problem has its own particularities" }, { "end": 3230.4399999999996, "start": 3225.8799999999997, "text": " that if you need it to be interpretable or if you need it to be safe and that requires" }, { "end": 3233.12, "start": 3230.44, "text": " a certain type of algorithm." }, { "end": 3241.6, "start": 3233.12, "text": " But if you really focus on one particular problem then that should drive the type of research" }, { "end": 3247.2000000000003, "start": 3241.6, "text": " you do and whatever framework or software you end up building to really solve that problem." }, { "end": 3254.16, "start": 3247.2000000000003, "text": " And I hope, I mean it's a place where I'm lucky that I don't really have to care too" }, { "end": 3256.08, "start": 3254.16, "text": " much about publication numbers." }, { "end": 3259.36, "start": 3256.08, "text": " I know for a lot of academics this is something quite stressful because it's how you advance" }, { "end": 3261.88, "start": 3259.36, "text": " in your career." }, { "end": 3268.52, "start": 3261.88, "text": " But I think the community is starting to realize that playing the numbers game isn't necessarily" }, { "end": 3272.4, "start": 3268.52, "text": " going to get us to these real world problems because these real world problems are going" }, { "end": 3278.4, "start": 3272.4, "text": " to require a lot of time and work with potentially no publications." }, { "end": 3280.1600000000003, "start": 3278.4, "text": " So it is something I think about." }, { "end": 3285.1200000000003, "start": 3280.1600000000003, "text": " I have started doing a little bit of work with some external collaborations on that to" }, { "end": 3290.72, "start": 3285.12, "text": " try to get RL in the real world." }, { "end": 3295.88, "start": 3290.72, "text": " Maybe related to that, what do you think RL is going to be like going forward?" }, { "end": 3301.56, "start": 3295.88, "text": " Like if you look forward in five or ten years, will it be very recognizable to us incremental" }, { "end": 3306.7599999999998, "start": 3301.56, "text": " changes or do you think it will be completely different?" }, { "end": 3311.64, "start": 3306.7599999999998, "text": " Do you think will be some of these lines of research that we're following are going" }, { "end": 3319.24, "start": 3311.64, "text": " to be seminal that will sprout a whole different dimensions of this problem?" }, { "end": 3321.48, "start": 3319.24, "text": " What do you see in the future?" }, { "end": 3323.2, "start": 3321.48, "text": " It's kind of hard to say." }, { "end": 3326.92, "start": 3323.2, "text": " This field is changing so much from year to year." }, { "end": 3329.6, "start": 3326.92, "text": " I wouldn't be surprised." }, { "end": 3331.24, "start": 3329.6, "text": " I wouldn't call this a prediction." 
}, { "end": 3338.24, "start": 3331.24, "text": " I wouldn't be surprised if the lines, and you already see this happening, but the lines" }, { "end": 3344.2799999999997, "start": 3338.24, "text": " between what we call RL and other types of machine learning fields become more blurred." }, { "end": 3352.3199999999997, "start": 3344.2799999999997, "text": " So RL becomes more of a technique or a tool that you use along with many other tools to" }, { "end": 3354.7599999999998, "start": 3352.3199999999997, "text": " solve a larger problem." }, { "end": 3356.8799999999997, "start": 3354.7599999999998, "text": " So it isn't just you're working on RL." }, { "end": 3363.9199999999996, "start": 3356.8799999999997, "text": " You're working on problems that include RL as part of the solution." }, { "end": 3365.9199999999996, "start": 3363.9199999999996, "text": " Awesome." }, { "end": 3370.08, "start": 3365.92, "text": " Dr. Pablo Samuel Castro, thank you so much for your time today." }, { "end": 3372, "start": 3370.08, "text": " And for your insight, you taught us so much." }, { "end": 3375.64, "start": 3372, "text": " I learned so much from reading your work and I look forward to reading whatever you do" }, { "end": 3376.64, "start": 3375.64, "text": " next." }, { "end": 3377.64, "start": 3376.64, "text": " Thank you again." }, { "end": 3378.64, "start": 3377.64, "text": " Thank you so much for having me." }, { "end": 3381.2400000000002, "start": 3378.64, "text": " This was really fun." }, { "end": 3392.16, "start": 3381.2400000000002, "text": " That's our episode for today, folks." }, { "end": 3395.56, "start": 3392.16, "text": " Be sure to check talkrl.com for more great episodes." } ]
Kamyar Azizzadenesheli
Kamyar Azizzadenesheli brings us insight on Bayesian RL, Generative Adversarial Tree search, what goes into great RL papers, and much more!
https://media.transistor…927.mp3?src=site
This is TalkRL Podcast, all reinforcement learning, all the time. Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan. Dr. Kamyar Azizzadenesheli is a postdoctoral scholar at Caltech. He will be joining Purdue University as an Assistant CS professor in Fall 2020. Dr. Azizzadenesheli, thank you so much for joining me today. Thanks, Robin, for having me today. So you have a lot of really great papers. We just chose three to focus on today. That is, Efficient Exploration through Bayesian Deep Q-Networks, Surprising Negative Results for Generative Adversarial Tree Search, and Maybe a Few Considerations in Reinforcement Learning Research. So before this interview, I got to hear a podcast interview you did actually a year ago on the TWIML AI podcast. You also touched on two of these papers during that podcast. I really enjoyed hearing your interview. I learned a lot from that. I would just say for the listeners, this podcast is a little bit different because what I want to try to do is be more in touch with the research, try to read some of the papers, the relevant papers, before the interview, so we can have a little deeper discussion. Yeah, that sounds great to me. And since last year, there has been much more research coming out from different labs related to these topics, so it would be great to talk about those as well. Yeah, I mean, just in general, I think the amount of RL research being published is really getting out of control. Like I've seen some of these charts in terms of the topics that are covered at the major ML conferences, and RL is just shooting up exponentially. How do we keep track of what's happening in the field when so much research is coming out? Well, that's a really, really great question, and it makes progress a little bit harder for people, I mean, the researchers in the field, because we can't keep track of many works. But at the same time, there are many good and great people who try to abstract out those works — they try to provide abstracts of the papers they read and put them online. And instead of reading those papers, you probably can read those abstracts. And if you see that they're interesting to you, you would go and read them. Obviously, this is not going to be an optimal way of handling the situation, but it's going to be at least something better than doing nothing and ignoring many papers. But to be honest, I also have a hard time keeping track of many papers, and I have a list of papers that I need to read, and this list doesn't go down, it just keeps increasing, and like now it's becoming more than 100. And it's a problem that I just don't know a good solution to, but there is a remedy, which is, like, great people put abstracts of those works out there and I just look at those sometimes. As for my friends — thankfully, during my PhD I made a ton of great friends, mainly in the theoretical aspects of machine learning and also among practitioners. And they are also really nice; they kindly let me know what papers I should read and they send me really great things. But we don't know how to — I personally don't know how to — solve this issue of the exploding number of papers out there. So it seems to me that some of these papers we can safely ignore, and there are other papers that really change your perspective when you even read them for the first time.
Yeah, I quite agree with you, and this is one of the sections in the paper I wrote — I mean the position paper I wrote that you mentioned as the third paper we are going to talk about, Maybe a Few Considerations in Reinforcement Learning Research. Well, in that section I don't talk about how we can deal with many, many papers, but I say we can reduce the number of papers we write, for many reasons. As we all know, reinforcement learning is a field of research that is really, really expensive in the sense of intellectual effort, time, money — everything. If you want to publish a reinforcement learning paper, or you want to have a contribution in the field, even a simple theoretical innovation takes a lot of time from you. If you are serious, it might take half a year to analyze and provide a great understanding of an RL algorithm. Even if you are a practitioner, it might take you, again, half a year to do the empirical study. But it is not quite the case for other topics and fields in machine learning, like supervised learning. I personally wrote a really interesting, from my point of view, paper in supervised learning that I spent one weekend on to write the whole paper and also the theoretical analysis of it. And my friends after that helped me to do the empirical study for it. The whole process didn't take a lot of time, but at the same time I was working on another paper in reinforcement learning that took me like half a year to provide a really satisfying understanding of what is happening. So since reinforcement learning is quite costly, and we don't have that many researchers working on reinforcement learning despite all the attention it is getting — we still need more — and given the fact that we have so few people working on reinforcement learning compared to the number of people we need, it's better to manage what we want to do and what problems we want to solve, and make sure that if we have an idea, we think a lot about it first. Definitely, if we're going to do an empirical study, we should think a little more and design some hypotheses for ourselves and test them by ourselves before running an extensive empirical study. I can give you many examples of existing RL, even like general RL algorithms, that fail easily and miserably on a two-state MDP with two actions, but the authors could have avoided that by spending a little time to really evaluate their algorithms. Yeah, so my general point is, we need to spend more time thinking deeper in reinforcement learning research, and since we have limited budgets — and by budget I mean human time — it's better to cooperate and collaborate with other authors and get a more directed development in the field. Okay, so maybe getting a bit more into that, I've noticed in machine learning there are many different formats of papers: some authors really have a clear hypothesis when they start and some don't. And in general, what do you think really makes an excellent paper in RL? What needs to be there to make an excellent paper?
If it's a scientific paper, in the sense that they put out some hypotheses that I care about and they test the hypotheses — and by testing I don't mean running one experiment, I mean extensively testing the hypothesis. When you test a hypothesis empirically, you need to try a variety of different settings and make sure that you literally test that hypothesis. It's not just saying, I did one experiment and something happened, so I tested my hypothesis. That's not a good thing. But if you put out a hypothesis and test it, that would be really great to me. The third thing, as you said, is: let's imagine a paper does not have any hypothesis, but what it does is report a set of great empirical studies and put them all out. Again, they just tell me, hey, we ran this — and the empirical study that they provided is something I was thinking would be interesting to see what would happen with. If they provide a set of experimental studies, that would be really interesting. Because in reinforcement learning, we don't know that much. It hasn't been extensively studied. If a paper provides a great understanding of what is happening, I would love to read that paper. If the paper says, okay, we did this trick and we outperformed something, I wouldn't care as much. But I would also like that paper, because it adds a non-negative amount of information to me. But if the goal is to provide a better understanding, I would love it more. But if it's just climbing the ladder of the leaderboard in the scores — whatever that means; I don't even know what this term means in RL, which I talked about in that position paper, saying what does it mean to climb the ladder of the leaderboard in RL — and the paper does not provide that much understanding, or the empirical study is not exciting, I wouldn't be excited about that paper. So you're not against empirical papers? Oh, I love empirical papers. One of the reasons that RL has been a center of attention for many, many young and old, like prestigious and junior, researchers in the last few years was these empirical studies. I'm excited about empirical studies. When these empirical studies came out — let's put it this way. We could say, okay, practitioners, you should not do anything before the theory proves what we should do, so we can push science and theory. But in that case, since theory is, well, hard, it takes a lot of time to make progress. But if you ask practitioners — hey, I'm also partially a practitioner, I'm not excluding myself from that community, I love that community — if practitioners start building intuition and provide an amazing set of empirical studies, then theorists gain from this information; based on what they observed, they try to analyze. It just gives them a better clue about how to analyze a problem and what could go wrong. Okay, so the empirical studies, the results coming out of the practitioners, are wonderfully appreciated. I love them all. But of course, as you said, there are some papers that don't follow the scientific tradition. And they do some empirical study — they spend like $50,000 or $100,000 on an empirical study where you can easily say, well, it was quite the wrong thing, or the idea was flawed, just by looking at the idea. Those are things that happen. You can't stop a field because some researchers at some point made some mistakes. We have all been making mistakes. Even in theory, if I call myself a theorist, I also make mistakes.
Euler made many mistakes. Like Fourier, in his theorems, he made mistakes. We all make mistakes. But over time, we build on top of other researchers and make progress. So one thing I've noticed in RL is, like, many papers are just looking at one small aspect. And it's hard for me to tell at this point, what would be a cutting edge agent if we combine all the insights across all the different state-of-the-art results? What is the real state of the art for a complete agent right now? Once in a while, we get a paper like that. Like I'm thinking of the Rainbow paper, maybe, as an example of that. It wasn't really a unique thing except for just combining things that already existed. Rainbow was — well, I like it in some sense. It put many, many efforts all together, to see what the output is. The thing is, if you look at the work, most of the work before Rainbow, or even after Rainbow — I mean, even my own work: when I wrote the BDQN paper, I was like, okay, we have DQN or DDQN, and they don't have a great way of doing exploration and exploitation to do sample-efficient interaction with the environment. And I was like, okay, let's design a method which does a smart way of doing exploration and exploitation. And then I proposed a model which we call BDQN, and we showed that if you do exploration better, compared to DQN or DDQN, it's going to perform very well. Then you would see a colleague of mine say, okay, I have DQN, let's study the effect of the distribution of samples we gather in the replay buffer. You would see another colleague of mine say, okay, let's see, if we add regularization to the objective function, what would happen. So we studied the effect of different components of a reinforcement learning algorithm, and we all compare with DQN or DDQN, mainly because we are trying to provide a better understanding. In my own work, in the same BDQN paper, I repeated many times that by no means am I comparing the performance of BDQN and DQN, or the performance of any algorithm with the algorithm I'm proposing. I'm not doing any comparison; I'm not even comparing any numbers. I'm just trying to provide a better understanding of algorithm design for the reinforcement learning problem. So we have many, many great researchers who try to provide a better understanding of the different components of a reinforcement learning algorithm and to see what the contributions are: whether they are actually helping or not, if they help, how much they help, and if they degrade the performance, what would happen. And then Rainbow put all these together — I mean, Rainbow I think did not include BDQN; I think it came before BDQN. They put most of these existing algorithms together and showed that, hey, one researcher said if we change the replay buffer distribution, it's going to improve the performance. Another researcher said if, instead of sticking with the one-step return, you use a lambda return or a few-step return, it's going to improve the performance. Let's put all these things together and see how much all of them together are going to improve things. And it was a really cool empirical study, and it's also useful for people who are not researchers in the field. Let's say someone has a company in the Bay Area and wants to use reinforcement learning and looks at my paper — it says, okay, this paper does exploration.
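To make one of the components mentioned here concrete — the few-step (n-step) return that Rainbow-style agents use in place of the one-step TD target — here is a minimal illustrative sketch. The numbers and the bootstrap value are made up, and this is not the Rainbow implementation itself.

```python
def n_step_return(rewards, bootstrap_value, gamma=0.99, n=3):
    """Illustrative n-step return target: sum of n discounted rewards plus a
    discounted bootstrap from the value estimate at the n-th next state."""
    n = min(n, len(rewards))
    target = sum(gamma ** t * rewards[t] for t in range(n))
    return target + gamma ** n * bootstrap_value


# One-step vs. three-step target on the same made-up transition data.
rewards = [0.0, 0.0, 1.0]
print(n_step_return(rewards, bootstrap_value=0.5, n=1))  # 0 + 0.99 * 0.5
print(n_step_return(rewards, bootstrap_value=0.5, n=3))  # 0.99**2 * 1.0 + 0.99**3 * 0.5
```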
The other papers change that component, other papers change another component. But since that person is coming from industry and doesn't know what is happening in the field, that person might choose just one of these. But what Rainbow did was put all these together and say, okay, if you combine all these components that we know are going to help, you're going to get an algorithm which actually performs really well. And if you're an industry person, you can use that algorithm. It almost seems like Rainbow could be updated every year with whatever is the latest stuff. Like I think Rainbow didn't have IQN in it, for example, the original Rainbow. Yeah, I'm not sure whether it has IQN or not. But based on the conversation I had with some of the authors, their philosophy was, yeah, it can be updated, but I don't think they want to keep updating it. What I think they want to show is: hey, these are the advancements people made up to that point, and if we put them all together, it's going to be this. Of course, if there are many more advancements, and we put those all together, we're going to improve even more. So it was a kind of proof of concept, at least to me, which — well, it doesn't have the same scientific value as the individual component improvements, but it's amazing and cool work to have as a proof of concept. What would be your advice for someone doing, say, empirical research, based on your position in this paper? Is it to be more in touch with the theory? No, I do not. So some people require that, hey, every reinforcement learning paper should have theoretical analysis. I'm strongly against that. It doesn't need to be that way. We have theoretical work and we have scientific work. If you have a scientific work, you don't need to provide theoretical analysis for that. If you provide it, that would be amazing, but if you don't, it's totally fine. And it should also be easy for those papers to be published. If you put a requirement on empirical papers to provide theoretical analysis, and if they don't have it, then we are going to miss many great empirical studies. But one thing that I really would love to see more is, if I propose an algorithm for reinforcement learning from the empirical side — like, it's not a theory work, it's empirical work or scientific work — I would love to see what the settings are in which this algorithm breaks. Let's say I propose a policy gradient algorithm, a new one, which is totally based on my intuition and there's no theory behind it, and I deploy it on, let's say, Mujoco, or this kind of robotic-style environment. And I run it — and we know that, for example, in the Mujoco environments, you can change the dynamics of the system, you can change the cost function, you can manipulate all of this. I mean, people try to do cherry picking and change the environment setup and make their algorithms work on this new setup, and therefore you can beat everything else — but again, whatever that means, the beating. I would also love to see — I mean, this is amazing research, and you showed me that if the environment has this configuration, your algorithm is going to work very well — I also want to see how far you can go away from this configuration and still your algorithm doesn't break. So if you propose an algorithm, you should show me where it works, where it breaks, and if it breaks, why do you think it breaks?
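One hedged way to act on the "show where it breaks" advice above is a simple robustness sweep: evaluate a single trained policy while scaling one environment parameter away from the training configuration. The sketch below assumes an old gym-style `reset`/`step` interface and a hypothetical `make_perturbed_env` factory and `policy` function; all three are placeholders, not a real API.

```python
# Hypothetical robustness sweep in the spirit of "show where it works and where
# it breaks". `policy` and `make_perturbed_env` are assumed stand-ins.
import numpy as np


def evaluate(policy, env, episodes=10):
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done, _ = env.step(policy(obs))  # old gym-style 4-tuple
            total += reward
        returns.append(total)
    return float(np.mean(returns))


def robustness_sweep(policy, make_perturbed_env, scales):
    # Mean return as the dynamics drift away from the training configuration.
    return {s: evaluate(policy, make_perturbed_env(mass_scale=s)) for s in scales}


# e.g. robustness_sweep(policy, make_perturbed_env, scales=[0.5, 0.8, 1.0, 1.2, 2.0])
```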
This is the kind of missing point in the literature, and I do not blame the authors. I blame the culture a little bit, because if you add a negative result to your paper, probably you get rejected, you don't get a chance to get accepted, even though it can be a reasonable scientific contribution. So this is one reason why I was surprised and excited to see your paper about the surprising negative results for GATS. So you bucked the trend here and you did publish a paper with a negative result. Do you think that there should be more negative-result papers? So this paper is not published, because of its negative results. People loved it, but it was not fortunate enough to be published, so I just put it on arXiv for people to use it, and I saw that it had a great effect on our community. But regarding your question, yes, I would love to see negative results — though not, of course, any negative results; you can produce, like, 20 billion of them every day. But negative results where there was a hypothesis that many people thought would work, and it's an amazing idea to try, and you tried it and it did not work. For example, in this paper, my paper on surprising negative results, I showed that you can try to learn the generative model of the environment. If I have a reinforcement learning problem and I'm interacting with the environment, through this interaction I get some rewards and I get a ton of transitions going from one state to another state. So if you treat these transitions, all these interactions, as unsupervised signals or data points, you can learn the dynamics from them. Why not use this ton of data? For example, if I interact with the Atari games and I run it for 200 million steps, like 5% or 1% of the time — well, even less than that — I might get any reward. But I get 200 million interactions, or 50 million interactions, with the environment. So I could use a tiny portion of the data I gather to learn the dynamics of the environment. In fact, in this paper, we showed that with about a few thousand interactions you're able to learn the dynamics of the environment, as opposed to a few hundred million. But doesn't that depend on what part of the environment you're in? Like, an unskilled agent wouldn't be able to learn about the last level. Oh, absolutely. What happens is, you run your agent — like, a not-really-good agent — and you collect data from that environment, and you're able to learn the dynamics of the environment in that part of the state space. And then if you use that, you can hopefully enhance your agent, and the agent is going to discover new parts of the state space, and then you use those samples to learn the dynamics of the environment. I absolutely agree with you. By learning the dynamics of the model, I mean with regard to the measure induced by the agent. And yeah, you can learn the dynamics of the environment and use ideas like the ones used in AlphaGo: you just construct a tree, not on the real environment, but on the environment you learned — the model you learned — and you find a good action in that state. It can be a tree, you can do policy gradient, whatever you do — you just do your rollouts in the estimated model you have. But of course, you do not roll out the estimated model for, like, a few hundred thousand time steps. You just roll out to, let's say, a depth of 10 to 20 and use the Q function learned so far at the leaf nodes.
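A rough sketch of the estimator being described — roll a learned dynamics model forward for a short depth and bootstrap with the learned Q function at the leaf. This is only an illustration of the idea (a single greedy rollout rather than the full tree search used in GATS), and `model`, `q_values`, and `actions` are assumed stand-ins, not the paper's components.

```python
# Short-horizon rollout in a learned model with a Q-function bootstrap at the
# leaf. `model(state, action)` returns (next_state, reward) from the learned
# dynamics; `q_values(state)` returns a list with one value per action index.
def rollout_value(state, first_action, model, q_values, actions, depth=20, gamma=0.99):
    total, discount, action = 0.0, 1.0, first_action
    for _ in range(depth):
        state, reward = model(state, action)          # learned model, not the real env
        total += discount * reward
        discount *= gamma
        action = max(actions, key=lambda a: q_values(state)[a])  # greedy inside the rollout
    # Leaf bootstrap: any bias in q_values is scaled down by gamma ** depth here.
    return total + discount * max(q_values(state))
```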
And how did that idea hold up? When I first thought about it, I thought: wait, this is really interesting. I was able to prove that if you give me an approximation of the environment and an approximation of the Q function, then using this approach I can estimate the Q function better. It made total sense to me, because at the same time you can show theoretically that the DQN agent ends up with a biased estimate of the Q function. So the reasoning was: the Q function I'm learning with DQN is biased, but if I use this Monte Carlo tree search approach with the learned Q function at the leaf nodes, and the leaf nodes sit at depth, say, 20, then the effect of that bias gets multiplied by the discount factor to the power of 20, so it should shrink away. It seemed like a great idea, and in fact there are many papers that do essentially the same thing. But after running many experiments I kept asking: why doesn't this outperform even plain DQN, what is wrong with it? I spent about nine months on this paper just digging into why it doesn't work. After nine months we stepped back, wrote down hypotheses, and checked each one, and we realized that this algorithm is doomed to underperform as long as the depth is not high. So what counts as high? In the paper you went to depth four, I think, is that right? Four or five, I guess. And is five high? No, five is really low. You have to go to a depth of a few hundred. But if you go that deep, won't the errors in your model start to become a problem? Well, that's the thing. In AlphaGo you go really deep, a few hundred steps I think, though I'm not sure of the exact number. If you look at the Monte Carlo tree search work from, I think, Satinder Singh's group, they also go to a few hundred, but they do it on the true model; they don't learn the model, so they don't have the problem of model mismatch. And if you go to depth 200 or 300 with a discount factor of 0.99, that depth essentially gives you the optimal solution on its own: you went deep enough that you don't need the Q function at the leaf node at all, because 0.99 to the power of 200 is close to zero. But, as you said, if I learn the model and there is model mismatch, rolling that model out for 200 steps might end up somewhere weird, or saturate. Absolutely, that can happen. But the point we were making in this paper is that even if you have a perfect model, doing Monte Carlo tree search or policy gradient on the true model with a short depth is not going to work as hoped. And now you're saying that adding model mismatch on top of the short horizon just adds more error, so it's not going to help; it's going to degrade the performance.
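A quick back-of-the-envelope check of the depth numbers being discussed (ordinary arithmetic, not results from the paper): the leaf Q-function's bias enters the lookahead estimate scaled by gamma to the power of the depth.

gamma = 0.99
for depth in (4, 20, 200, 500):
    print(f"depth {depth:3d}: leaf bias scaled by {gamma ** depth:.3f}")
# depth   4: 0.961  -- a depth-4 tree inherits essentially all of the leaf bias
# depth  20: 0.818
# depth 200: 0.134
# depth 500: 0.007  -- only at depths of hundreds does the leaf term really fade out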
But if you want to do Monte Carlo tree search on a learned model and go to a depth of 200, current techniques are not good enough to handle that: the errors compound, the rollout drifts off, and you don't get reasonable outputs, unless you take extra precautions. That said, here is an interesting point. Some of my colleagues claim that if you learn a model and roll it out for a long time, it's never going to work. That's not true in general. If you work on a tabular MDP and you have model mismatch, it doesn't cause much of a problem: if your model is epsilon-inaccurate, your policy is going to be roughly epsilon-inaccurate, not exponentially bad. So in tabular MDPs, model mismatch does not hurt you too much. But if you live in a space of continuous MDPs and you're using function approximators that can produce outputs outside the domain of the input, then you have a problem. Even in continuous settings like LQG, the linear quadratic regulator, with linear models, model mismatch does not cause that much trouble when you roll out in an imperfect model. Beyond that, we don't know much, and when we use function approximators like deep neural networks, they can generate outputs outside the domain of the state space, and when that happens, bad events can follow. Which is partly, I think, the point of your Bayesian deep Q-network, right? It knows when it's getting outside the range of what it understands. So, about Bayesian deep Q-networks: in the work we were just discussing I was talking about model mismatch, and there's a subtle terminology issue here. In reinforcement learning, if you work on model-based RL, you call the environment dynamics the model; but when you use function approximation, people often call the function approximator a model as well. So I think there was some confusion: by model I meant the model of the environment, the dynamics, which standard DQN does not estimate at all. In the Bayesian deep Q-network work, the "model" is the Q network: you are estimating the Q function, and there is uncertainty in that estimate, which the Bayesian treatment captures. Okay, so I guess what I'm asking is: if GATS had an estimate of its uncertainty on the Q values at the leaf nodes, would that help, or is that a completely different issue? It's quite a different issue. For GATS, the final statement of the paper was: suppose you have no uncertainty in your estimate of the model dynamics, meaning that given a frame of an Atari game and a sequence of actions, you can perfectly predict the future frames, so you know exactly how the environment works. Even if you use that perfect model, the true game engine, for your rollouts, then whether you have uncertainty estimates over the Q function or just a point estimate, as long as you don't have the exact Q function, this approach can be suboptimal. Okay. Yeah, that's quite interesting.
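The tabular claim above — that an epsilon-accurate model costs you on the order of epsilon in value, not something exponential in the horizon — is usually made precise with a simulation-lemma style bound. Stated loosely (constants vary across references), for rewards in [0, R_max]:

\[
\max_{s,a}\,\big\|P(\cdot\mid s,a)-\widehat P(\cdot\mid s,a)\big\|_1 \le \varepsilon
\;\;\Longrightarrow\;\;
\big|V^{\pi}_{P}(s)-V^{\pi}_{\widehat P}(s)\big| \;\le\; \frac{\gamma\,\varepsilon\,R_{\max}}{(1-\gamma)^{2}}
\quad\text{for every policy } \pi .
\]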
So, I love this work, but this idea of learning a generative model of Atari games and doing rollouts in it is super expensive, both in human time and in compute. It's expensive in human time because we used a generative adversarial approach to train the model: you want a model that can capture stochasticity if the environment is stochastic, and you want it to output high-fidelity frames without artifacts. Can I pause you for one second? You mentioned stochasticity, and that was one of my questions about this paper. If your environment has randomness in it, say enemies show up at random times or outcomes are random, how does the GATS approach handle that stochasticity? That's a great question. GATS works very much like a conditional GAN, what some people call a contextual GAN. In a standard GAN you draw a random Gaussian or uniform noise vector, feed it to the generator, and it outputs an image or whatever data point you like. In the conditional version, the generator receives the noise and also a context: you say "generate an image of a dog," you give it the dog label plus random noise, and it generates a dog. What we do here is feed the last four frames to the generator, plus a random noise vector, plus the action we are about to take, and the generator is supposed to produce the next frame, ideally from the same distribution as the environment. If the environment is deterministic, the generator can ignore the noise, because the last four frames and the action determine the next frame. The games we tried were essentially deterministic, so we didn't see much need for the noise, but it's built into the architecture. The thing is, we all know training generative adversarial networks is hard even when the dataset is fixed, like CelebA or MNIST. In reinforcement learning, as you mentioned earlier, the distribution of samples changes over time because the policy changes, so the generative model needs to keep adapting to a shifting distribution. Finding the right architecture and hyperparameters for that is quite difficult, which costs a lot of human time. The second expensive part is compute: if you want to do rollouts, you're doing them on a GPU, sequentially. Say you want to build a full tree for Monte Carlo tree search: if you build a tree of depth 10 and you have, say, three actions, the number of leaf nodes you produce is three to the power of ten, right?
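A rough PyTorch-style toy version of the conditional-generator setup being described — last four frames, an action, and a noise vector in; predicted next frame out. The architecture and sizes below are illustrative, not the paper's exact model.

import torch
import torch.nn as nn

class NextFrameGenerator(nn.Module):
    """Sketch of a GATS-style conditional generator (illustrative architecture)."""
    def __init__(self, n_actions, noise_dim=16):
        super().__init__()
        self.n_actions = n_actions
        # Encode the last 4 grayscale frames, stacked on the channel axis.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        # Decode back to one predicted frame, conditioned on action and noise
        # (broadcast as extra feature maps).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64 + n_actions + noise_dim, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, frames, action, noise):
        # frames: (B, 4, H, W); action: (B,) int64; noise: (B, noise_dim)
        h = self.encoder(frames)
        b, _, hh, ww = h.shape
        a = torch.nn.functional.one_hot(action, self.n_actions).float()
        cond = torch.cat([a, noise], dim=1)[:, :, None, None].expand(b, -1, hh, ww)
        return self.decoder(torch.cat([h, cond], dim=1))

# Example: predict the next 84x84 frame for a batch of 2 transitions.
gen = NextFrameGenerator(n_actions=6)
next_frame = gen(torch.zeros(2, 4, 84, 84), torch.tensor([0, 3]), torch.randn(2, 16))

For planning, every node expansion is one forward pass through a generator like this, which is why the full depth-10, three-action tree with its roughly 59,000 leaf nodes per decision becomes so painful.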
And for each state where you want to make a decision, if you expand the full tree, that's on the order of three-to-the-tenth forward passes through the generator, which is killing. For example, if you run Pong. That's if you do the full tree, right? Yes, the full tree. And once the Q function is trained up a bit, you don't have to expand the full tree anymore, right? Absolutely, you don't have to, but even if you only follow a single trajectory at depth 10, it's still ten extra model evaluations per decision. So I'll admit, I've been working on the Pommerman environment, which is a NeurIPS competition environment. I have an agent, but I was thinking it would be cooler if I could run expert iteration on it; Pommerman has a lot in common with Atari, so maybe nine months ago I started sketching out how, and I came to realize I would need something vaguely like what you're describing in this paper. I was intimidated by the amount of work it would take to build that, so I didn't proceed with the project. Well, I'm happy you didn't do it. And another thing I'm happy about: a year or two ago, when I talked about this negative result with people, they either were shocked or wouldn't agree with what I was saying until I convinced them, and then they became shocked. You mean they thought it would work? Yes. It was kind of common sense that it would work, except for one friend, a senior researcher who used to be at DeepMind and now heads up, what is it called, the Brain group in Paris. He was the only person who told me he already knew it doesn't work. Everyone else either thought I was wrong or was shocked, and if they thought I was wrong, I would convince them and then they became shocked. So it was common knowledge that this approach should work. More recently, whenever I talk to new people, they all seem to know this approach is doomed to fail, and I'm happy to get that response, because that's what I wanted. Can you briefly summarize the reason, so we can understand why this approach can't work? The question that comes to my mind is: why is this so different from AlphaZero? If your model really were good, what difference remains? Okay. AlphaZero has a really deep tree, deep enough that the contribution of the value function at the leaf nodes is small; if anything, the depth may be more than needed. So why doesn't the shallow version work? Because it's too conservative, in a sense. Imagine you are walking along a curb. The curb is on your right-hand side, and if you hit it you might break your leg. If you can roll out in the model you have in your brain, call it imagination, let's be neuroscientists for a moment, you can imagine that if you drift too far to the right you'll hit the curb and break your leg, so that's not a good action, and you avoid it. The same thing happens in Atari: if the rollout tells me there's a ditch where I'm going to get hurt, I don't go there.
Therefore I never have that experience in my replay buffer. If my Q function wanted to steer toward the curb for some reason and the rollout prevented it, then my Q function never sees the outcome of its own decisions, and if it never sees the outcome, it never learns that going right is bad, or that even being in this state, two steps from the curb, is dangerous and I shouldn't be there at all. The planner just keeps preventing the bad event, but since the agent never experiences it, the Q function never learns it. That's the issue. Is this what the paper calls GATS causing a delay in reward propagation? Exactly. If I were not using this rollout in my internal model of the environment, I would try going right, I would hurt myself, and I would learn that going right is bad; my Q function gets updated. If I do use the rollout, I keep walking along the curb for a long time until, say, epsilon-greedy exploration makes me hit it with some small probability, or I hit it ten steps later and receive a negative reward there, and then it takes many updates to propagate back and teach me that I should never have gotten close to the curb in the first place. But isn't that the same as DQN? It's not the same as DQN, because here you're still using the replay buffer. Are you putting your imagined states into the replay buffer as if they were real? If you do not put the imagined rollouts — I'd rather not call a machine's rollouts "imagined," but let's keep the term for today — into the replay buffer, then when you get a negative reward in the future, it takes much longer for the earlier states to learn that they are bad states. And even if you do put the imagined transitions in the replay buffer, you can construct problems where the same delay shows up again, which we describe extensively in the paper.
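To make the mechanism concrete, here is a rough sketch of the kind of decision loop being described (names and structure are illustrative placeholders, not the paper's code). By default, only the transition actually executed in the real environment reaches the replay buffer, which is exactly why the Q network never observes the outcomes the planner steered it away from.

def gats_decision_step(state, actions, dynamics_model, q_net, replay_buffer,
                       depth=4, gamma=0.99, store_imagined=False):
    """Pick an action by depth-limited search in a LEARNED model,
    bootstrapping with the current Q estimate at the leaves (simplified)."""
    def value(s, a, d):
        s2, r = dynamics_model(s, a)                      # imagined transition
        if d == 1:
            return r + gamma * max(q_net(s2, b) for b in actions), s2, r
        best = max(value(s2, b, d - 1)[0] for b in actions)
        return r + gamma * best, s2, r

    scored = {a: value(state, a, depth) for a in actions}
    chosen = max(scored, key=lambda a: scored[a][0])

    # Optionally store the imagined one-step outcomes of the rejected actions,
    # so the Q network at least sees what the planner avoided.
    if store_imagined:
        for a, (_, s2, r) in scored.items():
            if a != chosen:
                replay_buffer.append((state, a, r, s2))
    return chosen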
So I'm not sure I fully followed that part. AlphaZero doesn't use a replay buffer, does it? Well, AlphaZero does store its self-play data in a buffer, but that detail doesn't come up very often. And then there was a more recent paper called Model-Based Reinforcement Learning for Atari by Kaiser et al., which we mentioned before the interview. It uses a recurrent, discrete-latent component to predict the next frame; looking a little closer, they use latents, but the latents are also used to predict the pixels of the next frame, so I guess the truth is somewhere in between. Their network is fairly complicated, and at this point I can't say I understand all the details of how it works. They got good results with this method, which they call SimPLe. It seems like a very similar approach to yours, since both are model-based, although I don't think they build a full tree. Yes, that's actually an interesting paper; I like it. In our work we use both a recurrent model and a feed-forward one, and we can do Monte Carlo tree search or policy-gradient-style rollouts on top; they use an RNN for the model and a policy gradient method to come up with the best action, so they are doing essentially the combination we discussed. But I don't entirely agree with the claim that they outperform anything, and one of the authors is actually a friend of mine. We usually run the reinforcement learning algorithms we design on Atari for 50 million or 200 million time steps, depending on what you count as a time step, to get the reported performance. In the first 100,000 time steps the agent doesn't do much; for Pong, say, it starts at a reward of minus 21, and after the first 100,000 steps it might be at minus 19 or 20. Not much happens in that regime. And as I said about our own paper: if I learn the model dynamics and can roll out for 10 steps, I can avoid some locally bad events, I won't hit the curb, and if there is an apple ten steps away I'll reach it. That makes me locally good, but not globally good. What their results show is that in the first 100,000 time steps they outperform the other algorithms, but that's not what we ultimately care about. If your score goes from minus 1,000 to minus 985, that's an improvement, but the performance gains we actually care about are in the thousands. What happens with these approaches is that in the very first 100,000 steps you get a jump, you ramp up quickly because of the tree structure I was talking about, but after that you don't gain much. I think that's one reason they didn't go far beyond 100,000 steps, while almost all the standard work out there, including mine, runs for 50 million or 200 million time steps. And when you run it out to a million time steps, the improvement vanishes, or plateaus. So it's interesting work; it's exactly the thing we were warning would not pay off, and they did it, which is useful, but that's the pattern: with tree search on a learned model you ramp up very fast at the beginning, and after that you don't do much. Okay. So what would be the next step for this GATS line of work? Where would you suggest looking next? That's an excellent question. There is huge interest among researchers in combining model-based and model-free methods; I think the first guest on your show has also worked extensively in this area. But it's harder than we thought, and it will take some time to figure out how to use the model well.
And I'm glad we did it and could tell everyone: don't do it this way, because it looked like the obvious, straightforward thing to do. It was good to show what doesn't work, but what should we do instead? Certainly, when we run Q-learning or DQN on Atari, there are many, many signals we are not using. Call them unsupervised signals; maybe that's the wrong term, but bear with me. These transitions can be used to learn a model. But who said that model-free is less sample efficient than model-based? Nobody has proved that. People keep saying it, and I say: okay, prove it to me. Well, that's interesting. No one has shown it, and from what we have seen both theoretically and empirically, it sounds nice to say model-free is less sample efficient than model-based, but we simply don't know. Theoretically, Q-learning has essentially the same sample efficiency as model-based approaches in the tabular setting, and for continuous settings people are working on it, including myself, and we are seeing that the sample complexity is almost the same. But there is an interesting point that we also discussed extensively in the paper. Suppose you don't just want to maximize expected return; you also want to be locally safe. If you learn the model and can roll out, you know what would happen if you took certain actions, and that helps a lot with safety, which is an amazing use case. So what I'm saying is: combining model-based and model-free is great, but don't aim simply at outperforming model-free methods. Our paper says don't think of it that way, or at least don't assume it will be easy; be more precise about what you want from the model. The third thing we showed was astonishing, and I love that experiment. Imagine Pong; you remember Pong? For sure, I enjoyed it as a kid. In Pong there are two paddles and a ball; we control one paddle and the opponent controls the other, and we train a DQN to play it. Now, one experiment we did, which was really fun — actually, do you know Gary Marcus? Oh, yeah. He's one of the prominent critics in the field, an amazing person, and I was talking to him just ten minutes before talking to you. I follow him on Twitter. I was following him on Twitter too, until I met him in person this morning. We were talking about this example, and he had some other examples of his own as well. The example is as follows. I have Pong, with two paddles, and I control one of them. After about five million time steps I master the game and can score 21. Then, thanks to Marlos Machado, my friend who is at Brain now, and Marc Bellemare and other folks, there is a new version of the ALE that lets you change the mode of the game, which means you can change the difficulty or the dynamics of the game.
For Pong, when you change the mode, the width of the opponent's paddle is halved. The opponent's paddle becomes half the size, so the game actually becomes easier, because it's easier to score. Now take the DQN model that scores 21, the cap, under the normal game, and apply exactly the same trained model to this easier game. What would its score be? The surprise is that instead of staying around 21, as we expected, it dropped to minus 21. It completely broke; the system could not function at all. And is that really a problem with RL, or is it a problem with these kinds of function approximators? Because the function approximator, when it sees a scene with a slightly smaller paddle, treats it as a completely different scene from one where everything is the same except the paddle, right? If you don't mind, I'd rephrase the question a little. It's not a problem with reinforcement learning as a general field; there is nothing wrong with RL itself. But if you look at the agent as a combination of RL principles and a function approximator, and your objective is to maximize reward with that function approximator, then there is no reason for the agent to also work on the variant of the game where the paddle is half the size or twice the size. There is no need, to use Gary Marcus's language, to learn any of the logic of the game in order to maximize return on the original game. Because it has no semantics: it doesn't know that's a paddle. It's just a bunch of matrices being applied, and any change suddenly means the answers are no longer relevant, right? We could imagine some better function approximator than these CNNs, one with better priors or some notion of objects, that could get around this. Yes. In fact Marlos, whom I mentioned, tried regularizing DQN and showed it helps a little. There's also a paper by Vicarious called Schema Networks that tried exactly this, not on Pong but on Breakout, the one where you break bricks, and they showed that a small change in Breakout made standard reinforcement learning fail, while their method was able to overcome those small changes. I think there was another one about human priors, where the RL algorithms played these games just as well when the images were replaced with noisy images that made no sense to a human. Oh yeah, I remember that work; it made no difference to DQN. For these RL methods it doesn't matter, because you never specifically asked for it. You can call it an issue, but it's not something you asked for at the outset. You asked: maximize the return. You didn't ask it to learn any semantics.
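A rough sketch of how one might reproduce that kind of mode-change evaluation with current ALE bindings. The exact mode ids and the available modes are game-specific, the policy is assumed to be trained elsewhere, and all names below are placeholders.

import gymnasium as gym
import ale_py

gym.register_envs(ale_py)  # needed on recent gymnasium versions; older ones register on import

def average_return(env, policy, episodes=10):
    """Evaluate a frozen policy (a callable obs -> action) for a few episodes."""
    total = 0.0
    for _ in range(episodes):
        obs, _ = env.reset()
        done = False
        while not done:
            obs, reward, terminated, truncated, _ = env.step(policy(obs))
            total += reward
            done = terminated or truncated
    return total / episodes

default_env = gym.make("ALE/Pong-v5")            # the mode the agent was trained on
shifted_env = gym.make("ALE/Pong-v5", mode=1)    # placeholder mode id for the variant
# score_default = average_return(default_env, trained_policy)
# score_shifted = average_return(shifted_env, trained_policy)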
There is one more surprising thing, which you also mentioned in that other work: if you move the paddle in Breakout up a little bit, it breaks. That particular detail is something I just learned from Gary this morning. But the second issue was this: after changing the game by making the opponent's paddle smaller, the amount of training the DQN needed to recover was almost the same as training from scratch. So it couldn't adapt quickly. Coming back to the use of models: it took our generative model about 3,000 samples to adapt to this new domain, while the DQN agent needed 2 or 3 million samples to adapt again. So the dynamics model adapted very quickly, but the Q network adapted slowly. Yes. So what I'm advocating is this: if you want to use model-free and model-based approaches together, we might not gain much, at least not easily, by aiming directly at maximizing return. But if we want safety, or adaptation, models help a lot. Say I'm interacting with an environment with some reward function and I move to another environment where the dynamics stay the same but the reward changes: with a purely model-free approach I have to redo the whole thing, while with a model-based approach I only need to relearn the reward function. For adaptation to new domains, for transfer learning in general, for safety, and for many other things, the model gives us information beyond the Q function, and that is where models can really pay off. You might ask whether I am working on this type of research myself: no, because it costs a lot and I don't have that much money to spend on GPUs. But if people out there want to combine model-based and model-free methods, please read this paper carefully, and go after safety, adaptation, changing dynamics; those problems are really important. If you learn a model, you can end up with better semantics of the environment. But if you aim directly at maximizing return, this paper shows it's not that easy, at least; I don't currently see a way to combine model-based and model-free to improve sample complexity, and again, no one has shown that model-free is less sample efficient. That's a really interesting point. I'm just naturally drawn to model-based methods, and I've never liked model-free. If I really ask myself why, it's because you can't reuse any component, like you said, and it seems like a waste of compute: if you learn a policy and anything changes, you have to throw it out. Whereas when you build a model — take GPT-2, for example, the language model from OpenAI — enormous compute went into building the original model, but then it's very cheap to fine-tune it for different tasks. To me that's an approach to AI that makes sense: massive compute goes into reusable artifacts, and then only a small amount of compute is needed to customize them.
Otherwise we're just burning cycles over and over without much gain. I also really like planning; I spent some time on planning algorithms long ago, and planning can be so efficient, but to plan you need a model. So these things should definitely work together. Yeah. One interesting aspect of this model-based idea is that if you manage to learn a model across, say, all the Atari games, you end up with a representation. In vision we reuse the first few layers of a model trained on ImageNet for many different tasks, but we don't do that in RL. If you learn a model, hopefully you get a representation you can reuse across problems, or transfer between games, so instead of redoing the learning from scratch for every problem, every paper, every project, you take that representation and build on top of it. That's what we did for BDQN. Can you explain that connection to BDQN again? In BDQN — and after BDQN I had another piece of work on linear bandits — we say: one of the great benefits deep learning provides is the representation layer. I can learn a representation of the game frame such that my Q function is a linear function of that representation, and then I can deploy everything we know about linear models on top of it. Right, I follow you now. Think of DQN: you have several convolutional layers and then linear layers on top, and the Q function at the end is a linear transformation of the feature representation underneath. If you know how to deal with Q functions that are linear in a given feature representation, you can apply those techniques in settings where the representation comes from a deep network. But that last-layer representation depends on the policy, right? It depends on how the network got there; it isn't just summarizing the game, it's also summarizing the behavior of this agent. Yes, it does. So how could we ever separate those to get something general? We cannot, at least not yet; maybe wait a year or two for me or my colleagues to prove that we can. I'm not working on that right now, but I intend to. What I'm advocating with BDQN is this: imagine I hand you a feature representation for an RL problem, I tell you the optimal Q function is linear in this representation, and I tell you the representation is fixed, so you are not learning it. Now I ask you: given this knowledge, design an algorithm that does efficient exploration and exploitation. That's what you did, right? Yes. What we did was answer the question: given a fixed feature representation, how should an RL agent use it? We didn't really know how to do that for general problems before this work.
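A minimal sketch of the idea being described: Bayesian linear regression on a frozen feature map, with Thompson sampling over the last layer. This is illustrative, not the paper's exact implementation, and the feature function phi(s) is assumed to come from a frozen DQN-style network.

import numpy as np

class BayesianLinearQ:
    """Per-action Bayesian linear regression on fixed features phi(s),
    with Thompson sampling for exploration."""
    def __init__(self, feature_dim, n_actions, prior_var=1.0, noise_var=1.0):
        self.noise_var = noise_var
        self.n_actions = n_actions
        # Gaussian posterior over each action's linear Q-head weights.
        self.precision = [np.eye(feature_dim) / prior_var for _ in range(n_actions)]
        self.b = [np.zeros(feature_dim) for _ in range(n_actions)]

    def sample_weights(self):
        ws = []
        for a in range(self.n_actions):
            cov = np.linalg.inv(self.precision[a])
            mean = cov @ self.b[a]
            ws.append(np.random.multivariate_normal(mean, cov))
        return ws

    def act(self, phi_s):
        # Thompson sampling: draw one plausible Q function, act greedily w.r.t. it.
        ws = self.sample_weights()
        return int(np.argmax([w @ phi_s for w in ws]))

    def update(self, phi_s, action, target):
        # target is a regression target for Q(s, a), e.g. r + gamma * max_a' Q(s', a').
        self.precision[action] += np.outer(phi_s, phi_s) / self.noise_var
        self.b[action] += phi_s * target / self.noise_var

The alternating scheme described next would periodically refit the deep feature map and rerun this Bayesian linear layer on top of the refreshed features.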
In this work the action space can be anything as long as it's compact: it can be continuous, it can be infinite, it can be large. The state space likewise has to be compact, but it can be continuous, finite, or infinite, so the setting is quite general. We showed that if you give me the feature representation and tell me the optimal Q function is linear in it, we can build an efficient exploration-exploitation algorithm with a reasonable regret bound. The current regret bound is not amazing — everything is nice except some bad dependence on the horizon of the game, which we're still working on, and I think that comes from the analysis needing to be tightened rather than the algorithm being wrong — but we showed the regret grows like the square root of the number of interactions, the number of episodes in the game. To me that was a big step, because now I understand much better how to do exploration and exploitation in model-free reinforcement learning when the optimal Q function is linear in some feature representation. We also showed how to do this with optimism, and how to do it with Thompson sampling. As I said, in practice no one hands you that feature representation, unless someday there is a universal learned representation, which we don't have yet, though some of my friends are working on it. So if I don't have a good feature representation, what can I do? I can pretend the representation I have is fine, apply this algorithm, which I've shown is good for a fixed representation, and get a better policy; then use that policy to explore a good part of the state space and learn a better feature representation. It's like alternating maximization: fix the features and learn a good Q function, then fix that Q function's policy and learn better features, and the two keep improving each other. Theoretically I have not shown that this alternation is guaranteed to work — give it a year or two, either I'll do it or my colleagues will — but based on what I've learned about deep learning over the last year, I believe it does converge in practice. They did something like that in the model-based RL for Atari paper from Kaiser et al.: a loop where they keep re-learning the model, then learning the policy, then evaluating, then re-learning the model. So maybe that loop is the same kind of thing. Even in the GATS work, as we talked about earlier, when you update your policy you end up in a different part of the state space, collect samples there, and use those samples to update your model, so the model keeps changing over time as well. Just trying to connect some of these pieces together: your BDQN paper looks at the uncertainty in the Q values specifically, but we also have uncertainty in the model, in parts of the state space where we're not really sure what the next state is going to be. Is GATS maybe not modeling that uncertainty? Is that right? Well, in GATS the aim was not to provide a better exploration-exploitation algorithm.
In GATS, the aim at the outset was to come up with a better policy, no matter how much it cost in time or compute; I had caps on both, but sample complexity was not the main concern. In BDQN, sample complexity was everything; it was the driving force of that work. In GATS, we were saying that even if your model has no uncertainty at all, the approach still falls short with shallow rollouts. But suppose we did want GATS to use uncertainty over the model. There is actually a section of the GATS paper that shows how to do that, and I like it a lot, because we train the generative model with a Wasserstein objective. The Wasserstein distance gives us a distance between two distributions: the distribution of real transitions and the distribution my generative model produces, which is exactly the model mismatch. What the discriminator, the critic, does in this setup is look at the real transition and the generator's output and estimate how far apart they are, not in a metric on frames but in the Wasserstein sense between distributions. You can treat that as model uncertainty. We observed that for parts of the state space the agent has visited many times, the model learns to reproduce those transitions and the distance is really low; but for parts of the state space the agent has visited only a few times, the Wasserstein distance is large. If you add this quantity to the reward, you encourage the agent to visit not only states with high reward but also states where the model is uncertain, that is, where the distance between the two distributions is large. But doesn't that require you to have the real sample to compare against the generated one? Do you have real samples for every part of the state space? When you reach a state, you ask your learned model what the next frame will be, you take your action, you observe the real next frame, and you compare the two right then; that comparison becomes the implicit reward you add to the real reward. A curiosity reward? Well, I wouldn't call it curiosity; it's a bonus reflecting how uncertain you are. From the theoretical point of view we don't call it curiosity, we call it concentration of measure, and that is a different thing, but intuitively, yes, you encourage the agent to go to places where the distance is high. But just to look at that a little more: you can only check the Wasserstein distance against the very next sample. You can't grow your tree down ten levels and say, that node at the tenth level, I have high uncertainty about it; you can't know that until you get there. Right: at any state, after making a decision you move to a new state, and for that one transition you can see how uncertain you are. Just for one level; the uncertainty only carries one level. But if I treat that uncertainty as the reward at that point, I can then train another DQN purely on this intrinsic reward.
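Loosely, the recipe being described might look like the sketch below, under the assumption that `critic` is the Wasserstein-GAN critic trained alongside the generator; all names are placeholders.

def intrinsic_bonus(critic, generator, last_frames, action, noise, real_next_frame, scale=1.0):
    """Use the WGAN critic's score gap between the real and the generated next
    frame as a rough per-transition uncertainty signal (critic returns a scalar)."""
    fake_next_frame = generator(last_frames, action, noise)
    # Large gap  -> the model has rarely seen this part of the state space.
    # Small gap  -> the model predicts this transition well.
    return scale * abs(critic(real_next_frame) - critic(fake_next_frame))

# The combined learning signal could then be, e.g.:
#   r_total = r_env + beta * intrinsic_bonus(...)
# or, as described here, a second Q network can be trained on the bonus alone
# and used to steer exploration toward states where the model is uncertain.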
So what happens is: if I am uncertain about some state that is 20 steps ahead, this second DQN, trained on the intrinsic, implicit reward, will guide me there. So you basically learn another DQN, but this one is trained on a reward based on the uncertainty you get from the Wasserstein critic. I mean, Schmidhuber wrote a paper about this, about agents with an intrinsic drive he called curiosity that would explore an environment and learn its dynamics. I was a little off on the date: it was actually 1991, and the paper is called Curious Model-Building Control Systems, by Schmidhuber. You'll find links to this paper and all the others we mention in the show notes at talkrl.com. So there are two things. First, if you look at a paper called UCRL2, upper confidence bound reinforcement learning, from 2010 by a set of amazing theorists — Thomas Jaksch, Ronald Ortner, and Peter Auer — they showed how to use uncertainty, how that uncertainty should affect your value estimates, and they proved one of the first regret bounds of this kind, maybe the first, maybe not, but it was the grounding work on regret analysis for tabular MDPs. In my work I follow essentially the recipe they provide, which is provably correct, except that in my case the uncertainty is a Wasserstein distance, or more precisely a concentration-of-measure argument in that metric. So if you're interested in using uncertainty, that's another nice idea: if you want to combine model-based and model-free reinforcement learning, you can use the model to estimate uncertainty for you, in this case a Wasserstein distance, and if you know how to turn that uncertainty into exploration, you know which algorithm to use. That's yet another use of model-based approaches: they can compute uncertainty for you. So I'm kind of a theorist-practitioner, mostly a theorist, and I was trying to be helpful to this community, to help people direct their research, their philosophy, the type of problems they work on, and how critical they should be of their own work. I would love to share these things with other people, especially junior people, so they get a better understanding of what they're dealing with. RL is a hard field, and if you commit to it, you need to be careful. So can I ask you: how do you see RL looking, say, three years or ten years from now? Will it be completely different, or just a little better than it is now? How do you see it evolving? One thing that is happening now, and will keep growing, is that empirical studies are going to become more realistic: moving from Atari games and MuJoCo toward real-world problems, trying to actually solve problems we need solved. That's where the empirical side is heading. On the theory side, things are advancing a lot; just in the last few weeks there have been many works on policy gradient methods that I've been reading, and I have work on this topic as well.
Theory is going to advance the field a lot, and we will still have plenty of people working on the principles, on a first-level understanding of these problems, using testbeds like Atari games or grid worlds to get a better understanding. But in ten years I expect a lot of RL contributions to show up in genuinely realistic, real-world problems. That's my take. What are you excited about working on? Looking forward, what do you plan to focus on over the next few years? For the last two or three years I've been focusing a lot on the empirical side of reinforcement learning; before that I was doing theory essentially full time, and recently I've been doing both. I think I've absorbed a good chunk of experience from interacting with practitioners and working on these problems, and now I can spend more time developing the theoretical understanding of them. One thing I'm excited about is taking the current principles and understanding we have in reinforcement learning and adapting them to problems of more immediate importance, like self-driving cars and healthcare. I'm really excited about using these methods in healthcare, which forces me to redesign or rewrite many of the principles, because healthcare problems are quite different from the problem formulations we have been studying so far. When I start as a faculty member next year, a large part of my group will be devoted to healthcare problems, less on the purely empirical side and more from the theoretical point of view, providing a deeper understanding of what the right formulation should look like. I'm also interested in another area we haven't worked on much in reinforcement learning: control theory. Control theory has been developed over many, many years, but the statistical understanding of empirical processes, the tools that came out over the last twenty or thirty years, haven't been incorporated that deeply into it. There are many great control theorists who built that field, but those newer tools haven't been folded in to produce controllers that are efficient in that sense. So another part of my future research will be on providing a better understanding in control theory, or more generally in adaptive control, which — if I say it's basically reinforcement learning I might make some people mad, but that's more or less the case. That sounds amazing; I can't wait to read all about it. I think our time has come to a close. Dr. Azizzadenesheli, thank you so much for your time today. I've learned so much from talking with you and from reading your work, and I'm sure I'll learn more from your future work. Thank you so much for sharing your insight and your time with all of us today. My pleasure, and thank you so much for having me today. That's our episode for today, folks. Be sure to check talkrl.com for more great episodes.
[ { "end": 12.8, "start": 0, "text": " This is TalkArail Podcast, all reinforcement learning, all the time." }, { "end": 15.8, "start": 12.8, "text": " Interviews at Brilliant Folks across the world of RL." }, { "end": 21.2, "start": 15.8, "text": " I'm your host, Rob and Chauhan." }, { "end": 25.560000000000002, "start": 21.2, "text": " Dr. Kamjar Azizadena Shelley is a postdoctoral scholar at Caltech." }, { "end": 30.88, "start": 25.56, "text": " He will be joining Purdue University as an Assistant CS professor in Fall 2020." }, { "end": 33.72, "start": 30.88, "text": " Dr. Azizadena Shelley, thank you so much for joining me today." }, { "end": 35.76, "start": 33.72, "text": " Thanks, Rob and for having me today." }, { "end": 38.76, "start": 35.76, "text": " So you have a lot of really great papers." }, { "end": 41.84, "start": 38.76, "text": " We just chose three to focus on today." }, { "end": 47.92, "start": 41.84, "text": " That is, efficient exploration through Bayesian deep-queue networks, surprising negative results" }, { "end": 54.04, "start": 47.92, "text": " for generative adversarial research, and maybe a few considerations in reinforcement learning" }, { "end": 55.04, "start": 54.04, "text": " research." }, { "end": 64.28, "start": 55.04, "text": " So before this interview, I got to hear a podcast interview you did actually a year ago" }, { "end": 67.52, "start": 64.28, "text": " on Twimmel AI podcast." }, { "end": 70.88, "start": 67.52, "text": " You also touched on two of these papers during that podcast." }, { "end": 73.8, "start": 70.88, "text": " I really enjoyed hearing your interview." }, { "end": 75.8, "start": 73.8, "text": " I learned a lot from that." }, { "end": 81.96000000000001, "start": 75.8, "text": " I would just say for the listeners, this podcast is a little bit different because what I want" }, { "end": 88.08, "start": 81.96, "text": " to try to be more in touch with the research, try to read some of the papers, the relevant" }, { "end": 90.11999999999999, "start": 88.08, "text": " papers before the interview." }, { "end": 92.08, "start": 90.11999999999999, "text": " So we can have a little more deeper discussion." }, { "end": 94.24, "start": 92.08, "text": " Yeah, that sounds great to me." }, { "end": 101.19999999999999, "start": 94.24, "text": " And since last year, there have been many more research coming out from different labs," }, { "end": 105.52, "start": 101.19999999999999, "text": " related to these topics, that would be great to talk about those as well." }, { "end": 110.8, "start": 105.52, "text": " Yeah, I mean, just in general, I think the amount of RL research being published is really" }, { "end": 112.52, "start": 110.8, "text": " getting out of control." }, { "end": 117.12, "start": 112.52, "text": " Like I've seen some of these charts in terms of the topics that are covered at the major" }, { "end": 122, "start": 117.12, "text": " ML conferences, and RL is just shooting up exponentially." }, { "end": 126.16, "start": 122, "text": " How do we keep track of what's happening in the field when so much research is coming" }, { "end": 127.16, "start": 126.16, "text": " out?" }, { "end": 135.4, "start": 127.16, "text": " Well, that's a really, really great question that makes the progress a little bit harder" }, { "end": 143.76, "start": 135.4, "text": " for people, I mean, the researchers in the field because we can't keep track of many works." 
}, { "end": 151.6, "start": 143.76, "text": " But at the same time, there are many good bills and like great people, they try to abstract" }, { "end": 158.4, "start": 151.6, "text": " out those, like they try to provide the abstract of the papers they read and put it online." }, { "end": 163.16, "start": 158.4, "text": " And stuff you reading those papers, you probably can read those abstracts." }, { "end": 168.6, "start": 163.16, "text": " And you see that they're interesting to you, you would go and read them, obviously," }, { "end": 174.76, "start": 168.6, "text": " this is not going to be a good, it's not going to be optimal way of handling the situation," }, { "end": 181, "start": 174.76, "text": " but it's going to be at least something better than doing nothing and ignoring many papers." }, { "end": 188.6, "start": 181, "text": " But to be honest, I also have a hard time, like keep tracking many papers and I have a list" }, { "end": 194.32, "start": 188.6, "text": " of papers that I need to read and this list doesn't go down, it just keeps increasing" }, { "end": 197.2, "start": 194.32, "text": " and like now it's becoming more than 100." }, { "end": 205.12, "start": 197.2, "text": " And it's a problem that we don't, I just, I don't know a good solution to it, but there" }, { "end": 210.92, "start": 205.12, "text": " is a remedy which is like great people, they put abstracts of those works out there and" }, { "end": 213, "start": 210.92, "text": " I just look at those sometimes." }, { "end": 221.64, "start": 213, "text": " For my friends, I have thankfully during my PhD I made a ton of great friends mainly" }, { "end": 227.92, "start": 221.64, "text": " in theoretical aspect of she learning and also in practical and among practitioners." }, { "end": 233.32, "start": 227.92, "text": " And they also are really nice to find, they kindly let me know what paper I should read" }, { "end": 238.24, "start": 233.32, "text": " and they start like really great things, but we don't know how to, I personally don't" }, { "end": 242.96, "start": 238.24, "text": " know how to solve this issue of like exploding number of papers out there." }, { "end": 249.4, "start": 242.96, "text": " So I'm, it seems to me that some of these papers we can safely ignore and other papers that" }, { "end": 253.60000000000002, "start": 249.4, "text": " really change your perspective when you just when you even read them for the first time." }, { "end": 261.32, "start": 253.60000000000002, "text": " Yeah, I quite agree with you and this is one of the sections in the paper I wrote, I" }, { "end": 266.56, "start": 261.32, "text": " mean the position paper I wrote that you mentioned as a third paper, we are going to talk" }, { "end": 270.76, "start": 266.56, "text": " about maybe a few consideration in reinforcement learning research." }, { "end": 278.15999999999997, "start": 270.76, "text": " Well in that section I do not, I don't talk about how we can deal with many, many papers," }, { "end": 285.76, "start": 278.15999999999997, "text": " but I say we can reduce the number of papers you write because of many reasons as we all" }, { "end": 292.44, "start": 285.76, "text": " know reinforcement learning is a field of research that is really, really expensive in the" }, { "end": 298.15999999999997, "start": 292.44, "text": " sense of intellectuality and time and money cost everything." 
}, { "end": 303.44, "start": 298.16, "text": " If you want to publish a reinforcement learning paper or you want to have a contribution" }, { "end": 309.96000000000004, "start": 303.44, "text": " in the field, it's even a simple theoretical like innovation takes a lot of time from you." }, { "end": 315, "start": 309.96000000000004, "text": " If you are a serious, you might take half a year from you to analyze and provide a great" }, { "end": 319.76000000000005, "start": 315, "text": " understanding for an RL algorithm." }, { "end": 325, "start": 319.76000000000005, "text": " Even if you are a practitioner, you might take you like, or again half a year to do the" }, { "end": 333.2, "start": 325, "text": " empirical study, but it is not quite the case for like other topics and fields in machine" }, { "end": 335, "start": 333.2, "text": " learning, like supervised learning." }, { "end": 341.72, "start": 335, "text": " I personally wrote a really interesting, from my point of view, paper in supervised learning" }, { "end": 347.8, "start": 341.72, "text": " and doing a patient that I spent one weekend on it to write the whole paper and also the" }, { "end": 349.4, "start": 347.8, "text": " theoretical analysis of it." }, { "end": 354.4, "start": 349.4, "text": " And also my friends after that helped me to do empirical study for that." }, { "end": 358.91999999999996, "start": 354.4, "text": " By the whole process, they didn't take like a lot of time, but at the same time I was" }, { "end": 364.4, "start": 358.91999999999996, "text": " working on another work paper and reinforcement learning that took me like half a year to" }, { "end": 369.64, "start": 364.4, "text": " provide really good satisfying understanding of what is happening." }, { "end": 376.71999999999997, "start": 369.64, "text": " So since reinforcement learning is quite costly and we don't have that many researchers" }, { "end": 380.08, "start": 376.71999999999997, "text": " working on reinforcement learning despite the fact that you have set the amenities for" }, { "end": 385.84, "start": 380.08, "text": " working on it, we still need more, but given the fact that we have this few people working" }, { "end": 390.08, "start": 385.84, "text": " on reinforcement learning compared to the number of people we need to work on reinforcement" }, { "end": 394.03999999999996, "start": 390.08, "text": " learning, it's better to manage what you want to do, whether the problems you want to solve" }, { "end": 401.32, "start": 394.03999999999996, "text": " and make sure that if you have an idea, we think a lot before like, definitely, if we're" }, { "end": 408.32, "start": 401.32, "text": " going to do empirical study, we should think a little more and design some hypotheses for" }, { "end": 413.96, "start": 408.32, "text": " ourselves and test them by ourselves before like running extensive empirical study." }, { "end": 420.4, "start": 413.96, "text": " I can give you many examples of existing oral, even like general oral algorithms that" }, { "end": 427.84, "start": 420.4, "text": " they fail easily and miserable for two-state MDP and two actions, but the authors could" }, { "end": 433, "start": 427.84, "text": " avoid doing it by spending a little time to really evaluate their algorithms." 
}, { "end": 441.12, "start": 433, "text": " Yeah, so my general point is like, we need to spend more time on thinking deeper in reinforcement" }, { "end": 447.48, "start": 441.12, "text": " learning research and since we have limited budgets and by budget I mean like human time," }, { "end": 456.68, "start": 447.48, "text": " it's better to cooperate and collaborate with the authors and get a more direct development" }, { "end": 457.68, "start": 456.68, "text": " in the field." }, { "end": 463.8, "start": 457.68, "text": " Okay, so maybe getting a bit more into that, I've noticed in machine learning there's" }, { "end": 470.68, "start": 463.8, "text": " many different formats of papers like some authors really have a clear hypothesis when" }, { "end": 478.04, "start": 470.68, "text": " they start and some don't and in general, what do you think really makes an excellent" }, { "end": 479.72, "start": 478.04, "text": " paper in RL?" }, { "end": 482.24, "start": 479.72, "text": " What needs to be there to make an excellent paper?" }, { "end": 490.96000000000004, "start": 482.24, "text": " From an excellent paper, from my point of view, a paper which has something in it that" }, { "end": 496.52, "start": 490.96000000000004, "text": " when I understand I am like, or I learn about it, I would be like excited." }, { "end": 504.8, "start": 496.52, "text": " If the paper is a theoretical paper, it talks about the theory of RL, excellent." }, { "end": 509.32, "start": 504.8, "text": " They provide a really great understanding of some algorithms." }, { "end": 510.52, "start": 509.32, "text": " I would be excited." }, { "end": 514.64, "start": 510.52, "text": " If it's a scientific paper in the sense that they put out some hypotheses that I care" }, { "end": 521.1999999999999, "start": 514.64, "text": " about and they test the hypothesis that by testing I don't mean like running one experiment," }, { "end": 524.76, "start": 521.1999999999999, "text": " it's like extensively testing the hypothesis." }, { "end": 534.0799999999999, "start": 524.76, "text": " When you test the hypothesis and practically you need to try a variety of different settings" }, { "end": 538.24, "start": 534.0799999999999, "text": " and make sure that you literally test that hypothesis." }, { "end": 544.52, "start": 538.24, "text": " It's not just saying, I did one experiment and something happened, so I tested my hypothesis." }, { "end": 545.84, "start": 544.52, "text": " That's not a good thing." }, { "end": 551.04, "start": 545.84, "text": " But if you put out a hypothesis and test it, that would be really great to me." }, { "end": 556, "start": 551.04, "text": " The third thing as you said is, let's imagine a paper does not have any hypothesis, but" }, { "end": 563.72, "start": 556, "text": " what it does is test or it like reports a set of great empirical study and put them all" }, { "end": 564.72, "start": 563.72, "text": " out." }, { "end": 568.2, "start": 564.72, "text": " Again, just tell me, hey, we ran this one." }, { "end": 573.44, "start": 568.2, "text": " That's the thing they ran or the empirical study that they provided." }, { "end": 578, "start": 573.44, "text": " The thing I was thinking maybe would be interesting to see what would happen." }, { "end": 585.6400000000001, "start": 578, "text": " If they provided a set of experimental studies, that would be really interesting." }, { "end": 589.76, "start": 585.6400000000001, "text": " But if the paper, this is it." 
}, { "end": 594.48, "start": 589.76, "text": " Since we reinforce my learning, we don't know that much." }, { "end": 598.12, "start": 594.48, "text": " It hasn't been extensively studied." }, { "end": 601.2, "start": 598.12, "text": " Take a paper for why it is great understanding what is happening." }, { "end": 603.2, "start": 601.2, "text": " I would love to read that paper." }, { "end": 610.36, "start": 603.2, "text": " If the paper says, okay, we did this trick and we outperformed something, I wouldn't care" }, { "end": 611.36, "start": 610.36, "text": " much." }, { "end": 622.04, "start": 611.36, "text": " But I also like that paper because it adds non-negative onto information to me." }, { "end": 626.4, "start": 622.04, "text": " But the goal is to provide a better understanding." }, { "end": 629.4, "start": 626.4, "text": " I would love it more." }, { "end": 637.56, "start": 629.4, "text": " But if it's just climbing the ladder of like leaderboard in the scores, whatever that means," }, { "end": 645.4399999999999, "start": 637.56, "text": " I don't even know what this term means in RL, which I talked about in that position paper," }, { "end": 651.8, "start": 645.4399999999999, "text": " saying what does it mean to climb the ladder of leaderboard in RL." }, { "end": 657.12, "start": 651.8, "text": " The paper does not provide that much understanding or the empirical say is not exciting." }, { "end": 660.8399999999999, "start": 657.12, "text": " I wouldn't be excited about that paper." }, { "end": 663.04, "start": 660.8399999999999, "text": " So you're not against empirical papers?" }, { "end": 665.88, "start": 663.04, "text": " Oh, I love empirical papers." }, { "end": 676.64, "start": 665.88, "text": " One of the reasons that RL has been a center of attention for many, many young old like" }, { "end": 681.76, "start": 676.64, "text": " prestigious and junior researcher in last few years was the..." }, { "end": 685.84, "start": 681.76, "text": " I'm excited about empirical studies." }, { "end": 693.04, "start": 685.84, "text": " When these empirical studies were out, let's do this way." }, { "end": 698.04, "start": 693.04, "text": " So we can push science and theory." }, { "end": 703.3199999999999, "start": 698.04, "text": " We can say, okay, practitioners, you should not do anything before the theory proves what" }, { "end": 704.3199999999999, "start": 703.3199999999999, "text": " we should do." }, { "end": 712.08, "start": 704.32, "text": " But then this way, since the theory is with horror, it takes a lot of time to make some" }, { "end": 713.08, "start": 712.08, "text": " products." }, { "end": 717.84, "start": 713.08, "text": " But if you ask practitioners, hey, I'm also harshly practitioner." }, { "end": 720.9200000000001, "start": 717.84, "text": " I'm not excluding myself from that community." }, { "end": 722.5600000000001, "start": 720.9200000000001, "text": " I love that community." }, { "end": 729.72, "start": 722.5600000000001, "text": " And if practitioners, they start putting intuition and build intuition and provide amazing" }, { "end": 738, "start": 729.72, "text": " set of empirical studies, then theorists would gain from this information and try to..." }, { "end": 741.1600000000001, "start": 738, "text": " Based on what they observed, they try to analyze." }, { "end": 746.9200000000001, "start": 741.1600000000001, "text": " It just gives them better clue how to analyze a problem and what would go wrong." 
}, { "end": 755.8000000000001, "start": 746.9200000000001, "text": " Okay, so like the empirical study, the results coming out of the practitioners is wonderfully" }, { "end": 756.8000000000001, "start": 755.8000000000001, "text": " appreciated." }, { "end": 758.4, "start": 756.8000000000001, "text": " I love them all." }, { "end": 760.72, "start": 758.4, "text": " But of course, as you said, there are some papers." }, { "end": 766.4, "start": 760.72, "text": " They don't follow the scientific tradition." }, { "end": 769.1999999999999, "start": 766.4, "text": " And they do some empirical study." }, { "end": 777.88, "start": 769.1999999999999, "text": " They spend like $50,000 or $100,000 on empirical study that you can easily say, well, it" }, { "end": 784.3199999999999, "start": 777.88, "text": " wasn't quite wrong thing or idea was like flawed." }, { "end": 787.4, "start": 784.3199999999999, "text": " We're just looking at the idea." }, { "end": 789.64, "start": 787.4, "text": " Those are like, that's what happens." }, { "end": 790.64, "start": 789.64, "text": " Those things that happen." }, { "end": 797.24, "start": 790.64, "text": " You can't stop a field because some research at some point, they made some mistakes." }, { "end": 799.16, "start": 797.24, "text": " I've been all making mistakes." }, { "end": 804.12, "start": 799.16, "text": " Even in theory, if I call myself a theorist, I also make mistakes." }, { "end": 805.64, "start": 804.12, "text": " Oiler made many mistakes." }, { "end": 808.9599999999999, "start": 805.64, "text": " Like Fourier in his theorem, he made mistakes." }, { "end": 809.9599999999999, "start": 808.9599999999999, "text": " We all make mistakes." }, { "end": 814.6, "start": 809.9599999999999, "text": " But over time, we build on top of our top researchers and make products." }, { "end": 820.4, "start": 814.6, "text": " So one thing I've noticed in RL is like, many papers are just looking at one small aspect." }, { "end": 827.48, "start": 820.4, "text": " And it's hard for me to tell at this point, what would be a cutting edge agent if we combine" }, { "end": 831.36, "start": 827.48, "text": " all the insights across all the different state of our results?" }, { "end": 834.8000000000001, "start": 831.36, "text": " What is the real state of the art for a complete agent right now?" }, { "end": 836.2, "start": 834.8000000000001, "text": " Once in a while, we get a paper like that." }, { "end": 841.6, "start": 836.2, "text": " Like I'm thinking like the rainbow paper, maybe as an example of that, it wasn't really" }, { "end": 845.6, "start": 841.6, "text": " a unique thing except for just combining things that already existed." }, { "end": 848.2, "start": 845.6, "text": " Rainbow was, well, I like it in some sense." }, { "end": 850.84, "start": 848.2, "text": " It put like many, many efforts all together." }, { "end": 855.0400000000001, "start": 850.84, "text": " So all together to see what is the output." }, { "end": 859.88, "start": 855.0400000000001, "text": " The thing is, if you look at the work, most of the work before Rainbow or like an after" }, { "end": 865.64, "start": 859.88, "text": " Rainbow, I mean, even my works, when I wrote the BDQ and paper, I was like, okay, we have" }, { "end": 873.4, "start": 865.64, "text": " DQN or BDQN and they do have some great for exploration and exploitation to do like sample" }, { "end": 876.4399999999999, "start": 873.4, "text": " efficient interaction with them one." 
}, { "end": 884.72, "start": 876.4399999999999, "text": " And I was like, okay, let's design a method which does a smart way of doing exploration" }, { "end": 885.72, "start": 884.72, "text": " exploitation." }, { "end": 891.72, "start": 885.72, "text": " And then I propose a model which we call it BDQN and we show that like if you do exploration" }, { "end": 899, "start": 891.72, "text": " better compared to DQN or DQN, it's going to perform very well." }, { "end": 905.72, "start": 899, "text": " You would see it like a colleague of mine says, okay, I have DQN, let's study the effect" }, { "end": 912.8000000000001, "start": 905.72, "text": " of like the distribution of samples we gather in the replay buffer." }, { "end": 918.72, "start": 912.8000000000001, "text": " You would see another or colleague of mine would say, okay, let's see if the adregolarization" }, { "end": 921.24, "start": 918.72, "text": " to the objective function, what would happen." }, { "end": 928.44, "start": 921.24, "text": " So we studied the effect of different components in reinforcement learning algorithm and we all" }, { "end": 935.28, "start": 928.44, "text": " compare with DQN or DQN mainly because we are trying to provide a better understanding." }, { "end": 943.32, "start": 935.28, "text": " My work is like in the same BDQN, I repeated many times that by no mean I'm comparing the" }, { "end": 948.5600000000001, "start": 943.32, "text": " performance of BDQN or DQN or performance of any algorithm with the algorithm I'm proposing." }, { "end": 953.1999999999999, "start": 948.56, "text": " I'm not doing any comparison or I'm not just saying, I'm not even comparing any numbers." }, { "end": 959.2399999999999, "start": 953.1999999999999, "text": " I'm just trying to provide a better understanding of the algorithm design, algorithm design for" }, { "end": 963, "start": 959.2399999999999, "text": " reinforcement learning or math or problem." }, { "end": 968.88, "start": 963, "text": " So we have many many great researchers, they try to provide a better understanding of different" }, { "end": 977, "start": 968.88, "text": " components of the original learning algorithm and to see what are the contributions and" }, { "end": 981.28, "start": 977, "text": " whether they are actually helping, whether they don't help, if they help how much they" }, { "end": 986.28, "start": 981.28, "text": " help, if they don't help, if they degrade the performance, what would happen." }, { "end": 991.48, "start": 986.28, "text": " And then Rainbow put all these positive, I mean Rainbow I think did not include BDQN." }, { "end": 995.04, "start": 991.48, "text": " I think it came before BDQN." }, { "end": 1001, "start": 995.04, "text": " And they put a good part of like most of these existing algorithms all together and" }, { "end": 1005.92, "start": 1001, "text": " showed that hey, you said one researcher said if we change the replay buffer distribution" }, { "end": 1008.56, "start": 1005.92, "text": " is going to be, is going to improve the performance." }, { "end": 1013.68, "start": 1008.56, "text": " One researcher said if you're still using the one-step return, if you use like a lambda" }, { "end": 1016.76, "start": 1013.68, "text": " return or like few step return, it's going to improve the performance." }, { "end": 1020.92, "start": 1016.76, "text": " Let's put all these things together and see how much everything, all these things together" }, { "end": 1022, "start": 1020.92, "text": " is going to improve." 
}, { "end": 1030.12, "start": 1022, "text": " And it was a really cool, I'm very kind of steady and it's also useful for people or" }, { "end": 1032.24, "start": 1030.12, "text": " not researchers in the field." }, { "end": 1038.32, "start": 1032.24, "text": " If you are a, let's say if someone has a company in Bay Area and wants to use reinforcement" }, { "end": 1043.36, "start": 1038.32, "text": " learning and looks at my paper, it says okay, these papers do exploration." }, { "end": 1047.8, "start": 1043.36, "text": " The other papers change that component, the other papers change the other component." }, { "end": 1052.8, "start": 1047.8, "text": " But that person since the, it's coming from industry, it doesn't know about what is happening," }, { "end": 1055.36, "start": 1052.8, "text": " that person might choose one of these." }, { "end": 1059.56, "start": 1055.36, "text": " But Rainbow what it did was like putting all these together and say okay, if you can combine" }, { "end": 1064.36, "start": 1059.56, "text": " all these sources and components that we know they are going to help, you're going to" }, { "end": 1067.04, "start": 1064.36, "text": " get an algorithm which is actually going on a road table." }, { "end": 1070, "start": 1067.04, "text": " And if you're an industry person, you can use that algorithm." }, { "end": 1074.84, "start": 1070, "text": " It almost seems like Rainbow could be updated every year with whatever is the latest stuff." }, { "end": 1079.04, "start": 1074.84, "text": " Like I think Rainbow didn't have IQN in it, for example, the original Rainbow." }, { "end": 1086.04, "start": 1079.04, "text": " Yeah, I'm not sure about that, but like IQN has it or not." }, { "end": 1093.04, "start": 1086.04, "text": " But based on the conversation I had with some of the authors, their philosophy was yeah," }, { "end": 1098.3999999999999, "start": 1093.04, "text": " we are, it can be updated, but I don't think they want to update, but they want to, they" }, { "end": 1106.3999999999999, "start": 1098.3999999999999, "text": " want to, what I think, they want to show that hey, these are the, people made during" }, { "end": 1110.1599999999999, "start": 1106.3999999999999, "text": " that figure, if we put them all together, it's going to be this." }, { "end": 1115.08, "start": 1110.1599999999999, "text": " Of course, if there are many more advancements, if we put those all together, we're going" }, { "end": 1116.48, "start": 1115.08, "text": " to even improve more." }, { "end": 1122.96, "start": 1116.48, "text": " So it was a kind of proof of concept at least to me, which well, it's not, it doesn't have" }, { "end": 1129.36, "start": 1122.96, "text": " damage of scientific value compared to like this improvement is component, but it's amazing" }, { "end": 1133.1999999999998, "start": 1129.36, "text": " and cool work to have a proof of concept." }, { "end": 1138.12, "start": 1133.1999999999998, "text": " What would be your advice for someone doing, say, empirical research based on your position" }, { "end": 1139.12, "start": 1138.12, "text": " in this paper?" }, { "end": 1143.24, "start": 1139.12, "text": " Is it to be more in touch with the theory?" }, { "end": 1145.84, "start": 1143.24, "text": " No, I do not." }, { "end": 1150.52, "start": 1145.84, "text": " So some people they require that hey, every reinforcement learning paper should have" }, { "end": 1152, "start": 1150.52, "text": " theoretically call analysis." 
}, { "end": 1154.2, "start": 1152, "text": " I strongly against that." }, { "end": 1158.44, "start": 1154.2, "text": " It doesn't need to make a why." }, { "end": 1161.6, "start": 1158.44, "text": " We have theoretical work and we have scientific work." }, { "end": 1167.04, "start": 1161.6, "text": " Like if you have a scientific work, you don't need to provide theirits analysis for that." }, { "end": 1171.28, "start": 1167.04, "text": " If you provide, that would be amazing, but if you don't, it's totally fine." }, { "end": 1179.96, "start": 1171.28, "text": " And it should be also eased for those papers to be published." }, { "end": 1184.76, "start": 1179.96, "text": " If you put a requirement for papers, empirical papers to provide theoretical analysis and" }, { "end": 1190.92, "start": 1184.76, "text": " if they don't, then we are going to miss many great empirical studies." }, { "end": 1198.84, "start": 1190.92, "text": " But one thing that I really would love to see more is if I propose an algorithm for" }, { "end": 1204.6799999999998, "start": 1198.84, "text": " reinforcement learning from empirical, like it's not a theory of work, it's like empirical" }, { "end": 1207.08, "start": 1204.6799999999998, "text": " work or scientific work." }, { "end": 1215.56, "start": 1207.08, "text": " I would love to see how the what are the set settings that this algorithm breaks." }, { "end": 1223.76, "start": 1215.56, "text": " Let's say I propose a policy gradient algorithm, a new one, which is totally based on my intuition" }, { "end": 1232.64, "start": 1223.76, "text": " and there's no theory behind it, and I deployed on let's say, Mujuko, or this kind of" }, { "end": 1236.64, "start": 1232.64, "text": " probotic style environment." }, { "end": 1242.64, "start": 1236.64, "text": " And I run it and we know that this, for example, the environment with Jucco, you can change" }, { "end": 1247.6, "start": 1242.64, "text": " the dynamic of the system where you can change the cost function, you can manipulate all" }, { "end": 1248.6, "start": 1247.6, "text": " of this." }, { "end": 1258.84, "start": 1248.6, "text": " I mean people like try to do cherry planting and change the environment setup and make" }, { "end": 1263.4399999999998, "start": 1258.84, "text": " their algorithms work on this new setup and therefore you can beat everything else," }, { "end": 1265.8799999999999, "start": 1263.4399999999998, "text": " but again, whatever that means, the beating." }, { "end": 1271.08, "start": 1265.8799999999999, "text": " I would also love to see, I mean, this is amazing research and you showed me that if you change," }, { "end": 1275.6799999999998, "start": 1271.08, "text": " if the environment has this configuration, your algorithm is going to work very well." }, { "end": 1281.5600000000002, "start": 1275.68, "text": " I also want to see if how far you can go away from this configuration and still your" }, { "end": 1283.2, "start": 1281.5600000000002, "text": " algorithm doesn't break." }, { "end": 1289.44, "start": 1283.2, "text": " So if you propose an algorithm, you should show me where it works, where it breaks, and" }, { "end": 1291.24, "start": 1289.44, "text": " if it breaks, what do you think?" }, { "end": 1293.48, "start": 1291.24, "text": " Why it breaks?" }, { "end": 1300.68, "start": 1293.48, "text": " This is the kind of missing point in the literature and I do not blame the authors." 
}, { "end": 1306.24, "start": 1300.68, "text": " I blame the culture a little bit because if you add a negative result in your paper," }, { "end": 1310.6000000000001, "start": 1306.24, "text": " probably you get rejected, you don't get a chance to get accepted." }, { "end": 1314.8400000000001, "start": 1310.6000000000001, "text": " I can do more of the reasonable scientific contribution." }, { "end": 1319.5600000000002, "start": 1314.8400000000001, "text": " So this is one reason why I was surprised and excited to see your paper about the surprising" }, { "end": 1321.64, "start": 1319.5600000000002, "text": " negative results for GATs." }, { "end": 1326.3600000000001, "start": 1321.64, "text": " So you buck the trend here and you did publish a paper with a negative result." }, { "end": 1329.68, "start": 1326.3600000000001, "text": " Do you think that there should be more negative result papers?" }, { "end": 1335.3200000000002, "start": 1329.68, "text": " So this paper is not published because it's negative results." }, { "end": 1344, "start": 1335.3200000000002, "text": " People loved it but it was not fortunate enough to be published but I just put it on archive" }, { "end": 1352.96, "start": 1344, "text": " for people to use it and I saw that it had a great effect on the society and on our" }, { "end": 1354.48, "start": 1352.96, "text": " community." }, { "end": 1357.2, "start": 1354.48, "text": " But regarding your question, yes." }, { "end": 1364.76, "start": 1357.2, "text": " I would love to see negative results but not of course, you can call it negative results" }, { "end": 1367.88, "start": 1364.76, "text": " like every day, 20 billion of them." }, { "end": 1375.28, "start": 1367.88, "text": " But the negative results that actually it was a hypothesis that many people thought that" }, { "end": 1382.3600000000001, "start": 1375.28, "text": " it would work and it's an amazing idea to do and you tried it and it did not work." }, { "end": 1389.36, "start": 1382.36, "text": " For example, in this paper, my paper in surprise and negative results, I showed that if you" }, { "end": 1402.32, "start": 1389.36, "text": " use, if you try to learn the generative model of the environment, if I have a reinforcement" }, { "end": 1410.24, "start": 1402.32, "text": " learning problem and interacting with the environment, through this interaction, I get some rewards" }, { "end": 1417.48, "start": 1410.24, "text": " and I get a ton of transitions and going from one state to another state." }, { "end": 1423.68, "start": 1417.48, "text": " So these are, if you call, like, these transitions, all these interactions as unsupervised signals" }, { "end": 1429.32, "start": 1423.68, "text": " or data points, you can learn the dynamics of them." }, { "end": 1431, "start": 1429.32, "text": " Why am I using this ton of data?" }, { "end": 1437.64, "start": 1431, "text": " For example, if I interact with the Atari games and if I run it for 200 million steps," }, { "end": 1445.24, "start": 1437.64, "text": " like 5% or like 1% time, I might have, well, even less than that, I might get any reward." }, { "end": 1450.5600000000002, "start": 1445.24, "text": " But I get 200 million interaction or 50 million interaction with the environment." }, { "end": 1459.2, "start": 1450.5600000000002, "text": " So I could use a tiny portion of that data I gather to learn the dynamics of the environment." 
}, { "end": 1463.48, "start": 1459.2, "text": " In fact, in this paper, we showed that it's about a few thousand interaction you're able" }, { "end": 1468.64, "start": 1463.48, "text": " to learn the dynamics of the environment, opposed to a few hundred million." }, { "end": 1474.6, "start": 1468.64, "text": " But doesn't that depend on what part of the environment you're in, like an unskilled" }, { "end": 1477.24, "start": 1474.6, "text": " agent wouldn't be able to learn about the last level?" }, { "end": 1478.24, "start": 1477.24, "text": " Oh, absolutely." }, { "end": 1486.6, "start": 1478.24, "text": " You learn what happens is like, you run your agent with, like, not really good agent and" }, { "end": 1492.1200000000001, "start": 1486.6, "text": " you collect data from that environment and you're able to learn the dynamics of the environment" }, { "end": 1495, "start": 1492.12, "text": " and that part of the state space." }, { "end": 1499.9199999999998, "start": 1495, "text": " And then if you use that one, you can hopefully enhance your agent and the agent is going to" }, { "end": 1505, "start": 1499.9199999999998, "text": " discover new part of the state space and then you use that sample to learn the dynamics" }, { "end": 1506, "start": 1505, "text": " of the environment." }, { "end": 1507.8, "start": 1506, "text": " I absolutely agree with you." }, { "end": 1512.56, "start": 1507.8, "text": " It doesn't, by learning the dynamics of the mother, I mean, with regard to that measure" }, { "end": 1515.6399999999999, "start": 1512.56, "text": " induced by the agent." }, { "end": 1524.5200000000002, "start": 1515.64, "text": " And yeah, you can learn the dynamics of the environment and use ideas like, ideas used" }, { "end": 1532.5600000000002, "start": 1524.5200000000002, "text": " in AlphaGo, you just construct a tree, not on the real environment, but the environment" }, { "end": 1541, "start": 1532.5600000000002, "text": " you learn, like the agent, the mother you learn, and you find a good action in that state." }, { "end": 1549.52, "start": 1541, "text": " It can be tree, you can do policy gradient, whatever you do, you just construct, you just" }, { "end": 1553.04, "start": 1549.52, "text": " do your rollout in the agency model you use." }, { "end": 1557.56, "start": 1553.04, "text": " But of course, you do not rollout the agency model to like, few hundred thousand time" }, { "end": 1558.56, "start": 1557.56, "text": " steps." }, { "end": 1563.48, "start": 1558.56, "text": " You just roll out to, let's say, like, few times of 10 to 20 and use the learn, few functions" }, { "end": 1565.4, "start": 1563.48, "text": " so far in the least note." }, { "end": 1566.4, "start": 1565.4, "text": " What is that?" }, { "end": 1571.76, "start": 1566.4, "text": " It's an idea to me at the beginning when I thought about it, I was like, wait, this is" }, { "end": 1572.76, "start": 1571.76, "text": " really interesting." }, { "end": 1580.52, "start": 1572.76, "text": " And I was able to theoretically guarantee that if you give me a model, like approximation" }, { "end": 1589.6000000000001, "start": 1580.52, "text": " of the environment, and you give me approximation of the Q function, then if I use this approach," }, { "end": 1595.5600000000002, "start": 1589.6000000000001, "text": " I just pull you, I'm able to learn or estimate the Q function better." 
}, { "end": 1602.84, "start": 1595.56, "text": " Well, I showed it, it was nice and it made a total sense to me to use it because at the" }, { "end": 1609.12, "start": 1602.84, "text": " same time, theoretically, you can show that the DQ and agent comes up with bias estimation" }, { "end": 1610.52, "start": 1609.12, "text": " of the Q function." }, { "end": 1616, "start": 1610.52, "text": " So you would be like, hey, if the Q function I'm using for, I'm learning in DQ and it's" }, { "end": 1617, "start": 1616, "text": " bias." }, { "end": 1622.24, "start": 1617, "text": " And if I use this Monte Carlo tree search approach, I just describe to you." }, { "end": 1626.2, "start": 1622.24, "text": " And if I use this, this learned Q function in the lift note." }, { "end": 1631.8, "start": 1626.2, "text": " And if I have a discount factor, which I'm going to, if I want to use this Q function in" }, { "end": 1637.8, "start": 1631.8, "text": " the lift note and the lift note is like in depth of, I don't know, 20, the effect of this" }, { "end": 1644.2, "start": 1637.8, "text": " bias is going to be discount factor 1220, so it's going to disappear soon." }, { "end": 1646.92, "start": 1644.2, "text": " So yeah, it's an amazing idea to use." }, { "end": 1654.92, "start": 1646.92, "text": " And in fact, there are many papers similar to this paper I wrote, which they exactly" }, { "end": 1657.3200000000002, "start": 1654.92, "text": " do the same thing." }, { "end": 1663.96, "start": 1657.3200000000002, "text": " But after doing many experiments, I study I was like, why doesn't the outperform even" }, { "end": 1666.52, "start": 1663.96, "text": " DQ and what is wrong with it?" }, { "end": 1669.24, "start": 1666.52, "text": " And after spending like, listen, this is crazy." }, { "end": 1675.92, "start": 1669.24, "text": " It was, I spent like nine months on this paper and just digging why doesn't work." }, { "end": 1680.88, "start": 1675.92, "text": " And after nine months, I was like, hey, let's fall back and see what is happening." }, { "end": 1686.72, "start": 1680.88, "text": " And I put our hypothesis and check the hypothesis, but it's true or not." }, { "end": 1692.6000000000001, "start": 1686.72, "text": " And we realized that, hey, this algorithm is doomed to not perform well as long as the" }, { "end": 1694.76, "start": 1692.6000000000001, "text": " depth is not high." }, { "end": 1696.96, "start": 1694.76, "text": " And yeah, so what is high?" }, { "end": 1700, "start": 1696.96, "text": " Like in the paper, you went to depth four, I think, is that right?" }, { "end": 1701, "start": 1700, "text": " Four or five, I guess." }, { "end": 1702, "start": 1701, "text": " Yeah." }, { "end": 1703, "start": 1702, "text": " Four or five?" }, { "end": 1704, "start": 1703, "text": " Do you mean five is high or five is high?" }, { "end": 1705.72, "start": 1704, "text": " No, five is really low." }, { "end": 1710.36, "start": 1705.72, "text": " You got to go to depth of like few hundred, okay?" }, { "end": 1714.56, "start": 1710.36, "text": " But if you go that deep, will your errors in your model start to become a problem?" }, { "end": 1716.32, "start": 1714.56, "text": " Well, that's a thing." }, { "end": 1722, "start": 1716.32, "text": " If you, like for alpha growth, you go really deep." }, { "end": 1724.84, "start": 1722, "text": " For months, I think few hundred, four hundred, two hundred." }, { "end": 1726.96, "start": 1724.84, "text": " I'm not sure about exact number." 
}, { "end": 1732.8, "start": 1726.96, "text": " If you look at this month's college research, it worked by, I think, by Satinder, they" }, { "end": 1737.2, "start": 1732.8, "text": " also do few hundred, but they all do it on the true model." }, { "end": 1739.28, "start": 1737.2, "text": " They don't do it on, they don't learn the model." }, { "end": 1743.12, "start": 1739.28, "text": " So they don't have the problem of the model mismanage." }, { "end": 1749.52, "start": 1743.12, "text": " And be no seriously, if you go to 200 or 300, like depth and you use the scan factor of" }, { "end": 1758.8, "start": 1749.52, "text": " like, or 100, if you use the scan factor of 0.99, then that depth is actually giving, is" }, { "end": 1760.3999999999999, "start": 1758.8, "text": " giving you the optimal solution." }, { "end": 1763.0400000000002, "start": 1760.4, "text": " So you went deep enough to find the optimal solution." }, { "end": 1768.76, "start": 1763.0400000000002, "text": " You don't need the Q function at lip node because 0.99 to power of 200 is almost zero." }, { "end": 1775.72, "start": 1768.76, "text": " But as you said, if I learned the model and the model has, there is a model mismatch." }, { "end": 1782.5600000000002, "start": 1775.72, "text": " And if I roll out in that model for 200, it might end up some weird thing or in my saturate." }, { "end": 1784.2, "start": 1782.5600000000002, "text": " Yeah, absolutely that can happen." }, { "end": 1789, "start": 1784.2, "text": " But the point we were making this paper was like, even if you have a good model, like" }, { "end": 1795.88, "start": 1789, "text": " a perfect model, like if you do Monte Carlo sampling, Monte Carlo research or like policy" }, { "end": 1801.76, "start": 1795.88, "text": " grading, whatever, on the true model, but the short depth is not going to work as doing" }, { "end": 1802.76, "start": 1801.76, "text": " to be solved." }, { "end": 1809.32, "start": 1802.76, "text": " But now you're saying that if you, we add the model mismatch for the short horizon, so" }, { "end": 1810.32, "start": 1809.32, "text": " you add more error." }, { "end": 1811.56, "start": 1810.32, "text": " So it's not going to help." }, { "end": 1813.8, "start": 1811.56, "text": " It's going to degrade the performance." }, { "end": 1819.52, "start": 1813.8, "text": " But if you want to do Monte Carlo research on the model, but the model is like, and go to" }, { "end": 1825.48, "start": 1819.52, "text": " depth of 200, whether the current techniques, they are not good enough to take care of that." }, { "end": 1831.8799999999999, "start": 1825.48, "text": " There are the error compounds and it goes to exaggeration and doesn't give us a reasonable" }, { "end": 1834.24, "start": 1831.8799999999999, "text": " thing, reasonable output." }, { "end": 1838.72, "start": 1834.24, "text": " Unless we do some, we need to do some consideration." }, { "end": 1841.76, "start": 1838.72, "text": " We need to take some consideration, for example." }, { "end": 1845.4, "start": 1841.76, "text": " If we just do ties, oh, okay, that's actually interesting." }, { "end": 1849.36, "start": 1845.4, "text": " Like some of my colleagues, they make a statement that if you learn the model and if you're" }, { "end": 1851.24, "start": 1849.36, "text": " all that for long, it's not going to work." }, { "end": 1852.24, "start": 1851.24, "text": " That's not true." 
}, { "end": 1859.2, "start": 1852.24, "text": " If you work on a tabular MDT, if you have a model mismatch, it's not going to induce any" }, { "end": 1860.52, "start": 1859.2, "text": " problem, like much problem." }, { "end": 1867.56, "start": 1860.52, "text": " If your model is epsilon inaccurate, your policy is going to be all epsilon inaccurate," }, { "end": 1869.6, "start": 1867.56, "text": " it's not going to be exponentially bad." }, { "end": 1876.84, "start": 1869.6, "text": " So if you are in tabular MDT and you have model mismatch, the model mismatch is not going" }, { "end": 1878.36, "start": 1876.84, "text": " to hurt you too much." }, { "end": 1884.6399999999999, "start": 1878.36, "text": " But if you live in a space of like continuous MDPs and you're using function approximation" }, { "end": 1894, "start": 1884.6399999999999, "text": " that might might output things that are outside of the domain of the input, then you have" }, { "end": 1895, "start": 1894, "text": " problem." }, { "end": 1902.68, "start": 1895, "text": " The model mismatch in continuous MDPs like LQG, linear quadratic regulator, I guess they" }, { "end": 1903.68, "start": 1902.68, "text": " call it." }, { "end": 1904.68, "start": 1903.68, "text": " But linear models." }, { "end": 1908.04, "start": 1904.68, "text": " Even in linear models, if you have a model mismatch, it would not cause you that much" }, { "end": 1909.04, "start": 1908.04, "text": " of problems." }, { "end": 1914.4, "start": 1909.04, "text": " You do roll out in the model, which is not accurate enough." }, { "end": 1917.12, "start": 1914.4, "text": " But if you go beyond that, we don't know much." }, { "end": 1922.04, "start": 1917.12, "text": " And when we use function approximation, like deep neural networks, they generate outputs" }, { "end": 1924.8, "start": 1922.04, "text": " outside of the domain of the state." }, { "end": 1929.04, "start": 1924.8, "text": " Then when this happens, like bad event might happen." }, { "end": 1933.3999999999999, "start": 1929.04, "text": " Which is partly, I think, is the point of your Bayesian deep Q network, right?" }, { "end": 1938.48, "start": 1933.3999999999999, "text": " They know when they're getting outside of their range of what they understand." }, { "end": 1944.1599999999999, "start": 1938.48, "text": " So Bayesian deep Q network is, so this work I was talking about, they were surprising." }, { "end": 1949.08, "start": 1944.1599999999999, "text": " One, I was talking about model mismatch, but yeah, that's actually a subtle issue." }, { "end": 1950.48, "start": 1949.08, "text": " I love it." }, { "end": 1958.24, "start": 1950.48, "text": " And if you are serious in reinforcement learning and you work on model based RL, you call the" }, { "end": 1960, "start": 1958.24, "text": " environment model." }, { "end": 1965.2, "start": 1960, "text": " But when you use a function approximation, you call that function a model." }, { "end": 1968.16, "start": 1965.2, "text": " So I think there was a confusion here." }, { "end": 1975, "start": 1968.16, "text": " By model, I meant the model of the environment, the dynamics of the environment, where that" }, { "end": 1977.48, "start": 1975, "text": " one is not actually estimated." }, { "end": 1984.16, "start": 1977.48, "text": " But in Bayesian deep Q network works, the model is the Q function." }, { "end": 1988.88, "start": 1984.16, "text": " And Q function can capture the uncertainty." 
}, { "end": 1995.92, "start": 1988.88, "text": " Like the Q function is like, or Q network, you're supposed to estimate the Q function and" }, { "end": 1997.8, "start": 1995.92, "text": " there's uncertainty between these two." }, { "end": 1998.8, "start": 1997.8, "text": " Yeah." }, { "end": 1999.8, "start": 1998.8, "text": " Okay." }, { "end": 2005.32, "start": 1999.8, "text": " So I guess what I'm saying is if Gats had an estimate of its uncertainty on the leaf" }, { "end": 2010.76, "start": 2005.32, "text": " nodes of the Q's, would that help or is that a completely different issue?" }, { "end": 2013.6799999999998, "start": 2010.76, "text": " It's quite a different issue." }, { "end": 2022.32, "start": 2013.6799999999998, "text": " So for Gats, the final statement I was making was in that paper was, if you have no uncertainty" }, { "end": 2028.08, "start": 2022.32, "text": " on estimating the model dynamics, it means that you're given a frame of the say sorry" }, { "end": 2033.4399999999998, "start": 2028.08, "text": " games and sequence of actions, you're able to perfectly tell me what is the future frame." }, { "end": 2038.64, "start": 2033.44, "text": " So it means that you know the model dynamics, like how the environment works." }, { "end": 2046.2, "start": 2038.64, "text": " And if you use perfect model or perfect simulator, perfect engine of the game and do roll-outs," }, { "end": 2054.92, "start": 2046.2, "text": " no matter you have uncertainty over Q function or you have some estimation of the Q function," }, { "end": 2060.04, "start": 2054.92, "text": " as long as you don't have the exact Q function, this approach can be so optimal." }, { "end": 2061.04, "start": 2060.04, "text": " Okay." }, { "end": 2065.52, "start": 2061.04, "text": " Yeah, but it's quite interesting." }, { "end": 2076.08, "start": 2065.52, "text": " So, I love this work because this idea of learning the generative model of the say Atari" }, { "end": 2085.2799999999997, "start": 2076.08, "text": " games and doing roll-outs on that is super, super expensive in the sense both time wise" }, { "end": 2094.8, "start": 2085.28, "text": " and also compute like time wise is super expensive because you, well for me, we trained a" }, { "end": 2099.6000000000004, "start": 2094.8, "text": " generative adversarial, I mean we deployed a generative adversarial technique to train" }, { "end": 2104.1600000000003, "start": 2099.6000000000004, "text": " the model because you want to make sure that if the model is the cast, you're going to capture" }, { "end": 2105.48, "start": 2104.1600000000003, "text": " that." }, { "end": 2112.0800000000004, "start": 2105.48, "text": " And also you want to come up with a generative model which out with high fidelity frames" }, { "end": 2114.28, "start": 2112.0800000000004, "text": " and it doesn't do bad things." }, { "end": 2115.84, "start": 2114.28, "text": " You can just pause it for one second." }, { "end": 2119.36, "start": 2115.84, "text": " You mentioned if there was, it was stochastic." }, { "end": 2120.44, "start": 2119.36, "text": " So does this model capture?" }, { "end": 2122.6000000000004, "start": 2120.44, "text": " That was one of my questions about this paper actually." 
}, { "end": 2127.4, "start": 2122.6000000000004, "text": " If your environment has some randomness in it, like enemies show up at random times" }, { "end": 2132.7200000000003, "start": 2127.4, "text": " or the result of something is random, how does a GATS approach handle that stochasticity" }, { "end": 2133.7200000000003, "start": 2132.7200000000003, "text": " in the environment?" }, { "end": 2137.5600000000004, "start": 2133.7200000000003, "text": " Oh, that's an absolutely great question." }, { "end": 2149.4, "start": 2137.56, "text": " GATS works like, it's like literally, it's super close to conditional or what's called" }, { "end": 2154.84, "start": 2149.4, "text": " contextual again, I guess you call it, conditional again." }, { "end": 2162.84, "start": 2154.84, "text": " In GATS, you have a random seat, so you have in generative adversarial networks, you draw" }, { "end": 2170.44, "start": 2162.84, "text": " a random Gaussian, let's say noise or uniform with some noise and fit that noise to the" }, { "end": 2175.6800000000003, "start": 2170.44, "text": " generative, generative, generative, you have an image or whatever data points you would" }, { "end": 2176.6800000000003, "start": 2175.6800000000003, "text": " like call it." }, { "end": 2186.08, "start": 2176.6800000000003, "text": " There's another version of this Zuo of GAN works, which receives a random variable or" }, { "end": 2189.6000000000004, "start": 2186.08, "text": " the noise and also you can give context." }, { "end": 2196.24, "start": 2189.6, "text": " You want to say, generate an image of a dog and you just give a context of dog and a random" }, { "end": 2199.72, "start": 2196.24, "text": " noise generates an image of a dog." }, { "end": 2207.44, "start": 2199.72, "text": " What you're doing here is we fit last four frames to the generator." }, { "end": 2214.6, "start": 2207.44, "text": " Also we feed a random noise to the generator and as and also we tell the generator what" }, { "end": 2219.16, "start": 2214.6, "text": " action we are going to take and the generator is supposed to generate the next frame." }, { "end": 2227.64, "start": 2219.16, "text": " If the next frame is coming from distribution and the environment, if the environment is" }, { "end": 2234.04, "start": 2227.64, "text": " deterministic, you can ignore the noise you add, because given the last four frames," }, { "end": 2239.16, "start": 2234.04, "text": " and action you could use the deterministic to out with the next frame." }, { "end": 2245.2799999999997, "start": 2239.16, "text": " If you have the random latent, so you can get random, so the game that we tried, they" }, { "end": 2250.5600000000004, "start": 2245.28, "text": " were all quite deterministic and we didn't see any need for that, but it's built in in" }, { "end": 2254.5600000000004, "start": 2250.5600000000004, "text": " the architecture of the model." }, { "end": 2259.6000000000004, "start": 2254.5600000000004, "text": " The thing is, we all know training, genetic, and network is quite hard." 
}, { "end": 2266.1200000000003, "start": 2259.6000000000004, "text": " Despite the fact that the data we use is quite, doesn't change the data set is fixed," }, { "end": 2272.1600000000003, "start": 2266.1200000000003, "text": " but in reinforcement learning, as you mentioned, I think five minutes ago, the distribution" }, { "end": 2278.12, "start": 2272.16, "text": " of the samples you get over time changes, so you need the originative model needs to adapt" }, { "end": 2279.12, "start": 2278.12, "text": " to these changes." }, { "end": 2285.08, "start": 2279.12, "text": " So it's like, you have image net or not image, you have celabé or you have emnist, you" }, { "end": 2288.2799999999997, "start": 2285.08, "text": " want to generate samples of emnist." }, { "end": 2295.04, "start": 2288.2799999999997, "text": " It's like you, the distribution of the samples you observe changes because your quality" }, { "end": 2296.04, "start": 2295.04, "text": " has changed." }, { "end": 2302.12, "start": 2296.04, "text": " It's quite a difficult task to find and find the right model and high frame returns to" }, { "end": 2306.2, "start": 2302.12, "text": " do that, which cost a lot of time, human time." }, { "end": 2313.7599999999998, "start": 2306.2, "text": " And then the second extensive part is, if you want to do rollout, you're doing rollout" }, { "end": 2317.32, "start": 2313.7599999999998, "text": " on the GPU, right, in a serial manner." }, { "end": 2323.2799999999997, "start": 2317.32, "text": " And if you do rollout, let's say you want to do a simple thing, Monte Carlo's research," }, { "end": 2325.6, "start": 2323.2799999999997, "text": " we don't want to do Monte Carlo's." }, { "end": 2327.7599999999998, "start": 2325.6, "text": " You want to build a whole tree." }, { "end": 2334.12, "start": 2327.7599999999998, "text": " If you build a tree of depth like 10 and you have, let's say, like, pound tree actions," }, { "end": 2342, "start": 2334.12, "text": " the number of, like, leaf nodes you're going to produce is going to be like, tree power" }, { "end": 2344, "start": 2342, "text": " of 10, right?" }, { "end": 2348.3199999999997, "start": 2344, "text": " And for each state, when you want to make a decision, if you do rollout, if you need" }, { "end": 2354.3199999999997, "start": 2348.3199999999997, "text": " to make 3,000, 10, almost, order of 3,000, 10 computation, which is killing." }, { "end": 2359.04, "start": 2354.32, "text": " For example, if you run ponds, that's if you do the full tree, right?" }, { "end": 2360.04, "start": 2359.04, "text": " Yeah, full tree." }, { "end": 2363.32, "start": 2360.04, "text": " If once you train up a little bit, then you don't have to do the full tree anymore, right?" }, { "end": 2364.32, "start": 2363.32, "text": " Absolutely." }, { "end": 2370.76, "start": 2364.32, "text": " You don't have to do the full tree, like, but it's still, if it's depth 10, you need to," }, { "end": 2375.92, "start": 2370.76, "text": " if you want to just look at one trailer, it's going to be 10 extra computation, right?" }, { "end": 2380.84, "start": 2375.92, "text": " So I'll admit, I've been working on the Palmerman environment, which is a Neurip's composition" }, { "end": 2381.84, "start": 2380.84, "text": " environment." }, { "end": 2387.4, "start": 2381.84, "text": " I was thinking about, so I have, I have, I have an agent, but I was thinking it would" }, { "end": 2390.96, "start": 2387.4, "text": " be cooler if I could run expert iteration on this environment." 
}, { "end": 2394.56, "start": 2390.96, "text": " Now, Palmerman has a lot in common with Atari, so I was thinking, wouldn't it be cool" }, { "end": 2398.8, "start": 2394.56, "text": " if I could do, or this is maybe, I don't know, nine months ago." }, { "end": 2403.36, "start": 2398.8, "text": " And I started sketching out how, and I came to realize that I would need something, something" }, { "end": 2407.36, "start": 2403.36, "text": " vaguely like what you're talking about in this paper, but I was very intimidated by the" }, { "end": 2410.04, "start": 2407.36, "text": " amount of work it would take to build that." }, { "end": 2412.12, "start": 2410.04, "text": " So I didn't proceed with that project." }, { "end": 2414.16, "start": 2412.12, "text": " Well I'm happy you didn't do it." }, { "end": 2420.2, "start": 2414.16, "text": " And another thing that I'm happy is like, this paper, like a year ago or two years ago" }, { "end": 2424.16, "start": 2420.2, "text": " when I was talking about this negative result to people, people were like either shocked" }, { "end": 2430.16, "start": 2424.16, "text": " or would not agree what I'm saying until I was convincing them and then became super shocked." }, { "end": 2431.92, "start": 2430.16, "text": " You mean they thought it would work?" }, { "end": 2436.48, "start": 2431.92, "text": " Yeah, they, I mean everyone, it was kind of common sense, except I have one friend who" }, { "end": 2441.6, "start": 2436.48, "text": " is like, who's been a senior, who used to be senior person at DeepMine now, his head" }, { "end": 2443.6, "start": 2441.6, "text": " up open, sorry, what is called?" }, { "end": 2445.2400000000002, "start": 2443.6, "text": " Little brain in Paris." }, { "end": 2448.4, "start": 2445.2400000000002, "text": " And he was the only person who told me that he knew it doesn't work." }, { "end": 2453.84, "start": 2448.4, "text": " But the rest of the people I talked to, they were like, no, either I am wrong or like" }, { "end": 2457.32, "start": 2453.84, "text": " something's wrong or they were shocked." }, { "end": 2461.6, "start": 2457.32, "text": " And then if they thought I'm wrong, I convinced them and then they become shocked." }, { "end": 2466.7599999999998, "start": 2461.6, "text": " So it was like common sense or common knowledge that this approach should work." }, { "end": 2473.2799999999997, "start": 2466.7599999999998, "text": " But in recent, they, I realized whenever I talk to new people, they are like, they all know" }, { "end": 2477.44, "start": 2473.2799999999997, "text": " that this approach is doomed to fail." }, { "end": 2482.56, "start": 2477.44, "text": " And I'm happy to get this response because that's what I want." }, { "end": 2488.44, "start": 2482.56, "text": " Can you help us summarize briefly what is that reason that we can understand why this approach" }, { "end": 2489.44, "start": 2488.44, "text": " can't work?" }, { "end": 2494.32, "start": 2489.44, "text": " And it seems like, I mean, the question that comes to my mind is, why is this so different" }, { "end": 2497.4, "start": 2494.32, "text": " from Alpha 0?" }, { "end": 2502.2400000000002, "start": 2497.4, "text": " If your model really was really good, then what is the difference remaining?" }, { "end": 2503.2400000000002, "start": 2502.2400000000002, "text": " Okay." }, { "end": 2507.96, "start": 2503.2400000000002, "text": " So Alpha 0 has really deep tree." }, { "end": 2508.96, "start": 2507.96, "text": " Okay?" 
}, { "end": 2514.44, "start": 2508.96, "text": " The tree is deep enough that the contribution of the tree function using the nodes is not" }, { "end": 2515.88, "start": 2514.44, "text": " going to be that high." }, { "end": 2520.88, "start": 2515.88, "text": " But it might be that the depth is too high." }, { "end": 2526.88, "start": 2520.88, "text": " But the idea why it doesn't work?" }, { "end": 2530.44, "start": 2526.88, "text": " Well, it's too conservative." }, { "end": 2535.76, "start": 2530.44, "text": " Let's imagine you are walking by a carab." }, { "end": 2542.08, "start": 2535.76, "text": " And each time, if you are able to roll out, a carab is in your right hand side." }, { "end": 2545.04, "start": 2542.08, "text": " And if you hit the carab, you might break your leg." }, { "end": 2549.92, "start": 2545.04, "text": " If you are able to roll out in the model that you have in your brain, people call it" }, { "end": 2550.92, "start": 2549.92, "text": " imagination." }, { "end": 2555.7599999999998, "start": 2550.92, "text": " Like, yeah, let's become a neuroscientist for a little bit." }, { "end": 2557.92, "start": 2555.7599999999998, "text": " So in your brain, you have a model of the world." }, { "end": 2563.04, "start": 2557.92, "text": " And you can imagine if you go too slow to write, you might hit the curb and you hit, then" }, { "end": 2565.04, "start": 2563.04, "text": " you break your leg." }, { "end": 2567.32, "start": 2565.04, "text": " And this is not a good action." }, { "end": 2575, "start": 2567.32, "text": " But in Atari, if there is a carab is a ditch that if I go there, I'm going to go there." }, { "end": 2576.6, "start": 2575, "text": " I'm going to break my leg." }, { "end": 2578, "start": 2576.6, "text": " I wouldn't go there." }, { "end": 2582.88, "start": 2578, "text": " Therefore, I don't have this experience in my replay buffer." }, { "end": 2590.56, "start": 2582.88, "text": " Therefore, if my Q function wanted to go to that curb for some reason, and if I prevented," }, { "end": 2596.24, "start": 2590.56, "text": " therefore my Q function does not see the outcome of its own decisions." }, { "end": 2599.8, "start": 2596.24, "text": " And if my Q function doesn't see the outcome of the decision, doesn't learn that going" }, { "end": 2600.8, "start": 2599.8, "text": " right is bad." }, { "end": 2606.2000000000003, "start": 2600.8, "text": " Or going even to that state that I'm at, which is too, too, from the curb, is the" }, { "end": 2608.1200000000003, "start": 2606.2000000000003, "text": " interest, is the interest state?" }, { "end": 2609.5600000000004, "start": 2608.1200000000003, "text": " I should not even be there." }, { "end": 2612.84, "start": 2609.5600000000004, "text": " But my Q function doesn't learn that going there is bad." }, { "end": 2617.4, "start": 2612.84, "text": " I'm just kind of preventing that bad event to happen." }, { "end": 2621.52, "start": 2617.4, "text": " But since I'm not experiencing it, my Q function doesn't know that bad thing." }, { "end": 2623.1600000000003, "start": 2621.52, "text": " Yeah, that's the issue." }, { "end": 2628.4, "start": 2623.1600000000003, "text": " So is this in the paper you mentioned, cats causing a delay in reward propagation?" }, { "end": 2630.4, "start": 2628.4, "text": " Is that what you're talking about?" 
}, { "end": 2638.48, "start": 2630.4, "text": " If I was able, if I was not using this imagination or this rollout in the model of the environment" }, { "end": 2643.04, "start": 2638.48, "text": " I have in mind, I would try to go right and I hurt myself." }, { "end": 2647.7200000000003, "start": 2643.04, "text": " And then I learned that going right is bad, and my Q function gets updated." }, { "end": 2653.56, "start": 2647.7200000000003, "text": " But if I don't do it, I keep walking along the curb for long until, let's say, if I do" }, { "end": 2657, "start": 2653.56, "text": " epsilon-gritty with some low probability, I'm going to hit my, I'm going to break my" }, { "end": 2658, "start": 2657, "text": " leg, right?" }, { "end": 2659.48, "start": 2658, "text": " Or like, I hit myself." }, { "end": 2664.84, "start": 2659.48, "text": " If I hit myself there like in 10 or 7 in the future, I would get a, I receive a negative" }, { "end": 2672.28, "start": 2664.84, "text": " reward there by taking 10 steps back to realize that at the beginning I should not even get" }, { "end": 2673.28, "start": 2672.28, "text": " close to the curb." }, { "end": 2675.28, "start": 2673.28, "text": " But that's the same as DQN." }, { "end": 2681.84, "start": 2675.28, "text": " A is not the same as DQN because in DQN you would, I mean, here you're still using the" }, { "end": 2682.84, "start": 2681.84, "text": " replay buffer." }, { "end": 2686.64, "start": 2682.84, "text": " You're putting your, are you putting your imagined states into the replay buffer as if they were" }, { "end": 2687.64, "start": 2686.64, "text": " real?" }, { "end": 2688.64, "start": 2687.64, "text": " Okay." }, { "end": 2696.44, "start": 2688.64, "text": " If you do not put the imagined rollout or like the rollout in the, I would avoid calling" }, { "end": 2702.92, "start": 2696.44, "text": " it imagined for the machine, but let's be easy to the day and let's say, if I do not put" }, { "end": 2707.92, "start": 2702.92, "text": " those imagined states in the replay buffer, what would happen is if I get a negative reward" }, { "end": 2712.6, "start": 2707.92, "text": " in the future, it takes a lot like longer for the current states to know that this is" }, { "end": 2713.68, "start": 2712.6, "text": " the state's math." }, { "end": 2718.48, "start": 2713.68, "text": " But even if you put the imagined states in the replay buffer, you can construct a" }, { "end": 2724.84, "start": 2718.48, "text": " problem that is again the same delay would have, which extensively be described in the" }, { "end": 2727.04, "start": 2724.84, "text": " paper, which will also cool." }, { "end": 2729.4, "start": 2727.04, "text": " So I'm not sure I fully followed that part." }, { "end": 2733.8, "start": 2729.4, "text": " Like, alpha 0 doesn't use a replay buffer, does it?" }, { "end": 2740.88, "start": 2733.8, "text": " Okay, alpha 0 does have a replay buffer, but it doesn't seem to come up very often." }, { "end": 2747.4, "start": 2740.88, "text": " And then there was a paper more recently called model based reinforcement learning for Atari" }, { "end": 2749.4, "start": 2747.4, "text": " by Kaiser et al." 
}, { "end": 2753.96, "start": 2749.4, "text": " We actually mentioned this when we were talking or before the interview, that uses some kind" }, { "end": 2761.96, "start": 2753.96, "text": " of LSTM based discrete latent component to predict the next frame, but they're not, I" }, { "end": 2766.96, "start": 2761.96, "text": " don't think they're predicting the full, well, I don't think they're using pixel space." }, { "end": 2770.6800000000003, "start": 2766.96, "text": " They're using some kind of latent space." }, { "end": 2774.36, "start": 2770.6800000000003, "text": " Looking a little closer at the model based reinforcement learning for Atari paper by Kaiser" }, { "end": 2775.36, "start": 2774.36, "text": " et al." }, { "end": 2779.04, "start": 2775.36, "text": " They're using latent, but the latens are also used to predict the pixels in the next" }, { "end": 2780.04, "start": 2779.04, "text": " frame." }, { "end": 2782.6, "start": 2780.04, "text": " So I guess the truth is somewhere between these two." }, { "end": 2784.32, "start": 2782.6, "text": " Their network is kind of complicated." }, { "end": 2788.8, "start": 2784.32, "text": " And at this point, I can't say I understand the details of how it works." }, { "end": 2791.96, "start": 2788.8, "text": " And they got good results with this, what they called simple." }, { "end": 2796.1200000000003, "start": 2791.96, "text": " It seems kind of like a very similar approach to yours, because they are doing a model" }, { "end": 2799.36, "start": 2796.1200000000003, "text": " based, both are model based." }, { "end": 2802.44, "start": 2799.36, "text": " I don't think they're building a full tree." }, { "end": 2806.6, "start": 2802.44, "text": " So yeah, that's actually interesting paper, like it." }, { "end": 2816.92, "start": 2806.6, "text": " The sense that, well, yeah, do LSTM in our work, we do both LSTM, well, the card RNN," }, { "end": 2819.04, "start": 2816.92, "text": " and also feed forward." }, { "end": 2825.12, "start": 2819.04, "text": " And our work, we can do one to call it three or two of the gradient." }, { "end": 2833.48, "start": 2825.12, "text": " But they do the RNN for the model, and they do policy gradient for the, for coming up" }, { "end": 2834.48, "start": 2833.48, "text": " with the best action." }, { "end": 2837.2799999999997, "start": 2834.48, "text": " So they are doing those things with suggested." }, { "end": 2845.68, "start": 2837.2799999999997, "text": " And, but I agree they perform well, apart, which is the part that I, well, I actually want" }, { "end": 2848.52, "start": 2845.68, "text": " to author a friend of mine." }, { "end": 2854.44, "start": 2848.52, "text": " And I don't agree with the fact that they are performing anything, because we usually" }, { "end": 2857.84, "start": 2854.44, "text": " run a policy." }, { "end": 2866.48, "start": 2857.84, "text": " We run the enforceable learning algorithms that we design on the CYOTRA for 250 minutes" }, { "end": 2868.8, "start": 2866.48, "text": " time, depends on what you call time step." }, { "end": 2873.76, "start": 2868.8, "text": " But 50 minutes time is one billion times the steps, okay, to get some performance." }, { "end": 2882.36, "start": 2873.76, "text": " Okay, at the beginning, two times the steps, like, let's say, first, 12,000 times the" }, { "end": 2885.08, "start": 2882.36, "text": " agent doesn't do much." }, { "end": 2891.1600000000003, "start": 2885.08, "text": " And maybe, for say, for PONG, it was making reward of minus 21." 
}, { "end": 2898.96, "start": 2891.1600000000003, "text": " In the first 100,000 times, it might get reward of minus 9, 10, 20, okay." }, { "end": 2902.92, "start": 2898.96, "text": " So it doesn't do much in ad regimes." }, { "end": 2912.2400000000002, "start": 2902.92, "text": " And I also said in our paper, it's like, if I learned the model dynamic, if I am able" }, { "end": 2919, "start": 2912.24, "text": " to roll out in that model, I'm able to pull up for 10 times the steps, I might avoid some" }, { "end": 2921.56, "start": 2919, "text": " local bad events." }, { "end": 2924.2, "start": 2921.56, "text": " I might not hit the curb." }, { "end": 2928.52, "start": 2924.2, "text": " And also, if there is an apple, like, in 10 steps away from me, I'm going to reach that," }, { "end": 2931.3599999999997, "start": 2928.52, "text": " okay, but this is going to be locally being good, right?" }, { "end": 2933.2799999999997, "start": 2931.3599999999997, "text": " But I'm not going to be globally good." }, { "end": 2939.9599999999996, "start": 2933.2799999999997, "text": " And what they result shows is that in the first 100,000 times steps, they're outperforming" }, { "end": 2941.72, "start": 2939.9599999999996, "text": " the other algorithm." }, { "end": 2951.08, "start": 2941.72, "text": " But it's not what we care about it, but we don't care about outperforming algorithms in" }, { "end": 2956.68, "start": 2951.08, "text": " first 100 times, 100,000 times steps that much." }, { "end": 2965.72, "start": 2956.68, "text": " If your performance is from minus, like, 1,000 to minus 985, and this is the improvement" }, { "end": 2970.72, "start": 2965.72, "text": " you're making, but actual performance gain we care is like few thousand." }, { "end": 2977.3199999999997, "start": 2970.72, "text": " If you, first few thousand times steps, you do it, like, what happens is like, at the" }, { "end": 2983.6, "start": 2977.3199999999997, "text": " very first time, like, very beginning, 100,000 times steps, you make a jump, yeah, you" }, { "end": 2987.08, "start": 2983.6, "text": " boost up, you wrap up, but after that, you don't do much." }, { "end": 2992.2, "start": 2987.08, "text": " And that's one of the reasons they did not go beyond 10,000 times steps, while almost" }, { "end": 2996.72, "start": 2992.2, "text": " all the standard works out there, excluding mine." }, { "end": 3002.12, "start": 2996.72, "text": " Like, I do not run things for five, 50 million times, so we're 200 million times steps." }, { "end": 3004.8399999999997, "start": 3002.12, "text": " But like, that's the way people do." }, { "end": 3010.56, "start": 3004.8399999999997, "text": " I'm sure, like, the algorithm is not like, well, in the future." }, { "end": 3016.3599999999997, "start": 3010.56, "text": " Now, the first few times step is going to, it has higher speed of learning at the very," }, { "end": 3021.24, "start": 3016.3599999999997, "text": " very beginning, because of this three-structured, I was talking about, but after that, it doesn't" }, { "end": 3022.4399999999996, "start": 3021.24, "text": " do much." }, { "end": 3028.2000000000003, "start": 3022.44, "text": " And when they run it for one million times steps, the performance, like, the improvement" }, { "end": 3031.36, "start": 3028.2000000000003, "text": " vanishes, or plateau." }, { "end": 3034.36, "start": 3031.36, "text": " So it's like, it's interesting work." 
}, { "end": 3037.76, "start": 3034.36, "text": " It's the work that we were warning that is not going to work." }, { "end": 3040.76, "start": 3037.76, "text": " Do it, they did it." }, { "end": 3044.44, "start": 3040.76, "text": " And it's great, but this is the problem." }, { "end": 3048.88, "start": 3044.44, "text": " If you do three-starts, or you do, like, a construct, a JSON model, you're going to wrap" }, { "end": 3053.2000000000003, "start": 3048.88, "text": " up very fast at the very beginning, but after that, you're not going to do much." }, { "end": 3054.2000000000003, "start": 3053.2000000000003, "text": " Hmm." }, { "end": 3055.2000000000003, "start": 3054.2000000000003, "text": " Okay." }, { "end": 3060.96, "start": 3055.2000000000003, "text": " So what would be the next step for this Gats line of work?" }, { "end": 3064.2000000000003, "start": 3060.96, "text": " What would you suggest would be the next place to look?" }, { "end": 3066.52, "start": 3064.2000000000003, "text": " That's excellent question." }, { "end": 3074.32, "start": 3066.52, "text": " Well, there's a huge interest among researchers to do model based on model free." }, { "end": 3085.2400000000002, "start": 3074.32, "text": " And one of, I think, your first guest in your show also, extensive work on this area." }, { "end": 3090.04, "start": 3085.2400000000002, "text": " But it's harder than what we thought." }, { "end": 3098.6800000000003, "start": 3090.04, "text": " It's like, it needs really, we need to some time to figure out how to use it." }, { "end": 3105.12, "start": 3098.68, "text": " And I'm glad that I did it until I told everyone, hey, don't do it that way, because it was" }, { "end": 3107.52, "start": 3105.12, "text": " quite straightforward thing to do." }, { "end": 3114, "start": 3107.52, "text": " And it was great to tell Drifold Don Deid, but what we should do afterwards?" }, { "end": 3121.44, "start": 3114, "text": " Certainly, when we run Q learning, or like, DQ and on, sorry, games, there are many," }, { "end": 3124.48, "start": 3121.44, "text": " many signals, vision, rate." }, { "end": 3129.28, "start": 3124.48, "text": " And which we might not use, as I said, are unsubvised signals." }, { "end": 3134.28, "start": 3129.28, "text": " They're like, well, maybe unsubvised signals are the wrong thing to use, wrong term, but" }, { "end": 3135.28, "start": 3134.28, "text": " it's better with me." }, { "end": 3138.8, "start": 3135.28, "text": " These transitions, we can use them to learn the model." }, { "end": 3147.2400000000002, "start": 3138.8, "text": " But who said that model free is less sample efficient than model based?" }, { "end": 3148.2400000000002, "start": 3147.2400000000002, "text": " No one proved that." }, { "end": 3150.92, "start": 3148.2400000000002, "text": " People say it, I would say, okay, prove it to me." }, { "end": 3151.92, "start": 3150.92, "text": " Well, that's interesting." }, { "end": 3152.92, "start": 3151.92, "text": " Well, no one proved that." }, { "end": 3156.52, "start": 3152.92, "text": " People are saying it, but it's not true, because we don't know." }, { "end": 3162.56, "start": 3156.52, "text": " I mean, I wouldn't say it's not true, but what we have seen both theoretically and empirically," }, { "end": 3168.6800000000003, "start": 3162.56, "text": " it doesn't, it sounds cool to say model free is less sample efficient than model based," }, { "end": 3172.2000000000003, "start": 3168.6800000000003, "text": " but well, no, we don't know." 
}, { "end": 3173.44, "start": 3172.2000000000003, "text": " No one showed that." }, { "end": 3180.2400000000002, "start": 3173.44, "text": " And theoretically, we know that there are quite, like, you can do almost similar to it." }, { "end": 3188.2799999999997, "start": 3180.24, "text": " If QLearning has the same sample efficiency as model based, approach is less for tabular" }, { "end": 3193.3599999999997, "start": 3188.2799999999997, "text": " setting, and for a continuous set of people working up, including myself, and we are" }, { "end": 3195.9199999999996, "start": 3193.3599999999997, "text": " seeing that sample complex is almost same." }, { "end": 3202.72, "start": 3195.9199999999996, "text": " But that's an interesting point that we also discussed extensively in the paper." }, { "end": 3209.9199999999996, "start": 3202.72, "text": " Let's imagine you just don't want to maximize your expected return." }, { "end": 3212.64, "start": 3209.92, "text": " You want to be also safe locally, okay?" }, { "end": 3217.48, "start": 3212.64, "text": " If you're, if you learn the model and if you're able to roll out, you know what would happen" }, { "end": 3220.2000000000003, "start": 3217.48, "text": " if you make some certain actions, right?" }, { "end": 3226.88, "start": 3220.2000000000003, "text": " So it helps a lot in safety, which is like amazing use case of it." }, { "end": 3232.88, "start": 3226.88, "text": " I'm saying, hey, in order to combine a model based on model free, great, but do not aim" }, { "end": 3240.48, "start": 3232.88, "text": " for our performing model free methods because, well, why, why you would think like that?" }, { "end": 3244.96, "start": 3240.48, "text": " Our paper shows don't think like that, like, or at least don't take it that easy, be" }, { "end": 3246.96, "start": 3244.96, "text": " more precise." }, { "end": 3250.48, "start": 3246.96, "text": " But if you want to be safe, if you don't have the model of the environment, you can do a" }, { "end": 3251.48, "start": 3250.48, "text": " lot of things." }, { "end": 3255.12, "start": 3251.48, "text": " The third thing we showed, which was astonishing." }, { "end": 3259.92, "start": 3255.12, "text": " I love that experiment, let's imagine you have a punk, you have a game punk." }, { "end": 3262.44, "start": 3259.92, "text": " Okay, you remember the game punk?" }, { "end": 3264.68, "start": 3262.44, "text": " For sure, yeah, I enjoy that as a kid." }, { "end": 3269.64, "start": 3264.68, "text": " So, so in the game punk, we have two paddles and a ball, right?" }, { "end": 3273.76, "start": 3269.64, "text": " And we control one paddle and open them controls another paddle." }, { "end": 3277, "start": 3273.76, "text": " Every play game and we run DQN and VV." }, { "end": 3278, "start": 3277, "text": " Okay?" }, { "end": 3285.2400000000002, "start": 3278, "text": " So now, what, what one experiment we have done, which was super cool was actually I was," }, { "end": 3286.2400000000002, "start": 3285.2400000000002, "text": " you know, Gray Marcus." }, { "end": 3287.2400000000002, "start": 3286.2400000000002, "text": " Oh, yeah." }, { "end": 3293.3599999999997, "start": 3287.24, "text": " He's like one of the critics and amazing person in the field and I was talking to him," }, { "end": 3299.3199999999997, "start": 3293.3599999999997, "text": " like, right before talking to you, like, like, 10 minutes before talking to you." 
}, { "end": 3302.9599999999996, "start": 3299.3199999999997, "text": " And we were talking about this example that I had." }, { "end": 3303.9599999999996, "start": 3302.9599999999996, "text": " It's that." }, { "end": 3305.8399999999997, "start": 3303.9599999999996, "text": " I follow him on Twitter, yeah." }, { "end": 3312.2, "start": 3305.8399999999997, "text": " I was following him on Twitter until I met him in person today in the morning." }, { "end": 3317.2, "start": 3312.2, "text": " Now we were talking about this example that he was also have, he had some other examples" }, { "end": 3318.2, "start": 3317.2, "text": " as well." }, { "end": 3319.2799999999997, "start": 3318.2, "text": " So the example is a false." }, { "end": 3321.52, "start": 3319.2799999999997, "text": " I have a punk, the game punk and I have two pads." }, { "end": 3327.04, "start": 3321.52, "text": " I control one paddle and after like five million times of steps, I master this game and" }, { "end": 3329.12, "start": 3327.04, "text": " I'm able to score 21." }, { "end": 3337.3599999999997, "start": 3329.12, "text": " And then we did thanks to Marlos, my friend who is at the brain now and Mark Bremmer and" }, { "end": 3338.68, "start": 3337.3599999999997, "text": " other folks." }, { "end": 3344, "start": 3338.68, "text": " They put out a new version of ALE, which allows you change the mode of the game." }, { "end": 3345, "start": 3344, "text": " Okay." }, { "end": 3349, "start": 3345, "text": " Means that you can't change the difficulty of the game or the dynamics of the game." }, { "end": 3357.14, "start": 3349, "text": " For punk, when you change the mode of the game, what happens is the width of the opponent's" }, { "end": 3359.2, "start": 3357.14, "text": " paddle get get half." }, { "end": 3360.2, "start": 3359.2, "text": " Okay." }, { "end": 3363.08, "start": 3360.2, "text": " So it's like the size of the opponent's paddle get half." }, { "end": 3365.3199999999997, "start": 3363.08, "text": " So the game becomes easier, right?" }, { "end": 3371.56, "start": 3365.32, "text": " If you have the DQM model, which is able to score 21 under normal game, and then suddenly," }, { "end": 3379.04, "start": 3371.56, "text": " which is 21 scores the cap, and suddenly I change the mode of the game and I like half" }, { "end": 3382.0800000000004, "start": 3379.04, "text": " the size of the opponent's paddle." }, { "end": 3385, "start": 3382.0800000000004, "text": " And the game is much easier because you can score easier, right?" }, { "end": 3390.36, "start": 3385, "text": " And if I apply the same model I learned on this game, what would be the score of the" }, { "end": 3392.96, "start": 3390.36, "text": " DQM model on this easier game?" }, { "end": 3397.7200000000003, "start": 3392.96, "text": " I mean, the same model you trained, which was in score 21, now you applied directly on" }, { "end": 3398.8, "start": 3397.7200000000003, "text": " this game." }, { "end": 3406.48, "start": 3398.8, "text": " And surprising is the score of this DQM model, the stuff keeping to be like 21 as we were" }, { "end": 3409.2400000000002, "start": 3406.48, "text": " expecting, it became like minus 21." }, { "end": 3413.48, "start": 3409.2400000000002, "text": " Means like it totally broke the system." }, { "end": 3415.2, "start": 3413.48, "text": " The system was not able to function at all." 
}, { "end": 3419.8, "start": 3415.2, "text": " And is that really a problem with RL or is that a problem with these types of function" }, { "end": 3420.8, "start": 3419.8, "text": " approximators?" }, { "end": 3425.88, "start": 3420.8, "text": " Because the function approximator, if it sees a scene with a slightly smaller paddle, it" }, { "end": 3429.44, "start": 3425.88, "text": " thinks it's a completely different scene than the scene with everything the same except" }, { "end": 3432.2000000000003, "start": 3429.44, "text": " for a larger paddle, right?" }, { "end": 3436.6400000000003, "start": 3432.2000000000003, "text": " So one thing, if you don't mind, I would phrase your question a little bit." }, { "end": 3441.6400000000003, "start": 3436.6400000000003, "text": " It's not a problem with reinforcement learning, reinforcement using a general field." }, { "end": 3443.2400000000002, "start": 3441.6400000000003, "text": " It's nothing wrong with it." }, { "end": 3445, "start": 3443.2400000000002, "text": " It's amazing, it's the best." }, { "end": 3450.96, "start": 3445, "text": " But if you look at the agent as a combination of RL principles and function approximator," }, { "end": 3457.8, "start": 3450.96, "text": " if your aim is to maximize reward, and you're using function approximation, there is no" }, { "end": 3465.12, "start": 3457.8, "text": " reason for the agent to look at, to be able also to work on the game that the paddle is" }, { "end": 3467.12, "start": 3465.12, "text": " like half or bigger." }, { "end": 3475.68, "start": 3467.12, "text": " And that didn't learn, maybe there's no reason to learn any logic behind, I'm using Marcus" }, { "end": 3481.8399999999997, "start": 3475.68, "text": " Grace, like language, there's no logic, there's no need to learn the logic there to solve" }, { "end": 3482.8399999999997, "start": 3481.8399999999997, "text": " the game." }, { "end": 3484.8399999999997, "start": 3482.8399999999997, "text": " Okay, so yeah, as I said, if I..." }, { "end": 3487.96, "start": 3484.8399999999997, "text": " Because it has no semantics, it doesn't learn any semantics, right?" }, { "end": 3490.6, "start": 3487.96, "text": " It doesn't know that's a paddle, it doesn't know." }, { "end": 3493.7599999999998, "start": 3490.6, "text": " It just has a bunch of matrix to be used." }, { "end": 3498.88, "start": 3493.76, "text": " Any change in that suddenly means that the answers are no longer relevant, right?" }, { "end": 3505.48, "start": 3498.88, "text": " We could imagine some better function approximator than these CNNs, which somehow look for some" }, { "end": 3510.76, "start": 3505.48, "text": " have some priors or some have some better sense of objects or something like that that" }, { "end": 3512.0400000000004, "start": 3510.76, "text": " could get around this." }, { "end": 3513.0400000000004, "start": 3512.0400000000004, "text": " Yes, yes." }, { "end": 3519.5200000000004, "start": 3513.0400000000004, "text": " In fact, the Marlowe's I mentioned from the mine, he tried to regularize the QN and show" }, { "end": 3521.5200000000004, "start": 3519.5200000000004, "text": " that it can help a little bit." }, { "end": 3530.12, "start": 3521.52, "text": " I mean, it's a paper by Vicarious called schema networks that they tried exactly this" }, { "end": 3534.7599999999998, "start": 3530.12, "text": " on not pong, but that the one like pong when you're trying to break bricks." }, { "end": 3535.7599999999998, "start": 3534.7599999999998, "text": " Breakout." 
}, { "end": 3536.7599999999998, "start": 3535.7599999999998, "text": " Breakout." }, { "end": 3537.7599999999998, "start": 3536.7599999999998, "text": " Yes." }, { "end": 3543.12, "start": 3537.7599999999998, "text": " And they showed some small change in breakout made the standard reinforcement learning actually" }, { "end": 3547, "start": 3543.12, "text": " not work, but their method was able to overcome these small changes." }, { "end": 3554.24, "start": 3547, "text": " I think there was another one about human priors or something where the RL algorithms were" }, { "end": 3559.08, "start": 3554.24, "text": " able to play these games just as well when they were replaced with noisy images that made" }, { "end": 3560.08, "start": 3559.08, "text": " no sense to a human." }, { "end": 3561.92, "start": 3560.08, "text": " Oh, yeah, I remember that work." }, { "end": 3562.92, "start": 3561.92, "text": " Yeah." }, { "end": 3564.52, "start": 3562.92, "text": " Yeah, it didn't matter to TQN, right?" }, { "end": 3565.52, "start": 3564.52, "text": " Yeah, it didn't." }, { "end": 3566.52, "start": 3565.52, "text": " It didn't work." }, { "end": 3571.56, "start": 3566.52, "text": " For this RL method, it doesn't matter because you didn't specifically ask for it." }, { "end": 3578.56, "start": 3571.56, "text": " And yeah, that's a, you can call it an issue, but this is not the thing you ask to be there" }, { "end": 3579.56, "start": 3578.56, "text": " at the beginning." }, { "end": 3581.7999999999997, "start": 3579.56, "text": " Like you ask, hey, maximize the returns." }, { "end": 3582.7999999999997, "start": 3581.7999999999997, "text": " Okay." }, { "end": 3585.68, "start": 3582.7999999999997, "text": " You didn't have to learn some magic." }, { "end": 3591, "start": 3585.68, "text": " But this is one surprising thing, as you also mentioned in that work, which shows that" }, { "end": 3595.32, "start": 3591, "text": " if you move the paddle of the breakout a little bit up, it breaks." }, { "end": 3599.56, "start": 3595.32, "text": " And this information I'm talking about are the things that I just learned from grade." }, { "end": 3602.2799999999997, "start": 3599.56, "text": " It's told me." }, { "end": 3609.7999999999997, "start": 3602.2799999999997, "text": " But the second issue was, after changing this game setting by just making the size of" }, { "end": 3615, "start": 3609.7999999999997, "text": " the opponent paddle smaller, the amount of time and stuff we needed to train the decaying" }, { "end": 3617.92, "start": 3615, "text": " was almost things that were starting from scratch." }, { "end": 3621.4, "start": 3617.92, "text": " So it couldn't adapt immediately." }, { "end": 3624.92, "start": 3621.4, "text": " But back into the use of model this." }, { "end": 3631.64, "start": 3624.92, "text": " It took our model 3000 samples to adapt to this new domain." }, { "end": 3636.84, "start": 3631.64, "text": " So for our DQN agent to adapt, it needed to 2 million or 3 million samples again to" }, { "end": 3637.84, "start": 3636.84, "text": " adapt." }, { "end": 3644.92, "start": 3637.84, "text": " But for our model based, for our like the identity model to adapt to this domain, it just cost" }, { "end": 3646.88, "start": 3644.92, "text": " us like 3000 samples." }, { "end": 3654.04, "start": 3646.88, "text": " So the model itself, the general model was adapting very quickly, but the Q network was" }, { "end": 3655.04, "start": 3654.04, "text": " adapting slowly." 
}, { "end": 3656.04, "start": 3655.04, "text": " Yes." }, { "end": 3661.04, "start": 3656.04, "text": " So now what I'm advocating is, if you want to use model and model free approaches and model" }, { "end": 3668.64, "start": 3661.04, "text": " based approaches together, we might not gain much at this easily by aiming for maximizing" }, { "end": 3670.96, "start": 3668.64, "text": " return directly." }, { "end": 3679.2799999999997, "start": 3670.96, "text": " But if we want to secure like safety, if we want to have adaptation, if I'm going from" }, { "end": 3684.2400000000002, "start": 3679.28, "text": " like, if I'm interacting with the environment, which has some sort of some set of reward," }, { "end": 3688.84, "start": 3684.2400000000002, "text": " and if I change from one environment to another environment, the environment dynamics" }, { "end": 3692.1200000000003, "start": 3688.84, "text": " and stay the same, but the reward function changes." }, { "end": 3698.0800000000004, "start": 3692.1200000000003, "text": " I don't need to learn, if I use model free approaches, I need to do the whole thing again." }, { "end": 3702.2000000000003, "start": 3698.0800000000004, "text": " But if I have model based approach, I just need to change the reward function, which I" }, { "end": 3703.2000000000003, "start": 3702.2000000000003, "text": " learned." }, { "end": 3710.56, "start": 3703.2, "text": " So if I have adapting to new domain or general transfer learning problem or safety or many" }, { "end": 3715.3599999999997, "start": 3710.56, "text": " many other things that the model is going to give us additional information compared to" }, { "end": 3719.7599999999998, "start": 3715.3599999999997, "text": " the Q function, there we can use models a lot." }, { "end": 3723.2, "start": 3719.7599999999998, "text": " Well, you might ask whether I am working on this type of research?" }, { "end": 3727.8399999999997, "start": 3723.2, "text": " No, because this type of research, it costs a lot and I don't have that much money to" }, { "end": 3735.56, "start": 3727.84, "text": " transfer on GPU, but if people out there want to do model based and model free, please" }, { "end": 3741.88, "start": 3735.56, "text": " read this paper carefully and try to do like safety, try to do the adaptation, try to" }, { "end": 3747.84, "start": 3741.88, "text": " do like this changing the dynamics, this kind of stuff are really important." }, { "end": 3752.28, "start": 3747.84, "text": " If you learn a model, you can come with a better semantic of them environment." }, { "end": 3759.6800000000003, "start": 3752.28, "text": " But if you want to do direct, if you direct the aim for maximizing return, then this paper" }, { "end": 3762.96, "start": 3759.6800000000003, "text": " shows that it's not that easy at least." }, { "end": 3767.7200000000003, "start": 3762.96, "text": " I don't see a way to use model based and model free at all together to improve a fun sample" }, { "end": 3769.1200000000003, "start": 3767.7200000000003, "text": " complexity." }, { "end": 3774.92, "start": 3769.1200000000003, "text": " And all the dirty keeping, no one showed that model free is less sample efficient." }, { "end": 3776.2400000000002, "start": 3774.92, "text": " That's a really interesting point." }, { "end": 3783.2, "start": 3776.24, "text": " I really am just naturally drawn to model based methods and I find model free." }, { "end": 3784.2, "start": 3783.2, "text": " I just never liked it." 
}, { "end": 3785.2, "start": 3784.2, "text": " I don't know." }, { "end": 3790.6, "start": 3785.2, "text": " I think if I really ask myself why it's because I can't reuse any component, like you" }, { "end": 3792.7999999999997, "start": 3790.6, "text": " said." }, { "end": 3794.8799999999997, "start": 3792.7999999999997, "text": " And it seems like a waste of compute." }, { "end": 3801.04, "start": 3794.8799999999997, "text": " If you learn a policy for something, anything changes, you have to throw it out." }, { "end": 3805.8799999999997, "start": 3801.04, "text": " When you build a model, I mean, like things like GPT2, for example, the text model or" }, { "end": 3810.84, "start": 3805.88, "text": " the language model for Mub and the I, so much compute went into building that original" }, { "end": 3811.84, "start": 3810.84, "text": " model." }, { "end": 3815.12, "start": 3811.84, "text": " But then it becomes very cheap for us to fine tune it for different tasks." }, { "end": 3820.12, "start": 3815.12, "text": " I mean, to me, that's a model for AI or that's an approach to AI that makes sense to have" }, { "end": 3824.92, "start": 3820.12, "text": " this massive compute in these reusable artifacts." }, { "end": 3830.6, "start": 3824.92, "text": " And then we have to use only small compute to do the work to customize it." }, { "end": 3836.8399999999997, "start": 3830.6, "text": " Otherwise we're just burning cycles on and on, but it seems like not very much like" }, { "end": 3837.8399999999997, "start": 3836.8399999999997, "text": " gain." }, { "end": 3840.3199999999997, "start": 3837.8399999999997, "text": " And I also really like planning." }, { "end": 3846.7599999999998, "start": 3840.3199999999997, "text": " I've spent some time in planning algorithms long ago and planning can be so efficient," }, { "end": 3848.12, "start": 3846.7599999999998, "text": " but to plan you need a model." }, { "end": 3851.44, "start": 3848.12, "text": " So definitely, these things to work together." }, { "end": 3852.44, "start": 3851.44, "text": " Yeah." }, { "end": 3858.6, "start": 3852.44, "text": " One one interesting aspect of this model based idea is if you are able to come up with" }, { "end": 3864.92, "start": 3858.6, "text": " like try to learn a model of them and all the Atari games, then what you're going to" }, { "end": 3868.2, "start": 3864.92, "text": " end up, you can come with the representation." }, { "end": 3877.12, "start": 3868.2, "text": " We use the first few layers of like this model we train on ImageNet and we reuse that representation" }, { "end": 3878.6, "start": 3877.12, "text": " with different tasks, right?" }, { "end": 3879.6, "start": 3878.6, "text": " But we don't do it in RL." }, { "end": 3885.2, "start": 3879.6, "text": " But if you learn a model, hopefully you can get a representation that you can use for" }, { "end": 3889.2, "start": 3885.2, "text": " different problems or transfer between games." }, { "end": 3894.7999999999997, "start": 3889.2, "text": " So it's going to be like, you know, like redoing the learning over and over for each problem" }, { "end": 3899.2, "start": 3894.7999999999997, "text": " or each paper or each work you're doing or each homework." }, { "end": 3903.3999999999996, "start": 3899.2, "text": " You just use this for representation and then you build your stuff on top of it." }, { "end": 3906.2, "start": 3903.3999999999996, "text": " That's what we did for BDQN." 
}, { "end": 3909.2, "start": 3906.2, "text": " Tari, can you explain that connection to BDQN again?" }, { "end": 3917.2, "start": 3909.2, "text": " So in BDQN we say, and also after BDQN I had another work on linear bandits, we say," }, { "end": 3927, "start": 3917.2, "text": " okay, if we have this amazing work on which is amazing idea called deep learning." }, { "end": 3933.52, "start": 3927, "text": " And to me, one of the great, great benefits that the deep learning has provided is like" }, { "end": 3935.3999999999996, "start": 3933.52, "text": " the representation layer." }, { "end": 3936.3999999999996, "start": 3935.3999999999996, "text": " Okay." }, { "end": 3944.7200000000003, "start": 3936.4, "text": " So I'm able to learn a representation of like the frame of the game and then my Q function" }, { "end": 3948, "start": 3944.7200000000003, "text": " is going to be linear function on this representation." }, { "end": 3953.2000000000003, "start": 3948, "text": " Then I can deploy whatever we know about linear models on the top of this representation," }, { "end": 3954.2000000000003, "start": 3953.2000000000003, "text": " right?" }, { "end": 3955.2000000000003, "start": 3954.2000000000003, "text": " Hi, I follow you now, yeah." }, { "end": 3956.2000000000003, "start": 3955.2000000000003, "text": " Yeah." }, { "end": 3957.2000000000003, "start": 3956.2000000000003, "text": " Yeah, let's consider like DQN." }, { "end": 3964.2400000000002, "start": 3957.2000000000003, "text": " DQN you have multiple convolutional layers and then you have linear layers on the top" }, { "end": 3968.68, "start": 3964.24, "text": " and the Q function at the end is linear transformation of the feature presentation" }, { "end": 3970.2, "start": 3968.68, "text": " on the bottom, right?" }, { "end": 3975.3999999999996, "start": 3970.2, "text": " If you know that, if you know how to deal with the Q functions which are linear in some" }, { "end": 3982, "start": 3975.3999999999996, "text": " given feature presentation, you can use those techniques to apply those techniques on" }, { "end": 3986.64, "start": 3982, "text": " the settings that feature presentation comes from deep learning." }, { "end": 3992, "start": 3986.64, "text": " Sorry, that representation at the last layer is kind of dependent on the policy, right?" }, { "end": 3993, "start": 3992, "text": " Yeah, that's a thing." }, { "end": 3994.72, "start": 3993, "text": " Depending on how did it get there?" }, { "end": 3996.4, "start": 3994.72, "text": " It's not just summarizing the game." }, { "end": 3999.4, "start": 3996.4, "text": " It's also summarizing the behavior of this agent." }, { "end": 4000.4, "start": 3999.4, "text": " Yes, it does." }, { "end": 4001.4, "start": 4000.4, "text": " Yeah." }, { "end": 4004.4, "start": 4001.4, "text": " So how we could ever separate those to get a general general?" }, { "end": 4005.4, "start": 4004.4, "text": " We cannot." }, { "end": 4011.2, "start": 4005.4, "text": " Unless we wait for one or two years that I proved that we can, but I haven't, I mean," }, { "end": 4018.2, "start": 4011.2, "text": " I'm not working on it now, but I know that I'm going to write my piece on it." }, { "end": 4027.4399999999996, "start": 4018.2, "text": " But what I'm advocating for idea for PDK and it's like, let's imagine I give you a feature" }, { "end": 4036.3199999999997, "start": 4027.4399999999996, "text": " presentation of, let's say for environment in an RL problem." 
}, { "end": 4042.04, "start": 4036.3199999999997, "text": " And I tell you the optimal Q function is linear in this feature presentation, okay?" }, { "end": 4045.2799999999997, "start": 4042.04, "text": " And I tell you this feature presentation is fixed." }, { "end": 4046.2799999999997, "start": 4045.2799999999997, "text": " Okay." }, { "end": 4048.6800000000003, "start": 4046.28, "text": " So you're not learning the feature presentation." }, { "end": 4053.6400000000003, "start": 4048.6800000000003, "text": " Your optimal Q function is linear in the feature presentation I gave it to you." }, { "end": 4054.6400000000003, "start": 4053.6400000000003, "text": " Okay." }, { "end": 4059.84, "start": 4054.6400000000003, "text": " And now I ask you, hey, go and do like it, come up with the algorithm, which given this" }, { "end": 4064.52, "start": 4059.84, "text": " knowledge is able to do efficient exploration exploration." }, { "end": 4065.52, "start": 4064.52, "text": " Okay?" }, { "end": 4067.52, "start": 4065.52, "text": " That's what you did, right?" }, { "end": 4068.52, "start": 4067.52, "text": " That's what you did, right?" }, { "end": 4069.52, "start": 4068.52, "text": " Yeah, yeah." }, { "end": 4074.84, "start": 4069.52, "text": " So what I did here was like, if you give me the fixed feature presentation, how an" }, { "end": 4076.4, "start": 4074.84, "text": " error something is going to use it." }, { "end": 4083.92, "start": 4076.4, "text": " We didn't know before this work, how to use it for general problems like in this work," }, { "end": 4088.8, "start": 4083.92, "text": " the action of space can be anything as long as it's closed." }, { "end": 4093.8, "start": 4088.8, "text": " And I mean, it can be continuous, it can be infinity, it can be like big." }, { "end": 4100.12, "start": 4093.8, "text": " And a state of space is also should be closed, but it can be continuous, it can be finite," }, { "end": 4101.12, "start": 4100.12, "text": " infinite." }, { "end": 4105, "start": 4101.12, "text": " So it's like literally two, it's like general and general." }, { "end": 4108.32, "start": 4105, "text": " So we showed that in this work, if you give me the feature for a presentation and you" }, { "end": 4112.4, "start": 4108.32, "text": " tell me that the optimal Q function is linear in the feature presentation, we showed" }, { "end": 4119.4, "start": 4112.4, "text": " how we can come up with the efficient exploration, exploitation algorithm, which is able to learn," }, { "end": 4123.5599999999995, "start": 4119.4, "text": " is able to give us a reasonable regret bound." }, { "end": 4126.36, "start": 4123.5599999999995, "text": " So the cut regret bound is really amazing." }, { "end": 4131.5199999999995, "start": 4126.36, "text": " Everything on nice has some bad dependence on the horizon of the game, but still working" }, { "end": 4133.799999999999, "start": 4131.5199999999995, "text": " on it." }, { "end": 4139.12, "start": 4133.799999999999, "text": " But this is, I think, because of the analysis needs to be tightened, but algorithm, I think" }, { "end": 4141.839999999999, "start": 4139.12, "text": " is not quite right." }, { "end": 4146.88, "start": 4141.839999999999, "text": " And we showed that for this algorithm, you get a regret bound, which is going to be a" }, { "end": 4152.08, "start": 4146.88, "text": " square root of number of interaction, you have a number of episodes in the game." }, { "end": 4153.08, "start": 4152.08, "text": " Okay?" 
}, { "end": 4160.08, "start": 4153.08, "text": " To me, it was a huge improvement in the sense that now I know better how to do exploration," }, { "end": 4165.5199999999995, "start": 4160.08, "text": " exploitation in model-free reinforcement learning, when the Q function is linear, optimal" }, { "end": 4168.72, "start": 4165.5199999999995, "text": " Q function is linear in some function representation." }, { "end": 4176.28, "start": 4168.72, "text": " So now we were saying, okay, and also we knew how to do optimism based on this idea, and" }, { "end": 4179.4, "start": 4176.28, "text": " we also showed how to do Thompson sampling." }, { "end": 4186.599999999999, "start": 4179.4, "text": " And as I said, in practice, no one is going to give you that feature representation, unless" }, { "end": 4189.879999999999, "start": 4186.599999999999, "text": " there is a universal feature representation people have learned, but we don't have it" }, { "end": 4190.879999999999, "start": 4189.879999999999, "text": " yet." }, { "end": 4193.08, "start": 4190.879999999999, "text": " Some of my friends are working on it." }, { "end": 4197.96, "start": 4193.08, "text": " But if I don't have a good feature representation, what I can do, I can imagine the feature" }, { "end": 4204.32, "start": 4197.96, "text": " representation is okay, and I applied this algorithm that's clear, so I show it's good," }, { "end": 4210.4, "start": 4204.32, "text": " and this feature representation, and come up with the better policy, and now use this policy" }, { "end": 4215.799999999999, "start": 4210.4, "text": " as a study, it's going to explore a good part of the space space, and use this policy to" }, { "end": 4217.719999999999, "start": 4215.799999999999, "text": " learn a better feature representation." }, { "end": 4220.32, "start": 4217.719999999999, "text": " It's quite an alternative maximization." }, { "end": 4225.96, "start": 4220.32, "text": " I fix the feature representation, I learn a good Q function, and then fix that Q function" }, { "end": 4230.48, "start": 4225.96, "text": " to learn a feature representation, and then like these two are going to compensate for" }, { "end": 4231.48, "start": 4230.48, "text": " each other." }, { "end": 4236.839999999999, "start": 4231.48, "text": " Well, theoretically, I have not shown that this approach is guaranteed to work, which" }, { "end": 4244.599999999999, "start": 4236.839999999999, "text": " we should wait for one or two years, either I do it or my colleagues they do, but we" }, { "end": 4252.2, "start": 4244.599999999999, "text": " know that this is actually converges based on what I know about deep learning these" }, { "end": 4255, "start": 4252.2, "text": " ways, what I've learned recently, like last year." }, { "end": 4259.679999999999, "start": 4255, "text": " So they did something like that in model-based RL for Atari, the Kaiser paper, they did" }, { "end": 4264.4800000000005, "start": 4259.68, "text": " a loop where they keep learning, relearning the model, and then learning the policy, and" }, { "end": 4267.04, "start": 4264.4800000000005, "text": " then evaluate and then re-learning the model." }, { "end": 4269.04, "start": 4267.04, "text": " So maybe that loop is..." 
}, { "end": 4277.08, "start": 4269.04, "text": " Even in the gas work, you also, as we talked earlier, when you've collected, when you" }, { "end": 4281.76, "start": 4277.08, "text": " find that you have did your policy, you end up a different part of a space and collect" }, { "end": 4284.280000000001, "start": 4281.76, "text": " samples, and use those samples to update your model." }, { "end": 4288.88, "start": 4284.280000000001, "text": " You keep updating your model and model changes over time, it's regionally." }, { "end": 4294.56, "start": 4288.88, "text": " Just trying to connect some of these pieces together, like your BDKRN paper is looking" }, { "end": 4301.52, "start": 4294.56, "text": " at the uncertainty in the Q values specifically, but I guess we also have uncertainty in the" }, { "end": 4308.32, "start": 4301.52, "text": " model as well as in some areas of the state space where not really sure what the next" }, { "end": 4310.96, "start": 4308.32, "text": " state was going to be." }, { "end": 4313.400000000001, "start": 4310.96, "text": " That seems like it's..." }, { "end": 4316.4400000000005, "start": 4313.400000000001, "text": " Maybe is Gats maybe not modeling that uncertainty?" }, { "end": 4317.4400000000005, "start": 4316.4400000000005, "text": " Is that right?" }, { "end": 4319.44, "start": 4317.44, "text": " Well, Gats..." }, { "end": 4325.879999999999, "start": 4319.44, "text": " In Gats, the aim was not providing a better exploration or exploitation algorithm." }, { "end": 4334.04, "start": 4325.879999999999, "text": " In Gats, the aim was to, at the beginning, the aim was to come up with a better policy." }, { "end": 4340.44, "start": 4334.04, "text": " It doesn't matter how much it's going to cost time-wise or like money." }, { "end": 4341.44, "start": 4340.44, "text": " Money was..." }, { "end": 4344.679999999999, "start": 4341.44, "text": " I had a cap, but time-wise, I also had a cap." }, { "end": 4347.76, "start": 4344.68, "text": " A time, I mean time of the game." }, { "end": 4353.52, "start": 4347.76, "text": " For that, the sample complexity was not the main concern." }, { "end": 4358.280000000001, "start": 4353.52, "text": " In BDKRN, sample complexity was the actual everything." }, { "end": 4363.4400000000005, "start": 4358.280000000001, "text": " The sample complexity was actually driving force for that work." }, { "end": 4369.240000000001, "start": 4363.4400000000005, "text": " In Gats, we were saying that even if your model doesn't have any uncertainty, it's" }, { "end": 4375.2, "start": 4369.24, "text": " going to work." }, { "end": 4378, "start": 4375.2, "text": " Let's imagine that we didn't know..." }, { "end": 4381.679999999999, "start": 4378, "text": " Let's imagine we wanted you with Gats and we wanted you to use uncertainty on model." }, { "end": 4389.5599999999995, "start": 4381.679999999999, "text": " This, I think, is one of the chapters in Gats' paper, which shows how we can use the uncertainty" }, { "end": 4390.5599999999995, "start": 4389.5599999999995, "text": " in the model." }, { "end": 4396.639999999999, "start": 4390.5599999999995, "text": " Which actually I like it a lot because the way we train the genetic model for Gats is" }, { "end": 4398.639999999999, "start": 4396.639999999999, "text": " using Wasterstein distance." }, { "end": 4406, "start": 4398.64, "text": " Wasterstein distance gives us a distance between two distributions." 
}, { "end": 4412.320000000001, "start": 4406, "text": " The distribution of the reality and distribution that my genetic model generated." }, { "end": 4414.92, "start": 4412.320000000001, "text": " This one is a modern smash." }, { "end": 4421.96, "start": 4414.92, "text": " What the discriminator in Gats does, it looks at what is a distribution of the real transition" }, { "end": 4428.96, "start": 4421.96, "text": " and what is the output of the genetic model and how four these are in the Wasterstein metric," }, { "end": 4433.84, "start": 4428.96, "text": " not Wasterstein metric, but in the Wasterstein distance sense." }, { "end": 4437.32, "start": 4433.84, "text": " You can call that one modern uncertainty." }, { "end": 4446.68, "start": 4437.32, "text": " We observed that for FCE show, if you show like quantum transitions to the model, the model" }, { "end": 4453.4800000000005, "start": 4446.68, "text": " learns how to produce those transitions and the distance you get is going to be really low." }, { "end": 4459.76, "start": 4453.4800000000005, "text": " But if you show part of the state space that the agent has been there for a few times," }, { "end": 4467.52, "start": 4459.76, "text": " time steps, the Wasterstein distance in that part of the state is going to be big." }, { "end": 4471.8, "start": 4467.52, "text": " If you use this notion, you can make sure that your agent is going to..." }, { "end": 4480, "start": 4471.8, "text": " And you put this quantity in the reward, you are encouraging your agent to not only places" }, { "end": 4486.2, "start": 4480, "text": " that receives high reward, but also places that the model is uncertain." }, { "end": 4490.6, "start": 4486.2, "text": " Means the Wasterstein distance or like distance between distribution is like high." }, { "end": 4499.6, "start": 4490.6, "text": " But doesn't it require you to have the original sample to compare your general results?" }, { "end": 4503.200000000001, "start": 4499.6, "text": " So using you have original samples for every part of the state space?" }, { "end": 4513, "start": 4503.200000000001, "text": " When you go to a state and you can ask your agent model, what is the next step?" }, { "end": 4514.6, "start": 4513, "text": " So what is the next thing?" }, { "end": 4518.8, "start": 4514.6, "text": " And also you go to make your decision." }, { "end": 4523.6, "start": 4518.8, "text": " You observe and then you do your generated sample and you compare right then." }, { "end": 4528.400000000001, "start": 4523.6, "text": " You compare and this is going to be implicit reward you are going to add there." }, { "end": 4530.599999999999, "start": 4528.4, "text": " You have a real reward, you get..." }, { "end": 4531.599999999999, "start": 4530.599999999999, "text": " Curiosity reward?" }, { "end": 4535.4, "start": 4531.599999999999, "text": " Well I wouldn't call it curiosity, but it's going to be a sound which is uncertain to you" }, { "end": 4537.4, "start": 4535.4, "text": " that you are going to add." }, { "end": 4545.4, "start": 4537.4, "text": " In the theoretical point, we don't call it curiosity, it's a concentration of the measure." }, { "end": 4547.4, "start": 4545.4, "text": " And this is different." }, { "end": 4556.4, "start": 4547.4, "text": " But you intuitively encourage the agent to go to places that distance is high." }, { "end": 4566.4, "start": 4556.4, "text": " But just to look at that a little more, you can only check the wash distance with the very next sample that you're looking at." 
}, { "end": 4575.4, "start": 4566.4, "text": " You can't grow your tree down to 10 levels and look at the 10th level and say, oh that note on the 10th level, I have high uncertainty about." }, { "end": 4578.4, "start": 4575.4, "text": " You can't do that until you get there." }, { "end": 4586.4, "start": 4578.4, "text": " So you add any state you are after making the decision you go to new state." }, { "end": 4591.4, "start": 4586.4, "text": " And for this transition you can see how uncertain you are about this new state." }, { "end": 4592.4, "start": 4591.4, "text": " Just for one level." }, { "end": 4593.4, "start": 4592.4, "text": " For one level." }, { "end": 4595.4, "start": 4593.4, "text": " The uncertainty only carries one level." }, { "end": 4605.4, "start": 4595.4, "text": " If I call that uncertainty the reward in that point and then I learn another DQM model on just this intrinsic reward." }, { "end": 4615.4, "start": 4605.4, "text": " So what is going to happen is like if I am uncertain like in some state that is going to happen like 20 next 20 times step." }, { "end": 4625.4, "start": 4615.4, "text": " My the DQ model, the second DQM model I learned on this like intrinsic implicit rewards is going to guide me to go there." }, { "end": 4638.4, "start": 4625.4, "text": " So it's like you basically learn another DQM but this time that this DQM is quite on the reward based on uncertainty you get from this watcher 90." }, { "end": 4650.4, "start": 4638.4, "text": " I mean, Schmidt, you wrote a paper about this in 1992 about agents agents that had intrinsic he called a curiosity that would explore an environment and learn all the dynamics of the environment." }, { "end": 4664.4, "start": 4650.4, "text": " I was a little off of the date. It was actually 1991 and the paper is called curious model building control systems by Schmidt, you find links to this paper and all the others that we mentioned in the show notes at talkrl.com." }, { "end": 4678.4, "start": 4664.4, "text": " So there are two things. One is if you look at a paper called UCRL2 or Upper Confidence found reinforcement learning came at 2010 by" }, { "end": 4697.4, "start": 4678.4, "text": " a set of amazing theory theory like Peter Oren, Josh and Ronald. They showed how to use uncertainty and how that uncertainty affects your Q function." }, { "end": 4715.4, "start": 4697.4, "text": " How you're supposed to use that uncertainty and they proved the first order of the model I guess it was the first one maybe wasn't. But it was the grounding work in tabular mdp on richer than else." }, { "end": 4733.4, "start": 4715.4, "text": " In that work, they show how to use it. In my work, I exactly use the recipe, they provide which is probably correct. But for my case was watcher stand distance for decades, well actually concentration measure in like in an altumetric." }, { "end": 4753.4, "start": 4733.4, "text": " But yeah, if you are interested in like using the uncertainty of that's another cool idea. If you want to use model model based on model free for reinforcement learning, you can use the model to estimate uncertainty for you, which that uncertainty is like really nice." }, { "end": 4766.4, "start": 4753.4, "text": " In some say it's like watcher stand distance and if you can show how to use watcher stand distance uncertainty for exploration, then you know how to algorithm to use." }, { "end": 4774.4, "start": 4766.4, "text": " But it's another use of model based approaches in reinforcement learning. 
They can compute uncertainty for you." }, { "end": 4798.4, "start": 4774.4, "text": " So I'm kind of like theorist practitioner, most of the theorists. I was trying to be helpful to this community and it's going to help people to direct their way of the research or the philosophy of the research or like the type of problems they work on and how critical they should be to do their own research." }, { "end": 4810.4, "start": 4798.4, "text": " I mean, I would love to like share these things with other people like especially for with junior people to get a better understanding of what they're dealing with." }, { "end": 4815.4, "start": 4810.4, "text": " So other is a hard thing and if you commit to it, it needs to be careful." }, { "end": 4828.4, "start": 4815.4, "text": " So can I ask you, how do you see RL looking in like say three years or 10 years from now will it be completely different or will it just be like a little bit better than it is now? How do you see it evolving?" }, { "end": 4857.4, "start": 4828.4, "text": " So one thing is happening now is and it's going to take over as we empirical study is going to be more realistic. It's been a like one from like Atari games or the mid jucal and go to like real world more realistic problems and try to come up with try to like solve actually" }, { "end": 4865.4, "start": 4857.4, "text": " it real world problems that we need to solve. So the empirical is going to be that that case. But theoretically is going to advance a lot recently." }, { "end": 4877.4, "start": 4865.4, "text": " Like in last few weeks there were like many many works on policy creating that I've been reading and also I have one work like a damn Laura and this topic." }, { "end": 4897.4, "start": 4877.4, "text": " And theoretically we're going to advance the field a lot and by the still we are going to have a lot of people working on like principles of or like first level understanding of these problems in like two examples again on Atari games or like great worth." }, { "end": 4912.4, "start": 4897.4, "text": " Or we took off for a for any better understand but in like 10 years we are going to have a lot of our contribution in in like super realistic real world problem. Yeah that's my take." }, { "end": 4920.4, "start": 4912.4, "text": " What what are you excited about working on when you're looking forward what do you what do you plan to do to focus on next and for the next few years." }, { "end": 4941.4, "start": 4920.4, "text": " My plan mainly is to I've been the last two three years I've been I've been focusing a lot on empirical side of reinforcement learning before that I was like 100 100% doing theory last two years I've been working on theory and practice." }, { "end": 4958.4, "start": 4941.4, "text": " I think I learned a good chunk of experience from interacting with practitioners and like work on this problems that I now I can spend more time doing and developing like theoretical understanding of the problems." }, { "end": 4980.4, "start": 4958.4, "text": " And one thing that I am excited about is taking the current methods and not like the not method current principles and understanding we have in reinforcement learning and adapt them to the problems that are in in more immediate importance." 
}, { "end": 5008.4, "start": 4980.4, "text": " And then self-driving car is like healthcare I am really excited about using this methods in healthcare and also in which which which makes me to kind of redesign or rewrite many many of the principles for healthcare problems like healthcare problems like quite different from the problem formulation we have been thinking so far." }, { "end": 5025.4, "start": 5008.4, "text": " I'm going to as a faculty in next year when I'm going to be in my group is going to be like devoted a lot and healthcare problems less in empirically more from the scope point of view and providing the deeper understanding what it should look like." }, { "end": 5048.4, "start": 5025.4, "text": " And also interested in the there's another part that it's in reinforcement learning quite run we haven't worked on it is like control theory control like control theory has developed for many many years." }, { "end": 5071.4, "start": 5048.4, "text": " But the physical like understanding of like empirical process is these things they came out like 20 years ago three years ago they they they they haven't been incorporated in study of control theory that deeply that there have been many control theories that I know they're great and they they most of those things they have developed them." }, { "end": 5098.4, "start": 5071.4, "text": " And they haven't been incorporated in in control theory to come with the controllers are of the official so there's another part of my future research which is going to be on this topics of like providing a better understanding in control theory or in general adaptive control which is kind of whether if I say the brain force of learning I make might make people mad but that's the case." }, { "end": 5125.4, "start": 5098.4, "text": " That sounds amazing I can't wait to read all about it I think our time has come to close doctor Aziz and a Shelley thank you so much for your time today I've learned so much from talking with you and from reading your work and I'm sure I'm going to learn more from you reading your your future work thank you so much for sharing your insight in your time with all of us today my pleasure and thank you so much for for having me today." }, { "end": 5142.4, "start": 5128.4, "text": " That's our episode for today folks be sure to check talk rl dot com for more great episodes." } ]
Antonin Raffin and Ashley Hill
Antonin Raffin and Ashley Hill discuss Stable Baselines past, present and future, State Representation Learning, S-RL Toolbox, RL on real robots, big compute for RL an...
https://media.transistor…122.mp3?src=site
This is the TalkRL Podcast, all reinforcement learning, all the time. Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan. Antonin Raffin is a researcher at the German Aerospace Center in Munich, working in the Institute of Robotics and Mechatronics. His research involves using machine learning for controlling real robots, because he says simulation is not enough. With a particular interest in reinforcement learning, he will start his PhD soon. Welcome Antonin. Thank you for having us here, Robin. And Ashley Hill is doing his PhD on improving control algorithms, using machine learning for real-time gain tuning. He works with neuro-evolution, genetic algorithms, and RL, applied to mobile robots. Ashley holds a master's degree in machine learning and a bachelor's in computer science from the Université Paris-Saclay. Thank you for joining us, Ashley. Thank you for having us. So I wonder if we could start by hearing a little bit about what you do in your work. Antonin? So currently I'm working with the David robot, which is a fantastic robot with variable stiffness, and working on exciting oscillations using non-linear modes. And does that involve some RL? Not yet, mostly it involves black-box optimization and also a bit of dimension reduction using autoencoders. What about you, Ashley? I'm working on real-time gain tuning for robotic controllers. The main idea is, instead of using reinforcement learning completely on the robot, we keep some of the state-of-the-art controllers that exist and tune them using machine learning, be it reinforcement learning, optimization, or just neuro-evolution. So I originally reached out to you both because I'm a fan of your Stable Baselines work, so I wanted us to get into that. You are the primary authors of this fork of OpenAI Baselines; last I checked, there are actually 2,700 forks of OpenAI Baselines, but this one is quite special. It's got a number of great features, awesome documentation. The code is so clean and consistent and hackable. You added more algorithms than what they have in the OpenAI version, TensorBoard support, and I'm sure a bunch of other things. So I think it's a fantastic framework. I've used it myself, it's been great to work with. So I just want to thank you both, first of all, for doing this and sharing it openly. Thank you for using it then. Ashley, can you tell us from your point of view, how did you come to work on this framework and to work with Antonin? Well, initially it ended up being a side project of the internship that I did at ENSTA. The project was on state representation learning. The issue we were having is that I had to deal with the reinforcement learning part of this project along with Antonin. We were using OpenAI Baselines at the time, and we kept on trying to add hacks and fixes to OpenAI Baselines because we had, I think, at the end, ten different code-breaking issues with it. And at one point, if I remember correctly, they reintroduced a bug they had fixed in DQN, and they actually ignored the comment above saying, do not delete this line, otherwise saving DQN models will break, and it broke. So at this point, with Antonin, we decided, let's fork OpenAI Baselines, let's call it stable baselines, and actually fix the issues we found, because OpenAI were not actually listening to our pull requests and our issues on GitHub. Yeah, that's the main story.
The main story is, we started at first patching the code in our own repo, and at some point we had too many patches and it kept breaking when we pulled the original repo. So we wanted to have something more stable to work with. So that's how it started, and then at some point during the refactoring, Ashley showed me some nice features he came up with, a nice scikit-learn-like syntax, and I found it very cool, so I decided to give more time to that. And I think it took us like two months to refactor everything. The first month was about documenting, having a common code style for everything, and then the next month was about refactoring and having a common API for all the models. Yeah, it was two months of hard work. How did you guys split this work between you? Oh, initially it started off, I think, as Antonin asking me to make a fork, and that's why it's under my GitHub page, to make a fork and just fix the patches, and I think I got slightly zealous and started to add a lot more fixes than initially planned. Yeah, the way we started to work is, Ashley was doing all the refactoring and I was working more on the documentation, and when Ashley was adding new features, I was trying to break the code, and I was coming to him showing that it broke, and then he fixed things again until we had something working. That was mostly it. Yeah, him working at first on refactoring and me working on documenting and trying to break his code. So you guys added more algorithms than the original OpenAI Baselines code has. I think that's TD3 and SAC. Yep. How did that go? I think the idea was, those two algorithms are important for robotics, and looking at what was in the repo, it did not correspond to the latest algorithms that seemed to work best. I think I needed SAC for one of my side projects and I wanted to have it at some point, and it seemed to be both a stable and working algorithm that people were using, and it was actually working on real robots, so I thought it was a good idea to add it. And the way it went was mostly reading the paper lots of times, then reading the different implementations, especially the original implementation, and then having it proof-tested using the zoo, so making it work on all the environments, and especially the harder ones. And once it's working on the harder ones, then you're pretty sure that it should work. And I think in the future we'd like to also add more, but usually I would prefer to wait and not implement all the new algorithms, but wait and see what sticks around; SAC and TD3 were the two algorithms that were actually used by the community and apparently actually working. Have you guys chatted with OpenAI much about this fork? Yeah, I think we made a pull request. I believe it was a month after we got to this very nice, refactored, scikit-learn-like, user-friendly interface that we were targeting, and OpenAI took a while to answer us, and after a while said they were interested in integrating it into their main code base. In exchange we said, okay, but we would like you to be a bit more open about your issues and your roadmap and, you know, take into proper consideration all the problems that a lot of users besides us had. And their answer after a few months, I think, was: we would like to integrate these changes, however they are no longer compatible with our roadmap. So I think they declined our pull request.
It's a bit more complicated. I think they said at some point, we will come back to you, and they never came back after the last message, I think. The pull request is number 481, I just found it again. Yeah, we discussed with them, but at the end there was no input from them anymore, so I think it's not possible anyway, we kind of diverged. But at first it was a bit weird, because we did the pull request and we had to wait two months before getting a comment. I think you have at least some fans inside OpenAI. I was at the last NeurIPS in Montreal and I heard an OpenAI researcher say some nice things about stable baselines. It was just banter, so maybe take it with a grain of salt, but in chatting afterwards they said stable baselines was the better baselines. Oh wow. So I think you have some fans in there. Good to know. Good to know then. What do you see as the future of stable baselines? I personally see it as being not the most advanced reinforcement learning library, but being like scikit-learn: a very user-friendly, very open library where people can just come up and plug their stuff in and get it up and running properly. After that, what we would like is for it to go towards TensorFlow 2.0. I want to add some more genetic algorithms, those kinds of things, and I would also like to add better parallelization. Okay, and when you say better parallelization, do you mean in terms of having multiple environments running or multiple GPUs, what are we talking about here? It's an issue that crops up very often in Python programming. Python doesn't handle parallelization very well at all because of the GIL. So the idea would be to maybe add threading, or try and optimize the parallelization by batching it, CPU parallelization mostly, because that seems to be the bottleneck at the moment in the library. And again, is that the CPUs running the environments? Yes, sorry. Yeah, the next big step would be TensorFlow 2, and I'm still wondering about creating a PyTorch version, so having the same API but with PyTorch as a backend, because since the beginning, using TensorFlow has always been a bit of a problem, and we use TensorFlow for legacy reasons, not because of our choice, there was no choice. So that's one point, and the other point is, because we didn't start from scratch, there are some legacy things that are still there but are hard to change without breaking the code. I may try at some point also to have a PyTorch version, but keeping the same API and things. Yeah, and for the future we already have kind of a roadmap and features we would like to add, and people can contribute to that. They can directly look at the roadmap on the repo. And one last thing, I mean, there are lots of reinforcement learning libraries out there, and I see ours really as something that is easy to use, but that is not really meant for changing the algorithm. We provide self-contained implementations that are not so modular compared to other libraries, but you can directly look at the code and understand how the algorithm works, and we also provide documentation on how to use it. That's where we stand, I think, compared to others. For the future, the big next step is TensorFlow 2, and some nice features: there are lots of things about evaluating reinforcement learning algorithms that I would like to add to the library, also to make research more reproducible. If you have a standard way of evaluating an algorithm, if you provide a function to do that, then I think it could be a good idea.
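To make the scikit-learn-like interface described in this exchange a bit more concrete, here is a minimal sketch of the train/save/predict workflow in that style. The class and argument names (PPO2, SAC, MlpPolicy, DummyVecEnv, learn, predict) follow the stable-baselines 2.x API as I recall it; exact names, defaults, and environment ids may differ between versions, so treat this as an illustration rather than a definitive reference.

```python
# Minimal sketch of the scikit-learn-like stable-baselines workflow discussed above.
# Names follow stable-baselines 2.x as I recall them; check the docs of the version
# you actually install, since the API has changed over time.
import gym

from stable_baselines import PPO2, SAC
from stable_baselines.common.vec_env import DummyVecEnv

# Discrete-action task with PPO2: one line to build the model, one to train it.
env = DummyVecEnv([lambda: gym.make("CartPole-v1")])
model = PPO2("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100000)
model.save("ppo2_cartpole")

# Inference mirrors scikit-learn's predict() style.
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)

# Continuous control with SAC, one of the algorithms added in the fork;
# a string environment id is enough for quick experiments.
sac_model = SAC("MlpPolicy", "Pendulum-v0", verbose=1)
sac_model.learn(total_timesteps=50000)
```

The appeal the two of them describe is essentially this: the whole train-save-predict loop fits in a dozen lines, and swapping PPO2 for SAC or TD3 is a one-line change.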
In terms of this library, I loved how clear everything was, and I could actually trace back all the way from the line that calls the algorithm down into all the implementation details, so we could understand, if we wanted to change anything, where the change would need to be made. By comparison, looking at RLlib, the abstractions were so deep that I found it difficult to work with. So there's definitely got to be pros and cons. Do you guys want to comment on other libraries, other libraries you like in this space? Actually, I really like RL Coach; the code of RL Coach is really good if you want to just learn about the algorithms and how they are implemented, because the code is well written and documented, so I really like RL Coach for that. There's another recent one I've seen, I didn't have time to try it, that is called Catalyst, but Catalyst seems to also do deep learning in general, not only reinforcement learning, and it is a lot more modular, so you can try to combine improvements from different algorithms, but then that makes it harder if you want to know how an algorithm works, because you have to search through all the different modules. Still, that may be good for trying to combine improvements from different algorithms. TensorFlow Agents? Yeah, it's relatively new. I haven't had time to go too deep into it, but it looks relatively easy to use and pretty modular in its interface. It doesn't really need Gym to interface with environments. Yeah, the only thing, up to now, that I didn't find was good documentation like we have in stable baselines. I've tried; they have mostly tutorials and notebooks on how to use things, but I couldn't find documentation as good as I would have expected for now, so I hope they will also improve on that. It seems like early days for TF Agents. You both co-authored a paper titled Decoupling feature extraction from policy learning: assessing benefits of state representation learning in goal-based robotics, and that was at the beginning of this year. Could you tell us the main idea of this paper? Yeah, okay. So this paper is kind of the summary of one year of research about state representation learning. State representation learning is about how you extract relevant information, how you extract a compact representation from your input data; for instance, if you have an image from a robot, how do you extract the relevant information from that image. And the idea is also, if you do that, then you don't need to re-learn the feature extractor each time you want to apply reinforcement learning. You can reuse the same representation, and the other thing is that it's compact, so you can use many different algorithms, like evolution strategies, and it's also usually interpretable, so you can directly understand why your feature extraction is working or not, or what your agent is getting as input. So if I understand this correctly, are the representations learned completely separately from the RL, and are they learned in advance or at the same time? They are learned in advance from the environment, where we sample pictures from the environment, gather a sufficient data set, and then learn from that, in an unsupervised manner. So this paper looks at specific types of representation learning, including what are called robotic priors. Can you help us understand what are robotic priors? It almost seems like that phrase could be capitalized.
Okay, so the idea of robotic priors: in order to learn the representation, you can have some prior knowledge about what a good representation should look like, and this was work from Rico Jonschkowski from the RBO Lab in Berlin, together with Professor Brock. And the idea is, you have some knowledge about how the world works and how a good representation should look for a robot, and you encode that as losses, and that way you can directly learn from the data. So for instance, you have a prior, the prior of smoothness, that tells you that things don't teleport, so from one instant to the next, things should change only smoothly. That's one example of a prior. And our work was also mainly focused on applying reinforcement learning for robotics, so our methods always had robotics in mind. So do these types of priors usually or always apply in the real world? Like, are there cases where these priors don't fully apply? Yeah, those priors are quite restrictive, so there's one simple case where they are violated, which is collisions. I think one of the priors is violated by that. And this was an interesting idea for extracting information from the input, but in practice you can have a much simpler, a lot simpler objective than all those different priors for learning a representation. So you touched on this a bit already, but what kinds of things do you consider when you're choosing between different methods of state representation learning, and how do you select the best one? I think our main concern was reinforcement learning performance, and being able to learn a good policy from that representation; that was our first measurement. And then we developed, I believe, a few other measurements in order to verify the correlation, for example with the ground truth. So what was the real robot position, and we wanted to see how that correlated to our learned state, and things like that. And well, if you want to apply state representation learning to a new task, it depends on what you want to learn, and on what data you have access to. If you have access only to images, then using an autoencoder is pretty straightforward and usually works fine, unless your objects are not salient in the image, for instance. And if you have access to more data, for instance if you want to learn what is controllable, then usually you should use an inverse dynamics loss, where you will extract what you can control. Yeah, it usually depends on what type of data you have access to, and what the data looks like as well. So is the goal here to ensure that state representation learning learns enough to enable the policies and goals that we have right now, or are we trying to come up with representations that are sufficient to enable all the future policies and goals that we may ever have? Is that a different question or is that the same thing? I think that's related, because you can have state representations that are dedicated to only one task, for instance if you use the reward information, and you can have state representations that are not tied to one reward and can be used for several tasks in the same environment. So Antonin, do you use state representation learning in your work, and can you talk a little bit about how you use it in your work? It's not directly state representation learning, it's mostly dimension reduction, so it's a way to do that in my work on non-linear normal modes, which are an extension of linear modes.
There's a characterization that says the system can be described with only one variable, and the way to find that mode is to train an autoencoder that finds the transformation to the mode manifold, and this is how it is done. So this can be seen as state representation learning, but I see it more as dimension reduction in that case. But I've been using state representation learning more for a side project. For instance, with a robotic car, we are trying to learn useful features to apply in the real world. How did that work out? For now it's just a simple autoencoder, because I don't have much data, but I would like to try other types of features afterwards. Ashley, do you want to comment on this? Like, do you use SRL in the work that you do? I don't use it in my work, unfortunately, because my state dimension is relatively small and I already have access to the ground truth information in my task, but it's something I would like to use in the future, because it tends to speed things up; in what we tried during the paper, it tended to increase performance drastically in terms of training time. So I noticed at least two papers on state representation learning were accepted for NeurIPS this year, and maybe there are many more, but I just noticed these two: Dynamics-aware Embeddings by Whitney et al., and Unsupervised State Representation Learning in Atari by Anand et al. I wonder, do you guys have any thoughts about the future of SRL and where it's going? I see it more as being an extra tool in the tool belt of reinforcement learning developers, because it tends to outperform end-to-end learning. So when you have a policy with a convolutional neural network, it is very hard to find a good policy quickly using that, but if you have a learnt state representation, then you can infer many policies for that task much faster than you would just by learning through the convolutional neural network. So I see it more as an extra tool. I found it really relevant for everything that is robotics, because you cannot simulate like you do for Atari or any games or simulation. You usually have only one robot, and here you need to be sample efficient, and by decoupling the feature extraction from the policy learning, you usually speed up learning, even though, if you don't extract all the relevant information, you will have a small drop in performance. But still, usually in robotics you want to have a good enough policy; you don't want the best one, you want the one that you can find quickly, that works, that is good enough for your task. So I still find it really relevant, especially for robotics. You both authored the robotics-rl-srl repo on GitHub. Can you tell us a bit about that repo? The S-RL Toolbox, I think, was mainly designed so we could benchmark loads of different SRL tools, and we coupled it with stable baselines so we had access to all these reinforcement learning tools; we could hot-swap all the RL algorithms and hot-swap all the state representation learning algorithms, and mix and match and see what worked well. That was the main goal of that repository, so that other users could also maybe try and see what could work with state representation learning.
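To make the decoupling idea in this discussion more concrete, here is a minimal sketch of the two-stage recipe: pre-train a compact state encoder on frames sampled from the environment, using a reconstruction loss plus a temporal-smoothness term in the spirit of the robotic priors mentioned earlier, then reuse the frozen encoder as the feature extractor for whatever RL algorithm you like. The network sizes, the loss weighting, and the training-data placeholder are invented for illustration and are not taken from the S-RL Toolbox code.

```python
# Minimal sketch: pre-train a compact state encoder with a reconstruction loss plus a
# temporal smoothness term (states should change smoothly between consecutive frames),
# then reuse the frozen encoder as the feature extractor for policy learning.
# All sizes, weights and names here are illustrative, not taken from the S-RL Toolbox.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, obs_dim=64 * 64 * 3, state_dim=32):
        super().__init__()
        # Encoder maps a flattened image to a compact state vector.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, state_dim),
        )
        # Decoder reconstructs the image from the compact state.
        self.decoder = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, obs_dim),
        )

    def forward(self, obs):
        state = self.encoder(obs)
        return state, self.decoder(state)

def srl_loss(model, obs_t, obs_tp1, smoothness_weight=1.0):
    """Reconstruction loss plus a smoothness prior: consecutive states stay close."""
    s_t, rec_t = model(obs_t)
    s_tp1, rec_tp1 = model(obs_tp1)
    reconstruction = ((rec_t - obs_t) ** 2).mean() + ((rec_tp1 - obs_tp1) ** 2).mean()
    smoothness = ((s_tp1 - s_t) ** 2).mean()
    return reconstruction + smoothness_weight * smoothness

# Unsupervised pre-training on observations sampled from the environment.
model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for obs_t, obs_tp1 in []:  # replace [] with a dataloader of consecutive flattened frame pairs
    loss = srl_loss(model, obs_t, obs_tp1)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Afterwards the frozen encoder maps raw pixels to a low-dimensional state, and the RL
# algorithm (PPO, SAC, evolution strategies, ...) is trained on that compact state
# instead of on raw images.
with torch.no_grad():
    compact_state = model.encoder(torch.rand(1, 64 * 64 * 3))
```

Because the encoder is trained once, up front and unsupervised, the same low-dimensional state can be reused across PPO, SAC, or evolution strategies, which is exactly the kind of hot-swapping the toolbox was built to benchmark.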
Yes, the main idea was, at first we started with only state representation learning, how to combine the methods and how to evaluate them; we added tools to debug, visualize and assess the quality of the different representations, and at some point we needed to evaluate them in the RL setting, so we started adding different baselines, and that was the ancestor of stable baselines, because we started doing the unification in that repo, and in the end we put everything that we fixed and did there into stable baselines. So the idea is to provide tools to try new things and to compare SRL methods, and I've been using those tools on all the projects, for visualization, for exploring the latent space of the autoencoder, of the learned representation, we had some tools for that, and we also added some metrics for assessing the quality when you know the ground truth. This is mainly what this repo is about, and it was also almost one year of work, together with René and Natalia. What do you find interesting these days in the RL space? I've been keeping up with RL, in fact, and going a bit more into model-based RL, because my goal is to bridge the gap between RL working in simulation and RL working in the real world, so having sample efficient reinforcement learning, and having algorithms that actually work, rather than ones that only work on a particular environment in simulation. One of the trends I'm a bit afraid of is people using it only on games or in simulation, and I always prefer when you have experiments with real robots; from the soft actor-critic paper, they have this soft actor-critic and applications paper where they show that it actually works on different robots in different settings, and this was quite interesting. And on the other hand, I'm moving more in the direction of adding prior knowledge to the methods, so you have some knowledge about your task, you have some knowledge about how things work, how you should solve the task, and recently there was a paper, TossingBot, where they hard-coded almost everything and only learned what cannot be modeled, and it actually works directly in the real world. I would think things would be better that way, not learning everything from scratch, because it just doesn't make sense to always start from scratch, especially if you have some knowledge that you can incorporate. I noticed that most frameworks really focus on model-free RL, and yet model-based RL seems to be the key for sample efficiency. I wonder when we're going to see more flexible toolkits for model-based RL? Yes, I'm also waiting for that, and I may also work on that if nothing appears. But the main problem with model-based RL is that apparently it only works, for now, in very simple settings, and the methods are harder to implement, slower to train, slower to make work, so I think that's why, for now, it's not really used, and there are not so many implementations available, or at least I would expect more of them. I think recently there was a big benchmark of the different model-based methods, and they are still far from model-free methods when you look at the asymptotic performance, and I think the main problem is when the algorithm starts exploiting the model, so when it starts using the overestimation of the model. I chatted about that a bit with a PhD student at TU Darmstadt, and that was quite interesting. The thing is, there are many more model-free implementations, because they are easier to implement and they just work for now, so I think that's why you don't have the stable baselines for model-based RL yet, but I would like that to happen also.
So Ashley, you had some comments against end-to-end RL, do you want to share those with us? Yes, there are a lot of situations I've seen in recent years where people are trying to run reinforcement learning in end-to-end settings, removing any structure from the control loop, for example. I've seen situations where people will take the input of a camera and expect to directly control a car, and to bypass a hundred-plus years of Kalman filters, of advanced controllers and all these things with proofs of convergence and so on, and I think that's not the right solution. I think reinforcement learning has a very important role, specifically in planning and task management. But when it comes to low-level control and feature extraction, these are things that come hand in hand with neural networks, I tend to find, and they're not things that deep RL should be concerning itself with too much. I think its strength lies more in planning and in those kinds of things. So it seems a little bit naive to just throw something really hard at it and say, oh, deep RL will just figure that out. Why do you think that people are trying to do that? Is it just ignorance of other methods, or some kind of magical belief about the capabilities of these systems? No, I'm actually glad, in some respects, that they're doing this, because they're testing, in some ways, the limitations of deep reinforcement learning. They're testing the boundaries before actually seeing where this can be applied properly, by checking exactly what you can remove and seeing whether the systems still work, in some sense. They're really seeing how far can I push this method until it breaks. So I'm glad they're doing it in some respects, but I don't appreciate the idea of throwing massive amounts of computing power at a problem with deep RL and saying, oh, it's fine, it works. Yeah, and usually in the successful applications, like the DeepMimic paper, you still have a low-level PD controller that does the work of controlling the position. So usually having a combination of the two is needed. Yeah. There's a trend lately where at least some organizations are going for really big RL compute projects, and that means that many other organizations cannot reproduce their results, or compete in that area. Do you think that's something we need to be concerned about, or is it just a temporary thing? Is it a long-term trend? I personally don't like, I think it was, I can't remember the name, it was the latest language generation model that OpenAI released, GPT-2. When it came out, I remember them saying that they poured tons of hours of Reddit conversations into this algorithm with tons of computing machines, and they were hesitant to release it because of the damage it could do, because no one else actually had the computing power to be able to detect the system; they had just thrown so much power at it that you couldn't do anything against it. And I found that it's not only unfair, I think, towards the other developers, but it has to be done, so I'm glad it's being done, personally. I mean, on the other hand, compute costs are falling quickly, and I recall the Behaviour Suite from DeepMind; I think they wrote somewhere in the paper that these results could be replicated for a very small number of dollars on the cloud, something like on the order of two digits of dollars. So maybe the simple trend is that these large organizations will have access to big compute just a little bit sooner than we do.
Maybe it's as simple as that. Hopefully it's that. What I would like to see in papers, personally, would be the joule cost of actually training it. I would honestly like to see how many joules it took to actually train this model, just from an energy standpoint, because I'm not sure, if we all start doing this, that we have enough power. Yeah, joules and also carbon emissions, right? Yes, yeah, exactly. Like, I've always wondered how much carbon AlphaZero or AlphaGo took to train, or the Dota 2 bot. Well, yeah, and we can shuffle the accounting by trying to go for green power, but at the end of the day we all need to share the power that we have. So it's kind of more of an accounting trick to use, you know, offsets and all that. Yeah, I would just like to see some acknowledgement of what it took, not necessarily any repercussions or anything, just how much energy did it genuinely take you to do this; even from a scale point of view, it's important to know that. Antonin and Ashley, I so appreciate you being here with me today. I've been looking forward to chatting with you for many months, and it's a small dream come true for me. Thanks so much for sharing your work openly. I love stable baselines and I read your paper with fascination. Thanks so much for sharing your time and your insight with us all today. Thank you. Thank you for having us. It's been a pleasure. That's our episode for today, folks. Be sure to check talkrl.com for more great episodes.
[ { "end": 12.8, "start": 0, "text": " This is TalkArail Podcast, all reinforcement learning, all the time." }, { "end": 15.6, "start": 12.8, "text": " Interviews at Brilliant folks across the world of RL." }, { "end": 20.240000000000002, "start": 15.6, "text": " I'm your host, Rob and Chauhan." }, { "end": 24.96, "start": 20.240000000000002, "text": " AdTelon and Ophon is a researcher at the German Aerospace Center in Munich, working in the" }, { "end": 27.76, "start": 24.96, "text": " Institute of Robotics and Mechanronics." }, { "end": 31.720000000000002, "start": 27.76, "text": " His research involves using machine learning for controlling real robots, because he says" }, { "end": 33.760000000000005, "start": 31.720000000000002, "text": " simulation is not enough." }, { "end": 38.36, "start": 33.760000000000005, "text": " With a particular interest in reinforcement learning, he will start his PhD soon." }, { "end": 39.36, "start": 38.36, "text": " Welcome Antonin." }, { "end": 42.480000000000004, "start": 39.36, "text": " Thank you for having us, you're Robin." }, { "end": 46.84, "start": 42.480000000000004, "text": " And Ashley Hill is doing his PhD on improving control algorithms, using machine learning" }, { "end": 48.28, "start": 46.84, "text": " for real-time gain tuning." }, { "end": 53.760000000000005, "start": 48.28, "text": " He works with neuro-evolution, genetic algorithms, and RL, applied to mobile robots." }, { "end": 58.4, "start": 53.76, "text": " Ashley holds a master degree in machine learning and a bachelor's in computer science, from" }, { "end": 61.08, "start": 58.4, "text": " the University, Polly Sakele." }, { "end": 62.8, "start": 61.08, "text": " Thank you for joining us, Ashley." }, { "end": 64.12, "start": 62.8, "text": " Thank you for having us." }, { "end": 69.8, "start": 64.12, "text": " So I wonder if we could start by hearing a little bit about what you do and your work." }, { "end": 70.8, "start": 69.8, "text": " Antonin." }, { "end": 75.68, "start": 70.8, "text": " So currently I'm working with a David Robot, which is so an" }, { "end": 84, "start": 75.68, "text": " fantastic robot with viable stiffness, and working on exciting oscillation with using non-linear" }, { "end": 85, "start": 84, "text": " modes." }, { "end": 88.12, "start": 85, "text": " And does that involve some RL?" }, { "end": 95.4, "start": 88.12, "text": " Not yet, mostly it involves black box optimization and also using a bit of dimension" }, { "end": 97.80000000000001, "start": 95.4, "text": " reduction using auto encoders." }, { "end": 99.04, "start": 97.80000000000001, "text": " What about you, Ashley?" }, { "end": 103.72, "start": 99.04, "text": " I'm working on real-time gain tuning for robotic controllers." }, { "end": 108.64, "start": 103.72, "text": " The main idea is instead of using reinforcement learning completely on the robot, we keep some" }, { "end": 113.68, "start": 108.64, "text": " of the state-of-the-art controllers that exist and tune them using machine learning, so we" }, { "end": 119.64, "start": 113.68, "text": " be a either reinforcement learning or optimization or just neuro-evolution." }, { "end": 125.72, "start": 119.64, "text": " So I originally reach out to you both because I'm a fan of your stable baselines work." }, { "end": 128.4, "start": 125.72, "text": " So I wanted us to get into that." 
}, { "end": 135.52, "start": 128.4, "text": " The primary authors of this fork of OpenAI baselines, last I checked, there's actually 2,700" }, { "end": 141.24, "start": 135.52, "text": " forks of OpenAI baselines, but this one is quite special." }, { "end": 145.48000000000002, "start": 141.24, "text": " It's got a number of great features, awesome documentation." }, { "end": 149.20000000000002, "start": 145.48000000000002, "text": " The code is so clean and consistent and hackable." }, { "end": 156.48000000000002, "start": 149.20000000000002, "text": " You added more algorithms than what they have in the OpenAI version, TensorBoard supports," }, { "end": 158.08, "start": 156.48000000000002, "text": " and I'm sure a bunch of other things." }, { "end": 160.52, "start": 158.08, "text": " So I think it's a fantastic framework." }, { "end": 162.92000000000002, "start": 160.52, "text": " I've used it myself, it's been great to work with." }, { "end": 167.4, "start": 162.92000000000002, "text": " So I just want to thank you both, first of all, for doing this and sharing it openly." }, { "end": 170.24, "start": 167.4, "text": " Thank you for using it then." }, { "end": 175, "start": 170.24, "text": " Ashley, can you tell us from your point of view, how did you come to work on this framework" }, { "end": 176.88000000000002, "start": 175, "text": " and to work with Antone and Onend?" }, { "end": 184.12, "start": 176.88000000000002, "text": " Well, initially I ended up being a side project of the internship that I did at LINSTA." }, { "end": 186.64000000000001, "start": 184.12, "text": " The project was a state representation learning." }, { "end": 190.92, "start": 186.64, "text": " The issues we were having is I had to deal with reinforcement learning part of this project" }, { "end": 192.44, "start": 190.92, "text": " along with Antone." }, { "end": 198.27999999999997, "start": 192.44, "text": " We were using OpenAI baselines at the time and we kept on trying to add hacks and fixes" }, { "end": 204.67999999999998, "start": 198.27999999999997, "text": " to OpenAI because we had, I think, at the end 10 different code breaking issues with" }, { "end": 205.67999999999998, "start": 204.67999999999998, "text": " this code." }, { "end": 211.35999999999999, "start": 205.67999999999998, "text": " And at one point, if I remember correctly, they reintroduced the bug they fixed in" }, { "end": 216.96, "start": 211.36, "text": " DeepQN and they actually ignored the comment above saying, do not delete this line, otherwise" }, { "end": 220.08, "start": 216.96, "text": " saving DeepQN models will break and it broke." }, { "end": 225.24, "start": 220.08, "text": " So at this point, with Antone and we decided, let's fork OpenAI baselines, let's call it" }, { "end": 231.84, "start": 225.24, "text": " stable baselines and actually fix the issues we found because OpenAI were not actually listening" }, { "end": 236.36, "start": 231.84, "text": " to our poll requests and why issues on GitHub." }, { "end": 238.32000000000002, "start": 236.36, "text": " Yeah, that's the main story." }, { "end": 245.32, "start": 238.32, "text": " The main story is we started at first patching the codes in our own repo and at some point" }, { "end": 252.35999999999999, "start": 245.32, "text": " we had too much patches and it keep breaking when we pulled the original repo." }, { "end": 257.28, "start": 252.35999999999999, "text": " So we wanted to have something more stable to work with." 
}, { "end": 263.03999999999996, "start": 257.28, "text": " So that's how it started and then at some point during refightering, actually showed me" }, { "end": 269.44, "start": 263.04, "text": " some nice feature rig, it came out with a nice scikit-lein like syntax and I found it" }, { "end": 273.44, "start": 269.44, "text": " very cool so I did it again more time to that." }, { "end": 278.24, "start": 273.44, "text": " And I think it took us like two months to refactor everything." }, { "end": 284.84000000000003, "start": 278.24, "text": " So first month was about documenting, having a common code style for everything and then" }, { "end": 291.56, "start": 284.84000000000003, "text": " the next month was about refactoring and having a common API for all the models." }, { "end": 296.88, "start": 291.56, "text": " Yeah, it was two months of hard work." }, { "end": 299.4, "start": 296.88, "text": " How did you guys split this work between you?" }, { "end": 305.56, "start": 299.4, "text": " Oh, initially it started off I think as Anton asking me to make a fork and that's why" }, { "end": 311.36, "start": 305.56, "text": " it's under my GitHub page to make a fork and just fix the patches and I think I got" }, { "end": 318.08, "start": 311.36, "text": " a slightly zealous and started to add a lot of fixes, a lot more fixes in initial plans." }, { "end": 324.32, "start": 318.08, "text": " Yeah, the way we started to work is actually was doing all the refactoring and I was more" }, { "end": 330.03999999999996, "start": 324.32, "text": " working on the documentation and when actually was adding new features, I was trying to" }, { "end": 335.76, "start": 330.03999999999996, "text": " break the code and I was coming to him showing that it broke and then he fixed the things" }, { "end": 340.12, "start": 335.76, "text": " again until we had something working." }, { "end": 341.12, "start": 340.12, "text": " That was mostly it." }, { "end": 346.64, "start": 341.12, "text": " Yeah, him working at first on refactoring and me working on documenting and trying to" }, { "end": 348.64, "start": 346.64, "text": " break his code." }, { "end": 354.32, "start": 348.64, "text": " So you guys added more algorithms than the original OpenAI Basel lines code has." }, { "end": 356.59999999999997, "start": 354.32, "text": " I think that's TD3 and SAC." }, { "end": 357.59999999999997, "start": 356.59999999999997, "text": " Yep." }, { "end": 358.59999999999997, "start": 357.59999999999997, "text": " How did that go?" }, { "end": 367.88, "start": 358.59999999999997, "text": " I think the idea was those two algorithms are part of the issues for robotics and looking" }, { "end": 374.24, "start": 367.88, "text": " at what was in the repo, it did not correspond to the latest algorithm that seemed to work" }, { "end": 382.08, "start": 374.24, "text": " best and I think I needed SAC for one of my side projects and I wanted to have it at some" }, { "end": 390.24, "start": 382.08, "text": " point and it seems to be both stable and working algorithm and that people were using it" }, { "end": 397.28000000000003, "start": 390.24, "text": " and it was actually working on read robots so I thought it was a good idea to add it." 
}, { "end": 404.64, "start": 397.28, "text": " And the way it went was mostly looking reading the paper lots of time, then reading the" }, { "end": 412.47999999999996, "start": 404.64, "text": " different implementation, especially the original implementation and then having it proof" }, { "end": 420.4, "start": 412.47999999999996, "text": " tested using the zoo so making it work on all the environment and especially the others" }, { "end": 421.4, "start": 420.4, "text": " one." }, { "end": 426.32, "start": 421.4, "text": " And once it's working on the others one then you're pretty sure that they should work" }, { "end": 433.76, "start": 426.32, "text": " and I think in the future we'd like to also add more but usually I would prefer to wait" }, { "end": 440.64, "start": 433.76, "text": " and not implement all the new algorithms but wait and see what's staying there and SAC" }, { "end": 446.24, "start": 440.64, "text": " and TD3 were the two algorithms that were actually used by the community and apparently" }, { "end": 447.24, "start": 446.24, "text": " actually working." }, { "end": 451.44, "start": 447.24, "text": " Have you guys chatted with OpenAI much about this fork?" }, { "end": 454.28, "start": 451.44, "text": " Yeah, I think we made a pull request." }, { "end": 460.35999999999996, "start": 454.28, "text": " I believe it was a month after we got to this very nice refactor desk I learned user friendly" }, { "end": 465.67999999999995, "start": 460.35999999999996, "text": " interface that we were targeting and OpenAI took a while to answer us and after well said" }, { "end": 471.03999999999996, "start": 465.67999999999995, "text": " they were interested to integrating it into their main code base and in exchange we said" }, { "end": 475.76, "start": 471.03999999999996, "text": " okay but we would like you to be a bit more open about your issues and your roadmap and" }, { "end": 479.59999999999997, "start": 475.76, "text": " you know take proper into consideration into proper consideration all the problems that" }, { "end": 486.08000000000004, "start": 479.6, "text": " a lot of users except besides us sorry that had and their answer after a few months I think" }, { "end": 490.8, "start": 486.08000000000004, "text": " was we would like to integrate these changes however they're no longer compatible with" }, { "end": 495.56, "start": 490.8, "text": " our roadmap so I think they declined our pull request." }, { "end": 500.28000000000003, "start": 495.56, "text": " It's a bit more complicated they I think they said at some point we will come back to you" }, { "end": 503.6, "start": 500.28000000000003, "text": " and they never came back I think the last message." }, { "end": 509.16, "start": 503.6, "text": " See the pull request is a number 481 I just found it again." }, { "end": 516.28, "start": 509.16, "text": " Yeah, we discussed with them but at the end there was no input from them anymore so I think" }, { "end": 523.88, "start": 516.28, "text": " it's not possible anyway we kind of diverged but at first it was a bit weird because we" }, { "end": 530.32, "start": 523.88, "text": " did the pull request and we had to wait two months before adding a comment." }, { "end": 533.6, "start": 530.32, "text": " I think you have at least some fans inside OpenAI." }, { "end": 538.0400000000001, "start": 533.6, "text": " I was at last in Europe in Montreal and I heard in OpenAI researchers say some nice things" }, { "end": 541.04, "start": 538.04, "text": " about stable baselines." 
}, { "end": 547, "start": 541.04, "text": " It was just banter and so maybe take it with some salt but in shouting afterwards they" }, { "end": 550, "start": 547, "text": " said stable baselines was the better baselines." }, { "end": 551, "start": 550, "text": " Oh wow." }, { "end": 555, "start": 551, "text": " So I think you have some fans in there." }, { "end": 556, "start": 555, "text": " Good to know." }, { "end": 558.24, "start": 556, "text": " Good to know then." }, { "end": 561.88, "start": 558.24, "text": " What do you see as the future of stable baselines?" }, { "end": 568.08, "start": 561.88, "text": " I personally see it as being not the best day open, not the best reinforcement learning" }, { "end": 573.76, "start": 568.08, "text": " library but being a like desk colour and being a very user friendly, very open library" }, { "end": 578.4, "start": 573.76, "text": " where people can just come up and plug their stuff in and get it open running properly." }, { "end": 583.72, "start": 578.4, "text": " After what we want to be done is we would like it to go towards TensorFlow 2.0." }, { "end": 588.96, "start": 583.72, "text": " I want to add some more genetic algorithms, those kind of things and I would also like" }, { "end": 590.96, "start": 588.96, "text": " to add better parallelization." }, { "end": 596.36, "start": 590.96, "text": " Okay, and when you say better parallelization, do you mean in terms of having multiple environments" }, { "end": 600.48, "start": 596.36, "text": " running or multiple GPUs, what are we talking about here?" }, { "end": 607.2800000000001, "start": 600.48, "text": " It's an issue that crops up very often in Python programming." }, { "end": 612.4000000000001, "start": 607.2800000000001, "text": " Python doesn't handle parallelization very well at all because of the JILs." }, { "end": 617, "start": 612.4000000000001, "text": " So the idea would be to maybe add threading or try and optimise the parallelization by" }, { "end": 621.44, "start": 617, "text": " batching it CPU parallelization mostly, because that seems to be the bottleneck at the moment" }, { "end": 622.84, "start": 621.44, "text": " on the library." }, { "end": 626.64, "start": 622.84, "text": " And again, that is that the CPU running environments?" }, { "end": 627.64, "start": 626.64, "text": " Yes, sorry." }, { "end": 636.24, "start": 627.64, "text": " Yeah, the next big step would be TensorFlow 2 and I'm still wondering about creating PyTorch" }, { "end": 643.28, "start": 636.24, "text": " version, so having the same API but with PyTorch as a backend because since the beginning" }, { "end": 650.28, "start": 643.28, "text": " using TensorFlow has always been a bit of problem and we use TensorFlow because of legacy reason," }, { "end": 652.72, "start": 650.28, "text": " not because of our choice, there's no choice." }, { "end": 659.68, "start": 652.72, "text": " So there's one point and the other point is because we didn't start from scratch then" }, { "end": 665.4399999999999, "start": 659.68, "text": " there are some legacy things that are still there, but are to change without breaking the" }, { "end": 666.4399999999999, "start": 665.4399999999999, "text": " code." }, { "end": 672.12, "start": 666.4399999999999, "text": " I may try at some point also to have PyTorch version but keeping the same API and things." 
}, { "end": 677.72, "start": 672.12, "text": " Yeah, and in the future we already have kind of a roll map and feature we would like to" }, { "end": 680.2, "start": 677.72, "text": " add and people can contribute to that." }, { "end": 685.04, "start": 680.2, "text": " They can directly look at the roll map on the repo." }, { "end": 693.32, "start": 685.04, "text": " And last thing I mean, there's lots of reinforcement learning library up there and I see us really" }, { "end": 700.04, "start": 693.32, "text": " like something that is easy to use but that's not really meant for changing the algorithm." }, { "end": 706.5999999999999, "start": 700.04, "text": " We provide self-contained implementation that are not so modular, that's our compared" }, { "end": 711, "start": 706.5999999999999, "text": " to other library but you can directly look at the code and understand how the algorithm" }, { "end": 715.8399999999999, "start": 711, "text": " works and we provide also documentation how to use it." }, { "end": 719.12, "start": 715.8399999999999, "text": " That's where we are I think compared to others." }, { "end": 727.4399999999999, "start": 719.12, "text": " Future, the big next step is 1032 and I think some nice feature like there's lots of" }, { "end": 733.12, "start": 727.44, "text": " things about evaluating reinforcement learning algorithm that I would like also to add" }, { "end": 738.6800000000001, "start": 733.12, "text": " to the library so I've still to make research more or more of the principles if you have" }, { "end": 744.6400000000001, "start": 738.6800000000001, "text": " a standout way of evaluating and algorithm if you provide a function to do that then" }, { "end": 746.8800000000001, "start": 744.6400000000001, "text": " I could be a good idea for me or so." }, { "end": 752.24, "start": 746.8800000000001, "text": " In terms of this library I loved how clear everything was and I could actually look back" }, { "end": 758.36, "start": 752.24, "text": " all the way from the line that calls the algorithm down into all the implementation details" }, { "end": 762.4, "start": 758.36, "text": " so we could understand if we wanted to change anything where the change needs would be made." }, { "end": 768.6800000000001, "start": 762.4, "text": " By comparison looking at RLlib the abstractions were so deep that it was I found it difficult" }, { "end": 770, "start": 768.6800000000001, "text": " to work with." }, { "end": 772.2, "start": 770, "text": " So there's definitely got to be pros and cons." }, { "end": 777.52, "start": 772.2, "text": " Do you guys want to comment on other libraries, other libraries you like in this space?" }, { "end": 785, "start": 777.52, "text": " Actually I really like RLlib coach, the code of RLlib coach is really good if you want" }, { "end": 791.52, "start": 785, "text": " to just learn about the algorithm on how they are implemented because the code is" }, { "end": 798.88, "start": 791.52, "text": " way written and documented so I really like RLlib coach for that." }, { "end": 801.52, "start": 798.88, "text": " There are another recent one I've seen." 
}, { "end": 808.52, "start": 801.52, "text": " I didn't have time to try it but that is called catalyst but catalyst seems like much" }, { "end": 816.84, "start": 808.52, "text": " doing also deep learning, not only reinforcement learning but which is a lot much more modular" }, { "end": 823.6, "start": 816.84, "text": " so you can try new to combine improvement from different algorithm but then that make" }, { "end": 829.16, "start": 823.6, "text": " it's harder if you want to know how an algorithm works because you have to search into all" }, { "end": 836.24, "start": 829.16, "text": " the different models but that may be good for trying to combine improvement from different" }, { "end": 837.24, "start": 836.24, "text": " algorithm." }, { "end": 839.28, "start": 837.24, "text": " Tense of flow agents?" }, { "end": 841.28, "start": 839.28, "text": " Yeah, it's relative." }, { "end": 847.12, "start": 841.28, "text": " I haven't had time to go too deep into it but it looks relatively easy to use and pretty" }, { "end": 849.12, "start": 847.12, "text": " modular on its interface." }, { "end": 853.72, "start": 849.12, "text": " It doesn't really need a gym to interface with environments." }, { "end": 859.12, "start": 853.72, "text": " Yeah, the only thing up to now that I didn't find was the good documentation as we have" }, { "end": 860.12, "start": 859.12, "text": " in step-up design." }, { "end": 861.12, "start": 860.12, "text": " I've tried." }, { "end": 867.68, "start": 861.12, "text": " They have mostly tutorials and I've tried to not book on how to use things but I couldn't" }, { "end": 874.12, "start": 867.68, "text": " find a good documentation as I would have expected for now so I hope that we will also improve" }, { "end": 875.12, "start": 874.12, "text": " on that." }, { "end": 877.4, "start": 875.12, "text": " It seems like early days for TIF agents." }, { "end": 882.4, "start": 877.4, "text": " You both co-authored a paper titled decoupling feature extraction from policy learning, assessing" }, { "end": 887.4, "start": 882.4, "text": " benefits of state representation learning in Google based robotics and that was in the" }, { "end": 888.4, "start": 887.4, "text": " beginning of this year." }, { "end": 891.12, "start": 888.4, "text": " Could you tell us the main idea with this paper?" }, { "end": 892.12, "start": 891.12, "text": " Yeah, okay." }, { "end": 899.4, "start": 892.12, "text": " So this paper is kind of the summary of one year of research about state representation" }, { "end": 900.4, "start": 899.4, "text": " learning." }, { "end": 905.9599999999999, "start": 900.4, "text": " So state representation learning is about how do you extract relevant information, how do" }, { "end": 912.4399999999999, "start": 905.9599999999999, "text": " you extract compact representation from your input data and for instance from an image" }, { "end": 919.2800000000001, "start": 912.44, "text": " if you have an image from a robot on the x-flight redevant information from the image." }, { "end": 925.2, "start": 919.2800000000001, "text": " And the idea is also if you do that then you don't need to re-learn the feature extractor" }, { "end": 926.2, "start": 925.2, "text": " x-time." }, { "end": 927.4000000000001, "start": 926.2, "text": " You want to apply reinforcement learning." 
}, { "end": 934.36, "start": 927.4000000000001, "text": " You can reuse the same representation and the other thing is it's compact so you can" }, { "end": 941.8800000000001, "start": 934.36, "text": " use a much different algorithm like evolution strategy and it's also usually interpretable" }, { "end": 948.4399999999999, "start": 941.88, "text": " so you can directly understand why your feature extraction is working or not or what is your" }, { "end": 950.6, "start": 948.4399999999999, "text": " agent having as input." }, { "end": 955.28, "start": 950.6, "text": " So if I understand this correctly, are the representations learned completely separately" }, { "end": 959.52, "start": 955.28, "text": " from the RL and are they learned in advance or at the same time?" }, { "end": 964.8, "start": 959.52, "text": " They are learned in advance from the environment where we sample pictures from the environment," }, { "end": 969.04, "start": 964.8, "text": " get us some sufficient data set and then learn from that and in an unsupervised manner." }, { "end": 974.24, "start": 969.04, "text": " So this paper looks at specific types of representation learning including what are" }, { "end": 976.64, "start": 974.24, "text": " called robotic priors." }, { "end": 979.48, "start": 976.64, "text": " Can you help us understand what are robotic priors?" }, { "end": 981.52, "start": 979.48, "text": " It almost seems like that phrase could be capitalized." }, { "end": 988.5999999999999, "start": 981.52, "text": " Okay so the idea of robotic priors was to have so in order to learn the representation" }, { "end": 994.92, "start": 988.5999999999999, "text": " you can have some prior knowledge on how good representation should look like and this" }, { "end": 1005.3199999999999, "start": 994.92, "text": " was a work from Rico Tchenkovsky from RBO Lab in Berlin and also a professor broke." }, { "end": 1012.56, "start": 1005.3199999999999, "text": " And the idea is you have some knowledge about how the word works and how good representation" }, { "end": 1019.28, "start": 1012.56, "text": " should look like for a robot and you encode that as losses and that way you can directly" }, { "end": 1022.28, "start": 1019.28, "text": " learn from the data." }, { "end": 1028.32, "start": 1022.28, "text": " So for instance you have a prior that is telling you that it's a prior of smoothness that" }, { "end": 1034.44, "start": 1028.32, "text": " is telling you that things doesn't teleport so from one instant to the next things should" }, { "end": 1036.36, "start": 1034.44, "text": " change only smoothly." }, { "end": 1038.72, "start": 1036.36, "text": " That's one example of a prior." }, { "end": 1045.28, "start": 1038.72, "text": " And our work was also mainly focused on applying reinforcement learning for robotics so our" }, { "end": 1048.56, "start": 1045.28, "text": " methods were always robotics in mind." }, { "end": 1053.52, "start": 1048.56, "text": " So do these types of priors usually or always apply in the real world?" }, { "end": 1058.08, "start": 1053.52, "text": " Like are there cases where these priors don't fully apply?" }, { "end": 1065, "start": 1058.08, "text": " Yeah, those priors are quite restrictive so there's one simple case where they are violated." }, { "end": 1066.48, "start": 1065, "text": " This one is for collision." }, { "end": 1069.9199999999998, "start": 1066.48, "text": " I think one of the priors is violated by that." 
}, { "end": 1078.44, "start": 1069.9199999999998, "text": " And this was interesting added for extracting information from the input but in practice" }, { "end": 1085.64, "start": 1078.44, "text": " you can have a much simpler, a lot simpler objective than all those different priors for" }, { "end": 1087.2, "start": 1085.64, "text": " learning or presentation." }, { "end": 1092.1200000000001, "start": 1087.2, "text": " So you touched on this a bit already but what kinds of things do you consider when you're" }, { "end": 1097.76, "start": 1092.1200000000001, "text": " choosing between different methods of state representation learning and how do you select" }, { "end": 1099.24, "start": 1097.76, "text": " the best one?" }, { "end": 1105.1200000000001, "start": 1099.24, "text": " I think mainly at main concern was reinforcement learning performance and being able to" }, { "end": 1109.12, "start": 1105.12, "text": " learn a good policy from that representation that was our first measurement." }, { "end": 1112.9599999999998, "start": 1109.12, "text": " And then we developed, I believe, a few of the measurements in order to verify the correlation" }, { "end": 1115.1599999999999, "start": 1112.9599999999998, "text": " for example with the ground truth." }, { "end": 1118.9599999999998, "start": 1115.1599999999999, "text": " So what was our real robot position and we wanted to see how that correlated to our" }, { "end": 1121.3999999999999, "start": 1118.9599999999998, "text": " learned state and things like that." }, { "end": 1128.6399999999999, "start": 1121.3999999999999, "text": " And well if you want to apply a set of presentation to learning to a new task it depends on what" }, { "end": 1130.6799999999998, "start": 1128.6399999999999, "text": " do you want to learn?" }, { "end": 1136.48, "start": 1130.68, "text": " For instance, and onto what data do you have access, if you have access to only two images" }, { "end": 1144.48, "start": 1136.48, "text": " then using a auto encoder is pretty straightforward and usually work fine unless your object are" }, { "end": 1149.16, "start": 1144.48, "text": " not sedent in the image, for instance." }, { "end": 1153.96, "start": 1149.16, "text": " And if you have access to more data, for instance if you want to learn what is control label" }, { "end": 1161.28, "start": 1153.96, "text": " then usually you should use an invest dynamic class where you will extract what you can" }, { "end": 1162.28, "start": 1161.28, "text": " control." }, { "end": 1169.24, "start": 1162.28, "text": " Yeah, it usually depends on what type of data do you have access to and how does the data" }, { "end": 1170.72, "start": 1169.24, "text": " look like also?" }, { "end": 1175.92, "start": 1170.72, "text": " So are we, is the goal here to ensure that state representation learning learns enough to" }, { "end": 1181.96, "start": 1175.92, "text": " enable the policies and goals that we have right now or are we, are we trying to come" }, { "end": 1186.92, "start": 1181.96, "text": " up with representation learnings that are sufficient to enable all the future policies and" }, { "end": 1188.52, "start": 1186.92, "text": " goals that we may ever have?" }, { "end": 1190.56, "start": 1188.52, "text": " Is that a different question or is that the same thing?" 
}, { "end": 1197.8400000000001, "start": 1190.56, "text": " I think that's related because you can have a set of presentation that are dedicated to" }, { "end": 1203.2, "start": 1197.8400000000001, "text": " only one task, from this if you have the reward information and you have a set of presentation" }, { "end": 1209.24, "start": 1203.2, "text": " that are only related to that are driven for several tasks in the same environment." }, { "end": 1215.28, "start": 1209.24, "text": " So Anson, do you use state representation learning in your work and can you talk a little" }, { "end": 1217.84, "start": 1215.28, "text": " bit about how you use it in your work?" }, { "end": 1223.88, "start": 1217.84, "text": " It's not directly state representation learning, it's mostly diamond and reduction." }, { "end": 1231.72, "start": 1223.88, "text": " So it's a way to, so in my work for non-linear normal modes which are extension of linear" }, { "end": 1232.72, "start": 1231.72, "text": " modes." }, { "end": 1239.16, "start": 1232.72, "text": " There's a characterization that say that the system can be described with only one variable" }, { "end": 1246.0400000000002, "start": 1239.16, "text": " and the way to find that mode is then to train a node-on-quarter that we find the transformation" }, { "end": 1250.88, "start": 1246.0400000000002, "text": " to the mode manifold and this is how it is done." }, { "end": 1256.16, "start": 1250.88, "text": " So this can be seen as state representation learning but I see that more as a dimension" }, { "end": 1258.52, "start": 1256.16, "text": " reduction that case." }, { "end": 1262.5600000000002, "start": 1258.52, "text": " But I've been using state representation learning more for a side project." }, { "end": 1268.44, "start": 1262.5600000000002, "text": " For instance with a robotic car we are trying to learn useful feature to apply in the" }, { "end": 1269.44, "start": 1268.44, "text": " free world." }, { "end": 1272.52, "start": 1269.44, "text": " How did that work out?" }, { "end": 1277.88, "start": 1272.52, "text": " For now it's just simple auto-on-quarter because I don't have much data but I would like" }, { "end": 1283.48, "start": 1277.88, "text": " to try other type of features afterwards." }, { "end": 1284.6000000000001, "start": 1283.48, "text": " Actually do you want to comment on this?" }, { "end": 1288.3600000000001, "start": 1284.6000000000001, "text": " Like do you use SRL in the work that you do?" }, { "end": 1293.1200000000001, "start": 1288.3600000000001, "text": " I don't use it in my work unfortunately because my dimension space is relatively small." }, { "end": 1297.2, "start": 1293.1200000000001, "text": " I know I already have access to the ground truth information in my task but it's something" }, { "end": 1301.32, "start": 1297.2, "text": " I would like to use in the future because it tends to speed up in what we tried during" }, { "end": 1302.32, "start": 1301.32, "text": " the paper." }, { "end": 1307.04, "start": 1302.32, "text": " The paper tended to increase performance drastically on the training time." 
}, { "end": 1311.16, "start": 1307.04, "text": " So I noticed at least two papers on state representation learning were accepted for" }, { "end": 1315.0800000000002, "start": 1311.16, "text": " new reps this year and maybe there's many more but I just noticed these two dynamics" }, { "end": 1320.3600000000001, "start": 1315.0800000000002, "text": " of wear and beddings by Whitney and Edal and unsupervised state representation learning" }, { "end": 1323.48, "start": 1320.3600000000001, "text": " on Atari by a non Edal." }, { "end": 1331.52, "start": 1323.48, "text": " I wonder do you guys have any thoughts about the future of SRL and where it's going?" }, { "end": 1339.28, "start": 1331.52, "text": " I see it more as it being an extra tool in the tool belt of reinforcement learning and" }, { "end": 1344.96, "start": 1339.28, "text": " developers because it tends to be to our perform and to end learning." }, { "end": 1350.28, "start": 1344.96, "text": " So when you have a policy with a convolutional neural network it is very hard to find a good" }, { "end": 1355.08, "start": 1350.28, "text": " policy quickly and using that but if you have a state representation learn a learnt state" }, { "end": 1360.6399999999999, "start": 1355.08, "text": " representation then you can quickly infer many policies for that task quickly much faster" }, { "end": 1364.52, "start": 1360.6399999999999, "text": " than you would just by learning it through the convolutional neural network." }, { "end": 1366.28, "start": 1364.52, "text": " So I see it more as an extra tool." }, { "end": 1371.96, "start": 1366.28, "text": " I found it really relevant for everything that is robotics because you cannot simulate" }, { "end": 1376.24, "start": 1371.96, "text": " like you do for Atari or any games or simulation." }, { "end": 1381.64, "start": 1376.24, "text": " You usually have only one robot and here you need to be simple efficient and by decoupling" }, { "end": 1388.96, "start": 1381.64, "text": " that feature extraction from policy learning you usually speed up learning even though you" }, { "end": 1395.48, "start": 1388.96, "text": " will have a bit if you don't extract all the relevant information you will have a small" }, { "end": 1400.28, "start": 1395.48, "text": " drop in performance but still usually in robotics you want to have a good enough policy you" }, { "end": 1404.68, "start": 1400.28, "text": " don't want the best one you want to the one that you can find quickly that works that" }, { "end": 1406.28, "start": 1404.68, "text": " was good enough for your task." }, { "end": 1410.8400000000001, "start": 1406.28, "text": " So I still find it really relevant for especially robotics." }, { "end": 1415.5600000000002, "start": 1410.8400000000001, "text": " You both authored the robotics RL-SRL repo on GitHub." }, { "end": 1418.44, "start": 1415.5600000000002, "text": " Can you tell us a bit about that repo?" }, { "end": 1424.88, "start": 1418.44, "text": " The SRL toolbox is I think it was mainly designed so we could benchmark loads of different" }, { "end": 1429.76, "start": 1424.88, "text": " SRL tools and we coupled it with stable baselines so we had access to all these reinforcement" }, { "end": 1435.6, "start": 1429.76, "text": " learning tools we could hot swap all the RL algorithms and hot swap all the state representation" }, { "end": 1439.48, "start": 1435.6, "text": " learning algorithms and try and match and match and match and see what worked well." 
}, { "end": 1443.8799999999999, "start": 1439.48, "text": " That was the main goal of that repository so that other users could also maybe try and" }, { "end": 1447.28, "start": 1443.8799999999999, "text": " see what could work with state representation learning." }, { "end": 1453.24, "start": 1447.28, "text": " Yes, the main idea was at first we started with only state representation learning how" }, { "end": 1459.96, "start": 1453.24, "text": " to combine them or to evaluate them we added tools to debug visualize access quality of" }, { "end": 1465.44, "start": 1459.96, "text": " the different representation and at some point we needed to evaluate in the error setting" }, { "end": 1471.52, "start": 1465.44, "text": " so we started adding different baseline and that was the ancestor of stable baseline" }, { "end": 1479.04, "start": 1471.52, "text": " because we started doing the unification in that repo and then we put it everything" }, { "end": 1484.76, "start": 1479.04, "text": " that we fixed and that we have done here in that repo to stable baseline in fact at the" }, { "end": 1485.76, "start": 1484.76, "text": " end." }, { "end": 1500.56, "start": 1485.76, "text": " So, the idea is to provide tools to try new things to compare state SRL methods and I've" }, { "end": 1507, "start": 1500.56, "text": " been using those tools on all the projects for visualization from exploring the latent" }, { "end": 1513.44, "start": 1507, "text": " space of the tone code of learn representation we had some tools for that and we added also" }, { "end": 1518.84, "start": 1513.44, "text": " some metrics for assessing the quality knowing the ground rules." }, { "end": 1524.44, "start": 1518.84, "text": " This is mainly what this is repo is about and it was there also one almost one year of" }, { "end": 1531.48, "start": 1524.44, "text": " work with also Rodaie and Natalia." }, { "end": 1534.12, "start": 1531.48, "text": " What do you find interesting these days in RL space?" }, { "end": 1541, "start": 1534.12, "text": " I've been keeping up with RL in fact and going a bit more into the model based RL because" }, { "end": 1550.8799999999999, "start": 1541, "text": " my goal is to bridge the gap between what's RL working in simulation and RL working in" }, { "end": 1555.9599999999998, "start": 1550.8799999999999, "text": " the real world so having sample deficient reinforcement learning and also having actually" }, { "end": 1562.52, "start": 1555.9599999999998, "text": " working algorithm that only works on particular environment in simulation." 
}, { "end": 1570, "start": 1562.52, "text": " One of the trends a bit afraid of is people using it only on games or on simulation and" }, { "end": 1577.52, "start": 1570, "text": " I prefer always when you have experiment with re-robots from the soft acto critic paper" }, { "end": 1583.16, "start": 1577.52, "text": " they have this soft acto critic and application where they show that it actually works on different" }, { "end": 1592.4, "start": 1583.16, "text": " robots in different settings and this was quite interesting and on the other hand I'm" }, { "end": 1598, "start": 1592.4, "text": " trying to form more in the direction of adding prior knowledge to the method so you have" }, { "end": 1602.5600000000002, "start": 1598, "text": " some knowledge about your task, you have some knowledge about how things work, how you" }, { "end": 1609.1200000000001, "start": 1602.5600000000002, "text": " should do solve the task and recently there was a paper so tossing bot where they are" }, { "end": 1616.52, "start": 1609.1200000000001, "text": " coded almost everything and only learn what cannot be modeled and it actually works directly" }, { "end": 1624.12, "start": 1616.52, "text": " in the way that I would think things would be better that way so not learning everything" }, { "end": 1630.6399999999999, "start": 1624.12, "text": " from scratch because it just doesn't make sense to start from scratch always especially" }, { "end": 1633.92, "start": 1630.6399999999999, "text": " if you have some knowledge that you can incorporate." }, { "end": 1641.8799999999999, "start": 1633.92, "text": " I noticed that most frameworks really focus on the model free RL and yet model based RL" }, { "end": 1647.72, "start": 1641.88, "text": " seems to be the key for sample efficiency I wonder when we're going to see more flexible" }, { "end": 1649.8400000000001, "start": 1647.72, "text": " toolkits for model based RL?" }, { "end": 1658.1200000000001, "start": 1649.8400000000001, "text": " Yes, I'm also waiting for that and I may also work on that if there's nothing but the" }, { "end": 1664.3600000000001, "start": 1658.1200000000001, "text": " main problem with model based is apparently it only works for now in very simple settings" }, { "end": 1672.8799999999999, "start": 1664.36, "text": " and there are other two implement, there are slower to train slower to make it work so I" }, { "end": 1678.84, "start": 1672.8799999999999, "text": " think that's why now it's not really used or there's not so much implementation available" }, { "end": 1686.4399999999998, "start": 1678.84, "text": " or I would expect more on that so I think recently there was a big benchmark on the different" }, { "end": 1692.52, "start": 1686.4399999999998, "text": " model based method and there are still far from model free methods when you look at the" }, { "end": 1700.08, "start": 1692.52, "text": " asymptotic performance and I think the main problem is when it starts acting the model" }, { "end": 1704.32, "start": 1700.08, "text": " so when it starts using the overestimation of the model." 
}, { "end": 1710.44, "start": 1704.32, "text": " I just said that a bit with PhD student at TU Darmstadt and that was quite interesting" }, { "end": 1716.4, "start": 1710.44, "text": " that the thing is model free, there are much more model free because there are easier" }, { "end": 1722.4, "start": 1716.4, "text": " to implement and they just work for now so I think that's why you don't have the" }, { "end": 1728, "start": 1722.4, "text": " stable baseline for model based error yet but I would like that to happen also." }, { "end": 1732.16, "start": 1728, "text": " So actually you had some comments against RL, do you want to share those with us?" }, { "end": 1737.0400000000002, "start": 1732.16, "text": " Yes, there's a lot of situations I've seen in recent years where people are telling" }, { "end": 1743, "start": 1737.0400000000002, "text": " to want to run reinforcement learning in end to end settings, removing any construction" }, { "end": 1745.6000000000001, "start": 1743, "text": " from the control loop for example." }, { "end": 1750.2, "start": 1745.6000000000001, "text": " I've seen situations where people will take the input of a camera and expect to directly" }, { "end": 1758.28, "start": 1750.2, "text": " control a car or directly and to bypass 100 plus years of the carmen filters, of advanced" }, { "end": 1762.96, "start": 1758.28, "text": " controllers and all these things and proof of convergence and so on and I think that's" }, { "end": 1765.04, "start": 1762.96, "text": " not the right solution." }, { "end": 1771.96, "start": 1765.04, "text": " I think reinforcement learning has a very important role, specifically in playing and task" }, { "end": 1773.52, "start": 1771.96, "text": " management and so on and so on and so on." }, { "end": 1778.56, "start": 1773.52, "text": " But when it comes to low level control and future extractions these are things that come" }, { "end": 1783.6399999999999, "start": 1778.56, "text": " hand in hand with no networks I tend to find and they're not things that DPRL should" }, { "end": 1785.36, "start": 1783.6399999999999, "text": " be concerning too much about." }, { "end": 1790.52, "start": 1785.36, "text": " I think it's strength lays more in planning and in those kind of things." }, { "end": 1796, "start": 1790.52, "text": " So it seems a little bit naive to just throw something really hard and say, oh DPRL will" }, { "end": 1797.32, "start": 1796, "text": " just figure that out." }, { "end": 1800.08, "start": 1797.32, "text": " Why do you think that people are trying to do that?" }, { "end": 1808.12, "start": 1800.08, "text": " Is it just ignorance of other methods or some kind of magical beliefs about the capabilities" }, { "end": 1809.12, "start": 1808.12, "text": " of these systems?" }, { "end": 1813.1999999999998, "start": 1809.12, "text": " No, I'm actually glad that in some respects that they're doing this because they're testing" }, { "end": 1816.7199999999998, "start": 1813.1999999999998, "text": " in some ways the limitations of a deeper reinforcement learning." }, { "end": 1821, "start": 1816.7199999999998, "text": " They're testing the boundaries before actually seeing where this can be applied properly by" }, { "end": 1825.6799999999998, "start": 1821, "text": " checking exactly what you can remove and see if the systems don't work in some sense." }, { "end": 1829.7199999999998, "start": 1825.6799999999998, "text": " They're really seeing how far can I push this method until it breaks." 
}, { "end": 1833.32, "start": 1829.7199999999998, "text": " So I'm glad they're doing it in some respects but I don't appreciate the idea of throwing" }, { "end": 1837.4799999999998, "start": 1833.32, "text": " massive amounts of computing power to a problem with the DPRL and say, oh, it's fine," }, { "end": 1838.48, "start": 1837.48, "text": " it works." }, { "end": 1844.56, "start": 1838.48, "text": " Yeah, and usually this successful application like the deep mimic paper you still have a" }, { "end": 1850.64, "start": 1844.56, "text": " low level PD controller that is also the work of controlling the position." }, { "end": 1855.32, "start": 1850.64, "text": " So usually having a combination of the two is needed." }, { "end": 1856.32, "start": 1855.32, "text": " Yeah." }, { "end": 1863.52, "start": 1856.32, "text": " A trend lately where at least some organizations are going for really big RL compute projects" }, { "end": 1867.44, "start": 1863.52, "text": " and then that means that many other organizations cannot reproduce their results." }, { "end": 1870.44, "start": 1867.44, "text": " Or compete in that area." }, { "end": 1877.0800000000002, "start": 1870.44, "text": " Do you think that's something we need to be concerned about or is that just, is that a" }, { "end": 1878.0800000000002, "start": 1877.0800000000002, "text": " temporary thing?" }, { "end": 1879.0800000000002, "start": 1878.0800000000002, "text": " Is that a long trend?" }, { "end": 1884.92, "start": 1879.0800000000002, "text": " I don't like personally the, I think it was, I can't remember the name." }, { "end": 1889.48, "start": 1884.92, "text": " It was the latest language generation model that OpenAI released." }, { "end": 1894.72, "start": 1889.48, "text": " GPC really came out and they remember them saying that they poured tons of hours of" }, { "end": 1899.4, "start": 1894.72, "text": " Reddit conversations down into this algorithm with tons of computing machines and they" }, { "end": 1904, "start": 1899.4, "text": " are hesitant to release it because of the damage it could do because no one actually had" }, { "end": 1909.8, "start": 1904, "text": " the computing power to be able to detect the system because it's such a, they've just" }, { "end": 1912.84, "start": 1909.8, "text": " threw so much power at it that you couldn't do anything against it." }, { "end": 1918.6000000000001, "start": 1912.84, "text": " And I found that it's not only unfair, I think, towards the developers, but it has to" }, { "end": 1919.6000000000001, "start": 1918.6000000000001, "text": " be done." }, { "end": 1922.24, "start": 1919.6000000000001, "text": " So I'm glad it's being done personally." }, { "end": 1929.16, "start": 1922.24, "text": " I mean, on the other hand, computes costs are falling quickly." }, { "end": 1933.6, "start": 1929.16, "text": " And so I recall the behavior suites from DeepMind." }, { "end": 1938.96, "start": 1933.6, "text": " I think they wrote somewhere in the paper that these results could be replicated with" }, { "end": 1942.56, "start": 1938.96, "text": " a very small number of dollars on cloud." }, { "end": 1947.68, "start": 1942.56, "text": " I think it was something like on the order of two digits of dollars on the cloud." }, { "end": 1953.8400000000001, "start": 1947.68, "text": " So maybe the simple trend is these large organizations will have access to big, just a little" }, { "end": 1955.24, "start": 1953.8400000000001, "text": " bit sooner than we do." 
}, { "end": 1956.24, "start": 1955.24, "text": " Maybe it's as simple as that." }, { "end": 1957.24, "start": 1956.24, "text": " Hopefully it's that." }, { "end": 1964.88, "start": 1957.24, "text": " What I would like to see in papers, personally, would be the jewel cost of actually training" }, { "end": 1965.88, "start": 1964.88, "text": " it." }, { "end": 1970.28, "start": 1965.88, "text": " I would honestly enjoy to see how many jewels it took to actually train this model just" }, { "end": 1974.44, "start": 1970.28, "text": " from an energy standpoint because I'm not sure if we all start doing this if we have" }, { "end": 1975.68, "start": 1974.44, "text": " enough power." }, { "end": 1980, "start": 1975.68, "text": " Yeah, jewels and also carbon emissions, right?" }, { "end": 1981.6000000000001, "start": 1980, "text": " Yes, yeah, exactly." }, { "end": 1987.4, "start": 1981.6000000000001, "text": " Like I've always wondered how much carbon alpha zero or alpha go took to train or doda doda" }, { "end": 1988.4, "start": 1987.4, "text": " two." }, { "end": 1990.4, "start": 1988.4, "text": " Bit in the ass." }, { "end": 1996.4, "start": 1990.4, "text": " Well, yeah, and we can shuffle the accounting by trying to go for green power, but the" }, { "end": 2001.44, "start": 1996.4, "text": " end of the day we all need to share the power that we have." }, { "end": 2006.16, "start": 2001.44, "text": " So it's kind of more of an accounting trick to use, you know, to offsets and all that." }, { "end": 2010.8, "start": 2006.16, "text": " Yeah, I would just like to see some acknowledgement of what it took, not necessarily any repercussions" }, { "end": 2016.3200000000002, "start": 2010.8, "text": " or anything, just how much energy did it genuinely take you to do this, even for a scale point" }, { "end": 2019.44, "start": 2016.3200000000002, "text": " of view for it's important to know that." }, { "end": 2024.16, "start": 2019.44, "text": " Antton and Ashley, I so appreciate you being here with me today." }, { "end": 2028.56, "start": 2024.16, "text": " I look forward to chatting with you for many months." }, { "end": 2031.8, "start": 2028.56, "text": " And it's a small dream come true for me." }, { "end": 2034.8, "start": 2031.8, "text": " Thanks so much for sharing your work openly." }, { "end": 2039.72, "start": 2034.8, "text": " I love stable baselines and I read your paper with fascination." }, { "end": 2042.3999999999999, "start": 2039.72, "text": " Thanks so much for sharing your time and your insight with us all today." }, { "end": 2043.3999999999999, "start": 2042.3999999999999, "text": " Thank you." }, { "end": 2044.3999999999999, "start": 2043.3999999999999, "text": " Thank you for having us." }, { "end": 2045.3999999999999, "start": 2044.3999999999999, "text": " It's been a pleasure." }, { "end": 2059.2400000000002, "start": 2045.4, "text": " That's our episode for today folks." }, { "end": 2084.72, "start": 2059.24, "text": " Be sure to check talkrl.com for more great episodes." } ]
Michael Littman
ACM Fellow Professor Michael L Littman enlightens us on Human feedback in RL, his Udacity courses, Theory of Mind, organizing the RLDM Conference, RL past and present,...
https://media.transistor…ec8.mp3?src=site
This is TalkRL Podcast. All reinforcement learning, all the time. Interviews with brilliant folks across the world of RL. I'm your host, Robin Chohan. Michael Littman is a professor of computer science at Brown University. He's not your average CS prof, nor is he your average RL researcher. Professor Littman was elected ACM Fellow in 2018 for contributions to the design and analysis of sequential decision making algorithms in artificial intelligence. His research has garnered a monumental number of citations. Google tells me it's over 35,000, and he continues to publish innovative new research in RL. Professor Littman, thank you so much for joining us. Wow, you're welcome. I guess I've always thought of myself as being pretty average, so that's kind of exciting that you see me otherwise. So your thesis back in '96 was titled Algorithms for Sequential Decision Making, and you're still in the field. You're like a real OG of RL. Well, thanks. I certainly feel that way. I know that recently, when I've been teaching the reinforcement learning class on campus, I start with the same banter that I started with a long time ago, which is: just keep in mind that you know what machine learning is, and this is the weird little baby brother of machine learning. We're not doing supervised learning. It used to be that people would be like, yeah, it's okay, we just need an extra class. And now they're like, no, no, no, we absolutely intend to be here. We want to learn about reinforcement learning. So it's kind of a very exciting time. How did you initially come to get interested in this area that became the major focus of your career? Because it's just so interesting, isn't it? I guess I first started to think about it when I was in college in the mid-to-late 80s. This was during one of the previous neural network waves, and people were talking about machine learning. When I heard the phrase, the first thing that came to mind was, well, what I later learned would be reinforcement learning: trying to get machines to use their experience to make decisions. And so that was really the driving, interesting example of machine learning to me. I wrote a paper in a psychology class in college about what I thought that meant and how we would use, quote unquote, machine learning to solve problems like playing tic-tac-toe. Only later did I come to discover that there actually was an area that worked on it. It wasn't what I thought was the area that worked on it, but there was stuff going on and I was very excited. In fact, right out of college, I worked at an organization called Bellcore, which had been spun out of the Bell System, spun out of Bell Labs. My mentor in the group that I joined was a guy named David Ackley, and he had done his dissertation on a genetic machine for doing optimization and decision making. I told him about my interest, about the kinds of problems that I thought were really cool. And he's like, oh, that's reinforcement learning here. And he gave me Rich Sutton's 1988 paper. This was, you know, we're talking 1989, so this was fresh out, a brand new idea that people were talking about. And I said, wow, this is really cool. How do I learn more about it? He's like, well, it's new, but we can just have Rich come and give a talk.
And so he just reached out and had Rich Sutton come and give a talk in our research group. And I'm like, can you do that? Can you reach into the literature and pull out actual live human beings? It was quite a revelation for me. And it was super valuable to get in at that stage and start learning about how people were thinking about these problems, what the open questions were that were still going on, and to try to engage with them. Yeah, from almost the very beginning. Well, reaching into the literature and talking to a real human, that's kind of how I feel today talking to you. So thanks so much again for being here. When you look at your research career, do you think of it as having different chapters, or is it one long episode? Oh, that's interesting. Yeah, only in retrospect is it easy to think about chapters. But it is the case that I've changed jobs a couple of times, and each time there's a job change, there's an opportunity to step back and say, okay, what am I trying to do here? What's the real plan? It was certainly the case that when I stopped working with Dave and went to go get my PhD, that was an opportunity to have a bit of a change. When I started as a professor at Duke, the two areas that I was most interested in were reinforcement learning, or sequential decision making, and also information retrieval, and I actually had one PhD student working with me on each of those topics. When I finished up at Duke and moved, well, ultimately moved to Rutgers, it occurred to me that it's just too hard to stay on top of these two fields, both of which were moving very, very rapidly. I wasn't going to be able to help guide people and mentor people in both reinforcement learning and information retrieval, and I thought, okay, reinforcement learning is the one that I really want to stick with. So I named my research group when I got to Rutgers the Rutgers Laboratory for Real-Life Reinforcement Learning, or RL cubed, and I just didn't work with people who were doing other things anymore. So that was definitely a kind of chapter boundary. I stopped doing language and really focused in on decision making. I came to RL after deep RL was already a thing. Can you help us understand what it's like for someone like yourself, being part of this deep reinforcement learning boom over the past few years, whereas you started long before that, when it was so much smaller? Hard to say how to put it in terms that would look reasonable to somebody who joined during this later phase. A lot of the questions are still the same questions. Some things that we used to think worked but didn't work now kind of work. So back in the day, just as one concrete example, around the time of TD-Gammon there was work that Gerry Tesauro did training up a TD algorithm to play backgammon. Pretty much before that paper, there were plenty of examples of TD learning being implemented and tested, and you can make graphs and stuff, but it wasn't really solving a problem that you would think of as a problem. Anything that TD learning had been doing, there was some better way that we knew how to do it. We just wanted to do it with TD because we wanted to understand the properties of TD and we thought it was really important.
But with Gerry's work, he was getting a machine to play backgammon at a level that no one had ever seen before, right? It was playing the game arguably as well as the best people. And the secret sauce to that seemed to be some part of reinforcement learning, or temporal difference learning, and the way that he was representing the value function was with neural networks. So it all sounds very familiar, right? You're going to play some hard game that machines couldn't play very well before by combining a neural network and reinforcement learning. Not so different from the AlphaGo work. And it was really remarkable. There was a big jump in the number of people who were excited and interested in applying reinforcement learning to different problems, and shortly after that you saw people applying it to things like elevator control or cell phone channel allocation, all kinds of practical problems that fit the mold. Generally people tried to combine a neural net with this notion of a temporal backup. And the funny thing about it, maybe not funny at all, is that it was actually really hard to get it to work. The neural networks were very, very brittle. It was pretty common for them to actually improve for a while, and then as you continued to train them they would just completely collapse, like worse than random after that. They knew nothing at all about the problem, or even how to answer questions. Back in that time I supervised projects from many different students trying to do exactly that: apply a neural net to learn a value function for some video game or board game or some control problem in the real world. And the neural nets were just too flaky. We ended up really not using them effectively, ever. So that was sort of the dirty secret: Gerry Tesauro could get neural nets to learn amazing things, but for the rest of us it was much more hit or miss. I think one of the things that's really exciting this time around is that the neural net training process seems to be a lot more robust, and many more people are able to train on many more problems effectively. Part of that is I think we have a better culture now of sharing code, and part of it is I think the training algorithms are just more solid than they were back in the 90s. Tesauro's project seemed really ahead of its time, and people talked about it for many years afterwards. When I first got really interested in this, at the time it seemed like DQN was the thing that started this. But looking back at that backgammon work, it's not really that different from DQN. I think they had a simpler network, but it seemed really ahead of its time. And, like you said, the grandfather of AlphaGo. Yeah, I think that's right. And I think there's a lot to love about the DQN work, and it is really exciting that it got a lot of people jazzed about this whole area. But looking at the paper from the broader perspective, one of the things that they pushed on is: whoa, reinforcement learning, that's really powerful, we could do all sorts of things with it. But a lot of us already knew that. What we really wanted to hear is: what are you doing differently from what Tesauro did? What are you doing differently now?
What ideas have you instantiated in your algorithm that we didn't know how to do before that are making this work? We knew we could have done this 30 years ago, so why now? Why is it working now? And there are reasons for that. There are things that they did in the DQN work that made the training of the network more robust, but that was sort of downplayed in a way that I think is unfortunate. I think there was a lot to learn from their success, the engineering of what they put together and the algorithmics of what they put together. They patented that, and if you read the patent, the DQN patent really hinges on the target network. They have the two networks, with the target network. So maybe that was one of the innovations. Oh, absolutely. That was one of the things where, looking back, I think we would have been reluctant to try that back in the day because it doesn't seem like it would be reactive enough, responsive enough. And another thing they did in some of that early DQN work is to say, we're going to throw away a ton of data. We're going to run whole trajectories and just remember one transition from that whole thing, because that's going to give us statistical independence of the samples that we're using for training. We would have never done that back in the day, partly because computers were just a heck of a lot slower, so the idea of running a whole trajectory and only keeping one sample from it was unthinkable. When they revisited these questions back in the mid-2010s, things had shifted, right? The computer power had shifted, the palette of engineering opportunities was more open, and I think they took advantage of it in a really beautiful way.
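To make the two stabilizers described above concrete, here is a minimal Python sketch of a Q-learning-style update that uses a separate, occasionally synced target network and a replay buffer sampled at random to decorrelate transitions. This is not the original DQN code; the state shape, network sizes, hyperparameters, and the assumption that states are length-4 float lists are all illustrative.

    import copy
    import random
    import torch
    import torch.nn as nn

    obs_dim, n_actions, gamma = 4, 2, 0.99          # assumed problem sizes, not from the episode
    q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
    target_net = copy.deepcopy(q_net)               # (1) frozen copy used to compute bootstrap targets
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    replay, capacity = [], 10_000                   # (2) buffer of past transitions
    updates = 0

    def store(s, a, r, s2, done):
        # s and s2 are assumed to be length-obs_dim lists of floats
        replay.append((s, a, r, s2, done))
        if len(replay) > capacity:
            replay.pop(0)

    def train_step(batch_size=32, sync_every=500):
        global updates
        if len(replay) < batch_size:
            return
        batch = random.sample(replay, batch_size)   # random subsampling breaks temporal correlation
        s, a, r, s2, done = zip(*batch)
        s = torch.tensor(s, dtype=torch.float32)
        a = torch.tensor(a, dtype=torch.int64)
        r = torch.tensor(r, dtype=torch.float32)
        s2 = torch.tensor(s2, dtype=torch.float32)
        done = torch.tensor(done, dtype=torch.float32)

        q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)          # Q(s, a) from the learned net
        with torch.no_grad():                                      # targets come from the frozen net
            target = r + gamma * (1 - done) * target_net(s2).max(dim=1).values
        loss = nn.functional.smooth_l1_loss(q, target)
        optimizer.zero_grad(); loss.backward(); optimizer.step()

        updates += 1
        if updates % sync_every == 0:                              # occasional sync keeps targets stable
            target_net.load_state_dict(q_net.state_dict())

The target network is exactly the "two networks" idea mentioned above, and the random sampling from the buffer plays the decorrelation role that discarding most of each trajectory played in the early work.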
You recently put on the RLDM conference in Montreal, that's Reinforcement Learning and Decision Making, which happens every two years. That sounds like a very unique conference, being so multidisciplinary. Is there a common language across all these disciplines for this stuff? So I love RLDM. I think it's a fantastic conference and I'm delighted that I had the opportunity to contribute. Basically, there's a core set of reinforcement learning researchers who felt, well, it's interesting, because for a long time people like Andy Barto would be approached saying, hey, reinforcement learning, it's really cool, you should have your own conference. There's a machine learning conference, there are machine vision conferences, planning conferences, AI conferences; there should totally be a reinforcement learning conference. And he, and I think also Rich Sutton, were very much of the opinion that that would be a huge mistake for all the fields concerned, that it was very important to make sure that reinforcement learning was always being done in service of the broader AI goals, and separating it out from that community would be damaging. So they resisted the push to have a conference for a very, very long time, and I think they finally caved probably around the same time that Andy was getting ready to retire. So he had less say in the process. But the way that the powers that be decided to instantiate a conference was very deliberate, very conscious. They decided they wouldn't have it be a conference that had proceedings, right? So if you want to have a paper that gets published, and academics need to have papers that get published, you couldn't do it in this conference; you'd have to publish those papers in other conferences. That was one way that they tried to ensure that things didn't split off. And the other thing is they said, well, we're going to have a meeting anyway, we might as well make it as multidisciplinary as possible. There are interesting things happening in decision making and reinforcement learning across half a dozen or more academic disciplines; let's bring those people together and have an opportunity to just talk about the problems. So yeah, your question about whether there's really a common vocabulary: it's pretty remarkable. We had at this conference this year people from marketing, from neuroscience, from cognitive and behavioral sciences, from computer science and robotics and AI and engineering, and we all talked at least a similar enough language that the talks really translated very nicely from subarea to subarea. Everybody seemed engaged the whole time. It's not like the neuroscientists would get up and leave the room when we were talking about computational issues, or vice versa. There was plenty to learn from each other, and I think a lot of people really enjoyed getting to think outside their own discipline. So I didn't get to make it to this conference, and my regret is very high about that, but can you share with us, I saw there was a great list of speakers, were there any highlights that you would want to share with us?
Sure. Well, one highlight is that the next conference in two years will be in Providence, Rhode Island, at my home institution, so try to make it out to that one because that one will be good too. But in terms of highlights, the structure of the conference is interesting: there's one track, all the talks happen in one track, and the main talks are invited. There are shorter talks that are contributed; people submit papers or extended abstracts and they get selected to present their work. But most of the minutes of the talks are spent on people that were invited by the program chairs, and that was me and Kate Hartley at NYU. It would be really rough of me to actually say, hey, this talk, this area, that's great, and the rest of the stuff, meh, because these were the cream of the crop, right? These are the people that Kate and I said, boy, if we had a conference and we got to choose every talk in the conference, which we kind of did, what would we want to hear about? And this was exactly what we wanted to hear about. We heard just wonderful results and interesting ideas from all across the spectrum. So I can give a couple of highlights. One piece of work that I think was really well received was the work on distributional reinforcement learning, which is the idea that you create a reinforcement learner that, instead of just trying to predict expected future reward, is trying to predict multiple, they call them expectiles, so percentiles of the expectations of the returns that the agent is getting at each point in time. So you're giving it a harder problem: it's trying to produce not just the expected value of future return but the whole distribution. What does it look like? What's the likelihood that I'm going to get a really high return from here? What's the likelihood of a low return? You can always turn that into an expected value by averaging that distribution, and that's what they end up doing when deciding how to actually behave, but the learning problem is to learn the entire distribution. And there's really interesting stuff happening with that idea, both in terms of what it means, like it seems to work really well for, for example, learning to play Atari games. Why? We're not using that distribution, so why does it help to learn it, and why is it better? So this is like the C51 and the Bellemare line of work? That's right. IQN? Yes, that's right. And we heard at least one talk that spoke specifically about what they've been able to figure out in terms of why it's helping. Another line of work that we heard about related to distributional RL is re-analyses of some experiments on actual biological systems that are learning, so measurements of patterns across multiple neurons, arguing that the patterns of learning that you see in these neurons in a real brain are capturing distributional information about the returns, that there's evidence that the brain is doing distributional RL and not just expected value like TD RL. I think it's still early going, I don't think this is definitive quite yet, but it is really exciting and tantalizing, and they did a very careful job of analyzing the results, presenting them, and explaining why it would make sense for a brain to do that sort of thing. So I thought that was really cool.
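A tiny, self-contained sketch of the idea just described: learn a set of quantile estimates of the return instead of a single expected value, then average them when it is time to act. This is a simplified tabular quantile-regression update, not the C51 algorithm itself; the two-action toy problem, step size, and number of quantiles are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n_quantiles = 5
    taus = (np.arange(n_quantiles) + 0.5) / n_quantiles   # quantile levels 0.1, 0.3, 0.5, 0.7, 0.9
    theta = np.zeros((2, n_quantiles))                     # per-action quantile estimates of the return
    lr = 0.05

    def sample_return(action):
        # Assumed toy problem: action 0 is low-variance, action 1 is high-variance, same mean.
        return 1.0 + 0.1 * rng.normal() if action == 0 else 1.0 + 2.0 * rng.normal()

    for step in range(5000):
        a = int(rng.integers(2))                           # explore both actions
        g = sample_return(a)                               # observed return, used as the target
        # Quantile-regression update: pull theta_i up with weight tau_i when g is above it,
        # and down with weight (1 - tau_i) when g is below it.
        theta[a] += lr * (taus - (g < theta[a]).astype(float))

    expected = theta.mean(axis=1)                          # collapse the distribution to a mean for acting
    print("learned quantiles:", np.round(theta, 2))
    print("expected values:", np.round(expected, 2), "-> greedy action:", int(np.argmax(expected)))

The agent ends up with a picture of how spread out each action's returns are, even though the final action choice here still only uses the average, which mirrors the puzzle raised above about why learning the distribution helps.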
Another thing that really stuck with me: another scientist talked about risk-seeking behavior. One of the things you observe in people who are addicted to gambling is that they really like gambling; that's what it means to be addicted to gambling. And one of the properties that you see in such people is that they actually tend to downweight risk. When they're deciding what to do, they're not very worried about the downside; they're more excited about the possible upside. And this is visible not just in their gambling behavior, but if you actually give them classic Kahneman-and-Tversky-type problems to decide, oh, what would you do if you had the choice between this kind of thing and this kind of thing, you see a consistent pattern in the way that they make their decisions. One of the things that was cool is our speaker talked about how you can get rats to have this behavior. You can do various things to rats, and making them cocaine-addicted turns out to be a way of getting them to exhibit exactly this kind of pattern of choices. But another really interesting thing is that when you actually make them do the task, when you show them the choice and they make the choice and then they get rewarded, if in addition to just giving them the juice or whatever it is that they're trying to maximize, you also make lots of flashing lights and loud sounds that go ka-ching, ka-ching, ka-ching, that tends to put their brain into a mode where they're much more likely to be risk-seeking, much more gambling. And I think the reason this is really striking to me is if you think about how casinos are constructed, especially slot machines in casinos, the machines make a big deal about winning. Or these online sites that let you play various kinds of little games: you don't just get points and then you're done with it; there are also animations and noises and sorts of things that seem to be exactly what causes brains to become more risk-seeking. That was super creepy and super exciting to think about, because it means, well, maybe we have ways of helping people and ways of intervening to help them not be so addicted when it becomes a problem. But it's also creepy because I know I'm being exposed to these kinds of stimuli all the time, and they're wreaking havoc with my reward system and how I interpret rewards. And so it's made me a little more cognizant of, okay, you know what, I think I'm done with this game for now, I think this game is having its way with me and not the other way around.
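To make the "downweighting the downside" idea concrete, here is a tiny illustrative calculation with made-up numbers (not from the talk): a risk-neutral decision maker compares gambles by expected value, while a decision maker who underweights losses can end up preferring a gamble whose expected value is negative.

    # Toy gamble: win 10 with probability 0.4, lose 8 with probability 0.6 (illustrative numbers).
    outcomes = [(0.4, 10.0), (0.6, -8.0)]

    def subjective_value(gamble, loss_weight=1.0):
        # loss_weight < 1 models being "not very worried about the downside".
        return sum(p * (x if x >= 0 else loss_weight * x) for p, x in gamble)

    print("risk-neutral value:     ", subjective_value(outcomes))                   # -0.8 -> decline the gamble
    print("downweighted-loss value:", subjective_value(outcomes, loss_weight=0.5))  # +1.6 -> take the gamble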
Wow, that just reminds me of social media, with the constant little rewards that we get. Exactly, exactly right. And I think somebody at some level knows this, right? People are building interfaces that are definitely tapping into this. I think the more people know it, the more aware people are of it, the hope is the more we can protect ourselves when we're being manipulated. Sometimes it's just fun, and if you want to spend a little time having fun, we shouldn't be worried about that. But in other cases it starts to creep in and kind of take over your life, and until we as a society, or we as individuals, figure out how to protect ourselves from that kind of manipulation, we're just tools, right? We're just going to do whatever the machines tell us to do, and I don't think that's what we want. Well, I'm really glad that there are researchers thinking in that direction of how to protect us and avoid these mechanisms. Exactly right. And it's fun to think that we, doing computational reinforcement learning research, are helping to inform that community, helping to give them tools for thinking about their problems, and in exchange they're, you know, protecting us from the big bad casinos that are after us. I want to move on to your MOOCs. You have some great MOOC courses, I can't say I've done them, but I think you have Udacity courses on RL and machine learning. Do you plan to do more of this? Yeah, so let's see. Shortly after Udacity became a thing, I was in touch with Sebastian Thrun and asked him, I just thought this was really neat, it was cool that people were doing it, and I wanted to learn about this new wave of organizing educational materials for people. So I actually got to spend, I think it was a week and a half or two weeks, out in Silicon Valley working at Udacity, and I put together a class on, what did we call it, Crunching Social Networks, something like that, but it was basically a course on graph algorithms. So that happened, and I was really excited and had a good time with it, and I kept asking Sebastian, can I do more of this, can I do more of this? He's like, I'm glad that you did it, but he wasn't giving me another opportunity. But then Udacity and Georgia Tech ended up forming a partnership to create an online master's degree, and my good friend Charles Isbell was one of the people instigating that. He said, okay, this is about to become a thing, would you like to do a machine learning class with me? I'm like, yay, I get to be with Charles, I get to do another MOOC, and I don't have to wait for Sebastian to say okay. So we put together a machine learning class and that went really well, and then we put together a follow-up class specifically on reinforcement learning. I still use those videos. I'm about to teach a reinforcement learning class on campus this fall, and the students will be asked to watch videos from that MOOC. So that was super fun. We haven't done anything since then. I am now in the process of trying to put together a class for a company, so it's not a massive open online course because it's not open, it's for pay. But one of the things that I think is really fun about it is that it's machine learning where you don't have to be a computer science major. It's a course where, if you're just interested in hearing about what machine learning is about and learning about some of the technologies underneath it, this would give you a chance to learn about that. So I'm excited, and it's been a ton of work, because I can't depend on people knowing the kinds of mathematics and the kinds of computer science algorithms that I would normally depend on in an on-campus class. So, boy, there's just a lot of thinking about how to present the material so that the amount of background needed is as small as possible.
So you appeared in an AAAI article called Ask Me Anything about MOOCs. Paraphrasing from something you said there, you said: I was drawn to MOOCs as an opportunity to turn teaching into a sequential decision making problem, where the student is the environment and the MOOC is the decision maker. And then you went on to say that that's been hard to achieve. And then you have a 2018 paper, well, you co-authored a paper with Saarinen, if I'm saying that correctly, called Personalized Education at Scale, where you look at this idea of using RL in education, and the paper suggests that sample-efficient RL could help optimize curricula for human learners. So I wonder how far are we, if we're making this type of approach to education practical, and are MOOCs trying to do something like this right now? So I was amazed that the people who started the whole MOOC craze were almost all machine learning people: Daphne Koller, Andrew Ng, Sebastian Thrun, a lot of the movers and shakers at Coursera and Udacity, probably some people at edX too, but I didn't know them as well. They were not just computer scientists thinking about education; they were specifically AI and machine learning people thinking about education. And so it seemed obvious that they were viewing this as a machine learning kind of platform, right, that we would be using this to find ways of optimizing the process of education. But it turns out they had bigger, I don't know if they're bigger fish to fry, they had different fish to fry. They had so many things to worry about in terms of getting these companies up and stable that the machine learning aspect, which surely they were thinking about, never really got surfaced, it never really became part of what they were doing. They were saving their data, but they weren't really doing it in a way that was going to be able to translate into optimizing the process. So I've now had two PhD students who've been really interested in this question of how we can do that.
One, a now-graduated student named Vukosi Marivate, took a stab at it and didn't really get very far. Part of it is that he was doing that while trying to do reinforcement learning for medicine at the same time, and we were trying to make fundamental contributions to the machine learning literature, and so I think he just wasn't far enough into the education side to be able to have that kind of impact. After that, though, Sam Saarinen, who is a current PhD student of mine, is really committed to the education side. He's been doing work with computer science education folks in my department and is taking it really seriously: going out and learning that literature, talking to companies that are collecting data, talking to them about ways to collect data differently, and being willing to think about k-armed bandit problems as opposed to the full reinforcement learning problem, with the idea that it's a simpler version of the problem but one where we're more likely to be able to get the right kind of data and make the right kind of judgments based on it. It looks like that's going to be his PhD dissertation: we're going to focus on these questions of how to get data from people, how to turn that back into decisions, and how to see that those decisions are actually causing improvements in something measurable about their learning. So I think we're poised to make some contributions to this area now, but yeah, it's been a long time since the first MOOC came out, and I can't think of too many positive examples of machine learning really having a substantial impact on the way the learning is happening. I remember seeing an Emma Brunskill talk about her using RL in some kind of course curriculum design, in terms of which topic should be covered next. But as I think about this, I'm like, well, teachers have all this built-in knowledge and intuition about how students are learning, what order to do things in, and how to approach things, and I can't imagine a system trying to use, like, epsilon-greedy to figure out how to teach a course. So it seems like we have to somehow combine what the teachers understand and figure out what remains to be learned. I wonder how you would approach that, in terms of what should be learned and what should be built in to a system like that? I think one of the things that we're doing that's a little different now is that we're taking a step back and trying to teach much simpler things. The dream of being able to, for example, take everything that people need to learn about mathematics, say, and organize it into a flow, so that the right topics are being introduced at the right times and the right assessments are taking place, so that you can potentially have people skip over things they understand well or dive more deeply into things they haven't quite gotten yet, I think that was the dream, and I think we just might not be ready for that yet. So the kinds of problems that Sam and I might look at include things where you need to memorize a big list of facts, you know, the major historical events in, say, Russian history or something like that, so you need to be able to associate events and dates and things like that. It's not a deep kind of cognitive understanding,
and there's no real sequentiality to it. But it turns out that for getting people to remember facts, just kind of dry facts, showing it to them once is not enough. Showing it to them repeatedly is important, but the timing of those repetitions is also very important. What you want to do is what's called spaced repetition: you want to basically remind somebody of a fact that they're just about to forget but haven't quite forgotten yet. Typically that causes them to actually remember it for much longer, but they're still eventually going to forget, and you need to remind them again before they forget. The typical pattern of forgetting follows a kind of exponential curve; each time they're reminded, there's some multiplicative factor longer that they're going to remember it for. So figuring out what those quantities are is a much easier learning problem, right? Figuring out how fast people are going to forget something and then reminding them right before they forget is a much easier problem than organizing the entire curriculum, the entire set of topics and how they interrelate to each other. And so we want to try to get a handle on that simpler problem first.
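A minimal sketch of the scheduling idea just described, assuming a simple exponential-forgetting model: each successful reminder multiplies how long the learner will retain the fact, and the scheduler reviews just before the predicted forgetting time. The initial interval and growth factor below are made-up illustrative parameters; in the real problem, estimating quantities like these from student data is exactly the learning task being described.

    def schedule_reviews(first_interval_days=1.0, growth=2.5, horizon_days=90.0):
        """Return the days on which a fact should be reviewed.

        Each review is assumed to multiply the retention interval by `growth`,
        so reviews land right before the fact would otherwise be forgotten.
        """
        reviews, t, interval = [], 0.0, first_interval_days
        while t + interval <= horizon_days:
            t += interval            # review just before the predicted forgetting time
            reviews.append(round(t, 1))
            interval *= growth       # remembering lasts multiplicatively longer each time
        return reviews

    print(schedule_reviews())        # [1.0, 3.5, 9.8, 25.4, 64.4]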
Right? Once it's doing the right thing, you don't have to reward it anymore; it's doing the right thing. So what people tend to do is give a lot of reward when things are heading in the right direction, and then start to back off from that reward once the system is doing the right thing. Does that make sense? Does that jibe with your intuitions about how you would give reward if you were doing this over a long period of time? Yeah, totally. I wouldn't want to have to keep rewarding that same behavior. Right, and it feels almost unnatural, almost insulting. If I were the learner, it would be like, okay, I get it, you don't have to tell me anymore. But if you do that to a Q-learner, if you do that to a standard reinforcement learner and you withdraw the reward it's been getting, that's going to force it to start getting TD errors: it's expecting a reward, it didn't get the reward, so something's wrong, it needs to update its model, it needs to change its behavior to try to get more reward, and everything falls apart. So if you try to train a reinforcement learner with a human as the reward function, it tends to degrade really rapidly. It heads in a good direction and then starts to fall apart, because people stop giving the reward. And the withdrawal of reward is actually a symptom of a much more systematic process that the people giving rewards seem to be undergoing. What they seem to be doing is making internal predictions about how the learner is doing, how good it is at the task, and then providing positive rewards when the behavior exceeds that expectation, roughly neutral rewards when it's consistent with their expectations, and punishments, negative rewards, if the performance is falling off from what was expected. As the learner is learning, this is a moving target: you're trying to say, here's positive reward for doing the next thing you need to be doing toward the behavior that you should be learning. So the observation that once the system is actually learning to do the correct thing people back off on the positive reward is just one special case of this more general pattern, which seems to be that people are providing advantage feedback. They're providing information about the performance of the agent relative to its current expectations, its current baseline. And from that perspective, we need different kinds of reinforcement learning algorithms to take advantage of that kind of feedback. We can't just use Q-learning, because Q-learning interprets rewards very differently from what people are actually providing. So you have a paper on Convergent Actor-Critic by Humans, with MacGlashan et al. Is that what you're talking about there, with the humans providing that advantage signal to the critic? Yeah. In some ways that was the capstone of a long series of papers where we were starting to get a handle on what we were understanding about learning from people. In addition to the work that I was involved in with Mark Ho, we had a collaboration with folks at NC State and Washington State University, Matt Taylor's group and David Roberts's group, where we were following a very similar kind of pathway.
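A rough sketch of the interpretation just described: treating a trainer's signal not as a reward to be accumulated, but as an advantage, a "better or worse than I expected" judgment that directly scales a policy-gradient step. The tabular softmax policy, learning rate, and feedback scale are assumptions for illustration, not details taken from the COACH work.

import numpy as np

class AdvantageFeedbackPolicy:
    """Tabular softmax policy nudged by human feedback treated as an advantage.

    Feedback of +1 means "better than expected", 0 "about as expected",
    -1 "worse than expected". Nothing is summed into a long-run return."""

    def __init__(self, n_states, n_actions, lr=0.1):
        self.theta = np.zeros((n_states, n_actions))
        self.lr = lr

    def _probs(self, state):
        z = self.theta[state] - self.theta[state].max()
        e = np.exp(z)
        return e / e.sum()

    def act(self, state, rng=np.random):
        return rng.choice(self.theta.shape[1], p=self._probs(state))

    def give_feedback(self, state, action, feedback):
        """Apply a scalar human signal as the advantage in a policy-gradient step."""
        grad_log_pi = -self._probs(state)
        grad_log_pi[action] += 1.0  # gradient of log pi(action|state) w.r.t. the logits
        self.theta[state] += self.lr * feedback * grad_log_pi

Note that in this reading, withdrawing feedback (sending 0) simply leaves the policy alone rather than generating a TD error, which is the behavior the conversation describes.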
They were trying to figure out what information people are actually providing and how we can make good use of it, and both efforts ended up converging on this idea that, really, the reward signal is the trainer's way of saying, yes, you're going in the right direction. It really is a kind of communication more than it is a quantity to be maximized. If you just think of it in terms of profit maximization, you end up being misled by these signals; you have to think of them as an indicator nudging you in the right direction. So we ended up with a whole sequence of algorithms for dealing with this kind of feedback, and COACH was the last one in that sequence. I think it really did build on the insights we had gained through that whole five- or six-year period of exploring those questions. So it seems like that's one of the challenges with human feedback. And in the Deep COACH paper you mention that human feedback is often delayed, because humans can't be quite real time. Is that another one of the challenges? Are there other challenges with incorporating humans in this loop? That's right. Delay in actually providing that information has to be taken into consideration, and so we had a mechanism in the paper you're referring to that was essentially a kind of trace, the same kind of trace that you see in TD learning, of keeping information around so that when the feedback actually happens it can be applied to what was in memory when the learning should have been taking place. That work hasn't quite finished yet; we haven't quite gotten it to scale and to be robust, but that was the direction we were going when we wrote that paper. So in that Convergent Actor-Critic by Humans paper there's a line that says nearly half of American households have a pet dog and at least some exposure to animal training, suggesting an alternative path for customizing robot behavior. It sounds like you're suggesting there could be a whole human skill involved in how to train a robot using these types of signals. Do you see things unfolding that way? I do, I'm excited about that idea. Have you ever tried to train a pet, like a dog or anything? I am trying to train our puppy right now. It's challenging. Did you take any classes? I did. I probably need a lot more classes. So I grew up with cats, and we didn't even make any attempt to train the cats, but I got married and had kids, and the rest of the family pushed: we're going to have a dog. And I'm like, it's not my dog, but sure, you can have it in the house; it just doesn't belong to me. That being said, we're buddies now and she's sleeping next to me right now as I'm talking to you. I mean the dog, not the spouse. So I went to some of these training classes early on, and it was really remarkable to me how much of a skill it really seems to be. It seems so natural, like you just reward it for good things and punish it for bad things, but no, it's much more subtle than that. Part of it is getting into seeing the world from the animal's perspective, so that you know that when you're giving a reward, it's being interpreted as a reward for the right thing and not just as a kind of random occurrence.
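One way to picture the trace mechanism just mentioned for delayed feedback, sketched under assumptions: a single exponentially decaying eligibility trace over recent log-probability gradients, with an illustrative decay constant and update rule rather than the exact design from the Deep COACH paper.

import numpy as np

class TracedFeedbackUpdater:
    """Keep a decaying trace of recent policy gradients so that human feedback
    arriving a second or two late still credits the actions that earned it."""

    def __init__(self, n_states, n_actions, lr=0.05, decay=0.9):
        self.theta = np.zeros((n_states, n_actions))
        self.trace = np.zeros((n_states, n_actions))
        self.lr = lr
        self.decay = decay

    def step(self, state, action):
        """Call on every action: fold this step's gradient into the trace."""
        z = self.theta[state] - self.theta[state].max()
        probs = np.exp(z) / np.exp(z).sum()
        self.trace *= self.decay            # older actions get less credit
        self.trace[state] -= probs
        self.trace[state, action] += 1.0    # grad of log pi(action|state)

    def feedback(self, signal):
        """Apply (possibly delayed) human feedback to everything still in the trace."""
        self.theta += self.lr * signal * self.trace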
One of the things that we discovered in the COACH work, and you mentioned James MacGlashan, is that he became a kind of robot whisperer during that work. He implemented the algorithm on a TurtleBot, which is like a Roomba with a computer strapped to the top of it, and he could make that robot do amazing things. He would look deep into its eyes and understand how it was seeing the world, and, I mean, he wrote the code too, so that helps, and he'd give just the right reward at just the right time and get it to do remarkable things. One thing about that COACH paper is that we did the experiments with James as the trainer. We never really got other people to be the trainer, partly because it's just hard to bring random people into the lab and train them up on the task, but partly because it really was hard. It wasn't the case that you could just start giving random rewards and punishments and it would all turn out fine. You have to be thinking about what the perception of the machine is and how it's likely to change its behavior if you give this feedback at this time. So even though our goal is to make a system that's going to be amenable to a broad variety of end users, we're not there yet. At the moment it's still a specialized skill, and, well, at the moment you need to be James to be really good at it. It almost makes you wonder if we need some kind of translation layer between the way humans would normally convey what they want and what the RL needs to experience to do the learning; those two things are quite different. Yeah, that could be. One of the things that occasionally occurs to me when I do this kind of work is that, as an educator, I think I tend to undervalue education, or educating, as a skill. If you speak English as your first language, you don't think about the fact that that's actually a really powerful tool for working in the global economy, and there are many people who have to work really, really hard to get to that point. It's a valuable skill that you have, and you got it sort of for free, but it's still incredibly valuable. This notion of being able to engage in an educational process, to take a hard problem, break it down into pieces, and then convey those pieces to an individual: if you do it all the time as part of your job, you might not think about the fact that it's actually a really great skill, and it's a skill that not everybody has developed to the same degree. I think everybody's pretty good at it to a point, because just to be able to talk, to explain something to somebody, which we all do all the time, you have to have some practice in taking complex concepts and breaking them down into simpler pieces. But the folks who can do this at a level that breaks down extremely complicated problems, that's a skill, and it's not a skill that everybody has. I think training these robots is a similar kind of thing. It's not the case that everyone just naturally has it and it's only a question of translating what they say into rewards and punishments so that it works better. It's that the things they say differ
depending on whether they've really thought deeply about how to do teaching. So it might not be so simple; it might be that we all need to work a little harder to tell machines what we want them to do. Part of it is even knowing what you want. Some people haven't thought hard enough about what it is that they want, and of course if you don't know what you want, breaking it down in a way you can convey to a machine isn't going to happen. So I'm not convinced at the moment that it's just, oh, we need the right interface, we just need the right translator. I think part of it is that we have to encourage people to think a little more clearly, and then maybe it's not that hard a problem anymore. Hmm. So with these methods of using human feedback, can the feedback be given in batches after the fact, or is it kind of like on-policy, where the humans really need to be in there in real time? I would say the evidence from education is telling. When textbooks first appeared, and when educational videos first appeared, a lot of the time the perception was, oh, this is going to replace teachers, because you can now have the most accomplished experts in the world get their thoughts down, convey them to people perfectly, and it's just going to run inside their heads; it's going to be ideal. The fact of the matter is that having teachers there with the student in the loop, where they're both having the same experiences at the same time and can adapt to each other, we haven't replaced that. We don't really know what's happening in that loop, but it does really seem to be important, and I think that's true in this robot training setting as well. If we're trying to get the robot to learn, just giving it a whole big batch of experience and letting it process it offline is, in general, not going to be as powerful as having a person there helping to interpret the situations on behalf of the robot. But really what you want is a mix: you want the robots to be, whatever, dreaming offline, processing this information offline, squeezing the most they can out of it, and then, when they are able to interact live with a trainer, getting the most they can out of that as well. Going back to this People Teach With Rewards and Punishments as Communication paper: that paper mentions theory of mind and cognitive hierarchies. Can you touch on how theory of mind relates to communication here, and what these cognitive hierarchies are and what they help us do? I think the best way to think about a cognitive hierarchy is that it's a way of trying to snip, or simplify, the infinite recursion that you get when you have two minds thinking about each other. Whenever communication is taking place, and again, I believe that teaching is a form of communication, the teacher has information that needs to end up running inside the learner's head, so that communication has to happen. And whenever you're thinking about communication, the teacher is trying to convey information, but how? The teacher should say whatever it is that the learner will interpret in the way the teacher is hoping will change the learner. So the learner has, I don't want to say a deficit, but some sort of gap between their current state of
knowledge and where the teacher is trying to get them to go, and so the teacher has to say something so that that gap gets bridged to some degree. So the teacher has to think about what's going on in the head of the learner, but the learner should also be thinking about what's going on in the head of the teacher: why did the teacher say that to me just now? Oh, the teacher is trying to get me to understand this, great. Okay, so if the learner is an active participant in this communicative act, then the learner is also thinking about the fact that the teacher is thinking about what the student is trying to think about, and we can just keep spinning this out infinitely. The teacher has to think about the fact that the student is thinking about what the teacher is thinking about the student thinking about the teacher thinking about how the student's mind is going to change, and this never ends. So what the cognitive hierarchy idea says is: you know what, this has to end. Do we do this infinitely deeply? Sure, it's possible that I'll be able to say one weird sound, like brrrr, and that will have exactly the right cascading effect in the learner's head, and suddenly the learner knows everything. Maybe. But more likely, we're going to spend tons and tons of time recursively modeling each other in a way that's pretty fruitless. So the cognitive hierarchy idea says, no, let's just go back and forth some finite number of times. We're going to say: I should be saying things so that the learner hears them in the context that I'm saying something because the learner's mind has to change, something like that. So you clip this hierarchy and take it only to a certain depth, but more than just one. Don't just say the thing literally; you want to say the thing so that it has the right effect, and the learner knows that. So you do want to do a couple of levels of back and forth, maybe, but not infinitely deep. The cognitive hierarchy idea says, okay, let's do this to some probably pretty small, but non-zero, number of levels of mutual modeling. So it's like on Twitter: do you take that tweet literally? Is it ironic? Is it sarcasm on top of irony? How far do you go? There must be some point of diminishing returns where you're like, okay, that's what they really meant. Yeah, I like the idea of Twitter having diminishing returns; I feel like that fits. That's right, but it's true of any kind of communication. I think Twitter maybe makes it more obvious to us because we're talking to people we don't necessarily have great individual models of. When we're talking to a friend, we have plenty of experience that tells us how deeply this back and forth should go. But when we're doing it on Twitter, there are just so many people, and they're so different and so unfamiliar, that you need to be thinking about it and almost calculating it carefully. So in the context of doing training or teaching, this kind of cognitive hierarchy can be a useful structure for deciding how to present the material in the most effective way.
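A toy illustration of clipping the recursion at a finite depth, sketched in the style of level-k or rational-speech-acts models rather than anything specific from the paper. The two meanings, two messages, and literal-truth table are made up; the point is only that a listener who models a speaker who models a literal listener extracts more than the literal reading, and the recursion stops after a small, fixed number of levels.

import numpy as np

# Rows are meanings ("some but not all", "all"); columns are messages ("some", "all").
# A 1 means the message is literally true of the meaning. Purely illustrative.
literal = np.array([[1.0, 0.0],
                    [1.0, 1.0]])

def _norm(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

def listener(level):
    """P(meaning | message) after `level` rounds of mutual modeling."""
    if level == 0:
        return _norm(literal, axis=0)        # literal interpretation
    return _norm(speaker(level), axis=0)     # invert the speaker one level down

def speaker(level):
    """P(message | meaning) for a speaker modeling a level-(level-1) listener."""
    return _norm(listener(level - 1), axis=1)

print(listener(0)[:, 0])  # literal reading: "some" is ambiguous, 50/50
print(listener(1)[:, 0])  # one level deeper: "some" now suggests "not all" (0.75 vs 0.25)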
For completeness, do you want to touch on what theory of mind refers to? Sure. Theory of mind is essentially this notion of recursive modeling. The thing that's not theory of mind, or is proto theory of mind, is the idea that when I'm trying to understand your behavior, I imagine that you're me, that you have in your head what I have in my head, and therefore the decisions you make are exactly the decisions I would make in that same circumstance. That's already a pretty powerful perspective: I can put myself into your shoes and understand your behavior through my own lens. What theory of mind does is take that one step further and say, yeah, but you may have had different experiences than I have, so you might act differently than I would. Just putting myself in your shoes and imagining how I would act, knowing what I know, is not necessarily how you are going to act, because you know some things that are different from what I know. This is an even more powerful way of predicting the behavior of other minds: to build out, almost simulate, their inputs and their experiences to a sufficient degree that you can predict how they're going to respond to new stimuli. People get pretty good at this by age seven or so, I think. People who are autistic have a very difficult time with this; they can learn to do it, but it becomes sort of a conscious computation. But neurotypical people, by the time they're seven or eight years old, do this without even thinking about it, and can develop very rich models of a whole network of people and how they're interacting with each other. So there's reason to think that if we're trying to make agents that do good decision making in networks of other agents, including people, they're going to have to do some amount of this theory of mind stuff. It's not going to be enough for them to just try to maximize reward; they need to be thinking about the impact that their actions have on the mental state of the other agents in the environment. So you co-authored another paper, Theory of Minds: Understanding Behavior in Groups Through Inverse Planning, with Shum et al. Can you talk about the idea behind this paper? Right. Continuing this idea: if we're trying to act amongst other agents, it's really useful to have an understanding of how they're going to make their decisions. Before the specific idea, let me mention some work that's not mine. Anca Dragan, who's now at UC Berkeley, did some really neat work on robot motion control, deciding where a robot's arm should move while taking into consideration the way that motion is going to be perceived. If, for example, a robot is trying to hand an object to another person, it's not enough to just put the object in a place where that person can grab it. You have to be signaling to the person where they should be reaching next, and the more legible, the more interpretable, the movement is, the more effective the collaboration is going to be. So she's developed motion planning algorithms that do a kind of theory of mind: they think not just about needing to physically move to this position, but about how the trajectory the robot takes en route affects the mental state of the other person, and we want to have the right effect on that mental state.
So it sounds like we're talking about action as communication, and earlier we were also talking about reward as communication. It seems like a common thread here. That's right. Action can be communication, reward can be communication, and communication can be communication, like words and things like that. In the context of the paper you're referring to, that was another one of these action-as-communication scenarios. There was a group of agents and they all had their own goals, and people watching were asked to interpret what goals the different individuals were trying to carry out, and how they were thinking of themselves as subteams within the bigger group of agents. People seem to be remarkably good at this. And we were able to model the way that people do this calculation, or at least the end result of the computation, by saying, well, one thing they could be doing is inverse planning. They could be thinking: if this were the mental state of these agents, then I'd expect them to behave like this; that's not how they behaved, so I should use that information with Bayes' rule to condition on a more likely description of their mental states. Seeing the behavior and then running that behavior backwards, in a sense, to say, well, what is it that they're trying to do, gives a window into what's going on inside the heads of the agents. I guess this reminds me of poker, where you're constantly having to think about what the other person knows or thinks they know. I noticed you had some papers on poker in your past, and recently we saw Pluribus making some strides in poker. Is theory of mind going to be relevant in those types of games? Definitely. The idea of modeling what's going on in the other agent's head is a really critical element of the way that the best poker players play, both machine and human. One difference between fully elaborated theory of mind and the kind of poker theory of mind is that in the poker setting, everything is intended to be misleading. Most of what you're trying to do with your actions is communicate the wrong thing. To the extent that you're actually conveying to the other player the cards that you have hidden, the ones only you know about, you're doing yourself a disservice, because it is a purely competitive game. There are other games, though. Michael Bowling, who's worked substantially on poker, has also written some papers on cooperative games where it seems as though you need a theory of mind. That was another one of the talks at the RLDM conference. He talked about a game called Hanabi, which is a purely cooperative game with a group of people where everyone has a hand of cards of their own that are private, but unlike normal games where you face the cards toward yourself and only you know them, you face the cards outwards, and everybody but you knows what you have in your hand. The other players have to, by their actions in the game, convey to you enough information that you know what's in your hand, so that you can make the right moves and you can all win. And that's a game where it really feels like you need sophisticated theory of mind.
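Circling back to the inverse-planning idea from a couple of exchanges above: a hedged sketch of scoring candidate goals by how likely they would have made the observed behavior, then applying Bayes' rule. The toy goals, the crude forward model, and the uniform prior are all assumptions for illustration.

import numpy as np

def goal_posterior(observed_actions, goals, likelihood, prior=None):
    """P(goal | behavior) is proportional to P(behavior | goal) * P(goal)."""
    if prior is None:
        prior = np.ones(len(goals)) / len(goals)
    scores = np.array([likelihood(observed_actions, g) for g in goals]) * prior
    return scores / scores.sum()

# Toy example: an agent stepped east three times; which door was it heading for?
goals = ["east_door", "west_door"]

def likelihood(actions, goal):
    # Crude forward model: steps toward the goal are assumed far more probable.
    p_east = 0.9 if goal == "east_door" else 0.1
    return float(np.prod([p_east if a == "east" else 1 - p_east for a in actions]))

print(goal_posterior(["east", "east", "east"], goals, likelihood))
# -> roughly [0.999, 0.001]: running the behavior "backwards" reveals the likely goal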
A player needs to say: huh, I'm going to take this action because I think you're going to wonder why I would have taken that action, and the only explanation you're going to be able to come up with is this, which is exactly what I want you to know, because I want you to take this action, and that's how I'm going to tell you. If you haven't played it yet, it's a real kicker. You start to access parts of your brain that are very under-exercised, and it's a really cool feeling. New RL algorithms are coming out all the time. Should we expect this to continue indefinitely, or will we somehow reach saturation and have just enough of them? So, I don't know if you know Andrew Moore, but he was a big contributor to reinforcement learning in the early days. He ultimately became a professor at CMU, then a high-level manager at Google in Pittsburgh, then came back to CMU as a dean of computing, and now he's off again; I believe he's back at Google. I remember him saying at one point, in the early days of reinforcement learning: boy, it seems as though we have a different algorithm for each minor variation of what you can do with a Markov decision process, and that, to me, implies that we can actually generate an infinite series of papers that make zero progress toward the goal. That is, we're not actually getting better at solving problems; we're just carving out all these special cases and coming up with special-case algorithms for each of them. That's a valid observation, but it doesn't seem to be what happened. It doesn't seem as though what the field did was articulate all these different minor variations and then develop different algorithms for each of them. Some of that always happens; there's always research that is more incremental and may or may not have an impact on the broad trajectory of the field. But if you look back at the history from then to now, I don't think you would describe it the way he was predicting it would play out. I think what you see is things like DQN popping out of interesting combinations of ideas from different fields. So yeah, we're seeing lots of algorithms now. I think that's common; there are phases a field goes through, and we're in a kind of local search mode right now, doing little tweaks on the same kinds of algorithms. But we're going to get tired of that. People just don't stay interested in, oh, here's a one-tenth improvement on this one game. Eventually they're either going to give up on the area and think, okay, we've solved it as best we can, let's move on to something else, or there's going to be a wave of seeing it from a different perspective that results in more rapid progress, larger strides per unit time. Do you think there's some upper bound on how sample-efficient model-free RL can be? Yes. And are we approaching it, or are we still really far away? How much can we squeeze out of these trajectories? Right.
So it strikes me that it's a losing battle. The fact of the matter is that general MDPs are really hard. You can embed really hard problems in these MDPs, so if you want to say, I've got an algorithm that's super fast and super general, you're lying; you can't be both. There must be some kind of no-free-lunch idea in the context of reinforcement learning problems. So I would not expect that, just by using completely general techniques, we can squeeze out the maximum amount of generalization from a given trajectory. Sometimes I think we're really solving the wrong problem, to the extent that we're trying to make a reinforcement learning algorithm and then demonstrating it on a single environment. Really, the best way of solving that single environment is to just present the policy for that environment; you don't want a learning algorithm at all if that's the problem you're trying to solve. It only makes sense to use a learning algorithm if the learning algorithm doesn't know in advance which problem it's going to need to solve. We need to be able to evaluate these algorithms with respect to a set of possible problems, and if that set is all possible MDPs, then I think what we'll be able to get an algorithm to do is extremely limited. If, on the other hand, that set is a constrained set of MDPs, like the MDPs representing the thermodynamics of my house for moving the thermostat around, that's different. You could imagine lots of different houses with variability in their thermodynamics, but it's a much more constrained space than the space of all possible MDPs. To me, the right problem is: okay, find me a learning algorithm for that. Now, it could be that we have to pick a couple of application domains by hand and come up with specialized learning algorithms for those domains. I'm not so interested in that. I'd be much more interested in the meta reinforcement learning problem, which is: given a sample from a set of domains that are interrelated in some way, automatically derive a reinforcement learning algorithm that's going to be most effective for that set of problems. That, to me, is the problem we really should be working on. Hmm, so would this partly be your response to Richard Sutton's bitter lesson post, where he talks about how compute seems to conquer all, as opposed to human-designed algorithms? It may be that we got to the same punchline through very different paths. The bitter lesson article got a lot of attention and got people very excited. My understanding is that at IJCAI this year there were a couple of invited talks that directly addressed that whole discussion. In some sense, what I'm saying is kind of the opposite of the bitter lesson. The bitter lesson says, don't try to solve specific problems, just throw more compute at it. I'm not saying that. I'm saying that we want specialized algorithms for particular kinds of problems. And in particular, if the learning algorithm needs to learn from a very small amount of data, it doesn't make any sense to throw very, very general algorithms at it that require tons and tons of data. You just can't use them.
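A schematic sketch of the meta-level framing described above: sample tasks from one constrained family and search, offline, for the learner that does best across that family. The two-armed-bandit task family and the epsilon-greedy candidates are stand-ins chosen for illustration; nothing here is a specific proposal from the conversation.

import random

def meta_train(sample_task, candidate_learners, n_tasks=50):
    """Return the candidate learning algorithm that scores best, on average,
    across tasks drawn from a constrained family. This search happens offline;
    only the winning learner would be deployed."""
    best, best_score = None, float("-inf")
    for learner in candidate_learners:
        score = sum(learner(sample_task()) for _ in range(n_tasks)) / n_tasks
        if score > best_score:
            best, best_score = learner, score
    return best

# Toy task family: two-armed bandits whose better arm and payoff vary per task.
def sample_task():
    p_good = random.uniform(0.6, 0.9)
    return [0.5, p_good] if random.random() < 0.5 else [p_good, 0.5]

def make_epsilon_greedy(epsilon, steps=200):
    def learner(arm_probs):
        counts, values, total = [0, 0], [0.0, 0.0], 0.0
        for _ in range(steps):
            explore = random.random() < epsilon
            a = random.randrange(2) if explore else int(values[1] > values[0])
            r = 1.0 if random.random() < arm_probs[a] else 0.0
            counts[a] += 1
            values[a] += (r - values[a]) / counts[a]
            total += r
        return total
    return learner

best_learner = meta_train(sample_task, [make_epsilon_greedy(e) for e in (0.01, 0.1, 0.3)])

Here the "algorithm" being derived is just an exploration rate, but the same shape of offline search applies when the candidates are richer learning procedures.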
We have to break it down in a way that's going to allow an algorithm to do more with the small amount of data that it's got. That being said, what I proposed is that we focus on algorithms that work at the meta level, and those algorithms can be very powerful and very general. But those aren't the algorithms that we would actually deploy in the setting where they have to do the learning; this is something we would do offline, ahead of time, to create those algorithms. So is that what our brains are doing? Do our brains just have this huge menu of specialized and general algorithms, and we're kind of a meta learner that quickly figures out which one to throw at a particular situation? Well, what I was going to say is: maybe, but in some ways I think of this analogy as happening at the evolutionary level. People are not born to be completely general learners. Maybe you can evolve toward completely general learning, or have thoughts that allow you to build a completely general learner in software, but by and large we are born with lots of biases about the way our world is structured, and that is critical for being able to learn sufficiently rapidly that, within our lifetime, we can actually use the knowledge that we gain. So I think of evolution as doing the job of meta learning in the case of people. But I hadn't thought about it this way before, and I'd say you're probably right that people, especially people who engage in problem solving as a kind of first-class activity, not just solving problems because they have a problem but because they like the process of problem solving, are to some extent learning something like that. They're learning: okay, here's a new problem; does this remind me of other problems that I've solved in the past? What sort of procedures did I follow in those cases that actually got me to a solution? Let me try those in this case; hopefully they'll get me where I want to go. There's no guarantee that that's going to work, because we can create problems that are arbitrarily cryptic, where what they look to be on the surface and what they actually require in terms of a solution are so different that you end up having to check all possible solutions to see if one of them works. But that's not normal. Normally we see problems and they actually do bear some resemblance to problems we've seen before. So yes, we do get that kind of meta problem-solving capability to work for us. In broad strokes, do you feel like there are different schools of thought in the RL research community, or different ideas of what's important? And how well distributed is a deep understanding of RL? I guess what comes to mind for me is: is DeepMind hoovering up the lion's share of the talent in this field, or would you say that RL talent is well spread out? The high-level picture from my perspective is that any topic, if you zoom in close enough, you're going to see camps. There's never perfect uniformity, so of course there are going to be camps.
I think the reinforcement learning field is more coherent than a lot of fields out there, but it is not perfectly coherent. I do think some people put more emphasis on the theoretical aspects, the guarantees you can get out of algorithms, and some people put more emphasis on empirical performance. If they can actually get a system to do amazing jumping-through-hoops stuff, then they're very content, even if they don't have any guarantees about how that algorithm will perform on other problems. So yeah, I do think there are some genuine differences in emphasis, but there's actually a fair amount of consistency in terms of what the problem is and, if not always, whether or not it's been solved. As far as whether DeepMind has a lock on the talent in the community, it is definitely the case that they've got a ton of people. I don't know that history has ever seen such a concentration of researchers in an area, certainly not in a computer science area, doing such similar stuff on such a large scale. I worry about whether that's sustainable on both sides: whether they'll continue to be supported within the Google umbrella, and whether the field can stay healthy if so many people are sucked into that vortex. My guess, if I had to guess, is that it's not going to last forever. Eventually people will go back to the various places they came from, or on to wherever they're going next, and there will be this nice dissemination, not just of interest in reinforcement learning, but also of the tremendous procedural knowledge that the DeepMind people have figured out that allows them to run such large-scale experiments and answer such big questions. So at the moment, I do think things are not spread very evenly. We don't have the strength in the universities that can sustainably produce top-notch researchers to go out into the world and attack these problems. I think a lot of people are getting distracted and pulled into this company, and so we may see a dip in our ability to produce the next generation of researchers. But I do think that ultimately it's going to be to everyone's benefit to have these people together, doing the kind of work that they're doing, sharing some of the results, sharing their knowledge in various ways. It will disseminate, even if right now they're pretty closed in terms of what they can share. I saw some of your performances on YouTube. You have a Thriller video, a TurboTax commercial. It seems like you have a lot of fun with this stuff. Do you see yourself doing more acting or music in your future? Maybe a big-screen cameo as an AI professor? Well, okay, I would love that, so if you have any pull, feel free to make it happen; I'd be super excited. All those things I've done, I think back on very fondly. It was a great experience and I really enjoyed doing it. I am now trying to find ways, like being on your podcast, of getting out there and speaking, of being involved in the conversation in a more public way.
And so this is something that I'm really excited about, that I think is important, and that I'd like to do more of. So yeah, if Hollywood comes knocking, I probably will answer the door. Great, Hollywood, I hope you're listening. Professor Michael Littman, I've learned so much from you and from the little that I've read of your work. It's been a real honor and a pleasure. Thank you so much for sharing your insight and your time with me today. It was a treat to talk to you. Thanks so much for being so engaged and for helping to get the word out to a broader community. That's our episode for today, folks. Be sure to check talkrl.com for more great episodes.
[ { "end": 12.8, "start": 0, "text": " This is TalkAreal Podcast. All reinforcement learning, all the time." }, { "end": 21.6, "start": 12.8, "text": " Interviews at Brilliant Folks across the world of RL. I'm your host, Rob and Chauhan." }, { "end": 25.88, "start": 21.6, "text": " Michael Lippmann is a professor of computer science at Brown University. He's not your" }, { "end": 32.519999999999996, "start": 25.88, "text": " average CS prof nor is he your average RL researcher. Professor Lippmann was elected ACM fellow in 2018" }, { "end": 37.4, "start": 32.519999999999996, "text": " for contributions to the design and analysis of sequential decision making algorithms in artificial" }, { "end": 43.56, "start": 37.4, "text": " intelligence. His research has garnered a monumental number of citations. Google tells me that it's" }, { "end": 50.599999999999994, "start": 43.56, "text": " over 35,000 and he continues to publish innovative new research in RL. Professor Lippmann, thank you" }, { "end": 55.800000000000004, "start": 50.6, "text": " so much for joining us. Wow, you're welcome. I guess I've always thought of myself as being" }, { "end": 62.84, "start": 55.800000000000004, "text": " pretty average, so that's kind of exciting that you see me otherwise. So your thesis back in 96" }, { "end": 68.92, "start": 62.84, "text": " was titled Algorithms for Sequential Decision Making and you're still in the field. You're like a" }, { "end": 74.52000000000001, "start": 68.92, "text": " real OG of RL. Well, thanks. I certainly feel that way. I know that recently when I've been" }, { "end": 78.92, "start": 74.52000000000001, "text": " teaching reinforcement learning class on campus, I start with the same banter that I started with" }, { "end": 84.2, "start": 78.92, "text": " a long time ago, which is like now just keep in mind that you know what machine learning is and" }, { "end": 88.28, "start": 84.2, "text": " this is, you know, this is like the the weird little baby brother of machine learning. We're not" }, { "end": 92.44, "start": 88.28, "text": " doing supervised learning and now it used to be that people would be like, yeah, it's okay," }, { "end": 97.16, "start": 92.44, "text": " we just need an extra class. And now they're like, no, no, no, we absolutely intend to be here. We" }, { "end": 101.88, "start": 97.16, "text": " want to learn about reinforcement learning. So it's kind of a very exciting time. How does you" }, { "end": 106.36, "start": 101.88, "text": " initially come to get interested in this area that became the major focus of your career?" }, { "end": 112.76, "start": 106.36, "text": " Because it's just so interesting, isn't it? I mean, I guess I guess I first started to think about it." }, { "end": 123.08, "start": 113.48, "text": " So I was in college in the I guess mid late 80s. And this was this was during the one of the" }, { "end": 127.88, "start": 123.08, "text": " previous neural network waves. And so people were talking about machine learning. And when I heard" }, { "end": 132.84, "start": 127.88, "text": " the phrase, the first thing that came to mind was, well, what I later learned would be reinforcement" }, { "end": 137.56, "start": 132.84, "text": " learning, you know, trying to get machines to use their experience to behave to make decisions." }, { "end": 142.52, "start": 137.56, "text": " And so that was what really was really the driving interesting example of machine learning to me." 
}, { "end": 148.6, "start": 143.24, "text": " I wrote a paper in a psychology class in college about, you know, what I thought that meant and" }, { "end": 152.84, "start": 148.6, "text": " how we would use, you know, quote unquote machine learning to solve problems like playing tic-tac-toe." }, { "end": 158.76, "start": 153.48000000000002, "text": " And only later came to discover that actually this there there was an area that worked on it. It" }, { "end": 162.92, "start": 158.76, "text": " wasn't what I thought was the area that worked on it, but there was stuff going on and I was" }, { "end": 169.39999999999998, "start": 162.92, "text": " I was very excited. In fact, right out of college, I worked at an organization that was called" }, { "end": 176.04, "start": 169.39999999999998, "text": " Bellcore, which had been kind of spun out of the bell system, spun out of Bell Labs. And my" }, { "end": 182.51999999999998, "start": 176.92, "text": " my mentor in my group that I joined as it was a guy named David Ackley. And he had done his" }, { "end": 189.48000000000002, "start": 182.52, "text": " dissertation on like a genetic machine for doing optimization and decision making. And I told him" }, { "end": 192.92000000000002, "start": 189.48000000000002, "text": " about my interest. I told him about the kinds of problems that I thought were really cool. And he's" }, { "end": 197.88, "start": 192.92000000000002, "text": " like, oh, oh, that's reinforcement learning here. And he gave me Rich Sutton's 1988 paper. This was," }, { "end": 204.04000000000002, "start": 199, "text": " you know, we're talking 1989. So this was like fresh out like a brand new idea that was that" }, { "end": 208.12, "start": 204.04000000000002, "text": " the people were talking about. And I said, wow, this is really cool. How do I learn more about it?" }, { "end": 211.96, "start": 208.12, "text": " He's like, well, it's new, but we can just have Rich come and give a talk. And so he just reached" }, { "end": 216.92000000000002, "start": 211.96, "text": " out and had Rich Sutton come and give a talk in our in our research. And so I'm like, can you do" }, { "end": 222.76000000000002, "start": 216.92000000000002, "text": " that? Can you reach into the literature and pull out actual live human beings? It was quite a" }, { "end": 227.64000000000001, "start": 222.76000000000002, "text": " revelation for me. And it was super valuable to get in at that stage and start learning about," }, { "end": 232.92000000000002, "start": 228.52, "text": " how people were thinking about these problems. What are the open questions that we're still going" }, { "end": 237.24, "start": 232.92000000000002, "text": " on and to try to engage with them? Yeah, from kind of from the bones almost the beginning." }, { "end": 242.36, "start": 237.24, "text": " Well, reaching into literature and talking to real human, that's kind of how I feel today talking" }, { "end": 248.44, "start": 242.36, "text": " to you. So thanks so much again for being here. When you look at your research career," }, { "end": 255, "start": 249.24, "text": " do you think of it as having different chapters or is it is it one long episode?" }, { "end": 260.6, "start": 255.56, "text": " Oh, that's interesting. Yeah. So I mean, you know, only in retrospect is it easy to think about" }, { "end": 266.12, "start": 260.6, "text": " chapters. But it is the case that I've changed jobs a couple of times. 
And each time there's a" }, { "end": 270.68, "start": 266.12, "text": " job change, there's an opportunity to kind of step back and say, okay, what am I trying to do here?" }, { "end": 276.76, "start": 270.68, "text": " What's what's the real plan? And so it was certainly the case that when I when I stopped working" }, { "end": 281.16, "start": 276.76, "text": " with Dave and went to go get my PhD, that was a kind of an opportunity to kind of have a bit of a" }, { "end": 287.8, "start": 281.16, "text": " change. When I when I started as a professor at Duke, the two areas that I was most interested in" }, { "end": 293.32, "start": 287.8, "text": " were were reinforcement learning or sequential decision making and also information retrieval. And" }, { "end": 299.08, "start": 293.32, "text": " so I actually had one PhD student working with me on each of those topics. When I finished up at Duke" }, { "end": 305, "start": 299.08, "text": " and moved to, well, ultimately moved to Rutgers, it occurred to me that it's just too hard to stay" }, { "end": 309.4, "start": 305, "text": " on top of these two fields, both of which were moving very, very rapidly. So I wasn't going to be" }, { "end": 315.08, "start": 309.4, "text": " able to help guide people and mentor people in both reinforcement learning and information" }, { "end": 319, "start": 315.08, "text": " retrieval. And I thought, okay, well, reinforcement learning is the one that I really want to stick" }, { "end": 323.8, "start": 319, "text": " with. So I named my research group when I got to Rutgers, the Rutgers laboratory for real life" }, { "end": 328.92, "start": 323.8, "text": " reinforcement learning or RL cubed. And and just yeah, I just didn't work with people who were" }, { "end": 333.08, "start": 328.92, "text": " doing other things anymore. So that was definitely a kind of a chapter boundary. I stopped doing" }, { "end": 340.52, "start": 333.72, "text": " language and I and I really focused in on on decision making. I came to RL after deep RL was" }, { "end": 345.32, "start": 340.52, "text": " already a thing. Can you help us understand like what does it like for someone like yourself?" }, { "end": 351.4, "start": 345.32, "text": " Being part of this deep reinforcement learning boom over the past few years. And" }, { "end": 356.68, "start": 352.04, "text": " whereas you started long before that when it was so much smaller." }, { "end": 363.56, "start": 357.64, "text": " Hard to hard to say how to put it in terms that you know would look reasonable to somebody who" }, { "end": 370.44, "start": 363.56, "text": " who kind of joined during this later phase. You know a lot of the questions are still the same" }, { "end": 376.28, "start": 370.44, "text": " questions. Some things that we thought we used to think worked but didn't work now kind of work." }, { "end": 382.2, "start": 376.28, "text": " So so back in the day, you know for just as a one concrete example around the time of of" }, { "end": 389.08, "start": 382.2, "text": " TD gammon there was there was work that Jerry Jarrett Saro did training up a TD algorithm to play" }, { "end": 395.4, "start": 389.72, "text": " backammon. And so kind of pretty much before that paper there wasn't there weren't examples of" }, { "end": 399.8, "start": 395.96, "text": " there was plenty of examples of TD learning being implemented and tested and you can make" }, { "end": 403.8, "start": 399.8, "text": " graphs and stuff. 
But it wasn't really solving a problem that you would think of as a problem." }, { "end": 408.52000000000004, "start": 403.8, "text": " That anything that TD learning had been doing there was some better way that we knew how to do it." }, { "end": 412.2, "start": 408.52000000000004, "text": " We just wanted to do it with TD because we wanted to understand the properties of TD and we" }, { "end": 418.68, "start": 412.2, "text": " thought it was really important. But with with Jerry's work he was he was getting a machine to" }, { "end": 423.32, "start": 418.68, "text": " play backammon at a level that that no one had ever seen before right. It was playing the game" }, { "end": 430.44, "start": 423.32, "text": " better than arguably as good as the best people. And and the seemed like the secret sauce to that was" }, { "end": 435.48, "start": 430.44, "text": " was some some part of reinforcement learning or or temple difference learning. And the way that he" }, { "end": 439.24, "start": 435.48, "text": " was doing it the way the way that he was representing the value function was with neural networks." }, { "end": 443.88, "start": 439.96, "text": " Right so it all sounds very familiar right. You're going to play some hard game that we didn't know" }, { "end": 448.76, "start": 443.88, "text": " that machines couldn't play very well before by combining a neural network and reinforcement" }, { "end": 454.36, "start": 448.76, "text": " learning. You know not so different from the alpha go work. And and it was really remarkable. And" }, { "end": 458.84, "start": 454.36, "text": " there was like a a big jump in the number of people who were excited and interested in" }, { "end": 463, "start": 459.4, "text": " in applying reinforcement learning to different problems. And shortly after that you saw people" }, { "end": 469.96, "start": 463, "text": " applying it to things like elevator control or or cell phone channel allocation all kinds of" }, { "end": 475.88, "start": 469.96, "text": " practical problems that that fit the mold. And generally people tried to combine a neural net with" }, { "end": 482.68, "start": 475.88, "text": " this this this notion of kind of a temporal backup. And the funny thing about it maybe not funny" }, { "end": 487.32, "start": 482.68, "text": " at all is that it was actually really hard to get it to work. The neural networks were very very" }, { "end": 493.71999999999997, "start": 487.32, "text": " brittle. And it was pretty common for them to actually improve for a while. And then as you" }, { "end": 498.52, "start": 493.71999999999997, "text": " continue to train them they would just completely collapse like worse than random after that. They just" }, { "end": 503.32, "start": 498.52, "text": " they knew nothing at all about the problem or even had to answer questions. And so I've I've had" }, { "end": 509.24, "start": 503.32, "text": " many back in that in that time I I supervised projects from many different students trying to do" }, { "end": 515.4, "start": 509.24, "text": " exactly that apply a neural net to learn a value function for some you know so a video game or" }, { "end": 522.12, "start": 515.4, "text": " or or board game or some control problem in the real world. And the neural nets were just too flaky." }, { "end": 528.68, "start": 522.12, "text": " We ended up really not using them effectively ever. And so that was sort of the dirty secret is" }, { "end": 533.16, "start": 528.68, "text": " that Jerry Tissaro could get neural nets to learn amazing things. 
But the rest of us it was much more" }, { "end": 538.92, "start": 533.16, "text": " hit or miss. And so I think one of the things that's really exciting this time around is that the" }, { "end": 546.4399999999999, "start": 538.92, "text": " neural net training process seems to be a lot more robust that we're able to get many more people" }, { "end": 551.3199999999999, "start": 546.4399999999999, "text": " are able to train many more problems effectively. And part of that is I think we have a better" }, { "end": 557, "start": 551.3199999999999, "text": " culture now of sharing code. And part of it is I think that the the training algorithms are just" }, { "end": 562.84, "start": 557, "text": " or just more solid than they were back in the 90s. Tissaro's project seemed really ahead of his time." }, { "end": 569, "start": 562.84, "text": " And people talked about it for many years afterwards. When you know when I first got really interested" }, { "end": 575.56, "start": 569, "text": " in this at the time it seemed like DQN was the thing that started this. But looking back at that" }, { "end": 581.32, "start": 575.56, "text": " back game and work it's not really that different from DQN. I think they had a simpler network." }, { "end": 588.2800000000001, "start": 581.32, "text": " But it seemed really ahead of his time. And like you said the grandfather of AlphaGo." }, { "end": 594.7600000000001, "start": 588.2800000000001, "text": " Yeah I think that's right. And I think that one of the things that wasn't at I think" }, { "end": 599.48, "start": 594.7600000000001, "text": " so there's a lot to love about the DQN work. And it is really exciting that it got a lot of people" }, { "end": 604.9200000000001, "start": 599.48, "text": " jazzed about this whole area. But looking at the paper from the from the broader perspective" }, { "end": 609.4000000000001, "start": 604.9200000000001, "text": " one of the things that they pushed on is whoa reinforcement learning that's really powerful. We" }, { "end": 613.24, "start": 609.4, "text": " could do all sorts of things with it. But a lot of us already knew that. What we really wanted to" }, { "end": 617.3199999999999, "start": 613.24, "text": " hear is what are you doing differently from what Tissaro did? What are you doing differently now?" }, { "end": 623, "start": 617.3199999999999, "text": " What what ideas have you instantiated in your algorithm that we didn't know how to do before" }, { "end": 628.92, "start": 623, "text": " that are making this work? Like we knew we could have done this 30 years ago. Like why why now?" }, { "end": 634.52, "start": 628.92, "text": " Why is it working now? And and there are reasons for that. There are things that they did in the" }, { "end": 641.24, "start": 634.52, "text": " DQN work that made the training of the network more robust. But that was that was sort of downplayed" }, { "end": 645.4, "start": 641.24, "text": " in a way that I think is unfortunate. I think I think there was a lot to learn from their success," }, { "end": 650.12, "start": 645.4, "text": " the engineering of what they put together. And the algorithmics of what they put together." }, { "end": 658.28, "start": 650.92, "text": " They they they patented that. And if you read patent. The DQN they really hinges on the target" }, { "end": 663.0799999999999, "start": 658.28, "text": " network. They have the two networks and to the target network. So maybe that was one of the" }, { "end": 667.64, "start": 663.08, "text": " innovations. Oh absolutely. 
That was one of the things where looking back I think we would have" }, { "end": 672.84, "start": 667.64, "text": " been reluctant to try that back in a day because it doesn't seem like it would be reactive enough," }, { "end": 678.6800000000001, "start": 672.84, "text": " responsive enough. And so you know another thing they did in some of that early DQN work is is to say" }, { "end": 682.36, "start": 678.6800000000001, "text": " we're going to throw away a ton of data. Like we're going to run whole trajectories and just" }, { "end": 685.8000000000001, "start": 682.36, "text": " you know we'll just remember one transition from that whole thing because that's going to give" }, { "end": 690.9200000000001, "start": 685.8000000000001, "text": " us statistical independence of of of the samples that we're using for training. And like we would" }, { "end": 694.5999999999999, "start": 690.92, "text": " have never done that back in the day partly because computers were just a heck of a lot slower." }, { "end": 699.88, "start": 695.16, "text": " And so the idea of running a whole trajectory and only keeping one sample from it was unthinkable." }, { "end": 706.76, "start": 700.68, "text": " So they they had a wider when they revisited these questions I think back in the you know mid 2010s" }, { "end": 713.7199999999999, "start": 708.8399999999999, "text": " things had shifted right the computer power had shifted the the palette of of engineering" }, { "end": 719.24, "start": 714.5999999999999, "text": " opportunities was more open. And I think they took took advantage of it in a really beautiful way." }, { "end": 725.24, "start": 719.24, "text": " You recently put on the RLDM conference in Montreal that's reinforcement learning and decision-making" }, { "end": 730.44, "start": 725.24, "text": " which happens every two years. That sounds like a very unique conference being so multi-disciplinary." }, { "end": 735.16, "start": 731.24, "text": " Is there is there a common language across all these disciplines for this stuff?" }, { "end": 741.32, "start": 736.04, "text": " So I love RLDM I think it's a fantastic conference and I'm delighted that I had the opportunity" }, { "end": 747.24, "start": 741.32, "text": " to contribute. The basically there was there's a core set of reinforcement learning researchers who" }, { "end": 753.4, "start": 747.24, "text": " felt like it's interesting because for a long time they had been saying people like like Andy Bartow" }, { "end": 757.24, "start": 753.4, "text": " would be approached saying hey reinforcement learning it's really cool you should have your own" }, { "end": 760.92, "start": 757.24, "text": " conference right there's a machine learning conference there's machine vision conferences there's" }, { "end": 765.24, "start": 760.92, "text": " planning conferences there's AI conferences you know there should totally be a reinforcement learning" }, { "end": 772.36, "start": 765.24, "text": " conference. And he and and I think also Rich Sutton were very much of the opinion that that would be" }, { "end": 778.28, "start": 772.36, "text": " that would be a huge mistake for all the fields concerned that it was very important to make sure" }, { "end": 784.76, "start": 778.28, "text": " that reinforcement learning was always being done in in service of the broader AI goals and separating" }, { "end": 791.72, "start": 784.76, "text": " it out from that that community would be damaging. 
You recently put on the RLDM conference in Montreal, that's Reinforcement Learning and Decision Making, which happens every two years. That sounds like a very unique conference, being so multidisciplinary. Is there a common language across all these disciplines for this stuff?

So I love RLDM. I think it's a fantastic conference and I'm delighted I had the opportunity to contribute. Basically, there's a core set of reinforcement learning researchers, and for a long time people like Andy Barto would be approached with: "Hey, reinforcement learning is really cool, you should have your own conference. There are machine learning conferences, machine vision conferences, planning conferences, AI conferences; there should totally be a reinforcement learning conference." And Andy, and I think also Rich Sutton, were very much of the opinion that that would be a huge mistake for all the fields concerned: it was very important to make sure that reinforcement learning was always being done in service of the broader AI goals, and separating it out from that community would be damaging. So they resisted the push to have a conference for a very long time, and I think they finally caved around the same time Andy was getting ready to retire, so he had less say in the process. But the way the powers that be decided to instantiate the conference was very deliberate, very conscious. They decided it wouldn't be a conference with proceedings; so if you want to have a paper published, and academics need to have papers published, you couldn't do it at this conference, you'd have to publish those papers at other conferences. That was one way they tried to ensure that things didn't split off. The other thing they said was: well, we're going to have a meeting anyway, we might as well make it as multidisciplinary as possible. There are interesting things happening in decision making and reinforcement learning across half a dozen or more academic disciplines; let's bring those people together and have an opportunity to just talk about the problems.

So to your question of whether there's really a common vocabulary: it's pretty remarkable. We had at the conference this year people from marketing, from neuroscience, from the cognitive and behavioral sciences, from computer science and robotics and AI and engineering, and we all talked at least a similar enough language that the talks translated very nicely from subarea to subarea. Everybody seemed engaged the whole time; it's not like the neuroscientists would get up and leave the room when we were talking about computational issues, or vice versa. There was plenty to learn from each other, and I think a lot of people really enjoyed getting to think outside their own discipline.

I didn't get to make it to this conference, and my regret about that is very high, but can you share with us, I saw there was a great list of speakers, were there any highlights you would want to share?

Sure. Well, one highlight is that the next conference, in two years, will be in Providence, Rhode Island, at my home institution, so try to make it out to that one, because it will be good too. But in terms of highlights: the structure of the conference is interesting. All the talks happen in one track, and the main talks are invited. There are shorter contributed talks, where people submit papers or extended abstracts and get selected to present their work, but most of the minutes are spent on people who were invited by the program chairs, and that was me and Kate Hartley at NYU. It would be really rough of me to say, "this talk, this area, that's great, and the rest of the stuff, meh," because these were the cream of the crop. These are the people Kate and I chose when we asked ourselves: if we had a conference and got to choose every talk in it, which we kind of did, what would we want to hear about? And this was exactly what we wanted to hear about. We heard wonderful results and interesting ideas from all across the spectrum.

But I can give a couple of highlights. One piece of work that I think was really well received was the work on distributional reinforcement learning, which is the idea that you create a reinforcement learner that, instead of just trying to predict expected future reward, tries to predict multiple values, they call them expectiles, something like percentiles of the expected return the agent is getting at each point in time. It's giving the learner a harder problem: produce not just the expected value of future return but the whole distribution. What does it look like? What's the likelihood that I'm going to get a really high return from here? What's the likelihood of a low return? You can always turn that into an expected value by averaging the distribution, and that's what they end up doing when deciding how to actually behave, but the learning problem is to learn the entire distribution. There's really interesting stuff happening with that idea, both in terms of what it means, it seems to work really well for, say, learning to play Atari games, and in terms of why: we're not using the distribution to act, so why does it help to learn it, and why is it better?

So this is like the C51 and the Bellemare line of work?

That's right, IQN, yes. And we heard at least one talk specifically about what they've been able to figure out in terms of why it helps. Another line of work we heard about related to distributional RL is re-analyses of experiments on actual biological systems that are learning: measurements of patterns across multiple neurons, arguing that the patterns of learning you see in these neurons in a real brain are capturing distributional information about the returns. There's evidence that the brain is doing distributional RL and not just expected value, like TD RL. I think it's still early going, I don't think it's definitive quite yet, but it is really exciting and tantalizing, and they did a very careful job of analyzing the results, presenting them, and explaining why it would make sense for a brain to do that sort of thing. So I thought that was really cool.
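For readers who want the flavor of the idea, here is a small tabular sketch in the quantile-regression style of distributional TD learning. C51 uses a categorical projection and the neuroscience work uses expectiles, so treat this as one illustrative member of the family under assumed settings, not the specific algorithms discussed.

```python
import numpy as np

# Tabular sketch of distributional TD learning, quantile-regression style:
# instead of one expected value per state-action, keep N statistics that
# summarize the whole return distribution, and act greedily on their average.

def make_quantile_learner(n_states, n_actions, n_quantiles=11,
                          gamma=0.99, lr=0.05):
    theta = np.zeros((n_states, n_actions, n_quantiles))  # quantile estimates
    taus = (np.arange(n_quantiles) + 0.5) / n_quantiles   # target fractions

    def act(s):
        # Behave with respect to the mean of the learned distribution.
        return int(np.argmax(theta[s].mean(axis=1)))

    def update(s, a, r, s2, done):
        if done:
            targets = np.full(n_quantiles, float(r))
        else:
            a2 = act(s2)
            targets = r + gamma * theta[s2, a2]            # distributional target
        for i in range(n_quantiles):
            # Quantile-regression step: nudge estimate i toward the tau_i-th
            # quantile of the target samples.
            indicator = (targets < theta[s, a, i]).astype(float)
            theta[s, a, i] += lr * np.mean(taus[i] - indicator)
        return theta

    return act, update
```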
Another thing that really stuck with me: another scientist talked about risk-seeking behavior. One of the things you observe in people who are addicted to gambling is that they really like gambling; that's what it means to be addicted to gambling. And one of the properties you see in such people is that they tend to downweight risk. When they're deciding what to do, they're not very worried about the downside; they're more excited about the possible upside. This is visible not just in their gambling behavior: if you give them classic Kahneman-and-Tversky-type problems, choosing between this kind of option and that kind of option, you see a consistent pattern in the way they make their decisions. One of the things that was cool is that our speaker talked about how you can get rats to exhibit this behavior. You can do various things to rats, making them cocaine-addicted turns out to be one, to get them to exhibit exactly this pattern of choices. But another really interesting thing is that when you have them do the task, you show them the choice, they make the choice, and then they get rewarded, if in addition to giving them the juice or whatever it is they're trying to maximize you also add lots of flashing lights and loud sounds that go ka-ching ka-ching, that tends to put their brain into a mode where they're much more likely to be risk seeking, much more gambling.

The reason this is really striking to me is if you think about how casinos are constructed, especially slot machines: the machines make a big deal about winning. Or these online sites that let you play various kinds of little games: you don't just get points and then you're done with it, there are also animations and noises and all sorts of things that seem to be exactly what causes brains to become more risk seeking. So that was super creepy and super exciting to think about. Exciting because it means maybe we have ways of helping people, ways of intervening to help them not be so addicted when it becomes a problem. But also creepy, because I know I'm being exposed to these kinds of stimuli all the time, and they're wreaking havoc with my reward system and how I interpret rewards. It's made me a little more cognizant of: okay, you know what, I think I'm done with this game for now, I think this game is having its way with me and not the other way around.

Wow, that just reminds me of social media, with the constant little rewards that we get.

Exactly right. And I think somebody at some level knows this; people are building interfaces that are definitely tapping into it. Maybe the more people who know it, the more aware people are of it, the hope is, the more we can protect ourselves when we're being manipulated. Sometimes it's just fun, and if you want to spend a little time having fun we shouldn't be worried about that, but in other cases it starts to creep in and kind of take over your life. Until we as a society, or we as individuals, figure out how to protect ourselves from that kind of manipulation, we're just tools; we're just going to do whatever the machines tell us to do, and I don't think that's what we want.

Well, I'm really glad there are researchers thinking in that direction, of how to protect us and avoid these mechanisms.

Exactly right. And it's fun to think that we, doing computational reinforcement learning research, are helping to inform that community, giving them tools for thinking about their problems, and in exchange they're protecting us from the big bad casinos that are after us.

I want to move on to your MOOCs. You have some great MOOC courses, I can't say I've done them, but I think you have RL and machine learning courses on Udacity. Do you plan to do more of this?

Yeah, so let's see. Shortly after Udacity became a thing, I was in touch with Sebastian Thrun and asked him about it. I just thought it was really neat, it was cool that people were doing it, and I wanted to learn about this new wave of organizing educational materials for people. So I actually got to spend, I think, a week and a half or two weeks out in Silicon Valley working at Udacity, and I put together a class on, what did we call it, "crunching social networks," something like that, but it was basically a course on graph algorithms. So that happened, and I was really excited and had a good time with it, and I kept asking Sebastian, can I do more of this? He was glad I'd done it, but he wasn't giving me another opportunity. Then Udacity and Georgia Tech ended up forming a partnership to create an online master's degree, and my good friend Charles Isbell was one of the people instigating that. He said, okay, this is about to become a thing, would you like to do a machine learning class with me? And I'm like, yay, I get to work with Charles, I get to do another MOOC, and I don't have to wait for Sebastian to say okay. So we put together a machine learning class, and that went really well, and then we put together a follow-up class specifically on reinforcement learning. I still use those videos: I'm about to teach a reinforcement learning class on campus this fall, and the students will be asked to watch videos from that MOOC. So that was super fun.

We haven't done anything since then, but I am now in the process of trying to put together a class for a company. It's not a massive open online course, because it's not open, it's for pay, but one of the things I think is really fun about it is that it's machine learning for people who aren't computer science majors. It's a course where, if you're just interested in hearing what machine learning is about and learning about some of the technologies underneath it, this would give you a chance to learn that. So I'm excited, and it's been a ton of work, because I can't depend on people knowing the kinds of mathematics and computer science algorithms I would normally depend on in an on-campus class. There's just a lot of thinking about how to present the material so that the amount of background needed is as small as possible.

You appeared in an AAAI article called "Ask Me Anything About MOOCs." Paraphrasing something you said there: you were drawn to MOOCs as an opportunity to turn teaching into a sequential decision-making problem, where the student is the environment and the MOOC is the decision maker, and then you went on to say that that's been hard to achieve. You also have a 2018 paper co-authored with Saarinen, if I'm saying that correctly, called "Personalized Education at Scale," where you look at this idea of using RL in education, and the paper suggests that sample-efficient RL could help optimize curricula for human learners. I wonder how far along we are in making this type of approach to education practical, and are MOOCs trying to do something like this right now?

So I was amazed that the people who started the whole MOOC craze were almost all machine learning people: Daphne Koller, Andrew Ng, Sebastian Thrun, a lot of the movers and shakers at Coursera and Udacity, and probably some people at edX, though I didn't know them as well. They were not just computer scientists thinking about education; they were specifically AI and machine learning people thinking about education. So it seemed obvious that they were viewing this as a machine learning kind of platform, that we would be using it to find ways of optimizing the process of education. But it turns out they had, I don't know if bigger fish to fry, but different fish to fry. They had so many things to worry about in getting these companies launched and stable that the machine learning aspect, which surely they were thinking about, never really got surfaced, never really became part of what they were doing. They were saving their data, but not in a way that was going to translate into optimizing the process.

I've now had two PhD students who've been really interested in this question of how we can do that. One, a now-graduated student named Vukosi Marivate, took a stab at it and didn't really get very far. Part of it is that he was also trying to do reinforcement learning for medicine at the same time, and we were trying to make fundamental contributions to the machine learning literature, and I think he just wasn't immersed enough in the education side to have that kind of impact. After that, though, Sam Saarinen, a current PhD student of mine, is really committed to the education side. He's been doing work with the computer science education folks in my department and is taking it really seriously: going out and learning that literature, talking to companies that are collecting data, talking to them about ways to collect data differently, and being willing to think about bandit problems as opposed to the full reinforcement learning problem, with the idea that it's a simpler version of the problem, but one where we're more likely to be able to get the right kind of data and make the right kind of judgments based on it. It looks like that's going to be his PhD dissertation: we're going to focus on these questions of how to get data from people, how to turn that back into decisions, and how to see that those decisions are actually causing improvements in something measurable about their learning. So I think we're poised to make some contributions to this area now. But yeah, it's been a long time since the first MOOC came out, and I can't think of too many positive examples of machine learning really having a substantial impact on the way the learning is happening.

I remember seeing an Emma Brunskill talk about using RL in some kind of curriculum design, in terms of which topic should be covered next. But as I think about this, teachers have all this built-in knowledge and intuition about how students learn, what order to do things in, and how to approach things, and I can't imagine a system trying to use something like epsilon-greedy to figure out how to teach a course. So it seems like we have to somehow combine what the teachers understand with what remains to be learned. How would you approach that, in terms of what should be learned and what should be built in to a system like that?

I think one of the things we're doing a little differently now is taking a step back and trying to teach much simpler things. The dream of being able to take, for example, everything people need to learn about mathematics and organize it into a flow, so the right topics are introduced at the right times and the right assessments take place, so that people can skip over things they understand well or dive more deeply into things they haven't quite gotten yet: I think that was the dream, and I think we just might not be ready for it yet. So the kinds of problems Sam and I might look at include things like: you need to memorize a big list of things, say the major historical events in, I don't know, Russian history, so you need to be able to associate events and dates. It's not a deep kind of cognitive understanding, and there's no real sequentiality to it. But it turns out that for getting people to remember facts, just dry facts, showing it to them once is not enough. Showing it to them repeatedly is important, but the timing of those repetitions is also very important. What you want to do is what's called spaced repetition: remind somebody of a fact that they're just about to forget but haven't quite forgotten yet. Typically that causes them to remember it for much longer, but they're still eventually going to forget, and you need to remind them again before they do. The typical pattern of forgetting follows a kind of exponential curve, and each time they're reminded there's some multiplicative factor longer that they're going to remember it for. Figuring out what those quantities are is a much easier learning problem: figuring out how fast people are going to forget something and then reminding them right before they forget is much easier than organizing the entire curriculum, the entire set of topics and how they interrelate. So we want to try to get a handle on that simpler problem first.
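As a concrete, deliberately simplified illustration of that exponential-forgetting model, here is a tiny scheduler. The decay form, growth factor, and recall threshold are assumed placeholder values; estimating those quantities from learner data is exactly the learning problem being described.

```python
import math

# Minimal sketch of the spaced-repetition model described above: recall of a
# fact decays roughly exponentially, and each well-timed reminder multiplies
# how long the fact is retained.

class FactScheduler:
    def __init__(self, initial_retention_days=1.0, growth=2.0,
                 recall_threshold=0.9):
        self.retention = initial_retention_days  # current "memory strength"
        self.growth = growth                     # multiplicative boost per successful review
        self.threshold = recall_threshold        # remind just before recall drops below this

    def recall_probability(self, days_since_review):
        # Exponential forgetting curve.
        return math.exp(-days_since_review / self.retention)

    def next_review_in_days(self):
        # Solve exp(-t / retention) = threshold for t: review right before
        # the learner is predicted to forget.
        return -self.retention * math.log(self.threshold)

    def record_review(self, recalled: bool):
        if recalled:
            self.retention *= self.growth                            # remembered: interval grows
        else:
            self.retention = max(1.0, self.retention / self.growth)  # forgot: shrink the estimate

# Usage: schedule reviews of one fact given a sequence of recall outcomes.
card = FactScheduler()
for outcome in [True, True, False, True]:
    print(f"review again in {card.next_review_in_days():.1f} days")
    card.record_review(outcome)
```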
I'd like to move on to a paper you recently co-authored, published in the Journal of Experimental Psychology, called "People Teach With Rewards and Punishments as Communication, Not Reinforcements," by Ho et al. This paper draws a distinction between reinforcement learning directly from a reward signal and something a little more subtle that you're calling communication. Can you help us understand the difference?

Yeah, absolutely. First, a little bit of context: I've always been a bit of a psychology groupie. When I was an undergraduate I was a computer science major, but I took a ton of psychology classes, and my actual on-campus job was as a research assistant in a psychology lab. So I've always hung around with the psychologists, but I'd never really gotten to publish in the area. When I got to start working with Mark Ho, he was a PhD student in psychology at Brown who was also doing a master's degree in computer science with me, so I had access to this budding psychology superstar. One of the questions we ended up looking at together was how people provide rewards and punishments, and how that is similar to and different from the way reward functions do. In reinforcement learning we have algorithms that can learn from reward feedback, and the reward is provided by some function that says: if you destroy this enemy in this situation you get ten points, if you get stuck in this situation you lose two points. There's a whole menu, like a price list, of what all the different things in the world are worth, and the algorithm is trying to maximize profit. So it's reasonable to think that when people are giving rewards and punishments for teaching, say a little kid, or an animal, or each other, they're giving them in a way that's really similar to that. But that turns out not to be the case.

The most obvious thing you can see along these lines is that if you're trying to tell an agent how to behave, and it starts behaving in the way you want, your tendency is going to be to withdraw reward: once it's doing the right thing, you don't have to reward it anymore. So what people tend to do is give a lot of reward when things are heading in the right direction, and then start to back off once the system is doing the right thing. Does that jibe with your intuitions about how you would give reward over a long period of time?

Yeah, totally. I wouldn't want to have to keep rewarding that same behavior.

Right, it feels almost unnatural, almost insulting; if I were the learner: okay, I know, I get it, you don't have to tell me anymore. But if you do that to a Q-learner, to a standard reinforcement learner, and you withdraw the reward it's getting, that's going to force it to start getting TD errors: it's expecting a reward, it didn't get the reward, so something's wrong, it needs to update its model, it needs to change its behavior to try to get more reward, and everything falls apart. So if you try to train a reinforcement learner with a human as the reward function, it tends to degrade really rapidly: it heads in a good direction and then starts to fall apart, because people stop giving the reward.

The stopping of reward is actually a symptom of a more systematic process that the people giving rewards seem to be undergoing. What they seem to be doing is making internal predictions about how the learner is doing, how good it is at the task, and then providing positive rewards when the behavior exceeds that expectation, roughly neutral rewards when it's consistent with their expectations, and punishments, negative rewards, if the performance is falling off from what was expected. As the learner learns, this is a moving target: you're trying to tell it, here's positive reward for doing the next thing you need to be doing toward the behavior you should be learning. The idea that people back off from positive reward once the system is doing the correct thing is just one special case of this more general pattern, which seems to be that people are providing advantage feedback: information about the performance of the agent relative to its current expectations, its current baseline. From that perspective, we need different kinds of reinforcement learning algorithms to take advantage of that kind of feedback. We can't just use Q-learning, because Q-learning interprets the rewards very differently from what people are actually providing.

So you have a paper called "Convergent Actor-Critic by Humans," with MacGlashan et al. Is that what you're talking about there, with the humans providing that advantage signal to the critic?

Yeah, that was in some ways the capstone of a long series of papers where we were getting a handle on what we understood about learning from people. In addition to the work I was involved in with Mark Ho, we had a collaboration with folks at NC State and Washington State University, Matt Taylor's group and David Roberts's group, who were following a very similar pathway of trying to figure out what information people are providing and how we can make good use of it. Both efforts ended up converging on this idea that the reward signal is really the trainer's way of saying "yes, you're going in the right direction." It really is a kind of communication more than a quantity to be maximized. If you think of it purely in terms of profit maximization, you end up being misled by these signals; you have to think of them as an indicator nudging you in the right direction. So we ended up with a whole sequence of algorithms for dealing with this kind of feedback, and COACH was the last one in that sequence; it really did build on the insights we gained through that whole five- or six-year period of exploring those questions.

So it seems like that's one of the challenges with human feedback. In the deep COACH paper you also mentioned that human feedback is often delayed, because humans can't be quite as real time. Is that another one of the challenges? Are there other challenges with incorporating humans in this loop?

That's right. Delay in actually providing the information has to be taken into consideration. We had a mechanism in the paper you're referring to that was essentially a trace, the same kind of trace you see in TD learning, of keeping information around so that when the feedback actually arrives it can be applied to what was in memory when the learning should have been taking place. That work hasn't quite finished yet; we haven't gotten it to scale and to be robust, but that was the direction we were going when we wrote that paper.

In that Convergent Actor-Critic by Humans paper, there's a line that says nearly half of American households have a pet dog and at least some exposure to animal training, suggesting an alternative path for customizing robot behavior. It sounds like you're suggesting there could be a whole human skill involved in how to train a robot using these types of signals. Do you see things unfolding that way?

I do, I'm excited about that idea. Have you ever tried to train a pet, like a dog?

I am trying to train our puppy right now. It's challenging.

Yeah, it is. Did you take any classes?

I did. I probably need a lot more classes.

Right. So I grew up with cats, and we didn't even make any attempt to train the cats. But I got married and had kids, and the rest of the family pushed: we're going to have a dog, we're going to have a dog. So I said, it's not my dog, but sure, you can have it in the house, it just doesn't belong to me. That being said, we're buddies now, and she's sleeping next to me right now as I'm talking to you, the dog, not my spouse. I went to some of these training classes early on, and it was really remarkable to me how much of a skill it really seems to be. It seems so natural: you just reward it for good things and punish it for bad things. But no, it's much more subtle than that. Part of it is seeing the world from the animal's perspective, so that when you're giving a reward, you know it's being interpreted as a reward for the right thing and not just a kind of random occurrence.

That's one of the things we discovered in the COACH work. You mentioned James MacGlashan: he became a kind of robot whisperer during that work. He implemented the algorithm on a TurtleBot, which is like a Roomba with a computer strapped to the top of it, and man, he could make that robot do amazing things. He would look deep into its eyes and understand how it was seeing the world, and he wrote the code too, so that helps, and give just the right reward at just the right time, and get it to do remarkable things. One of the things in that COACH paper is that we did the experiments with James as the trainer. We never really got other people to be the trainer, partly because it's just hard to bring random people into the lab and train them up on the task, but partly because it really was hard. It wasn't the case that you could just start giving random rewards and punishments and it would all turn out fine; you have to be thinking about what the machine's perception is and how it's likely to change its behavior if you give this feedback at this time. So even though our goal is to make a system that's amenable to a broad variety of end users, we're not there yet. At the moment it's still a specialized skill, and, well, at the moment you need to be James to be really good at it.
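To make the feedback-as-advantage idea concrete, here is a minimal tabular sketch in the spirit of COACH, not the authors' implementation: the human's signal stands in for the advantage in an actor update, and an eligibility trace over recent actions lets delayed feedback credit the behavior the trainer was actually reacting to. The ±1 feedback scale, learning rate, and trace decay are assumptions for the example.

```python
import numpy as np

# Minimal tabular sketch of "human feedback as advantage": a softmax policy
# ("actor") whose parameters are pushed along an eligibility trace of recent
# log-policy gradients, scaled by whatever feedback the trainer gives.

class HumanFeedbackActor:
    def __init__(self, n_states, n_actions, lr=0.1, trace_decay=0.9):
        self.prefs = np.zeros((n_states, n_actions))   # softmax policy parameters
        self.trace = np.zeros((n_states, n_actions))   # eligibility trace
        self.lr = lr
        self.trace_decay = trace_decay

    def policy(self, s):
        z = np.exp(self.prefs[s] - self.prefs[s].max())
        return z / z.sum()

    def act(self, s):
        return int(np.random.choice(len(self.prefs[s]), p=self.policy(s)))

    def step(self, s, a, human_feedback):
        # Decay the trace, then add grad log pi(a|s) for the action just taken,
        # so feedback arriving a little later still credits recent actions.
        probs = self.policy(s)
        grad = -probs
        grad[a] += 1.0
        self.trace *= self.trace_decay
        self.trace[s] += grad
        # The human's signal (e.g. +1, 0, -1) plays the role of the advantage.
        if human_feedback != 0:
            self.prefs += self.lr * human_feedback * self.trace
```

A trainer following the pattern described above, reward for improvement, silence once the behavior is established, leaves such a learner stable, whereas the same pattern fed to a value-maximizing Q-learner produces the negative TD errors discussed earlier.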
It almost makes you wonder if we need some kind of translation layer between the way humans would normally convey what they want and what the RL needs to experience to do the learning. Those two things are quite different.

Yeah, that could be. One of the things that occasionally occurs to me when I do this kind of work is that, as an educator, I tend to undervalue education, or educating, as a skill. If you speak English as your first language, you don't think about the fact that it's actually a really powerful tool for working in the global economy, and there are many people who have to work really hard to get to that point. It's a valuable skill that you got sort of for free, but it's still incredibly valuable. This notion of being able to engage in an educational process, to take a hard problem, break it down into pieces, and convey those pieces to an individual: if you do it all the time as part of your job, you might not think about the fact that it's a really great skill, and it's a skill not everybody has developed to the same degree. I think everybody's reasonably good at it, because just to be able to talk, to explain something to somebody, which we all do all the time, you have to have some practice in taking complex concepts and breaking them down into simpler pieces. But the folks who can do this at a level that breaks down extremely complicated problems, that's a skill, and not a skill everybody has. I think training these robots is a similar kind of thing. It's not the case that everyone naturally has it, and it's just a question of translating what they say into rewards and punishments so that it works better; the things people say differ depending on whether they've really thought deeply about how to do teaching. So it might not be so simple; it might be that we all need to work a little harder to tell machines what we want them to do. Part of it is even knowing what you want: some people haven't thought hard enough about what it is that they want, and of course if you don't know what you want, breaking it down in a way that conveys it to a machine isn't going to happen. So I'm not convinced at the moment that it's just "we need the right interface, we need the right translator." I think part of it is that we have to encourage people to think a little more clearly, and then maybe it's not that hard a problem anymore.

So with these methods of using human feedback, can the feedback be given in batches after the fact, or is it more like on-policy, where the humans really need to be in there in real time?

I would say the evidence from education is telling. When textbooks first appeared, and when educational videos first appeared, a lot of the time the perception was: oh, this is going to replace teachers, because you can now have the most accomplished experts in the world get their thoughts down, convey them to people perfectly, and it's just going to run inside their heads; it's going to be ideal. The fact of the matter is, having teachers there with the student in the loop, where they're both having the same experiences at the same time and can adapt to each other, we haven't replaced that. We don't really know what's happening in that loop, but it does really seem to be important. And I think that's true in the robot training setting as well: if we're trying to get the robot to learn, just giving it a whole big batch of experience and letting it process it offline is in general not going to be as powerful as having a person there helping to interpret the situations on behalf of the robot. But really what you want is a mix: you want the robots to be, whatever, dreaming offline, processing the information offline, squeezing the most out of it that they can, and then, when they are able to interact live with a trainer, getting the most out of that as well.

Going back to this "People Teach With Rewards and Punishments as Communication" paper: it mentions theory of mind and cognitive hierarchies. Can you touch on how theory of mind relates to communication here, and what these cognitive hierarchies are and what they help us do?

I think the best way to think about a cognitive hierarchy is that it's a way of snipping, or simplifying, the infinite recursion you get when you've got two minds thinking about each other. Whenever communication is taking place, and again, I believe teaching is a form of communication, the teacher has information that needs to end up running inside the learner's head, so that communication has to happen. The teacher is trying to convey information, but how? The teacher should say whatever it is that the learner will interpret in the way the teacher is hoping the learner will change. The learner has some, I don't want to say deficit, but some sort of gap between their current state of knowledge and where the teacher is trying to get them to go, and the teacher has to say something so that that gap gets bridged to some degree. So the teacher has to think about what's going on in the head of the learner. But the learner should also be thinking about what's going on in the head of the teacher: why did the teacher say that to me just now? Oh, the teacher is trying to get me to understand this, great. If the learner is an active participant in this communicative act, then the learner is also thinking about the fact that the teacher is thinking about what the student is trying to think about, and we can keep spinning this out infinitely: the teacher thinks about the fact that the student is thinking about what the teacher is thinking about the student thinking about the teacher thinking about how the student's mind is going to change, and it never ends.

What the cognitive hierarchy idea says is: you know what, this has to end. If we did it infinitely deeply, sure, it's possible I could say one weird sound, like "brrrr," and it would have exactly the right cascading effect in the learner's head, and suddenly the learner knows everything. Maybe. But more likely, we're going to spend tons and tons of time recursively modeling each other in a way that's pretty fruitless. So the cognitive hierarchy idea says: no, let's just go back and forth some finite number of times. I should be saying things so that the learner will be hearing them in the context that I'm saying something because the learner's mind has to change, something like that. Clip the hierarchy and take it only to a certain depth, but more than just one: don't just say the thing literally, say the thing so that it has the right effect, and the learner knows that. So you do want a couple of levels of back and forth, maybe, but not infinitely deep. The cognitive hierarchy idea says: let's do this to some probably pretty small, but non-zero, level of mutual modeling.

So it's like on Twitter: do you take that tweet literally? Is it ironic? Is it sarcasm on top of irony? How far do you go? There must be some point of diminishing returns where you're like, okay, that's what they really meant.

Yeah, I like the idea of Twitter having diminishing returns; I feel like that fits. But that's right, and it's true of any kind of communication. Twitter maybe makes it more obvious to us because we're talking to people we don't necessarily have great individual models of. When we're talking to a friend, we have plenty of experience that tells us how deeply this back and forth should go. But on Twitter there are just so many people, and they're so different and so unfamiliar, that you're right: you need to be thinking about it, almost calculating it carefully. So in the context of doing training or teaching, this kind of cognitive hierarchy can be a useful structure for deciding how to present the material in the most effective way.
So you might act differently than I would if just putting myself in your" }, { "end": 3058.88, "start": 3054.44, "text": " shoes and imagining how I would act, knowing what I know is not necessarily how you are" }, { "end": 3063.2000000000003, "start": 3058.88, "text": " going to act because you know some things that are different from what I know. And so this" }, { "end": 3069.92, "start": 3063.2000000000003, "text": " is a remarkably powerful, even more powerful way of predicting the behavior of other minds" }, { "end": 3074.96, "start": 3069.92, "text": " is to, is to build out this kind of, you know, almost simulate their, their inputs, their" }, { "end": 3079.48, "start": 3074.96, "text": " experiences to the, to a sufficient degree that you can predict how they're going to" }, { "end": 3085.76, "start": 3079.48, "text": " respond to, to new stimuli. And, and people get pretty good at this by age. I think" }, { "end": 3091.8, "start": 3085.76, "text": " it's like seven or so. People who are autistic have a very difficult time with this. They" }, { "end": 3098.32, "start": 3091.8, "text": " can learn to do it, but it becomes sort of like a conscious computation. But I guess neurotypical" }, { "end": 3102.36, "start": 3098.32, "text": " people by the time they're seven or eight years old do this without even thinking about" }, { "end": 3107.48, "start": 3102.36, "text": " it and can, can develop very rich models of a whole network of people and how they're" }, { "end": 3110.6000000000004, "start": 3107.48, "text": " interacting with each other. And so there's reason to think, well, boy, if we're, if we're" }, { "end": 3115.5600000000004, "start": 3110.6000000000004, "text": " trying to make agents that are doing good decision making in networks of other agents," }, { "end": 3119.6, "start": 3115.56, "text": " including people, they're going to have to do some amount of this theory of mind stuff," }, { "end": 3123.12, "start": 3119.6, "text": " right? It's not going to be enough for them to just try to be maximizing reward. They" }, { "end": 3128.7599999999998, "start": 3123.12, "text": " need to be also thinking about the impact that their actions have on the mental state" }, { "end": 3130.96, "start": 3128.7599999999998, "text": " of the other agents in, in the environment." }, { "end": 3136.56, "start": 3130.96, "text": " So you co-authored another paper theory of minds understanding behavior in groups through" }, { "end": 3141.96, "start": 3136.56, "text": " inverse planning with Schum at all. Can you talk about the idea behind this paper?" }, { "end": 3149.52, "start": 3141.96, "text": " Right. So continuing this idea of if we're trying to act in amongst other agents, it's" }, { "end": 3154.52, "start": 3149.52, "text": " really useful to be able to have an understanding of how they're going to make their decisions." }, { "end": 3158.36, "start": 3154.52, "text": " The specific idea, well, so let me, let me, let me mention some work that's, that's not" }, { "end": 3166.4, "start": 3158.36, "text": " mine. So, Anka Dragan, who's now at UC Berkeley, did some really neat work in thinking about" }, { "end": 3172.36, "start": 3166.4, "text": " robot motion control. So deciding where robots, arms should move, taking into consideration" }, { "end": 3177.8, "start": 3172.36, "text": " the way that motion is going to be perceived. 
So if, for example, a robot's trying to hand" }, { "end": 3182.48, "start": 3177.8, "text": " an object to another person, it's not enough to just put the object in a place where that" }, { "end": 3187.48, "start": 3182.48, "text": " person can grab it. You have to be signaling to the person where they should be reaching" }, { "end": 3195.2400000000002, "start": 3187.48, "text": " next. And the more legible, the more interpretable, the movement is, the more effective that" }, { "end": 3200.8399999999997, "start": 3195.24, "text": " the collaboration is going to be. And so she's developed motion planning algorithms that" }, { "end": 3206.64, "start": 3200.8399999999997, "text": " do a kind of theory of mind idea. They do think about not just, I need to physically" }, { "end": 3213.24, "start": 3206.64, "text": " move to this position, but the trajectory that the robot takes on route to that is affecting" }, { "end": 3217.24, "start": 3213.24, "text": " the mental state of the other person. And we want to have the right effect on that mental" }, { "end": 3220.04, "start": 3217.24, "text": " state. So it sounds like we're talking about action" }, { "end": 3224.6, "start": 3220.04, "text": " as communication. And earlier we were talking also about reward as communication. It seems" }, { "end": 3227.6, "start": 3224.6, "text": " like a common thread here. That's right. So action can be communication, reward" }, { "end": 3231.24, "start": 3227.6, "text": " can be communication, you know, communication can be communication, like, you know, words" }, { "end": 3236.96, "start": 3231.24, "text": " and things like that. And so in the context of the paper that you're referring to, that" }, { "end": 3241.7999999999997, "start": 3236.96, "text": " was another one of these actions as communication scenarios where the individual agents were" }, { "end": 3246.12, "start": 3241.7999999999997, "text": " trying to think about, there was a group of agents and they all had their own goals. And" }, { "end": 3251.16, "start": 3246.12, "text": " people watching this were then asked to interpret what goals were the different individuals" }, { "end": 3256.44, "start": 3251.16, "text": " trying to carry out. How were they thinking of themselves as subteams within the bigger" }, { "end": 3261.92, "start": 3256.44, "text": " group of agents? And people seem to be actually remarkably good at this. And we were able" }, { "end": 3268.24, "start": 3261.92, "text": " to model the way that people do this calculation or at least the end goal of a compute by" }, { "end": 3272.3999999999996, "start": 3268.24, "text": " saying, oh, well, one thing they could be doing is inverse planning. They could be thinking," }, { "end": 3277.04, "start": 3272.3999999999996, "text": " well, if this was the mental state of these agents, then I'd expect them to behave like" }, { "end": 3282.46, "start": 3277.04, "text": " this. That's not how they behaved. So I should, you know, use that information with Bayes" }, { "end": 3287.36, "start": 3282.46, "text": " rule to try to condition, well, what would be a more likely way that they, that they're" }, { "end": 3292.12, "start": 3287.36, "text": " more likely description of their mental states. So, you know, seeing the behavior and then" }, { "end": 3295.32, "start": 3292.12, "text": " running that behavior backwards in a sense to say, well, what is it that they're trying" }, { "end": 3300.88, "start": 3295.32, "text": " to do? Gives a window into what's going on inside the heads of the agents." 
}, { "end": 3305.16, "start": 3300.88, "text": " I guess this reminds me of poker where you're constantly having to think about what does" }, { "end": 3310.72, "start": 3305.16, "text": " the other person know or think they know? I noticed you had some papers on poker in your" }, { "end": 3317.56, "start": 3310.72, "text": " past. And then recently we saw Pluribus making some strides in poker. Is it theory of" }, { "end": 3322.96, "start": 3317.56, "text": " minds going to be relevant in those types of games? Definitely. So definitely the idea" }, { "end": 3328.12, "start": 3322.96, "text": " of modeling what's going on in the other agents head is a really critical element to the" }, { "end": 3334.64, "start": 3328.12, "text": " way that the best poker players play, both machine and human. One difference between" }, { "end": 3340, "start": 3334.64, "text": " the really elaborated theory of mind and the kind of poker theory of mind is that in the" }, { "end": 3344.8799999999997, "start": 3340, "text": " poker setting, everything is intended to be misleading, right? Like most of what you're" }, { "end": 3350.3199999999997, "start": 3344.8799999999997, "text": " trying to do with your actions is to communicate the wrong thing, right? To the extent that you're" }, { "end": 3354.96, "start": 3350.3199999999997, "text": " actually conveying to the other player, the cards that you have hidden that only you know" }, { "end": 3360.48, "start": 3354.96, "text": " about, you're actually doing yourself a disservice because it is a purely competitive game. There's" }, { "end": 3365.64, "start": 3360.48, "text": " other games. And so Michael Bowling, who's worked substantially on poker, has also written" }, { "end": 3370.2400000000002, "start": 3365.64, "text": " some papers on cooperative games where you need, but it seems as though you need a theory" }, { "end": 3375.28, "start": 3370.2400000000002, "text": " of mind. That was another one of the talks at the RLDM conference. He talked about a game" }, { "end": 3379.88, "start": 3375.28, "text": " called Hanabi, which is a purely cooperative game with a group of people where everyone" }, { "end": 3386.88, "start": 3379.88, "text": " has a hand like cards that they have that are private cards of their own, but unlike" }, { "end": 3391.56, "start": 3386.88, "text": " normal games where you face the cards to yourself and only you know it, you actually face" }, { "end": 3396.28, "start": 3391.56, "text": " the cards outwards and everybody but you knows what you have in your hand. And they have" }, { "end": 3401, "start": 3396.28, "text": " to buy their actions in the game, convey to you enough information so they know what's" }, { "end": 3404.84, "start": 3401, "text": " in their hands, so you know what's in your hands, so that you can make the right actions" }, { "end": 3410.4, "start": 3404.84, "text": " in the game and we can all win. And that's a game where it really feels like you need" }, { "end": 3416.32, "start": 3410.4, "text": " sophisticated theory of mind. You need to say, you know, a player needs to say, huh, I'm" }, { "end": 3420.2400000000002, "start": 3416.32, "text": " going to take this action because I think you're going to wonder, why would I have taken" }, { "end": 3424.28, "start": 3420.2400000000002, "text": " that action? 
And the only explanation you're going to be able to come up with is this," }, { "end": 3428.1600000000003, "start": 3424.28, "text": " which is exactly what I want you to know because I want you to take this action and that's" }, { "end": 3434.96, "start": 3428.1600000000003, "text": " how I'm going to tell you. It's just, if you haven't played this yet, it's a real kicker." }, { "end": 3440.28, "start": 3434.96, "text": " You start to access parts of your brain that are very under-exercised and it's a really" }, { "end": 3442.1200000000003, "start": 3440.28, "text": " cool feeling." }, { "end": 3446.44, "start": 3442.12, "text": " First of all, RL algorithms coming out all the time, should we expect this to continue" }, { "end": 3451.12, "start": 3446.44, "text": " indefinitely or should we expect that somehow they'll be, we'll reach saturation and we'll" }, { "end": 3453.52, "start": 3451.12, "text": " have just enough of them?" }, { "end": 3457.04, "start": 3453.52, "text": " So I remember, so I don't know if you know Andrew Morris, but he was a big contributor" }, { "end": 3463.4, "start": 3457.04, "text": " to reinforcement learning in the early days. He ultimately became a professor at CMU and" }, { "end": 3469.16, "start": 3463.4, "text": " then he became a high level manager at Google and Pittsburgh and then he became back to CMU" }, { "end": 3473.08, "start": 3469.16, "text": " and it was a dean of computing and now he's off to the place I'll say he might be back" }, { "end": 3478.72, "start": 3473.08, "text": " at Google. But I remember him saying at one point in the early days of reinforcement" }, { "end": 3482.7599999999998, "start": 3478.72, "text": " learning, boy, you know, it seems as though we have a different algorithm for each, you know," }, { "end": 3489.64, "start": 3482.7599999999998, "text": " minor variation of what you can do with a Markov decision process. And that, to me, implies" }, { "end": 3493.8399999999997, "start": 3489.64, "text": " that we can actually generate an infinite series of papers that make zero progress towards" }, { "end": 3499.48, "start": 3493.84, "text": " the goal, right? That we're not actually getting better at solving problems. We just are, you" }, { "end": 3503.56, "start": 3499.48, "text": " know, shattering the case, all these special cases and coming up with special case algorithms" }, { "end": 3509.04, "start": 3503.56, "text": " for each of them. And I, you know, I, that's, you know, it's a valid observation, but it" }, { "end": 3513.48, "start": 3509.04, "text": " doesn't seem to be what happened. It doesn't seem as though what the field did was then" }, { "end": 3517.32, "start": 3513.48, "text": " articulate all these different minor variations and then develop different algorithms for each" }, { "end": 3523.52, "start": 3517.32, "text": " of them. You know, some of that always happens. There's always research that is more incremental" }, { "end": 3529.36, "start": 3523.52, "text": " that may or may not have an impact on the kind of broad trajectory of the field. But, you" }, { "end": 3532.7599999999998, "start": 3529.36, "text": " know, but if you look back at the history from then to now, I don't think you would just" }, { "end": 3536.24, "start": 3532.7599999999998, "text": " describe it the way that he was predicting it would, it would play out. 
I think what you" }, { "end": 3541.84, "start": 3536.24, "text": " see is, you know, well, you see things like DQN popping out of, you know, interesting" }, { "end": 3547.72, "start": 3541.84, "text": " combinations of ideas from different fields. And so, yeah, we're seeing lots of algorithms" }, { "end": 3551.72, "start": 3547.72, "text": " now. I think, I think that, I think that's common. I think it's, I think there's like" }, { "end": 3557.16, "start": 3551.72, "text": " phases that a field goes through. And we're in a kind of local search mode right now where" }, { "end": 3562.56, "start": 3557.16, "text": " we're doing little tweaks on the same kinds of algorithms. But we're going to get tired" }, { "end": 3566.24, "start": 3562.56, "text": " of that. We're going to, we're going to, there's the, the people just don't stay interested" }, { "end": 3570.9199999999996, "start": 3566.24, "text": " in that kind of, oh, here's one tenth improvement on this one game. Eventually, they're going" }, { "end": 3574.9199999999996, "start": 3570.9199999999996, "text": " to either just, you know, give up on this area and just think, okay, well, we've solved" }, { "end": 3579.2799999999997, "start": 3574.9199999999996, "text": " it as best we can solve it. Let's move on to something else. Or there's going to be" }, { "end": 3584.88, "start": 3579.28, "text": " a wave of seeing it from a different perspective that's going to result in, in more, you know," }, { "end": 3589.1600000000003, "start": 3584.88, "text": " more rapid progress, more, more larger strides per unit time." }, { "end": 3594.44, "start": 3589.1600000000003, "text": " Do you think that there's some like upper bounds on how sample efficient model free RL" }, { "end": 3599.4, "start": 3594.44, "text": " can, can, yes. And like, are we getting, are we approaching that? Or are we still really" }, { "end": 3603.0400000000004, "start": 3599.4, "text": " far away from that? Like, how much can we squeeze out of these trajectories?" }, { "end": 3610.48, "start": 3603.04, "text": " Right. So it strikes me that it's a losing battle that the fact that matter is general" }, { "end": 3616.44, "start": 3610.48, "text": " MDPs are really hard. You can, you can embed really hard problems in these MDPs that if" }, { "end": 3622.64, "start": 3616.44, "text": " you want to say, I've got an algorithm that's super fast and super general, you're lying." }, { "end": 3626.52, "start": 3622.64, "text": " You can't, you can't be both. There's, there's kind of, there must be some kind of no" }, { "end": 3632.92, "start": 3626.52, "text": " free lunch, you know, idea in the context of reinforcement learning problems. So" }, { "end": 3638.4, "start": 3632.92, "text": " I would not expect it to be the case that we can just, just by using completely general" }, { "end": 3645.12, "start": 3638.4, "text": " techniques, squeeze out the maximum amount of generalization from a given trajectory." }, { "end": 3649.48, "start": 3645.12, "text": " It strikes me. Well, I mean, sometimes I think about that we're really sort of solving" }, { "end": 3652.76, "start": 3649.48, "text": " the wrong problem to, to the extent that we're trying to make, and we're trying to make" }, { "end": 3657.96, "start": 3652.76, "text": " it a reinforcement learning algorithm. Then we demonstrated it on a single environment." 
}, { "end": 3662.12, "start": 3657.96, "text": " Really, the best way of solving that single environment is to just present the policy" }, { "end": 3665.96, "start": 3662.12, "text": " for that environment. You don't want a learning algorithm at all if that's the problem you're" }, { "end": 3670.96, "start": 3665.96, "text": " trying to solve. It only makes sense to use a learning algorithm if the, if the learning" }, { "end": 3676.2799999999997, "start": 3670.96, "text": " algorithm doesn't know in advance which problem it's going to need to solve. Right? That we" }, { "end": 3682.2, "start": 3676.2799999999997, "text": " need to be able to evaluate them with respect to a set of possible problems. And if that" }, { "end": 3687.4, "start": 3682.2, "text": " set is all possible MDPs, then I think it's extremely limited as to what we'll be able" }, { "end": 3692.1600000000003, "start": 3687.4, "text": " to get an algorithm to do. If on the other hand, that set is a, is a constrained set of" }, { "end": 3697.52, "start": 3692.1600000000003, "text": " MDPs, like the MDPs representing the, you know, the thermodynamics of my house for, for," }, { "end": 3703.56, "start": 3697.52, "text": " for moving the thermostat around. You could imagine lots of different houses with, you know," }, { "end": 3709.2400000000002, "start": 3703.56, "text": " with variability in terms of thermodynamics, but it's a much more constrained space than" }, { "end": 3714.4, "start": 3709.2400000000002, "text": " the space of all possible MDPs. And so to me, then the right problem is, okay, find me" }, { "end": 3720.1600000000003, "start": 3714.4, "text": " a learning algorithm for that. Now, it could be that we have to pick by hand a couple applications" }, { "end": 3723.92, "start": 3720.1600000000003, "text": " domains and then come up with specialized learning algorithm for those application domains." }, { "end": 3727.8, "start": 3723.92, "text": " I'm not so interested in that. I'd be much more interested in the meta reinforcement" }, { "end": 3732.12, "start": 3727.8, "text": " learning problem, which is given a sample of a set of domains that are interrelated" }, { "end": 3737.84, "start": 3732.12, "text": " in some way, derive, use, automatically derive a reinforcement learning algorithm that's" }, { "end": 3742.6, "start": 3737.84, "text": " going to be most effective for that set of problems. That to me is, I think that's the" }, { "end": 3744.6, "start": 3742.6, "text": " problem that we really should be working on." }, { "end": 3752.6, "start": 3744.6, "text": " Hmm, so, is this, would this partly be your response to Richard Sutton's bitter lesson" }, { "end": 3761.24, "start": 3752.6, "text": " post-warry talks about how compute seems to conquer all as opposed to human-designed algorithms?" }, { "end": 3767.12, "start": 3761.24, "text": " Oh, it may be that we got to the same punchline through very different paths. Yeah, so the" }, { "end": 3773.52, "start": 3767.12, "text": " bitter lesson article, that was, that got a lot of attention, got people very excited." }, { "end": 3777.2, "start": 3773.52, "text": " My understanding is that itch guy this year, there was a couple of invited talks that" }, { "end": 3788.2, "start": 3777.2, "text": " directly addressed that whole discussion. You know, in some sense, it's kind of the opposite" }, { "end": 3792.88, "start": 3788.2, "text": " of the bitter lessons. 
So the bitter lesson thing says, don't try to solve specific problems" }, { "end": 3798.6400000000003, "start": 3792.88, "text": " just throw more compute at it. I'm not saying that. I'm saying that we want specialized" }, { "end": 3804.96, "start": 3798.6400000000003, "text": " algorithms for particular kinds of problems. And in particular, if we're going to try to," }, { "end": 3809.08, "start": 3804.96, "text": " if the learning algorithm needs to learn with a very small amount of data, that doesn't" }, { "end": 3815.32, "start": 3809.08, "text": " make any sense to throw very, very general algorithms that require tons and tons of data," }, { "end": 3820.36, "start": 3815.32, "text": " right? You just can't use them. We have to break it down in a way that's going to allow" }, { "end": 3828.08, "start": 3820.36, "text": " an algorithm to do more with the small amount of data that it's got. And so, yeah, I mean," }, { "end": 3833.84, "start": 3828.08, "text": " that being said, what I, what I proposed is that we, that we focus on algorithms that" }, { "end": 3837.6400000000003, "start": 3833.84, "text": " work at the meta level and that those algorithms can be very powerful and very general. But" }, { "end": 3841.04, "start": 3837.6400000000003, "text": " those aren't the algorithms that we would actually deploy in the, in the setting where" }, { "end": 3844.6400000000003, "start": 3841.04, "text": " they actually have to do the learning. This is a thing that we would do offline ahead" }, { "end": 3855.72, "start": 3844.64, "text": " of time to create those algorithms. So, yes. So, so is that what our brains are doing?" }, { "end": 3860.7599999999998, "start": 3855.72, "text": " Like do we have, our brains just have this huge menu of specialized and general algorithms" }, { "end": 3866.24, "start": 3860.7599999999998, "text": " and where we're kind of a meta learner that's, that's just quickly figures out which one" }, { "end": 3870.08, "start": 3866.24, "text": " to throw at this particular situation. Well, so I would, what do you think is going" }, { "end": 3878.88, "start": 3870.08, "text": " to say is that, well, maybe, but in some ways, I think of this analogy is happening at" }, { "end": 3885.48, "start": 3878.88, "text": " the evolutionary level. Like, people are born not to be completely general learners, maybe" }, { "end": 3889.2, "start": 3885.48, "text": " to, you know, maybe they, maybe you can evolve towards completely general learning or you" }, { "end": 3893.7599999999998, "start": 3889.2, "text": " can have thoughts that allow you to build a completely general learner in software. But," }, { "end": 3899.16, "start": 3893.7599999999998, "text": " but by and large, we are born with lots of biases about the way that our world is structured." }, { "end": 3904, "start": 3899.16, "text": " And that is critical for being able to learn sufficiently rapidly that, you know, within" }, { "end": 3908.64, "start": 3904, "text": " our lifetime, we can actually use the knowledge that we gain. So, so I think of evolution as" }, { "end": 3915.3599999999997, "start": 3908.64, "text": " doing the job of, of meta learning in the case of people. 
But, yeah, but I guess I, I," }, { "end": 3918.7599999999998, "start": 3915.3599999999997, "text": " I hadn't thought about it this way, but I would say that you're probably right that, that," }, { "end": 3923.8799999999997, "start": 3918.7599999999998, "text": " that, that people, especially people who are engaged in problem solving as a, as a kind" }, { "end": 3927.6, "start": 3923.8799999999997, "text": " of first class activity, like they're not just solving problems because they have a problem," }, { "end": 3931.64, "start": 3927.6, "text": " they're like solving problems because they like to engage in the process of problem solving." }, { "end": 3937.12, "start": 3931.64, "text": " They, to some extent, are, are learning something like that. They're learning, okay, here's a" }, { "end": 3940.8399999999997, "start": 3937.12, "text": " new problem. Does this remind me of other problems that I've solved in the past? What sort" }, { "end": 3945.48, "start": 3940.8399999999997, "text": " of procedures did I follow in those cases that actually got me to a solution? Let me try" }, { "end": 3949.56, "start": 3945.48, "text": " those in this case. Hopefully they'll, they'll get me to where I want to go. Right? There's," }, { "end": 3953.92, "start": 3949.56, "text": " there's no guarantee that that's going to work because we can create problems that are" }, { "end": 3959.96, "start": 3953.92, "text": " arbitrarily, sort of cryptic, right? That, that, that, that, that, that, what they look to" }, { "end": 3965, "start": 3959.96, "text": " be on the surface and, and what they actually require in terms of solution are so different" }, { "end": 3970.52, "start": 3965, "text": " that you, you, and you end up having to just kind of check all possible solutions to see" }, { "end": 3975.48, "start": 3970.52, "text": " if one of them works. But, but that's not normal, right? Normally we see problems and they" }, { "end": 3979.6800000000003, "start": 3975.48, "text": " actually do bear some resemblance to problems that we've seen before. And so, yeah, so we" }, { "end": 3984.9199999999996, "start": 3979.68, "text": " do get kind of that, that meta problem solving capability to work for us." }, { "end": 3989.3199999999997, "start": 3984.9199999999996, "text": " In broad strokes, do you feel like there's different schools of thought in the RL research" }, { "end": 3996.3199999999997, "start": 3989.3199999999997, "text": " community or different ideas of what's important? And like how well distributed is a deep understanding" }, { "end": 4001.9199999999996, "start": 3996.3199999999997, "text": " of RL? I guess what comes to mind for me is like is deep mind hoovering up, you know," }, { "end": 4006.16, "start": 4001.9199999999996, "text": " the line share of the talent in this field? Or would you say that RL talent is, is well" }, { "end": 4009.64, "start": 4006.16, "text": " spread out? So, the high level picture from my perspective" }, { "end": 4017.3199999999997, "start": 4009.64, "text": " is that any, any topic, if you zoom in close enough, you're going to see camps, right?" }, { "end": 4021.16, "start": 4017.3199999999997, "text": " That it, there's not, you, it's perfect uniformity. So, of course, there's going to be camps." }, { "end": 4025.8799999999997, "start": 4021.16, "text": " I think that the reinforcement learning field is more coherent than a lot of fields out" }, { "end": 4030.7599999999998, "start": 4025.8799999999997, "text": " there. But, it is not perfectly coherent. 
And I do think that there's some people who" }, { "end": 4036.56, "start": 4030.7599999999998, "text": " put more emphasis on the theoretical aspects, the guarantees that you can get out of algorithms." }, { "end": 4043, "start": 4036.56, "text": " Some people put more emphasis on their, the performance empirically. And so, if they can" }, { "end": 4048.04, "start": 4043, "text": " actually get a system to do kind of, you know, amazing jumping through hoop stuff, then," }, { "end": 4052.48, "start": 4048.04, "text": " then they're very content. Even if they don't have any guarantees about how, how that" }, { "end": 4058.04, "start": 4052.48, "text": " algorithm will perform on other problems. So, yeah, I do think that there's some, chain" }, { "end": 4062.32, "start": 4058.04, "text": " differences in emphasis. But there's actually a fair amount of consistency in terms of" }, { "end": 4070.6400000000003, "start": 4062.32, "text": " what the problem is. And, you know, if not always, in whether or not it's been solved." }, { "end": 4078.44, "start": 4070.6400000000003, "text": " As far as whether or not deep mind is, is, is got the lock on, on the talent in the community," }, { "end": 4082.8, "start": 4078.44, "text": " it is definitely the case that they've got a ton of people. It's, I don't know that history" }, { "end": 4089.2000000000003, "start": 4082.8, "text": " has ever seen such a concentration of, of researchers in an area, certainly not in a computer" }, { "end": 4096.44, "start": 4089.2, "text": " science area doing, you know, doing such, such similar stuff on such a large scale." }, { "end": 4102.12, "start": 4096.44, "text": " Yeah, I worry as to whether that's sustainable on both sides, whether they'll continue" }, { "end": 4108.28, "start": 4102.12, "text": " to be supported within the, the Google umbrella. And whether the field can stay healthy if" }, { "end": 4115.12, "start": 4108.28, "text": " so many people were, you know, sucked into that vortex. You know, my guess, if I had" }, { "end": 4120.64, "start": 4115.12, "text": " the guess, is it, it's not going to last forever. Eventually, it will be, you know, people" }, { "end": 4125.08, "start": 4120.64, "text": " will, will go back to the various places that they came from or that they, they're going" }, { "end": 4130.16, "start": 4125.08, "text": " to go to next. And there will be this, this nice dissemination, not just of interest in" }, { "end": 4136.48, "start": 4130.16, "text": " reinforcement learning, but also the tremendous, you know, procedural things that, that, that," }, { "end": 4141.36, "start": 4136.48, "text": " that the deep mind people have figured out that allow them to run such large scale experiments" }, { "end": 4147.4, "start": 4141.36, "text": " and to, to answer such big questions. So I, yeah, so at the moment, I do think that it's" }, { "end": 4154.5199999999995, "start": 4147.4, "text": " not, things are not spread very evenly. We don't have the, the, the, the strength in the" }, { "end": 4160.5199999999995, "start": 4154.5199999999995, "text": " universities that is, that can sustainably produce, you know, top-notch researchers to" }, { "end": 4164.24, "start": 4160.5199999999995, "text": " go out into the world and, and, and attack these problems. I think a lot of people are" }, { "end": 4167.5599999999995, "start": 4164.24, "text": " getting distracted and pulled into the, into this company. 
And so we're, we'll see kind" }, { "end": 4176.68, "start": 4167.56, "text": " of a, you know, just a, maybe a dip in our ability to produce the next generation of researchers." }, { "end": 4182.04, "start": 4176.68, "text": " But I do think that, that ultimately it's going to be, to everyone's benefit, that, that," }, { "end": 4186.400000000001, "start": 4182.04, "text": " that, that having these people together, doing the kind of work that they're doing, sharing" }, { "end": 4190.160000000001, "start": 4186.400000000001, "text": " some of the results, sharing their knowledge in various ways, it will, it will disseminate" }, { "end": 4195.88, "start": 4190.160000000001, "text": " even if it's not, even if right now they're, they're pretty, closed in terms of, of what" }, { "end": 4200.96, "start": 4195.88, "text": " they can share. I saw some of your performances on YouTube. You have a thriller video," }, { "end": 4207.04, "start": 4200.96, "text": " TurboTax commercial. It seemed like you have a lot of fun with this stuff. Do you, do" }, { "end": 4213.36, "start": 4207.04, "text": " you see you doing more acting or music in, in your future? Maybe a big screen cameo for" }, { "end": 4219.76, "start": 4213.36, "text": " an AI professor? Well, okay. So, I would love that. And so if you have any pull, you know," }, { "end": 4223.2, "start": 4219.76, "text": " feel free to make it happen. I'd be super excited. You know, all those things that, that" }, { "end": 4228.08, "start": 4223.2, "text": " I've done, I have, I think back on very fondly, it was, it was a great experience and I," }, { "end": 4233.639999999999, "start": 4228.08, "text": " really enjoyed doing it. I am now trying to find ways like being on your podcast of getting" }, { "end": 4238.2, "start": 4233.639999999999, "text": " to, to getting out there and, and speaking, you know, to, to, to, to being involved in" }, { "end": 4241.599999999999, "start": 4238.2, "text": " the conversation in a more public way. And so this is something that I'm really excited" }, { "end": 4247.24, "start": 4241.599999999999, "text": " about. I think is important. And I'd like to do more of. So yeah, you know, if, if," }, { "end": 4251.92, "start": 4247.24, "text": " if Hollywood comes a knock-in, I probably will, will answer the door. Great, Hollywood," }, { "end": 4258.64, "start": 4251.92, "text": " I hope you're listening. Professor Michael Littman, I've learned so much from you and from" }, { "end": 4263.6, "start": 4258.64, "text": " the little that I've read of your work. It's been a real honor and a pleasure. Thank you" }, { "end": 4268.76, "start": 4263.6, "text": " so much for, for sharing your insight and your time with me today. It was a treat to talk" }, { "end": 4273.2, "start": 4268.76, "text": " to you. Thanks, thanks so much for just being so engaged and, and for helping to get the" }, { "end": 4289.679999999999, "start": 4273.2, "text": " word out to a broader community. That's our episode for today folks. Be sure to check" }, { "end": 4304.68, "start": 4289.68, "text": " talkrl.com for more great episodes." } ]
Natasha Jaques
Natasha Jaques talks about her PhD, her papers on Social Influence in Multi-Agent RL, ML & Climate Change, Sequential Social Dilemmas, internships at DeepMind and ...
https://media.transistor…06d.mp3?src=site
This is TalkRL Podcast. All reinforcement learning, all the time. Interviews of brilliant folks from across the world of RL. I'm your host, Robin Chohan. Natasha Jaques is a PhD candidate at MIT, working on affective and social intelligence. She's interned with DeepMind and Google Brain and was an OpenAI Scholars mentor. Her paper Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning received an honorable mention for Best Paper at ICML 2019. Natasha, thanks so much for joining me today. Happy to be here. You defended your thesis just last week, is that right? Yes. How does that feel? It feels good. It wasn't quite the rush I was expecting because there's still a lot of bureaucracy in terms of finishing my PhD. I have to finish the edits on the thesis and hand it in and everything. It doesn't feel like it's totally over yet, but it's nice to have the stress easing off. We can't quite call you Dr. yet, but soon, right? Exactly. What was your thesis about? My thesis tries to say that in order to improve deep learning models and machine learning models and make them more general and able to adapt to new situations, we should try to integrate forms of social and affective intelligence. So different forms of social learning or learning from humans. In particular, when you're learning from humans, can you use affective cues about their preferences to learn from them? Did you go into it with that plan, and did it turn out how you expected, or were there some surprises on the way? Not at all. So there was definitely significant drift in terms of my research area. When I started my PhD, I thought affective computing was the coolest possible thing you could do, and that's why I wanted to work with my advisor, because she's an expert in that area. But after I went to my first NeurIPS, I kind of fell in love with machine learning and deep learning in particular, because it just seems like there's so much progress happening there, and so many new abilities that are being unlocked for AI as a result of deep learning. So I became very excited about that, did these internships and really enjoyed that, and shifted my focus to more algorithmic contributions to deep learning and deep RL. I did notice that on your Google Scholar page, it shows that you have a paper from 2013 on POMDPs. So that suggests you've been in this space for a while. That's true. I mean, I think when I started out, I was doing much more applied work. That paper on POMDPs was about how you model people's emotional states with POMDPs. It's not really innovating on POMDPs. So that's what I mean in terms of how things have shifted. What do you see as grand challenges in the areas of AI that you focus on? Generalization, I think, is one big challenge. Especially with deep RL, we just know that it's very brittle. People have shown that you train something on an Atari game and then you just slightly shift the colors and it totally collapses; it can't play anymore. And so that's just going to be way too brittle for deploying these models to the real world. We're pretty far from a robot that could climb up many different types of real-world stairs, for example, and that's a relatively constrained task. So I think trying to solve this problem of robustness to different types of variation and generalizing to new and related tasks is a huge challenge problem.
You've done research at three of the most prestigious AI research organizations in the world: DeepMind, Google Brain, and MIT. I'm sure not that many people could say that, and this is before you've even completed your PhD. I wonder if you could share with us any thoughts or impressions on how they're similar or different, maybe in terms of their culture or their approach to research? Sure. So I mean, they're all great places to work, and I've been very lucky and enjoyed my opportunities at all of them. I can definitely talk about the difference between Google Brain and DeepMind. So the standard line is basically that DeepMind is very top down. I heard that a lot before I ever went there. But what that means is that DeepMind is very mission driven. It's very focused on how we can actually improve AI and even how we could make it to AGI, artificial general intelligence. And it's very much like there are projects that fit into a hierarchical structure, and it's very organized and very driven towards that goal. And I think that allows it to get a lot of good work done, and I very much enjoyed working there. But Google Brain is more loose and free, and sort of self-organizing. You can put forward whatever you want to work on, and it's fine if you want to go off in your own directions. There's not that much dictation of what to work on. And so I've heard this analogy, and I don't know if you'll like it, but basically Google Brain is more like Bell Labs, and DeepMind is more like what NASA was to the Moon landing. Oh, that's so interesting. I mean, that's obviously a very lofty analogy, but I liked it. I'd like to move to a paper that you co-authored called Tackling Climate Change with Machine Learning. I noticed that paper had authors from many different organizations; I think I counted 16. What was it like working on a paper with co-authors from so many different organizations? Well, I really have to give credit to the first author, David Rolnick, for making it work so well. He was just very thoughtful about how to organize this and how to make it work. Very careful with planning timelines and setting expectations and breaking it down. And we really broke it into different pieces, so each person is writing sort of a mini paper or a section that fits in with the other sections. And then there were many discussions of how to structure each section so it worked well with the rest. And one thing we focused on was communicating, like, if I'm an expert in generative models, where can I go in the paper to apply those techniques to these problems? And then also emphasizing what the impact of each of the different areas is. And then in terms of just collaborating, we just had a lot of Skype meetings with, you know, 10 people on the call. It actually worked pretty well, though. I was surprised; it was a pretty smooth experience. I really enjoyed that paper. It got a lot of attention, I think rightly so. And your section was tools for individuals, is that right? Yes. I think some people might wonder if individual change can really make a difference. But in this paper, you note that individual behavior change can mitigate between 20 and 37% of global emissions from 2020 to 2050, which I thought was really impressive. Yes. Now, that estimate definitely includes behavior change by farmers, including the agricultural practices of small-scale farmers.
So in terms of whether you need to recycle every pop can and that's going to mitigate 37% of global emissions, that's not what we're saying. But I think it's important that people do recognize that some of their behaviors have a large effect on the climate. So for example, I'm working with the Media Lab now on a program to offset the carbon emissions of flights, because flights are incredibly carbon intensive. We're trying to communicate this in a way that doesn't encourage people to actually fly more, because offsets are not there yet. When you actually buy offsets, what you're doing is funding climate-related causes, like building renewable plants. But of course we have no effective way to actually recapture the carbon. So we're trying to communicate this in a way like: you can go to the site and it'll say something like, okay, you flew from Boston to San Francisco, that's equivalent to 187 days of powering a home, or so many days of driving a car, and it's very significant. So there are individual behaviors that do have a strong effect. Okay, thanks for putting that number into context. I'm also glad you mentioned agriculture, because my day job is I work for AgFunder, which is a VC focused on agri-food tech. We invest in companies that are making that space more efficient. Yeah, there's a lot of beneficial change that can be made in the agriculture space, so I'm really glad you're working on that. Since this is an RL podcast, I wanted to look at the ways that RL is mentioned in this paper, and I see about 13 mentions. This is summarized very helpfully, as you pointed out before the call, in table one. But it's mentioned 13 times in the main text. I'm just going to read out the types of things that it's being applied to: smart grid, autonomous vehicle controllers for smoothing traffic, optimizing vehicle-to-grid power storage, building load modeling, smart buildings, scheduling energy use, cooling systems, predicting forest fire progression, geoengineering control, multi-agent planning coordination, and climate change policy or mitigation. And in your section, scheduling household appliances. I know you're focused on the research end, but I wonder if you can comment: how far are we away from deploying some of these things? Or maybe that's the point of this paper, to encourage the readers to get this stuff out of the lab and out there really saving emissions. Yeah, well, we've actually seen some evidence that RL... I hate to say this, but I've talked to several people and they say that, you know, with deep learning in general, we've shown that it has many beneficial real-world applications. So we've seen ConvNets, for example, be repeatedly useful in a number of real-world problems, like predicting tumor locations; it's working. But arguably, deep RL has not been applied in the real world and really proven itself as being beneficial. So I think actually this is one area where there's a lot of potential for that. There were some publications a couple of years ago about Google applying deep RL to control its data centers and increasing the energy efficiency by something like 10%. But I found out later that actually they ended up not using deep RL, and they ended up using simpler control techniques, because that turned out to be just more effective. So there's still a lot of room to actually make progress in these areas.
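To make the flight-versus-household comparison concrete, here is a rough back-of-the-envelope sketch of the kind of conversion such a site might perform. The emission factors below are made-up placeholders for illustration only, not the numbers the Media Lab program actually uses.

```python
# Rough conversion of a flight's CO2 into "days of powering a home".
# All constants are illustrative placeholders, not real program values.

FLIGHT_KG_CO2_PER_KM = 0.15      # assumed per-passenger emission factor
HOME_KG_CO2_PER_DAY = 15.0       # assumed daily household energy footprint

def flight_in_home_days(distance_km: float) -> float:
    """Express a one-way flight's CO2 as equivalent days of home energy use."""
    flight_kg = distance_km * FLIGHT_KG_CO2_PER_KM
    return flight_kg / HOME_KG_CO2_PER_DAY

if __name__ == "__main__":
    # Boston to San Francisco is roughly 4,300 km one way.
    print(f"{flight_in_home_days(4300):.0f} days of powering a home")
```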
And in terms of how close we are, the big thing here is whether you can understand how to schedule your home energy load, or ideally larger energy loads on the grid, to be more efficient. So what's going on with grids right now is you want to use solar energy and you want to use wind energy, but of course they're unreliable and unpredictable, because if it's not a sunny day or it's not a windy day, then you don't have access to that. And we don't really have good battery systems yet to be able to maintain that. So often there's carbon-intensive backup power in terms of coal power plants or other types of carbon-intensive power, and you need to keep this actually running in the background so that you can spin it up quickly and not lose power. But there's a lot of potential for bringing machine learning in, to both predict when you will need the backup power, so you have to keep less in reserve, and also to optimize how to schedule things on the grid for when there's the most solar and wind available, so you're using the cleanest energy possible. So how close are we? I think we need a lot more work to actually connect with, for example, operators of these grids. We need to be able to collect datasets of residential home energy use to be able to think about off-policy learning from that data. So there's still work that needs to be done, but we're hoping to spur continued research in this direction. And I guess this relates to what you said before about these systems being somewhat brittle at this point, and yet these types of problems being so safety critical, and so we need to somehow bridge that gap. Is that right? That's right. That's right. But I think deep RL gets you something really powerful, which is generalizing over related states when the state space is really high dimensional, and I don't think other methods are going to be able to do that in the same way. One part of this paper that jumped out at me, at the end of the fourth page, it says, while we hope that ML will be useful in reducing the costs associated with climate action, humanity also must decide to act. I can't stop thinking about this line, and that so much of machine learning today is focused on influencing people to click on ads. I can't help but wonder if some of that influence could be used towards climate action. Would the idea of encouraging people to do something helpful fall under the umbrella of affective computing? So firstly, I just want to say I'm actually really scared of the idea of using machine learning to influence people's behavior. I don't like this idea that we're taking autonomy away from people using machine learning, and I think that creates a lot of resentment, as it should. And the nice thing about climate change is, well, you know, there are political divides about it, but ideally it doesn't need to be a partisan issue. And I think when we talk about influencing people, it could worsen that conflict. So I'm a little bit scared of that. And actually, influencing people is not really the domain of affective computing. What my advisor, Roz, really sees the field as relating to is ideally helping promote people's well-being through a deeper understanding, using machine learning to obtain a deeper understanding of, for example, their stress or their happiness, or detecting these measures using signal processing. So I would say, yeah, social influence on social networks, not so much. I can say, though, something about social influence.
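As a toy illustration of the load-scheduling idea discussed above (not code from the paper or from any deployed system), the sketch below picks the lowest-carbon window in a made-up day-ahead carbon-intensity forecast to run a fixed-length appliance job. A real controller would be an RL or optimization problem with forecast uncertainty, prices, and comfort constraints.

```python
# Toy appliance scheduling: given an hourly forecast of grid carbon
# intensity (gCO2/kWh), choose the contiguous window that minimizes
# emissions for a fixed-length job such as a dishwasher cycle.

from typing import List, Tuple

def best_window(carbon_forecast: List[float], job_hours: int) -> Tuple[int, float]:
    """Return (start_hour, total_intensity) of the cleanest contiguous window."""
    best_start, best_cost = 0, float("inf")
    for start in range(len(carbon_forecast) - job_hours + 1):
        cost = sum(carbon_forecast[start:start + job_hours])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

if __name__ == "__main__":
    # Made-up 24-hour forecast: dirtier overnight, cleaner around the midday solar peak.
    forecast = [450, 460, 470, 465, 455, 430, 400, 350, 300, 250,
                210, 190, 185, 195, 230, 280, 340, 400, 440, 470,
                480, 475, 465, 455]
    start, cost = best_window(forecast, job_hours=3)
    print(f"Run the appliance starting at hour {start} (total intensity {cost})")
```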
And this is work that comes out of the Media Lab a lot, actually. If you are familiar with Alex Pentland's work on social physics, there is a book on it, which is good if you're curious. But there is a lot of work that shows that if you do want to spur behavior change, one of the most effective ways to do that is actually with social motivation. People are incredibly motivated by their friends and their family and their social networks. So Alex Pentland has this really nice work showing that, for example, if you want someone to lose weight, you could try paying them for how much weight they lose, but that actually doesn't work very well. But if you pay their friend based on how much weight they're losing, that works incredibly well. So you can see these really strong social influence effects. So one thing that I think is really important with climate change is to just help raise awareness of some of these issues and hope that that spreads through social networks. That's what we're trying to do, for example, with this paper and with the carbon offsets program I mentioned earlier. If you don't mind, I'd like to move on to your social influence paper. Yeah, speaking of influencing agents, right? This is easily one of my favorite RL papers of all time, because it brings together so many things that I personally find so fascinating in an incredible way. There's so many of them I had to make a list: multi-agent RL in itself, sequential social dilemmas, intrinsic motivation in multi-agent RL, models of other agents, learned communication, causal inference in multi-agent RL, cooperation in multi-agent RL, and even empathy. To start us off, can you help us understand what is a sequential social dilemma? Yeah, so these are environments that basically try to extend prisoner's-dilemma-like dynamics to a more complex, spatially and temporally extended environment. The reason they're dilemmas is basically because each individual agent could always get higher reward by doing something greedy and sort of defecting on the other agents in the environment. But if all the agents follow this defecting, greedy policy, then they'll all do poorly. So an example is a tragedy-of-the-commons environment, and of course this connects to climate change as well, but we have a very simplified version. There are some nice natural resources in the environment, these apples that agents are trying to harvest. But if agents harvest the apples too quickly, then they don't grow back, and of course if they harvest all the apples, no apples grow back. So each individual agent always wants to get more apples for itself, but if everyone's too greedy, then they deplete the resource. And similarly, there's this cleanup game where agents have to clean a nearby river to cause the apples to spawn, but it's partially observable, and as agents are cleaning the river, they can't see what's going on with the apples, and other agents can easily exploit the apples that they're creating. Would you describe the basic idea behind your paper? What I was interested in is this social learning. I think this is really important. So I was thinking about how agents in a multi-agent system could actually learn socially from other agents. And what I wanted to do was just make more realistic assumptions about how much agents could observe about the other agents in the environment.
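Here is a minimal toy model of the tragedy-of-the-commons dynamic in these harvest-style environments. The regrowth rule and numbers are invented for illustration; the actual SSD environments are partially observable gridworlds where apple regrowth depends on nearby apples.

```python
# Toy tragedy-of-the-commons dynamics, loosely inspired by the Harvest SSD.
# Regrowth is proportional to the remaining stock, so over-harvesting
# collapses the resource and greedy behavior earns less in the long run.

def simulate(harvest_per_step: int, steps: int = 100, apples: int = 50) -> int:
    total_collected = 0
    for _ in range(steps):
        taken = min(harvest_per_step, apples)
        apples -= taken
        total_collected += taken
        apples += int(0.1 * apples)  # deplete the stock and nothing grows back
    return total_collected

if __name__ == "__main__":
    print("restrained harvesting:", simulate(harvest_per_step=3))
    print("greedy harvesting:   ", simulate(harvest_per_step=10))
```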
So we focused on just allowing agents to learn from the actions that the other agents are taking, and not being able to view things like the agents' internal reward functions. Because if you think intrinsic rewards are important, then each agent is going to have a different reward function, so assuming you can observe it is less realistic. And also, if you think about potential downstream applications of multi-agent RL, we often think about something like autonomous driving, and it's a pretty unrealistic assumption to think that the Tesla car is going to be able to learn something from the proprietary reward function of the Waymo car. But it can actually see what the Waymo car is doing in the environment, such as whether it's turning left. So what can these agents learn socially from each other just by observing each other's actions? What we chose to do is to focus on rewarding agents for having a causal influence over the actions of another agent, and we actually show in the paper that this relates to rewarding the mutual information between agents' actions. So the intuition is that this will help drive them to learn coordinated behavior. How did the idea for this paper come about? Well, this is actually kind of a funny story. So I did my general exam for my PhD, which is just reading a ton of papers and then being grilled on it orally by your committee. And actually, Nando de Freitas was a committee member of mine, and he asked this really hard question. It was the first question of my general exam. It was like, what kind of social and emotional intrinsic motivations could agents have? Because I was very fascinated by this curiosity paper. And he kind of put me on the spot; I didn't have a great answer at that moment. But I actually also had to do this 24-hour take-home exam on the same material, and he basically gave me the same question. So I stayed up almost 24 hours. It's like 4 a.m., 5 a.m., I'm drinking coffee, I'm trying to come up with good answers for this. And I'd been thinking about different ways you could learn socially from other agents. Like, could you learn from your proximity to other agents or something like this? And I thought about learning from the causal influence of your actions on another agent and how you would compute that. So I don't know if a podcast is the best way to describe a formula, but if you're curious, you can definitely check out the paper. This idea that agents want to influence other agents, is there some basis for this in social science? Like, do people and animals want this type of influence as well? Well, yeah, let me clarify exactly what the influence is. So basically, I want to take an action that changes what your action is, like you're doing something that you wouldn't have done otherwise, just conditioned on my action. And as to whether there's a basis for this in humans, I don't know about influence generally, but what we actually focus on in the paper is teaching agents how to communicate with this influence reward. And I think that's actually really compelling, because that's kind of the purpose of communication, arguably. So you can read this book by Michael Tomasello that's all about cooperation in humans, and it specifically focuses on how children, when they're initially learning to communicate, what they actually want to do is be able to influence their conversation partner. So if you think about a young child that's hungry and it wants to have food, how can it get that food? The best way is to learn to communicate that that's what it needs. So there's this idea that learning to communicate is actually learning to try to influence others. So there's that basis. I did see a YouTube video you posted of your policies running on this problem, and I wonder, could you describe for us qualitatively, how does your mechanism change how these agents behaved?
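For readers who want a concrete version of the formula alluded to here, the sketch below computes a counterfactual, action-to-action influence signal: the KL divergence between the other agent's policy conditioned on the influencer's chosen action and its marginal policy obtained by averaging over counterfactual actions, which in expectation over the influencer's policy is the mutual information between the two agents' actions. It assumes access to (or a learned model of) the other agent's conditional policy, and is an illustration of the idea rather than the paper's exact implementation.

```python
import numpy as np

def influence_reward(p_b_given_a: np.ndarray, p_a: np.ndarray) -> np.ndarray:
    """
    Counterfactual influence of agent A's action on agent B in one state.

    p_b_given_a: shape (num_a_actions, num_b_actions); row i is B's policy
                 given that A took action i.
    p_a:         A's own policy over its actions, used to marginalize over
                 counterfactual actions A could have taken.

    Returns, for each action A might take, the KL divergence between B's
    conditional policy and B's marginal policy.
    """
    p_b_marginal = p_a @ p_b_given_a                     # sum_a' p(a') p(b | a')
    kl = np.sum(p_b_given_a * np.log(p_b_given_a / p_b_marginal), axis=1)
    return kl

if __name__ == "__main__":
    # Made-up example: B reacts strongly to A's action 0 but not to action 1,
    # so action 0 earns a larger influence reward.
    p_b_given_a = np.array([[0.9, 0.1],
                            [0.5, 0.5]])
    p_a = np.array([0.5, 0.5])
    print(influence_reward(p_b_given_a, p_a))
```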
So there's this idea that learning to communicate is actually a way of learning to influence others. So there's that basis. I did see a YouTube video you posted of your policies running on this problem. And I wonder, could you describe for us qualitatively, how does your mechanism change how these agents behaved? So actually this, I think, was one of the most interesting and surprising results that we found. So we had given agents this reward for having their actions influence the actions of another agent, so this action-action influence. But we were also giving agents a reward for still getting their own environmental rewards, like collecting apples. And we observed, for example, in a couple of cases, that agents trained with the influence reward actually restricted the set of actions that they used in the environment to collect their reward. So in one case, this agent only ever used two actions in the environment, and it did something a little bit different from the other agents. While the other agents continued to explore the map when there were no apples present, moving around looking for more apples, this agent actually stayed still, so it only ever moved on the map when there was an apple present. And what that allowed it to do is actually communicate whether apples were present via its actions. So when an apple was present, it would move. And because the environment was partially observable, another agent that couldn't necessarily see that apple, but could see that this agent was moving, would be able to understand that there must be apples present in the environment that it can't see. And that would change its intended behavior, thus allowing the influencer agent to gain influence by communicating information about the presence of food in the environment. So we think this was really interesting: because the agents were trying to learn to influence each other, they actually learned to communicate with each other. And this is emergent behavior, emergent complexity, that you didn't really design in. You're having to go in after the fact and figure out what's going on in there. Exactly. Yeah. That is so interesting. I don't know if you saw, there was a recent post, I think it was by Andrej Karpathy, but it was about the importance, if you're doing deep learning, of really digging into the inputs and the outputs, exactly what the model is doing, very carefully visualizing and auditing what's going on with your model. Because there are so many ways in which these models can go wrong and can fail to train. So just really digging in and seeing what the behavior of your agent is can help a lot. And that helped a lot in this case, because otherwise I wouldn't have discovered how the influence reward was working. For a long time, I think three weeks or a month, I had these beautiful plots showing that the influence reward was helping the agents learn to cooperate in this environment. But it wasn't convincing my team, because they were like, why should influencing someone else help them? How is this working? So I really had to dig in and understand when the influence reward was happening and what was going on when agents got the influence reward. I guess that's one interesting thing about RL. Sometimes there's just no replacement for going in and seeing what these policies are doing, beyond just what the charts are telling us. I think it's really important.
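As an aside for readers who want a concrete picture of the counterfactual influence reward discussed above, here is a minimal sketch. It assumes the influencer has access to a model p(a_j | s, a_k) of how another agent j's action distribution depends on the influencer's own action a_k (in the paper this kind of model comes from centralized training or a learned model of the other agent), and it uses the influencer's own policy to average over counterfactual actions. The function and variable names are illustrative, not taken from the authors' code.

```python
import numpy as np

def causal_influence_reward(p_j_given_k, pi_k, a_k_taken):
    """Counterfactual influence of agent k's action on agent j.

    p_j_given_k: array [n_actions_k, n_actions_j]; row i is agent j's action
                 distribution if agent k had taken action i in this state.
    pi_k:        array [n_actions_k]; agent k's own policy at this state,
                 used to average over counterfactual actions.
    a_k_taken:   index of the action agent k actually took.
    Returns KL( p(a_j | s, a_k_taken) || p(a_j | s) ), i.e. how much k's
    chosen action shifted j's behaviour relative to the marginal.
    """
    conditional = p_j_given_k[a_k_taken]   # p(a_j | s, a_k_taken)
    marginal = pi_k @ p_j_given_k          # p(a_j | s), marginalizing out a_k
    return float(np.sum(conditional * np.log(conditional / marginal)))

# Toy example: agent j moves only when agent k moves (strong influence).
p_j_given_k = np.array([[0.9, 0.1],   # if k stays, j almost surely stays
                        [0.1, 0.9]])  # if k moves, j almost surely moves
pi_k = np.array([0.5, 0.5])
print(causal_influence_reward(p_j_given_k, pi_k, a_k_taken=1))  # positive
```

In the paper, a term like this is summed over the other agents and combined with the agent's environmental reward using a trade-off coefficient; the sketch above only shows the per-agent counterfactual computation.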
Was there any controversy in terms of deciding which experiments to run, or was that pretty obvious to you early on? There was a little bit of controversy, honestly. So I came into DeepMind and was working with this team headed by Joel Leibo that's very excited about these SSD environments. And I think they were a good test bed for the influence reward, because they actually get at this dilemma structure: agents aren't fully self-interested, but they're not fully cooperative either, and you're not just trying to optimize a group reward. So it's interesting to see what the influence reward would do. But some of the feedback we got on the paper was that we should have tested in maybe more traditional RL environments; maybe we should have tested in robot soccer, for example. And I think that's very interesting follow-up work to do, and it's something I'm thinking about doing. I looked briefly at the GitHub repo for these SSDs and I noticed you had some references to RLlib in your comments. Was that what you used to train it? Actually, no. I was at DeepMind, so I used their amazing internal infrastructure to train these models. And it was actually based on A3C, but a little bit closer to IMPALA, because we did use a V-trace correction. But then when I left DeepMind, you can't touch the code anymore, because it's not open source. So I ended up using RLlib to reproduce some of the code. And there's a not-yet-debugged fork of the repo that, if you really need access to this code, you can email me about. But I haven't had a chance to fully test it and make it pretty and get it ready to be fully released. So that's something we're working on, reproducing it in open source. Great, I'm looking forward to that. I really got way more into SSDs after reading your paper, and after thinking about the fact that real-world SSDs are a serious problem in our world today. It seems to me that there's so much more we could do with them, and yet here we are in 2019, just starting to ask the basic questions about them. Sorry, I want to give a plug to my collaborator Eugene Vinitsky, who was the real hero behind reproducing the SSDs themselves in open source. The GitHub repo is something that was largely him, so I just want to give a shout out to Eugene. We've seen different types of intrinsic motivations now in RL, mostly focused on the single-agent setting: curiosity, as you mentioned, empowerment. There was a paper that focused on inequity aversion, which, in a nutshell, gives some sort of punishment to agents that are rewarded more than the others. Yeah, or rewarded less. They don't like to be too different from the group. So getting way less reward than the rest of the group, or way more, makes the agent disincentivized to do that. The paper pictures it as being like guilty or envious. Ah, okay. So that was Inequity Aversion Improves Cooperation in Intertemporal Social Dilemmas. The mechanism that you came up with, is that more effective than inequity aversion? And is there some way that they could work together? I just wanted to shout out also that the first author of that paper is Edward Hughes. I think it's a great paper and you guys should check that out. We do have a plot in our paper, or a table, that shows that in certain of our experiments
we exceed the maximum obtained in the inequity aversion paper in terms of total collective reward for all the agents. So that suggests it can be more effective, but it's not a direct comparison, because the amount of hyperparameter sweeping in my paper was not necessarily held constant with the previous paper. So I don't want to overclaim there. But one thing that I would say in terms of effectiveness is that our paper does not make an assumption which the inequity aversion paper does. The inequity aversion paper relies on agents being able to see each other's rewards, which, as I mentioned earlier, is not necessarily a realistic assumption, especially if you think about different downstream applications of this type of research. So I think it's more realistic that agents just see each other's actions, but can still learn from each other. And that's, I think, a big contribution of our paper. If you had a model of other agents, maybe you could project your own understanding of rewards onto them and assume that their reward structure is similar to yours. Would that be a way around that? Yeah, so I'm really fascinated with this idea of, could you use yourself to model others? There's this Modeling Others using Oneself paper that came out, which is interesting to check out. And I think this could really dovetail nicely with the influence reward. So one criticism of the influence reward is that you could influence an agent in a way that doesn't necessarily help it. Let's say I just get in your way and you have to walk around me. I've influenced you, but I haven't helped. So what I think would be really nice is to compute the directional causal influence of your actions on another agent's value function. Say I want to cause your value estimate, your estimate of total expected future rewards, to go up. So I want to help you, but I don't want to have to be able to observe your value function, because that's pretty unrealistic. Why would you necessarily share that with me? So what if I were able to use my own value estimate to model your situation? Basically, I switch places with you, and how to do this might depend on the environment, but essentially I put you in my place and I substitute my action for your action. And I compute how that action would affect my value function, and I use that as an estimate of how I think my action is affecting you. And I hope that I'm helping you, affecting you in a positive way. And I think there could be really interesting experiments there, like how well does this work as our rewards diverge? If you have different goals than I do, how well does just generally trying to help, by imagining you as me, actually help you? But yeah, I think it's a really interesting direction to pursue. That gets at the idea of the Tesla car assuming that the Waymo car has a similar policy and value function, so it could imagine how it was influencing the other car's reward. A simpler mechanism might be to just share rewards. Yeah, I mean, I think that's a really promising assumption you can make if you're training a swarm of agents that all have the same goal. And in some environments that is the case. So maybe you're doing robot soccer, and every agent on the same team wants to win the game, and that's straightforward to do. But I think if you think about the human world, we're not actually all jointly optimizing for the same shared reward function. We all have different goals.
And in the same sense, with autonomous driving, cars have some shared goals, like they don't want a traffic accident, for example. But they have different destinations, and their motivations are essentially different. So I think it's not always plausible to assume that reward is totally shared. But if reward wasn't shared, then how would you be able to figure out how your actions would affect their value function? Your assumption that you could use your own model to put the other agent in your place and estimate the reward they would get from certain actions might not make sense, because you're not sure what their goals are. Is that right? That's a great question. Yeah. And actually the experiments I want to do if I pursue this method are precisely that. So as our interests diverge, as we follow increasingly different reward functions, how much does this actually help? Let's say we were in an environment where you wanted to eat apples and I wanted to eat bananas, but neither of us wanted to die or fall off a cliff or be shot or something like this. It may still be possible that modeling your reward as mine, or my reward as yours, still helps you in some way and is still beneficial. But at what point does that fail utterly? So I think that would be an interesting experiment to do. With all this talk of SSDs, I can't help but wonder: could this research be used to help us solve our real-world sequential social dilemmas, like climate change, in terms of influence? You mentioned how it would be more effective if my friends were paid when I lose weight. But in terms of systems, you said you don't want to see systems that influence people in this way. Is that right? Yeah, I'm a little bit wary of the idea of building systems that influence people. The problem with a lot of AI systems that are currently meant to influence people is that we don't have the right metrics. So if you just train YouTube to increase watch time, you actually get some pretty questionable behavior. I don't know if you saw, there was a recent article showing that YouTube recommendations that are trained to optimize watch time end up recommending extremist content. Yes, because people click on it and they watch it for longer. So we just don't know what metrics we should be optimizing when we're trying to influence people to do something according to a metric, and I think that's pretty dangerous. But I do think social dilemmas, in the sense that they are like a tragedy of the commons, which of course does relate to climate change, are something that's interesting to study. The question is how we can develop insights that we are able to take to larger-scale problems in a beneficial way. I think the general idea that multi-agent RL could be posing these social science questions, or maybe answering them in different ways, is super fascinating. I'm fascinated by that too. Absolutely. It almost seems like multi-agent RL might help us get down to the essence of some of these issues without the complexities of language and culture that are always part of the social experiment approaches. This somehow helps us distill down to some mathematical essence of the problem. Yeah, and I think that's a valuable tool.
And if you think about what game theory has done for economics, I think there's something similar there, where you make a simplified version of the problem and develop concrete theory as to how this works in the simplified version. But I don't think that's the only research you should do, and you definitely need to fill in the gaps with more complex research. And one thing that I think illustrates this is that the communication protocol that emerged as a result of the influence reward in these environments emerged precisely because they're a little bit more complex than a prisoner's dilemma. There's partial observability, and there are all these complexities that actually led to that being an effective policy in order to gain influence. And we might not have seen that behavior if we had a much more simplified environment. So I do think the demands of the environment definitely shape the solution that you end up coming up with. And so while simplified versions are good for really concretely nailing down aspects of the problem, they're not the only thing we should study. When I first encountered your paper, I was thinking a lot about compassion and AI, and whether it makes sense to design agents that are compassionate, or what would that even mean? And I don't mean in terms of feelings, but in terms of behaving in ways that could be described as compassion or caring. So I reached out to Rohin Shah at UC Berkeley, who does the AI alignment newsletter; it seemed like he might know about these things. He replied by email and gave me permission to share. He said, in part, your idea seems more about how the AI system should help us, i.e. by being compassionate. I'd rather figure out how to make it so humans retain effective control over the AI, so that they can decide what they want the AI to do. And the image that came to mind for me was imagining this supposedly caring robotic mother. But if it had a faulty model, then it couldn't really be effectively compassionate; to do that well would take an incredible amount of intelligence. Yeah, I think this question is absolutely fascinating and actually connects a lot of the different things we've been talking about, because it connects again to the thing I mentioned earlier about how, if you don't have the right metrics, only clumsy metrics, and you try to optimize for them, then you can actually do harm. Like if you optimize watch time, you can actually end up with really bad behaviors. So that's exactly right. If you have a robot mother that tries to be compassionate, but its metrics of what you need are clumsy and incomplete, then it could actually be really damaging. And actually, if you have a passion for bad AI sci-fi TV shows and movies, there's a show on Amazon Prime called Humans that actually investigates this, where they have an elderly person that has a humanoid robot that's meant to care for them, but the robot is made by the government and has very strict ideas of what is healthy for this person, and really restricts their autonomy while trying to be quote-unquote compassionate. I think there's something interesting there. So actually, another part of my research has been trying to use reinforcement learning and other techniques to learn about human preferences from their implicit social cues.
So can we improve a generative model by looking at people's facial expressions as they look at samples from the model, or can we improve a dialogue model by looking at people's sentiment in terms of how they respond to the model? And I'm kind of interested in this because I think if you really get at these underlying signals of what people's mental states are, and you try to really get at what people's current and ongoing preferences really are, that helps alleviate some of these issues. You can think about human preferences as this non-stationary signal: if you do something clumsy and wrong repeatedly, you will no longer be satisfying their preferences and you will no longer get a good reward signal. So if I tell the same joke to you once and you laugh, great, but if I tell it to you three times, you're not going to be laughing anymore. So to the extent that we could really sense people's underlying preferences and optimize for those, that seems like we would be getting closer to a beneficial AI. So it's like having humans in that real-time control loop. That's really interesting. Yeah, so I basically agree wholeheartedly with what Rohin was saying, that people need autonomy to be able to control what these models are doing. Towards the end of your paper, you mentioned how your influence mechanism might be used to prevent collapse in hierarchical RL. Could you say more about what you mean by that and how this mechanism might help? In a hierarchical model, you have what I would call the top level of the hierarchy, which is deciding on a more abstract policy, like, let's say, I want to go get the cup, and then you have a lower level of the policy that's supposed to be implementing how to take the actions to go get the cup, like move here, move here, grab, etc. The issue of collapse is basically that we don't really see this behavior emerging. We don't actually learn several different options that can be invoked at different times; it's just that there's one option that the high level always chooses, for example, and the low level does everything, or we can see the reverse happening as well. And then the issue of ignoring the high-level option can also happen. So if the low level is just doing everything, then whatever option is being chosen at the high level is not actually being used by the low level of the model, if that makes sense. So the idea that influence could help the model not ignore the high-level option is to say that we can compute the causal influence of the option chosen by the high level on the low-level policy and try to optimize for that. So both parts of the model would be rewarded if there's more influence between the high-level option chosen and the low-level policy. Basically, we're saying there should just be more mutual information between the high-level option and the low-level policy. When I first read your social influence paper, I was reminded of another paper about autocurricula. I think that paper is the only place on arXiv where that word is mentioned. The title is Autocurricula and the Emergence of Innovation from Social Interaction: A Manifesto for Multi-Agent Intelligence Research, by Leibo et al. You mentioned you're familiar with this paper. Could your mechanism be used to help us move towards autocurricula? Or maybe you could start by describing, what is autocurricula? Yeah, so I actually really love this paper. So again, Joel Leibo, who I worked with at DeepMind, is the first author.
So it's a really interesting paper that basically pulls together a lot of sociology and social science research that talks about why social demands actually drive human cognitive development. And another really interesting paper that I love, that you could check out, that's a bit older and comes directly from the sociology literature, is called The Social Function of Intellect. And basically both papers are arguing that, oh, and actually a popular book you could read now, of course, is Sapiens by Yuval Noah Harari, which also makes the same kind of arguments. That book has some overclaiming, but it's fun. So what's going on with all of these is that they're basically saying that when humans integrate into larger social groups, that's actually very cognitively demanding, because you have to understand all the different social relationships and maintain that, and if you're not able to do that, you can't organize into larger social groups. So for example, Sapiens makes the claim that Neanderthals were only able to organize into groups of like 50, but Sapiens were able to organize into groups of 200, and that's why they were able to wipe out all the other proto-human species. Fun spoiler alert, I don't know if you want to include that. So the autocurricula paper is again making this case that in a social environment, if you want to cooperate with other agents when there are increasingly complicated social institutions, then you need to have a larger brain, or you need to have better cognitive abilities, and then as your cognitive abilities get better, you can make more complicated social institutions. So that's kind of a driving feedback loop. And then there's this other type of feedback loop where, if you want to compete with another agent, like outcompete another human for resources or mating partners or whatever, then you need to have more intelligence as well. So the idea of this autocurricula is that the social demands of an environment, as other agents learn and get better, mean that in order to work with them in their environment you will also have to learn and get better. I think what they mean by autocurricula is that this curriculum is developing naturally and automatically, without a researcher having to engineer the curriculum; it's actually coming from the agents in the environment itself. So you can potentially see this happening in the social influence paper, because in order to influence you, I have to have some understanding of what you're interested in and why you're taking the actions you are, and be able to take actions, or communicate information, that you will find useful. So the autocurricula paper mentions the problem problem, the idea that researchers have to keep creating new problems for the agents to solve, and I was reminded that Jeff Clune's team at Uber AI published a paper on what they call POET, which automatically makes different variations on environments, which seems like it's trying to solve the same thing, although autocurricula somehow seems more elegant in that way. So is it just one environment that provides everything that's needed for this intelligence explosion to happen? Do we think of this as one environment that keeps on running and the agents just get smarter and smarter? I don't think autocurricula means it's restricted to any single environment.
I think it's the idea that having other agents in the environment that are either working with you or competing with you will provide a curriculum in itself. So with POET, you can think of the other agent as the environment: it's trying to make the environment harder for you, and you're trying to adapt to the environment as it's changing. So the autocurricula could occur in any multi-agent environment where you're trying to work with other agents, either in a competitive way or in a cooperative way. Would you say autocurricula is entirely a theoretical idea at this point, or are there early versions of this running? There are early versions working. I've been doing interviews lately with different companies, and someone was able to show me some early results that I can't talk about, but you can look out for those happening soon. But yeah, people are thinking about this, and I think it's quite interesting. The autocurricula paper itself, as it titles itself, is just a manifesto. It's putting these ideas forward, not necessarily showing them in a concrete way, but hopefully it can inspire other researchers to think about multi-agent learning as a way to drive intelligence broadly. Do you feel like your influence mechanism would be useful for autocurricula? Potentially. If you think about it, as agents get more complex and they learn more about the environment itself and they're more self-sufficient, it would be more and more difficult to influence them by providing useful information. So you have to work harder to find some information that they don't know that you could use to influence them. So there could be something there. Thinking back to the environments that you used in your paper, the sequential social dilemmas, like Harvest and Cleanup, I'm wondering how much more complex do they need to be to support autocurricula? I'm thinking, like in physics, a two-body problem is not hard to solve, but add one more and make it a three-body problem and suddenly it becomes hard to solve. You get this emergent complexity. So in the same way, maybe there are some small steps that we could take to get us to an autocurricula-type environment, though I'm not really clear what that would be. Me neither, and I think inevitably you do run up against the limitations of a simple environment, even in terms of developing more and more complex social policies. Because if there is a point at which your policy saturates and you sort of solve the environment, then it kind of ends there. Yeah, it's an interesting question, how to develop an environment that basically enables continuing social development. And I think you mentioned developing social institutions that are more and more complex. That sounds really cool. And maybe there's something to just intrinsic motivation in itself. If you had an interesting set of intrinsic motivations and kind of an unbounded environment, that could be really, really cool. Can I ask, who do you look up to in the research world? Do you have mentors? So I definitely do have mentors, but before I get into that, I do just want to give a shout out to Chelsea Finn, because I definitely look up to Chelsea Finn. I think she's really inspiring for a lot of women entering the field, and she's just such a badass. So she's great. But in terms of mentors, there have been a lot of people that have helped me along the way, and I really want to be grateful to them and acknowledge them.
I kind of feel like I'm giving my thesis acknowledgements all over again. I just did this in my defense, but I do want to thank Doug Eck at Google Brain, who was my manager for a very early internship and has really believed in me and supported me. That's been incredibly valuable. I want to thank Nando. He's been great, as I mentioned before. He asks all the hard questions and really gets me to think. I really want to thank Joelle Pineau, because she is on my thesis committee and has worked with me in great detail and given me really detailed technical feedback in a way that I found to be incredibly valuable. And of course, I have to thank my advisor, Rosalind Picard, because she's just been incredibly supportive and kind and great to bounce ideas off of. And I really wouldn't have had the same PhD if I didn't have such an understanding and amazing advisor. So I want to give thanks to her. Any words for people who look up to you? Oh, wow. Well, I guess that's kind of a weird question, because I still think of myself as mainly looking up to other people. But I would say that it just takes hard work. You really just have to stick with it. I remember very early in my master's degree, taking my first machine learning course and having forgotten a lot of linear algebra since early undergrad, and having to go back and take courses. And then you just keep trying and keep working. And if you keep trying, eventually things work out, even if it's frustrating in the short term. Don't think, I can't do this. Just keep trying, even if you don't think you can do these things at first. Would you tell us about future directions for you? Like, what do you find interesting these days, and what do you think you want to do next? Well, let me give you the pitch, because I've been thinking a lot about this. Basically, I want to integrate the stuff that I've worked on before into a much more cohesive direction. I'm really fascinated by this idea of multi-agent cooperation, and I think I have many ideas for how to do social learning in a multi-agent system. But what I think would be really cool is if you could train a policy that's able to quickly adapt to a new agent and dynamically coordinate with it in an ad hoc way. And then you could take that policy and generalize it to coordinating with a human, because I think there are really interesting challenges there, and then fine-tune your policy by learning from the human. So I think uniting these directions of learning from humans and learning from other agents into the social learning direction is what I'm really excited about. Do you have any plans to take a break now that you're done with your PhD? Oh my goodness, yes. I have been planning to take a break since June, but so far it hasn't materialized. But very soon I hope to be able to actually take some time off, and I really want to go hiking. I want to do like a 10-day backpacking trip, kind of by myself: disconnect, turn off the phone for a while, be in the woods. So that's the plan. Natasha Jaques, I've learned so much from your work and from you. This has been a real treat for me. I'm really looking forward to reading whatever you come up with next. Thanks so much for sharing your time and your insight with us all today. Oh, thank you so much for inviting me. This was so fun, talking about this stuff. That's our episode for today, folks. Be sure to check talkrl.com for more great episodes.
}, { "end": 1366, "start": 1357, "text": " So just really digging in and seeing what the behavior of your agent is can help a lot. And that helped a lot in this case because otherwise I wouldn't have discovered how the influence." }, { "end": 1370, "start": 1366, "text": " Like we had I had for a long time. I think three weeks or a month." }, { "end": 1376, "start": 1370, "text": " These beautiful plots showing that the influence reward was helping the agents learn to cooperate in this environment." }, { "end": 1381, "start": 1376, "text": " But it wasn't convincing my team because they were like, why should influencing someone else help them?" }, { "end": 1382, "start": 1381, "text": " How is this working?" }, { "end": 1391, "start": 1382, "text": " So I really had to dig in and understand like what when was the influence reward happening and what was going on when agents got the influence reward?" }, { "end": 1393, "start": 1391, "text": " I guess that's one interesting thing about RL." }, { "end": 1401, "start": 1393, "text": " Like sometimes there's just no replacement for going in and seeing what these policies are doing beyond just what the charts are telling us." }, { "end": 1403, "start": 1401, "text": " I think it's really important." }, { "end": 1410, "start": 1403, "text": " Was there any controversy in terms of deciding which experiments to run or was that pretty obvious to you early on?" }, { "end": 1418, "start": 1410, "text": " There was a little bit of controversy, honestly. So I came into a debind and was working with this team ahead of by Joel Liebo." }, { "end": 1422, "start": 1418, "text": " That's very excited about these SSD environments." }, { "end": 1431, "start": 1422, "text": " And I think they were a good test bed for the influence reward because they actually get at the problem of what the agents could." }, { "end": 1435, "start": 1431, "text": " They're not they're not fully like they're a dilemma." }, { "end": 1440, "start": 1435, "text": " So agents aren't fully self interested, but they're not fully cooperative. You're not just trying to optimize a group reward." }, { "end": 1443, "start": 1440, "text": " So it's interesting to see what the influence reward do would do." }, { "end": 1453, "start": 1443, "text": " But some of the feedback we got on the paper was that we should have tested in maybe more traditional RL environments, maybe we should have tested in robot software, for example." }, { "end": 1458, "start": 1453, "text": " And I think that that's very interesting follow up work to do and it's something I'm thinking about doing." }, { "end": 1467, "start": 1458, "text": " I look briefly at the GitHub repo for these SSDs and I noticed that there were some you had some references to RLib in your comments." }, { "end": 1469, "start": 1467, "text": " Was that what you used to train it?" }, { "end": 1475, "start": 1469, "text": " Actually, no. So we I used I was a demand. So I use their amazing internal infrastructure to train these models." }, { "end": 1482, "start": 1475, "text": " And it was actually based on a three C, but actually a little bit closer to Impala because we did use a V trace correction." }, { "end": 1490, "start": 1482, "text": " But then when I left deep mind, you can't touch the code anymore because it's not open source. So I ended up using RLib to reproduce some of the code." 
}, { "end": 1497, "start": 1490, "text": " And there's a not debugged secret fork of some repo that if you really need access to this code, you can email me about." }, { "end": 1504, "start": 1497, "text": " But I haven't had a chance to fully test it and make it pretty and get it ready to be fully released." }, { "end": 1509, "start": 1504, "text": " So that's something we're working on reproducing it in open source." }, { "end": 1514, "start": 1509, "text": " Great. I'm looking forward to that. I really got way more into SSDs after reading your paper." }, { "end": 1522, "start": 1514, "text": " And after thinking about the fact that real real world SSDs are a serious problem in our world today." }, { "end": 1527, "start": 1522, "text": " It seems to me that there's so much more we could do with them. And yet here we are in 2019." }, { "end": 1530, "start": 1527, "text": " We're just starting to ask the basic questions about them." }, { "end": 1538, "start": 1530, "text": " Sorry. I want to give a plug to my collaborator Eugene Vinitsky, who was the real hero behind reproducing the SSDs themselves in open source." }, { "end": 1543, "start": 1538, "text": " So you think they get have repo is something that he it was largely him." }, { "end": 1546, "start": 1543, "text": " So just want to give a shout out to to Eugene." }, { "end": 1553, "start": 1546, "text": " We've seen different types of intrinsic motivations now in RL mostly focused on the single agent setting." }, { "end": 1555, "start": 1553, "text": " Curiosity, as you mentioned, empowerment." }, { "end": 1567, "start": 1555, "text": " There was a paper that focused on inequity a version, which is a gathered some sort of punishment for agents that are rewarded more than the others in nut shells." }, { "end": 1572, "start": 1567, "text": " Yeah, we're rewarded less. They don't like to be too different from the group." }, { "end": 1579, "start": 1572, "text": " So if you're getting way less reward than the rest of the group or way more rewarded makes the agent disincentivized to do that." }, { "end": 1582, "start": 1579, "text": " The paper picture is being like guilty or envious." }, { "end": 1588, "start": 1582, "text": " Ah, okay. So that was inequity a version improves cooperation in inter temporal social dilemmas." }, { "end": 1593, "start": 1588, "text": " The mechanism that you came up with is that more effective than inequity aversion?" }, { "end": 1598, "start": 1593, "text": " And is there some way that they could work together? I just wanted to shout out also the first author of that paper is and Hughes." }, { "end": 1601, "start": 1598, "text": " I think it's a great paper and you guys should check that out." }, { "end": 1608, "start": 1601, "text": " We do have a plot in our paper that or a table that does show in certain of our experiments." }, { "end": 1615, "start": 1608, "text": " We exceed the maximum obtained in the inequity version paper in terms of total collective reward for all the agents." }, { "end": 1621, "start": 1615, "text": " So that suggests it can be more effective, but it's not a direct comparison because the the amount of hyper parameter sweeping and" }, { "end": 1626, "start": 1621, "text": " in my paper was not necessarily held constant with the previous paper." }, { "end": 1628, "start": 1626, "text": " So I don't want to over claim there." 
}, { "end": 1636, "start": 1628, "text": " But one thing that I would say in terms of effectiveness is that our paper does not make an assumption which the inequity aversion paper does." }, { "end": 1647, "start": 1636, "text": " The inequity version paper relies on agents being able to see each other's rewards, which as I mentioned earlier is not necessarily realistic assumption, especially if you think about different downstream applications of this type of research." }, { "end": 1652, "start": 1647, "text": " So I think it's more realistic that agents just see each other's actions, but can still learn from each other." }, { "end": 1655, "start": 1652, "text": " And that's I think a big contribution of our paper." }, { "end": 1664, "start": 1655, "text": " If you had a model of other agents, maybe you could project your own understanding every words onto them and assume that their reward structure is similar to yours." }, { "end": 1666, "start": 1664, "text": " Would that be a way around that?" }, { "end": 1670, "start": 1666, "text": " Yeah, so I'm really fascinated with this idea of could you use yourself to model others?" }, { "end": 1676, "start": 1670, "text": " So there's this modeling others using one self paper actually that came out, which is interesting to check out." }, { "end": 1681, "start": 1676, "text": " And I think this could really dovetail nicely with the influence reward." }, { "end": 1687, "start": 1681, "text": " So one criticism of the influence reward is that you could influence an agent in a way that doesn't necessarily help it." }, { "end": 1691, "start": 1687, "text": " So let's say I just get in your way and you have to walk around me. I've influenced you, but I haven't helped." }, { "end": 1698, "start": 1691, "text": " So what I think would be really nice is to compute directional causal influence of your actions on another agent's value function." }, { "end": 1704, "start": 1698, "text": " So say I want to cause your value estimate, your estimate of total expected future rewards to go up." }, { "end": 1709, "start": 1704, "text": " So I want to help you, but I don't want to have to be able to observe your value function because that's pretty unrealistic." }, { "end": 1711, "start": 1709, "text": " Why would you necessarily share that with me?" }, { "end": 1716, "start": 1711, "text": " So what if I were able to use my own value estimate to model your situation?" }, { "end": 1727, "start": 1716, "text": " So basically I switch places with you in and how to do this might be dependent on the environment, but essentially I put you in my place and I substitute my action for your action." }, { "end": 1735, "start": 1727, "text": " And I compute how that action would affect my value function. And I use that as an estimate to say how I think my action is affecting you." }, { "end": 1740, "start": 1735, "text": " And I hope that I'm helping you in affecting you in a positive way." }, { "end": 1747, "start": 1740, "text": " And yeah, I think there could be really interesting experiments there like how well does this work as our rewards diverge?" }, { "end": 1754, "start": 1747, "text": " Like if you have different goals than I do, how well does just generally trying to help like imagine you as me help you?" }, { "end": 1758, "start": 1754, "text": " But yeah, I think it's a really interesting direction to pursue that." }, { "end": 1764, "start": 1758, "text": " Get to the idea of the Tesla car assuming that the way my car has a similar policy and value function." 
}, { "end": 1769, "start": 1764, "text": " So it could imagine how it was influencing the other cars reward." }, { "end": 1773, "start": 1769, "text": " A simpler mechanism might be to share rewards like I tried that in power." }, { "end": 1782, "start": 1773, "text": " Yeah, I mean, I think that's a really I think that's a really promising assumption you can make if you're training a swarm of agents that all had the same goal." }, { "end": 1790, "start": 1782, "text": " And in some environments that is the case. So maybe you're doing robot soccer every agent does want to on the same team wants to win the game." }, { "end": 1792, "start": 1790, "text": " And that's straightforward to do." }, { "end": 1799, "start": 1792, "text": " But I think if you think about the human world, we're not actually all jointly optimizing for the same shared reward function." }, { "end": 1801, "start": 1799, "text": " We all have different goals." }, { "end": 1809, "start": 1801, "text": " And in the same sense as like autonomous driving like cars have some shared goals like they don't want traffic accident, for example." }, { "end": 1813, "start": 1809, "text": " But they have different destinations and their motivations are essentially different." }, { "end": 1818, "start": 1813, "text": " So I think it's not always plausible to assume that reward is totally shared." }, { "end": 1833, "start": 1818, "text": " But if reward wasn't shared, then how would you be able to figure out how it would affect their value function like your assumption that you could use your own model to put the other agent in your place and estimate the reward they would get from certain actions." }, { "end": 1836, "start": 1833, "text": " Might not make sense because you're not sure what their goals are." }, { "end": 1837, "start": 1836, "text": " Is that right?" }, { "end": 1842, "start": 1837, "text": " That's a great question. Yeah. And actually the experiments I want to do if I pursue this method is precisely that." }, { "end": 1850, "start": 1842, "text": " So as our interest diverge, like as we follow increasingly different reward functions, how much does this actually help?" }, { "end": 1857, "start": 1850, "text": " And in an environment where there's like let's say we were in an environment where you wanted to eat apples and I wanted to eat bananas." }, { "end": 1863, "start": 1857, "text": " But neither of us wanted to die or like fall off a cliff or be shot or something like this." }, { "end": 1871, "start": 1863, "text": " It may still be possible that modeling my your reward as mine or my reward as yours still helps you in some way and is still beneficial." }, { "end": 1877, "start": 1871, "text": " But at what point does that fail utterly? So I think that would be an interesting experiment to do." }, { "end": 1888, "start": 1877, "text": " With all this talk of SSDs, I can't help it wonder could this research be used to help us solve our real world sequential social dilemmas like climate change in terms of influence." }, { "end": 1893, "start": 1888, "text": " You mentioned how it would be more effective if my friends were paid for me to lose weight." }, { "end": 1899, "start": 1893, "text": " But in terms of systems, you said you you don't want to see systems that influence people in this way." }, { "end": 1900, "start": 1899, "text": " Is that right?" }, { "end": 1906, "start": 1900, "text": " Yeah, I'm a little bit wary of the idea of building systems that influence people." 
}, { "end": 1912, "start": 1906, "text": " The problem with a lot of systems AI systems that are currently meant to influence people is that we don't have the right metrics." }, { "end": 1921, "start": 1912, "text": " So if you just you just train YouTube to increase watch time, you actually get some pretty questionable behavior." }, { "end": 1930, "start": 1921, "text": " So I don't know if you saw there was a recent article that YouTube recommendations tend to that are trained to optimize watch time and up recommending extremist content." }, { "end": 1934, "start": 1930, "text": " Yes, because people click on it and they watch it for longer." }, { "end": 1942, "start": 1934, "text": " So we just don't know what metrics we should be optimizing when we're trying to influence people to do something according to a metric." }, { "end": 1945, "start": 1942, "text": " And so I think that's pretty dangerous." }, { "end": 1952, "start": 1945, "text": " But I do think social dilemmas in the sense that they are like a tragedy of the comments, which of course does relate to climate change." }, { "end": 1957, "start": 1952, "text": " Are something that's interesting to study the question is how much can we take these it?" }, { "end": 1963, "start": 1957, "text": " Like how can we develop insights that we are able to take to larger scale problems in a beneficial way?" }, { "end": 1972, "start": 1963, "text": " I think the general idea that multi-Asian RL could be posing these social science questions or maybe answering them in different ways is super fascinating." }, { "end": 1975, "start": 1972, "text": " I'm fascinated by that too. Absolutely." }, { "end": 1987, "start": 1975, "text": " It almost seems like multi-Asian RL might help us get down to the essence of some of these issues without the complexities of language and culture that are always part of the social experiment approaches." }, { "end": 1991, "start": 1987, "text": " This somehow helps us distill down to some mathematical essence of the problem." }, { "end": 2005, "start": 1991, "text": " Yeah, and I think that's a valuable tool. And if you think about like what game theory has done for economics, I think there's something similar there where you make a simplified version of the problem and develop concrete theory as to how this works in a simplified version." }, { "end": 2013, "start": 2005, "text": " But I don't think that's the only research that you should do and there you definitely need to fill in the gaps with more complex research." }, { "end": 2026, "start": 2013, "text": " And one thing that I think illustrates this is that actually the communication protocol that emerged as a result of the influence reward in these environments emerged precisely because they're a little bit more complex than a prisoner's dilemma." }, { "end": 2036, "start": 2026, "text": " And there's partial observability and there's all these complexities that actually led to that being an effective policy in order to gain influence." }, { "end": 2046, "start": 2036, "text": " And we might not have seen that behavior if we had a much more simplified environment. So I do think the demanding this would be environment definitely shapes the solution that you end up coming up with." }, { "end": 2054, "start": 2046, "text": " And so while simplified versions are good for really concretely entering down aspects of the problem, it's not the only thing we should study." 
}, { "end": 2067, "start": 2054, "text": " When I first encountered your paper, I was thinking a lot about compassion and AI and whether it makes sense to design agents that are compassionate or what would that even mean?" }, { "end": 2074, "start": 2067, "text": " And I don't mean in terms of feelings when in terms of behaving in ways that could be described as compassion or caring." }, { "end": 2092, "start": 2074, "text": " So I reached out to Rohan Shah at UC Berkeley who does the AI alignment newsletter. It seemed like he might know about these things. He replied by email and gave me permission to share. He said in part, your idea seems more about how the AI system should help us, i.e. by being compassionate." }, { "end": 2100, "start": 2092, "text": " I'd rather figure out how to make it so humans retain effective control over the AI so that they can decide what they want the AI to do." }, { "end": 2117, "start": 2100, "text": " And the image that came to mind for me was like imagining this supposedly caring robotic mother. But if it had a faulty model, then it couldn't really be effectively compassionate to do that well would take an incredible amount of intelligence." }, { "end": 2132, "start": 2117, "text": " Yeah, I think this question is absolutely fascinating and actually connects a lot of the different things we've been talking about because it connects again to the thing I mentioned earlier about if you don't have the right metrics, you have only clumsy metrics and you try to optimize for them, then you can actually do harm." }, { "end": 2149, "start": 2132, "text": " So like if you optimize watch time, you can actually, you know, end up with really bad behaviors. So that's exactly right. If you have a robot mother that tries to be compassionate, but it's metrics of what you need are clumsy and incomplete, then it could actually be really damaging." }, { "end": 2176, "start": 2149, "text": " And actually, if you're if you have a passion for bad AI sci-fi TV shows and movies, there's a show on Amazon Prime called humans that actually investigates this where they have like an elderly person that has a humanoid robot that's meant to care for them, but the robot is made by the government and has very strict ideas of what is healthy for this person and really restricts their autonomy and trying to be quote unquote compassionate." }, { "end": 2190, "start": 2176, "text": " I think there's something interesting there, but that's actually so another part of my research has been trying to use reinforcement learning and other techniques to learn about human preferences from their implicit social cues." }, { "end": 2202, "start": 2190, "text": " So can we improve a generative model by looking at people's facial expressions as they look at samples from the model or can we improve a dialogue model by looking at people sentiment in terms of how they respond to the model." }, { "end": 2218, "start": 2202, "text": " And I'm kind of interested in this because I think if you really get at these underlying signals of what people's mental states are and you try to really get at what people's current and ongoing preferences really are, that helps alleviate some of these issues." }, { "end": 2231, "start": 2218, "text": " So you can think about human preferences as this non-stationary signal that if you do something clumsy and wrong repeatedly, you will no longer be satisfying their preferences and you will no longer get a good reward signal." 
}, { "end": 2239, "start": 2231, "text": " So if I tell the same joke to you once and you laugh, great, but if I tell it to you three times, you're not going to be laughing anymore." }, { "end": 2249, "start": 2239, "text": " So to the extent that we could really sense people's underlying preferences and optimize for those, that seems like we would be getting closer to a beneficial AI." }, { "end": 2255, "start": 2249, "text": " So it's like having humans in that real time control loop. That's really interesting." }, { "end": 2263, "start": 2255, "text": " Yeah, so I basically agree wholeheartedly with what Rohan was saying that people need autonomy to be able to control what these models are doing." }, { "end": 2273, "start": 2263, "text": " Towards the end of your paper, you mentioned how your influence mechanism might be used to prevent collapse in hierarchical RL." }, { "end": 2277, "start": 2273, "text": " Could you say more about what you mean by that and how this mechanism might help?" }, { "end": 2285, "start": 2277, "text": " In a hierarchical model, you have what I would call the top level of the hierarchy that's deciding on sort of a more abstract policy." }, { "end": 2296, "start": 2285, "text": " Like, let's say I want to go get the cup and then you have like a lower level of the policy that's supposed to be implementing how to like take the actions to go get the cup, like move here, move here, grab, etc." }, { "end": 2302, "start": 2296, "text": " The issue about collapse is basically that we don't really see this behavior emerging." }, { "end": 2307, "start": 2302, "text": " So like we don't actually learn several different options that can be invoked at different times." }, { "end": 2316, "start": 2307, "text": " It's just that there's one option that the high level always chooses, for example, on the low level does everything or we can see the reverse happening as well." }, { "end": 2321, "start": 2316, "text": " And then the issue of like ignoring the high level option can also happen." }, { "end": 2332, "start": 2321, "text": " So if the low levels just doing everything, then whatever option is being chosen at the high level is not being actually used by the low level of the model, if that makes sense." }, { "end": 2345, "start": 2332, "text": " So the idea that influence could help to not ignore the high level option is to say that we can compute the causal influence of the option chosen by the high level on the low level policy and try to optimize for that." }, { "end": 2351, "start": 2345, "text": " So even both parts of the model be rewarded if there's more influence between the high level option chosen and the low level policy." }, { "end": 2358, "start": 2351, "text": " So basically saying there should just be more mutual information between the high level option and the low level policy." }, { "end": 2365, "start": 2358, "text": " When I first read your social influence paper, I was reminded of another paper about auto curricula." }, { "end": 2370, "start": 2365, "text": " I think that paper is the only place on archive where that word is mentioned." }, { "end": 2381, "start": 2370, "text": " The title is Auto Curricula and the Emergence of Innovation from Social Interaction, a manifesto for multi-agent intelligence research by Libot at Al." }, { "end": 2391, "start": 2381, "text": " You mentioned you're familiar with this paper. Could your mechanism be used to help us move towards auto curricula or maybe you could start by describing what is auto curricula?" 
}, { "end": 2398, "start": 2391, "text": " Yeah, so I actually really love this paper. So again, Joel Ibo is the first author and worked on it as well." }, { "end": 2412, "start": 2398, "text": " So it's a really interesting paper that basically pulls together a lot of sociology and social science research that talks about why social demands actually drive human cognitive development." }, { "end": 2421, "start": 2412, "text": " And another really interesting paper that I love that you could check out that's a bit older that's directly from the the sociology literature is called the social function of intellect." }, { "end": 2433, "start": 2421, "text": " And basically both papers are arguing that oh, and actually you could read this for popular book now of course is sapiens by you all know a hermitage also makes the same kind of arguments." }, { "end": 2439, "start": 2433, "text": " Yeah, so it's it's you know that book has some over claiming but I really it's a fun." }, { "end": 2459, "start": 2439, "text": " So what's going on with all these is that they're basically saying that humans when they integrate into larger social groups that's actually very cognitively demanding because you have to understand all the different social relationships and maintain that and and actually if you're not able to do that you can't organize into larger social groups." }, { "end": 2475, "start": 2459, "text": " So for example, sapiens makes the claims claim that Neanderthals were only able to organize into groups of like 50 but sapiens were able to organize into groups of 200 and that's why they were able to genocide all the other proto human species so fun spoiler alert." }, { "end": 2477, "start": 2475, "text": " I don't know if you want to include that." }, { "end": 2498, "start": 2477, "text": " So the autoccurricular paper is again making this case that in a social environment if you want to cooperate with other agents when they're increasingly complicated social institutions then you need to have a larger brain or you need to have better cognitive abilities and then as your cognitive abilities get better you can make more complicated social institutions." }, { "end": 2515, "start": 2498, "text": " So that's kind of driving feedback loop and then there's this other type of feedback loop where if you want to compete with another agent like out compute and out compete another human for resources or you know or magic partners or whatever then you need to have more intelligence as well." }, { "end": 2541, "start": 2515, "text": " So the idea of this autoccurricular is that the social demands of an environment as other agents learn and get better in order to work with them in their environment you will also have to learn and get better so it's I think they mean by autoccurricular that this curriculum is developing naturally and automatically without a researcher having to engineer the curriculum but it's actually coming from the agents in the environment itself." }, { "end": 2558, "start": 2541, "text": " So you can potentially see this happening in the social influence paper because like in order to influence you I have to have some understanding of what you're interested in and why you're taking the actions you are and and be able to take actions that or or communicate information that you will find useful." 
}, { "end": 2587, "start": 2558, "text": " So the particular paper mentions the problem problem the idea that researchers have this problem of where they have to keep creating new problems for the agents to solve and I was reminded of Jeff Cloon's team at Uber AI published a paper on what they call poet which automatically makes different variations on environments which seems like it's trying to sell the same thing although autoccurricular somehow seems more elegant in that way." }, { "end": 2594, "start": 2587, "text": " So it's just one environment that provides everything that's needed for this intelligence explosion to happen." }, { "end": 2601, "start": 2594, "text": " Is that right or we do think of this as one environment where that keeps on running and the agents are just getting smarter and smarter?" }, { "end": 2606, "start": 2601, "text": " I don't think the autoccurricular means like it's restricted to any single environment." }, { "end": 2613, "start": 2606, "text": " I think it's the idea that as you have other agents in the environment that are either working with you or competing with you that will provide a curriculum in itself." }, { "end": 2621, "start": 2613, "text": " So with poet you can think of the other agent as the environment and it's trying to make the environment harder for you and you're trying to adapt to the environment as it's changing." }, { "end": 2632, "start": 2621, "text": " So that that could be the autoccurricular could occur in any multi agent environment where you're trying to work with other agents either in a competitive way or in a cooperative way." }, { "end": 2640, "start": 2632, "text": " Would you say auto curricula is entirely a theoretical idea at this point or are there early versions of this running?" }, { "end": 2654, "start": 2640, "text": " There are early versions working. I've been doing interviews lately with different companies and someone was able to show me some early results that I can't talk about, but you can look out for those happening soon." }, { "end": 2667, "start": 2654, "text": " But yeah, so people are thinking about this and I think it's quite interesting. So the autoccurricular paper itself as it titles itself is just a manifesto is just putting these ideas forward is not necessarily showing them in a concrete way." }, { "end": 2673, "start": 2667, "text": " But hopefully you can inspire other researchers to think about multi agent as a way to drive intelligence broadly." }, { "end": 2691, "start": 2673, "text": " Do you feel like your influence mechanism would be useful for autoccurricular potentially. So if you think about as agents get more complex and they learn more about the environment itself and they're more self sufficient, it would be more and more difficult to influence them by providing useful information." }, { "end": 2699, "start": 2691, "text": " So you have to work harder to find some information that they don't know that you could use to influence them. So there could be something there." }, { "end": 2704, "start": 2699, "text": " Thinking back to the environments that you used in your paper, the sequential social dilemmas." }, { "end": 2713, "start": 2704, "text": " Like harvest and cleanup. I'm wondering how much more complex do they need to be to support auto curricula?" }, { "end": 2725, "start": 2713, "text": " I'm thinking like in physics, a two body problem is not hard to solve, but add one more and make a three body problem and suddenly it becomes hard to solve. 
You get this emergent complexity." }, { "end": 2735, "start": 2725, "text": " So in the same way, maybe there are some small steps that we could take to get us to an autocurricula-type environment, though I'm not really clear what that would be." }, { "end": 2745, "start": 2735, "text": " Me neither, and I think inevitably you do run up against the limitations of the simple environment, even in terms of developing more and more complex social policies." }, { "end": 2753, "start": 2745, "text": " Because if there is a point at which your policy saturates and you sort of solve the environment, then it's not fear." }, { "end": 2756, "start": 2753, "text": " Yeah, it's an interesting question." }, { "end": 2769, "start": 2756, "text": " So how to develop an environment that basically enables continuing social development. And I think you mentioned developing social institutions that are more and more complex." }, { "end": 2783, "start": 2769, "text": " That sounds really cool. And maybe there's something to just intrinsic motivation in itself. If you had an interesting set of intrinsic motivations and kind of an unbounded environment, that could be really, really cool." }, { "end": 2787, "start": 2783, "text": " Can I ask, who do you look up to in the research world? Do you have mentors?" }, { "end": 2796, "start": 2787, "text": " So I definitely do have mentors, but before I get into that, I do just want to give a shout out to Chelsea Finn, because I actually definitely look up to Chelsea Finn." }, { "end": 2802, "start": 2796, "text": " And I think she's really inspiring for a lot of women entering the field and she's just such a badass. So she's great." }, { "end": 2814, "start": 2802, "text": " But in terms of mentors, there have been a lot of people that have helped me along the way. And I really want to be grateful to them and acknowledge them. I kind of feel like I'm giving my thesis acknowledgements all over again." }, { "end": 2825, "start": 2814, "text": " I just did this in my defense, but I do want to thank Doug Eck at Google Brain, who was my manager for a very early internship and has really believed in me and supported me. And that's been incredibly valuable." }, { "end": 2843, "start": 2825, "text": " I want to thank Nando. He's been great, as I mentioned before. He asks all the hard questions and really gets me to think. I really want to thank Joelle Pineau, because she is on my thesis committee and has worked with me in great detail and given me really detailed technical feedback in a way that I found to be incredibly valuable." }, { "end": 2858, "start": 2843, "text": " And of course, I have to thank my advisor, Rosalind Picard, because she's just been incredibly supportive and kind and great to bounce ideas off of. And I really wouldn't have had the same PhD if I didn't have such an understanding and amazing advisor." }, { "end": 2860, "start": 2858, "text": " So I want to give thanks to her." }, { "end": 2863, "start": 2860, "text": " Any words for people who look up to you?" }, { "end": 2875, "start": 2863, "text": " Oh, wow. Well, I guess that's kind of a weird question, because I still think of myself as mainly looking up to other people. But I would say that it just takes hard work." }, { "end": 2889, "start": 2875, "text": " I remember, you really just have to stick with it.
Like I remember in my master's degree very early, taking my first machine learning course and having forgotten a lot of linear algebra since early undergrad and having to go back and take courses." }, { "end": 2914, "start": 2889, "text": " And then you just keep trying and keep working. And if you keep trying, eventually things work out. Even if it's frustrating in the short term, don't think, I can't do this. Just keep trying." }, { "end": 2922, "start": 2914, "text": " I didn't think I could do these things at first. Would you tell us about future directions for you? Like what do you find interesting these days? And what do you think you want to do next?" }, { "end": 2930, "start": 2922, "text": " Well, let me give you the pitch, because I've been thinking a lot about this. So basically I want to integrate the stuff that I've worked on before into a much more cohesive direction." }, { "end": 2938, "start": 2930, "text": " And I'm really fascinated by this idea of multi-agent cooperation, and I think I have many ideas for how to do social learning in a multi-agent system." }, { "end": 2948, "start": 2938, "text": " But what I think would be really cool is if you could train a policy that's able to quickly adapt to a new agent and dynamically coordinate with it in an ad hoc way." }, { "end": 2957, "start": 2948, "text": " And then you could take that policy and generalize it to coordinating with a human, because I think there are really interesting challenges there. And then fine-tune your policy by learning from the human." }, { "end": 2965, "start": 2957, "text": " So I think uniting these directions of learning from humans and learning from other agents into the social learning direction is what I'm really excited about." }, { "end": 2975, "start": 2965, "text": " Do you have any plans to take a break now that you're done your PhD? Oh my goodness. Yes, I have been planning to take a break since June, but so far it hasn't materialized." }, { "end": 2987, "start": 2975, "text": " But very soon I hope to be able to actually take some time off, and I really want to go hiking. I want to do like a 10-day backpacking trip, kind of by myself, disconnect, turn off the phone for a while, be in the woods. So that's the plan." }, { "end": 2997, "start": 2987, "text": " Natasha Jaques, I've learned so much from your work and from you. This has been a real treat for me. I'm really looking forward to reading whatever you come up with next." }, { "end": 3004, "start": 2997, "text": " So thanks, thanks so much for sharing your time and your insight with us all today. Oh, thank you so much for inviting me. This is so fun talking about this stuff." }, { "end": 3021, "start": 3004, "text": " That's our episode for today folks. Be sure to check talkrl.com for more great episodes." } ]
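
The influence reward discussed in this episode can be made concrete with a small counterfactual calculation: the influencer compares the influencee's policy conditioned on the action it actually took against the influencee's marginal policy with the influencer's action marginalized out. The sketch below is a minimal illustration of that idea under simplifying assumptions (discrete actions, access to the influencee's conditional action probabilities). The function and variable names are illustrative rather than taken from the paper's released code, and the per-step KL term shown here only approximates the mutual-information framing mentioned in the conversation when averaged over many steps.

import numpy as np

def social_influence_reward(cond_action_probs, influencer_policy, actual_action):
    # cond_action_probs[i, :] holds p(a_j | a_k = i, s): the influencee j's action
    # distribution under each counterfactual action i of the influencer k.
    # influencer_policy[i] holds p(a_k = i | s), the influencer's own policy.
    # actual_action is the index of the action the influencer really took.
    marginal = (influencer_policy[:, None] * cond_action_probs).sum(axis=0)  # p(a_j | s)
    conditional = cond_action_probs[actual_action]                           # p(a_j | a_k, s)
    eps = 1e-12  # numerical guard against log(0)
    # KL(conditional || marginal): how much the influencer's actual action shifted
    # the influencee's policy relative to the counterfactual mixture.
    return float(np.sum(conditional * (np.log(conditional + eps) - np.log(marginal + eps))))

# Toy usage: 3 counterfactual influencer actions, 4 influencee actions.
cond = np.array([[0.70, 0.10, 0.10, 0.10],
                 [0.25, 0.25, 0.25, 0.25],
                 [0.10, 0.10, 0.10, 0.70]])
pi_k = np.array([0.5, 0.3, 0.2])
print(social_influence_reward(cond, pi_k, actual_action=0))

The reward is largest when the influencer's chosen action pushes the other agent's policy far from what it would have done on average, which is why, in the episodes described above, agents that could only signal through movement ended up communicating about apples.
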
About TalkRL Podcast: All Reinforcement Learning, All the Time
Introducing TalkRL Podcast! Also check out our website at talkRL.com
https://media.transistor…950.mp3?src=site
This is TalkRL Podcast, all reinforcement learning, all the time. Interviews with brilliant folks from across the world of RL. I'm your host, Robin Chohan. The idea with TalkRL Podcast is to hear from brilliant folks from across the world of reinforcement learning, both research and applications. As much as possible, I want to hear from them in their own language. I try to get to know as much as I can about their work beforehand. And I'm not here to convert anyone. I want to reach people who are already into RL. So we won't stop to explain what a value function is, for example. Though we also won't assume that everyone's read all the latest papers. Why am I doing this? Because it's a great way to learn from the most inspiring people in the field. There's so much happening in the universe of RL, and there's tons of interesting angles and so many fascinating minds to learn from. Now I know there's no shortage of books, papers, lectures, but so much goes unsaid. I mean, I guess if you work for, like, Mila or Amii or Vector Institute, you might be having these conversations over coffee all the time. But I live in a little village in the woods in BC, so for me, these remote interviews are a great way to have these conversations. And I hope sharing them with the community makes it more worthwhile for everyone. In terms of format, the first two episodes were interviews in longer form, around an hour long. But going forward, some might be a lot shorter. It largely depends on the guest. If you want to be a guest or suggest a guest, go to talkrl.com slash about and you'll find a link to a suggestion form. Thanks for listening. talkrl.com.
[ { "end": 13, "start": 0, "text": " This is TalkRail Podcast, all reinforcement learning, all the time." }, { "end": 16, "start": 13, "text": " Interviews at Brilliant Folks from across the world of RL." }, { "end": 20, "start": 16, "text": " Time your host, Rob and Chauhan." }, { "end": 24.400000000000002, "start": 20, "text": " The idea with TalkRail Podcast is to hear from brilliant folks from across the world of" }, { "end": 28.72, "start": 24.400000000000002, "text": " reinforcement learning, both research and applications." }, { "end": 32.4, "start": 28.72, "text": " As much as possible, I want to hear from them in their own language." }, { "end": 36.04, "start": 32.4, "text": " I try to get to know as much as I can about their work beforehand." }, { "end": 38.36, "start": 36.04, "text": " And I'm not here to convert anyone." }, { "end": 41.16, "start": 38.36, "text": " I want to reach people who are already into RL." }, { "end": 45.8, "start": 41.16, "text": " So we won't stop to explain what a value function is, for example." }, { "end": 49.879999999999995, "start": 45.8, "text": " Though we also won't assume that everyone's read all the latest papers." }, { "end": 51.2, "start": 49.879999999999995, "text": " Why am I doing this?" }, { "end": 55.04, "start": 51.2, "text": " Because it's a great way to learn from the most inspiring people in the field." }, { "end": 59.28, "start": 55.04, "text": " There's so much happening in the universe of RL, and there's tons of interesting angles" }, { "end": 62.56, "start": 59.28, "text": " and so many fascinating minds to learn from." }, { "end": 68.16, "start": 62.56, "text": " Now I know there's no shortage of books, papers, lectures, but so much goes unsaid." }, { "end": 73.72, "start": 68.16, "text": " I mean, I guess if you work for, like Miele or Amy or Vector Institute, you might be having" }, { "end": 76.52, "start": 73.72, "text": " these conversations over coffee all the time." }, { "end": 81.03999999999999, "start": 76.52, "text": " But I live in a little village in the woods in BC, so for me, these remote interviews are" }, { "end": 83.56, "start": 81.03999999999999, "text": " a great way to have these conversations." }, { "end": 87.68, "start": 83.56, "text": " And I hope sharing them with the community makes it more worthwhile for everyone." }, { "end": 92.36, "start": 87.68, "text": " In terms of format, the first two episodes were interviews in longer form around an hour" }, { "end": 93.36, "start": 92.36, "text": " long." }, { "end": 96.64, "start": 93.36, "text": " But going forward, some might be a lot shorter." }, { "end": 99.48, "start": 96.64, "text": " It largely depends on the guest." }, { "end": 105.16, "start": 99.48, "text": " If you want to be a guest or a suggest a guest, go to talkrl.com slash about and you'll" }, { "end": 107.96000000000001, "start": 105.16, "text": " find a link to a suggestion form." }, { "end": 109.64, "start": 107.96000000000001, "text": " Thanks for listening." }, { "end": 116.64, "start": 109.64, "text": " Talkrl.com." } ]