L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
Video: https://www.youtube.com/watch?v=YqvhDPd1UEw

[00:31:23] ...approaches to train a linear controller to maximize the expected reward of a robot. Steps one, two, and three are the unsupervised learning that can happen ahead of time, and then you can run RL on the representation you've learned. One thing that's really interesting here: remember the cake analogy, where Yann LeCun would say that reinforcement learning is the cherry on the cake, which is tiny compared to the cake itself. Why is RL just the cherry? Because there isn't a lot of reward; there's just a small amount of reward signal

[00:31:59] coming from the environment, whereas there's a lot of signal coming from self-supervised learning, and that's the foundation, the bulk of the cake. If you look at what's happening here, the VAE network has about four million parameters, the RNN dynamics model has about four hundred thousand parameters, and then the controller, the thing that is actually learned with RL, only has eight-hundred-something parameters. There's a massive difference: RL only has to learn a small number of parameters.
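
To make the parameter counts concrete, here is a minimal numpy sketch of a linear controller of the kind World Models puts on top of the frozen VAE code z and RNN hidden state h. The dimensions z_dim=32, h_dim=256, and action_dim=3 are my assumptions for the car-racing setup (they reproduce the "eight-hundred-something" count), not values read off the slide.

```python
import numpy as np

# Assumed dimensions (CarRacing-style setup): 32-dim VAE code, 256-dim RNN
# hidden state, 3 continuous actions (steer, gas, brake).
Z_DIM, H_DIM, A_DIM = 32, 256, 3

# The controller is just an affine map: a = W @ [z; h] + b
W = np.zeros((A_DIM, Z_DIM + H_DIM))
b = np.zeros(A_DIM)

def controller(z, h):
    """Linear controller on top of the frozen representation."""
    return np.tanh(W @ np.concatenate([z, h]) + b)  # squash into action range

# Parameter count: this tiny piece is the only part trained with RL
# (a black-box / evolution-strategy search is enough at this size).
n_params = W.size + b.size
print(n_params)  # (32 + 256) * 3 + 3 = 867 -- the "eight-hundred-something" parameters
```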

[00:32:28] That maps to RL only having a small amount of signal to learn from, whereas the self-supervised part has to learn most of the parameters, millions of parameters, and that's done from unsupervised data. Okay, so here's an example of an input frame, 64 by 64 pixels, and a frame reconstruction, which roughly matches up — not perfectly, but it gets the gist. And here we have results when we use just z versus z and h, where h is the RNN hidden state; it shows that it's important that the RNN hidden state captures something meaningful about the world. Let's look at results. What we

[00:33:12] see in the table are the scores obtained with the model described — the highest score in the CarRacing environment compared to previous methods. In principle, with unlimited training, pure RL should be able to learn this too, but when you limit the amount of training time, using self-supervised learning to learn a representation, combined with reinforcement learning to learn the controller, allows us to get higher scores than previous methods that were pure RL. So this is the model we looked at before; the one experiment we saw

[00:33:53] so far used the car racing environment. The second experiment is one where you have to dodge things being shot at you, in a VizDoom environment. The input will look something like what we see on the left, but sometimes you'll see fireballs coming at you when they're shooting at you, and you have to dodge those fireballs to stay alive and get high reward. Same approach: train a VAE, train an RNN world model, then a linear controller trained with RL on top of that. And again, this linear controller trained on top of

[00:34:31] that is actually trained inside the RNN simulator itself, so you don't need to simulate what things will look like. Rendering is often computationally expensive; if you need to go all the way to rendering to train your policy, it'll take a lot longer to do the same number of rollouts. Here they stay in the low-dimensional latent space to train the policy. The environment is called Doom: Take Cover. Here's a higher-resolution version of what this looks like if you were to play the game yourself. The same approach is laid out here again:

[00:35:03] unsupervised learning does all the stuff at the top here — millions of parameters learned — and then RL only needs to learn about a thousand parameters. Again, a beautiful illustration of the cake idea. So here's what this looks like. One thing to keep in mind is that you can sometimes get quirky results, where the learned simulator of the world allows you to do things you cannot do in the real environment, and that's something to look out for that they highlight on their

[00:35:43] website. If you look at the normal-temperature versus higher-temperature versions, you'll see some differences there. So here are the results: depending on the temperature, we get different discrepancies. For low temperature we see a very high virtual score (inside the learned simulator), but the actual score is not so great; for higher temperatures we have a closer match between the virtual score and the actual score. Let me quickly highlight what is meant by temperature here. Typically in RL you have a policy with stochastic output,

[00:36:29] so you have a distribution over actions, and that distribution over actions can have a temperature parameter controlling how strongly you favor your most preferred action. If you make that temperature small, close to zero, then you'll almost always pick your most preferred action, and you end up with a close-to-deterministic policy. With a close-to-deterministic policy you can often exploit quirks in your simulator.
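
As a small illustration of the temperature parameter, here is a numpy sketch of a softmax action distribution with temperature; the logits are made up, and this only shows how a low temperature collapses the policy onto its preferred action while a higher temperature keeps randomness in.

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Action distribution: lower temperature -> more greedy, higher -> more random."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()          # subtract max for numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

logits = np.array([2.0, 1.0, 0.5])  # hypothetical preferences over three actions

print(softmax_with_temperature(logits, 0.1))   # ~[1.0, 0.0, 0.0]: near-deterministic
print(softmax_with_temperature(logits, 1.0))   # noticeably spread out
print(softmax_with_temperature(logits, 10.0))  # close to uniform
```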

[00:37:03] Whereas if your policy has a higher temperature, you give yourself a little bit of randomness, and then you cannot exploit the very specific quirks in the learned simulator, because the randomness prevents you from following that very quirky path where all of a sudden you get a high score even though you really can't do that in the real world — your learned simulator just has a small bug, and you won't be able to trigger that bug. That's what's going on here with temperature: at higher temperature we are not able to exploit tiny bugs in the learned simulator, so we

[00:37:32] have to learn something more robust, and that leads to a better match between performance in the real environment and in the learned simulator. Okay, so that was the World Models paper by David Ha and collaborators. Now, one question you could ask yourself: if we're going to learn a world model — a latent-space simulator — wouldn't it make sense to try to learn a latent space such that control becomes easier? What do I mean by that? If you look at the control literature, some control problems are easy to solve and some

[00:38:15] control problems are very hard to solve. Maybe we can map our pixel observations, and the world dynamics in pixel space, into latent-space dynamics that satisfy certain properties that make the resulting control problem easier to solve. A good example of this is linear dynamical systems: if you have a linear dynamical system, the control problem tends to be relatively straightforward to solve. That's what the paper we're going to cover here does — hold on, give me one second here,

[00:38:58] let me cover something else first. One thing that might happen is that you train the world model on your randomly collected data, then train your policy and test it in the real world, and it might not always work. The reason it might not work is that the randomly collected data might not have been interesting enough to cover the parts of the space where you would get high reward. What you'd want to do then is iterate this process, and at this point you effectively have a model-based reinforcement learning

[00:39:43] procedure: you collect data, you learn a model, you find a policy in the learned model, you deploy that policy, you collect new data to improve your world model, and you repeat. That's what they did, and this is shown for the cart-pole swing-up: after about twenty iterations of this, it was able to learn to swing up. Now, a couple of other world-model papers: Action-Conditional Video Prediction using Deep Networks in Atari Games, listed at the top here, is worth checking out, and Model-Based Reinforcement Learning for Atari is another one worth checking out.
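
The iteration being described is the generic model-based RL outer loop. Below is a schematic Python sketch of that loop on a made-up 1-D problem; the toy environment, the least-squares "world model", and the greedy policy improvement are hypothetical stand-ins, not anything from the papers mentioned.

```python
import numpy as np

# --- Hypothetical stand-ins, just so the loop below runs end to end. ---
def rollout(policy, horizon=20):
    """Collect (state, action, next_state) tuples from a toy 1-D 'environment'."""
    s, data = 0.0, []
    for _ in range(horizon):
        a = policy(s)
        s_next = s + a + np.random.normal(scale=0.01)   # unknown true dynamics
        data.append((s, a, s_next))
        s = s_next
    return data

def fit_world_model(dataset):
    """Fit s' ~ s + k*a by least squares; stands in for VAE/RNN training."""
    S, A, S_next = map(np.array, zip(*dataset))
    k = np.dot(A, S_next - S) / (np.dot(A, A) + 1e-8)
    return lambda s, a: s + k * a

def improve_policy(model, goal=1.0):
    """Greedy one-step policy inside the learned model; stands in for RL / CMA-ES."""
    actions = np.linspace(-1, 1, 41)
    return lambda s: actions[np.argmin([(model(s, a) - goal) ** 2 for a in actions])]

# --- The outer loop itself: collect data, learn model, improve policy, repeat. ---
policy = lambda s: np.random.uniform(-1, 1)             # start with random exploration
dataset = []
for iteration in range(20):
    dataset += rollout(policy)                          # 1. collect real data
    model = fit_world_model(dataset)                    # 2. learn the world model
    policy = improve_policy(model)                      # 3. find a policy in the model
print("final state after a rollout:", rollout(policy)[-1][2])
```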

[00:40:21] Then there's Learning Latent Dynamics for Planning from Pixels (PlaNet), which we'll also look at a little later. If you want to look more closely at the specifics of what was covered, there's a really nice website, worldmodels.github.io, which has the code and many demos that let you play with what the latent variables in the VAE are actually doing, and so forth. I highly recommend checking that out. And here is a video of the Doom Take Cover agent in action: you get these fireballs

[00:41:04] coming at you, and the agent has learned to get out of the way so it doesn't get killed. All right. What we've looked at so far is how to go from observation to state, and then learn a model in that latent state space. Now we're going to look not only at mapping observation to state, but also at mapping state and action to next state. This is what I alluded to earlier — I jumped the gun a little bit on it. We're now doing representation learning that isn't just ahead-of-time learning of a mapping from pixels to, hopefully, state or

[00:41:45] something like state, but that already looks at the dynamics while doing the representation learning. And if we're looking at the dynamics during representation learning, why not learn a representation where the dynamics is such that control becomes easier? For example, learn a representation such that in the new representation space the dynamics is linear, because if the dynamics is linear, control all of a sudden becomes easy, and you turn your original pixel-space problem, which might be highly nonlinear and very complex to design a controller for, into a

[00:42:20] latent-space problem where the dynamics is linear and very simple to solve. That's the main idea behind the Embed to Control paper we're covering now. The environments they considered were pendulum, cart-pole, and a three-link arm — but again, from pixels. So from pixel input they learn a latent representation where hopefully the dynamics is close to linear, and hence control becomes easy. The method they apply is stochastic optimal control, a fairly standard control method that you can apply to linear systems, and Embed to

[00:42:57] Control learns a latent-space model using a variational encoder while forcing a locally linear latent-space dynamics model. Once you have a locally linear model, you can apply stochastic optimal control. Here's an example of that in action: once you have such a model, it's very easy to find the controller that brings you to a target, say a stable fixed point. For that controller to work well locally along the trajectory, you want locally linear dynamics models there.
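
To make "locally linear latent dynamics" concrete, here is a small numpy sketch of an E2C-style transition where the matrices A(z), B(z) and offset o(z) are produced from the current latent state; the dimensions and the random affine maps that produce them are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

Z_DIM, U_DIM = 4, 1          # assumed latent and control dimensions
rng = np.random.default_rng(0)

# A tiny "network" (here just fixed random affine maps) that outputs, for the
# current latent z_t, the local transition matrices A(z_t), B(z_t) and offset o(z_t).
W_A = rng.normal(scale=0.1, size=(Z_DIM * Z_DIM, Z_DIM))
W_B = rng.normal(scale=0.1, size=(Z_DIM * U_DIM, Z_DIM))
W_o = rng.normal(scale=0.1, size=(Z_DIM, Z_DIM))

def local_linear_step(z, u):
    """E2C-style transition: z_{t+1} = A(z_t) z_t + B(z_t) u_t + o(z_t)."""
    A = (W_A @ z).reshape(Z_DIM, Z_DIM) + np.eye(Z_DIM)   # stay near the identity
    B = (W_B @ z).reshape(Z_DIM, U_DIM)
    o = W_o @ z
    return A @ z + B @ u + o

z, u = rng.normal(size=Z_DIM), np.array([0.5])
print(local_linear_step(z, u))   # next latent state under the local linear model
```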

[00:43:31] In fact, the way these methods usually work is that they tend to linearize the dynamics along trajectories; but if you learn a latent-space model that is already linear, you're already good to go, and that linearization is not an approximation — it's the actual model that you learned, so you'd hope to have a very good fit of your linear model to the actual dynamics. The costs are often assumed to be quadratic — that's an assumption — and this puts you in the class of problems called LQR problems, linear quadratic regulator problems, sometimes LQG problems if you

[00:44:01] also have some Gaussian noise in there. These problems assume that you have linear dynamics and quadratic costs, a quadratic cost meaning there are quadratic penalties for being away from the state where you're supposed to be. Okay, so of course we can't just map from our original pixel observations to some space where the dynamics is linear and ignore the real-world dynamics — the latent space has to map back out to the real world too. So let's look at the complete loss function.
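
For reference, once you do have linear dynamics x_{t+1} = A x_t + B u_t and quadratic costs x^T Q x + u^T R u, the finite-horizon LQR controller comes out of a backward Riccati recursion like the one below; the double-integrator numbers are made up, and the Gaussian-noise (LQG) case is omitted since additive noise leaves the same feedback gains optimal.

```python
import numpy as np

def lqr_gains(A, B, Q, R, horizon):
    """Finite-horizon LQR via the backward Riccati recursion.
    Returns feedback gains K_t such that u_t = K_t @ x_t is optimal."""
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A + A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]                      # K_0, ..., K_{H-1}

# Made-up double-integrator example: position/velocity state, force input.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)            # quadratic penalty for being away from the origin
R = 0.1 * np.eye(1)      # quadratic penalty on control effort

K = lqr_gains(A, B, Q, R, horizon=50)[0]
x = np.array([1.0, 0.0])                  # start 1 unit away from the target
print("first control:", K @ x)
```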

[00:44:33] First of all, mapping to the latent space z: you need to be able to reconstruct the original image, so z should not lose important information about what the situation is. Then we have the temporal aspect: ultimately we want to reach a goal, and we want accurate long-term prediction, so that when we feed in the sequence of actions that achieves the goal, the model also predicts that that's going to be the case. So at every step along the way we're going to make predictions, and we use linear models, so the prediction must be locally linearizable for all valid control magnitudes, such

[00:45:07] that when we optimize our controls we get something that, if it works in simulation, also works in the real world. We're going to force that to be true by learning a model that does this by construction. So let's look at that model. Here's the next component: we already have our encoder and decoder, and we have our control input u — in controls, u is usually used for the control input, whereas in reinforcement learning a is often used for the action. Then we have our next latent state z_{t+1}. Now, for this to be meaningful, the same decoder

[00:45:42] should be able to reconstruct the image input at time t+1; if that's the case, then the latent-space dynamics was correct. So we're going to learn a locally linear model for that transition to make this work. Once we have all that in place, we're pretty much good to go. We use this model over a longer horizon too, to make sure we don't just do this over one step: we lay it out over longer horizons, and as we train the model we have multi-step predictions over which we

[00:46:19] impose this loss function. You might ask: why do we need all this? Well, it turns out that if you make a small mistake in your prediction of the next state, you might say it's just a small mistake, no big deal. But the problem is that you land in a latent state your model might not have been trained on, and when you make the next prediction, to go to time t+2, you're doing it from a time t+1 latent state that you're not familiar with, that doesn't lie in

[00:46:50] your training distribution. Now you might make a not-so-good prediction that makes things even worse, and this accumulation of errors over time can lead to divergence. For any kind of simulation you want to run over longer horizons, you need some mechanism to avoid that. One mechanism is to explicitly have a multi-step loss; another is to ensure that your next-state prediction comes from the correct latent distribution. So if you embed into a unit-Gaussian latent space, then after you do your next-state

[00:47:21] prediction, what you get should also come from a unit-Gaussian distribution, to ensure that when you go from there to the next step you're again making predictions from a familiar regime. All right, so those are the components: we have an autoencoder turning image x into a latent state, with accurate long-term prediction of latent states, because we ensure that the next latent state comes from the correct distribution — a unit Gaussian, just as our autoencoder forces the encoding to be.
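
Here is a rough numpy sketch of the two mechanisms just mentioned: roll the latent dynamics forward several steps, penalize the multi-step prediction error against encoded future frames, and penalize drift away from the unit-Gaussian scale the encoder is regularized toward. The dynamics matrix and "encoded" latents are random stand-ins, and the scale penalty is only a crude proxy for the actual KL term.

```python
import numpy as np

rng = np.random.default_rng(0)
Z_DIM, HORIZON = 8, 5

A = np.eye(Z_DIM) + 0.05 * rng.normal(size=(Z_DIM, Z_DIM))  # stand-in latent dynamics
z_encoded = rng.normal(size=(HORIZON + 1, Z_DIM))           # stand-in encoder outputs

def multi_step_loss(A, z_encoded, kl_weight=0.1):
    """Roll the model forward from z_0 and compare against encoded future latents,
    while keeping the rollout roughly unit-Gaussian in scale."""
    z = z_encoded[0]
    pred_loss, prior_loss = 0.0, 0.0
    for t in range(1, len(z_encoded)):
        z = A @ z                                   # latent rollout, no re-encoding
        pred_loss += np.mean((z - z_encoded[t]) ** 2)
        prior_loss += np.mean(z ** 2)               # crude pull toward N(0, I) scale
    return pred_loss + kl_weight * prior_loss

print(multi_step_loss(A, z_encoded))
```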

[00:47:53] And the prediction must be locally linearizable: we don't get some fancy neural network to predict the next latent state from the current one — it has to be feasible with just a linear prediction. Okay, so this is the full system they proposed, with all the loss terms shown at the bottom. Now let's take a look at how this works. They applied it to cart-pole and showed a good amount of success there, and here are some evaluations showing that Embed to Control can indeed do inverted-pendulum swing-up pretty well, can do cart-pole balancing,

[00:48:29] and can do the three-link arm — so good results on the three environments they experimented with. Here's what these environments look like. This is from raw images, so what we're watching is effectively also what the agent sees, up to downsampling, so you can actually look at the environments themselves, with one environment shown on the left and another on the right. And here we have cart-pole balancing in action. This gives you some idea of how capable the approach is: it does very well. At the same time, clearly these

[00:49:10] environments are not nearly as complicated as what we saw with the UNREAL work on DeepMind Lab navigation tasks; these are 2D, relatively low-resolution tasks with a single robot that you fully control. Now, in Embed to Control the idea was to have a single linear system, and for your full dynamics that might be difficult. But it's been shown in controls that very often, even though your real system is highly nonlinear, locally it can be linearized. So you might ask the question: can we instead follow the same

[00:49:55] philosophy as in Embed to Control, but instead of learning a single linear model, learn a collection of linear models, in a way that allows us to apply time-varying linear control methods, which are also extremely efficient? Maybe then there's a richer set of environments we can solve, because time-varying linear models can cover more than a single linear model can. That's what we did in this work called SOLAR, shown in action on the right. Here we now have different linear models at different

[00:50:25] times, and we learn to embed into a space where at each time step a local linear model can capture the transition very well. So you still get initial random rollouts, followed by learning a representation and latent dynamics — but now not a single linear model, a sequence of linear models. Once we've done that, we can start running the robot: infer where we are in this sequence of linear models, find the corresponding sequence of controllers, execute it, get new data, and repeat. This is model-based reinforcement

[00:51:07] learning in action, in a setting where we make the latent space such that it's very efficient to find optimal policies. It might not succeed the first time around, so we get new data, update the representation, infer where we are in terms of linear dynamics models, update the policy, and repeat. This can actually learn, in about 20 minutes, to stack something like a Lego block, learning from pixels as input. Okay, so we've looked at state representation learning — how to go from raw observations to state — learned

[00:51:42] ahead of time with a VAE, and in the World Models paper we looked at learning a dynamics model and a mapping from pixels to state at the same time, and maybe benefiting from that. Now here's another way we can think about this: we could put in some prior information. When we have pixels as input, we know that under the hood there is a state, and we know that state is just a bunch of real numbers. So what they did in this paper is say: okay, when we collect data, we're going to learn a latent

[00:52:16] representation generated by a sequence of convolutional filters; then we apply a spatial softmax, meaning for each of these 16 filters we look at where that filter is most active, via a spatial softmax, and output the corresponding coordinates. Those coordinates should allow us to reconstruct the original image, because they capture the essence — they correspond to the coordinates of the objects in the scene, and if you know the coordinates of the objects in the scene, at least to a large extent you can reconstruct the scene.
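
Here is a numpy sketch of the spatial softmax just described: for each of the 16 feature maps it computes a softmax over spatial locations and returns the expected (x, y) coordinate, giving 2 numbers per filter, 32 in total. The feature maps below are random placeholders.

```python
import numpy as np

def spatial_softmax(features):
    """features: (num_filters, H, W) activation maps.
    Returns (num_filters, 2) expected (x, y) coordinates, one point per filter."""
    n, h, w = features.shape
    flat = features.reshape(n, -1)
    flat = flat - flat.max(axis=1, keepdims=True)        # numerical stability
    probs = np.exp(flat) / np.exp(flat).sum(axis=1, keepdims=True)
    probs = probs.reshape(n, h, w)

    xs = np.linspace(-1.0, 1.0, w)                       # normalized image coordinates
    ys = np.linspace(-1.0, 1.0, h)
    expected_x = (probs.sum(axis=1) * xs).sum(axis=1)    # marginal over rows, weight columns
    expected_y = (probs.sum(axis=2) * ys).sum(axis=1)    # marginal over columns, weight rows
    return np.stack([expected_x, expected_y], axis=1)

feature_maps = np.random.randn(16, 32, 32)               # placeholder conv activations
points = spatial_softmax(feature_maps)
print(points.shape)   # (16, 2) -> a 32-dimensional feature vector for control
```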

[00:52:50] And once we have learned that representation, we can learn to control from just a 32-dimensional input, rather than needing to take in a 240-by-240 image, which is much higher dimensional and much more expensive to run reinforcement learning against. This is actually capable of learning a pretty wide range of skills. Here is the data collection: the robot just moves around randomly collecting data; that data is used to train the spatial autoencoder; then we record the goal situation and do reinforcement

[00:53:37] learning in the feature space — the 32-dimensional feature space — and it learns, in a relatively short amount of time, how to push the block to the target location. Now, here's another, rather interesting, approach to going from image observations to state, or something like state: it doesn't bother with reconstruction at all. It says all we need to do is think about physics. What does physics tell us? Well, we want to find an encoding of the underlying state from the observation, so here we'll have a big

[00:54:15] neural network that turns the image observation into an underlying state. What do we know about state? We know that in physics there will be coordinates, and derivatives of coordinates, which are the velocities of objects. So there is a state variable corresponding to velocity and another state variable corresponding to position, and the change in position is the velocity — velocity is the derivative of position. What else do we know? We know that when the world is in different states, we're going to need

[00:54:52] different state values. By default, if random situations are presented to us, we want the embeddings assigned to different situations to be far apart — that's what this first loss is saying: we want embeddings to be far apart. But if all you do is push embeddings far apart, that's not enough to get any structure. So the next loss says that at consecutive times the position state variables should be close, and it also says that between time t and t-1 the velocity state variables should be close,

[00:55:29] because velocity cannot change quickly — this is saying acceleration is going to be small on average, so something like conservation of momentum and energy is captured in here. And the last part says we need a representation in which the actions are able to influence what the next state is going to be, so we want correlation between the action and the change in state. All right, so this is tested on a couple of environments where they just collect data with pixel input.
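
A rough numpy sketch of the kinds of loss terms just listed, in the spirit of these physics-prior ("robotic priors") objectives: embeddings of random state pairs pushed apart, positions changing slowly, velocities changing even more slowly (small acceleration), and the change in state being predictable from the action. Splitting the embedding into position and velocity halves, and the equal weighting, are my assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 50, 8                     # trajectory length, embedding size (pos half + vel half)
states = rng.normal(size=(T, D)) # placeholder embeddings phi(o_t) from some encoder
actions = rng.normal(size=(T - 1, 2))

pos, vel = states[:, : D // 2], states[:, D // 2 :]

# 1. Positions should change slowly; 2. velocities (their derivatives) even more slowly.
slowness = np.mean(np.sum((pos[1:] - pos[:-1]) ** 2, axis=1))
small_accel = np.mean(np.sum((vel[1:] - vel[:-1]) ** 2, axis=1))

# 3. Random pairs of states should not collapse onto each other (variation term).
i, j = rng.integers(0, T, 32), rng.integers(0, T, 32)
variation = np.mean(np.exp(-np.sum((states[i] - states[j]) ** 2, axis=1)))

# 4. The state change should be predictable from (correlated with) the action taken.
delta = states[1:] - states[:-1]
W, _, _, _ = np.linalg.lstsq(actions, delta, rcond=None)   # best linear action->delta map
action_influence = np.mean((delta - actions @ W) ** 2)

loss = slowness + small_accel + variation + action_influence
print(loss)
```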

[00:56:09] They then learn a state representation that doesn't do reconstruction and just tries to satisfy those invariants, which are expected to be good loss functions based on physics, and they learn pretty interesting state representations that way. Here's another example of state representation learning in action — I'm going relatively quickly here, just trying to get a lot of different ideas across. We covered the beta-VAE in one of the early lectures: a beta-VAE is a variational autoencoder where we put a coefficient beta in front of the KL loss against the prior, and by making that

[00:56:41] coefficient beta bigger than one, effectively we're trying to make the latent variables z maximally independent — we're trying to find a disentangled representation of the scene. The thinking here is that if we want to find something we'd think of as state from raw pixel values, we probably need to find something that's strongly disentangled, so this builds that prior in. And they show that by having this beta-VAE you actually get much better transfer.
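
The beta-VAE objective just described, written out for a diagonal-Gaussian encoder: a reconstruction term plus beta times the KL to the unit-Gaussian prior. The inputs below are placeholder numbers; the KL expression is the standard closed form.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Reconstruction + beta * KL( N(mu, diag(exp(log_var))) || N(0, I) ).
    beta > 1 pushes the latent dimensions toward independence (disentanglement)."""
    recon = np.sum((x - x_recon) ** 2)                       # Gaussian-likelihood surrogate
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon + beta * kl

# Placeholder values standing in for one encoded/decoded image.
x, x_recon = np.ones(10), 0.9 * np.ones(10)
mu, log_var = 0.1 * np.ones(4), -0.2 * np.ones(4)
print(beta_vae_loss(x, x_recon, mu, log_var))
```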

[00:57:16] They train a beta-VAE and then do Q-learning with a network that takes the embeddings from the beta-VAE, and compare with regular Q-learning. On the left here we see what happens in the training environments: there, regular Q-learning and DARLA (which is Q-learning on top of the beta-VAE representation) do about equally well. But when we look at a new, related task that looks very different, doing the representation learning — shown at the bottom right — gets much better performance. Top left is

[00:57:51] actually not getting the job done — it's not collecting the yellow targets — whereas the bottom variant is good at collecting the yellow targets. What's changed? Well, the walls in the background have changed to pink rather than green, and the ground has changed to blue rather than yellow. It's a relatively small change, but the original DQN, which doesn't do representation learning per se, hasn't learned those notions, whereas DARLA has learned a representation that allows it to transfer zero-shot to this new environment much better.

[00:58:22] Here's another idea for representing state and dynamics. We looked at GANs in, I think, lecture four of this class. Now, if you just train a GAN, you generate each frame independently. What we want is to learn transitions that are consistent over time, so what we're going to do is have a discriminator that looks at two consecutive observations and decides whether they are consecutive observations from the real world or consecutive observations generated

[00:58:57] by a generator. So the generator is trying to generate fake sequences of observations to fool the discriminator, and at convergence that means the generator produces observation sequences that are indistinguishable from real-world observation sequences. Once you have that, you can use the generator as a simulator and learn in that simulator, or plan in that simulator; in this case we did planning to try to achieve goals. What we see on the right is that we did this for rope manipulation,

[00:59:30] so on the left is the initial configuration of the rope, on the right the desired end state of the rope, and with Causal InfoGAN we see what it thinks the interpolated states are — the sequence of states you have to go through to get from the initial state to the end state; same for the next row, and the next. Compare that with a regular InfoGAN, which doesn't look at transitions, only at individual frames: we see that the interpolation there doesn't necessarily lead to

[00:59:56] intermediate states that are meaningful for a robot to follow as a sequence of intermediate rope configurations to get from start to goal. So by training Causal InfoGAN, which looks at the realism of transitions rather than just the realism of individual frames, we're able to learn a dynamics model in a latent space that a robot can use to make plans. Now, one of the first things we covered was World Models, which showed that you can learn a latent space, then learn an RNN on top of

[01:00:31] the latent space for the dynamics, and then learn a linear controller on top of that. Of course that's a very simple setup — it's almost surprising that it works, and what's interesting is that it actually does work in a range of environments — but keeping it that simple is not likely to be the final answer. So here's a paper called PlaNet: Learning Latent Dynamics for Planning from Pixels. What's new here is that after learning the latent-space dynamics model, it's not deploying a learned policy; it's

[01:01:03] using a planner — using lookahead to ask for which sequence of actions it gets the most reward, taking the first action of that sequence, and repeating. And here the latent-space encoding is learned together with the dynamics: joint learning of encoding and dynamics.
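
Here is a sketch of the kind of latent-space lookahead planner this describes — cross-entropy-method search over action sequences used in model-predictive-control fashion, which is in the spirit of what PlaNet does. The latent dynamics and reward model below are random linear stand-ins; only the planning loop itself is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
Z_DIM, A_DIM, HORIZON = 8, 2, 12

A = np.eye(Z_DIM) + 0.05 * rng.normal(size=(Z_DIM, Z_DIM))   # stand-in latent dynamics
B = 0.1 * rng.normal(size=(Z_DIM, A_DIM))
w = rng.normal(size=Z_DIM)                                   # stand-in reward weights

def imagined_return(z0, action_seq):
    """Roll the latent model forward and sum predicted rewards."""
    z, total = z0, 0.0
    for a in action_seq:
        z = A @ z + B @ a
        total += w @ z
    return total

def cem_plan(z0, iters=5, pop=100, elite=10):
    """Cross-entropy method over action sequences; returns the first action."""
    mean = np.zeros((HORIZON, A_DIM))
    std = np.ones((HORIZON, A_DIM))
    for _ in range(iters):
        samples = mean + std * rng.normal(size=(pop, HORIZON, A_DIM))
        returns = np.array([imagined_return(z0, s) for s in samples])
        elites = samples[np.argsort(returns)[-elite:]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-3
    return mean[0]          # MPC: execute the first action, then re-plan

print(cem_plan(rng.normal(size=Z_DIM)))
```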

[01:01:43] Recently there has been an improvement called Dreamer, from roughly the same authors. What they show is that instead of running online planning in latent space, you can train an actor-critic agent inside the latent-space simulator, and that will actually do better than PlaNet. They also showed that for the dynamics model, it's better in these environments to learn a stochastic dynamics model rather than a deterministic one. So there are two big differences between PlaNet and Dreamer: going from planning to a learned actor-critic agent, and using a stochastic model. Now, so far we've talked about latent-space models and directly learning to control in the latent space. There is also work that goes back to image space, and so here are

[01:02:20] some example executions by a robot moving objects to target locations. This is done by a system that learned a video prediction model: it learns, as a function of the action the robot takes, what the next frame it sees will be, and given the next action, what the frame after that will be, and so forth. Once you have an action-conditional video prediction model, and you have a target frame or target property you want to achieve, you can use this action-conditional video prediction model as your simulator, and this can give really good

[01:02:59] results — some examples are shown here on the slide. The downside is that planning often takes a long time, because generating an action-conditional video prediction can be fairly expensive, and you need to generate many of them, because you're trying different sequences of actions to see which one might work best. Then, after you find the one that seems best — it might be a sequence of ten actions — you take the first of those ten actions and repeat the whole process. So these things tend to be

[01:03:29] not as real-time as some of the other things we looked at, but it's quite surprising how well this works: you can do full action-conditional video prediction and manipulate objects that way. Now, one thing you might wonder: it's all well and good to do full, detailed video prediction, but is it always meaningful? Imagine you drop a glass bottle of water and it hits the floor. How are you going to do video prediction of what happens there? Very, very hard. You're never going to have access to all the details of

[01:04:09] the state of the water in the bottle, or all the little defects in the bottle material and so forth that determine exactly how the thing fractures. The best you can probably do is predict that it's going to break into a lot of pieces, pieces of different sizes, and maybe that the top stays together because it's the bottom that hits the ground, and so forth. But you also don't need those details to make decisions; you just

[01:04:38] need to know it's going to break. So what you could do is say: instead of learning a full forward dynamics model — saying I need to learn exactly what the future will look like and be able to predict it — what if I can predict what action was taken? For example, seeing this shattered bottle, predict that the action taken was dropping the bottle. If I can make that prediction, then I can also figure out, when I want to achieve a certain goal, which action might get me there and which might not. This is called inverse dynamics, and

[01:05:12] it's at the core of many of the dynamics models being learned, where instead of a forward dynamics model you learn an inverse dynamics model — effectively like learning a goal-conditioned action strategy. So the paper here says the following: we want to learn a forward model in latent space, and we want the latent space, of course, to represent the things that matter. But if all we care about is latent-space predictions, the problem is that we might make our latent space always zero and predict always zero —

[01:05:46] then we're always "correct," but we haven't learned anything interesting. So they say: we want to learn a latent space in which we can predict the next latent state, but to avoid it being all zeros, or degenerate in some other way, we're going to require that from the latent state at the next time t+1 and the latent state at the current time t we can predict the action that was taken at time t. So we learn two dynamics models at the same time in this latent space: an inverse dynamics model and a forward dynamics model.
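
A numpy sketch of the joint objective just described: an inverse model that predicts the action from consecutive latent states (which keeps the latent from collapsing to a constant) and a forward model that predicts the next latent from the current latent and action. The linear encoder, models, and toy data are placeholders, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, Z_DIM, A_DIM, N = 20, 4, 2, 256

obs_t = rng.normal(size=(N, OBS_DIM))
actions = rng.normal(size=(N, A_DIM))
obs_next = obs_t + actions @ rng.normal(size=(A_DIM, OBS_DIM))   # toy "environment"

# Placeholder parameters: a linear encoder, forward model, and inverse model.
enc = rng.normal(scale=0.1, size=(OBS_DIM, Z_DIM))
fwd = rng.normal(scale=0.1, size=(Z_DIM + A_DIM, Z_DIM))
inv = rng.normal(scale=0.1, size=(2 * Z_DIM, A_DIM))

def joint_dynamics_loss():
    z_t, z_next = obs_t @ enc, obs_next @ enc
    # Inverse model: predict the action from (z_t, z_{t+1}) -> keeps z informative.
    a_pred = np.concatenate([z_t, z_next], axis=1) @ inv
    inverse_loss = np.mean((a_pred - actions) ** 2)
    # Forward model: predict z_{t+1} from (z_t, a_t) entirely in latent space.
    z_pred = np.concatenate([z_t, actions], axis=1) @ fwd
    forward_loss = np.mean((z_pred - z_next) ** 2)
    return inverse_loss + forward_loss

print(joint_dynamics_loss())
```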

[01:06:19] This was applied to learning to poke objects. What you see on the left is data collection — you can set this up for autonomous data collection — and on the right is the learned control: it has learned the dynamics, and now it can look at the current state and the goal state, predict which action will help the most to get close to that goal state, and repeatedly do that until it finally reaches something very close to the goal state. Okay, now,

[01:07:06] reinforcement learning is about reward, and so far we've mostly ignored the rewards when learning representations. Let's switch that up now: let's not just learn to predict the next state, but also learn to predict future reward. The first recent paper that looked at this in the deep reinforcement learning context is the Predictron paper, on end-to-end learning and planning. What they said is: it's difficult to know what needs to go into the latent state, and because we don't really know what

[01:07:38] has to go into the latent state, and we don't necessarily want to reconstruct the full observation — that's just so many things to reconstruct when we really want to focus on the essence — well, if what we care about is getting high reward, why not just focus on predicting future rewards? If for every sequence of actions we can predict the future reward, we're good to go: we just pick the sequence of actions that leads to the highest future reward. The Predictron did this for some relatively simple environments, shown

[01:08:09] here: billiards, predicting where the billiard balls end up as a function of the action you take — and it did pretty well on that task — and they also looked at maze navigation. Now, the most famous recent algorithm you might have heard of that builds on top of this very directly is MuZero. MuZero also learns a latent dynamics model that predicts rewards and doesn't worry about reconstruction. The Predictron learns the sequence of latent states that allows it to predict

[01:08:44] rewards in the future; MuZero does the same thing but action-conditionally, and was able to solve a very wide range of games. A related variation is successor features. You might say: is it enough to predict reward, which is just one number? What if the reward consists of many components — maybe I care about the location of the robot, maybe I care about energy expended, maybe I care about other things. These are all features, and the idea is: if I have a set of features that relate to the reward, why not learn to predict the future features themselves?
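
The successor-feature idea in one small tabular numpy sketch: if the reward is assumed linear in some features, r = phi(s) . w, then the value is the discounted sum of expected future features dotted with w, so the future-feature predictions can be reused across different reward weights. The transition matrix and features below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_FEATURES, GAMMA = 5, 3, 0.9

phi = rng.normal(size=(N_STATES, N_FEATURES))        # features observed in each state
P = rng.dirichlet(np.ones(N_STATES), size=N_STATES)  # transition matrix under a fixed policy

# Successor features psi(s) = E[ sum_t gamma^t phi(s_t) | s_0 = s ], in closed form:
# psi = phi + gamma * P @ psi  =>  psi = (I - gamma * P)^{-1} @ phi
psi = np.linalg.solve(np.eye(N_STATES) - GAMMA * P, phi)

# If the reward is r(s) = phi(s) . w, the value is just V(s) = psi(s) . w.
w_task1 = np.array([1.0, 0.0, 0.0])   # e.g. "care about the first feature only"
w_task2 = np.array([0.0, 0.5, -1.0])  # a different reward built from the same features
print(psi @ w_task1)                  # values under task 1
print(psi @ w_task2)                  # re-evaluated instantly for task 2
```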

[01:09:23] That is, learn a latent-space model that allows me to predict the future sequence of features encountered. We looked at this ourselves in the context of navigation. When you have a robot navigating a world, it does some convolutional processing of its observations; then there's an LSTM, because when you're navigating you currently see something but you also want to remember things you've seen in the past — that's the memory here; and then a head that tries to predict features of the observations it might encounter in the future — for example, whether it will have a

[01:09:54] collision, or something like that. Here's this system in action — let me fast-forward a little to the experimental setup. What we see here is inside a simulator for now, with real-world experiments coming later. You see the kind of visual inputs it's processing, and it's trying to predict things about speed, heading, and collision — those are the features it's trying to predict: this many steps into the future, what will my heading be, what will my speed be, what will my

[01:10:45] collision state be, based on what I see right now and the actions I will take in the intervening time. Through that, it's able to learn an internal representation of how the world works — but most importantly, how the world works as it relates to the features that matter for navigation, rather than trying to learn everything about the world, which might be a lot to learn relative to what you actually need to be successful at your task. Based on this, it's able to learn to navigate these environments pretty well. Then for the real robot:

[01:11:16] here we have the actual robot that's going to learn to navigate the hallways in Cory Hall, the electrical engineering building at Berkeley. We see that while it's still learning it has a lot of collisions, but it learns to predict them — it learns something like "if I see this and take that sequence of actions, I will have a collision in five time steps, or my heading will change in that way," and so forth. So after training it has internalized a lot of how the world works, and

[01:11:46] now it can plan against those predictions when it needs to act. At test time we can see that it has learned to avoid collisions: it knows how to predict, as a function of the actions taken, whether a collision is likely to happen and what heading it might end up with, and then takes actions accordingly. Again, the reason I'm showing all these videos is that, as you can see, different approaches are tested in very different environments; this is by no means a converged research field, and there's a

[01:12:21] lot of variation in how things get tested, and looking at how something is tested gives you a sense of how complex an environment a given approach might be able to handle. Now, a natural question you might have: this is all great, there are all these different ways of learning representations, but could we come up with a way of optimally representing the world? What would that even mean — what does it mean to have an optimal representation of the world? Well, there's some work, fairly theoretical, trying to get at this, so here are some

[01:12:52] fairly theoretical references on trying to understand what it means to have a good representation of the world. One word you'll often see come back is "homomorphism." What it refers to is that essentially you have the real world and you have a simulator, and you want it to be the case that if you go from the real world to some latent-space simulator — so there's a correspondence from the real world to this latent-space representation — then you can simulate in both worlds, and after a

[01:13:25] while you try to map back and see whether things still correspond. A homomorphism would mean that the correspondence still holds many steps — any number of steps — into the future; that's the bisimulation / homomorphism type of approach. The question, of course, becomes: what's the minimal latent space you need to be able to do that? The more minimal the latent space, the fewer variables you have to deal with as a reinforcement learner or a planner trying to achieve good reward in the environment. Now, one

[01:13:56] thing that's very well known in traditional controls is the separation principle. The separation principle in traditional control says the following — it's a very specific scenario: if I have a linear dynamical system and I have noisy observations of the state — so I don't have access to the state, only noisy observations, and these noisy observations are linear functions of the state; so linear dynamics, observations a linear function of the state — then to do optimal control in this

[01:14:41] environment where I don't have full access to the state, all I need to do is find the optimal estimator of the state, which will be a Kalman filter, use its output as my best estimate of the state at every time step, and combine that with the optimal controller designed assuming full access to the state. So the separation principle says I can design an estimator and a controller separately and then combine them, and that's actually optimal. That's very related to what we've been talking about: learning a representation and then doing control on top of it.
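
A compact illustration of the separation principle on a made-up scalar linear-Gaussian system: the LQR gain is computed as if the state were fully observed, a Kalman filter produces the state estimate, and the controller simply acts on that estimate. All the system parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = 1.0, 0.5, 1.0          # made-up scalar dynamics x' = a x + b u, obs y = c x + noise
q_cost, r_cost = 1.0, 0.1        # quadratic state and control costs
proc_var, obs_var = 0.01, 0.1    # process and observation noise variances

# 1. Controller designed assuming full state access: scalar Riccati iteration.
P = q_cost
for _ in range(200):
    K = -(b * P * a) / (r_cost + b * P * b)
    P = q_cost + a * P * a + a * P * b * K

# 2. Estimator designed independently: a scalar Kalman filter.
x_true, x_hat, S = 1.0, 0.0, 1.0   # true state, estimate, estimate variance
for t in range(30):
    u = K * x_hat                              # the control acts on the *estimate*
    x_true = a * x_true + b * u + rng.normal(scale=np.sqrt(proc_var))
    y = c * x_true + rng.normal(scale=np.sqrt(obs_var))
    # Kalman predict + update.
    x_hat, S = a * x_hat + b * u, a * S * a + proc_var
    gain = S * c / (c * S * c + obs_var)
    x_hat, S = x_hat + gain * (y - c * x_hat), (1 - gain * c) * S

print("final true state (should be near 0):", x_true)
```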

[01:15:15] We would want it to be the case that if we do the right kind of representation learning, and the estimate that comes out of it is then used with optimal control, we get the optimal result. There's some work now looking at what happens when you have a nonlinear system, maybe with deep neural networks in the loop: what does it mean to have optimal estimation of state from your observations, and when is that compatible with your controller, and so forth — a very interesting theoretical direction if you're more theory-inclined. Another way to think about it is to say:

[01:15:49] shouldn't I just think about this end to end? Often in deep learning you have two paths: one path is to try to design the pieces by hand, and the other is to say, let me just think about the result I want, define a loss function on that result, and train the whole thing, instead of putting all the modules together in detail myself. In this case, what that might mean is: instead of learning a representation and a dynamics model and then bolting on a planner, or bolting on
01:15:49
01:16:25
4549
4585
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4549s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
a reinforcement learning agent, why not say, hey, when I learn my dynamics model I should train it end to end such that what I learn is maximally compatible with the planner that I will use in the future? This goes back a little bit to the earlier thing we covered, Embed to Control, where we said that if we can learn a linear dynamics model in latent space, planning becomes easy. Here you would say: what if we pick a more general planner, one that might work well in a wide range of situations; now can we learn a representation such that, if we combine it
01:16:25
01:17:01
4585
4621
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4585s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
with that more general planner, together they function well? If so, then we've learned a good representation. So we did this in some early work, value iteration networks, led by then-postdoc Aviv Tamar, now a professor at the Technion. We showed that value iteration, a very common way of doing planning for tabular Markov decision processes, can actually be turned into a neural network representation, and so we can bolt this value iteration network onto a representation learning network and optimize them together to try to get
01:17:01
01:17:44
4621
4664
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4621s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
good performance out of turning image input into a representation on which value iteration runs. The encoding of the image input needs to be such that the value iteration process actually gives good results, and we even gave the value iteration process the flexibility to learn parts of itself, which showed that this way you can actually get very good performance on planning tasks. You might say, well, for planning with visual inputs, shouldn't you just be able to learn a convnet that just kind of looks at the input and makes the right decision?
01:17:44
01:18:14
4664
4694
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4664s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
Well, it turns out that what we're really doing here is building a very strong prior into the network by building the value iteration aspect into it. That's a bit like why we use a convnet: we use a convnet to encode translation invariance, and with that we can learn more efficiently than if we were to use a fully connected network. It's the same idea here: if we're learning a network that should solve a control problem that under the hood uses planning, then we should just put that planning structure into the
01:18:14
01:18:43
4694
4723
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4694s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
network, so we can learn it all end to end. Now, one question that has often come up in this context is: should we ever do pixel-level video prediction? That's a good question; often you're just looking at noise, and what's the point in trying to predict that? What really matters is predicting the things that affect the outcome, so how do you do that more directly? So we're going to use plannability as a criterion for representation learning. Now, value iteration networks, as I just described; let's go into a little more detail:
01:18:43
01:19:22
4723
4762
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4723s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
it says: you have an observation; the observation in turn goes into a module that outputs a value function, which is how good a certain state is, and it puts that out for every state you could end up in, in parallel. Then an attention mechanism looks at the current observation and figures out which of all these possible states it should index into, to then make a decision on what to do in the current state. The value iteration module essentially performs value iteration (the standard update is written out after this segment); remember that what it does is it needs to look
01:19:22
01:20:06
4762
4806
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4762s
https://i.ytimg.com/vi/Y…axresdefault.jpg
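For reference, this is the standard tabular value iteration update that the module unrolls; the formula is textbook, with the usual discount factor \gamma.

```latex
\begin{equation}
  V_{k+1}(s) \;=\; \max_{a}\Big[\, R(s,a) \;+\; \gamma \sum_{s'} P(s' \mid s, a)\, V_k(s') \Big]
\end{equation}
```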
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
at a reward and a dynamics model, and with that reward and dynamics model it can do a recurrent calculation to get out the value of each state. So this is just a recurrent calculation, repeatedly applying the same operation, so it's a recurrent network, and it's a network with local computation, because states next to each other can be reached from each other and that shows up in this dynamic programming calculation (a convolutional sketch follows this segment). So it turns out that a recurrent, convolutional component is enough to represent the value iteration calculation. But we don't want
01:20:06
01:20:38
4806
4838
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4806s
https://i.ytimg.com/vi/Y…axresdefault.jpg
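A minimal PyTorch-style sketch of such a recurrent, convolutional value-iteration block on a 2D grid; the layer sizes, names, and the way the reward map is produced are my own placeholders, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ValueIterationBlock(nn.Module):
    """Sketch: Bellman backups approximated by a local 3x3 convolution plus a max over actions."""

    def __init__(self, n_actions=8, k_iters=20):
        super().__init__()
        self.k_iters = k_iters
        # Input channels: [reward map, current value map]; output: one Q map per action.
        self.q_conv = nn.Conv2d(2, n_actions, kernel_size=3, padding=1, bias=False)

    def forward(self, reward_map):
        # reward_map: (batch, 1, H, W), e.g. produced by an encoder from the image observation.
        value = torch.zeros_like(reward_map)
        for _ in range(self.k_iters):
            q = self.q_conv(torch.cat([reward_map, value], dim=1))  # (B, n_actions, H, W)
            value, _ = q.max(dim=1, keepdim=True)                    # max over actions = Bellman backup
        return value  # value map; an attention/indexing step would read out the agent's cell

# usage sketch
block = ValueIterationBlock()
values = block(torch.randn(4, 1, 16, 16))
```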
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
to do it with value iteration itself, which only applies to situations where we have a tabular representation of the world, which means relatively small discrete state spaces; we want something that applies more generally. So what we're looking at here is the Universal Planning Network. The Universal Planning Network says: okay, we have an observation and we want to achieve a goal observation. We take our initial observation and turn it into a latent state, we encode it; then we take an action and see what the new latent state looks like; we take another action and get a new latent state; note that we're not
01:20:38
01:21:10
4838
4870
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4838s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
working in the actual state but in latent state, and so forth, rolling forward in latent state. After that series of actions we want our final latent state to match up with the latent state of the goal observation we want to reach. So what we can do is search over actions that will get us close, and if we had already trained this latent-space dynamics model, all we would need to do is optimize this sequence of actions; if this is a continuous space we can optimize the sequence of actions with backpropagation, essentially running standard backpropagation to find a
01:21:10
01:21:44
4870
4904
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4870s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
sequence of actions that optimizes how close we get to the goal. So that's the planning part: assuming we have this dynamics model, we can run backpropagation to plan (a minimal sketch of this inner loop follows this segment). How do you get the dynamics model? Well, here's what we're going to do: we're going to learn the dynamics model such that, that is, we're going to try to find parameters in this dynamics model such that, if we use those parameters to run this optimization to find actions, then the sequence of actions we find corresponds to what was shown in a demonstration that we're given. So we're
01:21:44
01:22:23
4904
4943
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4904s
https://i.ytimg.com/vi/Y…axresdefault.jpg
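A minimal sketch of that inner planning loop, gradient descent on a candidate action sequence through a learned latent dynamics model; `encoder`, `dynamics`, the horizon, the step size, and the action dimension are placeholders I'm assuming, and the full method additionally differentiates through this loop via an outer imitation loss.

```python
import torch

def plan_by_gradient_descent(encoder, dynamics, obs, goal_obs,
                             horizon=10, action_dim=4, n_steps=40, lr=0.1):
    """Inner-loop planning sketch: optimize actions so the latent rollout ends near the goal latent."""
    with torch.no_grad():
        z0 = encoder(obs)           # latent of the current observation
        z_goal = encoder(goal_obs)  # latent of the goal observation
    actions = torch.zeros(horizon, action_dim, requires_grad=True)
    optimizer = torch.optim.SGD([actions], lr=lr)
    for _ in range(n_steps):
        z = z0
        for t in range(horizon):
            z = dynamics(z, actions[t])       # roll the latent dynamics model forward
        loss = ((z - z_goal) ** 2).sum()      # distance of the final latent to the goal latent
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return actions.detach()
```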
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
given a demonstration, a sequence of actions, and we'll have an imitation loss, which says we want to be able to imitate that sequence of actions by running this very specific process of optimizing, with backpropagation, our sequence of actions against a dynamics model that we're going to learn (the bilevel objective is sketched after this segment). Once we have learned the dynamics model this way, what it means is that from then on we can use this latent-space dynamics model to find sequences of actions that optimize how close we get to some other goal in the future. So the benefit
01:22:23
01:22:58
4943
4978
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4943s
https://i.ytimg.com/vi/Y…axresdefault.jpg
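Written as a bilevel objective, this is a sketch in my own notation, with \theta the encoder/dynamics parameters, \phi_\theta the encoder, f_\theta the latent rollout, and a^{demo}_{1:T} the demonstrated actions.

```latex
% Inner loop: plan by minimizing the distance of the latent rollout to the goal latent.
\begin{align}
  \hat{a}_{1:T}(\theta) &= \arg\min_{a_{1:T}}
      \big\lVert f_\theta\big(\phi_\theta(o_1),\, a_{1:T}\big) - \phi_\theta(o_g) \big\rVert^2 \\
% Outer loop: imitation loss on the actions the planner produces.
  \min_\theta \;& \sum_{t=1}^{T} \big\lVert \hat{a}_t(\theta) - a^{\text{demo}}_t \big\rVert^2
\end{align}
```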
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
here is that this internalizes a planning inductive bias, rather than just learning some black-box backbone for imitation; it also learns a metric in this abstract space that's useful for reinforcement learning in the future. So we compare with reactive imitation learning, which just says, okay, I need to imitate a sequence of actions; but that black-box network doesn't know that when you imitate, the demonstrator probably had a goal and you're trying to find a sequence of actions that achieves that goal. It doesn't have that inductive bias, so it does
01:22:58
01:23:27
4978
5007
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=4978s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
not do as well. And to keep the comparison close, the baseline architecture we use is also a recurrent neural network, but it doesn't have the internal optimization process in the inner loop that finds a sequence of actions optimizing how close we get to a goal. The tasks we looked at here were some maze navigation tasks and also reaching between obstacles to a target. In the curves here, the horizontal axis is the number of demonstrations and the vertical axis is the average test success rate, and it shows that Universal Planning Networks
01:23:27
01:24:02
5007
5042
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5007s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
outperform the baselines that I just described, which means that building in that inductive bias helps significantly in learning to solve these problems. Now, you could ask: well, what did it actually learn? We said we're building in an inductive bias to learn to plan in that inner loop, but did it really learn to plan? Here's an experiment: what if we train with 40 iterations of gradient descent to find the sequence of actions, and then at test time we vary the number of planning steps, meaning we vary the number of gradient descent
01:24:02
01:24:44
5042
5084
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5042s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
steps in the inner loop when we plan. If our network is really doing planning, then the hope is that by running more planning iterations it would keep refining the plan and end up with a better plan than if it only had access to 40 iterations. That's indeed what we see here: as, along the horizontal axis, we increase the number of planning steps, the test success rate goes up for the same training, just a different number of planning steps at test time. So this indicates that likely something like planning is really
01:24:44
01:25:14
5084
5114
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5084s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
happening under the hood, and if you plan longer you can do better. Another thing that happens is that when you do this, you learn a representation that ties into how an agent should make decisions, and that representation can be used by a reinforcement learning agent to learn more quickly. What makes reinforcement learning typically hard is that the reward is sparse. But if you map your world into this latent space, the latent space where you're running this optimizer, gradient descent, to find good actions, well, gradient descent assumes that
01:25:14
01:25:47
5114
5147
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5114s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
there's some smoothness, so once you've learned a latent space smooth enough that you can optimize against it, that probably means that in that latent space distances are more meaningful. If you now do reinforcement learning against distances in that latent space, you're doing it against a reward that's not sparse but dense, and it gives a local signal on whether you're improving or not on what you're doing (a minimal sketch of such a shaped reward follows this segment). And so we showed in a wide range of environments that indeed reinforcement learning can be
01:25:47
01:26:16
5147
5176
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5147s
https://i.ytimg.com/vi/Y…axresdefault.jpg
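A minimal sketch of using the latent distance as a dense shaping signal on top of a sparse task reward; the scale `alpha` and the goal-conditioned form are assumptions of mine, not the exact formulation used in the work.

```python
import torch

def shaped_reward(encoder, obs, goal_obs, sparse_reward, alpha=1.0):
    """Dense reward sketch: sparse task reward minus a scaled latent distance to the goal."""
    with torch.no_grad():
        dist = torch.norm(encoder(obs) - encoder(goal_obs))
    return sparse_reward - alpha * dist.item()
```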
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
a lot more effective when using the distance in a latent space learned with the process I just described, where you then do reinforcement learning in a new environment. For example, we did imitation in three-link and four-link environments, switched to a five-link environment, ran reinforcement learning in the five-link environment with the latent space used there for reward shaping, and the agent learns a lot more quickly. Same thing here, where the initial learning happened with point-mass environments and then we actually had to control a robot, and thanks to
01:26:16
01:26:52
5176
5212
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5176s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
the shaping that comes from learning this latent representation where distances are meaningful, learning can be a lot more efficient. Okay, so at this point we've covered quite a few different ways of combining representation learning with reinforcement learning to be more efficient, and the general theme so far has been that raw pixel observations surely contain the information, but it's embedded in a very high dimensional space; a megapixel image is a million-dimensional input, and we want it in a more
01:26:52
01:27:31
5212
5251
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5212s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
compact representation we can learn against more efficiently. All these approaches, mapping observations to state, state to the next state, and so forth, tried to get a handle on that problem. Now, one thing you might observe is that what we covered so far is fairly complex; there is a wide range of ideas at play. And so the question we asked ourselves recently is: is it possible, with a relatively simple idea, to get a lot of the leverage that we have seen here? Let's take a look at that and see how far we can get with a relatively simple
01:27:31
01:28:09
5251
5289
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5251s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
idea, and actually we'll see it outperform essentially all the approaches we've covered so far. That doesn't mean the ideas and approaches we've covered so far are not important, or that we could have just skipped them; there are a lot of good ideas we've covered that we probably want to bring into this next approach. But what I'm about to cover, CURL, really focuses on simplicity and on seeing how far you can get with something very simple. Our starting motivation here was: if you look at the learning curves, the
01:28:09
01:28:37
5289
5317
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5289s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
vertical axis here is reward, and higher is better; the horizontal axis is the number of steps in this environment, and at the end here, 1e8, a hundred million steps have been taken in this environment. We see a blue learning curve that learns very quickly and then green learning curves that take a long time to learn. What's different? Blue learns from state, green learns from pixels. Same thing here: blue learns from state very fast, green from pixels not nearly as fast, and in this case the RL
01:28:37
01:29:08
5317
5348
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5317s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
algorithm here is D4PG, which is a state-of-the-art RL algorithm. So if you think about the essence here: reinforcement learning is about learning to achieve goals, and if the underlying space is low dimensional, if there is a low-dimensional state, we should be able to recover that low-dimensional state and then learn just as efficiently from pixels as from state. How might we do that? Well, we've seen a lot of success in past lectures with contrastive learning for computer vision; in fact we saw with CPC that it was possible, by using
01:29:08
01:29:49
5348
5389
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5348s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
unlabeled data on ImageNet, to consistently outperform learning from labeled data alone; so there's the same amount of labeled data, but the blue curve also has unlabeled data, and you see that the unlabeled data consistently helps outperform having access to only that amount of labeled data. Then of course very recently SimCLR came out, which actually gets performance on ImageNet comparable to supervised learning when using just a linear classifier on top of a self-supervised representation. So that means
01:29:49
01:30:29
5389
5429
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5389s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
that almost all the learning happens in self-supervision, and then a little bit of learning happens at the end, of course, to get the meaning of the labels, but that just needed a linear classifier. If that's the case, then the hope is that if we do something similar in reinforcement learning, all we need to do is representation learning that extracts the essence, and then a little bit of extra information, the reward, does the rest of the learning (a contrastive-loss sketch in that spirit follows this segment). So what does SimCLR do? Essentially it says: I have an image, I'm going to turn
01:30:29
01:30:59
5429
5459
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5429s
https://i.ytimg.com/vi/Y…axresdefault.jpg
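To make the contrastive idea concrete, here is a minimal InfoNCE-style sketch in which two random crops of the same observation are treated as a positive pair and the other observations in the batch as negatives; the crop size, the single shared encoder (the actual method uses a momentum-averaged key encoder), and the bilinear similarity matrix `W` are simplifications and placeholders of mine, not the method's exact implementation.

```python
import torch
import torch.nn.functional as F

def random_crop(obs, out_size=84):
    """One random spatial crop applied to the whole batch (B, C, H, W), a simplified augmentation."""
    _, _, h, w = obs.shape
    top = torch.randint(0, h - out_size + 1, (1,)).item()
    left = torch.randint(0, w - out_size + 1, (1,)).item()
    return obs[:, :, top:top + out_size, left:left + out_size]

def contrastive_loss(encoder, obs, W):
    """InfoNCE-style loss: matching crops are positives, other batch elements are negatives."""
    queries = encoder(random_crop(obs))           # (B, D)
    with torch.no_grad():
        keys = encoder(random_crop(obs))          # (B, D); the real method uses a momentum encoder
    logits = queries @ W @ keys.t()               # bilinear similarities, (B, B)
    labels = torch.arange(queries.shape[0])       # the diagonal entries are the positives
    return F.cross_entropy(logits, labels)
```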