Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. If we have an animation movie or a computer game with quadrupeds and we are yearning for really high-quality, lifelike animations, motion capture is often the go-to tool for the job. Motion capture means that we put an actor, in our case a dog, in the studio and ask it to perform sitting, trotting, pacing and jumping, record its motion, and transfer it onto our virtual character. There are two key challenges with this approach. One, we have to try to weave together all of these motions, because we cannot record all the possible transitions between sitting and pacing, jumping and trotting, and so on. We need some filler animations to make these transitions work. This was addressed by this neural network-based technique here. The other one is trying to reduce these unnatural foot-sliding motions. Both of these have been addressed by learning-based algorithms in the previous works that you see here. Later, bipeds were also taught to maneuver through complex geometry and sit in not one kind of chair, but any chair regardless of geometry. This already sounds like science fiction. So, are we done, or can these amazing techniques be further improved? Well, we are talking about research, so the answer is, of course, yes. Here, you see a technique that reacts to its environment in a believable manner. It can accidentally step on the ball, stagger a little bit, and then flounder on this slippery surface, and it doesn't fall, and it can do much, much more. The goal is that we would be able to do all this without explicitly programming all of these behaviors by hand, but unfortunately, there is a problem. If we write an agent that behaves according to physics, it will be difficult to control properly. And this is where this new technique shines. It gives us physically plausible motion, and we can grab a controller and play with the character, like in a video game. The first step we need to perform is called imitation learning. This means looking at real reference movement data and trying to reproduce it. This gives us motion that looks great and is very natural; however, we are nowhere near done, because we still don't have any control over this agent. Can we improve this somehow? Well, let's try something and see if it works. This paper proposes that in step number two, we try an architecture by the name of generative adversarial network. Here, we have one neural network that generates motion and a discriminator that looks at these motions and tries to tell which are real and which are fake. However, to accomplish this, we need lots of real and fake data that we then use to train the discriminator to be able to tell which one is which. So, how do we do that? Well, let's try to label the movement that came from the user controller inputs as fake and the reference movement data from before as real. Remember that this makes sense, as we concluded that the reference motion looked natural. If we do this, over time, we will have a discriminator network that is able to look at a piece of animation data and tell whether it is real or fake. So, after doing all this work, how does this perform? Does this work? Well, sort of, but it does not react well if we try to control the simulation. If we let it run undisturbed, it works beautifully, and now, when we try to stop it with the controller, well, this needs some more work, doesn't it? So, how do we adapt this architecture to the animation problem that we have here?
And here comes one of the key ideas of the paper. In step number three, we can rewire this whole thing to originate from the controller and introduce a deep reinforcement learning-based fine-tuning stage. This was the amazing technique that DeepMind used to defeat Atari Breakout. So, what good does all this do for us? Well, hold on to your papers, because it enables true user control while synthesizing motion that is very robust against tough, previously unseen scenarios. And if you have been watching this series for a while, you know what is coming. Of course, throwing blocks at it to see how well it can take the punishment. As you see, the AI is taking it like a champ. We can also add pathfinding to the agent and, of course, being computer graphics researchers, throw some blocks into the mix for good measure. It performs beautifully. This is so realistic. We can also add sensors to the agent to allow it to navigate in this virtual world in a realistic manner. Just a note on how remarkable this is. So, this quadruped behaves according to physics, lets us control it with the controller, which is already somewhat of a contradiction, and it is robust against these perturbations at the same time. This is absolute witchcraft, and no doubt it has earned its acceptance to SIGGRAPH, which is perhaps the most prestigious research venue in computer graphics. Congratulations! What you see here is an instrumentation for a previous paper that we covered in this series, which was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
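To make steps two and three above a little more concrete, here is a minimal sketch of the general idea: a discriminator that labels reference mocap windows as real and controller-driven motion as fake, whose score could later be reused as a "naturalness" signal during reinforcement-learning fine-tuning. All network sizes, window lengths, and data below are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of a motion discriminator.
import torch
import torch.nn as nn

WINDOW, FEATURES = 10, 64  # assumed: 10 frames, 64 joint features per frame

discriminator = nn.Sequential(
    nn.Flatten(),
    nn.Linear(WINDOW * FEATURES, 256), nn.ReLU(),
    nn.Linear(256, 1),  # one logit: real vs. fake motion window
)
opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def discriminator_step(real_windows, fake_windows):
    """One training step: reference mocap is labeled real, controlled motion fake."""
    logits_real = discriminator(real_windows)
    logits_fake = discriminator(fake_windows)
    loss = bce(logits_real, torch.ones_like(logits_real)) + \
           bce(logits_fake, torch.zeros_like(logits_fake))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def style_reward(window):
    """A 'naturalness' score the RL fine-tuning stage could add to its task reward."""
    with torch.no_grad():
        return torch.sigmoid(discriminator(window)).item()

# Toy usage with random stand-in data:
real = torch.randn(32, WINDOW, FEATURES)   # stands in for mocap clips
fake = torch.randn(32, WINDOW, FEATURES)   # stands in for controller-driven motion
print(discriminator_step(real, fake), style_reward(fake[:1]))
```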
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Style transfer is a technique in machine learning research where we have two input images, one for content and one for style, and the output is our content image reimagined with this new style. The cool part is that the content can be a photo straight from our camera, and the style can be a painting, which leads to the super fun results that you see here. An earlier paper had shown that the more sophisticated ones can sometimes make even art curators think that they are real. This previous work blew me away, as it could perform style transfer for smoke simulations. I almost fell out of the chair when I first saw these results. It could do fire textures, starry night, you name it. It seems that it is able to do anything we can think of. Now let me try to explain two things. One, why is this so difficult? And two, the results are really good, so are there any shortcomings? Doing this for smoke simulations is a big departure from 2D style transfer, because that takes an image, whereas this works in 3D and does not deal with images, but with density fields. A density field means a collection of numbers that describe how dense a smoke plume is at a given spatial position. It is a physical description of a smoke plume, if you will. So how could we possibly apply artistic style from an image to a collection of densities? The solution in this earlier paper was to first downsample the density field to a coarser version, perform the style transfer there, and upsample this density field again with already existing techniques. This technique was called transport-based neural style transfer, TNST in short, please remember this. Now let's look at some results from this technique. This is what our simulation would look like normally, and then all we have to do is show this image to the simulator, and what does it do with it? Wow, my goodness, just look at those heavenly patterns. So what does today's new follow-up work offer to us that the previous one doesn't? How can this seemingly nearly perfect technique be improved? Well, this new work takes an even more brazen vantage point on this question. If style transfer on density fields is hard, then try a different representation. The title of the paper says Lagrangian neural style transfer. So what does that mean? It means particles. This was made for particle-based simulations, which comes with several advantages. One, because the styles are now attached to particles, we can choose different styles for different smoke plumes, and they will remember what style they are supposed to follow. Because of this advantageous property, we can even ask the particles to change their styles over time, creating these heavenly animations. In these 2D examples, you also see how the texture of the simulation evolves over time, and that the elements of the style are really propagated to the surface and the style indeed follows how the fluid changes. This is true even if we mix these styles together. Two, it not only provides us with these high-quality results, but it is fast. And by this, I mean blazing fast. You see, we talked about TNST, the transport-based technique, approximately 7 months ago, and in this series, I always note that 2 more papers down the line, and it will be much, much faster. So here's the Two Minute Papers moment of truth. What do the timings say? For the previous technique, it says more than 1d. What could that 1d mean? Oh goodness, that thing took an entire day to compute.
So, what about the new one? What? Really? Just 1 hour? That is insanity. So, how detailed of a simulation are we talking about? Let's have a look together. M-slash-f means minutes per frame, and as you see, if we have tens of thousands of particles, we have 0.05, or in other words, 3 seconds per frame, and we can go up to hundreds of thousands, or even millions of particles, and end up around 30 seconds per frame. Loving it. Artists are going to do miracles with this technique, I am sure. The next step is likely going to be a real-time algorithm, which may appear as soon as 1 or 2 more works down the line, and you can bet your papers that I'll be here to cover it. So, make sure to subscribe and hit the bell icon to not miss it when it appears. The speed of progress in computer graphics research is nothing short of amazing. Also, make sure to have a look at the full paper in the video description, not only because it is a beautiful paper and a lot of fun to read, but because you will also know what this regularization step here does exactly to the simulation. This episode has been supported by Weights & Biases. In this post, they show you how to connect their system to the Hugging Face library and how to generate tweets in the style of your favorite people. You can even try an example in an interactive notebook through the link in the video description. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
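For readers who like the arithmetic spelled out, here is the minutes-per-frame conversion from the timing table discussed above. The 0.05 m/f figure is quoted in the episode; the 0.5 m/f value is inferred from the "around 30 seconds per frame" remark and is only an illustrative assumption.

```python
# Converting the quoted minutes-per-frame (m/f) figures to seconds per frame.
minutes_per_frame_small = 0.05   # tens of thousands of particles (quoted)
minutes_per_frame_large = 0.5    # assumed from "around 30 seconds per frame", millions of particles

print(minutes_per_frame_small * 60)  # -> 3.0 seconds per frame
print(minutes_per_frame_large * 60)  # -> 30.0 seconds per frame

# Rough speedup for a full sequence, as stated: "more than 1d" for TNST vs "just 1 hour" here.
print(24 / 1)                        # -> on the order of a 24x speedup
```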
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. These days, we see so many amazing uses for learning-based algorithms, from enhancing computer animations and teaching virtual animals to walk, to teaching self-driving cars depth perception, and more. It truly feels like no field of science is left untouched by these new techniques, including the medical sciences. You see, in medical imaging, a common problem is that we have so many diagnostic images out there in the world that it makes it more and more infeasible for doctors to look at all of them. What you see here is a work from scientists at DeepMind Health that we covered a few hundred episodes ago. The training part takes about 14,000 optical coherence tomography scans. This is the OCT label that you see on the left. These images are cross sections of the human retina. We first start out with this OCT scan, then a manual segmentation step follows where a doctor marks up this image to show where the relevant parts, like the retinal fluids or the elevations of the retinal pigments, are. After the learning process, this method can reproduce these segmentations really well by itself without the doctor's supervision, and you see here that the two images are almost identical in these tests. Now that we have the segmentation map, it is time to perform classification. This means that we look at this map and assign a probability to each possible condition that may be present. Finally, based on these, a final verdict is made on whether the patient needs to be urgently seen, or just needs a routine check, or perhaps no check is required. This was an absolutely incredible piece of work. However, it is of utmost importance to evaluate these tools together with experienced doctors and hopefully on international datasets. Since then, in this new work, DeepMind has knocked the evaluation out of the park for a system they developed to detect breast cancer as early as possible. Let's briefly talk about the technique, and then I'll try to explain why it is sinfully difficult to evaluate it properly. So, on to the new problem. These mammograms contain four images that show the breasts from two different angles, and the goal is to predict whether a biopsy taken later will be positive for cancer or not. This is especially important because early detection is key for treating these patients, and the key question is, how does it compare to the experts? Have a look here. This is a case of cancer that was missed by all six experts in the study, but was correctly identified by the AI. And what about this one? This case didn't work so well. It was caught by all six experts, but was missed by the AI. So, one reassuring sample, and one failed sample. And with this, we have arrived at the central thesis of the paper, which asks the question: what does it really take to say that an AI system surpassed human experts? To even have a fighting chance in tackling this, we have to measure false positives and false negatives. A false positive means that the AI mistakenly predicts that the sample is positive, when in reality it is negative. A false negative means that the AI thinks that the sample is negative, whereas it is positive in reality. The key is that in every decision domain, the permissible rates for false negatives and positives are different. Let me try to explain this through an example. In cancer detection, if we have a sick patient who gets classified as healthy, that is a grave mistake that can lead to serious consequences.
But if we have a healthy patient who is misclassified as sick, the positive cases get a second look from a doctor who can easily identify the mistake. The consequences, in this case, are much less problematic and can be remedied by spending a little time checking the samples that the AI was less confident about. The bottom line is that there are many different ways to interpret the data, and it is by no means trivial to find out which one is the right way to do so. And now, hold on to your papers, because here comes the best part. If we compare the predictions of the AI to the human experts, we see that the false positive cases in the US have been reduced by 5.7%, while the false negative cases have been reduced by 9.7%. That is the holy grail. We don't need to weigh the cost of false positives against false negatives here, because it reduced false positives and false negatives at the same time. Spectacular. Another important detail is that these numbers came out of an independent evaluation. It means that the results were not reported solely by the scientists who wrote the algorithm, but have been thoroughly checked by independent experts who have no vested interest in this project. This is the reason why you see so many authors on this paper. Excellent. Another interesting tidbit is that the AI was trained on subjects from the UK, and the question was how well this knowledge generalizes to subjects from other places, for instance, the United States. Is this UK knowledge reusable in the US? I have been quite surprised by the answer, because it never saw a sample from anyone in the US and still did better than the experts on US data. This is a very reassuring property, and I hope to see some more studies that show how general the knowledge is that these systems are able to obtain through training. And perhaps most importantly, if you remember one thing from this video, let it be the following. This work, much like other AI-infused medical solutions, is not made to replace human doctors. The goal is instead to empower them and take as much weight off their shoulders as possible. We have hard numbers for this, as the results concluded that this work reduces the workload of the doctors by 88%, which is an incredible result. Among other far-reaching consequences, I would like to mention that this would substantially help not only the work of doctors in wealthier, more developed countries, but it may single-handedly enable proper cancer detection in developing countries that cannot afford to check these scans. And note that in this video we truly have just scratched the surface; what we talk about here in a few minutes cannot be as rigorous and accurate a description as the paper itself, so make sure to check it out in the video description. And with that, I hope you now have a good feel for the pace of progress in machine learning research. The retinal fluid project was state of the art in 2018, and now, less than two years later, we have a proper, independently evaluated AI-based detection system for breast cancer. Bravo, DeepMind! What a time to be alive! This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. Unlike entry-level hosting services, Linode gives you full back-end access to your server, which is your step-up to powerful, fast, fully configurable cloud computing. Linode also has one-click apps that streamline your ability to deploy websites, personal VPNs, game servers, and more.
If you need something as small as a personal online portfolio, Linode has your back and if you need to manage tons of clients' websites and reliably serve them to millions of visitors, Linode can do that too. What's more, they offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing, and computer graphics projects. If only I had access to a tool like this while I was working on my last few papers. To receive $20 in credit on your new Linode account, visit linode.com slash papers or click the link in the video description and give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
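Before moving on, here is the false positive / false negative bookkeeping from the breast cancer episode above written out as a minimal sketch. The data in it is made up purely for illustration; it is not from the study.

```python
# Confusion counts and the two error rates discussed in the episode.
def confusion_counts(predictions, ground_truth):
    """predictions and ground_truth are lists of booleans (True = cancer present)."""
    tp = sum(p and g for p, g in zip(predictions, ground_truth))
    tn = sum((not p) and (not g) for p, g in zip(predictions, ground_truth))
    fp = sum(p and (not g) for p, g in zip(predictions, ground_truth))  # healthy flagged as sick
    fn = sum((not p) and g for p, g in zip(predictions, ground_truth))  # sick missed: the grave mistake
    return tp, tn, fp, fn

preds = [True, False, True, False, False, True]   # toy AI predictions
truth = [True, False, False, True, False, True]   # toy biopsy outcomes
tp, tn, fp, fn = confusion_counts(preds, truth)
print("false positive rate:", fp / (fp + tn))     # fraction of healthy cases flagged
print("false negative rate:", fn / (fn + tp))     # fraction of sick cases missed
```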
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Not so long ago, we talked about a neural image generator that was able to dream up beautiful natural scenes. It had a killer feature where it would take as an input not only the image itself, but the label layout of this image as well. That is a gold mine of information, and including this, indeed, opens up a killer application. Look, we can even change the scene around by modifying the labels on this layout, for instance, by adding some mountains, making a grassy field, and adding a lake. Making a scene from scratch from a simple starting point was also possible with this technique. This is already a powerful learning-based tool for artists to use as is, but can we go further? For instance, would it be possible to choose exactly what to fill these regions with? And this is where today's paper excels, and it turns out it can do much, much more. Let's dive in. One, we can provide it this layout, which they refer to as a semantic mask, and it can synthesize clothes, pants, and hair in many, many different ways. Heavenly. If you have a closer look, you see that, fortunately, it doesn't seem to change any other parts of the image. Nothing too crazy here, but please remember this, and now would be a good time to hold on to your papers, because, two, it can change the sky or the material properties of the floor. And, wait, are you seeing what I am seeing? We cannot just change the sky, because we have a lake there reflecting it; therefore, the lake has to change, too. Does it? Yes, it does. It indeed changes other parts of the image when it is necessary, which is a hallmark of a learning algorithm that truly understands what it is synthesizing. You can see this effect especially clearly at the end of the looped footage, when the sky is the brightest. Loving it. So, what about the floor? This is one of my favorites. It doesn't just change the color of the floor itself, but it performs proper material modeling. Look, the reflections also become glossier over time. A proper light transport simulation for this scenario would take a very, very long time; we are likely talking from minutes to hours. And this thing has never been taught about light transport and learned about these materials by itself. Make no mistake, these may be low-resolution, pixelated images, but this still feels like science fiction. Two more papers down the line, and we will see HD videos of this, I am sure. The third application is something that the authors refer to as appearance mixture, where we can essentially select parts of the image to our liking and fuse these selected aspects together into a new image. This could more or less be done with traditional handcrafted methods too, but, four, it can also do style morphing, where we start from image A, change it until it looks like image B, and back. Now, normally, this can be done very easily with a handcrafted method called image interpolation. However, to make this morphing really work, the tricky part is that all of the intermediate images have to be meaningful. And as you can see, this learning method does a fine job at that. Any of these intermediate images can stand on their own. I tried to stop the morphing process at different points so you can have a look and decide for yourself. Let me know in the comments if you agree.
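To make the morphing discussion above concrete, here is a small sketch contrasting the handcrafted image interpolation mentioned in the episode with the learned idea of interpolating in a model's internal representation so that intermediate images stay meaningful. The `encode` and `generator` calls are hypothetical stand-ins, not the paper's actual API, and the arrays are random placeholders.

```python
# Naive pixel-space morphing vs. (sketched) latent-space morphing.
import numpy as np

def pixel_interpolation(image_a, image_b, t):
    """Classic handcrafted morph: a simple cross-fade; intermediate frames look ghosted."""
    return (1.0 - t) * image_a + t * image_b

def latent_interpolation(image_a, image_b, t, encode, generator):
    """Learned morph: interpolate codes, then decode, so intermediates stay plausible.
    `encode` and `generator` are hypothetical stand-ins for a trained model."""
    z_a, z_b = encode(image_a), encode(image_b)
    return generator((1.0 - t) * z_a + t * z_b)

# Toy usage of the handcrafted version with random arrays:
a, b = np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)
for t in (0.0, 0.25, 0.5, 0.75, 1.0):   # stop the morph at different points
    frame = pixel_interpolation(a, b, t)
    print(t, frame.mean())
```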
I am delighted to see that these image synthesis algorithms are improving at a stunning pace, and I think these tools will rapidly become viable to aid the work of artists in the industry. This episode has been supported by Weights & Biases. In this post, they show you how to visualize your scikit-learn models with just a few lines of code. Look at all these beautiful visualizations. So good! You can even try an example in an interactive notebook through the link in the video description. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Between 2013 and 2015, DeepMind worked on an incredible learning algorithm by the name of deep reinforcement learning. This technique looked at the pixels of the game, was given a controller, and played much like a human would, with the exception that it learned to play some Atari games on a superhuman level. I tried to train it a few years ago and would like to invite you on a marvelous journey to see what happened. When it starts learning to play an old game, Atari Breakout, at first the algorithm loses all of its lives without any signs of intelligent action. If we wait a bit, it becomes better at playing the game, roughly matching the skill level of an adept player. But here's the catch: if we wait for longer, we get something absolutely spectacular. Over time, it learns to play like a pro and finds out that the best way to win the game is digging a tunnel through the bricks and hitting them from behind. This technique is a combination of a neural network that processes the visual data that we see on the screen and a reinforcement learner that comes up with the gameplay-related decisions. This is an amazing algorithm, a true breakthrough in AI research. However, it had its own issues. For instance, it did not do well on Montezuma's Revenge or Pitfall, because these games require more long-term planning. Believe it or not, the solution in a follow-up work was to infuse these agents with a very human-like property, curiosity. That agent was able to do much, much better at these games and then got addicted to the TV. But that's a different story. Note that this has been remedied since. And believe it or not, as impossible as it may sound, all of this has been improved significantly. This new work is called Agent 57, and it plays better than humans on all 57 Atari games. Absolute insanity. Let's have a look at it in action, and then in a moment I'll try to explain how it does what it does. You see Agent 57 doing really well at the Solaris game here. This space battle game is one of the most impressive games on the Atari, as it contains 16 quadrants, 48 sectors, space battles, warp mechanics, pirate ships, fuel management and much more, you name it. This game is not only quite complex, but it also is a credit assignment nightmare for an AI to play. This credit assignment problem means that it can happen that we choose an action and we only win or lose hundreds of actions later, leaving us with no idea as to which of our actions led to this win or loss, thus making it difficult to learn from our actions. This Solaris game is a credit assignment nightmare. Let me try to bring this point to life by talking about school. In school, when we take an exam, we hand it in, and the teacher gives us feedback for every single one of our solutions and tells us whether we were correct or not. We know exactly where we did well and what we need to practice to do better next time. Clear, simple, easy. Solaris, on the other hand, not so much. If this were a school project, the Solaris game would be a brutal, merciless teacher. Would you like to know your grades? No grades, but he tells you that you failed. Well, that's weird. Okay, where did we fail? He won't say. What should we do better next time to improve? You'll figure it out, bucko. Also, we wrote this exam 10 weeks ago. Why do we only get to know about the results now? No answer.
I think in this case we can conclude that this would be a challenging learning environment even for a motivated human, so just imagine how hard it is for an AI. Hopefully this puts into perspective how incredible it is that Agent 57 performs well on this game. It truly looks like science fiction. To understand what Agent 57 adds to this, it was given something called a meta-controller that can decide when to prioritize short- and long-term planning. On the short term, we typically have mechanical challenges like avoiding a skull in Montezuma's Revenge or dodging the shots of an enemy ship in Solaris. The long-term part is also necessary to explore new parts of the game and have a good strategic plan to eventually win the game. This is great, because this new technique can now deal with the brutal and merciless teacher whom we just introduced. Alternatively, this agent can be thought of as someone who has a motivation to explore the game and do well at mechanical tasks at the same time, and can also prioritize these tasks. With this, for the first time, scientists at DeepMind found a learning algorithm that exceeds human performance on all 57 Atari games. And please do not forget about the fact that DeepMind tries to solve general intelligence and then use general intelligence to solve everything else. This is their holy grail. In other words, they are seeking an algorithm that can learn by itself and achieve human-like performance on a variety of tasks. There is still plenty to do, but we are now one step closer to that. If you learn only one thing from this video, let it be the fact that there are not 57 different methods, but one general algorithm that plays 57 games better than humans. What a time to be alive. I would like to show you a short message from a few days ago that melted my heart. I got this from Nathan, who has been inspired by these incredible works and decided to turn his life around and go back to study more. I love my job, and reading messages like this is one of the absolute best parts of it. Congratulations, Nathan, and note that you can take this inspiration, and greatness can materialize in every aspect of life, not only in computer graphics or machine learning research. Good luck. If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos, and I'm happy to tell you that they are offering GPU cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold on to your papers, because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Our thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
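As a brief aside on the credit assignment problem described in this episode: the only feedback is a single win or loss that arrives hundreds of steps after the actions that caused it, and one standard way learners spread that signal back over earlier actions is a discounted return. The toy sketch below is a generic illustration of that idea, not Agent 57.

```python
# Discounted returns: spreading one late reward back over earlier actions.
def discounted_returns(rewards, gamma=0.99):
    """Assigns each step a share of the credit for everything that followed it."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# 300 steps of silence, then one terminal signal ("you failed" / "you won"):
episode = [0.0] * 299 + [1.0]
returns = discounted_returns(episode)
print(returns[0], returns[150], returns[298])  # early actions still get a faint, discounted share
```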
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. When I was a bachelor student and took on my first bigger undertaking in computer graphics in 2011, it was a research project for a feature-length movie where the goal was to be able to learn the brush strokes of an artist. You see a sample brush stroke here, and what it could do is change the silhouette of a digital 3D object to appear as if it were drawn with this style. This way we could use an immense amount of perfectly modeled geometry and make them look as if they were drawn by an artist. The project was a combination of machine learning and computer graphics and got me hooked on this topic for life. So, this was about silhouettes, but what about being able to change the lighting? To address this problem, this new work promises something that sounds like science fiction. The input is a painting, which is thought of as a collection of brush strokes. First, the algorithm tries to break down the image into these individual strokes. Here, on the left with A, you see the painting itself, and B is the real collection of strokes that were used to create it. This is what the algorithm is trying to estimate, and this colorful image visualizes the difference between the two. The blue color denotes regions where these brush strokes are estimated well, and we find more differences as we transition into the red-colored regions. So, great, now we have a bunch of these brush strokes, but what do we do with them? Well, let's add one more assumption into this system, which is that the densely packed regions are going to be more affected by the lighting effects, while the sparser regions will be less impacted. This way we can make the painting change as if we were to move our imaginary light source around. No painting skills or manual labor required. Wonderful. But some of the skeptical Fellow Scholars out there would immediately ask the question: it looks great, but how do we know if this really is good enough to be used in practice? The authors thought of that too and asked an artist to create some of these views by hand, and what do you know, they are extremely good. Very close to the real deal, and all this comes for free. Insanity. Now, we noted that the input for this algorithm is just one image. So, what about a cheeky experiment where we would add not a painting but a photo and pretend that it is a painting? Can it relight it properly? Well, hold on to your papers and let's have a look. Here's the photo, the breakdown of the brush strokes if this were a painting, and wow! Here are the lighting effects. It worked, and if you enjoyed these results and would like to see more, make sure to have a look at this beautiful paper in the video description. For instance, here you see a comparison against previous works, and it seems quite clear that it smokes the competition on a variety of test cases. And the papers it is compared against are also quite recent. The pace of progress in computer graphics research is absolutely incredible. More on that in a moment. Also, just look at the information density here. This tiny diagram shows you exactly where the light source positions are. I remember looking at a paper on a similar topic that did not have this, and it made the entirety of the work a great deal more challenging to evaluate properly. This kind of attention to detail might seem like a small thing, but it makes all the difference for a great paper, which this one is.
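Here is a rough sketch of the assumption described above, that densely packed stroke regions respond more strongly to the moving light than sparse ones. The density estimate and the shading model below are deliberately simplistic illustrations with made-up parameters, not the method from the paper.

```python
# Toy "relighting" modulated by local brush-stroke density.
import numpy as np

def stroke_density(stroke_mask, radius=4):
    """Crude local density: fraction of 'stroke' pixels in a square neighborhood."""
    h, w = stroke_mask.shape
    padded = np.pad(stroke_mask.astype(float), radius)
    density = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            density[y, x] = patch.mean()
    return density

def relight(image, stroke_mask, light_dir_x):
    """Brighten/darken pixels with a simple directional term, scaled by stroke density."""
    h, w, _ = image.shape
    xs = np.linspace(-1.0, 1.0, w)[None, :, None]    # horizontal position across the canvas
    lighting = 1.0 + 0.5 * light_dir_x * xs           # imaginary light moving left or right
    weight = stroke_density(stroke_mask)[..., None]   # dense regions react more strongly
    return np.clip(image * (1.0 + weight * (lighting - 1.0)), 0.0, 1.0)

painting = np.random.rand(32, 32, 3)                  # stand-in for the painting
strokes = np.random.rand(32, 32) > 0.5                # stand-in for the estimated stroke mask
print(relight(painting, strokes, light_dir_x=1.0).shape)
```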
The provided user study shows that these outputs can be generated within a matter of seconds and reinforces our hunch that most people prefer the outputs of the new technique to the previous ones. So much improvement in so little time. And thus, we can now create digital lighting effects from a single image for paintings, and even photographs, in a matter of seconds. What a time to be alive. What you see here is an instrumentation of this exact paper we have talked about, which was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. When shooting feature-length movies, or just trying to hold meetings from home through Zoom or Skype, we can make our appearance a little more professional by hiding the mess we have in the background and changing it to something more pleasing. Of course, this can only happen if we have an algorithm at hand that can detect what the foreground and the background is, which typically is easiest when we have a green screen behind us that is easy to filter, even for the simpler algorithms out there. However, of course, not everyone has a green screen at home, and even people who do may need to hold meetings out there in the wilderness. Unfortunately, this would mean that the problem statement is the exact opposite of what we've said, or, in other words, the background is almost anything else but a green screen. So, is it possible to apply some of these newer neural network-based learning algorithms to tackle this problem? Well, this technique promises to make this problem much, much easier to solve. All we need to do is take two photographs, one with and one without the test subject, and it will automatically predict an alpha matte that isolates the test subject from the background. If you have a closer look, you'll see the first part of why this problem is difficult. This matte is not binary, so the final compositing is not simply a choice of foreground or background for every pixel in the image; there are parts, typically around the silhouettes and hair, that need to be blended together. This blending information is contained in the gray parts of the image and is especially difficult to predict. Let's have a look at some results. You see the captured background here and the input video below, and you see that it is truly a sight to behold. It seems that this person is really just casually hanging out in front of a place that is definitely not a whiteboard. It even works in cases where the background or the camera itself is slightly in motion. Very cool. It really is much, much better than these previous techniques, where you see that temporal coherence is typically a problem. This is the flickering that you see here, which arises from the vastly different predictions for the alpha matte between neighboring frames in the video. Opposed to previous methods, this new technique shows very little of that. Excellent. Now, we noted that a little movement in the background is permissible, but it really means just a little. If things get too crazy back there, the outputs are also going to break down. This wizardry all works through a generative adversarial network, in which one neural network generates the output results. This, by itself, didn't work all that well, because the images used to train this neural network can differ greatly from the backgrounds that we record out there in the wild. In this work, the authors bridge the gap by introducing a detector network that tries to find faults in the output and tell the generator if it has failed to fool it. As the two neural networks duke it out, they work and improve together, yielding these incredible results. Note that there are plenty more contributions in the paper, so please make sure to have a look for more details. What a time to be alive. What you see here is an instrumentation of this exact paper we have talked about, which was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system.
Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
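To make the alpha matte idea from the previous episode concrete, here is the standard "over" compositing blend that the predicted matte feeds into: gray matte values around hair and silhouettes mix the foreground with the new background. This is a generic sketch with random stand-in arrays, not the paper's network.

```python
# Alpha matte compositing: C = alpha * F + (1 - alpha) * B, per pixel.
import numpy as np

def composite(foreground, new_background, alpha):
    """alpha in [0, 1]; values between 0 and 1 blend the two layers."""
    alpha = alpha[..., None]                       # (H, W) -> (H, W, 1) for broadcasting
    return alpha * foreground + (1.0 - alpha) * new_background

frame = np.random.rand(72, 128, 3)                 # stand-in for the captured frame with the person
beach = np.random.rand(72, 128, 3)                 # stand-in for the more pleasing background
matte = np.clip(np.random.rand(72, 128), 0, 1)     # stand-in for the predicted alpha matte
print(composite(frame, beach, matte).shape)        # -> (72, 128, 3)
```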
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. In 2019, researchers at OpenAI came up with an amazing learning algorithm that they deployed on a robot hand that was able to dexterously manipulate a Rubik's cube even when it was severely hamstrung. A good game plan to perform such a thing is to first solve the problem in a computer simulation, where we can learn and iterate quickly, and then transfer everything the agent learned there to the real world and hope that it obtained general knowledge that indeed can be applied to real tasks. Papers like these are some of my favorites. If you're one of our core Fellow Scholars, you may remember that we talked about walking robots about 200 episodes ago. In this amazing paper, we witnessed a robot not only learning to walk, but it could also adjust its behavior and keep walking even if one or multiple legs lose power or get damaged. In this previous work, the key idea was to allow the robot to learn tasks such as walking not only in one optimal way, but to explore and build a map of many alternative motions relying on different body parts. Both of these papers teach us that working in the real world often shows us new and unexpected challenges to overcome. And this new paper offers a technique to adapt a robot arm to these challenges after it has been deployed into the real world. It is supposed to be able to pick up objects, which sounds somewhat simple these days, until we realize that new, previously unseen objects may appear in the bin with different shapes or material models. For example, reflective and refractive objects are particularly perilous, because they often show us more about their surroundings than about themselves. Lighting conditions may also change after deployment. The gripper's length or shape may change, and many, many other issues are likely to arise. Let's have a look at the lighting conditions part. Why would that be such an issue? The objects are the same, the scene looks nearly the same, so why is this a challenge? Well, if the lighting changes, the reflections change significantly, and since the robot arm sees its reflection and thinks that it is a different object, it just keeps trying to grasp it. After some fine-tuning, this method was able to increase the otherwise not too pleasant 32% success rate to 63%. Much, much better. Also, extending the gripper used to be somewhat of a problem, but as you see here, with this technique it is barely an issue anymore. Also, if we have a somewhat intelligent system and we move the position of the gripper around, nothing really changes, so we would expect it to perform well. Does it? Well, let's have a look. Unfortunately, it just seems to be rotating around without too many meaningful actions. And now, hold on to your papers, because after using this continual learning scheme, yes, it improved substantially and makes very few mistakes, and it can even pick up these tiny objects that are very challenging to grasp with this clumsy hand. This fine-tuning step typically takes an additional hour, or at most a few hours, of extra training and can be used to help these AIs learn continuously after they are deployed in the real world, thereby updating and improving themselves. It is hard to define what exactly intelligence is, but an important component of it is being able to reuse knowledge and adapt to new, unseen situations. This is exactly what this paper helps with. Absolute witchcraft. What a time to be alive. This episode has been supported by Linode.
Linode is the world's largest independent cloud computing provider. Unlike entry-level hosting services, Linode gives you full back-end access to your server, which is a step-up to powerful, fast, fully configurable cloud computing. Linode also has one-click apps that streamline your ability to deploy websites, personal VPNs, game servers, and more. If you need something as small as a personal online portfolio, Linode has your back and if you need to manage tons of clients' websites and reliably serve them to millions of visitors, Linode can do that too. What's more, they offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing, and computer graphics projects. If only I had access to a tool like this while I was working on my last few papers. To receive $20 in credit on your new Linode account, visit linode.com slash papers, or click the link in the description and give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
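Here is a generic sketch of the post-deployment fine-tuning idea from the robot arm episode above: start from a pretrained grasp-success predictor and briefly continue training on a small batch of newly collected real-world attempts (new lighting, new gripper, new objects). The model architecture, shapes, checkpoint path, and data below are assumptions for illustration, not the paper's system.

```python
# Sketch of continual fine-tuning on freshly collected deployment data.
import torch
import torch.nn as nn

model = nn.Sequential(                       # stand-in for a pretrained grasping network
    nn.Flatten(), nn.Linear(3 * 64 * 64, 128), nn.ReLU(), nn.Linear(128, 1)
)
# model.load_state_dict(torch.load("pretrained.pt"))  # hypothetical checkpoint path

def fine_tune(model, images, grasp_succeeded, steps=100, lr=1e-5):
    """A brief, low-learning-rate pass over newly collected grasp attempts."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        logits = model(images)
        loss = loss_fn(logits.squeeze(1), grasp_succeeded)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

new_images = torch.randn(64, 3, 64, 64)              # a handful of new scenes after deployment
new_labels = torch.randint(0, 2, (64,)).float()      # did each grasp succeed?
fine_tune(model, new_images, new_labels, steps=10)
```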
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, it is almost taken for granted that neural network-based learning algorithms are capable of identifying objects in images or even writing full, coherent sentences about them, but fewer people know that there is also parallel research on trying to break these systems. For instance, some of these image detectors can be fooled by adding a little noise to the image, and in some specialized cases, we can even perform something that is called the one pixel attack. Let's have a look at some examples. Changing just this one pixel can make a classifier think that this ship is a car or that this horse is a frog and, amusingly, be quite confident about its guess. Note that the choice of this pixel and its color is by no means random, and it requires solving a mathematical optimization problem to find out exactly how to perform this. Trying to build better image detectors while other researchers are trying to break them is not the only arms race we are experiencing in machine learning research. For instance, a few years ago, DeepMind introduced an incredible learning algorithm that looked at the screen much like a human would, but was able to reach superhuman levels in playing a few Atari games. It was a spectacular milestone in AI research. They have also just published a follow-up paper on this that we will cover very soon, so make sure to subscribe and hit the bell icon to not miss it when it appears in the near future. Interestingly, while these learning algorithms are being improved at a staggering pace, there is a parallel subfield where researchers endeavor to break these learning systems by slightly changing the information they are presented with. Let's have a look at OpenAI's example. Their first method adds a tiny bit of noise to a large portion of the video input, where the difference is barely perceptible, but it forces the learning algorithm to choose a different action than it would have chosen otherwise. In the other one, a different modification was used that has a smaller footprint, but is more visible. For instance, in Pong, adding a tiny fake ball to the game can coerce the learner into going down when it was originally planning to go up. It is important to emphasize that the researchers did not do this by hand. The algorithm itself is able to pick up game-specific knowledge by itself and find out how to fool the other AI using it. Both attacks perform remarkably well. However, it is not always true that we can just change these images or the playing environment to our desire to fool these algorithms. So, with this, an even more interesting question arises. Is it possible to just enter the game as a player and perform interesting stunts that can reliably win against these AIs? And with this, we have arrived at the subject of today's paper. This is the You Shall Not Pass game, where the red agent is trying to hold back the blue character and not let it cross the line. Here, you see two regular AIs duking it out. Sometimes the red wins. Sometimes the blue is able to get through. Nothing too crazy here. This is the reference case, which is somewhat well balanced. And now, hold on to your papers, because this adversarial agent that this new paper proposes does this. You may think this was some kind of glitch and I put the incorrect footage here by accident. No, this is not an error. You can believe your eyes. It basically collapses and does absolutely nothing. This can't be a useful strategy. Can it? Well, look at that.
It still wins the majority of the time. This is very confusing. How can that be? Let's have a closer look. This red agent is normally a somewhat competent player. As you can see here, it can punch the blue victim and make it fall. We now replaced this red player with the adversarial agent, which collapsed, and it almost feels like it hypnotized the blue agent into also falling. And now, squeeze your papers, because the normal red opponent's win rate was 47%, and this collapsing chap wins 86% of the time. It not only wins, but it wins much, much more reliably than a competent AI. What is this wizardry? The answer is that the adversary induces off-distribution activations. To understand what that exactly means, let's have a look at this chart. This tells us how likely it is that the actions of the AI against different opponents are normal. As you see, when this agent named Zoo plays against itself, the bars are in the positive region, meaning that normal things are happening. Things go as expected. However, that's not the case for the blue lines, which are the actions when we play against this adversarial agent, in which case the blue victim's actions are not normal in the slightest. So, the adversarial agent is really doing nothing, but it is doing nothing in a way that reprograms its opponent to make mistakes and behave close to a completely randomly acting agent. This paper is absolute insanity. I love it. And if you look here, you see that the more the blue curve improves, the better this scheme works for a given game. For instance, it is doing really well on Kick and Defend, fairly well on Sumo Humans, and there is something about the Sumo Ants game that prevents this interesting kind of hypnosis from happening. I'd love to see a follow-up paper that can pull this off a little more reliably. What a time to be alive. What you see here is an instrumentation of this exact paper we have talked about, which was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you're an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
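The episode above mentions that a barely perceptible noise pattern can flip an image classifier's decision. Below is a minimal sketch of one standard way to find such noise, the fast gradient sign method (FGSM); it is a generic illustration of the idea, not the specific attacks used in the papers discussed here, and the classifier, shapes, and image are stand-ins.

```python
# FGSM-style perturbation: nudge every pixel slightly in the direction that increases the loss.
import torch
import torch.nn as nn

classifier = nn.Sequential(                  # stand-in for a trained image classifier
    nn.Flatten(), nn.Linear(3 * 32 * 32, 10)
)
loss_fn = nn.CrossEntropyLoss()

def fgsm_perturb(image, true_label, epsilon=0.01):
    """Return a slightly perturbed copy of `image` that pushes the classifier away from the truth."""
    image = image.clone().requires_grad_(True)
    loss = loss_fn(classifier(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()   # barely perceptible change
    return adversarial.clamp(0.0, 1.0).detach()

x = torch.rand(1, 3, 32, 32)                 # a stand-in image
y = torch.tensor([3])                        # its correct class
x_adv = fgsm_perturb(x, y)
print((x_adv - x).abs().max())               # the change stays tiny, yet can flip the prediction
```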
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. With the nimble progress we are seeing in computer graphics research, it is now not only possible to perform beautiful fluid simulations, but we can also simulate more advanced effects, such as honey coiling, ferrofluids climbing up on other objects, and a variety of similar advanced effects. However, due to the complexity of these techniques, we often have to wait for several seconds, or even minutes, for every single one of these images, which often means that we have to leave our computer crunching these scenes overnight, or even wait several days for the results to appear. But what about real-time applications? Can we perform these fluid simulations in a more reasonable timeframe? Well, this technique offers detailed fluid simulations like the one here and is blazing fast. The reason for this is that, one, it uses a sparse volume representation, and two, it supports parallel computation and runs on your graphics card. So, what do these terms really mean? Let's start with the sparse part. With classical fluid simulation techniques, the simulation domain has to be declared in advance and is typically confined to a cube. This comes with several disadvantages. For instance, if we wish to have a piece of fluid or smoke coming out of this cube, we are out of luck. The simulation domain stays, so we would have to know in advance how the simulation pans out, which we don't. Now, the first thing you're probably thinking is, well, of course, make the simulation domain bigger. Yes, but. Unless special measures are taken, the bigger the domain, the more we have to compute. Even the empty parts take some computation. Ouch. This means that we have to confine the simulation to as small a domain as we can. So, this is where this technique comes into play. The sparse representation that it uses means that the simulation domain can take any form. As you see here, it just starts altering the shape of the simulation domain as the fluid splashes out of it. Furthermore, we are not only not doing work in the empty parts of the domain, which is a huge efficiency increase, but we also don't need to allocate much additional memory for these regions, which, as you will see in a minute, is a key part of the value proposition of this technique. We noted that it supports parallel computation and runs on your graphics card. The graphics card part is key, because otherwise it would run on your processor, like most of the techniques that require minutes per frame. The more complex the technique is, typically, the more likely it is that it runs on your processor, which has a few cores, or up to a few tens of cores. However, your graphics card, comparably, is almost a supercomputer, as it has up to a few hundred or even a few thousand cores to compute on. So, why not use that? Well, it's not that simple, and here is where the word parallel is key. If the problem can be decomposed into smaller independent problems, they can be allocated to many, many cores that can work independently and much more efficiently. This is exactly what this paper does with the fluid simulation. It runs it on your graphics card, and hence, it is typically 10 to 20 times faster than the equivalent techniques running on your processor. Let me try to demonstrate this with an example. Let's talk about coffee. You see, making coffee is not a very parallel task. If you ask a person to make coffee, it can typically be done in a few seconds.
However, if you suddenly put 30 people in the kitchen and ask them to make coffee, it will not only not be a faster process, but may even be slower than one person, because of two reasons. One, it is hard to coordinate 30 people, and there will be miscommunication, and two, there are very few tools and lots of people, so they won't be able to help each other, or, much worse, will just hold each other up. If we could formulate the coffee making problem such that we need 30 units of coffee, and we have 30 kitchens, we could just place one person into each kitchen, and then they could work efficiently and independently. At the risk of oversimplifying the situation, this is an intuition of what this technique does, and hence, it runs on your graphics card and is incredibly fast. Also, note that your graphics card typically has a limited amount of memory, and remember, we noted that the sparse representation makes it very gentle on memory usage, making this the perfect algorithm for creating detailed, large-scale fluid simulations quickly. Excellent design. I plan to post slowed-down versions of some of the footage that you see here to our Instagram page, so if you feel that it is something you would enjoy, make sure to follow us there. Just search for Two Minute Papers on Instagram to find it, or, as always, the link is in the video description. And finally, hold on to your papers, because if you look here, you see that the dam break scene can be simulated at about 5 frames per second, not seconds per frame, while the water drop scene can run at about 7 frames per second with a few million particles. We can, of course, scale up the simulation, and then we are back in seconds-per-frame land, but it is still blazing fast. If you look here, we can go up to 27 times faster, so in one all-nighter simulation, I can simulate what I could previously simulate in nearly a month. Sign me up. What a time to be alive. Now, note that in the early days of Two Minute Papers, about 300 to 400 episodes ago, I covered plenty of papers on fluid simulations; however, nearly no one really showed up to watch them. Before publishing any of these videos, I was like, here we go again, I knew that almost nobody would watch it, but this is a series where I set out to share the love for these papers. I believe we can learn a lot from these works, and if no one watches them, so be it. I still love doing this. But I was surprised to find out that over the years, something has changed. You Fellow Scholars somehow started to love the fluids, and I am delighted to see that. So, thank you so much for trusting the process, showing up, and watching these videos. I hope you're enjoying watching these as much as I enjoyed making them. This episode has been supported by Weights & Biases. Here, they show you how to make it to the top of Kaggle leaderboards by using Weights & Biases to find the best model faster than everyone else. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you're an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today.
Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
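To make the sparse-domain idea from the episode above a bit more concrete, here is a minimal Python sketch of a block-sparse grid that only allocates memory where the fluid actually is. The class, the block size, and the function names are hypothetical illustrations, not the paper's actual GPU data structure.

```python
# Minimal sketch of a sparse block grid for a fluid solver (illustrative only;
# names and layout are hypothetical, not taken from the paper).
import numpy as np

BLOCK = 8  # each allocated tile covers BLOCK^3 grid cells

class SparseGrid:
    def __init__(self):
        self.blocks = {}  # maps block coordinates -> dense 8x8x8 tile of values

    def _key(self, i, j, k):
        return (i // BLOCK, j // BLOCK, k // BLOCK)

    def write(self, i, j, k, value):
        # Memory is only allocated where the fluid actually is.
        tile = self.blocks.setdefault(self._key(i, j, k),
                                      np.zeros((BLOCK, BLOCK, BLOCK)))
        tile[i % BLOCK, j % BLOCK, k % BLOCK] = value

    def active_cells(self):
        # A solver only loops over allocated tiles, skipping all empty space.
        return sum(t.size for t in self.blocks.values())

grid = SparseGrid()
grid.write(3, 5, 1000, 1.0)   # fluid splashing far outside any fixed cube is fine
print(grid.active_cells())    # 512 cells allocated, not a billion
```

The paper's solver, of course, stores far more per cell and updates all active blocks in parallel on the GPU, but the allocate-on-demand idea sketched here is the same.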
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. In the last few years, the pace of progress in machine learning research has been staggering. Neural network-based learning algorithms are now able to look at an image and describe what's seen in this image, or even better, the other way around, generating images from a written description. You see here a set of results from BigGAN, a state-of-the-art image generation technique, and marvel at the fact that all of these images are indeed synthetic. The GAN part of this technique abbreviates the term generative adversarial network. This means a pair of neural networks that battle each other over time to master a task, for instance, to generate realistic-looking images when given a theme. After that, StyleGAN and even its second version appeared, which, among many other crazy good features, opened up the possibility to lock in several aspects of these images, for instance, age, pose, some facial features, and more, and then we could mix them with other images to our liking while retaining these locked-in aspects. I am loving the fact that these newer research works are moving in a direction of more artistic control and the paper we'll discuss today also takes a step in this direction. With this new work, we can ask to translate our image into different seasons, weather conditions, time of day, and more. Let's have a look. Here we have our input, and imagine that we'd like to add more clouds and translate it into a different time of day, and there we go. Wow! Or we can take this snowy landscape image and translate it into a blooming flowery field. This truly seems like black magic, so I can't wait to look under the hood and see what is going on. The input is our source image and a set of attributes with which we can describe our artistic vision. For instance, here let's ask the AI to add some more vegetation to this scene. That will do. Step number one, this artistic description is routed to a scene generation network, which hallucinates an image that fits our description. Well, that's great. As you see here, it kind of resembles the input image, but still it is substantially different. So, why is that? If you look here, you see that it also takes the layout of our image as an input, or in other words, the colors and the silhouettes describe what part of the image contains a lake, vegetation, clouds, and more. It creates the hallucination according to that, so we have more clouds, that's great, but the road here has been left out. So now we are stuck with an image that only kind of resembles what we want. What do we do now? Now, step number two, let's not use this hallucinated image directly, but apply its artistic style to our source image. Brilliant. Now we have our content, but with more vegetation. However, remember that we have the layout of the input image, that is a gold mine of information. So, are you thinking what I am thinking? Yes, including this indeed opens up a killer application. We can even change the scene around by modifying the labels on this layout, for instance, by adding some mountains, making it a grassy field, and adding a lake. Making a scene from scratch from a simple starting point is also possible. Just add some mountains, trees, a lake, and you are good to go. And then you can use the other part of the algorithm to transform it into a different season, time of day, or even make it foggy. What a time to be alive. Now, as with every research work, there is still room for improvement.
For instance, I find that it is hard to define what it means to have a cloudier image. The hallucination here indeed works according to the specification. It indeed has more clouds than this. But, for instance, here I am unsure if we have more clouds in the output. You see that perhaps it is even less than in the input. It seems that not all of them made it to the final image. Also, do fewer and denser clouds qualify as cloudier? Nonetheless, I think this is going to be an awesome tool as is, and I can only imagine how cool it will become two more papers down the line. This episode has been supported by weights and biases. In this post, they show you how to easily iterate on models by visualizing and comparing experiments in real time. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
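For readers who prefer pseudocode, here is a rough sketch of the two-step editing pipeline described above. The function names and the crude stand-in implementations are hypothetical; they only mirror the structure of the method (hallucinate from layout and attributes, then transfer the style onto the source), not the authors' actual networks.

```python
import numpy as np

def scene_generator(layout, attributes):
    # Stand-in for the scene generation network: here it just scales the
    # layout according to the requested attribute strengths.
    return layout * (1.0 + sum(attributes.values()))

def style_transfer(content, style):
    # Stand-in for the style transfer step: blend the global statistics of
    # the hallucinated "style" image into the content image.
    return (content - content.mean()) / (content.std() + 1e-8) * style.std() + style.mean()

def edit_image(source_image, layout, attributes):
    hallucination = scene_generator(layout, attributes)                  # step 1
    return style_transfer(content=source_image, style=hallucination)     # step 2

img = np.random.rand(64, 64, 3)   # source photo (toy data)
lay = np.random.rand(64, 64, 3)   # semantic layout (toy data)
print(edit_image(img, lay, {"clouds": 0.5, "vegetation": 0.8}).shape)
```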
Dear Fellow Scholars, this is two minute papers with Dr. Karo Zsolnai-Fehir. This paper is not your usual paper, but it does something quite novel. It appeared in the distilled journal, one of my favorites, which offers new and exciting ways of publishing beautiful, but unusual works aiming for exceptional clarity and readability. And of course, this new paper is no different. It claims that despite the fact that these neural network-based learning algorithms look almost unfathomably complex inside, if we look under the hood, we can often find meaningful algorithms in there. Well, I am quite excited for this, so sign me up. Let's have a look at an example. At the risk of oversimplifying the explanation, we can say that a neural network is given as a collection of neurons and connections. If you look here, you can see the visualization of three neurons. At first glance, they look like an absolute mess, don't they? Well, kind of, but upon closer inspection, we see that there is quite a bit of structure here. For instance, the upper part looks like a car window. The next one resembles a car body, and the bottom of the third neuron clearly contains a wheel detector. However, no car looks exactly like these neurons, so what does the network do with all this? Well, in the next layer, the neurons arise as a combination of neurons in the previous layers where we cherry pick parts of each neuron that we wish to use. So here, we'd read you see that we are exciting the upper part of this neuron to get the window, use roughly the entirety of the middle one, and use the bottom part of the third one to assemble this. And now we have a neuron in the next layer that will help us detect whether we see a car in an image or not. So cool. I love this one. Let's look at another example. Here you see a dog head detector, but it kind of looks like a crazy Picasso painting where he tried to paint a human from not one angle like everyone else, but from all possible angles on one image. But this is a neural network. So why engage in this kind of insanity? Well, if we have a picture of a dog, the orientation of the head of the dog can be anything. It can be a frontal image, look from the left to right, right to left, and so on. So this is a pose invariant dog head detector. What this means is that it can detect many different orientations and look here. You see that it gets very excited by all of these good boys. I think we even have a squirrel in here. Good thing this is not the only neuron we have in the network to make a decision. I hope that it already shows that this is truly an ingenious design. If you have a look at the paper in the video description, which you should absolutely do, you'll see exactly how these neurons are built from the neurons in the previous layers. The article contains way more than this. You'll see a lot more dog snouts, curve detectors, and even a follow-up article that you can have a look at and even comment on before it gets finished. A huge thank you to Chris Ola, who devotes his time away from research and uses his own money to run this amazing journal, I cannot wait to cover more of these articles in future episodes, so make sure to subscribe and hit the bell icon to never miss any of those. So finally, we understand a little more how neural networks do all these amazing things they are able to do. What a time to be alive. This episode has been supported by weights and biases. 
Here, they show you how you can visualize the training process for your boosted trees with XGBoost using their tool. If you have a closer look, you'll see that all you need is one line of code. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you're an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
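As a toy illustration of the neuron-combination idea from the episode above, the snippet below builds a crude "car detector" map by weighting spatial parts of three earlier feature maps, echoing the window, body, and wheel example. The maps and weights are made up; the real article inspects the learned weights of a trained convolutional network.

```python
# Toy illustration (not from the article) of how a next-layer "car detector"
# could combine spatially selected parts of earlier feature maps.
import numpy as np

H = W = 8
window_map = np.random.rand(H, W)   # pretend: responds to car windows
body_map   = np.random.rand(H, W)   # pretend: responds to car bodies
wheel_map  = np.random.rand(H, W)   # pretend: responds to wheels

# Weights that excite the upper part of the window map, all of the body map,
# and the lower part of the wheel map, mirroring the figure in the article.
upper = np.linspace(1.0, 0.0, H)[:, None] * np.ones((H, W))
lower = upper[::-1]

car_map = upper * window_map + 1.0 * body_map + lower * wheel_map
print(float(car_map.mean()))  # a crude "how car-like is this image" score
```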
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. After reading a physics textbook on the laws of fluid motion, with a little effort, we can make a virtual world come alive by writing a computer program that contains these laws, resulting in beautiful fluid simulations like the one you see here. The amount of detail we can simulate with these programs is increasing every year, not only due to the fact that hardware improves over time, but also the pace of progress in computer graphics research is truly remarkable. To simulate all these, many recent methods build on top of a technique called the material point method. This is a hybrid simulation technique that uses both particles and grids to create these beautiful animations. However, when used by itself, we can come up with a bunch of phenomena that it cannot simulate properly. One such example is cracking and tearing, which has been addressed in a previous paper that we covered a few videos ago. With this, we can smash Oreos, candy crabs, pumpkins, and much, much more. In a few minutes, I will show you how to combine some of these aspects of a simulation. It is going to be glorious, or maybe not so much. Just give me a moment and you'll see. Beyond that, when using this material point method, coupling problems frequently arise. This means that the sand is allowed to have an effect on the fluid, but at the same time, as the fluid sloshes around, it also moves the sand particles within. This is what we refer to as two-way coupling. If it is implemented correctly, our simulated honey will behave like real honey in the footage here and support the dipper. These are also not trivial to compute with the material point method and require specialized extensions to do so. So, what else is there to do? This amazing new paper provides an extension to handle simulating elastic objects such as hair and rubber, and you will see that it even works for skin simulations and it can handle their interactions with other materials. So why is this useful? Well, we know that we can pull off simulating a bunch of particles and a jello simulation separately, so it's time for some experimentation. This is the one I promised earlier, so let's try to put these two things together and see what happens. It seems to start out okay, particles are bouncing off of the jello and then, uh-oh, look, many of them seem to get stuck. So can we fix this somehow? Well, this is where this new paper comes into play. Look here, it starts out somewhat similarly, most of the particles get pushed away from the jello and then, look, some of them indeed keep bouncing for a long, long time and none of them are stuck to the jello. Glorious. We can see the same phenomenon here with three jello blocks of different stiffness values. With this, we can also simulate more than 10,000 bouncy hair strands and, to the delight of a computer graphics researcher, we can even throw snow into it and expect it to behave correctly. Braids work well too. And if you remember, I also promised some skin simulation and this demonstration is not only super fun, for instance, the ones around this area are perhaps the most entertaining, but the information density of this screen is just absolutely amazing. As we go from bottom to top, you can see the effect of the stiffness parameters or, in other words, the higher we are, the stiffer things become, and as we go from left to right, the effect of damping increases.
And you can see not only a bunch of combinations of these two parameters, but you can also compare many configurations against each other at a glance on the same screen. Loving it. So how long does it take to simulate all this? Well, given that we are talking about an offline simulation technique, this is not designed to run in real-time games, as the execution time is typically not measured in frames per second, but seconds per frame and sometimes even minutes per frame. However, having run simulations that contain far fewer interactions than this that took me several days to compute, I would argue that these numbers are quite appealing for a method of this class. Also note that this is one of those papers that makes the impossible possible for us and of course, as we always say around here, two more papers down the line and it will be significantly improved. For now, I am very impressed. Time to fire up some elaborate jello simulations. What a time to be alive. This episode has been supported by Lambda. If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they are offering GPU Cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Our thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
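Since this paper builds on the material point method's hybrid particle-and-grid idea, here is a tiny one-dimensional particle-to-grid transfer, just to show what "particles scatter their mass and momentum onto a grid" means in code. This is a bare-bones illustration with hypothetical variable names, not the 3D, coupled schemes used in the papers above.

```python
# A tiny 1D particle-to-grid transfer, the core of the hybrid (material point
# method style) simulations discussed above. Purely illustrative.
import numpy as np

n_cells, dx = 16, 1.0 / 16
positions  = np.random.rand(100)          # particle positions in [0, 1)
velocities = np.random.randn(100) * 0.1   # particle velocities
grid_mass = np.zeros(n_cells + 1)
grid_mom  = np.zeros(n_cells + 1)

for x, v in zip(positions, velocities):
    i = int(x / dx)                  # left grid node of the particle's cell
    w_right = x / dx - i             # linear interpolation weights
    w_left  = 1.0 - w_right
    grid_mass[i] += w_left;      grid_mom[i] += w_left * v
    grid_mass[i + 1] += w_right; grid_mom[i + 1] += w_right * v

# Forces, collisions and coupling are resolved on the grid, then velocities
# are transferred back to the particles (the grid-to-particle step).
grid_vel = grid_mom / np.maximum(grid_mass, 1e-12)
print(grid_vel)
```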
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. A few years ago, we mainly saw neural network-based techniques being used for image classification. This means that they were able to recognize objects, for instance animals and traffic signs, in images. But today, with the incredible pace of machine learning research, we now have a selection of neural network-based techniques for not only classifying images, but also synthesizing them. The images that you see here and throughout this video were generated by one of these learning-based methods. But of course, in this series, we are always obsessed with artistic control, or, in other words, how much of a say we have in the creation of these images. After all, getting thousands and thousands of images without any overarching theme or artistic control is hardly useful for anyone. One way of being able to control the outputs is to use a technique that is capable of image translation. What you see here is a work by the name CycleGAN. It could transform apples into oranges, zebras into horses, and more. It was called CycleGAN because it introduced a cycle consistency loss function. This means that if we convert a summer image to a winter image, and then back to a summer image, we should get the same image back, or at least something very similar. If our learning system obeys this principle, the output quality of the translation is going to be significantly better. Today, we are going to study a more advanced image translation technique that takes this further. This paper is amazingly good at daytime image translation. It looks at a selection of landscape images, and then, as you see here, it learns to reimagine our input photos as if they were taken at different times of the day. I love how clouds form and move over time in the synthesized images, and the night sky with the stars is also truly a sight to behold. But wait, CycleGAN and many other follow-up works did image translation. This also does image translation. So, what's really new here? Well, one, this work proposes a novel up-sampling scheme that helps create output images with lots and lots of detail. Two, it can also create not just a bunch of images a few hours apart, but it can also make beautiful time-lapse videos where the transitions are smooth. Oh my goodness, I love this. And three, the training happens by shoveling 20,000 landscape images into the neural network, and it becomes able to perform this translation task without labels. This means that we don't have to explicitly search for all the daytime images and tell the learner that these are daytime images and these other images are not. This is amazing, because the algorithm is able to learn by itself without labels, but it is also easier to use because we can feed in lots and lots more training data without having to label these images correctly. Note that this daytime translation task is used as a testbed to demonstrate that this method can be reused for other kinds of image translation tasks. The fact that it can learn on its own and still compete with other works in this area is truly incredible. Due to this kind of generality, it can also perform other related tasks. For instance, it can perform style transfer, or in other words, not just change the time of day, but reimagine our pictures in the style of famous artists.
I think with this paper, we have a really capable technique on our hands that is getting closer and closer to the point where it can see use in mainstream software packages and image editors. That would be absolutely amazing. If you have a closer look at the paper, you will see that it tries to minimize seven things at the same time. What a time to be alive! This episode has been supported by weights and biases. Here, they show you how to build a proper convolutional neural network for image classification and how to visualize the performance of your model. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to weights and biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
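The cycle-consistency loss mentioned above is simple enough to write down directly. Below is a minimal sketch in PyTorch: G and F are stand-in generators (the real ones are deep convolutional networks), and the loss simply asks that translating there and back recovers the original images.

```python
# Sketch of the cycle-consistency idea: summer -> winter -> summer should
# return (approximately) the original image, and the same the other way.
import torch
import torch.nn.functional as F_loss

def cycle_consistency_loss(G, F, summer, winter):
    # G translates summer -> winter, F translates winter -> summer.
    loss_summer = F_loss.l1_loss(F(G(summer)), summer)
    loss_winter = F_loss.l1_loss(G(F(winter)), winter)
    return loss_summer + loss_winter

# Toy stand-ins so the sketch runs; real generators are much larger networks.
G = torch.nn.Conv2d(3, 3, kernel_size=1)
F = torch.nn.Conv2d(3, 3, kernel_size=1)
summer = torch.rand(4, 3, 64, 64)
winter = torch.rand(4, 3, 64, 64)
print(cycle_consistency_loss(G, F, summer, winter).item())
```

In training, this term is added to the usual adversarial losses, which is what pushes the translations toward being both realistic and reversible.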
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Neural network-based learning algorithms are making great leaps in a variety of areas. And many of us are wondering whether it is possible that one day we'll get a learning algorithm, show it a video, and ask it to summarize it, and we can then decide whether we wish to watch it or not. Or just describe what we are looking for, and it would fetch the appropriate videos for us. I think today's paper offers a good pointer as to whether we can expect this to happen, and in a few moments we'll find out together why. A few years ago, these neural networks were mainly used for image classification, or in other words, they would tell us what kinds of objects are present in an image. But they are capable of so much more, for instance, these days we can get a recurrent neural network to write proper sentences about images, and it would work well for even highly non-trivial cases. For instance, it is able to infer that work is being done here, or that a ball is present in this image even if the vast majority of the ball itself is concealed. The even crazier thing about this is that this work is not recent at all, this is from a more than four-year-old paper. Insanity. The first author of this paper was Andrej Karpathy, one of the best minds in the game, who is currently the director of AI at Tesla, and works on making these cars able to drive themselves. So as amazing as this work was, progress in machine learning research keeps on accelerating. So let's have a look at this newer paper that takes it a step further and has a look not at an image, but a video, and explains what happens therein. Very exciting. Let's have a look at an example. This was the input video and let's stop right at the first statement. The red sphere enters the scene. So, it was able to correctly identify not only what we are talking about in terms of color and shape, but also knows what this object is doing as well. That's a great start. Let's proceed further. Now it correctly identifies the collision event with the cylinder. Then, this cylinder hits another cylinder, very good, and look at that. It identifies that the cylinder is made of metal. I like that a lot because this particular object is made of a very reflective material, which shows us more about the surrounding room than the object itself. But we shouldn't only let the AI tell us what is going on on its own terms, let's ask questions and see if it can answer them correctly. So first, let's ask what the material of the last object that hit the cyan cylinder is, and it correctly finds that the answer is metal. Awesome. Now let's take it a step further and stop the video here. Can it predict what is about to happen after this point? Look, it indeed can. This is remarkable because of two things. One, if we look under the hood, we see that to be able to pull this off, it not only has to understand what objects are present in the video and predict how they will interact, but also has to parse our questions correctly, put it all together and form an answer based on all this information. If any of these tasks works unreliably, the answer will be incorrect. And two, there are many other techniques that are able to do some of these tasks, so why is this one particularly interesting? Well, look here. This new method is able to do all of these tasks at the same time.
So there we go, if this improves further, we might become able to search YouTube videos by just typing something that happens in the video and it would be able to automatically find it for us. That would be absolutely amazing. What a time to be alive. This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. Unlike entry-level hosting services, Linode gives you full backend access to your server, which is a step up to powerful, fast, fully configurable cloud computing. Linode also has one click apps that streamline your ability to deploy websites, personal VPNs, game servers and more. If you need something as small as a personal online portfolio, Linode has your back and if you need to manage tons of clients' websites and reliably serve them to millions of visitors, Linode can do that too. What's more, they offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing and computer graphics projects. If only I had access to a tool like this while I was working on my last few papers. To receive $20 in credit on your new Linode account, visit linode.com slash papers or click the link in the description and give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. With the ascendancy of neural network-based learning algorithms, we are now able to take on and defeat problems that sounded completely impossible just a few years ago. For instance, now we can create deepfakes, or in other words, we can record a short video of ourselves and transfer our gestures to a target subject, and this particular technique is so advanced that we don't even need a video of our target, just one still image. So we can even use paintings, images of sculptures, so yes, even the Mona Lisa works. However, don't despair, it's not all doom and gloom. A paper by the name FaceForensics contains a large dataset of original and manipulated video pairs. As this offered a ton of training data for real and forged videos, it became possible to use these to train a deepfake detector. You can see it here in action as these green-to-red colors showcase regions that the AI correctly thinks were tampered with. However, if we have access to a deepfake detector, we can also use it to improve our deepfake-creating algorithms. And with this, an arms race has begun. The paper we are looking at today showcases this phenomenon. If you look here, you see this footage, which is very visibly fake, and the algorithm correctly concludes that. Now, if you look at this video, which to us looks like the very same video, yet it suddenly became real, or at least the AI thinks so, of course, incorrectly. This is very confusing. So what really happened here? To understand what is going on here, we first have to talk about ostriches. So what do ostriches have to do with this insanity? Let me try to explain that. An adversarial attack on a neural network can be performed as follows. We present such a classifier network with an image of a bus, and it will successfully tell us that yes, this is indeed a bus. Nothing too crazy here. Now, we show it another image of a bus, but a bus plus some carefully crafted noise that is barely perceptible, that forces the neural network to misclassify it as an ostrich. I will stress that this is not any kind of noise, but the kind of noise that exploits biases in the neural network, which is by no means trivial to craft. However, if we succeed at that, this kind of adversarial attack can be pulled off on many different kinds of images. Everything that you see here on the right will be classified as an ostrich by the neural network these noise patterns were created for. And this can now be done not only on images, but videos as well, hence what happened a minute ago is that the deepfake video has been adversarially modified with noise to bypass such a detector. If you look here, you see that the authors have chosen excellent examples because some of these are clearly forged videos, which are initially recognized by the detector algorithm, but after adding the adversarial noise to it, the detector fails spectacularly. To demonstrate the utility of their technique, they have chosen the other examples to be much more subtle. Now, let's talk about one more question. We were talking about a detector algorithm, but there is not one detector out there, there are many, and we can change the wiring of these neural networks to have even more variation. So, what does it mean to fool a detector? Excellent question.
The success rate of these adversarial videos, indeed, depends on the deep fake detector we are up against, but hold on to your papers because this success rate on uncompressed videos is over 98%, which is amazing, but note that when using video compression, this success rate may drop to 58% to 92% depending on the detector. This means that video compression and some other tricks involving image transformations still help us in defending against these adversarial attacks. What I also really like about the paper is that it discusses white and black box attacks separately. In the white box case, we know everything about the inner workings of the detector, including the neural network architecture and parameters, this is typically the easier case. But the technique also does really well in the black box case where we are not allowed to look under the hood of the detector, but we can show it a few videos and see how it reacts to them. This is a really cool work that gives us a more nuanced view about the current state of the art around deepfakes and deep fake detectors. I think it is best if we all know about the fact that these tools exist. If you wish to help us with this endeavor, please make sure to share this with your friends. Thank you. This episode has been supported by Lambda. If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they are offering GPU Cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com, slash papers, and sign up for one of their amazing GPU instances today. Our thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
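The "carefully crafted noise" discussed above is usually computed from the gradient of the classifier itself. The snippet below shows the textbook fast gradient sign method on a toy model; the paper's attack on video detectors is more elaborate (and also covers black box settings), so treat this only as the core principle, with all model and variable names hypothetical.

```python
# Minimal FGSM-style adversarial perturbation: nudge the input a tiny,
# barely perceptible amount in the direction that increases the model's loss.
import torch

def fgsm(model, x, label, eps=0.03):
    x = x.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step along the sign of the gradient, then keep pixel values valid.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Toy "detector": flattens a frame and outputs two logits (real / fake).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 2))
x = torch.rand(1, 3, 32, 32)      # a frame the detector currently handles
label = torch.tensor([0])         # the detector's current (correct) verdict
x_adv = fgsm(model, x, label)     # a nearly identical frame crafted to mislead it
print((x_adv - x).abs().max().item())   # the perturbation stays tiny
```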
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. About two years ago, we worked on a neural rendering system which would perform light transport on this scene and guess how it would change if we changed the material properties of this test object. It was able to closely match the output of a real light simulation program and it was near instantaneous, as it took less than 5 milliseconds instead of the 40 to 60 seconds the light transport algorithm usually requires. This technique went by the name Gaussian material synthesis and the learned quantities were material properties. But this new paper sets out to learn something more difficult and also more general. We are talking about a 5D neural radiance field representation. So what does this mean exactly? What this means is that we have three dimensions for location and two for view direction, or, in short, the input is where we are in space and what we are looking at, and the output is the resulting image of this view. So here we take a bunch of this input data, learn it and synthesize new, previously unseen views of not just the materials in the scene but the entire scene itself. And here we are talking not only about digital environments but about real scenes as well. Now that's quite a value proposition so let's see if it can live up to this promise. Wow! So good! Love it! But what is it really that we should be looking at? What makes a good output here? The most challenging part is writing an algorithm that is able to reproduce delicate high-frequency details while having temporal coherence. So what does that mean? Well, in simpler words we are looking for sharp and smooth image sequences. Perfectly matte objects are easier to learn here because they look the same from all directions, while glossier, more reflective materials are significantly more difficult because they change a great deal as we move our head around and this highly varying information is typically not present in the learned input images. If you read the paper you'll see these referred to as non-Lambertian materials. The paper and the video contain a ton of examples of these view-dependent effects to demonstrate that these difficult scenes are handled really well by this technique. Refractions also look great. Now if we define difficulty as things that change a lot when we change our position or view direction a little, not only the non-Lambertian materials are going to give us headaches, occlusions can also be challenging as well. For instance you can see here how well it handles the complex occlusion situation between the ribs of the skeleton here. It also has an understanding of depth and this depth information is so accurate that we can do these nice augmented reality applications where we put a new virtual object in the scene and it correctly determines whether it is in front of or behind the real objects in the scene. Kind of like what these new iPads do with their lidar sensors, but without the sensor. As you see, this technique smokes the competition. So what do you know? Entire real-world scenes can be reproduced from only a few views by using neural networks and the results are just out of this world. Absolutely amazing. What you see here is an instrumentation of this exact paper we have talked about, which was made by weights and biases. I think organizing these experiments really showcases the usability of their system. Also, weights and biases provides tools to track your experiments in your deep learning projects.
Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you are an academic or have an open source project you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
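To make the 5D-input idea from the episode above more tangible, here is a heavily simplified sketch: a small network maps position and view direction to a color and a density, and a ray is rendered by alpha-compositing the samples along it. Positional encoding, hierarchical sampling and training are omitted, and all sizes and names are made up, so this is only an illustration of the principle, not the paper's architecture.

```python
# Toy neural radiance field: 5D input (x, y, z, theta, phi) -> colour + density,
# followed by a crude volume rendering step along one camera ray.
import torch

mlp = torch.nn.Sequential(
    torch.nn.Linear(5, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 4),          # outputs RGB colour + density sigma
)

def render_ray(samples_5d, deltas):
    out = mlp(samples_5d)
    rgb, sigma = torch.sigmoid(out[:, :3]), torch.relu(out[:, 3])
    alpha = 1.0 - torch.exp(-sigma * deltas)                 # opacity per sample
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans                                  # contribution of each sample
    return (weights[:, None] * rgb).sum(dim=0)               # final pixel colour

samples = torch.rand(64, 5)       # 64 points sampled along one ray
deltas = torch.full((64,), 0.02)  # distance between consecutive samples
print(render_ray(samples, deltas))
```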
Your fellow strollers, this is too many papers with this man's name that isn't possible to pronounce. My name is Dr. Károly Zsolnai-Fehér and indeed it seems that pronouncing my name requires some advanced technology. So, what was this? I promise to tell you in a moment, but to understand what happened here, first let's have a look at this deepfake technique we showcased a few videos ago. As you see, we are at a point where our mouth, head and eye movements are also realistically translated to a chosen target subject and perhaps the most remarkable part of this work was that we don't even need a video of this target person, just one photograph. However, these deepfake techniques mainly help us in transferring video content. So, what about voice synthesis? Is it also as advanced as this technique we are looking at? Well, let's have a look at an example and you can decide for yourself. This is a recent work that goes by the name Tacotron 2 and it performs AI-based voice cloning. All this technique requires is a 5-second sound sample of us and it is able to synthesize new sentences in our voice as if we uttered these words ourselves. Let's listen to a couple of examples. The Norsemen considered the rainbow as a bridge over which the gods passed from Earth to their home in the sky. Take a look at these pages for Cricut Creek Drive. There are several listings for gas station. Here's the forecast for the next four days. Wow, these are truly incredible. The timbre of the voice is very similar and it is able to synthesize sounds and consonants that have to be inferred because they were not heard in the original voice sample. And now let's jump to the next level and use a new technique that takes a sound sample and animates the video footage as if the target subject said it themselves. This technique is called Neural Voice Puppetry and even though the voices here are synthesized by this previous Tacotron 2 method that you heard a moment ago, we shouldn't judge this technique by its audio quality, but by how well the video follows these given sounds. Let's go. The President of the United States is the head of state and head of government of the United States, indirectly elected to a four-year term by the people through the Electoral College. The office holder leads the executive branch of the federal government and is the commander in chief of the United States Armed Forces. There are currently four living former presidents. If you decide to stay until the end of this video, there will be another fun video sample waiting for you there. Now note that this is not the first technique to achieve results like this, so I can't wait to look under the hood and see what's new here. After processing the incoming audio, the gestures are applied to an intermediate 3D model which is specific to each person, since each speaker has their own way of expressing themselves. You can see this intermediate 3D model here, but we are not done yet, we feed it through a neural renderer and what this does is apply this motion to the particular face model shown in the video. You can imagine the intermediate 3D model as a crude mask that models the gestures well, but does not look like the face of anyone, and the neural renderer adapts this mask to our target subject. This includes adapting it to the current resolution, lighting, face position and more, all of which is specific to what is seen in the video. What is even cooler is that this neural rendering part runs in real time. So what do we get from all this?
Well, one, superior quality, but at the same time, it also generalizes to multiple targets. Have a look here. You know, I think we're in a moment of history where probably the most important thing we need to do is to bring the country together and one of the skills that I bring to bear. And the list of great news is not over yet, you can try it yourself. The link is available in the video description. Make sure to leave a comment with your results. To sum up, by combining multiple existing techniques, we can now perform joint video and audio synthesis for a target subject, and it is important that everyone knows about this fact. This episode has been supported by weights and biases. Here they show you how to use their tool to perform face swapping and improve your model that performs it. Also, weights and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you're an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Thanks to weights and biases for their long-standing support and for helping us make better videos for you.
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. We showcased this paper just a few months ago, which was about creating virtual characters with a skeletal system, adding more than 300 muscles and teaching them to use these muscles to kick, jump, move around and perform other realistic human movements. It came with really cool insights as it could portray how increasing the amount of weight to be lifted changes what muscles are being trained during a workout. These agents also learned to jump really high and you can see a drastic difference between the movement required for a mediocre jump and an amazing one. Beyond that, it showed us how these virtual characters would move if they were hamstrung by bone deformities, a stiff ankle or muscle deficiencies, and let us watch them learn to walk despite these setbacks. We could even have a look at the improvements after a virtual surgery takes place. So now, how about an even more elaborate technique that focuses more on the muscle simulation part? The ropes here are simulated in a way where the only interesting property of the particles holding them together is their position. Cosserat rod simulations are an improvement because they also take into consideration the orientation of the particles and hence can simulate twists as well. And this new technique is called Viper, and it adds a scale property to these particles and hence takes into consideration stretching and compression. What does that mean? Well, it means that this can be used for a lot of muscle-related simulation problems that you will see in a moment. However, before that, an important part is inserting these objects into our simulations. The cool thing is that we don't need to get an artist to break up these surfaces into muscle fibers. That would not only be too laborious, but of course would also require a great deal of anatomical knowledge. Instead, this technique does all this automatically, a process that the authors call viperization. So, in goes the geometry, and out comes a nice muscle model. This really opens up a world of really cool applications. For instance, one such application is muscle movement simulation: when we attach the muscles to bones and move the character, the muscles move and contract accurately. Two, it can also perform muscle growth simulations. And three, we get more accurate soft-body physics. Or in other words, we can animate gooey characters like this octopus. Okay, that all sounds great, but how expensive is this? Do we have to wait a few seconds to minutes to get this? No, no, not at all. This technique is really efficient and runs in milliseconds, so we can throw in a couple more objects. And by couple, a computer graphics researcher always means a couple dozen more, of course. And in the meantime, let's look carefully at the simulation timings. It starts from around 8 to 9 milliseconds per frame and with all these octopi, we are still hovering around 10 milliseconds per frame. That's 100 frames per second, which means that the algorithm scales with the complexity of these scenes really well. This is one of those rare papers that is written both very precisely and beautifully. Make sure to have a look in the video description. The source code of the project is also available. And with this, I hope, we will get even more realistic characters with real muscle models in our computer games and real-time applications. What a time to be alive. This episode has been supported by Lambda.
If you're a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they are offering GPU Cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold on to your papers because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Our thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
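One way to picture the progression described above (plain particles, then Cosserat rods, then the scale-carrying rod particles) is as a growing per-particle state. The sketch below is purely illustrative; the field names are hypothetical and the actual solver stores and updates these quantities very differently.

```python
# Illustrative data layout: plain rope particles store only positions,
# Cosserat-rod particles add an orientation (so twist can be represented),
# and the Viper-style particles add a per-particle scale so stretching and
# compression of muscle fibres can be represented as well.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class RopeParticle:
    position: np.ndarray                      # (3,) is enough for simple ropes

@dataclass
class CosseratParticle(RopeParticle):
    orientation: np.ndarray = field(
        default_factory=lambda: np.array([1.0, 0.0, 0.0, 0.0]))  # unit quaternion

@dataclass
class ViperParticle(CosseratParticle):
    scale: float = 1.0                        # local stretch/compression of the fibre

p = ViperParticle(position=np.zeros(3))
p.scale = 0.8                                 # the fibre is compressed to 80% here
print(p)
```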
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Have you heard the saying that whenever we look into the mirror, strictly speaking, we don't really see ourselves, but we see ourselves from the past, from a few nanoseconds ago? Is that true? If so, why? This is indeed true, and the reason for this is that the speed of light is finite and it has to travel back from the mirror to our eyes. If you feel that this is really hard to imagine, you are in luck because a legendary paper from 2013 by the name of femto-photography captured this effect. I would say it is safe to start holding onto your papers from this point basically until the end of this video. Here you can see a super high-speed camera capturing how a wave of light propagates through a bottle, most of it makes it through, and some gets absorbed by the bottle cap. But this means that this mirror example we talked about doesn't have to remain just a thought experiment, we can even witness it ourselves. Yep, toy first, mirror image second. Approximately a nanosecond apart. So if someone says that you look old, you have an excellent excuse now. The first author of this work was Andreas Velten, who worked on this at MIT, and he is now a professor leading an incredible research group at the University of Wisconsin-Madison. But wait, since it is possible to create light transport simulations in which we simulate the path of many, many millions of light rays to create a beautiful photo-realistic image, Adrián Jarabo thought that he would create a simulator that wouldn't just give us the final image, but would show us the propagation of light in a digitally simulated environment. As you see here, with this, we can create even crazier experiments because we are not limited to real-world lighting conditions and the limitations of the camera. The beauty of this technique is just unparalleled. He calls this method transient rendering, and this particular work is tailored to excel at rendering caustic patterns. A caustic is a beautiful phenomenon in nature where curved surfaces reflect or refract light, thereby concentrating it to a relatively small area. I hope that you are not surprised when I say that this is the favorite phenomenon of most light transport researchers. Now, to render these caustics, we need a super efficient technique to be able to pull this off. For instance, back in 2013, we showcased a fantastic scene made by Vlad Miller that was a nightmare to compute and it took a community effort and more than a month to accomplish it. Beyond that, the transient renderer only uses very little memory, builds on the photon beams technique we talked about a few videos ago, and always arrives at a correct solution given enough time. Bravo! And we can do all this through the power of science. Isn't it incredible? And if you feel a little stranded at home and are yearning to learn more about light transport, I held a master-level course on light transport simulations at the Technical University of Vienna. Since I was always teaching it to a handful of motivated students, I thought that the teachings shouldn't only be available for the privileged few who can afford a college education, but should be available for everyone. So, the course is now available free of charge for everyone, no strings attached, so make sure to click the link in the video description to get started. We write a full light simulation program from scratch there and learn about physics, the world around us, and more.
This episode has been supported by weights and biases. In this post, they show you how to build and track a simple neural network in Keras to recognize characters from the Simpson series. You can even fork this piece of code and start right away. Also, weights and biases provide tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you're an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wnb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
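The "transient" part of transient rendering boils down to bookkeeping: every light path carries an arrival time equal to its length divided by the speed of light, and paths are binned by that time instead of being summed into one image. Here is a toy calculation of that idea, with made-up geometry, showing why the mirror image arrives later than the direct view.

```python
# Toy example: light paths binned by travel time instead of summed into one image.
import numpy as np

C = 299_792_458.0                        # speed of light in m/s

def arrival_time(path_vertices):
    segs = np.diff(np.asarray(path_vertices, dtype=float), axis=0)
    return np.linalg.norm(segs, axis=1).sum() / C   # seconds of travel

# A direct path and a slightly longer mirror path toward the same camera.
direct = [(0, 0, 0), (2.0, 0, 0)]
mirror = [(0, 0, 0), (1.0, 1.0, 0), (2.0, 0, 0)]

bins = np.zeros(100)                     # 100 time bins of 1 nanosecond each
for path, energy in [(direct, 1.0), (mirror, 0.4)]:
    t_ns = arrival_time(path) * 1e9
    bins[int(t_ns)] += energy            # the mirror contribution lands in a later bin

print(np.nonzero(bins)[0])               # two different bins: the reflection is "late"
```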
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. It is important for you to know that everybody can make deepfakes now. You can turn your head around, mouth movements are looking great, and eye movements are also translated into the target footage. So of course, as we always say, two more papers down the line and it will be even better and cheaper than this. As you see, some papers are so well done and are so clear that they just speak for themselves. This is one of them. To use this technique, all you need to do is record a video of yourself, add just one image of the target subject, run this learning-based algorithm, and there you go. If you stay until the end of this video, you will see even more people introducing themselves as me. As noted, many important gestures are being translated, such as head, mouth, and eye movement, but what's even better is that even full body movement works. Absolutely incredible. Now there are plenty of techniques out there that can create deepfakes, many of which we have talked about in this series, so what sets this one apart? Well, one, most previous algorithms required additional information, for instance, facial landmarks or a pose estimation of the target subject. This one requires no such prior knowledge of the image. As a result, this technique becomes so much more general. We can create high quality deepfakes with just one photo of the target subject, make ourselves dance like a professional, and what's more, hold on to your papers because it also works on non-humanoid and cartoon models, and even that's not all, we can even synthesize an animation of a robot arm by using another one as a driving sequence. So why is it that it doesn't need all this additional information? Well, if we look under the hood, we see that it is a neural network-based method that generates all this information by itself. It identifies what kind of movements and transformations are taking place in our driving video. You can see that the learned key points here follow the motion of the videos really well. Now, we pack up all this information and send it over to the generator to warp the target image appropriately, taking into consideration possible occlusions that may occur. This means that some parts of the image may now be uncovered where we don't know what the background should look like. Normally, we would do this by hand with an image-inpainting technique, for instance, you see the legendary PatchMatch algorithm here doing it; however, in this case, the neural network does it automatically by itself. If you are looking for flaws in the output, these will be important regions to look at. And it not only requires less information than previous techniques, but it also outperforms them significantly. Yes, there is still room to improve this. For instance, the sudden head rotation here seems to generate an excessive amount of visual artifacts. The source code and even an example Colab notebook are available; I think it is one of the most accessible papers in this area, so make sure to have a look in the video description and try to run your own experiments. Let me know in the comments how they went or feel free to drop by at our Discord server where all of you Fellow Scholars are welcome to discuss ideas and learn together in a kind and respectful environment. The link is available in the video description, it is completely free, and if you have joined, make sure to leave a short introduction.
Now, of course, beyond the many amazing use cases of this in reviving deceased actors, creating beautiful visual art, redubbing movies and more, unfortunately, there are people around the world who are rubbing their palms together in excitement to use this to their advantage. So, you may ask, why make these videos on deepfakes? Why spread this knowledge, especially now with the source code? Well, I think step number one is to make sure to inform the public that these deepfakes can now be created quickly and inexpensively and they don't require a trained scientist anymore. If this can be done, it is of utmost importance that we all know about it. Then, beyond that, step number two, as a service to the public, I attend EU and NATO conferences and inform key political and military decision makers about the existence and details of these techniques to make sure that they also know about these, and using that knowledge, they can make better decisions for us. You see me doing it here. And again, you see this technique in action here to demonstrate that it works really well for video footage in the wild. Note that these talks and consultations all happen free of charge and if they keep inviting me, I'll keep showing up to help with this in the future as a service to the public. The cool thing is that later, over dinner, they tend to come back to me with a summary of their understanding of the situation and I highly appreciate the fact that they are open to what we scientists have to say. And now, please enjoy the promised footage. Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. It is important for you to know that everybody can make deepfakes now. You can turn your head around, mouth movements are looking great, and eye movements are also translated into the target footage. And of course, as we always say, two more papers down the line and it will be even better and cheaper than this. This episode has been supported by weights and biases. Here, they show you how you can use Sweeps, their tool to search through high-dimensional parameter spaces and find the best-performing model. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
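As a very rough sketch of the keypoint idea behind this technique, the snippet below transfers the relative motion of driving-video keypoints onto the keypoints of a single source image. It is a simplified stand-in with made-up numbers, not the paper's model, which additionally estimates local affine transformations, a dense motion field, and an occlusion map.

```python
# Rough sketch: motion is expressed as how the driving video's keypoints move
# relative to its first frame, and that relative motion is applied to the
# keypoints detected in the single source image.
import numpy as np

def transfer_motion(src_kp, drv_kp_first, drv_kp_now):
    # Move each source keypoint by the displacement observed in the driving video.
    return src_kp + (drv_kp_now - drv_kp_first)

src_kp       = np.array([[0.30, 0.40], [0.70, 0.40], [0.50, 0.70]])  # e.g. eyes, mouth
drv_kp_first = np.array([[0.32, 0.42], [0.68, 0.42], [0.50, 0.72]])
drv_kp_now   = np.array([[0.35, 0.45], [0.71, 0.45], [0.53, 0.75]])  # head moved up and right

print(transfer_motion(src_kp, drv_kp_first, drv_kp_now))
# A generator network would then warp the source image according to these
# displaced keypoints and inpaint any regions that become uncovered.
```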
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. When we, humans, look at an image or a piece of video footage, such as this one, we all understand that this is just a 2D projection of the world around us. So much so that if we have the time and patience, we could draw a depth map that describes the distance of each object from the camera. This information is highly useful because we can use it to create real-time defocus effects for virtual reality and computer games. Or even perform this Ken Burns effect in 3D, or in other words, zoom and pan around in a photograph. But with a beautiful twist, because in the meantime we can reveal the depth of the image. However, when we show the same images to a machine, all it sees is a bunch of numbers. Fortunately, with the ascendancy of neural network-based learning algorithms, we now have a chance to do this reasonably well. For instance, we discussed this depth perception neural network in an earlier episode, which was trained using a large number of input-output pairs, where the inputs are a bunch of images, and the outputs are their corresponding depth maps for the neural network to learn from. The authors implemented this with a random scene generator, which creates a bunch of these crazy configurations with a lot of occlusions and computes via simulation the appropriate depth map for them. This is what we call supervised learning because we have all these input-output pairs. The solutions are given in the training set to guide the training of the neural network. This is supervised learning, machine learning with crutches. We can also use this depth information to enhance the perception of self-driving cars, but this application is not like the previous two I just mentioned. It is much, much harder because in the earlier supervised learning example, we trained a neural network in a simulation, and then we also used it later in a computer game, which is, of course, another simulation. We control all the variables and the environment here. However, self-driving cars need to be deployed in the real world. These cars also generate a lot of video footage with their sensors, which could be fed back to the neural networks as additional training data, if we had the depth maps for them, which, of course, unfortunately, we don't. And now, with this, we have arrived at the concept of unsupervised learning. Unsupervised learning is proper machine learning, where no crutches are allowed: we just unleash the algorithm on a bunch of data with no labels, and if we do it well, the neural network will learn something useful from it. It is very convenient because any video we have may be used as training data. That would be great, but we have a tiny problem, and that tiny problem is that this sounds impossible. Or it may have sounded impossible until this paper appeared. This work promises us no less than unsupervised depth learning from videos. Since this is unsupervised, it means that during training, all it sees is unlabeled videos from different viewpoints, and somehow figures out a way to create these depth maps from it. So, how is this even possible? Well, it is possible by adding just one ingenious idea. The idea is that since we don't have the labels, we can't teach the algorithm how to be right, but instead we can teach it to be consistent. That doesn't sound like much, does it?
Well, it makes all the difference because if we ask the algorithm to be consistent, it will find out that a good way to be consistent is to be right. While we are looking at some results to make this clearer, let me add one more real-world example that demonstrates how cool this idea is. Imagine that you are a university professor overseeing an exam in mathematics and someone tells you that for one of the problems, most of the students give the same answer. If this is the case, there is a good chance that this was the right answer. It is not a hundred percent certain that this is the case, but if most of the students have the same answer, it is much more unlikely that they have all failed the same way. There are many different ways to fail, but there is only one way to succeed. Therefore, if there is consistency, often there is success. And this simple but powerful thought leads to far-reaching conclusions. Let's have a look at some more results. Woohoo! Now this is something. Let me explain why I am so excited for this. This is the input image and this is the perfect depth map that is concealed from our beloved algorithm and is there for us to be able to evaluate its performance. These are two previous works, both use crutches. The first was trained via supervised learning by showing it input-output image pairs with depth maps and it does reasonably well, while the other one gets even less supervision, a weaker crutch if you will, and it came up with this. Now, the unsupervised new technique was not given any crutches and came up with this. This is a very accurate reproduction of the true depth map. So what do you know? This neural network-based method looks at unlabeled videos and finds a way to create depth maps by not trying to be right, but trying to be consistent. This is one of those amazing papers where one simple, brilliant idea can change everything and make the impossible possible. What a time to be alive! What you see here is an instrumentation of this depth learning paper we have talked about. This was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system. Also, Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you're an academic or have an open-source project, you can use their tools for free. It is really as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
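To make the consistency idea above a bit more concrete, here is a minimal sketch of the kind of view-synthesis loss such unsupervised depth methods build on. It is not the exact formulation from the paper; the warping routine is left as a placeholder, and a real implementation would differentiably reproject pixels using the predicted depth and camera motion.

```python
import torch
import torch.nn.functional as F

def view_consistency_loss(frame_t, frame_s, depth_t, warp):
    """Predict depth for frame_t, use it (plus camera motion) to warp the
    neighboring frame_s into frame_t's viewpoint, and penalize the photometric
    difference. No depth labels are needed; agreeing with the other view is the
    only training signal. `warp` stands in for a differentiable inverse warp."""
    reconstruction = warp(frame_s, depth_t)
    return F.l1_loss(reconstruction, frame_t)

# Toy usage with an identity "warp" just to show the plumbing (hypothetical).
frames = torch.rand(2, 1, 3, 32, 32)   # two neighboring video frames
depth = torch.rand(1, 1, 32, 32)       # predicted depth for the first frame
loss = view_consistency_loss(frames[0], frames[1], depth, lambda s, d: s)
print(loss.item())
```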
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. When I was growing up, IQ tests were created by humans to test the intelligence of other humans. If someone told me just 10 years ago that algorithms would create IQ tests to be taken by other algorithms, I wouldn't have believed a word of it. Yet, just a year ago, scientists at DeepMind created a program that is able to generate a large amount of problems that test abstract reasoning capabilities. They are inspired by human IQ tests with all these questions about sizes, colors, and progressions. They wrote their own neural network to take these tests, which performed remarkably well. How well exactly? In the presence of nasty distractor objects, it was able to find the correct solution about 62% of the time, and if we remove these distractors, which I will note are good at misdirecting humans too, the AI was correct 78% of the time. Awesome. But today, we are capable of writing even more sophisticated learning algorithms that can even complete our sentences. Not so long ago, the OpenAI lab published GPT-2, a technique that they unleashed to read the internet and it learned our language by itself. A few episodes ago, we gave it a spin and I almost fell out of the chair when I saw that it could finish my sentences about fluid simulations in such a scholarly way that I think could easily fool a layperson. Have a look here and judge for yourself. This GPT-2 technique was a neural network variant that was trained using one and a half billion parameters. At the risk of oversimplifying what that means, it roughly refers to the internal complexity of the network, or in other words, how many weights and connections are there. And now, the Google Brain team has released Meena, an open-domain chatbot that uses 2.6 billion parameters and shows remarkable human-like properties. The chatbot part means a piece of software or a machine that we can talk to, and the open-domain part refers to the fact that we can try any topic, hotels, movies, the ocean, favorite movie characters or pretty much anything we can think of and expect the bot to do well. So how do we know that it's really good? Well, let's try to evaluate it in two different ways. First, let's try the super fun but less scientific way, or in other words, what we are already doing, looking at chat logs. You see Meena writing on the left and the human on the right, and it not only answers questions sensibly and coherently but is even capable of cracking a joke. Of course, if you consider a pun to be a joke, that is. You see a selection of topics here where the user talks with Meena about movies and expresses the desire to see The Grand Budapest Hotel, which is indeed a very human-like quality. It can also try to come up with a proper definition of philosophy. And now, since we are scholars, we would also like to measure how human-like this is in a more scientific manner as well. Now is a good time to hold onto your papers because this is measured by the sensibleness and specificity average score, from now on SSA in short, in which humans are here, previous chatbots are down there and Meena is right there close by, which means that it is easy to be confused for a real human. That already sounds like science fiction, however, let's be a little nosy here and also ask, how do we know if this SSA is any good at predicting what is human-like and what isn't? Excellent question.
When we measure human likeness for these chatbots and plot it against the SSA, again, the sensibleness and specificity average, we see that the two correlate really strongly, which means that they seem to measure very similar things, and in this case SSA can indeed be used as a proxy for human likeness. The coefficient of determination is 0.96. This is a several times stronger correlation than we can measure between the intelligence and the grades of a student, which is already a great correlation. This is a remarkable result. Now, what we get out of this is that the SSA is much easier and more precise to measure than human likeness and is hence used throughout the paper. So, chatbots, you say, what are all these things useful for? Well, do you remember Google's technique that would automatically use an AI to talk to your callers and screen your calls? Or even make calls on your behalf? When connected to a text-to-speech synthesizer, something that Google already does amazingly well, Meena could really come alive in our daily lives soon. What a time to be alive. This episode has been supported by Lambda. If you're a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they are offering GPU Cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold on to your papers because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
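For the scholarly minded, here is a rough sketch of how a score like the SSA described above can be computed from human ratings: raters label each chatbot response as sensible and as specific, and the SSA is the average of the two per-response rates. The exact rating protocol is in the Meena paper; the labels below are made up for illustration.

```python
def ssa(ratings):
    """Sensibleness and Specificity Average.
    `ratings` is a list of (sensible, specific) booleans, one per response."""
    sensibleness = sum(s for s, _ in ratings) / len(ratings)
    specificity = sum(p for _, p in ratings) / len(ratings)
    return 0.5 * (sensibleness + specificity)

# Hypothetical labels for five chatbot responses.
print(ssa([(True, True), (True, False), (True, True), (False, False), (True, True)]))
```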
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to play with a cellular automaton. You can imagine this automaton as a small game where we have a bunch of cells and a set of simple rules that describe when a cell should be full and when it should be empty. These rules typically depend on the state of the neighboring cells. For instance, perhaps the most well-known form of this cellular automaton is John Horton Conway's Game of Life, which simulates a tiny world where each cell represents a little life form. The rules, again, depend on the neighbors of this cell. If a cell has too many neighbors, it will die due to overpopulation. If too few, it will die due to underpopulation. And if it has just the right amount of neighbors, it will thrive and reproduce. So why is this so interesting? Well, this cellular automaton shows us that a small set of simple rules can give rise to remarkably complex life forms such as gliders, spaceships, and even John von Neumann's universal constructor or, in other words, self-replicating machines. I hope you think that's quite something, and in this paper today, we are going to take this concept further. Way further. This cellular automaton is programmed to evolve a single cell to grow into a prescribed kind of life form. Apart from that, there are many other key differences from other works, and we will highlight two of them today. One, the cell state is a little different because it can either be empty, growing, or mature, and even more importantly, two, the mathematical formulation of the problem is written in a way that is quite similar to how we train a deep neural network to accomplish something. This is absolutely amazing. Why is that? Well, because it gives rise to a highly useful feature, namely that we can teach it to grow these prescribed organisms. But wait, over time, some of them seem to decay, some of them can stop growing, and some of them will be responsible for your nightmares, so from this point on, proceed with care. In the next experiment, the authors describe an additional step with which it can recover from these undesirable states. And now, hold on to your papers because this leads to one of the major points of this paper. If it can recover from undesirable states, can it perhaps regenerate when damaged? Well, here you will see all kinds of damage, and then this happens. Wow! The best part is that this thing wasn't even trained to be able to perform this kind of regeneration. The objective for training was that it should be able to perform its task of growing and maintaining shape, and it turns out some sort of regeneration is included in that. It can also handle rotations as well, which will give rise to a lot of fun, and as noted a moment ago, some nightmarish experiments. And note that this is a paper in the Distill journal, which not only means that it is excellent, but also interactive, so you can run many of these experiments yourself right in your browser. If Alexander Mordvintsev, the name of the first author, rings a bell, it is because he worked on Google's DeepDream approximately five years ago. How far we have come since then. My goodness! Loving these crazy, non-traditional research papers, and I'm looking forward to seeing more of these. This episode has been supported by Weights & Biases. Here, they show you how you can visualize the training process for your boosted trees with XGBoost using their tool. If you have a closer look, you'll see that all you need is one line of code.
Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you're an academic or have an open-source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
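Since the Game of Life rules above are so compact, here is a minimal sketch of one update step, just to make the "simple rules, complex behavior" point concrete. This is plain Conway's Game of Life with wrap-around edges, not the paper's neural cellular automaton.

```python
import numpy as np

def life_step(grid):
    """One update of Conway's Game of Life on a 2D array of 0s and 1s."""
    # Count the eight neighbors of every cell, with wrap-around at the borders.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbors; a dead cell is born with exactly 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(grid.dtype)

# A glider: a tiny pattern that travels across the grid as the rules are applied.
glider = np.zeros((8, 8), dtype=int)
glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
print(life_step(glider))
```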
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. In the last few years, we have seen a bunch of new AI-based techniques that were specialized in generating new and novel images. This is mainly done through learning-based techniques, typically a generative adversarial network, GAN in short, which is an architecture where a generator neural network creates new images and passes them to a discriminator network which learns to distinguish real photos from these fake, generated images. The two networks learn and improve together and generate better and better images over time. What you see here is a set of results created with a technique by the name CycleGAN. This could even translate daytime into nighttime images, re-imagine a picture of a horse as if it were a zebra, and more. We can also use it for style transfer, a problem where we have two input images, one for content and one for style, and as you see here, the output would be a nice mixture of the two. However, if we use CycleGAN for this kind of style transfer, we'll get something like this. The goal was to learn the style of a select set of famous illustrators of children's books by providing an input image with their work. So, what do you think about the results? The style is indeed completely different from the source, but the algorithm seems a little too heavy-handed and did not leave the content itself intact. Let's have a look at another result with a previous technique. Maybe this will do better. This is DualGAN, which refers to a paper by the name Unsupervised Dual Learning for Image-to-Image Translation. This uses two GANs to perform image translation, where one GAN learns to translate, for instance, day to night, while the other learns the opposite, night-to-day translation. This, among other advantages, makes things very efficient, but as you see here, in these cases, it preserves the content of the image, but perhaps a little too much, because the style itself does not appear too prominently in the output images. So, CycleGAN is good at transferring style, but a little less so at preserving content, and DualGAN is good at preserving the content, but sometimes adds too little of the style to the image. And now, hold on to your papers because this new technique by the name GANILLA offers us these results. The content is intact, checkmark, and the style goes through really well, checkmark. It preserves the content and transfers the style at the same time. Excellent! One of the many key reasons as to why this happens is the usage of skip connections, which help preserve the content information as we travel deeper into the neural network. So finally, let's put our money where our mouth is and take a bunch of illustrators, marvel at their unique style, and then apply it to photographs and see how the algorithm stacks up against other previous works. Wow! I love these beautiful results! These comparisons really show how good the GANILLA technique is at preserving content. And note that these are distinct artistic styles that are really difficult to reproduce even for humans. It is truly amazing that we can perform such a thing algorithmically. Don't forget that the first style transfer paper appeared approximately 3 to 3.5 years ago, and now we have come a long, long way. The pace of progress in machine learning research is truly stunning.
While we are looking at some more amazing results, this time around only from GANILLA, I will note that the authors also made a user study with 48 people who favored this technique against previous ones. And perhaps leaving the best for last, it can even draw in the style of Hayao Miyazaki. I bet there are a bunch of Miyazaki fans watching, so let me know in the comments what you think about these results. What a time to be alive! This episode has been supported by Weights & Biases. In this post, they show you how to easily iterate on models by visualizing and comparing experiments in real time. Also, Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
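To illustrate the skip connection idea mentioned above, here is a minimal residual block sketch. It is a generic building block in that spirit, not GANILLA's exact architecture: the input is added back after the convolutions, so content information has a direct path through the network.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A residual (skip-connected) block: the transformed features are added to
    the input, so low-level content can pass through unchanged."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # the skip connection

block = ResidualBlock(32)
print(block(torch.randn(1, 32, 16, 16)).shape)  # torch.Size([1, 32, 16, 16])
```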
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. With the power of modern computer graphics and machine learning techniques, we are now able to teach virtual humanoids to walk, sit, manipulate objects, and we can even make up new creature types and teach them new tricks, if we are patient enough, that is. But even with all this knowledge, we are not done yet. Are we? Should we just shut down all the research facilities because there is nothing else to do? Well, if you have spent any amount of time watching Two Minute Papers, you know that the answer is, of course not. There is still so much to do, I don't even know where to start. For instance, let's consider the case of deformable simulations. Not so long ago, we talked about Yuanming Hu's amazing paper, with which we can engage in the favorite pastime of a computer graphics researcher, which is, of course, destroying virtual objects in a spectacular manner. It can also create remarkably accurate jello simulations where we can even choose our physical parameters. Here you see how we can drop in blocks of different densities into the jello, and as a result, they sink in deeper and deeper. Amazing. However, note that this is not for real-time applications and computer games, because the execution time is not measured in frames per second, but in seconds per frame. If we are looking for somewhat coarse results, but in real time, we have covered a paper approximately 300 episodes ago which performed something that is called a reduced deformable simulation. Leave a comment if you were already a Fellow Scholar back then. The technique could be trained on a number of different representative cases, which, in computer graphics research, is often referred to as precomputation, which means that we have to do a ton of work before starting a task, but only once, and then all our subsequent simulations can be sped up. Kind of like a student studying before an exam, so when the exam itself happens, the student, in the ideal case, will know exactly what to do. Imagine trying to learn the whole subject during the exam. Note that the training in this technique is not the same kind of training we are used to seeing with neural networks, and its generalization capabilities were limited, meaning that if we strayed too far from the training examples, the algorithm did not work so reliably. And now, hold on to your papers because this new method runs on your graphics card, and hence can perform these deformable simulations at close to 40 frames per second. And in the following examples, in a moment, you will see something even better. A killer advantage of this method is that it is also scalable. This means that the resolution of the object geometry can be changed around. Here, the upper left is a coarse version of the object, while the lower right is the most refined version of it. Of course, the number of frames we can put out per second depends a great deal on the resolution of this geometry, and if you have a look, this looks very close to the one below it, but it is still more than 3 to 6 times faster than real time. Wow! And whenever we are dealing with collisions, lots of amazing details appear. Just look at this. Let's look at a little more formal measurement of the scalability of this method. Note that this is a log-log plot, since the number of tetrahedra used for the geometry and the execution time span many orders of magnitude.
In other words, we can see how it works from the coarsest piece of geometry to the most detailed models we can throw at it. If we look at something like this, we are hoping that the lines are not too steep, which is the case for both the memory and execution timings. So, finally, real-time deformable simulations, here we come. What a time to be alive. This episode has been supported by Weights & Biases. Here, they show you how to make it to the top of Kaggle leaderboards by using their tool to find the best model faster than everyone else. Also, Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open-source project, you can use their tools for free. It is really as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
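A small aside on reading a log-log scaling plot like the one described above: on such a plot, a power law shows up as a straight line whose slope is the scaling exponent, which is exactly why we hope the lines are not too steep. The timings below are made up for illustration.

```python
import numpy as np

# Hypothetical measurements: execution time (seconds) versus number of tetrahedra.
tets = np.array([1e3, 1e4, 1e5, 1e6])
times = np.array([0.002, 0.018, 0.21, 2.4])

# A power law t ~ n^k becomes a straight line on a log-log plot with slope k.
slope, _ = np.polyfit(np.log10(tets), np.log10(times), 1)
print(f"empirical scaling exponent: {slope:.2f}")  # about 1.0 means roughly linear scaling
```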
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. It's time for some fluid simulations again. Writing fluid simulations is one of the most fun things we can do within computer graphics, because we can create a virtual scene, add the laws of physics for fluid motion, and create photorealistic footage with an absolutely incredible amount of detail and realism. Note that we can do this ourselves, so much so that for this scene I ran the fluid and light simulation myself here at the Two Minute Papers studio and on consumer hardware. However, despite this amazing looking footage, we are not nearly done yet. There is still so much to explore. For instance, a big challenge these days is trying to simulate fluid-solid interactions. This means that the sand is allowed to have an effect on the fluid, but at the same time, as the fluid sloshes around, it also moves the sand particles within. This is what we refer to as two-way coupling. We also note that there are different kinds of two-way coupling, and only the more advanced ones can correctly simulate how real honey supports the dipper and there is barely any movement. This may be about the only place on the internet where we are super happy that nothing at all is happening. However, many of you astute Fellow Scholars immediately ask, okay, but what kind of honey are we talking about? We can buy tens if not hundreds of different kinds of honey at the market. If we don't know what kind of honey we are using, how do we know if this simulation is too viscous or not viscous enough? Great question. Just to make sure we don't get lost, viscosity means the amount of resistance against deformation, therefore as we go up, you can witness this kind of resistance increasing. And now, hold on to your papers, because this new technique comes from the same authors as the previous one with the honey dipper, and enables us to import real-world honey into our simulation. That sounds like science fiction. Importing real-world materials into a computer simulation, how is that even possible? Well, with this solution, all we need to do is point a consumer smartphone camera at the phenomenon and record it. The proposed technique does all the heavy lifting by first extracting the silhouette of the footage and then creating a simulation that tries to reproduce this behavior. The closer it is, the better. At first, of course, we don't know the exact parameters that would result in this, but now we have an objective we can work towards. The goal is to rerun this simulation with different parameter sets in a way that minimizes the difference between the simulation and reality. This is not just working by trial and error, but through a technique that we refer to as mathematical optimization. As you see, later the technique was able to successfully identify the appropriate viscosity parameter. And when evaluating these results, note that this work does not deal with how things look. For instance, whether the honey has the proper color or translucency is not the point here. What we are trying to reproduce is not how it looks, but how it moves. It works on a variety of different fluid types. I have slowed down some of these videos to make sure we can appreciate together how amazingly good these estimations are. And we are not even done yet. If we wish to, we can even set up a similar scene as the real-world one with our simulation as a proxy for the real honey or caramel flow.
After that, we can perform anything we want with this virtual piece of fluid, even including putting it into novel scenarios like this scene, which would otherwise be very difficult to control and quite wasteful, or even creating the perfect honey dipper experiment. Look at how perfect the symmetry is there down below. Yum! Normally, in a real-world environment, we cannot pour the honey and apply forces this accurately, but in a simulation, we can do anything we want. And now, we can also import the exact kinds of materials from my real-world repertoire. If you can buy it, you can simulate it. What a time to be alive! This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. Unlike entry-level hosting services, Linode gives you full back-end access to your server, which is a step up to powerful, fast, fully configurable cloud computing. Linode also has one-click apps that streamline your ability to deploy websites, personal VPNs, game servers, and more. If you need something as small as a personal online portfolio, Linode has your back, and if you need to manage tons of clients' websites and reliably serve them to millions of visitors, Linode can do that too. What's more, they offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing, and computer graphics projects. If only I had access to a tool like this while I was working on my last few papers. To receive $20 in credit on your new Linode account, visit linode.com slash papers, or click the link in the video description and give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
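Here is a minimal sketch of the parameter-fitting idea described above: pick a candidate viscosity, run the simulator, compare the result to what was extracted from the video, and let an optimizer drive that mismatch down. The "simulator" below is a toy stand-in so the sketch runs; a real pipeline would call the actual fluid solver and compare silhouettes frame by frame.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def silhouette_mismatch(viscosity, simulate, target):
    """Run the simulator with a candidate viscosity and measure how far its
    output is from the behavior observed in the recorded footage."""
    return np.mean((simulate(viscosity) - target) ** 2)

# Toy stand-in simulator and a synthetic "recording" made with the true viscosity.
true_viscosity = 4.2
simulate = lambda v: np.array([v, v ** 2])
target = simulate(true_viscosity)

result = minimize_scalar(
    lambda v: silhouette_mismatch(v, simulate, target),
    bounds=(0.1, 50.0), method="bounded",
)
print(result.x)  # recovers a value close to 4.2
```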
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we have an abundance of neural network-based image generation techniques. Every image that you see here and throughout this video is generated by one of these learning-based methods. This can offer high-fidelity synthesis, and not only that, but we can even exert artistic control over the outputs. We can truly do so much with this. And if you're wondering, there is a reason why we will be talking about an exact set of techniques, and you will see that in a moment. So the first one is a very capable technique by the name CycleGAN. This was great at image translation, or in other words, transforming apples into oranges, zebras into horses and more. It was called CycleGAN because it introduced a cycle consistency loss function. This means that if we convert a summer image to a winter image and then back to a summer image, we should get the same input image back. If our learning system obeys this principle, the output quality of the translation is going to be significantly better. Later, a technique by the name BigGAN appeared, which was able to create reasonably high-quality images, and not only that, but it also gave us a little artistic control over the outputs. After that, StyleGAN and even its second version appeared which, among many other crazy good features, opened up the possibility to lock in several aspects of these images. For instance, age, pose, some facial features and more. And then we could mix them with other images to our liking while retaining these locked-in aspects. And of course, deepfake creation provides fertile ground for research works, so much so that at this point, it seems to be a subfield of its own where the rate of progress is just stunning. Now that we can generate arbitrarily many beautiful images with these learning algorithms, they will inevitably appear in many corners of the internet, so an important new question arises: can we detect if an image was made by these methods? This new paper argues that the answer is a resounding yes. You see a bunch of synthetic images above and real images below here, and if you look carefully at the labels, you'll see many names that ring a bell to our scholarly minds. CycleGAN, BigGAN, StarGAN, nice. And now you know that this is exactly why we briefly went through what these techniques do at the start of the video. So all of these can be detected by this new method. And now hold on to your papers, because I kind of expected that, but what I didn't expect is that this detector was trained on only one of these techniques, and leaning on that knowledge, it was able to catch all the others. Now that's incredible. This means that there are foundational elements that bind together all of these techniques. Our seasoned Fellow Scholars know that this similarity is none other than the fact that they are all built on convolutional neural networks. They are vastly different, but they use very similar building blocks. Imagine the convolutional layers as Lego pieces and think of the techniques themselves as the objects that we build using them. We can build anything, but what binds these all together is that they are all but a collection of Lego pieces. So this detector was only trained on real images and synthetic ones created by the ProGAN technique, and you see with the blue bars that the detection ratio is quite close to perfect for a number of techniques, save for these two. The AP label means average precision.
If you look at the paper in the description, you will get a lot more insights as to how robust it is against compression artifacts, a little frequency analysis of the different synthesis techniques and more. Let's send a huge thank you to the authors of the paper, who also provide the source code and training data for this technique. For now, we can all breathe a sigh of relief that there are proper detection tools that we can train ourselves at home. In fact, you will see such an example in a second. What a time to be alive. Also, good news. We now have an unofficial Discord server where all of you Fellow Scholars are welcome to discuss ideas and learn together in a kind and respectful environment. Look, some connections and discussions are already being made. Thank you so much to our volunteering Fellow Scholars for making this happen. The link is available in the video description. It is completely free. And if you have joined, make sure to leave a short introduction. Meanwhile, what you see here is an instrumentation of this exact paper we have talked about, which was made by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It is really as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
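Since cycle consistency came up above, here is a minimal sketch of that loss in the CycleGAN spirit: translate to the other domain and back, and penalize how far the round trip lands from the original. The two "generators" below are identity placeholders just so the snippet runs; real training would use two convolutional generators plus adversarial losses.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(summer, winter, to_winter, to_summer):
    """Summer -> winter -> summer (and the reverse) should return the originals."""
    loss_summer = F.l1_loss(to_summer(to_winter(summer)), summer)
    loss_winter = F.l1_loss(to_winter(to_summer(winter)), winter)
    return loss_summer + loss_winter

summer = torch.rand(1, 3, 64, 64)
winter = torch.rand(1, 3, 64, 64)
# With identity "generators" the round trip is perfect, so the loss is zero.
print(cycle_consistency_loss(summer, winter, lambda x: x, lambda x: x))
```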
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. In computer graphics research, we spend most of our time dealing with images. An image is a bunch of pixels put onto a 2D plane, which is a tiny window into reality, but reality is inherently 3D. This is easy to understand for us, because if we look at a flat image, we see the geometric structures that it depicts. If we look at this image, we know that this is not a sticker, but a 3-dimensional fluid domain. If I froze an image and asked a human to imagine rotating around this fluid domain, that human would do a pretty good job at that. However, for a computer algorithm, it would be extremely difficult to extract the 3D structure out from this image. So can we use these shiny, new neural network-based learning algorithms to accomplish something like this? Well, have a look at this new technique that takes a 2D image as an input and tries to guess three things. The cool thing is that the geometry problem we talked about is just the first one. Beyond that, two, it also guesses what the lighting configuration is that leads to an appearance like this, and three, it also produces the texture map for the object as well. This would already be great, but wait, there's more. If we plug all this into a rendering program, we can also specify a camera position, and this position can be different from the one that was used to take this input image. So what does that mean exactly? Well, it means that maybe it can not only reconstruct the geometry, light, and texture of the object, but even put this all together and make a photo of it from a novel viewpoint. Wow! Let's have a look at an example. There's a lot going on in this image, so let me try to explain how to read it. This image is the input photo, and the white silhouette image is called a mask, which can either be given with the image or be approximated by already existing methods. This is the reconstructed image by this technique, and then this is a previous method from 2018 by the name Category-Specific Mesh Reconstruction, CMR in short. And now, hold on to your papers because in the second row, you see this technique creating images of this bird from different novel viewpoints. How cool is that? Absolutely amazing. Since we can render this bird from any viewpoint, we can even create a turntable video of it. And all this from just one input photo. Let's have a look at another example. Here you see how it puts together the final car rendering in the first column from the individual elements like geometry, texture, and lighting. The other comparisons in the paper reveal that this technique is indeed a huge step up from previous works. Now all this sounds great, but what is all this used for? What are some example applications of this 3D object from 2D image thing? Well, techniques like this can be a great deal of help in enhancing the depth perception capabilities of robots, and of course, whenever we would like to build a virtual world, creating a 3D version of something we only have a picture of can get extremely laborious. This could help a great deal with that too. For this application, we could quickly get a starting point with some texture information and get an artist to fill in the fine details. This might get addressed in a follow-up paper. And if you are worried about the slight discoloration around the beak area of this bird, do not despair. As we always say, two more papers down the line and this will likely be improved significantly. What a time to be alive.
This episode has been supported by Lambda. If you're a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they are offering GPU Cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold on to your papers, because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
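As a tiny illustration of how geometry, texture, and lighting combine into a final rendering, as described above, here is the simplest possible diffuse shading rule: the texture color scaled by how directly the surface normal faces the light. This is a generic textbook example, not the paper's renderer.

```python
import numpy as np

def lambertian_shade(albedo, normal, light_dir):
    """Diffuse (Lambertian) shading: albedo * max(0, cos of the angle between
    the surface normal and the light direction). Real renderers do much more."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * max(0.0, float(np.dot(n, l)))

# A reddish surface patch, facing up, lit from a 45-degree angle.
print(lambertian_shade(np.array([0.8, 0.2, 0.2]), np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])))
```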
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Whenever we look at these amazing research papers on physical simulations, it is always a joy seeing people discussing them in the comments section. However, one thing that caught my attention is that some people comment about how things look and not on how things move in these papers. Which is fair enough, and to this end, I will devote this episode to a few amazing techniques in light transport simulations. But first things first, when talking about physical simulations, we are talking about a technique that computes how things move. Then, we typically run a light simulation program that computes how things look. The two are completely independent, which means that it is possible that the physical behavior of the bread breaking here is correct, but the bread itself does not look perfectly realistic. The second part depends on the quality of the light simulation and the materials used there. We can create such an image by simulating the path of millions and millions of light rays. Initially, this image will look noisy, and as we add more and more rays, this image will slowly clean up over time. If we don't have a well-optimized program, this can take from hours to days to compute. We can speed up this process by carefully choosing where to shoot these rays, and this is a technique that is called importance sampling. But then, around 1993, an amazing paper appeared by the name Bidirectional Path Tracing that proposed that we don't just start building light paths from one direction, but from two instead. One from the camera, and one from the light source, and then we connect them. This significantly improved the efficiency of these light simulations, however, it opened up a new can of worms. There are many different ways of connecting these paths, which leads to mathematical difficulties. For instance, we have to specify the probability of a light path forming, but what do we do if there are multiple ways of producing this light path? There will be multiple probabilities. What do we do with all this stuff? To address this, Eric Veach described a magical algorithm in his thesis, and thus, multiple importance sampling was born. I can say without exaggeration that this is one of the most powerful techniques in all of photorealistic rendering research. What multiple importance sampling, or from now on, MIS in short, does is combine these multiple sampling techniques in a way that accentuates the strengths of each of them. For instance, you can see the image created by one sampling technique here, and the image from a different one here. Both of them are quite noisy, but if we combine them with MIS, we get this instead in the same amount of time. A much smoother, less noisy image. In many cases, this can truly bring down the computation times from several hours to several minutes. Absolute witchcraft. Later, even more advanced techniques appeared to accelerate the speed of these light simulation programs. For instance, it is now not only possible to compute light transport between points in space, but between a point and a beam instead. You see the evolution of an image using this photon beam-based technique. This way, we can get rid of the point-based noise and get a much, much more appealing rendering process. The lead author of this beam paper is Wojciech Jarosz, who, three years later, ended up being the head of the rendering group at the amazing Disney Research lab.
Around that time, he also hired me to work with him at Disney on a project I can't talk about, which was an incredible and life-changing experience, and I will be forever grateful for his kindness. By the way, he is now a professor at Dartmouth College and just keeps pumping out one killer paper after another. So as you might have guessed, if it is possible to compute light transport between two points, and between a point and a beam, later it became possible to do this between two beams. None of these are for the faint of heart, but it works really well. But there is a huge problem. These techniques work with different dimensionalities, or, in other words, they estimate the final result so differently that they cannot be combined with multiple importance sampling. That is indeed a problem, because all of these have completely different strengths and weaknesses. And now, hold on to your papers because we have finally arrived at the main paper of this episode. It bears the name UPBP, which stands for unifying points, beams, and paths, and it formulates multiple importance sampling between all of these different kinds of light transport simulations. Basically, what we can do with this is throw every advanced simulation program we can think of together, and out comes a super powerful version of them that combines all their strengths and nullifies nearly all of their weaknesses. It is absolutely unreal. Here, you see four completely different algorithms running, and as you can see, they are noisy and smooth at very different places. They are good at computing different kinds of light transport. And now, hold on to your papers because the final result with the UPBP technique is this. Wow! Light transport on steroids. While we look at some more results, I will note that in my opinion, this is one of the best papers ever written in light transport research. The crazy thing is that I hardly ever hear anybody talk about it. If any paper deserves a bit more attention, it is this one, so I hope this video will help with that. And I would like to dedicate this video to Jaroslav Křivánek, the first author of this absolutely amazing paper, who tragically passed away a few months ago. In my memories, I think of him as the true king of multiple importance sampling, and I hope that now you do too. Note that MIS is not limited to light transport algorithms. It is a general concept that can be used together with a mathematical technique called Monte Carlo integration, which is used pretty much everywhere, from finding out what an electromagnetic field looks like to financial modeling and much, much more. If you have anything to do with Monte Carlo integration, please read Eric Veach's thesis and this paper, and if you feel that it is a good fit, try to incorporate multiple importance sampling into your system. You'll be glad you did. Also, we have recorded my lectures of a master-level course on light transport simulations at the Technical University of Vienna. In this course, we write such a light simulation program from scratch, and it is available free of charge for everyone, no strings attached, so make sure to click the link in the video description to get started. Additionally, I have implemented a small, one-dimensional example of MIS; if you wish to pick it up and try it, that's also available in the video description. While talking about the Technical University of Vienna, we are hiring for a PhD and a postdoc position.
The call here about lighting simulation for architectural design is advised by my PhD advisor, Michael Wimmer, whom I highly recommend. Apply now if you feel qualified; the link is in the video description. Thanks for watching and for your generous support, and I'll see you next time.
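Since the episode above points to a small one-dimensional MIS example, here is a minimal sketch of that idea (not the code from the video description): two sampling techniques are combined with the balance heuristic to estimate the integral of f(x) = x^2 on [0, 1], which is 1/3.

```python
import random

def f(x):
    return x * x

def p_uniform(x):
    return 1.0       # uniform pdf on [0, 1]

def p_linear(x):
    return 2.0 * x   # pdf proportional to x on [0, 1]

def mis_estimate(n):
    """Combine two sampling techniques with the balance heuristic:
    w_i(x) = p_i(x) / (p_1(x) + p_2(x))."""
    total = 0.0
    for _ in range(n):
        # Technique 1: uniform samples.
        x = random.random()
        w = p_uniform(x) / (p_uniform(x) + p_linear(x))
        total += w * f(x) / p_uniform(x)
        # Technique 2: samples with pdf 2x (inverse CDF: square root of a uniform).
        x = random.random() ** 0.5
        w = p_linear(x) / (p_uniform(x) + p_linear(x))
        total += w * f(x) / p_linear(x)
    return total / n

print(mis_estimate(100_000))  # close to 1/3
```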
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. With today's camera and graphics technology, we can enjoy smooth and creamy videos on our devices that were created with 60 frames per second. I also make each of these videos using 60 frames per second, however, it almost always happens that I encounter paper videos at 24 to 30 frames per second, or FPS in short. In this case, I put them in my video editor that has a 60 FPS timeline, so half or even more of these frames will not provide any new information. As we try to slow down the videos for some nice slow-motion action, this ratio is even worse, creating an extremely choppy output video because we have huge gaps between these frames. So does this mean that there is nothing we can do and we have to put up with this choppy footage? No, not at all. Earlier, we discussed two potential techniques to remedy this issue. One was frame blending, which simply computes the average of two consecutive images and presents that as a solution. This helps a little for simpler cases, but this technique is unable to produce new information. Optical flow is a much more sophisticated method that is very capable, as it tries to predict the motion that takes place between these frames. This can kind of produce new information, and I use this in the video series on a regular basis, but the output footage also has to be carefully inspected for unwanted artifacts, which are a relatively common occurrence. Now, our seasoned Fellow Scholars will immediately note that we have a lot of high frame rate videos on the internet. Why not delete some of the in-between frames, give the choppy and the smooth videos to a neural network and teach it to fill in the gaps? After the lengthy training process, it should be able to complete these choppy videos properly. So, is that true? Yes, but note that there are plenty of techniques out there that already do this, so what is new in this paper? Well, this work does that and much more. We will have a look at the results, which are absolutely incredible, but to be able to appreciate what is going on, let me quickly show you this. The design of this neural network tries to produce four different kinds of data to fill in these images. One is optical flow, which is part of previous solutions too, but two, it also produces a depth map that tells us how far different parts of the image are from the camera. This is of utmost importance because if we rotate this camera around, previously occluded objects suddenly become visible, and we need proper intelligence to be able to recognize this and to fill in this kind of missing information. This is what the contextual extraction step is for, which drastically improves the quality of the reconstruction, and finally, the interpolation kernels are also learned, which gives it more knowledge as to what data to take from the previous and the next frame. Since it also has a contextual understanding of these images, one would think that it needs a ton of neighboring frames to understand what is going on, which, surprisingly, is not the case at all. All it needs is just the two neighboring images. So, after doing all this work, it had better be worth it, right? Let's have a look at some results. Hold on to your papers, and in the meantime, look at how smooth and creamy the outputs are. Love it.
Because it also deals with contextual information, if you wish to feel like a real scholar, you can gaze at regions where the occlusion situation changes rapidly and see how well it fills in this kind of information. Unreal. So, how does one show that the technique is quite robust? Well, by producing and showing it off on tons and tons of footage, and that is exactly what the authors did. I put a link to a huge playlist with 33 different videos in the description, so you can have a look at how well this works on a wide variety of genres. Now, of course, this is not the first technique for learning-based frame interpolation, so let's see how it stacks up against the competition. Wow. This is quite a value proposition, because depending on the dataset, it comes in first or second place on most examples. The PSNR is the peak signal-to-noise ratio, while the SSIM is the structural similarity metric, both of which measure how well the algorithm reconstructs these details compared to the ground truth, and both are subject to maximization. Note that neither of them is linear, therefore even a small difference in these numbers can mean a significant difference. I think we are now at a point where these tools are getting so much better than their handcrafted optical flow rivals that I think they will quickly find their way into production software. I cannot wait. What a time to be alive. This episode has been supported by Weights & Biases. In this post, they show you which hyperparameters to tweak to improve your model performance. Also, Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more. They don't lock you in, and if you are an academic or have an open-source project, you can use their tools for free. It is really as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
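For reference, here is how the PSNR mentioned above is typically computed; note the logarithm, which is why a few decibels of difference matters more than it looks. SSIM is more involved, so only PSNR is sketched here.

```python
import numpy as np

def psnr(reference, estimate, max_value=1.0):
    """Peak signal-to-noise ratio in decibels; higher means a closer match."""
    mse = np.mean((reference - estimate) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

reference = np.random.rand(32, 32)
print(psnr(reference, reference + 0.01))  # a uniform 0.01 error gives 40 dB
```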
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In computer graphics, when we are talking about portrait relighting, we mean a technique that is able to look at an image and change the lighting, and maybe even the materials or geometry, after this image has been taken. This is a very challenging endeavor. So, can neural networks put a dent into this problem and give us something new and better? You bet. The examples that you see here are done with this new work that uses a learning-based technique and is able to change the lighting for human portraits, and it only requires one input image. You see, normally, using methods in computer graphics to relight these images would require trying to find out what the geometry of the face, the materials and the lighting are from the image, and then we can change the lighting or other parameters, run a light simulation program and hope that the estimations are good enough to make it realistic. However, if we wish to use neural networks to learn the concept of portrait relighting, of course, we need quite a bit of training data. Since this is not trivially available, the paper contains a new dataset with over 25,000 portrait images that are relit in five different ways. It also proposes a neural network structure that can learn this relighting operation efficiently. It is shaped a bit like an hourglass and contains an encoder and a decoder part. The encoder part takes an image as an input and estimates what lighting could have been used to produce it, while the decoder part is where we can play around with changing the lighting, and it will generate the appropriate image that this kind of lighting would produce. What you see here are skip connections that are useful to save insights from different abstraction levels and transfer them from the encoder to the decoder network. So what does this mean exactly? Intuitively, it is a bit like using the lighting estimator network to teach the image generator what it has learned. So do we really lose a lot if we skip the skip connections? Well, quite a bit. Have a look here. The image on the left shows the result using all skip connections, while as we traverse to the right, we see the results omitting them. These connections indeed make a profound difference. Let's be thankful to the authors of the paper, as putting together such a dataset and trying to get an understanding as to what network architecture it would require to get great results like this takes quite a bit of work. I'd like to make a note about modeling subsurface light transport. This is a piece of footage from our earlier paper that we wrote as a collaboration with the Activision Blizzard company, and you can see here that including this indeed makes a profound difference in the looks of a human face. I cannot wait to see some follow-up papers that take more advanced effects like this into consideration for relighting as well. If you wish to find out more about this work, make sure to click the link in the video description. This episode has been supported by Weights & Biases. Here you see a write-up of theirs where they explain how to visualize gradients running through your models, and illustrate it through the example of predicting protein structure. They also have a live example that you can try. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley.
Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
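To make the hourglass-with-skip-connections idea above a little more tangible, here is a toy encoder-decoder with a single skip connection. It is a generic sketch in that spirit, not the authors' architecture; the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class TinyHourglass(nn.Module):
    """A toy encoder-decoder: the encoder compresses the image, the decoder
    rebuilds it, and a skip connection lets fine detail bypass the bottleneck."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(16, 16, 4, stride=2, padding=1), nn.ReLU())
        self.out = nn.Conv2d(16 + 3, 3, 3, padding=1)

    def forward(self, image):
        features = self.enc(image)       # encoder: compact representation
        upsampled = self.dec(features)   # decoder: generate the output image
        # Skip connection: concatenate the input so fine detail is not lost.
        return self.out(torch.cat([upsampled, image], dim=1))

net = TinyHourglass()
print(net(torch.rand(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])
```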
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Finally, the full research paper has appeared on OpenAI Five, which is an AI that plays Dota 2, a multiplayer online battle arena game with a huge cult following. And, you may not expect this, it is not only as good as some of the best players in the world, but it also describes a surgery technique that sounds quite unexpected, and I promise to tell you what it is later during this video. This game is a nightmare for any AI to play because of three main reasons. One, it requires long-term strategic planning, where it is possible that we make one bad decision, then a thousand good ones, and we still lose the game in the end. Finding out which decision led to this loss is immensely difficult, often even for humans. Two, we have imperfect information, meaning that we can only see what our units and buildings can see. And three, even though these learning agents don't look at the pixels of the game, but see the world as a big bunch of numbers, there is just too much information to look at and too many decisions to make compared to chess or Go, or almost anything else. Despite these difficulties, in 2017, OpenAI showed us an initial version of their agent that was able to play one-versus-one games with only one hero and was able to reliably beat Dendi, a world champion player. That was quite an achievement, however, of course, this was meant to be a stepping stone towards something much bigger, that is, playing the real Dota 2. And just two years later, a newer version named OpenAI Five appeared, defeated the Dota 2 world champions and beat 99.4% of human players during an online event that ran for multiple days. Many voices said that this would never happen, so two years to pull this off after the first version was, I think, an absolute miracle. Bravo! Now, note that even this version has two key limitations. One, in a normal game, we can choose from a pool of 117 heroes, where this system supports 17 of them, and two, items that allow the player to control multiple characters at once have been disabled. If I remember correctly from a previous post of theirs, invisibility effects are also neglected, because the algorithm is not looking at pixels; it would either always have this information shown as a bunch of numbers or never. Neither of these would be good design decisions, and thus invisibility is not part of this technique. Fortunately, the paper is now available, so I was really excited to look under the hood for some more details. So first, as I promised, what is this surgery thing about? You see, the training of the neural network part of this algorithm took no less than 10 months. Now, just imagine forgetting to feed an important piece of information into the system or finding a bug while training is underway. In cases like this, normally we would have to abort the training and start again. If we have a new idea as to how to improve the system, again we have to abort the training and start again. If a new version of Dota 2 comes out with some changes, you guessed it, we start again. This would be okay if the training took on the order of minutes to hours, but we are talking 10 months here. This is clearly not practical. So, what if there were a technique that would be able to apply all of these changes to a training process that is already underway? Well, this is what the surgery technique is about.
Here, with the blue curve, you see the agent's skill rating improving over time, and the red lines with the black triangles show us the dates for the surgeries. The authors note that over the 10-month training process, they performed approximately one surgery per two weeks. It seems that getting a doctorate in machine learning research is getting a whole new meaning. Some of them indeed made an immediate difference, while others seemingly not so much. So how do we assess how potent these surgeries were? Did they give the agent superpowers? Well, have a look at the rerun part here, which is the final Frankenstein's monster of an agent containing the result of all the surgeries, retrained from scratch. And just look at how quickly it trains, and not only that, but it shoots even higher than the original agent. Absolute madness. Apparently, OpenAI is employing some proper surgeons over there at their lab. I love it. Interestingly, this is not the only time I've seen the word surgery used in the computer sciences outside of medicine. A legendary mathematician named Grigori Perelman, who proved the Poincaré conjecture, also performed a mathematical technique that he called surgery. What's more, we even talked about simulating weightlifting and how a simulated AI agent will walk after getting hamstrung and, you guessed it right, undergoing surgery to fix it. What a time to be alive. And again, an important lesson is that in this project, OpenAI is not spending so much money and resources just to play video games. Dota 2 is a wonderful test bed to see how their AI compares to humans at complex tasks that involve strategy and teamwork. However, the ultimate goal is to reuse parts of this system for other complex problems outside of video games. For instance, the algorithm that you've seen here today can also do this. This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. Unlike entry-level hosting services, Linode gives you full back-end access to your server, which is your step up to powerful, fast, fully configurable cloud computing. Linode also has one-click apps that streamline your ability to deploy websites, personal VPNs, game servers, and more. If you need something as small as a personal online portfolio, Linode has your back, and if you need to manage tons of clients' websites and reliably serve them to millions of visitors, Linode can do that too. What's more, they offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing, and computer graphics projects. If only I had access to a tool like this while I was working on my PhD studies. To receive $20 credit in your new Linode account, visit linode.com slash papers, or just click the link in the video description and give it a try today. Thanks for your support and I'll see you next time.
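As a side note, here is a tiny sketch of the general idea behind such surgery: growing a network mid-training so that it accepts new inputs without forgetting what it has already learned. This is only a toy illustration in PyTorch under my own assumptions, not OpenAI's actual implementation; the layer sizes and the zero-initialization trick are just one common way to keep the old behavior intact.

```python
# Toy "model surgery": widen a layer's input so a game patch can add new
# observation features while the network keeps behaving exactly as before.
import torch
import torch.nn as nn

def widen_input_layer(old_layer: nn.Linear, new_in_features: int) -> nn.Linear:
    """Return a larger layer that behaves identically on the old inputs.

    Weights for the newly added input features start at zero, so the
    network's outputs are unchanged until training updates them.
    """
    assert new_in_features >= old_layer.in_features
    new_layer = nn.Linear(new_in_features, old_layer.out_features)
    with torch.no_grad():
        new_layer.weight.zero_()
        new_layer.weight[:, :old_layer.in_features] = old_layer.weight
        new_layer.bias.copy_(old_layer.bias)
    return new_layer

# Hypothetical example: a game patch adds 8 new observation features.
old = nn.Linear(512, 256)          # stand-in for a policy input layer
new = widen_input_layer(old, 520)  # surgery: 8 extra inputs, same behavior

x_old = torch.randn(1, 512)
x_new = torch.cat([x_old, torch.zeros(1, 8)], dim=1)
print(torch.allclose(old(x_old), new(x_new)))  # True: training can continue
```

The key property is that right after the operation, the grown network computes exactly the same outputs as before, so a months-long training run can simply continue instead of starting from scratch.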
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we often discuss a class of techniques by the name image inpainting. Image inpainting methods are capable of filling in missing details from a mostly intact image. You see the legendary PatchMatch algorithm at work here, which is more than 10 years old; it is a good old computer graphics method with no machine learning in sight, and after so much time, and 10 years is an eternity in research years, it still punches way above its weight. However, with the ascendancy of neural network based learning methods, I am often wondering whether it would be possible to take on a more difficult problem, for instance, inpainting not just images, but movies as well. For instance, let's take an old, old black and white movie that suffers from missing data, flickering, blurriness, and interestingly, even the contrast of the footage has changed as it faded over time. Well, hold onto your papers, because this learning-based approach fixes all of these and even more. Step number one is restoration, which takes care of all of these artifacts and contrast issues. You can not only see how much better the restored version is, but the technique also reports exactly what it did. However, it does more. What more could possibly be asked for? Well, colorization. What it does is look at only six colorized reference images that we have to provide, use them as a guide, and propagate the colors to the remainder of the frames, and it does an absolutely amazing job at that. It even tells us which reference image it is looking at when colorizing some of these frames, so if something does not come out favorably, we know which image to recolor. The architecture of the neural network that is used for all this also has to follow the requirements appropriately. For instance, beyond the standard spatial convolution layers, it also makes ample use of temporal convolution layers, which help smear out the colorization information from one reference image to multiple frames. However, in research, a technique is rarely the very first at doing something, and sure enough, this is not the first technique that does this kind of restoration and colorization. So, how does it compare to previously published methods? Well, quite favorably. With previous methods, in some cases, the colorization just appears and disappears over time, while it is much more stable here. Also, fewer artifacts make it to the final footage, and since cleaning these up is one of the main objectives of these methods, that's also great news. If we look at some quantitative results, or in other words, numbers that describe the difference, you can see here that we get a 3 to 4 decibel cleaner image, which is outstanding. Note that the decibel scale is not linear but logarithmic, therefore if you read 28 instead of 24, it does not mean that it is just approximately 15% better. It is a much, much more pronounced difference than that. I think these results are approaching a state where they are becoming close to good enough so that we can revive some of these old masterpiece movies and give them a much deserved facelift. What a time to be alive! This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley.
They also wrote a guide on the fundamentals of neural networks where they explain in simple terms how to train a neural network properly, what the most common errors are that you can make, and how to fix them. It is really great, you've got to have a look. So make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
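To make the decibel remark above a bit more tangible, here is a quick back-of-the-envelope computation. It uses the standard PSNR formula, not anything specific to this restoration paper; it only shows that a 4 decibel jump corresponds to roughly 60% less mean squared error, far more than 15%.

```python
# PSNR is logarithmic: going from 24 dB to 28 dB shrinks the mean squared
# error by far more than 15%. Generic PSNR math, not code from the paper.
def mse_from_psnr(psnr_db: float, max_value: float = 1.0) -> float:
    # PSNR = 10 * log10(MAX^2 / MSE)  =>  MSE = MAX^2 / 10^(PSNR / 10)
    return max_value ** 2 / (10.0 ** (psnr_db / 10.0))

mse_24 = mse_from_psnr(24.0)
mse_28 = mse_from_psnr(28.0)
print(f"MSE at 24 dB: {mse_24:.6f}")
print(f"MSE at 28 dB: {mse_28:.6f}")
print(f"Error reduced by {(1 - mse_28 / mse_24) * 100:.1f}%")  # ~60%, not 15%
```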
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In a recent video, we showcased a computer graphics technique that simulated the process of baking, and now it's time to discuss a paper that is about simulating how we can tear this loaf of bread apart. This paper aligns well with one of the favorite pastimes of a computer graphics researcher, which is, of course, destroying virtual objects in a spectacular fashion. Like the previous work, this new paper also builds on top of the material point method, a hybrid simulation technique that uses both particles and grids to create these beautiful animations. However, it traditionally does not support simulating cracking and tearing phenomena. Now, have a look at this new work and marvel at how beautifully this phenomenon is simulated here. With this, we can smash Oreos, candy crabs, pumpkins, and much, much more. This jelly fracture scene is my absolute favorite. Now, when an artist works with these simulations, the issue of artistic control often comes up. After all, this method is meant to compute these phenomena by simulating physics, and we can't just instruct physics to be more beautiful. Or can we? Well, this technique offers us plenty of parameters to tune the simulation to our liking; two that we will note today are alpha, the hardening parameter, and beta, the cohesion parameter. So what does that mean exactly? Well, beta is cohesion, which is the force that holds matter together. So as we go to the right, the objects stay more intact, and as we go down, the objects shatter into more and more pieces. The method offers us more parameters than these, but even with these two, we can really make the kind of simulation we are looking for. Huh, what the heck? Let's do two more. We can even control the way the cracks form with the MC parameter, which is the speed of crack propagation. And G is the energy release, which, as we look to the right, increases the object's resistance to damage. So how long does this take? Well, the technique takes its sweet time. The execution timings range from 17 seconds to about 10 minutes per frame. This is one of those methods that does something that wasn't possible before, and it is about doing things correctly. And after a paper appears on something that makes the impossible possible, follow-up research works get published later that further refine and optimize it. So, as we say, two more papers down the line, and this will run much faster. Now, a word about the first author of the paper, Joshuah Wolper. Strictly speaking, it is his third paper, but only the second within computer graphics, and my goodness, did he come back with guns blazing. This paper was accepted to the SIGGRAPH conference, which is one of the biggest honors a computer graphics researcher can get, perhaps equivalent to the Olympic gold medal for an athlete. It definitely is worthy of a gold medal. Make sure to have a look at the paper in the video description. It is an absolutely beautifully crafted piece of work. Congratulations, Joshuah. This episode has been supported by Lambda. If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos, and I'm happy to tell you that they are offering GPU Cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser.
And finally, hold onto your papers, because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Our thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Neural network-based learning algorithms are all the rage these days, and even though it is common knowledge that they are capable of image classification, or, in other words, looking at an image and saying whether it depicts a dog or a cat, nowadays, they can do much, much more. In this series, we covered a stunning paper that showcased a system that could not only classify an image, but write a proper sentence on what is going on, and could cover even highly non-trivial cases. You may be surprised, but this thing is not recent at all. This is four-year-old news. Insanity. Later, researchers turned this whole problem around and performed something that was previously thought to be impossible. They started using these networks to generate photorealistic images from a written text description. We could create new bird species by specifying that it should have orange legs and a short yellow bill. Later, researchers at NVIDIA recognized and addressed two shortcomings. One was that the images were not that detailed, and two, even though we could input text, we couldn't exert too much artistic control over the results. In came StyleGAN to the rescue, which was able to perform both of these difficult tasks really well. These images were progressively grown, which means that we started out with a coarse image and went over it, over and over again, adding new details. This is what the results look like, and we can marvel at the fact that none of these people are real. However, some of these images were still contaminated by unwanted artifacts. Furthermore, there are some features that are highly localized as we exert control over these images. You can see how this part of the teeth and eyes are pinned to a particular position, and the algorithm just refuses to let it go, sometimes to the detriment of its surroundings. This new work is titled StyleGAN2, and it addresses all of these problems in one go. Perhaps this is the only place on the internet where we can say that finally, teeth and eyes are now allowed to float around freely, and mean it with a positive sentiment. Here you see a few hand-picked examples from the best ones, and I have to say these are eye-poppingly detailed and correct-looking images. My goodness! The mixing examples you see here are also outstanding, way better than the previous version. Also, note that as there are plenty of training images out there for many other things beyond human faces, it can also generate cars, churches, horses, and of course, cats. Now that the original StyleGAN work has been out for a while, we have a little more clarity and understanding as to how it does what it does, and the redundant parts of the architecture have been revised and simplified. This clarity comes with additional advantages beyond faster and higher quality training and image generation. For instance, interestingly, despite the fact that the quality has improved significantly, images made with the new method can be detected more easily. Note that the paper does much, much more than this, so make sure to have a look in the video description. In this series, we always say that two more papers down the line and this technique will be leaps and bounds beyond the first iteration. Well, here we are, not two, but only one more paper down the line. What a time to be alive! The source code of this project is also available. What's more, it even runs in your browser.
This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley. Here you see a beautiful final report on one of their projects on classifying parts of street images, and see how these learning algorithms evolve over time. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. A few episodes ago, we discussed a new research work that performs something that they call differentiable rendering. The problem formulation is the following. We specify a target image that is either rendered by a computer program, or even better, a photo. The input is a pitiful approximation of it, and by progressively changing the input materials, textures and even the geometry of this input in a 3D modeling system, the technique is able to match this photo. At the end of the video, I noted that I am really looking forward to more differentiable rendering and differentiable everything papers. So fortunately, here we go. This new paper introduces differentiable programming for physical simulations. So what does that mean exactly? Let's look at a few examples and find out together. Imagine that we have this billiard game where we would like to hit the white ball with just the right amount of force and from the right direction such that the blue ball ends up close to the black spot. Let's try it. Well, this example shows that this doesn't happen by chance, and we have to engage in a fair amount of trial and error to make this happen. What this differentiable programming system does for us is that we can specify an end state, which is the blue ball on the black dot, and it is able to compute the required forces and angles to make this happen. Very close. But the key point here is that this system is general and therefore can be applied to many, many more problems. We'll have a look at a few that are much more challenging than this example. For instance, it can also teach this gooey object to actuate itself in a way so that it starts to work properly within only two minutes. The 3D version of this simulation learned so robustly that it can even withstand a few extra particles in the way. The next example is going to be obscenely powerful. I will try to explain what this is to make sure that we can properly appreciate it. Many years ago, I was trying to solve a problem called fluid control, where we would try to coerce a smoke plume or a piece of fluid to take a given shape, like a bunny or a logo with letters. You can see some footage of this project here. The key difficulty of this problem is that this is not what typically happens in reality. Of course, a glass of spilled water is very unlikely to suddenly take the shape of a human face, so we have to introduce changes to the simulation itself, but at the same time, it still has to look as if it could happen in nature. If you wish to know more about my work here, the full thesis and the source code are available in the video description, and one of my kind students has even implemented it in Blender. So this problem is obscenely difficult. And you can now guess what's next for this differentiable technique: fluid control. It starts out with a piece of simulated ink with a checkerboard pattern, and it exerts just the appropriate forces so that it forms exactly the yin-yang symbol shortly after. I am shocked by how such a general system can perform something of this complexity. Having worked on this problem for a while, I can tell you that this is immensely difficult. Amazing. And hold on to your papers, because it can do even more. In this example, it adds carefully crafted ripples to the water to make sure that it ends up in a state that distorts the image of the squirrel in a way that a powerful and well-known neural network sees it not as a squirrel, but as a goldfish.
This thing is basically a victory lap in the paper. It is so powerful, it's not even funny. You can just make up some problems that sound completely impossible and it rips right through them. The full source code of this work is also available. By the way, the first author of this paper is Yuanming Hu. His work was showcased several times in this series. We talked about his amazing jello simulation that was implemented in so few lines of code that it almost fits on a business card. I said it in a previous episode and I will say it again: I can't wait to see more and more papers on differentiable rendering and simulations. And as this work leaves plenty of room for creativity for novel problem definitions, I'd love to hear what you think about it. What else could this be used for? In video games? As a faster alternative to other learning-based techniques? Anything else? Let me know in the comments below. What a time to be alive. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley. It is really easy to set up, so much so that they have made an instrumentation for this exact paper we have talked about in this episode. Have a look here. Make sure to visit them through wandb.com slash papers, that is w-a-n-d-b.com slash papers, or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
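For the curious Fellow Scholars, here is a minimal, hedged sketch of the core idea behind the billiard example: because the simulation is differentiable, we can backpropagate from how far the ball ended up from the target all the way to the initial velocity, and let gradient descent do the trial and error for us. This toy uses a simple point mass with drag and PyTorch's autograd; it is not the paper's own simulator, and all the constants are made up.

```python
# Differentiable toy "billiard": optimize the initial velocity so the ball
# lands on the target, by backpropagating through the physics rollout.
import torch

target = torch.tensor([3.0, 2.0])               # the black spot
velocity = torch.zeros(2, requires_grad=True)   # what we want to find
dt, drag, steps = 0.02, 0.5, 100

optimizer = torch.optim.Adam([velocity], lr=0.2)
for iteration in range(200):
    pos = torch.zeros(2)
    vel = velocity
    for _ in range(steps):                      # differentiable rollout
        vel = vel - drag * vel * dt             # simple drag force
        pos = pos + vel * dt
    loss = torch.sum((pos - target) ** 2)       # distance to the target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("initial velocity:", velocity.detach().numpy(), "final miss:", loss.item())
```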
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we often talk about computer animation and physical simulations, and these episodes are typically about one or the other. You see, it is possible to teach a simulated AI agent to lift weights and jump really high using physical simulations to make sure that the movements and forces are accurate. The simulation side is always looking for correctness. However, let's not forget that things also have to look good. Animation studios are paying a fortune to record motion capture data from real humans and sometimes even dogs to make sure that these movements are visually appealing. So is it possible to create something that reacts to our commands with the controller, looks good, and also adheres to physics? Well, have a look. This work was developed at Ubisoft La Forge. It responds to our input via the controller, and the output animations are fluid and natural. Since it relies on a technique called deep reinforcement learning, it requires training. You see that early on, the blue agent is trying to imitate the white character and it is not doing well at all. It basically looks like me when going to bed after reading papers all night. The white agent's movement is not physically simulated and was built using a motion database with only 10 minutes of animation data. This is the one that is in the looks-good category. Or it would look really good if it wasn't pacing around like a drunkard, so the question naturally arises: who in their right mind would control a character like this? Well, of course, no one. This sequence was generated by an artificial worst-case player, which is a nightmare situation for an AI to reproduce. Early on, it indeed is a nightmare. However, after 30 hours of training, the blue agent learned to reproduce the motion of the white character while being physically simulated. So, what is the advantage of that? Well, for instance, it can interact with the scene better and is robust against perturbations. This means that it can rapidly recover from undesirable positions. This can be validated via something that the paper calls impact testing. Are you thinking what I am thinking? I hope so, because I am thinking about throwing blocks at this virtual agent, one of our favorite pastimes at Two Minute Papers, and it will be able to handle them. Whoops! Well, most of them anyway. It also reacts to a change in direction much quicker than previous agents. If all that was not amazing enough, the whole control system is very light and takes only a few microseconds, most of which is spent not even on the control part, but on the physics simulation. So, with the power of computer graphics and machine learning research, animation and physics can now be combined beautifully; it does not limit controller responsiveness, looks very realistic, and it is very likely that we'll see this technique in action in future Ubisoft games. Outstanding. This video was supported by you on Patreon. If you wish to watch these videos in early access or get your name immortalized in the video description, make sure to go to patreon.com slash two minute papers and pick up one of those cool perks, or we are also test driving the early access program here on YouTube. Just go ahead and click the join button or use the link in the description. Thanks for watching and for your generous support and I'll see you next time.
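As a rough illustration of the imitation part of this episode, here is one common way such a reward can be written: the physically simulated character is rewarded for how closely its joint positions match the kinematic reference at each time step. The exponential form below follows common practice in character animation research; it is my own simplified stand-in, and it is not Ubisoft's exact reward function.

```python
# A pose-matching imitation reward: 1 when the simulated (blue) character
# matches the reference (white) pose, approaching 0 as it drifts away.
import numpy as np

def pose_imitation_reward(sim_joint_pos: np.ndarray,
                          ref_joint_pos: np.ndarray,
                          scale: float = 2.0) -> float:
    """Reward in (0, 1]: larger when the simulated pose matches the reference."""
    squared_error = np.sum((sim_joint_pos - ref_joint_pos) ** 2)
    return float(np.exp(-scale * squared_error))

# Hypothetical 15-joint characters with 3D joint positions.
reference = np.random.rand(15, 3)
perfect   = reference.copy()
flailing  = reference + 0.5 * np.random.randn(15, 3)

print(pose_imitation_reward(perfect, reference))   # 1.0
print(pose_imitation_reward(flailing, reference))  # close to 0
```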
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. If we study the laws of fluid motion from physics and write a computer program that contains these laws, we can create beautiful simulations like the one you see here. The amount of detail we can simulate with these programs is increasing every year, not only due to the fact that hardware improves over time, but also because the pace of progress in computer graphics research is truly remarkable. However, when talking about fluid simulations, we often see a paper produce a piece of geometry that evolves over time, and of course, the more detailed this geometry is, the better. However, look at this. It is detailed, but something is really missing here. Do you see it? Well, let's look at the revised version of this simulation to find out what it is. Yes, foam, spray, and bubble particles are now present, and the quality of the simulation just got elevated to the next level. Also, if you look at the source text, you see that this is a paper from 2012, and it describes how to add these effects to a fluid simulation. So, why are we talking about a paper that's about 8 years old? Not only that, but this work was not published at one of the most prestigious journals. Not even close. So, why? Well, you'll find out in a moment, but I have to tell you that I just got to know about this paper a few days ago, and it is so good it has single-handedly changed the way I think about research. Note that a variant of this paper has been implemented in a Blender plugin called FLIP Fluids. Blender is a free and open source modeler program, which is a complete powerhouse. I love it. And this plugin embeds this work into a modern framework, and boy, does it come to life in there. I have rerun one of their simulations and rendered a high resolution animation with light transport. The fluid simulation took about 8 hours, and as always, I went a little overboard with the light transport, which took about 40 hours. Have a look. It is unreal how good it looks. My goodness. It is one of the miracles of the world that we can put a piece of silicon in our machines and, through the power of science, explain fluid dynamics to it so well that such a simulation can come out of it. I have been working on these for many years now, and I am still shocked by the level of progress in computer graphics research. Let's talk about three important aspects of this work. First, it proposes one unified technique to add foam, spray, and bubbles in one go to the fluid simulation. One technique to model all three. In the paper, they are collectively called diffuse particles, and if these particles are deeply underwater, they will be classified as bubbles. If they are on the surface of the water, they will be foam particles, and if they are further above the surface, we will call them spray particles. With one method, we get all three of those. Lovely. Two, when I showed you this footage with and without the diffuse particles, normally I would need to resimulate the whole fluid domain to add these advanced effects, but this is not the case at all. These particles can be added as a post-processing step, which means that I was able to just run the simulation once, and then decide whether to use them or not. Just one click, and here it is, with the particles removed. Absolutely amazing. And three, perhaps the most important part, this technique is so simple I could hardly believe the paper when I saw it.
You see, normally, to be able to simulate the formation of bubbles or foam, we would need to compute the Weber numbers, which requires expensive surface tension computations, and more. Instead, the paper forgoes that, and goes with the notion that bubbles and foam appear at regions where air gets trapped within the fluid. On the back of this knowledge, they note that wave crests are an example of that, and propose a method to find these wave crests by looking for regions where the curvature of the fluid geometry is high and locally convex. Both of these can be found through very simple expressions. Finally, air is also trapped when fluid particles move rapidly towards each other, which is also super simple to compute and evaluate. The whole thing can be implemented in a day, and it leads to absolutely killer fluid animations. You see, I have a great deal of admiration for a 20-page long technique that models something very difficult perfectly, but I have at least as much admiration for an almost trivially simple method that gets us to 80% of the perfect solution. This paper is the latter. I love it. This really changed my thinking not only about fluid simulation papers; this paper is so good, it challenged how I think about research in general. It is an honor to be able to talk about beautiful works like this to you, so thank you so much for coming and listening to these videos. Note that the paper does more than what we've talked about here; it also proposes a method to compute the lifetime of these particles, tells us how they get advected by the water, and more. Make sure to check out the paper in the description for more on that. If you're interested, go and try Blender. That tool is completely free for everyone to use. I have been using it for around a decade now, and it is truly incredible that something like this exists as a community effort. The FLIP Fluids plugin is a paid addition. If one pays for it, it can be used immediately, or if you spend a little time, you can compile it yourself, and this way you can get it for free. Respect to the plugin authors for making such a gentle business model. If you don't want to do any of those, even Blender has a usable built-in fluid simulator. You can do incredible things with it, but it cannot produce diffuse particles. I am still stunned by how simple and powerful this technique is. The lesson here is that you can really find gems anywhere, not just around the most prestigious research venues. I hope you got inspired by this, and if you wish to understand how these fluids work some more, or write your own simulator, I put a link to my master's thesis where I try to explain the whole thing as intuitively as possible, and it also comes with full source code, free of charge, for a simulator that runs on your graphics card. If you feel so voracious that even that's not enough, I will also highly recommend Doyub Kim's book on fluid engine development. That one also comes with free source code. This episode has been supported by Weights & Biases. Here you see their beautiful final report on a point cloud classification project of theirs, and see how using different learning rates and other parameters influences the final results. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects, and is being used by OpenAI, Toyota Research, Stanford, and Berkeley.
Make sure to visit them through wandb.com slash papers, that is w-a-n-d-b.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
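Before we move on, here is a small sketch of the two ideas that make this technique so simple: classifying diffuse particles by their depth relative to the water surface, and measuring trapped air from fluid particles that move rapidly towards each other. The flat surface, the thresholds, and the exact expressions below are my own simplifications for illustration, not the paper's precise formulas.

```python
# Diffuse particle sketch: bubble / foam / spray by height, plus a simple
# "trapped air" measure from two particles approaching each other.
import numpy as np

def classify_diffuse_particles(heights, surface_height=0.0, band=0.05):
    """Label each particle as 'bubble', 'foam' or 'spray' from its height."""
    labels = np.empty(len(heights), dtype=object)
    labels[heights < surface_height - band] = "bubble"            # deep underwater
    labels[np.abs(heights - surface_height) <= band] = "foam"     # at the surface
    labels[heights > surface_height + band] = "spray"             # above the surface
    return labels

def trapped_air_potential(pos_i, vel_i, pos_j, vel_j):
    """Large when two fluid particles move rapidly towards each other."""
    rel_pos = pos_j - pos_i
    rel_vel = vel_j - vel_i
    distance = np.linalg.norm(rel_pos) + 1e-8
    # Positive only if the particles approach each other.
    return max(0.0, -np.dot(rel_vel, rel_pos) / distance)

heights = np.array([-0.3, -0.01, 0.02, 0.4])
print(classify_diffuse_particles(heights))  # bubble, foam, foam, spray
print(trapped_air_potential(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                            np.array([1.0, 0.0, 0.0]),
                            np.array([-1.0, 0.0, 0.0])))  # approaching at speed ~2
```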
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. As humans, when looking at the world, our eyes and brain do not process the entirety of the image we have in front of us, but play an interesting trick on us. We can only see fine details in a tiny, tiny foveated region that we are gazing at, while our peripheral or indirect vision only sees a sparse, blurry version of the image, and the rest of the information is filled in by our brain. This is very efficient, because our vision system only has to process a tiny fraction of the visual data that is in front of us, and it still enables us to interact with the world around us. So, what if we took a learning algorithm that does something similar for digital videos? Imagine that we would only need to render a sparse video with every tenth pixel filled with information, and some kind of neural network-based technique would be able to reconstruct the full image, similarly to what our brain does. Yes, that sounds great, but that is very little information to reconstruct an image from. So is it possible? Well, hold on to your papers, because this new work can reconstruct a near-perfect image by looking at less than 10% of the input pixels. So we have this as an input, and we get this. Wow! What is happening here is called a neural reconstruction of foveated rendering data, or you are welcome to refer to it as foveated reconstruction, in short, during your conversations over dinner. The scrambled text part here is quite interesting. One might think that, well, it could be better. However, given the fact that if I look at the appropriate place in the sparse image, I not only cannot read the text, I am not even sure if I see anything that indicates that there is text there at all. So far, the example assumed that we are looking at a particular point in the middle of the screen, and the ultimate question is, how does this deal with a real-life case where the user is looking around? Well, let's see. This is the input, and the reconstruction. Witchcraft. Let's have a look at some more results. Note that this method is developed for head-mounted displays where we have information on where the user is looking over time, and this can make all the difference in terms of optimization. You see a comparison here against a method labeled as multi-resolution. This is from a paper by the name Foveated 3D Graphics, and you can see that the difference in the quality of the reconstruction is truly remarkable. Additionally, it has been trained on 350,000 short natural video sequences, and the whole thing runs in real time. Also, note that we often discuss image inpainting methods in this series. For instance, what you see here is the legendary PatchMatch algorithm, which is one of these, and it is able to fill in missing parts of an image. However, in image inpainting, most of the image is intact, with smaller regions that are missing. This is even more difficult than image inpainting, because the vast majority of the image is completely missing. The fact that we can now do this with learning-based methods is absolutely incredible. The first author of the paper is Anton Kaplanyan, who is a brilliant and very rigorous mathematician, so of course, the results are evaluated in detail, both in terms of mathematics and with a user study. Make sure to have a look at the paper for more on that.
We got to know each other with Anton during the days when all we did was light transport simulations all day, every day, and we were always speculating about potential projects, and to my great sadness, somehow, unfortunately, we never managed to work together on a full project. Again, congratulations, Anton. Beautiful work. What a time to be alive. This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. They offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing, and computer graphics projects. Exactly the kind of works you see here in this series. If you feel inspired by these works and you wish to run your experiments or deploy your already existing works through a simple and reliable hosting service, make sure to join over 800,000 other happy customers and choose Linode. To spin up your own GPU instance and receive a $20 free credit, visit linode.com slash papers or click the link in the video description and use the promo code papers20 during sign-up. Give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
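To give a feel for the sparse input described above, here is a small sketch that generates such a foveated sampling mask: dense near the gaze point and progressively sparser towards the periphery, keeping roughly a tenth of the pixels. The Gaussian falloff and its parameters are assumptions of mine, not the paper's sampling pattern.

```python
# Generate a foveated sampling mask: render densely where the user looks,
# sparsely in the periphery, so only a small fraction of pixels is computed.
import numpy as np

def foveated_mask(height, width, gaze_y, gaze_x, sigma=0.12, floor=0.02, seed=0):
    """Boolean mask: True where a pixel should actually be rendered."""
    ys, xs = np.mgrid[0:height, 0:width]
    # Eccentricity: distance from the gaze point, normalized by image size.
    dist = np.sqrt((ys - gaze_y) ** 2 + (xs - gaze_x) ** 2) / max(height, width)
    # Sampling probability: 1 at the fovea, decaying to a small floor value.
    prob = np.maximum(np.exp(-(dist / sigma) ** 2), floor)
    rng = np.random.default_rng(seed)
    return rng.random((height, width)) < prob

mask = foveated_mask(720, 1280, gaze_y=360, gaze_x=640)
print(f"Rendered pixels: {mask.mean() * 100:.1f}%")  # roughly 10%
```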
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. The opening video sequence of this paper immediately starts with a beautiful snow simulation, which I presume is an homage to a legendary Disney paper from 2013 by the name A Material Point Method for Snow Simulation, which, to the best of my knowledge, was used for the first Frozen movie. I was super excited to showcase that paper when this series started; however, unfortunately, I was unable to get the rights to do it, but I made sure to put a link to the original paper with the same scene in the video description if you're interested. Now, typically, we are looking to produce high-resolution simulations with lots of detail; however, this takes from hours to days to compute. So, how can we deal with this kind of complexity? Well, approximately 400 videos ago, in Two Minute Papers episode 10, we talked about a technique that introduced spatial adaptivity to this process. The adaptive part means that it made the simulation finer and coarser depending on what parts of the simulation are visible. The parts that we don't see can be run through a coarser simulation because we won't be able to see the difference. Very smart. The spatial part means that we use particles and subdivide the 3D space into grid points in which we compute the necessary quantities like velocities and pressures. This was a great paper on adaptive fluid simulations, but now look at this new paper. This one says that it is about temporal adaptivity. There are two issues that immediately arise. First, we don't know what temporal adaptivity means, and even if we did, we would find out that this is something that is almost impossible to pull off. Let me explain. There is a great deal of difficulty in choosing the right time steps for such a simulation. These simulations are run in a way that we check and resolve all the collisions, and then we can advance the time forward by a tiny amount. This amount is called a time step, and choosing the appropriate time step has always been a challenge. You see, if we set it too large, we will be done faster and compute less; however, we will almost certainly miss some collisions because we skipped over them. It gets even worse, because the simulation may end up in a state that is so incorrect that it is impossible to recover from, and we have to throw the entire thing out. If we set it too low, we get a more robust simulation; however, it will take from many hours to days to compute. So what does this temporal adaptivity mean exactly? Well, it means that there is not one global time step for the simulation, but time is advanced differently at different places. You see this delta t here; these are the numbers chosen for the time steps, and the blue color coding means a simple region where there isn't much going on, so we can get away with bigger time steps and less computation without missing important events. The red regions have to be simulated with smaller time steps, because there is a lot going on and we would miss out on that. Hence, the new technique is called an asynchronous method, because it is a crazy simulation where time advances in different amounts at different spatial regions. So how do we test this solution? Well, of course, ideally this should look the same as the synchronized simulation. So does it? You bet your papers it does. Look at that. Absolutely fantastic. And since we can get away with less computation, it is faster. How much faster?
In the worst cases, it is 40% faster, and in the better ones, 10 times faster. So my kind of all-nighter fluid simulations can be done in one night. Sign me up. What a time to be alive. This episode has been supported by Lambda. If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they are offering GPU Cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold on to your papers, because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
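To make the time step discussion a bit more concrete, here is the classic CFL-style rule of thumb for picking a safe time step from the fastest motion in a region. A true asynchronous simulator like the one above does far more careful bookkeeping; this is just the textbook intuition, with made-up particle data.

```python
# CFL-style time step: nothing should travel more than a fraction of a grid
# cell per step, so calm regions get big steps and splashy regions tiny ones.
import numpy as np

def cfl_time_step(velocities, cell_size, cfl_number=0.5, dt_max=1.0 / 24.0):
    """Largest safe dt so no particle moves more than cfl_number cells per step."""
    max_speed = np.max(np.linalg.norm(velocities, axis=1))
    if max_speed < 1e-9:
        return dt_max                       # nothing is moving: take a big step
    return min(dt_max, cfl_number * cell_size / max_speed)

calm_region   = np.random.uniform(-0.1, 0.1, size=(1000, 3))  # slow particles
splash_region = np.random.uniform(-8.0, 8.0, size=(1000, 3))  # fast particles

print("calm region dt:  ", cfl_time_step(calm_region,   cell_size=0.05))
print("splash region dt:", cfl_time_step(splash_region, cell_size=0.05))
```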
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is one of those simulation papers where you can look at it for three seconds and immediately know what it's about. Let's try that. Clearly, expansion and baking is happening. And now, let's look inside. Hmm, yep, this is done. Clearly, this is a paper on simulating the process of baking; loving the idea. So how comprehensive is it? Well, for a proper baking procedure, the simulator also has to be able to deal with melting, solidification, dehydration, coloring, and much, much more. This requires developing a proper thermomechanical model where these materials are modeled as a collection of solids, water, and gas. Let's have a look at some more results. And we have to stop right here, because I'd like to tell you that the information density in this deceivingly simple scene is just stunning. On the x-axis, from left to right, we have a decreasing temperature in the oven, left being the hottest, and the chocolate chip cookies above are simulated with an earlier work from 2014. The ones in the bottom row are made with the new technique. You can see a different kind of shape change as we change the temperature, and if we crank the oven up even more, look there, even the chocolate chips are melting. Oh my goodness, what a paper. Talking about information density, you can also see here how these simulated pieces of dough of different viscosities react to different amounts of stress. Viscosity means the amount of resistance against deformation; therefore, as we go up, you can witness this kind of resistance increasing. Here you can see a cross-section of the bread, which shows the amount of heat everywhere. This not only teaches us why crust forms on the outside layer, but you can see how the heat diffuses slowly into the inside. This is a maxed-out paper. By this, I mean the execution quality is through the roof, and the paper is considered done not when it looks alright, but when the idea is pushed to the limit, and the work is as good as it can be without trivial ways to improve it. And the results are absolute witchcraft. Huge congratulations to the authors. In fact, double congratulations, because it seems to me that this is only the second paper of Mengyuan Ding, the lead author, and it has been accepted to the SIGGRAPH Asia conference, which is one of the greatest achievements a computer graphics researcher can dream of. A paper of such quality on the second try. Wow! This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford, and Berkeley. They have excellent tutorial videos. In this one, the CEO himself teaches you how to build your own neural network and more. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
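As a tiny taste of the heat diffusion part of this discussion, here is a one-dimensional toy: a slice through the dough with hot oven air on both sides, marched forward with simple finite differences. The material numbers are rough assumptions of mine, and the real paper couples this with a full thermomechanical model of solids, water, and gas, which is far beyond this sketch.

```python
# 1D heat diffusion through a slice of dough: the boundary stays oven-hot
# (the crust-to-be), while the heat only slowly creeps towards the center.
import numpy as np

n_cells, dx = 50, 0.002            # a 10 cm wide loaf, 2 mm cells
alpha = 1.4e-7                     # rough thermal diffusivity of dough (m^2/s)
dt = 1.0                           # 1 s steps; stable, alpha*dt/dx^2 ~ 0.035 < 0.5
oven_temp, dough_temp = 200.0, 25.0

temperature = np.full(n_cells, dough_temp)
temperature[0] = temperature[-1] = oven_temp      # surface in contact with hot air

for _ in range(1800):              # roughly 30 minutes of baking
    laplacian = temperature[:-2] - 2.0 * temperature[1:-1] + temperature[2:]
    temperature[1:-1] += alpha * dt / (dx * dx) * laplacian
    temperature[0] = temperature[-1] = oven_temp  # oven keeps the surface hot

print("surface:", temperature[0],
      "1 cm in:", round(temperature[5], 1),
      "center:", round(temperature[n_cells // 2], 1))
```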
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Some papers come with an intense media campaign and a lot of nice videos, and some other amazing papers are at risk of slipping under the radar because of the lack of such a media presence. This new work from DeepMind is indeed absolutely amazing, you'll see in a moment why, and it is not really talked about. So in this video, let's try to reward such a work. In many episodes you get ice cream for your eyes, but today you get ice cream for your mind. Buckle up. In the last few years, we have seen DeepMind's AI defeat the best Go players in the world, and after OpenAI's venture into the game of Dota 2, DeepMind embarked on a journey to defeat pro players in StarCraft 2, a real-time strategy game. This is a game that requires a great deal of mechanical skill, split-second decision-making, and we have imperfect information, as we only see what our units can see. A nightmare situation for any AI. You see some footage of its previous games here on the screen. And in my opinion, people seem to pay too much attention to how well a given algorithm performs and too little to how general it is. Let me explain. DeepMind has developed a new technique that tries to rely more on its predictions of the future and generalizes to many, many more games than previous techniques. This includes AlphaZero, a previous technique also from them that was able to play Go, chess, and Japanese chess, or shogi, as well, and beat any human player at these games confidently. This new method is so general that it does as well as AlphaZero at these games; however, it can also play a wide variety of Atari games as well. And that is the key here. Writing an algorithm that plays chess well has been a possibility for decades. For instance, if you wish to know more, make sure to check out Stockfish, which is an incredible open source project and a very potent algorithm. However, Stockfish cannot play anything else. Whenever we look at a new game, we have to derive a new algorithm that solves it. Not so much with these learning methods that can generalize to a wide variety of games. This is why I would like to argue that the generalization capability of these AIs is just as important as their performance. In other words, if there were a narrow algorithm that is the best possible chess algorithm that ever existed, or a somewhat below world champion level AI that can play any game we can possibly imagine, I would take the latter in a heartbeat. Now, speaking about generalization, let's see how well it does at these Atari games. Shall we? After 30 minutes of time on each game, it significantly outperforms humans on nearly all of these games; the percentages show you here what kind of outperformance we are talking about. In many cases, the algorithm outperforms us several times over and up to several hundred times. Absolutely incredible. As you see, it has a more than formidable score on almost all of these games and therefore it generalizes quite well. I'll tell you in a moment about the games it falters at, but for now, let's compare it to three other competing algorithms. You see one bold number per row, which always highlights the best performing algorithm for your convenience. The new technique beats the others on about 66% of the games, including the recurrent experience replay technique, in short, R2D2. Yes, this is another one of those crazy paper names. And even when it falls short, it is typically very close.
As a reference, humans triumphed on less than 10% of the games. We still have a big fat zero on the Pitfall and Montezuma's Revenge games. So why is that? Well, these games require long-term planning, which is one of the most difficult cases for reinforcement learning algorithms. In an earlier episode, we discussed how we can infuse an AI agent with curiosity to go out there and explore some more, with success. However, note that these algorithms are more narrow than the one we've been talking about today. So there is still plenty of work to be done, but I hope you see that this is incredibly nimble progress in AI research. Bravo, DeepMind. What a time to be alive. This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. They offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing, and computer graphics projects. Exactly the kind of works you see here in this series. If you feel inspired by these works and you wish to run your own experiments or deploy your already existing works through a simple and reliable hosting service, make sure to join over 800,000 other happy customers and choose Linode. To spin up your own GPU instance and receive a $20 free credit, visit linode.com slash papers or click the link in the description and use the promo code Papers20 during sign-up. Give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
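A quick note on how such human-outperformance percentages are usually computed: Atari scores are typically normalized so that random play counts as 0% and average human play as 100%. The sketch below uses made-up numbers purely to show the arithmetic; it does not reproduce any figures from the paper.

```python
# Human-normalized score: 0% = random play, 100% = average human performance.
def human_normalized_score(agent, random_play, human):
    """Percentage of human performance, with random play as the zero point."""
    return 100.0 * (agent - random_play) / (human - random_play)

# Hypothetical game: random play scores 200, an average human scores 3000.
print(human_normalized_score(agent=3000,  random_play=200, human=3000))  # 100.0
print(human_normalized_score(agent=12000, random_play=200, human=3000))  # ~421
print(human_normalized_score(agent=200,   random_play=200, human=3000))  # 0.0
```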
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Have a look and marvel at this learning-based assembler robot that is able to put together simple contraptions. Since this is a neural network-based learning method, it needs to be trained to be able to do this. So, how is it trained? Normally, to train such an algorithm, we would have to show it a lot of pairs of the same contraption and tell it that this is what it looks like when it's disassembled, and what you see here is the same thing assembled. If we did this, this method would be called supervised learning. This would be very time-consuming and potentially expensive, as it would require the presence of a human as well. A more convenient way would be to go for unsupervised learning, where we just chuck a lot of things on the table and say, well, robot, you figure it out. However, this would be very inefficient, if at all possible, because we would have to provide it with many, many contraptions that wouldn't even fit on the table. But this paper went for none of these solutions, as they opted for a really smart, self-supervised technique. So what does that mean? Well, first, we give the robot an assembled contraption and ask it to disassemble it. And therein lies the really cool idea, because disassembling it is easier, and by rewinding the process, it also gets to know how to assemble it later. And the training process takes place by assembling, disassembling, and doing it over and over again, several hundred times per object. Isn't this amazing? Love it. However, what is the point of all this? Instead, we could just add explicit instructions to a non-learning-based robot to assemble the objects. Why not just do that? And the answer lies in one of the most important aspects within machine learning: generalization. If we program a robot to be able to assemble one thing, it will be able to do exactly that, assemble one thing. And whenever we have a new contraption on our hands, we need to reprogram it. However, with this technique, after the learning process took place, we will be able to give it a new, previously unseen object, and it will have a chance to assemble it. This requires intelligence to perform. So how good is it at generalization? Well, get this, the paper reports that when showing it new objects, it was able to successfully assemble new, previously unseen contraptions 86% of the time. Incredible. So, what about the limitations? This technique works on a 2D planar surface, for instance, this table, and while it is able to insert most of these parts vertically, it does not deal well with more complex assemblies that require inserting screws and pegs at a 45-degree angle. As we always say, two more papers down the line and this will likely be improved significantly. If you have ever bought a new bed or a cupboard and said, well, it just looks like a block, how hard can it be to assemble? Wait, does this thing have more than 100 screws and pegs? I wonder why. And then, 4.5 hours later, you'll find out yourself. I hope techniques like these will help us save time by enabling us to buy many of these contraptions preassembled, and it can be used for much, much more. What a time to be alive. This episode has been supported by Lambda. If you're a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they're offering GPU cloud services as well.
The Lambda GPU cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers because the Lambda GPU cloud costs less than half of AWS and Azure. Make sure to go to LambdaLabs.com slash papers and sign up for one of their amazing GPU instances today. Thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
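Here is a heavily simplified sketch of the rewinding idea from this episode: record the motions used to take the contraption apart, then reverse their order and directions to obtain a candidate assembly sequence that can serve as training data. The data structures are invented for illustration and have nothing to do with the real robot's controller.

```python
# Self-supervision by rewinding: a disassembly trace, reversed, becomes an
# assembly plan that the robot can learn from.
from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    part: str
    dx: float
    dy: float
    dz: float

def reverse_for_assembly(disassembly: List[Action]) -> List[Action]:
    """Rewind the disassembly: opposite order, opposite motion for each part."""
    return [Action(a.part, -a.dx, -a.dy, -a.dz) for a in reversed(disassembly)]

# One recorded (hypothetical) disassembly episode: pull a peg, then lift a plate.
episode = [Action("peg_1", 0.0, 0.0, 0.04), Action("plate", 0.1, 0.0, 0.02)]
for step in reverse_for_assembly(episode):
    print(step)   # assembly plan: place the plate first, then insert the peg
```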
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In the last few years, neural network based learning algorithms became so good at image recognition tasks that they can often rival and sometimes even outperform humans in these endeavors. Beyond making these neural networks even more accurate in these tasks, interestingly, there are plenty of research works on how to attack and mislead these neural networks. I think this area of research is extremely exciting, and I'll now try to show you why. One of the first examples of an adversarial attack can be performed as follows. We present such a classifier with an image of a bus, and it will successfully tell us that yes, this is indeed a bus. Nothing too crazy here. Now we show it not an image of a bus, but a bus plus some carefully crafted noise that is barely perceptible, which forces the neural network to misclassify it as an ostrich. I will stress that this is not any kind of noise, but the kind of noise that exploits biases in the neural network, which is by no means easy or trivial to craft. However, if we succeed at that, this kind of adversarial attack can be pulled off on many different kinds of images. Everything that you see here on the right will be classified as an ostrich by the neural network these noise patterns were crafted for. In a later work, researchers of the Google Brain team found that we can not only coerce the neural network into making some mistake, but we can even force it to make exactly the kind of mistake we want. This example here reprograms an image classifier to count the number of squares in our images. However, interestingly, some adversarial attacks do not need carefully crafted noise or any tricks for that matter. Did you know that many of them occur naturally in nature? This new work contains a brutally hard data set with such images that throw off even the best neural image recognition systems. Let's have a look at an example. If I were the neural network, I would look at this squirrel and claim that with high confidence I can tell you that this is a sea lion. And you, human, may think that this is a dragonfly, but you would be wrong. I'm pretty sure that this is a manhole cover. Well, except that it's not. The paper shows many of these examples, some of which don't really fool my brain. For instance, I don't see this mushroom as a pretzel at all, but there was something about that dragonfly that upon a cursory look may get registered as a manhole cover. If you look quickly, you see a squirrel here, just kidding, it's a bullfrog. I feel that if I look at some of these with a fresh eye, sometimes I get a similar impression as the neural network. I'll put up a bunch more examples for you here. Let me know in the comments which are the ones that got you. Very cool project. I love it. What's even better, this data set, by the name ImageNet-A, is now available for everyone free of charge. And if you remember, at the start of the video, I said that it is brutally hard for neural networks to identify what is going on here. So what kind of success rates can we expect? 70%? Maybe 50%? Nope, 2%. Wow. In a world where some of these learning-based image classifiers are better than us on some data sets, they are vastly outclassed by us humans on these natural adversarial examples. If you have a look at the paper, you will see that the currently known techniques to improve the robustness of training show little to no improvement on this.
I cannot wait to see some follow-up papers on how to crack this nut. We can learn so much from this paper, and we will likely learn even more from these follow-up works. Make sure to subscribe and also hit the bell icon to never miss future episodes. What a time to be alive. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley. In this post, they show you how to train a state-of-the-art machine learning model with over 99% accuracy on classifying handwritten numbers, and how to use their tools to get a crystal clear understanding of what your model exactly does and what part of the letters it is looking at. Make sure to visit them through wandb.com slash papers, that is w-a-n-d-b.com slash papers, or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
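For reference, here is the simplest well-known recipe for the carefully crafted noise mentioned at the start of this episode, the fast gradient sign method: nudge every pixel slightly in the direction that increases the classifier's loss. The original bus-to-ostrich result used an earlier optimization-based attack, and the toy classifier below is untrained, so treat this only as an illustration of the mechanics rather than a working attack on a real network.

```python
# FGSM sketch: image + epsilon * sign(gradient of the loss w.r.t. the image).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in for a photo
true_label = torch.tensor([3])

loss = loss_fn(model(image), true_label)
loss.backward()                                        # gradient w.r.t. the pixels

epsilon = 0.01                                         # barely perceptible change
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

# On a real, trained network, a well-chosen epsilon often flips the label.
print("prediction before:", model(image).argmax(dim=1).item())
print("prediction after: ", model(adversarial).argmax(dim=1).item())
```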
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Reinforcement learning is a technique in the field of machine learning to learn how to navigate a labyrinth, play a video game, or teach a digital creature to walk. Usually, we are interested in a series of actions that are in some sense optimal in a given environment. Despite the fact that many enormous tomes exist to discuss the mathematical details, the intuition behind the algorithm itself is remarkably simple. Choose an action, and if you get rewarded for it, try to find out which series of actions led to this and keep doing it. If the rewards are not coming, try something else. The reward can be, for instance, our score in a computer game or how far our digital creature could walk. Approximately 300 episodes ago, OpenAI published one of their first major works by the name Gym, where anyone could submit their solutions and compete against each other on the same games. It was like Disney World for reinforcement learning researchers. A moment ago, I noted that in reinforcement learning, if the rewards are not coming, we have to try something else. Hmm, is that so? Because there are cases where trying crazy new actions is downright dangerous. For instance, imagine that during the training of this robot arm, initially, it would try random actions and start flailing about, where it may damage itself or some other equipment, or, even worse, humans may come to harm. Here you see an amusing example of DeepMind's reinforcement learning agent from 2017 that liked to engage in similar flailing activities. So, what could be a possible solution for this? Well, have a look at this new work from OpenAI by the name Safety Gym. In this paper, they introduce what they call the constrained reinforcement learning formulation, in which these agents can be discouraged from performing actions that are deemed potentially dangerous in an environment. You can see an example here where the AI has to navigate through these environments and achieve a task such as reaching the green goal signs, pushing buttons, or moving a box around to a prescribed position. The constraint part comes in whenever some sort of safety violation happens, which are in this environment collisions with the boxes or blue regions. All of these events are highlighted with this red sphere and a good learning algorithm should be instructed to try to avoid these. The goal of this project is that in the future, for reinforcement learning algorithms, not only the efficiency, but the safety scores should also be measured. This way, a self-driving AI would be incentivized not to just recklessly drive to the finish line, but to respect our safety standards along the journey as well. While noting that clearly self-driving cars may be achieved with other kinds of algorithms, many of which have been in the works for years, there are many additional applications for this work. For instance, the paper discusses the case of incentivizing recommender systems to not show psychologically harmful content to its users, or to make sure that a medical question-answering system does not mislead us with false information. This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. They offer you virtual servers that make it easy and affordable to host your own app, site, project, or anything else in the cloud. Whether you are a Linux expert or just starting to tinker with your own code, Linode will be useful for you. 
A few episodes ago, we played with an implementation of OpenAI's GPT2, where our excited viewers accidentally overloaded the system. With Linode's load balancing technology and instances ranging from shared nanodes, all the way up to dedicated GPUs, you don't have to worry about your project being overloaded. To get $20 of free credit, make sure to head over to Linode.com slash papers and sign up today using the promo code Papers20. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
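Before we move on, here is a rough sketch of what the constrained formulation from this episode can look like in code: keep a running cost for safety violations and penalize the reward with a weight that grows whenever a cost budget is exceeded. Everything below, the dummy environment, the threshold and the update rule, is an illustrative stand-in and not Safety Gym's actual API.

import random

COST_LIMIT = 5.0      # allowed safety violations per episode (made-up budget)
LR_LAMBDA = 0.01      # learning rate for the penalty weight (made up)
penalty_weight = 0.0  # grows when the agent violates the budget

def policy():
    # Dummy policy; a real agent would be a trained neural network.
    return random.choice(["left", "right", "forward"])

def run_episode():
    # Dummy environment: each step yields a task reward and a safety cost.
    total_reward, total_cost = 0.0, 0.0
    for _ in range(100):
        policy()
        total_reward += random.uniform(0.0, 1.0)             # task reward
        total_cost += 1.0 if random.random() < 0.1 else 0.0  # safety violation
    return total_reward, total_cost

for episode in range(1000):
    reward, cost = run_episode()
    # The agent would be trained on this penalized return instead of the raw
    # reward, which discourages actions that rack up safety costs.
    penalized_return = reward - penalty_weight * cost
    # Dual update: raise the penalty when the cost budget is exceeded,
    # relax it (never below zero) when the agent stays within budget.
    penalty_weight = max(0.0, penalty_weight + LR_LAMBDA * (cost - COST_LIMIT))

print("final penalty weight:", round(penalty_weight, 3))

The nice part of this style of formulation is that the penalty is not hand-tuned; it finds its own level depending on how often the agent breaks the rules.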
Creating photorealistic materials for light transport algorithms requires carefully fine-tuning a set of material properties to achieve a desired artistic effect. This is a lengthy process that involves a trained artist with specialized knowledge. In this work, we propose a system that only requires basic image processing knowledge and enables users without photorealistic rendering experience to create high quality materials. This is highly desirable as human thinking is inherently visual and not based on physically based material parameters. In our proposed workflow, all the user needs to do is apply a few intuitive transforms to a source image, and in the next step, our technique produces the closest photorealistic material that approximates this target image. One of our key observations is that even though this edited target image is often not physically achievable, in many cases a photorealistic material model can be found that closely matches this image. Our method generates results in less than 30 seconds and works in the presence of poorly edited target images, like the discoloration of the pedestal or the background of the gold material here. This technique is especially useful early in the material design process where the artist seeks to rapidly iterate over a variety of possible artistic effects. We also propose an extension to predict image sequences with a tight budget of 1-2 seconds per image. To achieve this, we first propose a simple optimization formulation that is able to produce accurate solutions, but takes relatively long due to the lack of a useful initial guess. Our other main observation is that an approximate solution can also be achieved without an optimization step by training a simple encoder neural network. The main advantage of this method is that it produces a solution within a few milliseconds, with the drawback that the provided solution is only approximate. We refer to this as the inversion technique. Both of these solutions suffer from drawbacks. The optimization approach provides results that closely resemble the target image, but is impractical due to the fact that it requires too many function evaluations and gets stuck in local minima, whereas the inversion technique rapidly produces a solution that is more approximate in nature. We show that the best aspects of these two solutions can be fused together into a hybrid method that initializes our optimizer with the prediction of the neural network. This hybrid method opens up the possibility of creating novel materials by stitching together the best aspects of two or more materials, deleting unwanted features through image inpainting, contrast enhancement, or even fusing together two materials. These synthesized materials can also be easily inserted into already existing scenes by the user. In this scene, we made a material mixture to achieve a richer nebula effect inside the glass. We also show in the paper that this hybrid method not only gives a head start to the optimizer by endowing it with a useful initial guess, but provides strictly higher quality outputs than any of the two previous solutions on all of our test cases. Furthermore, if at most a handful of materials are sought, the total modeling times reveal that our technique compares favorably to previous work on mass-scale material synthesis. We believe this method will offer an appealing entry point for novices into the world of photorealistic material modeling. Thank you for your attention.
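To make the hybrid idea described above a bit more concrete for the programmers in the audience, here is a toy sketch of initializing a slow optimizer with a quick approximate guess. The "renderer" below is just a simple known function, the "inversion network" is replaced by a crude heuristic, and scipy's general-purpose minimizer stands in for the real optimization, so none of this is the actual system from the paper, only the shape of the idea.

import numpy as np
from scipy.optimize import minimize

def render(params):
    # Stand-in for a photorealistic renderer: maps two "material parameters"
    # to a tiny four-pixel "image".
    a, b = params
    return np.array([np.sin(a) + b, a * b, np.cos(b), a - b])

target_image = render(np.array([0.7, 1.3]))   # what the artist asked for

def image_loss(params):
    return np.sum((render(params) - target_image) ** 2)

def approximate_inverse(image):
    # Crude stand-in for the inversion network: invert two of the "pixels"
    # analytically, then accept that the guess is only approximate.
    b_guess = np.arccos(np.clip(image[2], -1.0, 1.0))
    a_guess = image[3] + b_guess
    return np.array([a_guess, b_guess]) + 0.1   # deliberately a bit off

cold_start = np.zeros(2)
warm_start = approximate_inverse(target_image)

cold = minimize(image_loss, cold_start, method="Nelder-Mead")
warm = minimize(image_loss, warm_start, method="Nelder-Mead")

print("cold start: loss %.6f after %d evaluations" % (cold.fun, cold.nfev))
print("warm start: loss %.6f after %d evaluations" % (warm.fun, warm.nfev))

On a toy problem like this, the warm start usually reaches the same quality with far fewer function evaluations, which is the whole point of giving the optimizer a useful initial guess.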
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. This beautiful scene is from our paper by the name Gaussian Material Synthesis, and it contains more than a hundred different materials, each of which has been learned and synthesized by an AI. None of these daisies and none of the lions are alike, each of them has a different material model. Normally, to obtain results like this, an artist has to engage in a direct interaction with an interface that you see here. This contains a ton of parameters, and to be able to use it properly, the artist needs to have years of experience in photorealistic rendering and material modeling. But, unfortunately, the problem gets even worse. Since a proper light simulation program needs to create an image with the new material parameters, this initially results in a noisy image that typically takes 40 to 60 seconds to clear up. We have to wait out these 40 to 60 seconds for every single parameter change that we make. This would take several hours in practical cases. The goal of this project was to speed up workflows like this by teaching an AI the concept of material models, such as metals, minerals, and translucent materials. With our technique, first, we show the user a gallery of random materials, who then assigns a score to each of them, saying that I like this one, I didn't like that one, and we get an AI to learn our preferences and recommend new materials for us. We also created a neural renderer that replaces the light simulation program and creates a near-perfect image of the output in about 4 milliseconds. That's not just real time, that's 10 times faster than real time. That is very fast and accurate. However, our neural renderer is limited to the scene that you see here. So, the question is, is it possible to create something that is a little more general? Well, let's have a look at this new work that performs something similar, which they call differentiable rendering. The problem formulation is the following. We specify a target image that is either rendered by a computer program or, even better, a photo. The input is a pitiful approximation of it, and now hold on to your papers, because it progressively changes the input materials, textures, and even the geometry to match this photo. My goodness, even the geometry. This thing is doing three people's jobs when given a target photo, and you haven't seen the best part yet, because there is an animation that shows how the input evolves over time as we run this technique. As we start out, it almost immediately matches the material properties and the base shape, and after that, it refines the geometry to make sure that the more intricate details are also matched properly. As always, some limitations apply, for instance, area light sources are fine, but it doesn't support point light sources, and it may show problems in the presence of discontinuities and mirror-like materials. I cannot wait to see where this ends up a couple of papers down the line and I really hope this thing takes off. In my opinion, this is one of the most refreshing and exciting ideas in photorealistic rendering research as of late. More differentiable rendering papers, please. I would like to stress that there are also other works on differentiable rendering. This is not the first one. However, if you have a closer look at the paper in the description, you will see that it does better than previous techniques. 
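In spirit, differentiable rendering means that the renderer exposes gradients with respect to its inputs, so we can nudge materials and geometry by gradient descent until the rendered image matches a target. The toy loop below does exactly that with a trivially simple stand-in "renderer" and hand-derived gradients; a real system differentiates an actual light simulation, which is the genuinely hard part these papers address.

import numpy as np

def render(albedo, brightness):
    # Toy differentiable "renderer": a 3-pixel image that depends smoothly
    # on two scene parameters.
    return brightness * np.array([albedo, albedo ** 2, 1.0 - albedo])

target = render(0.8, 2.0)          # pretend this is the reference photo

albedo, brightness = 0.2, 0.5      # poor initial guess
lr = 0.01

for step in range(3000):
    img = render(albedo, brightness)
    residual = img - target
    # Analytic gradients of the squared-error loss w.r.t. the two parameters.
    d_img_d_albedo = brightness * np.array([1.0, 2.0 * albedo, -1.0])
    d_img_d_brightness = np.array([albedo, albedo ** 2, 1.0 - albedo])
    grad_albedo = 2.0 * residual @ d_img_d_albedo
    grad_brightness = 2.0 * residual @ d_img_d_brightness
    albedo -= lr * grad_albedo
    brightness -= lr * grad_brightness

final_loss = np.sum((render(albedo, brightness) - target) ** 2)
print("recovered albedo %.3f, brightness %.3f, remaining loss %.6f"
      % (albedo, brightness, final_loss))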
In this series, I try to make you feel how I feel when I read these papers and I hope I have managed this time, but you be the judge. Please let me know in the comments. And if this got you excited to learn more about light transport, I am holding a master-level course on it at the Technical University of Vienna. This course used to take place behind closed doors, but I feel that the teachings shouldn't only be available for the 20 to 30 people who can afford a university education, but they should be available for everyone. So, I recorded the entirety of the course and it is now available for everyone free of charge. If you are interested, have a look at the video description to watch them. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley. In this tutorial, they show you how to visualize your machine learning models and how to choose the best one with the help of their tools. Make sure to visit them through wandb.com slash papers, that is, w-a-n-d-b.com slash papers, or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
This episode has been supported by Lambda. Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we talk about research on all kinds of physics simulations, including fluids, collision physics, and we have even ventured into hair simulations. We mostly talk about how the individual hair strands should move and how they should look in terms of color and reflectance. Having these beautiful videos takes getting many, many moving parts right, and interestingly, the very first step comes before any of those: first, we have to get the 3D geometry of these hairstyles into our simulation system. In a previous episode, we have seen an excellent work that does this well for human hair. But what if we would like to model not human hair, but something completely different? Well, hold on to your papers, because this new work is so general that it can look at an input image or video and give us not only a model of the human hair, but human skin, garments, and of course, my favorite, smoke plumes and more. But if you look here, this part begs the following question. The input is an image and the output also looks like an image, and we need to make them similar. So, what's the big deal here? A copying machine can do that. No? Well, not really. Here's why. To create the output, we are working with something that indeed looks like an image, but it is not an image. It is a 3-dimensional cube in which we have to specify color and opacity values everywhere. After that, we simulate rays of light passing through this volume, which is a technique that we call ray marching, and this process has to produce the same 2D image through ray marching as what was given as an input. That's much, much harder than building a copying machine. As you see here, normally, this does not work well at all, because, for instance, a standard algorithm sees lights in the background and assumes that these are really bright and dense points. That is kind of true, but they are usually not even part of the data that we would like to reconstruct. To solve this issue, the authors propose learning to tell the foreground and background images apart, so they can be separated before we start the reconstruction of the human. And this is a good research paper, which means that if it contains multiple new techniques, each of them is tested separately so we know how much they contribute to the final results. We get the previously seen dreadful results without the background separation step. Here are the results with the learned backgrounds. We can still see the lights due to the way the final image is constructed, and the fact that we have so little of this halo effect is really cool. Here, you see the results with the true background data where the background learning step is not present. Note that this is cheating, because this data is not available for all cameras and backgrounds, however, it is a great way to test the quality of this learning step. The comparison of the learned method against this reveals that the two are very close, which is exactly what we are looking for. And finally, the input footage is also shown for reference. This is ultimately what we are trying to achieve, and as you see, the output is quite close to it. The final algorithm excels at reconstructing volume data for toys, smoke plumes, and humans alike. And the coolest part is that it works for not only stationary inputs, but for animations as well. 
Wait, actually, there is something that is perhaps even cooler: with the magic of neural networks and latent spaces, we can even animate this data. Here you see an example of that where an avatar is animated in real time by moving around this magenta dot. A limiting factor here is the resolution of this reconstruction. If you look closely, you can see that some fine details are missing, but you know the saying, given the rate of progress in machine learning research, two more papers down the line, and this will likely be orders of magnitude better. And if you feel that you always need to take your daily dose of papers, my statistics show that many of you are subscribed, but didn't use the bell icon. If you click this bell icon, you will never miss a future episode and can properly engage in your paper addiction. This episode has been supported by Lambda. If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they are offering GPU cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to LambdaLabs.com slash papers or click the link in the video description and sign up for one of their amazing GPU instances today. Our thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
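For reference, ray marching through such a color-and-opacity volume boils down to stepping along each ray and compositing the samples front to back. The snippet below shows only that accumulation rule for a single ray with made-up samples; the actual method optimizes the whole volume so that millions of such rays reproduce the input footage.

import numpy as np

def march_ray(colors, opacities):
    """Front-to-back alpha compositing along one ray.
    colors:    (N, 3) RGB sample at each step along the ray
    opacities: (N,)   opacity (alpha) of each sample in [0, 1]
    """
    pixel = np.zeros(3)
    transmittance = 1.0            # how much light still gets through
    for rgb, alpha in zip(colors, opacities):
        pixel += transmittance * alpha * rgb
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-4:   # early exit: the ray is fully blocked
            break
    return pixel

# Made-up samples: a faint red haze in front of a dense blue blob.
colors = np.array([[1.0, 0.2, 0.2]] * 5 + [[0.1, 0.1, 1.0]] * 5)
opacities = np.array([0.05] * 5 + [0.6] * 5)

print("composited pixel:", march_ray(colors, opacities))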
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. The paper we are going to cover today, in my view, is one of the more important things that happened in AI research lately. In the last few years, we have seen DeepMind's AI defeat the best Go players in the world, and after OpenAI's venture in the game of Dota 2, DeepMind embarked on a journey to defeat pro players in Starcraft 2, a real-time strategy game. This is a game that requires a great deal of mechanical skill, split-second decision-making, and we have imperfect information as we only see what our units can see. A nightmare situation for an AI. The previous version of AlphaStar we covered in this series was able to beat at least mid-grandmaster level players, which is truly remarkable, but as with every project of this complexity, there were limitations and caveats. In our earlier video, the paper was still pending, and now it has finally appeared, so my sleepless nights have officially ended, at least for this work, and now we can look into some more results. One of the limitations of the earlier version was that DeepMind needed to further tune some of the parameters and rules to make sure that the AI and the players play on an even footing. For instance, the camera movement and the number of actions the AI can make per minute have been limited some more and are now more human-like. TLO, a professional Starcraft 2 player, noted that this time around, it indeed felt very much like playing another human player. The second limitation was that the AI was only able to play Protoss, which is one of the three races available in the game. This new version can now play all three races, and here you see its MMR ratings, a number that describes the skill level of the AI, and for non-experts, win percentages for each individual race. As you see, it is still the best with Protoss, however, all three races are well over the 99% mark. Absolutely amazing. In this version, there is also more emphasis on self-play, and the goal is to create a learning algorithm that is able to learn how to play really well by playing against previous versions of itself millions and millions of times. This is, again, one of those curious cases where the agents train against themselves in a simulated world, and then when the final AI was deployed on the official game servers, it played against human players for the very first time. I promise to tell you about the results in a moment, but for now, please note that relying more on self-play is extremely difficult. Let me explain why. Self-play agents have a well-known drawback of forgetting, which means that as they improve, they might forget how to win against previous versions of themselves. Since Starcraft 2 is designed in a way that every unit and strategy has an antidote, we have a rock paper scissors kind of situation where the agent plays rock all the time because it has encountered a lot of scissors lately. Then, when a lot of papers appear, no pun intended, it will start playing scissors more often and completely forget about the olden times when the rock was all the rage. And on and on this circle goes without any real learning or progress. This doesn't just lead to suboptimal results. This leads to disastrously bad learning if any learning at all. But it gets even worse. This situation opens up the possibility for an exploiter to take advantage of this information and easily beat these agents. In concrete Starcraft terms, such an exploit could be trying to defeat the AlphaStar AI 
early by rushing it with workers and warping in photon cannons into their base. This strategy is also known as a cannon rush and as you can see here with the red agent performing this, it can quickly defeat the unsuspecting blue opponent. So, how do we defend against such exploits? DeepMind used a clever idea here by trying to turn the whole thing around and use these exploits to its advantage. How? Well, they propose a novel self-play method where they additionally insert these exploiter AIs to expose the main AI's flaws and create an overall more knowledgeable and robust agent. So, how did it go? Well, as a result, you can see how the green agent has learned to adapt to this by pulling its worker line and successfully defended the cannon rush of the red AI. This is proper machine learning progress happening right before our eyes. Glorious. This is just one example of using exploiters to create a better main AI, but the training process continually creates newer and newer kinds of exploiters. For instance, you will see in a moment that it later came up with a nasty strategy, including attacking the main base with cloaking units. One of the coolest parts of this work, in my opinion, is that this kind of exploitation is a general concept that will surely come useful for completely different test domains as well. We noted earlier that it finally started playing humans for the first time on the official servers. So, how did that go? In my opinion, given the difficulty and the vast search space we have in Starcraft 2, creating a self-learning AI that has the skills of an amateur player is absolutely incredible. But, that's not what happened. Hold onto your papers because it quickly reached Grandmaster level with all three races and ranked above 99.8% of the officially ranked human players. Bravo, DeepMind. Stunning work. Later, it also played Serral, a decorated world champion Zerg player, one of the most dominant players of our time. I will not spoil the results, especially given that there were limitations as Serral wasn't playing on his own equipment, but I will note that Artosis, a well-known and beloved Starcraft player and commentator, analyzed these matches and said, quote, the results are so impressive and I really feel like we can learn a lot from it. I would be surprised if a non-human entity could get this good and there was nothing to learn. His commentary was excellent and is tailored towards people who don't know anything about the game. He'll often pause the game and slowly explain what is going on. In these matches, I love the fact that so many times it makes so many plays that we consider to be very poor and somehow, overall, it still plays outrageously well. It has unit compositions that nobody in their right mind would play. It is kind of like a drunken kung fu master, but in Starcraft 2. Love it. But no more spoilers, I think you should really watch these matches and of course I put a link to his analysis videos in the video description. Even though both this video and the paper appear to be laser-focused on playing Starcraft 2, it is of utmost importance to note that this is still just a testbed to demonstrate the learning capabilities of this AI. As amazing as it sounds, DeepMind wasn't just looking to spend millions and millions of dollars on research just to play video games. The building blocks of AlphaStar are meant to be reasonably general, which means that parts of this AI can be reused for other things. 
For instance, Demis Hassabis mentioned weather prediction and climate modeling as examples. If you take only one thought from this video, let it be this one. There is really so much to talk about, so make sure to head over to the video description, watch the matches and check out the paper as well. The evaluation section is as detailed as it can possibly get. What a time to be alive. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley. Here you see a technical case study they published on how a team can work together to build and deploy machine learning models in an organized way. Make sure to visit them through wandb.com slash papers, that is, w-a-n-d-b.com slash papers, or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
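The forgetting problem and the exploiter idea can both be played out on the rock-paper-scissors example from this episode. In the toy sketch below, a main agent keeps a mixed strategy, trains against a pool of its own past versions, and an exploiter always plays the best response to the main agent's current favorite move; the numbers and the crude update rule are illustrative stand-ins for the league training described above, not DeepMind's implementation.

import random
from collections import Counter

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def best_response(strategy):
    # The exploiter plays whatever beats the main agent's favorite move.
    favorite = max(strategy, key=strategy.get)
    return {m: 1.0 if BEATS[m] == favorite else 0.0 for m in MOVES}

def sample(strategy):
    return random.choices(MOVES, weights=[strategy[m] for m in MOVES])[0]

main = {m: 1.0 / 3.0 for m in MOVES}   # the main agent's mixed strategy
league = [dict(main)]                  # pool of past versions (no forgetting)

for generation in range(200):
    exploiter = best_response(main)
    opponents = league + [exploiter]   # train against history AND the exploiter
    wins = Counter()
    for opp in opponents:
        for _ in range(50):
            a, b = sample(main), sample(opp)
            if BEATS[a] == b:
                wins[a] += 1
    # Crude "learning" step: shift probability mass toward winning moves.
    total = sum(wins.values()) or 1
    main = {m: 0.9 * main[m] + 0.1 * wins[m] / total for m in MOVES}
    league.append(dict(main))

print({m: round(p, 2) for m, p in main.items()})  # tends to stay near 1/3 each

Because the exploiter immediately punishes any lopsided strategy, the main agent is pushed toward a more robust mix instead of cycling through rock, paper and scissors forever.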
This episode has been supported by Lambda. Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. In an earlier episode, we covered a paper by the name Everybody Dance Now. In this stunning work, we could take a video of a professional dancer, then record a video of our own, let's be diplomatic, less beautiful moves, and then transfer the dancer's performance onto our own body in the video. We call this process motion transfer. Now, look at this new, also learning-based technique that does something similar, where in goes a description of a pose and just one image of a target person, and on the other side, out comes the proper animation of this character, according to our prescribed motions. Now, before you think that this means that we would need to draw and animate stick figures to use this, I will stress that this is not the case. There are many techniques that perform pose estimation, where we just insert a photo, or even a video, and it creates all these stick figures for us that represent the pose that people are taking in these videos. This means that we can even have a video of someone dancing, and just one image of the target person, and the rest is history. Insanity. That is already amazing and very convenient, but this paper works with a video-to-video problem formulation, which is a concept that is more general than just generating movement. Way more. For instance, we can also specify an input video of us, then add one, or at most a few images of the target subject, and we can make them speak and behave using our gestures. This is already absolutely amazing. However, the more creative minds out there are already thinking that if we are talking about images, it can be a painting as well, right? Yes, indeed, we can make the Mona Lisa speak with it as well. It can also take a labeled image. This is what you see here, where the colored and animated patches show the object boundaries for different object classes. Then, we take an input photo of a street scene, and we get photorealistic footage with all the cars, buildings, and vegetation. Now, make no mistake, some of these applications were possible before, many of which we showcased in previous videos, some of which you can see here. What is new and interesting here is that we have just one architecture that can handle many of these tasks. Beyond that, this architecture requires much less data than previous techniques, as it often needs just one, or at most, a few images of the target subject to do all this magic. The paper is ample in comparisons against these other methods. For instance, the FID measures the quality and the diversity of the generated output images, and is subject to minimization, and you see that it is miles ahead of these previous works. Some limitations also apply: if the inputs stray too far away from topics that the neural networks were trained on, we shouldn't expect results of this quality, and we are also dependent on proper inputs for the poses and segmentation maps for it to work well. The pace of progress in machine learning research is absolutely incredible, and we are getting very close to producing tools that can be actively used to empower artists working in the industry. What a time to be alive! If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos, and I'm happy to tell you that they are offering GPU cloud services as well. 
The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Our thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
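As a small aside for the curious, the FID score mentioned in the comparison embeds real and generated images with a pretrained Inception network and then measures the Fréchet distance between the two resulting Gaussians. The sketch below skips the feature extraction entirely and only computes that distance from two made-up sets of feature vectors.

import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_fake):
    # Fit a Gaussian (mean, covariance) to each set of feature vectors.
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    # Matrix square root of the covariance product; discard the tiny
    # imaginary parts that can appear from numerical error.
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_f
    return diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean)

# Made-up "Inception features" standing in for real and generated images.
rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, size=(500, 16))
fake = rng.normal(0.3, 1.2, size=(500, 16))

print("FID (lower is better): %.3f" % frechet_distance(real, fake))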
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. In this series, we talk about research on all kinds of physics simulations, including fluids, collision physics, and we have even ventured into hair simulations. If you look here at this beautiful footage, you may be surprised to know how many moving parts a researcher has to get right to get something like this. For instance, some of these simulations have to go down to the level of computing the physics between individual hair strands. If it is done well, like what you see here from our earlier episode, these simulations will properly show us how things should move, but that's not all. There is also an abundance of research works out there on how they should look. And even then, we are not done, because before that, we have to take a step back and somehow create these digital 3D models that show us the geometry of these flamboyant hairstyles. Approximately 300 episodes ago, we talked about a technique that took a photograph as an input and created a digital 3D model that we can use in our simulations and rendering systems. It had a really cool idea where it initially predicted a coarse result, and then this result was matched with the hairstyles found in public data repositories and the closest match was presented to us. Clearly, this often meant that we got something that was similar to the photograph, but often not exactly the hairstyle we were seeking. And now, hold on to your papers, because this work introduces a learning-based framework that can create a full reconstruction by itself without external help, and now squeeze that paper because it works not only for images, but for videos too. It works for shorter hairstyles, long hair, and even takes into consideration motion and external forces as well. The heart of the architecture behind this technique is this pair of neural networks, where the one above creates the predicted hair geometry for each frame, while the other looks backwards in the data and tries to predict the appropriate motions that should be present. Interestingly, it only needs two consecutive frames to make these predictions and adding more information does not seem to improve its results. That is very little data. Quite remarkable. Also, note that there are a lot of moving parts here in the full paper. For instance, this motion is first predicted in 2D and is then extrapolated to 3D afterwards. Let's have a look at this comparison. Indeed, it seems to produce smoother and more appealing results than this older technique. But if we look here, this other method seems even better, so what about that? Well, this method had access to multiple views of the model, which is significantly more information than what this new technique has, which only needs a simple, monocular 2D video from our phone or from the internet. The fact that they are even comparable is absolutely amazing. If you have a look at the paper, you will see that it even contains a hair growing component in this architecture. And as you see, the progress in computer graphics research is absolutely amazing. And we are even being paid for this. Unreal. This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. They offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing and computer graphics projects. Exactly the kind of works you see here in this series. 
If you feel inspired by these works and you wish to run your experiments or deploy your already existing works through a simple and reliable hosting service, make sure to join over 800,000 other happy customers and choose Linode. To spin up your own GPU instance and receive a $20 free credit, visit linode.com slash papers or click the link in the description and use the promo code Papers20 during signup. Give it a try today. Thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. Today, we are going to talk about OpenAI's robot hand that dexterously manipulates and solves Rubik's cube. Here you can marvel at this majestic result. Now, why did I use the term dexterously manipulates Rubik's cube? In this project, there are two problems to solve. One, finding out what kind of rotation we need to get closer to the solved cube, and two, listing the finger positions to be able to execute these prescribed rotations. And this paper is about the latter, which means that the rotation sequences are given by a previously existing algorithm, and OpenAI's method manipulates the hand to be able to follow this algorithm. To rephrase it, the robot hand doesn't really know how to solve the cube and is told what to do, and the contribution lies in the robot figuring out how to execute these rotations. If you take only one thing from this video, let it be this thought. Now, to perform all this, we have to first solve a problem in a computer simulation, where we can learn and iterate quickly, and then transfer everything the agent learned there to the real world, and hope that it obtained general knowledge that indeed can be applied there. This task is one of my favorites. However, no simulation is as detailed as the real world, and as every experienced student knows very well, things that are written in the textbook might not always work exactly the same in practice. So the problem formulation naturally emerges. Our job is to prepare this AI in this simulation, so it becomes good enough to perform well in the real world. Well, good news. First, let's think about the fact that in a simulation, we can train much faster, as we are not bound by the physical limits of the robot hand. In a simulation, we are bound by our processing power, which is much, much more vast, and is growing every day. So this means that the simulated environments can be as grueling as we can make them be. What's even more, we can do something that OpenAI refers to as automatic domain randomization. This is one of the key contributions of this paper. The domain randomization part means that it creates a large number of random environments, each of which is a little different, and the AI is meant to learn how to account for these differences and hopefully, as a result, obtain general knowledge about our world. The automatic part is responsible for detecting how much randomization the neural network can shoulder, and hence the difficulty of these random environments is increased over time. So how good are the results? Well, spectacular. In fact, hold onto your papers because it can not only dexterously manipulate and solve the cube, but we can even hamstring the hand in many different ways and it will still be able to do well. And I am telling you, scientists at OpenAI got very creative in tormenting this little hand. They added a rubber glove, tied multiple fingers together, threw a blanket on it, and pushed it around with a plastic jar of paint. It still worked. This is a testament to the usefulness of the mentioned automatic domain randomization technique. What's more, if you have a look at the paper, you will even see how well it was able to recover from a randomly breaking joint. What a time to be alive. As always, some limitations apply. The hand is only able to solve the cube about 60% of the time for simpler cases and the success rate drops to 20% for the most difficult ones. If it gets stuck, it typically does so in the first few rotations. 
But remember, before this paper, we were able to do this 0% of the time. And given that the first steps towards cracking a problem are almost always the hardest, I have no doubt that two more papers down the line, this will become significantly more reliable. But, you know what? We are talking about OpenAI. Make it one paper. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley. Here you see a write-up of theirs where they explain how to visualize the gradients running through your models, and illustrate it through the example of predicting protein structure. They also have a live example that you can try. Make sure to visit them through wandb.com slash papers, that is, w-a-n-d-b.com slash papers, or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
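The automatic domain randomization idea from this episode boils down to widening the ranges from which the simulator's physical parameters are sampled whenever the agent is doing well at the current difficulty. The sketch below shows only that curriculum logic with a fake success signal; the parameter names, thresholds and the pretend policy are all made up and are not taken from OpenAI's implementation.

import random

# Each simulated episode samples physics parameters from these ranges.
ranges = {
    "friction": [0.9, 1.1],       # [low, high], starts nearly fixed
    "cube_size_cm": [5.5, 5.9],
    "motor_delay_ms": [0.0, 5.0],
}
SUCCESS_THRESHOLD = 0.8   # widen ranges when the success rate exceeds this
WIDEN_STEP = 0.05         # how aggressively to expand a range

def sample_environment():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in ranges.items()}

def run_policy(env):
    # Fake agent that ignores env: the wider the ranges, the more it fails.
    difficulty = sum(hi - lo for lo, hi in ranges.values())
    return random.random() > min(0.9, 0.05 * difficulty)

for iteration in range(200):
    successes = sum(run_policy(sample_environment()) for _ in range(100))
    if successes / 100.0 > SUCCESS_THRESHOLD:
        # The policy can shoulder more randomization: widen one range a bit.
        key = random.choice(list(ranges))
        lo, hi = ranges[key]
        margin = WIDEN_STEP * max(hi - lo, 1e-3)
        ranges[key] = [lo - margin, hi + margin]

print({k: [round(lo, 2), round(hi, 2)] for k, (lo, hi) in ranges.items()})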
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. If we have an animation movie or a computer game with quadrupeds and we are yearning for really high quality, life-like animations, motion capture is often the go-to tool for the job. Motion capture means that we put an actor, in our case a dog, in the studio, ask it to perform sitting, trotting, pacing and jumping, record this motion and transfer it onto our virtual character. In an earlier work, a learning-based technique was introduced by the name Mode Adaptive Neural Network and it was able to correctly weave together these previously recorded motions, and not only that, but it also addressed the unnatural sliding motions that were produced by previous works. As you see here, it also worked well on more challenging landscapes. We talked about this paper approximately 100 videos or, in other words, a little more than a year ago, and I noted that it was scientifically interesting, it was evaluated well, it had all the ingredients for a truly excellent paper. But one thing was missing. So, what is that one thing? Well, we haven't seen the characters interacting with the scene itself. If you liked this previous paper, you are going to be elated by this new one, because this new work is from the very same group and goes by the name Neural State Machine and introduces character-scene interactions for bipeds. Now, we suddenly jumped from a quadruped paper to a biped one, and the reason for this is that I was looking to introduce the concept of foot sliding, which will be measured later for this new method too. Stay tuned. So, in this new problem formulation, we need to guide the character to a challenging end state, for instance, sitting in a chair, while being able to maneuver through all kinds of geometry. We'll use the chair example a fair bit in the next minute or two. So, I'll stress that this can do a whole lot more, the chair is just used as a vehicle to get a taste of how this technique works. But the end state needn't be one specific kind of chair. It can be any chair. This chair may have all kinds of different heights and shapes and the agent has to be able to change the animations and stitch them together correctly regardless of the geometry. To achieve this, the authors propose an interesting new data augmentation scheme. Since we are working with neural networks, we already have a training set to teach it about motion, and data augmentation means that we extend this data set with lots and lots of new information to make the AI generalize better to unseen real world examples. So, how is this done here exactly? Well, the authors proposed a clever idea to do this. Let's walk through their five prescribed steps. One, let's use motion capture data, have the subject sit down and see what the contact points are when it happens. Two, we then record the curves that describe the entirety of the motion of sitting down. So far so good, but we are not interested in one kind of chair. We want to sit into all kinds of chairs, so three, generate a large selection of different geometries and adjust the location of these contact points accordingly. Four, change the motion curves so they indeed end at the new transformed contact points. And five, move the joints of the character to make it follow this motion curve and compute the evolution of the character pose. And then pair up this motion with the chair geometry and chuck it into the new augmented training set. 
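To make the geometric core of these five steps a bit more tangible, here is a rough sketch: take a recorded sit-down trajectory that ends at a known contact point, move that contact point to fit a new chair, and warp the motion curve so it still ends exactly there. The numbers and the simple linear blend below are placeholders; the actual paper augments full character poses and several contact points, not a single hip position.

import numpy as np

# A recorded "sit down" motion curve: positions of the hips over time,
# ending at the contact point on the original chair (made-up numbers).
original_curve = np.linspace([0.0, 0.0, 0.9], [0.5, 0.0, 0.45], num=60)

def retarget(curve, new_contact):
    """Warp a motion curve so it ends at a new contact point.
    The offset is blended in gradually, so the start of the motion stays
    untouched and only the end is pulled toward the new chair."""
    offset = new_contact - curve[-1]
    blend = np.linspace(0.0, 1.0, num=len(curve))[:, None]
    return curve + blend * offset

# Generate augmented training pairs for chairs of different seat heights.
augmented = []
for seat_height in np.linspace(0.35, 0.6, num=6):
    new_contact = np.array([0.5, 0.0, seat_height])
    augmented.append((seat_height, retarget(original_curve, new_contact)))

for seat_height, curve in augmented:
    print("seat %.2f m -> motion ends at %s" % (seat_height, curve[-1].round(2)))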
Now make no mistake, the paper contains much, much more than this, so make sure to have a look in the video description. So what do we get for all this work? Well, have a look at this trembling character from a previous paper and now look at the new synthesized motions. Natural, smooth, creamy, and I don't see artifacts. Also, here you see some results that measure the amount of foot sliding during these animations, which is subject to minimization. That means that the smaller the bars are, the better. With NSM, you see how this neural state machine method produces much less than previous methods. And now we see how cool it is that we talked about the quadruped paper as well, because we see that it even beats MANN, the Mode Adaptive Neural Network from the previous paper. That one had very little foot sliding, and apparently it can still be improved by quite a bit. The positional and rotational errors in the animation it offers are also by far the lowest of the bunch. Since it works in real time, it can also be used for computer games and virtual reality applications. And all this improvement within one year of work. What a time to be alive. If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they are offering GPU cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to LambdaLabs.com slash papers and sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, we are going to listen to some amazing improvements in the area of AI-based voice cloning. For instance, if someone wanted to clone my voice, there are hours and hours of my recordings on YouTube and elsewhere, so they could do it with previously existing techniques. But the question today is, if we had even more advanced methods to do this, how big of a sound sample would we really need for this? Do we need a few hours? A few minutes? The answer is no. Not at all. Hold on to your papers because this new technique only requires five seconds. Let's listen to a couple of examples. The Norsemen considered the rainbow as a bridge over which the gods passed from Earth to their home in the sky. Take a look at these pages for Crooked Creek Drive. There are several listings for gas station. Here's the forecast for the next four days. These take the shape of a long round arch with its path high above and its two ends apparently beyond the horizon. Take a look at these pages for Crooked Creek Drive. There are several listings for gas station. Here's the forecast for the next four days. Absolutely incredible. The timbre of the voice is very similar and it is able to synthesize sounds and consonants that have to be inferred because they were not heard in the original voice sample. This requires a certain kind of intelligence, and quite a bit of that. So while we are at that, how does this new system work? Well, it requires three components. One, the speaker encoder is a neural network that was trained on thousands and thousands of speakers and is meant to squeeze all this learned data into a compressed representation. In other words, it tries to learn the essence of human speech from many, many speakers. To clarify, I will add that this system listens to thousands of people talking to learn the intricacies of human speech, but this training step needs to be done only once, and after that, it is allowed just five seconds of speech data from someone it hasn't heard previously, and later the synthesis takes place using these five seconds as an input. Two, we have a synthesizer that takes text as an input. This is what we would like our test subject to say, and it gives us a mel spectrogram, which is a concise representation of someone's voice and intonation. The implementation of this module is based on Google's Tacotron 2 technique, and here you can see an example of this mel spectrogram built for a male and two female speakers. On the left, we have the spectrograms for the reference recordings, the voice samples if you will, and on the right, we specify a piece of text that we would like the learning algorithm to say, and it produces these corresponding synthesized spectrograms. But eventually we would like to listen to something, and for that we need a waveform as an output. So, the third element is thus a neural vocoder that does exactly that, and this component is implemented by DeepMind's WaveNet technique. This is the architecture that led to these amazing examples. So how do we measure exactly how amazing it is? Once we have a solution, evaluating it is also anything but trivial. In principle, we are looking for a result that is both close to the recording that we have of the target person, but says something completely different, and all this in a natural manner. This naturalness and similarity can be measured, but we are not nearly done yet, because the problem gets even more difficult. 
For instance, it matters how we fit the three puzzle pieces together, and then, what data we train it on, of course, also matters a great deal. Here you see that if we train on one data set and test the results against a different one, and then swap the two, the results in naturalness and similarity will differ significantly. The paper contains a very detailed evaluation section that explains how to deal with these difficulties. The mean opinion score is also measured in this section, which is a number that describes how well a sound sample would pass as genuine human speech. And we haven't even talked about the speaker verification part, so make sure to have a look at the paper. So indeed, we can clone each other's voice by using a sample of only 5 seconds. What a time to be alive. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley. They also wrote a guide on the fundamentals of neural networks where they explain in simple terms how to train a neural network properly, what are the most common errors you can make, and how to fix them. It is really great, you got to have a look. So make sure to visit them through wandb.com slash papers, that is, w-a-n-d-b.com slash papers, or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
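Plumbing-wise, the three components form a simple pipeline: the reference clip goes into the speaker encoder, its embedding plus the desired text goes into the synthesizer, and the resulting mel spectrogram goes into the vocoder. The sketch below only shows that data flow with dummy stand-ins for each model; a real implementation would load trained networks, for instance a Tacotron-style synthesizer and a WaveNet-style vocoder.

import numpy as np

def speaker_encoder(reference_audio):
    # Stand-in: squeeze a short clip into a fixed-size speaker embedding.
    return np.tanh(reference_audio[:256].reshape(16, 16).mean(axis=0))

def synthesizer(text, speaker_embedding):
    # Stand-in: produce a mel spectrogram (mel bins x time frames)
    # conditioned on the text and on who should be speaking.
    frames = 20 * len(text.split())
    return np.outer(speaker_embedding, np.linspace(0.0, 1.0, frames))

def vocoder(mel_spectrogram):
    # Stand-in: turn the spectrogram into a waveform (here, just noise
    # shaped by the spectrogram's overall energy over time).
    energy = mel_spectrogram.mean(axis=0)
    noise = np.random.default_rng(0).normal(size=energy.size * 200)
    return np.repeat(energy, 200) * noise

# Five seconds of (fake) reference audio at 16 kHz.
reference_clip = np.random.default_rng(1).normal(size=5 * 16000)

embedding = speaker_encoder(reference_clip)          # trained once, reused
mel = synthesizer("Here's the forecast for the next four days.", embedding)
waveform = vocoder(mel)

print("embedding:", embedding.shape, "mel:", mel.shape, "waveform:", waveform.shape)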
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Have you heard of the Ken Burns effect? If you have been watching this channel, you have probably seen examples where a still image is shown and a zooming and panning effect is added to it. It looks something like this. Familiar, right? The fact that there is some motion is indeed pleasing for the eye, but something is missing. Since we are doing this with 2D images, all the depth information is lost, so we are missing out on the motion parallax effect that we would see in real life when moving the camera around. So in short, this is only 2D. Can this be done in 3D? Well, to find out, have a look at this. Wow, I love it. Much better, right? Well, if we would try to perform something like this without this paper, we'd be met with bad news. And that bad news is that we have to buy an RGBD camera. This kind of camera endows the 2D image with depth information, which is specialized hardware that is likely not available in our phones as of the making of this video. Now, since depth estimation from simple, monocular 2D images without depth data is a research field of its own, the first step sounds simple enough. Take one of those neural networks, then ask it to try to guess the depth of each pixel. Does this work? Well, let's have a look. As we move our imaginary camera around, oh oh, this is not looking good. Do you see what the problems are here? Problem number 1 is the presence of geometric distortions. You see it if you look here. Problem number 2 is referred to as semantic distortion in the paper, or in other words, we now have missing data. But this poor, tiny human's hand is also… Ouch. Let's look at something else instead. If we start zooming in into images, which is a hallmark of the Ken Burns effect, it gets even worse. We get artifacts. So how does this new paper address these issues? After creating the first, coarse depth map, an additional step is taken to alleviate the semantic distortion issue, and then this depth information is up-sampled to make sure that we have enough fine details to perform the 3D Ken Burns effect. Let's do that. Unfortunately, we are still nowhere near done yet. We have previously occluded parts of the background that suddenly become visible, and we have no information about those. So, how can we address that? Do you remember image inpainting? I hope so, but if not, no matter, I'll quickly explain what it is. Both learning-based and traditional handcrafted algorithms exist to try to fill in this missing information in images with sensible data by looking at its surroundings. This is also not as trivial as it might seem at first, for instance, just filling in sensible data is not enough, because this time around we are synthesizing videos, it has to be temporally coherent, which means that there must not be too much of a change from one frame to another, or else we'll get a flickering effect. As a result, we finally have these results that are not only absolutely beautiful, but the user study in the paper shows that they stack up against handcrafted results made by real artists. How cool is that? It also opens up really cool workflows that would normally be very difficult if not impossible to perform. For instance, here you see that we can freeze this lightning bolt in time, zoom around and marvel at the entire landscape. Love it. 
Of course, limitations still apply: if we have really thin objects such as this flagpole, it might be missing entirely from the depth map, or there are also cases where the image inpainter cannot fill in useful information. I cannot wait to see how this work evolves a couple of papers down the line. One more interesting tidbit: if you have a look at the paper, make sure to open it in Adobe Reader, you will likely be very surprised to see that many of these things that you think are still images are actually animations. Papers are not only getting more mind-blowing by the day, but also more informative and beautiful as well. What a time to be alive. This video has been supported by you on Patreon. If you wish to support the series and also pick up cool perks in return, like early access to these episodes, or getting your name immortalized in the video description, make sure to visit us through patreon.com slash TwoMinutePapers. Thanks for watching and for your generous support and I'll see you next time.
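The heart of the 3D effect is that once every pixel has a depth value, a virtual camera shift moves nearby pixels more than distant ones, which is exactly the parallax a flat zoom cannot give you. The toy sketch below applies that rule to a tiny synthetic image and depth map and simply marks the freshly disoccluded pixels; it leaves out the inpainting and the temporal coherence, which is where much of the actual work in the paper goes.

import numpy as np

H, W = 6, 8
# Tiny synthetic image and depth map (depth in meters, smaller = closer).
image = np.arange(H * W).reshape(H, W).astype(float)
depth = np.full((H, W), 4.0)
depth[2:5, 3:6] = 1.0            # a close object in the middle

def shift_view(image, depth, camera_shift_x):
    """Re-render the image from a slightly shifted camera.
    Pixels are displaced horizontally by an amount inversely proportional
    to their depth (a simple parallax model)."""
    out = np.full_like(image, np.nan)    # NaN = disoccluded, needs inpainting
    zbuf = np.full_like(image, np.inf)   # keep only the closest pixel per target
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            dx = int(round(camera_shift_x / depth[y, x]))
            nx = x + dx
            if 0 <= nx < image.shape[1] and depth[y, x] < zbuf[y, nx]:
                out[y, nx] = image[y, x]
                zbuf[y, nx] = depth[y, x]
    return out

novel_view = shift_view(image, depth, camera_shift_x=2.0)
holes = int(np.isnan(novel_view).sum())
print("pixels with no data (left for the inpainting step):", holes)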
This episode has been supported by Lambda. Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. I apologize for my voice today. I am trapped in this frail human body, and sometimes it falters. But as you remember from the previous episode, the papers must go on. In the last few years, we have seen a bunch of new AI-based techniques that were specialized in generating new and novel images. This is mainly done through learning-based techniques, typically a generative adversarial network, a GAN in short, which is an architecture where a generator neural network creates new images and passes them to a discriminator network, which learns to distinguish real photos from these fake generated images. The two networks learn and improve together, so much so that many of these techniques have become so realistic that we sometimes can't even tell that their outputs are synthetic images unless we look really closely. You see some examples here from BigGAN, a previous technique that is based on this architecture. Now, normally, if we are looking to generate a specific human face, we have to generate hundreds and hundreds of these images, and our best bet is to hope that sooner or later we'll find something that we were looking for. So, of course, scientists were interested in trying to exert control over the outputs, and with follow-up works, we can kind of control the appearance, but in return, we have to accept the pose in which they are given. And this new project is about teaching a learning algorithm to separate pose from identity. Now, that sounds doable, but only with proper supervision. What does this mean exactly? Well, we have to train these GANs on a large number of images so they can learn what the human face looks like, what landmarks to expect, and how to form them properly when generating new images. However, when the input images are given with different poses, we will normally need to add additional information to the discriminator that describes the rotations of these people and objects. Well, hold on to your papers because that is exactly what is not happening in this new work. This paper proposes an architecture that contains a 3D transform and a projection unit. You see them here with red and blue, and these help us in separating pose and identity. As a result, we have much finer artistic control over these during image generation. That is amazing. So as you see here, it enables a really nice workflow where we can also set up the poses. Don't like the camera position for this generated bedroom? No problem. Need to rotate the chairs? No problem. And we are not even finished yet, because when we set up the pose correctly, we are not stuck with these images. We can also choose from several different appearances. And all this comes from the fact that this technique was able to learn the intricacies of these objects. Love it. Now, it is abundantly clear that as we rotate these cars or change the camera viewpoint for the bedroom, a flickering effect is still present. And this is how research works. We try to solve a new problem one step at a time. Then we find flaws in the solution and improve upon that. As a result, we always say two more papers down the line, and we will likely have smooth and creamy transitions between the images. The Lambda sponsorship spot is coming in a moment. And I don't know if you have noticed at the start, but they were also part of this research project as well. I think that it is as relevant of a sponsor as it gets. 
If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they are offering GPU cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's Web-based IDE lets you easily access your instance right in your browser. And finally, hold on to your papers because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to LambdaLabs.com slash papers and sign up for their amazing GPU instances today. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. I apologize for my voice today, I am trapped in this frail human body and sometimes it falters. But the papers must go on. This is one of those papers where I find that the more time I spend with it, the more I realize how amazing it is. It starts out with an interesting little value proposition that in and of itself would likely not make it to a paper. So, what is this paper about? Well, as you see here, this one is about cubification of 3D geometry. In other words, we take an input shape and the method stylizes it to look more like a cube. Okay, that's cute, especially given that there are many, many ways to do this and it's hard to immediately put into words what a good end result would be. You can see a comparison to previous works here. These previous works did not seem to preserve a lot of fine details, but if you look at this new one, you see that this one does that really well. Very nice indeed, but still, when I read this paper, at this point I was thinking I'd like a little more. Well, I quickly found out that this work has more up its sleeve. So much more. Let's talk about seven of these amazing features. For instance, one, we can control the strength of the transformation with this lambda parameter. As you see, the more we increase it, the more heavy-handed the smushing process is going to get. Please remember this part. Two, we can also cubify selectively along different directions or select parts of the object that should be cubified differently. Hmm. Okay. Three and four, this transformation procedure also takes the orientations into consideration, which means that we can perform it from different angles, and this gives us a large selection of possible outputs for the same model. Five, it is fast and works on high-resolution geometry, and you see different settings for the lambda parameter here; this is the same parameter that we talked about before, the strength of the transformation. Six, we can also combine many of these features interactively until a desirable shape is found. Seven is about to come in a moment, but to appreciate what that is, we have to look at this. To perform what you have seen here so far, we have to minimize this expression. The first term is called ARAP, as-rigid-as-possible, which stipulates that whatever we do in terms of smushing, it should preserve the fine local features. The second part is called the regularization term, which encourages sparser, more axis-aligned solutions, so we don't destroy the entire model during this process. The stronger this term is, the bigger say it has in the final results, which in turn become more cube-like. So, how do we control that? Well, of course, with our trusty little lambda parameter. Not only that, but if we look at the appendix, it tells us that we can generalize the second regularization term for many different shapes. So here we are, finally, seven: it doesn't even need to be cubification, we can specify all kinds of polyhedra. Look at those gorgeous results. I love this paper. It is playful, it is elegant, it has utility, and it generalizes well. It doesn't care in the slightest what the current mainstream ideas are and invites us into its own little world. In summary, this will serve all your cubification needs, and it turns out it might even fix your geometry and more. I would love to see more papers like this. In this series, I try to make people feel how I feel when I read these papers. I hope I have managed this time, but you be the judge. Let me know in the comments. A sketch of the energy we just discussed is written out below.
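For reference, the energy being minimized has roughly the following shape. This is my paraphrase of the description above, not a verbatim copy of the paper's notation; the per-vertex rotations R_i, edge vectors d_ij, weights w_ij, and areas a_i are the usual ingredients of an as-rigid-as-possible energy.

```latex
% Paraphrased sketch of the cubification energy: an ARAP term that keeps
% fine local features, plus an L1 term on rotated vertex normals that
% rewards axis-aligned, cube-like surfaces. Lambda trades one against the other.
\min_{V',\,\{R_i\}} \;\;
\sum_i \sum_{j \in \mathcal{N}(i)} \frac{w_{ij}}{2}
\left\lVert R_i\, d_{ij} - d'_{ij} \right\rVert^2
\;+\; \lambda \sum_i a_i \left\lVert R_i\, \hat{n}_i \right\rVert_1
```

Generalizing this second term, as the appendix mentioned above does, is what enables feature seven, the other polyhedra.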
This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. They offer you virtual servers that make it easy and affordable to host your own app, site, project, or anything else in the cloud. Whether you are a Linodex expert or just starting to tinker with your own code, Linode will be useful for you. A few episodes ago, we played with an implementation of OpenAIS GPT2 where our excited viewers accidentally overloaded the system. With Linode's load balancing technology and instances ranging from shared nanodes, all the way up to dedicated GPUs, you don't have to worry about your project being overloaded. To get $20 of free credit, make sure to head over to linode.com slash papers and sign up today using the promo code Papers20. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In almost any kind of real-time computer game where different objects interact with each other, having some sort of physics engine is a requirement. Flags waving in the wind and stroking bunnies with circular objects are among these cases, and of course, not all heroes wear capes, but the ones that do require the presence of such a physics engine. However, a full physical simulation of these interactions is often not possible because it is orders of magnitude slower than what we are looking for in real-time applications. Now, hold on to your papers, because this project proposes a new learning-based method that can speed up the standard physical simulations and make them 300 to 5000 times faster. Then, we can give it all the positions, forces and other information, and it will be able to tell us the outcome and do all this faster than real time. Since this is a neural network-based project, our seasoned Fellow Scholars know that we will need many hours of simulation data to train on. Fortunately, this information can be produced with one of those more accurate but slower methods. We can wait arbitrarily long for a full physical simulation for this training set because it is only needed once, for the training. One of the key decisions in this project is that it also supports interaction with objects, and we can even specify external forces like wind direction and speed controls. In some papers, the results are difficult to evaluate. For instance, when we produce any kind of deepfake, we need to call in people and create a user study where we measure how often people believe forged videos to be real. The process has many pitfalls, like choosing a good distribution of people, asking the right questions and so on. Another great part of the design of this project is that evaluating it is a breeze. We can just give it a novel situation, let it guess the result, then simulate the same thing with a full physical simulator and compare the two against each other. And they are really close. But wait, do you see what I see? If you are worried about how computationally intensive the neural network-based solution is, don't be. It only takes a few megabytes of memory, which is nothing, and it runs in the order of microseconds, which is also nothing. So much so that if you look here, you see that the full simulation can be done at two frames per second, while this new solution produces thousands and thousands of frames per second. I think it is justified to say that this thing costs absolutely nothing. I think I will take this one. Thank you very much. We can even scale up the number of interactions, as you see here, and even in this case, it can produce more than a hundred frames per second. It is incredible. We can also up- or downscale the quality of the results and get different trade-offs. If a coarser simulation looks good enough for our application, we can even get up to tens of thousands of frames per second. That costs nothing even compared to the previous nothing. The key part of the solution is that it compresses the simulated data through a method called principal component analysis, and the training takes place on this compressed representation, which only needs to be unpacked when something new is happening in the game. This leads to a significant speedup, and it is also very gentle with memory use. Working with this compressed representation is the reason why you see this method referred to as subspace neural physics. You can see a small sketch of this subspace idea below.
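Here is a minimal sketch of that subspace idea, not the paper's actual pipeline: compress full simulation states with principal component analysis, then learn the dynamics in that small compressed space. The data is synthetic and every size below is an illustrative assumption.

```python
# Sketch of "subspace" learning: PCA-compress precomputed simulation frames,
# then train a small network to step the dynamics in the compressed space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

n_frames, n_vertices = 2000, 3000
full_states = np.random.rand(n_frames, n_vertices * 3)   # stand-in for precomputed simulation frames
external_force = np.random.rand(n_frames, 3)              # e.g. wind direction and strength

# 1) Compress: a few dozen PCA coefficients replace thousands of vertex positions.
pca = PCA(n_components=32)
subspace_states = pca.fit_transform(full_states)

# 2) Learn the dynamics in the subspace: (current state, force) -> next state.
X = np.hstack([subspace_states[:-1], external_force[:-1]])
y = subspace_states[1:]
model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=50)
model.fit(X, y)

# 3) At runtime, step in the subspace and decompress only for display.
next_subspace = model.predict(np.hstack([subspace_states[:1], external_force[:1]]))
next_full_state = pca.inverse_transform(next_subspace)     # back to vertex positions
print(next_full_state.shape)
```

The appeal of this design is that the expensive full-resolution state is only reconstructed when it needs to be shown, while the learned stepping happens on a few dozen numbers per frame.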
However, as always, some limitations apply. For instance, it can kind of extrapolate beyond the examples that it has been trained on, but as you see here, if the training data is lacking a given kind of application, don't expect miracles. Yet. If you have a look at the paper, you'll actually find a user study, but it is about the usefulness of the individual components of the system. Make sure to check it out in the video description. This episode has been supported by Weights and Biases. Weights and Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley. Have a look at this project they launched to make computer code semantically searchable where, for example, we could ask, show me the best model on this dataset with the fewest parameters, and get a piece of code that does exactly that. Absolutely amazing. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights and Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
This episode has been supported by Lambda. Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In a world where learning-based algorithms are rapidly becoming more capable, I increasingly find myself asking the question, so how smart are these algorithms, really? I am clearly not alone with this. To be able to answer this question, a set of tests was proposed, and many of these tests share one important design decision: they are very difficult to solve for someone without generalized knowledge. In an earlier episode, we talked about DeepMind's paper where they created a bunch of randomized, mind-bending, or in the case of an AI, maybe silicon-bending questions that looked quite a bit like a nasty, nasty IQ test. And even in the presence of additional distractions, their AI did extremely well. I noted that on this test, finding the correct solution around 60% of the time would be quite respectable for a human, where their algorithm succeeded over 62% of the time, and upon removing the annoying distractions, this success rate skyrocketed to 78%. Wow! More specialized tests have also been developed. For instance, scientists at DeepMind also released a modular math test with over 2 million questions, in which their AI did extremely well at tasks like interpolation and rounding decimals and integers, whereas it was not too accurate at detecting primality and at factorization. Furthermore, a little more than a year ago, the GLUE benchmark appeared, which was designed to test the natural language understanding capabilities of these AIs. When benchmarking the state of the art learning algorithms, they found that they were approximately 80% as good as the fellow non-expert human beings. That is remarkable. Given the difficulty of the test, they were likely not expecting human-level performance, which you see marked with the black horizontal line, and which was surpassed within less than a year. So what do we do in this case? Well, as always, of course, design an even harder test. In comes SuperGLUE, the paper we are looking at today, which is meant to provide an even harder challenge for these learning algorithms. Have a look at these example questions here. For instance, this time around, reusing general background knowledge gets more emphasis in the questions. As a result, the AI has to be able to learn and reason with more finesse to successfully address these questions. Here you see a bunch of examples, and you can see that these are anything but trivial little tests for a baby AI. Not all, but some of these are calibrated for humans at around college-level education. So, let's have a look at how the current state of the art AIs fared in this one. Well, not as well as humans, which is good news, because that was the main objective. However, they still did remarkably well. For instance, the BoolQ package contains a set of yes-and-no questions, and on these, the AIs are reasonably close to human performance, but on MultiRC, the multi-sentence reading comprehension package, they still do okay, but humans outperform them by quite a bit. Note that you see two numbers for this test. The reason for this is that there are multiple test sets for this package. Note that in the second one, even humans seem to fail almost half the time, so I can only imagine the revelations we'll have a couple more papers down the line. I am very excited to see that, and if you are too, make sure to subscribe and hit the bell icon to never miss future episodes. If you would like to poke around in these questions yourself, a small sketch follows below.
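Here is a small sketch of how one might peek at the BoolQ task mentioned above with the Hugging Face datasets library. The dataset identifier and field names used here ("super_glue", "boolq", "question", "passage", "label") are my assumption of how the benchmark is commonly packaged, not something taken from the paper, and depending on your library version the download may require extra arguments.

```python
# Peek at BoolQ, one of the SuperGLUE tasks discussed above.
# Field names are assumptions about the common packaging of this benchmark.
from datasets import load_dataset

boolq = load_dataset("super_glue", "boolq", split="train")
example = boolq[0]
print(example["question"])          # a yes/no question
print(example["passage"][:200])     # the supporting passage
print("label:", example["label"])   # assumed convention: 1 = yes, 0 = no

# A trivial "always answer yes" baseline, to appreciate how far that is
# from the human scores quoted in the episode.
always_yes_accuracy = sum(ex["label"] == 1 for ex in boolq) / len(boolq)
print(f"always-yes accuracy: {always_yes_accuracy:.2f}")
```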
If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos, and I'm happy to tell you that they are offering GPU cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers, because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to LambdaLabs.com slash papers and sign up for one of their amazing GPU instances today. Thanks for watching, and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir. In this project, open AI built a hide and seek game for their AI agents to play. While we look at the exact rules here, I will note that the goal of the project was to pit two AI teams against each other and hopefully see some interesting emergent behaviors. And boy, did they do some crazy stuff. The coolest part is that the two teams compete against each other and whenever one team discovers a new strategy, the other one has to adapt. Kind of like an arms race situation and it also resembles generative adversarial networks a little. And the results are magnificent, amusing, weird. You'll see in a moment. These agents learn from previous experiences and to the surprise of no one for the first few million rounds we start out with pandemonium. Everyone just running around aimlessly. Without proper strategy and semi-rendal movements, the seekers are favored and hence win the majority of the games. Nothing to see here. Then over time, the hiders learn to lock out the seekers by blocking the doors off with these boxes and started winning consistently. I think the coolest part about this is that the map was deliberately designed by the open AI scientists in a way that the hiders can only succeed through collaboration. They cannot win alone and hence they are forced to learn to work together, which they did quite well. But then something happened. Did you notice this pointy door-stop-shaped object? Are you thinking what I am thinking? Well, probably, and not only that, but about 10 million rounds later, the AI also discovered that it can be pushed near a wall and be used as a ramp and tadaa-gadam. The seekers started winning more again. So the ball is now back on the court of the hiders. Can you defend this? If so, how? Well, these resourceful little critters learned that since there is a little time at the start of the game when the seekers are frozen, apparently during this time they cannot see them so why not just sneak out, steal the ramp and lock it away from them. Absolutely incredible. Look at those happy eyes as they are carrying that ramp. And you think it all ends here? No, no, no, not even close. It gets weirder, much weirder. When playing a different map, the seeker has noticed that it can use a ramp to climb on the top of a box and this happens. Do you think couch surfing is cool? Give me a break. This is box surfing. And the scientists were quite surprised by this move as this was one of the first cases where the seeker AI seems to have broken the game. What happens here is that the physics system is coded in a way that they are able to move around by exerting force on themselves, but there is no additional check whether they are on the floor or not because who in their right mind would think about that. As a result, something that shouldn't ever happen does happen here. And we are still not done yet. This paper just keeps on giving. A few hundred million rounds later, the hiders learned to separate all the ramps from the boxes. Dear Fellow Scholars, this is proper box surfing defense. Again, lock down the remaining tools and build a shelter. Note how well rehearsed and executed this strategy is. There is not a second of time left until the seeker's take off. I also love this cheeky move where they set up the shelter right next to the seeker's and I almost feel like they are saying, yeah, see this here? There is not a single thing you can do about it. 
In a few isolated cases, other interesting behaviors also emerged. For instance, the hiders learned to exploit the physics system and just chuck the ramp away. After that, the seekers go, what? What just happened? But don't despair, and at this point, I would also recommend that you hold onto your papers, because there was also a crazy case where a seeker also learned to abuse a similar physics issue and launch itself exactly onto the top of the hiders. Man, what a paper. This system can be extended and modded for many other tasks too, so expect to see more of these fun experiments in the future. We get to do this for a living, and we are even being paid for it. I can't believe it. In this series, my mission is to showcase beautiful works that light a fire in people. And this is, no doubt, one of those works. Great idea, interesting, unexpected results, crisp presentation. Bravo, OpenAI. Love it. So, did you enjoy this? What do you think? Make sure to leave a comment below. Also, if you look at the paper, it contains comparisons to an earlier work we covered about intrinsic motivation, shows how to implement circular convolutions for the agents to detect their environment around them, and more. This episode has been supported by Weights and Biases. Weights and Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley. In this blog post, they show you how to use their system to find clues and steer your research into more promising areas. Make sure to visit them through wandb.com slash papers, that is W-A-N-D-B dot com slash papers, or just click the link in the video description and sign up for a free demo today. Our thanks to Weights and Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
This episode has been supported by Lambda. Dear Fellow Scholars, this is two-minute papers with Karo Ejorna Ifehir. Last year, an amazing neural network-based technique appeared that was able to look at a bunch of unlabeled motion data and learned to weave them together to control the motion of quadrupeds, like this wolf here. It was able to successfully address the shortcomings of previous works. For instance, the weird sliding motions have been eliminated, and it was also capable of following some predefined trajectories. This new paper continues research in this direction by proposing a technique that is also capable of interacting with its environment or other characters. For instance, they can punch each other, and after the punch, they can recover from undesirable positions and more. The problem formulation is as follows. It is given the current state of the character and the goal, and you see here with blue how it predicts the motion to continue. It understands that we have to walk towards the goal that we are likely to fall when hit by a ball, and it knows that then we have to get up and continue our journey and eventually reach our goal. Some amazing life advice from the AI right there. The goal here is also to learn something meaningful from lots of barely labeled human motion data. Barely labeled means that a bunch of videos are given almost as is without additional information on what movements are being performed in these videos. If we had labels for all this data that you see here, it would say that this sequence shows a jump, and these ones are for running. However, the labeling process takes a ton of time and effort, so if we can get away without it, that's glorious, but in return, with this, we create an additional burden that the learning algorithm has to shoulder. Unfortunately, the problem gets even worse. As you see here, the number of frames contained in the original dataset is very scarce. To alleviate this, the authors decided to augment this dataset, which means trying to combine parts of this data to squeeze out as much information as possible. To see some examples here, how this motion data can be combined from many small segments, and in the paper, they show that the augmentation helps us create even up to 10 to 30 times more training data for the neural networks. As a result of this augmented dataset, it can learn to perform zombie, gorilla movements, chicken hopping, even dribbling with a basketball, you name it. But even more, we can give the AI high level commands interactively, and it will try to weave the motions together appropriately. They can also punch each other. Ow! And all this was learned from a bunch of unorganized data. What a time to be alive! If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos, and I'm happy to tell you that they are offering GPU Cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to LambdaLabs.com, slash papers, and sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. If we study the laws of fluid motion from physics and write a computer program that contains these laws, we can create wondrous fluid simulations like the ones you see here. The amount of detail we can simulate with these programs is increasing every year, not only due to the fact that hardware improves over time, but also because the pace of progress in computer graphics research is truly remarkable. So, is there nothing else to do? Are we done with fluid simulation research? Oh no! No, no, no. For instance, fluid-solid interaction still remains a challenging phenomenon to simulate. This means that the sand is allowed to have an effect on the fluid, but at the same time, as the fluid sloshes around, it also moves the sand particles within. This is what we refer to as two-way coupling. Note that this previous work that you see here was built on the material point method, a hybrid simulation technique that uses both particles and grids, whereas this new paper introduces proper fluid-solid coupling to the simpler grid-based simulators. Not only that, but this new work also shows us that there are different kinds of two-way coupling. If we look at this footage with the honey and the dipper, it looks great; however, this still doesn't seem right to me. We are doing science here, so fortunately, we don't need to guess what seems and what doesn't seem right. This is my favorite part, because this is when we let reality be our judge and compare to what exactly happens in the real world. So, let's do that. Whoa! There's quite a bit of a difference, because in reality, the honey is able to support the dipper. One-way coupling, of course, cannot simulate this kind of back and forth interaction, and neither can weak two-way coupling pull this off. And now, let's see. Yes, there we go. The new strong two-way coupling method finally gets this right. And not only that, but what I really love about this is that it also gets small nuances right. I will try to speed up the footage a little so you can see that the honey doesn't only support the dipper, but the dipper still has some subtle movements, both in reality and in the simulation. A plus, love it. So, what is the problem? Why is this so difficult to simulate? One of the key problems here is being able to have a simulation that has a fine resolution in areas where the fluid and the solid intersect each other. If we create a super detailed simulation, it will take from hours to days to compute. But on the other hand, if we have a too coarse one, it will compute the required deformations on so few of these grid points that we'll get a really inaccurate simulation, and not only that, but we will even miss some of the interactions completely. This paper proposes a neat new volume estimation technique that focuses these computations to where the action happens, and only there, which means that we can get these really incredible results even if we only run a relatively coarse simulation. I could watch these gooey, viscous simulations all day long. If you have a closer look at the paper in the description, you will find some hard data that shows that this technique executes quicker than other methods that are able to provide comparable results. This episode has been supported by Weights and Biases. Weights and Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley.
In this blog post, you see that with the help of Weights and Biases, it is possible to write an AI that plays The Witness, one of my favorite puzzle games. If you are interested in the game itself, you can also check out my earlier video on it. I know it sounds curious, but I indeed made a video about a game on this channel. You can find it in the video description. And also, make sure to visit them through wandb.com slash papers or just click the link in the video description and sign up for a free demo today. Our thanks to Weights and Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. We talked about the technique by the name Face2Face back in 2016, approximately 300 videos ago. It was able to take a video of us and transfer our gestures to a target subject. With techniques like this, it's now easier and cheaper than ever to create these deepfake videos of a target subject, provided that we have enough training data, which is almost certainly the case for the people who are the most high-value targets for these kinds of operations. Look here, some of these videos are real and some are fake. What do you think? Which is which? Well, here are the results. This one contains artifacts and is hence easy to spot, but the rest, it's tough, and it's getting tougher by the day. How many did you get right? Make sure to leave a comment below. However, don't despair, it's not all doom and gloom. Approximately a year ago, in came FaceForensics, a paper that contains a large dataset of original and manipulated video pairs. As this offered a ton of training data for real and forged videos, it became possible to train a deepfake detector. You can see it here in action as these green-to-red colors showcase regions that the AI correctly thinks were tampered with. However, this follow-up paper, by the name FaceForensics++, contains not only an improved dataset, but provides many more valuable insights to help us detect these deepfake videos, and even more. Let's dive in. Key insight number one. As you've seen a minute ago, many of these deepfakes introduce imperfections or defects to the video. However, most videos that we watch on the internet are compressed, and the compression procedure, you have guessed right, also introduces artifacts to the video. From this, it follows that hiding these deepfake artifacts behind compressed videos sounds like a good strategy to fool humans and detector neural networks alike, and not only that, but the paper also shows us by how much exactly. Here, you see a table where each row shows the detection accuracy of previous techniques and a newly proposed one, and the most interesting part is how this accuracy drops when we go from HQ to LQ, or in other words, from a high-quality video to a lower quality one with more compression artifacts. Overall, we can get an 80 to 95% success rate, which is absolutely amazing. But, of course, you ask, amazing compared to what? Onwards to insight number two. This chart shows how humans fared in deepfake detection, and as you can see, not too well. Don't forget, the 50% line means that the human guesses were as good as a coin flip, which means that they were not doing well at all. Face2Face hovers around this ratio, and if you look at Neural Textures, you see that this is a technique that is extremely effective at fooling humans. And wait, what's that? For all the other techniques, we see that the gray bars are shorter, meaning that it's more difficult to find out if a video is a deepfake, because its own artifacts are hidden behind the compression artifacts. But the opposite is the case for Neural Textures, perhaps because of its small footprint on the videos. Note that the state of the art detector AI, for instance, the one proposed in this paper, does way better than these 204 human participants. This work not only introduces a dataset and these cool insights, but also a detector neural network. The general recipe such a detector follows is sketched below.
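As an aside, the general recipe here, fine-tuning an image classifier to label face crops as real or manipulated, can be sketched in a few lines. This is not the paper's network; the backbone, sizes, and data below are placeholders.

```python
# Generic real-vs-fake classifier sketch, not the paper's detector.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)           # in practice, start from pretrained weights
                                                 # (older torchvision uses pretrained=False)
model.fc = nn.Linear(model.fc.in_features, 2)    # two classes: real (0) / manipulated (1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Placeholder batch standing in for compressed face crops and their labels.
face_crops = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

logits = model(face_crops)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("training loss on this batch:", float(loss))
```

Training something along these lines on the dataset's compressed and uncompressed videos is what produces the kind of accuracy table discussed above.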
Now, hold on to your papers, because this detection pipeline is not only so powerful that it practically eats compressed deepfakes for breakfast, but it even tells us with remarkable accuracy which method was used to tamper with the input footage. Bravo! Now, it is of utmost importance that we let people know about the existence of these techniques, and this is what I'm trying to accomplish with this video. But that's not enough, so I also went to this year's biggest NATO conference and made sure that political and military decision-makers are also informed about this topic. Last year, I went to the European Political Strategy Centre with a similar goal. I was so nervous before both of these talks and spent a long time rehearsing them, which delayed a few videos here on the channel. However, because of your support on Patreon, I am in a fortunate situation where I can focus on doing what is right and what is best for all of us, and not worry about the financials all the time. I am really grateful for that, it really is a true privilege. Thank you. If you wish to support us, make sure to click the Patreon link in the video description. Thanks for watching and for your generous support, and I'll see you next time.
This episode has been supported by Lambda. Dear Fellow Scholars, this is two minute papers with Karo Zsolnai-Fehir. I can confidently say that this is the most excited I've been for a smoke simulation paper since wavelength turbulence. Waveslet turbulence is a magical algorithm from 2008 that takes a low quality fluid or smoke simulation and increases its quality by filling in the remaining details. And here we are, 11 years later, the results still hold up. Insanity. This is one of the best papers ever written and has significantly contributed to my decision to pursue a research career. And this new work performs style transfer for smoke simulations. If you haven't fallen out of your chair yet, let me try to explain why this is amazing. Style transfer is a technique in machine learning research where we have two input images, one for content and one for style and the output is our content image reimagined with this new style. The cool part is that the content can be a photo straight from our camera and the style can be a painting which leads to the super fun results that you see here. An earlier paper had shown that the more sophisticated ones can make even art curators think that they are real. However, doing this for smoke simulations is a big departure from 2D style transfer because that one takes an image where this works in 3D and does not deal with images but with density fields. A density field means a collection of numbers that describe how dense a smoke plume is at a given spatial position. It is a physical description of a smoke plume, if you will. So how could we possibly apply artistic style from an image to a collection of densities? This doesn't sound possible at all. Unfortunately, the problem gets even worse. Since we typically don't just want to look at a still image of a smoke plume but enjoy a physically correct simulation not only the density fields but the velocity fields and the forces that animate them over time also have to be stylized. Again that's either impossible or almost impossible to do. You see if we run a proper smoke simulation we'll see what would happen in reality but that's not stylized. However, if we stylize we get something that would not happen in Mother Nature. I have spent my master's thesis trying to solve a problem called fluid control which would try to coerce a smoke plume or a piece of fluid to take a given shape, like a bunny or a logo with letters. You can see some footage of what I came up with here. Here both the simulation and the controlling force field is computed in real time on the graphics card and as you see it can be combined with wavelet turbulence. If you wish to hear more about this work make sure to leave a comment but in any case I had a wonderful time working on it if anyone wants to pick it up the paper and the source code and even a blender add on version are available in the video description. In any case in a physics simulation we are trying to simulate reality and for style transfer we are trying to depart from reality. The two are fundamentally incompatible and we have to reconcile them in a way that is somehow still believable. Super challenging. However, back then when I wrote the fluid control paper learning based algorithms were not nearly as developed so it turns out they can help us perform style transfer for density fields and also animate them properly. Again the problem definition is very easy. 
We take a smoke plume, we have an image for style, and the style of this image is somehow applied to the density field to get these incredible effects. Just look at these marvelous results. Fire textures, Starry Night, you name it. It seems to be able to do anything. One of the key ideas is that even though style transfer is challenging on highly detailed density fields, it becomes much easier if we first downsample the density field to a coarser version, perform the style transfer there, and upsample this density field again with already existing techniques. Rinse and repeat. I will leave a rough sketch of this coarse-to-fine loop right after this episode. The paper also describes a smoothing technique that ensures that the changes in the velocity fields that guide our density fields happen slowly over time to keep the animation believable. There are a lot more new ideas in the paper, so make sure to have a look. It also takes a while, the computation time is typically around 10 to 15 minutes per frame, but who cares? Today, with the ingenuity of research scientists and the power of machine learning algorithms, even the impossible seems possible. If it takes 15 minutes per frame, so be it. What a time to be alive. If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos, and I'm happy to tell you that they are offering GPU cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser, and finally, hold on to your papers, because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to LambdaLabs.com slash papers and sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support, and I'll see you next time.
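Circling back to the coarse-to-fine idea from the smoke stylization episode above, here is a rough sketch of that loop with a stub in place of the actual neural stylization step. Everything in it, including the toy stylize function, is an illustrative assumption rather than the paper's method.

```python
# Coarse-to-fine stylization sketch: downsample the density field, "stylize"
# it cheaply at low resolution, then upsample only the change and apply it
# to the full-resolution field.
import numpy as np
from scipy.ndimage import zoom

def stylize_coarse(coarse_density):
    # Stand-in for the expensive learning-based stylization step.
    # Here we just exaggerate density gradients a little as a placeholder.
    grad = np.gradient(coarse_density)
    return coarse_density + 0.1 * sum(np.abs(g) for g in grad)

def stylize_density_field(density, scale=0.25):
    coarse = zoom(density, scale, order=1)                 # 1) downsample
    stylized_coarse = stylize_coarse(coarse)                # 2) stylize cheaply
    delta = stylized_coarse - coarse                        # 3) keep only the change
    factors = [d_full / d_coarse for d_full, d_coarse
               in zip(density.shape, delta.shape)]
    return density + zoom(delta, factors, order=1)          # 4) apply at full resolution

density_field = np.random.rand(64, 64, 64)                  # placeholder smoke density
stylized = stylize_density_field(density_field)
print(stylized.shape)
```

In the real method, the coarse-level step runs the learning-based style transfer and the velocity fields are updated as well, which is presumably where most of the quoted 10 to 15 minutes per frame go.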
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Reinforcement learning is an important subfield within machine learning research where we teach an agent to choose a set of actions in an environment to maximize the score. This enables these AIs to play Atari games at a superhuman level, control drones, robot arms, or even create self-driving cars. A few episodes ago, we talked about deep minds behavior suite that opened up the possibility of measuring how these AIs perform with respect to the seven core capabilities of reinforcement learning algorithms. Among them were how well such an AI performs when being shown a new problem, how well or how much they memorize, how willing they are to explore novel solutions, how well they scale to larger problems, and more. In the meantime, the Google Brain Research Team has also been busy creating a physics-based 3D football, or for some of you, soccer simulation, where we can ask an AI to control one or multiple players in this virtual environment. This is a particularly difficult task because it requires finding a delicate balance between rudimentary short-term control tasks like passing and long-term strategic planning. In this environment, we can also test our reinforcement learning agents against handcrafted rule-based teams. For instance, here you can see that deep minds in Paula algorithm is the only one that can reliably beat the medium and hard handcrafted teams, specifically the one that was run for 500 million training steps. The easy case is tuned to be suitable for single-machine research works where the hard case is meant to challenge sophisticated AIs that were trained on a massive array of machines. I like this idea a lot. Another design decision, I particularly like here, is that these agents can be trained from pixels or internal game state. Okay, so what does that really mean? Training from pixels is easy to understand, but very hard to perform. This simply means that the agents see the same content as what we see on the screen. Deep minds deep reinforcement learning is able to do this by training a neural network to understand what events take place on the screen and passes, no pun intended, all this event information to a reinforcement learner that is responsible for the strategic gameplay related decisions. Now, what about the other one? The internal game state learning means that the algorithm sees a bunch of numbers which relate to quantities within the game, such as the position of all the players and the ball, the current score, and so on. This is typically easier to perform because the AI is given high quality and relevant information and is not burdened with the task of visually parsing the entire scene. For instance, OpenAI's amazing Dota 2 team learned this way. Of course, to maximize impact, the source code for this project is also available. This will not only help researchers to train and test their own reinforcement learning algorithms on a challenging scenario, but they can also extend it and make up their own scenarios. Now note that so far, I tried my hardest not to comment on the names of the players and the teams, but my will to resist just ran out. So, go realbations! Thanks for watching and for your generous support, and I'll see you next time!
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. OpenAI's GPT-2 is a learning-based technique which can perform common natural language processing operations, for instance, answering questions, completing text, reading comprehension, summarization, and more. What is absolutely amazing about this technique is that it is able to perform these tasks with as little supervision as possible. This means that we unleash the algorithm that they call GPT-2 to read the internet, and it will learn our language by itself. To perform this, of course, we need a lot of training data, and here the AI reads 40 GB of internet text, which is 40 GB of non-binary plain-text data, which is a stupendously large amount of text. You see here an example of how it fares with text completion, and the results were absolutely amazing. And not only that, because hold onto your papers, we can now try it ourselves, and my results show that it can be as wise as a scholar, and it can also cheat like nobody's business. So, let's try something simple here. First, as everyone is talking about job security, I'd like to give it the start of this Two Minute Papers script, and see if I am about to be replaced. Let's see. It continues the intro with: the history of Eastern Europe since 1989 has been an interesting and complex story spanning four decades. On one side you have a regime of Western-backed dictators from the mid-1990s, and so on. Very scholarly indeed. However, I think I might be safe for now. And now, something that I wanted to try since the inception of GPT-2, which is: feed it a snippet of something that I said about wavelet turbulence, which is one of the best papers ever written, and have the AI finish it. I tried to frame it as an exam question, and was not disappointed by the results. So, the text is: there was a similar technique by the name wavelet turbulence, which is, I hope you see what I did there, and look what it did. There was a similar technique by the name wavelet turbulence, which is not at all what we are talking about. But I let you decide whether it is worth your time to read it. Wow! GPT-2 has been caught cheating, red-handed. I love it. Let's continue what it had to say. Anyway, my aim here is to describe how we use a fluid simulation to simulate smoke and liquid effects using just a few simple equations. Solving a few functions gives us a set of derivatives of the particle trajectories, that is, equations that describe how and where the particles will come and go across the liquid in the simulation. For simplicity, as a good teacher of high school students should know, we can use a simple equation which simulates two particles that collide. I am stunned. It recognized that we are talking about fluid simulations, which is already remarkable, but it went much further. The completion is not bad at all, and is not only coherent and on-topic, but has quite a bit of truth to it. I will have to rethink my previous claim about my job security. The even crazier thing is that the size of this model is about 750 million parameters, which is only half of the size of the original full model, which is expected to be even better. I put a link to this website in the video description for your pleasure. Make sure to play with it. This is mad fun. And GPT-2 will also see so many applications that we cannot even fathom yet. For instance, here you can see that one can train it on many source code files on GitHub, and it will be able to complete the code that we write on the fly.
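If you would like to try this kind of text completion at home, a minimal sketch with the Hugging Face transformers library looks roughly like this. The sampling settings are arbitrary choices of mine, not the ones used by the demo mentioned in the episode.

```python
# Sample a text completion from a publicly available GPT-2 checkpoint.
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Dear Fellow Scholars, this is Two Minute Papers"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=80,                       # total length including the prompt
        do_sample=True,                      # sample instead of greedy decoding
        top_k=40,                            # keep only the 40 most likely next tokens
        temperature=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Swapping "gpt2" for one of the larger checkpoints, such as "gpt2-large", typically gives more coherent completions at the cost of a bigger download.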
Now, nobody should think of this as GPT2 writing programs for us. This is, of course, unlikely. However, it will ease the process for novice and expert users alike. If you have any other novel applications in mind, make sure to leave a comment below. For now, Bravo OpenAI and a big thank you for Danielle King and the HuggingFace company for this super convenient public implementation. Let the experiments begin. This episode has been supported by weights and biases. Wates and biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford, and Berkeley. It is very easy to set up. In fact, this blog post shows how we can use their framework to visualize our progress using XG Boost, a popular library for machine learning models. Get ready, because this is quite possibly the shortest blog post that you have seen. Yep, that was basically it. I don't think it can get any easier. Make sure to visit them through WendeeB.com slash papers, www.wendeeB.com slash papers, or just click the link in the video description and sign up for a free demo today. Our thanks to weights and biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. A few years ago, scientists at DeepMind published a learning algorithm that they called deep reinforcement learning, which quickly took the world by storm. This technique is a combination of a neural network that processes the visual data that we see on the screen, and a reinforcement learner that comes up with the gameplay-related decisions, which proved to be able to reach superhuman performance on computer games like Atari Breakout. This paper not only sparked quite a bit of mainstream media interest, but also provided fertile grounds for new follow-up research works to emerge. For instance, one of these follow-up papers infused these agents with a very human-like quality, curiosity, further improving many aspects of the original learning method. However, this also had a drawback: I kid you not, the agent got addicted to the TV and kept staring at it forever. This was perhaps a little too human-like. In any case, you may rest assured that this shortcoming has since been remedied, and every follow-up paper recorded its scores on a set of Atari games. Measuring and comparing is an important part of research and is absolutely necessary so we can compare new learning methods more objectively. It's like recording your time for the Olympics at the 100-meter dash. In that case, it is quite easy to decide which athlete is the best. However, this is not so easy in AI research. In this paper, scientists at DeepMind note that just recording the scores doesn't give us enough information anymore. There's so much more to reinforcement learning algorithms than just scores. So, they built a behavior suite that also evaluates the seven core capabilities of reinforcement learning algorithms. Among these seven core capabilities, they list generalization, which tells us how well the agent is expected to do in previously unseen environments, and how good it is at credit assignment, which is a prominent problem in reinforcement learning. Credit assignment is very tricky to solve because, for instance, when we play a strategy game, we need to make a long sequence of strategic decisions, and in the end, if we lose an hour later, we have to figure out which one of these many, many decisions led to our loss. Measuring this as one of the core capabilities was, in my opinion, a great design decision here. How well the algorithm scales to larger problems also gets a spot as one of these core capabilities. I hope this testing suite will see widespread adoption in reinforcement learning research, and what I am really looking forward to is seeing these radar plots for newer algorithms, which will quickly reveal whether we have a new method that takes a different trade-off than previous methods, or in other words, has the same area within the polygon but with a different shape, or whether, in the case of a real breakthrough, the area of these polygons will start to increase. Luckily, a few of these charts are already available in the paper, and they give us so much information about these methods. I could stare at them all day long, and I cannot wait to see some newer methods appear here. I will also leave a small sketch of how such a radar plot can be drawn right after this episode. Now, note that there is a lot more to this paper. If you have a look at it in the video description, you will also find the experiments that are part of this suite, what makes a good environment to test these agents in, and that they plan to form a committee of prominent researchers to periodically review it. I love that part. If you enjoyed this video, please consider supporting us on Patreon.
If you do, we can offer you early access to these videos so you can watch them before anyone else or you can also get your name immortalized in the video description. Just click the link in the description if you wish to chip in. Thanks for watching and for your generous support and I'll see you next time.
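Speaking of those radar plots, the seven-capability charts can be reproduced with plain matplotlib. The capability names below are my recollection of the suite's categories and should be treated as assumptions; the two sets of scores are made-up placeholder numbers, not results from the paper.

```python
# Radar ("spider") chart comparing two hypothetical agents on seven capabilities.
import numpy as np
import matplotlib.pyplot as plt

capabilities = ["basic", "generalization", "credit assignment", "memory",
                "exploration", "noise", "scale"]
agent_a = [0.9, 0.5, 0.4, 0.3, 0.6, 0.7, 0.5]   # hypothetical scores
agent_b = [0.8, 0.6, 0.7, 0.5, 0.4, 0.6, 0.6]   # hypothetical scores

angles = np.linspace(0, 2 * np.pi, len(capabilities), endpoint=False).tolist()
angles += angles[:1]                              # close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for name, scores in [("Agent A", agent_a), ("Agent B", agent_b)]:
    values = scores + scores[:1]
    ax.plot(angles, values, label=name)
    ax.fill(angles, values, alpha=0.15)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(capabilities)
ax.set_ylim(0, 1)
ax.legend()
plt.show()
```

Plotted this way, two agents with a similar polygon area but a different shape are immediately distinguishable, which is exactly the kind of comparison the episode describes.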
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Fluid simulation is a mature research field within computer graphics, with amazing papers that show us how to simulate water flows with lots of debris, how to perform liquid-fabric interactions, and more. This new project further improves the quality of these works and shows us how thin elastic strands interact with oil paint, mud, melted chocolate, and pasta sauce. There will be plenty of tasty and messy simulations ahead, not necessarily in that order, so make sure to hold onto your papers just in case. Here you see four scenarios of these different materials dripping off of a thin strand. So, why are these cases difficult to simulate? The reason why this is difficult, if not flat out impossible, is that the hair strands and the fluid layers are so thin that they would require a simulation grid that is almost microscopic, or in other words, we would have to perform our computations of quantities like pressure and velocity on so many grid points that it would probably take not from hours to days, but from weeks to years to compute. I will show you a table in a moment where you will see that these amazingly detailed simulations can be done on a grid of surprisingly low resolution. As a result, our simulations also needn't be so tiny in scale with one hair strand and a few drops of mud or water. They can be done on a much larger scale, so we can marvel together at these tasty and messy simulations, you decide which is which. I particularly like this animation with the oyster sauce, because you can see a breakdown of the individual elements of the simulation. Note that all of the interactions between the noodles, the sauce, the fork and the plate have to be simulated with precision. Love it. And now, the promised table. Here you can see the delta x, which tells us how fine the grid resolution is, and it is in the order of centimeters and not micrometers. Also, don't forget that this work is an extension of the material point method, which is a hybrid simulation method that uses both grids and particles. And sure enough, you can see here that it simulates up to tens of millions of particles as well, and the fact that the computation times are still only measured in a few minutes per frame is absolutely remarkable. Remember, the fact that we can simulate this at all is a miracle. Now, this was run on the processor, and a potential implementation on the graphics card could yield significant speedups. So I really hope something like this appears in the near future. Also, make sure to have a look at the paper itself, which is outrageously well written. If you wish to see more from this paper, make sure to follow us on Instagram, just search for Two Minute Papers there or click the link in the video description. Now, I am still working as a full-time research scientist at the Technical University of Vienna, and we train plenty of neural networks during our projects, which requires a lot of computational resources. Every time we have to spend time maintaining these machines, I wish we could use Linode. Linode is the world's largest independent cloud hosting and computing provider. If you feel inspired by these works and you wish to run your experiments or deploy your already existing works through a simple and reliable hosting service, make sure to join over 800,000 other happy customers and choose Linode.
To reserve your GPU instance and receive a $20 free credit, visit linode.com slash papers or click the link in the video description and use the promo code papers20 during signup. Give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Recently, we have experienced an abundance of papers on facial reenactment in machine learning research. We talked about a technique by the name Face2Face back in 2016, approximately 300 videos ago. It was able to take a video of us and transfer our gestures to a target subject. This was kind of possible at the time with specialized depth cameras, until Face2Face appeared and took the world by storm, as it was able to perform what you see here with a regular consumer camera. However, it only transferred gestures. So of course, scientists were quite excited about the possibility of transferring more than just that. But that would require solving so many more problems. For instance, if we wish to turn the head of the target subject, we may need to visualize regions that we haven't seen in these videos, which also requires an intuitive understanding of hair, the human face, and more. This is quite challenging. So, can this really be done? Well, have a look at this amazing new paper. You see here the left image, this is the source person, the video on the right is the target video, and our task is to transfer not just the gestures, but the pose, gestures, and appearance of the face on the left to the video on the right. And this new method works like magic. Look, it not only works like magic, but pulls it off on a surprisingly large variety of cases, many of which I hadn't expected at all. Now, hold on to your papers, because this technique was not trained on these subjects, which means that this is the first time it is seeing these people. It has been trained on plenty of people, but not these people. Now, before we look at this example, you are probably saying, well, the occlusions from the microphone will surely throw the algorithm off. Right? Well, let's have a look. Nope, no issues at all. Absolutely amazing. Love it. So, how does this wizardry work exactly? Well, it requires careful coordination between no less than four neural networks, where each of them specializes in a different task. The first two are a reenactment generator that produces a first estimation of the reenacted face, and a segmentation generator network that creates this colorful image that shows which region in the image corresponds to which facial landmark. These two are then handed over to the third network, the inpainting generator, which fills in the rest of the image, and since we have overlapping information, in comes the fourth, the blending generator, to the rescue to combine all this information into our final image. The paper contains a detailed description of each of these networks, so make sure to have a look. And if you do, you will also find that there are plenty of comparisons against previous works. Of course, Face2Face is one of them, which was already amazing, and you can see how far we've come in only three years. Now, when we try to evaluate such a research work, we are curious as to how these individual puzzle pieces, in this case the generator networks, contribute to the final results. Are all of them really needed? What if we remove some of them? Well, this is a good paper, so we can find the answer in Table 2, where all of these components are tested in isolation. The downward and upward arrows show which measure is subject to minimization and maximization, and if we look at this column, it is quite clear that all of them indeed improve the situation and contribute to the final results.
And remember, all this from just one image of the source person. Insanity. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today we are going to talk about a paper that builds on a previous work by the name Deep Image Prior, DIP in short. This work was capable of performing JPEG compression artifact removal, image inpainting, or in other words, filling in parts of the image with data that makes sense, super resolution, and image denoising. It was quite the package. This new method is able to subdivide an image into a collection of layers, which makes it capable of doing many seemingly unrelated tasks. For instance, one, it can do image segmentation, which typically means producing a mask that shows us the boundaries between the foreground and the background. As an additional advantage, it can also do this for videos as well. Two, it can perform dehazing, which can also be thought of as a decomposition task where the input is one image, and the output is an image with the haze, and one with the objects hiding behind the haze. If you spend a tiny bit of time looking out the window on a hazy day, you will immediately see that this is immensely difficult, mostly because of the fact that the amount of haze that we see is non-uniform along the landscape. The AI has to detect and remove just the right amount of this haze and recover the original colors of the image. And three, it can also subdivide these crazy examples where two images are blended together. In a moment, I'll show you a better example with a complex texture where it is easier to see the utility of such a technique. And four, of course, it can also perform image inpainting, which, for instance, can help us remove watermarks or other unwanted artifacts from our photos. This case can also be thought of as an image layer plus a watermark layer, and the algorithm is able to recover both of them. As you see here on the right, a tiny part of the content seems to bleed into the watermark layer, but the results are still amazing. It does this by using multiple of these DIPs, deep image prior networks, and goes by the name Double-DIP. That one got me good when I first saw it. You see here how it tries to reproduce this complex textured pattern as a sum of these two much simpler individual components. I will also leave a small sketch of this layer-decomposition idea right after this episode. The supplementary materials are available right in your browser and show you a ton of comparisons against other previous works. Here you see the results of these earlier works on image dehazing, and you can see that indeed the new results are second to none. And all this progress within only two years. What a time to be alive. If, like me, you love information theory, woohoo, make sure to have a look at the paper and you'll be a happy person. This episode has been supported by Weights and Biases. Weights and Biases provides tools to track your experiments in your deep learning projects. It is like a shared logbook for your team, and with this, you can compare your own experiment results, put them next to what your colleagues did, and you can discuss your successes and failures much easier. It takes less than five minutes to set up and is being used by OpenAI, Toyota Research, Stanford, and Berkeley. It was also used in this OpenAI project that you see here, which we covered earlier in the series. They reported that experiment tracking was crucial in this project, and that this tool saved them quite a bit of time and money. If only I had access to such a tool during our last research project, where I had to compare the performance of neural networks for months and months.
Well, it turns out I will be able to get access to these tools because get this, it's free, and will always be free for academics and open source projects. Make sure to visit them through wandb.com slash papers, w-a-n-d-b.com slash papers, or just click the link in the video description and sign up for a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
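For readers who would like to see the layer decomposition idea from this episode in code form, here is a minimal sketch, assuming a toy PyTorch setup: two small, untrained convolutional networks (the "deep image prior" part) are optimized jointly so that their outputs add up to the observed image. The network sizes, step count and the plain reconstruction loss are simplifications for illustration only, not the authors' implementation, which also uses masks and task-specific regularizers.

```python
# Minimal sketch of the Double-DIP idea: explain one image as the sum of two
# layers, each produced by its own untrained "deep image prior" network.
# This is an illustration only, not the authors' implementation.
import torch
import torch.nn as nn

def tiny_dip():
    # A deliberately small convolutional network; real DIP nets are much deeper.
    return nn.Sequential(
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
    )

image = torch.rand(1, 3, 64, 64)          # the observed image (placeholder)
noise1 = torch.rand(1, 32, 64, 64)        # fixed random inputs, one per layer
noise2 = torch.rand(1, 32, 64, 64)

net1, net2 = tiny_dip(), tiny_dip()
opt = torch.optim.Adam(list(net1.parameters()) + list(net2.parameters()), lr=1e-3)

for step in range(200):                   # a few hundred steps for illustration
    layer1, layer2 = net1(noise1), net2(noise2)
    # Reconstruction loss: the two simple layers must add up to the input image.
    loss = ((0.5 * (layer1 + layer2) - image) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
# After training, layer1 and layer2 play the role of the recovered components,
# e.g. texture A and texture B, or image and watermark.
```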
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. This will be a little non-traditional video where the first half of the episode will be about a paper and the second part will be about something else. Also a paper. Well, kind of. You'll see. We've seen in the previous years that neural network-based learning methods are amazing at image classification, which means that after training on a few thousand training examples, they can look at a new previously unseen image and tell us whether it depicts a frog or a bus. Earlier we have shown that we can fool neural networks by adding carefully crafted noise to an image which we often refer to as an adversarial attack on a neural network. If done well, this noise is barely perceptible and, get this, can fool the classifier into looking at a bus and thinking that it is an ostrich. These attacks typically require modifying a large portion of the input image, so when talking about a later paper, we were thinking what could be the lowest number of pixel changes that we have to perform to fool a neural network. What is the magic number? Based on the results of previous research works, an educated guess would be somewhere around a hundred pixels. A follow-up paper gave us an unbelievable answer by demonstrating the one pixel attack. You see here that by changing only one pixel in an image that depicts a horse, the AI will be 99.9% sure that we are seeing a frog. A ship can also be disguised as a car or, amusingly, with a properly executed one pixel attack almost anything can be seen as an airplane by the neural network. You will find a small code sketch of the gist of such an attack right after this episode. And this new paper discusses whether we should look at these adversarial examples as bugs or not, and of course, does a lot more than that. It argues that most data sets contain features that are predictive, meaning that they provide help for a classifier to find cats, but also non-robust, which means that they provide a rather brittle understanding that falls apart in the presence of adversarial changes. We are also shown how to find and eliminate these non-robust features from already existing data sets and that we can build much more robust classifier neural networks as a result. This is a truly excellent paper that sparked quite a bit of discussion. And here comes the second part of the video with the something else. An interesting new article was published within the Distill journal, a journal where you can expect clearly-worded papers with beautiful and interactive visualizations. But this is no ordinary article, this is a so-called discussion article where a number of researchers were asked to write comments on this paper and create interesting back-and-forth discussions with the original authors. Now, make no mistake, the paper we've talked about was peer-reviewed, which means that independent experts have spent time scrutinizing the validity of the results, so this new discussion article was meant to add to it by getting others to replicate the results and clear up potential misunderstandings. Through publishing six of these mini-discussions, each of which were addressed by the original authors, they were able to clarify the main takeaways of the paper and even added a section of non-claims as well. For instance, it's been clarified that they don't claim that adversarial examples arise from software bugs. A huge thanks to the Distill journal and all the authors who participated in this discussion and Ferenc Huszár who suggested the idea of the discussion article to the journal. 
I'd love to see more of this and if you do too, make sure to leave a comment so we can show them that these endeavors to raise the replicability and clarity of research works are indeed welcome. Make sure to click the link to both works in the video description and spend a little quality time with them. You'll be glad you did. I think this was a more complex than average paper to talk about, however, as you have noticed, the visual fireworks were not there. As a result, I expect this to get significantly fewer views. That's not a great business model, but no matter, I made this channel so I can share with you all these important lessons that I learned during my journey. This has been a true privilege and I am thrilled that I am still able to talk about all these amazing papers without worrying too much whether any of these videos go viral or not. Things like this are only possible because of your support on patreon.com slash two minute papers. If you feel like chipping in, just click the Patreon link in the video description. This is why every video ends with, you know what's coming. Thanks for watching and for your generous support and I'll see you next time.
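As a rough illustration of the one pixel attack mentioned in the episode above, here is a minimal sketch that uses plain random search instead of the differential evolution optimizer used in the original paper. The `classify` function and the dummy classifier at the bottom are placeholders for illustration only.

```python
# Sketch of the one-pixel attack idea: search for a single pixel change that
# flips a classifier's prediction. The original paper uses differential
# evolution; plain random search is shown here for brevity.
import numpy as np

def one_pixel_attack(image, true_label, classify, trials=5000, rng=None):
    rng = rng or np.random.default_rng(0)
    h, w, _ = image.shape
    for _ in range(trials):
        candidate = image.copy()
        y, x = rng.integers(0, h), rng.integers(0, w)
        candidate[y, x] = rng.integers(0, 256, size=3)   # overwrite one RGB pixel
        probs = classify(candidate)
        if np.argmax(probs) != true_label:               # prediction flipped
            return candidate, (y, x)
    return None, None                                     # no attack found

# Example with a toy classifier that flips its decision if any pixel is very bright:
dummy = lambda img: np.array([1.0, 0.0]) if img.max() < 250 else np.array([0.0, 1.0])
img = np.full((32, 32, 3), 200, dtype=np.uint8)
adversarial, pixel = one_pixel_attack(img, true_label=0, classify=dummy)
```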
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér, about 350 episodes ago in this series, in episode number 8, we talked about an amazing paper in which researchers built virtual characters with a bunch of muscles and joints and through the power of machine learning taught them to actuate them just the right way so that they could learn to walk. Well, some of them anyway. Later, we've seen much more advanced variants where we could even teach them to lift weights, jump really high, or even observe how their movements would change after they undergo surgery. This paper is a huge step forward in this area and if you look at the title, it says that it proposes multiplicative composition policies to control these characters. What this means is that these complex actions are broken down into a combination of elementary movements. Intuitively, you can imagine something similar when you see a child use small, simple Lego pieces to build a huge, breathtaking spaceship. That sounds great, but what does this do for us? Well, the ability to properly combine these Lego pieces is where the learning part of the technique shines and you can see on the right that these individual Lego pieces are as amusing as they are useless if they are not combined with others. To assemble efficient combinations that are actually useful, the characters are first required to learn to perform reference motions using combinations of these Lego pieces. Here on the right, the blue bars show which of these Lego pieces are used and when in the current movement pattern. Now that we've heard enough of these Legos, what is this whole compositional thing good for? Well, a key advantage of using these is that they are simple enough so that they can be transferred and reused for other types of movement. As you see here, this footage demonstrates how we can teach a biped or even a T-Rex to carry and stack boxes or how to dribble or how to score a goal. Amusingly, according to the paper, it seems that this T-Rex weighs only 55 kilograms or 121 pounds, an adorable baby T-Rex, if you will. As a result of this transferability property, when we assemble a new agent or wish to teach an already existing character some new moves, we don't have to train them from scratch as they already have access to these Lego pieces. You will find a small code sketch of this composition idea right after this episode. I love seeing all these new papers in the intersection of computer graphics and machine learning. This is a similar topic to what I am working on as a full-time research scientist at the Technical University of Vienna and in these projects we train plenty of neural networks which requires a lot of computational resources. Sometimes when we have to spend time maintaining the machines running these networks, I wish we could use Linode. Linode is the world's largest independent cloud hosting and computing provider and they have GPU instances that are tailor-made for AI, scientific computing and computer graphics projects. If you feel inspired by these works and you wish to run your experiments or deploy your already existing works through a simple and reliable hosting service, make sure to join over 800,000 other happy customers and choose Linode. To reserve your GPU instance and receive a $20 free credit, visit Linode.com slash papers or click the link in the video description and use the promo code papers20 during sign up. Give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
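Here is a minimal sketch of the multiplicative composition idea from the episode above, under the assumption that each primitive, each "Lego piece", is a Gaussian policy over the character's actions: the composite distribution is a weight-dependent product of the primitives, which for Gaussians has a simple closed form. The numbers and the scalar weights standing in for a gating network are made up for illustration.

```python
# Sketch of multiplicative composition: several simple Gaussian "primitive"
# policies are blended into one action distribution by raising each primitive
# to a state-dependent weight and multiplying. For Gaussians, this product is
# again a Gaussian whose mean is a precision-weighted average.
import numpy as np

def compose(means, sigmas, weights):
    precisions = weights / sigmas**2          # precision of each weighted primitive
    var = 1.0 / precisions.sum(axis=0)
    mean = var * (precisions * means).sum(axis=0)
    return mean, np.sqrt(var)

# Three primitives ("Lego pieces") controlling a 2D action, e.g. hip and knee torque.
means   = np.array([[0.8, -0.2], [0.1, 0.5], [-0.3, 0.0]])
sigmas  = np.array([[0.2,  0.3], [0.4, 0.2], [ 0.5, 0.5]])
weights = np.array([0.7, 0.2, 0.1])[:, None]   # would come from a learned gating network

mean, sigma = compose(means, sigmas, weights)
action = np.random.default_rng(0).normal(mean, sigma)   # sampled composite action
```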
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. As machine learning research advances over time, learning-based techniques are getting better and better at generating images or even creating videos when given a topic. A few episodes ago, we talked about DeepMind's dual video discriminator technique in which multiple neural networks compete against each other teaching our machines to synthesize a collection of two-second-long videos. One of the key advantages of this method was that it learned the concept of changes in the camera view, zooming in on an object, and understood that if someone draws something with a pen, the ink has to remain on the paper unchanged. However, generally, if we wish to ask an AI to synthesize assets for us, we likely have an exact idea of what we are looking for. In these cases, we are looking for a little more artistic control than this technique offers us. So, can we get around this? If so, how? Well, we can. I'll tell you how in a moment, but to understand this solution, we first have to have a firm grasp on the concept of latent spaces. You can think of a latent space as a compressed representation that tries to capture the essence of the dataset that we have at hand. You can see a similar latent space method in action here that captures the key features that set different kinds of fonts apart and presents these options on a 2D plane. And here, you see our technique that builds a latent space for modeling a wide range of photorealistic material models that we can explore. And now, onto this new work. What this tries to do is find a path in the latent space of these images that relates to intuitive concepts like camera zooming, rotation, or shifting. That's not an easy task, but if we pull it off, we'll have more artistic control over these generated images, which will be immensely useful for many creative tasks. This new work can perform that, and not only that, but it is also able to learn the concept of color enhancement and can even increase or decrease the contrast of these images. The key idea of this paper is that this can be done through trying to find crazy, nonlinear trajectories in these latent spaces that happen to relate to these intuitive concepts. You will find a small code sketch of this latent walk idea right after this episode. It is not perfect in a sense that we can indeed zoom in on the picture of this dog, but the posture of the dog also changes and it even seems like we are starting out with a puppy that grows up frame by frame. This means that we have learned to navigate this latent space, but there is still some additional fat in these movements, which is a typical side effect of latent space-based techniques, and also don't forget that the training data the AI is given also has its own limits. However, as you see, we are now one step closer to not only having an AI that synthesizes images for us, but one that does it exactly with the camera setup, rotation, and colors that we are looking for. What a time to be alive. If you wish to see beautiful formulations of walks, walks in latent spaces, that is, make sure to have a look at the paper in the video description. Also, note that we have now appeared on Instagram with bite-sized pieces of our bite-sized videos. Yes, it is quite peculiar. Make sure to check it out, just search for two-minute papers on Instagram or click the link in the video description. Thanks for watching and for your generous support, and I'll see you next time.
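A minimal sketch of learning such a walk for zooming follows, assuming a toy stand-in generator and a crude crop-based "ground truth" zoom; the real work operates on large pretrained GANs and can learn nonlinear trajectories, while this sketch only fits a single linear direction.

```python
# Sketch of learning a "walk" in a generator's latent space that corresponds to
# zooming. `generator` and `center_zoom` are placeholders for illustration.
import torch
import torch.nn.functional as F

latent_dim = 128
generator = torch.nn.Sequential(          # stand-in for a pretrained GAN generator
    torch.nn.Linear(latent_dim, 3 * 32 * 32), torch.nn.Tanh(),
    torch.nn.Unflatten(1, (3, 32, 32)),
)

def center_zoom(img, scale):
    # crude "ground truth" zoom: upscale, then center-crop back to 32x32
    up = F.interpolate(img, scale_factor=scale, mode="bilinear", align_corners=False)
    lo = (up.shape[-1] - 32) // 2
    return up[..., lo:lo + 32, lo:lo + 32]

direction = torch.zeros(latent_dim, requires_grad=True)  # the walk we learn
opt = torch.optim.Adam([direction], lr=1e-2)

for step in range(200):
    z = torch.randn(8, latent_dim)
    alpha = 1.5                                # target zoom factor
    target = center_zoom(generator(z), alpha)  # what a zoomed image should look like
    walked = generator(z + (alpha - 1.0) * direction)
    loss = F.mse_loss(walked, target.detach())
    opt.zero_grad(); loss.backward(); opt.step()
```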
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. A few years ago, the Generative Adversarial Network Architecture appeared that contains two neural networks that try to outcompete each other. It has been used extensively for image generation and has become a research subfield of its own. For instance, they can generate faces of people that don't exist and much, much more. This is great, we should be grateful to live in a time when breakthroughs like this happen in AI research. However, we should also note that artists usually have a vision of the work that they would like to create and instead of just getting a deluge of new images, most of them would prefer to have some sort of artistic control over the results. This work offers something that they call semantic paintbrushes. This means that we can paint not in terms of colors, but in terms of concepts. Now this may sound a little nebulous, so if you look here, you see that as a result, we can grow trees, change buildings, and do all kinds of shenanigans without requiring us to be able to draw the results by hand. Look at those marvelous results. It works by compressing down these images into a latent space. This is a representation that is quite sparse and captures the essence of these images. One of the key ideas is that this can then be reconstructed by a generator neural network to get a similar image back. However, the twist is that while we are in the latent domain, we can apply these intuitive edits to this image, so when the generator step takes place, it will carry through our changes. You will find a small code sketch of this kind of edit right after this episode. If you look at the paper, you will see that just using one generator network doesn't yield these great results, therefore, this generator needs to be specific to the image we are currently editing. The included user study shows that the new method is preferred over the previous techniques. Now, like all of these methods, this is not without limitations. Here you see that despite trying to remove the chairs from the scene, amusingly, we get them right back. That's a bunch of chairs, free of charge. In fact, I'm not even sure how many chairs we got here. If you figure that out, make sure to leave a comment about it, but all in all, that's not what we asked for, and solving this remains a challenge for the entire family of these algorithms. And good news. In fact, when talking about a paper, probably the best kind of news is that you can try it online through a web demo right now. Make sure to try it and post your results here if you find anything interesting. The authors themselves may also learn something new from us about interesting new failure cases. It has happened before in this series. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It is like a shared logbook for your team, and with this, you can compare your own experiment results, put them next to what your colleagues did, and you can discuss your successes and failures much easier. It takes less than 5 minutes to set up, and is being used by OpenAI, Toyota Research, Stanford, and Berkeley. It was also used in this OpenAI project that you see here, which we covered earlier in the series. They reported that experiment tracking was crucial in this project, and that this tool saved them quite a bit of time and money. If only I had access to such a tool during our last research project where I had to compare the performance of neural networks for months and months. 
Well, it turns out I will be able to get access to these tools because get this, it's free, and will always be free for academics and open source projects. Make sure to visit them through wandb.com slash papers, w-a-n-d-b.com slash papers, or just click the link in the video description and sign up for a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
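Here is a small sketch of the kind of feature-space editing that can power such semantic paintbrushes, with a toy generator and made-up unit indices; the real system identifies which generator units correspond to concepts like trees and modifies them under the user's brush mask, and, as mentioned above, also adapts the generator to the specific image being edited.

```python
# Sketch of a "semantic paintbrush": instead of painting pixels, we modify the
# activations of a few intermediate generator units inside a brush mask and let
# the rest of the network render the result. The generator, the layer split and
# the unit indices below are all placeholders for illustration.
import torch

class ToyGenerator(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.early = torch.nn.Sequential(            # latent -> feature maps
            torch.nn.ConvTranspose2d(64, 128, 4, 2, 1), torch.nn.ReLU())
        self.late = torch.nn.Sequential(             # feature maps -> image
            torch.nn.ConvTranspose2d(128, 3, 4, 2, 1), torch.nn.Tanh())

    def forward(self, z, edit=None):
        feats = self.early(z)
        if edit is not None:
            feats = edit(feats)                      # intervene in feature space
        return self.late(feats)

gen = ToyGenerator()
z = torch.randn(1, 64, 8, 8)

tree_units = [3, 17, 42]                             # hypothetical "tree" units
brush = torch.zeros(1, 1, 16, 16)
brush[..., 4:10, 4:10] = 1.0                         # the user's brush stroke

def paint_trees(feats):
    edited = feats.clone()
    for u in tree_units:
        # push the chosen units to a high activation, but only under the brush
        edited[:, u] = feats[:, u] * (1 - brush[:, 0]) + 6.0 * brush[:, 0]
    return edited

plain  = gen(z)                   # original image
edited = gen(z, edit=paint_trees) # same image with "trees painted in"
```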
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. In the last few years, the pace of progress in machine learning research has been staggering. Neural network-based learning algorithms are now able to look at an image and describe what's seen in this image or even better the other way around, generating images from a written description. You see here a set of results from BigGAN, a state-of-the-art image generation technique, and marvel at the fact that all of these images are indeed synthetic. The GAN part of this technique abbreviates the term generative adversarial network. This means a pair of neural networks that battle each other over time to master a task, for instance, to generate realistic-looking images when given a theme. These detailed images are great, but what about generating video? With the dual video discriminator GAN, DVD-GAN in short, DeepMind's naming game is still as strong as ever, it is now possible to create longer and higher resolution videos than was previously possible. The exact numbers are 256x256 in terms of resolution and 48 frames, which is about 2 seconds. It also learned the concept of changes in the camera view, zooming in on an object and understands that if someone draws something with a pen, the ink has to remain on the paper unchanged. The dual discriminator part of the name reveals one of the key ideas of the paper. In a classical GAN, we have a discriminator network that looks at the images of the generator network and critiques them. As a result, the discriminator learns to tell fake and real images apart better, but, at the same time, provides ample feedback for the generator neural network so it can come up with better images. In this work, we have not one, but two discriminators. One is called a spatial discriminator that looks at just one image and assesses how good it is structurally, while the second temporal discriminator critiques the quality of movement in these videos. This additional information provides better teaching for the generator, which will, in a way, be able to generate better videos for us. You will find a small code sketch of this dual discriminator idea right after this episode. The paper contains all the details that you could possibly want to learn about this algorithm. In fact, let me give you two that I found to be particularly interesting. One, it does not get any additional information about where the foreground and the background is, and is able to leverage the learning capacity of these neural networks to learn these concepts by itself. And two, it does not generate the video frame by frame sequentially, but it creates the entire video in one go. That's wild. Now, 256 by 256 is not a particularly high video resolution, but if you have been watching this series for a while, you are probably already saying that two more papers down the line and we may be watching HD videos that are also longer than we have the patience to watch. All this through the power of machine learning research. For now, let's applaud DeepMind for this amazing paper and I can't wait to have a look at more results and see some follow-up works on it. What a time to be alive. Thanks for watching and for your generous support and I'll see you next time.
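A rough sketch of the dual discriminator idea follows, with toy networks and a hinge-style loss as commonly used for video GANs; the real temporal discriminator also works on spatially downsampled clips, which is omitted here, and both networks are of course far deeper.

```python
# Sketch of the dual-discriminator idea: one discriminator judges randomly
# sampled individual frames (spatial quality), the other judges the whole clip
# (quality of motion). Network definitions are toy placeholders.
import torch
import torch.nn as nn

frames, channels, size = 8, 3, 32

spatial_d  = nn.Sequential(nn.Flatten(), nn.Linear(channels * size * size, 1))
temporal_d = nn.Sequential(nn.Flatten(), nn.Linear(frames * channels * size * size, 1))

def discriminator_scores(video):
    # video: (batch, frames, channels, height, width)
    b = video.shape[0]
    idx = torch.randint(0, frames, (b,))
    sampled_frames = video[torch.arange(b), idx]       # one random frame per clip
    s_score = spatial_d(sampled_frames)                # per-frame realism
    t_score = temporal_d(video)                        # realism of the motion
    return s_score, t_score

real_clip = torch.rand(4, frames, channels, size, size)
fake_clip = torch.rand(4, frames, channels, size, size)   # would come from the generator

s_real, t_real = discriminator_scores(real_clip)
s_fake, t_fake = discriminator_scores(fake_clip)
# Hinge-style losses: both discriminators teach the generator at the same time.
d_loss = (torch.relu(1 - s_real) + torch.relu(1 + s_fake)).mean() \
       + (torch.relu(1 - t_real) + torch.relu(1 + t_fake)).mean()
g_loss = -(s_fake.mean() + t_fake.mean())
```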
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. One of the main promises of virtual reality, VR in short, is enhancing the quality of our remote interactions. With VR, we could talk with our colleagues and beloved ones through telepresence applications that create a virtual avatar of us, much like the ones you see here. Normally, this requires putting sensors all over our faces to be able to reconstruct the gestures we make. A previous work used a depth camera that was hanging off of the VR headset, thus having a better look at the entirety of our face while a later work used a mouth camera to solve this problem. This new paper attempts to capture all of our gestures by using a headset without these additional complexities by using no more than three infrared cameras. No extra devices hanging off of the headset, nothing. All of them are built into the headpiece. This means two key challenges. One is the fact that the sensor below sees the face in an uncomfortable, oblique angle, below you see exactly the data that is being captured by the three sensors. And two, the output of this process should be a virtual avatar, but it is unclear what the correspondence between all this data and the animated character should be. So the idea sounds great, the only problem is that this is near impossible. So how did the researchers end up doing this? Well, what they did is they built a prototype headset with six additional sensors. Now wearing this headset would perhaps not be too much more convenient than the previous works we've looked at a moment ago. But don't judge this work just yet because this additional information is required to create the output avatar and then the smaller three sensor headset can be trained by dropping these additional views. In short, the augmented, more complex camera is used as a crutch to train the smaller headset. Amazing idea, I love it. Our more experienced fellow scholars also know that there is a little style transfer magic being done here. And finally, all of these partial views are then stitched together into the final avatar. You can also see here that it smokes the competition, uses only three sensors and does all this in real time. Wow, if you want to show your friends how you are about to sneeze in the highest possible quality video footage, look no further. Now, I'm a research scientist by day and I also run my own projects where I cannot choose my own hosting provider and every time I have problems with it, I tell my wife that I wish we could use Linode. Linode is the world's largest independent cloud hosting and computing provider and they just introduced a GPU server pilot program. These GPU instances are tailor made for AI, scientific computing and computer graphics projects. Yes, exactly the kind of works you see here in this series. If you feel inspired by these works and you wish to run your own experiments or deploy your already existing works through a simple and reliable hosting service, make sure to join over 800,000 other happy customers and choose Linode. Note that this is a pilot program with limited availability. To reserve your GPU instance at a discounted rate, make sure to visit linode.com slash papers or click the link in the description and use the promo code papers20 to get $20 free on your account. You also get super fast storage and proper support if you have any questions. Give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. 
Thanks for watching and for your generous support and I'll see you next time.
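A heavily simplified sketch of the "crutch" training idea from the headset episode above: a teacher network that sees the extra prototype cameras provides the targets, and a student network that only sees the three built-in infrared views learns to reproduce them. Everything here, from shapes to the loss, is a stand-in for illustration rather than the paper's pipeline.

```python
# Sketch of teacher-student training: the prototype headset with extra cameras
# yields reliable avatar parameters; a student that only sees three views is
# trained to match them.
import torch
import torch.nn as nn

n_views_teacher, n_views_student, avatar_params = 9, 3, 64
pixels = 16 * 16                                   # tiny toy images

teacher = nn.Sequential(nn.Flatten(), nn.Linear(n_views_teacher * pixels, avatar_params))
student = nn.Sequential(nn.Flatten(), nn.Linear(n_views_student * pixels, avatar_params))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    all_views = torch.rand(8, n_views_teacher, 16, 16)     # captured training frames
    with torch.no_grad():
        target = teacher(all_views)                        # "ground truth" avatar params
    pred = student(all_views[:, :n_views_student])         # student only sees 3 views
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad(); loss.backward(); opt.step()
```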
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. We recently talked about an amazing paper that uses mixture theory to simulate the interaction of liquids and fabrics. And this new work is about simulating fluid flows where we have some debris or other foreign particles in our liquids. This is really challenging. For example, one of the key challenges is incorporating two-way coupling into this process. This means that the sand is allowed to have an effect on the fluid, but at the same time as the fluid sloshes around, it also moves the sand particles within. Now, before you start wondering whether this is real footage or not, the fact that this is a simulation should become clear now because what you see here in the background is where the movement of the two domains are shown in isolation. Just look at how much interaction there is between the two. Unbelievable. Beautiful simulation. Ice cream for your eyes. This new method also contains a novel density correction step, and if you watch closely here, you'll notice why. Got it? Let's watch it again. If we try to run this elastoplastic simulation for these two previous methods, they introduce a gain in density here, or in other words, we end up with more stuff than we started with. These two rows show the number of particles in the simulation in the worst case scenario, and as you see, some of these incorporate millions of particles for the fluid and many hundreds of thousands for the sediment. Since this work uses the material point method, which is a hybrid simulation technique that uses both particles and grids, the delta x row denotes the resolution of the simulation grid. Now, since these grids are often used for 3D simulations, we need to raise the 256 and the 512 to the third power, and with that, we get a simulation grid with up to hundreds of millions of points, and we haven't even talked about the particle representation yet. In the face of all of these challenges, the simulator is able to compute one frame in a matter of minutes and not hours or days, which is an incredible feat. With this, I think it is easy to see that computer graphics research is improving at a staggering pace. What a time to be alive. If you enjoyed this episode, please consider supporting us through Patreon. Our address is patreon.com slash 2 minute papers, or just click the link in the video description. With this, we can make better videos for you. You can also get your name immortalized in the video description as a key supporter, or watch these videos earlier than others. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. In this series, we talk about amazing research papers. However, when a paper is published, a talk often also has to be given at a conference. And this paper is about the talk itself, or more precisely, how to enhance your presentation with dynamic graphics. Now, these effects can be added to music videos and documentary movies, however, they take a long time and cost a fortune. But not these ones, because this paper proposes a simple framework in which the presenter stands before a Kinect camera and an AR mirror monitor and can trigger these cool little graphical elements with simple gestures. A key part of the paper is the description of a user interface where we can design these mappings. This skeleton represents the presenter who is tracked by the Kinect camera, and as you see here, we can define interactions between these elements and the presenter, such as grabbing the umbrella, pulling up a chart, and more. As you see with the examples here, using such a system leads to more immersive storytelling, and note that again, this is an early implementation of this really cool idea. A few more papers down the line, I can imagine rotatable and deformable 3D models and photorealistic rendering entering the scene. Well, sign me up for that. If you have any creative ideas as to how this could be used or improved, make sure to leave a comment. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. Today, the game we'll be talking about is 6-player no-limit Texas hold'em poker, which is one of the more popular poker variants out there. And the goal of this project was to build a poker AI that never played against a human before and learns entirely through self play and is able to defeat professional human players. During these tests, two of the players that were tested against are former World Series of Poker main event winners. And of course, before you ask, yes, in a moment we'll look at an example hand that shows how the AI traps a human player. Poker is very difficult to learn for AI bots because it is a game of imperfect information. For instance, chess is a game of perfect information where we see all the pieces and can make a good decision if we analyze the situation well. However, not so much in poker because only at the very end of the hand do the players show what they have. This makes it extremely difficult to train an AI to do well. And now, let's have a look at the promised example hand here. We talked about imperfect information just a moment ago, so I'll note that all the cards are shown face up for us to make the analysis of this hand easier. Of course, this is not how the hands were played. You see the AI up here marked with P2 sitting pretty with a jack and a queen and before the flop happens, which is when the first three cards are revealed, only one human player seems to be interested in this hand. During the flop, the AI paired its queen and has a jack as a kicker, which, if played well, is going to be disastrous for the human player. So why is that? You see, the human player also paired their queen but has a weaker kicker and will therefore lose to the AI's hand. In this case, these players think they have a strong hand and will get lots of value out of it, only to find out that they will be the one milked by the AI. So how exactly does that happen? Well, look here carefully. The bot shows weakness by checking here, to which the human player's answer is a small raise. But it again shows weakness by just calling the raise and checking again on the turn, essentially saying, I am weak, don't hurt me. By the time we get to the river, the AI, again, appears weak to the human player who now tries to milk the bot with a mid-sized raise and the AI recognizes that now is the time to pounce, the confused player calls the bet and gets milked for almost all their money. An excellent slowplay from the AI. Now, note that one hand is difficult to evaluate in isolation. This was a great hand indeed, but we need to look at entire games to get a better grasp of the capabilities of this AI. So if we look at the dollar equivalent value of the chips in the game, the AI was able to win $1,000 from these five professional poker players every hour. It also uses very little resources, can be trained in the cloud for only several hundred dollars and exceeds human level performance within only 20 hours. What you see here is a decision tree that explains how the algorithm figures out whether to check or bet, and as you see here, this tree is traversed in a depth-first way, so first it descends deep into one possible decision and later, as more options are being unrolled and evaluated, the probability of these choices are updated above. In simpler words, first the AI seems somewhat sure that checking would be the good choice here, but after carefully evaluating both decisions, it is able to further reinforce this choice. You will find a small code sketch of regret matching, a basic building block behind this family of poker AIs, right after this episode. 
One of the professional players noted that the bot is a much more efficient bluffer than a human and always puts on a lot of pressure. Now note that this is also a general learning technique and is not tailored specifically for poker and as a result, the authors of the paper noted that they will also try it on other imperfect information games in the future. What a time to be alive! This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley. It is really easy to use. In fact, this blog post describes how you can visualize your Keras models with only one line of code. When you run this model, it will also start saving relevant metrics for you and here, you can see the visualization of the mentioned model and these metrics as well. That's it. You're done. It can do a lot more than this, of course and you know what the best part is. The best part is that it's free and will always be free for academics and open source projects. Make sure to visit them through wandb.com slash papers, w-a-n-d-b.com slash papers or just click the link in the video description and sign up for a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
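For the curious, here is a tiny sketch of regret matching, the basic update behind the counterfactual regret minimization family of techniques that modern poker bots build on; the actual system adds Monte Carlo sampling, abstraction and depth-limited search on top of this. The toy payoff table is made up for illustration.

```python
# Regret matching for a single decision point against a fixed payoff table:
# keep track of how much better each action would have done than the action we
# played, and play actions in proportion to their accumulated positive regret.
import numpy as np

actions = ["fold", "call", "raise"]
payoff = np.array([0.0, 0.4, 0.3])       # hypothetical expected value of each action
regret_sum = np.zeros(len(actions))
strategy_sum = np.zeros(len(actions))
rng = np.random.default_rng(0)

def strategy_from_regrets(regrets):
    positive = np.maximum(regrets, 0.0)
    if positive.sum() > 0:
        return positive / positive.sum()
    return np.full(len(regrets), 1.0 / len(regrets))   # uniform when no regret yet

for t in range(10000):
    strategy = strategy_from_regrets(regret_sum)
    strategy_sum += strategy
    a = rng.choice(len(actions), p=strategy)
    # Regret: how much better each action would have done than the one we played.
    regret_sum += payoff - payoff[a]

average_strategy = strategy_sum / strategy_sum.sum()
# The average strategy concentrates on "call", the highest-payoff action here.
```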
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Neural style transfer just appeared four years ago in 2015. Style transfer is an interesting problem in machine learning research where we have two input images, one for content and one for style and the output is our content image reimagined with this new style. The cool part is that the content can be a photo straight from our camera and the style can be a painting which leads to the super fun results that you see here. However, most of these works are about photos. So what about video? Well, hold on to your papers because this new work does this for video and the results are marvelous. The process goes as follows. We take a few keyframes from the video and the algorithm propagates our style to the remaining frames of the video and wow. Those are some silky smooth results. Specifically, what I would like you to take a look at is the temporal coherence of the results. Proper temporal coherence means that the individual images within this video are not made independently from each other which would introduce a disturbing flickering effect. I see none of that here which makes me very, very happy. And now hold on to your papers again because this technique does not use any kind of AI. No neural networks or other learning algorithms were used here. Okay, great, no AI. But is it any better than its AI based competitors? Well, look at this. Hell yeah. This method does this magic through building a set of guide images. For instance, a mask guide highlights the stylized objects. And sure enough, we also have a temporal guide that penalizes the algorithm for making too much of a change from one frame to the next one, ensuring that the results will be smooth. Make sure to have a look at the paper for a more exhaustive description of these guides. Now, if we make a carefully crafted mixture from these guide images and plug them into a previous algorithm by the name StyLit, we talked about this algorithm before in the series, the link is in the video description, then we get these results that made me fall out of my chair. I hope you were more prepared and held on to your papers. Let me know in the comments. And you know what is even better? You can try this yourself because the authors made a standalone tool available free of charge, just go to ebsynth.com or just click the link in the video description. Let the experiments begin. Thanks for watching and for your generous support and I'll see you next time.
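As a small illustration of how several guides can be folded into one matching cost when deciding which stylized keyframe patch should be copied into a new frame, here is a sketch; the weights and error terms are placeholders, and the real system performs a full patch-based synthesis rather than a per-patch comparison like this.

```python
# Combining guide errors into a single patch-matching cost. Lower is better:
# a candidate patch is penalized if its appearance, its segmentation mask, or
# its deviation from the previous frame disagree with the corresponding guides.
def patch_cost(style_err, mask_err, temporal_err,
               w_style=1.0, w_mask=2.0, w_temporal=4.0):
    return w_style * style_err + w_mask * mask_err + w_temporal * temporal_err

# Two hypothetical candidate patches for the same target location:
candidates = [
    dict(style_err=0.10, mask_err=0.00, temporal_err=0.30),
    dict(style_err=0.15, mask_err=0.00, temporal_err=0.05),
]
best = min(candidates, key=lambda c: patch_cost(**c))
# The second candidate wins: it changes less from the previous frame,
# which is exactly what keeps the output temporally coherent.
```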
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, I've got some fluids for you. Most hobby projects with fluid simulations involve the simulation of a piece of sloshing liquid in a virtual container. However, if you have a more elaborate project at hand, the story is not so simple anymore. This new paper elevates the quality and realism of these simulations through using mixture theory. Now, what is there to be mixed, you ask? Well, what mixture theory does for us is that it helps simulating how liquids interact with fabrics, including splashing, wringing, and more. These simulations have to take into account that the fabrics may absorb some of the liquids poured onto them and get saturated, how diffusion transports this liquid to nearby yarn strands, or what you see here is a simulation with porous plastic where water flows off of and also through this material as well. Here you see how it can simulate honey dripping down on a piece of cloth. This is a real good one. If you're a parent with small children, you probably have lots of experience with this situation and can assess the quality of this simulation really well. The visual fidelity of these simulations is truly second to none. I love it. Now the question naturally arises, how do we know if these simulations are close to what would happen in reality? We don't just make a simulation and accept it as true to life if it looks good, right? Well, of course not. The paper also contains comparisons against real world laboratory results to ensure the validity of these results, so make sure to have a look at it in the video description. And if you've been watching this series for a while, you notice that I always recommend that you check out the papers yourself. And even though it is true that these are technical write-ups that are meant to communicate results between experts, it is beneficial for everyone to also read at least a small part of it. If you do, you'll not only see beautiful craftsmanship, but you'll also learn how to make a statement and how to prove the validity of this statement. This is a skill that is necessary to find truth. So please read your papers. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. This paper is about endowing colored images with depth information, which is typically done through depth maps. Depth maps describe how far parts of the scene are from the camera and are given with a color coding where the darker the colors are, the further away the objects are. These depth maps can be used to apply a variety of effects to the image that require knowledge about the depth of the objects within. For instance, selectively defocusing parts of the image, or even removing people and inserting new objects to the scene. If we, humans, look at an image, we have an intuitive understanding of it and have the knowledge to produce a depth map with pen and paper. However, this would, of course, be infeasible and would take too long, so we would prefer a machine to do it for us instead. But of course, machines don't understand the concept of 3D geometry, so they probably cannot help us with this. Or, with the power of machine learning algorithms, can they? This new paper from scientists at Google Research attempts to perform this, but with a twist. The twist is that the learning algorithm is unleashed on a dataset of what they call mannequins, or in other words, real humans are asked to stand around frozen in a variety of different positions while the camera moves around in the scene. The goal is that the algorithm would have a look at these frozen people and take into consideration the parallax of the camera movement. This means that the objects closer to the camera move more than the objects that are further away. And it turns out, this kind of knowledge can be exploited so much so that if we train our AI properly, it will be able to predict the depth maps of people that are moving around even if it had only seen frozen people before. This is particularly difficult because if we have an animation, we have to make sure that the guesses are consistent across time, or else we get these annoying flickering effects that you see here with previous techniques. It is still there with the new method, especially for the background, but the improvement on the human part of the image is truly remarkable. Beyond the removal and insertion techniques we talked about earlier, I am also really excited for this method as it may open up the possibility of creating video versions of these amazing portrait mode images with many of the newer smartphones people have in their pockets. Thanks for watching and for your generous support, and I'll see you next time.
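The geometric cue that the frozen-people footage provides can be written down in a couple of lines: for a camera that moves sideways, a point's apparent pixel shift between two frames is inversely proportional to its depth. The learned network of course does far more than this, but this is the underlying relation; the numbers below are made up for illustration.

```python
# Parallax cue: depth = focal_length * baseline / disparity.
import numpy as np

focal_length_px = 1000.0        # focal length in pixels (hypothetical camera)
baseline_m = 0.05               # how far the camera moved between the two frames

disparity_px = np.array([50.0, 10.0, 2.0])      # measured pixel shifts of 3 points
depth_m = focal_length_px * baseline_m / disparity_px
# -> [1.0, 5.0, 25.0] meters: the point that moved the most is the closest.
```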
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. In the last few years, we have seen a bunch of new AI-based techniques that were specialized in generating new and novel images. This is mainly done through learning-based techniques, typically a generative adversarial network, GAN in short, which is an architecture where a generator neural network creates new images and passes it to a discriminator network which learns to distinguish real photos from these fake generated images. These two networks learn and improve together so much so that many of these techniques have become so realistic that we often can't tell they are synthetic images unless we look really closely. You see some examples here from BigGAN, the previous technique that is based on this architecture. So in these days, many of us are wondering, is there life beyond GANs? Can they be matched in terms of visual quality by a different kind of technique? Well, have a look at this paper because it proposes a much simpler architecture that is able to generate convincing, high-resolution images quickly for a ton of different object classes. The results it is able to turn out are nothing short of amazing. Just look at that. To be able to proceed to the key idea here, we first have to talk about latent spaces. You can think of a latent space as a compressed representation that tries to capture the essence of the dataset that we have at hand. You can see a similar latent space method in action here that captures the key features that set different kinds of fonts apart and presents these options on a 2D plane. And here, you see our technique that builds a latent space for modeling a wide range of photorealistic material models. And now onto the promised key idea. As you have guessed, this new technique uses a latent space, which means that instead of thinking in pixels, it thinks more in terms of these features that commonly appear in natural photos, which also makes the generation of these images up to 30 times quicker, which is super useful, especially in the case of larger images. You will find a tiny autoencoder sketch that illustrates the concept of a latent space right after this episode. While we are at that, it can rapidly generate new images with a size of approximately a thousand by a thousand pixels. Machine learning is a research field that is enjoying a great deal of popularity these days, which also means that so many papers appear every day, it's getting difficult to keep track of all of them. The complexity of the average technique is also increasing rapidly over time, and what I like most about this paper is that it shows us that surprisingly simple ideas can still lead to breakthroughs. What a time to be alive. Make sure to have a look at the paper in the description as it describes how this method is able to generate more diverse images than previous techniques and how we can measure diversity at all because that is no trivial matter. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It is like a shared logbook for your team, and with this, you can compare your own experiment results, put them next to what your colleagues did, and you can discuss your successes and failures much easier. It takes less than five minutes to set up and is being used by OpenAI, Toyota Research, Stanford, and Berkeley. 
In fact, it is so easy to add to your project, the CEO himself, Lucas, instrumented it for you for this paper, and if you look here, you can see how the output images and the reconstruction error evolve over time and you can even add your own visualizations. It is a sight to behold, really, so make sure to check it out in the video description, and if you liked it, visit them through wandb.com slash papers, w-a-n-d-b.com slash papers, or just use the link in the video description and sign up for a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
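To make the latent space concept from the episode above concrete, here is a minimal autoencoder sketch: images are squeezed through a small code and reconstructed from it, so the code has to capture the dataset's essential features. The paper's actual generator is far more sophisticated; this only demonstrates the idea of compressing to, and decoding from, a latent space.

```python
# Minimal autoencoder sketch to illustrate what a "latent space" is.
import torch
import torch.nn as nn

latent_dim = 16
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 3 * 32 * 32), nn.Sigmoid(),
                        nn.Unflatten(1, (3, 32, 32)))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

images = torch.rand(64, 3, 32, 32)               # placeholder training images
for step in range(200):
    code = encoder(images)                       # compress into the latent space
    recon = decoder(code)                        # generate an image back from it
    loss = nn.functional.mse_loss(recon, images)
    opt.zero_grad(); loss.backward(); opt.step()

new_code = torch.randn(1, latent_dim)            # a point in the latent space...
synthesized = decoder(new_code)                  # ...decoded into a new image
```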
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. In this series, we often marvel at light simulation programs that are able to create beautiful images by simulating the path of millions and millions of light rays. To make sure that our simulations look lifelike, we not only have to make sure that these rays of light interact with the geometry of the scene in a way that's physically plausible, but the materials within the simulation also have to reflect reality. Now that's an interesting problem. How do we create a convincing mathematical description of real-world materials? Well, one way to do that is taking a measurement device, putting an example of the subject material in it, and measuring how rays of light bounce off of it. This work introduces a new database for sophisticated material models and includes interesting optical effects such as iridescence, which gives the colorful physical appearance of bubbles and fuel-water mixtures. It can do colorful mirror-like specular highlights and more, so we can include these materials in our light simulation programs. You see this database in action in this scene that showcases a collection of these complex material models. However, creating such a database is not without perils because normally these materials take prohibitively many measurements to reproduce properly and the interesting regions are often found at quite different places. This paper proposes a solution that adapts the location of these measurements to where the action happens, resulting in a mathematical description of these materials that can be measured in a reasonable amount of time. It also takes very little memory when we run the actual light simulation on them. So, as if light transport simulations weren't beautiful enough, they are about to get even more realistic in the near future. Super excited for this. Make sure to have a look at the paper, which is so good, I think I sank into a minor state of shock upon reading it. If you're enjoying learning about light transport, make sure to check out my course on this topic at the Technical University of Vienna. I still teach this at the university for about 20 master students at a time and thought that the teachings shouldn't only be available for the lucky few people who can afford a college education. Clearly, the teachings should be available for everyone, so we recorded it and put it online and now everyone can watch it free of charge. I was quite stunned to see that more than 25,000 people decided to start it, so make sure to give it a go if you're interested. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. In 2017, scientists at OpenAI embarked on an AI project where they wanted to show a neural network a bunch of Amazon product reviews and wanted to teach it to be able to generate new ones or continue a review when given one. Now, so far, this sounds like a nice hobby project, definitely not something that would justify an entire video on this channel. However, during this experiment, something really unexpected happened. Now it is clear that when the neural network reads these reviews, it knows that it has to generate new ones, therefore it builds up a deep understanding of language. However, beyond that, it used surprisingly few neurons to continue these reviews and scientists were wondering, why is that? Usually, the more neurons, the more powerful the AI can get, so why use so few neurons? The reason for that is that it has learned something really interesting. I'll tell you what in a moment. This neural network was trained in an unsupervised manner, therefore it was told what the task was but was given no further supervision, no labeled datasets, no additional help, nothing. Upon closer inspection, they noticed that the neural network has built up a knowledge of not only language, but also built a sentiment detector as well. This means that the AI recognized that in order to be able to continue a review, it needs to be able to efficiently detect whether the new review seems positive or not. And thus, it dedicated a neuron to this task, which is referred to as the sentiment neuron. However, it was no ordinary sentiment neuron, it was a proper state of the art sentiment detector. In this diagram, you see this neuron at work. As it reads through the review, it starts out detecting a positive outlook which you can see with green and then, uh oh, it detects that the review has taken a turn and is not happy with the movie at all. And all this was learned on a relatively small dataset. Now, if we have this sentiment neuron, we don't just have to sit around and be happy for it. Let's play with it. For instance, by overwriting this sentiment neuron in the network, we can force it to create positive or negative reviews. You will find a small illustrative sketch of this idea right after this episode. Here is a positive example. Quote. Just what I was looking for, nice fitted pants, exactly matched seam to color contrast with other pants I own, highly recommended and also very happy. And if we overwrite the sentiment neuron to negative, we get the following. The package received was blank and has no barcode, a waste of time and money. There are some more examples here on the screen for your pleasure. This paper teaches us that we should endeavor to not just accept these AI-based solutions but look under the hood and sometimes a gold mine of knowledge can be found within. Absolutely amazing. If you have enjoyed this episode and would like to help us make better videos for you in the future, please consider supporting us on patreon.com slash 2 minute papers or just click the link in the video description. In return, we can offer you early access to these episodes or even add your name to our key supporters so you can appear in the description of every video and more. We also support cryptocurrencies like Bitcoin, Ethereum and Litecoin. The majority of these funds is used to improve the show and we use a smaller part to give back to the community and empower science conferences like the Central European Conference on Computer Graphics. 
This is a conference that teaches young scientists to present their work at bigger venues later and with your support it's now the second year we've been able to sponsor them which warms my heart. This is why every episode ends with you know the drill. Thanks for watching and for your generous support and I'll see you next time.
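A purely illustrative sketch of the sentiment neuron idea follows, with a hypothetical, untrained character-level LSTM: one coordinate of the hidden state is read out as a sentiment score while the text is processed, and can be clamped during generation to push the output positive or negative. This is not OpenAI's model or code, the unit index is made up, and an untrained model will of course produce gibberish; the point is only to show what "overwriting a neuron" means mechanically.

```python
# Sketch: read one hidden-state coordinate as a sentiment score, and clamp it
# while generating to steer the output. Everything here is hypothetical.
import torch
import torch.nn as nn

vocab, hidden = 128, 256
sentiment_unit = 42                      # hypothetical index of the sentiment neuron

embed = nn.Embedding(vocab, 32)
lstm = nn.LSTM(32, hidden, batch_first=True)
head = nn.Linear(hidden, vocab)

def generate(prompt_ids, steps=50, clamp=None):
    ids = list(prompt_ids)
    h = torch.zeros(1, 1, hidden)
    c = torch.zeros(1, 1, hidden)
    sentiments = []
    for _ in range(steps):
        x = embed(torch.tensor([[ids[-1]]]))
        out, (h, c) = lstm(x, (h, c))
        if clamp is not None:
            h = h.clone()
            h[0, 0, sentiment_unit] = clamp                 # overwrite the neuron
        sentiments.append(h[0, 0, sentiment_unit].item())   # track it while reading
        probs = torch.softmax(head(out[:, -1]), dim=-1)
        ids.append(torch.multinomial(probs, 1).item())
    return ids, sentiments

review_so_far = [ord(ch) for ch in "Just what I was"]
positive_ids, trace = generate(review_so_far, clamp=+3.0)   # push toward positive
negative_ids, _ = generate(review_so_far, clamp=-3.0)       # push toward negative
```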
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. This work is about creating virtual characters with a skeletal system, adding more than 300 muscles and teaching them to use these muscles to kick, jump, move around and perform other realistic human movements. Throughout this video, you will see the activated muscles with red. I am loving the idea, which turns out comes with lots of really interesting corollaries. For instance, this simulation realistically portrays how increasing the amount of weight to be lifted changes what muscles are being trained during a workout. These agents also learned to jump really high, and you can see a drastic difference between the movement required for a mediocre jump and an amazing one. As we are teaching these virtual agents within a simulation, we can perform all kinds of crazy experiments by giving them horrendous special conditions, such as bone deformities, a stiff ankle, muscle deficiencies, and watch them learn to walk despite these setbacks. For instance, here you see that the muscles in the left thigh are contracted, resulting in a stiff knee, and as a result, the agent learned an asymmetric gait. If the thigh bones are twisted inwards, ouch. The AI shows that it is still possible to control the muscles to walk in a stable manner. I don't know about you, but at this point I'm feeling quite sorry for these poor simulated beings, so let's move on. We have plenty of less gruesome, but equally interesting things to test here. In fact, if we are in a simulation, why not take it further? It doesn't cost anything. That's exactly what the authors did, and it turns out that we can even simulate the use of prosthetics. However, since we don't need to manufacture these prosthetics, we can try a large number of different designs and evaluate their usefulness without paying a dime. How cool is that? So far, we have hamstrung this poor character many, many times, so why not try to heal it? With this technique, we can also quickly test the effect of different kinds of surgeries on the movement of the patient. With this, you can see here how a hamstring surgery can extend the range of motion of this skeleton. It also tells us not to try our luck with one leg squats. You heard it here, folks. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. The last few years have been an amazing ride when it comes to research works for creating facial reenactments for real characters. Beyond just transferring our gestures to a video footage of an existing talking head, controlling their gestures like video game characters, and full body movement transfer are also a possibility. With WaveNet and its many variants, we can also learn someone's way of speaking, write a piece of text, and make an audio waveform where we can impersonate them using their own voice. So, what else is there to do in this domain? Are we done? No, no, not at all. Hold onto your papers because with this amazing new technique, what we can do is look at the transcript of a talking head video, remove parts of it, or add to it, just as we would edit any piece of text, and this technique produces both the audio and the matching video of this person uttering these words. Check this out. With Apple's stock price at $191.45 per share. It works by looking through the video, collecting small sounds that can be used to piece together this new word that we've added to the transcript. The authors demonstrate this by adding the word FOX to the transcript. This can be pieced together from the V, which appears in the word VIPER, and taking OX as a part of another word found in the footage. As a result, one can make the character say FOX without ever hearing her utter this word before. Then we can look for not only the audio occurrences for these sounds, but the video footage of how they are being said, and in the paper, a technique is proposed to blend these video assets together. Finally, we can provide all this information to a neural renderer that synthesizes a smooth video of this talking head. This is a beautiful architecture with lots of contributions, so make sure to have a look at the paper in the description for more details. And of course, as it is not easy to measure the quality of these results in a mathematical manner, a user study was made where they asked some fellow humans which is the real footage, and which one was edited. You will see the footage edited by this algorithm on the right. And it's not easy to tell which one is which, and it also shows in the numbers, which are not perfect, but they clearly show that the fake video is very often confused with the real one. Did you find any artifacts that give the trick away? Perhaps the sentence was said a touch faster than expected. Found anything else? Let me know in the comments below. The paper also contains tons of comparisons against previous works. So, in the last few years, the trend seems clear. The bar is getting lower, it is getting easier and easier to produce these kinds of videos, and it is getting harder and harder to catch them with our naked eyes, and now we can edit the transcript of what is being said, which is super convenient. I would like to note that AIs also exist that can detect these edited videos with a high confidence. I put up the ethical considerations of the authors here, it is definitely worthy of your attention as it discusses how they think about these techniques. The motivation for this work was mainly to enhance digital storytelling by removing filler words, potentially flubbed phrases, or retiming sentences in talking head videos. There is so much more to it, so make sure to pause the video and read their full statement. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. Today we are going to talk about the material point method. This method uses both grids and particles to simulate the movement of snow, dripping honey, interactions of granular solids, and a lot of other really cool phenomena on our computers. This can be used, for instance, in the movie industry, to simulate what a city would look like if it were flooded. However, it has its own limitations, which you will hear more about in a moment. This paper showcases really cool improvements to this technique. For instance, it enables us to run these simulations twice as fast and can simulate new phenomena that were previously not supported by the material point method. One is the simulation of complex, thin boundaries that enables us to cut things, so in this video, expect lots of virtual characters to get dismembered. I think this might be the only channel on YouTube where we can say this and celebrate it as an amazing scientific discovery. And the other key improvement of this paper is introducing two-way coupling, which means, in the example that you see here, that the water changes the movement of the wheel, but the wheel also changes the movement of the water. It is also demonstrated quite aptly here by this elastoplastic jello scene in which we can throw in a bunch of blocks of different densities and it is simulated beautifully here how they sink into the jello deeper and deeper as a result. Here you see a real robot running around in a granular medium. And here we have a simulation of the same phenomenon and can marvel at how close the result is to what would happen in real life. Another selling point of this method is that it is easy to implement, which is demonstrated here, and what you see here is the essence of this algorithm implemented in 88 lines of code. Wow! Now these methods still take a while as there are a lot of deformations and movements to compute and we can only advance time in very small steps, and as a result the speed of such simulations is measured not in frames per second but in seconds per frame. These are the kinds of simulations that we like to leave on the machine overnight. If you want to see something that is done with a remarkable amount of love and care, please read this paper. And I don't know if you have heard about this framework called Taichi. This contains implementations for many amazing papers in computer graphics. Lots of paper implementations on animation, light transport simulations, you name it, a total of more than 40 papers are implemented there. And I was thinking this is really amazing. I wonder which group made this. Then I noticed it was written by one person and that person is Yuanming Hu, the scientist who is the lead author of this paper. This is insanity. Thanks for watching and for your generous support and I'll see you next time.
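For a taste of the hybrid particle and grid idea behind the material point method, here is a tiny sketch of the particle-to-grid and grid-to-particle transfers at its heart; real MPM, including the 88-line implementation mentioned above, uses smooth B-spline weights over several grid nodes and also tracks deformation gradients and stresses, all of which are omitted here.

```python
# Tiny sketch of the particle/grid transfers in MPM-style simulation:
# particles scatter mass and momentum to nearby grid nodes, the grid applies
# forces (only gravity here), and velocities are gathered back to the particles.
import numpy as np

n_particles, n_grid, dx, dt, gravity = 64, 16, 1.0 / 16, 1e-3, -9.8
rng = np.random.default_rng(0)
x = rng.uniform(0.3, 0.7, (n_particles, 2))       # particle positions in [0,1]^2
v = np.zeros((n_particles, 2))                    # particle velocities

for step in range(100):
    grid_m = np.zeros((n_grid, n_grid))
    grid_mv = np.zeros((n_grid, n_grid, 2))
    # Particle-to-grid: scatter mass and momentum to the nearest node
    # (real MPM spreads each particle over several nodes with B-spline weights).
    for p in range(n_particles):
        i, j = (x[p] / dx).astype(int).clip(0, n_grid - 1)
        grid_m[i, j] += 1.0
        grid_mv[i, j] += v[p]
    # Grid update: convert momentum to velocity, apply gravity.
    vel = np.zeros_like(grid_mv)
    nonzero = grid_m > 0
    vel[nonzero] = grid_mv[nonzero] / grid_m[nonzero][:, None]
    vel[..., 1] += dt * gravity
    vel[:, :2, 1] = np.maximum(vel[:, :2, 1], 0.0)        # crude floor at the bottom
    # Grid-to-particle: gather velocities back and move the particles.
    for p in range(n_particles):
        i, j = (x[p] / dx).astype(int).clip(0, n_grid - 1)
        v[p] = vel[i, j]
        x[p] = np.clip(x[p] + dt * v[p], 0.0, 1.0 - 1e-6)
```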
Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir. This work presents a learning-based method that is able to take just a handful of photos and use those to synthesize a moving virtual character. Not only that, but it can also synthesize these faces from new viewpoints that the AI hasn't seen before. These results are truly sublime; however, hold on to your papers, because it also works from as little as just one input image. This is referred to as one-shot learning. You see some examples here, but wait a second, really, just one image? If all it needs is just one photo, this means that we can use famous photographs and even paintings and synthesize animations for them. Look at that. Of course, if we show multiple photos to the AI, it is able to synthesize better output results. You see such a progression here as a function of the amount of input data. The painting part I find to be particularly cool because it strays away from the kind of data the neural networks were trained on, which is photos. However, if we have proper intelligence, the AI can learn how different parts of the human face move and generalize this knowledge to paintings as well. The underlying laws are the same, only the style of the output is different. Absolutely amazing. The paper also showcases an extensive comparison section against previous works, and as you see here, nothing really compares to this kind of quality. I have heard the quote "any sufficiently advanced technology is indistinguishable from magic" so many times in my life, and I was like, okay, well, maybe, but I'm telling you, this is one of those times when I really felt that I am seeing magic at work on my computer screen. So, I know what you're thinking. How can all this wizardry be done? This paper proposes a novel architecture where three neural networks work together. One, the embedder takes color images with landmark information and compresses them down into the essence of these images. Two, the generator takes a set of landmarks, a crude approximation of the human face, and synthesizes a photorealistic result from it. And three, the discriminator looks at both real and fake images and tries to learn how to tell them apart. As a result, these networks learn together and, over time, they improve together. So much so that they can create these amazing animations from just one source photo. The authors also released a statement on the purpose and effects of this technology, which I'll leave here for a few seconds for our interested viewers. This work was partly done at the Samsung AI Lab and Skoltech. Congratulations to both institutions, killer paper. Make sure to check it out in the video description. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It is like a shared logbook for your team, and with this, you can compare your own experiment results, put them next to what your colleagues did, and you can discuss your successes and failures much easier. It takes less than five minutes to set up and is being used by OpenAI, Toyota Research, Stanford, and Berkeley. It was also used in this OpenAI project that you see here, which we covered earlier in the series. They reported that experiment tracking was crucial in this project and that these tools saved them quite a bit of time and money. If only I had access to such a tool during our last research project where I had to compare the performance of neural networks for months and months.
Well, it turns out I will be able to get access to these tools because, get this, it's free and will always be free for academics and open source projects. Make sure to visit them through wandb.com or just click the link in the video description and sign up for a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
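To make the three-network setup from this episode a bit more concrete, here is a heavily simplified, hypothetical PyTorch sketch: an embedder that averages a few frames into one identity code, a generator conditioned on landmarks and that code, and a discriminator. All layer sizes are made up, and the real system is far deeper and uses adaptive instance normalization and a projection discriminator.

```python
import torch
import torch.nn as nn

class Embedder(nn.Module):
    """Compresses K reference frames (image + landmark channels) into one identity code."""
    def __init__(self, code_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 4, 2, 1), nn.ReLU(),   # 3 image + 3 landmark channels
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, code_dim))
    def forward(self, frames):                       # frames: (K, 6, H, W)
        return self.net(frames).mean(dim=0)          # average the K per-frame codes

class Generator(nn.Module):
    """Turns a landmark image plus the identity code into a synthesized face."""
    def __init__(self, code_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + code_dim, 64, 3, 1, 1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, 1, 1), nn.Tanh())
    def forward(self, landmarks, code):              # landmarks: (1, 3, H, W)
        code_map = code.view(1, -1, 1, 1).expand(-1, -1, *landmarks.shape[2:])
        return self.net(torch.cat([landmarks, code_map], dim=1))

class Discriminator(nn.Module):
    """Scores (image, landmark) pairs as real or synthesized, patch by patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, 2, 1))
    def forward(self, image, landmarks):
        return self.net(torch.cat([image, landmarks], dim=1))

frames = torch.randn(8, 6, 64, 64)                   # 8 reference frames of one person
landmarks = torch.randn(1, 3, 64, 64)
code = Embedder()(frames)
fake = Generator()(landmarks, code)
score = Discriminator()(fake, landmarks)
print(fake.shape, score.shape)
```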
Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir. Style transfer is an interesting problem in machine learning research where we have two input images, one for content and one for style, and the output is our content image reimagined with this new style. The cool part is that the content can be a photo straight from our camera and the style can be a painting, which leads to super fun and really good looking results. We have seen plenty of papers doing variations on style transfer, but can we push this concept further? And the answer is yes. For instance, few people know that style transfer can also be done in 3D. If you look here, you see an artist performing this style transfer by drawing on a simple sphere and getting their artistic style to carry over to a complicated piece of 3D geometry. We talked about this technique in Two Minute Papers episode 94, and for your reference, we are currently over episode 340. Leave a comment if you've been around back then. And this previous technique led to truly amazing results, but still had two weak points. One, it took too long. As you see here, this method took around a minute or more to produce these results. And hold on to your papers, because this new paper is approximately a thousand times faster than that, which means that it can produce a hundred frames per second at a whopping 4K resolution. But of course, none of this matters if the visual quality is not similar. And if you look closely, you see that the new results are indeed really close to the reference results of the older method. So, what was the other problem? The other problem was the lack of temporal coherence. This means that when creating an animation, it seems like each of the individual frames of the animation were drawn separately by an artist. In this new work, this is not only eliminated, as you see here, but the new technique even gives us the opportunity to control the amount of flickering. With these improvements, this is now a proper tool to help artists perform this 3D style transfer and create these rich virtual worlds much quicker and easier in the future. It also opens up the possibility for novices to do that, which is an amazing value proposition. Limitations still apply; for instance, if we have a texture with some regularity, such as this brick wall pattern here, the alignment and continuity of the bricks on the 3D model may suffer. This can be fixed, but it is a little labor intensive. However, you know what I'm saying: two more papers down the line and this will likely cease to be an issue. And what you've seen here today is just one paper down the line from the original work, and we can already do 4K resolution at 100 frames per second. Unreal! Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir. Not so long ago, OpenAI released GPT-2, an AI that was trained to look at a piece of text and perform common natural language processing operations on it, for instance, answering questions, summarization, and more. But today, we are going to be laser-focused on only one of those tasks, and that task is continuation, where we give an AI a bunch of text and ask it to continue it. However, as these learning algorithms are quite general by design, here comes the twist: who said that this can only work for text? Why not try it on composing music? So, let's have a look at some results where only the first six notes were given from a song, and the AI was asked to continue it. Love it! This is a great testament to the power of general learning algorithms. As you've heard, this works great for a variety of different genres as well, and not only that, but it can also create really cool blends between genres. Listen as the AI starts out from the first six notes of a Chopin piece and transitions into a pop style with a bunch of different instruments entering a few seconds in. And great news, because if you look here, we can try our own combinations through an online demo as well. On the left side, we can specify and hear the short input sample and ask for a variety of different styles for the continuation. It is amazing fun, try it, I've put a link in the video description. I was particularly impressed with this combination. Now, this algorithm is also not without limitations, as it has difficulties pairing instruments that either don't go too well together or for which there is a lack of training data on how they should sound together. The source code is also either already available as of the publishing of this video or will be available soon. If so, I will come back and update the video description with the link. OpenAI has also published an almost two-hour concert with tons of different genres, so make sure to head to the video description and check it out yourself. I think these techniques are either already powerful enough or will soon be powerful enough to raise important copyright questions, and we will need plenty of discussion on who really owns this piece of music. What do you think? Let me know in the comments. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir. The title of this paper is very descriptive; it says, controllable characters extracted from real-world videos. This sounds a little like science fiction, so let's pick this apart. If we forget about the controllable part, we get something that you've seen in this series many times: pose estimation. Pose estimation means that we have a human character in an image or a video, and we have a computer program look at it and tell us the current pose this character is taking. This is useful for medical applications such as detecting issues with motor functionality, fall detection, or we can also use it for motion capture for our video games and blockbuster movies. So just performing the pose estimation part is a great invention, but relatively old news. So what's really new here? Why is this work interesting? How does it go beyond pose estimation? Well, as a hint, the title contains an additional word: controllable. So look at this. Woohoo! As you see, this technique is not only able to identify where a character is, but we can grab a controller and move it around. This means making this character perform novel actions and showing it from novel views. It's really remarkable because this requires a proper understanding of the video we are watching. And this means that we can not only watch these real-world videos, as you see with this small piece of footage used for the learning, but by performing these actions with the controller, we can make a video game out of it. Especially given that here, the background has also been changed. To achieve this, this work contains two key elements. Element number one is the pose-to-pose network that takes an input posture and the button we pushed on the controller and creates the next step of the animation. And then, element number two, the pose-to-frame architecture blends this new animation step into an already existing image. The neural network that performs this is trained in a way that encourages it to create these character masks so that they are continuous and don't contain jarring jumps between the individual frames, leading to smooth and believable movements. Now clearly, anyone who takes a cursory look sees that the animations are not perfect and still contain artifacts, but just imagine that this paper is among the first introductory works on this problem. Just imagine what we will have two more papers down the line. I can't wait. Thanks for watching and for your generous support and I'll see you next time.
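Here is a minimal sketch of the two-stage idea described above, with made-up sizes and a made-up one-hot controller encoding: one network advances the pose given a button press, and another turns the rasterized pose plus the background into pixels that are blended into the existing frame. This is only an illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

N_KEYPOINTS, N_BUTTONS = 17, 4

pose2pose = nn.Sequential(                   # (pose + controller one-hot) -> next pose
    nn.Linear(N_KEYPOINTS * 2 + N_BUTTONS, 128), nn.ReLU(),
    nn.Linear(128, N_KEYPOINTS * 2))

pose2frame = nn.Sequential(                  # keypoint map + background -> image + mask
    nn.Conv2d(N_KEYPOINTS + 3, 32, 3, 1, 1), nn.ReLU(),
    nn.Conv2d(32, 4, 3, 1, 1))               # 3 RGB channels + 1 blending mask

pose = torch.randn(1, N_KEYPOINTS * 2)                # current 2D keypoints, flattened
button = torch.eye(N_BUTTONS)[0].unsqueeze(0)         # say, "move right"
next_pose = pose2pose(torch.cat([pose, button], dim=1))

keypoint_map = torch.randn(1, N_KEYPOINTS, 64, 64)    # next_pose rasterized (stubbed here)
background = torch.randn(1, 3, 64, 64)
out = pose2frame(torch.cat([keypoint_map, background], dim=1))
rgb, mask = out[:, :3], torch.sigmoid(out[:, 3:])
frame = mask * rgb + (1 - mask) * background          # smooth blend into the scene
print(next_pose.shape, frame.shape)
```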
Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir. Scientists at Google just released Translatotron. This is an AI that is able to translate speech in one language into speech in another language, and here comes the first twist: without using text as an intermediate representation. You give it the sound waves and you get the translated sound waves. And this neural network was trained on approximately a million voice samples. So, let's see what learning on this one million samples gives us. Listen, this is the input sentence in Spanish. And here it is translated to English, but using the voice of the same person. And you will not know unless you ask. Stop doing that. Nuevos empleados de Quenfield University. New hires at Quenfield University. This is incredible. However, there is another twist, perhaps an even bigger one, believe it or not. This technique can not only translate, but also perform voice transfer, so it can say the same thing using someone else's voice. This means that the AI not only has to learn what to say, but how to say it. This is immensely difficult. It's also not easy to know what we need to listen to and when. So, let me walk you through it. This is a sentence in Spanish. This is the same sentence said by someone else, an actual person, and in English. Swimming with dolphins. And now, the same thing but synthesized by the algorithm using both of their voices. Swimming with dolphins. Swimming with dolphins. Let's listen to them side by side some more. Swimming with dolphins. Swimming with dolphins. This is so good. Let's have a look at some more examples. So look around the country and what do you see? So look around the country and what you see. So look around the country and what do you see? Wow. The method performs the learning by trying to map these mel spectrograms between multiple speakers. You can see example sentences here and their corresponding spectrograms, which are concise representations of someone's voice and intonation. And of course, it is difficult to mathematically formalize what makes a good translation and a good mimicking of someone's voice. So in these cases, we'll let people be the judge, have them listen to a few speech signals, and ask them to guess which was the real person and which was the AI speaking. If you take a closer look at the paper, you will see that it smokes the competition. This is great progress on an immensely difficult task, as we have to perform proper translation and voice transfer at the same time. It's quite a challenge. Of course, failure cases still exist. Listen. Entonces, esa es la cosa. Then, yeah, that's the thing. Then, yeah, that's the thing. Then, yeah, that's the thing. Then, yeah, that's the thing. Just imagine that you are in a foreign country and all you need to do is use your phone to tell stories to people not only in their own language, but also using your own voice, even if you don't speak a word of their language. Beautiful. Even this video could perhaps be available in a variety of languages using my own voice within the next few years, although I wonder how these algorithms would pronounce my name. So far, that proved to be quite a challenge for humans and AIs alike. And for now, all hail the mighty Translatotron. In the meantime, I just got back from this year's NATO conference. It was an incredible honor to get an invitation to speak at such an event and, of course, I was happy to attend as a service to the public.
The goal of the talk was to inform key political and military decision makers about recent developments in AI so they can make better decisions for us. And I was so nervous during the talk. My goodness. If you wish to watch it, I put a link to it in the video description, and I may be able to upload a higher quality version of this video here in the future. Attending the conference introduced delays in our schedule, my apologies for that, and normally, we would have to worry about whether, because of this, we'll have enough income to improve our recording equipment. However, with your support on Patreon, this is not at all the case, so I want to send you a big thank you for all your amazing support. This was really all possible thanks to you. If you wish to support us, just go to patreon.com slash two minute papers or just click the link in the video description. Have fun with the video. Thanks for watching and for your generous support and I'll see you next time.
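As a small technical aside, the mel spectrograms mentioned in this episode are a standard representation that is easy to compute yourself. Here is a short sketch using librosa; the example clip and the parameter choices are stand-ins of my own, not taken from the paper.

```python
import librosa

# Compute a log mel spectrogram, the kind of representation speech-to-speech models
# typically map between. The trumpet clip is just librosa's bundled example audio,
# standing in for a real speech recording.
y, sr = librosa.load(librosa.example("trumpet"), sr=16000)   # mono waveform at 16 kHz
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                     hop_length=256, n_mels=80)
log_mel = librosa.power_to_db(mel)                           # log-compress, as is customary
print(log_mel.shape)                                         # (80 mel bands, num frames)
```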
This episode has been supported by Lambda Labs. Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir. In our earlier paper, Gaussian Material Synthesis, we made a neural renderer, and what this neural renderer was able to do is reproduce the results of a light transport simulation within 4 to 6 milliseconds in a way that is almost pixel-perfect. It took a fixed camera and scene, and we were able to come up with a ton of different materials, and it was always able to guess what the output would look like if we changed the physical properties of a material. This is a perfect setup for material synthesis, where these restrictions are not too limiting. Trying to perform high-quality neural rendering has been a really important research problem lately, and everyone is asking the question: can we do more with this? Can we move around with the camera and have a neural network predict what the scene would look like? Can we do this with animations? Well, have a look at this new paper, which is a collaboration between researchers at the Technical University of Munich and Stanford University, where all we need is some video footage of a person or object. It takes a close look at this kind of information and can offer three killer applications. One, it can synthesize the object from new viewpoints. Two, it can create a video of this scene and imagine what it would look like if we reorganized it, or it can even add more objects to it. And three, perhaps everyone's favorite, performing facial reenactment from a source to a target actor. Much like regular textures, these neural textures are stored on top of the 3D objects; however, a more detailed, high-dimensional description is also stored and learned by this algorithm, which enables it to have a deeper understanding of intricate light transport effects to create these new views. For instance, it is particularly good at reproducing specular highlights, which typically change rapidly as we change our viewpoint for the object. One of the main challenges was building a learning algorithm that can deal with this kind of complexity. The synthesis of mouth movements was always the Achilles' heel of these methods, so have a look at how well this one does with it. You can also see with the comparisons here that, in general, this new technique smokes the competition. So how much training data do we need to achieve this? I would imagine that this would take hours and hours of video footage, right? No, not at all. This is what the results look like as a function of the amount of training data. On the left, you see that it already kind of works with 125 images, but contains artifacts, but if we can supply 1000 images, we're good. Note that 1000 images sounds like a lot, but it really isn't, it's just half a minute worth of video. How crazy is that? Some limitations still apply. You see one failure case here, and the neural network typically needs to be retrained if we wish to use it on new objects, but this work finally generalizes to multiple viewpoints, animation, scene editing, lots of different materials and geometries, and I can only imagine what we'll get two more papers down the line. Make sure to have a look at the tools used for accomplishing this, and in general, have a look at Matthias Niessner's lab; he just got tenured as a full professor and he's only 32 years old. Congratulations. If you have AI-related ideas and you would like to try them, but not do it in the cloud because you wish to own your own hardware, look no further than Lambda Labs.
Lambda Labs offers sleek, beautifully designed laptops, workstations and servers that come pre-installed with every major learning framework and updates them for you, taking care of all the dependencies. Look at those beautiful and powerful machines. This way, you can spend more of your time with your ideas and don't have to deal with all the software maintenance work. Make sure to go to lambdalabs.com/papers or click their link in the video description and look around, and if you have any questions, you can even call them for advice. Big thanks to Lambda Labs for supporting this video and helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
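To give a feel for the neural texture idea from this episode, here is a minimal, hypothetical PyTorch sketch: a learnable high-dimensional texture is sampled at the rasterized UV coordinates and decoded into pixels by a small neural renderer. The sizes and the tiny renderer are made up; the real system uses hierarchical textures and a much larger network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

C, H_TEX, W_TEX = 16, 256, 256
# Instead of RGB texels, the object carries a learnable feature texture.
neural_texture = nn.Parameter(torch.randn(1, C, H_TEX, W_TEX) * 0.01)

renderer = nn.Sequential(                 # decodes sampled features into an RGB image
    nn.Conv2d(C, 32, 3, 1, 1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, 1, 1), nn.Tanh())

# uv: per-pixel texture coordinates from the rasterizer, mapped to [-1, 1]
uv = torch.rand(1, 128, 128, 2) * 2 - 1
sampled = F.grid_sample(neural_texture, uv, align_corners=True)   # (1, C, 128, 128)
image = renderer(sampled)
print(image.shape)                        # (1, 3, 128, 128)
```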
Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir. You are in for a real treat today, because today we are not going to simulate just plain regular fluids. No, we are going to simulate ferrofluids. These are fluids that have magnetic properties and respond to an external magnetic field, and you will see in a moment that they are even able to climb things. You see in the reference footage here that this also means that if there is no magnetic field, we have a regular fluid simulation. Nothing too crazy here. In this real-world footage, we have a tray of ferrofluid up in the air and a magnet below it, so as the tray descends and gets closer to the magnet, this happens. But the strength of the magnetic field is not the only factor that a simulation needs to take into account. Here is another real experiment that shows that the orientation of the magnet also makes a great deal of difference to the distortions of the fluid surface. And now let's have a look at some simulations. This simulation reproduces the rotating magnet experiment that you've seen a second ago. It works great, and what is even more, if we are in a simulation, we can finally do things that would either be expensive or impossible in real life, so let's do exactly that. You see a steel sphere attracting the ferrofluid here, and now the strength of the magnet within is decreased, giving us the impression that we can bend this fluid to our will. How cool is that? In the simulation, we can also experiment with arbitrarily shaped magnets. And here's the legendary real experiment where, with magnetism, we can make a ferrofluid climb up a steel helix. Look at that. When I first saw this video and started reading the paper, I was just giggling like a little girl. So good. Just imagine how hard it is to do something where we have footage from the real world that keeps judging our simulation results, and we are only done when there is a near exact match, such as the one you see here. Huge congratulations to the authors. You see here how the simulation output depends on the number of iterations. More iterations means that we redo the calculations over and over again and get results closer to what would happen in real life, at the cost of more computation time. However, as you see, we can get close to the real solution with even one iteration, which is remarkable. In my own fluid simulation experiments, when I tried to solve the pressure field, using one to four iterations gave me a result that was not only inaccurate but singular, which blows up the simulation. Look at this. On this axis, you can see how the fluid disturbances get more pronounced as a response to a stronger magnetic field. And in this direction, you see how the effect of surface tension smooths out these shapes. What a visualization. The information density in this example is just out of this world, and it is still both informative and beautiful. If only I could tell you how many times I had to remake each of my figures in pursuit of this; I can only imagine how long it took to finish this one. Bravo. And if all that's not enough for you to fall out of your chair, get this. It is about Libo Huang, the first author of this paper. I became quite curious about his other works and have found exactly zero of them. This was his first paper. My goodness. And of course, it takes a team to create such a work, so congratulations to all three authors. This is one heck of a paper. Check it out in the video description.
Thanks for watching and for your generous support and I'll see you next time.
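The iteration-count trade-off mentioned in the ferrofluid episode above shows up in most iterative solvers. Here is a generic illustration, not the paper's magnetization or pressure solver: a Jacobi iteration for a small 1D Poisson problem, where more iterations buy a smaller residual at the cost of more compute.

```python
import numpy as np

# Jacobi iterations for u'' = f on a unit-spaced 1D grid with zero boundary values.
n = 64
rhs = np.sin(np.linspace(0, np.pi, n))          # some right-hand side
x = np.zeros(n)                                 # initial guess

def jacobi(x, rhs, iterations):
    x = x.copy()
    for _ in range(iterations):
        # u[i] = (u[i-1] + u[i+1] - rhs[i]) / 2, computed from the previous iterate
        x[1:-1] = 0.5 * (x[:-2] + x[2:] - rhs[1:-1])
    return x

for iters in (1, 4, 50):
    solution = jacobi(x, rhs, iters)
    residual = np.linalg.norm(np.diff(solution, 2) - rhs[1:-1])
    print(f"{iters:3d} iterations -> residual {residual:.4f}")
```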
Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir. This paper from DeepMind is about taking a bunch of learning algorithms and torturing them with millions of classic math questions to find out if they can solve them. Sounds great, right? I wonder what kind of math questions an AI would find easy to solve. What percentage of these can a good learning algorithm answer today? Worry not, we'll discuss some of the results at the end of this video. These kinds of problems are typically solved by recurrent neural networks that are able to read and produce sequences of data, and to even begin to understand what the question is here, an AI would have to understand the concept of functions, variables, arithmetic operators, and of course, the words that form the question itself. It has to learn planning and precedence, that is, in what order we evaluate such an expression, and it has to have some sort of memory in which it can store the intermediate results. The main goal of this paper is to describe a dataset that is designed in a very specific way to be able to benchmark the mathematical reasoning abilities of an AI. So how do we do that? First, it is made in a way that is very difficult to solve for someone without generalized knowledge. Imagine the kind of student at school who memorized everything from the textbooks but has no understanding of the underlying tasks, and if the teacher changes just one number in a question, the student is unable to solve the problem. We all met that kind of student, right? Well, this test is designed in a way that students like these should fail at it. Of course, in our case, the student is the AI. Second, the questions should be modular. This is a huge advantage because a large number of these questions can be generated procedurally by adding a different combination of sub-tasks such as additions, function evaluations, and more. An additional advantage of this is that we can easily control the difficulty of these questions: the more modules we use, typically the more difficult the question gets. Third, the questions and answers should be able to come in any form. This is an advantage because the AI has to not only understand the mathematical expressions but also focus on what exactly we wish to know about them. This also means that the question itself can be about factorization, where the answer is expected to be either true or false. And the algorithm is not told we are looking for a true or false answer; it has to be able to infer this from the question itself. And to be able to tackle all this properly, with this paper, the authors released two million of these questions for training an AI, free of charge, to foster more future research in this direction. So what percentage of these can a good learning algorithm answer today? Let's have a look at some results. A neural network model that goes by the name Transformer network produced the best results by being able to answer 50% of the questions. This you find in the extrapolation column here. When you look at the interpolation column, you see that it successfully answered 76% of these questions. So which one is it, 50% or 76%? Actually, both. The difference is that interpolation means that the numbers in these questions were within the bounds that were seen in the training data, whereas extrapolation means that some of these numbers are potentially much larger or smaller than the ones the AI has seen in the training examples.
I would say that given the difficulty of just even understanding what these questions are, these are really great results. Generally, in the future, we will be looking for algorithms that do well on the extrapolation tasks, because these are the AIs that have knowledge that generalizes well. So which tasks were easy and which were difficult? Interestingly, the AI had similar strengths and difficulties as we fellow humans have: rounding decimals and integers, comparisons, and basic algebra were quite easy for it, whereas detecting primality and factorization were not very accurate. I will keep an eye out for improvements in this area. If you're interested to hear more about it, make sure to subscribe to this series. And if you just push the red button, you may think that you're subscribed, but you're not. You are just kind of subscribed. Make sure to also click the bell icon to not miss these future episodes. Also, please make sure to read the paper. It is quite readable and contains a lot more really cool insights about this dataset and the experiments. As always, the link is available in the video description. Thanks for watching and for your generous support and I'll see you next time.
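To make the "modular, procedurally generated questions" idea from this episode more tangible, here is a toy sketch that chains a few arithmetic modules together; chaining more modules makes the question harder, and sampling larger numbers at test time than during training would mimic the extrapolation setting. This is a made-up miniature, not the released dataset's generator.

```python
import random

# Each module composes one more operation onto the running value.
MODULES = [
    ("What is {a} plus {b}?",  lambda a, b: a + b),
    ("What is {a} minus {b}?", lambda a, b: a - b),
    ("What is {a} times {b}?", lambda a, b: a * b),
]

def make_question(num_modules=2, lo=-50, hi=50):
    value = random.randint(lo, hi)
    steps = []
    for _ in range(num_modules):
        template, op = random.choice(MODULES)
        b = random.randint(lo, hi)
        steps.append(template.replace("{a}", "that" if steps else str(value))
                             .replace("{b}", str(b)))
        value = op(value, b)
    return " Then, ".join(steps), value

random.seed(0)
for _ in range(3):
    question, answer = make_question(num_modules=3)
    print(question, "->", answer)
```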
This episode has been supported by Lambda Labs. Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir. Let's talk about a great recent development in image translation. Image translation means that some image goes in and it is translated into an analogous image of a different class. A good example of this would be when we have a standing tiger as an input and we ask the algorithm to translate this image into the same tiger lying down. This leads to many amazing applications. For instance, we can specify a daytime image and get the same scene during nighttime. We can go from maps to satellite images, from video games to reality, and more. However, much like many learning algorithms today, most of these techniques have a key limitation. They need a lot of training data, or, in other words, these neural networks require seeing a ton of images in all of these classes before they can learn to meaningfully translate between them. This is clearly inferior to how humans think, right? If I showed you a horse, you could easily imagine, and some of you could even draw, what it would look like if it were a zebra instead. As I'm sure you have noticed by reading arguments on many internet forums, humans are pretty good at generalization. So, how could we possibly develop a learning technique that can look at very few images and obtain knowledge from them that generalizes well? Have a look at this crazy new paper from scientists at NVIDIA that accomplishes exactly that. In this example, they show an input image of a golden retriever, and then we specify the target classes by showing them a bunch of different animal breeds, and look, in goes your golden and out comes a pug or any other dog breed you can think of. And now, hold on to your papers, because this AI didn't have access to these target images during training and sees them for the very first time as we give them to it. It can do this translation with previously unseen object classes. How is this insanity even possible? This work contains a generative adversarial network which assumes that the training set we give it contains images of different animals, and what it does during training is practicing the translation process between these animals. It also contains a class encoder that creates a low-dimensional latent space for each of these classes, which means that it tries to compress these images down to a few features that contain the essence of these individual dog breeds. Apparently, it can learn the essence of these classes really well, because it was able to convert our image into a pug without ever seeing a pug other than this one target image. As you can see here, it comes out way ahead of previous techniques, but of course, if we give it a target image that is dramatically different than anything the AI has seen before, it may falter. Luckily, you can even try it yourself through this web demo, which works on pets, so make sure to read the instructions carefully and let the experiments begin. In fact, due to popular requests, let me kick this off with Lisa, my favorite Chihuahua. I got many tempting alternatives, but worry not, in reality she will stay as is. I was also curious about trying a non-traditional head position, and as you see with the results, this was a much more challenging case for the AI. The paper also discusses this limitation in more detail. You know the saying: two more papers down the line and I am sure this will also be remedied.
I am hoping that you will also try your own pets, and as a fellow scholar, you will flood the comments section here with your findings. Strictly for science, of course. If you are doing deep learning, make sure to look into Lambda GPU systems. Lambda offers workstations, servers, laptops and the GPU cloud for deep learning. You can save up to 90% over AWS, GCP and Azure GPU instances. Every Lambda GPU system is pre-installed with TensorFlow, PyTorch and Keras. Just plug it in and start training. Lambda customers include Apple, Microsoft and Stanford. Go to lambdalabs.com/papers or click the link in the video description to learn more. Big thanks to Lambda for supporting Two Minute Papers and helping us make better videos. Thanks for watching and for your generous support and I'll see you next time.
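Here is a hypothetical miniature of the class encoder idea from the episode above: the few example images of an unseen class are each compressed into a code, and the codes are averaged into a single latent that conditions the translation. The real model is a full GAN that injects this code through adaptive instance normalization; everything below is made up for illustration.

```python
import torch
import torch.nn as nn

class ClassEncoder(nn.Module):
    """Averages the codes of K example images into one class latent."""
    def __init__(self, code_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, code_dim))

    def forward(self, class_images):             # (K, 3, H, W) examples of one class
        codes = self.net(class_images)            # one code per example image
        return codes.mean(dim=0, keepdim=True)    # a single conditioning latent

encoder = ClassEncoder()
pug_examples = torch.randn(1, 3, 128, 128)        # one-shot: a single target image
class_code = encoder(pug_examples)
print(class_code.shape)                           # (1, 64) conditioning vector
```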
Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir. AI research has come a long, long way in the last few years. Not so long ago, we were lucky if we could train a neural network to understand traffic signs, and since then, so many things have happened. By harnessing the power of learning algorithms, we are now able to impersonate other people by using a consumer camera, generate high-quality virtual human faces for people who don't exist, or pretend to be able to dance like a pro dancer by using external video footage and transferring it onto ourselves. Even though we are progressing at a staggering pace, there is a lot of debate as to which research direction is the most promising going forward. Roughly speaking, there are two schools of thought. One, we recently talked about Richard Sutton's amazing article by the name The Bitter Lesson, in which he makes a great argument that AI research should not try to mimic the way the human brain works. He argues that instead, all we need to do is formulate our problems in a general manner so that our learning algorithm may find something that is potentially much better suited for a problem than our brain is. I put a link to this video in the description if you're interested. And two, a different school of thought says that we should look at all these learning algorithms that use a lot of powerful hardware and can do wondrous things like playing a bunch of Atari games at a superhuman level. However, they learn orders of magnitude slower than the human brain does, so it should definitely be worth it to try to study and model the human brain, at least until we can match it in terms of efficiency. This school of thought is what we are going to talk about in this video. As an example, let's take a look at deep reinforcement learning in the context of playing computer games. This technique is a combination of a neural network that processes the visual data that we see on the screen and a reinforcement learner that comes up with the gameplay-related decisions. Absolutely amazing algorithm, a true breakthrough in AI research. Very powerful, however, also quite slow. And by slow, I mean that we can sit for an hour in front of our computer and wonder why our learner does not work at all, because it loses all of its lives almost immediately. If we remain patient, we find out that it works, it just learns at a glacial pace. So why is this so slow? Well, two reasons. Reason number one is that the learning happens through incremental parameter adjustment. If a human failed really badly at a task, the human would know that a drastic adjustment to the strategy is necessary, while the deep reinforcement learner would start applying tiny, tiny changes to its behavior and test again if things got better. This takes a while and, as a result, seems unlikely to have a close relation to how we humans think. The second reason for it being slow is the presence of weak inductive bias. This means that the learner does not contain any information about the problem we have at hand, or in other words, it has never seen the game we are playing before and has no other previous knowledge about games at all. This is desirable in some cases, because we can reuse one learning algorithm for a variety of problems. However, because this way the AI has to test a stupendously large number of potential hypotheses about the game, we have to pay for this convenience by using a mighty inefficient algorithm. But is this all really true?
Does deep reinforcement learning really have to be so slow? And what on earth does this have to do with our brain? Well, this paper proposes an interesting counterargument that this is not necessarily true, and argues that with a few changes, the efficiency of deep reinforcement learning may be drastically improved, and get this, it also tells us that these changes are possibly grounded in neuroscience. One such change is using episodic memory, which stores previous experiences to help estimate the potential value of different actions, and this way, drastic parameter adjustments become a possibility. And it not only improves the efficiency, but there is more to it, because there are recent studies that show that using episodic memory indeed contributes to the learning of real humans and animals alike. And two, it is beneficial to let the AI implement its own reinforcement learning algorithm, a concept often referred to as learning to learn or meta-reinforcement learning. This also helps obtaining more general knowledge that can be reused across tasks, further improving the efficiency of the agent. Here you see a picture of an fMRI, and some regions are marked with yellow and orange here. What could this possibly mean? Well, hold on to your papers, because these highlight neural structures that implement a very similar meta-reinforcement learning scheme within the human brain. It turns out that meta-reinforcement learning, or this learning to learn scheme, may not just be something that speeds up our AI algorithms, but may be a fundamental principle of the human brain as well. So these two changes to deep reinforcement learning not only drastically improve its efficiency, but they also suddenly map quite a bit better to our brain. How cool is that? So which school of thought are you most fond of? Should we model the brain, or should we listen to Richard Sutton's bitter lesson? Let me know in the comments. Also make sure to have a look at the paper. I found it to be quite readable, and you really don't need to be a neuroscientist to read it and learn quite a few new things. Make sure to have a look at it in the video description. Now, I think you noticed that this paper doesn't contain the usual visual fireworks and is more complex than your average Two Minute Papers video, and hence I expect it to get significantly fewer views. That's not a great business model, but you know what? I made this channel so I can share with you all these important lessons that I learned during my journey. This has been a true privilege, and I am thrilled that I am still able to talk about all these amazing papers without worrying too much about whether any of these videos will go viral or not. This has only been possible because of your unwavering support on patreon.com slash two-minute papers. If you feel like chipping in, please click the Patreon link in the video description. And if you are more of a crypto person, we also support cryptocurrencies like Bitcoin, Ethereum and Litecoin; the addresses are also available in the video description. Thanks for watching and for your generous support and I'll see you next time.
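To make the episodic memory idea a little more concrete, here is a small sketch in the spirit of episodic control: store embeddings of visited states together with the returns that followed, and estimate the value of a new state from its nearest stored neighbours instead of waiting for slow incremental parameter updates. The sizes, the random placeholder data and the choice of k are arbitrary, and this is only an illustration of the principle, not the paper's agent.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, MEMORY_SIZE, K = 32, 500, 5

memory_keys = rng.normal(size=(MEMORY_SIZE, DIM))      # state embeddings seen so far
memory_returns = rng.normal(size=MEMORY_SIZE)          # returns observed after them

def episodic_value(state_embedding, k=K):
    """Average the returns of the k most similar stored states."""
    distances = np.linalg.norm(memory_keys - state_embedding, axis=1)
    nearest = np.argsort(distances)[:k]
    return memory_returns[nearest].mean()

new_state = rng.normal(size=DIM)
print(episodic_value(new_state))
```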
Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir. Super resolution is a research field with a ton of published papers every year, where the simplest problem formulation is that we have a low-resolution, coarse image as an input and we wish to enhance it to get a crisper, higher resolution image. You know, the thing that can always be done immediately and perfectly in many of these detective TV series. And yes, sure, the whole idea of super resolution sounds a little like science fiction. How could I possibly get more content onto an image that's not already there? How would an algorithm know what a blurry text means if it's unreadable? It can't just guess what somebody wrote there, can it? Well, let's see. This paper provides an interesting take on this topic, because it rejects the idea of having just one image as an input. You see, in this day and age, we have powerful mobile processors in our phones, and when we point our phone camera and take an image, it doesn't just take one, but a series of images. Most people don't know that some of these images are even taken as soon as we open our camera app, without even pushing the shoot button. Working with a batch of images is also the basis of the iPhone's beloved Live Photo feature. So as a result, this method builds on this raw burst input with multiple images and doesn't need idealized conditions to work properly, which means that it can process footage that we shoot with our shaky hands. In fact, it forges an advantage out of this imperfection, because it can first align these photos, and then we have not one image, but a bunch of images with slight changes in viewpoint. This means that we have more information that we can extract from these several images, which can be stitched together into one higher quality output image. Now that's an amazing idea if I've ever seen one. It not only acknowledges the limitations of real-world usage, but even takes advantage of them. Brilliant. You see throughout this video that the results look heavenly. However, not every kind of motion is desirable. If we have a more complex motion, such as the one you see here as we move away from the scene, this can lead to unwanted artifacts in the reconstruction. Luckily, the method is able to detect these cases by building a robustness mask that highlights the regions that would likely lead to these unwanted artifacts. Whatever is deemed to be low quality information in this mask is ultimately rejected, leading to high quality outputs even in the presence of weird motions. And now hold on to your papers, because this method does not use neural networks or any learning techniques and is orders of magnitude faster than those, while providing higher quality images. As a result, the entirety of the process takes only 100 milliseconds to process a really detailed 12 megapixel image, which means that it can do it 10 times every second. These are interactive frame rates, and it seems that doing this in real time is going to be possible within the near future. Huge congratulations to Bart and his team at Google for outmuscling the neural networks. Luckily, higher quality ground truth data can also be easily produced for this project, creating a nice baseline to compare the results to. Here you see that this new method is much closer to this ground truth than previous techniques.
As an additional corollary of this solution, the more of these jerky frames we can collect, the better it can reconstruct images in poor lighting conditions, which is typically one of the more desirable features in today's smartphones. In fact, get this. This is the method behind Google's magical Night Sight and Super Res Zoom features that you can access by using their Pixel 3 flagship phones. When this feature came out, I remember that phone reviewers and everyone unaware of the rate of progress in computer graphics research were absolutely floored by the results and could hardly believe their eyes when they first tried it. And I don't blame them. This is a truly incredible piece of work. Make sure to have a look at the paper, which contains a ton of comparisons against other methods, and it also shows the relation between the number of collected burst frames and the output quality we can expect as a result, and more. Thanks for watching and for your generous support and I'll see you next time.
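Finally, here is a heavily simplified sketch of the merge step described in the last episode: given burst frames that are already aligned to a reference, average them per pixel with weights derived from a robustness mask, so that badly aligned regions fall back to the reference. The real pipeline operates on raw Bayer frames and uses anisotropic kernel regression rather than this plain weighted mean; the data below is random and only illustrates the idea.

```python
import numpy as np

rng = np.random.default_rng(1)
frames = rng.random((8, 120, 160))            # 8 aligned grayscale burst frames
reference = frames[0]

# robustness: 1 where a frame agrees with the reference, tending to 0 where it doesn't
robustness = np.exp(-((frames - reference) ** 2) / 0.01)

# weighted per-pixel average; poorly matching frames contribute very little
merged = (robustness * frames).sum(axis=0) / robustness.sum(axis=0)
print(merged.shape)
```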