Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this footage, we have a variety of objects that differ in geometry, and the goal is to place them into this box using an AI. Sounds simple, right? This has been solved long, long ago. However, there is a catch here, which is that this box is outside of the range of the robot arm, therefore it has to throw the object in there with just the right amount of force for it to end up in this box. It can perform 500 of these tosses per hour. Before anyone misunderstands what is going on in the footage here, it almost seems like the robot on the left is helping by moving to where the object would fall after the robot on the right throws it. This is not the case. Here you see a small part of my discussion with Andy Zeng, the lead author of the paper, where he addresses this. The results look amazing, and note that this problem is much harder than most people would think at first. In order to perform this, the AI has to understand how to grasp an object with a given geometry. In fact, we may grab the same object on a different side, throw it the same way, and there would be a great deal of difference in the trajectory of this object. Have a look at this example with a screwdriver. It also has to take the air resistance of a given object into consideration. Man, this problem is hard. As you see here, initially it cannot even practice throwing because its reliability in grasping is quite poor. However, after 14 hours of training, it achieves a remarkable accuracy, and to be able to train for so long, this training table is designed in a way that when running out of objects, it can restart itself without human help. Nice. To achieve this, we need a lot of training objects, but not just any kind of training objects. These objects have to be diversified. As you see here, during training, the box position enjoys a great variety, and the object geometry is also well diversified. Normally, in these experiments, we are looking to obtain some kind of intelligence. In this case, that would mean that the AI truly learned the underlying dynamics of object throwing and did not just find some good solutions via trial and error. A good way to test this would be to give it an object it has never seen before and see how its knowledge generalizes to that. Same with locations. On the left, you see these boxes marked with orange. This was the training set, but later it was asked to throw into the blue boxes, which is something it has never tried before. And look, this is excellent generalization. Bravo. You can also see the success probabilities for grasping and throwing here. A key idea in this work is that this system is endowed with a physics-based controller which contains the standard equations of linear projectile motion. This is simple knowledge from high-school physics that ignores several key real-life factors, such as the effect of aerodynamic drag. This way, the AI does not have to learn from scratch: it can use these calculations as an initial guess, and it is tasked with learning to account for the difference between this basic equation and real-life trajectories. In other words, it is given basic physics and is asked to learn advanced physics by building on that. Loving this idea. A simulation environment was also developed for this project, where one can test the effect of, for instance, changing the gripper width, which would be costly and labor-intensive in the real world. Of course, these are all free in a software simulation.
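For the technically curious, here is a minimal Python sketch of that physics-prior idea: solve the drag-free projectile equation for a release speed, then add a learned correction on top. The function names and the residual value are illustrative assumptions; in the paper, the correction is predicted by a neural network from the robot's own trial and error.

```python
import math

GRAVITY = 9.81  # m/s^2

def ballistic_release_speed(distance, release_angle_rad):
    """Initial guess from high-school projectile motion (no drag):
    range = v^2 * sin(2*theta) / g, solved for v."""
    return math.sqrt(distance * GRAVITY / math.sin(2.0 * release_angle_rad))

def throw_velocity(distance, release_angle_rad, learned_residual):
    """Physics-based estimate plus a learned correction term.
    `learned_residual` stands in for the network's per-grasp,
    per-object adjustment (drag, grip offset, and so on)."""
    v_physics = ballistic_release_speed(distance, release_angle_rad)
    return v_physics + learned_residual

# Example: a target 1.5 m away, thrown at 45 degrees.
v0 = ballistic_release_speed(1.5, math.radians(45.0))
print(f"physics-only release speed: {v0:.2f} m/s")
print(f"with a (made-up) residual:  {throw_velocity(1.5, math.radians(45.0), 0.12):.2f} m/s")
```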
What a time to be alive. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This episode has been sponsored by Lambda Labs. Not so long ago, we talked about DeepMind's AlphaStar, an AI that was able to defeat top-tier human players in StarCraft II, a complex real-time strategy game. Of course, I love talking about AIs that are developed to challenge pro gamers at a variety of difficult games, so this time around we'll have a look at another major milestone, OpenAI Five, which is an AI that plays Dota 2, a multiplayer online battle arena game with a huge cult following. As this game requires long-term strategic planning, it is a classic nightmare scenario for any AI. But OpenAI is no stranger to Dota 2: in 2017, they showed us an initial version of their AI that was able to play one-versus-one games with only one hero and was able to reliably beat Dendi, a world champion player. That was quite an achievement; however, of course, this was meant to be a stepping stone towards playing the real Dota 2. Then, in 2018, they unveiled OpenAI Five, an improved version of this AI that played 5-versus-5 games with a limited hero pool. This team was able to defeat competent players but was still not quite at the level of a world champion human team. In a one-hour interview, the OpenAI research team mentioned that due to the deadline of The International event, they had to make quite a few concessions. And this time, several things have changed. First, they didn't just challenge some local team of formidable players, no-no, they flat out challenged OG, the reigning world champion team, an ambitious move that exudes confidence from their side. Second, this time around, there was no tight deadline, as the date of the challenge was chosen by OpenAI. Let's quickly talk about the rules of the competition and then see if OpenAI's confident move was justified. These learning agents don't look at the pixels of the game, and as a result, they see the world as a big bunch of numbers. And this time around, the AI was able to play a pool of 17 heroes and trained against itself for millions and millions of games. And now, let's have a look at what happened in this best-of-3 series. In match 1, right after picking the roster of heroes, the AI estimated its win probability to be 67%, so it was quite a surprise that early on, it looked like OpenAI's bots were running around aimlessly. Over time, we found out that this was not at all the case. It plays unusually aggressively from the get-go and uses buybacks quite liberally at times where human players don't really consider it to be a good choice. These buybacks resurrect a perished hero quickly but in return cost money. Later, it became clearer that these bots are no joke. They know exactly when to engage and when to back out from an engagement with the smallest sliver of health left. I will show quite a few examples of those to you during this video, so stay tuned. A little less than 20 minutes in, we had a very even game 1; if anything, OpenAI seemed a tiny bit behind, and someone noted that we should perhaps ask the bots what they think about their chances. And then the AI said, yeah, no worries, we have a higher than 95% chance to win the game. This was such a pivotal moment that was very surprising for everyone. Of course, if you call out a win with confidence, you'd better go all the way and indeed win the game. Right? Right. And sure enough, they wiped out almost the entire world champion team of the human players immediately after. N0tail's pushed all the way back.
Fissures will come out to hold them down, but OpenAI, they've got two more kills. And then it noted, you know what? Remember what we just said? Forget that. We estimate our chances to win to be above 99% now. And shortly after, they won match number one. Can you believe this? This is absolutely amazing. Interestingly, one of the developers said that the AI is great at assessing whether a fight is worth it. As an interesting corollary, if you engage with it and it fights you, it probably means that you are going to lose. That must be quite confusing for the players. Some mind games for you. Love it. At the event, it was also such a joy to see such a receptive audience that understood and appreciated high-level plays. Onwards to match number two. Right after the draft, which is the process of choosing the heroes for each team, the AI predicted a win percentage that was much closer this time, around 60%. In this game, the AI turned up the heat real fast and said, just five minutes into the game, which is nothing, that it has an over 80% chance to win the game. And now, watch this. In the game, you can see a great example of where the AI just gets away with a sliver of health. Look at this guy. They will find the follow-up wraparound kill towards him, at least they are trying, but with that walk right out, he is able to run away. Another Fissure comes out. Surely this kill is going to be there, but no, a stun from the Sven holds back the Shaker, and he TPs out on 30 HP. OpenAI gets out of there with the Sven; they cannot get that kill, OG. Look at that. This is either an accident or some unreal level of foresight from the side of this agent. I'd love to hear your opinion on which one you think it is. By the 9.5-minute mark, which is still really early, OpenAI Five said, yes, we got this one too. Over 95%. Here you see an interesting scenario where the AI loses one hero, but it almost immediately kills two of the human heroes and comes out favorably, at which point we wonder whether this was a deliberate bait it pulled on the humans. They do get the disables. Hex comes out, they will kill off the Crystal Maiden, but OpenAI's Viper dives past the tower, and the Sven comes back in on minimal HP to throw that stun and secure the kill, as OpenAI again gets the favorable trade; another tower taken, they are playing at a ferocious speed here in the second game. By the 15-minute mark, the human players had lost a barracks and were heavily underfarmed and outplayed, with seemingly no way to come back. And sure enough, by the 21-minute mark, the game was over. There is no other way to say it: this second game was a one-sided beatdown. Game 1 was a strategic back-and-forth where OpenAI Five waited for the right moment to win the game in a big team fight, whereas here they pressured the human team from the get-go and never let them reach the end game, where they might have an advantage with their picks. Also, have a look at this. Unreal. The final result is 2-0 for OpenAI. In the post-match interview, N0tail, one of the human players, noted that he is confident that out of 5 games they would take at least 1, and after 15 games they would start winning reliably. Very reminiscent of what we heard from players playing against DeepMind's AI in StarCraft II, and I hope this will be tested. However, in the end, he agreed that it is inevitable that this AI will become unbeatable at some point.
It was also noted that in 5-versus-5 fights, the bots seem better at planning than any human team is, and there is quite a lot to learn from the AI for us humans. The players were also trying to guess the reasoning for all of these early buybacks. According to them, initially these flat out seemed like misplays. Perhaps the reason for these instant and not obviously great buybacks might have been that the AI knows that if the game goes on for much longer, statistically, their chances to win the game with their given composition dwindle, so it needs to go and win right now, whatever the cost. And again, an important lesson is that in this project, OpenAI is not spending so much money and resources to just play video games. Dota 2 is a wonderful test bed to see how their AI compares to humans at complex tasks that involve strategy and teamwork. However, the ultimate goal is to reuse parts of this system for other complex problems outside of video games. For instance, the algorithm that you've seen here today can also do this. But wait, there's more. Players after these show matches always tend to get messages from others on Twitter telling them what they did wrong and what they should have done instead. Well, luckily, these people were able to show their prowess, as OpenAI gave the chance for anyone in the world to challenge OpenAI Five competitively and play against it online. This way, not only team OG, but everyone can get crushed by the AI. How cool is that? This arena event has concluded with over 15,000 games played, where OpenAI Five had a 99.4% win rate. There are still ways to beat it, but given the rate of progress of this project, likely not for long. Insanity. As always, if you're interested in more details, I put a link to a Reddit AMA in the video description, and I also can't wait to pick the algorithm apart for you, but for now, we have to wait for the full paper to appear. And note that what happened here is not to be underestimated. Huge respect to the OpenAI team, to OG for the amazing games, and congratulations to the humans who were able to beat these beastly bots online. So there you go. Another long video that's not two minutes and not about a paper. Yet. Welcome to Two Minute Papers. If you're doing deep learning, make sure to look into Lambda GPU systems. Lambda offers workstations, servers, laptops, and a GPU cloud for deep learning. You can save up to 90% over AWS, GCP, and Azure GPU instances. Every Lambda GPU system is pre-installed with TensorFlow, PyTorch, and Keras. Just plug it in and start training. Lambda customers include Apple, Microsoft, and Stanford. Go to lambdalabs.com slash papers, or click the link in the description to learn more. Big thanks to Lambda for supporting Two Minute Papers and helping us make better videos. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Good news, another fluid paper is coming up today, and this one is about simulating granular materials. Most techniques that can simulate these grains can be classified as either discrete or continuum methods. Discrete methods, as the name implies, simulate all of these particles one by one. As a result, the amount of detail we can get in our simulations is unmatched; however, we are probably immediately asking the question: doesn't simulating every single grain of sand take forever? Oh yes, yes it does. Indeed, the price to be paid for all this amazing detail comes in the form of a large computation time. To work around this limitation, continuum methods were invented, which do the exact opposite by simulating all of these particles as one block, where most of the individual particles within the block behave in a similar manner. This makes the computation times a lot friendlier; however, since we are not simulating these grains individually, we lose out on a lot of interesting effects, such as clogging, bouncing, and ballistic motions. So, in short, a discrete method gives us a proper simulation but takes forever, while the continuum methods are approximate in nature but execute quicker. And now, from this exposition, the question naturally arises: can we produce a hybrid method that fuses together the advantages of both of these methods? This amazing paper proposes a technique to perform that by subdividing the simulation domain into an inside regime, where the continuum methods work well, and an outside regime, where we need to simulate every grain of sand individually with a discrete method. That is not all, because the tricky part comes in the form of the reconciliation zone, where a partially discrete and partially continuum simulation has to take place. The way to properly simulate this transition zone between the two regimes takes quite a bit of research effort to get right, and just think about the fact that we have to track and change these domains over time because, of course, the inside and outside of a block of particles changes rapidly over time. Throughout the video, you will see the continuum zones denoted with red and the discrete zones with blue, which are typically on the outside regions. The ratio of these zones gives us an idea of how much speedup we could get compared to a purely discrete simulation. In most cases, it means that 88% fewer discrete particles need to be simulated, and this can lead to a total speedup of 6 to 7 times over such a simulation. Basically, at least six all-nighters' worth of simulation now running in one night? I'm in. Sign me up. Also, make sure to have a look at the paper, because the level of execution of this work is just something else. Check it out in the video description. Beautiful work. My goodness. Thanks for watching and for your generous support, and I'll see you next time.
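As a rough illustration of the hybrid idea, here is a toy Python sketch of how a solver might sort particles into a cheap continuum interior and an expensive discrete exterior. The neighbor-count criterion, the radius, and the threshold are made up for this example; the paper's actual zone tracking is considerably more involved.

```python
import numpy as np

def classify_particles(positions, radius=0.05, interior_threshold=12):
    """Toy partitioning for a hybrid granular solver: particles with many
    neighbors are treated as a continuum block (cheap), while sparsely
    surrounded ones near the surface stay fully discrete (expensive but
    accurate). Radius and threshold are illustrative, not from the paper."""
    n = len(positions)
    labels = np.empty(n, dtype=object)
    for i in range(n):
        # Count neighbors within `radius` (O(n^2) for clarity only).
        d = np.linalg.norm(positions - positions[i], axis=1)
        neighbors = np.count_nonzero(d < radius) - 1  # exclude the particle itself
        labels[i] = "continuum" if neighbors >= interior_threshold else "discrete"
    return labels

# A dense blob of 300 random particles in a 0.2 m cube.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 0.2, size=(300, 3))
labels = classify_particles(pts)
print((labels == "continuum").sum(), "continuum /", (labels == "discrete").sum(), "discrete")
```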
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. There are many AI techniques that are able to look at a still image and identify objects, textures, human poses, and object parts in them really well. However, in the age of the internet, we have videos everywhere. So an important question would be how we could do the same for these animations. One of the key ideas in this paper is that the frames of these videos are not completely independent, and they share a lot of information. So, after we make our initial predictions on what is where exactly, these predictions from the previous frame can almost always be reused with a little modification. Not only that, but here you can see with these results that it can also deal with momentary occlusions and is ready to track objects that rotate over time. A key part of this method is that, one, it looks back and forth in these videos to update these labels, and two, it learns in a self-supervised manner, which means that all it is given is little more than the raw data, and it was never given a nice dataset with explicit labels of these regions and object parts that it could learn from. You can see in this comparison table that this is not the only method that works for videos; the paper contains ample comparisons against other methods and comes out ahead of all other unsupervised methods, and on this task it can even get quite close to supervised methods. The supervised methods are the ones that have access to these cushy labeled datasets and therefore should come out way ahead, but they don't, which sounds like witchcraft, considering that this technique is learning on its own. However, all this greatness comes with limitations. One of the bigger ones is that even though it does extremely well, it also plateaus, meaning that we don't see a great deal of improvement if we add more training data. Now, whether this is because it is doing nearly as well as is humanly, or computerly, possible, or because a more general problem formulation is still possible, remains a question. I hope we find out soon. Thanks for watching and for your generous support, and I'll see you next time.
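Here is a small, hedged sketch of the label propagation idea: each pixel in the new frame borrows labels from similar-looking pixels in the previous frame. The similarity-plus-softmax scheme below is a generic stand-in for the paper's learned correspondence.

```python
import numpy as np

def propagate_labels(prev_feats, prev_labels, cur_feats, temperature=0.1):
    """Carry segmentation labels from frame t-1 to frame t by matching
    pixel features: each current pixel takes a soft vote over previous
    pixels, weighted by feature similarity.

    prev_feats:  (N, D) features of N pixels in the previous frame
    prev_labels: (N, K) one-hot labels for those pixels
    cur_feats:   (M, D) features of M pixels in the current frame
    returns:     (M, K) soft labels for the current frame
    """
    sim = cur_feats @ prev_feats.T               # (M, N) similarities
    sim = sim / temperature
    sim -= sim.max(axis=1, keepdims=True)        # numerical stability
    attn = np.exp(sim)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over previous pixels
    return attn @ prev_labels                    # weighted label vote

# Tiny demo: 4 "pixels" with 2 labels, next frame slightly perturbed.
rng = np.random.default_rng(1)
f0 = rng.normal(size=(4, 8))
y0 = np.eye(2)[[0, 0, 1, 1]]                     # first two pixels: class 0
f1 = f0 + 0.01 * rng.normal(size=f0.shape)       # almost the same frame
print(propagate_labels(f0, y0, f1).round(2))     # labels carry over cleanly
```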
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This paper was written by scientists at DeepMind, and it is about teaching an AI to look at a 3D scene and decompose it into its individual elements in a meaningful manner. This is typically one of those tasks that is easy to do for humans and is immensely difficult for machines. As this decomposition thing still sounds a little nebulous, let me explain what it means. Here you see an example scene and the segmentation of this scene that the AI came up with, which shows where it thinks the boundaries of the individual objects are. However, we are not stopping there, because it is also able to rip out these objects from the scene one by one. So why is this such a big deal? Well, because of three things. One, it is a generative model, meaning that it is able to reorganize these scenes and create new content that actually makes sense. Two, it can prove that it truly has an understanding of 3D scenes by demonstrating that it can deal with occlusions. For instance, if we ask it to rip out the blue cylinder from this scene, it is able to reconstruct parts of it that weren't even visible in the original scene. Same with the blue sphere here. Amazing, isn't it? And three, this one is a bombshell: it is an unsupervised learning technique. Now, our more seasoned Fellow Scholars fell out of their chairs hearing this, but just in case, this means that this algorithm is able to learn on its own; we have to feed it a ton of training data, but this training data is not labeled. It just looks at the videos with no additional information, and from watching all this content, it finds out on its own about the concept of these individual objects. The main motivation to create such an algorithm was to have an AI look at some gameplay of the StarCraft II strategy game and be able to recognize all individual units and the background without any additional supervision. I really hope this also means that DeepMind is working on a version of their StarCraft II AI that is able to learn more similarly to how a human does, which is by looking at the pixels of the game. If you look at the details, this will seem almost unfathomably difficult, but it would, of course, make me unreasonably happy. What a time to be alive. If you check out the paper in the video description, you will find out how all this is possible through a creative combination of an attention network and a variational autoencoder. This episode has been supported by Backblaze. Backblaze is an unlimited online backup solution for only six dollars a month, and I have been using it for years to make sure my personal data, family pictures, and the materials required to create this series are safe. You can try it free of charge for 15 days, and if you don't like it, you can immediately cancel it without losing anything. Make sure to sign up for Backblaze today through the link in the video description; this way, you not only keep your personal data safe, but you also help support this series. Thanks for watching and for your generous support, and I'll see you next time.
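To make the decomposition idea a bit more tangible, here is a toy sketch of the recurrent "explain part of the scene, then hand over what is left" scheme that attention-based models of this kind use. The hand-written mask functions below stand in for the learned attention network, and a real system would pair each slot with a variational autoencoder that reconstructs its part of the scene.

```python
import numpy as np

def decompose_scene(image, mask_fns):
    """Stick-breaking decomposition: at each step an attention function
    claims part of the remaining 'scope', and whatever is left after the
    last step becomes the background slot. By construction, the resulting
    masks sum to one at every pixel."""
    scope = np.ones_like(image)          # how much of the image is unexplained
    slots = []
    for mask_fn in mask_fns:
        alpha = mask_fn(image)           # in [0, 1]: this step's attention
        slots.append(scope * alpha)      # the part this slot explains
        scope = scope * (1.0 - alpha)    # shrink the remaining scope
    slots.append(scope)                  # background gets the leftovers
    return slots

# Toy 1D "image": two bright objects on a dark background.
img = np.array([0.0, 0.9, 0.9, 0.0, 0.0, 0.6, 0.6, 0.0])
find_bright = lambda x: (x > 0.8).astype(float)             # "object 1" detector
find_dim    = lambda x: ((x > 0.4) & (x <= 0.8)).astype(float)
masks = decompose_scene(img, [find_bright, find_dim])
print(np.round(np.sum(masks, axis=0), 3))                   # sums to 1 everywhere
```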
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Scientists at Seoul National University in South Korea wrote a great paper on teaching an imaginary dragon all kinds of really cool aerobatic maneuvers, like sharp turning, rapid winding, rolling, soaring, and diving. This is all done by a reinforcement learning variant where the problem formulation is that the AI has to continuously choose a character's actions to maximize a reward. Here, this reward function is related to a trajectory which we can draw in advance. These are the lines that the dragon seems to follow quite well. However, what you see here is the finished product. Curious to see how the dragon falters as it learns to maneuver properly? Well, we are in luck. Buckle up. You see the ideal trajectory here in black, and initially, the dragon was too clumsy to navigate in a way that even resembles this path. Then, later, it learned to start the first turn properly, but as you see here, it was unable to avoid the obstacle and likely needs to fly to the emergency room. But it would probably miss that building too, of course. After more learning, it was able to finish the first loop but was still too inaccurate to perform the second. And finally, at last, it became adept at performing this difficult maneuver. A plus. One of the main difficulties of this problem is the fact that the dragon is always in motion and has a lot of momentum. Anything we do always has an effect later, and we not only have to find one good action but whole sequences of actions that will lead us to victory. This is quite difficult. So how do we do that? To accomplish this, this work not only uses a reinforcement learning variant, but also adds something called self-regulated learning to it, where we don't present the AI with a fixed curriculum, but put the learner in charge of its own learning. This also means that it is able to take a big, complex goal and subdivide it into new, smaller goals. In this case, the big goal is following the trajectory with some additional constraints, which, by itself, turned out to be too difficult to learn with traditional techniques. Instead, the agent realizes that if it tracks its own progress on a set of separate but smaller sub-goals, such as tracking its own orientation, positions, and rotations against the desired target states separately, it can finally learn to perform these amazing stunts. That sounds great, but how is this done exactly? This is done through a series of three steps, where step one is generation, in which the learner creates a few alternative solutions for itself; step two is evaluation, where it has to judge these individual alternatives and find the best ones; and step three is learning, which means looking back and recording whether these judgments indeed put the learner in a better position. By iterating these three steps, this virtual dragon learned to fly properly. Isn't this amazing? I mentioned earlier that this kind of problem formulation is intractable without self-regulated learning, and you can see here how a previous work fares on following these trajectories. There is indeed a world of difference between the two. So there you go: in case you enter a virtual world where you need to train your own dragon, you'll know what to do. But just in case, also read the paper in the video description.
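Here is a heavily simplified Python sketch of that generate, evaluate, learn loop. The proposal and scoring functions below are toy stand-ins; in the paper, these are learned components operating on full flight dynamics.

```python
import random

def self_regulated_step(state, propose, evaluate, memory, n_candidates=8):
    """One iteration of the generate/evaluate/learn loop described above,
    in heavily simplified form. `propose` generates alternative actions,
    `evaluate` scores them against the current sub-goal, and `memory`
    records whether the judgment paid off."""
    candidates = [propose(state) for _ in range(n_candidates)]   # 1) generation
    scored = [(evaluate(state, a), a) for a in candidates]       # 2) evaluation
    best_score, best_action = max(scored)
    memory.append((state, best_action, best_score))              # 3) learning
    return best_action

# Toy example: steer a 1D "dragon" toward position 10.
target = 10.0
propose = lambda s: random.uniform(-1.0, 1.0)     # candidate velocity
evaluate = lambda s, a: -abs((s + a) - target)    # closer to target is better
state, memory = 0.0, []
for _ in range(30):
    state += self_regulated_step(state, propose, evaluate, memory)
print(f"final position: {state:.2f} (target {target})")
```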
If you enjoyed this episode and you wish to watch our other videos in early access or get your name immortalized in the video description, please consider supporting us on Patreon through patreon.com slash two minute papers. The link is available in the video description, and this way, we can make better videos for you. We also support cryptocurrencies; the addresses are also available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. When it comes to image classification tasks, in which the input is a photograph and the output is a decision as to what is depicted in this photo, neural network-based learning solutions became more accurate than any other computer program we humans could possibly write by hand. Because of that, the question naturally arises: what do these neural networks really do inside to make this happen? This article explores new ways to visualize the inner workings of these networks, and since it was published in the Distill journal, you can expect beautiful and interactive visualizations that you can also play with if you have a look in the video description. It is so good, I really hope that more modern journals like this appear in the near future. But back to our topic. Wait a second, we already had several videos on neural network visualization before, so what is new here? Well, let's see. First, we have looked at visualizations for individual neurons. This can be done by starting from a noisy image and adding slight modifications to it in a way that makes a chosen neuron extremely excited. This results in these beautiful colored patterns. I absolutely love, love, love these patterns; however, this misses all the potential interactions between the neurons, of which there are quite many. With this, we have arrived at pairwise neuron activations, which shed more light on how these neurons work together. Another one of those beautiful patterns. This is, of course, somewhat more informative. Intuitively, if visualizing individual neurons was equivalent to looking at a sad little line, the pairwise interactions would be observing 2D slices in a space. However, we are still not seeing too much of this space of activations, and the even bigger issue is that this space is not our ordinary 3D space, but a high-dimensional one. Visualizing spatial activations gives us more information about the interactions between not two, but more neurons, which brings us closer to a full-blown visualization. However, this new activation atlas technique is able to provide us with even more extra knowledge. How? Well, you see here with the dots that it provides us a denser sampling of the most likely activations, and this leads to a more complete, bigger-picture view of the inner workings of the neural network. This is what it looks like if we run it on one image. It also provides us with way more extra value, because so far we have only seen how the neural network reacts to one image, but this method can be extended to see its reaction to not one, but one million images. You can see an example of that here. What's more, it can also unveil weaknesses in the neural network. Have a look at this amazing example, where the visualization uncovers that we can make this neural network misclassify a gray whale as a great white shark, and all we need to do is just brazenly put a baseball in the image. Not a beautiful montage, is it? Well, that's not a drawback, that's exactly the point. No finesse is required, and the network is still fooled by this poorly edited adversarial image. We can also trace paths in this atlas, which reveal how the neural network decides whether one or multiple people are in an image, or how to tell a watery terrain from a rocky cliff.
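For reference, here is a minimal sketch of the feature visualization procedure mentioned above: start from noise and repeatedly nudge the image so a chosen neuron fires more strongly. To keep it self-contained, the "neuron" below is just a linear filter whose gradient we know in closed form; a real network would require backpropagation through all of its layers.

```python
import numpy as np

def activation_maximization(neuron_weights, steps=200, lr=0.1, seed=0):
    """Feature visualization by gradient ascent: start from a noisy image
    and push it toward whatever excites the chosen neuron the most.
    Since the neuron here is linear, activation = sum(w * image) and its
    gradient with respect to the image is simply w."""
    rng = np.random.default_rng(seed)
    image = rng.normal(scale=0.1, size=neuron_weights.shape)
    for _ in range(steps):
        image += lr * neuron_weights          # gradient ascent step
        image = np.clip(image, 0.0, 1.0)      # keep pixels in a valid range
    return image

# A toy 8x8 "edge detector" neuron: bright left half, dark right half.
w = np.concatenate([np.ones((8, 4)), -np.ones((8, 4))], axis=1)
vis = activation_maximization(w)
print(np.round(vis, 1))   # the left half saturates to 1, the right to 0
```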
Again, we have only scratched the surface here, and you can play with these visualizations yourself, so make sure to have a closer look at the paper through the link in the video description. You won't regret it. Let me know in the comments section how it went. Thanks for watching, and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. I know for a fact that some of you remember our first video on image translation, which was approximately three years and 250 episodes ago. This was a technique where we took an input painting and a labeling of this image that shows what kind of objects are depicted, and then we could start editing this labeling, and out came a pretty neat image that satisfies these labels. Then came pix2pix, another image translation technique, which in some cases only required a labeling; the source photo was not required, because these features were learned from a large amount of training samples. And it could perform really cool things, like translating a landscape into a map, or sketches to photos, and more. Both of these works were absolutely amazing, and I always say, two more papers down the line, and we are going to have much higher resolution images. So here is the paper that is, in fact, two more papers down the line. Let's see what it can do. I advise you to hold on to your papers for this one. The input is again a labeling, which we can draw ourselves, and the output is a hopefully photorealistic image that adheres to these labels. I like how first only the silhouette of the rock is drawn, so we have this hollow thing on the right that is not very realistic, and then it is now filled in with the bucket tool, and there you go. It looks amazing. It synthesizes a relatively high-resolution image, and we finally have some detail in there too. But of course, there are many possible images that correspond to this input labeling. How do we control the algorithm to follow our artistic goals? Well, you remember from the first work I've shown you that we could do that by adding an additional image as an input style. Well, look at that. We don't even need to engage in that here, because we can choose from a set of input styles that are built into the algorithm, and we can switch between them almost immediately. I think the results speak for themselves, but note that not only the visual fidelity but also the alignment with the input labels is superior to previous approaches. Of course, to perform this, we need a large amount of training data, where the inputs are labels and the outputs are the photorealistic images. So how do we generate such a dataset? Drawing a bunch of labels and asking artists to fill them in sounds like a crude and expensive idea. Well, of course, we can do it for free by thinking the other way around. Let's take a set of photorealistic images and use already existing algorithms to create a labeling for them. If we can do that, we'll have as many training samples as we have images, in other words, more than enough to train an amazing neural network. Also, the main part of the magic in this new work is using a new kind of layer for normalizing information within this neural network, which adapts better to our input data than the previously used batch normalization layers. This is what makes the outputs more crisp and does not let semantic information be washed away in these images. If you have a closer look at the paper in the video description, you will also find a nice evaluation section with plenty of comparisons to previous algorithms, and according to the authors, the source code will be released soon as well. As soon as it comes out, everyone will be able to dream up beautiful photorealistic images and get them out almost instantly. What a time to be alive.
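Here is a rough sketch of what such a spatially-adaptive normalization layer might look like. This is not the authors' implementation: in the real method, the per-pixel scale and shift maps are produced by small convolutions over the segmentation map, while here they are simply passed in.

```python
import numpy as np

def spatially_adaptive_norm(features, gamma_map, beta_map, eps=1e-5):
    """Normalize the activations as usual, but let the scale and shift vary
    per pixel, driven by the input label map, so semantic information is
    not washed out by the normalization.

    features:   (C, H, W) activations
    gamma/beta: (C, H, W) spatially varying modulation maps
    """
    mean = features.mean(axis=(1, 2), keepdims=True)
    std = features.std(axis=(1, 2), keepdims=True)
    normalized = (features - mean) / (std + eps)        # per-channel normalization
    return gamma_map * normalized + beta_map            # pixel-wise modulation

# Toy input: 2 channels of 4x4 activations, modulated by a binary label map.
rng = np.random.default_rng(2)
x = rng.normal(size=(2, 4, 4))
labels = np.zeros((4, 4))
labels[:, 2:] = 1.0                                     # "sky" on the right half
gamma = 1.0 + labels[None]                              # scale sky regions more
beta = 0.5 * labels[None]
print(spatially_adaptive_norm(x, gamma, beta).shape)    # (2, 4, 4)
```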
If you have enjoyed this episode and would like to support us, please click one of the Amazon affiliate links in the video description and buy something that you are looking to buy on Amazon anyway. You don't lose anything, and this way we get a small kickback, which is a great way to support the series so we can make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is an article from the Distill journal, so expect a lot of intuitive and beautiful visualizations. And it is about recurrent neural networks. These are neural network variants that are specialized to deal with sequences of data. For instance, processing and completing text is a great example usage of these recurrent networks. So, why is that? Well, if we wish to finish a sentence, we are not only interested in the latest letter in this sentence, but several letters before that, and of course, the order of these letters is also of utmost importance. Here you can see, with the green rectangles, which previous letters these recurrent neural networks memorize when reading and completing our sentences. LSTM stands for Long Short-Term Memory, and GRU means Gated Recurrent Unit; both are recurrent neural networks. And you see here that the nested LSTM doesn't really look back further than the current word we are processing, while the classic LSTM almost always memorizes a lengthy history of previous words. And now, look: interestingly, with the GRU, when looking at the start of the word "grammar" here, we barely know anything about this new word, so it memorizes the entire previous word, as it may be the most useful information we have at the time. And now, as we proceed a few more letters into this word, it mostly shifts its attention to a shorter segment, that is, the letters of this new word we are currently writing. Luckily, the article is even more interactive, meaning that you can also add a piece of text here and see how the GRU network processes it. One of the main arguments of this paper is that when comparing these networks against each other in terms of quality, we shouldn't only look at the output text they generate. For instance, it is possible for two models that work quite differently to have a very similar accuracy and score on these tests. The author argues that we should look beyond these metrics and look at this kind of connectivity information as well. This way, we may find useful pieces of knowledge, like the fact that the GRU is better at utilizing longer-term contextual understanding. A really cool finding indeed, and I am sure this will also be a useful visualization tool when developing new algorithms and finding faults in previous ones. Love it! Thanks for watching and for your generous support, and I'll see you next time!
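For completeness, here is one step of a GRU written out in plain numpy, following the standard gating equations. The random weights are, of course, placeholders for trained parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One step of a Gated Recurrent Unit. The gates decide how much of the
    previous hidden state (the 'memory' over earlier letters) to keep versus
    overwrite with new input. W, U, b each hold three stacked parameter
    blocks: update gate, reset gate, and candidate state."""
    Wz, Wr, Wh = W                                  # input weights, (hidden, input) each
    Uz, Ur, Uh = U                                  # recurrent weights, (hidden, hidden)
    bz, br, bh = b
    z = sigmoid(Wz @ x + Uz @ h + bz)               # update gate: keep vs. rewrite
    r = sigmoid(Wr @ x + Ur @ h + br)               # reset gate: how far to look back
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)   # candidate new memory
    return (1.0 - z) * h + z * h_tilde              # blend old and new state

# Run a random 5-step character sequence through a 4-unit GRU.
rng = np.random.default_rng(3)
n_in, n_h = 8, 4
W = rng.normal(size=(3, n_h, n_in))
U = rng.normal(size=(3, n_h, n_h))
b = np.zeros((3, n_h))
h = np.zeros(n_h)
for _ in range(5):
    h = gru_step(rng.normal(size=n_in), h, W, U, b)
print(np.round(h, 3))
```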
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is an incredible paper from OpenAI, in which the goal is to teach an AI to read a piece of text and perform common natural language processing operations on it, for instance, answering questions, completing text, reading comprehension, summarization, and more. And not only that, but additionally, the AI has to be able to perform these tasks with as little supervision as possible. This means that we seek to unleash the algorithm, which they call GPT-2, to read the internet and learn the intricacies of our language by itself. To perform this, of course, we need a lot of training data, and here the AI reads 40 gigabytes of internet text, which is 40 gigs of non-binary plain-text data, a stupendously large amount of text. It is always hard to put these big numbers in context, so as an example, to train similar text completion algorithms, AI people typically reach out for a text file containing every significant work of Shakespeare himself, and this file is approximately five megabytes. So the 40 gigabytes basically means an amount of text that is 8,000 times the size of Shakespeare's works. That's a lot of text. And now, let's have a look at how it fares with the text completion part. This part was written by a human, quoting: in a shocking finding, scientists discovered a herd of unicorns living in a remote, previously unexplored valley in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English. And the AI continued the text the following way, quoting a short snippet of it: the scientist named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science. Whoa! Now note that this is clearly not perfect, if there is even such a thing as a perfect continuation, and it took 10 tries, which means that the algorithm was run 10 times and the best result was cherry-picked and recorded here. And despite all of this, this is a truly incredible result, especially given that the algorithm learns on its own. After giving it a piece of text, it can also answer questions in a quite competent manner. Worry not, later in this video, I will show you more of these examples and likely talk over them, so if you are curious, feel free to pause the video while you read the prompts and their completions. The validation part of the paper reveals that this method is able to achieve state-of-the-art results on several language modeling tasks, and you can see here that we still shouldn't expect it to match a human in terms of reading comprehension, which is the question-answering test. More on that in a moment. So, there are plenty of natural language processing algorithms out there that can perform some of these tasks. In fact, some articles already stated that there is not much new here; it is just the same problem, but stated in a more general manner and with more compute. Aha! It is not the first time that this has happened. Remember our video by the name The Bitter Lesson? I've put a link to it in the video description, but in case you missed it, let me quote how Richard Sutton addressed this situation.
The bitter lesson is based on the historical observations that, one, AI researchers have often tried to build knowledge into their agents; two, this always helps in the short term and is personally satisfying to the researcher; but three, in the long run it plateaus and even inhibits further progress; and four, breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach. So, what is the big lesson here? Why is GPT-2 so interesting? Well, big lesson number one: this is one of the clearer cases of what the quote was talking about, where we can do a whole lot given a lot of data and compute power, and we don't need to insert too much additional knowledge into our algorithms. And lesson number two: as a result, this algorithm becomes quite general, so it can perform more tasks than most other techniques. This is an amazing value proposition. I will also add that not every learning technique scales well when we add more compute. In fact, you can see here yourself that even GPT-2 plateaus on the summarization task. Making sure that these learning algorithms scale well is a great contribution in and of itself and should not be taken for granted. There has been a fair bit of discussion on whether OpenAI should publish the entirety of this model. They opted to release a smaller part of the source code and noted that they are aware that the full model could be used for nefarious purposes. Why did they do this? What is the matter with everyone having an AI with subhuman-level reading comprehension? Well, so far we have only talked about quality. But another key part is quantity. And boy, are these learning methods superhuman in terms of quantity. Just imagine that they can write articles with a chosen topic and sentiment all day long and much quicker than human beings. Also note that the blueprint of the algorithm is described in the paper, and a top-tier research group is expected to be able to reproduce it. So does one release the full source code and models or not? This is a quite difficult question. We need to keep publishing both papers and source code to advance science, but we also have to find new ways to do it in an ethical manner. This needs more discussion and would definitely be worthy of a conference-style meeting. Or more. There is so much to talk about, and so far we have really only scratched the surface, so make sure to have a look in the video description. I left a link to the paper and some more super interesting reading materials for you. Make sure to check them out. Also, just a quick comment on why this video came so late after the paper appeared. Since there were a lot of feelings and intense discussion on whether the algorithm should be published or not, I was looking to wait until the dust settled and there was enough information out there to create a sufficiently informed video for you. This, of course, means that we are late to the party and missed out on a whole lot of views and revenue, but that's okay. In fact, that's what we'll keep doing going forward to make sure you get the highest quality information that I can provide. If you have enjoyed this episode and would like to help us, please consider supporting us on Patreon. Remember our model: a dollar a month is almost nothing, but it keeps the papers coming. And there are hundreds of papers on my reading list.
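As an aside, the text completion shown earlier boils down to a simple sampling loop: ask the model for a distribution over the next token, sample one, append it, and repeat. Below is a sketch of that loop with a made-up four-word "model" standing in for GPT-2; the vocabulary and probabilities are purely illustrative.

```python
import numpy as np

def sample_continuation(prompt, next_token_probs, vocab, length=10, seed=4):
    """The kind of next-token loop a GPT-2 style model runs: repeatedly ask
    the model for a distribution over the vocabulary given the text so far,
    sample one token (rather than always taking the most likely one),
    append it, and feed the longer text back in."""
    rng = np.random.default_rng(seed)
    tokens = list(prompt)
    for _ in range(length):
        probs = next_token_probs(tokens)           # the model's prediction
        tokens.append(rng.choice(vocab, p=probs))  # sample, don't argmax
    return " ".join(tokens)

# A made-up 4-word "language model" that only looks at the last token.
vocab = np.array(["unicorns", "spoke", "perfect", "english"])
bigram = {"unicorns": [0.0, 0.8, 0.1, 0.1], "spoke":   [0.1, 0.0, 0.7, 0.2],
          "perfect":  [0.1, 0.1, 0.0, 0.8], "english": [0.6, 0.2, 0.2, 0.0]}
model = lambda toks: np.array(bigram[toks[-1]])
print(sample_continuation(["unicorns"], model, vocab, length=8))
```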
As always, we are available through patreon.com slash two minute papers, and the link is also available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today we are going to talk a bit about the glory and the woes of cloth simulation programs. In these simulations, we have several 3D models that are built from up to several hundreds of thousands of triangles. As they move in time, the interesting part of the simulation is whenever collisions happen; however, evaluating how these meshes collide is quite difficult and time-consuming. Basically, we have to tell an algorithm that we have a piece of cloth with 100,000 connected triangles here and another one there, and now have a look and tell me which collides with which and how they bend and change in response to these forces. And don't forget about friction and repulsive forces. Also, please be accurate, because every small error adds up over time, and do it several times a second so we can have a look at the results interactively. Well, this is a quite challenging problem, and it takes so long to compute that 70 to 80% of the total time taken to perform the simulation is spent on collision handling. So how can we make it not take forever? Well, one way would be to try to make sure that we can run this collision handling step on the graphics card. This is exactly what this work does, and in order to do this, we have to make sure that all of these evaluations can be performed in parallel. Of course, this is easier said than done. Another difficulty is choosing the appropriate time steps. These simulations are run in a way that we check and resolve all of the collisions, and then we can advance the simulation forward by a tiny amount. This amount is called a time step, and choosing the appropriate time step has always been a challenge. You see, if we set it too large, we will be done faster and compute less; however, we will almost certainly miss some collisions because we skipped over them. The simulation may end up in a state that is so incorrect that it is impossible to recover from, and we have to throw the entire thing out. If we set it too low, we get a more robust simulation; however, it will take many hours to days to compute. To remedy this, this technique is built in a way such that we can use larger time steps. That's excellent news. Also, the collision computation part is now up to nine times faster, and if we look at the cloth simulation as a whole, that can be made over three times faster. As you see here, this is especially nice because we can test how these garments react to our manipulations at eight to ten frames per second. If you have a closer look at the paper, you will find another key observation, which states that most of the time, only a small subregion of the simulated cloth undergoes deformation due to response forces, and keeping track of this contributed to cutting down the simulation time significantly. Thanks for watching and for your generous support, and I'll see you next time.
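Here is a toy sketch of the time-step dilemma described in this episode: take the largest step we can, and when collision handling reports trouble, retry with a smaller one. The solver callbacks are placeholders; the paper's actual scheme is far more sophisticated.

```python
def simulate(total_time, dt_max, step, collisions_resolved):
    """Advance a simulation with an adaptive time step: commit a step only
    when collision handling succeeds at the current dt, otherwise retry
    with a smaller step instead of corrupting the whole simulation.
    `step` and `collisions_resolved` stand in for the solver's routines."""
    t, dt = 0.0, dt_max
    while t < total_time:
        if collisions_resolved(t, dt):
            step(t, dt)                     # commit this step
            t += dt
            dt = min(dt * 2.0, dt_max)      # cautiously grow dt back
        else:
            dt *= 0.5                       # too coarse: retry smaller
            if dt < 1e-6:
                raise RuntimeError("time step collapsed; simulation stuck")

# Toy run: pretend collisions fail whenever dt is above 1/120 s.
log = []
simulate(0.1, 1.0 / 30.0,
         step=lambda t, dt: log.append((round(t, 4), round(dt, 5))),
         collisions_resolved=lambda t, dt: dt <= 1.0 / 120.0)
print(len(log), "steps taken, first few:", log[:3])
```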
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Before we start, I'd like to tell you that this video is not about a paper, and it is not going to be two minutes. Welcome to Two Minute Papers. This piece bears the name The Bitter Lesson and was written by Richard Sutton, a legendary Canadian researcher who has contributed a great deal to reinforcement learning research. And what a piece this is. It is a short article on how we should do research, and ever since I read it, I couldn't stop thinking about it, and as a result, I couldn't not make a video on this topic. We really have to talk about this. It takes less than five minutes to read, so before we talk about it, you can pause this video and click the link to it in the video description. So in this article, he makes two important observations. Number one, he argues that the best performing learning techniques are the ones that can leverage computation, or in other words, methods that improve significantly as we add more compute power. Long ago, people tried to encode lots of human knowledge of strategies into their Go AIs, but did not have enough compute power to make a truly great algorithm. And now we have AlphaGo, which contains minimal information about Go itself, and it is better than the best human players in the world. And number two, he recommends that we put as few constraints on the problem as possible. He argues that we shouldn't try to rebuild the mind, but try to build a method that can capture arbitrary complexity and scale it up with hardware. Don't try to make it work like your brain; make something as general as possible, make sure it can leverage computation, and it will come up with something that is way better than our brain. So in short, keep the problem general and don't encode your knowledge of the domain into your learning algorithms. The weight of this sentence is not to be underestimated, because these seemingly simple observations sound really counterintuitive. This seemingly encourages us to do the exact opposite of what we are currently doing. Let me tell you why. I have fond memories of the early lectures I attended in cryptography, where we had a look at ciphertexts. These are very much like the encrypted messages that children like to write each other at school, which look like nonsense to the unassuming teacher, but can be easily decoded by another child when provided with a key. This key describes which symbol corresponds to which letter. Let's assume that one symbol means one letter; but if we don't have any additional knowledge, this is still not an easy problem to crack. But in this course, soon, we coded up algorithms that were able to crack messages like this in less than a second. How exactly? Well, by inserting additional knowledge into the system. For instance, we know the relative frequency of each letter in every language. In English, the letter E is the most common by far, and then come T, A, and the others. The fact that we are not seeing letters, but symbols, doesn't really matter, because we just look at the most frequent symbol in the ciphertext and we immediately know that, okay, that symbol is going to be the letter E, and so on. See what we have done here? Just by inserting a tiny bit of knowledge, suddenly a very difficult problem turned into a trivial problem. So much so that anyone can implement this after their second cryptography lecture. And somehow Richard Sutton argues that we shouldn't do that. Doesn't that sound crazy? So what gives?
Well, let me explain through an example from light transport research that demonstrates his point. Path tracing is one of the first and simplest algorithms in the field, which in many regards is vastly inferior to Metropolis light transport, a much smarter algorithm. However, with our current powerful graphics cards, we can compute so many more rays with path tracing that in many cases it wins over Metropolis. In this case, compute reigns supreme. The hardware scaling out-muscles the smarts, and we haven't even talked about how much easier it is for engineers to maintain and improve a simpler system. The area of natural language processing has many decades of research on teaching machines how to understand, simplify, correct, or even generate text. After so many papers and handcrafted techniques, which insert our knowledge of linguistics into our methods, who would have thought that OpenAI would be able to come up with a relatively simple neural network, with so little prior knowledge, that is able to write articles that sound remarkably lifelike? We will talk about this method in more detail in this series soon. And here comes the bitter lesson. Doing research the classical way, by inserting knowledge into a solution, is very satisfying. It feels right, it feels like doing research and progressing, and it makes it easy to show in a new paper what exactly the key contributions are. However, it may not be the most effective way forward. Quoting the article, and I recommend that you pay close attention to this: the bitter lesson is based on the historical observations that, one, AI researchers have often tried to build knowledge into their agents; two, this always helps in the short term and is personally satisfying to the researcher; but three, in the long run, it plateaus and even inhibits further progress; and four, breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach. In our cryptography problem from earlier, of course, the letter frequency solution and other linguistic tricks are clearly much, much better than a solution that doesn't know anything about the domain. Of course. However, when we later have 100 times faster hardware, this knowledge may actually inhibit finding a solution that is way, way better. This is why he also claims that we shouldn't try to build intelligence by modeling our brain in a computer simulation. It's not that the our-brain approach doesn't work. It does, on the short run, but on the long run, we will be able to add more hardware to a learning algorithm, and it will find more effective structures to solve problems, and it will eventually out-muscle our handcrafted techniques. In short, this is the lesson: when facing a learning problem, keep your domain knowledge out of the solution and use more compute. More compute gives us more learning, and more general formulations give us more chances to find something relevant. So, this is indeed a harsh lesson. This piece sparked great debates on Twitter. I have seen great points for and against this sentiment. What do you think? Let me know in the comments, as everything in science, this piece should be subject to debate and criticism, and therefore, I'd love to read as many people's takes on it as possible. And this piece has implications for my thinking as well.
Please allow me to add three more personal notes that kept me up at night in the last few days. Note number one: the bottom line is that whenever we build a new algorithm, we should always bear in mind which parts would be truly useful if we had 100 times the compute power that we have now. Note number two: a corollary of this thinking is that, arguably, the hardware engineers who make these new and more powerful graphics cards may be contributing at the very least as much to AI as most of AI research does. And note number three: to me, it feels like this almost implies that it is best to join the big guys, where all the best hardware is. I work in an amazing small-to-mid-size lab at the Technical University of Vienna, and in the last few years, I have given relatively little consideration to the invitations from some of the more coveted and well-funded labs. Was it a mistake? Should I change that? I really don't know for sure. If for some reason you haven't read the piece at the start of the video, make sure to do it after watching this. It's really worth it. In the meantime, interestingly, the non-profit AI research lab OpenAI also established a for-profit, or what they like to call a capped-profit company, to be able to compete with the other big guys, like DeepMind and Facebook Reality Labs. I think Richard has a solid point here. Thanks for watching and for your generous support, and I'll see you next time.
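To make the cryptography example from this episode concrete, here is the kind of letter-frequency attack anyone can implement after their second lecture. This is the "tiny bit of inserted knowledge" in action; as the comments note, short texts decode imperfectly, and more data helps.

```python
from collections import Counter

# English letters from most to least frequent -- the inserted domain knowledge.
ENGLISH_FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz"

def frequency_guess(ciphertext):
    """Guess a substitution key by aligning symbol frequencies with English
    letter frequencies: the most common cipher symbol is mapped to 'e', the
    next to 't', and so on. Works well on long texts in well under a second."""
    symbols = [c for c in ciphertext if c.isalpha()]
    ranked = [sym for sym, _ in Counter(symbols).most_common()]
    key = dict(zip(ranked, ENGLISH_FREQ_ORDER))
    return "".join(key.get(c, c) for c in ciphertext)

# Demo with a Caesar-shifted sentence (a substitution cipher in disguise).
plain = "meet me at the old bridge at seven in the evening"
cipher = "".join(chr((ord(c) - 97 + 3) % 26 + 97) if c.isalpha() else c
                 for c in plain)
print(frequency_guess(cipher))  # short texts decode only partially correctly
```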
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. You know that when I see a piece of fluid, I can't resist making videos about it. I just can't. Oh my goodness. Look at that. These animations were created using the material point method, or MPM in short, which is a hybrid simulation method that is able to simulate not only substances like water and honey, but also snow, granular solids, cloth, and many, many other amazing things that you see here. Before you ask, the hybrid part means that it uses both particles and grids during the computations. Unfortunately, it is very computationally demanding, so it takes forever to get these simulations ready. And typically, in my simulations, after this step is done, I almost always find that the objects did not line up perfectly, so I can start the whole process again. Oh well. This technique has multiple stages, uses multiple data structures in many of them, and often we have to wait for the results of one stage to be able to proceed to the next. This is not that much of a problem if we seek to implement it on our processor, but it would be way, way faster if we could run it on the graphics card, as long as we map these problems onto it properly. However, due to these stages waiting for each other, it is immensely difficult to use the heavily parallel computing capabilities of the graphics card. So here you go: this technique enables running MPM on your graphics card efficiently, resulting in an up to 10 times improvement over previous works. As a result, this granular scene has more than 6.5 million particles on a very fine grid and can be simulated in only around 40 seconds per frame. And not only that, but the numerical stability of this technique is also superior to previous works, and it is thereby able to correctly simulate how the individual grains interact in this block of sand. Here is a more detailed breakdown of the number of particles, grid resolutions, and the amount of computation time needed to simulate each step. I am currently in the middle of a monstrous fluid simulation project, and oh man, I wish I had these numbers for the computation time. This gelatin scene takes less than 7 seconds per frame to simulate with a similar number of particles. Look at that heavenly gooey thing. It probably tastes like strawberries. And if you enjoyed this video and you wish to help us teach more people about these amazing papers, please consider supporting us on Patreon. In return, we can offer you early access to these episodes, or you can also get your name in the video description of every episode as a key supporter. You can find us at patreon.com slash 2 Minute Papers. Thanks for watching and for your generous support, and I'll see you next time.
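As a taste of why MPM is called a hybrid method, here is a toy particle-to-grid transfer step in Python, with nearest-node weighting instead of the smoother B-spline kernels real solvers use. The scattered, potentially conflicting writes in this loop are exactly what makes an efficient GPU version hard.

```python
import numpy as np

def particles_to_grid(positions, velocities, masses, grid_n, cell):
    """The particle-to-grid scatter at the heart of MPM's hybrid design:
    every particle splats its mass and momentum onto nearby grid nodes
    (here, just the nearest node, as a toy weighting)."""
    grid_mass = np.zeros((grid_n, grid_n))
    grid_mom = np.zeros((grid_n, grid_n, 2))
    for p, v, m in zip(positions, velocities, masses):
        i, j = (p / cell).astype(int)          # nearest cell (toy weighting)
        grid_mass[i, j] += m
        grid_mom[i, j] += m * v
    # Grid velocity = momentum / mass wherever there is any mass.
    vel = np.zeros_like(grid_mom)
    nonzero = grid_mass > 0
    vel[nonzero] = grid_mom[nonzero] / grid_mass[nonzero][:, None]
    return grid_mass, vel

rng = np.random.default_rng(5)
pos = rng.uniform(0, 1, size=(1000, 2))        # 1000 particles in a unit square
vel = rng.normal(size=(1000, 2))
m = np.full(1000, 1e-3)
gm, gv = particles_to_grid(pos, vel, m, grid_n=8, cell=1.0 / 8.0)
print("total mass conserved:", np.isclose(gm.sum(), m.sum()))
```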
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Please meet NeuroSAT. The name of this technique tells us what it is about: the "neuro" part means that it is a neural network-based learning method, and the "SAT" part means that it is able to solve satisfiability problems. This is a family of problems where we are given a logic formula and we have to decide whether its variables can be chosen in a way such that the expression comes out true. That of course sounds quite nebulous, so let's have a look at a simple example. This formula says that F is true if A is true and, at the same time, not B is true. So if we choose A to be true and B to be false, this expression is also true, or in other words, this problem is satisfied. Having a good solution for SAT is already great for solving many problems involving logic; however, the more interesting part is that it can help us solve an enormous set of other problems. For instance, ones that involve graphs describing people in social networks, and many others that you see here. This can be done by performing something that mathematicians like to call a polynomial-time reduction, or Karp reduction, which means that many other problems that seem completely different can be converted into a SAT problem. In short, if you can solve SAT well, you can solve all of these problems well. This is one of the amazing revelations I learned about during my mathematical curriculum. The only problem is that when trying to solve big and complex SAT problems, we can often not do much better than random guessing, which for some of the nastiest cases takes so long that it practically is never going to finish. And get this, interestingly, this work presents us with a neural network that is able to solve problems of this form, and not just tiny, tiny baby problems like this one, but much bigger ones. And this really shouldn't be possible. Here's why. To train a neural network, we require training data. The input is a problem definition and the output is whether this problem is satisfiable. And we can stop right here, because here lies our problem. This doesn't really make any sense, because we just said that it is difficult to solve big SAT problems. And here comes the catch. This neural network learns from SAT problems that are small enough to be solved by traditional handcrafted methods. We can create arbitrarily many training examples with these solvers, albeit these are all small ones. And that's not it, there are three key factors here that make this technique really work. One, it learns from only single-bit supervision. This means that the output that we talked about is only yes or no. It isn't shown the solution itself. That's all the algorithm learns from. Two, when we request a solution from the neural network, it not only tells us the same binary yes-no answer, but it can go beyond that, and when the problem is satisfiable, it will almost always provide us with the exact solution. It is not only able to tell us whether the problem can be solved, but it almost always provides a possible solution as well. That is indeed remarkable. This image may be familiar from the thumbnail, and here you can see a visualization of how the neural network's inner representation of these variables changes over time as it sees a satisfiable or unsatisfiable problem, and how it comes to its own conclusions. And three, when we ask the neural network for a solution, it is able to defeat problems that are larger and more difficult than the ones it has trained on.
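To make the satisfiability concept more concrete, here is a minimal brute-force SAT checker in Python. This is emphatically not how NeuroSAT or modern solvers work; it only illustrates the problem definition on the tiny F = A and (not B) example from this episode, and also why large instances are so hard: this exhaustive approach needs 2^n evaluations for n variables.

```python
from itertools import product

def is_satisfiable(formula, num_vars):
    """Brute-force SAT check: try all 2^n truth assignments.
    `formula` maps a tuple of booleans to a boolean."""
    for assignment in product([False, True], repeat=num_vars):
        if formula(assignment):
            return True, assignment  # found a satisfying assignment
    return False, None

# The episode's example: F = A and (not B)
sat, witness = is_satisfiable(lambda v: v[0] and (not v[1]), num_vars=2)
print(sat, witness)  # True (True, False) -> A = true, B = false satisfies F
```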
So, this means that we train it on simple problems that we can solve ourselves, and using these as training data, we will be able to solve much harder problems that we can't solve ourselves. This is crucial because otherwise, this neural network would only be as good as the handcrafted algorithm used to train it, which, in other words, would not be useful at all. Isn't this amazing? I will note that there are handcrafted algorithms that are able to match and often outperform NeuroSAT. However, those took decades of research work to invent, whereas this is a learning-based technique that looks at as little information as the problem definition and whether it is satisfiable, and it is able to come up with a damn good algorithm by itself. What a time to be alive! This video has been kindly supported by my friends at ARM Research. Make sure to check them out through the link in the video description. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. In this series, we talk quite a bit about neural network-based learning methods that are able to generate new images for us from some sort of sparse description, like a written sentence, or a set of controllable parameters. These can enable us mere mortals without artistic skills to come up with novel images. However, one thing that comes up with almost every single one of these techniques is the lack of artistic control. You see, if we provide a very coarse input, there are many, many different ways for the neural networks to create photorealistic images from them. So how do we get more control over these results? An earlier paper from NVIDIA generated human faces for us and used a latent space technique that allows us some more fine-grained control over the images. It is beyond amazing. But these are called latent variables because they represent the inner working process of the neural network, and they do not exactly map to our intuition of facial features in reality. And now, have a look at this new technique that allows us to edit the geometry of the jawline of a person, put a smile on someone's face in a more peaceful way than seen in some Batman movies, or remove the sunglasses and add some crazy hair at the same time. Even changing the hair of someone while adding an earring with a prescribed shape is also possible. Whoa! And I just keep talking and talking about artistic control, so it's great that these shapes are supported, but what about another important aspect of artistic control, for instance, colors? Yep, that is also supported. Here you can see that the color of the woman's eyes can be changed, and the technique also understands the concept of makeup as well. How cool is that? Not only that, but it is also blazing fast. It takes roughly 50 milliseconds to create these images with a resolution of 512 by 512, so in short, we can do this about 20 times per second. Make sure to have a look at the paper, which also contains a validation section against other techniques and reference results. It is rare that there is such a thing as a reference result for a problem like this, which is really cool, and you will also find a novel style-loss formulation that makes all this crazy wizardry happen. No web app for this one; however, the source code is available free of charge and under a permissive license, so let the experiments begin. If you have enjoyed this video and you feel that a bunch of these videos are worth $3 a month, please consider supporting us on Patreon. In return, we can offer you early access to these episodes and more to keep your paper addiction in check. It is truly a privilege for me to be able to keep making these videos. I am really enjoying the journey, and this is only possible because of your support on Patreon. This is why every episode ends with, you guessed it right: thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. Today, we are going to talk about PlaNet, a technique that is meant to solve challenging image-based planning tasks with sparse rewards. Okay, that sounds great, but what do all of these terms mean? The planning part is simple. It means that the AI has to come up with a sequence of actions to achieve a goal, like pole balancing with a cart, teaching a virtual human or cheetah to walk, or hitting this box the right way to make sure it keeps rotating. The image-based part is big. This means that the AI has to learn the same way as a human, and that is, by looking at the pixels of the images. This is a huge difficulty bump, because the AI does not only have to learn to defeat the game itself, but also has to build an understanding of the visual concepts within the game. DeepMind's legendary Deep Q-Learning algorithm was able to learn from pixel inputs, but it was mighty inefficient at doing that, and no wonder: this problem formulation is immensely hard, and it is a miracle that we can muster any solution at all that can figure it out. The sparse reward part means that we rarely get feedback as to how well we are doing at these tasks, which is a nightmare situation for any learning algorithm. The key difference between this technique and classical reinforcement learning, which is what most researchers reach for to solve similar tasks, is that this one uses learned models for the planning; a small sketch of such a planning loop follows at the end of this episode. This means that it does not learn every new task from scratch, but after the first game, whichever it may be, it will have a rudimentary understanding of gravity and dynamics, and it will be able to reuse this knowledge in the next games. As a result, it will get a head start when learning a new game, and is therefore often 50 times more efficient than the previous technique that learns from scratch. And not only that, but it has other really cool advantages as well, which I will tell you about in just a moment. Here you can see that indeed, the blue lines significantly outperform the previous techniques shown with red and green for each of these tasks. I like how the plot is organized in the same grid as the tasks were, as it makes it much more readable when juxtaposed with the video footage. As promised, here are the two really cool additional advantages of this model-based agent. The first is that we don't have to train six separate AIs for all of these tasks, but finally, we can get one AI that is able to solve all six of these tasks efficiently. And second, it can look at as little as five frames of an animation, which is approximately one fifth of a second worth of footage, which is barely anything, and it is able to predict how the sequence would continue with remarkably high accuracy, and over a long time frame, which is quite a challenge. This is an excellent paper with beautiful mathematical formulations. I recommend that you have a look in the video description. The source code is also available free of charge for everyone, so I bet this will be an exciting direction for future research works, and I'll be here to report on it to you. Make sure to subscribe and hit the bell icon to not miss future episodes. Thanks for watching, and for your generous support, and I'll see you next time.
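As promised, here is a minimal sketch of planning with a learned dynamics model, in the spirit of PlaNet's cross-entropy-method planner. Everything here is a simplification under stated assumptions: `model.step` stands in for a hypothetical learned transition-and-reward predictor, and the real method plans in a learned latent space with considerably more machinery around it.

```python
import numpy as np

def plan_action(model, state, horizon=12, candidates=1000,
                elites=100, iters=10, action_dim=2):
    """Cross-entropy method: iteratively refit a Gaussian over action
    sequences toward the ones the learned model predicts to be best."""
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(iters):
        # Sample candidate action sequences from the current Gaussian.
        seqs = mean + std * np.random.randn(candidates, horizon, action_dim)
        returns = np.zeros(candidates)
        for i, seq in enumerate(seqs):
            s, total = state, 0.0
            for a in seq:
                s, r = model.step(s, a)  # hypothetical learned model
                total += r
            returns[i] = total
        # Refit the Gaussian to the best (elite) sequences.
        elite = seqs[np.argsort(returns)[-elites:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0)
    return mean[0]  # execute only the first action, then replan
```

The design choice to execute only the first action and then replan from the newly observed state is what makes this kind of planner robust to an imperfect learned model.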
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Now get this: after defeating chess, Go, and making incredible progress in Starcraft 2, scientists at DeepMind just published a paper where they claim that Hanabi is the next frontier in AI research. And we shall stop right here. I hear you asking me: Károly, after defeating all of these immensely difficult games, now you're trying to tell me that somehow, this silly card game is the next step? Yes, that's exactly what I'm saying. Let me explain. Hanabi is a card game where two to five players cooperate to build five card sequences, and to do that, they are only allowed to exchange very little information. This is also an imperfect information game, which means the players don't have all the knowledge needed to make a good decision. They have to work with what they have and try to infer the rest. For instance, poker is also an imperfect information game because we don't see the cards of the other players, and the game revolves around our guesses as to what they might have. In Hanabi, interestingly, it is the other way around. We see the cards of the other players, but not our own. The players have to work around this limitation by relying on each other, working out communication protocols, and inferring intent in order to win the game. Like in many of the best games, these simple rules conceal a vast array of strategies, all of which are extremely hard to teach to current learning algorithms. In the paper, a free and open-source system is proposed to facilitate further research works and assess the performance of currently existing techniques. The difficulty level of this game can also be made easier or harder at will, from both inside and outside the game. And by inside, I mean that we can set parameters like the number of allowed mistakes that can be made before the game is considered lost. The outside part means that two main game settings are proposed. One, self-play: this is the easier case where the AI plays with copies of itself, therefore it knows quite a bit about its teammates. And two, ad hoc teams can also be constructed, which means that a set of agents need to cooperate that are not familiar with each other. This is immensely difficult. When I looked at the paper, I expected that as we have many powerful learning algorithms, they would rip through this challenge with ease, but surprisingly, I found out that even the easier self-play variant severely underperforms compared to the best human players and handcrafted bots. There is plenty of work to be done here, and luckily, you can also run it yourself at home and train some of these agents on a consumer graphics card. Note that it is possible to create a handcrafted program that plays this game well, as we humans already know good strategies. However, this project is about getting several instances of an AI to learn new ways to communicate with each other effectively. Again, the goal is not to get a computer program that plays Hanabi well; the goal is to get an AI to learn to communicate effectively and work together towards a common goal. Much like chess, Starcraft 2, and Dota, Hanabi is still a proxy to be used for measuring progress in AI research. Nobody wants to spend millions of dollars to play card games at work, so the final goal of DeepMind is to reuse this algorithm for other applications where even we humans falter. I have included some more materials on this game in the video description, make sure to have a look.
Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. If you have been watching this series for a while, you know that I am completely addicted to fluid simulations, so it is now time for a new fluid paper. And by the end of this video, I hope you will be addicted too. If we create a virtual world with a solid block and use our knowledge from physics to implement the laws of fluid dynamics, this solid block will indeed start behaving like a fluid. A baseline simulation technique for this will be referred to as FLIP in the videos that you see here, and it stands for fluid implicit particle. These simulations are often being used in the video game industry, in movies, and of course, I cannot resist putting some of them in my papers as test scenes as well. In games, we are typically looking for real-time simulations, and in this case, we can only get a relatively coarse-resolution simulation that lacks fine details, such as droplet formation and splashing. For movies, we want the highest-fidelity simulation possible, with honey coiling, two-way interaction with other objects, wet sand simulations, and all of those goodies; however, these all take forever to compute. This is the bane of fluid simulators. We have talked about a few earlier works that try to learn these laws via a neural network by feeding them a ton of video footage of these phenomena. This is absolutely amazing and is a true game changer for learning-based techniques. So why is that? Well, up until a few years ago, whenever we had a problem that was near impossible to solve with traditional techniques, we often reached out to a neural network or some other learning algorithm to solve it, often with success. However, that is not the case here. Something has changed. What has changed is that we can already solve these problems, but we can still make use of a neural network because it can help us with something that we can already do, but it does it faster and easier. However, some of these techniques for fluids are not yet as accurate as we would like, and therefore haven't yet seen widespread adoption. So here's an incredible idea: why not compute a coarse simulation quickly that surely adheres to the laws of physics, and then fill in the remaining details with a neural network? Again, FLIP is the baseline handcrafted technique, and you can see how the neural-network-infused simulation program on the left, by the name MLFLIP, introduces these amazing details. And if we compare the results with the reference simulation, which took forever, you can see that it is quite similar, and it indeed fills in the right kind of details. In case you are wondering about the training data, it learned the concept of splashes and droplets flying about, you guessed it right, by looking at splashes and droplets flying about. So now we know that it's quite accurate, and the ultimate question is: how fast is it? Well, get this, we can expect a 10 times speedup from this. So this basically means that for every 10 all-nighters I would have to wait for my simulations, I only have to wait one, and if something took only a few seconds, it may now be close to real time with this kind of visual fidelity. You know what? Sign me up. This video has been kindly supported by my friends at ARM Research. Make sure to check them out through the link in the video description. Thanks for watching, and for your generous support, I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This paper describes a new technique to visualize the inner workings of a generator neural network. This is a neural network that is able to create images for us. The key idea here is dissecting this neural network and looking for agreements between a set of neurons and concepts in the output image, such as trees, sky, clouds, and more. This means finding that these neurons are responsible for buildings to appear in the image, and those will generate clouds. Interestingly, such agreements can be found, which means way more than just creating visualizations like this, because it enables us to edit images without any artistic skills. And now, hold on to your papers. The editing part works by forcefully activating and deactivating these units, which corresponds to adding or removing these objects from an image. And look, this means that we can take an already existing image and ask this technique to remove trees from it, or perhaps add more; the same with domes, doors, and more. Wow, this is pretty cool, but you haven't seen the best part yet. Note that so far, the amount of control we have over the image is quite limited. Fortunately, we can take this further and select a region of the image where we wish to add something new. This is suddenly so much more granular and useful. The algorithm seems to understand that trees need to be rooted somewhere and not just appear from thin air. Most of the time, anyway. Interestingly, it also understands that bricks don't really belong here, but if I add them to the side of the building, it continues in a way that is consistent with its appearance. Most of the time, anyway. And of course, it is not perfect. Here, you can see me struggling with this spaghetti monster floating in the air that used to be a tree, and it just refuses to be overwritten. And this is a very important lesson. Most research works are but a step in a thousand-mile journey, and each of them tries to improve upon the previous paper. This means that a few more papers down the line, this will probably take place in HD, perhaps in real time, and with much higher quality. This work also builds on previous knowledge on generative adversarial networks, and whatever the follow-up papers will contain, they will build on knowledge that was found in this work. Welcome to the wonderful world of research. And now, we can all rejoice because the authors kindly made the source code available free for everyone, and not only that, but there is also a web app, so you can also try it yourself. This is an excellent way of maximizing the impact of your research work. Let the experts improve upon it by releasing the source code, and let people play with it, even laymen. You will also find many failure cases, but also cases where it works well, and I think there is value in reporting both, so we learn a little more about this amazing algorithm. So, let's do a little research together. Make sure to post your results in the comment section. I have a feeling that lots of high-quality entertainment materials will surface very soon. I bet the authors will be grateful for the feedback as well. So, let the experiments begin. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In the previous episode, we talked about image classification, which means that we have an image as an input, and we ask a computer to figure out what is seen in this image. Learning algorithms, such as convolutional neural networks, are amazing at it. However, we just found out that even though their results are excellent, it is still quite hard to find out how they come to the decision that an image depicts a dog or a cat. This is in contrast to an old and simple technique that goes by the name Bag of Words. It works a bit like looking for keywords in a document, and by using those, trying to find out what the writing is about. Kind of like the shortcut students like to take for mandatory readings. We have all done it. Now, imagine the same for images, where we slice up the image into small pieces and keep a score on what is seen in these snippets. Floppy ears, black snout, fur. Okay, we're good. We can conclude that we have a dog over here. But wait, I hear what you are saying: Károly, why do we need to digress from AI to Bag of Words? Why talk about this old method? Well, let's look at the advantages and disadvantages, and you will see in a moment. The advantage of bag of features is that it is quite easy to interpret because it is an open book. It gives us the scores for all of these small snippets. We know exactly how a decision is being made. A disadvantage, one would say, is that because it works per snippet, it ignores the bigger spatial relationships in an image, and therefore overall, it must be vastly inferior to a neural network. Right? Well, let's set up an experiment and see. This is a paper from the same group as the previous episode, at the University of Tübingen. The experiment works the following way: let's try to combine bag of features with neural networks by slicing up the image into the same patches, and then feed them into a neural network and ask it to classify them. In this case, the neural network will do many small classification tasks on image snippets instead of one big decision for the full image. The paper discusses that the final classification also involves evaluating heat maps and more. This way, we are hoping that we get a technique where a neural network would explain its decisions much like how bag of features works. For now, let's call these networks BagNets. And now, hold on to your papers, because the results are really surprising. As expected, it is true that looking at small snippets of the image can lead to misunderstandings. For instance, this image contains a soccer ball, but when zooming into small patches, it might seem like this is a cowboy hat on top of the head of this child. However, what is unexpected is that even with this, BagNet produces surprisingly similar results to a state-of-the-art neural network by the name ResNet. This is… Wow. This has several corollaries. Let's start with the cool one. This means that neural networks are great at identifying objects in scrambled images, but humans are not. The reason for that is that when we classify many small patches independently, the ordering of the patches doesn't really matter. We now have a better reason why this is the case: doing all this classification as many small, independent tasks has superpowers when it comes to processing scrambled images.
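Here is a minimal sketch of the bag-of-features idea described above: classify many small patches independently, then average the per-patch class scores into one image-level decision. This is a toy illustration rather than the paper's actual BagNet architecture; `patch_classifier` is a hypothetical stand-in for any model that maps a small image snippet to per-class scores.

```python
import numpy as np

def bagnet_style_classify(image, patch_classifier, patch=33, stride=8):
    """Slide a small window over the image, score each snippet
    independently, and average the scores into one prediction."""
    h, w, _ = image.shape
    logits = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            snippet = image[y:y + patch, x:x + patch]
            logits.append(patch_classifier(snippet))  # per-class scores
    # No spatial relationships are used: the "bag" is just the average.
    return np.mean(logits, axis=0)
```

Note how nothing in the aggregation step knows where a snippet came from, which is exactly why scrambling the image barely hurts this kind of classifier.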
The other, more controversial corollary is that this inevitably means that some results that show the superiority of deep neural networks over the good old bag of features come not from using a superior method, but from careful fine-tuning. Not all results, some results. As always, a good piece of research challenges our underlying assumptions and sometimes, in this case, even our sanity. There's a lot to say about this topic, and we have only scratched the surface, so take this as a thought-provoking idea that is worthy of further discussion. Really cool work. I love it. This video has been supported by Audible. By using Audible, you get two excellent audiobooks, free of charge. I recommend that you click the link in the video description, sign up for free, and check out the book Superintelligence by Nick Bostrom. Some more AI for you, whenever you are stuck in traffic, or have to clean the house. I talked about this book earlier, and I see that many of you Fellow Scholars have been enjoying it. If you haven't read it, make sure to sign up now, because this book discusses how it could be possible to build a superintelligent AI and what such an all-knowing being would be like. You get this book free of charge, and you can cancel at any time. You can't go wrong with this. Head on to the video description and sign up under the appropriate links. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. As convolutional neural network-based image classifiers are able to correctly identify objects in images and are getting more and more pervasive, scientists at the University of Tübingen decided to embark on a project to learn more about the inner workings of these networks. Their key question was whether they really work similarly to humans or not. Now, one way of doing this is visualizing the inner workings of the neural network. This is a research field on its own; I try to report on it to you every now and then, and we talked about some damn good papers on this, with more to come. A different way would be to disregard the inner workings of the neural network, in other words, to treat it like a black box, at least temporarily. But what does this mean exactly? Let's have a look at an example. And in this example, our test subject shall be none other than this cat. Here we have a bunch of neural networks that have been trained on the classical ImageNet dataset, and a set of humans. This cat is successfully identified by all classical neural network architectures and most humans. Now onwards to a grayscale version of the same cat. The neural networks are still quite confident that this is a cat, some humans faltered, but still nothing too crazy going on here. Now let's look at the silhouette of the cat. Whoa! Suddenly, humans are doing much better at identifying the cat than neural networks. This is even more so true when we are only given the edges of the image. However, when looking at a heavily zoomed-in image of the texture of an Indian elephant, neural networks are very confident in their correct guess, where some humans falter. Ha! We have a lead here. It may be that, as opposed to humans, neural networks think more in terms of textures than shapes. Let's test that hypothesis. Step number one: Indian elephant. This is correctly identified. Now, cat. Again, correctly identified. And now, hold on to your papers: a cat with an elephant texture. And there we go. A cat with an elephant texture is still a cat to us humans, but is an elephant to convolutional neural networks. After looking some more at the problem, they found that the most common convolutional neural network architectures that were trained on the ImageNet dataset vastly overvalue textures over shapes. That is fundamentally different from how we humans think. So, can we try to remedy this problem? Is this even a problem at all? Neural networks need not think like humans, but who knows? It's research. We might find something useful along the way. So, how could we create a dataset that would teach a neural network a better understanding of shapes? Well, that's a great question, and one possible answer is: style transfer. Let me explain. Style transfer is the process of fusing together two images, where the content of one image and the style of the other is taken. So now, let's take the ImageNet dataset and run style transfer on each of these images. This is useful because it repaints the textures, but the shapes are mostly left intact. The authors call it the Stylized-ImageNet dataset and have made it publicly available for everyone. This new dataset will no doubt coerce the neural network to build a better understanding of shapes, which will bring it closer to human thinking. We don't know if that is a good thing yet, so let's look at the results. And here comes the surprise.
When training a neural network architecture by the name ResNet-50 jointly on the regular and stylized ImageNet datasets, after a little fine-tuning, they found two remarkable things. One, the resulting neural network now sees more similarly to humans. The blue squares on the right mean that the old thinking is texture-based, but the new neural networks, denoted with the orange squares, are now much closer to the shape-based thinking of humans, which is indicated with the red circles. And now, hold on to your papers, because two, the new neural network also outperforms the old ones in terms of accuracy. Dear Fellow Scholars, this is research at its finest. The authors explored an interesting idea, and look where they ended up. Amazing. If you enjoyed this episode and you feel that a bunch of these videos a month are worth $3, please consider supporting us on Patreon. This helps us become more independent and create better videos for you. You can find us at patreon.com slash two-minute papers, or just click the link in the video description. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. It is time for a position paper. This paper does not have the usual visual fireworks that you see in many of these videos; however, it addresses the cornerstone of scientific publication, which is none other than peer review. When a research group is done with a project, they don't just write up the results and check the paper into a repository; instead, they submit it to a scientific venue, for instance, a journal or a conference. Then the venue finds several other researchers who are willing to go through the work with a fine-tooth comb. In the case of double-blind reviews, both the authors and the reviewers remain anonymous to each other. The reviewers now check whether the results are indeed significant, novel, credible, and reproducible. If the venue is really good, this process is very tough and thorough, and it becomes the scientific version of beating the heck out of someone, but in a constructive manner. If the work is able to withstand serious criticism and ticks the required boxes, it can proceed to get published at this venue. Otherwise, it is rejected. So what we heard so far is that the research work is being reviewed; however, scientists at the Google AI lab raised the issue that the reviewers themselves should also be reviewed. Consider the fact that all scientists are expected to spend a certain percentage of their time to serve the greater good. For instance, throughout my PhD studies, I have reviewed over 30 papers, and I am not even done yet. These paper reviews take place without compensation. Let's call this issue number one for now. Issue number two is the explosive growth of the number of submissions over time at the most prestigious machine learning and computer vision conferences. Have a look here. It is of utmost importance that we create a review system that is as fair as possible; after all, thousands of hours spent on research projects are at stake. Add these two issues together, and we get a system where the average quality of the reviews will almost certainly decrease over time. Quoting the authors: we believe the key issues here are structural. Reviewers donate their valuable time and expertise anonymously as a service to the community with no compensation or attribution, are increasingly taxed by a rapidly increasing number of submissions, and are held to no enforced standards. In Two Minute Papers episode number 84, so more than 200 episodes ago, we discussed the NeurIPS experiment. Leave a comment if you have been around back then and you enjoyed Two Minute Papers before it was cool. But don't worry if this is not the case, this was long ago, so here's a short summary. A large number of papers were secretly disseminated to multiple committees who would review them without knowing about each other, and we would have a look at whether they would accept or reject the same papers. Re-review papers and see if the results are the same, if you will. If we use sophisticated mathematics to create new scientific methods, why not use mathematics to evaluate our own processes? So after doing that, it was found that at a given prescribed acceptance ratio, there was a disagreement for 57% of the papers. So is this number good or bad? Let's imagine a completely hypothetical committee that has no idea what they are doing, and as a review, they basically toss up a coin and accept or reject the paper based on the result of the coin toss. Let's call them the CoinFlip Committee.
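As a quick back-of-the-envelope check of where that coin-flip number comes from (my own reasoning with an assumed acceptance ratio, not a formula quoted from the paper): if a committee accepts a fraction $q$ of submissions at random, then a paper accepted by one committee is rejected by an independent random committee with probability $1-q$. Assuming the conference's acceptance ratio at the time was roughly 23%, we get

\[
P(\text{rejected by committee 2} \mid \text{accepted by committee 1}) = 1 - q \approx 1 - 0.23 = 0.77,
\]

which matches the approximately 77% disagreement discussed next.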
The calculations conclude that the CoinFlip Committee would have a disagreement ratio of about 77%. So: experts, 57% disagreement; CoinFlip Committee, 77% disagreement. And now, to answer whether this is good or bad: this is hardly something to be proud of. The consistency of expert reviewers is significantly closer to a coin flip than to a hypothetical perfect review process. If that is not an indication that we have to do something about this, I am not sure what is. So in this paper, the authors propose two important changes to the system to remedy these issues. Remedy number one: they propose a rubric, a seven-point document to evaluate the quality of the reviews. Again, not only the papers are reviewed, but the reviews themselves. It is similar to the ones used in public schools to evaluate student performance, to make sure the review was objective, consistent, and fair. Remedy number two: reviewers should be incentivized and rewarded for their work. The authors argue that a professional service should be worthy of professional compensation. Now, of course, this sounds great, but this also requires money. Where should the funds come from? The paper discusses several options. For instance, this could be funded through sponsorships, or by asking for a reasonable fee when submitting a paper for peer review and introducing a new fee structure for science conferences. This is a short, five-page paper that is easily readable for everyone and raises excellent points about a very important problem, so needless to say, I highly recommend that you give it a read. As always, the link is in the video description. I hope this video will help raise more awareness of this problem. If we are to create a fair system for evaluating research papers, we had better get this right. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. Finally! I have been waiting for quite a while to cover this amazing paper, which is about AlphaZero. We have talked about AlphaZero before; this is an AI that is able to play chess, Go, and shogi, or in other words, Japanese chess, on a remarkably high level. I will immediately start out by uttering the main point of this work. The point of AlphaZero is not to solve chess or any of these games. Its main point is to show that a general AI can be created that can perform on a superhuman level on not one, but several different tasks at the same time. Let's have a look at this image, where you see a small part of the evaluation of AlphaZero versus Stockfish, an amazing open-source chess engine, which has been consistently at or around the top of computer chess players for many years now. Stockfish has an Elo rating of over 3200, which means that it has a win rate of over 90% against the best human players in the world. Now interestingly, comparing these algorithms is nowhere near as easy as it sounds. This sounds curious, so why is that? For instance, it is not enough to pit the two algorithms against each other and see who ends up winning. It matters what version of Stockfish is used, how many positions the machines are allowed to evaluate, how much thinking time they are allowed, the size of hash tables, the hardware being used, the number of threads being used, and so on. From the side of the chess community, these are the details that matter. However, from the side of the AI researcher, what matters most is to create a general algorithm that can play several different games on a superhuman level. With these constraints, it would really be a miracle if AlphaZero were able to even put up a good fight against Stockfish. So, what happened? AlphaZero played a lot of games that ended up as draws against Stockfish, and not only that, but whenever there was a winner, it was almost always AlphaZero. Insanity. And what is quite remarkable is that AlphaZero only trained for 4 to 7 hours, only through self-play. Comparatively, the development of the current version of Stockfish took more than 10 years. You can see how reliably this AI can be trained: the blue lines show the results of several training runs, and they all converge to the same result with only a tiny bit of deviation. AlphaZero is also not a brute-force algorithm, as it evaluates fewer positions per second than Stockfish. Kasparov put it really well in his article, where he said that AlphaZero works smarter, not harder, than previous techniques. Even Magnus Carlsen, chess grandmaster extraordinaire, said in an interview that during his games he often thinks about what AlphaZero would do in this case, which I found to be quite remarkable. Kasparov also had many good things to say about the new AlphaZero in a, let's say, very Kasparov-esque manner. Also note that the key point is not whether the current version of Stockfish or the one from two months ago was used. The key point is that Stockfish is a brilliant chess engine, but it is not able to play Go or any game other than chess. This is the main contribution that DeepMind was looking for with this work. This AI can master three games at once, and a few more papers down the line, it may be able to master any perfect information game. Oh my goodness, what a time to be alive. We have only scratched the surface in this video. This was only a taste of the paper.
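To give a flavor of why it can afford to evaluate far fewer positions than Stockfish, here is a minimal sketch of the PUCT action-selection rule that AlphaZero-style tree searches are built around: the neural network's prior steers the search toward promising moves instead of brute-forcing everything. Treat this as an illustrative sketch under my own simplifications; the exact constants and bookkeeping vary between implementations.

```python
import math

def select_action(stats, c_puct=1.5):
    """One PUCT selection step, AlphaZero style. `stats` maps each
    candidate action to a dict with:
      N: visit count, Q: mean value so far, P: the network's prior."""
    total_visits = sum(s["N"] for s in stats.values())

    def score(a):
        s = stats[a]
        # Exploration bonus: large for moves the network likes (high P)
        # that have not been visited much yet (low N).
        exploration = c_puct * s["P"] * math.sqrt(total_visits) / (1 + s["N"])
        return s["Q"] + exploration

    return max(stats, key=score)
```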
The evaluation section in the paper is out of this world, so make sure to have a look in the video description, and I am convinced that nearly any question one can possibly think of is addressed there. I also link to Kasparov's editorial on this topic. It is short and very readable. Give it a go. I hope this little taste of AlphaZero inspires you to go out there and explore it yourself. This is the main message of this series. Let me know in the comments what you think, or if you found some cool other things related to AlphaZero. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This episode is about a really nice new paper on pose estimation. Pose estimation means that we have an image or video of a human as an input, and the output should be this skeleton that you see here that shows what the current position of this person is. Sounds alright, but what are the applications of this, really? Well, it has a huge swath of applications. For instance, many of you often hear about motion capture for video games and animation movies, but it is also used in medical applications for finding abnormalities in a patient's posture, animal tracking, understanding sign language, pedestrian detection for self-driving cars, and much, much more. So, if we can do something like this in real time, that's hugely beneficial for many, many applications. However, this is a very challenging task, because humans have a large variety of appearances, images come in all kinds of possible viewpoints, and as a result, the algorithm has to deal with occlusions as well. This is particularly hard. Have a look here. In these two cases, we don't see the left elbow, so it has to be inferred from seeing the remainder of the body. We have the reference solution on the right, and as you see here, this new method is significantly closer to it than any of the previous works. Quite remarkable. The main idea in this paper is that it works out the poses both in 2D and 3D, and contains a neural network that can convert in both directions between these representations, while retaining the consistencies between them. First, the technique comes up with an initial guess, and follows up by using these pose transformer networks to further refine this initial guess. This makes all the difference, and not only does it lead to high-quality results, but it also takes way less time than previous algorithms. We can expect to obtain a predicted pose in about 51 milliseconds, which is almost 20 frames per second. This is close to real time, and is more than enough for many of the applications we've talked about earlier. In the age of rapidly improving hardware, these are already fantastic results, both in terms of quality and performance, and not only the hardware, but the papers are also improving at a remarkable pace. What a time to be alive. The paper contains an exhaustive evaluation section, where it is measured against a variety of high-quality solutions. I recommend that you have a look in the video description. I hope nobody is going to install a system in my lab that starts beeping every time I slouch a little, but I am really looking forward to benefiting from these other applications. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Whenever we build a website, a video game, or do any sort of photography and image manipulation, we often encounter the problems of image downscaling, decolorization, and HDR tone mapping. This work offers us one technique that can do all three of these really well. But first, before we proceed, why are we talking about downscaling? We are in the age of AI, where a computer program can beat the best players in chess and Go, so why talk about such a trivial challenge? Well, have a look here. Imagine that we have this high-fidelity input image, and due to file size constraints, we have to produce a smaller version of it. If we do it naively, this is what it looks like. Not great, right? To do a better job at this, our goal would be that the size of the image is reduced, but while still retaining the intricate details of this image. Here are two classical downscaling techniques. Better, but the texture of the skin is almost completely lost. Have a look at this. This is what this learning-based technique came up with. Really good, right? It can also perform decolorization. Again, a problem that sounds trivial for the unassuming scholar, but when taking a closer look, we notice that there are many different ways of doing this, and somehow we seek a decolorized image that still relates to the original as faithfully as possible. Here you see the previous methods, which are not bad at all, but this new technique is great at retaining the contrast between the flower and its green leaves. At this point, it is clear that deciding which output is the best is highly subjective. We'll get back to that in a moment. It is also capable of doing HDR tone mapping. This is something that we do when we capture an image with a device that supports a wide dynamic range, in other words, a wide range of colors, and we wish to display it on our monitor, which has a more limited dynamic range. And again, clearly, there are many ways to do that. Welcome to the wondrous world of tone mapping. Note that there are hundreds upon hundreds of algorithms to perform these operations in computer graphics research. And also note that these are very complex algorithms that took decades for smart researchers to come up with. So the seasoned Fellow Scholar shall immediately ask: why talk about this work at all? What's so interesting about it? The goal here is to create a slightly more general learning-based method that can do a great job at not one, but all three of these problems. But how great exactly? And how do we decide how good these images are? To answer both of these questions at the same time: if you've been watching this series for a while, you know what's coming. The authors created a user study, which shows that for all three of these tasks, according to the users, the new method smokes the competition. It is not only more general, but also better than most of the published techniques. For instance, Reinhard's amazing tone mapper has been an industry standard for decades now. And look, almost 75% of the people prefer this new method over that. What required super smart researchers before can now be done with a learning algorithm. Unreal. What a time to be alive.
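As a point of reference for what a classical tone mapper does, here is a minimal sketch of the global Reinhard operator mentioned above. This is the textbook formula, where luminance is compressed as L / (1 + L) after scaling by a key value; it is not the learning-based method from this paper, and the key value of 0.18 is just the conventional default.

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18, eps=1e-6):
    """Global Reinhard tone mapping: scale luminance to a target key,
    then compress it with L / (1 + L) into displayable range."""
    # Per-pixel luminance (Rec. 709 weights).
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + eps)))  # log-average luminance
    scaled = key * lum / log_avg
    mapped = scaled / (1.0 + scaled)
    # Rescale the color channels by the luminance ratio.
    return hdr * (mapped / (lum + eps))[..., None]
```

Even this simple global operator involves several perceptually motivated choices, which hints at why a single learned technique covering tone mapping, downscaling, and decolorization at once is noteworthy.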
A key idea for this algorithm is that this convolutional neural network that you see on the left is able to perform all three of these operations at the same time, and to do so, it is instructed by another neural network to do this in a way that preserves the visual integrity of the input images. Make sure to have a look at the paper for more details on how this perceptual loss function is defined. And if you wish to help us tell these amazing stories to even more people, please consider supporting us on Patreon. Your unwavering support on Patreon is the reason why this show can exist, and you can also pick up cool perks there, like watching these videos in early access, deciding the order of the next few episodes, or even getting your name showcased in the video description as a key supporter. You can find us at patreon.com slash 2 minute papers, or as always, just click the link in the video description. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. This paper is about creating high-quality physics simulations, and is, in my opinion, one of the gems very few people know about. In these physical simulations, we have objects that undergo a lot of tormenting. For instance, they have to endure all kinds of deformations, rotations, and, of course, being pushed around. A subset of these simulation techniques requires us to be able to look at these deformations and forget about anything they do other than the rotational part. Don't push it, don't squish it, just take the rotational part. Here, the full deformation transform is shown in red, and the extracted rotational part is shown by the green indicators. This problem is not particularly hard and has been studied for decades, so we have excellent solutions for it. For instance, techniques that we refer to as polar decomposition, singular value decomposition, and more; a tiny sketch of this classical rotation extraction follows at the end of this episode. By the way, in our earlier project together with the Activision Blizzard company, we also used the singular value decomposition to compute the scattering of light within our skin and other translucent materials. I've put a link in the video description, make sure to have a look. Okay, so if a bunch of techniques already exist to perform this, why do we need to invent anything here? Why make a video about something that was solved many decades ago? Well, here's why. We don't have anything yet that is, criterion 1, robust, which means that it works perfectly all the time. Even a slight inaccuracy is going to make an object implode in our simulations, so we had better get something that is robust. And since these physical simulations are typically implemented on the graphics card, criterion 2, we need something that is well suited for that and is as simple as possible. It turns out none of the existing techniques tick both of these boxes. If you start reading the paper, you will see a derivation of this new solution, and a mathematical proof that it is true and works all the time. And then, as an application, it shows fun physical simulations that utilize this technique. You can see here that these simulations are stable, no objects are imploding, although this extremely drunk dragon is showing a formidable attempt at doing that. Ouch! All the contortions and movements are modeled really well over a long time frame, and the original shape of the dragon can be recovered without any significant numerical errors. Finally, it also compares the source code for a previous method and the new method. As you see, there is a vast difference in terms of complexity that favors the new method. It is short, does not involve a lot of branching decisions, and is therefore an excellent candidate to run on state-of-the-art graphics cards. What I really like in this paper is that it does not present something and claim that, well, this seems to work. It first starts out with a crystal-clear problem statement that is impossible to misunderstand. Then, the first part of the paper is pure mathematics and proves the validity of the new technique, which is then dropped into a physical simulation, showing that it is indeed what we were looking for. And finally, a super simple piece of source code is provided so anyone can use it almost immediately. This is one of the purest computer graphics papers out there I've seen in a while. Make sure to have a look in the video description. Thanks for watching and for your generous support, and I'll see you next time.
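As promised above, here is a tiny sketch of the classical SVD-based rotation extraction, the standard baseline that this paper improves upon in terms of robustness and GPU friendliness; it is not the paper's new method itself.

```python
import numpy as np

def extract_rotation(F):
    """Polar decomposition via SVD: F = R S, where R is the closest
    rotation to the deformation gradient F and S is symmetric."""
    U, sigma, Vt = np.linalg.svd(F)
    R = U @ Vt
    if np.linalg.det(R) < 0:  # guard against picking up a reflection
        U[:, -1] *= -1.0
        R = U @ Vt
    return R

# A deformation that rotates by 30 degrees and stretches along x:
theta = np.radians(30.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
F = rot @ np.diag([2.0, 1.0])
print(extract_rotation(F))  # recovers the 30-degree rotation, stretch discarded
```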
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. This work is about OpenAI's technique that teaches a robot arm to dexterously manipulate a block to a target state. And in this project, they did one of my favorite things, which is first training an AI within a simulation and then deploying it into the real world. And in the best-case scenario, this knowledge from the simulation will actually generalize to the real world. However, while we are in the simulation, we can break free from the limitations of worldly things such as hardware, movement speed, or even time itself. So how is that possible? The number of experiments we can run in a simulation is bounded not by our time, which is scarce, but by how powerful our hardware is, which is abundant, as it is accelerating at a nearly exponential pace. And this is the reason why OpenAI's and DeepMind's AIs were able to train for 200 years' worth of games before first playing a human pro player. This sounds great, but the simulation is always cruder than the real world, so how do we know for sure that we created something that will indeed be useful in the real world, and not just in the simulation? Let's try an analogy. Think of the machine as a student, and the simulation would be its textbook that it learns from. If the textbook contains only a few trivial problems to learn from, when the day of the exam comes, if the exam is any good, the student will fail. The exam is the equivalent of deploying the machine into the real world, and apparently, the real world is a damn good exam. So how can we prepare a student to do well on this exam? Well, we have to provide them with a textbook that contains not only a lot of problems, but a diverse set of challenges as well. This is what machine learning researchers call domain randomization; you will find a tiny sketch of this idea below. This means that we teach an AI program in different virtual worlds, and in each one of them, we change parameters like how fast the hand is, what color and weight the cube is, and more. This is a proper textbook, which means that after this kind of training, this AI can deal with new and unexpected situations. The knowledge that it has obtained is so general that we can change even the geometry of the target object, and the machine will still be able to manipulate it correctly. Outstanding. To implement this idea, scientists at OpenAI trained not one agent, but a selection of agents in these randomized environments. The first main component of this system is a pose estimator. This module looks at the cube from three angles and predicts the position and orientation of the block, and is implemented through a convolutional neural network. The advantage of this is that we can generate a near-infinite amount of training data ourselves. You can see here that when the AI looks at real images, it is only a few degrees worse than in the simulation when estimating angles, which is the mark of an excellent textbook. I would not be surprised if this accuracy exceeded the capabilities of an ordinary human, given that it can perform this many times within a second. Then, the next part is choosing what the next action should be. Of course, we seek to rotate this cube in a way that brings us closer to our objective. This is done by a reinforcement learning technique that uses similar modules as OpenAI's previous algorithm that learned to play Dota 2 really well. Another testament to how general these learning algorithms are.
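Here is the tiny sketch of domain randomization promised above. All of the parameter names and ranges are made up for illustration, and `make_env` and the agent interface are hypothetical; the real OpenAI system randomizes many more aspects of the simulation, but the principle is the same: every training episode gets a freshly perturbed virtual world.

```python
import random

def randomized_sim_params():
    """Sample a fresh 'virtual world' for each training episode.
    All ranges below are illustrative, not the paper's actual values."""
    return {
        "cube_mass_kg":     random.uniform(0.03, 0.3),
        "cube_size_m":      random.uniform(0.045, 0.065),
        "friction":         random.uniform(0.5, 1.5),
        "actuator_gain":    random.uniform(0.8, 1.2),   # how fast the hand is
        "camera_hue_shift": random.uniform(-0.1, 0.1),  # cube color variation
        "action_delay_ms":  random.uniform(0.0, 40.0),
    }

def train(agent, make_env, episodes):
    for _ in range(episodes):
        env = make_env(randomized_sim_params())  # hypothetical simulator factory
        agent.run_episode(env)  # the policy must cope with every variation
```

Because no single setting of these parameters is ever "the" world, the only way for the policy to succeed is to learn behavior that works across all of them, which is exactly what transfers to the messy real world.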
I also recommend checking out OpenAI's video on this work in the video description. Now, I always read in the comments here on YouTube that many of you are longing for more. Five-minute papers, ten-minute papers, and two-hour papers were among the requests I heard from you before. And of course, I am also longing for more, as I have quite a few questions that keep me up at night. Is it possible for us to ever come up with a superintelligent AI? If yes, how? What types of these AIs could exist? Should we be worried? If you are also looking for some answers, we are now trying out a sponsorship with Audible, and I have a great recommendation for you, which is none other than the book Superintelligence by Nick Bostrom. It addresses all of these questions really well, and if you sign up under the link below in the video description, you will get this book free of charge. Whenever you have to do some work around the house or commute to school or work, just pop in a pair of headphones and listen for free. Some more AI for you while doing something tedious. That's as good as it gets. If you feel that the start of the book is a little slow for you, make sure to jump to the chapter by the name "Is the default outcome doom?". But buckle up, because there are going to be fireworks from that point in the book. We thank Audible for supporting this video and send a big thank you to all of you who sign up and support the series. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. I think this is one of the more important things that happened in AI research lately. In the last few years, we have seen DeepMind defeat the best Go players in the world, and after OpenAI's venture in the game of Dota 2, it's time for DeepMind to shine again, as they take on Starcraft 2, a real-time strategy game. The depth and the amount of skill required to play this game is simply astounding. The search space of Starcraft 2 is so vast that it exceeds both chess and even Go by a significant margin. Also, it is a game that requires a great deal of mechanical skill and split-second decision-making, and we have imperfect information, as we only see what our units can see. A nightmare situation for any AI. DeepMind invited a beloved pro player, TLO, to play a few games against their new Starcraft 2 AI that goes by the name AlphaStar. Note that TLO is a professional player who is easily in the top 1% of players or even better, mid-grandmaster for those who play Starcraft 2. This video is about what happened during this event, and later I will make another video that describes the algorithm that was used to create this AI. The paper is still under review, so it will take a little time until I can get my hands on it. At the end of this video, you will also see the inner workings of this AI. Let's dive in. This is an AI that looked at a few games played by human players, and after that initial step, it learned by playing itself for about 200 years. In our next episode, you will see how this is even possible, so I hope you are subscribed to the series. You see here that the AI controls the blue units, and TLO, the human player, plays red. Right at the start of the first game, the AI did something interesting. In fact, what is interesting is what it didn't do. It started to create new buildings next to its nexus, instead of building a wall-off that you can see here. Using a wall-off is considered standard practice in most games, and the AI used these buildings not to wall off the entrance, but to shield away the workers from possible attacks. Now note that this is not unheard of, but this is also not a strategy that is widely played today, and it is considered non-standard. It also built more worker units than what is universally accepted as standard; we found out later that this was partly done in anticipation of losing a few of them early on. Very cool. Then, almost before we even knew what happened, it won the first game a little more than 7 minutes in, which is very quick, noting that in-game time is a little faster than real time. The thought process of TLO at this point is: that's interesting, but okay, well, the AI plays aggressively and managed to pull this one off. No big deal. We will fire up the second game, and in the meantime, a few interesting details. The goal when setting up the details of this algorithm was that the number of actions performed by the AI should roughly match a human player, and hopefully it still plays as well or better. It has to make meaningful strategic decisions. You see here that this checks out for the average actions every minute, but if you look here, you see around the tail end that there are times when it performs more actions than humans, and this may enable play styles that are not accessible to human players. However, note that many times it also does miraculous things with very few actions. Now, what about another important detail? Reaction time.
The reaction time of the AI is set to 350 milliseconds, which is quite slow. That's excellent news, because this is usually a common angle of criticism for game AIs. The AI also sees the whole map at once, but it is not given more information than what its units can see. This is perhaps the most commonly misunderstood detail, so it is worth noting. So, in other words, it sees exactly what a human would see if they moved the camera around very quickly, but it doesn't have to move the camera, which adds additional actions and cognitive load for the human, so one might say that the AI has an edge here. The AI plays these games independently. What's more, each game was played by a different AI, which also means that they do not memorize what happened in the last game, like a human would. Early in the next game, we can see the utility of the wall-off in action, which is able to completely prevent the AI's early attack. Later that game, the AI used disruptors, a unit which, if controlled with such a level of expertise, can decimate the army of the opponent with area damage by killing multiple units at once. It has done an outstanding job picking away at the army of TLO. Then, after getting a significant advantage, AlphaStar loses it with a few sloppy plays and by deciding to engage aggressively while standing in tight choke points. You can see that this is not such a great idea. This was quite surprising, as this is considered to be StarCraft 101 knowledge right there. During the remainder of the match, the commentators mentioned that they play and watch games all the time, and the AI came up with an army composition that they have never seen during a professional match. And the AI won this one too. After this game, it became clear that these agents can play any style in the game, which is terrifying. Here you can see an alternative visualization that shows a little more of the inner workings of the neural network. We can see what information it gets from the game, the visualization of neurons that get activated within the network, what locations and units are considered for the next actions, and whether the AI predicts itself as the winner or the loser of the game. If you look carefully, you will also see the moment when the agent becomes certain that it will win this game. I could look at this all day long, and if you feel the same way, make sure to visit the video description. I have a link to the source video for you. The final result against TLO was 5-0. I actually lost everything, all the five matches. So that's something. And he mentioned that AlphaStar played very much like a human does and almost always managed to outmaneuver him. However, TLO also mentioned that he is confident that upon playing more training matches against these agents, he would be able to defeat the AI. I hope he will be given a chance to do that. This AI seems strong, but still beatable. I would also note that many of you would probably expect the later versions of AlphaStar to be way better than this one. The good news is that the story continues, and we'll see whether that's true. So at this point, the DeepMind scientists said, maybe we could try to be a bit more ambitious, and asked: can you bring us someone better? And in the meantime, they pressed that training button on the AI again. In comes MaNa, a top-tier pro player, one of the best pro Protoss players in the world.
This was a nerve-wracking moment for the DeepMind scientists as well, because their agents played against each other, so they only knew the AI's win rate against a different AI. But they didn't know how it would compete against a top pro player. It may still have holes in its strategy. Who knows what would happen. Understandably, they had very little confidence in winning this one. What they didn't expect is that the new AI was not slightly improved or somewhat improved. No, no, no, no. This new AI was next level. This set of improved agents, among many other skills, had incredibly crisp micromanagement of each individual unit. In the first game, we've seen it pulling back injured units, but still letting them attack from afar masterfully, leading to an early win for the AI against MaNa in the first game. He and the commentators were equally shocked by how well the agent played. And I will add that I remember watching many games from a now-inactive player by the name MarineKing a few years ago. And I vividly remember that he played some of his games so well, the commentators said that there is no better way to put it, he played like a god. I am almost afraid to say that this micromanagement was even more crisp than that. This AI plays phenomenal games. In later matches, the AI did things that seemed like blunders, like attacking on ramps and standing in choke points, or using unfavorable unit compositions and refusing to change them. And get this: it still won all of those games, 5-0, against a top pro player. Let that sink in. The competition was closed by a match where the AI was asked to also do the camera management. The agent was still very competent, but somewhat weaker, and as a result lost this game, hence the "(or 1)" part in the title. My impression is that it was asked to do something that it was not designed for, and I expect a future version to be able to handle this use case as well. I will also commend MaNa for his solid game plan for this game, and also, huge respect to DeepMind for their sportsmanship. Interestingly, in this match, MaNa also started the worker oversaturation strategy that I mentioned earlier. This he learned from AlphaStar and used it in his winning game. Isn't that amazing? DeepMind also offered a Reddit AMA where anyone could ask them questions, making sure to clear up any confusion. For instance, the actions per minute part has been addressed. I've included a link to that for you in the video description. To go from a turn-based, perfect-information game like Go to a real-time strategy game of imperfect information in about a year sounds like science fiction to me. And yet, here it is. Also, note that DeepMind's goal is not to create a godlike StarCraft 2 AI. They want to solve intelligence, not StarCraft 2, and they use this game as a vehicle to demonstrate its long-term decision-making capabilities against human players. One more important thing to emphasize is that the building blocks of AlphaStar are meant to be reasonably general AI algorithms, which means that parts of this AI can be reused for other things. For instance, Demis Hassabis mentioned weather prediction and climate modeling as examples. If you take only one thought from this video, let it be this one. I urge you to watch all the matches, because what you are witnessing may very well be history in the making.
I put a link to the whole event in the video description, plus plenty more materials, including other people's analysis, MaNa's personal experience of the event, his breakdown of his games, and what was going through his head during the event. I highly recommend checking out his fifth game, but really, go through them all. It's a ton of fun. I made sure to include a more skeptical analysis of the games as well to give you a balanced portfolio of insights. Also, huge respect to DeepMind and the players, who have practiced their craft for many, many years and played really well under immense pressure. Thank you all for this delightful event. It really made my day. And the ultimate question is, how long did it take to train these agents? Two weeks. Wow. And what's more, after the training step, the AI can be deployed on an inexpensive consumer desktop machine. And this is only the first version. This is just a taste, and it would be hard to overstate how big of a milestone this is. And now, scientists at DeepMind have sufficient data to calculate the amount of resources they need to spend to train the next, even more improved agents. I am confident that they will also take into consideration the feedback from the StarCraft community when creating this next version. What a time to be alive. What do you think about all this? Any predictions? Is this harder than Dota 2? Let me know in the comments section below. And remember, we humans build up new strategies by learning from each other, and of course, the AI, as you have seen here, doesn't care about any of that. It doesn't need intuition and can come up with unusual strategies. The difference now is that these strategies work against some of the best human players. Now it's time for us to finally start learning from an AI. GG. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. If we are to write a sophisticated light simulation program and we write a list of features that we really wish to have, we should definitely keep an eye on defocus effects. This is what it looks like, and in order to do that, our simulation program has to take into consideration the geometry and thickness of the lenses within our virtual camera, and even though it looks absolutely amazing, it is very costly to simulate that properly. This particular technique attempts to do this in real time and for specialized display types, typically ones that are found in head-mounted displays for virtual reality applications. So here we go. Due to popular request, a little VR in Two Minute Papers. In virtual reality, defocus effects are especially important because they mimic how the human visual system works. Only a tiny region that we are focusing on looks sharp, and everything else should be blurry, but not any kind of blurry. It has to look physically plausible. If we can pull this off just right, we'll get a great and immersive VR experience. The heart of this problem is looking at a 2D image and being able to estimate how far away different objects are from the camera lens. This is a task that is relatively easy for humans, because we have an intuitive understanding of depth and geometry, but of course, this is no easy task for a machine. To accomplish this, a convolutional neural network is used here, and our seasoned Fellow Scholars know that this means that we need a ton of training data. The input should be a bunch of images and their corresponding depth maps for the neural network to learn from. The authors implemented this with a random scene generator, which creates a bunch of these crazy scenes with a lot of occlusions and computes, via simulation, the appropriate depth map for them. On the right, you see these depth maps, or in other words, images that describe to the computer how far away these objects are. The incredible thing is that the neural network was able to learn the concept of occlusions and was able to create super high quality defocus effects. Not only that, but this technique can also be reconfigured to fit different use cases. If we are okay with spending up to 50 milliseconds to render an image, which is 20 frames per second, we can get super high quality images, or, if we only have a budget of 5 milliseconds per image, which is 200 frames per second, we can do that too, and the quality of the outputs degrades just a tiny bit. While we are talking about image quality, let's have a closer look at the paper, where we see a ton of comparisons against previous works and, of course, against the baseline ground truth knowledge. You see two metrics here: PSNR, which is the peak signal-to-noise ratio, and SSIM, the structural similarity metric. In this case, both are used to measure how close the output of these techniques is to the ground truth footage. Both are subject to maximization. For instance, here you see that the second best technique has a peak signal-to-noise ratio of around 40, and this new method scores 45. Well, some may think that's just around a 12 percent difference, right? No. Note that PSNR works on a logarithmic scale, which means that even a tiny difference in numbers translates to a huge difference in terms of visuals. You can see in the close-ups that the output of this new method is close to indistinguishable from the ground truth.
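A quick back-of-the-envelope calculation makes this logarithmic behavior concrete. PSNR is defined from the mean squared error, where MAX_I is the largest possible pixel value, for instance 255 for 8-bit images:

    \mathrm{PSNR} = 10 \log_{10}\!\left(\frac{\mathrm{MAX}_I^2}{\mathrm{MSE}}\right), \qquad \frac{\mathrm{MSE}_{40\,\mathrm{dB}}}{\mathrm{MSE}_{45\,\mathrm{dB}}} = 10^{(45-40)/10} \approx 3.16.

So that seemingly modest 5-decibel jump means the mean squared error of the new method is more than three times lower than that of the runner-up.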
A neural network that successfully learned the concept of occlusions and depth by looking at random scenes. Bravo! As virtual reality applications are on the rise these days, this technique will be useful to provide a more immersive experience for the users. And to make sure that this method sees more widespread use, the authors also made the source code and the training datasets available for everyone, free of charge, so make sure to have a look at that and run your own experiments if you're interested. I'll be doing that in the meantime. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Before we start, I will tell you right away to hold on to your papers. When I first saw the results, I didn't do that and almost fell out of the chair. Scientists at NVIDIA published an amazing work not so long ago that was able to dream up high-resolution images of imaginary celebrities. It was a progressive technique, which means that it started out with a low-fidelity image and kept refining it, and over time, we found ourselves with high-quality images of people that don't exist. We also discussed in the previous episode that the algorithm is able to learn the properties and features of a human face and come up with truly novel human beings. There is true learning happening here, not just copying the training set for these neural networks. This is an absolutely stellar research work, and for a moment, let's imagine that we are the art directors of a movie or a computer game where we require that the algorithm synthesizes more human faces for us. Whenever I worked with artists in the industry, I've learned that what artists often look for beyond realism is control. Artists seek to conjure up new worlds, and those new worlds require consistency and artistic direction to suspend our disbelief. So here's a new piece of work from NVIDIA with some killer new features to address this. Killer feature number one: it can combine different aspects of these images. Let's have a look at an example over here. The images above are the inputs, and we can lock in several aspects of these images, for instance, gender, age, pose and more. Then we take a different image, which will be the other source image, and the output is these two images fused together, almost like style transfer or feature transfer for human faces. As a result, we are able to generate high-fidelity images of human faces that are incredibly lifelike, and of course, none of these faces are real. How cool is that? Absolutely amazing. Killer feature number two: we can also vary these parameters one by one, and this way, we have more fine-grained artistic control over the outputs. Killer feature number three: it can also perform interpolation, which means that we have desirable images A and B, and this would create intermediate images between them. As always, the big problem with this is that each of the intermediate images has to make sense and be realistic. And just look at this. It can morph one gender into the other, blend hairstyles and colors, and in the meantime, the facial gestures remain crisp and realistic. I am out of words. This is absolutely incredible. It also works on other datasets, for instance, cars, bedrooms and, you guessed it right, cats. Now, interestingly, it also varies the background behind the characters, which is a hallmark of latent-space-based techniques. I wonder if and how this will be solved over time. We also published a paper not so long ago that was about using learning algorithms to synthesize not human faces, but photorealistic materials. We introduced a neural renderer that was able to perform a specialized version of a light transport simulation in real time as well. However, in the paper, we noted that the resolution of the output images is limited by the onboard video memory on the graphics card that is being used, and should improve over time as new graphics cards are developed with more memory.
And get this, a few days ago, the folks at NVIDIA reached out and said that they just released an amazing new graphics card, the Titan RTX, which has a ton of onboard memory, and that they would be happy to send over one of those. Now we can improve our work further. A huge thank you to them for being so thoughtful, and therefore, this episode has been kindly sponsored by NVIDIA. Thanks for watching and for your generous support, and I'll see you next time.
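For the programmers among you Fellow Scholars, here is a tiny sketch of the style mixing idea from killer feature number one. Note that this is a toy stand-in and not NVIDIA's actual architecture: the real generator feeds a learned style vector into every resolution level, and mixing means taking the coarse-level styles, think pose and face shape, from source A, and the fine-level styles, think hair and skin texture, from source B.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for a style-based generator: each "resolution level"
    # is a random linear layer modulated by its own style vector.
    LEVELS, STYLE_DIM, IMG_DIM = 6, 8, 16
    layers = [rng.standard_normal((IMG_DIM, STYLE_DIM)) for _ in range(LEVELS)]

    def generate(styles):
        # styles: one style vector per level; the output accumulates
        # contributions from coarse (early) to fine (late) levels.
        img = np.zeros(IMG_DIM)
        for layer, s in zip(layers, styles):
            img += layer @ s
        return img

    # Two latent "people": a full set of per-level styles for each.
    styles_a = [rng.standard_normal(STYLE_DIM) for _ in range(LEVELS)]
    styles_b = [rng.standard_normal(STYLE_DIM) for _ in range(LEVELS)]

    # Style mixing: coarse levels (0-2) from A, fine levels (3-5) from B.
    mixed = styles_a[:3] + styles_b[3:]
    print(generate(mixed)[:4].round(2))  # A's coarse structure, B's fine details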
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we frequently talk about generative adversarial networks, or GANs in short. This means a pair of neural networks that battle each other over time to master a task, for instance, to generate realistic-looking images from a written description. Here you see NVIDIA's amazing work that was able to dream up high-resolution images of imaginary celebrities. In the next episode, we will talk some more about their newest work that does something like this and is even better at it, believe it or not. I hope you have subscribed to the channel to make sure not to miss out on that one. And for now, while we marvel at these outstanding results, I will quickly tell you about overfitting and what it has to do with images of celebrities. When we train a neural network, we wish to make sure that it understands the concepts we are trying to teach it. Typically, we feed it a database of labeled images, where the labels mean that this one depicts a dog, and this one is not a dog but a cat. After the training step took place, in the ideal case, it will be able to build an understanding of these images, so that when we show it new, previously unseen images, it will be able to correctly guess which animals they depict. However, in many cases, we start training the neural network, and during the training, it gives us wonderful results, and it gets the animals right every single time. But whenever it sees new, previously unseen images, it can't tell a dog from a cat at all. This peculiar case is what we call overfitting, and this is the bane of machine learning research. Overfitting is like the kind of student we all encountered at school who is always very good at memorizing the textbook, but can't solve even the simplest new problems on the exam. This is not learning, this is memorization. Overfitting means that a neural network does not learn the concept of dogs or cats; it just tries to memorize this database of images and is able to regurgitate it for us, but this knowledge cannot generalize to new images. That's not good. I want intelligence, not a copying machine. So at this point, it is probably clearer what images of celebrities have to do with overfitting. So how do we know that this algorithm doesn't just memorize the celebrity image dataset it was given and can really generate new, imaginary people? Is it the good kind of student, or the lousy student? Technique number one: let's not just dream up images of new celebrities, but also visualize images from the training data that are similar to each new image. If they are too similar, we have an overfitting problem. Let's have a look. Now it is easy to see that this is proper intelligence and not a copying machine, because it was clearly able to learn the facial features of these people and combine them in novel ways. This is what scientists at NVIDIA did in their paper, and they are to be commended for that. Technique number two: well, just take a bunch of humans and let them decide whether these images differ from the training set and whether they are realistic. This kind of works, but of course, it costs quite a bit of money and labor, and we end up with something subjective. We had better not compare the quality of research papers based on that if we can avoid it. And get this, we can actually avoid it by using something called the Inception Score. Instead of using humans, this score uses a neural network to have a look at these images and measure the quality and the diversity of the results.
As long as two images produce similar neural activations within this neural network, they will be deemed similar. Finally, this score is an objective way of measuring progress within this field, and it is, of course, subject to maximization. So now, you of course wish to know what the state of the art is today. For reference, a set of real images has an Inception Score of 233, and the best works that produced synthetic images just a few years ago had a score of around 50. To the best of my knowledge, as of the publishing of this video, the highest Inception Score for an AI is close to 166, so we've come a long, long way. You can see some of these images here. Truly exciting. What a time to be alive. The disadvantages of the method are that, one, because the diversity of the outputs also has to be measured, it requires many thousands of images. This is likely more of an issue with the problem definition itself and not this method, and also, since this means that computers and not real people have to do the work, we can give this one a pass. Disadvantage number two: I will include this paper in the video description for you, which basically describes that there are cases where it is possible to get the network to think an image is of higher quality than another one, even if it clearly isn't. Now you see that we have pretty ingenious techniques to measure the quality of image generator AIs, and of course, this area of research is also subject to improvement, and I'll be here to tell you about it. Thanks for watching and for your generous support, and I'll see you next time.
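For the more curious Fellow Scholars, the Inception Score has a compact definition. Here, p(y|x) is the label distribution the Inception network assigns to a generated image x, and p(y) is that distribution averaged over many generated images; confident per-image predictions reward quality, and a broad average distribution rewards diversity:

    \mathrm{IS} = \exp\!\Big(\mathbb{E}_{x \sim p_g}\, D_{\mathrm{KL}}\big(p(y \mid x)\,\|\,p(y)\big)\Big).

Since a perfectly confident and perfectly diverse generator over N classes would score exactly N, a score of 233 for real images is consistent with an ImageNet-sized label set of 1000 classes.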
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Binaural, or 2.5D audio, means a sound recording that provides the listener with an amazing, 3D-ish sound sensation. It produces a sound that feels highly realistic when listened to through headphones, and therefore, using a pair is highly recommended for this episode. It sounds way more immersive than regular mono or even stereo audio signals, but it also requires more expertise to produce, and is therefore quite scarce on the internet. Let's listen to the difference together. We have not only heard sound samples here, but you could also see the accompanying video content, which reveals the position of the players and the composition of the scene in which the recording is made. This sounds like a perfect fit for an AI to take a piece of mono audio and use this additional information to convert it to make it sound binaural. This project is exactly about that, where a deep convolutional neural network is used to look at both the video and the single-channel audio content in our footage, and then predict what it would have sounded like, had it been recorded as a binaural signal. The fact that we can use the visual content as well as the audio with this neural network also enables us to separate the sound of an instrument within the mix. Let's listen. To validate the results, the authors used a quantitative, mathematical way of comparing their results to the ground truth, and not only that, but they also carried out two user studies as well. In the first one, the ground truth was shown to the users, and they were asked to judge which of the two techniques fared better. In this study, this new method performed better than previous methods, and in the second setup, users were asked to name the directions they hear the different instrument sounds coming from. In this case, the new method outperformed the previous techniques by a significant margin, and if we keep progressing like this, we may be at most a couple of papers away from 2.5D audio synthesis that sounds indistinguishable from the real deal. Looking forward to a future where we can enjoy all kinds of video content with this kind of immersion. Thanks for watching and for your generous support, and I'll see you next time.
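One more detail for the technically minded: as I understand it, the network does not predict the two output channels directly; it predicts the difference between them from the mono mix and the video, and the left and right channels are then reassembled from the mono and difference signals. Here is a minimal sketch of that reassembly step, where predict_difference is just a placeholder for the trained network:

    import numpy as np

    def predict_difference(mono, video_features):
        # Placeholder for the convolutional network that looks at the video
        # and guesses the left/right difference signal.
        return 0.1 * mono  # dummy output with the right shape

    mono = np.random.randn(44100)  # one second of mono audio at 44.1 kHz
    video_features = None          # stand-in for the visual input

    diff = predict_difference(mono, video_features)
    left = (mono + diff) / 2   # left  = (mono + difference) / 2
    right = (mono - diff) / 2  # right = (mono - difference) / 2
    # Sanity check: summing the two channels recovers the mono mix.
    assert np.allclose(left + right, mono)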
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is going to be a weird, non-traditional episode, not the usual Two Minute Papers. I hope you'll enjoy it, and if you've finished the video, please let me know in the comments what you think about it. So let's start. Life lessons I learned from AI research. Number one: you need an objective. Before we start training a neural network to perform, for instance, image classification, we need a bunch of training data. This, we can feed to the neural network, telling it that this image depicts a cat, and this one is not a cat, but an ostrich. We also need to specify a loss function. This is super important, because this loss function is used to make sure that the neural network trains itself in a way that its predictions will be similar to the training data it is being fed. It is also referred to as an objective, or objective function, to indicate that we know precisely what we are looking for, and that's what the neural network should do. This is a way to measure how the neural network is progressing, and without this, it is useless. Similarly, in another learning problem, we can specify an objective for this ant, which in this case is to be able to traverse as far from a starting point as possible, and it reconfigures its body type and movement to be able to score high on our objective. And this leads us to the second lesson: a change in the objective changes the strategy required to achieve it. Look here: in a different problem definition, we can specify a different objective, for instance, a different terrain, and you see that if we wish to succeed here, we need a vastly different body type. Form follows function. And in this other case, the objective is to be able to traverse efficiently, but with minimal material use for the legs. The solution, again, changes according to the objective. New objectives require new strategies. Number three: if the objective was wrong, do not worry, and aim again. Have a look at AlphaGo. This is DeepMind's algorithm that was able to defeat some of the best players in the world in the game of Go. This was a highly non-trivial achievement, as the space of possible moves is so stupendously large that it is impossible to evaluate every move. Instead, it tries to aggressively focus on a smaller number of possible moves and tries to simulate the result of these moves. If the move leads to an improvement in our position, it is a good one. If not, it should be avoided. Sounds simple, right? Well, we have an objective, that's great, but initially, it has a really bad predictor, which means that it is really bad at judging which move is good and which one isn't. However, over time, it refines its predictor, and these estimations improve further and further. In the end, by only taking a brief look at the state of the game, it can predict with high confidence whether it is going to win or not. Initially, we have an objective, but how do we know whether it is a good one? Well, we try to get there, and then evaluate our position. We may find that we got nowhere with this. What most people do is abort the program. Quit the game. Give up. It's over. No, don't despair. It's not over. This is the early stage of teaching an AI, and this is the time when we can improve our predictor and pick our next objective more wisely. Over time, you'll find the ideas that don't work, and not only that, you'll find out why they don't work. Do not worry, and aim again. And this leads us to lesson number four.
Zoom out and evaluate. This is exactly what DeepMind's amazing deep Q-learning algorithm does, which took the world by storm as it was able to play Atari Breakout on a superhuman level, just by looking at the pixels of the game. This algorithm ran in two phases, where phase one was collecting experiences, and phase two was called experience replay. This is where the AI stops and reflects upon these experiences. Zooming out and evaluating is immensely important, because, after all, this is where the real learning happens. So, every now and then, zoom out and evaluate. And while we are here, I simply cannot resist adding two more lessons I learned from other scientific disciplines. So, lesson number five: if you find something that works, hold on to it. This is exactly what Metropolis Light Transport does, which is a light simulation algorithm that is able to create beautiful images, even for extremely difficult problems where it is challenging to find where the light is. However, when it finally finds something, it makes sure not to forget about it, and explores nearby light paths. It works like a charm for difficult light transport situations, and can create absolutely beautiful images for even the hardest virtual scenes. Seek the light, and hold on to it. And whenever you feel that you are still not making progress, think about the following. Lesson number six: as long as you keep moving, you'll keep progressing. Take a look at this random walk. A random walk is a succession of steps in completely random directions. This walk completely lacks direction, just like a drunkard trying to find home. However, get this: a mathematical theorem says that after n steps, the expected distance from where we started is proportional to the square root of n. This is huge. What this means is that, for instance, if we took four completely random steps, we are expected to be two units of distance away from where we started. That's progress. If we take a hundred steps, even then, we can expect to be around 10 units of distance away from the starting point. This concept works even if our predictor is completely haywire and we choose our objectives like a drunkard. Now I think that's a lesson worth sharing. To recap: you need an objective. It can be anything; as long as you keep moving, you'll progress. If you have achieved it and it ended up not being what you were looking for, don't stop. Zoom out and reflect. This will help you to improve your predictor, and you will be able to recalibrate and aim again at something more meaningful. Now, aim and find a new objective. When you have a new objective, your strategy needs to change to be able to achieve it. Finally, if you find something desirable, hold on to it and explore more in this direction. Seek the light. Of course, you don't have to live your life this way, but I think these are interesting, mathematically motivated lessons that are worth showing to you. After all, this series is not only meant to inform, but to inspire you to get out there and create. It always feels absolutely amazing getting these kind messages from you Fellow Scholars. Some of you said that the series has changed your life in a positive way. I am really out of words, and I'm honored to be able to make these videos for you Fellow Scholars. Let me know in the comments whether you enjoyed this episode, and please keep the kind messages coming; they really make my day. Thanks for watching, and for your generous support, and I'll see you next time.
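And for the skeptical Fellow Scholars, the square root law from lesson number six is easy to verify with a minimal simulation; each walk takes unit steps in uniformly random directions, and we average the final distance from the origin:

    import numpy as np

    rng = np.random.default_rng(0)

    def mean_distance(n_steps, n_walks=10000):
        # Every step is a unit move in a uniformly random 2D direction.
        angles = rng.uniform(0.0, 2.0 * np.pi, size=(n_walks, n_steps))
        x = np.cos(angles).sum(axis=1)
        y = np.sin(angles).sum(axis=1)
        return np.hypot(x, y).mean()

    for n in [4, 100, 400]:
        # The printed distances grow proportionally to the square root of n.
        print(n, round(mean_distance(n), 2))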
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is a collaboration between DeepMind and OpenAI on using human demonstrations to teach an AI to play games really well. The basis of this work is reinforcement learning, which is about choosing a set of actions in an environment to maximize a score. For some games, this score is typically provided by the game itself, but in more complex games, for instance, ones that require exploration, this score is not too useful to train an AI. In this project, the key idea is to use human demonstrations to teach an AI how to succeed. This means that we can sit down, play the game, show the footage to the AI, and hope that it learns something useful from it. Now, the most trivial implementation of this would be to imitate the footage too closely, or, in other words, simply redo what the human has done. That would be a trivial endeavor, and it is the most common way of misunderstanding what is happening here, so I will emphasize that this is not the case. Just imitating what the human player does would not be very useful, because, one, it puts too much burden on the humans, and that's not what we want, and two, the AI could not be significantly better than the human demonstrator, and that's also not what we want. In fact, if we have a look at the paper, the first figure shows us right away how badly a simple imitation program performs. That's not what this algorithm is doing. What it does instead is that it looks at the footage as the human plays the game and tries to guess what they were trying to accomplish. Then, we can tell a reinforcement learner that this is now our reward function, and it should train to become better at that. As you see here, it can play an exploration-heavy game, such as the Atari game H.E.R.O., and in the footage above, you see the rewards over time, the higher the better. This AI performs really well in this game, and it significantly outperforms reinforcement learner agents trained from scratch on Montezuma's Revenge as well, although it can still get stuck on a ladder. We discussed earlier a curious AI that was quickly getting bored by ladders and moved on to more exciting endeavors in the game. The performance of the new agent seems roughly equivalent to an agent trained from scratch in the game Pong, presumably because of the lack of exploration and the fact that it is very easy to understand how to score points in this game. But wait, in the previous episode, we just talked about an algorithm where we didn't even need to play, we could just sit in our favorite armchair and direct the algorithm. So, why play? Well, just providing feedback is clearly very convenient, but as we can only specify what we liked and what we didn't like, it is not very efficient. With the human demonstrations here, we can immediately show the AI what we are looking for, and since it is able to learn the principles, improve further, and eventually become better than the human demonstrator, this work provides a highly desirable alternative to already existing techniques. Loving it. If you have a look at the paper, you will also see how the authors incorporated a cool additional step into the pipeline, where we can add annotations to the training footage, so make sure to have a look. Also, if you feel that a bunch of these AI videos a month are worth a dollar, please consider supporting us at patreon.com slash TwoMinutePapers.
You can also pick up cool perks, like getting early access to all of these episodes, or getting your name immortalized in the video description. We also support cryptocurrencies and one-time payments; the links and additional information for all of these are available in the video description. With your support, we can make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This episode does not have the usual visual fireworks, but I really wanted to cover this paper, because it tells a story that is, I think, very important for all of us to hear. When creating a new AI to help us with a task, we have to somehow tell this AI what we consider to be a desirable solution. If everything goes well, it will find out the best way to accomplish it. This is easy when playing simpler video games, because we can just tell the algorithm to maximize the score seen in the game. For instance, the more bricks we hit in Atari Breakout, the closer we get to finishing the level. However, in real life, we don't have anyone giving us a score to tell us how close we are to our objective. What's even worse, sometimes we have to make decisions that seem bad at the time, but will serve us well in the future. Trying to save money or studying for a few years longer are typical life decisions that pay off in the long run, but may seem undesirable at the time. The opposite is also true. Ideas that may sound right at the time may immediately backfire. When in a car chase, don't ask the car AI to unload all unnecessary weight to go faster, or if you do, prepare to be promptly ejected from the car. So how can we possibly create an AI that somehow understands our intentions and acts in line with them? That's a challenging question, and it is often referred to as the agent alignment problem. The AI has to be aligned with our values. What can we do about this? Well, short of having a mind-reading device, we can maybe control the behavior of the AI through its reward system. DeepMind just published a paper on this topic, where they started their thought process from two assumptions. Assumption number one, quoting the authors: for many tasks we want to solve, evaluation of outcomes is easier than producing the correct behavior. In short, it is easier to yell at the TV than to become an athlete. Sounds reasonable, right? Note that from complexity theory, we know that this does not always hold, but it is indeed true for a large number of difficult problems. Assumption number two: user intentions can be learned with high accuracy. In other words, given enough data that somehow relates to our intentions, the AI should be able to learn them. Leaning on these two assumptions, we can change the basic formulation of reinforcement learning in the following way. Normally, we have an agent that chooses a set of actions in an environment to maximize a score. For instance, this could mean moving the paddle around to hit as many bricks as possible and finish the level. They extended this formulation in a way that the user can periodically provide feedback on how the score should be calculated. Now, the AI will try to maximize this new score, and we hope that this will be more in line with our intentions. Or, in our car chase example, we could modify our reward to make sure we remain in the car and do not get ejected. Perhaps the most remarkable property of this formulation is that it doesn't even require us to, for instance, play the game at all to demonstrate our intentions to the algorithm. The formulation follows our principles, and not our actions. We can just sit in our favorite armchair, bend the AI to our will by changing the reward function every now and then, and let the AI do the grueling work. This is like yelling at the TV, except that it actually works. Loving the idea.
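In pseudocode-flavored Python, the loop described above looks something like this little toy; note that this is my own schematic sketch of the idea and not DeepMind's implementation. The agent only ever sees a learned reward model, and the human periodically corrects that model by comparing outcomes:

    import numpy as np

    rng = np.random.default_rng(0)

    # The human's true intention: reach position 7. The agent is never told
    # this; it only ever sees the learned reward model below.
    true_target = 7.0

    # A one-parameter reward model: "good" means being close to reward_target.
    reward_target = 0.0

    def learned_reward(state):
        return -abs(state - reward_target)

    state = 0.0
    for step in range(200):
        # Phase 1: the agent greedily climbs the *learned* reward.
        candidates = state + rng.uniform(-1.0, 1.0, size=8)
        state = max(candidates, key=learned_reward)

        # Phase 2: periodic human feedback. The human compares two outcomes,
        # and the reward model is nudged toward the preferred one.
        if step % 20 == 0:
            a, b = rng.uniform(0.0, 10.0, size=2)
            preferred = a if abs(a - true_target) < abs(b - true_target) else b
            reward_target += 0.5 * (preferred - reward_target)

    print(round(float(state), 2))  # the agent ends up near the human's goal

The agent never maximizes the true objective directly, and yet, through a handful of comparisons, it ends up doing roughly what the human intended.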
If you have a look at the paper, you will see a ton more details on how to do this efficiently, and a case study with a few Atari games. Also, since this has a lot of implications pertaining to AI safety and how to create aligned agents, an increasingly important topic these days, huge respect to DeepMind for investing more and more of their time and money in this area. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Reinforcement learning is a class of learning algorithms that chooses a set of actions in an environment to maximize a score. Typical use cases of this include writing an AI to master video games, or avoiding obstacles with a drone, and many more cool applications. What ties most of these ideas together is that whenever we talk about reinforcement learning, we typically mean teaching an agent how to navigate in an environment. A few years ago, a really fun online app surfaced that used a genetic algorithm to evolve the morphology of a simple 2D car, with the goal of having it roll as far away from a starting point as possible. It used a genetic algorithm that is quite primitive compared to modern machine learning techniques, and yet it still does well on this, so how about testing a proper reinforcement learner to optimize the body of the agent? What's more, what if we could jointly learn both the body and the navigation at the same time? Okay, so what does this mean in practice? Let's have a look at an example. Here, we have an ant that is supported by four legs, each consisting of three parts that are controlled by two motor joints. With the classical problem formulation, we can teach this ant to use these joints to learn to walk, but in the new formulation, not only the movement, but the body morphology is also subject to change. As a result, this ant learned that the body can also be carried by longer, thinner legs, and adjusted itself accordingly. As a plus, it also learned how to walk with these new legs, and this way, it was able to outclass the original agent. In this other example, the agent learns to more efficiently navigate a flat terrain by redesigning its legs, which are now reminiscent of small springs, and uses them to skip its way forward. Of course, if we change the terrain, the design of an effective agent also changes accordingly, and the super interesting part here is that it came up with an asymmetric design that is able to climb stairs and travel uphill efficiently. Loving it. We can also task this technique with minimizing the amount of building materials used to solve a task, and subsequently, it builds an adorable little agent with tiny legs that is still able to efficiently traverse this flat terrain. This principle can also be applied to the more difficult version of this terrain, which results in a lean, insect-like solution that can still finish this level while using about 75% less material than the original solution. And again, remember that not only the design, but the movement is learned here at the same time. While we look at these really fun bloopers, I'd like to let you know that we have an opening at our institute at the Vienna University of Technology for one postdoctoral researcher. The link is available in the video description; read it carefully to make sure you qualify, and if you apply through the specified email address, make sure to mention Two Minute Papers in your message. This is an excellent opportunity to read and write amazing papers and work with some of the sweetest people. This is not standard practice in all countries, so I will note that you can check the salary right in the call; it is a well-paid position in my opinion, and you get to live in Vienna. Also, your salary is paid not 12, but 14 times a year. That's Austria for you. It doesn't get any better than that. The deadline is the end of January. Happy holidays to all of you.
Thanks for watching and for your generous support, and I'll see you early January.
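Before we jump to the next paper, here is a bare-bones illustration of the joint optimization idea from this episode; this is a toy sketch of my own and not the paper's method. The body parameter (a leg length) and the behavior parameter (a stride frequency) sit in one vector and are improved together by simple random-search hill climbing:

    import numpy as np

    rng = np.random.default_rng(0)

    def distance_traveled(params):
        leg_length, stride_freq = params
        # Toy "physics": longer legs help only up to a point, and the best
        # stride frequency depends on the leg length. Form follows function.
        power = max(5.0 - 0.5 * leg_length**2, 0.0)
        gait_match = np.exp(-(stride_freq - 2.0 / max(leg_length, 0.1))**2)
        return power * leg_length * gait_match

    params = np.array([1.0, 1.0])  # initial body and behavior
    for _ in range(5000):
        candidate = params + rng.normal(0.0, 0.05, size=2)
        if distance_traveled(candidate) > distance_traveled(params):
            params = candidate

    print(params.round(2))  # the leg length and gait that co-evolved

Note how neither parameter can be optimized in isolation: changing the body silently changes which gait is best, which is exactly why learning both at the same time pays off.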
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Approximately 150 episodes ago, we looked at DeepMind's amazing algorithm that was able to look at a database with images of birds, and it could learn about them so much that we could provide a text description of an imaginary bird type, and it would dream up new images of them. It was a truly breathtaking piece of work, and its main limitation was that it could only come up with coarse images. It didn't give us a lot of details. Later, we talked about NVIDIA's algorithm that started out with such a coarse image, but didn't stop there. It progressively recomputed this image many times, each time with more and more details. This was able to create imaginary celebrities with tons of detail. This new work offers a number of valuable improvements over the previous techniques. It can train bigger neural networks with even more parameters, and create extremely detailed images with remarkable performance, so much so that if you have a reasonably powerful graphics card, you can run it yourself. The link is in the video description. Training these neural networks is also more stable than it used to be with previous techniques. As a result, it not only supports creating these absolutely beautiful images, but also gives us the opportunity to exert artistic control over the outputs. I think this is super fun. I could play with this all day long. What's more, we can also interpolate between these images, which means that if we have desirable images A and B, it can compute intermediate images between them, and the challenging part is that these intermediate images shouldn't be some sort of average between the two, which would be gibberish; they have to be images that are meaningful and can stand on their own. Look at this. Flying colors. And now comes the best part. The results were measured in terms of their Inception Score. This Inception Score defines how recognizable and diverse these generated images are, and most importantly, both of these are codified in a mathematical manner to reduce the subjectivity of the evaluation. This score is not perfect by any means, but it typically correlates well with the scores given by humans. The best of the earlier works had an Inception Score of around 50. And hold on to your papers, because the score of this new technique is no less than 166, and if we measured real images, they would score around 233. What an incredible leap in technology. And we are even being paid for creating and playing with such learning algorithms. What a time to be alive. A big thumbs up to the authors of the paper for providing quite a bit of information on failure cases as well. We also thank Insilico Medicine for supporting this video. They are using these amazing learning algorithms to create new molecules and identify new protein targets, with the aim of curing diseases, and aging itself. Make sure to check them out in the video description. They are our first sponsors, and it's been such a joy to work with them. Thanks for watching, and for your generous support, and I'll see you next time.
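A small footnote on the interpolation part: the blending is typically done in the generator's latent space and not in pixel space, because averaging pixels gives us exactly the gibberish we wanted to avoid, while every blended latent code is decoded into a full, coherent image. A minimal sketch with a placeholder generator:

    import numpy as np

    rng = np.random.default_rng(0)

    LATENT_DIM = 128
    W = rng.standard_normal((64, LATENT_DIM))  # stand-in for a trained generator

    def generator(z):
        # Placeholder for the trained network that maps latent codes to images.
        return W @ z

    z_a = rng.standard_normal(LATENT_DIM)  # latent code of desirable image A
    z_b = rng.standard_normal(LATENT_DIM)  # latent code of desirable image B

    # Interpolate the codes, not the pixels: each intermediate code is
    # decoded by the generator into an image that can stand on its own.
    for t in np.linspace(0.0, 1.0, 5):
        z = (1.0 - t) * z_a + t * z_b
        image = generator(z)
        print(round(float(t), 2), image[:2].round(2))

In practice, spherical interpolation is often preferred over this linear blend for Gaussian latent spaces, but the core idea is the same.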
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Style transfer is an interesting problem in machine learning research where we have two input images, one for content and one for style, and the output is our content image reimagined with this new style. The cool part is that the content can be a photo straight from our camera, and the style can be a painting, which leads to super fun and really good looking results. This subfield is only a few years old and has seen a number of amazing papers. Style transfer for HD images, videos, and some of these forgeries were even able to make professional art curators think that they were painted by a real artist. So here's a crazy idea: how about using style transfer to create caricatures? Well, this sounds quite challenging. Just think about it. A caricature is an elusive art where certain human features are exaggerated, and generally, the human face needs to be simplified and boiled down to its essence. It is a very human thing to do. So how could an AI possibly be endowed with such a deep understanding of, for instance, a human face? That sounds almost impossible. Our suspicion is further reinforced as we look at how previous style transfer algorithms try to deal with this problem. Not too well, but no wonder, it would be unfair to expect great results, as this is not what they were designed for. But now, look at these truly incredible results that were made with this new work. The main difference between the older works and this one is that, one, it uses generative adversarial networks, or GANs in short. This is an architecture where two neural networks learn together. One learns to generate better forgeries, and the other learns to find out whether an image has been forged. However, this alone would still not create the results that you see here. An additional improvement is that we have not one, but two of these GANs. One deals with style, but it is trained in a way that keeps the essence of the image. And the other deals with changing and warping the geometry of the image to achieve an artistic effect. This one leans on the input of a landmark detector that gives it around 60 points that show the location of the most important parts of a human face. The output of this geometry GAN is a distorted version of this point set, which can then be used to warp the style image to obtain the final output. This is a great idea, because the amount of distortion applied to the points can be controlled. So, we can tell the AI how crazy of a result we are looking for. Great! The authors also experimented with applying this to video. In my opinion, the results are incredible for a first crack at this problem. We are probably just one paper away from an AI automatically creating absolutely mind-blowing caricature videos. Make sure to have a look at the paper, as it has a ton more results, and of course, every element of the system is explained in great detail there. And if you enjoyed this episode and you would like to access all future videos in early access, or get your name immortalized in the video description as a key supporter, please consider supporting us on patreon.com slash TwoMinutePapers. The link is available in the video description. We were able to significantly improve our video editing rig, and this was possible because of your generous support. I am so grateful. Thank you so much. And this is why every episode ends with the usual quote: thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In a previous episode, we discussed a technique where we could specify a low-quality image of a test subject and a photo of a different person. What happened then is that the algorithm transformed our test subject into that pose. With another algorithm, we can transfer our facial gestures onto a different target subject. And this new method does something completely different. Here, we can copy a full-body movement from a video and transfer it onto a target person. This way, we can appear to be playing tennis, baseball, or finally be able to perform a hundred chin-ups. Well, at least on Instagram. Now look here. Up here, you see the target poses, and on the left, the target subjects. And between them, we see the output of this algorithm, with the target subjects taking these poses. As you see, the algorithm is quite consistent in the sense that during walking, we often encounter the same pose, which results in a very similar image. That's exactly the kind of consistency that we are looking for. Remarkably, this algorithm is also able to synthesize angles of these target subjects that it hadn't seen before. For instance, the backside of this person was never shown to the algorithm, and it correctly guesses interesting details, like the belt of this character continuing around the waist. Really cool, I love it. We can also put these characters in a virtual environment and animate them there. Now, this work, like most papers that explore something completely new, is raw and experimental. Clearly, there are issues with the occlusions and flickering, and the silhouettes of the characters give the trick away. Anyone looking at this footage can tell in a second that it is not real. The reason I am so excited about this is that now we finally see that this is a viable concept, and it will provide fertile grounds for new follow-up research works to improve upon. Two more papers down the line, it will probably work in HD and look significantly better. Just imagine how amazing that would be for movies, computer games and telepresence applications. Sign me up. And computer graphics research has a vast body of papers on how to illuminate these characters to appear as if they were really there in this environment. Will this be done with computer graphics or through AI? I am really keen to see how these fields will come together to solve such a challenging problem. What a time to be alive. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In a previous episode, we talked about a class of learning algorithms that were endowed with curiosity. This new work also showcases a curious AI that aims to solve Montezuma's Revenge, which is a notoriously difficult platform game for an AI to finish. The main part of the difficulty arises from the fact that the AI needs to be able to plan for longer time periods, and interestingly, it also needs to learn that short-term rewards don't necessarily mean long-term success. Let's have a look at an example, quoting the authors: there are four keys and six doors spread throughout the level. Any of the four keys can open any of the six doors, but they are consumed in the process. To open the final two doors, the agent must therefore forego opening two of the doors that are easier to find and that would immediately reward it for opening them. So what this means is that we have a tricky situation, because the agent would have to disregard the fact that it is getting a nice score from opening the doors, and understand that these keys can be saved for later. This is very hard for an AI to resist, and, again, curiosity comes to the rescue. Curiosity, at least this particular definition of it, works in a way that the harder it is for the AI to guess what will happen, the more excited it gets to perform an action. This drives the agent to finish the game and explore as much as possible, because it is curious to see what the next level holds. You see in the animation here that the big reward spikes show that the AI has found something new and meaningful, like losing a life, or narrowly avoiding an adversary. As you also see, climbing a ladder is a predictable, boring mechanic that the AI is not very excited about. Later, it becomes able to predict the results even better the second and third time around, and therefore it gets even less excited about ladders. This other animation shows how this curious agent explores adjacent rooms over time. This work also introduces a technique that the authors call random network distillation. This means that we take a completely randomly initialized, untrained neural network, and over time, slowly distill its responses into a second, trained network. This distillation also makes our neural network immune to the noisy TV problem from our previous episode, where our curious, unassuming agent would get stuck in front of a TV that continually plays new content. It also takes into consideration the score reported by the game, and has an internal motivation to explore as well. And hold on to your papers, because it can not only perform well in the game, but this AI is able to perform better than the average human. And again, remember that no ground truth knowledge is required; it was never demonstrated to the AI how one should play this game. Very impressive results indeed, and as you see, the pace of progress in machine learning research is nothing short of incredible. Make sure to have a look at the paper in the video description for more details. We'd also like to send a big thank you to Insilico Medicine for supporting this video. They use AI for research on preventing aging, believe it or not, and are doing absolutely amazing work. Make sure to check them out in the video description. Thanks for watching and for your generous support, and I'll see you next time.
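My reading of the random network distillation idea, boiled down to a few lines: a fixed, randomly initialized target network embeds each observation, a second predictor network is trained to match that embedding, and the remaining prediction error is the curiosity bonus. Often-visited states become predictable and therefore boring; note that this is a simplified linear sketch, not the paper's code:

    import numpy as np

    rng = np.random.default_rng(0)
    OBS_DIM, EMBED_DIM = 16, 8

    # Fixed, randomly initialized target network. It is never trained.
    target_W = rng.standard_normal((EMBED_DIM, OBS_DIM))
    # Predictor network, trained to imitate ("distill") the target.
    pred_W = np.zeros((EMBED_DIM, OBS_DIM))

    def curiosity_bonus(obs, lr=0.01):
        global pred_W
        error = target_W @ obs - pred_W @ obs
        pred_W += lr * np.outer(error, obs)  # distill the random target
        return float(error @ error)  # large for novel states, shrinks with visits

    ladder = rng.standard_normal(OBS_DIM)  # a state we see over and over
    for _ in range(500):
        curiosity_bonus(ladder)  # climbing the ladder becomes boring

    new_room = rng.standard_normal(OBS_DIM)  # an unexplored state
    print(curiosity_bonus(ladder), curiosity_bonus(new_room))

Because the target is a deterministic function of the observation, the bonus cannot be kept high by mere randomness in the environment, which is, as far as I can tell, the intuition behind the noisy TV immunity mentioned above.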
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. If we have an animation movie or a computer game where, like in any other digital medium, we wish to have high-quality, lifelike animations for our characters, we likely have to use motion capture. Motion capture means that we put an actor in a studio and ask this person to perform cartwheels and other motion types that we wish to transfer to our virtual characters. This works really well, but recording and cleaning all this data is a very expensive and laborious process. As we are entering the age of AI, of course, I wonder if there is a better way to do this. Just think about it. We have no shortage of videos here on YouTube of people performing cartwheels and other moves, and we have a bunch of learning algorithms that know what pose they are taking during the video. Surely we can make something happen here, right? Well, yes and no. A few methods already exist to perform this, but all of them have deal-breaking drawbacks. For instance, these previous works predict the body poses for each frame, but each of them has small individual inaccuracies that produce this annoying flickering effect. Researchers like to refer to this as the lack of temporal coherence. But this new technique is able to remedy this. Great result. This new work also boasts a long list of other incredible improvements. For instance, the resulting motions are also simulated in a virtual environment, and it is shown that they are quite robust. So much so that we can throw a bunch of boxes against the AI, and it can still adjust. Kind of. These motions can be retargeted to different body shapes. You can see it demonstrated here, quite aptly, with this neat little nod to Boston Dynamics. It can also adapt to challenging new environments, or, get this, it can even work from a single photo instead of a video by completing the motion seen within. What kind of wizardry is that? How could it possibly perform that? First, we take an input photo or video and perform pose estimation on it. But this is still a per-frame computation, and you remember that this doesn't give us temporal consistency. This motion reconstruction step ensures that we have smooth transitions between the poses. And now comes the best part. We start simulating a virtual environment where a digital character tries to move its body parts to perform these actions. With this, we can not only reproduce these motions, but also continue them. This is where the wizardry lies. If you read the paper, which you should absolutely do, you will see that it uses OpenAI's amazing proximal policy optimization algorithm to find the best motions. Absolutely amazing. So this can perform and complete a variety of motions, adapt to more challenging landscapes, and do all this in a temporally smooth manner. However, the Gangnam Style dance still proves to be too hard. The technology is not there yet. We also thank Insilico Medicine for supporting this video. They work on AI-based drug discovery and aging research. They have some unbelievable papers on these topics. Make sure to check them out, and this paper as well, in the video description. Thanks for watching and for your generous support, and I'll see you next time.
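To see why that motion reconstruction step matters so much, here is a toy illustration of the flickering problem with the simplest possible fix; the paper itself uses a more sophisticated optimization, this is merely a low-pass filter over noisy per-frame pose estimates:

    import numpy as np

    rng = np.random.default_rng(0)

    t = np.linspace(0.0, 2.0 * np.pi, 120)
    true_angle = np.sin(t)  # the real joint motion over 120 frames
    per_frame = true_angle + rng.normal(0.0, 0.3, t.size)  # flickery estimates

    # Exponential moving average: each frame blends with its predecessors,
    # trading a little lag for temporal coherence.
    smooth = np.empty_like(per_frame)
    smooth[0] = per_frame[0]
    for i in range(1, per_frame.size):
        smooth[i] = 0.7 * smooth[i - 1] + 0.3 * per_frame[i]

    print(np.abs(per_frame - true_angle).mean())  # jittery per-frame error
    print(np.abs(smooth - true_angle).mean())     # noticeably smaller error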
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Reinforcement learning is a learning algorithm that chooses a set of actions in an environment to maximize a score. This class of techniques enables us to train an AI to master video games, avoid obstacles with a drone, clean up a table with a robot arm, and has many more really cool applications. We use the words score and reward interchangeably, and the goal is that over time, the agent has to learn to maximize a prescribed reward. So, where should the rewards come from? Most techniques work by using extrinsic rewards. Extrinsic rewards are only a half solution, as they need to come from somewhere, typically from the game in the form of a game score, which simply isn't present in every game. And even if it is present in a game, it is very different for Atari Breakout and, for instance, a strategy game. Intrinsic rewards are designed to come to the rescue, so the AI would be able to completely ignore the in-game score and somehow have some sort of inner motivation that drives it to complete a level. But what could possibly be a good intrinsic reward that would work well on a variety of tasks? Shouldn't this be different from problem to problem? If so, we are back to square one. If we are to call our learner intelligent, then we need one algorithm that is able to solve a large number of different problems. If we need to reprogram it for every game, that's just a narrow intelligence. So, a key finding of this paper is that we can endow the AI with a very human-like property: curiosity. Human babies also explore the world out of curiosity and, as a happy side effect, learn a lot of useful skills to navigate in this world later. However, as in our everyday speech, the definition of curiosity is a little nebulous. We have to provide a mathematical definition for it. In this work, it is defined as trying to maximize the number of surprises. This will drive the learner to favor actions that lead to unexplored regions and complex dynamics in a game. So, how do these curious agents fare? Well, quite well. In Pong, when the agent plays against itself, it will end up in long matches, passing the ball between the two paddles. How about bowling? Well, I cannot resist but quote the authors for this one: "The agent learned to play the game better than agents trained to maximize the clipped extrinsic reward directly. We think this is because the agent gets attracted to the difficult-to-predict flashing of the scoreboard occurring after the strikes." With a little stretch, one could perhaps say that this AI is showing signs of addiction. I wonder how it would do with modern mobile games with loot boxes. But we'll leave that for future work for now. How about Super Mario? Well, the agent is very curious to see how the levels continue, so it learns all the necessary skills to beat the game. Incredible. However, the more seasoned fellow scholars immediately find that there is a catch. What if we sit the AI down in front of a TV that constantly plays new material? You may think that this is some kind of a joke, but it's not. It is a perfectly valid issue, because due to its curiosity, the AI would have to stay there forever and not start exploring the level. This is the good old definition of TV addiction. Talk about human-like properties. And sure enough, as soon as we turn off the TV, the agent gets to work immediately. Who would have thought? The paper notes that this challenge needs to be dealt with over time.
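For the mathematically inclined, here is a minimal sketch of this definition of curiosity: the intrinsic reward is the prediction error of a learned forward model that guesses the next state from the current state and action. The dimensions and the continuous action vector are illustrative assumptions; real game agents would typically one-hot encode discrete actions.

```python
import torch
import torch.nn as nn

state_dim, action_dim = 32, 4
forward_model = nn.Sequential(
    nn.Linear(state_dim + action_dim, 128), nn.ReLU(),
    nn.Linear(128, state_dim))
opt = torch.optim.Adam(forward_model.parameters(), lr=1e-4)

def curiosity_reward(s, a, s_next):
    # Surprise = how wrong the model was about what happened next.
    pred = forward_model(torch.cat([s, a], dim=-1))
    return (pred - s_next).pow(2).mean(dim=-1)

s = torch.randn(8, state_dim)
a = torch.randn(8, action_dim)
s_next = torch.randn(8, state_dim)
r_int = curiosity_reward(s, a, s_next)   # big where dynamics are unexplored
loss = r_int.mean()                      # as the model improves, familiar
opt.zero_grad(); loss.backward(); opt.step()  # dynamics stop being surprising
```

Note that this formulation is exactly what the noisy TV exploits: inherently unpredictable inputs keep the prediction error, and hence the reward, permanently high.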
However, the algorithm was tested on a large variety of problems, and it did not come up in practice. And the key insight is that curiosity is not only a great replacement for extrinsic rewards, as the two are often aligned, but in some cases it is even superior to them. This is an amazing value proposition for something that we can run on any problem without any additional work. So, curious agents that are addicted to flashing score screens and TVs. What a time to be alive. And if you enjoyed this episode and you wish to help us on our quest to inform even more people about these amazing stories, please consider supporting us on patreon.com slash two-minute papers. You can pick up cool perks there to keep your papers addiction in check. As always, there is a link to it and to the paper in the video description. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. This is a neural network-based technique that can perform audio-visual separation. Before we talk about what that is, I will tell you what it is not. It is not what we've seen in the previous episode, where we could select a pixel and listen to it. Have a look. And now let's try to separate the sound of the cello and see if it knows where it comes from. This one is different. This new technique can clean up an audio signal by suppressing the noise in a busy bar, even if the source of the noise is not seen in the video. It can also enhance the voice of the speaker at the same time. Let's listen. So the task is: given the video, the voice of the chosen person gets cleaned up, and everything else gets suppressed. Or, if we have a Skype meeting with someone in a lab or a busy office where multiple people are speaking nearby, we can also perform a similar speech separation, which would be a godsend for future meetings. And I think if you are a parent, the utility of this example needs no further explanation. I am not sure if I ever encountered the term "screaming children" in the abstract of an AI paper, so that one was also a first here. This is a super difficult task, because the AI needs to understand what lip motions correspond to what kind of sounds, which is different for all kinds of languages, age groups, and head positions. To this end, the authors put together a stupendously large dataset with almost 300,000 videos with clean speech signals. This dataset is then run through a multi-stream neural network that detects the number of human faces within the video, generates small thumbnails of them, and observes how they move over time. It also analyzes the audio signals separately, then fuses these elements together with a recurrent neural network to output the separated audio waveforms. A key advantage of this architecture and training method is that, as opposed to many previous works, it is speaker independent. Therefore, we don't need specific training data from the speaker we want to use it on. This is a huge leap in terms of usability. The paper also contains an excellent demonstration of this concept by taking a piece of footage from Conan O'Brien's show where two comedians were booked for the same time slot and talk over each other. The result is a performance where it is near impossible to understand what they are saying, but with this technique, we can hear both of them one by one, crystal clear. You see some results over here, but make sure to click the paper link in the description to hear the sound samples as well. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. This is a neural network-based method that is able to show us the sound of pixels. What this means is that it separates and localizes audio signals in videos. The two keywords are separation and localization, so let's take a look at these one by one. Localization means that we can pick a pixel in the image and it shows us the sound that comes from that location, and the separation part means that ideally, we only hear that particular sound source. Let's have a look at an example. Here's an input video. And now let's try to separate the sound of the cello and see if it knows where it comes from. Same with the guitar. Now for a trickier question. Even though there are sound reverberations off the walls, the walls don't directly emit sound themselves, so I am hoping to hear nothing now. Let's see. Flat signal. Great. So how does this work? It is a neural network-based solution that has watched 60 hours of musical performances to be able to pull this off, and it learns that a change in sound can often be traced back to a change in the video footage as a musician is playing an instrument. As a result, get this: no supervision is required. This means that we don't need to label this data, or in other words, we don't need to specify how each pixel sounds. It learns to infer all this information from the video and sound signals by itself. This is huge; otherwise, just imagine how many work hours it would require to annotate all this data. And another cool application is that if we can separate these signals, then we can also independently adjust the sound of these instruments. Have a look. Now, clearly it is not perfect, as some frequencies may bleed over from one instrument to the other, and there are also other methods to separate audio signals, but this particular one does not require any expertise, so I see a great value proposition there. If you wish to create a separated version of a video clip and use it for karaoke, or just subtract the guitar and play it yourself, I would look no further. Also, you know the drill, this will be way better a couple of papers down the line. So, what do you think? What possible applications do you envision for this? Where could it be improved? Let me know below in the comments. Thanks for watching and for your generous support, and I'll see you next time.
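A common way to train such a system without any labels is the mix-and-separate idea: mix the audio tracks of two videos, then ask the network to recover each original track conditioned on visual features, so the ground truth comes for free. Here is a hedged sketch of that training signal; the shapes and the simple mask network are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

freq_bins, vis_dim = 256, 16
mask_net = nn.Sequential(nn.Linear(freq_bins + vis_dim, 512), nn.ReLU(),
                         nn.Linear(512, freq_bins), nn.Sigmoid())
opt = torch.optim.Adam(mask_net.parameters(), lr=1e-3)

def separate(mix_spec, vis_feat):
    # Predict a per-frequency mask for the source indicated by vis_feat.
    return mask_net(torch.cat([mix_spec, vis_feat], dim=-1)) * mix_spec

# Two "videos": magnitude spectra plus visual features of their sources.
spec_a, spec_b = torch.rand(1, freq_bins), torch.rand(1, freq_bins)
vis_a, vis_b = torch.randn(1, vis_dim), torch.randn(1, vis_dim)

mix = spec_a + spec_b     # we made the mixture, so the targets are free
loss = ((separate(mix, vis_a) - spec_a).pow(2).mean() +
        (separate(mix, vis_b) - spec_b).pow(2).mean())
opt.zero_grad(); loss.backward(); opt.step()
```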
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. This video is not about a paper, but about the video series itself. I don't do this often, but for the sake of transparency, I wanted to make sure to tell you about this. We have recently hit 175,000 subscribers on the channel. I find this number almost unfathomable, and please know that this became a possibility only because of you, so I would like to let you know how grateful I am for your support here on YouTube and Patreon. As most of you know, I still work as a full-time researcher at the Technical University of Vienna, and I get the question, why not go full-time on 2 Minute Papers, from you Fellow Scholars increasingly often. The answer is that over time, I would love to, but our current financial situation does not allow it. Let me explain that. First, I tried to run the channel solely on YouTube ads, which led to a rude awakening. Most people, myself included, are very surprised when they hear that the rates had become so low that around the first 1 million viewer mark, the series earned less than a dollar a day. Then we introduced Patreon, and with your support, we are now able to buy proper equipment to make better videos for you. I can, without hesitation, say that you are the reason this channel can exist. Everything is better this way. Now we have two independent revenue sources, Patreon being the most important. However, whenever YouTube or Patreon monetization issues arise, you see many other channels disappearing into the ether. I am terrified of this, and I want to do everything I possibly can to make sure that this does not happen to us, and that we can keep running the series for a long, long time. However, if anything happens to any of these revenue streams, simply put, we are toast. Even though it would be a dream come true, because of this, it would be irresponsible to go full time on the papers. To remedy this, we have been thinking about introducing a third revenue stream with sponsorships for a small number of videos each month. The majority, 75% of the videos, would remain Patreon supported, and the remaining 25% would be sponsored, just enough to enable me and my wife to do this full time in the future. There would be no other changes to the videos: Patreon supporters get all of them in early access, and as before, I choose the papers too. With this, if something happens to any of the revenue streams, we would be able to keep the series afloat without any delays. I would also have more time to every now and then fly out and inform key political decision makers on the state of AI so they can make better decisions for us. Everything else would remain the same, the videos would arrive more often and on time, and the dream could perhaps come true. I think transparency is of utmost importance and wanted to make sure to inform you before any change happens. I hope you are okay with this, and if you are a company and you are interested in sponsoring the series, let's talk. As always, thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. If we wish to populate a virtual world with photorealistic materials, the last few years have offered a number of amazing techniques to do so. We can obtain such a material from a flash and no-flash photograph pair of a target material and have a neural network create a digital version of it, or, remarkably, even just one photograph is enough to perform this. This footage that you see here shows these materials after they have been rendered by a light simulation program. If we don't have physical access to these materials, we can also use a recent learning algorithm to learn our preferences and recommend new materials that we would enjoy. However, whenever I publish such a video, I always get comments asking, but what about the more advanced materials? And my answer is, you are right. Have a look at this piece of work, which is about acquiring printed holographic materials. This means that we have physical access to this holographic pattern, put a camera close by, and measure data in a way that can be imported into a light simulation program to make a digital copy of it. This idea is much less far-fetched than it sounds, because we can find such materials in many everyday objects like banknotes, gift bags, clothing, or, of course, security holograms. However, it is also quite difficult. Look here. As you see, these holographic patterns are quite diverse, and a well-crafted algorithm would have to be able to capture this rotation effect, circular diffractive areas, firework effects, and even this iridescent glitter. That is quite a challenge. This paper proposes two novel techniques to approach this problem. The first one assumes that there is some sort of repetition in the visual structure of the hologram and takes that into consideration. As a result, it can give us high quality results by taking only one to five photographs of a target material. The second method is more exhaustive and needs more specialized hardware, but in return, can deal with arbitrary structures and requires at least four photographs at all times. These are both quite remarkable. Just think about the fact that these materials look different from every viewing angle, and they also change over the surface of the object. And for the first technique, we don't need sophisticated instruments; only a consumer DSLR camera is required. The reconstructed digital materials can be used in real time, and what's more, we can also exert artistic control over the outputs by modifying the periodicities of the material. How cool is that? And if you are about to subscribe to the series, or you are already subscribed, make sure to click the bell icon, or otherwise you may miss future episodes. That would be a bummer, because I have a lot more amazing papers to show you. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. In this series, we often discuss that neural networks are extraordinarily useful for classification tasks. This means that if we give them an image, they can tell us what's on it, which is great for self-driving cars, image search, and a variety of other applications. However, fewer people know that they can also be used for image generation. We've seen many great examples of this, where NVIDIA's AI was able to dream up high-resolution images of imaginary celebrities. This was done using a generative adversarial network, an architecture where two neural networks battle each other. However, these methods don't work too well if we have too much variation in our datasets. For instance, they are great for faces, but not for synthesizing the entire human body. This particular technique uses a different architecture, and as a result, can synthesize an entire human body and is also able to synthesize both shape and appearance. You will see in a moment that because of that, it can do magical things. For instance, in this example, all we have is one low-quality image of a test subject as an input, and we can give it a photo of a different person. What happens now is that the algorithm runs pose estimation on this input and transforms our test subject into that pose. The crazy thing about this is that it even creates views for new angles we didn't even have access to. In this other experiment, we have one image on the left. What we can do here is specify not a person, but draw a pose directly, indicating that we wish to see our test subject in this pose, and the algorithm is also able to create an appropriate new image. And again, it works for angles that require information that we don't have access to. These new angles show that the technique understands the concept of shorts or trousers, although it seems to forget to put on socks sometimes. Truth be told, I don't blame it. What is even cooler is that it seems to behave very similarly for a variety of different inputs. This is non-trivial, as this property doesn't just emerge out of thin air, and it will be a great selling point for this new method. It also supports a feature where we give a crude drawing to the algorithm, and it will transform it into a photorealistic image. However, it is clear that there are many ways to fill such a drawing in with information, so how do we tell the algorithm what appearance we are looking for? Well, worry not, because this technique can also perform appearance transfer. This means that we can exert artistic control over the output by providing a photo of a different object, and it will transfer the style of this photo to our input. No artistic skills needed, but good taste is as much of a necessity as ever. Yet another AI that will empower both experts and novice users alike. And while we are enjoying these amazing results, or even better, if you have already built up an addiction for the papers, you can keep it in check by supporting us on Patreon and in return getting access to these videos earlier. You can find us through patreon.com slash 2 Minute Papers. There is a link to it in the video description and, as always, to the paper as well. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, we are going to talk about the craft of simulating rays of light to create beautiful images, just like the ones you see here. And when I say simulating rays of light, I mean not a few, but millions and millions of light rays need to be computed, alongside how they get absorbed or scattered off of our objects in a virtual scene. Initially, we start out with a really noisy image, and as we add more rays, the image gets clearer and clearer over time. The time it takes for these images to clean up depends on the complexity of the geometry and our material models, and one thing is for sure: rendering materials that have multiple layers is a nightmare. This paper introduces an amazing new multi-layer material model to address that. Here, you see an example where we are able to stack together transparent and translucent layers to synthesize a really lifelike scratched metal material with water droplets. Also, have a look at these gorgeous materials, and note that these are all virtual materials that are simulated using physics and computer graphics. Isn't this incredible? However, some of you fellow scholars remember that we talked about multi-layered materials before. So, what's new here? This new method supports more advanced material models that previous techniques were either unable to simulate or took too long to do so. But that's not all. Have a look here. What you see is an equal-time comparison, which means that if we run the new technique against the older methods for the same amount of time, it is easy to see that we will have much less noise in our output image. This means that the images clear up quicker and we can produce them in less time. It also supports my favorite, multiple importance sampling, an aggressive noise reduction technique by Eric Veach, which is arguably one of the greatest inventions ever in light transport research. This ensures that for more difficult scenes, the images clean up much, much faster, and it has a beautiful and simple mathematical formulation. Super happy to see that it also earned him a technical Oscar award a few years ago. If you are enjoying learning about light transport, make sure to check out my course on this topic at the Technical University of Vienna. I still teach this at the university for 20 master's students at a time and thought that the teachings shouldn't only be available for a lucky few people who can afford a college education. Clearly, the teaching should be available for everyone, so we recorded it and put it online, and now everyone can watch it free of charge. I was quite stunned to see that more than 10,000 people decided to start it, so make sure to give it a go if you're interested. And just one more thing: as you are listening to this episode, I am holding a talk at the EU's Political Strategy Center. And the objective of this talk is to inform political decision makers about the state of the art in AI so they can make more informed decisions for us. Thanks for watching and for your generous support and I'll see you next time.
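Since multiple importance sampling came up, here is a tiny self-contained sketch of the balance heuristic on a toy one-dimensional integrand: two sampling strategies are combined, and each sample is weighted by how likely any strategy was to produce it. The integrand and the two densities below are stand-ins for real light-source and material sampling, not code from the paper.

```python
import math
import random

def f(x):                       # toy stand-in for a light transport integrand
    return math.sin(math.pi * x) ** 8

pdf_a = lambda x: 1.0                          # uniform sampling strategy
pdf_b = lambda x: 2.0 * x                      # strategy favoring x near 1
sample_a = lambda: random.random()
sample_b = lambda: math.sqrt(random.random())  # inverse CDF of pdf_b

def mis_estimate(n=100000):
    total = 0.0
    for _ in range(n):
        # One sample from each strategy per iteration.
        for smp, pdf in ((sample_a, pdf_a), (sample_b, pdf_b)):
            x = smp()
            w = pdf(x) / (pdf_a(x) + pdf_b(x))   # balance heuristic weight
            total += w * f(x) / pdf(x)
    return total / n

print(mis_estimate())   # converges to the integral of f with low variance
```

The weights sum to one for every point, so the combined estimator stays unbiased while damping samples that any one strategy handles poorly.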
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. This is an episode that doesn't have the usual visual fireworks and is expected to get fewer clicks, but it is an important story to tell, and because of your support, we are able to cover a paper like this. So now, get this. This is a non-invasive brain-to-brain interface that uses EEG to record brain signals and TMS to deliver information to the brain. The non-invasive part is quite important; it basically means that we don't need to drill a hole in the head of the patients. That's a good idea. This image shows three humans connected via computers, two senders and one receiver. The senders provide information to the receiver about something he would otherwise not know about, and we measure whether they are able to collaboratively solve a problem together. These people never met each other, don't even know each other, and yet they can collaborate through this technique directly via brain signals. Wow! BCI means brain-computer interface, and CBI is, as you guessed, the computer-brain interface. So these brain signals can be encoded and decoded and freely transferred between people and computers. Insanity. After gathering all this information, the receiver makes a decision, which the senders also have access to, and they can transmit some more information if necessary. So what do they use it for? Of course, to play Tetris. Jokes aside, this is a great experiment where the goal is to clear a line. Simple enough, right? Not so much, because there is a twist. The receiver only sees what you see here on the left side. This is the current piece we have to place on the field, but the receiver has no idea how to rotate it because he doesn't see its surroundings. But the senders do, so they transmit the appropriate information to the receiver, who will now be able to make an informed decision as to how to rotate this piece correctly to clear a line. So does it work? The experiment is designed in a way that there is a 50% chance to be right without any additional information for the receiver, so this will be the baseline result. And the results are between 75 and 85%, which means that the interface is working and brain-to-brain collaboration is now a reality. I am out of words. The paper also talks about brain-to-brain social networks and all kinds of science fiction like that. My head is about to explode with the possibilities. Who knows? Maybe in a few years we can make a super intelligent brain that combines all of our expertise and does research for all of us. Or writes Two Minute Papers episodes. This paper is a must-read. Do you have any other ideas as to how this could enhance our lives? Let me know in the comments section. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Pose estimation is an interesting area of research where we typically have a few images or video footage of humans, and we automatically try to extract the pose a person was taking. In short, the input is one or more photos, and the output is typically a skeleton of the person. So what is this good for? A lot of things. For instance, we can use these skeletons to cheaply transfer the gestures of a human onto a virtual character, for fall detection for the elderly, for analyzing the motion of athletes, and many, many others. This work showcases a neural network that measures how Wi-Fi radio signals bounce around in the room and reflect off of the human body, and from these murky waves, it estimates where we are. Not only that, but it is also accurate enough to tell us our pose. As you see here, since the Wi-Fi signal also traverses in the dark, this pose estimation works really well in poor lighting conditions. That is a remarkable feat. But now, hold on to your papers, because that's nothing compared to what you are about to see now. Have a look here. We know that Wi-Fi signals go through walls, so perhaps this pose estimation could work through walls too? Surely that can't be true, right? It tracks the pose of this human as he enters the room, and now, as he disappears, look, the algorithm still knows where he is. That's right, this means that it can also detect our pose through walls. What kind of wizardry is that? Now, note that this technique doesn't look at the video feed we are now looking at; it is only there for us as a visual reference. It is also quite remarkable that the signal being sent out is a thousand times weaker than an actual Wi-Fi signal, and that it can also detect multiple humans. This is not much of a problem with color images, because we can clearly see everyone in an image, but the radio signals are much more difficult to read when they reflect off of multiple bodies in the scene. The whole technique works through using a teacher-student network structure. The teacher is a standard pose estimation neural network that looks at a color image and predicts the pose of the humans therein. So far, so good, nothing new here. However, there is a student network that looks at the correct decisions of the teacher but has the radio signal as an input instead. As a result, it will learn what the different radio signal distributions mean and how they relate to human positions and poses. As the name says, the teacher shows the student neural network the correct results, and the student learns how to produce them from radio signals instead of images. If anyone had said that they were working on this problem 10 years ago, they would have likely ended up in an asylum. Today, it is reality. What a time to be alive. Also, if you enjoyed this episode, please consider supporting the show at patreon.com slash two minute papers. You can pick up really cool perks like getting your name shown as a key supporter in the video and more. Because of your support, we are able to create all of these videos smooth and creamy, in 4k resolution and 60 frames per second, and with closed captions. And we are currently saving up for a new video editing rig to make better videos for you. We also support one-time payments through PayPal and the usual cryptocurrencies. More details about all of these are available in the video description and as always, thanks for watching and for your generous support and I'll see you next time.
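Here is a minimal sketch of the teacher-student idea described above: a frozen image-based pose network provides the labels, and a student that only sees radio-signal features learns to match them. The stand-in linear teacher, the feature dimensions, and the plain L2 loss are all illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

num_joints, img_dim, rf_dim = 14, 512, 256
teacher = nn.Linear(img_dim, num_joints * 2)   # stand-in for a real pose net
teacher.requires_grad_(False)                  # frozen: it only labels data
student = nn.Sequential(nn.Linear(rf_dim, 512), nn.ReLU(),
                        nn.Linear(512, num_joints * 2))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

# Synchronized camera features and radio features of the same scene.
img_feat, rf_feat = torch.randn(32, img_dim), torch.randn(32, rf_dim)
with torch.no_grad():
    target = teacher(img_feat)    # "correct" joint coordinates from vision
loss = (student(rf_feat) - target).pow(2).mean()
opt.zero_grad(); loss.backward(); opt.step()
# At test time only the radio signal is needed, so darkness and walls
# are no obstacle.
```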
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. This robot was tasked to clean up this table. Normally, anyone who watches this series knows that this would be no big deal for any modern learning algorithm. Just grab it, right? Well, not in this case, because, reason number one, several objects are tightly packed together, and reason number two, they are too wide to hold with the fingers. This means that the robot needs to figure out a series of additional actions to push the other pieces around and finally grab the correct one. Look, it found out that sometimes pushing helps grasping by making space for the fingers to grab these objects. This is a bit like the Roomba vacuum cleaner robot, but even better for clutter. Really cool. This robot arm works the following way. It has an RGB-D camera, which endows it with the ability to see both color and depth. Now that we have this image, we have not one, but two neural networks looking at it. One is used to predict the utility of pushing at different possible locations, and one does the same for grasping. Finally, a decision is made as to which motion would lead to the biggest improvement in the state of the table. So, what about the training process? As you see, the speed of this robot arm is limited, and we may have to wait for a long time for it to learn anything useful and not just flail around, destroying other nearby objects. The solution includes my favorite part: training the robot within a simulated environment where these commands can be executed within milliseconds, speeding up the training process significantly. Our hope is always that the principles learned within the simulation apply to reality. Checkmark. The simulation is also very useful to make comparisons with other state of the art algorithms easier. And, do you know what the bane of many, many learning algorithms is? Generalization. This means that if the technique was designed well, it can be trained on plain-looking wooden blocks, and it will do well when it encounters new objects that are vastly different in shape and appearance. And as you see on the right, remarkably, this is exactly the case. Checkmark. This makes us one step closer to learning algorithms that can see the world around us, interpret it, and make proper decisions to carry out a plan. Thanks for watching and for your generous support, and I'll see you next time.
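The decision step lends itself to a short sketch: two fully convolutional networks score every pixel of the view for pushing and for grasping, and the robot executes the single highest-scoring primitive. The one-layer stand-in networks below are assumptions for illustration; the point is the argmax over the two utility maps.

```python
import torch
import torch.nn as nn

push_net = nn.Conv2d(4, 1, kernel_size=1)   # input: 4-channel RGB-D view
grasp_net = nn.Conv2d(4, 1, kernel_size=1)  # (real nets are much deeper)

rgbd = torch.randn(1, 4, 224, 224)
with torch.no_grad():
    q_push, q_grasp = push_net(rgbd), grasp_net(rgbd)

# Pick the motion primitive and pixel location with the highest utility,
# i.e., the action expected to improve the state of the table the most.
if q_push.max() > q_grasp.max():
    idx = torch.argmax(q_push)
    action = ("push", divmod(idx.item(), 224))
else:
    idx = torch.argmax(q_grasp)
    action = ("grasp", divmod(idx.item(), 224))
print(action)   # e.g. ('grasp', (110, 87)) in pixel coordinates
```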
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. With this technique, we can take a photograph of a desired material and use a neural network to create a digital material model that matches it, which we can then use in computer games and animation movies. We can import real-world materials into our virtual worlds, if you will. Typically, to do this, an earlier work requires two photographs, one with flash and one without, to get enough information about the reflectance properties of the material. Then, a follow-up AI paper was able to do this from only one image. It doesn't even need to turn the camera around the material to see how it handles reflections, but can learn all of these material properties from only one image. Isn't that miraculous? We talked about this work in more detail in Two Minute Papers Episode 88. That was about two years ago. I put a link to it in the video description. Let's look at some results with this new technique. Here, you see the photos of the input materials, and on the right, the reconstructed material. Please note that this reconstruction means that the neural network predicts the physical properties of the material, which are then passed to a light simulation program. So on the left, you see reality, and on the right, the prediction plus simulation results under a moving point light. It works like magic. Love it. As you see in the comparisons here, it produces results that are closer to the ground truth than previous techniques. This method is designed in a way that enables us to create a larger training set for more accurate results. As you know, with learning algorithms, we are always looking for more and more training data. Also, it uses two neural networks instead of one, where one of them looks at local, nearby features in the input, and the other one runs in parallel and ensures that the material that is created is also globally correct. Note that there are some highly scattering materials that this method doesn't support, for instance, fabrics or human skin. But since producing these materials in a digital world takes quite a bit of time and expertise, this will be a godsend for the video games and animation movies of the future. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. In this series, we have seen many times how good neural network-based solutions are at image classification. This means that the network looks at an image and successfully identifies its contents. However, neural network-based solutions are also capable of empowering art projects by generating new, interesting images. This beautifully written paper explores how a slight tweak to a problem definition can drastically change the output of such a neural network. It shows how many of these research works can be seen as the manifestation of the same overarching idea. For instance, we can try to visualize what groups of neurons within these networks are looking for, and we get something like this. The reason for this is that important visual features like the eyes can appear at any part of the image, and different groups of neurons look for them elsewhere. With a small modification, we can put these individual visualizations within a shared space and create a much more consistent and readable output. In a different experiment, it is shown how a similar idea can be used with compositional pattern-producing networks, or CPPNs in short. These networks are able to take spatial positions as an input and produce colors on the output, thereby creating interesting images of arbitrary resolution. Depending on the structure of this network, it can create beautiful images that are reminiscent of light paintings. And here you can see how the outputs of these networks change during the training process. They can also be used for image morphing as well. A similar idea can be used to create images that go beyond the classical 2D RGB images and create semi-transparent images instead. And there is much, much more in the paper. For instance, there is an interactive demo that shows how we can seamlessly put a texture on a 3D object. It is also possible to perform neural style transfer on a 3D model. This means that we have an image for style and a target 3D model, and you can see the results over here. This paper is a gold mine of knowledge and contains a lot of insights on how neural networks can further empower artists working in the industry. If you read only one paper today, it should definitely be this one. And this is not just about reading; you can also play with these visualizations, and as the source code is also available for all of these, you can also build something amazing on top of them. Let the experiments begin. So, this was a paper from the amazing Distill journal, and just so you know, they may be branching out to different areas of expertise, which is amazing news. However, they are looking for a few helping hands to accomplish that, so make sure to click the link to this editorial update in the video description to see how you can contribute. I would personally love to see more of these interactive articles. Thanks for watching and for your generous support, and I'll see you next time.
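To give a feel for CPPNs, here is a minimal sketch: a tiny random network maps (x, y) coordinates to RGB values, so the resulting image can be rendered at any resolution simply by sampling a denser grid. The architecture and activations are an illustrative guess, not the ones used in the article.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 16))    # (x, y) coordinates in, hidden features out
W2 = rng.normal(size=(16, 16))
W3 = rng.normal(size=(16, 3))    # hidden features in, RGB out

def cppn(res):
    xs = np.linspace(-1, 1, res)
    grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
    h = np.tanh(grid @ W1)       # smooth nonlinearities are what give
    h = np.tanh(h @ W2)          # the flowing, light-painting look
    rgb = 1 / (1 + np.exp(-(h @ W3)))   # squash into [0, 1] colors
    return rgb.reshape(res, res, 3)

img = cppn(256)   # re-rendering with cppn(2048) costs nothing extra,
                  # because the image is a function, not a pixel grid
```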
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Style transfer is a mostly AI-based technique where you take a photograph, put a painting next to it, and it applies the style of the painting to our photo. A key insight of this new work is that a style is complex, and it can only be approximated with one image. One image is just one instance of a style, not the style itself. Have a look here. If we take this content image and use Van Gogh's Road with Cypress and Star painting as the art style, we get this. However, if we had used Starry Night instead, it would have resulted in this. This is not learning about the style; this is learning a specific instance of a style. Here you see two previous algorithms that were instead trained on a collection of works from Van Gogh. However, you see that they are a little blurry and lack detail. This new technique is able to address this really well. Also, look at how convincingly it stylized the top silhouettes of the bell tower. It can also deal with HD videos at a reasonable speed of 9 of these images per second. Very tasty, love it. And of course, style transfer is a rapidly growing field, so there are ample comparisons in the paper against other competing techniques. The results are very convincing. I feel that in most cases, it represents the art style really well and can decide where to leave the image content similar to the input and where to apply the style, so the overall outlook of the image remains similar. So we can look at these results and discuss who likes which one all day long. But there are also other, more objective ways of evaluating such an algorithm. What is really cool is that the technique was tested by human art history experts, and they not only found this method to be the most convincing of all the style transfer methods, but also thought that the AI-produced paintings were from an artist 39% of the time. So this means that the algorithm is able to learn the essence of an artistic style from a collection of images. This is a huge leap forward. Make sure to have a look at the paper, which also describes a new style-aware loss function and differences in the training process of this method as well. And if you enjoyed this episode and would like to see more, please help us exist through Patreon. On this website, you can support the series and pick up cool perks like early access to these videos, deciding the order of future episodes, and more. You know the drill: a dollar a month is almost nothing, but it keeps the papers coming. We also support cryptocurrencies; you'll find more information about this in the video description. Thanks for watching and for your generous support. I'll see you next time.
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. This paper reveals a fundamental difference between how humans and machines learn. You are given a video game with no instructions, you start playing it, and the only information you get is a line of text when you successfully finish the game. That's it. So far so good; this is relatively easy to play, because the visual cues are quite clear. The pink blob looks like an adversary, and what the spikes do is also self-explanatory. This is easy to understand, so we can finish the game in less than a minute. Easy. Now, let's play this. Whoa! What is happening? Even empty space looks as if it were a solid tile. I am not sure if I can finish this version of the game, at least not in a minute, for sure. So, what is happening here is that some of the artwork of the objects has been masked out. As a result, this version of the game is much harder to play for humans. So far, this is hardly surprising, and if that were it, this wouldn't have been a very scientific experiment. However, this is not the case. So to proceed from this point, we will try to find out what makes humans learn so efficiently, not by changing everything at once, but by trying to change and measure only one variable at a time. So how about this version of the game? This is still manageable, since the environment remains the same; only the objects we interact with have been masked. Through trial and error, we can find out the mechanics of the game. What about reversing the semantics? Spikes now become tasty ice cream, and the shiny gold conceals an enemy that eats us. Very apt, I have to say. Again, with this, the problem suddenly became more difficult for humans, as we need some trial and error to find out the rules. After putting together several other masking strategies, they measured the amount of time, the number of deaths, and the interactions that were required to finish the level. I will draw your attention mainly to the blue lines, which show which variable causes how much degradation to the performance of humans. The main piece of insight is not only that these different visual cues throw off humans, but it tells us variable by variable, and also by how much. An important insight here is that highlighting important objects and visual consistency are key. So, what about the machines? How are learning algorithms affected? These are the baseline results. Adding masked semantics? Barely an issue. Masked object identities? This sounds quite hard, right? Barely an issue. Masked platforms and ladders? Barely an issue. This is a remarkable property of learning algorithms, as they don't only think in terms of visual cues, but in terms of mathematics and probabilities. Removing similarity information throws the machines off a bit, which is understandable, because the same objects may appear as if they were completely different. There is more analysis on this in the paper, so make sure to have a look. So, what are the conclusions here? Humans are remarkably good at reusing knowledge and at reading and understanding visual cues. However, if the visual cues become more cryptic, their performance drastically decreases. When machines start playing the game, at first they have no idea which character they control, how gravity works, how to defeat enemies, or that keys are required to open doors. However, they learn these tricky problems and games much more easily and quickly, because these mind-bending changes, as you remember, are barely an issue.
Note that you can play the original and the obfuscated versions on the author's website as well. Also note that we really have only scratched the surface here, the paper contains a lot more insights. So, it is the perfect time to nourish your mind with a paper, make sure to click it in the video description and give it a read. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. In this video series, we often see how these amazing new machine learning algorithms can make our lives easier, and fortunately, some of them are also useful for serious medical applications. Specifically, medical imaging. Medical imaging is commonly used in most healthcare systems, where an image of a chosen set of organs and tissues is made for a doctor to look at and decide whether medical intervention is required. The main issue is that the amount of diagnostic images out there in the wild increases at a staggering pace, and it makes it more and more infeasible for doctors to look at them all. But wait a minute: as more and more images are created, this also means that we have more training data for machine learning algorithms, so at the same time as doctors get more and more swamped, the AI should get better and better over time. These methods can process orders of magnitude more of these images than humans, and after that, the final decision is put back into the hands of the doctor, who can now focus more on the edge cases and prioritize which patients should be seen immediately. This work from scientists at DeepMind was trained on about 14,000 optical coherence tomography scans. This is the OCT label that you see on the left. These images are cross sections of the human retina. We first start out with this OCT scan, then the manual segmentation step follows, where a doctor marks up this image to show where the most relevant parts, like the retinal fluids or the elevations of retinal pigments, are. Before we proceed, let's stop here for a moment and look at some images of how the network can learn from the doctors and reproduce the segmentations by itself. Look at that, it's almost pixel perfect. This looks like science fiction. Now that we have the segmentation map, it is time to perform classification. This means that we look at this map and assign a probability to each possible condition that may be present. Finally, based on these, a final verdict is made on whether the patient needs to be urgently seen, or just a routine check, or perhaps no check at all is required. The algorithm also learns this classification step and creates these verdicts itself. And of course, the question naturally arises: how accurate is this? Well, let's look at the confusion matrices. The confusion matrix shows us how many of the urgent cases were correctly classified as urgent, how often they were misclassified as something else, and what that something else was. The same analysis is performed for all other classes. Here's how the retina specialist doctors did, and here is how the AI did. I'll leave it here for a few seconds for you to inspect. Really good. Here's also a different way of aggregating this data. The algorithm did significantly better than all of the optometrists and matched the performance of the number one retina specialist. I wouldn't believe any of these results if I didn't see these reports with my own eyes in the paper. An additional advantage of this technique is that it works on different kinds of imaging devices, and it is among the first methods that works with 3D data. Another plus that I really liked is that this was developed as a close collaboration with a top tier eye hospital in London to make sure that the results are as practical as possible. The paper contains a ton more information, so make sure to have a look. This was a herculean effort from the side of DeepMind, and the results are truly staggering.
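For reference, this is how a confusion matrix like the ones discussed above is tallied. The class names and toy labels below are assumptions standing in for the triage decisions mentioned in the text, not data from the paper.

```python
import numpy as np

classes = ["urgent", "semi-urgent", "routine", "observation"]  # assumed set
true = np.array([0, 0, 1, 2, 3, 2, 0, 1])   # toy ground-truth verdicts
pred = np.array([0, 1, 1, 2, 3, 2, 0, 0])   # toy model predictions

cm = np.zeros((4, 4), dtype=int)
for t, p in zip(true, pred):
    cm[t, p] += 1        # row: actual class, column: predicted class
print(cm)                # diagonal entries are correct classifications;
                         # off-diagonal ones show what got confused with what
```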
What a time to be alive. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Do you remember the amazing pix2pix algorithm from last year? It was able to perform image translation, which means that it could take a daytime image and translate it into a nighttime image, create maps from satellite images, or create photorealistic shoes from crude drawings. I remember that I almost fell off the chair when I first saw the results. But this new algorithm takes it up a notch and transforms these edge maps into human faces; not only that, but it also animates them in time. As you see here, it also takes into consideration the fact that the same edges may result in many different faces, and therefore it is also willing to give us more of these options. If I fell out of the chair for the still image version, I don't really know what the appropriate reaction would be to this. It can also take a crude map of labels, where each color corresponds to one object class, such as roads, cars or buildings, and it follows how our labels evolve in time and creates an animation out of it. We can also change the meaning of our labels easily; for instance, in the lower left, you see how the buildings are now suddenly transformed to trees. Or we can also change the trees to become buildings. Do you remember motion transfer from a couple of videos ago? It can do a similar variant of that too, and it even synthesizes the shadows around the character in a reasonably correct manner. As you see, the temporal coherence of this technique is second to none, which means that it remembers what it did with past images and doesn't do anything drastically different for the next frame, and therefore generates smoother videos. This is very apparent, especially when juxtaposed with the previous pix2pix method. So, there are three key differences from the previous technique to achieve this. One, the original architecture uses a generator network to create images, where there is also a separate discriminator network that judges its work and teaches it to do better. Instead, this work uses two discriminator neural networks: one checks whether the images look good one by one, and one more discriminator oversees whether the sequence of these images would pass as a video. This discriminator cracks down on the generator network if it creates sequences that are not temporally coherent, and this is why we have minimal flickering in the output videos. Fantastic idea! Two, to ease the training process, it also does it progressively, which means that the network is first faced with an easier version of the problem that progressively gets harder over time. If you have a look at the paper, you will see that the training is progressive both in terms of space and time. I love this idea too! Three, it also uses a flow map that describes the changes that took place since the previous frame. Note that the previous pix2pix algorithm was published in 2017, a little more than a year ago. I think that is a good taste of the pace of progress in machine learning research. Up to 2K resolution, 30 seconds of video, and the source code is also available. Congratulations, folks! This paper is something else! Thanks for watching and for your generous support, and I'll see you next time!
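The two-discriminator idea can be sketched compactly: one discriminator judges frames one by one, while the other judges short stacked clips, which is what punishes temporal flickering. The modules and the simplified adversarial loss below are hedged stand-ins, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Per-frame discriminator: is this single image plausible?
frame_d = nn.Sequential(nn.Conv2d(3, 8, 4, 2, 1), nn.Flatten(),
                        nn.LazyLinear(1))
# Clip discriminator: does this *sequence* of frames move like a video?
video_d = nn.Sequential(nn.Conv3d(3, 8, (3, 4, 4), (1, 2, 2), (1, 1, 1)),
                        nn.Flatten(), nn.LazyLinear(1))

fake_clip = torch.rand(1, 3, 8, 64, 64)     # (batch, rgb, time, h, w)
frame_score = frame_d(fake_clip[:, :, 0])   # judge one frame in isolation
clip_score = video_d(fake_clip)             # judge the whole sequence
gen_loss = -(frame_score.mean() + clip_score.mean())
# Minimizing gen_loss pushes a generator toward frames that look real
# individually *and* evolve smoothly in time, hence minimal flickering.
```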
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. When looking for illustrations for a presentation, most of the time I quickly find an appropriate photo on the internet; however, many of these photos are really low resolution. This often creates a weird situation where I have to think, okay, do I use the splotchier, lower resolution image that gets the point across, or take a high resolution, crisp image that is less educational? In case you're wondering, I encounter this problem for almost every single video I make for this channel. As you can surely tell, I am waiting for the day when super resolution becomes mainstream. Super resolution means that we have a low resolution image that lacks details, and we feed it to a computer program which hallucinates all the details onto it, creating a crisp, high resolution image. This way I could take my highly relevant blurry image, improve it, and use it in my videos. As adding details to images clearly requires a deep understanding of what is shown in these images, our seasoned fellow scholars immediately know that learning-based algorithms will be ideal for this task. While we are looking at some amazing results with this new technique, let's talk about the two key differences that this method introduces. One, it takes a fully progressive approach, which means that we don't immediately produce the highest resolution output we are looking for, but slowly leapfrog our way through intermediate steps, each of which is only slightly higher resolution than the input. This means that the final output is produced over several steps, where each problem is only a tiny bit harder than the previous one. This is often referred to as curriculum learning, and it not only increases the quality of the solution, but is also easier to train, as solving each intermediate step is only a little harder than the previous one. It is a bit like how students learn in school. First, the students are shown some easy introductory tasks to get a grasp of a problem and slowly work their way towards mastering a field by solving problems that gradually increase in difficulty. Two, now we can start playing with the thought of using a generative adversarial network. We talk a lot about this architecture in this series. This time, I will only note that training these is fraught with difficulties, so every bit of help we can get is more than welcome, and the role of curriculum learning is to help ease this process. Note that this research field is well explored and has a remarkable number of papers, so I was expecting a lot of comparisons against competing techniques. And when looking at the paper and the supplementary materials, boy, did I get it. Make sure to have a look at the paper; it contains a very exhaustive validation section, which reveals that if we measure the error of the solution in terms of human perception, it is only slightly lower quality than the best technique. However, this one is five times quicker, offering a really nice balance between quality and performance. So what about the actual numbers for the execution time? For instance, upsampling an image to increase its resolution to twice its original size takes less than a second, and we can go up to even eight times the original resolution, which also only takes four and a half seconds. The quality and the execution times indicate that we are again one step closer to mainstream super resolution. What a time to be alive. The source code of this project is also available.
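Here is a minimal sketch of the progressive structure: rather than jumping straight to eight times the resolution, the network upsamples in 2x stages, each a slightly harder problem than the last, and each intermediate output can receive its own loss during curriculum training. The per-stage module below is a placeholder, not the paper's network.

```python
import torch
import torch.nn as nn

class Stage(nn.Module):
    def __init__(self):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.refine = nn.Conv2d(3, 3, 3, padding=1)  # stand-in refinement
    def forward(self, x):
        return self.refine(self.up(x))

stages = nn.ModuleList([Stage(), Stage(), Stage()])  # 2x -> 4x -> 8x

x = torch.rand(1, 3, 32, 32)            # low-resolution input
outputs = []
for stage in stages:
    x = stage(x)
    outputs.append(x)                   # each intermediate result can be
                                        # supervised during training
print([o.shape[-1] for o in outputs])   # [64, 128, 256]
```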
Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Do you remember style transfer? Style transfer is a mostly AI-based technique where we take a photograph, put a painting next to it, and it applies the style of the painting to our photo. That was amazing. Also, do you remember pose estimation? This is a problem where we have a photograph or a video of someone, and the output is a skeleton that shows the current posture of this person. So, how about something that combines the power of pose estimation with the expressiveness of style transfer? For instance, this way we could take a video of a professional dancer, then record a video of our own, let's say, moderately beautiful moves, and then transfer the dancer's performance onto our own body in the video. Let's call it motion transfer. Have a look at these results. How cool is that? As you see, these output videos are quite smooth, and this is not by accident. It doesn't just come out like that. With this technique, temporal coherence is taken into consideration. This means that the algorithm knows what it has done a moment ago and will not do something wildly different, making these dance motions smooth and believable. This method uses a generative adversarial network, where we have a neural network for pose estimation, or in other words, for generating the skeleton from an image, and a generator network to create new footage when given a test subject and a new skeleton posture. These two neural networks battle each other and teach each other to distinguish and create more and more authentic footage over time. Some artifacts are still there, but note that this is among the first papers on this problem, and it is already doing incredibly well. This is fresh and experimental. Just the way I like it. Two follow-up papers down the line, and I am worried that we will barely be able to tell the difference from authentic footage. Make sure to have a look at the paper, where you will see how the pix2pix algorithm was also used for image generation, and there is a nice evaluation section as well. And now, let the age of AI-based dance videos begin. If you enjoyed this episode, please consider supporting us on Patreon, where you can pick up really cool perks like early access to these videos, voting on the order of future episodes, and more. We are available at patreon.com slash 2 minute papers, or just click the link in the video description. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. If we have an animation movie or a computer game with quadrupeds and we are yearning for really high quality, life-like animations, motion capture is often the go-to tool for that. Motion capture means that we put an actor, in our case a dog, in the studio, we ask it to perform sitting, trotting, pacing and jumping, record its motion, and transfer it onto our virtual character. This generally works quite well; however, there are many difficulties with this process. We will skip over the fact that an artist or engineer has to clean and label the recorded data, which is quite labor intensive, but there is a bigger problem. We have all these individual motion types at our disposal; however, a virtual character will also need to be able to transition between these motions in a smooth and natural manner. Saving all possible transitions between these moves is not feasible, so in an earlier work, we looked at a neural network-based technique to try to weave these motions together. At first sight, this looks great; however, have a look at these weird sliding motions that it produces. Do you see them? They look quite unnatural. This new method tries to address this problem but ends up offering much, much more than that. It requires only one hour of motion capture data, and we have only around 30 seconds of footage for jumping motions, which is basically next to nothing. And this technique can deal with unstructured data, meaning that it doesn't require manual labeling of the individual motion types, which saves a ton of work hours. Beyond that, as we control this character in the game, this technique also uses a prediction network to guess the next motion type and a gating network that helps blend together these different motion types. Both of these units are neural networks. On the right, you see the results with the new method compared to a standard neural network-based solution on the left. Make sure to pay special attention to the foot-sliding issues with the solution on the left, and note that the new method doesn't produce any of those. Now, these motions look great, but they all take place on a flat surface. You see here that this new technique excels at much more challenging landscapes as well. This technique is a total powerhouse, and I can only imagine how many work hours it will save for artists working in the industry. It is also scientifically interesting and quite practical, my favorite combination. It is also well-evaluated, so make sure to have a look at the paper for more details. Thanks for watching and for your generous support, and I'll see you next time.
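The gating idea can be sketched in a few lines: a small network looks at the current state and outputs blend weights, and the controller's weights become a convex combination of several expert weight sets, so transitions stay smooth as the blend changes. All dimensions below are invented for illustration; this mirrors the spirit of the approach, not its exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_experts, state_dim, out_dim = 4, 64, 32
gating = nn.Linear(state_dim, num_experts)                 # picks the blend
experts = nn.Parameter(torch.randn(num_experts, out_dim, state_dim))

def motion_step(state):
    alpha = F.softmax(gating(state), dim=-1)               # blend weights
    W = torch.einsum("be,eos->bos", alpha, experts)        # blended weights
    return torch.einsum("bos,bs->bo", W, state)            # controller output

state = torch.randn(2, state_dim)   # e.g. current pose, velocity, terrain
out = motion_step(state)            # smooth transitions come from smoothly
print(out.shape)                    # varying alpha; torch.Size([2, 32])
```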
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Image denoising is an area where we have a noisy image as an input and we wish to get a clear, noise-free image. Neural network-based solutions are amazing at this, because we can feed them a large amount of training data with noisy inputs and clear outputs. And if we do that, during the training process, the neural network will be able to learn the concept of noise, and when presented with a new, previously unseen noisy image, it will be able to clear it up. However, with light transport simulations, creating a noisy image means following the path of millions and millions of light rays, which can take up to hours per training sample. And we need thousands or potentially hundreds of thousands of these. There are also other cases where creating the clean images for the training set is not just expensive, but flat out impossible. Low light photography, astronomical imaging, or magnetic resonance imaging, MRI in short, are great examples of this. In these cases, we cannot use our neural networks simply because we cannot build such a training set, as we don't have access to the clear images. In this collaboration between NVIDIA, Aalto University and MIT, scientists came up with an insane idea. Let's try to train a neural network without clear images and use only noisy data. Normally, we would say that this is clearly impossible and end this research project. However, they show that under a suitable set of constraints, for instance, a reasonable assumption about the distribution of the noise, it is possible to restore noisy signals without ever seeing clean ones. This is an insane idea that actually works and can help us restore images with significant outlier content. Not only that, but it is also shown that this technique can do close to or just as well as other previously known techniques that have access to clean images. You can look at these images, which are contaminated with many different kinds of noise, like camera noise, noise from light transport simulations, MRI imaging, and images severely corrupted with a ton of random text. The usual limitations apply, in short, it of course cannot possibly recover content if we cut out a bigger region from our images. This severely hamstrung training process can be compared to a regular neural denoiser that has access to the clean images, and the differences are negligible most of the time. So how about that? We can teach a neural network to denoise without ever showing it the concept of denoising. Just the thought of this boggles my mind so much it keeps me up at night. This is such a remarkable concept. I hope there will soon be follow-up papers that extend this idea to other problems as well. If you enjoyed this episode and you feel that about 8 of these videos a month is worth a dollar, please consider supporting us on Patreon. We use these funds to make better videos for you, and a small portion is also used to fund research conferences. You can find us at patreon.com slash 2-minute papers and there is also a link to it in the video description. You know the drill, one dollar is almost nothing, but it keeps the papers coming. Thanks for watching and for your generous support and I'll see you next time.
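The core trick is compact enough to sketch. In the minimal, hypothetical PyTorch snippet below, the denoiser never sees a clean target, only a second, independently noisy observation of the same image; with zero-mean noise, the L2 loss is still minimized by the clean signal. The tiny network and synthetic data are placeholders, as the paper uses a large U-Net trained on real image pairs.

```python
import torch
import torch.nn as nn

# A tiny stand-in denoiser; the paper uses a much larger U-Net.
denoiser = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

# Toy underlying images, used here ONLY to fabricate noisy pairs;
# the loss below never touches them.
clean = torch.rand(16, 1, 64, 64)

for step in range(100):
    # Two independent noisy observations of the same underlying images.
    noisy_input  = clean + 0.1 * torch.randn_like(clean)
    noisy_target = clean + 0.1 * torch.randn_like(clean)
    loss = ((denoiser(noisy_input) - noisy_target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```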
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. How about some slow motion videos? If we would like to create a slow motion video and we don't own an expensive slow-mo camera, we can try to shoot a normal video and simply slow it down. This sounds good on paper, however, the more we slow it down, the more space we have between our individual frames, and at some point our video will feel more like a slideshow. To get around this problem, in a previous video, we discussed two basic techniques to fill in these missing frames. One was a naive technique called frame blending that basically computes the average of two images. In most cases, this doesn't help all that much, because it doesn't have an understanding of the motion that takes place in the video. The other one was optical flow. Now this one is much smarter, as it tries to estimate the kind of translational and rotational motions that take place in the video, and it typically does much better. However, the disadvantage of this is that it usually takes forever to compute and it often introduces visual artifacts. So now we are going to have a look at NVIDIA's results, and the main points of interest are always around the silhouettes of moving objects, especially around regions where the foreground and the background meet. Keep an eye out for these regions throughout this video. For instance, here is one example I found. Let me know in the comments section if you have found more. This technique builds on U-Net, a super fast convolutional neural network architecture that was originally used to segment biomedical images from limited training data. This neural network was trained on a bit over a thousand videos and computes multiple approximate optical flows and combines them in a way that tries to minimize artifacts. As you see in these side-by-side comparisons, it works amazingly well. Some artifacts still remain but are often hard to catch. And this architecture is blazing fast. Not real-time yet, but creating a few tens of these additional frames takes only a few seconds. The quality of the results is also evaluated and compared to other works in the paper, so make sure to have a look. As the current commercially available tools are super slow and take forever, I cannot wait to be able to use this technique to make some more amazing slow motion footage for you Fellow Scholars. Thanks for watching and for your generous support and I'll see you next time.
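To give a feel for why optical flow beats naive frame blending, here is a small, hypothetical sketch of flow-based interpolation: both neighboring frames are warped toward the intermediate point in time along the flow field and then blended. This is heavily simplified, since real methods such as this one also estimate the flows with a network and predict occlusion masks to weight the blend, and all names here are made up.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(frame, flow, t):
    """Warp a grayscale frame along a flow field, scaled by time t.

    frame: (H, W) image; flow: (H, W, 2) per-pixel (dy, dx) motion
    toward the neighboring frame; t in [0, 1] is the intermediate time.
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    coords = np.stack([ys + t * flow[..., 0], xs + t * flow[..., 1]])
    return map_coordinates(frame, coords, order=1, mode="nearest")

def interpolate(f0, f1, flow01, t=0.5):
    # Blend a warp of frame 0 toward t with a warp of frame 1 back
    # toward t; real methods also weight this blend by occlusion masks.
    return (1 - t) * warp(f0, flow01, t) + t * warp(f1, -flow01, 1 - t)

f0 = np.random.rand(64, 64)
f1 = np.random.rand(64, 64)
flow01 = np.zeros((64, 64, 2))  # per-pixel (dy, dx) motion, f0 -> f1
middle = interpolate(f0, f1, flow01)
```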
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. Throughout this series, we have seen many impressive applications of artificial intelligence. These techniques are capable of learning the piano from the masters of the past, beating formidable teams in complex games like Dota 2, performing well in the game Sonic the Hedgehog, or helping us revive and impersonate famous actors who are not with us anymore. However, what is often not spoken about is how narrow or how general these AI programs are. A narrow AI means an agent that can perform one task really well, but cannot perform other, potentially easier tasks. The Holy Grail of machine learning research is a general AI that is capable of obtaining new knowledge by itself through abstract reasoning. This is similar to how humans learn, and to tackle this problem, scientists at DeepMind created a program that is able to generate a large number of problems that test abstract reasoning capabilities. They are inspired by human IQ tests with all these questions about sizes, colors, and progressions. They design the training process in a way that the algorithm is given training data on the progression of colors, but it is never shown similar progression examples that involve object sizes. The concept is the same, but the visual expression of the progression is different. A human easily understands the difference, but teaching abstract reasoning like this to a computer sounds almost impossible. However, now we have a tool that can create many of these questions and the correct answers to them. And I will note that some of these are not as easy as many people would expect. For instance, a vertical number progression is very easy to spot, but have a good look at these ones. Not so immediately apparent, right? Going back to being able to generate lots and lots of data, the black belt Fellow Scholars know exactly what this means. This means that we can train a neural network to perform this task. Unfortunately, existing techniques and architectures perform quite poorly. Despite the fact that we have a ton of training data, they could only get 22 to 42% of the answers right. However, these networks are amazing at doing other things like writing novels or image classification. Therefore this means that their generalization capabilities are not too great when we go outside their core domain. This new technique goes by the name Wild Relation Network and is trained in a way that encourages reasoning. It is also designed in a way that it not only outputs a guess for the result, but also tries to provide a reason for it, which interestingly further improved the accuracy of the network. And what is this accuracy we are talking about? It finds the correct solution 62.6% of the time. But it gets better, because this result was measured in the presence of distractor objects like these annoying lines and circles. This is quite confusing even for humans, so a result of about 60% is quite remarkable. And it gets even better, because if we don't use these distractions it is correct 78% of the time. Wow! This is indeed a step towards teaching an AI how to reason, and as the authors made this dataset publicly available for everyone, I expect a reasonable amount of research works to appear in this area in the near future. Who knows, perhaps even in the next few months. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. After having a look at OpenAI's effort to master the Dota 2 game, of course, we all know that scientists at DeepMind are also hard at work on an AI that beats humans in the Capture the Flag game mode of Quake 3. Quake 3 Arena is an iconic first-person shooter game, and Capture the Flag is a fun game mode where each team tries to take the other team's flag and carry it to their own base while protecting their own. This game mode requires good aiming skills, map presence, reading the opponents well, and tons of strategy, a nightmare situation for any kind of AI. Not only that, but in this version, the map changes from game to game, therefore the AI has to learn general concepts and be able to pull them off in a variety of different, previously unseen conditions. This doesn't seem to be within the realm of possibilities to pull off. The minimaps here always show the location of the players, each color-coded blue or red to indicate their teams. Much like humans, these AI agents learned by looking at the video output of the game and have never been told anything about the game or what the rules are. These scientists at DeepMind ran a tournament with 40 human players who were matched up against these agents randomly, both as opponents and teammates. In this tournament, a team of average human players had a win probability of 43%, whereas a team of strong players won slightly more than half, 52%, of their games. And now hold on to your papers, because the agents were able to win 74% of their games. So the difference between the average and the strong human players' win rate is 9%. And the difference between the strongest humans and the AI is more than twice that margin, 22%. This is insanity. And as you see, it barely matters what the size or the layout of the map is or how many teammates there are, the AI's win rate is always remarkably high. These agents showcase many human-like behaviors, such as staying at their own base to defend it, camping within the opponent's base, or following teammates. This builds on a new architecture by the name For The Win, FTW in short. Good name, folks. Instead of training one agent, it uses a population of agents that train and evolve from each other to make sure that a diverse set of playstyles is discovered. This uses recurrent neural networks. These are neural network variants that are able to learn and produce sequences of data. Here, two of these are used, a fast and a slow one, that operate on different time scales but share a memory module. This means that one of them has a very accurate look at the near past, and the other one has a coarser look that, in return, can reach further back into the past. If these two work together correctly, decisions can be made that are both good locally at this point in time and globally to maximize the probability of winning the whole game. This is really huge, because this algorithm can perform long-term planning, which is one of the key reasons why many difficult games and tasks remain unsolved. Well, as it seems now, not for long. An additional challenge is that the game score is not necessarily subject to maximization like in most games, but there is a mapping from the scores into an internal reward, which means that the algorithm has to be able to predict its own progress towards winning.
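As a very loose illustration of the fast and slow recurrent cores sharing a memory module, here is a hypothetical toy sketch. The real architecture is considerably more involved, so treat this only as a picture of the two-timescale idea; all sizes and names are invented.

```python
import torch
import torch.nn as nn

class TwoTimescaleCore(nn.Module):
    """Toy fast/slow recurrent core with a shared memory vector."""
    def __init__(self, obs_dim=32, hidden=64):
        super().__init__()
        self.hidden = hidden
        self.fast = nn.GRUCell(obs_dim + hidden, hidden)
        self.slow = nn.GRUCell(hidden, hidden)

    def forward(self, observations, slow_period=4):
        h_fast = torch.zeros(1, self.hidden)
        h_slow = torch.zeros(1, self.hidden)  # the shared, slowly updated memory
        for t, obs in enumerate(observations):
            # The fast core runs every step and reads the slow memory.
            h_fast = self.fast(torch.cat([obs, h_slow], dim=1), h_fast)
            # The slow core updates only every few steps, summarizing
            # the fast core's state for longer-term planning.
            if t % slow_period == 0:
                h_slow = self.slow(h_fast, h_slow)
        return h_fast, h_slow

core = TwoTimescaleCore()
obs_seq = [torch.rand(1, 32) for _ in range(16)]
fast_state, slow_state = core(obs_seq)
```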
And note that even though Quake III and Capture the Flag are an excellent way to demonstrate the capabilities of this algorithm, this architecture can be generalized to other problems. I am going to give you a few more tidbits that I have found super interesting, but before that, if you are enjoying this episode and would like to pick up some cool perks like early access, deciding the topic of future episodes or getting your name listed in the video description as a key supporter, why not support the show on Patreon. With this, you can also help us make better videos in the future. You can find us at patreon.com slash 2 minute papers, and we also support Bitcoin and other cryptocurrencies. The addresses are available in the video description. And now, onwards to the cool tidbits. A human plus agent team has been able to defeat an agent plus agent team 5% of the time, indicating that these AIs are able to coordinate and play together with anyone they are given. I get goosebumps from this. Love it. The reaction time and accuracy of the agents is better than that of humans, but not nearly as perfect as many people would think. However, they outclass humans even if we artificially reduce their accuracy and reaction times. In another experiment, two agents were paired up against two professional game tester humans who could freely communicate and train against the same agents for 12 hours to see if they could learn their patterns and force them to make mistakes. Even with this, the humans won only 25% of these games. Given the other numbers we have, it is very likely that this unfair advantage made no difference whatsoever. How about that? If there are any more questions, make sure to have a look at the paper that describes every possible tidbit you can possibly imagine. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. This is a mind-boggling new piece of work from scientists at Google Brain on how to hack and reprogram neural networks to make them perform any task we want. A neural network is given by a prescribed number of layers, neurons within these layers, and weights. Or in other words, the list of conditions under which these neurons will fire. By choosing the weights appropriately, we can make the neural network perform a large variety of tasks, for instance, to tell us what an input image depicts, or predict new camera viewpoints when looking at a virtual scene. So this means that by changing the weights of the neural network, we can reprogram it to perform something completely different, for instance, solve a CAPTCHA for us. That is a really cool feature. This work reveals a new kind of vulnerability by performing this kind of reprogramming of neural networks in an adversarial manner, forcing them to perform tasks that they were originally not intended to do. The network can perform new tasks that it has never done before, and these tasks are chosen by the adversary. So how do adversarial attacks work in general? What does this mean? Let's have a look at a classifier. These neural networks are trained on a given, already existing dataset. This means that they look at a lot of images of buses, and from these, they learn the most important features that are common across buses. Then, when we give them a new, previously unseen image of a bus, they will now be able to identify whether we are seeing a bus or an ostrich. A good example of an adversarial attack is when we present such a classifier with not an image of a bus, but a bus plus some carefully crafted noise that is barely perceptible, which forces the neural network to misclassify it as an ostrich. And in this new work, we are not only interested in forcing the neural network to make a mistake, but we want to make it commit exactly the kind of mistake we want. That sounds awesome, but also quite nebulous. So let's have a look at an example. Here, we are trying to reprogram an image classifier to count the number of squares in our images. Step number one, we create a mapping between the classifier's original labels and our desired labels. Initially, this network was made to identify animals like sharks, hens, and ostriches. Now, we seek to get this network to count the number of squares in our images, so we make an appropriate mapping between their domain and our domain. And then, we present the neural network with our images. These images are basically noise and blocks, where the goal is to create them in a way that coaxes the neurons within the neural network into performing our desired task. The neural network then says tiger shark and ostrich, which, when mapped to our domain, mean four and 10 squares respectively, which is exactly the answer we were looking for. Now, as you see, the attack is not subtle at all, but it doesn't need to be. Quoting the paper, the attack does not need to be imperceptible to humans or even subtle in order to be considered a success. Potential consequences of adversarial reprogramming include theft of computational resources from public-facing services and repurposing of AI-driven assistants into spies or spam bots. As you see, it is of paramount importance that we talk about AI safety within the series, and my quest is to make sure that everyone is vigilant now that tools like this exist.
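To make the mechanics of such an attack more tangible, here is a minimal, hypothetical sketch: a frozen stand-in classifier, a learned "program" of pixels surrounding a small pasted-in task image, and a label remapping where the first eleven classifier labels stand for square counts. Only the program tensor is optimized; the victim network is never modified. Everything here, from the toy victim to the data, is invented for illustration.

```python
import torch
import torch.nn as nn

# Stand-in for a frozen, pretrained classifier (e.g., an ImageNet model).
victim = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 1000))
for p in victim.parameters():
    p.requires_grad_(False)

# The adversarial "program": a learned frame of pixels around our
# small task image. Only this tensor is optimized.
program = torch.zeros(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([program], lr=0.05)

def embed(task_img):
    # Paste the 32x32 task input into the center of the program canvas.
    canvas = torch.tanh(program).repeat(task_img.size(0), 1, 1, 1).clone()
    canvas[:, :, 96:128, 96:128] = task_img
    return canvas

# Hypothetical mapping: classifier labels 0..10 stand for 0..10 squares.
task_imgs = torch.rand(8, 3, 32, 32)        # toy "images of squares"
square_counts = torch.randint(0, 11, (8,))  # ground-truth counts

for step in range(50):
    logits = victim(embed(task_imgs))
    loss = nn.functional.cross_entropy(logits[:, :11], square_counts)
    opt.zero_grad()
    loss.backward()
    opt.step()
```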
Thank you so much for coming along on this journey, and if you're enjoying it, make sure to subscribe and hit the bell icon to never miss a future episode, some of which will be on follow-up papers on this super interesting topic. Thanks for watching, and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, we will listen to a new AI from DeepMind that is capable of creating beautiful piano music. Because there are many algorithms that do that, to put things into perspective, let's talk about the two key differentiating factors that set this method apart from previously existing techniques. One, music is typically learned from high-level representations, such as the score or MIDI data. This is a precise representation of what needs to be played, but it doesn't tell us how to play it. These small nuances are what make the music come alive, and this is exactly what is missing from most of the synthesis techniques. This new method is able to learn these structures and generates not MIDI signals, but raw audio waveforms. And two, it is better at retaining stylistic consistency. Most previous techniques create music that is consistent on a shorter time scale, but do not take into consideration what was played 30 seconds ago, and therefore they lack the high-level structure that is the hallmark of quality songwriting. However, this new method shows stylistic consistency over long time periods. Let's give it a quick listen and talk about the architecture of this learning algorithm after that. While we listen, I'll show you the composers it has learned from to produce this. I have never heard any AI-generated music before with such articulation, and the harmonies are also absolutely amazing. Truly stunning results. It uses an architecture that goes by the name autoregressive discrete autoencoder. This contains an encoder module that takes a raw audio waveform and compresses it down into an internal representation, and a decoder part that is responsible for reconstructing the raw audio from this internal representation. Both of them are neural networks. The autoregressive part means that the algorithm looks at previous time steps in the learned audio signals when producing new notes, and is implemented in the decoder module. Essentially, this is what gives the algorithm longer-term memory to remember what it played earlier. As you have seen the dataset the algorithm learned from while the music was playing, I am also really curious how we can exert artistic control over the output by changing the dataset. Essentially, you can likely change what the student learns by changing the textbooks used to teach them. For now, let's marvel at one more sound sample. This is already incredible and I can only imagine what we will be able to do not 10 years from now, just a year from now. Thanks for watching and for your generous support and I'll see you next time.
Dear Fellow Scholars, this is two minute papers with Károly Zsolnai-Fehér. You know that I am always excited to tell you about news where AI players manage to beat humans at more and more complex games. Today we are going to talk about Dota 2, which is a multiplayer online battle arena game with a huge cult following and world championship events with a prize pool of over 40 million dollars. This is not just some game, and just to demonstrate how competitive it is and how quickly it is growing, last time we talked about this in Two Minute Papers episode 180, where an AI beat some of the best players of the game in a limited one versus one setting, and the prize pool was 20 million dollars back then. This was a huge milestone, as this game requires long-term strategic planning, has incomplete information and a high-dimensional continuous action space, which is a classical nightmare situation for any AI. Now, the next milestone was to defeat a human team in the full 5 vs 5 game, and I promised to report back when there is something new on this project. So here we go. If you look through the forums and our YouTube comments, it is generally believed that this is so complex that it would never ever happen. I would agree that the search space here is stupendously large and the problem is notoriously difficult, but whoever thinks that this will never be solved has clearly not been watching enough Two Minute Papers. Now you better hold on to your papers right away, because since this video dropped 10 months ago in August 2017, the AI has played 180 years' worth of gameplay every single day. 80% of these games it played against itself and 20% against its past self, and even though 5 of these bots are supposed to work together as a team, there is no explicit communication channel between them. And now it is ready to play 5 vs 5 matches. Some limitations still apply, but since then the AI was able to get a firm understanding of the importance of team fighting, predicting the outcome of future actions and encounters, ganking, or in other words, ambushing unsuspecting opponents, and many other important pieces of the game. The May 15th version of the AI was evenly matched against OpenAI's in-house team, which is a formidable result, and I find it really amusing that these scientists were beaten by their own algorithm. This is however not a world class Dota 2 team, and the crazy part is that the next version of the AI was tested three weeks later, and it not only beat the in-house team easily but also defeated several other teams and a semi-professional team as well. It is often incorrectly said on several forums that these algorithms defeat humans because they can click faster, so I will note that these bots perform about 150 to 170 actions per minute, which is approximately in line with an intermediate human player, and it is also to be noted that Dota 2 is not that sensitive to this metric. More clicking does not really mean more winning here at all. The human players were also able to train with an earlier version of this AI. There will be an upcoming event on July 28th where these bots will challenge a team of top players, so stay tuned for some more updates on this. There is no paper yet, but I have put a link to a blog post and a full video in the description, and it is a gold mine of information and was such a joy to read through. So what do you think? Who will win, and is a 5 vs 5 game in Dota 2 more complex than playing StarCraft 2?
If you wish to hear more about this, please consider helping us tell this story to more people and convert them into Fellow Scholars by supporting the series through Patreon. As always, we also accept Bitcoin, Ethereum and Litecoin, and the addresses are in the video description. And if you are now in the mood to learn some more about Dota 2, I recommend taking a look at Day9's channel. I have put a link to a relevant series in the video description. Highly recommend it. So there you go, a fresh Two Minute Papers episode that is not two minutes and is not about a paper. And yet, love it. Thanks for watching and for your generous support, I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. This is a contest by OpenAI where a bunch of AIs compete to decide who has the best transfer learning capabilities. Transfer learning means that the training and the testing environment differ significantly, therefore only the AIs that learn general concepts prevail, and the ones that try to get by with memorizing things will quickly fall. In this experiment, these programs start playing Sonic the Hedgehog and are given a bunch of levels to train on. However, like in a good test at school, the levels for the final evaluation are kept secret. So, the goal is that only high quality, general algorithms prevail, and we cannot cheat our way through the program, as we don't know what the final exam will entail. We only know that we have to make the most of the training materials to pass. Sonic is a legendary platform game where we have to blaze through levels by avoiding obstacles and traps, often while traveling at the speed of sound. Here you can see the winning submission taking the exam on a previously unseen level. After one minute of training, as expected, the AI started to explore the controls, but is still quite inept and does not make any meaningful progress on the level. After 30 minutes, things look significantly better, as the AI now understands the basics of the game. And look here, almost got up there, and got it. It is clearly making progress as it collects some coins, defeats enemies, goes through the loop, and gets stuck, seemingly because it doesn't yet know how being underwater changes how high it can jump. This is quite a bit of a special case, so we are getting there. After only 60 to 120 minutes, it became a competent player and was able to finish this challenging map with only a few mistakes. Really impressive transfer learning in just about an hour. Note that the algorithm has never seen this level before. Here you see a really cool visualization of three different AIs' progress on the map, where the red dots indicate the movement of the character for earlier episodes and the bluer colors show the progress at later stages of the training. I could spend all day staring at these. Videos are available for many, many submissions, some of which even opened up their source code, and there are a few high quality write-ups as well, so make sure to have a look. There's gonna be lots of fun to be had there. This competition gives us something that is scientifically interesting, practical and super fun at the same time. What more could you possibly want? Huge thumbs up for the OpenAI team for organizing this, and of course, congratulations to the participants. And now you see that we have a job where we train computers to play video games and we are even paid for it. What a time to be alive. By the way, if you wish to unleash the inner scholar in you, Two Minute Papers shirts are available in many sizes and colors. We have mugs too. The links are available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. Fluid and smoke simulations are widely used in computer games and in the movie industry and are capable of creating absolutely stunning video footage. We can very quickly put together a coarse simulation and run it cheaply, however, the more turbulent motion we are trying to simulate, the more resources and time it will take. If we wish to create some footage with the amount of visual quality that you see here, well, if you think the several hour computation time for light transport algorithms was too much, better hold on to your papers, because it will take not hours, but often from days to weeks to compute. And to ease the computation time of such simulations, this is a technique that performs style transfer, but this time not for paintings, but for fluid and smoke simulations. How cool is that? It takes the low resolution source and detailed target footage, dices them up into small patches, and borrows from image and texture synthesis techniques to create a higher resolution version of our input simulation. The challenge of this technique is that we cannot just put more swirly motion on top of our velocity fields, because this piece of fluid has to obey the laws of physics to look natural. Also, we have to make sure that there is not too much variation from patch to patch, so we have to perform some sort of smoothing on the boundaries of these patches. Our smoke plumes also have to interact with obstacles, which is anything but trivial to do well. Have a look at the ground truth results from the high resolution simulation. This is the one that would take a long time to compute. There are clearly deviations, but given how coarse the input footage was, I'll take this any day of the week. We can now look forward to seeing even higher quality smoke and fluids in the animation movies of the near future. There was a similar technique by the name Wavelet Turbulence, which is one of my all-time favorite papers and has been showcased in the very first Two Minute Papers episode. This is what it looked like, and we are now celebrating its 10th anniversary. Imagine what a bomb this was 10 years ago, and you know what? It is still going strong. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. This is a recent DeepMind paper on neural rendering, where they taught a learning-based technique to see things the way humans do. What's more, it has an understanding of geometry, viewpoints, shadows, occlusion, even self-shadowing and self-occlusion and many other difficult concepts. So what does this do and how does it work exactly? It contains a representation and a generation network. The representation network takes a bunch of observations, a few screenshots, if you will, and then condenses this visual sensory data into a concise description that contains the underlying information in the scene. These observations are made from only a handful of camera positions and viewpoints. The neural rendering, or seeing part, means that we choose a position and viewpoint that the algorithm hasn't seen yet and ask the generation network to create an appropriate image that matches reality. Now we have to hold on to our papers for a moment and understand why this is such a crazy idea. Computer graphics researchers work so hard on creating similar rendering and light simulation programs that take tons of computational power to compute all aspects of light transport and then, in return, give us a beautiful image. If we slightly change the camera angles, we have to redo most of the same computations, whereas the learning-based algorithm may just say, don't worry, I got this. And from previous experience, it guesses the remainder of the information perfectly. I love it. And what's more, by leaning on what these two networks learned, it generalizes so well that it can even deal with previously unobserved scenes. If you remember, I have also worked on a neural renderer for about 3,000 hours and created an AI that predicts photorealistic images perfectly. The difference was that this one took a fixed camera viewpoint and predicted what the object would look like if we started changing its material properties. I'd love to see a possible combination of these two works. Oh my, super excited for this. There's a link in the video description to both of these works. Can you think of other possible uses for these techniques? Let me know in the comments section. And if you wish to decide the order of future episodes or get your name listed as a key supporter for the series, hop over to our Patreon page and pick up some cool perks. We use these funds to improve the series and empower other research projects and conferences. As this video series is on the cutting edge of technology, of course, we also support cryptocurrencies like Bitcoin, Ethereum and Litecoin. The addresses are available in the video description. Thanks for watching and for your generous support and I'll see you next time.
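A minimal sketch of the representation side of this idea might look as follows, assuming a toy scene code of 128 numbers and a 7-dimensional camera pose. The important property, following the paper's recipe at a very high level, is that per-observation embeddings are summed, so the scene code accepts any number of views in any order. Everything else here is a made-up stand-in for the real, much larger networks.

```python
import torch
import torch.nn as nn

class ToyRepresentationNet(nn.Module):
    """Condenses (image, camera pose) observations into one scene code."""
    def __init__(self, code_dim=128):
        super().__init__()
        self.image_enc = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.merge = nn.Linear(32 + 7, code_dim)  # 7-D camera pose assumed

    def forward(self, images, poses):
        feats = self.image_enc(images)                       # (views, 32)
        per_view = self.merge(torch.cat([feats, poses], 1))  # (views, code)
        return per_view.sum(dim=0)  # summing makes it order-independent

rep = ToyRepresentationNet()
scene_code = rep(torch.rand(3, 3, 64, 64), torch.rand(3, 7))
# A generation network would now take scene_code plus a *new* camera
# pose and render the unseen viewpoint.
```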
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. We are over 260 episodes into the series, but believe it or not, we haven't had a single episode on virtual reality. So at this point, you probably know that this paper has to be really good. The promise of virtual reality is indeed truly incredible. Doctors could be trained to perform surgery in a virtual environment, or even perform surgery from afar, we could enhance military training by putting soldiers into better flight simulators, expose astronauts to virtual zero-gravity simulations, you name it, and of course, games. As you see, virtual reality, or VR in short, is on the rise these days, and there is a lot of research going on about how to make more killer applications for it. The basics are simple. We put on a VR headset, walk around in our room and perform gestures, and these will be performed in a virtual world by our avatar. Sounds super fun, right? Well, yes, however, we have this headset on, and we don't really see our surroundings within the room, which makes it easy to bump into objects, or smash the controller into a wall, which is exactly what I did in the MVDLAB in Switzerland not so long ago. My greetings to all the kind people there, and sorry folks. So, what could be a possible solution? Creating virtual worlds with smaller scales? That kind of defeats the purpose, doesn't it? There has to be a better solution. So how about redirection? Redirection is a simple concept that changes our movement in the virtual world, so it deviates from our real path in the room in a way that both lets us explore the virtual world well, and not bump into walls and objects in the meantime. Most existing techniques out there either don't do redirection and make us bump into objects and walls within our room, or they do redirection at the cost of introducing distortions and other disturbing changes into the virtual environment. This is not easy to perform well, because it has to feel natural, even though the changes we apply to the path deviate from what is natural. Here you can see how the blue and orange lines deviate, which means that the algorithm is at work. With this, we can wander about in a huge and majestic virtual landscape, or a cramped bar, even when being confined to a small physical room. Loving the idea. This technique takes into consideration even other moving players in the room and dynamically remaps our virtual paths to make sure we don't bump into them. There is a lot more in the paper that describes how the whole method adapts to human perception. Papers like this make me really happy, because there are thousands of papers in the domain of human perception within computer graphics, many of which will now see quite a bit of practical use. VR is going to be a huge enabler for this area. Thanks for watching and for your generous support, and I'll see you next time.
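The core of redirection can be sketched in a few lines: each real-world step is slightly rotated and scaled before it is applied in the virtual world, which steers the user along a curved real path while the virtual path stays straight. The gains below are illustrative placeholders; practical systems choose them dynamically and keep them under human detection thresholds, which is exactly what the perception-aware part of this method is about.

```python
import numpy as np

def redirect(real_step, rotation_gain_deg=1.5, translation_gain=1.1):
    """Map one real-world step (dx, dy) to a virtual-world step.

    A small injected rotation plus a slight translation scaling per
    step accumulates into a curved real path while the virtual path
    stays straight. These gain values are made up for illustration.
    """
    theta = np.deg2rad(rotation_gain_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return translation_gain * (rot @ np.asarray(real_step))

virtual_step = redirect([0.0, 0.5])  # a half-meter step forward
```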
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. As facial reenactment videos are improving at a rapid pace, it is getting easier and easier to create video impersonations of other people by transferring our gestures onto their faces. We have recently discussed a technique that is able to localize the modified regions within these videos, however, this technique was limited to human facial reenactment. That is great, but what about the more general case with manipulated photos? Well, do not worry for a second, because this new learning-based algorithm can look at any image and highlight the regions that were tampered with. It can detect image splicing, which means that we take a part of a different image and add it to this one. Or, copying an object and pasting it to the image elsewhere. Or, removing an object from a photo and filling in the hole with meaningful information harvested from the image. This, we also refer to as image inpainting, and this is something that we also use often to edit our thumbnail images that you see here on YouTube. Believe it or not, it can detect all of these cases. And it uses a two-stream convolutional neural network to accomplish this. So what does this mean exactly? This means a learning algorithm that looks at one, the color data of the image, to try to find unnatural contrast changes along edges and silhouettes, and two, the noise information within the image as well, and sees how they relate to each other. Typically, if the image has been tampered with, either the noise or the color data is disturbed, or it may be that they look good one by one, but the relation of the two has changed. The algorithm is able to detect these anomalies too. As many of the images we see on the internet are either resized or compressed or both, it is of utmost importance that the algorithm does not look at compression artifacts and think that the image has been tampered with. This is something that even humans struggle with on a regular basis, and this is luckily not the case with this algorithm. This is great, because smart attackers may try to conceal their mistakes by recompressing an image and thereby adding more artifacts to it. It's not going to fool this algorithm. However, as you Fellow Scholars pointed out in the comments of a previous episode, if we have a neural network that is able to distinguish forged images, with a little modification we can perhaps turn it around and use it as a discriminator to help training a neural network that produces better forgeries. Hmm, what do you think about that? It is of utmost importance that we inform the public that these tools exist. If you wish to hear more about this topic and you think that a bunch of videos like this a month is worth a dollar, please consider supporting us on Patreon. You know the drill, a dollar a month is almost nothing, but it keeps the papers coming. Also, for the price of a coffee, you get exclusive early access to every new episode we release, and there are even more perks on our Patreon page, patreon.com slash two minute papers. We also support cryptocurrencies like Bitcoin, Ethereum and Litecoin. The addresses are available in the video description. With your help, we can make better videos in the future. Thanks for watching and for your generous support, and I'll see you next time.
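Here is a hypothetical sketch of the two-stream idea: one stream sees the image colors, the other sees a noise residual extracted with a fixed high-pass filter, and their features are concatenated for a downstream detection head. The single SRM-style kernel below is one common choice; the actual paper uses a small bank of such filters and a full detection architecture, so treat this strictly as an illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# One SRM-style high-pass kernel. It suppresses image content and
# keeps the local noise residual, which is what the noise stream
# analyzes for tampering traces.
srm = torch.tensor([[-1.,  2., -2.,  2., -1.],
                    [ 2., -6.,  8., -6.,  2.],
                    [-2.,  8., -12., 8., -2.],
                    [ 2., -6.,  8., -6.,  2.],
                    [-1.,  2., -2.,  2., -1.]]) / 12.0
srm = srm.view(1, 1, 5, 5).repeat(3, 1, 1, 1)  # apply per RGB channel

rgb_stream   = nn.Conv2d(3, 16, 3, padding=1)  # learns from colors/edges
noise_stream = nn.Conv2d(3, 16, 3, padding=1)  # learns from residuals

image = torch.rand(1, 3, 256, 256)
residual = F.conv2d(image, srm, padding=2, groups=3)  # fixed, not learned
features = torch.cat([rgb_stream(image), noise_stream(residual)], dim=1)
# A detection head would now predict tampered regions from `features`.
```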
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. As animation movies and computer game graphics become more and more realistic, they draw us more and more into their own worlds. However, when we see car paint, wax, minerals, paintings and similar materials, we often feel that something is not right there, and the illusion quickly crumbles. This is such a peculiar collection of materials, so what is the common denominator between them? Normally, to create these beautiful images, we use programs that create millions and millions of rays of light and simulate how they bounce off of the objects within the scene. However, most of these programs bounce these rays off of the surface of these objects, where in reality there are many sophisticated multi-layered materials with all kinds of coatings and varnishes. Such a simple surface model is not adequate to model these multiple layers. This new technique is able to simulate not only these surface interactions, but how light is scattered, transmitted and absorbed within these layers, enabling us to create even more beautiful images with more sophisticated materials. We can envision new material models with any number of layers, and it will be able to handle them. However, I left the best part for last. What is even cooler is that it takes advantage of the regularity of the data and builds a statistical model that approximates what typically happens with our light rays within these layers. What this results in is a real-time technique that still remains accurate. This is not normal. This used to take hours. This is insanity. And the whole paper was written by only one author, Laurent Belcour, and was accepted to the most prestigious research venue in computer graphics, so huge congrats to Laurent for accomplishing this. If you would like to learn more about light transport, I am holding a master-level course on it at the Technical University of Vienna. This course used to take place behind closed doors, but I feel that the teachings shouldn't only be available for the 20-30 people who can afford a university education, but they should be available for everyone. So we recorded the entirety of the course and it is now available for everyone, free of charge. If you are interested, have a look at the video description to watch them. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. With the recent ascendancy of several new AI-based techniques for human facial reenactment, we are now able to create videos where we transfer our gestures onto famous actors or politicians and impersonate them. Clearly, as this only needs a few minutes of video as training data from the target, this could be super useful for animating photorealistic characters for video games and movies, reviving legendary actors who are not with us anymore, and much more. And understandably, some are worried about the social implications of such a powerful tool. In other words, if there are tools to create forgery, there should be tools to detect forgery, right? If we can train an AI to impersonate, why not train another AI to detect impersonation? This has to be an arms race. However, this is no easy task, to say the least. As an example, look here. Some of these faces are real, some are fake. What do you think? Which is which? I will have to admit, my guesses weren't all that great. But what about you? Let me know in the comments section. Compression is also an issue. Since all videos you see here on YouTube are compressed in some way to reduce file size, some of the artifacts that appear may easily throw off not only an AI, but a human as well. I bet there will be many completely authentic videos that will be thought of as fakes by humans in the near future. So how do we solve these problems? First, to obtain a neural network-based solution, we need a large dataset to train it on. This paper contains a useful dataset with over a thousand videos that we can use to train such a neural network. These records contain pairs of original and manipulated videos, along with the input footage of the gestures that were transferred. After the training step, the algorithm will be able to pick up on the smallest changes around the face and tell forged footage from real footage, even in cases where we humans are unable to do that. This is really amazing. These green-to-red colors showcase regions that the AI thinks were tampered with. And it is correct. Interestingly, this can not only identify regions that are forgeries, but it can also improve these forgeries too. I wonder if it can detect footage that it has improved itself. What do you think? Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Earlier, we talked about an amazing technique where the inputs were a source video of ourselves and a target actor. And the output was a reenactment, in other words, a video of this target actor with our facial gestures. This requires only a few minutes of video from the target, which is usually already available on the internet. Essentially, we can impersonate other people, at least for one video. A key part of this new technique is that it extracts additional data, such as poses and eye positions, both from the source and target videos, and uses this data for the reconstruction. As opposed to the original Face2Face technique from two years ago, which was already mind-blowing, you see here that this results in a new learning-based method that supports the reenactment of eyebrows and blinking, changing the background, plus head and gaze positioning as well. So far, this would still be similar to a non-learning-based technique we've seen a few episodes ago. And now, hold onto your papers, because this algorithm enables us to not only impersonate, but also control the characters in the output video. The results are truly mesmerizing. I almost fell out of the chair when I first saw them. And what's more, we can create really rich reenactments by editing the expressions, poses, and blinking separately by hand. What also needs to be emphasized here is that we see and talk to other human beings all the time, so we have a remarkably keen eye for these kinds of gestures. If something is off just by a few millimeters, or is not animated in a way that is close to perfect, the illusion immediately falls apart. And the magical thing about these techniques is that with every single iteration, we get something that is way beyond the capabilities of the previous methods, and they come in quick succession. There are plenty more comparisons in the paper as well, so make sure to have a look. It also contains a great idea that opens up the possibility of creating quantitative evaluations against ground truth footage. Turns out that we can have such a thing as ground truth footage. I wonder when we will see the first movie with this kind of reenactment of an actor who passed away. Do you have some other cool applications in mind? Let me know in the comments section. And if you enjoyed this episode, make sure to pick up some cool perks on our Patreon page where you can manage your paper addiction by getting early access to these episodes and more. We also support cryptocurrencies. The addresses are available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. There are many research projects about teaching an AI to play video games well. We have seen some amazing results from DeepMind's Deep Q-Learning algorithm that performed on a superhuman level on many games, but faltered on others. What really made the difference is the sparsity of rewards and the lack of longer-term planning. What this means is that the more often we see the score change on our screen, the faster we know how well we are doing and can change our strategy if needed. For instance, if we make a mistake in Atari Breakout, we lose a life almost immediately. But in a strategy game, a bad decision may come back to haunt us up to an hour after committing it. So, what can we do to build an AI that can deal with these cases? So far, we have talked about extrinsic rewards that come from the environment, for instance, our score in a video game, and most existing AIs are, for all intents and purposes, extrinsic score maximizing machines. And this work is about introducing an intrinsic reward by endowing an AI with one of the most human-like attributes, curiosity. But hold on right there, how can a machine possibly become curious? Well, curiosity is defined by whatever mathematical definition we attach to it. In this work, curiosity is defined through the AI's ability to predict the results of its own actions: the harder an outcome is to predict, the more rewarding it is to try it. This is big, because it gives the AI tools to preemptively start learning skills that don't seem useful now but might be useful in the future. In short, this AI is driven to explore even if it hasn't been told how well it is doing. It will naturally start exploring levels in Super Mario, even without seeing the score. And now comes the great part. This curiosity really teaches the AI new skills, and when we drop it into a new, previously unseen level, it will perform much better than a non-curious one. When playing Doom, the legendary first-person shooter game, it will also start exploring the level and is able to rapidly solve hard exploration tasks. The comparisons reveal that an AI infused with curiosity performs significantly better on easier tasks. But the even cooler part is that with curiosity, we can further increase the difficulty of the games and the sparsity of the external rewards and can expect the agent to do well, even when previous algorithms failed. This will be able to play much harder games than previous works. And remember, games are only used to demonstrate the concept here. This will be able to do so much more. Love it. Thanks for watching and for your generous support and I'll see you next time.
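The intrinsic reward can be sketched very compactly: a forward model predicts the consequence of an action, and the prediction error is the curiosity bonus. In the hypothetical snippet below, everything is a toy stand-in, and note that the actual paper predicts in a learned feature space rather than on raw states, so that unpredictable but irrelevant details are ignored.

```python
import torch
import torch.nn as nn

# A forward model predicts the next state embedding from the current
# one plus the chosen action; its prediction error is the intrinsic
# reward. Dimensions here are invented for the example.
state_dim, n_actions = 32, 4
forward_model = nn.Sequential(
    nn.Linear(state_dim + n_actions, 64), nn.ReLU(),
    nn.Linear(64, state_dim),
)

def intrinsic_reward(state, action, next_state):
    onehot = torch.zeros(n_actions)
    onehot[action] = 1.0
    pred = forward_model(torch.cat([state, onehot]))
    return ((pred - next_state) ** 2).mean().item()  # surprise = curiosity

r = intrinsic_reward(torch.rand(state_dim), 2, torch.rand(state_dim))
# Total reward for the agent: extrinsic score + beta * intrinsic reward,
# where beta trades off exploration against score chasing.
```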
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Consider this problem. We have a pair of images that are visually quite different, but have similar semantic meanings, and we wish to map points between them. Now this might sound a bit weird, so bear with me for a moment. For instance, geese and airplanes look quite different, but both have wings and front and back regions. The paw of a lion looks quite different from a cat's foot, but they share the same function and are semantically similar. This is an AI-based technique that is able to find these corresponding points between our pair of images. In fact, the point pairs you've seen so far have been found by this AI. The main difference between this and previous non-learning-based techniques is that instead of pairing up regions based on pixel-color similarities, it measures how similar they are in terms of the neural network's internal representation. This makes all the difference. So far this is pretty cool, but is that it? Mapping points? Well, if we can map points effectively, we can map regions as a collection of points. This enables two killer applications. One, this can augment already existing artistic tools so that we can create a hybrid between two images. And the cool thing is that we don't even need to have any drawing skills, because we only have to add these colored masks and the algorithm finds and stitches together the corresponding image regions. And two, it can also perform cross-domain image morphing. That's an amazing term, but what does this mean? This means that we have our pair of images from earlier and we are not interested in stitching together a new image from their parts, but we want an animation where the starting point is one image, the ending point is the other, and we get a smooth and meaningful transition between the two. There are some really cool use cases for this. For example, we can start out from a cartoon drawing, set our photo as an endpoint and witness this beautiful morphing between the two. Kind of like in style transfer, but we have more fine-grained control over the output. Really cool. And note that many images in between are usable as is. No artistic skills needed. And of course, there is the mandatory animation that makes a cat from a dog. As usual, there are lots of comparisons to other similar techniques in the paper. This tool is going to be invaluable for, I was about to say, artists, but this doesn't require any technical expertise, just good taste and a little bit of imagination. What an incredible time to be alive. Thanks for watching and for your generous support, and I'll see you next time.
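A crude sketch of matching by internal representations instead of pixel colors might look like this: embed both images with a convolutional network and pair up spatial cells by cosine similarity of their feature vectors. The untrained backbone below is only a placeholder, since the real method relies on a network pretrained for classification, whose deep activations carry semantics, and it additionally requires matches to be mutual nearest neighbors across several layers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder feature extractor; the actual method uses a pretrained
# classification network whose deep activations encode semantics.
backbone = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
)

def best_match(img_a, img_b):
    """For each spatial cell of img_a, find the most similar cell of
    img_b by cosine similarity of feature vectors (a crude, one-sided
    version of semantic point matching)."""
    fa = backbone(img_a).flatten(2).squeeze(0).T  # (cells_a, channels)
    fb = backbone(img_b).flatten(2).squeeze(0).T  # (cells_b, channels)
    sim = F.normalize(fa, dim=1) @ F.normalize(fb, dim=1).T
    return sim.argmax(dim=1)  # index of the matching cell in img_b

matches = best_match(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```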
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Ever had an experience when you shot an almost perfect photograph of, for instance, an amazing landscape, but unfortunately, it was littered with unwanted objects? If only we had an algorithm that could perform image inpainting, in other words, delete a small part of an image and have it automatically filled in. So let's have a look at NVIDIA's AI-based solution. On the left, you see the white regions that are given to the algorithm to correct, and on the right, you see the corrected images. So, it works amazingly well, but the question is, why? This is an established research field, so what new can an AI-based approach bring to the table? Well, traditional non-learning approaches either try to fill these holes in with other pixels from the same image that have similar neighborhoods, copy-paste something similar, if you will, or they try to record the distribution of pixel colors and try to fill in something using that knowledge. And here comes the important part. None of these traditional approaches have an intuitive understanding of the contents of the image, and that is the main value proposition of the neural network-based learning techniques. This work also borrows from earlier artistic style transfer methods to make sure that not only the content, but the style of the inpainted regions also matches the original image. It is also remarkable that this new method works with images that are devoid of symmetries and can also deal with cases where we cut out really crazy, irregularly-shaped holes. Of course, like every good piece of research work, it has to be compared to previous algorithms. As you can see here, the quality of different techniques is measured against a reference output, and it is quite clear that this method produces more convincing results than its competitors. For reference, PatchMatch is a landmark paper from almost 10 years ago that still represents the state of the art for non-learning-based techniques. The paper contains a ton more of these comparisons, so make sure to have a look. Without a doubt, this is going to be an invaluable tool for artists in the future. In fact, in this very series, we use Photoshop's built-in image inpainting tool on a daily basis, so this will make our lives much easier. Loving it. Also, did you know that you can get early access to each of these videos? If you are addicted to the series, have a look at our Patreon page, Patreon.com slash 2-minute papers, or just click the link in the video description. There are also other really cool perks, like getting your name as a key supporter in the video description, or deciding the order of the next few episodes. We also support cryptocurrencies, the addresses are in the video description, and with this, you also help us make better videos in the future. Thanks for watching and for your generous support, and I'll see you next time.
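As a toy illustration of the learning setup, here is a hypothetical sketch: punch random holes into training images, hand the network the damaged image together with the hole mask, and train it to reconstruct the original. The real method uses partial convolutions and adds style-transfer-like feature losses so the filled-in texture matches its surroundings; the plain network and plain L1 loss below are simplifications.

```python
import torch
import torch.nn as nn

# A tiny stand-in inpainting network: it receives the damaged image
# plus the hole mask as a fourth channel and predicts the full image.
net = nn.Sequential(
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

images = torch.rand(8, 3, 64, 64)  # toy training images
for step in range(100):
    mask = (torch.rand(8, 1, 64, 64) > 0.25).float()  # 1 = pixel kept
    holes = images * mask
    pred = net(torch.cat([holes, mask], dim=1))
    loss = (pred - images).abs().mean()  # reconstruct the full image
    opt.zero_grad()
    loss.backward()
    opt.step()
```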
Dear Fellow Scholars, this is two-minute papers with Károly Zsolnai-Fehér. Two years ago, in 2016, we talked about a paper that enabled us to sit in front of a camera and transfer our gestures onto a virtual actor. This work went by the name Face2Face and showcased a bunch of mesmerizing results containing reenactments of famous political figureheads. It was quite amazing, but it is nothing compared to this one. And the reason for this is that the original Face2Face paper only transferred expressions, but this new work is capable of transferring head and torso movements as well. Not only that, but mouth interiors also appear more realistic and more gaze directions are also supported. You see in the comparison here that the original method disregarded many of these features, and how much more convincing this new one is. This extended technique opens up the door to several really cool new applications. For instance, consider this self-reenactment application. This means that you can reenact yourself. Now, what would that be useful for, you may ask? Well, of course, you can appear to be the most professional person during a virtual meeting even when sitting at home in your undergarments. Or you can quickly switch teams based on who is winning the game. Avatar digitization is also possible. This basically means that we can create a stylized version of our likeness to be used in a video game. Somewhat similar to the Memoji presented in Apple's latest keynote with the iPhone X. And the entire process takes place in real time without using neural networks. This is as good as it gets. What a time to be alive. Of course, like every other technique, this also has its own set of limitations. For instance, illumination changes in the environment are not always taken into account, and long-haired subjects with extreme motion may cause artifacts to appear. In short, don't use this for rock concerts. And with this, we are also one step closer to full-character reenactment for movies, video games, and telepresence applications. This is still a new piece of technology and may offer many more applications that we haven't thought of yet. After all, when the internet was invented, who thought that it could be used to order pizza or transfer Bitcoin? Or order pizza and pay with Bitcoin? Anyway, if you have some more applications in mind, let me know in the comments section. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. If you start watching reviews of some of the more recent smartphones, you will almost always see a dedicated section on low-light photography. The result is almost always that cameras that work remarkably well in well-lit scenes produce almost unusable results in dark environments. So unless we have access to a super-expensive camera, what can we really do to obtain more usable low-light images? Well, of course, we could try brightening the image up by increasing the exposure. This would help maybe a tiny bit, but would also mess up our white balance and also amplify the noise within the image. I hope that by now you are getting the feeling that there must be a better AI-based solution. Let's have a look. This is an image of a dark indoor environment, as I am sure you have noticed. This was taken with a relatively high light sensitivity that can be achieved with a consumer camera. This footage is unusable. And this image was taken by an expensive camera with extremely high light sensitivity settings. This footage is kind of usable, but is quite dim and is highly contaminated by noise. And now, hold on to your papers, because this AI-based technique takes sensor data from the first, unusable image and produces this. Holy smokes! And you know what the best part is? It produced this output image in less than a second. Let's have a look at some more results. These look almost too good to be true, but luckily we have a paper at our disposal, so we can have a look at some of the details of the technique. It reveals that we have to use a convolutional neural network to learn the concept of this kind of image translation, but that also means that we require some training data. The input should contain a bunch of dark images. These are the before images. This can hardly be a problem, but the outputs should always be the corresponding image with better visibility. These are the after images. So how do we obtain them? The key idea is to use different exposure times for the input and output images. A short exposure time means that when taking a photograph, the camera's shutter is only open for a short amount of time. This means that less light is let in, therefore the photo will be darker. This is perfect for the input images, as these will be the ones to be improved, and the improved versions are going to be the images with a much longer exposure time. This is because more light is let in, and we get brighter and clearer images. This is exactly what we are looking for. So now that we have the before and after images, which we refer to as input and output, we can start training the network to learn how to perform low-light photography well. And as you see here, the results are remarkable. Machine learning research at its finest. I really hope we get a software implementation of something like this in the smartphones of the near future; that would be quite amazing. And as we have only scratched the surface, please make sure to look at the paper, as it contains a lot more details. Thanks for watching and for your generous support, and I'll see you next time.
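For the fellow tinkerers: here is a minimal sketch of the training setup described above, assuming PyTorch and random placeholder tensors standing in for real short-exposure and long-exposure image pairs. The actual paper trains a much larger network directly on raw sensor data; this tiny model only illustrates the before-and-after pairing idea.

```python
import torch
import torch.nn as nn

# Placeholder data: pairs of (dark short-exposure, bright long-exposure) images.
pairs = [(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)) for _ in range(8)]

# A deliberately tiny CNN; the real method uses a much deeper network.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # per-pixel absolute error against the bright reference

for short_exp, long_exp in pairs:
    optimizer.zero_grad()
    loss = loss_fn(model(short_exp), long_exp)  # predict the "after" image
    loss.backward()
    optimizer.step()
```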
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. When an artist is in the process of creating digital media, such as populating a virtual world for an animation movie or a video game, or even in graphic design, the artist often requires a large number of textures for these kinds of works. Concrete walls, leaves, and fabrics are materials that we know well from the real world, and sometimes the process of obtaining textures is as simple as paying for a texture package and using it. But quite often, the problem occurs that we wish to fill an entire road with a concrete texture, but we only have a small patch at our disposal. In this case, the easiest and worst solution is to copy-paste this texture over and over, creating really unpleasant results that are quite repetitive and suffer from seams. So what about an AI-based technique that looks at a small patch and automatically continues it in a way that looks natural and seamless? This is an area within computer graphics and AI that we call texture synthesis. Periodic texture synthesis is simple, but textures with structure are super difficult. The selling point of this particular work is that it is highly efficient at taking into consideration the content and symmetries of the image. For instance, it knows that it has to take into consideration the concentric nature of the wood rings when synthesizing this texture, and it can also adapt to the regularities of this water texture and create a beautiful, high-resolution result. This is a neural network-based technique, so first, the question is, what should the training data be? Let's take a database of high-resolution images, cut out a small part, pretend that we don't have access to the bigger image, and ask a neural network to try to expand this small cutout. This sounds a little silly, so what is this trickery good for? Well, this is super useful because after the neural network has expanded the small cutout, we have a reference result in our hands that we can compare to, and this way, teach the network to do better. Note that this architecture is a generative adversarial network, where two neural networks battle each other. The generator network is the creator that expands the small texture snippets, and the discriminator network takes a look and tries to tell it from the real deal. Over time, the generator network learns to be better at texture synthesis, and the discriminator network becomes better at telling synthesized results from real ones. Over time, this rivalry leads to results that are of extremely high quality. And as you can see in this comparison, this new technique smokes the competition. The paper contains a ton more results and comparisons, and one of the most exhaustive evaluation sections I've seen in texture synthesis so far. I highly recommend reading it. If you would like to see more episodes like this, make sure to pick up one of the cool perks we offer through Patreon, such as deciding the order of future episodes or getting your name in the video description of every episode as a key supporter. We also support cryptocurrencies like Bitcoin, Ethereum, and Litecoin. We had a few really generous pledges in the last few weeks. I am quite stunned, to be honest, and I regret that I cannot get in contact with these Fellow Scholars. If you could contact me, that would be great. If not, thank you so much, everyone, for your unwavering support. This is just incredible. Thanks for watching and for your generous support, and I'll see you next time.
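Here is a minimal sketch of the training-data trick described above: cut a snippet out of a real texture, and the surrounding pixels we pretended not to have become the free reference for judging the expansion. PyTorch is assumed, and the adversarial training loop itself is omitted.

```python
import torch

def make_training_pair(texture, crop=64, full=128):
    # texture: a (channels, height, width) tensor of one high-resolution image.
    _, H, W = texture.shape
    top = torch.randint(0, H - full + 1, (1,)).item()
    left = torch.randint(0, W - full + 1, (1,)).item()
    reference = texture[:, top:top + full, left:left + full]
    off = (full - crop) // 2
    snippet = reference[:, off:off + crop, off:off + crop]
    # The generator expands `snippet`; the discriminator and reconstruction
    # losses compare the result against `reference`, which we got for free.
    return snippet, reference

snippet, reference = make_training_pair(torch.rand(3, 256, 256))
```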
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Due to popular request, here is a more intuitive explanation of our latest work. Believe it or not, when I started working on this, Two Minute Papers didn't even exist. In several research areas, there are cases where we can't talk about our work until it is published. I knew that the paper would not see the light of day for quite a while, if ever, so I started Two Minute Papers to be able to keep my sanity and deliver a hopefully nice piece of work on a regular basis. In the end, this took more than 3,000 work hours to complete, but it is finally here, and I am so happy to finally be able to present it to you. This work is in the intersection of computer graphics and AI, which you know is among my favorites. So what do we see here? This beautiful scene contains more than 100 different materials, each of which has been learned and synthesized by an AI. No two of these objects, not even the lions, are alike; each of them has a different material model. The goal is to teach an AI the concept of material models, such as metals, minerals, and translucent materials. Traditionally, when we are looking to create a new material model with a light simulation program, we have to fiddle with quite a few parameters, and whenever we change something, we have to wait from 40 to 60 seconds until a noise-free result appears. In our solution, we don't need to play with these parameters. Instead, our goal is to grab a gallery of random materials, assign a score to each of them, saying that I like this one, I didn't like that one, and get an AI to learn our preferences and recommend new materials for us. This is quite useful when we are looking to synthesize not only one, but many materials. So this is learning algorithm number one, and it works really well for a variety of materials. However, these recommendations still have to be rendered with the light simulation program, which takes several hours for a gallery like the one you see here. Here comes learning algorithm number two to the rescue: the neural network that replaces the light simulation program and creates photorealistic visualizations. It is so fast, it not only does this in real time, but it is more than 10 times faster than real time. We call this a neural renderer. So we have a lot of material recommendations, and they are all photorealistic, and we can visualize them in real time. However, it is always a possibility that we have a recommendation that is almost exactly what we had in mind, but needs a few adjustments. That's an issue, because to do that, we would have to go back to the parameter fiddling, which we really wanted to avoid in the first place. No worries, because the third learning algorithm is coming to the rescue. What this can do is take our favorite material models from the gallery and map them onto a nice 2D plane, where we can explore similar materials. If we combine this with the neural renderer, we can explore these photorealistic visualizations, and everything appears not in a few hours, but in real time. However, without a little further guidance, we get a bit lost, because we still don't know which regions in this 2D space are going to give us materials that are similar to the one we wish to fine-tune. We can further improve this by exploring different combinations of the three learning algorithms.
In the end, we can assign these colors to the background, which describe either how much the AI expects us to like the output, or how similar the output will be. A nice use case of this is where we have this glassy still life scene, but the color of the grapes is a bit too vivid for us. Now, we can go to this 2D latent space and adjust it to our liking in real time. Much better. No material modeling expertise is required. So I hope you found this explanation intuitive. We tried really hard to create something that is both scientifically novel and also useful for the computer game and motion picture industry. We had to throw away hundreds of other ideas until this final system materialized. Make sure to have a look at the paper in the description, where every single element and learning algorithm is tested and evaluated one by one. If you are a journalist and you would like to write about this work, I would be most grateful, and I am also more than happy to answer questions in an interview format as well. Please reach out if you are interested. We also tried to give back to the community, so for the fellow tinkerers out there, the entirety of the paper is under the permissive Creative Commons license, and the full source code and pre-trained networks are also available under the even more permissive MIT license. Everyone is welcome to reuse it or build something cool on top of it. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Reinforcement learning is a learning algorithm that chooses a set of actions in an environment to maximize the score. This class of techniques enables us to train an AI to master a large variety of video games and has many more cool applications. For instance, in the game of Q*bert, at every time step, the AI has to choose the appropriate actions to control this orange character and light up all the cubes without hitting the purple enemy. This work proposes an interesting alternative to reinforcement learning, named evolution strategies, and it aims to train not one agent, but an entire population of agents in parallel. The efficiency of this population is assessed much like how evolution works in nature, and new offspring are created from the best-performing candidates. Note that this is not the first paper using evolution strategies; this is a family of techniques that dates back to the 70s. However, an advantage of this variant is that it doesn't require long trial-and-error sessions to find an appropriate discount factor. But wait, what does this discount factor mean exactly? This is a number that describes whether the AI should focus only on immediate rewards at all costs, or whether it should be willing to temporarily make worse decisions for a better payoff in the future. The optimal number is different for every game and depends on how much long-term planning it requires. With this evolutionary algorithm, we can skip this step entirely. And the really cool thing about this is that it is not only able to master many games, but after only 5 hours of training, it was able to find ways to abuse game mechanics in Q*bert in the most creative ways. It has found a glitch where it sacrifices itself to lure the purple blob into dropping down after it. And much to our surprise, it found that there is a bug: if it drops down from this position, it should lose a life for doing it, but due to a bug, it doesn't. It also learned another cool technique where it waits for the adversary to make a move and immediately goes the other way. Here's the same scene slowed down. It had also found and exploited another serious bug, which was, to the best of my knowledge, previously unknown. After completing the first level, it starts jumping around in a seemingly random manner. A moment later, we see that the game does not advance to the next level, but cubes start blinking, and the AI is free to score as many points as it wishes. After this video, a human player was able to reproduce this; I've put a link to it in the video description. It also found the age-old trick in Breakout where we dig a tunnel through the bricks, lean back, start reading a paper, and let physics solve the rest of the level. One of the greatest advantages of this technique is that instead of training only one agent, it works on an entire population. These agents can be trained independently, making the algorithm more parallelizable, which means that it is fast and maps really well to modern processors and graphics cards with many cores. And these algorithms are not only winning the game, they are breaking the game. Loving it. What a time to be alive! I think this is an incredible story that everyone needs to hear about. If you wish to help us with our quest and get exclusive perks for this series, please consider supporting us on Patreon.
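To make the idea more tangible, here is a minimal sketch of one common flavor of evolution strategies, with a toy fitness function standing in for a game's episode score. Note that this is a generic textbook variant, not the exact algorithm evaluated in the paper, and indeed there is no discount factor anywhere in it.

```python
import numpy as np

def evolution_strategies(fitness, dim, iterations=200,
                         population=50, sigma=0.1, lr=0.02):
    theta = np.zeros(dim)  # policy parameters shared by the whole population
    for _ in range(iterations):
        noise = np.random.randn(population, dim)       # one mutation per agent
        scores = np.array([fitness(theta + sigma * n) for n in noise])
        scores = (scores - scores.mean()) / (scores.std() + 1e-8)
        # Nudge the parameters toward the perturbations that scored well --
        # this plays the role of creating offspring from the best candidates.
        theta += lr / (population * sigma) * noise.T @ scores
    return theta

# Toy fitness standing in for an episode score: best at all-ones parameters.
best = evolution_strategies(lambda w: -np.sum((w - 1.0) ** 2), dim=10)
```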
We are available through patreon.com slash two-minute papers, and the link with the details is available in the video description. We also use part of these funds to give back to the community and empower research projects and conferences. For instance, we recently sponsored a conference aimed at teaching young scientists to write and present their papers at international venues. We are hoping to invest some more into upgrading our video editing rig in the near future. We also support cryptocurrencies such as Bitcoin, Ethereum, and Litecoin. I am really grateful for your support. And this is why every video ends with: thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. When we show a photograph to someone, most of the time we are interested in sharing our memories. Graduations, family festivities, and beautiful landscapes are common examples of this. With the recent ascendancy of these amazing neural style transfer techniques, we can take a painting or any other source image and transfer the style of this image to our content. The style is transferred, but the content remains unchanged. This takes place by running the images through a deep neural network, which, in its deeper layers, learns about high-level concepts such as artistic style. This work has sparked a large body of follow-up research works: feed-forward real-time style transfer, temporally coherent style transfer for videos, you name it. However, these techniques are always about taking one image for content and one for style. How about a new problem formulation where we paste in a part of a foreign image with a completely different style? For instance, if you feel that this ancient artwork is sorely missing a Captain America shield, or if Picasso's self-portrait is just not cool enough without shades, then this algorithm is for you. However, if we just drop in this part of a foreign image, anyone can immediately tell, because of the differences in color and style. A previous non-AI-based technique does way better, but it is still apparent that the image has been tampered with. But as you can see here, this new technique is able to do it seamlessly. It works by first performing style transfer from the painting to the new region, and then, in the second step, additional refinements are made to it to make sure that the response of our neural network is similar across the entirety of the painting. It is conjectured that if the neural network is stimulated the same way by every part of the image, then there shouldn't be outlier regions that look vastly different. And as you can see here, it works remarkably well on a range of inputs. To validate this work, a user study was done that revealed that the users preferred the new technique over the older ones in 15 out of 16 images. I think it is fair to say that this work smokes the competition. But what about comparisons to real paintings? A different user study was also created to answer this question, and the answer is that users were mostly unable to identify whether the painting had been tampered with. The source code is also available, so let the experiments begin. Thanks for watching and for your generous support, and I'll see you next time.
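The "response of our neural network" part can be made concrete with the Gram-matrix style loss from the neural style transfer literature that this line of work builds on. A minimal sketch in PyTorch, assuming the feature activations come from some pretrained network such as VGG:

```python
import torch

def gram_matrix(features):
    # features: (channels, height, width) activations from one network layer.
    c, h, w = features.shape
    f = features.view(c, h * w)
    # Channel-to-channel correlations: which feature detectors fire together,
    # regardless of position -- this is what "style" means here.
    return (f @ f.t()) / (c * h * w)

def style_loss(pasted_features, painting_features):
    # Pushing the pasted region's correlations toward the painting's makes
    # the network respond similarly across the entire image.
    return torch.mean((gram_matrix(pasted_features) -
                       gram_matrix(painting_features)) ** 2)
```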
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Assessing how similar two images are has been a long-standing problem in computer graphics. For instance, if we write a new light simulation program, we have to compare our results against the output of other algorithms and a noise-free reference image. However, this often means that we have many noisy images, but the structure of the noise is different. This leads to endless arguments on which algorithm is favorable to the others, since who really gets to decide what kind of noise is favorable and what is not? These are important and long-standing questions that we need to find answers to. In another application, we took a photorealistic material model and wanted to visualize other materials that look similar to it. However, in order to do this, we need to explain to the computer what it means for two images to be similar. This is what we call a similarity metric. Have a look at this reference image and these two variants of it. Which one is more similar to it, the blurred or the warped version? Well, according to most humans, warping is considered a less intrusive operation. However, some of the most ubiquitous similarity metrics, like computing a simple per-pixel difference, think otherwise. Not good. What about this comparison? Which image is closer to the reference, the noisy or the blurry one? Most humans say that the noisy image is more similar, perhaps because with enough patience, one could remove all the noise pixel by pixel and get back the reference image, but in the blurry image, lots of features are permanently lost. Again, the classical error metrics think otherwise. Not good. And now comes the twist. If we build a database of many of these human decisions and feed it into a deep neural network, we find that this network will be able to learn and predict how humans see differences in images. This is exactly what we are looking for. You can see the agreement between this new similarity metric and these example differences. However, this shows the agreement on only three images. That could easily happen by chance. So this chart shows how different techniques correlate with how humans see differences in images. The higher the number, the higher the chance that it thinks similarly to humans. The ones labeled with LPIPS denote the new proposed technique used on several different classical neural network architectures. This is really great news for all kinds of research works that include working with images. I can't wait to start experimenting with it. The paper also contains a more elaborate discussion on failure cases as well, so make sure to have a look. Also, if you would like to help us do more to spread the word about these incredible works and pick up cool perks, please consider supporting us on Patreon. Each dollar you contribute is worth more than a thousand views, which is a ton of help for the channel. We also accept cryptocurrencies such as Bitcoin, Ethereum, and Litecoin. Details are available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
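For the fellow tinkerers: the authors released their metric as a small Python package, so comparing it against a classical per-pixel difference takes only a few lines. A minimal sketch with random placeholder images (the package expects inputs scaled to [-1, 1]):

```python
import torch
import lpips  # the released learned perceptual metric: pip install lpips

img0 = torch.rand(1, 3, 64, 64) * 2 - 1  # placeholder images in [-1, 1]
img1 = torch.rand(1, 3, 64, 64) * 2 - 1

# Classical per-pixel difference: treats every pixel independently, which is
# exactly why it can prefer a blurry image over a warped one.
per_pixel = torch.mean((img0 - img1) ** 2)

# Learned perceptual metric: compares deep network features instead.
metric = lpips.LPIPS(net='alex')
perceptual = metric(img0, img1)
```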
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, we are going to talk about an AI that not only plays video games really well, but can also dream up new, unseen scenarios, and more. This is an interesting new framework that contains a vision model that compresses what it has seen in the game into an internal code. As you see here, these latent variables are responsible for capturing different level designs. And this variable simulates time and shows how the fireballs move towards us over time. This is a highly compressed internal representation that captures the most important aspects of the game. We also have a memory unit that not only stores previous experiences, but, similarly to how an earlier work predicted the next pen strokes of a drawing, this can also dream up new gameplay. Finally, it is also endowed with a controller unit that is responsible for making decisions as to how to play the game. Here, you see the algorithm in action. On the left, there is the actual gameplay, and on the right, you see its compressed internal representation. This is how the AI thinks about the game. The point is that it is lossy, therefore some information is lost, but the essence of the game is retained. So, this sounds great, the novelty is clear, but how well does it play the game? Well, in this racing game, on a selection of 100 random tracks, its average score is almost 3 times that of DeepMind's groundbreaking deep Q-learning algorithm. This was the AI that took the world by storm when DeepMind demonstrated how it learned to play Atari Breakout and many other games on a superhuman level. This is almost 3 times better than that on the racetrack game, though it is to be noted that DeepMind has also made great strides since their original DQN work. And now comes the even more exciting part: because it can create an internal dream representation of the game, and this representation really captures the essence of the game, it means that it is also able to play and train within these dreams. Essentially, it makes up dream scenarios and learns how to deal with them without playing the actual game. It is a bit like how we prepare for a first date, imagining what to say and how to say it, or imagining how we would incapacitate an attacker with our karate chops if someone were to attack us. And the cool thing is that with this AI, this dream training actually works, which means that the newly learned dream strategies translate really well to the real game. We really have only scratched the surface, so make sure to read the paper in the description. This is a really new and fresh idea, and I think it will give birth to a number of follow-up papers. I cannot wait to report on these back to you, so stay tuned, and make sure to subscribe and hit the bell icon to never miss an episode. Thanks for watching and for your generous support, and I'll see you next time.
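A toy-sized sketch of the three components may help: the vision model compresses a frame into a code, the memory predicts how the codes evolve over time (dreaming is just feeding these predictions back in), and the controller is kept deliberately small. The paper uses a variational autoencoder, a mixture-density RNN, and an evolution-trained controller; the layers and sizes below are made-up stand-ins.

```python
import torch
import torch.nn as nn

vision = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 32))  # V: frame -> code
memory = nn.LSTMCell(input_size=32 + 3, hidden_size=256)          # M: predicts ahead
controller = nn.Linear(32 + 256, 3)                               # C: tiny policy

frame = torch.rand(1, 3, 64, 64)   # placeholder game frame
action = torch.zeros(1, 3)
h, c = torch.zeros(1, 256), torch.zeros(1, 256)

z = vision(frame)                                          # compressed code
h, c = memory(torch.cat([z, action], dim=1), (h, c))       # update the memory
action = torch.tanh(controller(torch.cat([z, h], dim=1)))  # decide what to do
```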
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This work is about building a robot that works even when being damaged, and you will see that the results are just unreal. There are many important applications for such a robot where sending out humans may be too risky, such as putting out forest fires, finding earthquake survivors under rubble, or shutting down a malfunctioning nuclear plant. Since these are all dangerous use cases, it is a requirement that such a robot works even when damaged. The key idea to accomplish this is that we allow the robot to perform tasks such as walking not only in one optimal way, but to explore and build a map of many alternative motions relying on different body parts. Some of these limping motions are clearly not optimal, but whenever damage happens to the robot, it will immediately be able to choose at least one alternative way to move around, even with broken or missing legs. After building the map, it can be used as additional knowledge to lean on when the damage occurs, and the robot doesn't have to relearn everything from scratch. This is great, especially given that damage usually happens in the presence of danger, and in these cases, reacting quickly can be a matter of life and death. However, creating such a map takes a ton of trial and error, potentially more than what we can realistically get the robot to perform. And now comes my favorite part, which is starting the project in a computer simulation, and then, in the next step, deploying the trained AI to a real robot. This previously mentioned map of movements contains over 13,000 different kinds of gaits, and since we are in a simulation, it can be computed efficiently and conveniently. In software, we can also simulate all kinds of damage for free, without dismembering our real robot. And since no simulation is perfect, after this step, the AI is deployed to the real robot, which evaluates and adjusts to the differences. By the way, this is the same robot that surprised us in a previous episode when it showed that it can walk around just fine without any foot contact with the ground by flipping onto its back and using its elbows. I can only imagine how much work this project took, and the results speak for themselves. It is also very easy to see the immediate utility of such a project. Bravo! I also recommend looking at the press materials. For instance, in the frequently asked questions, many common misunderstandings are addressed. For instance, it is noted that the robot doesn't understand the kind of damage that occurred and doesn't repair itself in the strict sense, but it tries to find alternative ways to function. Thanks for watching and for your generous support, and I'll see you next time.
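The "map of many alternative motions" is built with an algorithm known as MAP-Elites, whose core loop fits in a few lines. A minimal sketch, where evaluate, random_gait, and mutate are hypothetical placeholders for the simulator and the gait encoding:

```python
import random

def map_elites(evaluate, random_gait, mutate, iterations=100_000):
    # archive: behavior descriptor (e.g., how much each leg is used)
    #          -> the best (gait, performance) found so far for that behavior.
    archive = {}
    for _ in range(iterations):
        if archive and random.random() < 0.9:
            parent, _ = random.choice(list(archive.values()))
            gait = mutate(parent)          # vary an existing elite
        else:
            gait = random_gait()           # or try something brand new
        descriptor, performance = evaluate(gait)  # run in simulation
        # Keep the newcomer only if it beats the best of its kind so far.
        if descriptor not in archive or performance > archive[descriptor][1]:
            archive[descriptor] = (gait, performance)
    return archive  # thousands of diverse gaits to fall back on after damage
```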
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, we have two extremely hard problems on the menu. One is facial alignment, and the other is 3D facial reconstruction. For both problems, we have an image as an input, and the output should be either a few lines that mark the orientation of the jawline, mouth, and eyes, or, in the other case, a full 3D computer model of the face. And all this should happen automatically, without any user intervention. This is extremely difficult, because it means that we need an algorithm that takes a 2D image and somehow captures 3D information from this 2D projection, much like a human would. This all sounds great and would be super useful in creating 3D avatars for Skype calls or scanning real humans to place them in digital media such as feature movies and games. This would be amazing, but is this really possible? This work uses a convolutional neural network to accomplish this, and it not only provides high-quality outputs, but it creates them in less than 10 milliseconds per image, which means that it can process a hundred of them every second. That is great news indeed, because it also means that doing this for video in real time is also a possibility. But not so fast, because if we are talking about video, new requirements arise. For instance, it is important that such a technique is resilient against changes in lighting. This means that if we have different lighting conditions, the output geometry the algorithm gives us shouldn't be wildly different. The same applies to camera and pose as well. This algorithm is resilient against all three, and it has some additional goodies. For instance, it finds the eyes properly through glasses, and it can deal with cases where the jawline is occluded by the hair, or where the face is seen in profile and one side is not visible at all. One of the key ideas is to give additional instruction to the convolutional neural network to focus more of its efforts on reconstructing the center regions of the face, because that region contains more discriminative features. The paper also contains a study that details the performance of this algorithm. It reveals that it is not only 5 to 8 times faster than the competition, but also provides higher-quality solutions. Since these are likely to be deployed in real-world applications very soon, it is a good time to start brainstorming about possible applications for this. If you have ideas beyond the animation movies and games line, let me know in the comments section. I will put the best ones in the video description. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Recently, a new breed of AI techniques surfaced that were capable of this new thing called image translation. And by image translation, I mean that they can translate a drawn map to a satellite image, take a set of colored labels and make a photorealistic facade, or take a sketch and create a photo out of it. This is done through a generative adversarial network. This is an architecture where we have a pair of neural networks: one that learns to generate new images, and the other learns to tell a fake image from a real one. As they compete against each other, they get better and better without any human interaction. In these earlier applications, unfortunately, the output is typically one image, and since there are many possible shoes that could satisfy our initial sketch, it is highly unlikely that the one we are offered is exactly what we envisioned. This improved version enhances the algorithm to be able to produce not one, but an entire set of outputs. And as you can see here, we have a night image and a set of potential daytime translations on the right that are quite diverse. I really like how it has an intuitive understanding of the illumination differences of the building during night and daytime. It really seems to know how to add lighting to the building. It also models the atmospheric scattering during daytime, creates multiple kinds of pretty convincing clouds, or puts hills in the background. The results are realistic, and the additional selling point is that this technique offers an entire selection of outputs. What I found to be really cool about the next comparisons is that the ground truth images are also attached for reference. If we can take a photograph of a city at nighttime, we have access to the same view during the daytime too, or we can take a photograph of a shoe and draw the outline of it by hand. As you can see here, there are not only lots of high-quality outputs, but in some cases, the ground truth image is really well approximated by the algorithm. This means that we can give it a crude drawing and it can translate this drawing into a photorealistic image. I think that is mind-blowing. The validation section of the paper reveals that this technique provides a great trade-off between diversity and quality. There are previous methods that perform well if we need one high-quality solution or many not-so-great ones, but overall, this one provides a great package for artists working in the industry, and it will be a godsend for any kind of content creation scenario. The source code of this project is also available; make sure to read the license before starting your experiments. Thanks for watching and for your generous support, and I'll see you next time.
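The diversity comes from conditioning the generator on a random latent code, so producing a gallery is just a matter of sampling several codes. A minimal sketch, assuming a generator G(image, z) has already been trained:

```python
import torch

def translation_gallery(G, night_image, num_outputs=8, z_dim=8):
    # Each random code z selects one plausible "style" of daytime translation,
    # so the same night photo yields a whole set of diverse outputs.
    outputs = []
    for _ in range(num_outputs):
        z = torch.randn(1, z_dim)
        outputs.append(G(night_image, z))
    return outputs
```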
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, I am really excited to show you four experiments where AI researchers were baffled by the creativity and unexpected actions of their own creations. You better hold on to your papers. In the first experiment, robots were asked to walk around while minimizing the amount of foot contact with the ground. Much to the scientists' surprise, the robots answered that this can be done with 0% contact, meaning that they never, ever touched the ground with their feet. The scientists wondered how that is even possible and pulled up the video of the proof. This proof showed a robot flipping over and walking using its elbows. Talk about thinking outside the box. Wow! A different robot arm experiment also came to a surprising conclusion. At first, the robot arm had to use its gripper to grab a cube, which it successfully learned to perform. However, in a later experiment, the gripper was crippled, making the robot unable to open its fingers. Scientists expected a pathetic video with the robot trying to push the box around and always failing to pick up the cube. Instead, they found this. You see it, right? Instead of using the fingers, the robot finds the perfect angle to smash the hand against the box to force the gripper to open and pick up the box. That is some serious dedication to solving the task at hand. Bravo! In the next experiment, a group of robots were tasked to find food and avoid poisonous objects in an environment, and were equipped with a light and no further instructions. First, they learned to use the lights to communicate the presence of food and poison to each other and cooperate. This demonstrates that when trying to maximize the probability of the survival of an entire colony, the concepts of communication and cooperation can emerge even from simple neural networks. Absolutely beautiful! And what is even more incredible is that later, when a new reward system was created that fosters self-preservation, the robots learned to deceive each other by lighting up the food signal near the poison to take out their competitors and increase their chances. And these behaviors emerge from a reward system and a few simple neural networks. Mind-blowing. A different AI was asked to fix a faulty sorting computer program. Soon, it achieved a perfect score without changing anything, because it noticed that by short-circuiting the program itself, it always provides an empty output. And of course, you know, if there are no numbers, there is nothing to sort. Problem solved. Make sure to have a look at the paper; there are many more experiments that went similarly, including a case where the AI found a bug in a physics simulation program to get an edge. AI research is improving at such a rapid pace, and it is clearly capable of things that surpass our wildest imagination, but we have to make sure to formulate our problems with proper caution, because the AI will try to use loopholes instead of common sense to solve them. When in a car chase, don't ask the car AI to unload all unnecessary weights to go faster, or if you do, prepare to be promptly ejected from the car. If you have enjoyed this episode, please make sure to have a look at our Patreon page in the video description, where you can pick up really cool perks like early access to these videos, or getting your name shown in the video description, and more. Thanks for watching and for your generous support, and I'll see you next time.
Creating high-quality photorealistic materials for light transport simulations typically includes direct hands-on interaction with a principled shader. This means that the user has to tweak a large number of material properties by hand and has to wait for a new image of it to be rendered after each interaction. This requires a fair bit of expertise, and the best setups are often obtained through a lengthy trial-and-error process. To enhance this workflow, we present a learning-based system for rapid mass-scale material synthesis. First, the user is presented with a gallery of materials, and the assigned scores are shown in the upper left. Here, we learn the concept of glassy and transparent materials. By learning on only a few tens of high-scoring samples, our system is able to recommend many new materials from the learned distributions. The learning step typically takes a few seconds, while the recommendations take negligible time and can be generated on a mass scale. Then, these recommendations can be used to populate a scene with materials. Typically, each recommendation takes 40 to 60 seconds to render with global illumination, which is clearly unacceptable for real-world workflows, even for mid-size galleries. In the next step, we propose a convolutional neural network that is able to predict images of these materials that are close to the ones generated via global illumination, and it takes less than 3 milliseconds per image. Sometimes, a recommended material is close to the one envisioned by the user but requires a bit of fine-tuning. To this end, we embed our high-dimensional shader descriptors into an intuitive 2D latent space where exploration and adjustments can take place without any domain expertise. However, this isn't very useful without additional information, because the user does not know which regions offer useful material models that are in line with their scores. One of our key observations is that this latent space technique can be combined with Gaussian process regression to provide an intuitive color coding of the expected preferences, helping to highlight the regions that may be of interest. Furthermore, our convolutional neural network can also provide real-time predictions of these images. These predictions are close to indistinguishable from the real rendered images and are generated in real time. Beyond the preference map, this neural network also opens up the possibility of visualizing the expected similarity of these new materials to the one we seek to fine-tune. By combining the preference and similarity maps, we obtain a color coding that guides the user in this latent space towards materials that are both similar and have a high expected score. To accentuate the utility of our real-time variant generation technique, we show a practical case where one of the grape materials is almost done but requires a slight reduction in vividness. This adjustment doesn't require any domain expertise or direct interaction with the material modeling system and can be done in real time. In this example, we learn the concept of translucent materials from only a handful of high-scoring samples and generate a large amount of recommendations from the learned distribution. These recommendations can then be used to populate the scene with relevant materials. Here, we show the preference and similarity maps of the learned translucent material space and explore possible variants of an input material.
These recommendations can be used for mass-scale material synthesis, and the amount of variation can be tweaked to suit the user's artistic vision. After assigning the appropriate materials, displacements and other advanced effects can be easily added to these materials. We have also experimented with an extended, more expressive version of our shader that also includes procedural textured albedos and displacements. The following scenes were populated using the material learning and recommendation and latent space embedding steps. We have proposed a system for mass-scale material synthesis that is able to rapidly recommend a broad range of new material models after learning the user's preferences from a modest number of samples. Beyond this pipeline, we also explored powerful combinations of the three learning algorithms, thereby opening up the possibility of real-time photorealistic material visualization, exploration, and fine-tuning in a 2D latent space. We believe this feature set offers a useful solution for rapid mass-scale material synthesis for novice and expert users alike, and we hope to see more exploratory works combining the advantages of multiple state-of-the-art learning algorithms in the future.
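The Gaussian process regression color coding mentioned above can be sketched in a few lines with scikit-learn: fit a GP on the scored points in the 2D latent space, then predict the expected score on a dense grid to color the background. The points and scores below are made-up placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# A few materials embedded in the 2D latent space, with hypothetical scores.
points = np.array([[0.2, 0.3], [0.8, 0.5], [0.5, 0.9], [0.1, 0.7]])
scores = np.array([9.0, 2.0, 7.5, 4.0])

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(points, scores)

# Predict the expected preference over a dense grid; this becomes the color
# coding that highlights promising regions of the latent space.
xs, ys = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
grid = np.c_[xs.ravel(), ys.ravel()]
preference_map = gpr.predict(grid).reshape(100, 100)
```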
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. With the recent ascendancy of neural network-based techniques, we have witnessed amazing algorithms that are able to take an image from a video game and translate it into reality, and the other way around. Or they can also translate daytime images to their nighttime versions, or change summer to winter and back. Some AI-based algorithms can also create near-photorealistic images from our sketches. So the first question is, how is this wizardry even possible? These techniques are implemented by using generative adversarial networks, GANs in short. This is an architecture where two neural networks battle each other. The generator network is the artist who tries to create convincing, real-looking images. The discriminator network is the critic that tries to tell a fake image from a real one. The artist learns from the feedback of the critic and will improve itself to come up with better quality images, and in the meantime, the critic also develops a sharper eye for fake images. These two adversaries push each other until they both become adept at their tasks. However, the training of these GANs is fraught with difficulties. For instance, it is not guaranteed that this process converges to a point, and therefore it matters a great deal when we stop training the networks. This makes reproducing some works very challenging and is generally not a desirable property of GANs. It is also possible that the generator starts focusing on a select set of inputs and refuses to generate anything else, a phenomenon we refer to as mode collapse. So how could we possibly defeat these issues? This work presents a technique that mimics the steps of evolution in nature: evaluation, selection, and variation. First, this means that not one, but many generator networks are trained, and only the ones that provide sufficient quality and diversity in their images will be preserved. We start with an initial population of generator networks and evaluate the fitness of each of them. The better and more diverse images they produce, the more fit they are, and the more likely they are to survive the selection step, where we eliminate the most unfit candidates. Okay, so now we see how a subset of these networks becomes the victim of evolution. This is how networks get eaten, if you will. But how do we produce new ones? And this is how we arrive at the variation step, where new generator networks are created by introducing variations to the networks that are still alive in this environment. This simulates the creation of offspring and will provide the next set of candidates for the next selection step, and we hope that if we play this game over a long time, we get more and more resilient offspring. The resulting algorithm can be trained in a more stable way, and it can create new bedroom images when being shown a database of bedrooms. When compared to the state of the art, we see that this evolutionary approach offers high-quality images and more diversity in the outputs. It can also generate new human faces that are quite decent. They are clearly not perfect, but a technique that can pull this off consistently will be an excellent baseline for newer and better research works in the near future. We are also getting very close to an era where we can generate thousands of convincing digital characters from scratch, to name just one application. What a time to be alive! Thanks for watching and for your generous support, and I'll see you next time.
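A minimal sketch of the evaluation-selection-variation loop over a population of generators; fitness and mutate_weights are hypothetical placeholders (in the paper, fitness rewards both image quality and diversity):

```python
import copy

def evolve_generators(population, fitness, mutate_weights,
                      generations=50, survivors=4):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)  # evaluation
        parents = population[:survivors]            # selection: unfit ones die
        offspring = []
        for parent in parents:
            child = copy.deepcopy(parent)
            mutate_weights(child)                   # variation: new offspring
            offspring.append(child)
        population = parents + offspring
    return max(population, key=fitness)
```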
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Neural networks are amazing at recognizing objects when being shown an image, and in some cases, like traffic sign recognition, their performance can reach superhuman levels. But as we discussed in the previous episode, most of these networks have an interesting property where we can add small changes to an input photo and have the network misclassify it as something completely different. A super effective neural network can be reduced to something that is less accurate than a coin flip with a properly crafted adversarial attack. So of course, we may think that neural networks are much smaller and simpler than the human brain, and because of that, of course, we cannot perform such an adversarial attack on the human vision system. Right? Or is it possible that some of the properties of machine vision systems can be altered to fool human vision? And now, hold on to your papers. I think you know what's coming. This algorithm performs an adversarial attack on you. This image depicts a cat. And this image depicts a dog? Surely it's a dog, right? Well, no. This is an image of the previous cat plus some carefully crafted noise that makes it look like a dog. This is such a peculiar effect. I am staring at it, and I know for a fact that this is not a dog. This is cat plus noise, but I cannot not see it as a dog. Wow, this is certainly something that you don't see every day. So let's look at what changes were made to the image. Clearly, the nose appears to be longer and thicker, so that's a dog-like feature. But it is of utmost importance that we don't overlook the fact that several cat-specific features still remain in the image; for instance, the whiskers are very cat-like. And despite that, we still see it as a dog. This is insanity. This technique works by performing an adversarial attack against an AI model and modifying the noise generator model to better match the human visual system. Of course, the noise we have to add depends on the architecture of the neural network, and by this, I mean the number of layers, the number of neurons within these layers, and many other parameters. However, a key insight of the paper is that there are still features that are shared between most architectures. This means that if we create an attack that works against five different neural network architectures, it is highly likely that it will also work on an arbitrary sixth network that we haven't seen yet. And it turns out that some of these noise distributions are also useful against the human visual system. Make sure to have a look at the paper. I have found it to be an easy read, and quite frankly, I am stunned by the result. It is clear that machine learning research is progressing at a staggering pace, but I haven't expected this. I haven't expected this at all. If you are enjoying the series, please make sure to have a look at our Patreon page to pick up cool perks, like watching these episodes in early access or getting your name displayed in the video description as a key supporter. Details are available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
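The shared-features insight maps to a simple recipe: craft the noise against an ensemble of networks at once, so that it transfers to unseen ones. Here is a minimal fast-gradient-style sketch in PyTorch; it illustrates ensemble transferability in general, not the paper's exact human-vision procedure.

```python
import torch
import torch.nn.functional as F

def ensemble_perturbation(models, image, label, epsilon=0.03):
    # image: (1, 3, H, W) in [0, 1]; label: the true class index tensor.
    image = image.clone().requires_grad_(True)
    # Sum the losses of several architectures: noise that raises all of them
    # is more likely to also fool a sixth network we have never seen.
    loss = sum(F.cross_entropy(model(image), label) for model in models)
    loss.backward()
    # One small step in the direction that hurts every model at once.
    return (image + epsilon * image.grad.sign()).detach().clamp(0, 1)
```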
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. We have had many episodes about new, wondrous AI-related algorithms, but today we are going to talk about AI safety, which is an increasingly important field of AI research. Deep neural networks are excellent classifiers, which means that after we train them on a large amount of data, they will be remarkably accurate at image recognition. So generally, accuracy is subject to maximization. But no one said a word about robustness, and here is where these new neural network defeating techniques come into play. Earlier, we have shown that we can fool neural networks by adding carefully crafted noise to an image. If done well, this noise is barely perceptible and can fool the classifier into looking at a bus and thinking that it is an ostrich. We often refer to this as an adversarial attack on a neural network. This is one way of doing it, but note that we have to change many, many pixels of the image to perform such an attack. So the next question is clear: what is the lowest number of pixel changes that we have to perform to fool a neural network? What is the magic number? One would think that a reasonable number would at least be a hundred. Hold on to your papers, because this paper shows that many neural networks can be defeated by only changing one pixel. By changing only one pixel in an image that depicts a horse, the AI will be 99.9% sure that we are seeing a frog. A ship can also be disguised as a car, or, amusingly, almost anything can be seen as an airplane. So how can we perform such an attack? As you can see here, these neural networks typically don't provide a class directly, but a bunch of confidence values. What does this mean exactly? The confidence values denote how sure the network is that we see a Labrador or a tiger cat. To come to a decision, we usually look at all of these confidence values and choose the object type that has the highest confidence. Now clearly, we have to know which pixel position to choose and what color it should be to perform a successful attack. We can do this by performing a bunch of random changes to the image and checking how each of these changes performed in decreasing the confidence of the network in the appropriate class. After this, we filter out the bad ones and continue our search around the most promising candidates. This process is referred to as differential evolution, and if we perform it properly, in the end, the confidence value for the correct class will be so low that a different class will take over. If this happens, the network has been defeated. Now note that this also means that we have to be able to look into the neural network and have access to the confidence values. There is also plenty of research on training more robust neural networks that can withstand as many adversarial changes to the inputs as possible. I cannot wait to report on these works as well in the future. Also, our next episode is going to be on adversarial attacks on the human vision system. Can you believe that? That paper is absolutely insane, so make sure to subscribe and hit the bell icon to get notified. You don't want to miss that one. Thanks for watching and for your generous support, and I'll see you next time.
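For the fellow tinkerers: SciPy ships a ready-made differential evolution optimizer, so the search for the single pixel can be sketched in a few lines. Here, model_confidence is a hypothetical function returning the network's per-class confidence values.

```python
from scipy.optimize import differential_evolution

def one_pixel_attack(model_confidence, image, true_class):
    height, width = image.shape[:2]

    def objective(candidate):
        x, y, r, g, b = candidate
        perturbed = image.copy()
        perturbed[int(y), int(x)] = (r, g, b)  # change exactly one pixel
        # Driving the true class's confidence down lets another class take over.
        return model_confidence(perturbed)[true_class]

    bounds = [(0, width - 1), (0, height - 1), (0, 255), (0, 255), (0, 255)]
    result = differential_evolution(objective, bounds, maxiter=75, popsize=20)
    return result.x  # best (x, y, r, g, b) single-pixel change found
```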
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Reinforcement learning is a learning algorithm that chooses a set of actions in an environment to maximize the score. This class of techniques enables us to train an AI to master a large variety of video games and has many more cool applications. Reinforcement learning typically works well when the rewards are dense. What does this mean exactly? This means that if we play a game and after making a mistake we immediately die, it is easy to identify which of our actions was the mistake. However, if the rewards are sparse, we are likely playing something that is akin to a long-term strategy game. If we lost, it is possible that we were outmaneuvered in the final battle, but it is also possible that we lost the game way earlier due to building the wrong kind of economy. There are a million other possible reasons, because we get feedback on how well we have done only once, and long after we have chosen our actions. Learning from sparse rewards is very challenging, even for humans. And it gets even worse. In this problem formulation, we don't have any teachers that guide the learning of the algorithm, and no prior knowledge of the environment. So this problem sounds almost impossible to solve. So what did DeepMind's scientists come up with to at least have a chance of approaching it? And now, hold on to your papers, because this algorithm learns like a baby learns about its environment. This means that before we start solving problems, the algorithm is unleashed into the environment to experiment and master basic tasks. In this case, our final goal would be to tidy up the table. First, the algorithm learns to activate its haptic sensors and control the joints and fingers, then it learns to grab an object, and then to stack objects on top of each other. And in the end, the robot will learn that tidying up is nothing else but a sequence of these elementary actions that it had already mastered. The algorithm also has an internal scheduler that decides which should be the next action to master, while keeping in mind that the goal is to maximize progress on the main task, which is tidying up the table in this case. And now, on to validation. When we are talking about software projects, the question of real-life viability often emerges. So the question is, how would this technique work in reality, and what better ultimate test could there be than running it on a real robot arm? Let's look here and marvel at the fact that it easily finds and moves the green block to the appropriate spot. And note that it had learned how to do it from scratch, much like a baby would learn to perform such tasks. And also note that this was a software project that was deployed on this robot arm, which means that the algorithm generalizes well to different control mechanisms, a property that is highly sought after when talking about intelligence. And if earlier progress in machine learning research is indicative of the future, this may learn how to perform backflips and play video games on a superhuman level. And I will be here to report on that for you. Thanks for watching and for your generous support, and I'll see you next time.
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Reinforcement learning is a learning algorithm that we can use to choose a set of actions in an environment to maximize a score. There are many applications of such learners, but we typically cite video games because of the diverse set of challenges they can present the player with. And in reinforcement learning, we typically have one task, like learning backflips, and one agent that we wish to train to perform it well. This work is DeepMind's attempt to supercharge reinforcement learning by training one agent that can do a much wider variety of tasks. Now, this clearly means that we have to acquire more training data and also be prepared to process all this data as effectively as possible. By the way, the test suite that you see here is also new, where typical tasks in this environment involve pathfinding through mazes, collecting objects, finding keys to open their matching doors, and more. And every Fellow Scholar knows that the paper describing its details is, of course, available in the video description. This new technique builds upon an earlier architecture that was also published by DeepMind. This earlier architecture, A3C, unleashes a bunch of actors into the wilderness, each of which gets a copy of the playbook that contains the current strategy. These actors then play the game independently and periodically stop and share what worked and what didn't to this playbook. With this new IMPALA architecture, there are two key changes to this. One, in the middle, we have a learner, and the actors don't share what worked and what didn't with this learner, but they share their experiences instead. And later, the centralized learner will come up with the proper conclusions with all this data. Imagine if each football player in a team tried to tell the coach the things they tried on the field and what worked. That is surely going to work at least okay, but instead of these conclusions, we could aggregate all the experience of the players into some sort of centralized hive mind and get access to a lot more and higher-quality information. Maybe we will see that a strategy only works well if executed by players who are known to be faster than their opponents on the field. The other key difference is that with traditional reinforcement learning, we play for a given number of steps, then stop and perform learning. With this technique, we have decoupled the playing and learning, therefore it is possible to create an algorithm that performs both of them continuously, as sketched below. This also raises new questions, so make sure to have a look at the paper, specifically the part with the new off-policy correction method by the name of V-trace. When tested on 30 of these different levels and a bunch of Atari games, the new technique was typically able to double the score of the previous A3C architecture, which was also really good. And at the same time, this is at least 10 times more data-efficient, and its knowledge generalizes better to other tasks. We have had many episodes on neural network-based techniques, but as you can see, research on the reinforcement learning side is also progressing at a remarkable pace. If you have enjoyed this episode and you feel that 8 science videos a month is worth a dollar, please consider supporting us on Patreon. You can also pick up cool perks like early access to these episodes. The link is available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
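The decoupling of acting and learning can be sketched with a simple queue: actors stream raw experiences in, and the learner consumes them continuously. Here, play_episode, update_policy, and current_policy are hypothetical placeholders; the real system also applies the V-trace correction because the actors' policies lag behind the learner's.

```python
import queue
import threading

experience = queue.Queue(maxsize=1000)

def actor(policy_snapshot):
    # Each actor streams raw trajectories -- not conclusions -- to the learner.
    while True:
        trajectory = play_episode(policy_snapshot)  # hypothetical helper
        experience.put(trajectory)

def learner():
    # Playing and learning are decoupled, so both can run continuously.
    while True:
        batch = [experience.get() for _ in range(32)]
        update_policy(batch)  # hypothetical gradient step (with V-trace)

for _ in range(4):
    threading.Thread(target=actor, args=(current_policy,), daemon=True).start()
threading.Thread(target=learner, daemon=True).start()
```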
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This project is a collaboration between Inria and Facebook AI Research, and it is about pose estimation. Pose estimation means that we take an input photo, or in the cooler case, video of people, and the output should be a description of their postures. This is kind of like motion capture for those amazing movie and computer game animations, but without the studio and the markers. This work goes even further and tries to offer a full 3D reconstruction of the geometry of the bodies, and it is in fact doing way more than that, as you will see in a minute. Neural networks are usually great at these tasks, provided that we have a large number of training samples to train them on. So, the first step is gathering a large amount of annotated data. This means an input photograph of someone, which is paired up with the correct description of their posture. This is what we call one training sample. The newly proposed dataset contains 50,000 of these training samples, and using that, we can proceed to step number two: training the neural network to perform pose estimation. But there is more to this particular work. Normally, pose estimation takes place with a 2D skeleton, which means that most techniques output a stick figure. Not in this case, because the dataset contains segmentations and dense correspondences between 2D images and 3D models, therefore the network is also able to output fully 3D models. There are plenty of interesting details shown in the paper. For instance, since the annotated ground truth footage in the training set is created by humans, there is plenty of missing data, which is filled in by a separate neural network that is specialized for this task. Make sure to have a look at the paper for more cool details like this. This all sounds good in theory, but a practical application has to be robust against occlusions and rapid changes in posture. The good news is that the authors published plenty of examples of these, which you can see here. It also has to be able to deal with smaller and bigger scales as people get closer to or further away from the camera. This is also a challenge. The algorithm does a really good job at this, and remember, no markers or studio setup are required, and everything that you see here is performed interactively. The dataset will appear soon, and it will be possible to reuse it for future research works, so I expect plenty more collaborations and follow-up works on this problem. We are living in amazing times indeed. Thanks for watching and for your generous support, and I'll see you next time.
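For concreteness, here is a minimal sketch of what one annotated training sample and a supervised training step might look like. The field names, array shapes, stub model, and placeholder loss are all assumptions for illustration; the paper's actual annotation format, network, and loss are considerably more involved.

```python
import numpy as np

def make_training_sample(photo, annotations):
    """photo: an HxWx3 image array.
    annotations: (pixel_x, pixel_y, body_part_id, u, v) tuples, where
    (u, v) locate that pixel on the surface of a 3D body model."""
    return {"image": photo, "targets": np.array(annotations, dtype=np.float32)}

def training_step(network, sample):
    # The network predicts, for annotated pixels, a body part and (u, v)
    # surface coordinates; the loss compares them against the annotations.
    predicted = network(sample["image"])
    return float(((predicted - sample["targets"]) ** 2).mean())

# One of the 50,000 image/annotation pairs would look roughly like this:
photo = np.zeros((256, 256, 3), dtype=np.uint8)
sample = make_training_sample(photo, [(120, 88, 3, 0.41, 0.77)])

stub_network = lambda image: np.zeros((1, 5), dtype=np.float32)  # stand-in model
print(training_step(stub_network, sample))  # loss for this one sample
```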
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Many of you have surely heard the word Animoji, which refers to these animated emoji figures that react to our facial gestures in real time. This is implemented in the new iPhone X phones; however, to accomplish this, it uses a dot projector to get a good enough understanding of the geometry of the human face. So how about a technique that doesn't need any specialized gear, takes not even a video of you but one single photograph as an input, and creates a digital avatar that can be animated in real time? Well, sign me up. Have a look at these incredible results. As you can see, the final result also includes secondary components like eyes, teeth, tongue, and gums. Now, the avatars don't have to be fully photorealistic, but they have to capture the appearance and gestures of the user well enough that they can be used in video games or any telepresence application where a set of users interact in a virtual world. As opposed to many prior works, the hair is not reconstructed strand by strand, because doing that in real time is not feasible. Also, note that the information we are given is highly incomplete, because the backside of the head is not captured, and yet these characters have quite appropriate looking hairstyles there. How is this even possible? Well, first the input image is segmented into the face part and the hair part. Then the hair part is run through a neural network that tries to extract attributes like length, spikiness, whether there is a ponytail, where the hairline is, and more. This is an extremely deep neural network with over 50 layers, and it took 40,000 images of different hairstyles to train. Now, since it is highly unlikely that the input photo shows someone with a hairstyle that was never ever worn by anyone else, we can look into a big dataset of already existing hairstyles and choose the closest one that fits the attributes extracted by the neural network. Such a smart idea, loving it. You can see how well this works in practice, and in the next step, the movement and the appearance of the final hair geometry can be computed in real time through a novel polygonal strip representation. The technique also supports retargeting, which means that our gestures can be transferred to different characters. The framework is also very robust to different lighting conditions, which means that a differently lit photograph will lead to very similar outputs. The same applies to expressions. This is one of those highly desirable details that makes or breaks the usability of a new technique in production environments, and this one passed with flying colors. In these comparisons, you can also see that the quality of the results smokes the competition. A variant of the technology can be downloaded through the link in the video description. Thanks for watching and for your generous support, and I'll see you next time.
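Here is a small Python sketch of the retrieval idea: extract an attribute vector from the segmented hair region with a network, then pick the nearest hairstyle from a database of existing ones. The attribute list, the stub network, and the toy database are all hypothetical; the paper's real attribute set and 40,000-image training pipeline are far richer.

```python
import numpy as np

def hair_attribute_net(hair_region):
    # Stand-in for the 50+ layer attribute network; returns a fixed
    # vector of [length, spikiness, ponytail, hairline] for illustration.
    return np.array([0.7, 0.2, 0.0, 0.5])

# Toy database of known hairstyles, described by the same attributes.
hairstyle_database = {
    "short_spiky":   np.array([0.2, 0.9, 0.0, 0.5]),
    "long_straight": np.array([0.8, 0.1, 0.0, 0.5]),
    "ponytail":      np.array([0.6, 0.1, 1.0, 0.5]),
}

def closest_hairstyle(hair_region):
    # Nearest-neighbor lookup: the winning style fills in unseen parts
    # of the head, like the back that the photo never captured.
    query = hair_attribute_net(hair_region)
    return min(hairstyle_database,
               key=lambda name: np.linalg.norm(hairstyle_database[name] - query))

print(closest_hairstyle(hair_region=None))  # -> "long_straight"
```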
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Some time ago, smartphone cameras were trying to outpace each other by adding more and more megapixels to their specification sheets. The difference between a half-megapixel image and a 4-megapixel image was night and day. However, nowadays we have entered the territory of diminishing returns, as most newer mobile cameras support 8 or more megapixels. At this point, a further resolution increase doesn't lead to significantly more convincing photos. And here is where the processing software takes the spotlight. This paper is about an AI-based technique that takes a poor quality photo and automatically enhances it. Here you can already see what a difference software can make to these photos. Many of these photos were taken with an 8-year-old mobile camera and were enhanced by the AI. This is insanity. Now, before anyone thinks that by enhancement I'm referring to the classic workflow of adjusting white balance, color levels, and hues: no, no, no. By enhancement, I mean the big, heavy hitters, like recreating lost details via super resolution and image inpainting, image deblurring, denoising, and recovering colors that were not even recorded by the camera. The idea is the following. First, we shoot a lot of photos from the same viewpoint with a bunch of cameras, ranging from a relatively dated iPhone 3GS, through other mid-tier mobile cameras, to a state-of-the-art DSLR camera. Then, we hand over this huge bunch of data to a neural network that learns the typical features that are preserved by the better cameras and lost by the worse ones. The network does the same by relating the noise patterns and color profiles to each other. Then, we use this network to recover these lost features and pump up the quality of our lower-tier camera to be as close as possible to a much more expensive model. Super smart idea. Loving it. And you know what is even more brilliant? The validation of this work can take place in a scientific manner, because we don't need to take a group of photographers who will twirl their mustaches and judge these photos. Though I'll note that this was also done for good measure. But since we have the photos from the high-quality DSLR camera, we can take the bad photos, enhance them with the AI, and compare this output to the real DSLR's output. Absolutely brilliant. The source code, pre-trained networks, and an online demo are also available. So, let the experiments begin. And make sure to leave a comment with your findings. What do you think about the outputs shown on the website? Did you try your own photo? Let me know in the comments section. A high-quality validation section, lots of results, candid discussion of the limitations in the paper, published source code, pre-trained networks, and online demos that everyone can try free of charge. Scientists at ETH Zurich maxed this paper out. This is as good as it gets. If you have enjoyed this episode and would like to help us make better videos in the future, please consider supporting us on Patreon by clicking the letter P at the end screen of this video in a moment, or just have a look at the video description. Thanks for watching and for your generous support, and I'll see you next time.
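The paired-supervision idea can be sketched in a few lines of Python: a stand-in "network" with a single gain parameter is trained so that its enhanced phone photo matches the DSLR photo of the same scene. The scalar model, synthetic data, and pixel-wise loss are assumptions purely for illustration; the real system trains a deep convolutional network on actual camera pairs.

```python
import numpy as np

def enhance(gain, phone_photo):
    # Placeholder "network": a single per-pixel gain stands in for a CNN.
    return np.clip(phone_photo * gain, 0.0, 1.0)

def train_step(gain, phone_photo, dslr_photo, lr=0.5):
    predicted = enhance(gain, phone_photo)
    error = predicted - dslr_photo
    loss = (error ** 2).mean()                 # pixel-wise reconstruction loss
    grad = 2.0 * (error * phone_photo).mean()  # d(loss)/d(gain), ignoring the clip
    return gain - lr * grad, loss

phone = np.random.rand(64, 64, 3) * 0.5       # dim, low-quality capture
dslr = np.clip(phone * 1.8, 0.0, 1.0)         # brighter "ground truth" of the same scene
gain = 1.0
for _ in range(200):
    gain, loss = train_step(gain, phone, dslr)
print(round(gain, 2))                          # approaches the true gain of ~1.8
```

The same pairing is what makes the scientific validation possible: since a DSLR shot exists for every test scene, the enhanced output can be compared against it numerically rather than only by human judgment.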
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Hold on to your papers, because this is an exclusive look at a new neural network visualization paper that came from a collaboration between Google and Carnegie Mellon University. The paper is as fresh as it gets, because this is the first time I have been given an exclusive look before a paper came out, and this means that this video and the paper itself will be published at the same time. This is really cool, and it's quite an honor. Thank you very much. Neural networks are powerful learning-based tools that are super useful for tasks that are difficult to explain but easy to demonstrate. For instance, it is hard to mathematically define what a traffic sign is, but we have plenty of photographs of them. So the idea is simple: we label a bunch of photographs with additional data that says this one is a traffic sign and this one isn't, and feed this to a learning algorithm. As a result, neural networks have been able to perform traffic sign detection at a superhuman level for many years now. Scientists at Google DeepMind have also shown us that if we combine a neural network with reinforcement learning, we can get it to look at the screen and play computer games on a very high level. It is incredible to see problems that seemed impossible for many decades crumble one by one in quick succession over the last few years. However, we have a problem, and that problem is interpretability. There is no doubt that these neural networks are efficient; however, they cannot explain their decisions to us, at least not in a way that we can interpret. To alleviate this, earlier works tried to visualize these networks on the level of neurons, particularly what kinds of inputs make individual neurons extremely excited. This paper is about combining previously known techniques to unlock more powerful ways to visualize these networks. For instance, we can combine the individual neuron visualizations with class attributions. This offers a better way of understanding how a neural network decides whether a photo depicts a labrador or a tiger cat. Here we can see which part of the image activates a given neuron, what the neuron is looking for, and how this feeds into the final decision as to which class the image should belong to. The next visualization technique shows us which set of detectors contributed to the final decision, and how much they contributed exactly. Another way towards better interpretability is to condense the overwhelming number of neurons into smaller groups with more semantic meaning. This process is referred to as factorization, or neuron grouping, in the paper. If we do this, we can obtain highly descriptive labels that we can endow with intuitive meanings. For instance, here we see that in order for the network to classify the image as a labrador, it needs to see a combination of floppy ears, a doggy forehead, a doggy mouth, and a bunch of fur. We can also construct a nice activation map that shows which parts of the image make our groups excited. Please note that we have only scratched the surface. This is a beautiful paper, and it has tons more results available exactly from this moment, with plenty of interactive examples you can play with. Not only that, but the code is open sourced, so you are also able to reproduce these visualizations with little to no setup. Make sure to have a look at it in the video description. Thanks for watching and for your generous support, and I'll see you next time.
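The neuron-grouping step can be illustrated with off-the-shelf non-negative matrix factorization, which is in the spirit of the factorization described above: a layer's activations for one image form a (pixels x channels) matrix, and NMF compresses the hundreds of channels into a handful of groups, each with its own spatial map. The layer size, group count, and random stand-in activations below are assumptions; a real run would use an actual network's layer output.

```python
import numpy as np
from sklearn.decomposition import NMF

h, w, channels, n_groups = 14, 14, 512, 6
# Fake non-negative (ReLU-like) activations standing in for a real layer.
activations = np.abs(np.random.randn(h * w, channels))

nmf = NMF(n_components=n_groups, init="nndsvda", max_iter=300)
spatial_maps = nmf.fit_transform(activations)  # (H*W, groups): where each group fires
group_directions = nmf.components_             # (groups, channels): which neurons form each group

# Reshape one group's map back to image layout so it can be overlaid on
# the input, like the "floppy ears" / "doggy forehead" maps in the paper.
group_0_map = spatial_maps[:, 0].reshape(h, w)
print(group_0_map.shape)  # (14, 14)
```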