id | title | url | published | text | start | end
---|---|---|---|---|---|---|
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So, they're changing the slide, whatever, it's a square rectangular plate. And that moving edge drove or excited the neurons. So, they really chased after that observation. | 1,575 | 1,590 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | If they were too frustrated or too careless, they would have missed that, but they were not. They really chased after that and realized neurons in the primary visual cortex are organized in columns. | 1,590 | 1,606 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And for every column of the neurons, they like to see a specific orientation of the stimuli, simple oriented bars rather than the fish or mouse. | 1,606 | 1,623 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | I'm making this a little bit of a simple story, because there are still neurons in the primary visual cortex that we don't know what they like; they don't like simple oriented bars. But by and large, Hubel and Wiesel found that the beginning of visual processing is not a holistic fish or mouse. | 1,623 | 1,642 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | The beginning of visual processing is simple structures of the world, edges, oriented edges. And this has a very deep, deep implication for both neurophysiology and neuroscience, as well as for engineering modeling. | 1,642 | 1,662 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Later, when we visualize our deep neural network features, we'll see simple edge-like structures emerging from our model. | 1,662 | 1,676 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And even though the discovery was in the late 50s or early 60s, they won the Nobel Prize in Medicine for this work in 1981. So, that was another very important piece of work related to visual processing. | 1,676 | 1,697 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And so, when did computer vision begin? That's another interesting story, the precursor of computer vision as a modern field was this particular dissertation by Larry Roberts in 1963. | 1,697 | 1,721 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | It's called Block World. Just as Hubel and Wiesel were discovering that the visual world in our brain is organized by simple edge-like structures, | 1,721 | 1,735 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Larry Roberts, as an early computer science PhD student, was trying to extract these edge-like structures from images as a piece of engineering work. | 1,735 | 1,753 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And in this particular case, his goal is that, you know, both you and I as humans can recognize blocks no matter how they're turned. We know it's the same block, these two are the same block, even though the lighting changed and the orientation changed. | 1,753 | 1,772 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And his conjecture is that, just like Hubel and Wiesel told us, it's the edges that define the structure; the edges define the shape and they don't change, rather than all these interior things. | 1,772 | 1,790 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Larry Roberts wrote a PhD dissertation to just extract these edges. You know, if you work as a PhD student in computer vision, this is like undergraduate computer vision. | 1,790 | 1,804 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | It wouldn't be a PhD thesis today, but that was the first precursor computer vision PhD thesis. | 1,804 | 1,811 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Larry Roberts is interesting. He kind of gave up working on computer vision afterwards and went to DARPA. He was one of the inventors of the internet. | 1,811 | 1,824 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So, you know, he didn't do too badly by giving up computer vision. | 1,824 | 1,829 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | But we always like to say that the birthday of computer vision as a modern field is in the summer of 1966. | 1,829 | 1,841 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | MIT Artificial Intelligence Lab was established before that. Actually, for one piece of history, you should feel proud as a Stanford student. There were two pioneering artificial intelligence labs established in the world in the early 1960s, one by Marvin Minsky at MIT, one by John McCarthy at Stanford. | 1,841 | 1,869 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | At Stanford, the artificial intelligence lab was established before the computer science department. And Professor John McCarthy, who founded the AI Lab, is the one who is responsible for the term artificial intelligence. | 1,869 | 1,884 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So, that's a little bit of a proud Stanford history. But anyway, we have to give MIT this credit for starting the field of computer vision because in the summer of 1966, a professor at MIT AI Lab decided it's time to solve vision. | 1,884 | 1,902 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So, AI was established. We start to understand, you know, first-order logic and all this. And I think LISP was probably invented at that time. But anyway, vision is so easy. You open your eyes. You see the world. How hard can this be? Let's solve it in one summer. | 1,902 | 1,922 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So, especially at MIT, students are smart, right? So, the summer vision project is an attempt to use our summer workers effectively in the construction of a significant part of a visual system. | 1,922 | 1,938 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | This was the proposal for that summer. And maybe they did use their summer workers effectively. But in any case, computer vision was not solved in that summer. Since then, computer vision has become one of the fastest growing fields of AI. | 1,938 | 1,956 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | If you go to today's premier computer vision conferences, called CVPR or ICCV, we have like 2,000 to 2,500 researchers worldwide attending these conferences. | 1,956 | 1,972 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And a very practical note for students, if you are a good computer vision slash machine learning student, you will not worry about jobs in Silicon Valley or anywhere else. | 1,972 | 1,986 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So, it's actually one of the most exciting fields. But that was the birthday of computer vision. Which means this year is the 50th anniversary of computer vision. | 1,986 | 2,000 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | That's a very exciting year in computer vision. We have come a long, long way. So, continuing on the history of computer vision, this is a person to remember: David Marr. | 2,000 | 2,014 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | He was also at MIT at that time, working with a number of very influential computer vision scientists, Shimon Ullman, Tomaso Poggio. And David Marr himself died early in the 70s. And he wrote a very influential book called Vision. It's a very thin book. | 2,014 | 2,040 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And David Marr was thinking about vision. He took a lot of insights from neuroscience. We already said that Hubel and Wiesel gave us the concept of simple structure. Vision starts with simple structure. It didn't start with a holistic fish or holistic mouse. | 2,040 | 2,062 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | David Marr gave us the next important insight, and these two insights together are the beginning of deep learning architecture: vision is hierarchical. | 2,062 | 2,075 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Hubel and Wiesel said, we start simple. But Hubel and Wiesel didn't say we end simple. The visual world is extremely complex. In fact, I take a picture, a regular picture today with my iPhone. There is, I don't know my iPhone's resolution. Let's suppose it's like 10 megapixels. | 2,075 | 2,098 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | The potential combinations of pixels to form a picture are more than the total number of atoms in the universe. That's how complex vision can be. It's really, really complex. | 2,098 | 2,113 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So Hubel and Wiesel told us, start simple. David Marr told us, build a hierarchical model. Of course, David Marr didn't tell us to build it in a convolutional neural network, which we will cover for the rest of the quarter. | 2,113 | 2,129 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | But his idea is that to represent or to think about an image, we think about it in several layers. The first one, he thinks, should be the edge image, which is clearly an inspiration from Hubel and Wiesel. | 2,129 | 2,149 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And he personally called this the primal sketch. The name is self-explanatory. And then you think about two and a half D. | 2,149 | 2,161 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | This is where you start to reconcile your 2D image with the 3D world. You recognize there are layers. I look at you right now. I don't think half of you only has a head and a neck. | 2,161 | 2,177 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Even though that's all I see. But I know you're occluded by the row in front of you. And this is the fundamental challenge of vision. We have an ill-posed problem to solve. | 2,177 | 2,190 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Nature had an ill-posed problem to solve because the world is 3D. But the imagery on our retina is 2D. Nature solved it by first a hardware trick, which is two eyes. It didn't use one eye. | 2,190 | 2,206 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | But then there's going to be a whole bunch of software tricks to merge the information of the two eyes and all this. | 2,206 | 2,212 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So the same thing with computer vision, we have to solve that 2 and a half D problem. And then eventually we have to put everything together so that we actually have a 3D model of the world. | 2,212 | 2,224 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Why do we have to have a 3D model of the world? Because we have to survive, navigate, manipulate the world. When I shake your hand, | 2,224 | 2,234 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | I really need to know how to extend my hand and grab your hand in the right way. That is a 3D modeling of the world. Otherwise I won't be able to grab your hand in the right way. | 2,234 | 2,246 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | When I pick up a mug, the same thing. So that's David Marr's architecture for vision. It's a very high level abstract architecture. | 2,246 | 2,260 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | It doesn't really inform us exactly what kind of mathematical modeling we should use. It doesn't inform us of the learning procedure. | 2,260 | 2,270 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And it really doesn't inform us of the inference procedure, which we will get into through the deep learning network architecture. | 2,270 | 2,278 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | That's the high level view and it's an important concept to learn in vision. And we call this the representation. | 2,278 | 2,290 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | A couple of really important works, and this is a little bit Stanford-centric, just to show you. | 2,290 | 2,298 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | As soon as David Marr laid out this important way of thinking about vision, the first wave of visual recognition algorithms went after the 3D model. | 2,298 | 2,312 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Because that's the goal, right? No matter how you represent the stages, the goal here is to reconstruct the 3D model so that we can recognize objects. | 2,312 | 2,324 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And this is really sensible because that's what we go to the world and do. | 2,324 | 2,330 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So both of these two influential works come from Palo Alto. One is from Stanford, one is from SRI. | 2,330 | 2,336 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So Tom Binford was a professor at the Stanford AI Lab, and he had his student Rodney Brooks propose one of the first so-called generalized cylinder models. | 2,336 | 2,348 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | I'm not going to get into the details, but the idea is that the world is composed of simple shapes like cylinder blocks. | 2,348 | 2,358 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And then any real world object is just a combination of these simple shapes given a particular viewing angle. | 2,358 | 2,367 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And that was a very influential visual recognition model in the 70s. And Rodney Brooks went on to become the director of MIT's AI Lab, and he was also a founding member of the iRobot company, the Roomba and all this. | 2,367 | 2,388 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So he continued very influential AI work. Another interesting model, coming from the local Stanford Research Institute, I think SRI is across the street from El Camino, is this pictorial structure model. | 2,388 | 2,407 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | It's very similar. It has less of a 3D flavor but more of a probabilistic flavor: the objects are still made of simple parts, like a person's head is made of eyes and nose and mouth. | 2,407 | 2,427 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And the parts were connected by springs, allowing for some deformation. So this is getting at a sense of, okay, when we recognize the world, not every one of you has exactly the same eyes or the same distance between the eyes; we allow for some kind of variability. | 2,427 | 2,445 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So this concept of variability started to get introduced in models like this. And using models like this, you know, the reason I want to show you this is for you to see how simple the work was in the 80s. | 2,445 | 2,461 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Because for one of the most influential models in the 80s for recognizing real world objects, the only real world objects in the entire paper are these shaving razors. | 2,461 | 2,475 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And it uses the edges and simple shapes formed by the edges to recognize them, developed by another Stanford graduate. So that's kind of the ancient world of computer vision. | 2,475 | 2,495 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | After seeing black and white or even synthetic images, starting in the 90s we finally start to move into, like, colorful images of the real world. What a big change. | 2,495 | 2,509 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And a very, very influential work here is not particularly about recognizing an object; it's about how we carve out an image into sensible parts, right? So if you enter this room, there's no way your visual system is telling you, oh my god, I see so many pixels. | 2,509 | 2,533 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Right, you immediately group things. You see heads, heads, heads, chair, chair, chair, stage platform, piece of furniture and all this. This is called perceptual grouping. | 2,533 | 2,546 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Perceptual grouping is one of the most important problems in vision, biological or artificial. | 2,546 | 2,554 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | If we don't solve the perceptual grouping problem, we're going to have a really hard time deeply understanding the visual world. | 2,554 | 2,564 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And you will learn towards the end of this class, this course, a problem as fundamental as this is still not solved in computer vision. | 2,564 | 2,575 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Even though we have made a lot of progress before deep learning and after deep learning, we're still grasping for the final solution of a problem like this. | 2,575 | 2,585 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So this is again why I want to give you this introduction for you to be aware of the deep problems in vision. | 2,585 | 2,594 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And also the current state and the challenges in vision: we did not solve all the problems in vision, despite whatever the news says. | 2,594 | 2,605 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Like we're far from developing terminators who can do everything yet. | 2,605 | 2,611 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So this piece of work, called normalized cut, is one of the first computer vision works that takes real world images and tries to solve a very fundamental, difficult problem. | 2,611 | 2,626 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And Jitendra Malik is a senior computer vision researcher, now a professor at Berkeley, also a Stanford graduate. And you can see the results are not that great. | 2,626 | 2,639 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Are we going to cover any segmentation in this class? | 2,639 | 2,643 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | We might. Right, you see we are making progress, but this is the beginning of that. Another very influential work that I want to bring out and pay tribute to. | 2,643 | 2,657 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Even though we're not covering these works in the rest of the course, I think as a vision student it's really important for you to be aware of this, because it not only introduces the important problems we want to solve, it also gives you a perspective on the development of the field. | 2,657 | 2,676 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | This work is called the Viola-Jones face detector, and it's very dear to my heart because as a graduate student, a fresh graduate student at Caltech, it's one of the first papers I read when I entered the lab and didn't know anything. My advisor said, | 2,676 | 2,696 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | this is an amazing piece of work that we're all trying to understand. And then by the time I graduated from Caltech, this very work was transferred to the first smart digital camera, by Fujifilm in 2006. | 2,696 | 2,714 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | It was the first digital camera that has a face detector. So from a technology transfer point of view, it was extremely fast, and it was one of the first successful high level visual recognition algorithms used in a consumer product. | 2,714 | 2,734 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So this work just learns to detect faces, and faces in the wild; it's no longer simulation data or very contrived data, these are any pictures. | 2,734 | 2,746 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And again, even though it didn't use a deep learning network, it has a lot of the deep learning flavor, the features were learned. The algorithm learns to find simple features like these black and white filter features that can give us the best localization of faces. | 2,746 | 2,770 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So this is a very influential piece of work. It's also one of the first computer vision works that was deployed on a computer and could run in real time. | 2,770 | 2,786 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Before that, computer vision algorithms were very slow. The paper actually is called real-time face detection. It ran on Pentium II chips. I don't know if anybody remembers that kind of chip, but it was on a slow chip, and nevertheless it ran in real time. | 2,786 | 2,804 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So that was another very important piece of work. And also one more thing to point out around this time. This is not the only work, but it is a really good representative. | 2,804 | 2,816 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Around this time, the focus of computer vision is shifting. Remember that David Marr and the early Stanford work was trying to model the 3D shape of the object. Now we're shifting to recognizing what the object is. | 2,816 | 2,840 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | We let go a little bit of whether we can really reconstruct these faces or not. There is a whole branch of computer vision and graphics that continues to work on that. | 2,840 | 2,850 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | But a big part of computer vision at this time, around the turn of the century, is focusing on recognition. That's bringing computer vision back to AI. | 2,850 | 2,863 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And today, the most important part of the computer vision work is focused on these cognitive questions like recognition and AI questions. | 2,863 | 2,877 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Another very important piece of work is starting to focus on features. So around the time of face recognition, people start to realize it's really, really hard to recognize an object by describing the whole thing. | 2,877 | 2,897 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | Like I just said, I see you guys heavily occluded. I don't see the rest of your torso. I really don't see any of your legs other than the first row. But I recognize you. And I can infer you as an object. | 2,897 | 2,915 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So people start to realize, gee, it's not necessarily that global shape that we have to go after in order to recognize an object. Maybe it's the features. If we recognize the important features on an object, we can go a long way. | 2,915 | 2,931 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And it makes a lot of sense. Think about evolution. If you're out hunting, you don't need to recognize that tiger's full body and shape to decide you need to run away. | 2,931 | 2,942 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | You know, just a few patches of the fur of the tiger through the leaves can probably alarm you enough. So vision is quick. Decision making based on vision is really quick. A lot of this happens on important features. | 2,942 | 2,960 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So this work is SIFT, by David Lowe again. You saw that name again. It's about learning important features on an object. And once you learn these important features, just a few of them on an object, you can actually recognize this object from a totally different angle, in a totally cluttered scene. | 2,960 | 2,982 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So up to deep learning's resurrection in 2010 or 2012, for about 10 years, the entire field of computer vision was focusing on using these features to build models, to recognize objects and things. | 2,982 | 3,002 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And we've done a great job. We've gone a long way. One of the reasons the deep learning network became more and more convincing to a lot of people is that, as we will see, the features a deep learning network learns are very similar to these features engineered by brilliant engineers. | 3,002 | 3,025 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So it kind of confirmed it: even though we needed David Lowe to first tell us these features work, and then we started to develop better mathematical models to learn these features by themselves, they confirmed each other. | 3,025 | 3,040 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | So the historical importance of this work should not be diminished. This work is one of the intellectual foundations for us to realize how critical or how useful these deep learning features are when we learn them. | 3,040 | 3,063 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | I'm going to skip this work and just briefly say that, because of the features that David Lowe and many other researchers taught us, we can use them to do scene recognition. | 3,063 | 3,076 |
NfnWJUyUJYU | CS231n Winter 2016: Lecture1: Introduction and Historical Context | https://youtu.be/NfnWJUyUJYU | 2016-01-04T00:00:00.000000 | And around that time, the machine learning tools we used were mostly either graphical models or support vector machines. And this is one influential work on using support vector machines and kernel models to recognize the scene. But I'll be brief here. | 3,076 | 3,096