Daj#7482: Yeeees oh God star that's the kind of shit I've been annoying my friends about for years
star#5322: yeah this is the sort of thing that's kind of in that weird zone where it's hard to tell if something is very important or very not important
star#5322: I think probably all it is is that a relatively weak fragment of axioms (2nd order type theory or something? I forget what people cite) is sufficient to develop essentially all of modern math, set theory and size issues aside
star#5322: and so most axiomatic systems that "look reasonable" end up being powerful enough to model that weak fragment
star#5322: and there you go.
Daj#7482: That's just so incredibly non obvious to me that that would be true
star#5322: which part?
Daj#7482: I know it is but...is there some even _deeper_ principle?
Daj#7482: All parts lol. That a small fragment can do all that and that it keeps reoccurring
Daj#7482: What _is_ the fragment? Is there a "true" form?
bmk#1476: Is there any bound on how simple a set of axioms that can capture our current math is, for some measure of simple?
star#5322: I admit I'm not sure that people have in fact proved that most of modern math is in fact define-able in that weak system but it's somewhere between "conjectured" and "obvious"
star#5322: no idea bmk
star#5322: ZFC has a pretty damn low description complexity in most regards I think
Daj#7482: And this isn't even touching on how would you formalize the concept or axioms
star#5322: VBG possibly even lower, depending on how much you penalize axiom schema
star#5322: that's model theory, is it not?
Daj#7482: What is the "space of axioms"? Is that a coherent idea?
Louis#0144: so like homotopy is extremely important in data science but its just not tractable at all. Knowing if components are trivially connected tells you so much about the capacity of the model you need + it can tell you about the capacity and type of representation a model can express (For instance high order homotopy is correlated with the grammatical structure of word embeddings)
Daj#7482: I don't understand model theory lol
Daj#7482: I try, please excuse my low IQ philosopher brain
Louis#0144: but Im not convinced HoTT will make a difference with the kinds of computation we do here
star#5322: me neither, but my book gets here tomorrow so maybe I'll get to it
bmk#1476: What *is* HoTT about anyways
bmk#1476: Aren't types like discrete things
Daj#7482: I don't think HoTT is important for any application. It's like my spare time project to find God while my day job is avoiding techno gods from destroying us lol
star#5322: @Louis I really couldn't care less about the practical applications of Homotopy Theory, I'm planning to learn it pretty much just for pure math and interest alone
Louis#0144: having computers deal with homotopy
Louis#0144: yeah idc about practical applications
star#5322: why do you keep bringing them up then?
bmk#1476: So it's homotopy on top of types, not homotopy of types
Louis#0144: just saying that homotopy comes up in theoretical ML a lot
Louis#0144: Ive seen it come up multiple times
star#5322: fair enough, I am not familiar with any kind of theoretical ML like that
Daj#7482: Don't worry, I keep math far away from my ML lol
star#5322: dependent type theories writ broad do seem pretty useful and potentially important
Louis#0144: So I have a feeling HoTT is going to answer questions in ML but it wont be practically useful
Louis#0144: it might answer a lot of questions about latent representations for instance
star#5322: and HoTT is "just a particular dependent type theory" so it's an interesting point in the design space of all possible type theories
Daj#7482: Man I need to stop talking about this, I still haven't fully grokked what a dependent type even means (I know it as like Lean code but it hasn't clicked yet for me theoretically)
star#5322: @bmk I know about zero homotopy, so I really don't know how to bring that part in. But two points:
1) not sure what you mean by "types are discrete things" - the natural numbers is an example of a type, which is discrete, and the real numbers is another example of a type, which is not discrete
2) the core relevance of HoTT is hard to understand without understanding why dependent type theories are a thing at all. How much do you know about that?
star#5322: Yeah idk about HoTT specifically but dependent types in general are The Shit
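A minimal illustration of what "dependent type" means, sketched in Lean 4 (illustrative only, not from the chat): the length of a vector is part of its type, so a type can depend on a value, and the type checker enforces facts about that value.

```lean
-- `Vec α n` is the type of lists of `α` with exactly `n` elements:
-- the type itself depends on the value `n`.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons {n : Nat} (head : α) (tail : Vec α n) : Vec α (n + 1)

-- The length is checked as part of type checking.
example : Vec Nat 2 := .cons 1 (.cons 2 .nil)
```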
Louis#0144: how do you plan to learn HoTT without an introduction to alg top first
star#5322: Highly recommend learning that in general
Louis#0144: sounds painful
star#5322: Not sure where you pulled that out from. I'm happy to learn background, I'm in no hurry.
star#5322: I'd appreciate references for alg top background if you have stuff you liked learning from though.
Louis#0144: you said you were pulling out your kindle rn to learn about HoTT. Anyway, I recommend munkres for homotopy
Louis#0144: Munkres is literally my favorite textbook of all time
Louis#0144: hands down
star#5322: I thought Munkres was like, point set topology?
Daj#7482: There's this table I saw somewhere showing how HoTT relates terms/proofs with homotopy levels and that somehow seemed very important hm
Louis#0144: He has two books
Louis#0144: first is point set
Louis#0144: second is alg top
star#5322: okay great, I was already planning to read the first one. I assume it's better to read them in order?
Louis#0144: the first book also has a second section with an introduction to homotopy from a category theory perspective while the second book directly covers homology and homotopy
Louis#0144: yes
Louis#0144: the second one assumes youve read the entire first one
Daj#7482: I wish I had the attention span to just commit to a higher math text book, God speed
star#5322: I mean, idk about *commit*
star#5322: I have a long reading list and sometimes I progress some parts of it
bmk#1476: Look I don't really even know algtop or the prereqs for algtop or even the prereqs for those
Louis#0144: I read munkres in a month- I worked on it every day for multiple hours. My summer before uni started lmao
bmk#1476: So like
star#5322: 1) basic topology 2) alg top
Louis#0144: Munkres is loooooong
Daj#7482: I don't think I've ever _actually_ finished a math textbook in particular. Any other subject yes
Louis#0144: You dont finish math books
star#5322: but that's not necessary for dependent types bmk
Louis#0144: theres no need to
Louis#0144: read until you know enough
bmk#1476: I'm about a decade away from actually understanding HoTT
star#5322: Hott is like what if we put our algtop in our type theory
Daj#7482: The idea of getting two consecutive days reading the same book for several hours is inconceivable to my ADHD
star#5322: oh see I am diagnosed with ADHD but I would Happily do that if it's my current special interest
Daj#7482: I think I'll be coding fluent Lean before I grok formal TT lol
bmk#1476: I don't think I've finished a book in years
Daj#7482: Yes the "special interest" part is important!
Louis#0144: tbh Im really ADHD but munkres is just..... so good
bmk#1476: I'm halfway through several dozen books
star#5322: but I do know a lot about type theory, so I expect to get something from at least some of HoTT @Louis like, I'm not going to read the whole 600 pages or whatever in one sitting
Louis#0144: like the challenge problems are so good
Louis#0144: yeah but you need homotopy
Louis#0144: homotopy is hard
Louis#0144: it takes a lot of practice
star#5322: and I assume if I come back to it with more homotopy context I'll get more
star#5322: ¯\_(ツ)_/¯
star#5322: or I'll bounce off it the first time and come back, who knows
star#5322: A friend explained to me what univalence was and I felt like I got it in like ~20m
Louis#0144: I didnt know homotopy until my second course on the topic and now I know enough homotopy to know that its pretty much useless for the work I do
Louis#0144: LOL
star#5322: obviously there's a lot more to it than that
Daj#7482: The "intermittently bash head against topic over many months until it sticks" tactic is my primary method of learning
star#5322: but claiming I'm not going to be able to understand the literal like, statement of a type theory is kind of confusing to me
Louis#0144: Munkres is hard, there was much head bashing
star#5322: but also arguing about this seems extremely pointless
star#5322: thanks for the second Munkres rec, I bet that'll be useful!
Daj#7482: I hope you can translate some of the insights of HoTT for me star! Hah
Louis#0144: I mean ok you can go with only the second munkres but I would strongly recommend at least reading up to like the end of compact spaces in the first one (I think its like page 120 or so)
Louis#0144: That'll make your life way easier
star#5322: I don't know almost anything about topology and it's the most obvious huge glaring hole in my "basic fields of math" knowledge so I was planning to read at least a lot of the first one
Louis#0144: yeah
star#5322: but all of this is very much like, shits and giggles anyway, so who knows what order or what priority any of this will end up getting
Louis#0144: tbf I dont know any category theory but Im trying to fix it
Louis#0144: I just learned the snake lemma on monday
Daj#7482: I wish we could delay the singularity a little so I can waste time on pure maths for a few decades
Daj#7482: tfw
Louis#0144: wdym
Louis#0144: once it happens youll have all the time u want
star#5322: unless ded
Daj#7482: Assuming it goes good lol
Daj#7482: That's kind of my informal day job by now
star#5322: I am very early in category theory too, I'm on like . . . ch2 of Leinster. lol
Daj#7482: When I first learned category theory I had this huge enlightenment because thinking in graphs is good for my aphantasic brain, but then I found type theory and math became code and now all is good
star#5322: Long shot but no one here would happen to have a particular preferred reference for the Robertson Seymour theorem would they?
Louis#0144: I wish I could express all of math as topology- hey wait
Daj#7482: @star I just googled that and my brain got stuck on a funny term
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/740660681438789702/Screenshot_2020-08-05-22-00-29-652.jpeg
Daj#7482: I don't know what this means but I'm scared
Louis#0144: lmao
Louis#0144: computational graph bullshit
Louis#0144: if a computational graph contains a particular minor it means its intractable
Daj#7482: So it isn't cursed children
Daj#7482: Shame
Louis#0144: not yet
Louis#0144: depends on ur vertices
star#5322: wdym by "intractable"
Louis#0144: To be honest the definition I usually use has to do with representations in Lp(N)
Louis#0144: In that case, which is for modeling, it means that if your computational graph contains a particular minor then theres no way to account for the error that the minor creates
Louis#0144: (either through information loss or what not)
Louis#0144: but theres other theorems that say if you have a graph minor of an NN that can do a computation, then the full graph can also do it
Louis#0144: which Ive linked here before
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/740661574699843726/image0.jpg
Louis#0144: its just ways to discuss large computational models in this circumstance
Cornelius#2509: Joined the server.
Sid#2121: Hey @Cornelius ! Welcome to the efficiency wars! Please check the channel description for more info on the project 🙂
Cornelius#2509: Will probably mostly lurk here
bone#4250: Joined the server.
Mr Fuzzypants#7329: Joined the server.
dangirsh#6032: Joined the server.
Deleted User#0000: @Aran Komatsuzaki shared this on twitter https://arxiv.org/abs/2008.02217
Louis#0144: FUCK YES
Louis#0144: I LOVE HOPFIELD NETWORKS
Louis#0144: omg
Louis#0144: This is my shit
Louis#0144: I did research into continuous time hopfield networks for years
Louis#0144: Absolutely shameless plug: https://www.louiscastricato.com/post/joint-representations-of-connectionism-vs-symbolism-via-attractor-networks-and-self-attention
Louis#0144: I wrote about this exact thing in April
Louis#0144: Glad to see someone published it
Louis#0144: I did work in cognitive attractor networks in the neocortex and visual cortex for about two years
Louis#0144: I had the idea that this was true for transformers ever since BERT came out but I got a chance to write about it back in April
Ravna#1831: Do we have an understanding on whether the quality of the data matters that much?
Ravna#1831: A large fraction of the GPT-3 prompts are organized in a way that's very unlike "well-written" articles and books.
Daj#7482: My intuition is it matters a lot, but that intuition is born from GPT1/2 type models
Daj#7482: GPT3, being basically magic, could have very different properties
Ravna#1831: Maybe a high diversity of weirdly-organized texts might help.
Daj#7482: Intuitively I feel giving someone a lot of bad English won't help them learn good English
AI_WAIFU#2844: Your language model will only be as good as the text you feed in, even in the limit of infinite compute.
AI_WAIFU#2844: You'd need another objective to improve quality beyond that point.
Ravna#1831: Yeah but a lot of formatting boilerplates like HTML or forum code might help.
Ravna#1831: A lot of prompts are written like formatting boilerplates already.
Daj#7482: Well, if that's what you're trying to predict sure
Daj#7482: But if I'm trying to predict e.g. Shakespeare, more HTML probably won't help
AI_WAIFU#2844: I think the quality of the writing is more important than the format.
Daj#7482: Or it will because ¯\_(ツ)_/¯
Ravna#1831: No, it would be a bit better (more consistent) than the text you feed in because NNs have denoising properties.
Daj#7482: Could you elaborate on that claim?
AI_WAIFU#2844: It's not obvious to me what "denoising" looks like in the LM context.
Ravna#1831: I mean when you approach infinite compute and infinite data
Ravna#1831: If you sample from higher probabilities, many of the weird cases of the source would not appear
kindiana#1016: I think at the limit, adding more low quality data only affects the quality of unconditional generation, not prompted generation
AI_WAIFU#2844: ^This
Daj#7482: "At the limit" is doing a lot of work here
kindiana#1016: sure, but thats the strongest version of the claim I'd be willing to say lol
Daj#7482: Fair hah
Daj#7482: I've become very wary of "in the limit" claims about NNs
AI_WAIFU#2844: However, to tap into the high quality data, you'd need a high quality prompt.
Daj#7482: I don't think any large model, maybe even _any_ model, has ever reached true convergence over a (potentially infinite) real world training distribution
Daj#7482: So whether something reaches acceptable quality _in practice_ matters a lot
kindiana#1016: yeah, there is a trend of super large, weakly labeled datasets for really big image recognition models, so maybe data quality doesn't matter as much when model sizes increases 🤷
Ravna#1831: OK I know what I was really going to try to ask now. I was trying to ask whether the "clean-up" efforts that are trying to make the data more "text-like" is a good thing, because a lot of use-cases of GPT-3 are not text-like. They are more like programming with a more grammarly-tolerant language.
Daj#7482: > yeah, there is a trend of super large, weakly labeled datasets for really big image recognition models, so maybe data quality doesn't matter as much when model sizes increases 🤷
@kindiana I am still so _viscerally_ shocked that such a strong version of the Scaling Hypothesis seems to be true in the real world it has shaken my fundamental trust in my understanding of how the universe works. _Something_ is up, some kinds of regularities about the "generating function" of reality. I should probably not ramble before I had my morning tea
Daj#7482: > OK I know what I was really going to try to ask now. I was trying to ask whether the "clean-up" efforts that are trying to make the data more "text-like" is a good thing, because a lot of use-cases of GPT-3 are not text-like. They are more like programming with a more grammarly-tolerant language.
@Ravna I think the primary use of GPT3 is human like text generation, not HTML boilerplate? It's also an issue of the small context windows, you can pack much more information into a small amount of english than HTML
AI_WAIFU#2844: For now at least. How much text is accessible on github?
Daj#7482: A lot
Daj#7482: We definitely want to train it on code too don't get me wrong
Daj#7482: Just...look at some random Common Crawl documents
Daj#7482: It's really not something you'd want to see in your output
Ravna#1831: I don't mean HTML boilerplate per-se. I mean a lot of prompts are written in a "A-B, A-B pair" way and then you use A to make it try to infer B.
Ravna#1831: It's less like text and more like trying to make it infer things from a fixed format.
Daj#7482: Yes and it works because ¯\_(ツ)_/¯
Daj#7482: I can't emphasize enough how many things GPT3 does that are just _not in the training data_
AI_WAIFU#2844: Idea.
AI_WAIFU#2844: Generate random programs and run them.
AI_WAIFU#2844: Then use the outputs as training data
Ravna#1831: That's just good old program synthesis
Ravna#1831: Doesn't work very well so far
Daj#7482: That would be like running it on a random number generator basically?
AI_WAIFU#2844: Not quite.
Daj#7482: Unless you mean generating somehow semantically useful programs
Daj#7482: Don't get me wrong I think that's a rad idea
Daj#7482: "GPT3 solve the Halting Problem pls"
Ravna#1831: > Something is up, some kinds of regularities about the "generating function" of reality.
Daj#7482: I'd be so down to do that
AI_WAIFU#2844: I'm thinking of how you would bake the Solomonoff prior into GPT-3
Ravna#1831: Same reason that jpg and mp3 works I think. High-frequency is negligible in reality.
Daj#7482: Solomonoff approximation works terribly from what I remember of approximate AIXI
AI_WAIFU#2844: Approximate AIXI is garbage
Ravna#1831: In jpg and mp3, you use a few parameters to represent the low frequency parts and that's enough.
Daj#7482: > Same reason that jpg and mp3 works I think. High-frequency is negligible in reality.
@Ravna Yea but this is just so non obvious a priori. And how does this relate to bigger models and NN priors? I dunno man
AI_WAIFU#2844: All patterns and structures (that could be computed in a reasonable time) would be learned by the neural network trained on that data.
Ravna#1831: The lottery ticket hypothesis says a bigger model is just trying to find a better small model within it. It's doing implicit searching over small models.
Daj#7482: > All patterns and structures (that could be computed in a reasonable time) would be learned by the neural network trained on that data.
@AI_WAIFU Well yes, this is where computational complexity comes in. Solving any solvable problem with a Turing Machine is _theoretically_ possible
Daj#7482: > The lottery ticket hypothesis says a bigger model is just trying to find a better small model within it. It's doing implicit searching over small models.
@Ravna Yea I'm still unsure how I feel about lottery ticket. Random numbers just _happen_ to find useful circuits? Wild
AI_WAIFU#2844: A practical implementation of the idea would bound the computation or use a non-turing complete language that provably halts in a reasonable amount of time.
Daj#7482: I love the idea
Daj#7482: I'm skeptical by default but that sounds _super_ fun
AI_WAIFU#2844: I think it would work well as a form of data augmentation
Daj#7482: Could come up with some clever embeddings of the AST
AI_WAIFU#2844: 10T of text 90T of random program output.
Daj#7482: I want to use GPTuring as an editor plugin that live predicts what your program will approximaltey do
AI_WAIFU#2844: You can go further than that. By mixing in the text you might even be able to just use natural language descriptions and recover the output of the program
Daj#7482: > 10T of text 90T of random program output.
@AI_WAIFU I cannot stress enough that _almost all_ (I recently learned this was a technical term) programs have trivial or random output
Daj#7482: I like it
AI_WAIFU#2844: Trivial I get
AI_WAIFU#2844: Random?
AI_WAIFU#2844: How
Daj#7482: Something something Rule 30
Daj#7482: https://mathworld.wolfram.com/Rule30.html
AI_WAIFU#2844: Rule 30 isn't random.
Daj#7482: Randomness is just a hypothesis anyways
Daj#7482: There is no proof "true" randomness exists
AI_WAIFU#2844: Rule 30 is a pattern beyond your comprehension
Ravna#1831: > > Yea I'm still unsure how I feel about lottery ticket. Random numbers just happen to find useful circuits? Wild
I think the lottery ticket hypothesis can be understood as: overparameterization tends to make an ensemble (like weighted average) of simple functions, instead of weird zigzag functions that people who are paranoid of "overfitting" usually like to use as examples.
Daj#7482: Just that we haven't found a short program to describe certain "random" patterns yet
AI_WAIFU#2844: The proof that its not random is straight forward
Daj#7482: My point is that it is as random as any other object we know
Daj#7482: If it's not random then nothing is
Daj#7482: Probably
Daj#7482: > I think the lottery ticket hypothesis can be understood as: overparameterization tends to make an ensemble (like weighted average) of simple functions, instead of weird zigzag functions that people who are paranoid of "overfitting" usually like to use as examples.
@Ravna Yes this makes sense to me, I'm just trying to combat hindsight bias by reminding myself that I did not predict this a priori
AI_WAIFU#2844: The entropy of a random variable that is the output of a deterministic function is bounded by the entropy of the random variable that acts as an input to that function.
AI_WAIFU#2844: So if your distribution over programs has 10 bits of entropy.
AI_WAIFU#2844: Your program outputs will at most have 10bits of entropy in their output.
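The bound being invoked here, written out (standard information theory, added for reference): for a deterministic function $f$ applied to a random variable $X$,

```latex
H(f(X)) \le H(X)
\qquad\Longrightarrow\qquad
H(\text{output}) = H(f(\text{program})) \le H(\text{program}) \approx 10 \text{ bits}
```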
Daj#7482: Sure but my point is you cannot point to any truly random function without invoking a magic source of endless entropy
Daj#7482: And the universe seems to have limited entropy
Daj#7482: (or is the product of some unknown short function)
Daj#7482: Randomness is conditional on the knowledge of the observer
Daj#7482: It's not an inherent property of a program
Daj#7482: Except Kolmogorov Complexity, which is provably incomputable
AI_WAIFU#2844: I think we've got different definitions of random.
Daj#7482: For me random = incompressible
AI_WAIFU#2844: Right, rule 30 output is super compressible.
AI_WAIFU#2844: rule30 output is not random.
Daj#7482: Yea, _but so is nothing else then_
Daj#7482: That's my point
AI_WAIFU#2844: I don't follow
Daj#7482: If rule 30 isn't random then no function is provably random
AI_WAIFU#2844: Yes.
Daj#7482: Because if I give you a random chunk of rule 30 output and you don't know it's rule 30, you can't compress it
Daj#7482: Oh so we agree
AI_WAIFU#2844: No I can.
AI_WAIFU#2844: If I have enough compute.
Daj#7482: Sure if you have AIXI then you can compress anything to its Kolmogorov Complexity
Daj#7482: But that is incomputable even with infinite compute
Daj#7482: Which is my definition for "does not exist"
AI_WAIFU#2844: Ok, but I can put an upper bound on the K complexity
Daj#7482: Sure but being able to _prove_ randomness would be equivalent to solving the halting problem
AI_WAIFU#2844: The bound gets tighter as I use more compute, and becomes exact in the limit.
Daj#7482: Yep I agree
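One way to write out the bound AI_WAIFU is describing (standard Kolmogorov-complexity notation, added here for clarity): run every candidate program for $t$ steps and take the shortest one that prints $x$.

```latex
K(x) = \min\{\, |p| : U(p) = x \,\}, \qquad
\hat{K}_t(x) = \min\{\, |p| : U(p) \text{ halts within } t \text{ steps and prints } x \,\}
```

$\hat{K}_t(x)$ only decreases as $t$ grows and converges to $K(x)$, but no finite $t$ certifies that the bound is tight, which is exactly the "can't prove randomness" point above.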
Daj#7482: Random functions might exist, but you can't _prove_ it
Daj#7482: Even with infinite compute
AI_WAIFU#2844: Agreed on all points.
AI_WAIFU#2844: But if I have enough compute and a Solomonoff prior, I can provide strong Bayesian evidence that your chunk of rule 30 output was in fact a chunk of rule 30 output instead of true randomness.
Daj#7482: Yep probably, but that's a specific of your choice of prior
AI_WAIFU#2844: ???
Daj#7482: Actually I retract that statement
Daj#7482: I forgot some details of the Solomonoff Prior that I would have to reread first
AI_WAIFU#2844: It's called the universal prior for a reason.
Daj#7482: Sure
Daj#7482: But the space of all programs is _weird_
Daj#7482: I'd expect you'd find infinite other programs that explain the output as well as rule 30
AI_WAIFU#2844: And you will.
AI_WAIFU#2844: There are infinitely many programs that all produce the same output.
AI_WAIFU#2844: Because you can keep appending compilers to the front of your program description without changing its functional behaviour.
Daj#7482: Yup
Daj#7482: Ehh it's really not too useful to even discuss something that solves the halting problem. It breaks everything really
Daj#7482: Lots of unintuitive properties become true
Daj#7482: Pretty sure the universe does not have a halting oracle
Daj#7482: If it does then ¯\_(ツ)_/¯¯\_(ツ)_/¯¯\_(ツ)_/¯¯\_(ツ)_/¯
AI_WAIFU#2844: I change my previous statement. I can't provide bayesian evidence that the string you gave me was output by rule 30. I can provide evidence that it is not random.
Daj#7482: Agreed that is a more accurate claim
AI_WAIFU#2844: But that's the motivating idea behind training a NN on the output of programs.
Daj#7482: I think you could get a _lot_ of bang for your buck by not training it on truly random programs though
AI_WAIFU#2844: The ideal predictor is the Solomonoff predictor
Daj#7482: Since the space of programs humans care about is tiny compared to the space of all programs
AI_WAIFU#2844: I would only filter out those with trivial output.
Daj#7482: There is a Turing machine with I think ~500 states whose halting is independent of the ZFC axioms
Daj#7482: Programs are _weird_
AI_WAIFU#2844: Claim: the stronger an NN is at learning sequences the smaller its KL divergence to the Solomonoff prior.
Daj#7482: Donald Knuth thinks P=NP just because programs are so damn weird lol
Daj#7482: > Claim: the stronger an NN is at learning sequences the smaller its KL divergence to the Solomonoff prior.
@AI_WAIFU I would have to think about this and whether it is even a meaningful statement
AI_WAIFU#2844: Training on output from the Solomonoff prior reduces that KL divergence.
Daj#7482: But the solomonoff prior is _incomputable_
Daj#7482: That's really important
AI_WAIFU#2844: And so is that KL divergence.
Daj#7482: It's like saying "If we just approximate a time machine..."
AI_WAIFU#2844: I still think the claim is roughly true
Daj#7482: It feels roughly true but so does time travel to my human brain
AI_WAIFU#2844: I also think you can bound that divergence.
AI_WAIFU#2844: and minimize the bound
AI_WAIFU#2844: Computably
AI_WAIFU#2844: I'd have to sit down and work through the math though.
Ravna#1831: It's much weirder than time machine. Time machine only helps reducing PSPACE to P (if you consider having a non-zero failure rate of time travel it couldn't do even that). That's much easier than turning the uncomputable into the computable.
Daj#7482: ^
AI_WAIFU#2844: In the limit of insane amounts of compute what I'm proposing is very weird.
Daj#7482: I think humans aren't built to understand just how _weird_ solving the halting problem would be
Daj#7482: It would collapse all of physics
AI_WAIFU#2844: Reminds me of the big number competitions
Daj#7482: _Oh god busy beaver numbers_
AI_WAIFU#2844: and numbers like BIGFOOT
AI_WAIFU#2844: https://googology.wikia.org/wiki/BIG_FOOT
AI_WAIFU#2844: Anyways, if I have the time I might try to write a program output generator.
AI_WAIFU#2844: Then we can add it to the pile.
Daj#7482: I don't know if we'd add it to our actual GPT3 run since we want to match OA characteristics
Ravna#1831: Back to GPT-3. Does it learn to solve all the different A-B patterns we feed to it as prompts mainly from article-like texts, or from different types of snippet structures like HTMLs, dictionaries, markdowns, bullet-points, etc on the internet?
AI_WAIFU#2844: what if it was just like 1GB?
Daj#7482: > Back to GPT-3. Does it learn to solve all the different A-B patterns we feed to it as prompts mainly from article-like texts, or from different types of snippet structures like HTMLs, dictionaries, markdowns, bullet-points, etc on the internet?
@Ravna ¯\_(ツ)_/¯
Daj#7482: No one knows how GPT3 works and anyone that claims to is talking out of their ass
Daj#7482: This should be the headline of my Twitter feed
AI_WAIFU#2844: Actually, I think the best way to go about this (don't do it for GPT3) Is to have a variable that the network conditions on. 1 when predicting program output, 0 otherwise.
Daj#7482: > what if it was just like 1GB?
@AI_WAIFU Maybe if it's like python or js programs? Could be useful to people
AI_WAIFU#2844: That way you can *transfer learn* from the space of program outputs
Daj#7482: I'd have to think about that idea
Daj#7482: It would be an altered architecture so ¯\_(ツ)_/¯
Daj#7482: Maybe a encoder-decoder architecture would work even
Daj#7482: Encode the program, decode the output
AI_WAIFU#2844: Yeah, I'm not saying to do it rn.
Ravna#1831: It should be theoretically easy to test out though. Train a book-gpt3 and an internet-gpt3 and compare. But it's practically very expensive...
Daj#7482: Training a GPT3 is indeed not something we can just do a few times unfortunately
Daj#7482: GPT2 is doable
AI_WAIFU#2844: But when we run out of data in the english language and we need to augment the data...
Daj#7482: But it's unclear how different GPT2 really is from GPT3
Daj#7482: > But when we run out of data in the english language and we need to augment the data...
@AI_WAIFU We will not run out of data until like 10T
AI_WAIFU#2844: I know.
Daj#7482: And then we have images, videos, sound
AI_WAIFU#2844: I said 10T and 90T for a reason.
Daj#7482: I like your idea I'm just redteaming it a bit
Daj#7482: ~~Also because the space of all programs is _definitely_ where Yogg-Sothoth is~~
AI_WAIFU#2844: >Implying summoning Yog Sothoth with GPT-4 is a bad idea
Daj#7482: We should just train 1Q on RAM states of a computer reading and writing live sensor feeds
Daj#7482: > >Implying summoning Yog Sothoth with GPT-4 is a bad idea
@AI_WAIFU This is an AI _safety_ discord
Daj#7482: This is what we _don't_ do
Daj#7482: haha
AI_WAIFU#2844: But in all seriousness I think that 1bit real data > 1bit Solomonoff data.
Daj#7482: Not if that 1bit encodes a useful part of Chaitin's Constant lol
Daj#7482: but yeah I know what you mean
AI_WAIFU#2844: Since KL divergence(realworld || realworld) < KL divergence(realworld || SolomonoffPrior)
Ravna#1831: In practice we probably won't generate true random programs (whatever that means) and their outputs. We might train a prior on github first and then generate "somewhat random" programs as training data.
Daj#7482: Yes that idea I love
Daj#7482: I was thinking about doing this with theorem prover code
Daj#7482: But Python programs is probably easier
kindiana#1016: just execute random github gists xP
Ravna#1831: Do we even know how to generate non-trivial new theorems and their proofs as training data?
AI_WAIFU#2844: I say have a filter for programs that look trivial. But beware of output that looks random. Output that looks random is not random and what might look random to you might not look random to a 1T parameter NN
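A minimal sketch of the kind of generator being floated here (my own guess at one concrete form, not anyone's actual plan): sample programs from a tiny expression language that halts by construction, run them, and drop the ones whose output looks trivial.

```python
import random

# Tiny arithmetic expression language over a variable x: every "program" is a
# closed expression, so it halts by construction (one way to realize the
# "non-Turing-complete language that provably halts" idea above).
OPS = ["+", "-", "*"]

def random_expr(depth: int) -> str:
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", str(random.randint(0, 9))])
    return f"({random_expr(depth - 1)} {random.choice(OPS)} {random_expr(depth - 1)})"

def trivial(outputs) -> bool:
    # Crude "looks trivial" filter: constant output.
    return len(set(outputs)) <= 1

random.seed(0)
samples = []
while len(samples) < 5:
    expr = random_expr(depth=3)
    outputs = [eval(expr, {"x": x}) for x in range(5)]  # only evaluates our own grammar
    if not trivial(outputs):
        samples.append(f"PROGRAM: f(x) = {expr}\nOUTPUT: {outputs}")

print("\n\n".join(samples))  # candidate (program, output) training text
```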
Daj#7482: > just execute random github gists xP
@kindiana This is how the AGI takeover started
Daj#7482: > Do we even know how to generate non-trivial new theorems and their proofs as training data?
@Ravna Probably not, I didn't look too deeply into it
Daj#7482: Man now I want to build GPTuring more than GPT3 almost
Daj#7482: Stupid AGI timelines being so short, I have other projects
kindiana#1016: I think program prediction might require something other than GPT-next token prediction, because the teacher forcing effect might be too strong
Daj#7482: GPT is magic
kindiana#1016: but then again, I wouldn't have thought gpt3 is possible with gpt, so idk lol
Daj#7482: No predictions about magic, only :empiricism:
AI_WAIFU#2844: ???
Daj#7482: We are in the alchemy phase of our understanding of NNs and optimizers generally
Daj#7482: We don't have any theories strong enough to predict what GPT can or can't do a priori
Ravna#1831: In program prediction you can feed not only the partial program, but also the execution trace and memory state of the partial program to the NN. There's a whole bunch of publications on that. But so far they are still all toy examples that are usually less than 10 tokens per program.
Daj#7482: So an easy SOTA for us
Daj#7482: :brr:
AI_WAIFU#2844: Related:
AI_WAIFU#2844: https://arxiv.org/abs/2006.08381
Daj#7482: This sounds like a Neuromancer character
Ravna#1831: You also have an output set, so you can train a value function on top of it to do AlphaZero-style tree search.
Ravna#1831: You actually have much more at hand to work on than GPT's unsupervised learning.
Ravna#1831: But it's still harder, or maybe... just comparably un-explored with high-compute?
Daj#7482: "Unexplored with high compute" is the ML field in a nutshell
Daj#7482: With the scaling hypothesis confirmed you can probably crank out SOTA papers on whatever you want with enough TPUs
AI_WAIFU#2844: I've redoubled my skepticism of SOTA anything because of the scaling hypothesis
Daj#7482: Yup total waste of time
Daj#7482: Part of the reason I'm leaving academia
AI_WAIFU#2844: Like, does your method *really* help or do you just have more money?
Daj#7482: Grad Student Descent
AI_WAIFU#2844: Also a good reason to leave academia
Daj#7482: I love academia so much I feel like being driven from my homeland :<
AI_WAIFU#2844: I think there's some truth to that.
Ravna#1831: > With the scaling hypothesis confirmed you can probably crank out SOTA papers on whatever you want with enough TPUs
Ravna#1831: It's not the case in Go/Chess. In these cases, bigger NN size implies fewer searches per second. So the sweet spot of NN size for fixed-time move performance is orders of magnitude smaller than the sweet spot of NN size for single-inference performance like GPT's.
AI_WAIFU#2844: When ML was taking off and the fruit/practitioner ratio was high it was good. But now that things are reaching an equilibrium, molochian dynamics start to take over.
AI_WAIFU#2844: Yeah but if I have more compute I can get a higher ELO.
AI_WAIFU#2844: If you control for compute things become interesting, if you don't you get the current problem in ML.
Daj#7482: > When ML was taking off and the fruit/practitioner ratio was high it was good. But now that things are reaching an equilibrium, molochian dynamics start to take over.
@AI_WAIFU Yea unfortunately true...and I can't let Moloch control me if I want to make meaningful contributions to AI alignment. Sad
AI_WAIFU#2844: Yup. Fortunately, other avenues are available, and given the right conditions you can make those dynamics work in your favor.
Daj#7482: I sure hope so, we'll see what happens
AI_WAIFU#2844: e.g. get undergrads to make their capstones center on trying ideas you don't have time to do, and give them a $1000 budget to do it.
Daj#7482: Eh still so many bad incentives, especially in publishing
Daj#7482: Maybe I'll get rich off my current job, or get hired by OpenAI or something
Daj#7482: Seems more aligned
Daj#7482: Or at least, more independent
AI_WAIFU#2844: Tru.
AI_WAIFU#2844: I think even just hosting this discord positions you well. How many ML researchers with connections are here?
Daj#7482: This project did turn out far more interesting than expected that's for sure hah
Daj#7482: But that's been my experience in general. As silly as it sounds, just doing interesting things seems to _actually_ attract interesting people
Daj#7482: Who would have thought haha
AI_WAIFU#2844: I need sleep.
AI_WAIFU#2844: Good night.
Daj#7482: Night!
Ken#8338: A new metalearning approach https://arxiv.org/abs/2008.02219v1
Louis#0144: cool
Louis#0144: dynamic programming can be differentiable
Louis#0144: so this is of particular interest
TylerRoost#8017: Joined the server.
Daj#7482: Hey @TylerRoost ! Welcome to the Foresight After Party! Check the channel description for info and don't hesitate to ask questions!
TylerRoost#8017: Cool, this all gives me great excitement
Louis#0144: @Aran Komatsuzaki I got so many new followers from ur tweet
Louis#0144: Like 25
Louis#0144: LMAO
Aran Komatsuzaki#5714: @Louis That's incredible! lol
Aran Komatsuzaki#5714: Hey guys. I've found a talk by an author (Nick) of GPT-3 on GPT-3 on the Weights and Biases YT channel. They talk a lot about the things not written in the paper, so highly recommended. Also, many of WB's videos are highly underrated despite the quality of the talks and the presenters (including many OpenAI folks). I'd appreciate it if you can promote it on Twitter and Reddit if you're interested! https://www.youtube.com/watch?v=ZyQkaml-5iE&t=3531s
Aran Komatsuzaki#5714: @Louis You got more followers than the number of likes you got in your tweet lol
Louis#0144: Different people liked different tweets
Louis#0144: But yeah I went from 311 to like 333 and now I’m back down to 331
shawwn#3694: @Louis what's your twitter?
Louis#0144: @lcastricato
shawwn#3694: Oh, part of the reason might be that you don't have AI in your bio
Louis#0144: I do have AI in my bio
Louis#0144: ?
bmk#1476: not prominently enough i guess lol
Louis#0144: “Pure Mathematician @uwaterloo gone Narratologist @gtcomputing. I like AR/VR/AI/Neuro/NLP. Will eventually make an AI write a full novel.”
shawwn#3694: It's there, but it's not exactly a primary thing. I only mention it because it helped me back when I was at around the same follower count
shawwn#3694: people in ML are more likely to follow people in ML.
Louis#0144: Lmao
bmk#1476: i actively refuse to make my bio informative
bmk#1476: my tweets all being about ML should tip people off
shawwn#3694: y tho
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/741109799873413171/Een-Ke3U4AEX2Os.png
bmk#1476: if this isnt about ml idk what is
shawwn#3694: the bio is the first thing that people see about you when you follow them. Lots of my followers came from being followed back
shawwn#3694: and I follow pretty much anyone who follows me if their bio is even remotely interesting
shawwn#3694: (in fairness, Louis's bio is!)
bmk#1476: hm
bmk#1476: ill change it soon™
Aran Komatsuzaki#5714: @bmk are you Brock? Can you read Japanese as well? lol
shawwn#3694: (I can't, so I've been unable to enjoy that meme ;_;)
bmk#1476: i know a small amount of japanese
Aran Komatsuzaki#5714: Never expected this meme existed (in japanese).
Aran Komatsuzaki#5714: and funny
Aran Komatsuzaki#5714: cool
Louis#0144: Did u guys know swans can roar
Louis#0144: I fucking had a heart attack today
bmk#1476: I've been trying to make it not small
bmk#1476: But yeah
Louis#0144: What does it say in Japanese
bmk#1476: Hey you! Did you know that a single GPT-3 contains as many parameters as one GPT-3?
Louis#0144: Oh my god
bmk#1476: The funny part is that I can read this meme not because I know enough Japanese but actually because I know the Chinese 含 ("to contain")
Aran Komatsuzaki#5714: haha
bmk#1476: Since I can use hanzi as a crutch, my japanese reading comprehension appears much higher than it actually is, as evidenced by my inability to have a conversation at the level of a Japanese 101 class
Aran Komatsuzaki#5714: I'm a native Japanese speaker with no understanding of Chinese. But I can understand a bit of Chinese text from the characters.
Louis#0144: But yeah fuck swans
bmk#1476: Haha nice! It's almost exactly the other way around for me (I'm not quite native level in Chinese though)
bmk#1476: I know like a dozen different random tidbits of Japanese and the rest is glued together with literally interpreting kanji
bmk#1476: We need a languages section in the server
bmk#1476: One channel for each non English language spoken by at least two members of the server
bmk#1476: I'd love to (try to) chat in Japanese haha
Aran Komatsuzaki#5714: I guess it's harder on your direction, since Japanese contains less semantically apparent characters (hanzi/kanzi), but I heard from Chinese people that it's still somewhat readable.
bmk#1476: Yeah it's really difficult when I just see a giant wall of kana
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/741112689157210202/image0.png
Louis#0144: Perfectly balanced
bmk#1476: wie allen dingen seien sollten
Aran Komatsuzaki#5714: as all things should be
bmk#1476: 就像所有东西该是的样子 ("as all things should be", in Chinese)
zphang#7252: Less AI more 爱 (ài, "love")
bmk#1476: 癌 (ái, "cancer")
Deleted User#0000: https://www.reddit.com/r/MachineLearning/comments/i4ko0u/r_hopfield_networks_is_all_you_need/g0nm1f7/ attention is all we need.
Aran Komatsuzaki#5714: you really like that phrase
Deleted User#0000: i do, it grew on me over time 😄
Deleted User#0000: i dived into the hopfield code last night https://github.com/ml-jku/hopfield-layers/blob/master/modules/functional.py
val#8908: Joined the server.
Deleted User#0000: so i can figure out what they possibly have tried and did *not* work for them
Aran Komatsuzaki#5714: i'd just interpret their comment as that hopfield net isn't meant for improving the sota.
Deleted User#0000: i think it may be safe to say the scaling factor can be kept the way it is, i saw they messed around with it being a trainable parameter
Deleted User#0000: yup agreed, i think its really the new framing that helps
Deleted User#0000: maybe it'll open up new avenues for research, even if attention seems so distilled
Deleted User#0000: hopfield has a wealth of theory behind it
Deleted User#0000: it's kind of funny to make the connection, we were perhaps building it all along
Aran Komatsuzaki#5714: you say attention is all you need? the real attention is the architecture we have made all along
Deleted User#0000: do you mean the transformers architecture?
Aran Komatsuzaki#5714: whatever architecture we ended up with
Aran Komatsuzaki#5714: it's a joke
Deleted User#0000: oh lol, yea you are good at spinning off DL jokes
Deleted User#0000: this one went over my head
Daj#7482: I haven't yet read the Transformer <-> Hopfield paper, is it good?
Daj#7482: Can we use Hebbian Learning for transformers yet? hah
Aran Komatsuzaki#5714: no it's not going to improve transformer. it's to explain it theoretically.
Aran Komatsuzaki#5714: they say self-attention = hopfield net theoretically, which is a nice thing to learn.
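For reference, the correspondence as I understand it from the paper: the update rule of the modern continuous Hopfield network, with stored patterns as the columns of $X$ and state (query) $\xi$, is

```latex
\xi^{\text{new}} = X \,\operatorname{softmax}\!\left(\beta\, X^{\top} \xi\right)
```

With $\beta = 1/\sqrt{d_k}$, the patterns playing the role of keys/values and the state the role of the query, one step of this update applied to all queries at once is $\operatorname{softmax}(QK^{\top}/\sqrt{d_k})\,V$, i.e. standard self-attention.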
Daj#7482: Shame, if someone could get local update rules working with transformers and MTF we'd have one hell of a paper on our hands
Deleted User#0000: https://twitter.com/arankomatsuzaki/status/1270981237805592576?s=20
Deleted User#0000: one of your most popular tweets 😄
Aran Komatsuzaki#5714: also, the above tweet gave me 70+ followers, which is another practical application of the paper.
Deleted User#0000: bbl, ice cream needs her walk
Daj#7482: Your dog's name is _ice cream_?
Daj#7482: Oh god I love him/her
bmk#1476: Yes, money, like the multiple thousands of dollars budget that we're working with
Aran Komatsuzaki#5714: didn't know that lol
Daj#7482: > Yes, money, like the multiple thousands of dollars budget that we're working with
@bmk Thousands?!
Daj#7482: haha
Daj#7482: Turns out if you get free labor and free compute, ML is pretty cheap
bmk#1476: i mean
bmk#1476: we're burning through cash at a not-insignificant rate rn
bmk#1476: i mean, peanuts compared to the millions oft quoted
bmk#1476: but still, for a couple of random internet people fuelling the project off donations, we're using quite a bit of cash
Daj#7482: Yeah it's not free
Daj#7482: But we are using like 100s of thousands of dollars in TPUs hah
bmk#1476: lol
Deleted User#0000: @Aran Komatsuzaki do you have any opinions on features shuffling? i was rereading the Dextra paper, and it reminded me of this paper https://arxiv.org/abs/2004.04662 , which takes the idea to the extreme
Aran Komatsuzaki#5714: I think my essay implied this, but I'm not really a fan of the traditional long-range LM like this one.
Aran Komatsuzaki#5714: i mean the method that tries to extend the TBPTT length with some trick like sparse attn or this one.
Deleted User#0000: you are referring to igloo?
Deleted User#0000: for the sparse attn?
Aran Komatsuzaki#5714: not just igloo but also things you know like Routing Transformer, OpenAI's Sparse Transformer etc.
Aran Komatsuzaki#5714: pretty much everything I've recommended lol
Deleted User#0000: yup 🙂
Aran Komatsuzaki#5714: * recommended to you until two months ago
Aran Komatsuzaki#5714: they'll give you some improvement, but i think it's not that scalable.
Deleted User#0000: the BigBird paper actually has a nice theoretical section on sparse attention
Aran Komatsuzaki#5714: yeah
Deleted User#0000: most sparse attention papers don't have that
Aran Komatsuzaki#5714: well, i think even bigbird has a problem
Aran Komatsuzaki#5714: and it was addressed in my essay sec. 3 i guess
Aran Komatsuzaki#5714: no sec. 4
Deleted User#0000: eagerly await for your essay to be released 🙂
Aran Komatsuzaki#5714: you can read it from the link i sent to you
Aran Komatsuzaki#5714: sec. 4 is completed, so you can read it now
Aran Komatsuzaki#5714: actually, sec. 1, 2 and 3 are also pretty much done.
Aran Komatsuzaki#5714: the most serious limitation of extending TBPTT length with efficient attention is that
Aran Komatsuzaki#5714: it doesn't improve the per-token loss for the earlier tokens.
Aran Komatsuzaki#5714: earlier tokens mean the first N tokens, where N is the TBPTT length of your vanilla Transformer (say N = 1024).
Aran Komatsuzaki#5714: As you know, no matter how long your context length is, it doesn't improve the prediction of the earlier tokens.
Aran Komatsuzaki#5714: then what improves them?
Aran Komatsuzaki#5714: one way is to increase the parameter count.
Aran Komatsuzaki#5714: so, conditional computation would do it without added computes.
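Aran's point, written out in symbols (notation added here, not from the chat): with a training context of $c$ tokens the per-token loss is

```latex
\mathcal{L} = -\frac{1}{T}\sum_{t=1}^{T} \log p_\theta\!\left(x_t \mid x_{\max(1,\,t-c)}, \ldots, x_{t-1}\right)
```

For tokens with $t \le N$ (the first $N$ positions, $N$ being the vanilla TBPTT length) the conditioning window is already the whole available prefix, so raising $c$ above $N$ leaves those loss terms unchanged; only more capacity per token, e.g. more parameters or conditional computation, can improve them.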
Deleted User#0000: i wish we had better hardware for conditional compute, the current paradigm is not setup for it
Deleted User#0000: whenever i touch code related to that, it feels like a big hack
Aran Komatsuzaki#5714: well, cond comp has a limit, too.
Deleted User#0000: where's the link to your paper lol
Aran Komatsuzaki#5714: the upper limit of its performance is naively increasing the param count without cond comp. but then, gpt-3 performs worse than Fusion-in-Decoder at open-domain QA.
Deleted User#0000: is it above?
Aran Komatsuzaki#5714: sent to you on twitter dm
Aran Komatsuzaki#5714: long ago
Aran Komatsuzaki#5714: i'll resend
Deleted User#0000: woohoo
Deleted User#0000: lol
Aran Komatsuzaki#5714: well, why not i attach it here now: https://www.overleaf.com/read/tcdxfrvfvtbw
Aran Komatsuzaki#5714: sec 5 and later are under construction.
Deleted User#0000: awesome! will read
Aran Komatsuzaki#5714: it's a more detailed ver. of my post on reddit to answer the questions by gwern. also, it has some completely new things on evaluation and supervision in the light of achieving generalist AI like AGI.
Deleted User#0000: i do want to build a retrieval method with you at some point
Aran Komatsuzaki#5714: also, i'll add some new thing on memory.
Deleted User#0000: either one of the pretraining methods or QA
Aran Komatsuzaki#5714: as well as my more detailed design of modified MARGE
bmk#1476: wow, this looks really interesting
bmk#1476: i look forward to reading it in depth when it is finished
Aran Komatsuzaki#5714: thanks, everyone 🙂
Deleted User#0000: ok, off to do some coding, bbl
Aran Komatsuzaki#5714: @Deleted User I'll make the design more concrete and send it to you, so that we can build together.
Deleted User#0000: yup! maybe these retrieval methods would make sure 'money' isn't all we need lol
Aran Komatsuzaki#5714: haha hopefully lol
bmk#1476: oh man this could let us scale to fill the entire tpu memory
bmk#1476: my only problem with moe is how much performance regresses
bmk#1476: (moe and similar)
Aran Komatsuzaki#5714: yeah right
Deleted User#0000: the way i see it, gpt-3 is https://upload.wikimedia.org/wikipedia/commons/thumb/1/12/1925_Ford_Model_T_touring.jpg/280px-1925_Ford_Model_T_touring.jpg
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/741346037779660922/unknown.png
bmk#1476: this is nice and all when compute is the bottleneck, but very quickly you run into a memory bottleneck too
Aran Komatsuzaki#5714: even if i don't make a workable retrieval-based lm, pretty sure FAIR researchers (like Mike Lewis) will soon make one, so i'm optimistic.
Aran Komatsuzaki#5714: oh i have a good paper for you @bmk
bmk#1476: ooh
Aran Komatsuzaki#5714: https://twitter.com/arankomatsuzaki/status/1270488608068136961
bmk#1476: oh i saw that paper
bmk#1476: i didnt look too closely
bmk#1476: any idea how well this works if latency is high?
bmk#1476: the problem is that tpu cpu <-> tpu speed is probably not too good
Aran Komatsuzaki#5714: my understanding is that this makes the memory consumption complexity from O(LD) to O(D), where L is the depth and D is the per-layer param count.
Aran Komatsuzaki#5714: So, we still have per-layer memory bottleneck lol
bmk#1476: also the host only has a bit over 2x the accelerator memory
bmk#1476: oh, per layer is good enough
bmk#1476: 300GB on cpu, drives 8x16GB tpu
bmk#1476: that's not a lot of space
bmk#1476: we can rent high mem machines but they'll be very far away
bmk#1476: and need network
Aran Komatsuzaki#5714: oh i'm not sure about the difference btw tpu and gpu in terms of transfer speed.
bmk#1476: which is probably slooow
bmk#1476: wait
bmk#1476: you can predict when youll need each layer right?
bmk#1476: *what if..*
Aran Komatsuzaki#5714: yeah
bmk#1476: what if you rent a cluster of servers with, say, 100TB ram
bmk#1476: put the model there
Aran Komatsuzaki#5714: yes
bmk#1476: then each 300GB cpu connected to the tpus queues up like 2 layers
bmk#1476: and the tpus themselves store only the current layer
bmk#1476: ***L3L***
Aran Komatsuzaki#5714: sounds good to me
bmk#1476: this sounds absurdly difficult lol
Aran Komatsuzaki#5714: yeah it is for tpu
Aran Komatsuzaki#5714: lol
bmk#1476: but if we can do it we can train absurdly big models
Aran Komatsuzaki#5714: exactly
bmk#1476: and we dont even need MoE
bmk#1476: the problem here then becomes compute lol
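A toy sketch of the "queue up the next layers while the device computes the current one" idea (illustrative only; NumPy plus a prefetch thread standing in for the remote parameter store, the 300GB host RAM, and the TPU):

```python
import queue
import threading
import numpy as np

N_LAYERS, D = 8, 512
rng = np.random.default_rng(0)
# Pretend this lives on a far-away high-memory cluster / disk array.
remote_store = [0.01 * rng.standard_normal((D, D)).astype(np.float32)
                for _ in range(N_LAYERS)]

def prefetcher(buf: queue.Queue) -> None:
    # Stream layers in order; `put` blocks when the buffer is full,
    # which models "each 300GB cpu queues up like 2 layers".
    for layer_id in range(N_LAYERS):
        weights = remote_store[layer_id]      # in reality: a network / disk read
        buf.put((layer_id, weights))

def forward(x: np.ndarray) -> np.ndarray:
    buf: queue.Queue = queue.Queue(maxsize=2)
    threading.Thread(target=prefetcher, args=(buf,), daemon=True).start()
    h = x
    for _ in range(N_LAYERS):
        _, w = buf.get()                      # only the current layer is "on device"
        h = np.maximum(h @ w, 0.0)            # stand-in for the real per-layer compute
    return h

print(forward(rng.standard_normal((4, D)).astype(np.float32)).shape)
```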
Aran Komatsuzaki#5714: btw if you keep making moe larger relative to the rest of transformer, there's a substantial diminishing gain
Daj#7482: This has moved from "dark magic" to "demonology"
Aran Komatsuzaki#5714: so, you need to scale up other dimensions like depth reasonably to avoid that
Aran Komatsuzaki#5714: i guess you know that already
bmk#1476: this is even more absurd than the delayed moe idea i had lol @Daj
Daj#7482: But it would be so damn cool if it worked
bmk#1476: ok let's look at 1Q
bmk#1476: that's 2PB of params, bf16
bmk#1476: since this would be absurdly parallel honestly we could do disk
Daj#7482: Exactly haha
bmk#1476: and just do raid 0 style stripes across em
Daj#7482: This seems just about impossible to implement
bmk#1476: latency isnt an issue since we queue a bit in memory
bmk#1476: and bandwidth shouldnt be an issue
bmk#1476: lemme draw a diagram
bmk#1476: assuming we use a v3-4096 like the mlperf one
Daj#7482: The concept is understandable, the implementation is the problem
Aran Komatsuzaki#5714: yeah
bmk#1476: true
Aran Komatsuzaki#5714: sounds very technically demanding
Daj#7482: So much of the stack is not meant to be used this way
Daj#7482: We'd probably have to use...**C++**
Daj#7482: https://www.youtube.com/watch?v=gENVB6tjq_M
Aran Komatsuzaki#5714: c++! my archnemesis! 😱
Daj#7482: Yea I have no idea where you would even start implementing something like this
Daj#7482: ML really doesn't teach you low level software engineering like CS used to hah
Aran Komatsuzaki#5714: you may need some insider knowledge of tpu
Aran Komatsuzaki#5714: yeah that's why i still can't make a custom cuda kernel like a pro
Daj#7482: Even implementing this on GPUs seems daunting
Aran Komatsuzaki#5714: exactly
Daj#7482: Even on CPUs
Daj#7482: lol
Daj#7482: Well if someone wants to make this their PhD thesis we are happy to provide emotional support lol
Aran Komatsuzaki#5714: haha
Aran Komatsuzaki#5714: we need to brainwash some googlers and kidnap them into here
Daj#7482: Kinda hard given the salary difference haha
Daj#7482: I'm pretty sure if someone _actually_ tried to implement this for real we could get some google TPU people on the line to give advice
bmk#1476: honestly our best bet is to get acquired by google and have their systems team do it
Daj#7482: They probably already have it in internal code
Daj#7482: Just no one can make it run but Noam
Aran Komatsuzaki#5714: actually moe design of gshard was singlehandedly done by noam, so i totally believe it
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/741350738482954342/IMG_20200807_114244.jpg
Aran Komatsuzaki#5714: no idea why he was not listed as the first author of the paper
bmk#1476: this diagram is next level absurd
Daj#7482: Is it to scale?
Daj#7482: haha
bmk#1476: no
Daj#7482: jk
bmk#1476: otherwise i wouldnt be able to fit the words in the boxes lol
bmk#1476: honestly this would need its own specialized datacenter lol
Daj#7482: 2030: The entire internet has been replaced by one single model trained distributively. Each day we do one train step, which outputs the current state of the internet
bmk#1476: and it's not like any AI lab with a penchant for this stuff recently just built their own datacenter or anything lol
Daj#7482: OA is _definitely_ working on stuff like this
bmk#1476: and theyre certainly not funded by a company that can probably afford to build more datacenters lol
Daj#7482: Just look at MS ZeRO
bmk#1476: fuck now 1Q seems possible
bmk#1476: i had originally predicted that it wouldnt be
bmk#1476: (within next 10 yrs)
Daj#7482: Sometimes I forget that we are just
"Mom: We have OpenAI at home
OpenAI at home:"
bmk#1476: lol
bmk#1476: i kinda wanna work at OA now
Daj#7482: Well if you make GPT3 work that's one hell of an application hah
bmk#1476: haha
Daj#7482: OA is super friendly to hiring people without formal credentials
bmk#1476: Resume:
Hey uh i fucked up your business model, cheers mate
bmk#1476: (/s on multiple levels obviously)
Daj#7482: It is somewhat of a powermove lol
Daj#7482: but lets be honest, by the time we're anywhere near GPT3 they'll be on GPT14
bmk#1476: not necessarily
Daj#7482: 4months of training time is basically a decade in arxiv time
bmk#1476: I don't think we'll ever be more than maybe 2 GPTs behind, assuming we're able to keep up at all
AI_WAIFU#2844: If I understand this paper correctly, the bottleneck in this method is storing a single layer's activations, correct?
bmk#1476: i mean one comes out every year
Daj#7482: haha 2 GPTs behind might be a realistic goal
bmk#1476: the bottleneck is everything
bmk#1476: This is so big that the entire everything is the bottleneck
Daj#7482: tbh one of the things that I've found most interesting in this project is how much of the bottleneck is just engineering time |
Daj#7482: So many problems already basically have solutions and just need to be implemented
bmk#1476: Imagine instead of taking stuff out of a bottle, you have a very large sphere containing a highly compressed ideal gas. Now suddenly the sphere disappears. The bottleneck is the fundamental fact that surface area increases slower than volume. That's where we are right now
bmk#1476: (when discussing 1Q+ models)
AI_WAIFU#2844: Since the weights are all the same and the gradients can be merged recursively, you could have a parameter distribution network and gradient merging network, feeding a whole array of TPU pods, each operating with batch size 1.
bmk#1476: you could call it a bottlesurface instead of a bottleneck
Daj#7482: That is the most unintuitive comparison I have ever heard lol
bmk#1476: look i suck at comparisons
AI_WAIFU#2844: I like the comparison.
Daj#7482: It works but if you gave that to someone non technical lol
Daj#7482: Wait I have a comic about this
Daj#7482: https://www.smbc-comics.com/comic/analogies
Daj#7482: Anyways
Daj#7482: > Since the weights are all the same and the gradients can be merged recursively, you could have a parameter distribution network and gradient merging network, feeding a whole array of TPU pods, each operating with batch size 1.
@AI_WAIFU Yes this is cool
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/741353744012148865/unknown.png
bmk#1476: great for out of context
AI_WAIFU#2844: We need algorithms to better make use of disk space, 2PB is nothing
bmk#1476: >batch size 1.
TPUv4-8192-8192
Daj#7482: Pinned a message. |
Daj#7482: > We need algorithms to better make use of disk space, 2PB is nothing
@AI_WAIFU This would definitely make a great PhD thesis
bmk#1476: computation is approx linear with model size so we need 10,000x bigger compute than gpt3 for 1Q too
bmk#1476: i think it's actually slower but the increased inefficiency probably cancels out
Daj#7482: You think we can cast training as a bitcoin mining problem?
AI_WAIFU#2844: I think there's already a coin that does that.
bmk#1476: so you literally would need a v3-4096-4096
AI_WAIFU#2844: its PoW is training nn models.
bmk#1476: @Daj pretty sure you cant
Daj#7482: Mostly a joke, I remember reading from Vitalik Buterin that "useful PoW" is really hard
bmk#1476: useful pow is impossible, no?
Daj#7482: > so you literally would need a v3-4096-4096
@bmk How does this compare to an exascale system?
Daj#7482: > useful pow is impossible, no?
@bmk I defer to Vitalik
bmk#1476: most of those are actually a PoS backbone with PoW tasks
AI_WAIFU#2844: I don't think so. If you could queue up hard SAT problems you could use that as PoW.
Daj#7482: Great post: https://vitalik.ca/general/2019/11/22/progress.html
bmk#1476: algorithmic hardness doesnt matter for pow
bmk#1476: mining is 90% optimizing the constants |
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/741355016278769825/Screenshot_from_2020-08-07_20-00-02.png
AI_WAIFU#2844: My understanding is that it has to be easy to verify and hard to compute
bmk#1476: that too but
bmk#1476: we dont know for sure that mining sha256 is actually hard to compute
bmk#1476: but it seems legit so we stick with it
Daj#7482: Lets not get into P=BQP=NP(=PSPACE) haha
AI_WAIFU#2844: that's reasonable
AI_WAIFU#2844: what's the one exception?
bmk#1476: zk proofs
bmk#1476: not very useful for training models
bmk#1476: also
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/741355992603688980/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/741356017748672562/unknown.png
bmk#1476: 140k days
bmk#1476: thats not happening
Daj#7482: Seems we're breaking the memory bottleneck and hitting the compute bottleneck
Daj#7482: Now lets calculate it with the Landauer limit hah
bmk#1476: well at that point you could get a lot more obviously
bmk#1476: but that ceases to be physically relevant for the next 2 or so decades
AI_WAIFU#2844: no, let's make our models reversible and surpass the Landauer limit.
Daj#7482: I'd like to see a Landauer limit calculation for the brain over a lifetime and compare that to a perfect computer training a NN
Daj#7482: Useless but would be interesting
bmk#1476: is reversible computing actually physically useful or is it just a fun thought experiment
bmk#1476: physically useful = in the next 100 years
Daj#7482: It's very useful in the limit
Daj#7482: Probably in the next 50ish years
Daj#7482: And in quantum computers
AI_WAIFU#2844: I looked into this. Superconducting reversible computing is super power efficient.
Daj#7482: if those ever work
Daj#7482: You can theoretically do infinite computations with 0 energy
Daj#7482: Also takes infinite time but ya know
bmk#1476: yeah but like youre destroying a lot of information just as overhead
bmk#1476: just the cooling system will consume ridiculous energy
AI_WAIFU#2844: Like regular superconducting compute is actually barely competitive with CMOS once you factor in the cooling costs.
Daj#7482: This is assuming perfectly reversible computing
Daj#7482: It would not output any heat
Daj#7482: which is obv impossible in the real universe
bmk#1476: like youre making the computation itself not output heat
bmk#1476: but all the stuff used to actually isolate that, like the cooling, will produce absurd amounts of heat
AI_WAIFU#2844: But reversible superconducting is more power efficient than CMOS even with cooling. |
bmk#1476: it is? o.O
AI_WAIFU#2844: Well it produces next to no heat by design
Daj#7482: Yea but CMB
AI_WAIFU#2844: so you don't need to move that heat up a big entropy cliff
Daj#7482: Stupid universe not being 0K
AI_WAIFU#2844: Just insulate the hell out of it.
AI_WAIFU#2844: let me dig up some sources
Daj#7482: Has anyone read Deutsch's Fabric of Reality? And that crazy last chapter of collapsing rebounding universe stuff?
bmk#1476: put it in space
Daj#7482: Space is 3°K
Daj#7482: too warm
bmk#1476: 3K is a hell of a lot less than 300K
Daj#7482: Yes but too hot for infinite compute sadly
bmk#1476: well you need less cooling
bmk#1476: i mean you need to radiate the energy away but that doesnt sound difficult
Daj#7482: I look forward to our reversible computation dyson sphere future
bmk#1476: also that was mostly a setup for an abstruse inside joke https://cdn.discordapp.com/attachments/729741769738158194/741357794438938734/1000.png
Daj#7482: Is that NASA anime
bmk#1476: lol no
AI_WAIFU#2844: https://www.nature.com/articles/s41598-019-46595-w |
Daj#7482: I'm not sure if I'm relieved or not
Daj#7482: Neat paper @AI_WAIFU
AI_WAIFU#2844: Is that railgun?
bmk#1476: YES
Daj#7482: I'm probably not qualified to read it
AI_WAIFU#2844: Ok now look closely at my PFP
bmk#1476: is that index
AI_WAIFU#2844: closer
Daj#7482: https://cdn.discordapp.com/attachments/729741769738158194/741358508586303518/IMG_20200807_165733.jpg,https://cdn.discordapp.com/attachments/729741769738158194/741358508808732702/IMG_20200807_165735.jpg
bmk#1476: misaka?
bmk#1476: idk
AI_WAIFU#2844: I am dissapoint https://cdn.discordapp.com/attachments/729741769738158194/741358702220542032/iu.png
Louis#0144: Holy fuck
Louis#0144: My EMNLP reviewer
Louis#0144: Is so fucking stupid???
Louis#0144: I’m like
Louis#0144: Shocked at their incompetence
Daj#7482: Reviewer #2?
Louis#0144: #3
Louis#0144: Lmao |
Louis#0144: They didn’t read the paper
Louis#0144: And told me just to fine tune an LM
Louis#0144: The entire point is that I didn’t need an LM
Louis#0144: The entire point is that my model was 1/1000th the size of SOTA
Louis#0144: and I only perform 1% worse
bmk#1476: @AI_WAIFU i read railgun a *long* time ago i dont remember who that is lol
Daj#7482: :brr: Just finetune a LM lol
AI_WAIFU#2844: first arc, main antagonist Kiyama Harumi
bmk#1476: i only ever remember the later arcs of any story
bmk#1476: if i remember them at all
AI_WAIFU#2844: no one remembers best girl who just wanted access to more compute
Daj#7482: Anime predicting real life hah
zphang#7252: who needs compute when you have a banana and a microwave oven
Daj#7482: I have no idea what that means and am afraid to ask
zphang#7252: https://en.wikipedia.org/wiki/Steins;Gate_(TV_series)
fun sci-fi visual novel/anime
Daj#7482: Oh someone recomended that to me once
Daj#7482: I...didn't get far lol
bmk#1476: It was recommended to me too, I watched the first episode and I couldn't make heads or tails of it
Daj#7482: Anime and me don't...get along so great hah |
Daj#7482: ~~Although One Punch Man is the best show ever made~~
zphang#7252: it takes a little to get started because for some reason it decides to start in total otaku space
zphang#7252: and the sci-fi elements only slowly trickle in and ramp up
Louis#0144: Like my model trains on an iPhone
Louis#0144: LMAOO
zphang#7252: I empathize with the salt
zphang#7252: my lab had a paper that was submitted before BERT, and then BERT came out during the review period
and one reviewer was like "this doesn't matter any more, have you heard of this model called BERT?"
Daj#7482: omg
Daj#7482: I remember why I'm leaving academia
bmk#1476: I hope that doesn't happen with my current project and gpt3
bmk#1476: "hey uh did you try just asking gpt3? Literally trivial"
bmk#1476: Strong reject
Daj#7482: "Have you tried going to the famous clown Pagliacci?"
Deleted User#0000: @Aran Komatsuzaki this will be really great, if retrieval approaches take off https://github.com/google-research/google-research/tree/master/scann
Deleted User#0000: better than faiss, it seems
Louis#0144: Should I tell my area chair how awful one of my reviewers was
Louis#0144: Like does that do anything
Louis#0144: They have really weird remarks that had nothing to do with the paper
Louis#0144: And didn’t really even read the paper in the first place |
Louis#0144: Particularly their remarks were objectively wrong
Commutative Conjecture#6969: > who needs compute when you have a banana and a microwave oven
@zphang
Is this a Scott Aaronson joke?
Commutative Conjecture#6969: https://www.scottaaronson.com/papers/ctc.pdf
zphang#7252: Naw it was a steins;gate reference
bmk#1476: :nooo: you cant just use TFRC and not count the cost of the tpus into your cost estimate!!!
:brr: haha tpu go brr https://cdn.discordapp.com/attachments/729741769738158194/741393007416443020/unknown.png
zphang#7252: don't think about the carbon emissions :^)
Daj#7482: Google data centers are 100% renewable energy
Daj#7482: :brr:
Daj#7482: ~~And AGI is our only shot at solving climate change anyways~~
bmk#1476: ~~well, paperclip machine isn't the ONLY way to kill everyone, there are other ways we could eliminate the entire population to lower emissions~~
Daj#7482: ~~Hah! Actually thinking those nerds in biology could build a doomsday device as good as ours! Fat chance! They're playing on easy mode in the US and still haven't taken it out!~~
Commutative Conjecture#6969: > Naw it was a steins;gate reference
@zphang
Yeah, the paper is about complexity classes with time travel
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/741409072401612860/Screen_Shot_2020-08-07_at_5.30.26_PM.png
Louis#0144: i dont need ur sass mathematica
AI_WAIFU#2844: TFW you forget to opt.zero_grad() |
bmk#1476: tfw you forget to specify `tf.update_weights_instead_of_code(True)` and tf physically warps the shape of the bits on the disk
AI_WAIFU#2844: Idea: Do L2L with a giant LSTM and ssds to backprop through very long time series.
AI_WAIFU#2844: I think you'd only be limited by how many activations you could store on disk.
Tricky#2780: Joined the server.
Daj#7482: Hey @Tricky ! Welcome to the Autoregressive Developer Discord! Check the channel topic for info and don't hesitate to ask questions!
Tricky#2780: Hi! I'm mostly going to be spectating the project, it looks really exciting.
Daj#7482: Sure thing, lurkers are welcome 👍
Daj#7482: Out of curiosity, how did you hear about this?
old#3101: @Louis i guess you cant share the paper rn?
bmk#1476: does anyone want to help with a sort-of-tangential gpt2 project?
bmk#1476: I have an idea but I really *really* would prefer if someone wanted to help with the tpu wrangling
shawwn#3694: What’s the idea?
bmk#1476: GPT2 + SeqGAN
bmk#1476: tl;dr existing LM + GAN systems dont work very well but theres a good chance that that's just because the batch size isnt big enough
shawwn#3694: If the batch size isn’t big enough with existing techniques, they’ll probably be even smaller on TPUs
bmk#1476: so if we use tpus and 117Ms we can have massive batch size
shawwn#3694: Hm. Mtf
bmk#1476: oh people have only tried with lstms on gpus
shawwn#3694: I see.
bmk#1476: i dont think ive seen anyone do anything big scale |
shawwn#3694: Yes, mtf changes that. But sid is the only one with extensive mtf experience, and it still took weeks to port a relatively small gpt model
bmk#1476: Like I'm thinking with accumulation too because apparently that matters a lot for RL
bmk#1476: I'm thinking just vanilla tf + accumulation
bmk#1476: And using a 117M since it's more PoC than anything
shawwn#3694: Got a codebase?
bmk#1476: no but it shouldnt be too complex
bmk#1476: Basically we take an existing gpt2 codebase, duplicate the model to have a generator and a discriminator, and rig up policy gradient
shawwn#3694: Sounds like a nice challenge for @Deleted User. He wrote his own stylegan2 impl from scratch, and he’s been looking for a TPU related project
shawwn#3694: As far as I know.
bmk#1476: The hard part is tpus
bmk#1476: I'm not very good with tpus
shawwn#3694: No one is
bmk#1476: But yeah basically this entire project is "it didn't work but maybe if we make the batch size 100x bigger it'll work"
bmk#1476: relevant papers: https://arxiv.org/pdf/1609.05473.pdf https://arxiv.org/pdf/1808.05599.pdf
shawwn#3694: Tell you what. If you figure out what the generator and discriminator would look like in tensorflow, I’ll give it a shot. But you should know that at this point, I’ve said the same thing about getting sampling working, about training a gpt model for a friend, and some other things.
shawwn#3694: Or use pytorch.
bmk#1476: I already implemented one (poorly) in pytorch a while back and the results were kind of bad, but I'm running on a single 1080Ti with a batch size of like 2
shawwn#3694: Post the code
bmk#1476: maybe it was my implementation, maybe it was the batch size
Daj#7482: Sounds like a cool project, I'd love to help but GPT Neo is kind of my priority heh |
bmk#1476: the code is *horrendous*
shawwn#3694: Good
shawwn#3694: Post it
bmk#1476: okok
shawwn#3694: That means it ran
bmk#1476: one moment i need tofind it lol
shawwn#3694: That’s a common problem for me too. I settled on making an ~/ml folder and I clone everything top level in it
bmk#1476: i have a ~/projects with thousands of dirs
bmk#1476: of course, none of the names are informative
bmk#1476: ok so the bad news it i have no idea where it is
shawwn#3694: Find it 🙂
bmk#1476: the good news is reimplementing it probably wont be hard
bmk#1476: i'm about 90% confident my old code had some fatal flaws anyways
shawwn#3694: One idea I’d like to try is to train a discriminator to discriminate between generated and training samples during the course of training
bmk#1476: yeah that's basically this
shawwn#3694: The neat thing is, I got text rasterization working. Meaning it can rasterize the text as an image
shawwn#3694: And use existing image based discriminators
bmk#1476: o.O
bmk#1476: why not just use another gpt2 as the discriminator like i'm thinking of
shawwn#3694: For giggles, as daj would say. No one’s done it before, which I like |
bmk#1476: nobody's done gpt2 as discrim either
shawwn#3694: There’s always a chance something new might work better.
shawwn#3694: Hmm. That’s not true
shawwn#3694: There are lots of discriminators for language models
bmk#1476: my prior is that converting text into images and discriminating on that is really not going to work
shawwn#3694: It probably won’t. But one neat thing that will fall out if it is that you can print log messages during training, and see them in Tensorboard
shawwn#3694: Which I’ve wanted.
bmk#1476: ah ok
shawwn#3694: And you never know. Rasterized text is perfectly regular, so it might force the model to have some sort of knowledge of language
bmk#1476: I think that's best reserved for futurework
shawwn#3694: It would also be independent of BPE.
bmk#1476: ok here's the model implemented https://gist.github.com/leogao2/bedd8ff574ac4107414036794e1b11ea
bmk#1476: it doesnt do much rn, just a scaffold
bmk#1476: basically i want to somehow implement this in tf
bmk#1476: btw the print output size has result `torch.Size([4, 1024, 1])`
bmk#1476: so it scores every token
bmk#1476: now i jsut need to figure out how the hell to do this with tf and stuff
shawwn#3694: Nah, focus on the correctness of the implementation. If it’s faster to do it in pytorch, do that
shawwn#3694: What I need is a working reference
bmk#1476: ok |
bmk#1476: i'll just mock data loading because i dont feel like figuring that out lol
bmk#1476: also what do you mean by working
bmk#1476: like, running correctly?
bmk#1476: i'm not even confident my old code was implemented correctly
shawwn#3694: I mean “it’s going to take at least a week of focused effort to start seeing results from your project; it would be a shame if, at that point, the model produces no good results and we don’t understand why.”
shawwn#3694: That’s exactly the situation we ran into with biggan
bmk#1476: thats my exact situation with my old code
bmk#1476: i never got it to not collapse and not produce garbage
shawwn#3694: We’re not sure if it’s because their implementation is fundamentally wrong, or if it’s because of something we changed when porting to TPUs
bmk#1476: ok so i just set it to accumulate for 1000 iterations at batch 2 or something then, to verify correctness?
bmk#1476: also wait ugh this means i need to figure data loading out
shawwn#3694: That, I can help with.
bmk#1476: whats the easiest way for me to just call a function and get a load of data
Deleted User#0000: im kind of undermining my past efforts here, but i no longer really believe GANs have a future
shawwn#3694: https://github.com/shawwn/ml-notes
https://twitter.com/theshawwn/status/1286426454171975680?s=21
bmk#1476: why?
Deleted User#0000: iGPT
Deleted User#0000: you can just do maximum likelihood for everything |
shawwn#3694: Someone got this repo working, and I skip all the normal tfrecord crap
bmk#1476: i mean the other way around @Deleted User
bmk#1476: i want to use gan to replace mle lol
Deleted User#0000: yup, i know, but i think mle is superior
Deleted User#0000: no one has ever gotten seq gans to work
Deleted User#0000: well
bmk#1476: my hunch is just more batch
bmk#1476: any reason why that wont work
Deleted User#0000: but why have an extra problem with the adversarial instability
Deleted User#0000: when you can just do mlm
bmk#1476: because distribution shift
shawwn#3694: No one’s tried stabilizing it with flood loss
shawwn#3694: It prevents mode collapse, mostly
bmk#1476: everyone else has only been doing this on puny gpus, using a v3-512 changes everything
Deleted User#0000: gonna throw something from the left field here, but i think this is worth trying on text https://arxiv.org/abs/2006.11239
Deleted User#0000: i consider all my GAN experimentations a sunk cost lol
Deleted User#0000: maybe im just bitter at this point
bmk#1476: i still dont get whats wrong with text gans
Deleted User#0000: all i know is, there's a lot of people smarter than me who have tried already
Deleted User#0000: lol |
bmk#1476: do they talk about batch size and stuff?
Deleted User#0000: i actually have an ongoing project to reproduce Electra at the moment with another researcher
Deleted User#0000: but it isn't strictly like GANs
Deleted User#0000: that's the only text-based sort-of GAN that i know of that works
Deleted User#0000: yea, you should just try it @bmk
Deleted User#0000: let me know if you get results that change my mind lol
Deleted User#0000: my mind has been blown sufficiently that it is wide open to being changed again
Deleted User#0000: lol
Kazumi#1297: I'm confused why GANs are used in a very narrow way, I'd think GAN is just a trainable loss function to optimize that can be applied to any model
bmk#1476: what's the standard way of getting a random first token?
Deleted User#0000: yup you are right Kazumi, but that doesn't make it any easier to train
Deleted User#0000: the adversarial dynamics continue to confound researchers to this day
bmk#1476: does anyone have a list of tokens by occurrence in webtext
bmk#1476: also we should train a baseline 117M using openwebtext
bmk#1476: using the OA WebText-trained 117M seems like an unfair baseline
turtlesoupy#4837: Joined the server.
bmk#1476: ok my implementation should be finished
bmk#1476: i'm taking a break now, no idea if it's correctly implemented or not tbh
shawwn#3694: @bmk neat. want to post it?
bmk#1476: sure |
bmk#1476: https://gist.github.com/leogao2/66cc819e57badb081c52d4f5d29badbf
bmk#1476: Disclaimer: I have absolutely no clue if this is doing stuff right
Sobet#0344: Joined the server.
Deleted User#0000: @bmk so i think the way people have attempted to do this is to use differentiable sampling, like gumbel softmax
Deleted User#0000: so you'll have to replace your `gen.generate` with `F.gumbel_*`
Deleted User#0000: if you want gradients from disc -> gen
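A rough sketch of what that gumbel-softmax swap could look like (purely illustrative; the logits tensor and the embedding lookup mentioned in the comments are stand-ins, not the actual gist code):
```py
import torch
import torch.nn.functional as F

# logits: (batch, vocab) generator output for the next token (stand-in values)
logits = torch.randn(4, 50257)

# Differentiable "sample": with hard=True the forward pass is one-hot, but the
# backward pass uses the relaxed softmax (straight-through estimator).
one_hot = F.gumbel_softmax(logits, tau=1.0, hard=True)

# The discriminator would then be fed one_hot @ token_embedding_matrix instead
# of an embedding looked up from a discrete id, so gradients can flow
# discriminator -> generator.
```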
bmk#1476: wait, why doesn't what i have work?
bmk#1476: logits on L115 passes a gradient directly to get_logits
bmk#1476: which in turn directly calls gen
bmk#1476: and everything in between in differentiable
Deleted User#0000: ohh interesting, can you explain at a high level what your reward matrix is doing?
bmk#1476: this is basically REINFORCE
bmk#1476: actions are tokens
Deleted User#0000: ohh wow! ok, *whoosh above head*
bmk#1476: the future_reward_matrix is basically applying the discounting rate
Deleted User#0000: i'll have to read it in more detail, i'm not as familiar with RL stuff
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/741818173497802773/unknown.png
bmk#1476: ln \pi(A|S) is basically the logit output
bmk#1476: and G_t, sum of future rewards, doesnt depend on \theta so it *shouldnt* matter if i pull it inside the gradient by multiplying the logits directly
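As a sketch of that discounted REINFORCE objective, read off the description above (per-token rewards from the discriminator; names and shapes are illustrative, not bmk's actual code):
```py
import torch

def reinforce_loss(logprobs, rewards, gamma=0.99):
    # logprobs: (batch, seq) log pi(a_t | s_t) of the sampled tokens
    # rewards:  (batch, seq) per-token rewards, e.g. discriminator scores
    batch, seq = rewards.shape
    idx = torch.arange(seq, dtype=torch.float32)
    exp = idx[None, :] - idx[:, None]                      # exp[t, k] = k - t
    discount = torch.where(exp >= 0, gamma ** exp, torch.zeros_like(exp))
    returns = rewards @ discount.T                         # G_t = sum_{k>=t} gamma^(k-t) r_k
    # G_t doesn't depend on theta, so treating it as a constant weight is fine
    return -(logprobs * returns.detach()).mean()
```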
Deleted User#0000: i see, so this is like actor critic? |
bmk#1476: i *think*
Deleted User#0000: generator is actor
bmk#1476: AC has like a value network to reduce variance but adds bias
bmk#1476: https://lilianweng.github.io/lil-log/2018/04/08/policy-gradient-algorithms.html
bmk#1476: this is simplified AC essentially
Deleted User#0000: awesome, i thought you were doing a generic GAN formulation
Deleted User#0000: jumped to conclusions too quickly 😅
bmk#1476: oh actually this is more RL focussed, a discriminator just happens to be a convenient reward function haha
bmk#1476: you can also look at the seqgan paper too
bmk#1476: i'm basically doing the same thing as they are
Deleted User#0000: oh wow, ok, the paper is more interesting than the title suggests.. didn't know about this RL component
bmk#1476: they frame RL as just the thing to get gradients across (possibly because this was written during the Great GANbrian Explosion) but i personally think the RL part is more interesting than the GAN part
Deleted User#0000: > GANbrian
Deleted User#0000: lol
Deleted User#0000: cool, i'll have to try it at some point
Deleted User#0000: it'll have to offer significantly faster training for it to be worth the extra complexity
bmk#1476: you mean faster than normal training?
Deleted User#0000: yup, normal
bmk#1476: well uh i have some bad news and some good news
Deleted User#0000: i mean, we know normal gets us there |
bmk#1476: the bad news is that it's probably multiple orders of magnitude slower
bmk#1476: (best case)
Deleted User#0000: ah darn i see
bmk#1476: the good news is that the goal isnt to train from scratch, but use this to finetune a MLE-trained gpt2 essentially
Deleted User#0000: like you already mentioned in off topic, don't you think Uber's pplm covers that?
bmk#1476: well, maybe
bmk#1476: PPLM+GPT2+RL could work i guess
Deleted User#0000: yea, i've seen hf's pplm demos
Deleted User#0000: they work great
bmk#1476: pplm basically works by using gradients to screw around in the past activations, right?
Deleted User#0000: yea, as far as i know, they use a small classifier to steer the gradients of the gigantic LM
Deleted User#0000: its a really elegant approach
bmk#1476: I haven't thought it through but I feel like there might be some difficulties tuning the small net with RL
bmk#1476: maybe it could work though
bmk#1476: actually, what if we use the discriminator AS the small net
bmk#1476: hmm
bmk#1476: this sounds like it would have stability issues
Deleted User#0000: i think it's RL free
bmk#1476: I mean like tacking RL onto it
Deleted User#0000: i could be mistaken
bmk#1476: I'm trying to mix their thing with my thing
Deleted User#0000: yea, they could be used together
bmk#1476: re: discriminator as small net: on second thought this probably wouldnt work very well, but it's worth a shot
bmk#1476: the LM objective of pplm will be fighting the discriminator the whole way and it'll probably end up being more noise than signal
Deleted User#0000: your code looks good, or at least pretty close
Deleted User#0000: you should try it!
Deleted User#0000: i understand what the paper is doing now, it basically bypasses having to do the gradients through the discrete sampling procedure
Deleted User#0000: the gumbel softmax has never really worked that well in practice, i think
bmk#1476: my past code didnt work very well but i found some major flaws in it that invalidate the negative result
bmk#1476: hopefully this time it works well
Deleted User#0000: https://twitter.com/lorenlugosch/status/1268142070272851968
bmk#1476: should i put my code on a proper repo
bmk#1476: so you can make prs and stuff
shawwn#3694: @bmk re: https://discordapp.com/channels/729741769192767510/730097574010290318/741847269216747590
That link freezes chrome for me. But, it doesn't look like the frequency counts are in the json; are they?
bmk#1476: the list is only the frequency counts
shawwn#3694: Oh!
shawwn#3694: I see.
bmk#1476: also my chrome has no problem with that link, odd |
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/741848597842100555/unknown.png
shawwn#3694: That's super handy to have. How'd you generate it?
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/741848710433996840/unknown.png
bmk#1476: so you know how people have been just using <|endoftext|> for first char to get generations
bmk#1476: well now we can actually sample truly uniformly from gpt2
bmk#1476: and not just at document boundaries
bmk#1476: really useful for what im doing, not sure if anyone else ever will actually need it lol
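For what it's worth, a minimal sketch of sampling first tokens from those occurrence counts (the filename is hypothetical; assumes the JSON is just a list of per-token-id counts):
```py
import json
import numpy as np

with open('webtext_token_counts.json') as f:        # hypothetical filename
    counts = np.array(json.load(f), dtype=np.float64)

probs = counts / counts.sum()

# sample starting tokens in proportion to corpus frequency,
# instead of always starting generations at <|endoftext|>
first_tokens = np.random.choice(len(probs), size=8, p=probs)
```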
shawwn#3694: Whoa. Neat code https://github.com/leogao2/lm_dataformat/blob/master/lm_dataformat/__init__.py
bmk#1476: yeah it saves a lot of trouble
shawwn#3694: unfortunately it's hard to encode as fast as the TPU can run
shawwn#3694: huggingface's tokenizer gets around 500k tokens/sec, I think, which is about 488 examples/sec
shawwn#3694: quite fast, but 117M can go at around 1k examples/sec on a v2-256
bmk#1476: encoding the entire thing took me like 4-5 hours using this code
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/741849510828966058/unknown.png
bmk#1476: not sure if it does internal multithreading
shawwn#3694: very nice. You have all of openwebtext on some server somewhere?
bmk#1476: well, my home computer
shawwn#3694: ah 🙂
shawwn#3694: how large are the text files, out of curiosity?
shawwn#3694: uncompressed |
bmk#1476: like the individual documents, or the overall decompressed size?
shawwn#3694: I've always been curious to see stats like total line count, article count, total byte count, etc
shawwn#3694: overall decompressed size I suppose
shawwn#3694: well, of the training data that would be fed to gpt2
shawwn#3694: so no metadata like url or whatever
bmk#1476: I think it was 40GB something (source: the webpage) but I didn't track the stats for that
bmk#1476: >This left 38GB of text data (40GB using SI units) from 8,013,769 documents. https://skylion007.github.io/OpenWebTextCorpus/
shawwn#3694: ah, interesting. The tokenized tfrecords are 16GB total in the cloud bucket
bmk#1476: yeah i was really confused
bmk#1476: my estimates were that there should be a lot more
shawwn#3694: well, it depends how they're stored in the tfrecords
bmk#1476: do tfrecords have compression?
shawwn#3694: if it's stored as uint16, then that's perfectly correct
shawwn#3694: even without compression
bmk#1476: i think daj said they have to be uint64s or something
bmk#1476: maybe they arent? o.O
shawwn#3694: the vocab size is always less than 65536, so they can be uint16 without loss of generality
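A minimal sketch of why uint16 works here (the token ids are just examples):
```py
import numpy as np

tokens = [15496, 995, 50256]                  # example ids, ending with <|endoftext|>

# 50257 < 65536, so every GPT-2 token id fits losslessly in 2 bytes
buf = np.asarray(tokens, dtype=np.uint16).tobytes()
assert list(np.frombuffer(buf, dtype=np.uint16)) == tokens
assert len(buf) == 2 * len(tokens)
```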
bmk#1476: honestly idk how the tfrecord encoding works
shawwn#3694: me neither
bmk#1476: i think he said something about it being finicky
bmk#1476: but ¯\_(ツ)_/¯
shawwn#3694: @bmk would you be willing to re-run it using this code, dumping it to a file, and uploading that file somewhere? https://gist.github.com/shawwn/bf4ff442d2eff3f3cac4afdb428bcbc8
You can use it like:
```py
with open('openwebtext.tok16', 'wb') as f:
# inside the loop:
tokens = tok.encode(doc)
tokens_to_file(f, tokens, stride=2)
```
bmk#1476: sure
shawwn#3694: sweet, thanks
shawwn#3694: I have a sampler set up to train on datasets in that format
bmk#1476: also id need to add a 50256 right?
shawwn#3694: hmm
bmk#1476: or else theyll all be squished together
shawwn#3694: yeah, I guess that's a good idea. I've always hated 50256 because it doesn't encode the same as <|endoftext|> |
shawwn#3694: so literally anyone who ever tries to encode <|endoftext|> always ends up using the wrong tokens
bmk#1476: 50256 is the correct one, the literal encoding of <|endoftext|> is incorrect, though
shawwn#3694: would it be possible to append the text '<|endoftext|>' and encode that?
shawwn#3694: I know. And it's caused endless horrible confusion
bmk#1476: because that would be the encoding of the actual literal string occurring
shawwn#3694: yes
bmk#1476: so i strongly believe we should be using 50256 correctly
shawwn#3694: the problem with using 50256 correctly is that it's impossible for users to enter it at runtime
shawwn#3694: unless special care is taken by client applications to encode it correctly
shawwn#3694: which requires a full string search
shawwn#3694: if you think it's best, then let's go with 50256
bmk#1476: i think clients need to be fixed then
shawwn#3694: heh.
bmk#1476: because now there's no way to differentiate between the string "<|endoftext|>" and an actual endoftext
shawwn#3694: yes, but that's always been true
bmk#1476: well, the actual endoftext tokens are transmitted out of band
shawwn#3694: in real-world data anyway.
shawwn#3694: yeah, that's fair.
shawwn#3694: I get it, I just dislike the design decision because of how easy it is to confuse people, especially newer ML programmers. I was bit by that for a long time before I realized, and so was everyone else
shawwn#3694: but it's true that it really should be in there. |
bmk#1476: ```py
import transformers                  # GPT2TokenizerFast
import lm_dataformat as lmd          # reader lib (leogao2/lm_dataformat, linked above)
from tqdm import tqdm
# tokens_to_file comes from shawwn's gist linked above

wt = lmd.Reader('/data/datasets/openwebtext')
with open('openwebtext.tok16', 'wb') as f:
    tok = transformers.GPT2TokenizerFast.from_pretrained('gpt2')
    for doc in tqdm(wt.stream_data()):
        # append <|endoftext|> (50256) so documents aren't squished together
        tokens = tok.encode(doc) + [50256]
        tokens_to_file(f, tokens, stride=2)
``` i have this running now
shawwn#3694: Sweet! Is it spitting out data?
bmk#1476: yup
bmk#1476: should be ready in another 5 hours, at which time i will not be awake
shawwn#3694: word.
shawwn#3694: before you go to bed, would you mind doing a quick ```scp openwebtext.tok16 bmk@test.tensorfork.com:/home/bmk/openwebtext.tok16```?
shawwn#3694: curious to try it on a partial dataset
shawwn#3694: else I'll wait till tomorrow
bmk#1476: ok running
shawwn#3694: sweet, thank you!
shawwn#3694: wow |
shawwn#3694: you're getting around 1,859,558 tokens/sec according to my calcs
shawwn#3694: Oh wait, that's scp speed
shawwn#3694: I forgot it's not streaming directly to the server.
shawwn#3694: still, generating 342M already is impressive
shawwn#3694: thanks again.
bmk#1476: np
shawwn#3694: hmmm. openwebtext wasn't run through ftfy first
shawwn#3694: I'll fix it on my end
shawwn#3694: (e.g. there are characters like ’ instead of ')
shawwn#3694: hopefully the openwebtext tfrecords were ftfy'd, else that might explain some of the GPT-2 problems
bmk#1476: oh, should i put it through ftfy?
bmk#1476: what happens to those characters otherwise?
bmk#1476: does the tokenizer just drop them?
shawwn#3694: no no, the tokenizer handles unicode fine
shawwn#3694: but ’ encodes differently than '
shawwn#3694: so it basically wastes model capacity, since both of those characters mean exactly the same thing to humans
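A quick sketch of the ftfy pass being discussed (as far as I know, ftfy.fix_text uncurls quotes and repairs mojibake by default):
```py
import ftfy

print(ftfy.fix_text("don’t"))       # curly apostrophe -> "don't"
print(ftfy.fix_text("donâ€™t"))     # classic mojibake -> "don't"
```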
shawwn#3694: go ahead and keep tokenizing this; I can run it through ftfy. and it might be interesting to have both, to verify whether it matters, or something.
bmk#1476: ok
bmk#1476: ill spin up a fixed version tomorrow
shawwn#3694: always good to have original unmodified datasets anyway |
shawwn#3694: word.
bmk#1476: i cant wait for a model with a full-unicode vocab
shawwn#3694: when gwern and I were training poetry, we were confused why GPT kept generating mojibake
shawwn#3694: once we ran our dataset through ftfy, all that went away
shawwn#3694: well, openai's vocab works fine on full unicode
shawwn#3694: it's just biased towards english
shawwn#3694: so it's much harder for the model to learn russian, presumably. Or at least that's the theory.
bmk#1476: chinese would just not work
bmk#1476: at all
bmk#1476: and as you know im a big fan of the whole support all the languages thing
shawwn#3694: that might be true. or it might not be. no one's tried it
shawwn#3694: models are hard to predict from first principles
shawwn#3694: trouble is, it's generally been pretty expensive to test out ideas
shawwn#3694: but 117M is probably cheap to train now
bmk#1476: i mean with the current vocab
shawwn#3694: as far as I know, the current vocab can encode every unicode code point
shawwn#3694: it's just less efficient for certain codepoints
bmk#1476: o.O
bmk#1476: how?
bmk#1476: there are like 130k cps |
kindiana#1016: its all just unicode bytes in the end 🤷
bmk#1476: so wait they break it up by bytes?
kindiana#1016: bpe works on bytes, it doesn't care for codepoints or anything
kindiana#1016: if you use a chinese dataset I would expect common characters to be assigned their own tokens
bmk#1476: but then it would be different between say utf8 and utf16
shawwn#3694: https://cdn.discordapp.com/attachments/729741769738158194/741866359188750417/unknown.png
shawwn#3694: works fine.
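A quick check of the byte-level claim (a sketch; exact token counts will vary, the point is that everything round-trips):
```py
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained('gpt2')

for text in ["hello world", "你好世界", "привет мир"]:
    ids = tok.encode(text)
    assert tok.decode(ids) == text   # byte-level BPE never drops characters
    print(repr(text), len(ids), "tokens")  # non-Latin scripts just cost more tokens per char
```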
bmk#1476: huh
shawwn#3694: yes, I've been very confused why people don't at least *try* openai's encoding
shawwn#3694: I agree it's less efficient and harder for the model to learn
shawwn#3694: and I agree that might matter, possibly very much
shawwn#3694: but no one knows precisely how much, and as far as I know no one's tried
bmk#1476: oa made some very weird and indefensible choices for the vocab, to be fair
bmk#1476: if i were to build a vocab i'd have it a) be multilingual b) have like a dozen reserved tokens for fine tuning use c) be exactly 65536 size
bmk#1476: like how does 50256 size happen
shawwn#3694: agreed re: 65536 size. Just be sure to reserve at least 10k for user purposes
shawwn#3694: right now we essentially have `65536 minus 50256` tokens worth of reserved space
bmk#1476: why reserve so many?
kindiana#1016: can't you extend the embedding layer arbitrarily after the model is trainined?
bmk#1476: it's annoying |
shawwn#3694: yes, you can.
shawwn#3694: (I've done it)
bmk#1476: also then it wont be under 65536
shawwn#3694: it will if you don't extend beyond 65536
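For reference, this is roughly how extending the embedding after the fact looks in HF transformers (a sketch; the reserved token names are made up):
```py
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

# hypothetical reserved tokens for fine-tuning use
tok.add_tokens(["<|reserved1|>", "<|reserved2|>"])

# grows the (tied) embedding matrix; new rows are randomly initialized
# and only become meaningful after fine-tuning
model.resize_token_embeddings(len(tok))
```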
bmk#1476: i mean like
bmk#1476: thats what i mean when i say reserve
shawwn#3694: I think reserving a substantial amount for fine-tuning purposes will prevent backwards compatibility issues
bmk#1476: hmm
shawwn#3694: also, users might be able to bootstrap some interesting ideas into the vocab
bmk#1476: so can we tack on a few thousand multilingual tokens to OA's for example
shawwn#3694: actually yes, that's a very good example
shawwn#3694: a new encoding where openai's is the base might do pretty well
bmk#1476: let's do the math
shawwn#3694: openai's could be thought of as the "ascii" to the new encoding's "utf8"
bmk#1476: so we have 15k to work with
bmk#1476: if we use 5k of that we could comfortably fit the alphabets of pretty much every language, plus the vast majority of CJK characters by usage frequency
bmk#1476: how much do we want to use up? do we want to use up nearly all of the space, or use as little as possible to reserve room for future expansion
kindiana#1016: what if you just use a multilingual dataset to create bpes? (with languages in the proportion that you want to allocate model capacity to)
bmk#1476: we want backwards compat
bmk#1476: i really wish oa left us a bit more room >.> |
bmk#1476: 50k of some pretty rare English words, and then all the other languages have to squeeze into the last 15k
bmk#1476: Still I guess this is better than nothing, where all other languages have to be constructed from bytes
bmk#1476: how big of a concern, really, is efficiency on disk, anyways
kindiana#1016: depends on who pays for storage on the bucket lol
bmk#1476: because if we just extend it by a few more bits we get a lot of legroom
bmk#1476: is it too inconvenient to store in multiples of 3 bytes?
bmk#1476: or is that too nonstandard
kindiana#1016: I dont think theres native support for uint24 anywhere
bmk#1476: i mean padding it out to 4 isn't that bad, only doubles the size
bmk#1476: like i think that's a small price to pay
bmk#1476: how easy is it to extend an existing BPE in a backwards-compatible way anyways
kindiana#1016: bpe construction is iterative so it should theoretically be possible lol
bmk#1476: we could go for 262144 size and all the other languages have a lot of space to coexist
bmk#1476: that eats up 2 more bits to the left
bmk#1476: we dont want to go too big because of embedding matrix sizes though
bmk#1476: whatever, one step at a time, we should construct a 65536-multilingual-vocab first since that's what it gets padded out to on tpus anyways, there's literally no downside
kindiana#1016: I don't think super large vocab sizes are a good idea, even without embedding matrix concerns
bmk#1476: we can agree that making a multilingual 65536 is literally 100% upside though right?
kindiana#1016: https://cdn.discordapp.com/attachments/729741769738158194/741871317724430407/unknown.png
bmk#1476: yeah but the reason is because there's quite a few languages out there |
bmk#1476: id hazard a guess and say more than 3
bmk#1476: having 200000 slots to split between a lot of languages means each isnt really getting all that much
bmk#1476: anyways
bmk#1476: 65536-multilingual is a really cool project that we need to undertake
bmk#1476: and we should train all our models with it
shawwn#3694: Nah, do either 16bit or 32bit imo. 24bit requires a custom decoder always
bmk#1476: ok so we should have two vocabs
bmk#1476: 65536 and 262144 (say)
bmk#1476: the 65536 vocab would be soooo useful
shawwn#3694: Mind scp’ing the latest tokens?
shawwn#3694: Looks like training loss is falling quite rapidly
shawwn#3694: I have a run going on http://test.tensorfork.com
bmk#1476: what do you mean, like more of the same data
shawwn#3694: Neat watching it in real time
bmk#1476: or the ftfy'd data
shawwn#3694: I meant, could you re run that earlier scp command?
bmk#1476: i dont have any ftfy'd data
bmk#1476: oh sure
shawwn#3694: And then I won’t bug you again till tomorrow. I only asked because it looks like this will still overfit
bmk#1476: so how does this interface work |
bmk#1476: the webpage
bmk#1476: there are so many lines
shawwn#3694: Oh you haven’t seen it? Top is the real run
shawwn#3694: Everything else is simulated
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/741876287194398740/unknown.png
bmk#1476: this?
shawwn#3694: The thing that moves every 3sec is real
shawwn#3694: Yep
shawwn#3694: That’s one line per TPU core
shawwn#3694: Every loss of every step
bmk#1476: hm not much seems to be happening
bmk#1476: its just horizontal
shawwn#3694: It’s a log scale
bmk#1476: ah
bmk#1476: that scp is gonna take a while ;-; https://cdn.discordapp.com/attachments/729741769738158194/741876641512554556/unknown.png
AI_WAIFU#2844: Gradient boosting proof of concept: https://cdn.discordapp.com/attachments/729741769738158194/741880161116618813/Figure_1.png
kindiana#1016: whats the model/param count/dataset?
AI_WAIFU#2844: model: transformer-xl
AI_WAIFU#2844: param count: ~2 million before n_step=2000, ~4 million after
AI_WAIFU#2844: dataset: text8 |
kindiana#1016: wow thats really good with 2 million params 🤔
AI_WAIFU#2844: Nah, this is nats/character
kindiana#1016: nats?
AI_WAIFU#2844: base e instead of 2
kindiana#1016: ah
AI_WAIFU#2844: It's more like 1.63 bpc
AI_WAIFU#2844: The point isn't that the model is good. I haven't made much effort to make a good model. The point is that the number of parameters in my model scales linearly with the amount of compute I throw at the problem, and I don't have to worry about running out of GPU ram.
bmk#1476: what does perf vs compute look like?
bmk#1476: does it just keep getting better at approximately gpt-x rate with increasing compute, even just asymptotically
AI_WAIFU#2844: I don't know. You can see the case where n=1 models and n=2 models on the graph.
AI_WAIFU#2844: I would expect it to be less good than the gpt-x rate. The models can't talk to each other. They only output logits or logit perturbations.
bmk#1476: ah
AI_WAIFU#2844: Think of this as something you can do to any LM to squeeze out extra performance/increase parameter count if you're already training the biggest model you can fit in vram.
bmk#1476: hmm
AI_WAIFU#2844: I chose transformer-xl just because it did well on text8 but you can use any other LM.
bmk#1476: im trying to think of ways this could be useful at gpt3-scale
AI_WAIFU#2844: Assuming you have the money, you could wrap this as an outer loop around your GPT-3 training procedure to train a GPT-3 ensemble that does better than individual GPT-3s.
kindiana#1016: what happened at step=2000? did you just add another randomly initialized model? or make a clone of the original model?
bmk#1476: yeah i dont think we can afford that lol
AI_WAIFU#2844: I save the output of the model on the current dataset, then I clone the original model and reset the last layer. Then I train as usual but with the previous model's output as a bias to the logits. |
AI_WAIFU#2844: Then I add the perturbations to the existing output and repeat.
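My reading of that procedure as a sketch (not AI_WAIFU's actual code; `new_model` is any LM that maps token ids to logits):
```py
import torch
import torch.nn.functional as F

def boosted_lm_loss(new_model, prev_logits, tokens):
    # prev_logits: (batch, seq, vocab) summed logits from the frozen earlier stages
    # tokens:      (batch, seq) target token ids
    delta = new_model(tokens)                 # (batch, seq, vocab) perturbation
    logits = prev_logits.detach() + delta     # previous output acts as a fixed bias
    # shift by one so position t predicts token t+1 (autoregressive objective)
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        tokens[:, 1:].reshape(-1),
    )
```
After this stage converges, its perturbations get added into the stored logits and the next stage starts from a fresh clone, as described above.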
shawwn#3694: hah, amazing... I was watching the TPU training session in real time, and saw that it froze. Can't tell from screenshot, but top part stopped updating. Knew instantly there was a problem. And sure enough https://cdn.discordapp.com/attachments/729741769738158194/741885410032091257/unknown.png
shawwn#3694: https://cdn.discordapp.com/attachments/729741769738158194/741885419188256808/unknown.png
shawwn#3694: https://cdn.discordapp.com/attachments/729741769738158194/741885537941717043/unknown.png
kindiana#1016: > the previous model's output as a bias to the logits.
@AI_WAIFU to the input to the new model right?
AI_WAIFU#2844: not the input, the output.
kindiana#1016: hrm, I feel like input might be interesting if you initialize from scratch
kindiana#1016: it will be like resnets but you train it layer by layer
AI_WAIFU#2844: That's an idea.
AI_WAIFU#2844: Id be worried about information loss in a scheme like that though.
AI_WAIFU#2844: I thought of something like that for flow models. You could progressively grow them.
kindiana#1016: if you have the disk space you could do highway networks
kindiana#1016: allow the model to look at any of the previous model's outputs (or the original input)
AI_WAIFU#2844: You could have skip connections to different layers in the stack.
bmk#1476: has anyone tried doing tree skip connections
bmk#1476: every other layer, then every 2, 4, 8, etc
bmk#1476: log n in memory at any one time
AI_WAIFU#2844: Like a heap?
bmk#1476: like rn with resnets each skip connection goes only one back |
AI_WAIFU#2844: You could have skip connections to different points in time.
bmk#1476: what if then you add a connection every 2 blocks going back 2 blocks
bmk#1476: then connections every 4 blocks going back 4 blocks
bmk#1476: etc
bmk#1476: log n in memory at once and you get long range connectivity
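To make the pattern concrete, one way to read it (my interpretation, not an established architecture): layer n gets a skip from n-1, plus skips going back 2^k whenever 2^k divides n, so each layer has at most O(log n) incoming skips.
```py
def skip_sources(layer_idx):
    """Earlier layers feeding skip connections into `layer_idx` under the tree scheme."""
    sources = [layer_idx - 1]
    step = 2
    while step <= layer_idx:
        if layer_idx % step == 0:
            sources.append(layer_idx - step)
        step *= 2
    return sorted(s for s in set(sources) if s >= 0)

# e.g. skip_sources(8) -> [0, 4, 6, 7]; skip_sources(12) -> [8, 10, 11]
```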
AI_WAIFU#2844: This just opened up a whole new class of models.
bmk#1476: what do you mean different points in time,
kindiana#1016: there is densenet
bmk#1476: densenete is weird though
AI_WAIFU#2844: like instead of just looking back at layers you look back in time. like transformer xl.
bmk#1476: its connections are pretty *dense*
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/741888362033446912/unknown.png
bmk#1476: thats a damn lotta connections
AI_WAIFU#2844: There are res connections though, right?
bmk#1476: also this paper is from the CNNbrian explosion
bmk#1476: i think so
AI_WAIFU#2844: so you can just add them together ahead of time.
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/741889079448436796/treenet.png
bmk#1476: this is what im thinking of
bmk#1476: has anyone done this yet |
AI_WAIFU#2844: Like just the architecture or also the progressive growing bit.
AI_WAIFU#2844: ?
bmk#1476: oh theres no progressive growing here
bmk#1476: just taking a resnet and adding more connections
kindiana#1016: what do you do when the connections are different dims/scales?
kindiana#1016: concat/downscale?
bmk#1476: well, you dont, but im thinking more language model than image model
kindiana#1016: ah
bmk#1476: also yeah i guess you can downscale the connections too
bmk#1476: *adds to list of arch ideas to try once the mtf code is all polished up*
kindiana#1016: I think u-net like skips are more useful
kindiana#1016: because the layers closer to the beginning are dealing with a similar layer of abstraction to the ones near the output
bmk#1476: theres only one way to find out
bmk#1476: :empiricism:
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/741890562629697546/Screenshot_from_2020-08-08_22-27-16.png
Deleted User#0000: https://arxiv.org/pdf/2006.03236.pdf
bmk#1476: theres no telling if transformers like that kind of connectivity
Deleted User#0000: decoder transformers are really restricted
bmk#1476: maybe that type of connection is only effective for segmenting medical images on specific kaggle challenges
kindiana#1016: throw some rezero in there and it can't be worse than vanilla transformers 😛 |
Deleted User#0000: also, hi @Aran Komatsuzaki is it morning for you?
Aran Komatsuzaki#5714: it's actually 2:30 afternoon here.
Deleted User#0000: oh! well, good afternoon
Aran Komatsuzaki#5714: yeah how's going?
Deleted User#0000: pretty good! you know, the R value of the virus is about 1.03
Deleted User#0000: hoping it goes down lol
Aran Komatsuzaki#5714: i hope so 🙂
Aran Komatsuzaki#5714: Was my overleaf draft hard to read?
Aran Komatsuzaki#5714: Any problem with clarity etc?
Aran Komatsuzaki#5714: @Deleted User
Deleted User#0000: yea, very clear! i like the taxonomy
Deleted User#0000: i think it would be beneficial to replicate and open source MARGE-like pretraining
Deleted User#0000: https://www.youtube.com/watch?v=nv6oFDp6rNQ exciting
Deleted User#0000: finally, some theory for why 'attention' may be all we need
zitterbewegung#4846: TPUs scare me
zitterbewegung#4846: for one reason
zitterbewegung#4846: dunno if google will keep on making them available
zitterbewegung#4846: its a trade off but it can also be because i'm lazy
zitterbewegung#4846: but like ever since they raised prices on app engine
zitterbewegung#4846: yea i dunno |
eugene#9671: Joined the server.
platypii#0938: Joined the server.
Sid#2121: Hey @eugene , @platypii ! Welcome to the all-attention zone! Check the channel description for an overview of the project and let us know if you have any questions 🙂
AI_WAIFU#2844: Re: Masking the first tokens when computing validation loss and the importance of context. I plotted the relationship between average loss and the length of the context window. Two observations: https://cdn.discordapp.com/attachments/729741769738158194/742413581475119124/Figure_2.png
AI_WAIFU#2844: 1. The loss is very high when the context is small and drops rapidly at first and slowly later. Not taking this into account will lead to an estimate of the validation loss that is higher than the true value.
AI_WAIFU#2844: 2. The validation loss seems to obey something resembling a power law relationship with context length. We can use that relationship to compute the optimal context length for a given model size and compute budget.
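A sketch of quantifying that relationship (the numbers below are placeholders, not the measured values; the fit form is just one reasonable choice):
```py
import numpy as np
from scipy.optimize import curve_fit

def power_law(ctx, a, b, c):
    return a * ctx ** (-b) + c

# placeholder data -- in practice these come from the per-position loss measurements
ctx_lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256, 512], dtype=float)
avg_loss = np.array([7.1, 6.2, 5.5, 5.0, 4.6, 4.3, 4.1, 3.95, 3.85, 3.8])

(a, b, c), _ = curve_fit(power_law, ctx_lengths, avg_loss, p0=(3.0, 0.5, 3.0))
print(f"loss(ctx) ~= {a:.2f} * ctx^(-{b:.2f}) + {c:.2f}")
```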
bmk#1476: very interesting
bmk#1476: now that i think about it, OA didnt actually look at context length in the scaling paper, did they?
AI_WAIFU#2844: Nope.
AI_WAIFU#2844: It looks like they just picked a number and went with it
bmk#1476: huh
bmk#1476: i have a sneaking suspicion that the right half of the curve will get slopier as the model gets bigger
AI_WAIFU#2844: Agreed.
AI_WAIFU#2844: We can quantify this.
bmk#1476: my rationale is that for small models, it just has to sound convincing and it can do that from 512 tokens
bmk#1476: but big models have to be actually coherent
bmk#1476: which is longer-scale
aquajet#7800: Why do smaller models only need to sound convincing?
bmk#1476: @aquajet relevant explainer snippet https://cdn.discordapp.com/attachments/729741769738158194/742417000785248306/unknown.png
bmk#1476: you can skip most of the middle paragraph |
bmk#1476: tldr it needs to get grammar good before it can be logically good
AI_WAIFU#2844: Small models will pick the low hanging fruit.
AI_WAIFU#2844: So we would expect the curve to get steeper as the model gets bigger because it will better take advantage of longer contexts, which are higher up in the "tree" of the tortured metaphor
AI_WAIFU#2844: All my compute is tied up rn, I had to make that plot with my CPU, but if someone wants to calculate this same plot for a pretrained GPT-2, be my guest.
bmk#1476: code pls
bmk#1476: im too lazy to write it up lol
AI_WAIFU#2844: https://cdn.discordapp.com/attachments/729741769738158194/742420327371505785/transformer_loss_test.py
AI_WAIFU#2844: You'll need to change the model to GPT2.
Daj#7482: Very cool results, and I agree with your thoughts on longer context windows. Would love to have a big model with a huge context window and see how that scales
bmk#1476: `gb_lm`
bmk#1476: great british language model?
AI_WAIFU#2844: oh right, sorry
bmk#1476: you dont actually use it anywhere so i deleted it
AI_WAIFU#2844: You can just delete that dependency
AI_WAIFU#2844: I imported some other code but then realised I didn't need it and didn't get around to to removing the dependency
bmk#1476: text8 isnt very gpt2-friendly
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742421642256580648/unknown.png
bmk#1476: should i use wt instead
AI_WAIFU#2844: I used it because it was what I had on hand.
AI_WAIFU#2844: go ahead |
AI_WAIFU#2844: anything with large coherent documents should work.
bmk#1476: fun protip: i have a lib that lets you easily get wt
AI_WAIFU#2844: Small documents won't demonstrate the effect
bmk#1476: ah wt docs are usually pretty short
AI_WAIFU#2844: Do crime and punishment
AI_WAIFU#2844: or moby dick
AI_WAIFU#2844: maybe not moby dick
shawwn#3694: Oh, bmk, want to scp the openwebtext tokenization? (Thanks for that by the way.)
shawwn#3694: I ftfy’d the first 6GB
bmk#1476: yeah one sec
shawwn#3694: Cool
bmk#1476: oh man this script is gonna take a while
bmk#1476: why does it have to run 100000 times?
bmk#1476: do we really need that much precision
AI_WAIFU#2844: Yes.
bmk#1476: ok
AI_WAIFU#2844: You can speed it up by batching
bmk#1476: i.. dont feel like implementing that
bmk#1476: whatever once i get to the bigger models batching wont even fit on gpu
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742423890638274740/unknown.png |
AI_WAIFU#2844: My run lasted 6 hours
AI_WAIFU#2844: I did it over night
bmk#1476: o.O
bmk#1476: oh right cpu
AI_WAIFU#2844: GPU is tied up doing a larger run of my gradient boosting thing.
bmk#1476: ah
bmk#1476: still waiting on nvidia to release their new gaming cards so i can splurge on a couple of new gpus >.>
AI_WAIFU#2844: Same
AI_WAIFU#2844: @bmk if you haven't altready just make sure the script runs with like n=10 before you do the whole thing
bmk#1476: ok
AI_WAIFU#2844: I don't want you to waste an hour only for the thing to shit itself when saving the data.
bmk#1476: o.O https://cdn.discordapp.com/attachments/729741769738158194/742425900825444352/unknown.png
AI_WAIFU#2844: See, it's noisy as fuck
bmk#1476: huh
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742426316355403786/unknown.png
bmk#1476: ok so i increased it 10x
bmk#1476: and its still the same shape
bmk#1476: why is it upside down
shawwn#3694: Graph 1/x
shawwn#3694: Problem solved |
AI_WAIFU#2844: probably the output of GPT-1 vs gpt-2
AI_WAIFU#2844: change the -= to +=
bmk#1476: o.O
bmk#1476: how are they different?
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742427053638287391/unknown.png
bmk#1476: ok this *really* does not look right
bmk#1476: why is loss way down in the -100
AI_WAIFU#2844: Agreed
AI_WAIFU#2844: are you grabbing the right output?
AI_WAIFU#2844: you need the logits
AI_WAIFU#2844: preferably properly scaled
bmk#1476: as far as i can tell its grabbing the same thing as from gpt
bmk#1476: lemme double check
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742428313854935070/unknown.png
bmk#1476: ok so i can reproduce the right graph for gpt-1
bmk#1476: but the code is almost literally the exact same ;-;
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742428637382574140/unknown.png
AI_WAIFU#2844: are you providing labels
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742428693334589440/unknown.png
AI_WAIFU#2844: ? |
bmk#1476: spot the difference
bmk#1476: no but the gpt one isnt either
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742428860204974162/unknown.png
bmk#1476: this reproduces your graph
bmk#1476: but if i switch the comments it no longer works
AI_WAIFU#2844: hmm
AI_WAIFU#2844: Gimme a sec I'll try and get it working on my end
AI_WAIFU#2844: @bmk I figured it out, it's a logit normalization issue. GPT-1 normalizes the logits so their exponentials sum to 1, gpt-2 doesn't.
bmk#1476: I tried putting it through a log_softmax and it's still wonky
AI_WAIFU#2844: I think that was part of the issue. I'm not getting values that stretch over 50, but now it looks like it's spitting out garbage
AI_WAIFU#2844: >Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at gpt2 and are newly initialized: ['h.0.attn.masked_bias', 'h.1.attn.masked_bias', 'h.2.attn.masked_bias', 'h.3.attn.masked_bias', 'h.4.attn.masked_bias', 'h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.7.attn.masked_bias', 'h.8.attn.masked_bias', 'h.9.attn.masked_bias', 'h.10.attn.masked_bias', 'h.11.attn.masked_bias', 'lm_head.weight']
AI_WAIFU#2844: Chunks of the model don't appear to have been initialized
kizumeru#1577: Joined the server.
bmk#1476: google tells me that it isnt to be worried about
tg#7159: Joined the server.
errendir#1421: Joined the server.
AI_WAIFU#2844: @bmk I figured it out, the outputs of GPT2 need to be shifted by one for it to work. https://cdn.discordapp.com/attachments/729741769738158194/742525690230079528/loss_test.py
bmk#1476: im puzzled
bmk#1476: why wouldnt gpt need that too?
bmk#1476: shouldnt both have the same objective |
AI_WAIFU#2844: 🤷
AI_WAIFU#2844: I think its just that in the GPT-1 arch the inputs don't affect the corresponding outputs while for GPT-2 they do, so you have to shift by one to use it as an autoregressive LM
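For reference, a sketch of the per-position loss computation with that shift (assumes a transformers version where indexing the model output with `[0]` gives the logits):
```py
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2').eval()

ids = torch.tensor([tok.encode("some reasonably long document goes here")])
with torch.no_grad():
    logits = model(ids)[0]                     # (1, seq, vocab)
logprobs = F.log_softmax(logits, dim=-1)       # proper normalization

# position t predicts token t+1, hence the shift by one
per_position_nll = -logprobs[0, :-1].gather(-1, ids[0, 1:, None]).squeeze(-1)
```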
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742567868876456016/gpt2-117M-losspos.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742567914581786654/loss-gpt2-small.npy
AI_WAIFU#2844: Beautiful.
bmk#1476: if you want you can combine the lines onto one graph
bmk#1476: so we can compare
bmk#1476: the next one is gonna take https://cdn.discordapp.com/attachments/729741769738158194/742568403788628029/unknown.png
AI_WAIFU#2844: yup, once we've got all the runs we can try and figure out the empirical relationship
bmk#1476: this curve doesnt look too promising
bmk#1476: maybe its because of the scale change
AI_WAIFU#2844: Hmm, you can see the power law breakdown pretty quick. https://cdn.discordapp.com/attachments/729741769738158194/742570321391124590/Figure_3.png
AI_WAIFU#2844: I think we should wait for the bigger models to come back before making any serious judgements though. Could be just because the model has maxed out it's capacity on lower hanging fruit.
kindiana#1016: transformerxl has some graphs where they tried to vary context length btw https://cdn.discordapp.com/attachments/729741769738158194/742571419229093975/unknown.png
StellaAthena#3530: Joined the server.
AI_WAIFU#2844: I also think that maybe using text8 to measure this wasn't a good idea. It's a cleaned version of wikipedia. The average article length is 2.5KB or what I'm guessing is ~500 tokens. So we should expect a sharp dropoff in benefit around e^6-7.
Louis#0144: @bmk @Sid @Daj, @StellaAthena is super interested in AI ethics and the weight copy left idea
StellaAthena#3530: Heya. I’m a mathematician who works with provable systems and thinks what all y’all’re doing is very cool. I’ve been talking with @Louis about how to verify copyleft and am quite excited about how the idea dovetails nicely with stuff I’m already working on.
QSED#6120: Joined the server.
StellaAthena#3530: Please tell me this group’s name is a pun on ελευθερία (Greek for "freedom"). That would make my day.
Teqnicolor#8109: @StellaAthena It is
Louis#0144: Oh it is I think
Louis#0144: @Daj said the name is a Greek pun
Louis#0144: But I don’t remember what specifically
bmk#1476: yup that's right
bmk#1476: so yeah any input for the copyleft idea would be nice
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742576728827756614/unknown.png
StellaAthena#3530: @bmk How much experience do you guys have with verifiable computation?
Louis#0144: None AFAIK
Louis#0144: I think Daj knows a bit about that
bmk#1476: i dont know anything about it
bmk#1476: that being said i googled it and the wikipedia page says something about FHE which i once briefly skimmed a blog post about
bmk#1476: in other words i have no clue
StellaAthena#3530: Is this the best channel for an in-depth conversation about this?
bmk#1476: probably #legal is the best place
bmk#1476: also most of the server is asleep rn so if you want a more lively discussion you can check in tomorrow
StellaAthena#3530: What time zone are most people on?
StellaAthena#3530: Western Europe?
bmk#1476: at least daj and sid are
bmk#1476: (ofc, if you want to provide a bit of info rn, im not gonna say no, id love to hear about it :D) |
StellaAthena#3530: How about you catch me up on what you guys have already been thinking so I don’t 1) come totally out of left field to tell you how to do things and 2) don’t rehash things that have already been discussed
bmk#1476: well uh
bmk#1476: so we're pretty sure we want to release the model
bmk#1476: i.e not gate it behind an api or whatever
bmk#1476: but we also want to attach some kind of thing that makes it so people dont use it to Do Evil™
bmk#1476: we haven't agreed on what Do Evil™ means yet, but the two main areas i guess are alignment risk (think paperclippification) and just other general Bad Stuff like automated spam or whatever
bmk#1476: we're generally looking at a copyleft license for that
StellaAthena#3530: So, obviously freely releasing the full model and controlling how people use it are incompatible, at least for a sufficiently determined baddie
StellaAthena#3530: And copyleft is mostly unenforceable
bmk#1476: well, yeah, but for a sufficiently determined baddie they can already train it themselves
StellaAthena#3530: So what kind of bar for “bad guy effort” are you looking to beat?
bmk#1476: Our threat model is people who don't have the resources to train their own, basically
bmk#1476: (also, a fair warning: I'm of the pro-open side of the debate so anything I say is tinted with that perspective)
bmk#1476: But yeah we're not dead set on any approach
StellaAthena#3530: Let’s say that you can produce a “contract” that people can sign and if they sign it you can verify that they don’t do “bad things” with the model. However you cannot force someone to sign it and you cannot prevent them from downloading the model if they don’t sign it.
bmk#1476: what would that be useful for?
StellaAthena#3530: It would allow good actors to prove that they’re good
bmk#1476: well, it would allow them to prove that they did *some* good things
bmk#1476: if they can always download it separately anyways, then their signing of the "contract" doesnt really mean much
StellaAthena#3530: Maybe, it depends on what you care about. Given instances of the model in the world you can tell which ones are contracted, and so if someone publicly uses the model without a contract that might invite criticism. |
bmk#1476: interesting
bmk#1476: so youd have to be able to tell if the output was generated by a model with a contract
StellaAthena#3530: It makes you able to verify the claim “I did X with the model, and I did it in a good way”
StellaAthena#3530: Separately, @Louis mentioned that you were interested in detecting when people finetune your model? For language models, I’m pretty confident that I’ve solved that problem up to implementation weirdnesses. I haven’t implemented and tested it yet, but on paper it works. (Hey, I did say I was a mathematician)
bmk#1476: ooh, tell me more
bmk#1476: even if it's not useful in practice that sounds cool on its own
bmk#1476: also it's important to note that in most cases we wont have access to raw model output logits from other people's copies of the model, or even the ability to interact with it arbitrarily, but rather sampled text from that
StellaAthena#3530: Read this paper: https://arxiv.org/abs/2002.00937
StellaAthena#3530: It needs some tweaks to work on language models (it’s conceived of for image models) but because the output space of a language model is high dimensional you can integrate the “radioactive markers” into your model and produce a “radioactive” output
bmk#1476: i can see how you can hide stuff in the lower bits for pixels, but for text? it seems too discrete and you cant really hide that many bits in a sentence without changing its meaning too much
StellaAthena#3530: Oh, no. You’re modifying the space of word embeddings
bmk#1476: ?
StellaAthena#3530: If you look at Figure 2, imagine applying that idea to the space of word embeddings. This isn’t detectable on an output-by-output level but it means that the corpus of text the model generates has anomalous statistical properties.
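(Editor's toy sketch of the general idea, not the paper's exact method: nudge the model's output embedding matrix toward a fixed random "carrier" direction, then test whether a suspect embedding matrix is unusually aligned with that carrier. All names and magnitudes here are illustrative; the perturbation is exaggerated so the toy test is visible.)
```python
import numpy as np

rng = np.random.default_rng(0)
hidden, vocab = 768, 50257

W_out = rng.normal(size=(vocab, hidden)).astype(np.float32)   # stand-in unembedding matrix
carrier = rng.normal(size=hidden).astype(np.float32)
carrier /= np.linalg.norm(carrier)

marked = rng.random(vocab) < 0.1          # mark ~10% of vocab rows
W_marked = W_out.copy()
W_marked[marked] += 0.5 * carrier         # exaggerated shift for illustration

def alignment(W, direction):
    """Mean cosine similarity between embedding rows and the carrier direction."""
    W_norm = W / np.linalg.norm(W, axis=1, keepdims=True)
    return float((W_norm @ direction).mean())

print(alignment(W_out, carrier), alignment(W_marked, carrier))
# The marked matrix shows a systematic positive alignment; an unmarked one hovers near 0.
```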
bmk#1476: ah, so it's only detectable across lots and lots of samples
StellaAthena#3530: Yup
bmk#1476: and so youre saying theres a way to make this work thats also robust to finetuning, etc?
StellaAthena#3530: That’s the “on paper only” bit but yeah, I think so.
bmk#1476: how does that work? or is it really complicated
bmk#1476: also since people will know about the existence of this kind of model "tainting", could they be able to tune it in the opposite direction to mess it up?
Louis#0144: The idea is that if you fine tune it like that |
Louis#0144: The model stops working entirely
bmk#1476: o.O
bmk#1476: but.. how??
Louis#0144: Idk
Louis#0144: Tbh
Louis#0144: Lmao
bmk#1476: wat
Louis#0144: I haven’t gone in depth with this stuff
Louis#0144: Tbf
bmk#1476: so you can take a model, add some kind of imperceptible bias that doesnt hurt performance too much, and that is difficult to reverse?
bmk#1476: very very interesting
bmk#1476: also i'm approximately 110% certain that the moment this is released, some group of Hackers™ will take it upon themselves to reverse the protection, for no other purpose than to show it's possible and to give a cool talk at defcon
StellaAthena#3530: @Louis that wasn’t my idea, but I *love* that idea
bmk#1476: i know because that's exactly what i would be thinking of doing if i wasnt making the model in the first place
StellaAthena#3530: I have a couple ideas to prevent pretraining disrupting the marker, though this is fundamentally an applied question not a theoretical one.
Firstly, it’s notable that the methodology in the linked paper is effective even through model distillation. That alone is evidence that it may be resistant to pre-training.
bmk#1476: interesting
StellaAthena#3530: Secondly, to defeat the “low effort” hacker you can hide the embedded marker in a place where the directional derivative in the direction of the radioactive marker with respect to natural data is rather flat. It’s very common in image models for there to be flat regions of the gradient of the attribute space in regions of the image where no information relevant to the classification is.
StellaAthena#3530: I haven’t seen similar work for word embeddings but I see no reason to assume that that wouldn’t be true for word embeddings as well. This defeats the “low effort” hacker because a flat directional derivative is naturally resistant to changing much.
Louis#0144: Are we testing for fine tuning from API access?
StellaAthena#3530: Thirdly, to defeat the “high effort” hacker, given a word embedding w and a set of word embeddings S you can estimate how much data is needed to finetune w until it is in S. Working backwards, this may allow us to determine that a given radioactive word embedding is sufficiently far from a “naturally trained GPT-3” word embedding because the data required to get it to look like a “naturally trained GPT-3” is on the order of the data required to train GPT-3 from scratch.
StellaAthena#3530: @Louis Yup. We are testing large scale statistical correlations between words
bmk#1476: When you say word embeddings you mean the stuff at the end of the network just before it gets turned into logits right?
bmk#1476: We'd have to guess at that through the tokens generated which adds another layer of indirection
StellaAthena#3530: Yeah. Is referring to them as word embeddings not standard?
bmk#1476: so uh this is my mental model:
StellaAthena#3530: There’s a decent body of literature analyzing them (see, e.g., Aylin Caliskan’s work on language models encoding bigotry) that seems to indicate that we can reliably assess correlations between them without much issue.
bmk#1476: tokens ≤ n -> a matrix of vocab x hidden size -> neural network magic -> a matrix of hidden size x vocab -> a logit distribution over vocab for token n + 1
bmk#1476: and what we get to observe is a lot of instances of samples from that last step
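(Editor's shape-annotated version of this mental model, with hypothetical GPT-2-small sizes.)
```python
vocab, hidden, n = 50257, 768, 1024
# token ids           : (n,)             integers < vocab
# input embedding     : (vocab, hidden)  -> activations (n, hidden)
# transformer stack   : (n, hidden)      -> (n, hidden)
# output unembedding  : (hidden, vocab)  -> logits (n, vocab)
# softmax over the last axis gives a next-token distribution at every position;
# externally we only observe tokens *sampled* from that distribution.
```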
Daj#7482: @StellaAthena This all sounds extremely interesting. I'd like to reiterate that we're not committed to any kind of restrictive license necessarily, my own personal view is that it's a cool experiment potentially and might set an interesting precedent, and I don't see any major downsides. I don't harbor the illusion that we could actually enforce a copyleft license against a truly non-cooperative agent, but I think norms matter. The way I see it, copyleft would be a thorn in the side of people I'm not trying to help, and no hindrance to the people I do want to help. Of course, your radioactive data suggestion is extremely interesting (and would make a great research project), I would love to discuss that in more detail
shgidi#0284: Joined the server.
Sid#2121: Hey @shgidi ! Welcome to the inefficient sampling dojo! Check the channel description for more info on the project and do reach out if you have any questions 🙂
Aran Komatsuzaki#5714: I just noticed that MARGE did zero-shot unsupervised NMT (without fine-tuning) and performed reasonably on en-de without any translation-specific intervention like back-translation. imo this is the most interesting result in the paper, and it's yet another point in support of zero-shot learning with retrieval.
Aran Komatsuzaki#5714: @Deleted User
StellaAthena#3530: > @StellaAthena This all sounds extremely interesting.... The way I see it, copyleft would be a thorn in the side of people I'm not trying to help, and no hindrance to the people I do want to help. Of course, your radioactive data suggestion is extremely interesting (and would make a great research project), I would love to discuss that in more detail
@Daj I think this is exactly the right way to think about it. Copyleft is functionally unenforceable in most cases because you need to know
1. That the bad actor exists
2. That they are using your model
3. That they are using your model in a way that violates copyleft
In practice knowing 1-3 is very hard, especially if they take a non-zero number of steps to hide this. |
Daj#7482: Yup absolutely, I see this as one half "interesting experiment maybe" and one half "why not?", not as an actual strategy to effectively prevent abuse
Daj#7482: Though I'm definitely down for experimenting with those tracer methods for enforcement
StellaAthena#3530: I view these limitations as a motivation for the tracer method. Or, more generally, proof-based verification methods. We can’t catch people who hide from us, but we can catch people who lie to us and we can tell who is refusing to certify themselves as “good”
Daj#7482: I love that framing
StellaAthena#3530: Especially when you consider the fact that this is a socially situated problem, where for many entities being labeled as “refuses to certify themselves as good” can be problematic, it allows you to reclaim a lot of power.
StellaAthena#3530: Admittedly, less than the context I use these techniques in typically (two distrustful parties who want to work in a prisoner’s dilemma type partnership).
StellaAthena#3530: That’s where verifiable computation techniques really shine because you can threaten to pack up your toys and go home if they don’t play fair.
StellaAthena#3530: (Also, you can get them to sign a legally binding contract saying that they’ll play fair)
Daj#7482: This is the kind of cutting edge cryptoeconomics I like to hear
Daj#7482: (sorry if that word is tainted haha)
StellaAthena#3530: I’m less interested in economics personally, but yeah these same ideas are how Ethereum works.
Daj#7482: Prisoner's Dilemma is an economics problem to me, but that might be an untypical framing
Daj#7482: Anyways, I would love to cooperate on trying this out, though you're obviously the expert and would have to tell us what we can do to help
StellaAthena#3530: The word is “atypical” actually, FYI.
Daj#7482: Hah sorry, that's my German peeking through
StellaAthena#3530: Yeah, game theory is the kind of thing where every field wants to claim it as theirs
Daj#7482: (my framing comes from places like https://medium.com/@virgilgr/ethereum-is-game-changing-technology-literally-d67e01a01cf8 )
Daj#7482: But yeah, game theory is the better word
StellaAthena#3530: My degrees are in math and philosophy, so that’s where I am coming from.
Daj#7482: Unusual combination, but one that I wish existed more |
StellaAthena#3530: I agree 🙂
StellaAthena#3530: > Anyways, I would love to cooperate on trying this out, though you're obviously the expert and would have to tell us what we can do to help
@Daj Frankly, this is my ideal set up. I’m a mathematician, and I don’t let people call me a computer scientist unless they put the word “theoretical” in front of it. My job is more or less to be a “backseat coder” to real computer scientists rotfl.
Daj#7482: Hey sounds great to me
Daj#7482: I'm the platonic ideal of a computer scientist, meaning a mathematician that's bad at math
Daj#7482: Computing is the computer's job smh
StellaAthena#3530: Every time someone whines about having to take a course in proofs as a CS student or read an equation with too exotic symbols I remind them that they signed up for computer *science*.
Daj#7482: Yes! It's funny that's what everyone warned me about when going to uni but it was just right for me
StellaAthena#3530: A+
Daj#7482: I love a good hard implementation problem as much as the next engineer, but P=NP is what I talk about after the third beer haha
Daj#7482: So nice to have someone even more abstract around
Daj#7482: Is this a project you would actually have the time/interest to commit to soonish?
Sid#2121: > Every time someone whines about having to take a course in proofs as a CS student or read an equation with too exotic symbols I remind them that they signed up for computer *science*.
@StellaAthena I'm in this message and i don't like it
Sid#2121: not actually a CS student tho lol
Daj#7482: Sid is a better engineer than me
Daj#7482: So we have the whole hierarchy now haha
Sid#2121: i wouldn't put it that way, i'm just persistent hah
Daj#7482: That's what makes a good engineer tbh
Daj#7482: and good scientist |
StellaAthena#3530: > Is this a project you would actually have the time/interest to commit to soonish?
@Daj Yeah. I was specifically waiting for DEF CON to be over to join this chat for this exact reason 🙂
Daj#7482: and good anything really
Daj#7482: Awesome Stella! Man here I am wanting to take a week off and too many interesting things happen
Sid#2121: this all sounds super fascinating btw. Are there any actual implementations of the paper you posted above in code?
StellaAthena#3530: Yeah it’s on GitHub: https://github.com/facebookresearch/radioactive_data
Daj#7482: How would you like to approach this project? We're still finetuning our code to feature parity with OA and training lots of example models, but it's really just a matter of time till everything is right
StellaAthena#3530: Big picture, GPT-3 and GPT-2 work the same way right? We should be able to develop the architecture and validate the core ideas on the smaller model.
Daj#7482: Hopefully that's exactly correct
Daj#7482: GPT3 is the same architecture just more/bigger layers
StellaAthena#3530: So I would say that we can do this in parallel with the GPT-3 work.
Daj#7482: Yep, assuming the scaling doesn't fundamentally break something I would expect so
StellaAthena#3530: Some aspects will require fine tuning on the actual GPT-3 model, but making sure we know how to do it will work on GPT-2
Aran Komatsuzaki#5714: Never studied CS/EE formally. Studied algebraic geometry/topology and string theory etc. Somehow got into an ML PhD program in which I use no math at all.
Daj#7482: Sounds like a plan @StellaAthena , happy to follow your lead and provide technical assistance and hardware. We could also create a new project channel for it
StellaAthena#3530: @Aran Komatsuzaki if you want to do ML and algebraic geometry at the same time I have people I can introduce you to.
StellaAthena#3530: There’s some super cool cutting edge stuff there.
StellaAthena#3530: I don’t know what our compute needs look like, but I can probably commandeer a DGX-1 from time to time if that’s helpful.
Daj#7482: Our code is built on TPUs because we get those for free
Daj#7482: I'm not sure if running on GPUs vs TPUs would change the tracer method (I don't see why it would) |
Daj#7482: but if you want to verify on our precise code we'd probably have to use TPUs
Aran Komatsuzaki#5714: Thanks. But it doesn't matter to me whether a topic needs math or not. What matters to me is whether it leads to AGI or not, which is why I'm doing this.
Aran Komatsuzaki#5714: Just wanted to join the conversation lol
StellaAthena#3530: Hi 🙂
Daj#7482: I'm with you Aran haha
StellaAthena#3530: If we have TPU resources, great let’s use those.
Daj#7482: We have a _lot_ of TPUs haha
StellaAthena#3530: I’m mostly offering because I know freelance projects can be strapped for compute and having a DGX-1 to yourself can be very useful.
Daj#7482: For sure normally I'd be all over that
Daj#7482: Just we're bound to TPUs kinda
StellaAthena#3530: Where are our TPU resources coming from?
Daj#7482: Which is the ML equivalent of a Faustian bargain tbh
Daj#7482: > Where are our TPU resources coming from?
@StellaAthena Google's TFRC program. I was one of the first members and got them a lot of publicity so they're pretty generous
StellaAthena#3530: Sweet
Aran Komatsuzaki#5714: nice
Aran Komatsuzaki#5714: I'm hardly contributing to this project, but I'm watching you guys as a spirit animal.
StellaAthena#3530: What are we writing the code in?
Daj#7482: We appreciate it Aran haha! And you have cool discussions with Lucid
StellaAthena#3530: The Radioactive Data paper is in PyTorch |
Aran Komatsuzaki#5714: Thanks 🙂
Daj#7482: Mesh Tensorflow, Stella. I can invite you to the repo if you send me your github name
Sid#2121: > GPT3 is the same architecture just more/bigger layers
@Daj Not quite accurate, OA use sparse layers for their GPT3, we'll probably have to use some other technique eventually to cut down training time (likely local / linear attention)
Daj#7482: Unfortunately pytorch support for TPUs is terrible
Daj#7482: > @Daj Not quite accurate, OA use sparse layers for their GPT3, we'll probably have to use some other technique eventually to cut down training time (likely local / linear attention)
@Sid Ah yes I totally forgot this, you're correct
Sid#2121: but for all intents and purposes, it's the same
Sid#2121: the type of layer shouldn't affect the radioactive data i guess
Daj#7482: At this point I don't trust _anything_ to work in ML, TF, TPUs or _especially_ MTF before I've seen it with my own eyes haha
Sid#2121: but from what I can gather, the 'radioactive' stage is done in data preprocessing?
StellaAthena#3530: @Sid In the original paper, yes. Their goal is to determine when someone is training a model on their data. However that’s not quite what we want to do.
StellaAthena#3530: We want radioactive outputs, not radioactive inputs.
Daj#7482: I'm just now noticing that this could also be an extremely powerful way to combat using the model for spam, if the radioactivity of the text can be detected
StellaAthena#3530: Yes
StellaAthena#3530: This has a lot of powerful applications. An intermediate step that I recommend we do between “replicate the paper” and “apply it to GPT” is to use it to detect model stealing attacks
Daj#7482: How do you mean?
StellaAthena#3530: You know what a model stealing attack is?
Daj#7482: Reconstructing a model from outputs?
StellaAthena#3530: Yup |
Daj#7482: Ah that's what I thought, just wanted to make sure
Sid#2121: I'm having trouble seeing how this would transfer over to text, unless you had a large amount of the target's text outputs to analyze?
Daj#7482: Sounds really great to me, I'm super excited
Sid#2121: surely you couldn't detect the radioactivity from say, a single spam message
Daj#7482: Yea I think this would require a sizeable amount of text
Daj#7482: Detecting spammy blog networks or accounts, not messages
Daj#7482: Not perfect but better than nothing
StellaAthena#3530: Yes, this is fundamentally a technique for statistical analysis of many documents.
Daj#7482: How far is the theory on this method? Are we developing something novel here?
Sid#2121: has anyone tried this in the text domain before?
StellaAthena#3530: Nope
Daj#7482: Nice
Sid#2121: I can see how you could mark an image without disturbing the content, but not text
Daj#7482: I like trying new stuff
Daj#7482: What would you like the project channel to be called btw? I've been trying to think of some radioactivity related pun or title
StellaAthena#3530: We don’t. We mark the word embeddings the model generates, which produces a statistical bias
Daj#7482: #the-radiation-lab ?
StellaAthena#3530: That is absolutely a potential point of failure (the second most likely one IMO). But it seems like it should work to me.
Sid#2121: so, to detect from text, as a crude example, the text outputted would statistically contain the word 'table' more than normal text? or a string of words or something? is that the idea?
Daj#7482: This method sounds possible for sure but I have no idea until we actually try it |
Sid#2121: I'm trying to figure out how you would then detect from raw text
Sid#2121: or like, a certain misspelling would appear more often?
StellaAthena#3530: Have you read any of the literature on implicit bias in text models?
Sid#2121: nope
StellaAthena#3530: So let’s think about word2vec for a second for conceptual simplicity
StellaAthena#3530: If you train word2vec on the NYT and on reddit you’ll see mostly the same thing, but some significant differences.
Daj#7482: We just detect the density of memes
Daj#7482: ...that's partially true in a Dawkins sense I guess but not really
Daj#7482: haha
StellaAthena#3530: Reddit will tell you that the words “woman” and “sandwich” are much more likely to occur together. It’ll also tell you that “coon” and “Black person” are sometimes substitutable
Sid#2121: O.o
StellaAthena#3530: This isn’t hypothetical: this is something that has been shown
Daj#7482: Makes sense, unfortunately
Sid#2121: yes, reddit
StellaAthena#3530: In Reddit’s case, there’s a background bias called “being a piece of shit” that distorts the word embeddings.
Daj#7482: Man and Reddit is definitely in the better half of the internet haha
Daj#7482: But yes methodology makes sense to me
StellaAthena#3530: In our case, we will be deliberately creating a strange correlation between typically uncorrelated words.
StellaAthena#3530: On a sentence by sentence basis this will introduce a little weirdness, but on a larger scale it will represent a statistically significant pattern.
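(Editor's crude sketch of the detection side: count how often two normally-unrelated marker words co-occur in the same sentence across a large corpus of generated text, and compare against a reference corpus. The word choices and the `suspect_text` / `reference_text` inputs are purely illustrative; a real test would use a proper statistical significance measure over many marker pairs.)
```python
import re

def cooccurrence_rate(corpus, w1, w2):
    sents = re.split(r"[.!?]", corpus.lower())
    hits = sum(1 for s in sents if w1 in s and w2 in s)
    return hits / max(len(sents), 1)

rate_suspect = cooccurrence_rate(suspect_text, "table", "radioactive")    # hypothetical inputs
rate_baseline = cooccurrence_rate(reference_text, "table", "radioactive")
print(rate_suspect, rate_baseline)  # a persistent gap over enough text is the signal
```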
Sid#2121: Hm. So i guess it's about finding a balance between words that are atypical enough not to disturb the output too much, and words that are typical enough to give you enough statistical data? |
Daj#7482: Intuitively, I feel this should work in a high enough dimensional space
Daj#7482: And would be close to imperceptible
bmk#1476: @AI_WAIFU results for 345M https://cdn.discordapp.com/attachments/729741769738158194/742765237681520780/gpt2-345M-losspos.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742765265149886525/loss-gpt2-medium.npy
Daj#7482: So the big empirical question is whether text is high enough dimension
StellaAthena#3530: This is why the radioactive paper applies it to the input of their classifier. I spoke with the authors and they were hoping to apply it to the output, but needed a high dimensional space and the output of a classifier wasn’t.
Daj#7482: We should move this to #the-rad-lab to not clutter general and make things easily searchable later @StellaAthena @Sid
StellaAthena#3530: But for *generative models* it should be.
AI_WAIFU#2844: Nice, I'll compare it with the other results.
Louis#0144: Finally caught up
Louis#0144: Lmao
bmk#1476: @AI_WAIFU can you send me the npy for GPT-1?
AI_WAIFU#2844: I accidentally overwrote it while trying to get my script to work on GPT-2. Sorry.
bmk#1476: ouch
bmk#1476: im going to run the large and xl models with only 1000 samples first to get a rough look at what the graph looks like before spending an entire day and a half of gpu time refining the curves
AI_WAIFU#2844: That sounds like a good idea.
bmk#1476: also btw what are you thinking of doing with this info other than informing choice of context length for our new models
AI_WAIFU#2844: I didn't get that far. Originally I just wanted to quantify how much validation loss was being under reported. I noticed it would be useful for context length choices after I made the first plot.
bmk#1476: ah
AI_WAIFU#2844: Although I'm sure this has other uses. You could probably turn this into a small paper with a few more experiments. |
bmk#1476: Honestly it would be great if we could publish a paper under the banner of EleutherAI
bmk#1476: Would lend us a lot more credence as a Real AI Lab™
Daj#7482: We have like half a dozen proto papers floating about
AI_WAIFU#2844: I'm game
Daj#7482: Would be extremely awesome to actually get one published
bmk#1476: Ok what other experiments do we want
AI_WAIFU#2844: I want to see this same thing done on a corpus of novels.
bmk#1476: We could train a bunch of smaller and smaller models to go the other direction
bmk#1476: Ooh yes fine tune on bookcorpus
AI_WAIFU#2844: I suspect that the tapering in the loss is because text8 inherits from wikipedia, which is mostly small articles.
bmk#1476: Interesting
bmk#1476: I have a copy of bookcorpus floating around which has really long documents
AI_WAIFU#2844: That or just a catenation of things on project gutenberg
bmk#1476: Gutenberg is kinda small, XL might overfit
AI_WAIFU#2844: bookcorpus it is then
bmk#1476: Actually lemme double check sizes, one moment
bmk#1476: https://web.eecs.umich.edu/~lahiri/gutenberg_dataset.html
bmk#1476: We're talking about this set, right?
Deleted User#0000: oh, i added PG19 to huggingface https://github.com/huggingface/nlp/blob/master/datasets/pg19/pg19.py
Deleted User#0000: was going to start training on compressive transformers, but never got around to it |
AI_WAIFU#2844: I subtracted the small model loss from the medium model loss https://cdn.discordapp.com/attachments/729741769738158194/742778203399258242/Figure_4.png
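(Editor's sketch of that comparison plot, assuming the .npy files posted above hold the mean loss per context position; the exact file contents are a guess.)
```python
import numpy as np
import matplotlib.pyplot as plt

small = np.load("loss-gpt2-small.npy")
medium = np.load("loss-gpt2-medium.npy")

plt.plot(medium - small)            # how the gap between models changes with position
plt.xscale("log")
plt.xlabel("context position")
plt.ylabel("loss(medium) - loss(small)")
plt.show()
```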
Deleted User#0000: it's like 20k books
Deleted User#0000: `import nlp; nlp.load_dataset('pg19')`
AI_WAIFU#2844: Gutenberg provides a python API to their corpus
bmk#1476: what does this mean
bmk#1476: also where can i download it as a zip
Deleted User#0000: just use the hf nlp library bmk
bmk#1476: how install
Deleted User#0000: `pip install nlp`
bmk#1476: ah
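(Editor's minimal usage of the snippet above; the `nlp` library was later renamed `datasets`, and the `"text"` field name is an assumption about the PG19 schema.)
```python
import nlp

pg19 = nlp.load_dataset("pg19", split="train")
print(pg19[0]["text"][:500])   # first 500 characters of the first book, assuming a "text" field
```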
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742779407705571338/unknown.png
AI_WAIFU#2844: can you make the x axis logarithmic
bmk#1476: ok
Aran Komatsuzaki#5714: in that case i also recommend to make the y axis also logarithmic
Aran Komatsuzaki#5714: there's power law between log(nll) and log(everything).
AI_WAIFU#2844: Also this will distort results a bit but you could try running LOESS regression to smooth out the curves
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742780065477296178/unknown.png
Aran Komatsuzaki#5714: i mean nll and everything
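(Editor's sketch of the log-log fit being suggested: if loss(position) follows a power law, log(loss) is roughly linear in log(position), so a straight-line fit exposes where the power law breaks down. `loss` is an assumed per-position loss array like the .npy files above.)
```python
import numpy as np

pos = np.arange(1, len(loss) + 1)
slope, intercept = np.polyfit(np.log(pos), np.log(loss), 1)  # fit in log-log space
fit = np.exp(intercept) * pos ** slope
residual = np.log(loss) - np.log(fit)   # large residuals mark the breakdown region
```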
bmk#1476: ill just run it for 1.5 days
bmk#1476: to get more datapoints |
AI_WAIFU#2844: @Daj what's the main obstacle to turning these "proto-papers" into real papers?
bmk#1476: nothing
bmk#1476: there's no obstacle
bmk#1476: as long as someone wants to put in the time they can turn them into papers
Aran Komatsuzaki#5714: maybe you can make a list of them?
bmk#1476: there's a semi-list on the doc that i will expand
bmk#1476: so the obvious first candidates are our main projects https://cdn.discordapp.com/attachments/729741769738158194/742781144428445776/unknown.png
bmk#1476: but those will take time and resources
Daj#7482: What bmk said, it's just things various people have suggested that could be turned into papers eventually or not
Sid#2121: + now #the-rad-lab
bmk#1476: im drafting up a list rn
Sid#2121: yeah we could do with updating kanban/google doc
Sid#2121: it's been a while
bmk#1476: ive been updating the doc
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742781809858838558/unknown.png
Aran Komatsuzaki#5714: The Pile looks interesting. I always wanted the CC-derived dataset to contain weird things (e.g. latex files, so that they can generate math stuffs better). The dataset used in GPT-3 still looks lacking in diversity.
Sid#2121: Exactly yeah. There's absolutely no code at all, explicitly, but it's still so good at generating code
Aran Komatsuzaki#5714: Yup
Sid#2121: imagine what it'd be like with our github dataset included
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742782525033939024/unknown.png |
bmk#1476: lmk if i missed anything
Sid#2121: we also now have a *tonne* of extra features courtesy of @Deleted User that we can test
Sid#2121: so 1) local attention
bmk#1476: oh right other model architecture stuff
Sid#2121: 2) all attention memory key-values
Sid#2121: 3) axial positional embedding
Sid#2121: uhhh 4) moe!
Sid#2121: 5) GLUs
Sid#2121: (he's been busy lol)
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742782938923401336/unknown.png
Aran Komatsuzaki#5714: as an advisor to @Deleted User's research work, I'm afraid only moe will give you a huge boost.
bmk#1476: added to doc
Sid#2121: yeah MOE is the one I'm most hopeful about, as well
Daj#7482: really? I was least interested in MOE
Daj#7482: It seems like a hack to turn memory into compute
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742783243765416006/unknown.png
Aran Komatsuzaki#5714: also see gshard's results
bmk#1476: this table is very discouraging
bmk#1476: we just simply dont have 40x more memory
bmk#1476: like, even if we use the cpu memory trick |
bmk#1476: unless we do the crazy 1Q L2L trick
bmk#1476: but like
Daj#7482: GShard is underwhelming given its size
bmk#1476: *i'm not implementing that*
Daj#7482: MOE is cool when memory is cheap
Daj#7482: and you have conditional computing and co
Sid#2121: i'm equally hopeful for local / global attention mix and linear attention.
Aran Komatsuzaki#5714: the thing is that memory is cheap.
bmk#1476: not for us
Daj#7482: Yea compute is our cheap resource
bmk#1476: memory is our main bottleneck
bmk#1476: if memory wasnt a bottleneck we could just have data parallel across all 2048 cores
bmk#1476: that would be *so awesome*
Aran Komatsuzaki#5714: i think i addressed why memory can be cheap some time ago, but i don't remember the justification.
Aran Komatsuzaki#5714: damn lol
bmk#1476: memory *can* be cheap if we can figure out over-the-network L2L
bmk#1476: but that sounds like such hell that i dont know how to do it at all
Daj#7482: This also assumes certain properties of the computation
Daj#7482: Needs to be slower than network at key spots
Daj#7482: Yea this is not on our roadmap atm haha |
bmk#1476: im pretty sure theres enough bandwidth
Daj#7482: we'll wait for Noam to solve it
bmk#1476: latency can be fixed by caching in tpu cpus
bmk#1476: ok woah gutenberg is a lot bigger than i remembered
bmk#1476: ok so
bmk#1476: whats the best way for us to tune various gpt2s on gutenberg
bmk#1476: @Daj whats the canonical tpu tuning script
Daj#7482: There is none other than mine
Daj#7482: That I know of
bmk#1476: ok so
bmk#1476: we want to encode gutenberg
bmk#1476: i dont know how to do that
bmk#1476: and then we want to finetune different gpt2 sizes on that
bmk#1476: i have no idea how to use your script or whether that's difficult to do
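(Editor's hypothetical sketch of the "encode gutenberg" step: BPE-encode each book and write the token ids into a tfrecord for TPU training. This is NOT Daj's actual script, just the general shape of the preprocessing; the `"text"` feature key is an arbitrary choice.)
```python
import tensorflow as tf
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

def write_tfrecord(texts, path):
    with tf.io.TFRecordWriter(path) as writer:
        for text in texts:
            ids = tokenizer.encode(text)
            feature = {"text": tf.train.Feature(int64_list=tf.train.Int64List(value=ids))}
            example = tf.train.Example(features=tf.train.Features(feature=feature))
            writer.write(example.SerializeToString())
```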
Daj#7482: It's not too bad
Daj#7482: I can help you in ~1-2 hours
bmk#1476: ok
bmk#1476: @AI_WAIFU do you wanna get your hands dirty with tpu stuff too
Aran Komatsuzaki#5714: sorry, but could you tell me why memory is your bottleneck? I'm not really familiar with your budget constraint, so I'd like to know, since it wasn't a problem in the case of GShard. Also, they could've used a smaller number of cores and still fit their parameters fine. https://cdn.discordapp.com/attachments/729741769738158194/742787954291769434/img1.png
Daj#7482: Each TPU core has 16GB of memory |
Daj#7482: That's it
Aran Komatsuzaki#5714: https://cdn.discordapp.com/attachments/729741769738158194/742788142599372823/img2.png
Aran Komatsuzaki#5714: Seems like each device had only ~4GB.
Deleted User#0000: @Aran Komatsuzaki yeah, ideally i would give them local + linear + RT
Deleted User#0000: but i don't think RT would behave well distributedly
Daj#7482: for reference: GPT3 has around ~700GB of weights
Deleted User#0000: at least, i'm not sure how the clusters would be managed
bmk#1476: More in practice because duplication across some mesh dimensions
Deleted User#0000: i think rezero is worth one try
Deleted User#0000: and then, if that doesn't work, you can save by using scale norm
Deleted User#0000: it should be a tiny bit faster
Aran Komatsuzaki#5714: The rightmost one has 600B params.
Deleted User#0000: otherwise, Aran is right, MoE is the biggest gain
Daj#7482: > The rightmost one has 600B params.
@Aran Komatsuzaki That seems physically impossible unless the model is split across many cores
bmk#1476: What if we figure out network L2L
Deleted User#0000: MoE and PKM are strictly better than all-attention, so that's last to try lol
Daj#7482: 600B * 32bits...
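(Editor's back-of-the-envelope version of the constraint being discussed, counting fp32 weights only and ignoring optimizer state and activations, which make it considerably worse.)
```python
params_gpt3, params_moe = 175e9, 600e9
bytes_per_param = 4                       # 32-bit floats
tpu_core_mem = 16e9                       # bytes per TPU v3 core

print(params_gpt3 * bytes_per_param / 1e9)          # ~700 GB for GPT-3 weights
print(params_moe * bytes_per_param / tpu_core_mem)  # ~150 cores just to hold 600B weights
```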
AI_WAIFU#2844: I'm down, I just have no experience with TPUs
bmk#1476: Also 600B/40=effectively same performance as a 15B |