Daj#7482: > MoE and PKM are strictly better than all-attention, so that's last to try lol
@Deleted User Oh? Don't get me wrong I think it's all cool
Aran Komatsuzaki#5714: But they did it, so what does it mean?
Daj#7482: good question haha
bmk#1476: > I'm down, I just have no experience with TPUs
@AI_WAIFU don't worry we all suck at TPUs
Deleted User#0000: i think FF-GLU is worth turning on too, really trust Shazeer on that one
Deleted User#0000: that one time i excitedly messaged Aran with big gains, only to realize it was from the FF-GLU and not from my dumb research idea
Deleted User#0000: lol
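A minimal sketch of the FF-GLU being endorsed here (Shazeer's GLU-variant feedforward; the module name and 4x sizing below are our choices, not from the chat):

```python
import torch.nn as nn
import torch.nn.functional as F

class FFGLU(nn.Module):
    """Feedforward with a GLU: half of the first projection gates the other half."""
    def __init__(self, dim, mult=4):
        super().__init__()
        self.proj_in = nn.Linear(dim, dim * mult * 2)  # value and gate, concatenated
        self.proj_out = nn.Linear(dim * mult, dim)

    def forward(self, x):
        v, g = self.proj_in(x).chunk(2, dim=-1)
        return self.proj_out(v * F.gelu(g))  # GEGLU flavor; other gates also work
```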
Deleted User#0000: anyways, i've mostly gifted what works
Aran Komatsuzaki#5714: If you read Sec. 5 of my draft, you can see that extending the context length with efficient attn (e.g. Routing Transformer) doesn't result in an improved per-token loss for earlier tokens (e.g. first 1024 tokens).
Deleted User#0000: local-attention is rated as #1, you need to get that working
Aran Komatsuzaki#5714: Well, my draft is kind of irrelevant, since it doesn't really contain any empirical result provided by myself.
Aran Komatsuzaki#5714: One result you can find is in OpenAI's scaling paper. Longer context doesn't mean better per-token loss for earlier tokens.
Aran Komatsuzaki#5714: So, the generation of the first 1024 or so tokens doesn't benefit from longer context.
Aran Komatsuzaki#5714: I mean the first 1024 tokens of a given sample.
Aran Komatsuzaki#5714: In practice, predicting those tokens is pretty important, since many tasks are inherently short.
Aran Komatsuzaki#5714: Also, the gain from Routing Transformer etc is pretty small compared with adding more parameters like simply scaling the model up or using MoE.
Deleted User#0000: yea, no one has tried local attention at great depths either. it may be enough to increase the receptive field through the many layers
Deleted User#0000: and then just a couple global layers sprinkled near the end to integrate the learnings from previous layers
Aran Komatsuzaki#5714: Also, most datasets used in GPT-2/3 have an average sample length somewhere around 1024, so using longer context results in an improvement only on the samples longer than 1024 or so.
Aran Komatsuzaki#5714: So, the gain is even smaller than on datasets like Wikitext.
Aran Komatsuzaki#5714: in numbers, i mean.
Aran Komatsuzaki#5714: also saving ffn computes by replacing it with local attn when the TBPTT length is something like 1024 or 2048 is not so big, so it's a gain, but it's not a big one.
Deleted User#0000: @Aran Komatsuzaki do you know of any papers for faster training for auto-regressive models?
Aran Komatsuzaki#5714: sorry i mean saving full-attn computes by replacing ....
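Back-of-envelope numbers behind this point (a sketch; the constant factors are the usual matmul approximations, not exact):

```python
# Per-layer FLOP estimate at d_model = 1024, context n = 1024.
n, d = 1024, 1024
attn = 2 * (2 * n * n * d)     # QK^T scores plus the attention-weighted sum of V
ffn = 2 * (2 * n * d * 4 * d)  # two matmuls through the usual 4*d hidden layer
print(attn / ffn)              # 0.25: at this length, full attention is already the minor term
```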
Deleted User#0000: it's probably the billion dollar question right now..
Deleted User#0000: so i assume there's few
Aran Komatsuzaki#5714: "faster" = improving perf-computes tradeoff?
Aran Komatsuzaki#5714: in that case, you can read my draft lol
Deleted User#0000: like Electra
Deleted User#0000: we almost have Electra working, btw
Aran Komatsuzaki#5714: Electra improves perf-computes tradeoff, so the same thing isn't it?
Aran Komatsuzaki#5714: or are you talking about sample efficiency?
Deleted User#0000: yea it does, Electra gets to the same performance as Roberta with a quarter of the compute
Deleted User#0000: so huge savings
Alm#9130: Are there any experiments on having some layers character-based and using the output to create hidden states for words and then doing business as usual?
Aran Komatsuzaki#5714: yeah it's big
Alm#9130: Is there any way to use electra to make the encoder-decoder architectures more efficient?
Deleted User#0000: @Alm I think in T5, the encoder they used comes pretrained
Deleted User#0000: so certainly
AI_WAIFU#2844: @bmk What's the process for getting started with TPUs?
Deleted User#0000: but for decoder, i don't know of that many techniques
Deleted User#0000: the decoder is a big headache, to be honest.
Aran Komatsuzaki#5714: @Deleted User sorry ill get back later
Deleted User#0000: np, starting my day too
Deleted User#0000: chat laters
bmk#1476: so daj is the one most familiar with the tpu code (the old code, not the mesh code)
bmk#1476: but we need to tokenize to tfrecords, then do some dark magic incantations to get the training code working
bmk#1476: also what do we want to do for batch size? since as the models get bigger fitting the same batch size will get harder
bmk#1476: do we want same batch size throughout?
AI_WAIFU#2844: I think that would be best, worst comes to worst we can do microbatching right?
bmk#1476: yeah
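A minimal sketch of the microbatching idea (assuming `model` returns a scalar loss for a batch):

```python
def train_step(model, opt, batch, micro_bs):
    # Keep the effective batch size fixed while holding only micro_bs
    # examples in memory at once; gradients accumulate across chunks.
    opt.zero_grad()
    chunks = batch.split(micro_bs)
    for chunk in chunks:
        loss = model(chunk) / len(chunks)  # scale so the step matches one big batch
        loss.backward()
    opt.step()
```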
bmk#1476: like for 1.5B it can only fit 1 per core
bmk#1476: for 117M you can probably fit a lot per core
bmk#1476: maybe we just use bigger and bigger pods
bmk#1476: so our batch size should be 512 or something
bmk#1476: @Daj do you remember how many batch per core you can fit for each model size
bmk#1476: so we can know what size of pod to provision
Daj#7482: 1 per core for 1.5B (with adafactor), others fit quite a lot more
Daj#7482: Couldn't you use our mesh code?
Daj#7482: My old code has no microbatching or the like
bmk#1476: is the mesh code guaranteed to be correct?
Daj#7482: No, neither is my old code (though my old code is closer I guess)
AI_WAIFU#2844: What is the motivation for using Daj's old code?
Daj#7482: The old code is as close to OpenAI code as humanly possible
Daj#7482: But also old and janky
bmk#1476: the new code acts suspicious
bmk#1476: also getting the model to fit into hf code will be a challenge too
bmk#1476: we'd need to rename a lot of variables
AI_WAIFU#2844: I mean for this specific experiment. Hugging face provides TF implementations of GPT-2.
Daj#7482: I've never tried HF code on TPUs
Daj#7482: but I know they can't train on TPUs without modification
bmk#1476: no but like
bmk#1476: taking the trained model and inferencing on gpus
bmk#1476: thats gonna be hell with the mesh code
Daj#7482: Oh inference on TPUs is tricky
Daj#7482: Actually our mesh code is the only TPU inference code I know lol
bmk#1476: doing literally anything with mesh code models is gonna be hell
Daj#7482: Inferencing GPT2 on GPU just use HF
bmk#1476: > also getting the model to fit into hf code will be a challenge too
Daj#7482: HF already has OA models?
bmk#1476: > we'd need to rename a lot of variables
Daj#7482: Literally out of the box
Daj#7482: I'm not sure what you're trying to do lol
Daj#7482: If you just want losses and have GPU HF literally works out of the box
bmk#1476: if we train with mesh, getting that to inference with hf is going to be a big challenge
Daj#7482: Oh yeah probably
Daj#7482: Or not I'm not sure tbh
Daj#7482: The structure should be the same and renaming is easy
bmk#1476: renaming is kinda annoying
Daj#7482: Well it'll be the least of your problems when getting TPUs to run lol
bmk#1476: lol
bmk#1476: ok so
bmk#1476: do we use the mesh code
bmk#1476: actually how do we do encoding
Daj#7482: I'll help you set that up after dinner
bmk#1476: i have all the text files
Daj#7482: Use the new script
Daj#7482: In the mesh repo/datasets
Daj#7482: It's pretty well commented
Daj#7482: I'll help if you can't get it to work
bmk#1476: is it worth trying to figure out the tok16
bmk#1476: that shawwn was talking about
Daj#7482: No
Daj#7482: This dataformat and loader already works
bmk#1476: https://discordapp.com/channels/729741769192767510/729741769738158194/741850244899143711
Daj#7482: It's not worth trying to implement a new dataformat for an experiment this small haha
Daj#7482: But be my guest
bmk#1476: oh i was just wondering
Daj#7482: TF wants tfrecords, so we give it tfrecords lol
bmk#1476: wheres the oa tokenizer json?
bmk#1476: byte-level-bpe.tokenizer.json is the 32k vocab
Daj#7482: OA used their own, terrible encoder
Daj#7482: Oh right then you probably can't use the new code, unless HF has a compatible version released
Daj#7482: then you'd have to use my old encoder script, which is terrible
bmk#1476: wait are we using OA vocab or our vocab rn?
bmk#1476: im now confused
Daj#7482: OA vocab is the larger one
Daj#7482: We only used the 32k like once
bmk#1476: theres only one json file here
Daj#7482: Yes, the OA vocab used the old encoder
Daj#7482: I encoded our OA WT with the old encoder
Daj#7482: We haven't yet gotten around to making OA vocab work with the new script
Daj#7482: I'm sure HF implements it though
Daj#7482: Just google it
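HF does ship the OpenAI BPE, so a sketch of encoding to tfrecords could look roughly like this (the "text" feature name is a placeholder, not necessarily the mesh repo's actual schema):

```python
import tensorflow as tf
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")  # the 50257-token OA vocab

with tf.io.TFRecordWriter("data.tfrecords") as writer:
    for doc in open("input.txt"):
        ids = tokenizer.encode(doc)
        ex = tf.train.Example(features=tf.train.Features(
            feature={"text": tf.train.Feature(int64_list=tf.train.Int64List(value=ids))}))
        writer.write(ex.SerializeToString())
```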
Deleted User#0000: @bmk have you tried SeqGAN? i'm growing to be more interested in it
Deleted User#0000: lol
bmk#1476: i can send you my code
bmk#1476: no guarantees on whether it works
bmk#1476: if you end up writing a paper about it id be glad to contribute
Deleted User#0000: 🥳
bmk#1476: but i havent been able to get stuff 100% working
Deleted User#0000: it's such an old paper...
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742801120350437476/adafactor.py
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742801121948598402/train.py
bmk#1476: no guarantees provided
bmk#1476: that being said i *think* im doing stuff right
Deleted User#0000: thanks! i'll try training it from scratch
bmk#1476: the issue i ran into with it was that the generator validation loss seemed to not change
bmk#1476: no idea why
bmk#1476: like i ran it all night and came back the next day to see the val loss was exactly the same
Deleted User#0000: after seeing Electra start working, and the seeing the GAN paper this morning (where they also use adversarial training), i wonder if we are missing something for auto-regressive..
bmk#1476: i mean im a strong believer in RL+GAN+LM so i really really want to make it work, haha
Deleted User#0000: https://arxiv.org/abs/1902.04094
Daj#7482: Tangential: I have a strong hunch that Contrastive Coding is very useful for many tasks
Daj#7482: My dayjob had some amazing results with it
Deleted User#0000: https://parasj.github.io/contracode/
Deleted User#0000: the problem is we don't have good augmentation techniques for text
Daj#7482: That is a cool paper thanks
Daj#7482: > the problem is we don't have good augmentation techniques for text
@Deleted User Yes this is a problem with text unfortunately
Deleted User#0000: yea, it uses huggingface. oh man, the infinite value huggingface has provided the world
Deleted User#0000: > @Deleted User Yes this is a problem with text unfortunately
@Daj i've seen some paper do backtranslation, where you translation to a target language and back
Deleted User#0000: that's about it..
helen 🐳#5160: AI2 fit 2 examples per core on a v3-256 for the 1.5B model and if anyone can figure out how they did that i’d love to know
Aran Komatsuzaki#5714: anyhow, i hope cpu trick will be used to make memory cheaper.
Aran Komatsuzaki#5714: @Deleted User i summarized the major approaches to improve the perf-computes tradeoff in my draft, so you can just check them, except for some new spurious methods like DeLighT.
Aran Komatsuzaki#5714: For non-causal models like BERT, they aren't the way to AGI, since they require fine-tuning, which depends on the availability of fine-tuning dataset, which doesn't exist in the general task. Most real-life tasks have no dataset provided!
Aran Komatsuzaki#5714: I read the BERT Has a Mouth paper. Unfortunately, BERT variants without fine-tuning perform pathetically, so you want to stick with GPT-2/3.
Aran Komatsuzaki#5714: You can also read our response to Ethan's tweet (I think you read it): https://twitter.com/ethancaballero/status/1292118727371227142
Sid#2121: > the problem is we don't have good augmentation techniques for text
@Deleted User what about using synonyms from a thesaurus lol?
Aran Komatsuzaki#5714: Loren and Alex also gave a good response to Ethan.
Aran Komatsuzaki#5714: @Deleted User
Deleted User#0000: @Sid yup, they do that as well. the paper i read they did that + backtranslation
Aran Komatsuzaki#5714: This is precisely why OpenAI's GPT-N doesn't use fine-tuning or MLM.
Deleted User#0000: @Aran Komatsuzaki have they ever tried pretraining the encoder, then fine-tuning to auto-regressive?
Aran Komatsuzaki#5714: It's not for AGI.
Deleted User#0000: i think generation is just convenient for us to interpret what is being learned, but i think even an encoder has the capacity for general intelligence
Deleted User#0000: defer to you though lol
Aran Komatsuzaki#5714: Agreed. Encoder is good, but the problem is MLM.
Aran Komatsuzaki#5714: As a matter of fact, MARGE has an encoder, so encoder has no problem.
Aran Komatsuzaki#5714: > @Aran Komatsuzaki have they ever tried pretraining the encoder, then fine-tuning to auto-regressive?
@Deleted User
Haven't heard about that.
Deleted User#0000: for MLM, if you mask the right-most tokens, it's kinda like training auto-regressive
Aran Komatsuzaki#5714: yes but it's much less efficient
Deleted User#0000: yea
Sid#2121: > @Sid yup, they do that as well. the paper i read they did that + backtranslation
@Deleted User can you link to the paper? Does it confer as much advantage as augmentation for GANs/image classifiers?
Aran Komatsuzaki#5714: efficient in terms of both training and inference
Deleted User#0000: @Sid yea sure, i'll do some digging and link it. nothing ground-breaking though
Deleted User#0000: its one of those incremental papers
Aran Komatsuzaki#5714: fine-tuning to autoregressive, I believe, just gives you the same course of training curve, except it wastes the compute spent on the encoder and the first pretraining.
Aran Komatsuzaki#5714: anyway, i gotta sleep. see you later 🙂
Deleted User#0000: k laters, night
Deleted User#0000: ah, the other researcher i'm in touch with says the BERT has a mouth paper has a mistake in derivation, so ignore it
Deleted User#0000: @Sid https://arxiv.org/abs/1904.12848
zphang#7252: ^ I think the NLP results are rather weak for this paper.
Source: have tried extending it for more complex tasks, doesn't work well, at least with their specific setup.
Deleted User#0000: @zphang yea exactly
Daj#7482: @bmk I'm a bit busy this evening unfortunately, I can help you tomorrow if my code or HF's code doesn't work out
Deleted User#0000: https://arxiv.org/pdf/2006.15720.pdf
Eddh👽#7290: I'm curious about something. How does GPT3 handles debate. Like if you have it simulate a debate between Wittgenstein and Kant ?
Eddh👽#7290: I don't have access to gpt3 only to Dragon model in AIdungeon. It's interesting but has repetitions etc. I wonder how gpt3 fares.
Sid#2121: https://twitter.com/amandaaskell/status/1288539453015724032?s=21
bmk#1476: @Daj i modified your script for the oa vocab
bmk#1476: @AI_WAIFU wait do we really need to tune gpt2 on gutenberg? or is just using vanilla gpt2 good enough™
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742882387708346419/Screenshot_2020-08-11-17-09-01-946_com.android.chrome.png
bmk#1476: Finally some good fucking vram
bmk#1476: If this card actually has 24 GB and costs less than the Titan RTX, I'm stocking up on cards
bmk#1476: But they're probably going to launch a 48 GB Titan if that's the case
bmk#1476: And that's gonna be a tough decision
AI_WAIFU#2844: I don't think so. I think we can just use my script and change the input.
bmk#1476: ok ill just do that after 774M is done lol
bmk#1476: dont feel like fiddling around with tpus
AI_WAIFU#2844: We might see a more pronounced effect if we fine tune, but I would just try that with the 117m model before putting in a bunch of work.
AI_WAIFU#2844: For now let's just go with what came out of the box
bmk#1476: yeah
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742925122813296680/gpt2-774M-losspos.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742925134360477818/loss-gpt2-large.npy
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742925439403687936/losscombined.png
bmk#1476: @AI_WAIFU
AI_WAIFU#2844: Seems as though the benefit of larger model size is fairly uniform all the way between ~60 and 1024 tokens in this regime. https://cdn.discordapp.com/attachments/729741769738158194/742932661034680380/Figure_5.png
AI_WAIFU#2844: I think these experiments have some interesting implications for training. Specifically, I suspect that this implies that the optimal language model configuration will depend on what we want to use it for. If we're mainly doing short text generation or generation with small prompts, it will be better to train a wider model with a smaller context window. On the other hand, if we want more powerful meta learning capabilities, and long form coherent text generation, it will be better to train a smaller model with a larger context window.
bmk#1476: i'm not sure i understand how to interpret this graph
AI_WAIFU#2844: I subtracted the loss of the third model from the second. This graph shows the gap in loss vs number of sequence tokens. Also I dereferenced "this" in my other comment.
AI_WAIFU#2844: The graph is to get an idea of where the improvements in loss are coming from when we increase model size.
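Concretely, the comparison is just this (file names stand in for whichever two runs get diffed):

```python
import numpy as np
import matplotlib.pyplot as plt

small = np.load("loss-gpt2-medium.npy")  # mean loss at each context position
large = np.load("loss-gpt2-large.npy")
gap = small - large                      # positive where the bigger model wins

plt.plot(np.arange(len(gap)) + 1, gap)   # index+1 so position 0 survives the log axis
plt.xscale('log')
plt.xlabel('context position'); plt.ylabel('loss gap')
plt.show()
```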
bmk#1476: aside from that one anomalous data point on the left, it looks like most of the improvement comes from longer context with bigger models
bmk#1476: what's the matter with that, anyways?
bmk#1476: the gap on the left in my graph doesnt look visibly big
AI_WAIFU#2844: let me double check that I did things right.
AI_WAIFU#2844: nope, that not a mistake. there's a big gap between the first 2 numbers.
AI_WAIFU#2844: I'm using log(index+1) as my x axis, are you using log(index)?
bmk#1476: ```plt.xscale('log')```
bmk#1476: i mean your y axis looks wonky
bmk#1476: that one point at the very left
bmk#1476: why the hell is it all the way up there
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742943197696884846/unknown.png
AI_WAIFU#2844: Your plot is missing the first points in the numpy file. they're both above 7
bmk#1476: that is absolutley not 0.6 gap
bmk#1476: oh
bmk#1476: o.O
AI_WAIFU#2844: lmao
bmk#1476: so that's the first token
AI_WAIFU#2844: yup
bmk#1476: well thats an anomaly
bmk#1476: i think we can safely exclude it from analysis
bmk#1476: first token shouldnt mean much
bmk#1476: thats still baffling though
bmk#1476: 0.6 gap
AI_WAIFU#2844: I think it's notable. The gap shows up between small and medium and between medium and large
bmk#1476: yeah but like
bmk#1476: single token context is literally markov chain
bmk#1476: there shouldnt be much interesting going on there
bmk#1476: There's only 50000**2 different pairs of first-2 tokens anyways
bmk#1476: Having a big beefy network shouldn't be noticeably better than a dumb lookup table
AI_WAIFU#2844: It might be an artifact of training. They might ignore the first token when computing the loss.
AI_WAIFU#2844: Actually that's probably it. They probably did loss(inputs[:-1],outputs[1:])
bmk#1476: isnt it the other way around
bmk#1476: loss(inputs[1:], outputs[:-1])
AI_WAIFU#2844: uhh
bmk#1476: wait nvm i did dumb
bmk#1476: yeah that makes sense
bmk#1476: so we should trim out first data point
AI_WAIFU#2844: yes
AI_WAIFU#2844: Btw, are you running the gutenberg experiment or the full gpt-2?
bmk#1476: this is text8
bmk#1476: currently running gutenberg
bmk#1476: i have a feeling full gpt2 wont be very enlightening
AI_WAIFU#2844: Maybe try doing a run with n=10000 and I can do loess to smooth it out.
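One way to do that smoothing (statsmodels' lowess; the `frac` value below is just a starting point):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

loss = np.load("loss-gpt2-gutenberg-10k.npy")
x = np.log(np.arange(len(loss)) + 1)
smooth = lowess(loss, x, frac=0.1, return_sorted=False)  # smoothed loss per position
```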
bmk#1476: i actually have one from earlier:
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742946370020966450/loss-gpt2-gutenberg-10k.npy
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742946435280142456/Figure_1.png
AI_WAIFU#2844: I'll take a closer look.
bmk#1476: it looks visually not very promising but id need a side-by-side to be certain
AI_WAIFU#2844: how big was this gpt2?
bmk#1476: 117M
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742947440629514320/Figure_2.png
bmk#1476: ignore the main header
bmk#1476: the gap between text8 loss and gutenberg loss actually goes down
bmk#1476: o.O
bmk#1476: wat
AI_WAIFU#2844: I don't know how to interpret this graph
bmk#1476: So this is gap between text8 loss and Gutenberg loss
bmk#1476: You'd expect the gap to get bigger since Gutenberg has longer term dependencies
bmk#1476: So it should go down more
bmk#1476: But actually it doesn't
AI_WAIFU#2844: Is the sign correct?
bmk#1476: text8 - gutenberg
bmk#1476: gutenberg loss goes down = difference goes up
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742949076336640090/unknown.png
bmk#1476: to really drive it home
AI_WAIFU#2844: wat
bmk#1476: yeah it makes no sense
bmk#1476: unless i got something really mixed up
AI_WAIFU#2844: ...try it with a bigger model? Idk.
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742952768632782888/EKaM6e-XUAATeWx.png
aquajet#7800: stonks only go up
AI_WAIFU#2844: Language models only started doing what they were theorized to do when they got huge.
bmk#1476: ok tomorrow morning 100k for 117M and 345M should be done
bmk#1476: we can see then
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742953278303764520/23o7bn4eu2621.png
aquajet#7800: what is this experiment testing?
bmk#1476: what does context do to loss
aquajet#7800: ah, context would be the input sequence length, right?
bmk#1476: yeah
bmk#1476: wait i just realized something
bmk#1476: the gpt3 loss is artificially deflated because of the longer context length
bmk#1476: even if your model literally is not better, by having a long context length you can average out that initial spike
bmk#1476: like, even if you apply the exact same model on a rolling basis
bmk#1476: you can make the loss seem lower
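A toy illustration of this point (the curve shape is made up, but matches the plots above: per-token loss falls with position):

```python
import numpy as np

pos = np.arange(1, 2049)
loss = 3.0 + 1.5 / np.log(pos + 1)      # same model: loss just falls as context grows
print(loss[:1024].mean(), loss.mean())  # the 2048-token average comes out lower "for free"
```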
aquajet#7800: how is loss calculated for language models? Cause if it's dependent on the previous token's loss wouldn't that mean that it should be higher? Since the error from token one would affect the result for token 2 and so on
AI_WAIFU#2844: wait, are you guys using a shorter context length for GPT-Neo?
bmk#1476: no, same or longer
AI_WAIFU#2844: is OpenAI's validation loss the last token or the average?
bmk#1476: i *think* it's the average
bmk#1476: but im not actually sure
AI_WAIFU#2844: The difference is small but it's not totally insignificant.
bmk#1476: at least ive always implemented it as the average
aquajet#7800: so would someone be able to clarify how calculating the loss works for a causal lm? I have a target logit vector (which would be one for the target token and zero elsewhere) and the predicted logit vector (already run through the softmax?) and I take the difference between them?
aquajet#7800: and I average this for my entire sequence?
Deleted User#0000: yea, i think i can explain this (maybe)
Deleted User#0000: used to confuse me a lot as well
AI_WAIFU#2844: let me try. you have a logit matrix with one dimension being sequence position and another being the logit dimension, and a vector that corresponds to the correct tokens. You compute the logsoftmax along the logit dimension for every position, index it using the token vector, and average.
AI_WAIFU#2844: at least that's how we've all been doing it. we're currently evaluating the relationship between the average loss and the sequence position (amount of context)
bmk#1476: we havent decided whether we want that last operation to be an average or to just take the last one
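The recipe above, written out (shapes: `logits` is `[batch, seq, vocab]`, `tokens` is `[batch, seq]`):

```python
import torch.nn.functional as F

def per_token_loss(logits, tokens):
    logp = F.log_softmax(logits, dim=-1)                      # normalize over the vocab
    nll = -logp.gather(-1, tokens.unsqueeze(-1)).squeeze(-1)  # pick out each correct token
    return nll  # [batch, seq]: .mean() for the average, [:, -1] for last-token-only
```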
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742958343479623710/gpt2-774M-losspos.png
bmk#1476: if you average, the peak on the left will screw you up
AI_WAIFU#2844: Theoretically I argue it should be the last one, but then you get more variance in your validation estimates and you need to do more work.
Deleted User#0000: you'd want all of them
bmk#1476: what about the big peak though
Deleted User#0000: otherwise if you are generating starting from the first couple tokens
Deleted User#0000: it wouldn't generalize to that case
kindiana#1016: when training you want all of them
AI_WAIFU#2844: But when we're training, can't the start of a document start midway through the batch?
Deleted User#0000: when evaluating, you take the last one, yea
bmk#1476: i mean evaluation
bmk#1476: id argue it makes the most sense to only consider the last half of the context when evaluating
kindiana#1016: if you want the best results when evaluating you should only take the last "couple"
kindiana#1016: how many you take would depend on how much compute you want to spend lol
Deleted User#0000: ohh, when evaluating, you'd just take the last one
Deleted User#0000: or, that's how i did it
bmk#1476: see heres the problem
bmk#1476: everyone does it differently
bmk#1476: i always just averaged even for eval
kindiana#1016: sounds like we should all use transformer xl and not worry about it
kindiana#1016: xP
bmk#1476: doesnt that make stuff even more annoying
Deleted User#0000: so, if my output tensor is `batch x seq x num_tokens`
kindiana#1016: every token gets enough context with recurrance in txl
Deleted User#0000: i take `[:, -1, :]` and sample from that
Deleted User#0000: but you do some `[:, range?,:]`?
bmk#1476: i do `[:, :, :].mean()`
bmk#1476: wait
Deleted User#0000: ohh, but the tokens that are not the last are not predicting the next token
bmk#1476: theres no num_tokens but otherwise yeah
Deleted User#0000: you'd be getting some average of predictions over all tokens
bmk#1476: `[:,:,:].gather(2,x).mean()`
bmk#1476: wait
Deleted User#0000: what is x?
bmk#1476: are we talking about evaluateion or generation
Deleted User#0000: i think we are confusing aquajet
bmk#1476: im only talking about evaluation
Deleted User#0000: he/she is probably asking about training
bmk#1476: not generation
Deleted User#0000: i think evaluation and generation goes hand in hand?
Deleted User#0000: ohh, do you mean getting the validation loss?
Deleted User#0000: or
bmk#1476: yeah
Deleted User#0000: ahhh yea, makes sense for validation loss!
Deleted User#0000: yea, it's the same as training loss
Deleted User#0000: aquajet is super confused now
Deleted User#0000: lol
Deleted User#0000: just read AI_WAIFU's first comment
Deleted User#0000: @aquajet so, like if you are doing image classification, you would input some bunch of pixels and then predict a class, like `2` or `3`
Deleted User#0000: in causal LM, you give it a sequence `[2, 3, 4]` and you have it predict `[3, 4, 5]`
Deleted User#0000: if the original sequence is `[2, 3, 4, 5]`
Deleted User#0000: we do this easily by doing `input = seq[:, :-1]` and `label=seq[:,1:]`
Deleted User#0000: usually, in say resnet, you would come out with dimensions like `batch x num_classes`
Deleted User#0000: well, in causal lm, it would be `batch x seq x num_classes`
Deleted User#0000: and the cross entropy is done over all tokens of the sequence
Deleted User#0000: and then summed or averaged
Deleted User#0000: you can basically think of it like image classification, but you are predicting multiple things
Deleted User#0000: where what you are predicting is always the very next step
Deleted User#0000: and that's it.. GPT-3 emerges lmao
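A minimal sketch of the above (assuming `model` maps token ids `[batch, seq]` to logits `[batch, seq, vocab]`):

```python
import torch.nn.functional as F

def lm_loss(model, seq):
    inp, labels = seq[:, :-1], seq[:, 1:]  # every position predicts the very next token
    logits = model(inp)                    # [batch, seq-1, vocab]
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), labels.reshape(-1))
```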
Deleted User#0000: the other thing to know is that, inside the causal LM, each token can only attend to itself and the past
Deleted User#0000: so the input is `[2, 3, 4]`
Deleted User#0000: 2 cannot see 3, and 4
Deleted User#0000: 3 cannot see 4
Deleted User#0000: 4 can only see itself
Deleted User#0000: oops
Deleted User#0000: reversed
Deleted User#0000: 2 can only see itself
Deleted User#0000: 3 can see itself and 2, and 4 can see all the rest
Deleted User#0000: you can only see the past
Deleted User#0000: not the future
Deleted User#0000: and you are trying to predict the next step in the future
Deleted User#0000: this is done in the causal LM with the confusing causal mask
Deleted User#0000: which removes attention from past to future
Deleted User#0000: that's it... that's really all there is to it, save for the attention equation
Deleted User#0000: which is `attn = (q @ k.t()).softmax(dim=-1) @ v`
bmk#1476: the python makes that unnecessarily complicated looking
bmk#1476: `s(QK^T)V`
Deleted User#0000: yea, multi-head attention makes it a bit more confusing
bmk#1476: the `@` exudes chaotic energy
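The mask and the equation together (single head for clarity; real implementations do the same per head):

```python
import torch

def causal_attn(q, k, v):                               # each [seq, d]
    scores = (q @ k.t()) / q.size(-1) ** 0.5            # scaled dot products
    future = torch.triu(torch.ones_like(scores), diagonal=1).bool()
    scores = scores.masked_fill(future, float('-inf'))  # remove attention to the future
    return scores.softmax(dim=-1) @ v
```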
Deleted User#0000: maybe Dextra and its variants will get rid of it?
Deleted User#0000: who knows
Deleted User#0000: you thinking of some creature from Nethack?
Deleted User#0000: @aquajet @ is shorthand for matrix multiply
aquajet#7800: > and the cross entropy is done over all tokens of the sequence
Thanks! I think I'm most confused on this part. We get a predicted [batch x seq_len x vocab] logit matrix from the model, and how do we get a similar target logit matrix from the target output? In the [3,4,5] example would our target matrix be something like
```
0 0 1 0 0
0 0 0 1 0
0 0 0 0 1
```
where 3,4,5 are the third fourth and fifth token in our vocab? It's also still a bit confusing how the model can seemingly predict the 3,4, and 5 in one pass (whereas in generation we would need to sample, take the output seq and do another pass with it as an input) but I'm guessing we create the same logit matrix during generation but throw out later rows (since we need to sample from the current row/position)
Deleted User#0000: if it were 1-indexed
Deleted User#0000: yea, that's what it would look like
Deleted User#0000: and there's only 5 tokens total in your vocabulary
Deleted User#0000: usually there's like 20k +
aquajet#7800: yeah
aquajet#7800: are the models usually 1 indexed?
Deleted User#0000: so `0001000000000000000000000...`
Deleted User#0000: `00001000000000000000000000....`
Deleted User#0000: yup
Deleted User#0000: oh no, they are usually 0 indexed
Deleted User#0000: so just add an extra layer of 0's to the left
Deleted User#0000: and you are fine
aquajet#7800: oh ok I see
Deleted User#0000: and yea, the model, when it is trained, is trained over all tokens, because you can do cross entropy in parallel
Deleted User#0000: over all the logits
Deleted User#0000: when generating, it is 1 by 1
Deleted User#0000: trying to get parallel decoding working is still an open research question
Deleted User#0000: (generating more than 1 at a time)
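And the 1-by-1 decoding being contrasted with parallel training (greedy here for brevity; normally you'd sample):

```python
import torch

@torch.no_grad()
def generate(model, seq, steps):                         # seq: [batch, t] token ids
    for _ in range(steps):
        logits = model(seq)                              # [batch, t, vocab]
        nxt = logits[:, -1, :].argmax(-1, keepdim=True)  # only the last position is decoded
        seq = torch.cat([seq, nxt], dim=1)
    return seq
```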
bmk#1476: @AI_WAIFU the 100k graph is even more weird https://cdn.discordapp.com/attachments/729741769738158194/742970116165206056/figure-gpt2-gutenberg.png
bmk#1476: is it just me or does it look distinctly.. wobbly?
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742970431887376404/unknown.png
bmk#1476: also, still getting same graph shape but less noise obv
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/742970605606797321/loss-gpt2-gutenberg.npy
bmk#1476: we'll see tomorrow morning what the curve looks like for bigger models
Deleted User#0000: ohh nice! is this with the full project gutenberg?
AI_WAIFU#2844: I'm not going to pretend I have the faintest clue why that plot looks like that.
kindiana#1016: is it always better to train with longer context assuming tokens/s is constant? 🤔 (assuming very long documents)
kindiana#1016: at the extremes would 1M ctx and bs=1 be better than 100k ctx and bs=10?
kindiana#1016: feels like longer context is helpful for longer range correlations but at some point the smoother gradient you get from less correlated training samples should win out?
Aran Komatsuzaki#5714: exactly. you usually need a large enough batch size for diversity reasons.
Aran Komatsuzaki#5714: actually the gain from extending ctxt length isn't that much compared with other factors, including
Aran Komatsuzaki#5714: bs
Aran Komatsuzaki#5714: parameter size
kindiana#1016: I wonder if there is any analysis in the optimal bs and ctx for a constant bs*ctx
Aran Komatsuzaki#5714: no analysis
Aran Komatsuzaki#5714: wait, i wrote about it in my draft.
Aran Komatsuzaki#5714: but the trade-off isn't really an important question, since
Aran Komatsuzaki#5714: extending the TBPTT length with efficient attention itself is a suboptimal approach.
Aran Komatsuzaki#5714: It's just not the scalable approach.
kindiana#1016: interesting, whats would you say is better?
kindiana#1016: im looking into some log(n) attention
Aran Komatsuzaki#5714: There are two approaches that actually scale: conditional computation and retrieval-based approach.
kindiana#1016: what I'm thinking of actually combines both of those lmao
Aran Komatsuzaki#5714: I mean extending the attention length in the conventional sense just doesn't work even if there's such a thing as log(n) attention.
Aran Komatsuzaki#5714: but if you consider retrieval as a form of attention, then you have O(1) attention over the entire dataset.
kindiana#1016: yeah, its not traditional attention, its a weird mix of hard hierarchical attn and normal softmax attention
Aran Komatsuzaki#5714: MARGE-like approach is very promising. I'm trying to extend it.
Aran Komatsuzaki#5714: into causal language modeling
Aran Komatsuzaki#5714: It's going to be popular later this year and next year.
Aran Komatsuzaki#5714: i'm not saying my particular approach is going to be popular, but i'm saying marge-variants will be.
StellaAthena#3530: > I mean extending the attention length in the conventional sense just doesn't work even if there's such a thing as log(n) attention.
@Aran Komatsuzaki I’m missing the context, but there is such a thing as an O(log n) algorithm. You may be confusing this with O(1/n) algorithms, which don’t exist.
kindiana#1016: I think retrieval based "attention" will be very interesting, as it allows you to fine-tune model outputs by just changing the retrieval dataset
Aran Komatsuzaki#5714: no "doesn't work" means "not scalable"
Aran Komatsuzaki#5714: fine-tuning also doesn't make much sense from AGI standpoint.
Aran Komatsuzaki#5714: it presumes the existence of samples specific to the tasks of your interest. But general tasks don't have the corresponding samples available.
kindiana#1016: > no "doesn't work" means "not scalable"
@Aran Komatsuzaki why not?
Aran Komatsuzaki#5714: There are many reasons
Aran Komatsuzaki#5714: It's stated in my draft, but one main reason is that most samples of our interest aren't that long, and the approach of extending TBPTT length with efficient attention doesn't actually improve the per-token loss of earlier tokens (e.g. the first 1024 tokens or so) if your baseline has TBPTT length = 1024.
kindiana#1016: I agree that somewhat longer contexts are not useful for most cases, but with log(n) attention, you can attend over a significant fraction of the dataset with cached activation to learn some sort of retrieval mechanism
Aran Komatsuzaki#5714: there's another problem: you need to have a sufficiently large bs and a sufficiently small total # of minibatches.
Aran Komatsuzaki#5714: it'll take forever to explain, so maybe i'll stop here lol
kindiana#1016: in papers like reformer and transformer xl they don't evaluate the whole attention window at once but instead compute it incrementally, and I think that should allow scaling of bs without reducing ctx (defs will have issues in extreme cases though because history is computed with weights which are too old)
kindiana#1016: > it presumes the existence of samples specific to the tasks of your interest. But general tasks don't have the corresponding samples available.
@Aran Komatsuzaki I guess fine tuning is not the best way to put what I'm describing, but some way of changing model output when you have more data than can fit in a prompt but not enough data/compute to retrain the model. e.g. a book related to a topic the LM is writing on or something, and not necessarily exactly examples of what you want the output to be
kindiana#1016: (I'm reading your draft now btw)
kindiana#1016: from your classification, I'd put my scheme at explicit memory (recurrence) which is accessed using a learned retriever
Aran Komatsuzaki#5714: i see
kindiana#1016: theres also some conditional computation using sparse-in-time layers, but thats not really relevant to the attention mechanism
Aran Komatsuzaki#5714: explicit memory is kinda similar to just extending the TBPTT length in that it doesn't improve the earlier tokens' per-token loss.
Aran Komatsuzaki#5714: but you argued that, since it covers large portion of training dataset, it should work
Aran Komatsuzaki#5714: oh wait
Aran Komatsuzaki#5714: if you use recurrence, the cached activations should become stale after some iterations, so the retrieval doesn't really work there.
Aran Komatsuzaki#5714: this is part of the reason why you want to use retrieval-based approach, since the thing you retrieve doesn't get stale.
Aran Komatsuzaki#5714: also storing memory for the entire training dataset is such a pain, since
Aran Komatsuzaki#5714: each time you make it retrieve relevant info, either it has to load the entire memory (would be a memory bottleneck) or
Aran Komatsuzaki#5714: you have to use hierarchical thing (you mentioned before)
Aran Komatsuzaki#5714: in which case it's hard to train your retriever, since you can't use gradient info.
kindiana#1016: neural networks can learn how to do tree search pretty well (alphago etc)
Aran Komatsuzaki#5714: yes but very inefficiently computes-wise
kindiana#1016: its definitely inefficient compared to an optimized knn or a database but you only need to do log(n) so it shouldnt be too bad 🤷
kindiana#1016: and if you restrict attention patterns you don't have to do a full log(n) lookup for every single token
bmk#1476: @AI_WAIFU https://cdn.discordapp.com/attachments/729741769738158194/743111403430215720/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/743111683714580610/unknown.png
bmk#1476: wat
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/743111929756647464/unknown.png
AI_WAIFU#2844: can you post the npy file
AI_WAIFU#2844: ?
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/743128200036745226/loss-gpt2-medium-gutenberg.npy
Deleted User#0000: @Aran Komatsuzaki do you know any other papers that are following up on Marge?
Aran Komatsuzaki#5714: i don't think so, but let me check it
Aran Komatsuzaki#5714: yeah no paper on marge
Aran Komatsuzaki#5714: cuz marge was published at the end of June
Aran Komatsuzaki#5714: also, i don't see many people being excited about marge, so i think a follow-up paper will prob come from FAIR
Deleted User#0000: it's probably because the retrieval mechanism introduces an engineering problem that most researchers are not equipped to solve
Aran Komatsuzaki#5714: well i think they just don't know the real implication of marge
Deleted User#0000: ahh yea, perhaps that too, it is fairly recent
Aran Komatsuzaki#5714: also most people haven't heard about it
Aran Komatsuzaki#5714: only non-qa people who are excited about it are prob me, you and madison
Aran Komatsuzaki#5714: also qa people are mostly excited with fusion-in-decoder rather than marge, i guess
Aran Komatsuzaki#5714: probably just lewis and his immediate friends are excited
Aran Komatsuzaki#5714: madison said alec et al is also interested in retrieval methods, but prob they don't know marge well yet
Aran Komatsuzaki#5714: *alec radford
Aran Komatsuzaki#5714: also they are excited with zero-shot obviously
Deleted User#0000: yea, no worries, the results from fusion-in-decoder paper speaks for itself
Aran Komatsuzaki#5714: yeah cuz it's at the top of list, while marge wasn't evaluated on the usual tasks
Deleted User#0000: they beat GPT-3 few-shot on question answering
Deleted User#0000: just saying it out loud for people in the room i guess
Aran Komatsuzaki#5714: that's not a hard thing to do, since other methods also beat it
Aran Komatsuzaki#5714: haha
Deleted User#0000: yea, fusion in decoder is way simpler though, than say Realm
Aran Komatsuzaki#5714: yeah, and marge is even simpler lol
Aran Komatsuzaki#5714: cuz you don't even need dpr
Deleted User#0000: yea, once i wrap up the electra work this week, lets do Marge. it's prob a month long project
Aran Komatsuzaki#5714: cool
Deleted User#0000: i haven't dealt with world of annoy and faiss yet
Deleted User#0000: it would be a learning experience
Aran Komatsuzaki#5714: yes
Deleted User#0000: annoy is a big success at spotify
Deleted User#0000: they use it for music search there
Deleted User#0000: music recommendation
Aran Komatsuzaki#5714: cool
Deleted User#0000: come to think of it, Realm is interesting, because a natural byproduct is you train a NN for retrieval
Deleted User#0000: maybe at scale, it may emergently get better and better?
Deleted User#0000: anyways
Deleted User#0000: retrieval based methods are another world
Deleted User#0000: im just spouting nonsense lol
Deleted User#0000: thinking of some RL system + Realm
Sid#2121: sorry for the naive q - haven't read MARGE yet, but wouldn't you need access to the whole training dataset to run inference with a retrieval based language model?
Aran Komatsuzaki#5714: i don't remember realm by now, but i had a good justification why it's suboptimal to dpr-based models and marge.
Aran Komatsuzaki#5714: i don't recommend rl. it appears that most tasks that humans perform seem to be solvable with transformer lm.
Aran Komatsuzaki#5714: i mean obviously there are human tasks that can be solved by rl, not transformer mle as of now.
Aran Komatsuzaki#5714: but given the trend, it seems safer to assume mle will cover rl tasks than other way around.
bmk#1476: you mean that mle will supplant rl?
bmk#1476: for like imitation learning etc
Aran Komatsuzaki#5714: @Sid After the model computes the embedding of each training sample, you can use faiss to cluster the embeddings, so that upon inference you can just retrieve the knn, which means you don't need access to all of them.
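A sketch of that step with faiss (the dimension and cluster count are placeholders; MARGE itself also periodically refreshes the embeddings):

```python
import numpy as np
import faiss

d = 768
emb = np.random.randn(100_000, d).astype('float32')  # stand-in for precomputed sample embeddings
faiss.normalize_L2(emb)

quantizer = faiss.IndexFlatIP(d)
index = faiss.IndexIVFFlat(quantizer, d, 1024, faiss.METRIC_INNER_PRODUCT)
index.train(emb)                                     # clusters the embeddings
index.add(emb)
index.nprobe = 16                                    # clusters searched per query

query = np.random.randn(1, d).astype('float32')
faiss.normalize_L2(query)
scores, ids = index.search(query, 8)                 # knn without scanning every sample
```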
bmk#1476: this is of interest to me because ive been thinking a lot about using rl to replace mle for lms
Aran Komatsuzaki#5714: well, gpt-3 is solving the tasks that people used to use rl to tackle with, so in that sense.
bmk#1476: i mean, i cant help but shake the feeling that rl is actually the right objective
Aran Komatsuzaki#5714: chess is one of them, but obviously it's not super-human-level.
bmk#1476: see: imitation learning
bmk#1476: if you train a robot policy with mle it will quickly diverge from the training data
bmk#1476: and it wont be able to self-correct
bmk#1476: theres reason to expect this to happen with lms too, just more insidiously
bmk#1476: gpt* often go off the rails eventually at some point and cant recover
Aran Komatsuzaki#5714: i understand that it doesn't work well right now, but i think the way we apply mle to many rl tasks rn is just not done right.
Aran Komatsuzaki#5714: i see some prominent mlers who have a similar sentiment, so it's not really an unpopular opinion.
Deleted User#0000: yea, people in the meta-learning and rl space have been trying forever to get few-shot learning to work. so many papers in that space
Deleted User#0000: we really can't grasp emergence
bmk#1476: i still really *really* want to get seqgan working
bmk#1476: the idea is so enticing
bmk#1476: the theory seems to suggest that it's the perfect solution
Aran Komatsuzaki#5714: yeah. unlike mle training space (e.g. cnn, transformer etc), the progress in rl/meta-learning seems very slow.
Aran Komatsuzaki#5714: seqgan? lol
Deleted User#0000: i mean, if Stephen Wolfram spent a portion of his life studying cellular automata, and cant come up with better theories for the system other than 'emergence', should we just concede there's dynamic properties of the universe we will never understand?
Noa Nabeshima#0290: I expect to be available to help within the next two days
Aran Komatsuzaki#5714: i'm a fan of neither rl nor gan 🙂
bmk#1476: yeah, isnt seqgan the prototypical RL + language model
bmk#1476: https://www.lesswrong.com/posts/8QzZKw9WHRxjR4948/the-futility-of-emergence
Noa Nabeshima#0290: If anyone thinks I might be helpful, is there any sort of literature or documentation I should read?
bmk#1476: (on the topic of emergence)
Aran Komatsuzaki#5714: actually seqgan is the first thing i tried in my career.
Deleted User#0000: i've about had it with GANs lol
Aran Komatsuzaki#5714: it sucked so hard i discovered transformer lm works well
Deleted User#0000: they are the most frustrating projects i've ever worked with lol
bmk#1476: i dont get why it doesnt work
bmk#1476: it seems theoretically perfect
Noa Nabeshima#0290: > I expect to be available to help within the next two days
to start helping in the future, I mean
Deleted User#0000: gladly hope MLE for image generation takes off
bmk#1476: the only thing i can think of is bigger batch size
Louis#0144: dont even get me STARTED on emergence
Louis#0144: oh god
Aran Komatsuzaki#5714: because neither gan nor rl works unfortunately.
Louis#0144: Ive had this argument so many times
Louis#0144: emergence != practical
Louis#0144: at all
bmk#1476: i just cite the LW post and move on
Louis#0144: Stephen Wolfram is a hack
Louis#0144: That is all I want to add
bmk#1476: why does RL *not* work?
Louis#0144: the dude hasnt contributed anything meaningful in decades
Aran Komatsuzaki#5714: yeah wolfram is a hack
bmk#1476: no see but *this time* his automata theory of the universe might be right! /s
Aran Komatsuzaki#5714: and theory != empirical performance
Aran Komatsuzaki#5714: * theoretical soundness != empirical perf
bmk#1476: the theory is just too enticing
Ravna#1831: RL is still a good idea if you can generate unlimited experiences and rewards with pure computation and no real world interaction.
Aran Komatsuzaki#5714: > the dude hasnt contributed anything meaningful in decades
@Louis Truer words have never been spoken
Louis#0144: the main issue with emergence in DNNs is the point at which emergence arises is like exponentially massive
Aran Komatsuzaki#5714: likewise for Ben Goetzel
Louis#0144: (exponentially more weights)
bmk#1476: ml researchers: we need better theory, everything we have is :empiricism: !
also ml researchers: the theory doesnt work in practice so its best to ignore it!
Deleted User#0000: well, forget wolfram. lets think about attention networks. we can frickin' record all the state changes of every parameter over all of gradient descent
AI_WAIFU#2844: IMO RL works pretty well when you have a clear reward function and a short time horizon.
Deleted User#0000: we have all the world's smartest professors and ML people looking at this
Deleted User#0000: we still cannot interpret it
Deleted User#0000: even ML papers are reaching for words like 'emergence'
Louis#0144: did you see the paper about higher order attention in transformers
Deleted User#0000: which used to be super taboo
Louis#0144: thats so promising
Louis#0144: but the authors didnt go far enough with it
Louis#0144: :/
Aran Komatsuzaki#5714: Just remember how much theory Noam Shazeer and Alec Radford use
Deleted User#0000: ahh no i haven't, link?
Deleted User#0000: @Louis
Aran Komatsuzaki#5714: zero!
bmk#1476: they use theory (from distributed computing)
Ravna#1831: nah it's the mathematician's fault that DL doesn't have enough theory
Ravna#1831: not DL researcher's job
Louis#0144: https://openreview.net/forum?id=rkecJ6VFvr
Aran Komatsuzaki#5714: I used to be a mathematician, so I wouldn't call it a theory lol
Louis#0144: This
Louis#0144: I got so excited when I saw this paper
Aran Komatsuzaki#5714: Yeah i saw the paper.
Louis#0144: this is the topology that the neocortex uses
Louis#0144: except order scales with depth
Louis#0144: and earlier layers have lots of sparsity
Louis#0144: so we're finally getting a serious biologically influenced attention
Deleted User#0000: well, with the hopfield paper, maybe attention was biologically plausible all along
Louis#0144: attention isnt plausible in its current form
bmk#1476: i dont like the obsession with biological plausibility
Louis#0144: attention is a function of local competition
Louis#0144: Its not an obsession
Deleted User#0000: hmm, what do you make of Sepp's paper?
Louis#0144: Its just that the brain scales so much better
Louis#0144: the hopfield paper?
Aran Komatsuzaki#5714: me neither. biological analogy itself seems to me even useful at this point
Deleted User#0000: yea, the hopfield paper
Aran Komatsuzaki#5714: * not useful
Louis#0144: https://www.frontiersin.org/articles/10.3389/fncom.2017.00048/full
Louis#0144: This is what I think
Louis#0144: lol
Louis#0144: I think we've known it for decades
Louis#0144: that these kinds of topologies lead to abstractions
Louis#0144: @Aran Komatsuzaki its not BECAUSE its a biological influence
Louis#0144: its because the attractor network is so fucking good
Deleted User#0000: > attention isnt plausible in its current form
@Louis well, the Hopfield paper would suggest otherwise
Deleted User#0000: the energy update rule for Sepp's model is exactly attention
bmk#1476: airplanes and submarines are both not biologically plausible
Louis#0144: No the point isnt biological plausability directly
Deleted User#0000: and we have experimental evidence it works lol
Deleted User#0000: in the form of GPT
Aran Komatsuzaki#5714: If it's good, then I gotta see how well it'll perform on WebText or Wikitext-103 🙂
Deleted User#0000: and all the other 'attention' networks out there
Louis#0144: the point is that we can draw from biology to have ideas of whats going on
bmk#1476: an airplane consumes infinity% more jet fuel than a bird
Aran Komatsuzaki#5714: i'm looking forward to the results
Louis#0144: @Deleted User so attention exists in the brain in the form of local competiton. Its almost directly analogous to local attention. Global rules dont exist in the brain
Louis#0144: all (most) rules in the brain are local
Louis#0144: thats my issue
Deleted User#0000: i see, you are trying to search for another biological plausible model for attention
Deleted User#0000: or to improve on it
Louis#0144: to improve it yes
Deleted User#0000: i'm saying Sepp already offered one
Louis#0144: I dont think global attention is necessary
Louis#0144: I know he offered one
Louis#0144: and its a move in the right direction
Louis#0144: I think in order to finish that move we need higher order attention
Deleted User#0000: i personally think there's something quite nice about the sparsity in activation space induced by the softmax. it reminds me of lateral inhibition, and i spent a portion of time trying to identify where it is in the neocortex
Louis#0144: so that local attention can be used more readily
Deleted User#0000: but Sepp's theory is much nicer and fits with some other papers https://arxiv.org/abs/1909.01377
Louis#0144: higher order cavities => everything is locally closer
Louis#0144: I can discuss this for a *really* long time
Louis#0144: I studied this specifically for two years
Louis#0144: but applied to vision
Louis#0144: not NLP
Deleted User#0000: yea, in Aran's words, you're a 'neuroscience bro'
Louis#0144: LMAO
Deleted User#0000: i get it
Louis#0144: I just started there
Louis#0144: I dont really do it much anymore
Deleted User#0000: we haven't really succeeded going top-down tho
Louis#0144: yeah
Louis#0144: ofc not
Louis#0144: I do think that cog neuro has its merits though
Louis#0144: the brain has had so long to optimize its structure
Louis#0144: we should atleast *try* to understand what its doing
Louis#0144: it'll surely help us somewhere
Louis#0144: like we only learned a lot about the structure of the brain within the last few decades
bmk#1476: thats far from guaranteed tbh
Ravna#1831: bad hot take: both attention and convolution are just some fancy names given to weight reusing.
Louis#0144: not really
AI_WAIFU#2844: ^
bmk#1476: well
Louis#0144: attention and convolution are weight lifting
bmk#1476: theyre both *types* of weight reusing
Deleted User#0000: 🤷♂️
Louis#0144: they lift activation weights to connection weights and then back to activation. Its kinda like taking a dual
Louis#0144: Very useful from an optimization perspective
Deleted User#0000: perplexity is low for everything you are typing
Deleted User#0000: i've heard it all before
Louis#0144: LMAO
Deleted User#0000: i've read so many books
Deleted User#0000: all dead ends imo
bmk#1476: what do you think is not a dead end
Louis#0144: Really? Weight lifting is good for doing analysis of DNNs
bmk#1476: just params go brrr?
Aran Komatsuzaki#5714: > yea, in Aran's words, you're a 'neuroscience bro'
@Deleted User 😂
Louis#0144: Im writing a paper on that
Aran Komatsuzaki#5714: @Louis I thought you were a math bro, not neurosci bro
Louis#0144: Like for instance all of our methods to discuss TDA metrics are on connection weights
Louis#0144: not activation weights
Aran Komatsuzaki#5714: > i've read so many books
@Deleted User agreed at so many levels
Deleted User#0000: let's face it Louis, with all of the sum of mathematics we know, we still amount to transforming functions so we can graph them into straight lines
Aran Komatsuzaki#5714: > just params go brrr?
@bmk that's the kind of bro i like. conditional-computation bros
Deleted User#0000: because our monkey brains simply cannot comprehend nonlinearities
Louis#0144: yeah
Louis#0144: I agree with that
Deleted User#0000: let me give you another analogy to think about
Louis#0144: but back to the topic at hand, I should clarify. I do agree with Sepp
Deleted User#0000: so the human heart (bear with me)
Deleted User#0000: is extraordinarily complex...
Deleted User#0000: the chambers of the heart, with the atrium and ventricles
Louis#0144: I think local attention is really good, and I think local attention might exist in the brain. I also think that other papers that noticed this hopfield direction and higher order attention have the right idea (make everything local)
Deleted User#0000: the electrical signals from the pacemaker and the way it propagates
Deleted User#0000: down to the cellular level with the calcium channels
Deleted User#0000: but
Deleted User#0000: even with all the complexity, the heart really is just playing out one physical phenomenon
Deleted User#0000: the pressure volume relationship, it's a glorified pump
Deleted User#0000: and now, we can replace the heart with one continuous pump
Deleted User#0000: (it doesn't have to be pulsatile)
Deleted User#0000: at the essence, evolution brought a monstrosity in complexity
Deleted User#0000: working with many constraints
Deleted User#0000: to give pressure to bring blood round and round
Ravna#1831: No, because only linear functions matter, just like in physics only the first term of the taylor series matters. Higher order terms are not worth calculating because even if you do, it's gonna be less accurate and less useful than engineers' heuristics in real life.😔
Deleted User#0000: now, think of the brain? and of attention?
Deleted User#0000: what is the essence here the evolution brought us to?
Deleted User#0000: i'm not saying it is, just the possibility is there.
bmk#1476: so youre saying that nature bodged shit together until it worked, with no regard to code maintainability
Aran Komatsuzaki#5714: i don't consider linear layer as a linear operation. i consider linear layer as a weighted complete bipartite graph.
Deleted User#0000: you can get lost in the forest, examining the complexities of evolution
Deleted User#0000: but evolution is a horrible engineer
bmk#1476: sounds like my training scripts except extrapolated over a few billion years
Deleted User#0000: think of the heart
Deleted User#0000: it's just playing out PV=nRT
Deleted User#0000: forget RT
Deleted User#0000: just PV
Deleted User#0000: how about the brain?
Deleted User#0000: where's the equation?
bmk#1476: ~~AIXI~~
Deleted User#0000: i say GPT holds some clues to this...
Deleted User#0000: alright, let me work on the sampling algorithm
Deleted User#0000: bbl
AI_WAIFU#2844: I think the loss objective holds more clues.
Louis#0144: Tbh its mostly because the theories behind cog neuro have not advanced enough but if I had to bet I would say that higher order attention really is a good approach
Louis#0144: like we already know how well low order attention performs
Louis#0144: and theres been experiments with slightly higher order
bmk#1476: what is higher order attention
Ravna#1831: aixi doesn't have self reflection or self deceiving
Louis#0144: https://openreview.net/attachment?id=rkecJ6VFvr&name=original_pdf
Louis#0144: this
Louis#0144: we know for a fact that stacked hopfield networks are really good at rule based reasoning
Louis#0144: like forget about the biological setting
Louis#0144: theyre really good just using LIF or continuous time units
Louis#0144: I think we have a lot of evidence to say that transformers are equivalent to stacks of hopfield networks
Louis#0144: the thing is that often hopfield networks arent linearly stacked
Louis#0144: they form cavities in biology and in practice if given the chance
Louis#0144: so I think in *this* circumstance its reasonable to pull from biology
Louis#0144: if linearly stacked hopfield networks were optimal wouldnt we see it?
Deleted User#0000: https://consultqd.clevelandclinic.org/wp-content/uploads/sites/2/2019/12/650x450-Pediatric-Artificial-Heart-1.jpg
Deleted User#0000: btw, we still don't know everything about the human heart..
Deleted User#0000: and i don't reckon we will for another hundred years |
Deleted User#0000: if not more.
Louis#0144: I agree
AI_WAIFU#2844: https://cdn.discordapp.com/attachments/729741769738158194/743176586915741776/Figure_7.png
bmk#1476: that's this graph but not log on the x axis right? https://cdn.discordapp.com/attachments/729741769738158194/743176925861642411/Screenshot-2020-08-12_08-21-18.png
AI_WAIFU#2844: basically.
Louis#0144: Now if u don’t mind me I’ll go sit in the corner and read my topology books :^)
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/743184800218611732/unknown.png
bmk#1476: wtf moment
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/743184995182575667/unknown.png
bmk#1476: this is downright bizarre
AI_WAIFU#2844: numpy file pls
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/743186161857462462/loss-gpt2-large-gutenberg.npy
bmk#1476: ok so what conclusion can we draw here
bmk#1476: this is kind of bizarre
Sid#2121: what are these graphs showing sorry?
Sid#2121: final loss v context length?
AI_WAIFU#2844: I have no idea what's going on between 10-100 tokens, but it looks like the expected thing is happening for >800 tokens.
AI_WAIFU#2844: https://cdn.discordapp.com/attachments/729741769738158194/743188380459728916/Figure_9.png
bmk#1476: so uh
bmk#1476: do we go even bigger |
AI_WAIFU#2844: Yup. I predict that that tail spike will be even sharper at 1.5B
bmk#1476: the one on the left?
AI_WAIFU#2844: On the right
bmk#1476: oh
bmk#1476: can you plot all 3 on top of each other
AI_WAIFU#2844: gimme a minute
AI_WAIFU#2844: https://cdn.discordapp.com/attachments/729741769738158194/743190335500320848/Figure_10.png
bmk#1476: hmm so the bigger model does have a *bit* more going up but its so tiny
AI_WAIFU#2844: *with a thousand tokens
bmk#1476: the effect is barely noticable
AI_WAIFU#2844: I bet that line would go up more if we we were using a longer context length.
bmk#1476: what if we tune a gpt2 with an artificially extended context
bmk#1476: i really *really* dont feel like getting that working on tpus but it might be doable on gpu
AI_WAIFU#2844: I'm looking at the hugging face pretrained models to see if there's one that meets our needs
AI_WAIFU#2844: Also: https://huggingface.co/transformers/perplexity.html
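A condensed sketch of the strided sliding-window evaluation the linked page describes, assuming the HuggingFace transformers API for GPT-2 (`long_text` is a stand-in test document, the stride of 512 is an arbitrary choice, and `outputs[0]` is used for the loss for compatibility with older transformers versions):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

encodings = tokenizer(long_text, return_tensors="pt")  # long_text: stand-in document
max_length, stride = 1024, 512
nlls = []
for i in range(0, encodings.input_ids.size(1), stride):
    begin_loc = max(i + stride - max_length, 0)
    end_loc = min(i + stride, encodings.input_ids.size(1))
    trg_len = end_loc - i                      # may be shorter on the last window
    input_ids = encodings.input_ids[:, begin_loc:end_loc]
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100            # -100 = ignored by the loss
    with torch.no_grad():
        outputs = model(input_ids, labels=target_ids)
    nlls.append(outputs[0] * trg_len)          # mean NLL -> summed NLL per window
ppl = torch.exp(torch.stack(nlls).sum() / end_loc)
```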
AI_WAIFU#2844: I think the longformer might work.
AI_WAIFU#2844: There are 2 pretrained models available.
AI_WAIFU#2844: Context length of 4096
bmk#1476: should i run the same experiment with longformer now
bmk#1476: is longformer also unidirectional |
AI_WAIFU#2844: I think the LM head longformer is.
AI_WAIFU#2844: Let me check.
bmk#1476: ```/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:84: operator(): block: [0,0,0], thread: [121,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:84: operator(): block: [0,0,0], thread: [122,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:84: operator(): block: [0,0,0], thread: [125,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:84: operator(): block: [0,0,0], thread: [126,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
```
bmk#1476: thats not good
AI_WAIFU#2844: nope
bmk#1476: ok its working now
bmk#1476: even though i changed absolutely nothing
AI_WAIFU#2844: weird. Lets get like a small run as a sanity check
bmk#1476: yeah
bmk#1476: on it rn
AI_WAIFU#2844: 👍
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/743203543141318716/figure-longformer-base-gutenberg.png
bmk#1476: either it's noisy or its garbage and i cant tell which
AI_WAIFU#2844: That's got to be garbage. A loss of 20 nats?
AI_WAIFU#2844: also why are there periodic spikes every ~500 tokens?
bmk#1476: ¯\_(ツ)_/¯ |
StellaAthena#3530: Looks like there’s something systematically wrong
StellaAthena#3530: Have you looked at the cross correlation?
AI_WAIFU#2844: Ok, in this case you still need to normalize with log softmax, but the model is properly causal so you have to disable the :1 shifting that GPT-2 needed
AI_WAIFU#2844: Like so: https://cdn.discordapp.com/attachments/729741769738158194/743223162900316211/longformer_test.py
AI_WAIFU#2844: Otherwise you'll get garbage
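A sketch of the distinction being drawn here, assuming PyTorch; `logits` and `input_ids` are stand-in tensors of shape (batch, seq, vocab) and (batch, seq):
```python
import torch.nn.functional as F

logprobs = F.log_softmax(logits, dim=-1)   # the normalization step

# GPT-2 convention: the output at position t predicts token t+1,
# so predictions [:-1] line up with targets [1:] (the ":1 shifting")
gpt2_nll = -logprobs[:, :-1].gather(-1, input_ids[:, 1:].unsqueeze(-1)).squeeze(-1)

# A model whose output at position t already scores token t itself
# (the "properly causal" case above) must skip the shift
aligned_nll = -logprobs.gather(-1, input_ids.unsqueeze(-1)).squeeze(-1)
```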
bmk#1476: i think its best if we put this on a repo lol
bmk#1476: merging your script and my script is getting infeasible
Sid#2121: 👋 @Lucas Nestler (ClashLuke) ! Welcome to the tensorflow self-help desk. Check the channel description for an overview of the project 🙂
bmk#1476: whats the problem
Sid#2121: I believe Tensorflow is the problem lol
Sid#2121: > Wait, this is tensorflow? Why didn't you warn us.
@Lucas Nestler (ClashLuke) even worse, it's tensorflow-mesh
rowland358#4471: Joined the server.
Gabrielopesantos#0255: Joined the server.
Sid#2121: Hey @rowland358 @Gabrielopesantos ! Welcome to the HAL Plant! Check the Google doc in the Channel description for more info on the project
Your Refridgerator#8801: Joined the server.
sam#7242: Joined the server.
krzysztof#2566: Joined the server.
Oju#1167: Joined the server.
jotfa#1558: Joined the server. |
tremor#6380: Joined the server.
bmk#1476: Hey @tremor @jotfa @Oju @krzysztof @Your Refridgerator ! Welcome to the World's most Grassroots AI lab! Check the Google doc in the Channel description for more info on the project
Oju#1167: Hello! I'll see how I can contribute, I don't have any experience in Distributed training and that fancy stuff. I can maybe try with data cleaning?
bmk#1476: yes, we could use a lot of help on that front
bmk#1476: see the doc for more info
bmk#1476: mostly html->text and pdf->text
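For the html->text half, a crude baseline sketch assuming BeautifulSoup (extraction quality is the open question here; this is only the naive starting point such pipelines get compared against):
```python
from bs4 import BeautifulSoup

def html_to_text(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()                      # would otherwise leak into the text
    lines = (line.strip() for line in soup.get_text("\n").splitlines())
    return "\n".join(line for line in lines if line)
```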
Louis#0144: https://twitter.com/bradpwyble/status/1293724191695527936?s=21
Louis#0144: @Deleted User
bmk#1476: https://gist.github.com/leogao2/1deca2de00220fd69501fddda9053f34 @AI_WAIFU the script
bmk#1476: ive implemented quite a few things to make life easier
AI_WAIFU#2844: That's an upgrade.
bmk#1476: i havent implemented longformer correctly in it though
bmk#1476: so if you want to do that that would be great
AI_WAIFU#2844: On it
AI_WAIFU#2844: Should be done, now give it 15 mins to finish testing
bmk#1476: awesome
AI_WAIFU#2844: I can't even play videogames while I wait. Every bit of my compute is tied up. I'm getting 5fps in *getting over it*
bmk#1476: ouch
bmk#1476: i dont play games and reading pdfs thankfully does not use a lot of *pu time
AI_WAIFU#2844: Reading PDFs sounds nice. I have like 40tabs of unread arxiv papers. What's the point of paying extra for extra cores if BLAS is just gonna use em' all. |
bmk#1476: lol
AI_WAIFU#2844: I am experiencing difficulties. https://cdn.discordapp.com/attachments/729741769738158194/743310872692391967/figure-longformer-base-text8.png
bmk#1476: O.o
bmk#1476: It looks to be one every 512?
bmk#1476: That's.. suspicious
bmk#1476: Anyways I'll run gpt2 large Gutenberg overnight
AI_WAIFU#2844: I'll keep debugging to figure out what's up.
shawwn#3694: @AI_WAIFU whatcha working on?
AI_WAIFU#2844: I'm trying to get the code posted by bmk to output a reasonable plot, similar to the others we've been posting.
shawwn#3694: ah
AI_WAIFU#2844: I think we're grabbing the wrong logits or something. I don't really know yet.
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/743322221208010792/unknown.png
bmk#1476: `prediction_scores` is only meaningful for masked tokens, i think
bmk#1476: i.e -100
bmk#1476: i dont think longformer does autoregressive
Texot#6280: Joined the server.
AI_WAIFU#2844: Well shit. We might be able to do it but we'd need serial masking and it would be very computationally intensive.
bmk#1476: that probably wouldnt work very well either
bmk#1476: whatever let's just stick with gpt2
AI_WAIFU#2844: Worst case we collect statistics from GPTNeo runs during training.
Aran Komatsuzaki#5714: Is there any recommended discord server other than Yannic's? Especially for language modeling?
Aran Komatsuzaki#5714: or any other chat room for lm?
StellaAthena#3530: The DEF CON AI interest group has a year-round discord: https://discord.gg/rfa2W2. The primary topics of discussion tend to be AI security, AI ethics, and AI theory. We have paper reading groups that meet weekly.
Aran Komatsuzaki#5714: cool. let me check it out !
StellaAthena#3530: We just moved to Discord last week, so things are winding up. Also DEF CON was last weekend so a lot of us are tired. But it’s a very cool community.
Deleted User#0000: @Louis nice
Deleted User#0000: you should read up on grid / place / border cells and how they encode spatial information, then read https://arxiv.org/abs/1803.07770
Deleted User#0000: https://www.biorxiv.org/content/10.1101/2020.06.26.174482v1.full
Deleted User#0000: im sure these kinds of neuroscience papers will proliferate as DL gets more and more successful
StellaAthena#3530: @Deleted User wrowrong channel I think
Louis#0144: Nah this is kinda on topic
Louis#0144: Off topic is like discussions about chess or biking
Deleted User#0000: @Louis started it
Deleted User#0000: lol
StellaAthena#3530: Oh, I just assumed you were responding to something he said recently
Aran Komatsuzaki#5714: Writing a paper is damn hard. The last one I wrote was more than a year ago, and I still can't get a good result to publish.
Aran Komatsuzaki#5714: But I can tweet other people's paper and get more retweets than the author do lol
Louis#0144: @Deleted User grid cells pertain to cavities which is what I was talking about yesterday
Louis#0144: Just depends a lot on the lattice
Louis#0144: Tbh |
Louis#0144: I think we agree
Louis#0144: I’m just wording it weirdly
Deleted User#0000: there's a lot of people looking for the computational convergence at the intersection of recurrent nets and grid cells now.. https://arxiv.org/abs/2006.10259
Deleted User#0000: exciting times
Louis#0144: I’m working on a project associating autoencoders with grid cells
Louis#0144: Working with Stella and someone else
Louis#0144: Lmao
Louis#0144: Bc they’re really good at sphere packing
Deleted User#0000: jeff hawkins has swung heavily into grid cells https://www.youtube.com/watch?v=zVGQeFFjhEk
Deleted User#0000: has some 'thousand brain' theory...
Deleted User#0000: his stuff has never worked out too well in practice though
Louis#0144: It does use grid cell like mechanisms
Deleted User#0000: bottom up is working, with conv nets and attention
Deleted User#0000: we should just stick with it.
Louis#0144: It’s directly observable
Louis#0144: Well idk I don’t think attention is bottom up actually
Louis#0144: I agree with conv nets though
Louis#0144: So yesterday I admit I was wrong, attention probably does occur naturally. But by the same token I think attention like mechanisms pull strongly from cog neuro
Deleted User#0000: sure, i think the connections can be drawn now ad-hoc
Deleted User#0000: but attention, as we see it now, had a long history in DL until it arrived at this conclusion |
Deleted User#0000: https://www.youtube.com/watch?v=AIiwuClvH6k
stig#1237: Joined the server.
bmk#1476: @Louis what is it about Yannic?
bmk#1476: (note: I have no idea who they are)
StellaAthena#3530: @bmk He is a PhD student with a YouTube channel where he shares a mix of educational content and rants about how talking about ethical issues with ML and the demographic composition of the the field are a sign of the decline of civilization (I’m only slightly exaggerating, tbh).
In one video he decides that NeurIPS broader impact statements are a conspiracy by social scientists to siphon resources away from computer scientists. He linked to the video on reddit, and I’m linking to the thread because it includes my criticism of the video and his response: https://www.reddit.com/r/MachineLearning/comments/gp7gdv/d_video_deep_dive_into_the_neurips_broader_impact/?utm_source=share&utm_medium=ios_app&utm_name=iossmf
Sid#2121: oh wow lmao. I've only ever watched his attention videos so i haven't run into any of this
Sid#2121: damn
StellaAthena#3530: On the plus side, he has thrashed harmful research as well, such as this video on some recent ML-for-phrenology https://youtu.be/zt_R85Ife_U
StellaAthena#3530: He makes good educational stuff, but based on the NeurIPS broader impact statement video, a rant he made about Timnit Gebru, and our interactions on Twitter I don’t really have any inclination to spend time talking to him.
StellaAthena#3530: (The context he came up on was joining his discord)
Deleted User#0000: Joined the server.
Deleted User#0000: I just saw the link on yannic's server and joined in
Deleted User#0000: Hello people
Deleted User#0000: 👋
Louis#0144: oh man this is gonna be fun
Louis#0144: LOL
Louis#0144: I *really* dont want to associate with yannic at all
Louis#0144: but I guess its whatever |
StellaAthena#3530: Welcome @Deleted User (never thought I would say *that*)
Deleted User#0000: Hehe civ game funny
Louis#0144: triggerhappyghandi is a Civ reference
Louis#0144: ofc youd say that
Louis#0144: smh
Louis#0144: inevitable
bmk#1476: @AI_WAIFU https://cdn.discordapp.com/attachments/729741769738158194/743589147561689149/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/743589197130104912/loss-gpt2-xl-gutenberg.npy
bmk#1476: this is so weird
bmk#1476: for smaller models, more context helps more
bmk#1476: for bigger models, more context doesnt help as much
StellaAthena#3530: @bmk I’m not sure that that’s the right conclusion to draw from this graph. The space between the blue and red lines increases slightly as you move along the x-axis, for example.
StellaAthena#3530: What are you drawing that conclusion from?
bmk#1476: sorry i meant relative to the previous model for each one
bmk#1476: like, i would have expected the right hand side to taper off further and further
StellaAthena#3530: Oh, you’re comparing to a second graph I’m not looking at
bmk#1476: no like
bmk#1476: between 700M and 1500M etc
bmk#1476: 700m and 1500m are basically overlapping on the right there
bmk#1476: while 117m and 345m pull further and further apart |
StellaAthena#3530: If you take a bad model and add more context, you get a larger % change in performance than if you took a better model and add more context.
StellaAthena#3530: That’s the observation, right? I don’t see why that should be surprising .
StellaAthena#3530: If I’m not making sense ignore me – I took off from work all day because I have a migraine and can’t think.
AI_WAIFU#2844: Yeah, it's like the advantage completely disappears. https://cdn.discordapp.com/attachments/729741769738158194/743596264377286746/Figure_11.png
AI_WAIFU#2844: I wonder if this is because of the distribution that GPT-2 was trained on. If the documents are not long enough, then most of the optimisation will be dedicated to lowering the loss for small contexts.
AI_WAIFU#2844: The alternative is just this is the way things are, and we might be able to get most of the benefits of massive language models with huge contexts by training a combination of a large dense language model over small contexts with a small sparse language model over large contexts.
AI_WAIFU#2844: But there's also a notable dip in the loss with higher contexts, and it becomes more pronounced in the larger models.
bmk#1476: well, perhaps, but the fact is, the majority of data we can get is short-context anyways
bmk#1476: like, libgen, etc occupy a tiny fraction of the gpt3 training data
zphang#7252: these are all trained on the same context size?
bmk#1476: there's just *so much* internet text
zphang#7252: I'm not sure I'm reading the graph right
AI_WAIFU#2844: We might be able to rule it out by fine tuning on pg19 and looking at the slope of the resulting curve. My GPU is about to get freed up, so I can go back to training.
AI_WAIFU#2844: Assuming it fits in vram
zphang#7252: (what's pg19?)
AI_WAIFU#2844: Project gutenberg dataset by deepmind
zphang#7252: lol I thought it was a corpus of Paul Graham blogposts up to 2019
AI_WAIFU#2844: And if we're gonna do fine tuning I can adapt my existing transformer-xl code.
bmk#1476: how much vram do you have?
AI_WAIFU#2844: I only have 8 gigs |
bmk#1476: hm
bmk#1476: i have a 1080ti, if you can get the code working i can run it
AI_WAIFU#2844: That works, I'll get it working with the smaller models and you can just change the numbers.
bmk#1476: ok sounds good
deckard#6487: Joined the server.
AI_WAIFU#2844: Hello @deckard welcome to plots and graphs simulator 2020. Check out the project doc and the pinned messages.
es#4913: > like, libgen, etc occupy a tiny fraction of the gpt3 training data
@bmk wait that’s insane lmfao libgen is huge
bmk#1476: well, we're not sticking the raw pdfs in, we're taking the text out
bmk#1476: which makes it quite a bit smaller
bmk#1476: but even at original size, it occupies a small sliver of the total available training data
bmk#1476: also the subset of libgen that *gpt3 trained on* was tiny
sj#7916: Joined the server.
bool#5908: Joined the server.
bmk#1476: you know what we should totally do?
bmk#1476: make a merch shop and sell eleutherai tshirts
bmk#1476: i'd *totally* buy one or two
Aran Komatsuzaki#5714: How about AGI tshirts?
Deleted User#0000: @Aran Komatsuzaki i saw that someone was discussing Marge on the huggingface forums
Aran Komatsuzaki#5714: yeah i retweeted and replied to a question |
Aran Komatsuzaki#5714: a question posed in the forum
Deleted User#0000: haha, sent a ❤️
chirp#4545: Joined the server.
kindiana#1016: how does shared QK work at all with position encoding 🤔
kindiana#1016: for a token A to attend to token A-5 strongly, token A-5 needs to attend to A-10
kindiana#1016: because the query is the same as the key?
kindiana#1016: doesn't look like position gets any special treatment in reformer, but it works better than non-shared qk at language modelling??
Deleted User#0000: shared qk attention does work, but i found non-shared qk to be better still
Deleted User#0000: it's used in Reformer so that queries and keys can be clustered together with LSH
Deleted User#0000: but i've run into some pretty trivial tasks where shared qk seems to perform much worse than non-shared qk
kindiana#1016: the reformer experiment says shared qk seems to work better hrmmm https://cdn.discordapp.com/attachments/729741769738158194/743709633612480512/unknown.png
kindiana#1016: it just seems counterintuitive that it works at all lol
Deleted User#0000: yea, every paper is trying to sell
Deleted User#0000: i would say, in language, it's about the same
Aran Komatsuzaki#5714: probably it works if the dataset is small enough for regularization to make sense
Deleted User#0000: but shared qk is def not better
Deleted User#0000: it's worse in some special cases, mainly because tokens cannot attend to themselves
Aran Komatsuzaki#5714: yeah shared qk is not recommended
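A minimal sketch of shared-QK attention as discussed above, assuming PyTorch; masking a token's attention to itself follows the Reformer paper's handling (with identical queries and keys, a token's strongest match is trivially itself):
```python
import torch

def shared_qk_attention(x, w_qk, w_v):
    qk = x @ w_qk                                   # one projection is both Q and K
    v = x @ w_v
    scores = (qk @ qk.T) / qk.shape[-1] ** 0.5
    seq = x.shape[0]
    causal = torch.ones(seq, seq).triu(1).bool()
    scores = scores.masked_fill(causal, float("-inf"))
    self_mask = torch.eye(seq).bool()               # q_i == k_i, so mask self...
    self_mask[0, 0] = False                         # ...except position 0, which
    scores = scores.masked_fill(self_mask, float("-inf"))  # has nothing else
    return torch.softmax(scores, dim=-1) @ v
```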
kindiana#1016: the thing I don't understand is how shared qk can effectively attend to positions, or is positional attention just unnecessary for language modelling?
Aran Komatsuzaki#5714: pos enc is necessary, but there are many ways to implement it |
Aran Komatsuzaki#5714: absolute, relative, etc
Deleted User#0000: yea, i think the Reformer team really favors the axial pos embedding, concatted
Deleted User#0000: because it helps LSH cluster the positions better
kindiana#1016: concatted to the output of Q/K?
kindiana#1016: or to the hidden
Deleted User#0000: ohh forget i said concatted, i meant the axial dimensions concatted
Deleted User#0000: you can either sum the axial dimensions or concat them
kindiana#1016: ah
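A sketch of axial position embeddings in the sense described: factor position p into coordinates on a 2D grid, keep one small table per axis, then sum or concat (assuming PyTorch; the 64x64 grid shape is an arbitrary choice covering 4096 positions):
```python
import torch
import torch.nn as nn

class AxialPositionEmbedding(nn.Module):
    def __init__(self, dim, axial_shape=(64, 64), mode="sum"):
        super().__init__()
        d = dim if mode == "sum" else dim // 2
        self.rows = nn.Parameter(torch.randn(axial_shape[0], d))
        self.cols = nn.Parameter(torch.randn(axial_shape[1], d))
        self.axial_shape, self.mode = axial_shape, mode

    def forward(self, seq_len):
        pos = torch.arange(seq_len)
        r = self.rows[pos // self.axial_shape[1]]   # grid row per position
        c = self.cols[pos % self.axial_shape[1]]    # grid column per position
        return r + c if self.mode == "sum" else torch.cat((r, c), dim=-1)
```
Two 64-row tables replace one 4096-row table, which is where the parameter savings come from.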
kindiana#1016: I can see reformer working if relative position attn is used, you can use LSH for content based attention and do position based attn normally within the blocks
kindiana#1016: but not really if you just use absolute position embeddings added to the input
Deleted User#0000: i wanted to get relative pos emb working with reformer at one point
Deleted User#0000: then Aran showed me axial pos emb, and it worked well enough i didn't bother
Deleted User#0000: but yea, relative would be the best
Deleted User#0000: it's just complicated
Deleted User#0000: Reformer is already complicated enough
Aran Komatsuzaki#5714: the student i said i was helping... she began studying ML/DL in May, and in the last month she implemented GPT-2 on Wikitext-103. She's now implementing knn-lm, so she's skipping all the efficient attention models that are so complicated but don't give you much gain lol
Aran Komatsuzaki#5714: straight into retrieval-based model
shgidi#0284: Does this project have a dedicated repo? Where do the runs take place?
Daj#7482: @shgidi we have a repo for our model code and a scattering of data related stuff. If you'd like access, just send me your GitHub username. The runs run on a bunch of TPUs that I get for free from TFRC
shgidi#0284: @Daj Thanks! How many TPU's do you have? How many will be enough for the training? |
Daj#7482: Eh it depends on capacity. We have preemptible access, so basically if paying customers aren't using them, we can. Technically we can have up to 2048, but in practice we usually use 256 or 512
Daj#7482: The question of "enough" both has a somewhat straightforward answer and no straightforward answer at all haha
Daj#7482: Bit of a longer discussion about how to interpret various metrics
shgidi#0284: I see. Do you have references for this perhaps?
Daj#7482: Uhh yes we do, but I don't have them on hand. Look up the scaling laws and the GPT3 papers from OA
shgidi#0284: Thank you @Daj
kindiana#1016: @Aran Komatsuzaki what do you think about a hypothetical model with, document length << context length << dataset size (which solves stale activations), with the training data shuffled and sorted by document every epoch by something similar to MARGE sharding (would need some document level attn to produce document qks etc). You can train it purely autoregressively, but switch out the history/cached activations for other documents when evaluating and it basically just becomes a retrieval based model
kindiana#1016: I find it somewhat inelegant that retrieval based models are _basically_ attention but not really lol, and I think it would be really cool if there is (attention is all you need amiright)
kindiana#1016: MIPS is just hard topk attention 🤷
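The equivalence being gestured at, as a sketch (assuming PyTorch; `query`, `keys`, `values` are stand-ins): maximum inner product search returns the k keys with the largest inner product against the query, which is exactly attention with everything outside the top-k masked out:
```python
import torch

def hard_topk_attention(query, keys, values, k=4):
    scores = keys @ query                  # inner products: the MIPS objective
    top = torch.topk(scores, k)            # "retrieval" = keep only the k best
    weights = torch.softmax(top.values, dim=-1)
    return weights @ values[top.indices]   # soft attention over the hard top-k
```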
Deleted User#0000: @kindiana i think the best argument i've heard for retrieval methods is that it allows the model to focus on learning synthesis of knowledge rather than retrieving information from within its own weights. you are outsourcing that task to the external retriever
Deleted User#0000: it kind of makes sense in the context of the way humans do it. over time, naturally, libraries emerged, with at first people to retrieve the knowledge for you
Deleted User#0000: then fully automated a la google
Deleted User#0000: the results are compelling, kickstarted with Realm, which bested T5 being much smaller
Deleted User#0000: then fusion-in-decoder
Deleted User#0000: Aran, when i try to google for Fusion-in-decoder, your picture shows up https://cdn.discordapp.com/attachments/729741769738158194/743868891771437106/Screenshot_from_2020-08-14_09-28-51.png
Deleted User#0000: lol
Aran Komatsuzaki#5714: @kindiana @Deleted User Sorry for my late reply. I'll reply now.
Aran Komatsuzaki#5714: I was so loud about FiD and MARGE that I became FiD itself lol
Aran Komatsuzaki#5714: It's sad that Lewis didn't show up lol
Aran Komatsuzaki#5714: I'm essentially eating people's reputation, I guess. |
Deleted User#0000: @kindiana i do agree that at some certain scale, it probably would not matter. it's all just trying to work with constraints at the moment
Deleted User#0000: if you were building a QA system, you would most definitely reach for a retrieval solution than GPT-3
Deleted User#0000: at the moment
Aran Komatsuzaki#5714: or MARGE for some zero-shot learning like translation
Deleted User#0000: i think the only issue i have is that it is sensitive to the retrieval system. say, in Marge, the retrieval system had not indexed multilingual documents together; then the LM would never have been exposed to the pairing
Aran Komatsuzaki#5714: no. that's not a disadvantage.
Deleted User#0000: so the way to think about it is, that retrieval system is a rule-based symbolic system for generating more training data
Aran Komatsuzaki#5714: not pairing is actually the strength. the point of zero-shot learning is to find the similarity between different samples.
Aran Komatsuzaki#5714: so, you expect the model to find the pair by themselves, not by being told that they are paired.
Aran Komatsuzaki#5714: in the case of translation, we almost never see a pair of one language and another.
Aran Komatsuzaki#5714: the pair is so rare that it's pretty much zero-shot learning.
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/743871524217290824/Screenshot_from_2020-08-14_09-39-37.png
Deleted User#0000: i think the zero-shot was directly influenced by the fact that it is fetching related text in many different languages
Deleted User#0000: so its more a retrieval prior that influenced what it learned
Deleted User#0000: say the retrieval system only fetched english documents no matter what
Aran Komatsuzaki#5714: It says multilingual, but ironically that's not really the interesting part of marge lol
Deleted User#0000: yea true
Deleted User#0000: essentially, retrieval systems put hyperlinks all over the text, just like how we read the internet
Deleted User#0000: we can go to the next node and come back
Deleted User#0000: synthesize |
Aran Komatsuzaki#5714: they did some intervention to make it multilingual, but i think it was weak enough that it's reasonable to expect it to become better soon.
Deleted User#0000: except those hyperlinks are like a search engine's links to the first 2-3 documents
Aran Komatsuzaki#5714: yeah
Deleted User#0000: Madison May, who writes some fantastic summaries for papers, has a writeup for retrieval methods https://www.pragmatic.ml/language-modeling-and-retrieval/
Aran Komatsuzaki#5714: > I find it somewhat inelegant that retrieval based models are _basically_ attention but not really lol, and I think it would be really cool if there is (attention is all you need amiright)
@kindiana Retrieval is the hard attention that actually works efficiently, unlike the rest of hard attn that doesn't. It's just more practical than soft attn, which has to attend to everything, which is insanely expensive. So, I guess it's a natural consequence.
Aran Komatsuzaki#5714: @Deleted User Thanks for advertising 🙂
Deleted User#0000: does huggingface have a chat room where they discuss this stuff @Aran Komatsuzaki ?
Aran Komatsuzaki#5714: > @Aran Komatsuzaki what do you think about a hypothetical model with, document length << context length << dataset size (which solves stale activations), with the training data shuffled and sorted by document every epoch by something similar to MARGE sharding (would need some document level attn to produce document qks etc). You can train it purely autoregressively, but switch out the history/cached activations for other documents when evaluating and it basically just becomes a retrieval based model
@kindiana MARGE sharding actually doesn't work on a general dataset that doesn't have metadata. So, actually you need to cluster the embedding of documents with faiss to make kNN, which is actually pretty efficient. For these embeddings, you want MARGE, and you don't want to use other embedders like DPR or BM25, since they can't embed documents as well (see the last sec of my draft).
Also, doc level attn to produce document qks would be better performed by the document embedding of MARGE for a similar reason. Overall, your proposal is a good one, but given that we already have MARGE, I can't help but find MARGE superior.
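A sketch of the faiss clustering step mentioned above (IndexFlatIP and normalize_L2 are real faiss calls; the `doc_embs` array is assumed to come from some document encoder such as MARGE's):
```python
import faiss
import numpy as np

# doc_embs: (num_docs, dim) document embeddings from some encoder (assumed)
doc_embs = np.ascontiguousarray(doc_embs, dtype="float32")
faiss.normalize_L2(doc_embs)                  # cosine similarity via inner product

index = faiss.IndexFlatIP(doc_embs.shape[1])
index.add(doc_embs)
sims, neighbours = index.search(doc_embs, 9)  # each doc plus 8 nearest neighbours
# neighbours[i, 1:] are candidate source docs for target doc i (0 is the doc itself)
```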
Aran Komatsuzaki#5714: @Deleted User By this stuff, do you mean retrieval stuffs? Or LM in general? I have no idea about the latter, but for the former I don't think they do.
Deleted User#0000: ohh, retrieval stuff i mean, i guess not
Aran Komatsuzaki#5714: My guess is that not even Thom knows MARGE well lol
Aran Komatsuzaki#5714: just a subset of Huggingfacers
Deleted User#0000: yea true
Aran Komatsuzaki#5714: cuz he didn't even know Routing Transformer until I told him.
Aran Komatsuzaki#5714: @kindiana I said doc level attn to produce document qks, but in the case of MARGE this just becomes the embedding, and it works in a similar way.
Aran Komatsuzaki#5714: We need to found a startup, religious organization, institute or something to secure funding for all of us to continue our research without having to worry about paying living expenses.
bmk#1476: What do we still need to be a Real Research Institute™?
Aran Komatsuzaki#5714: in order to sell AGI tshirts?
bmk#1476: In order to collect funding
Aran Komatsuzaki#5714: in order to stop worrying about living cost
bmk#1476: What would we have to do to get funding
Aran Komatsuzaki#5714: that's the question
Aran Komatsuzaki#5714: I wonder if there are rich tech bros who would pay a fortune to be mentored by some of us.
Aran Komatsuzaki#5714: or to be counseled
bmk#1476: can we get Overlord Google to bring us under its wing like dm
Aran Komatsuzaki#5714: yeah they should acquire us by now
bmk#1476: for that to happen we should focus on legitemacy
bmk#1476: for that we really need to publish a few papers under eleutherai
Aran Komatsuzaki#5714: yeah
bmk#1476: also we probably want someone in charge of social media presence and website
bmk#1476: so yeah like
bmk#1476: if you want to hop aboard any of the proto papers we have please go for it
Aran Komatsuzaki#5714: i'll be in charge of social media presence. i'm a self-proclaimed twitter influencer and ai journalist.
bmk#1476: sure that would be great
bmk#1476: also we need to get a functioning website, the whole shebang
Aran Komatsuzaki#5714: i'll add EleutherAI into affiliation whenever I write my paper, including the draft I showed you. |
bmk#1476: that would be *awesome*
Aran Komatsuzaki#5714: I'll add it into my CV and everything
bmk#1476: what about your current institution though?
Aran Komatsuzaki#5714: website isn't my forte, so somebody needs to do it.
Aran Komatsuzaki#5714: I'll also add my institution.
Aran Komatsuzaki#5714: You can put as many as you want.
Aran Komatsuzaki#5714: The more the better lol
bmk#1476: ive never seen someone put two institutions under their name on a paper
Aran Komatsuzaki#5714: It's a common practice tho
Aran Komatsuzaki#5714: let me find an example
bmk#1476: maybe im just not looking out for it
Aran Komatsuzaki#5714: https://arxiv.org/pdf/2006.15720.pdf
Aran Komatsuzaki#5714: this is an example
Aran Komatsuzaki#5714: Zhiting Hu is in CMU and Petuum
bmk#1476: huh
bmk#1476: interesting
Aran Komatsuzaki#5714: You know what? Even if we don't stand out individually, if we gather together, people would consider us more legitimate and competent than we actually are. A similar effect is exploited by fishes that gather together to intimidate a larger predator.
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/743890822616973372/unknown.png
bmk#1476: iclr deadline coming up soon
bmk#1476: for all proto papers: let's try to step up our game |
Aran Komatsuzaki#5714: sounds good
StellaAthena#3530: @Aran Komatsuzaki I can make a website
StellaAthena#3530: Honestly it’s real easy nowadays. It took me maybe 20 minutes to make my academic site: www.stellabiderman.com
StellaAthena#3530: @bmk I’m much more adept at writing and mathematics than coding, but I’m happy to help push protopapers across the finish line. Is there a WIP list anywhere?
Aran Komatsuzaki#5714: Thanks for your help!
StellaAthena#3530: No problem! What info are you imagining including?
bmk#1476: help with writing would be really useful, I'm not very good at actually writing stuff up
Sid#2121: isn't that what gpt's for
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/743911526007832637/unknown.png
Sid#2121: gib bullet points, get paragraph
bmk#1476: lol
bmk#1476: @StellaAthena so the first one @AI_WAIFU is also working on, the second i've been working on, and the reset of the stuff is probably not within reach
bmk#1476: for the second one the blocking issue really is we need more volunteers to help label data
StellaAthena#3530: > help with writing would be really useful, I'm not very good at actually writing stuff up
@bmk When I got into AI research, I wasn’t expecting my BA in Philosophy to be one of my major strengths, but I love technical and academic writing lol.
bmk#1476: haha
StellaAthena#3530: What sort of info would be useful to have on the website?
bmk#1476: er
bmk#1476: honestly, im not too sure
bmk#1476: probably a mission statement |
bmk#1476: probably a paraphrased version of this https://cdn.discordapp.com/attachments/729741769738158194/743912665478594730/unknown.png
StellaAthena#3530: Well if someone who has been involved with the group for more than the four days I’ve been here sends me some stuff to put up, I’m happy to set up a website 🙂
bmk#1476: also hey @Daj can you change the doc link slightly to https://docs.google.com/document/d/1wfCZBd18DMNt6YcC6boPNMd9qzzH3zpHHfKj4dezk0g
bmk#1476: the current link points to a header
Daj#7482: Remind me tomorrow when I'm on my PC
bmk#1476: ok
bmk#1476: workaround: i might just delete that header
bmk#1476: workaround successful
bmk#1476: ok so yeah
bmk#1476: website is one thing that needs to be done eventually™, just pointing people to a google doc is starting to get ridiculous
thenightocean#6100: I can also help with the web stuff if needed and do some UI design stuff.
bmk#1476: awesome!
bmk#1476: website isnt super high priority imo but around these parts any work is good work
thenightocean#6100: as thats my day job anyway 🙂
thenightocean#6100: sure,
bmk#1476: wow, great! i cant do web stuff very well 😛
bmk#1476: so im always super impressed by people who do web stuff
thenightocean#6100: lool. I am the opposite
thenightocean#6100: looks like kids stuff compared to building mothefuckin' AI
bmk#1476: https://twitter.com/nabla_theta/status/1294372569915645952 pls rt |
Sid#2121: Done. We could try posting on reddit/machinelearning too?
bmk#1476: good idea
santiagoitzcoatl#2467: Joined the server.
Sid#2121: Hey @santiagoitzcoatl welcome to gpt-3 replication zone! Check the google doc in the channel description for more info and don’t hesitate to ask if you have any questions
zphang#7252: I think we're about to get flooded by new members
StellaAthena#3530: That would be awesome
santiagoitzcoatl#2467: thanks
bmk#1476: what would be a good short summary of eleutherai?
zphang#7252: Open OpenAI
bmk#1476: too frank
zphang#7252: hacker-garage version of OpenAI
bmk#1476: let's not mention openai
bmk#1476: `EleutherAI, a grassroots AI research group, ...`
bmk#1476: how's this
zphang#7252: sure
StellaAthena#3530: A collaborative AI research group aimed at democratizing and open sourcing AI research
zphang#7252: Do we have a defining goal currently besides "replicate GPT-3+, also try to not make it evil?"
bmk#1476: i like that
bmk#1476: i might edit it a bit to:
StellaAthena#3530: I specifically like focusing on democratization of research and open source code |
bmk#1476: `A grassroots AI research group aimed at democratizing AI research`
StellaAthena#3530: What does “grassroots” mean in this context
Sid#2121: why remove open sourcing
Sid#2121: that's kinda the whole point
bmk#1476: `A grassroots AI research group aimed at democratizing and open sourcing AI research`
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/743936404811284560/unknown.png
bmk#1476: https://www.reddit.com/r/MachineLearning/comments/i9u6u3/d_gpt3_replication_effort_help_wanted_with_data/
Sid#2121: the guy who runs https://twitter.com/Deep__AI reached out and offered to promote stuff for us also, might be worth reaching out to him again, lemme find his discord
StellaAthena#3530: We should make a more detailed pinned post that explicitly tells people what to do
Sid#2121: > the guy who runs https://twitter.com/Deep__AI reached out and offered to promote stuff for us also, might be worth reaching out to him again, lemme find his discord
@Sid oh hey, @baragonaru , this is you, right? Could we request a retweet on Deep__AI? We're looking for a few new members who can help with data collection 🙂
StellaAthena#3530: I can advertise on the AI Village discord as well
Sid#2121: maybe we should get a more straightforward onboarding process first?
StellaAthena#3530: Sounds like a good idea.
Sid#2121: > We should make a more detailed pinned post that explicitly tells people what to do
@StellaAthena do we want it to be more detailed? The google doc has a lot already, maybe too much
bmk#1476: Yeah we should figure out onboarding
bmk#1476: The Google doc is hella unorganized
StellaAthena#3530: It is not obvious when you look at the pinned posts in this channel what is going on or where to get information.
bmk#1476: We should make a website that's a distillation of stuff from the doc |
bmk#1476: And we should have an onboarding page therr
Sid#2121: ah, we try to welcome everyone with a 'look at the google doc' message, but maybe we missed you @StellaAthena
Sid#2121: it's in the channel description
Sid#2121: but i think a lot of discords have a special onboarding 'channel' no?
StellaAthena#3530: Yeah they do
Sid#2121: I'm not really a big discorder
Sid#2121: what's the AI village discord btw
bmk#1476: Can we get this stuff on a proper website anyways
StellaAthena#3530: And at least on my phone I have to press a non-obvious “see more” button to get to the doc link
bmk#1476: The Google doc is horrendous
StellaAthena#3530: AI Village discord: https://discord.gg/DnzJpY
bmk#1476: It's basically an append only log of everything we've discussed
bmk#1476: We really need a proper website if we want to onboard people properly
bmk#1476: Though being disorganised, for all its disadvantages, has one big advantage
Sid#2121: yeah, this is much neater https://cdn.discordapp.com/attachments/729741769738158194/743938888040710144/Screenshot_2020-08-14_at_23.06.56.png
Sid#2121: > We really need a proper website if we want to onboard people properly
@bmk I don't think this is the case at all
bmk#1476: It fits the image of "grassroots movement" perfectly
bmk#1476: I mean I still support having a website
StellaAthena#3530: Oh, if you meant “what is it” not “what’s the link” the AI Village is the AI interest group at DEF CON |
Sid#2121: yeah eventually, but when we have something to show off
Sid#2121: ah @Louis has told me about AI village already, I meant what you parsed from the message 🙂
StellaAthena#3530: We have a year-round discord where we organize weekly paper readings and talk about ML and security stuff
bmk#1476: I think we need a website before then
bmk#1476: Like it's not high priority right this moment but I think we should aim to get it done sooner than later
zphang#7252: that's the one with the twitch channel?
StellaAthena#3530: I can get something basic online tomorrow (or tonight, depending on how late my dinner party goes)
StellaAthena#3530: @zphang yes, we stream the journal club meetings on Twitch
bmk#1476: @thenightocean if you want to get involved too
zphang#7252: fun stuff
bmk#1476: With the website
bmk#1476: I'll leave you two to figure out how to coordinate working together on that
Sid#2121: our pinned messages are mostly for memes lol 😦
bmk#1476: Don't worry nobody looks at pins
bmk#1476: We need a #welcome
Sid#2121: yep
bmk#1476: Where we put the onboarding stuff
bmk#1476: And we make it read only
Sid#2121: @Daj is the only one with the powers, but hopefully he's out getting drunk rn. Or sleeping. Or murdered.
Sid#2121: we could probably also start congregating all our git repos under eleutherAI |
bmk#1476: Oh yeah a git org
bmk#1476: I'll make one rn
StellaAthena#3530: Yes!
StellaAthena#3530: I meant to ask about that
bmk#1476: hmm someone already took eleutherai
bmk#1476: was that anyone in here?
Sid#2121: i'm pretty sure that was us lmao,
bmk#1476: but who?
Sid#2121: @Daj made a git org when we decided on the name, no?
Semantic Aberration#3692: Joined the server.
StellaAthena#3530: Hey @Semantic Aberration welcome to gpt-3 replication zone! Check the google doc in the channel description for more info and don’t hesitate to ask if you have any questions.
Deleted User#0000: Joined the server.
Semantic Aberration#3692: @StellaAthena Hi, I think you are overshooting the potentially available funding with a 1T model.
I think a rational way of doing this would be
1) Try to run the 11B Megatron-LM checkpoint from fairseq to compare it to GPT-3 and see if it's worth it
2) Evaluate usage of Megatron-LM 11B as a warm-start for your neo-GPT3
3) Use a best near-linear variant of attention from here https://github.com/Separius/awesome-fast-attention perhaps this one: https://linear-transformers.com/
4) Use fast tokenizer (e.g. https://github.com/VKCOM/YouTokenToMe )
5) Fund via crypto. |
bmk#1476: We're going for GPT3 before 1T
bmk#1476: Also we've done research on the feasibility already
thenightocean#6100: @bmk @StellaAthena I can draw up some quick website mockups in sketch . Is there any assets I should include already (text, logo, images) ?
bmk#1476: @Sid can you post a high res version of the logo
bmk#1476: I don't think we have any other assets
thenightocean#6100: (but it will happen tomorrow as its almost midnight here and I am early riser type)
bmk#1476: That's totally fine, we're not in a rush
bmk#1476: @Semantic Aberration did you join via the link on Twitter?
Semantic Aberration#3692: @bmk Do you have consensus on linear attention and on usage of model parallelism for training ?
bmk#1476: We have a few people looking into different attention types
bmk#1476: It's been discussed pretty exhaustively overthe past month
bmk#1476: That being said if you have any ideas we'd love to hear them
Semantic Aberration#3692: You could benefit from a hard benchmark for comparing attention, some long-dependency task
bmk#1476: Right now we need to get training working in the first place
Semantic Aberration#3692: @bmk Well some variant of linear attention is economical.
bmk#1476: That won't become relevant until after gpt3 tbh
Semantic Aberration#3692: @bmk Is there any problem with that, at least on scale of 1-4 TPUs ? I have seen GPT2 finetuning TPU code on github
bmk#1476: See #gpt-neox-devs for discussion
Semantic Aberration#3692: @bmk I doubt you will get enough funding for vanilla GPT3 with O(N^2+N^1.5) attention
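The complexity claim behind this, as a sketch: a kernel feature map phi lets softmax(QK^T)V be approximated by phi(Q)(phi(K)^T V), and reassociating the product makes the cost O(N d^2) instead of O(N^2 d). A non-causal sketch assuming PyTorch and the elu+1 feature map from the linear-transformers paper (the causal case needs a running prefix sum instead):
```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v):
    q, k = F.elu(q) + 1, F.elu(k) + 1     # feature map: phi(x) = elu(x) + 1
    kv = k.transpose(-2, -1) @ v          # (dim, dim): no N x N matrix is formed
    z = q @ k.sum(dim=0)                  # per-position normalizer, shape (seq,)
    return (q @ kv) / z.unsqueeze(-1)
```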
bmk#1476: About training |
Sid#2121: > @Sid can you post a high res version of the logo
@bmk https://cdn.discordapp.com/attachments/729741769738158194/743946823265419384/EAI_logo2_copy.png
bmk#1476: Awesome
bmk#1476: @thenightocean
Deleted User#0000: @Semantic Aberration i've already added the best linear attention available
bmk#1476: #gpt-neox-devs pls so we don't clog up general
Sid#2121: @thenightocean logo with no text https://cdn.discordapp.com/attachments/729741769738158194/743947727880192010/EAI_logo_.png
thenightocean#6100: cool!
Sid#2121: i'll try and get an svg soon but I hate illustrator lol
aquajet#7800: @thenightocean I can help with some web grunt work if you want
Sid#2121: also, hello @Deleted User ! Welcome to gpt-3 replication hub! Check the google doc in the channel description for more info and don’t hesitate to ask if you have any questions.
thenightocean#6100: the text content will be stuff in google docs I presume. Anything more than that?
SoundFX#9362: Joined the server.
bmk#1476: We need to really reorganize the data from the doc
bmk#1476: Hey @SoundFX you here for the data thing?
bmk#1476: https://github.com/leogao2/htmltotext-benchmark/ more info on that can be found here
bavajee#2634: Joined the server.
bpeezy#0384: Joined the server.
Rioghasarig#7380: Joined the server.
Sid#2121: Hey @bavajee , @bpeezy , @Rioghasarig ! Welcome to the data collection sweatshops! Please check the google doc in the channel description for more information about the project, and reach out to any bluename if you have any questions! |
bavajee#2634: I've been trying to find the channel description 😁
Sid#2121: ah it's this bit lol @bavajee . Sorry our onboarding is really bad rn https://cdn.discordapp.com/attachments/729741769738158194/743963747374202920/Screenshot_2020-08-15_at_00.44.58.png
Sid#2121: https://docs.google.com/document/d/1wfCZBd18DMNt6YcC6boPNMd9qzzH3zpHHfKj4dezk0g/edit#heading=h.1op7948crp4f here's the link to the gdoc
bavajee#2634: Perfect, thanks haha. Didn't realize one could click on that bar.
bpeezy#0384: Thanks for the welcome sid
bmk#1476: "Pick up a rifle and follow the yellow line. You'll know when the test starts"
bmk#1476: We need to incorporate that into the welcoming flow somehow
bmk#1476: So yeah pick up a rifle and come over to #the-pile to help with the data labelling
Python123#9881: Joined the server.
bmk#1476: Hello @Python123 ! If you're here for the data labelling, pick up a rifle and follow the yellow line to #the-pile ! You'll know when the test starts
leg0m4n#7262: Joined the server.
Sid#2121: Hey @leg0m4n ! Welcome to the Real AI Lab ™️ , where we transcribe Harry Potter erotica by hand for 👏 100% 👏 academic 👏 purposes. Check the google doc in the channel description for more info on the project, and reach out to any bluenames if you have questions
bmk#1476: I think you scared him away lol
Sid#2121: nah he's prolly gone off to find this harry potter fanfic i'm talking about
bmk#1476: Lmao
droper#8996: Joined the server.
bmk#1476: Hello @droper! If you're here for the data labelling, pick up a rifle and follow the yellow line to #the-pile ! You'll know when the test starts
duhast#0146: Joined the server.
kindiana#1016: @Aran Komatsuzaki really appreciate your insights!
The document embedder I'm proposing would be something like this: for every "source" token, a key is output, and the mean pooling over all tokens in a document would be the document key, and for every target token, a query is output, so you can have each token attend to different documents (both the query and the key come from the same network, so shared qk). You would attend to all tokens, and do a softmax of softmax attention at the token and then document level. When shuffling the dataset, you would cluster based on the mean document q/k
I think this would be better than marge because the reconstructive loss doesn't seem like it would be particularly effective at embedding "meaningful" things compared to phrasing and word order, and you might get faster training because you have meaningful losses on all tokens, not just the target documents (there would be no distinction between source and target documents, it's all just context that has been clustered)
however, you do miss out on at least half the most relevant sources at training time due to the autoregressive nature, possibly more due to the restriction that each document can only serve as the source to k other documents, so I'm not sure how that balances out, but I'm inclined to believe that as the amount of data increases to gpt sizes, it's most likely not that important, if there are multiple sources for any given fact in the data.
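A toy sketch of the mean-pooled document keys and softmax-of-softmax scoring from the proposal above (assuming PyTorch; all names and shapes here are illustrative, not from any existing implementation):
```python
import torch

def two_level_attention(query, docs):
    # docs: list of (token_keys, token_values) pairs, one per document
    doc_keys = torch.stack([k.mean(dim=0) for k, _ in docs])  # mean-pooled doc key
    p_doc = torch.softmax(doc_keys @ query, dim=-1)           # softmax over documents
    out = 0
    for p_d, (k, v) in zip(p_doc, docs):
        p_tok = torch.softmax(k @ query, dim=-1)              # softmax over tokens
        out = out + p_d * (p_tok @ v)                         # ...within the document
    return out
```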
Semantic Aberration#3692: @bmk
> did you join via the link on Twitter?
No, I'm from r/ML
bmk#1476: Ah
Semantic Aberration#3692: @bmk
> What would we have to do to get funding
Obviously, to demonstrate an ability to train a nearly-SOTA LM (2018-2019 SOTA tier, as measured via standard benchmarks, e.g. Wikitext-103 PPL, enwik8 bpc, GLUE, SQuAD) and to release it for replication.
Semantic Aberration#3692: > We need to found a startup, religious organization, institute or something to secure funding for all of us to continue our research without having to worry about having to pay living expenses.
Selling GPT3-like LM API access for crypto would be cool
A startup is a typical route though, see huggingface
bmk#1476: Our mission is to be open, selling access is the antithesis of that
Semantic Aberration#3692: Well you can release the torrent with weights, nobody will be able to run it anyway
StellaAthena#3530: “Nobody will be able to run it” is exactly what we don’t want to do.
Semantic Aberration#3692: The economies of scale work in your favor when you deploy a GPT3 with a batch size greater than 1
Semantic Aberration#3692: @StellaAthena Then you have to stop at GPT2-xl (maybe 2x that, for 11GB cards) or go down an unrewarding rabbit hole of hardcore distillation
bmk#1476: Or like slow inference
bmk#1476: People can live with 1 min per token |
Semantic Aberration#3692: Sure, could work, on my PC cpu inference of GPT2-xl worked at ~10 characters per second
Semantic Aberration#3692: @bmk Token of wisdom ™️
Semantic Aberration#3692: 32 GB RAM is only ~14B params + some buffer for activations, most people don't have even 32GB
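The arithmetic behind that estimate, assuming fp16 weights at 2 bytes per parameter (the precision is an assumption here; fp32 would halve the count):
```python
ram = 32 * 1024**3                  # 32 GB in bytes
bytes_per_param = 2                 # fp16
print(ram / bytes_per_param / 1e9)  # ~17.2B params as an absolute ceiling
# leaving a few GB for activations and the OS lands near the quoted ~14B
```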
Semantic Aberration#3692: Anyway, your discussion of MARGE and retrieval-based models was interesting
Semantic Aberration#3692: Some opinions:
1) MARGE is interesting, though idk if it is compatible as an aux objective to autoregressive LM (though someday people will have to add aux objectives because sensible datasets are not infinite)
2) Retrieval-based models are not worth it for now, too much software engineering needed to train them at scale compared to parameter-only models
3) How to use a large context window is an important problem; for some time, books will be enough. Putting relevant documents as possibly relevant context for prediction over shorter documents is an interesting hack.
Semantic Aberration#3692: > sell eleutherai tshirts
sell tshirts with unique GPT3 slogans & stories
Aran Komatsuzaki#5714: Glad things are moving while I was asleep
Semantic Aberration#3692: @AI_WAIFU
> I wonder if this is because of the distribution that GPT-2 was trained on. If the documents are not long enough, then most of the optimisation will be dedicated to lowering the loss for small contexts.
I notice that OpenAI's GPT2 outputs less coherent stories than AIDungeon's. I feel it's due to the OpenAI one not being trained on wikipedia, while the AIDungeon one was fine-tuned on lots of coherent narratives.
Semantic Aberration#3692: @Aran Komatsuzaki Thanks for fundamental discussion about transformer architectures
Aran Komatsuzaki#5714: Finally I can become CTO of something not imaginary
Semantic Aberration#3692: @Aran Komatsuzaki Do you have a favorite lower than quadratic complexity attention mechanism, e.g. from something from this list https://github.com/Separius/awesome-fast-attention ?
Aran Komatsuzaki#5714: Well I have written a whole paper to argue why efficient attention isn't worth it, so if you want an answer other than marge, you want to ask @lucidrains who still hasn't sold his soul to marge entirely yet
Aran Komatsuzaki#5714: Though I used to work on efficient attention in my previous life
Semantic Aberration#3692: @Aran Komatsuzaki Wow you bet a lot on MARGE ! Then couple of questions: |
1) Is MARGE compatible with autoregressive LM objective ?
2) Is it really that large a step up in perplexity/quality/factual correctness?
3) Is it adaptable to a new retrieval dataset once trained, or you have to bake the dataset in ?
Aran Komatsuzaki#5714: Yes and yes
About 3, the marge I'm thinking of, called extended marge, doesn't need fine-tuning, so you can just train on the same dataset from the beginning
Aran Komatsuzaki#5714: It's also written in my draft
Semantic Aberration#3692: Cool
Aran Komatsuzaki#5714: https://www.overleaf.com/read/tcdxfrvfvtbw
Aran Komatsuzaki#5714: You can read this for more details
Semantic Aberration#3692: @Aran Komatsuzaki Do you think that the novel skills GPT3 seems to exhibit compared to smaller models are explainable by drops in LM PPL, or can you have a model with the same PPL but without proto-reasoning and stylistic & logical adaptation from prompt instruction?
Aran Komatsuzaki#5714: Some models show better text generation without improved ppl, but from gpt-2/3 we can observe that the performance on various tasks is strongly correlated with the drop in ppl, so i think it's safe to assume so.
Aran Komatsuzaki#5714: also, retrieval-based model improves ppl efficiently as can be seen from knn-lm
Aran Komatsuzaki#5714: so, how do we compete against huggingface?
Semantic Aberration#3692: @Aran Komatsuzaki Huggingface bets on convenience and integration, AFAIK they do little novel research (distilBERT/distilGPT2, and those likely have way earlier internal FAANG counterparts, I think distillation and quantization are internal FAANG competencies not distributed to the wider public)
We could at least do some novel architectural/objective/dataset *research*.
Aran Komatsuzaki#5714: yeah agreed that their research is very conservative
Semantic Aberration#3692: > How
Train a GPT3-like model with sub-quadratic attn, warm starting it from megatron 11B logits,
|
show the world that you (the team) can do it 🤔
Aran Komatsuzaki#5714: right, gpt-3 type stuffs are what makes us different
Semantic Aberration#3692: IMHO GPT-3 lacked book education
Semantic Aberration#3692: @Aran Komatsuzaki Oh, the irony !
Semantic Aberration#3692: I'm reading your paper, the abstract is very ambitious 👍
Aran Komatsuzaki#5714: adding (text)books would be great, and our project contains a way to utilize pdf files into lm
Aran Komatsuzaki#5714: thanks
Aran Komatsuzaki#5714: I want to know who are contributing to EleutherAI. Most of us are anonymous, so we need to know the name of some of our key contributors.
kindiana#1016: maybe we should have a channel for introductions
Semantic Aberration#3692: @Aran Komatsuzaki Do you agree there are fundamental computational limitations in current (non-recurrent) transformers (i.e. they cannot represent unparallelizable computational circuits of depth greater than the number of layers) and that this could be the qualitative difference between these models and top human thinkers (who actually use their short term memory/recurrence)
Aran Komatsuzaki#5714: yeah we prob need a specific channel
Aran Komatsuzaki#5714: @Semantic Aberration I think it's reasonable to guess that the current transformer prob can't reach to human level simply by scaling up
Aran Komatsuzaki#5714: which is why we need some new components like retriever etc
Semantic Aberration#3692: @Aran Komatsuzaki I have also heard, and agree, that a large enough RNN, magically trained, could. Because they are really turing complete and sequential.
Aran Komatsuzaki#5714: we may need something more than retriever and conditional computation or anything we already know, but that's beyond my imagination as of now
Rohan#7232: Joined the server.
Aran Komatsuzaki#5714: no turing complete means shit
Aran Komatsuzaki#5714: turing complete has nothing to do with the empirical performance (e.g. the tasks gpt-3 was evaluated on)
StellaAthena#3530: @Semantic Aberration Speaking for myself, my answer is “yes, but not in an interesting way.” Whenever the introduction to a paper talks about how RNNs work like human brains I roll my eyes. We have no good evidence to think that what we can do on computers is remotely similar to how brains work.
Semantic Aberration#3692: Sure does (they play with definitions and constants/sizes) |
Semantic Aberration#3692: @Aran Komatsuzaki Thanks ! I will think more about retrieval-based models
StellaAthena#3530: Ah, TC is something I’m *actually* qualified to opine on.
Semantic Aberration#3692: @StellaAthena Cool, well I'm a bit taken aback when I see a paper proving that some NN is TC while using infinite precision real weights, instead of the more intuitive definition of "being able to execute a TC computation requiring TM memory O(F(N)) with O(G(N)) parameters/weights/recurrent units"
Aran Komatsuzaki#5714: i don't think i have a permission to open a channel
Semantic Aberration#3692: Nevertheless I know that exceptional humans can do exceptional sequential computations in their minds, e.g. multiply large numbers and prove theorems, and a vanilla transformer (sadly) won't be able to, simply due to the lack of modeled sequential steps
Semantic Aberration#3692: multiplication is a bad example though, it's not hardcore sequential
Semantic Aberration#3692: I know there is a lot of nuance in computability/computational complexity theory though
StellaAthena#3530: @Aran Komatsuzaki is dead right. There are several reasons we don’t care about it:
1. There are computational problems we would like to solve (in practice) via AI that are *harder* than the halting problem
2. 99% of AI is about approximating things and the set of functions computable by Turing machines does not have interesting properties as a space to approximate (this is a deep result in computable analysis).
3. Expressive power and metrics we care about like train time and frequency of convergence are very weakly correlated. Adding functionality to a model to make it Turing complete is not a good way to improve its performance in the real world.
Semantic Aberration#3692: @StellaAthena
> We have no good evidence to think that what we can do on computers is remotely similar to how brains work
Also: given quite simple biophysical assumptions on action potential propagation speed and synapse delay, one can derive limits on the computational circuit size (width, depth) that a human can execute to, say, answer a question in X seconds.
StellaAthena#3530: Complexity theory is slowly working out how to deal with average time complexity. Once that has a rigorous foundation I would expect complexity theory to be more relevant to AI design (though still I wouldn’t expect Turing completeness to matter much, though maybe there’s a notion of “average case TC” that’s interesting)
Semantic Aberration#3692: @StellaAthena I agree with your points, thanks. TC is a question of the future for now.
StellaAthena#3530: Since #1 often throws people, let me give a concrete example: determining whether or not there is a forced win for the person whose turn it is from a game state of *Magic: the Gathering* is in general **much** harder than the halting problem.
StellaAthena#3530: Team multiplayer puzzle games involving limited range of vision can also have undecidable strategies (though not nearly as hard as *Magic*). This includes Portal, Smash, and Team Fortress 2
StellaAthena#3530: The fact that humans view 1 v 1 and team games as being roughly interchangeable strategically is very interesting from a bio comp standpoint given that one is decidable and one is (often) not
Aran Komatsuzaki#5714: I tweeted about eleutherai becoming something legitimate
StellaAthena#3530: Do I follow you on twitter?
StellaAthena#3530: Also, if any of our newbies are interested in hardcore math and the theory of AI *please* come talk to me.
Aran Komatsuzaki#5714: i don't know. i use my real name both here and on twitter, so you can figure out easily.
StellaAthena#3530: I do now 🙂
Aran Komatsuzaki#5714: thanks 🙂
StellaAthena#3530: > Also, if any of our newbies are interested in hardcore math and the theory of AI *please* come talk to me.
I like my job as an industry ML researcher but holy shit does it get frustrating being unable to talk about math with anyone at my company 😦
Aran Komatsuzaki#5714: i'm in academia, so i have an opposite problem
Aran Komatsuzaki#5714: actually i don't talk with people in my school, except for the ones in twitter or here
StellaAthena#3530: My colleagues are competent, but I can’t even bounce my theoretical ideas off of them because of insufficient background.
Aran Komatsuzaki#5714: I was in math phd for one semester, so i'm one-eighth qualified, i guess
Aran Komatsuzaki#5714: no, one-tenth
Semantic Aberration#3692: @StellaAthena
> the set of functions computable by Turing machines does not have interesting properties as a space to approximate (this is a deep result in computable analysis).
Does the no free lunch theorem follow from this?
Don't you think that the uniform prior on computations (mathematically valid in the context of a mathematical theorem) encountered in the physical universe is misleading, and that the physical laws of a given universe impose nontrivial priors on the computations you as an observer encounter in it?
StellaAthena#3530: @Semantic Aberration that’s a really good question, though the answer is no. I think that NFL is equivalent to a strong version of P != NP, possibly the ETH though I haven’t thought about this in a while.
StellaAthena#3530: Ehh, It depends on what you consider the set of “problems under consideration”
Semantic Aberration#3692: One could conjecture that given some universe, some TC basis requires far fewer bits to specify a typical computation an observer may encounter there than any random TC basis.
Semantic Aberration#3692: @StellaAthena I find it interesting to view a set of problems weighted by probability of generation as a set conditioned on the universe I'm instantiated in
StellaAthena#3530: The complexity theory idea that I think you’re heading towards is “hard on average.”
StellaAthena#3530: The NFLT says that optimization problems are hard on average, in a certain sense.
Aran Komatsuzaki#5714: we don't want to sell access to gpt-3, but we either want to sell something other than AGI t-shirts or get funding as a non-profit or something. what can we sell/do?
Semantic Aberration#3692: (don't we want to, though? if we don't sell it, nobody will be able to run it)
Semantic Aberration#3692: @Aran Komatsuzaki ML consulting is what some AI startups do, but do you really want to be a data plumber
StellaAthena#3530: It says that optimization is sufficiently hard that on average clever algorithms don’t help.
StellaAthena#3530: You can get lucky on some instances, but averaging across problems and across instances algorithms perform roughly the same. In other words, the problems are deeply intractable.
Semantic Aberration#3692: @StellaAthena I think that "on average" is misleading for someone tackling a problem about this concrete physics, with its Gaussian, Zipfian and power laws everywhere
Aran Komatsuzaki#5714: i don't want to be, but i can provide my experience for money if the problem is sufficiently interesting.
Semantic Aberration#3692: It is interesting to study universal induction, @StellaAthena
Aran Komatsuzaki#5714: actually have no idea about business. i'm in academia forever.
StellaAthena#3530: I was going to do that, but I took a break to avoid burnout and now I’m getting paid a lot of money to do interesting research and it’s very hard to justify finishing a PhD when it requires taking an 80k/year pay cut at the age of 26
bmk#1476: I'm ok with providing API as a service as a paid thing like OA is on the condition that we also release the model
Aran Komatsuzaki#5714: sounds good
bmk#1476: I am absolutely not ok with not releasing the model and still making a paid API
Aran Komatsuzaki#5714: @StellaAthena i want to go industry, but they haven't hired me yet lol guess i'll wait til they do
aquajet#7800: ^
Semantic Aberration#3692: ... For example: if you are tasked to solve an NP-hard TSP, you are often dealing with an instance of TSP about dots on 2d slightly curved geometry, maybe locations in a city, with traffic flows having peculiar distribution
bmk#1476: @Aran Komatsuzaki let's hope eleutherai gets acquihired by Overlord Google
Semantic Aberration#3692: Surely some optimization priors would work better for such instances
Aran Komatsuzaki#5714: @bmk yeah lol
bmk#1476: Guys let's pump out all the papers we can
bmk#1476: More papers = more legitimacy
StellaAthena#3530: Right. But when I’m trying to solve optimization on a 40-dimensional simplex those priors will be unhelpful.
Semantic Aberration#3692: @bmk >I am absolutely not ok with not releasing the model and still making a paid API
Obviously one should release the model, as a torrent
StellaAthena#3530: The NFLT says that given two sets of priors, there is always a distribution over the problem space where one performs better than the other.
StellaAthena#3530: Nobody would suggest that priors don’t help once you specify a problem distribution. That’s absurd.
StellaAthena#3530: It’s about cross transferability of priors from context to context (aka distribution to distribution)
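(For reference, the standard Wolpert–Macready statement of the NFLT being discussed — notation follows their 1997 paper, not this thread:)
```latex
\sum_{f} P(d^y_m \mid f, m, a_1) \;=\; \sum_{f} P(d^y_m \mid f, m, a_2)
```
Here the sum ranges over all objective functions $f$, $d^y_m$ is the sequence of cost values observed after $m$ evaluations, and $a_1, a_2$ are any two search algorithms: averaged uniformly over all problems, no algorithm outperforms any other.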
StellaAthena#3530: Oooo I have a low-hanging fruit list somewhere. I should go find it.
Semantic Aberration#3692: @StellaAthena I agree. Though I think there could be a slight prior/bias here and there inherent to the human brain, which influences the paths humans decide to take on the global mathematical/ZFC/yourTheoryOfChoice proof tree, building a theory of mathematics unique to our species. Thus even a problem of simplices could bear traces of this bias.
Semantic Aberration#3692: @StellaAthena I think the problem of computing these general priors from observation is, in itself, very interesting.
Semantic Aberration#3692: And the problem of specifying how useful they are.
Semantic Aberration#3692: Some people in DeepMind work on this for decades.
StellaAthena#3530: In social network analysis, there’s a technique based in information theory commonly used for anomaly detection. It’s unusual among graph anomaly detection techniques in that it is able to detect anomalous graphs out of a set of graphs (most are for detecting anomalous nodes within a single graph). Can this technique be used to catch graph adversarial examples?
Reqs:
- Someone who understands information theory.
- Several people competent with graph neural networks.
Papers:
https://eecs.wsu.edu/~cook/pubs/kdd03.pdf
citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.98.4272&rep=rep1&type=pdf
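(A minimal sketch of the compression-based scoring idea those papers build on — the edge-pattern proxy and helper names below are invented stand-ins for real SUBDUE-style substructure discovery, not the papers' method:)
```python
from collections import Counter
import math
import networkx as nx

def edge_patterns(g):
    # Stand-in "substructure label": sorted endpoint degrees of each edge.
    return [tuple(sorted((g.degree(u), g.degree(v)))) for u, v in g.edges()]

def anomaly_scores(graphs):
    corpus = Counter(p for g in graphs for p in edge_patterns(g))
    total = sum(corpus.values())
    scores = []
    for g in graphs:
        pats = edge_patterns(g)
        # MDL intuition: bits to encode this graph's edges under the corpus
        # distribution -- graphs full of rare patterns compress badly.
        bits = -sum(math.log2(corpus[p] / total) for p in pats)
        scores.append(bits / max(len(pats), 1))
    return scores

graphs = [nx.gnm_random_graph(20, 40, seed=i) for i in range(9)]
graphs.append(nx.star_graph(19))  # 20-node star: structurally different
print(anomaly_scores(graphs))     # the star should score highest
```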
Semantic Aberration#3692: @StellaAthena There was talk about detecting GAN-generated images by their spectral discrepancies, maybe you could do the same on graphs, maybe even with graph spectral properties.
StellaAthena#3530: That seems worth exploring, @Semantic Aberration
StellaAthena#3530: The next item on my low hanging fruit list is now #the-rad-lab
stig#1237: Joined the server.
Aran Komatsuzaki#5714: Delip Rao is interested in EleutherAI
Aran Komatsuzaki#5714: He found it from my tweet.
Aran Komatsuzaki#5714: He wants to know how he can help this.
bmk#1476: *woah*
bmk#1476: um
bmk#1476: guys what do we need help with
bmk#1476: also is this a dick move https://cdn.discordapp.com/attachments/729741769738158194/744039938747793519/unknown.png
dr#9530: Joined the server.
Aran Komatsuzaki#5714: We need to create a specific channel to discuss business/startup matters.
Aran Komatsuzaki#5714: The current format is too chaotic
bmk#1476: yes agreed
StellaAthena#3530: Then there’s a project I haven’t finished: use the graph minor theorem to analyze the structural properties of neural networks. So far the most exciting thing I’ve proven is that there is a quadratic time (in the number of edges) algorithm that takes as its input a graph, a function, and an error bound and determines if a neural network with this underlying computational graph is unable to approximate the function within that error bound.
Aran Komatsuzaki#5714: Can you do that?
bmk#1476: only daj can add channels though
Semantic Aberration#3692: Startups like to talk about unit economics and share dilution
StellaAthena#3530: Daj needs to give more people powers if we want to grow
Aran Komatsuzaki#5714: @Daj Please create a new channel
bmk#1476: he lives in europe so he's probably asleep
bmk#1476: (he better be)
Semantic Aberration#3692: @StellaAthena Fundamental results on limits of NN approximation powers are very interesting, thanks for working on it
bmk#1476: also @Daj can you give me and sid server management perms, being able to add channels etc would be kinda useful
StellaAthena#3530: It’s partially an excuse to use my favorite theorem 😛
StellaAthena#3530: But it’s also independently interesting
bmk#1476: anyways re: startup i'm mildly opposed to this idea
bmk#1476: i think our goal should just be to ensure we have enough money to do research, not try to turn a profit
Semantic Aberration#3692: @StellaAthena Reminds me of this paper I skimmed but couldn't follow https://arxiv.org/abs/2007.15298
bmk#1476: that being said maybe being a startup in name has advantages for that, im not certain
Aran Komatsuzaki#5714: yeah i think so
Semantic Aberration#3692: @bmk NonProfit !
bmk#1476: in any event i like the idea of eleutherai being this grassroots ai community
bmk#1476: maybe at some point we need to make things more rigid
bmk#1476: but i think we should try to avoid that
StellaAthena#3530: I don’t think anyone is proposing making us a start up
StellaAthena#3530: The suggestion was to have a space to discuss start up ideas if people have them
bmk#1476: ah that makes a lot more sense
StellaAthena#3530: As soon as you become a for-profit org, the whole point of grassroots AI democratization goes out the window
bmk#1476: yeah agreed
VitaminC#1262: Joined the server.
Aran Komatsuzaki#5714: is it? openai is half non-profit and half for-profit.
bmk#1476: ~~our unspoken goal is to not become another openai~~
StellaAthena#3530: Our unofficial motto is “what Open AI was supposed to be” so I think that example reinforces my point
Aran Komatsuzaki#5714: we can become a more ethically sound openai with a similar structure.
Deleted User#0000: i've worked for 3 silicon valley companies, including Uber during their rocket phase
Semantic Aberration#3692: I don't like paying money for APIs and stuff tbh, but paying for GPT3 samples at slightly more cost than hw + power is totally ok with me, because I won't be able to run a DGX-2 at my home to host my GPT3 anyway.
Deleted User#0000: money corrupts
Aran Komatsuzaki#5714: structure isn't the culprit here. the principle is.
Deleted User#0000: that said, whatever gets us to open sourcing a working model
Deleted User#0000: i'm game
dr#9530: Discovered this via Aran’s tweet. Pleased to join all of you here. For now, I'm going to just listen and learn, but happy to help in any way I can.
Aran Komatsuzaki#5714: yeah open-sourcing something every one of us believes in
bmk#1476: If we take investment, etc one of our uncompromising principles should be that all code and models must be open
VitaminC#1262: Same.
VitaminC#1262: Discovered this thanks to Aran
Aran Komatsuzaki#5714: @dr Thanks a lot!
bmk#1476: Great to have you here @dr! You can check out our (admittedly messy, in true organic grassroots fashion) planning document here to get an idea of what we've been doing https://docs.google.com/document/d/1wfCZBd18DMNt6YcC6boPNMd9qzzH3zpHHfKj4dezk0g/edit#heading=h.1op7948crp4f
dr#9530: Thanks, @bmk
StellaAthena#3530: I think that focusing on money is largely a distraction. Yes compute costs money but we have the compute we need for now, right?
Semantic Aberration#3692: Would be cool (maybe) to manage to get Stallman's approval and a meme license that doesn't let others use our dataset to train their model without sharing the weights
Aran Komatsuzaki#5714: Yeah money isn't the primary objective here.
StellaAthena#3530: @Semantic Aberration One of our projects is to make that a real thing rather than a meme
bmk#1476: The one compute thing we do actually need is for data processing
StellaAthena#3530: “Structure” can easily mean “a GitHub project and a website that organizes ideas.” I think fretting about being a non-profit or whatever right now is not only missing the point but getting in the way.
bmk#1476: Jili hasn't gotten back to us in a while
bmk#1476: And we need a lot of cpu time
bmk#1476: Good news is we're not in a hurry anyways
bmk#1476: The data pipeline probably won't be ready for another while
StellaAthena#3530: I have a DGX-1 I can run things on overnight if that’s helpful @bmk.
bmk#1476: Wow that's really awesome, that might be useful for some projects
bmk#1476: For data processing specifically it's CPU and bandwidth bound though
dr#9530: Just want to throw a suggestion you all might have already considered. Folding@home code is open source (at least most critical parts). What would it take to adapt it to a GPT@home?
bmk#1476: Unfortunately the problem is not easily distributed
kindiana#1016: the flops per bit of communication is pretty bad for distributed training
kindiana#1016: data processing could be possible though
bmk#1476: Unlike folding which divides easily into independent work units, it's going to be pretty hard to divide up training
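(Back-of-envelope numbers behind kindiana's flops-per-bit point — model size, batch, and bandwidth are assumed round figures, not measurements:)
```python
params = 1.5e9                      # assumed GPT-2-scale model
grad_bytes = 2 * params * 4         # fp32 gradients, ~2x traffic for a ring all-reduce
flops_per_step = 6 * params * 512   # ~6*N flops per token, assumed 512-token local batch
home_bw = 12.5e6                    # 100 Mbit/s uplink, in bytes/s

step_comm_time = grad_bytes / home_bw
print(f"gradient exchange per step: {step_comm_time:.0f} s")  # ~960 s
print(f"compute that link can feed: {flops_per_step / step_comm_time / 1e9:.1f} GFLOP/s")  # ~4.8
# A single consumer GPU sustains on the order of 10 TFLOP/s, so the wire,
# not the GPU, is the bottleneck by roughly three orders of magnitude.
```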
StellaAthena#3530: It’s a company resource, but my company allows me to spend time on passion projects. I believe there are currently six people with access to it because we have a lot more money than technical competency right now.
Deleted User#0000: foldit was wildly popular when the coronavirus first started
Deleted User#0000: people in the world want to participate in something extraordinary
bmk#1476: But yeah distributed data processing is a good idea and aquajet has been working on a coordinating server for the data pipeline
bmk#1476: If anyone wants to help make things to lower onboarding friction that would be awesome
bmk#1476: Everything we have has sharp edges rn because we just don't really have the developer time to make everything and also make it user friendly
Deleted User#0000: the problem i see is, say there is some scheme for decentralized training, accounting for rejecting stale weights, how do you verify it is legit
Deleted User#0000: and even worse, not some attack (gradient ascent)
Deleted User#0000: im sure there is a lot of research into this, but i haven't seen a solution in this space..
Deleted User#0000: (a year or two ago, i was thinking of building some decentralized training scheme in the browser with tensorflowjs)
bmk#1476: Folding solves the problem with massive redundancy
StellaAthena#3530: @Deleted User It’s called “verifiable computation” and it doesn’t work for NNs yet.
bmk#1476: Triplicate and quadruplicate it
bmk#1476: Unfortunately that's also massively inefficient
StellaAthena#3530: (That’s another one of my outstanding problems: I’ve reduced it to a problem in algebraic geometry, but I still need to learn the relevant material to solve it)
Deleted User#0000: https://arxiv.org/pdf/2001.08103.pdf i think some of the healthcare DL papers are worth reading. they've been trying to have hospitals learn separate NN's and send up the weights, in some federated scheme
Deleted User#0000: hospitals have to deal with patient anonymization because of HIPAA. it's a way to pool learning without having to expose patient data.
Semantic Aberration#3692: @Deleted User I find this method cool, but it's not enough for distributed training https://www.microsoft.com/en-us/research/publication/1-bit-stochastic-gradient-descent-and-application-to-data-parallel-distributed-training-of-speech-dnns/
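(A minimal numpy sketch of the core trick in that paper — sign quantization with error feedback; the function name, learning rate, and toy loss are mine:)
```python
import numpy as np

def one_bit_step(grad, residual, lr=0.1):
    """1-bit SGD quantization with error feedback, sketched. Each worker
    would transmit only sign bits plus one scale; the quantization error
    is carried into the next step instead of being thrown away."""
    corrected = grad + residual              # add back last step's error
    scale = np.mean(np.abs(corrected))       # one float per tensor
    quantized = scale * np.sign(corrected)   # what actually goes on the wire
    new_residual = corrected - quantized     # error feedback
    return -lr * quantized, new_residual

# toy usage: minimize ||w||^2
rng = np.random.default_rng(0)
w = rng.normal(size=4)
residual = np.zeros_like(w)
for _ in range(3):
    grad = 2 * w
    update, residual = one_bit_step(grad, residual)
    w += update
print(w)
```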
kindiana#1016: I think the best shot at distributed training would be using model parallelism and reversible layers, so the minimum level of synchronization is required
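(Sketch of the reversible-block idea kindiana is pointing at, RevNet-style — `F` and `G` below are toy stand-ins for real attention/FFN sublayers:)
```python
import numpy as np

# A reversible block: activations can be recomputed from the layer's output,
# so workers holding different layers don't need to stash (or ship around)
# forward activations for the backward pass.
def F(x): return np.tanh(x)
def G(x): return np.tanh(0.5 * x)

def forward(x1, x2):
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def inverse(y1, y2):
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = np.random.randn(4), np.random.randn(4)
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
print(np.allclose(x1, r1), np.allclose(x2, r2))  # True True
```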
Deleted User#0000: so there's some people trying to solve this decentralized training scheme, with some modest success, last i touched healthcare AI
Deleted User#0000: there are more papers if you google this
Louis#0144: Ok sure no startup but when’s the IPO
Louis#0144: ;p
StellaAthena#3530: (Relatedly, if anyone can figure out a randomized algorithm for distinguishing between tropical varieties by sampling them at a small number of points DM me and we can write a paper together)
Louis#0144: I’m not sure that’s inherently possible
Semantic Aberration#3692: @StellaAthena Isn't this compressive sensing problem ?
Louis#0144: Particularly the small number of points part
Deleted User#0000: i think for decentralization to work, you'd need trusted parties
bmk#1476: We could also do synthetic gradients
Louis#0144: But I haven’t learned much about tropical varieties besides two papers
Deleted User#0000: ohhh how would synthetic gradients work? from the deep mind paper?
Semantic Aberration#3692: I think DM abandoned this direction, but it was very cool
Deleted User#0000: yea, its such an old paper
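(Rough sketch of the DNI idea from that DeepMind paper — a small local model predicts dL/dh so a layer can update without waiting for the global backward pass; the linear predictor and toy loss are simplifications, not the paper's setup:)
```python
import numpy as np

# One layer W being trained, one synthetic-gradient model M predicting
# dL/dh from the activation h (toy loss here: 0.5 * ||h||^2).
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8)) * 0.1
M = np.zeros((8, 8))

for step in range(100):
    x = rng.normal(size=8)
    h = W @ x
    g_hat = M @ h                            # predicted dL/dh, available instantly
    W -= 0.01 * np.outer(g_hat, x)           # local update, no waiting for backprop
    g_true = h                               # true dL/dh arrives "later"
    M -= 0.01 * np.outer(g_hat - g_true, h)  # train the predictor toward g_true
print(float(np.linalg.norm(g_hat - g_true)))  # predictor error on the last step
```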
Louis#0144: Deep learning blockchain when :^)
Louis#0144: @ trusted parties
bmk#1476: 2016 is old?
Louis#0144: God I cringed saying that
Louis#0144: 2016 isn’t that old
bmk#1476: Man this field moves fast
StellaAthena#3530: @Louis So, I actually only need low degree tropical varieties, which is why I am hopeful. Concretely, if you can determine whether p(x_0, ..., x_n) and q(x_0, ..., x_n) are the same polynomial or not, where p and q are known to be **multilinear** tropical polynomials, then the result follows.
Louis#0144: Something something compare the topology
StellaAthena#3530: Yeah, the something something isn’t super easy though
Louis#0144: LMAO
Louis#0144: true
Louis#0144: Provably the number of samples you need to determine a homotopy class is massive
Louis#0144: I can find specific papers on that tmrw
Louis#0144: I would think that’s relevant here
StellaAthena#3530: Well the result holds for classical polynomials
StellaAthena#3530: It follows from the Schwartz-Zippel Lemma
Louis#0144: Oh damn
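(The classical randomized identity test Stella is citing, sketched — the helper name and constants are illustrative, and the open problem is the *tropical* analogue, which this does not solve:)
```python
import random

# Schwartz-Zippel: if p != q are multilinear in n variables (total degree <= n),
# they agree at a random point drawn from S^n with probability <= n/|S|,
# so a handful of random evaluations distinguishes them with high confidence.
def probably_equal(p, q, n_vars, trials=20, field_size=10**9):
    for _ in range(trials):
        pt = [random.randrange(field_size) for _ in range(n_vars)]
        if p(*pt) != q(*pt):
            return False   # definitely different
    return True            # equal except with probability <= (n_vars/field_size)**trials

p = lambda x, y, z: x * y + z
q = lambda x, y, z: x * y + z
r = lambda x, y, z: x + y * z
print(probably_equal(p, q, 3), probably_equal(p, r, 3))  # True False
```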
surajpatil#3994: Joined the server.
Louis#0144: So can you take a smooth approximation of your tropical polynomial
Louis#0144: That way you’re dealing w smooth manifolds
StellaAthena#3530: I’ve tried that, but you need to know that you’re sampling away from the corner
dr#9530: @Deleted User a small writeup I did long ago on synthetic gradients https://deliprao.com/archives/187
StellaAthena#3530: It’s a black-box setting, so you don’t know where the corner is.
StellaAthena#3530: It’s possible you can get “good enough” everywhere outside a small ball around the corner and control the accuracy with the radius? I haven’t tried that but that’s a good idea if that’s what you had in mind
Deleted User#0000: @dr thank you! i read this paper on a flight some time ago and then forgot about it
StellaAthena#3530: Hey if I was to drop $10 and get us a custom URL for a website for a year, what would we want it to be?
bmk#1476: we already own the obvious one
StellaAthena#3530: What do we own?
StellaAthena#3530: eleuther.ai?
bmk#1476: yes
bmk#1476: nothing is up yet though
StellaAthena#3530: Dope
Deleted User#0000: here's a paper where they tried direct feedback alignment on transformers https://arxiv.org/pdf/2006.12878.pdf
Deleted User#0000: tldr: not as good as BP yet, but close
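(A minimal numpy sketch of the DFA update rule that paper scales to transformers — the toy two-layer network and regression target are mine:)
```python
import numpy as np

# Direct feedback alignment: each hidden layer receives the output error
# through its own fixed random matrix B instead of the transposed forward
# weights, so no backward pass through the network is needed.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8)) * 0.1
W2 = rng.normal(size=(1, 16)) * 0.1
B1 = rng.normal(size=(16, 1))     # fixed random feedback matrix

for step in range(500):
    x = rng.normal(size=8)
    h = np.tanh(W1 @ x)
    y = W2 @ h
    target = np.sum(x)            # toy regression target
    e = y - target                # output error
    dh = (B1 @ e) * (1 - h**2)    # DFA: B1 @ e in place of W2.T @ e
    W1 -= 0.01 * np.outer(dh, x)
    W2 -= 0.01 * np.outer(e, h)
print(float(e**2))                # squared error on the last sample
```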
1122pv#9797: Joined the server.
maghav#7178: Joined the server.
chandhooguy#5586: Joined the server.
bmk#1476: Hey @1122pv @maghav @chandhooguy @surajpatil ! Welcome to our Very Legit Real AI Lab™! To get an overview for the project, check out the doc (https://docs.google.com/document/d/1wfCZBd18DMNt6YcC6boPNMd9qzzH3zpHHfKj4dezk0g/edit). If you're here to help with data processing, ~~pick up an rifle and~~ follow the yellow line to #the-pile. ~~you'll know when the test starts.~~
researcher2#9294: Joined the server.
researcher2#9294: Hello! Just came here from reddit. I like the idea of open source GPT3 and would be willing to contribute if you have a kickstarter or something. What interests me more however are generalized learning agents.
I recently saw iGPT from open AI and it got me thinking about attention for generalized unsupervised learning, but the quadratic nature made this impossible (or so I thought).
Then I read about sparse attention and just got through the paper on axial attention and downloaded a repo from someone called "lucidrains". Is this the same lucidrains I see above? :)
So my current plan would be to apply this initially to a box world, basically copying https://arxiv.org/abs/1803.10122 and replacing the VAE and LSTM with axial transformers.
Before I go and spend the next month hooking this up to Atari and whatnot, are there any similar projects currently happening? I would assume OpenAI is working on this as we speak, but they are no longer open so 🤷 I'm no genius so would probably be much more useful helping out or tinkering with an existing project if such a thing exists.
_7aso0on#3258: Joined the server.
Daj#7482: Christ I'm gone for one evening and a lot happens haha
Daj#7482: @bmk @Sid you should have full perms now, dunno why I hadn't set those up earlier
Ravna#1831: Replacing LSTMs with transformers in RL settings is not as straightforward as plugging out then plugging in. @researcher2 Check this paper out: https://arxiv.org/abs/1910.06764
Daj#7482: We have a GitHub org, website, Twitter, just not used yet
bmk#1476: can we set stuff up for the github org
researcher2#9294: > Replacing LSTMs with transformers in RL settings is not as straightforward as plugging out then plugging in. @researcher2 Check this paper out: https://arxiv.org/abs/1910.06764
@Ravna will check it out thanks
bmk#1476: like, start adding people and put up the branding
Daj#7482: Of course
Daj#7482: I haven't looked into how to do this yet
Daj#7482: What needs to be set up?
bmk#1476: first things first can you add me to the org
Daj#7482: Of course yea
Daj#7482: What was your GitHub name again?
bmk#1476: leogao2
Daj#7482: Sent
bmk#1476: 👍
bmk#1476: wait uh how do we get a name@eleuther.ai email?
Dmitry#5986: Joined the server.
Daj#7482: We have to setup a decent email provider and point the domain to it, I think
bmk#1476: hmm
Daj#7482: Hey @Dmitry ! Welcome to the AI Construction Zone! Check the channel description for info and don't hesitate to ask questions!
Daj#7482: Yea @bmk I'm no expert at web admin and should look into this
Daj#7482: You just HAD to expand out the day I go on vacation lol
bmk#1476: lol
bmk#1476: it's ok we can take stuff over
bmk#1476: enjoy your vacation haha
Daj#7482: Nah it's all good, I'm still available and all
Daj#7482: But ofc you guys need to be able to do stuff without me
bmk#1476: dont worry we got it all under control
bmk#1476: also we werent planning on expanding per se
bmk#1476: originally i just wanted to get more hands to help with data labelling but it appears to have had the exact opposite effect
bmk#1476: like a dozen people have joined now but none of them have done any data labelling >.>
Daj#7482: That's about what I would have expected haha
bmk#1476: like i *specifically* said, yknow, "pls come help label data"
bmk#1476: and people just join and not even say anything
Daj#7482: Guess we should think about better onboarding and distributed processing stuff
bmk#1476: yeah
kindiana#1016: data labeling is not as sexy as theorycrafting agi xP
Daj#7482: > and people just join and not even say anything
@bmk yes this is how discord works hah
bmk#1476: im seriously considering just using trafilatura for english and taking a break on other languages
bmk#1476: there's no way im ever getting enough data at this rate
bmk#1476: might as well just say oh well and also use trafilatura for other languages too
bmk#1476: or maybe newspaper
Daj#7482: fwiw that sounds good to me
bmk#1476: honestly idek
bmk#1476: my inner perfectionist is dying but meh
Daj#7482: I think other languages is just too high investment versus reward until we've got a working PoC
bmk#1476: good call
bmk#1476: i still want to collect a bit more english just to be sure that trafilatura is actually better but once we do, im ok with going for full-scale corpus construction
Daj#7482: Nice
bmk#1476: and we can just release a v2 with better language support i guess
Daj#7482: Yea, it's not like we don't have enough things to work on
bmk#1476: yeah ok so after this we need to hunt for cpu power
bmk#1476: and the next big project to hyperfocus on is :books2:
bmk#1476: shit, we're making progress
Daj#7482: Indeed
Daj#7482: We'll get there in time
bmk#1476: ok so blocking issue for :books2: is yarr harr servers. lets finally contact archivist
bmk#1476: actually maybe we should get the pipeline working first?
Daj#7482: Pipeline seems pretty dominant in importance
Daj#7482: And maybe some administrative/housekeeping work like updating links, documentation, onboarding etc
bmk#1476: oh yeah right
bmk#1476: ive been tending to the onboarding doc a bit
Amazing_Wall#5488: Joined the server.
bmk#1476: so it's not *completely* garbage
Daj#7482: Awesome
Daj#7482: Hey @Amazing_Wall ! Welcome to the Unsupervised Hacker Space! Check the channel topic for info and don't hesitate to ask questions!
Daj#7482: Hey @frieda ! Welcome to The Autonomous Nation of EleutherAI! Check the channel description for info and don't hesitate to ask questions!
bmk#1476: >you are now entering Free Monoid
Tomek#5855: Joined the server.
bmk#1476: Hey @Tomek ! Welcome to The Papal State of EleutherAI! Check the channel description for info and don't hesitate to ask questions!
thenightocean#6100: I am working on the site atm. Doing some basic graphic design...nothing too fancy
thenightocean#6100: should we create channel for website so that we dont crowd out the convos in #general ?
Daj#7482: Makes sense to me
Daj#7482: #website , boring name, but it works
Sid#2121: > there's no way im ever getting enough data at this rate
@bmk what about the repo @Semantic Aberration posted in the pile? Didn’t it already have a lot of html vs clean text data?
Sid#2121: @Daj can I get access to the github org too?
Daj#7482: Remind me of your GitHub name?
Sid#2121: sdtblck
Daj#7482: Wasn't it stdblck?
Daj#7482: Or am I Berenstaining
Sid#2121: I think you are hah
Daj#7482: Huh
Daj#7482: Sent
Sid#2121: > @bmk what about the repo @Semantic Aberration posted in the pile? Didn’t it already have a lot of html vs clean text data?
@Sid also woops, just answered my own question by reading the latest messages
Babbleberns#6590: Joined the server.
JJ Hep#6020: Joined the server.
donderper#9738: Joined the server.
tobys#2176: Joined the server.
Aran Komatsuzaki#5714: @Tomorrows_Gone_ on Twitter commented as follows:
This is awesome. I think startup with a capped profit clause to revert to a non-profit makes sense. You could definitely raise a significant round for this, would be much easier than crowdfunding.
Aran Komatsuzaki#5714: (regarding my tweet that we'll make EleutherAI)
Daj#7482: Hey @Babbleberns @JJ Hep @donderper @tobys ! Welcome to the AGI Bootcamp! Check the channel topic for info and don't hesitate to ask questions!
Daj#7482: @Aran Komatsuzaki oof we really need to think hard about what our goals are here. This started as a little hobby project, I'm very hesitant about getting serious money involved
Babbleberns#6590: Thanks @Daj ! Glad to follow you in this awesome project!
Daj#7482: I'm pretty sure I'd at most want this to become a strict non profit. Money attracts the wrong kind of incentives
Aran Komatsuzaki#5714: I understand that. @Deleted User said something similar.
Aran Komatsuzaki#5714: Non-profit is still more than exciting for me.
Daj#7482: tbh I don't know what "legitimacy" even means here or why it matters
Aran Komatsuzaki#5714: That's how @bmk described it, not my language.
Aran Komatsuzaki#5714: But definitely it'll help attract more talent, I suppose.
Daj#7482: I feel like the niche we are serving is very different though
Daj#7482: If you want a nice shiny "legitimate" org, go work for DM or a university.
Daj#7482: We're kinda like the pirate underground of ML research (ideally)
Daj#7482: Different kinds of people are attracted to each
Aran Komatsuzaki#5714: I don't think that was mentioned anywhere explicitly.
Aran Komatsuzaki#5714: But understandable
Aran Komatsuzaki#5714: sentiment
Daj#7482: It's just the vibe I guess this place was founded on?
Aran Komatsuzaki#5714: I think any org starts like that tho lol
Daj#7482: Less the incremental grad students, more the lone wolf hackers
Daj#7482: You may be right
StellaAthena#3530: FWIW, I get the same vibe from here that I get from DEF CON (I am one of the organizers of the DEF CON AI group) and I like that fact.
Daj#7482: I think that's a very good thing then hah
thenightocean#6100: you can be non-profit but still have crypto donations links on the website, right 😉
Aran Komatsuzaki#5714: Yeah
admtiumm#1322: Joined the server.
bmk#1476: I don't think pirate underground-ness is really mutually exclusive with legitimacy?
bmk#1476: Also when I say legitimacy I'm using the word slightly tongue in cheek
bmk#1476: But basically what I mean is that by doing things that normal AI labs do, we show that the grassroots pirate hacker model can actually stand a chance against the DMs and FAIRs of the world, if you know what I mean
Aran Komatsuzaki#5714: Agreed
Daj#7482: I'm all on board with that, that doesn't mean it needs legal or financial backing though
bmk#1476: it doesnt but it does mean we need to put out research
Aran Komatsuzaki#5714: well, we can probably get financial backing in the sense of tfrc, though?
Daj#7482: I'm 110% on board with publishing papers and am happy to help anyone that wants to do so
bmk#1476: awesome
bmk#1476: I really want to challenge the idea of an AI lab being this well-funded, highly credentialed group of people under the wing of major companies, the problem with that idea being that it makes lots of people think "welp, I'm not in Google so i cant really do Real Research™"
Aran Komatsuzaki#5714: cool. same situation here.
Anon#4965: Joined the server.
Daj#7482: That's absolutely the spirit I would love to evoke @bmk
Daj#7482: If people put in the work that will be what happens, we just have to see if people put in the work
bmk#1476: i mean, we've been doing pretty well for ourselves so far, so there's reason to be optimistic! 😄
Daj#7482: Yes we have we've gotten so much further than I could have hoped
goolulusaurs#1571: The thing is though, we are relying on TFRC, at least for now.
DerekChia#4046: Joined the server.
goolulusaurs#1571: (Also, I'm back, been very busy with work/classes/moving)
Daj#7482: But yeah that is true. We should not kid ourselves _too_ much. Some research will always be compute bound, some will be bound by "top researchers want a salary", but there are tons of other low-hanging fruit that institutions can't or won't pluck
Daj#7482: (good to see you again goolu!)
bmk#1476: there are many smaller-scale things we can always do though
bmk#1476: individually we each have gpus and that adds up to quite a few, even if we cant do gpt-3 size projects
Daj#7482: That's what I mean, we shouldn't feel bound by our restrictions and try to overcome them with funding, but instead embrace them
Aran Komatsuzaki#5714: It's great to have this place, which is full of people used to scaling up research problems, with access to huge compute.
goolulusaurs#1571: A bunch of times I have heard people talk hopefully about the idea of distributed training using many different peoples individual GPUs over the internet, if we could figure that out, especially if its asynchronous, that would be really amazing.
Aran Komatsuzaki#5714: Independent researchers don't have the luxury of scaling up their problems.
bmk#1476: also we could try making a kickstarter and raising money to build a commodity gpu cluster using like 1080tis or something, that would be many times cheaper than V100s for the same compute and if we do clever programming we could make it work
goolulusaurs#1571: And there is the idea that having more constraints can really bring out people's creativity too.
Daj#7482: > A bunch of times I have heard people talk hopefully about the idea of distributed training using many different peoples individual GPUs over the internet, if we could figure that out, especially if its asynchronous, that would be really amazing.
@goolulusaurs this seems either impossibly hard due to latency, or an almost impossibly hard engineering problem. Either way, anyone that wants to look into it should do so
bmk#1476: yeah that seems unfeasible
Aran Komatsuzaki#5714: Yeah it works only for models like AlphaZero
bmk#1476: HOWEVER we could build one big gpu cluster using cheap used gpus
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/744228959490801664/unknown.png
Daj#7482: There are so many reasons why a custom Datacenter sounds like a nightmare
bmk#1476: used 1080ti is possibly the best in price per flop
Daj#7482: And better things we could invest into
Aran Komatsuzaki#5714: Why do you want to given we have TFRC?
bmk#1476: i was talking about if tfrc goes south
Aran Komatsuzaki#5714: oh ok
goolulusaurs#1571: > @goolulusaurs this seems either impossibly hard due to latency, or an almost impossibly hard engineering problem. Either way, anyone that wants to look into it should do so
@Daj Yeah, I've been thinking about it for a while but don't really have any good ideas either. I think it would require a very different kind of approach to ML than synchronized batch gradient descent.
Daj#7482: If TFRC goes south that's either the end of us or by that time we've successfully specialized into the kinds of work that doesn't need super computers
Daj#7482: Competing with big companies and governments along that axis is silly
Daj#7482: Not our competitive advantage
bmk#1476: well, used 1080tis is really damn cheap
bmk#1476: big companies a) usually dont like buying used hardware b) usually buy V100s or the other specialized ai chips
Daj#7482: What about the real estate? Failure rate? Maintenance?
Daj#7482: Used GPUs break constantly
Daj#7482: Very low lifetime on average
goolulusaurs#1571: I'm hoping the new nvidia gpus will finally have better flops/$ than the old 1080tis. I think if you are able to use the tensor cores and fp16, the 2080ti may already be better too.
Daj#7482: This is not the part of the market you want to try and exploit
bmk#1476: even with tpus we have to handle preemptions anyways
Daj#7482: Yea, but we don't burn hundreds of dollars each time one preempts
Daj#7482: This is wildly unfeasible and not what makes this place cool
Deleted User#0000: Joined the server.
Aran Komatsuzaki#5714: sup ethan
goolulusaurs#1571: Another thing is with systems like L2L it might make more sense to buy GPUs with lower amounts of VRAM. 2060s have half the flops of 2080tis, but are a quarter the price.
asderiner#0387: Joined the server.
Daj#7482: What makes this place cool is that there are a ton of really low hanging fruit that for structural reasons institutions aren't picking
Daj#7482: Also, this is just an alt cultural landscape people can opt in to
Aran Komatsuzaki#5714: i'm glad L2L is the common sense knowledge here.
bmk#1476: stringing together 2060s sounds even cooler because this is a direction that nobody else is pursuing afaict
bmk#1476: everyone else just buys V100s and calls it a day
Daj#7482: Also sorry all the new people you aren't getting your custom introductions it's hard to type on mobile :(
goolulusaurs#1571: yeah, only downside is you need more mobos and have higher power usage.
bmk#1476: you can use lane splitters
Daj#7482: Our own datacenter is such a wildly impractical idea I don't know why we're discussing this
bmk#1476: break a x16 lane into a bunch of x1s, like miners do
Daj#7482: If we had a million dollars I wouldn't pay a cent on physical hardware
bmk#1476: im sure there are ways to get around the bandwidth issues
bmk#1476: why?
bmk#1476: hardware is *so cheap*
bmk#1476: owning your own gpu pays back in like a month
Daj#7482: Assuming 100% use and ignoring cooling, electricity, depreciation, real estate...
Daj#7482: And man hours to maintain it all
Daj#7482: We don't even have enough man hours to get someone to keep our TPUs occupied
Daj#7482: There are much better things to spend that on
bmk#1476: ok scratch that maybe itll take 2-4 months instead of 1 month to pay back
Daj#7482: So we'd have to hire maintenance staff? Rent a building?
Daj#7482: Come on man this is silly
Daj#7482: The datacenter market is pretty efficient
Igor Barinov#6313: Joined the server.
Daj#7482: The only thing I would buy with money is man hours. Whether through managed hardware, assistance or salaries.
Daj#7482: We should pay money for man hours, not vice versa
goolulusaurs#1571: In my personal work I think there is a psychological element to it too, where if you are using the cloud there is the additional hurdle of spending more money each time you want to do an experiment, whereas if you have already spent the money on hardware you try to come up with more experiments to do to justify it. Or maybe that's just me.
Aran Komatsuzaki#5714: agreed. man hours are the thing most worth paying for.
bmk#1476: i mean as someone who is always itching for an excuse to build a new shiny computer im slightly biased but i dont think maintenance sounds *that* bad
Daj#7482: This isn't even important we don't have money to begin with lol
Aran Komatsuzaki#5714: that's why we need money!
goolulusaurs#1571: Plus it probably takes fewer man hours to keep gpus occupied vs TPUs.
Daj#7482: Hardware is liquid, people are not
Daj#7482: We don't even have GPT2 running with world class hardware yet
Daj#7482: We're not constrained by money right now
bmk#1476: also, even taking the "175 TFLOPS!!!1" figure that nvidia puts for v100 tensor core performance, 10x 2060 would still match that and cost $3k while the V100 would cost $7000-10000
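(Sanity check on the price-per-flop claim — all prices and peak-flop figures below are assumed 2020-era ballparks, not quotes:)
```python
# Back-of-envelope $/TFLOP comparison using fp32 peak and used-market prices.
cards = {
    "RTX 2060 (used)":    (300, 6.5),    # ($, fp32 TFLOPS)
    "GTX 1080 Ti (used)": (400, 11.3),
    "V100 16GB":          (8000, 15.7),
}
for name, (price, tflops) in cards.items():
    print(f"{name:>18}: ${price / tflops:,.0f} per fp32 TFLOP")
# Tensor cores narrow the gap for fp16 work, but on raw fp32 the consumer
# cards come out roughly an order of magnitude cheaper per flop.
```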
goolulusaurs#1571: I feel like the biggest hurdle for people contributing, myself included, is that programming the TPUs is hard.
bmk#1476: tbf if we had gpus we might already be up and running
bmk#1476: with deepspeed or something
Aran Komatsuzaki#5714: yeah the cost of developing with tpu is rarely taken into account
goolulusaurs#1571: Of course its great that we have access to them at all though.
Daj#7482: I don't disagree but I don't see any value proposition we can give that I want to support
Aran Komatsuzaki#5714: the opportunity cost due to having to use tpu should be massive
bmk#1476: yeah tpus are a huge liability
bmk#1476: the only reason we use them is because theyre free
bmk#1476: if tfrc ended we have no reason to keep using tpus
Aran Komatsuzaki#5714: that's why i'm not using tpu at all in my experiments.
Aran Komatsuzaki#5714: likewise for openai
Aran Komatsuzaki#5714: but free computes are great 😆
Daj#7482: Yes it's our big advantage. Trading several orders of magnitude of available compute and our financial independence for simpler programming seems like a bad deal to me
Daj#7482: If you disagree, I would suggest just working for or starting a startup hah
goolulusaurs#1571: tpus may be a liability relative to having the same computing power in gpus, but we don't, so they are a huge asset.
Daj#7482: Exactly, "lets just get the same amount of GPUs" is just not a feasible option or else we would take it haha
Daj#7482: We're like hardy microbes exploiting an inhospitable environment free of predators
ethan caballero#6044: Joined the server.
Semantic Aberration#3692: No, TPU v3s are a godsend. I think it's our job to utilize them efficiently.
Semantic Aberration#3692: I'm saying this as someone running inference (GPT2) & training (smaller models) on my own GPU
Semantic Aberration#3692: I will help with runs, soon, when I'm up to speed with your repo
bmk#1476: im just thinking of a backup plan in case tfrc falls through
Daj#7482: That's just not something I think is a high priority atm
Semantic Aberration#3692: @bmk Graphcore IPU looks like a cheap-ish alternative. Also you could plead for free compute at some cloud or foundation. Failing that, you can plead via crypto.
Semantic Aberration#3692: Yup, for now TPU v3 is the Way
Daj#7482: Crypto does not magically generate money
Aran Komatsuzaki#5714: given that they now have tpu v4, there'll be many more tpu v2/v3 available for other people to use, so i think i'm optimistic.
Daj#7482: It's not 2017 anymore
Daj#7482: lol
Semantic Aberration#3692: @Daj Ofc you need some legitimacy (e.g. a site, papers), to ask for crypto
Daj#7482: What would "the crypto" even be for, exactly?
bmk#1476: i think crypto is a red herring
bmk#1476: in this context
bmk#1476: like it's piratey af but
bmk#1476: it doesnt fundamentally change much for us vs paypal except for the cool factor
Daj#7482: I think most of the things being floated atm are a red herring
bmk#1476: you could say its
bmk#1476: bikeshedding
Daj#7482: ISSAAAAC
Daj#7482: nooooo
bmk#1476: damn the bot is broken
Daj#7482: This is why we're losing focus
Daj#7482: Without Isaac McHorse, EleutherAI is lost
goolulusaurs#1571: Isaac still has the LibreAI logo too
Semantic Aberration#3692: @Daj
> What would "the crypto" even be for, exactly?
I find it obvious that there are rich crypto holders who would like their own copy of GPT3 and an API to run it, and they would pay for hw to train and run it (on lambdalabs, TPU or whatever), if you have creds to prove that you are up to the task.
Semantic Aberration#3692: But we don't need it now
Daj#7482: That seems not obvious in the least to me that that would be true
bmk#1476: Yeah, there are a lot of rich and surprisingly generous crypto people
bmk#1476: See: pineapple fund
Semantic Aberration#3692: @Daj maybe I'm wrong
bmk#1476: see: SENS donations in 2017 vs every other year
Semantic Aberration#3692: @bmk It's not a question of generosity; they don't like the exclusivity of OpenAI's model, and $5M is not much.
Daj#7482: Sure weird things happen in crypto but it's just VC funding with extra (or I guess in practice, less) steps
bmk#1476: It's VC but with less strings attached too
Daj#7482: If any of you happen to know any crypto millionares be free to point them in our direction haha |
bmk#1476: I mean unlike VCs, if one is into crypto that provides a lot of evidence for being a fan of open stuff
Daj#7482: Or for being a Chinese asset lol
Semantic Aberration#3692: @Daj You don't think an OpenAI-like API (I can basically clone their frontend) for BTC (with, say, 50% margin) would get traction?
Daj#7482: Nope
Daj#7482: I don't think so in the slightest
bmk#1476: Once we have papers out there I think we can start soliciting donations
Semantic Aberration#3692: Hmm ok, maybe you have it right. I thought GPT3 was very cheap for its performance.
Daj#7482: Once we have a working GPT2+, maybe a paper or two, we can see
Daj#7482: (also offers often tend to follow interesting people rather than vice versa)
Daj#7482: > Hmm ok, maybe you have it right. I thought GPT3 was very cheap for its performance.
@Semantic Aberration I just see zero additive value from "We take this perfectly good system OA is offering at a reasonable price, but now it has B L O C K C H A I N"
Semantic Aberration#3692: @Daj Blockchain is a meme, I just could see how people don't like to use GPT3 under OpenAI's total supervision. Also it's a monopoly for now (though I think Microsoft and Google will soon release their analogous NL APIs)
Daj#7482: Don't get me wrong if people wanna send us BTC I'll take it
Daj#7482: haha
Sid#2121: Pinned a message.
Daj#7482: my model of the space says there are few people that would be motivated enough and have the cash that they would need to pay us of all people to do this of all things
aquajet#7800: Slightly related: https://twitter.com/danielgross/status/1294386107837296640?s=09
Ravna#1831: GPT3 is not cheap for its current value per se. The real value lies in its future versions. And this is why I don't like those tweets about "we need to scale it down using distillation or whatever". A more energy-efficient toy is still a toy. What really needs to be done is scaling it up until the limit and make it more useful than a toy in at least a few real-world scenarios.
Aran Komatsuzaki#5714: yeah i just don't understand down-scaling worshippers
Daj#7482: I think any claims about GPT3's ultimate economic value are premature |
Daj#7482: But I agree that future models are far more interesting/exciting/scary
Aran Komatsuzaki#5714: gpt-3 requires a bit of grad student descent at designing a good prompt, so i hope a robustness to perturbation in prompt will be addressed in gpt-4
goolulusaurs#1571: Judging from the last few years algorithms are getting more efficient much faster than hardware performance increases. Usually though more efficient algorithms just means that even bigger models end up being built.
Daj#7482: > gpt-3 requires a bit of grad student descent at designing a good prompt, so i hope a robustness to perturbation in prompt will be addressed in gpt-4
@Aran Komatsuzaki The grad student descent in form of prompt design is what companies built on GPT3 will be selling
Daj#7482: > Judging from the last few years algorithms are getting more efficient much faster than hardware performance increases. Usually though more efficient algorithms just means that even bigger models end up being built.
@goolulusaurs Hardware + Algorithms will always be faster than only one or the other
Aran Komatsuzaki#5714: @Daj yeah and will be hopefully rendered futile soon
Daj#7482: Ehh I'm not so sure
Daj#7482: I know it was meant as a meme but prompt design may be the first primitive steps into a kind of programming 3.0
Daj#7482: Where you have an extremely powerful general model and now just have to get it to do something useful
Ravna#1831: In the last few years algorithms are getting more efficient in SMALL models and SMALL data. There's no experiment done at a GPT-3 scale to compare old and new algorithms.
Daj#7482: Which is basically alignment restated
Aran Komatsuzaki#5714: i mean we want agi. it's reasonable to ask for human-level robustness to perturbation on prompt.
goolulusaurs#1571: I predict at some point "prompt design" will just loop back around into normal interpersonal skills.
aquajet#7800: Does the knowledge hallucination issue disappear at scale?
Aran Komatsuzaki#5714: well, this is what i meant.
Daj#7482: Humans are not robust to perturbation
Daj#7482: if you think so, you've never been an employer lol
Aran Komatsuzaki#5714: right, i'm just asking for human-level robustness |