bmk#1476: aight sounds good
Daj#7482: Got my GPT3 invite too yay
Noa Nabeshima#0290: me too 🙂
Noa Nabeshima#0290: We should try some fine-tuning when it becomes available
Daj#7482: I think we can ask for access to the finetuning API
Noa Nabeshima#0290: But what are we going to finetune it on that it hasn't meta-learned?
Daj#7482: That's the bigger question haha
Noa Nabeshima#0290: I feel like I need to make an API because I said I would
Noa Nabeshima#0290: but I have no good ideas
Noa Nabeshima#0290: It seems like a super cool thing to do
Noa Nabeshima#0290: Okaay, here's an idea
Sid#2121: i had no idea there was a finetuning API
Noa Nabeshima#0290: Show it examples of personal assistant requests and formal answers
Like remind me to do X tomorrow at Y ~> (Reminder "Do X at Y" 07-15-2020)
Sid#2121: Maybe we'll get our own GPT3 before we get access to it lol : )
Noa Nabeshima#0290: Maybe train it to ask for intermediary prompts for clarification
Daj#7482: Collecting a dataset like that will be the limiting factor I think
Noa Nabeshima#0290: Give it enough examples with some simple formal output
Daj#7482: It's a cool idea though
Noa Nabeshima#0290: And then actually create the assistant
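(A rough sketch of the few-shot setup being described; every request/command pair and the s-expression output syntax here are invented for illustration.)
```
# Hypothetical few-shot prompt: map free-form requests to a simple formal
# command an assistant backend could parse. All examples are made up.
PROMPT = """\
Request: remind me to call Alice tomorrow at 9am
Command: (Reminder "Call Alice" 2020-07-16 09:00)

Request: what's on my calendar next Monday?
Command: (CalendarQuery 2020-07-20)

Request: remind me to water the plants on Friday at noon
Command: (Reminder "Water the plants" 2020-07-17 12:00)

Request: {request}
Command:"""

def build_prompt(request: str) -> str:
    return PROMPT.format(request=request)
```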
Noa Nabeshima#0290: yeah
Daj#7482: Pretty sure the Google/Siri people aren't gonna share their data hah
Sid#2121: would it need finetuning to be a personal assistant tho ?
Noa Nabeshima#0290: @Sid I think so if you want it to interface with say google calendar and gmail.
I have faith it can be done with few examples
Noa Nabeshima#0290: Praise scaling
Sid#2121: i mean, i don't see why that needs gpt
Daj#7482: Do you need finetuning if few shot works so good?
Sid#2121: just voice to txt then parsing text
Noa Nabeshima#0290: Maybe not @Daj
Daj#7482: > just voice to txt then parsing text
@Sid But _big LM model goes blahblahblah_
Noa Nabeshima#0290: > i mean, i don't see why that needs gpt
@Sid I think voice assistants are bad at NLP
Sid#2121: with stuff like "set reminder at" I wouldn't want it to be fuzzily recognizing my commands lol
Noa Nabeshima#0290: The current ones are crazy large GOFAI as far as I know
Sid#2121: "hmmm maybe i should wake up earlier tomorrow" you say to yourself. *the next morning* GPT-PERSONAL-ASSISTANT "GET THE FUCK UP BOZO"
Sid#2121: i'm also just skeptical of personal assistants in general and alexa should be burned with fire
Noa Nabeshima#0290: I love Alexa
Daj#7482: tbh if GPT managed my sleep schedule it'd probably be healthier
Daj#7482: haha nah actually surprisingly my schedule is good lately
Noa Nabeshima#0290: Ooh I wonder if you could get it to consistently tell good children's stories, w/o profanity or messing up
Noa Nabeshima#0290: 'Pirate stories', 'Fairytales', 'Sci-Fi'
Daj#7482: Oooh interesting idea
Sid#2121: lmao, child management ideas
Sid#2121: gpt, put my kid to bed
Daj#7482: The new version of the TV babysitter
Daj#7482: What could go wrong?
Sid#2121: and suddenly the child grows up thinking horses have six legs and ice bounces
Daj#7482: I've heard kids that believe dumber things hah
Sid#2121: How can we make LM's understanding of physics better
Sid#2121: how do we make multimodal models
Daj#7482: _Deepmind has entered the chat_
Daj#7482: That's the million (/billion/trillion) dollar question
Sid#2121: Yeah. Let's get GPT-Neo working then maybe someone will pay us to look into it
Daj#7482: If only research actually worked like that lol
Sid#2121: I just want GAN + LM
Daj#7482: We gotta be our own research lab
Daj#7482: We have daddy Google
Sid#2121: papa goggle
Daj#7482: Realistically it's really astounding how far a bunch of totally unaffiliated and unorganized dudes on a discord got already just because Google had spare compute laying around
Sid#2121: to be honest, I am still so confused about why google is giving out so much compute. Do ppl really hate TPUs that much
Sid#2121: and I mean yeah. It's been like a week lol. The Pile is growing steadily, and we have a mesh model about to run
Daj#7482: Apparently they do? I dunno I asked the guy in charge that exact question and his answer was basically "Dunno it doesn't cost us much and we thought it'd be cool"
Sid#2121: it is cool papa goggle. It is cool.
Sid#2121: Like, don't they wanna use those TPUs for their world domination? I guess they're already pretty much done
Sid#2121: They probably have something better and secret stashed away hah
Daj#7482: I'm pretty sure the reason we don't get access to 2048s is because Google is using those internally
Daj#7482: Or renting them out
Daj#7482: e.g. GShard
Sid#2121: ok so
Sid#2121: i know you're studying, but GPTneo is ready to go
Sid#2121: where's the data, and what're the tpu deets again?
Daj#7482: GO BRR
Daj#7482: Posted the details in
Daj#7482: #gpt-neox-devs
Daj#7482: can repost
Sid#2121: and am i fine to delete the GPTNeo repo on your server and reclone
Daj#7482: Yes
JonathanFly#4262: Joined the server.
Daj#7482: Hey @JonathanFly ! Welcome to MIT License OpenAI! Check the channel topic for info on what we're doing and what you can do to help, if you want.
Daj#7482: Also, I think I follow you on Twitter hah
Sid#2121: 👋
Sid#2121: we're getting a proper webring going here, our new tfmesh nation is gathering steam
JonathanFly#4262: I haven't done anything with TPUs and won't be any help there, just checking in on progress
Sid#2121: we're just trying to run our first model, you picked a good time to come
Daj#7482: Love your work 👍
Daj#7482: Yea no worries, feel free to lurk of course
Daj#7482: Today might be the first running of our code...if we're lucky hah
JonathanFly#4262: Did you get anything close to a GPT-3 dataset? Seems like the hardest part
Sid#2121: in progress
Sid#2121: it's gonna take a lot of CPU
Sid#2121: if you do want to help, that part actually requires no tpu knowledge. But also this is a lurk friendly zone 🙂
Daj#7482: It's tedious but surprisingly doable to get data like that
Sid#2121: ye, the hardest part by far (so far) has been building the model
Daj#7482: Or rather, debugging the TPUs haha
asparagui#6391: do you all have tpu access?
Sid#2121: they're @Daj 's tpus
Daj#7482: Yup we're all sharing a VM
asparagui#6391: what's the size/limits/time they gave you?
Daj#7482: I was one of the very first people in TFRC, have met them personally, etc, so I think they give me a bit of special treatment. I've had a preemptible v3-2048 and a handful of v3-8s basically whenever I ask for it
asparagui#6391: ahh kk
Sid#2121: do you think with a poc we could get a non-preemptible 1024 / 2048 or is that unheard of
Daj#7482: Unheard of from what I know. When I talked to Zak (the guy in charge), he basically said the preemptibles are almost free for them because they rent the big TPUs out to big companies on yearly contracts
Daj#7482: Instead of per hour
Daj#7482: So when they're not in use it's "free" for us to use them
Daj#7482: But who knows, weirder things have happened
asparagui#6391: what does your preemptible workflow look like
Sid#2121: what kinda time can you get 1024s for ?
Sid#2121: We're using tpunicorn from @shawwn it's super nice
Daj#7482: I haven't really done empirical tests on pod availabilities
Daj#7482: I just know 2048 is basically never available, 512 is almost always available, and I've never seen 128 not available
asparagui#6391: i guess my question is how do you share state between jobs
asparagui#6391: eg tpu1 --> working --> prempted --> start tpu2 --> checkpoint?
Daj#7482: Yup, checkpoints
Daj#7482: I mean, we haven't even run models long enough for that, but I wouldn't know of any more clever way of doing that
asparagui#6391: that's the only way i know
asparagui#6391: did a workflow where had a kube controller to spin up a new tpu with the job when the first one died
asparagui#6391: will look at this software
asparagui#6391: eg tpunicorn
Sid#2121: > eg tpu1 --> working --> prempted --> start tpu2 --> checkpoint?
@asparagui this would be a nice thing to get sorted
Daj#7482: Pretty sure that's what `pu babysit` is for
Daj#7482: (from tpunicorn)
Sid#2121: oh really?
Sid#2121: i was meaning to ask how to use that
Daj#7482: It basically starts the worker process, checks if a TPU is preempted, if so it kills the process, recreates the TPU, then once it's up runs the command again
Daj#7482: iirc
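(A rough sketch of that loop, not tpunicorn's actual source; `tpu_is_preempted` and `recreate_tpu` are hypothetical stand-ins for the underlying gcloud calls.)
```
import subprocess
import time

def babysit(tpu_name: str, train_cmd: str):
    """Keep a training command alive across TPU preemptions (sketch)."""
    while True:
        proc = subprocess.Popen(train_cmd, shell=True)
        while proc.poll() is None:          # command still running
            if tpu_is_preempted(tpu_name):  # hypothetical status check
                proc.kill()
                recreate_tpu(tpu_name)      # hypothetical delete+recreate
                break                       # relaunch; training resumes
            time.sleep(30)                  # from the latest checkpoint
        else:
            return                          # command exited on its own
```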
Sid#2121: We'll need to sort out our broken checkpoint loading before we use that lol
Daj#7482: Oh is it officially broken? I thought it was just because it errored?
Sid#2121: well i think it's if it gets cut off midway through saving a ckpt maybe?
Sid#2121: I'm not 100%
Daj#7482: That's worth testing
psgeorge#6388: Joined the server.
bmk#1476: hello!
Sid#2121: @Daj we should link back to tpupod in a #communities channel to make the webring complete
Daj#7482: Sure one sec
Daj#7482: It is done
Daj#7482: Someone post the link there with a little description pls
bmk#1476: do other discord webrings exist?
bmk#1476: or are we the first to bring the glory of the webring to discord
Sid#2121: 🤷
Sid#2121: what other AI discords are there?
bmk#1476: not sure
bmk#1476: 2min papers?
bmk#1476: i'm in the discord but i dont really visit
Sid#2121: i thought i remember someone saying there was an AI dungeon discord?
Sid#2121: or was it a slack
bmk#1476: ¯\_(ツ)_/¯
Daj#7482: We are the OG Discord AI webring
Daj#7482: This is our claim to fame
Sid#2121: shawwn says ```also IMO rename #webring to #communities and maybe stick it under its own category```
bmk#1476: and call the category 1997 webring for completeness
Skylion#0368: Joined the server.
Sid#2121: Hey @Skylion !
Sid#2121: welcome welcome
bmk#1476: hey skylion!
Skylion#0368: Anyway, you should focus on trying to get similar performance with more efficient transformer archs
Sid#2121: We could use all the advice we can get
Skylion#0368: There are ones that claim to have O(N) attention instead of O(N^2)
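(For context, a minimal numpy sketch of vanilla single-head attention, showing where the O(N^2) comes from: the full N-by-N score matrix.)
```
import numpy as np

N, d = 1024, 64                        # sequence length, head dimension
Q, K, V = (np.random.randn(N, d) for _ in range(3))

scores = Q @ K.T / np.sqrt(d)          # (N, N): N^2 memory and compute
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ V                      # (N, d)
# O(N) / linearized variants avoid materializing the (N, N) matrix.
```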
Sid#2121: got any links to papers?
bmk#1476: is this a continuation of a convo elsewhere?
Sid#2121: yeah sorry
Sid#2121: @Skylion thinks we won't be able to replicate GPT-3
bmk#1476: why not?
bmk#1476: I'm curious
Sid#2121: hasn't got to that part yet
bmk#1476: we have about 70% of the tech that OA used
Skylion#0368: Reformer for instance, but I think that one is out of date.
bmk#1476: currently we're trying to figure the last 30%, mostly gpipe
bmk#1476: so we've settled on local attention + regular attention layers here and there
Sid#2121: "last 30%" might be optimistic
bmk#1476: last 50%
Sid#2121: altho I haven't looked into GPipe *at all* so, i don't know how complex it is
Skylion#0368: GPIPE is atrocious
Sid#2121: need to read the paper
bmk#1476: the code or the idea?
bmk#1476: i can understand if the code is bad
Skylion#0368: All the models are defined in YAML if I recall
Skylion#0368: It's rough
bmk#1476: we dont have to use their exact code tied into lingvo though
bmk#1476: also if we dont have that many sections we can maybe even do without gpipe
Skylion#0368: https://github.com/tensorflow/lingvo/blob/master/lingvo/core/gpipe.py
Skylion#0368: Oh okay
bmk#1476: it only becomes a real problem when every layer is on a different device
Sid#2121: @Skylion do you want an invite to our repo? it'd be super valuable to have someone who knows lots about this stuff looking over our code
bmk#1476: ^
Sid#2121: no worries if you're busy tho
Skylion#0368: Sure why not
bmk#1476: we need all the help we can get, haha
Skylion#0368: Busy, but I might be able to take look
Skylion#0368: Skylion007 is the Github
Sid#2121: @Daj can you do the honours
Sid#2121: (I think he's asleep so it might be a while)
Sid#2121: but - you never said - why do you think it can't be done?
Sid#2121: and *shouldn't* lol.
bmk#1476: also do you have the code used to process open webtext corpus still handy? We want to collect the new WebText2 that's basically the same thing as the first one but with more links
Daj#7482: Still awake one sec
Daj#7482: Sent
Daj#7482: I will fix the webring tomorrow, mobile discord isn't letting me move things around
bmk#1476: for some reason i have to make the batch size *really small* to fit it on the 512 ;-;
bmk#1476: even with data parallel
Daj#7482: For the record I consider our chances of fully replicating GPT3 to not be top quartile either but it doesn't matter it's fun and educative and we'll make something cool
Sid#2121: which confiiiiiig
Sid#2121: ^^ yeah
bmk#1476: ```{
"n_head": 32,
"encoder_path": "gs://datasets_storage_1/models/encoder",
"n_vocab": 50257,
"embed_dropout": 0.1,
"lr": 0.00025,
"warmup_steps": 0,
"beta1": 0.9,
"beta2": 0.98,
"epsilon": 1e-9,
"opt_name": "adam",
"weight_decay": 0.00,
"train_batch_size": 64,
"attn_dropout": 0.1,
"train_steps": 10000, |
"eval_steps": 0,
"max_steps": 500000,
"data_path": "gs://neo-datasets/bundestag",
"res_dropout": 0.1,
"predict_batch_size": 1,
"eval_batch_size": 32,
"iterations": 500,
"n_embd": 2048,
"datasets": [["bundestag_*.tfrecords", "", 10, "random_sample", 1.0]],
"data_path_": "gs://neo-datasets/openwebtext-fixed/",
"datasets_": [["openwebtext_*.tfrecords", "", 10, "chunks", 1.0]],
"model": "GPT2",
"model_path": "gs://neo-models/NEO_TEST_1",
"n_ctx": 128,
"predict_path": "logs/predictions.txt",
"n_layer": 32,
"scale_by_depth": true,
"scale_by_in": true,
"fixed_attn_block_size": 128,
"layer_offset": 16, |
"local": true,
"mesh_shape": "x:16,y:32",
"layout": "embd:y, heads:y, batch:x"
}
```
Sid#2121: that breaks? or that's the best we have running
bmk#1476: data parallel 16
bmk#1476: er
bmk#1476: i'm about to find out
Sid#2121: cool
Daj#7482: This is unsurprising, it's bigger than 1.5B right?
Sid#2121: n_ctx is only 128
bmk#1476: marginally
bmk#1476: wait i thought it was 1024 ;-;
Daj#7482: 1.5B plus Adam is way too big for a single core by default
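(A back-of-the-envelope count for the config above, using the standard ~12 * n_layer * n_embd^2 approximation for transformer blocks plus embeddings:)
```
n_layer, n_embd, n_vocab, n_ctx = 32, 2048, 50257, 128

block_params = 12 * n_layer * n_embd ** 2   # attention + MLP weights
embed_params = (n_vocab + n_ctx) * n_embd   # wte + wpe
print(f"{(block_params + embed_params) / 1e9:.2f}B")  # ~1.71B
```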
bmk#1476: can someone add adafactor
Sid#2121: shall i push an adafactor opt? it's a simple change
Sid#2121: ah
bmk#1476: yeah go ahead
Sid#2121: yeah i'll do it
Daj#7482: And I'm sure the model parallelism is adding some overhead somewhere
bmk#1476: @Daj how large an adafactor batch size can you fit on 512?
bmk#1476: pre-meshtf
Skylion#0368: Don't use Adam
Sid#2121: yeah we should check the reshapes
Daj#7482: 512, one per core
Skylion#0368: Use the other optimizer
bmk#1476: 512!??
Daj#7482: That was how I trained, with adafactor
Skylion#0368: Yeah
Skylion#0368: Adafactor != Adam
Sid#2121: ok ok let me add adafactor
Daj#7482: Why not skylion? OA used adam
Sid#2121: didn't OA use Adam tho ?
Skylion#0368: Nah, they used Adafactor
Sid#2121: really??
Skylion#0368: Yeah
Daj#7482: This is news to me
Daj#7482: Their paper says adam
Daj#7482: And my email exchange with them
Skylion#0368: It was my understanding they used Adam until they got to Big and Very Big models
Daj#7482: Don't get me wrong it's good news if true lol
Skylion#0368: but I could be misremembering.
Skylion#0368: Like they used Adam for medium and small
Daj#7482: Is this GPT2 or 3?
bmk#1476: > To train all versions of GPT-3, we use Adam withβ1= 0.9,β2= 0.95
Skylion#0368: GPT-2
Daj#7482: Interesting
Skylion#0368: Ah, for GPT-3 they probably used Adam because it's only a 2X memory penalty
Skylion#0368: who cares when they have that many GPUs 😛
Daj#7482: Guess that excludes another possible source of failure for my old model
bmk#1476: how do we know they used adafactor?
bmk#1476: the detail doesnt appear in the paper
Daj#7482: Yea I just used it because it worked, but I've never heard of GPT2 being trained with adafactor
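(The rough memory arithmetic behind this exchange: Adam keeps two fp32 moment tensors per parameter, while Adafactor factors the second moment into row and column statistics. Ballpark numbers only.)
```
params = 1.7e9                      # roughly the model above

adam_state = params * 2 * 4         # m and v, 4 bytes each: ~13.6 GB
# Adafactor keeps n + m values per (n, m) weight instead of n * m:
n = m = 2048                        # typical square block in this model
adafactor_state = params * 4 * (n + m) / (n * m)
print(f"{adam_state / 1e9:.1f} GB vs ~{adafactor_state / 1e6:.1f} MB")
```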
Sid#2121: added adafactor
bmk#1476: nice i'll try it next
Sid#2121: also sanity check - is this how to use decay in adafactor? i just copied from adam https://cdn.discordapp.com/attachments/729741769738158194/732697689015320736/Screenshot_2020-07-14_at_22.38.05.png
bmk#1476: o.O
Sid#2121: we'll also need to include ada_epsilon1 and ada_epsilon2 in any configs
bmk#1476: what
Sid#2121: what
bmk#1476: this does not look right at all why did the original code look like this
Sid#2121: the decay rate setting??
bmk#1476: yeah
bmk#1476: i am very confused
Sid#2121: I'm pretty sure that was in Daj's code
Daj#7482: Because I was experimentig I think?
Daj#7482: I don't remember
Daj#7482: Lol
bmk#1476: this does not look right
Daj#7482: Then please fix thx
Sid#2121: for both opts??
bmk#1476: i dont know if it'll be more right if we change it tho
Daj#7482: I haven't looked at this code in detail in like a year
Sid#2121: just the weight decay setting right
bmk#1476: dont change anything yet
bmk#1476: i'm not confident that it's *more* correct if we change it
Daj#7482: I'm pretty confident in my own fallibility hah
bmk#1476: whatever it's currently set to 0 so nothing will change
bmk#1476: ``` File "gpt2_mesh.py", line 532
print(f"N TRAINABLE VARS: {total_parameters:,}")
^
SyntaxError: invalid syntax
```
bmk#1476: @Sid
Sid#2121: Huh O.o one sec not at a computer
bmk#1476: i think this python just doesnt support fstrings
Sid#2121: I thought I pushed that earlier and it worked fine
bmk#1476: yeah its 3.5
Sid#2121: Ah ok I’ll take it out
bmk#1476: fstrings bad
Sid#2121: Fstrings good
Sid#2121: Python 3.5 bad
Sid#2121: this should work right ``` print('{:,}'.format(total_parameters))
```
bmk#1476: batchsize 1 per slice does not train under adam but it does under adafactor
bmk#1476: now to test larger batches
Sid#2121: coooooool
bmk#1476: shoot i can only fit a batch of 1 per slice
bmk#1476: hmm
bmk#1476: we need to optimise
Sid#2121: what's the layout?
Sid#2121: ^yeah
bmk#1476: heads:x,embd:x,batch:y
bmk#1476: x:32,y:16
Sid#2121: wait so our batch size is 16??
Sid#2121: am i misunderstanding
bmk#1476: yes, 32x smaller than the 512 we should be getting
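(The arithmetic behind "32x smaller": with layout `batch:y` on mesh `x:32,y:16`, the batch is only split 16 ways, not 512.)
```
mesh = {"x": 32, "y": 16}       # 512 cores total
per_slice_batch = 1             # all that currently fits

global_batch = per_slice_batch * mesh["y"]                    # 16
pure_data_parallel = per_slice_batch * mesh["x"] * mesh["y"]  # 512
print(global_batch, pure_data_parallel)                       # 32x gap
```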
Sid#2121: i thought you had bigger batches earlier
Sid#2121: @bmk remember when i said we could add a print statement if it's doing the inefficient reshapes
Sid#2121: we shld do that
bmk#1476: Sure go ahead
bmk#1476: We're having ooms though
bmk#1476: So some way of visualizing how memory is being used would be nice
bmk#1476: full attention is broken
bmk#1476: we need to fix that
Sid#2121: @bmk you still testing? which tpu are you using
Sid#2121: ah actually my time would be better spent on data stuff rn
bmk#1476: nope
bmk#1476: im done
bmk#1476: cant get more than a few dozen batch size ;-;
bmk#1476: what the hell
bmk#1476: i dont think daj used bf16 for the original gpt2 either
bmk#1476: we really need to do some serious optimization
bmk#1476: like, we cant even do the 512 batch size
bmk#1476: And this isn't even full attention
bmk#1476: This is local attention
bmk#1476: What are we doing wrong
Sid#2121: I wish i knew :/ I mean, the main changes are to the attention right?
bmk#1476: And the whole mtf
Sid#2121: well yeah
bmk#1476: There's no way local attention uses *more* memory
bmk#1476: And slicing it up, worst case, should have no effect
Sid#2121: i'm giving mtf the benefit of the doubt tho and assuming their code doesn't just increase memory usage by a ton but who knows
bmk#1476: Doing the 512 batch with data parallel only doesn't run, it only ooms
Sid#2121: I really think this is the point we get in touch with TF-M people
Sid#2121: we've built a half-functioning model and i'm sure they'd be happy to have a fully functioning gpt made with their library to show off
Sid#2121: it might just be that we're doing something we shouldn't be doing with tf-m, and they'll spot it right away
Sid#2121: like, they have about three models to show made with tf-mesh
bmk#1476: yeah ok
bmk#1476: who here has the charisma necessary
bmk#1476: "Noam Shazeer" sounds like the main person we need to contact
bmk#1476: First author on the paper and active committer on the repo
Sid#2121: ye
Sid#2121: i found a few of the people on twitter the other day hah
Sid#2121: I can write a message tomorrow
bmk#1476: Alright sounds good
bmk#1476: Hol up
bmk#1476: Shazeer is one of the authors on Gshard
bmk#1476: Maybe we can get him to hook us up with Gshard too
Sid#2121: shazeeeeer
Sid#2121: he's our man
bmk#1476: Also it appears he has near legendary status
bmk#1476: Also he seems to be an adherent of the moar params camp
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/732731897506955334/IMG_20200714_165428.jpg
bmk#1476: Even understanding how to use mtf is difficult, I imagine creating mtf in the first place must have been the work of an ascended TF master
Sid#2121: i feel like there's a galaxy brain meme here somewhere but i'm too tired to open PS
Sid#2121: but yeah, seems like he'd be interested
Sid#2121: Colin Raffel also seems to get around the various Moar Params models
Sid#2121: also grring at that image since mesh tensorflow is only increasing our memory consumption rn
Sid#2121: Oh wow, Noam Shazeer co-authored Attention is all you need
Sid#2121: didn't realize
Sid#2121: seems like a big deal
zitterbewegung#4846: does anyone feel like more params = better
zitterbewegung#4846: do you think there will be diminishing returns?
bmk#1476: the modus operandi of libreai *is* moar params = moar better
bmk#1476: welcome to actually open™ ai!
Sid#2121: Hey @Key ! let us know if you want a task 🙂 there's plenty to do. And welcome to something something we're more open than OA something tagline
arfa#0882: What kinds of tasks do you have?
Sid#2121: Check out #documentation
Sid#2121: Lots of data gathering to do, uhh, if you want something a bit in depth we need to figure out how to do sampling from a model
Sid#2121: (Also the google doc in the channel description will get you all the useful info )
Sid#2121: That’s what I meant to refer you to when I linked to documentation.
arfa#0882: Are the cloud buckets really that expensive? Our tensorfork buckets are several TB each
Sid#2121: Data cleaning (i.e de duplication, pdf cleaning) will also be an important step. We’re trying to gather a Books3 dataset from questionable-ish origins that will be one of our main competitive advantages over OA
Sid#2121: Hmmm idk. We’re open to any suggestions but we need a place to store / process all the data before we put it onto the bucket, so we have a hetzner. Price calculations / storage etc are things I haven’t really been dealing w though.
Sid#2121: If you can turn any of the sources in #data-sources into #datascripts , that’d also be awesome
psgeorge#6388: Thought about contacting https://www.reddit.com/user/-Archivist? He has all the data you'll need I'd wager.
archivus#7382: Joined the server.
archivus#7382: here to monitor progress 🙂
psgeorge#6388: > Hmmm idk. We’re open to any suggestions but we need a place to store / process all the data before we put it onto the bucket, so we have a hetzner. Price calculations / storage etc are things I haven’t really been dealing w though.
@Sid Storing & Processing on a hetzner because data processing on gcloud is difficult or expensive?
archivus#7382: v expensive on buckets - you're charged on data egress
psgeorge#6388: > Thought about contacting https://www.reddit.com/user/-Archivist? He has all the data you'll need I'd wager.
Best way to reach him is probably a DM on reddit, or he's part of an active discord somewhere.
psgeorge#6388: He probably has the best access to internet data (from all sources) in the world.
psgeorge#6388: > v expensive on buckets - you're charged on data egress
@archivus ah okay. We've got a load of Google Cloud credits so haven't looked much elsewhere.
Sid#2121: @psgeorge who is this person. Thanks for the link! Will take a look later
Sid#2121: @archivus 👋 welcome
psgeorge#6388: The Archivist is someone dedicated to archiving the entirety of the internet. He believes it's super important that someone is keeping track of everything that's happening. He has a load of his own stuff crawling the web, but he also has a load greyhat people donating huge data dumps to him. He's currently somewhat pursued by law enforcement... but definitely the guy to talk to about data for training something like this
psgeorge#6388: if you're okay with non-kosher
Sid#2121: Oh this is incredibly up our street. We have a whole secret channel dedicated to non-kosher 😎
Sid#2121: Somewhat pursued by law enforcement only sweetens the deal. How come? Is there somewhere I can read about him?
psgeorge#6388: @Sid from time to time he's actively trying to proliferate his data archives because of (rightful) paranoia that he'll get caught & his work will have been for nothing
psgeorge#6388: He has an ethical code though e.g. wants to give it to people who will Do Good with it
Sid#2121: That’s awesome. I wonder if we can get him to join the server. I’ll try and reach out later today!
Daj#7482: Interesting discussions and nice to see this many interesting people around 👍
Daj#7482: Today I have to really study and do some traveling so I'll be keeping Discord closed, ping me if someone needs me
aster#3007: Joined the server.
Deleted User#0000: Joined the server.
Sid#2121: 👋👋👋
kindiana#1016: Joined the server.
Sid#2121: Hey @kindiana
Sid#2121: Welcome to The AGI Wranglers! Check the channel description and resources channels for info and don't hesitate to ask if you have questions 🙂
Sid#2121: your tokenization sounds super interesting
Sid#2121: when can we expect paper heh
kindiana#1016: thanks! AAAI's deadline is sept 9 so hopefully before then 😛 , but I'm happy to discuss it here if y'all promise to not shout it from the rooftops 🙂
kindiana#1016: hint: attention is _really_ all you need
Sid#2121: *oh*
Sid#2121: do go on
Sid#2121: we have a secret channel if you'd prefer
Sid#2121: I can't invite you, i'd have to get @Daj to do it
Daj#7482: Who am I inviting where?
Sid#2121: @kindiana is working on a new tokenization method, which i think could help us out a lot
Daj#7482: Ooooh very underappreciated area
Sid#2121: but doesn't really want to discuss it in public, since he's writing a paper
Sid#2121: i figured we could use yarr harr or make a new private channel
Sid#2121: (since this place is getting quite busy anyway)
Daj#7482: Sure!
Sid#2121: I posted @kindiana's gist up in #links
Daj#7482: Give me like 5 minutes
Sid#2121: sorry to distract!
Daj#7482: Eh it's fine, I'm probably as prepared as I'm gonna be anyways
Sid#2121: when's the exam
Daj#7482: Tomorrow
Daj#7482: It's an easy test, I just didn't do shit for it hah
bmk#1476: Did you send the email to mesh tf yet @Sid
Sid#2121: nah sorry i got sucked into stylegan debugging but i can start writing it now
Sid#2121: do we even have an address to send it to?
Sid#2121: summary of what we want to say: ```- we're an independent research group trying to build GPT-3 + variants
- using tfmesh library but having some problems with memory consumption
- ask about splitting layers
- do we ask if they can glance at our code?```
Sid#2121: pls add
bmk#1476: noam@google.com
Sid#2121: oh nice
Sid#2121: should i cc in some others
bmk#1476: Sure
Sid#2121: where did you find that?
bmk#1476: The paper
bmk#1476: Gshard first author: lepikhin@google.com
bmk#1476: Also try begging them for Gshard
Sid#2121: i keep getting gshard and gpipe confused, hah
Sid#2121: i'd also be interested *which* dimensions we should and can split exactly, and what's the best practice for our wpe / wte stuff
Sid#2121: but that's i guess not a question for them
Sid#2121: I found a good talk by noam in my insomnia stage last night i'll post it up in a bit
bmk#1476: hyouklee@google.com second author on Gshard, author on mtf
Sid#2121: it's basically a rehashing of the tfm talk but with some extras
bmk#1476: ylc@google.com mtf second author
bmk#1476: Your call which of these we want to CC
Sid#2121: adarob@google.com & craffel@gmail.com are active tf-mesh maintainers but i don't know if we want to just cc everyone lol
Sid#2121: i'd also feel more comfortable having more concrete questions
Sid#2121: before we ask about memory consumption we should use the cloud tpu profiling tool to see if we can find the problem
bmk#1476: Hmm
bmk#1476: Also I just saw the thing about Archivist should we try asking?
Sid#2121: oh for sure
Sid#2121: I don't have a reddit acct
Sid#2121: if you have one that you post on it might be better if you send a message
Sid#2121: but happy to help write it
bmk#1476: He has a discord
Sid#2121: oh ?
Sid#2121: oh man we should update the kanban :/ lots of new tasks popping up
Sid#2121: did you see the thing i posted in #links / can you see the new hidden channel?
bmk#1476: Please do
bmk#1476: And yes
Sid#2121: so ```- test MOE model, devise architecture.
- reach out to TFM authors
- reach out to Archivist``` to add to kanban
Sid#2121: ```- test memory consumption with ctpu profiler```
bmk#1476: Yeah
bmk#1476: I didn't even know there was a profiler how do we use it
Sid#2121: ( if you test on a colab tpu - it gives you tensor details when OOM happens. I didn't take a screenshot but last night i did some testing and think it was mostly the reshape op on L455? but need to confirm)
Sid#2121: uhhh
Sid#2121: also looked into this last night
Sid#2121: there's some command line profiler which i didn't really understand how to use - this hook seems easier https://cdn.discordapp.com/attachments/729741769738158194/732968185346654219/Screenshot_2020-07-15_at_03.15.21.png
bmk#1476: Huh
bmk#1476: Well, eliminating that reshape doesn't seem too hard
Sid#2121: *scrolling through the screenshots i took at sleeping hours last night to see if i documented*
Sid#2121: lol no of course not
Sid#2121: i'll try to recreate
bmk#1476: Ok I pushed commit eliminating that reshapr
Sid#2121: (not 100% that was it but i think if we can eliminate any reshapes possible, that's good)
Sid#2121: so, nice
bmk#1476: This reshape was in original GPT2 as well tho
Sid#2121: yeah but reshapes are different in TFM
bmk#1476: So why is it using so much memory for us
bmk#1476: Oh
Sid#2121: there's a whole section in the paper on it i thought we were clear on this
bmk#1476: Ok so reshape bad
Sid#2121: i mean, not *always*
Sid#2121: but i think best to avoid
bmk#1476: Ctrl+F reshape to eliminate
Sid#2121: as far as i understand tho it should be a communication bandwidth problem rather than a memory one but 🤷
Sid#2121: if you see the reshape bad pic i just tagged you in it also mentions changing dimension names can change the layout
Sid#2121: so like, i *think* but i'm not sure
Sid#2121: if you have your input with the batch dimension
Sid#2121: and then at some point you rename it to batch_2
Sid#2121: and you tell it to split along batch, it won't split along batch_2
Sid#2121: and every time the input goes to batch_2, it's going to be broadcast / replicated to every core, and destroy any memory gains we might have had from splitting tensors
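(A toy illustration of that failure mode in plain Python, not mtf itself: splitting is decided by looking the dimension *name* up in the layout, so a rename that isn't listed falls back to full replication.)
```
layout = {"batch": "x"}                  # from a rule like "batch:x"
mesh = {"x": 16, "y": 32}

def shard_factor(dim_name: str) -> int:
    axis = layout.get(dim_name)          # no rule -> replicated
    return mesh[axis] if axis else 1

tensor_bytes = 512 * 128 * 2048 * 4      # [batch, seq, embd], fp32
print(tensor_bytes // shard_factor("batch"))    # sharded 16 ways
print(tensor_bytes // shard_factor("batch_2"))  # full copy per core!
```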
bmk#1476: here's the problem
Sid#2121: so we need to avoid that
bmk#1476: first of all that doesnt explain memory use, only slower speed
Sid#2121: ^ see above
bmk#1476: this should still use strictly less memory
Sid#2121: re memory use
bmk#1476: second, you *cannot* have two dims of same name in a tensor
bmk#1476: so if you want an embd -> embd, you have to *rename* one of the input or output
Sid#2121: no but i'm saying
Sid#2121: we have our input
bmk#1476: you cant have an embd x embd matrix
Sid#2121: with [batch, seq, i forget]
Sid#2121: and then at some point, we do
Sid#2121: dim_combined_batch_sequence = mtf.Dimension('combined_batch_sequence', batch_dim.size * sequence_dim.size)
Sid#2121: if we just split along batch
Sid#2121: and not combined_batch_sequence
Sid#2121: when that reshape op happens, that tensor is going to be replicated on every core
bmk#1476: that was a vestige of the old code
Sid#2121: hence causing oom, and destroying memory gains
Sid#2121: ah i haven't seen update
bmk#1476: but this for example is absolutely necessary:
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/732971129521701044/unknown.png
Sid#2121: but that's the *general case* i think for where ooms might come from
bmk#1476: tmp_channels is necessary
bmk#1476: i think that still doesnt explain it
bmk#1476: that only uses more inter-tpu bandwdith
bmk#1476: so it's slower, yes
Sid#2121: no, that's not right
bmk#1476: but should never use more memory
Sid#2121: because combined_batch_sequence then isn't being split
bmk#1476: so what?
Sid#2121: and will be stored on every core?
Sid#2121: taking up more memory?
bmk#1476: it was never being split inthe original either
Sid#2121: batch was
Sid#2121: so if we're splitting along batch then reshaping to combined_batch_sequence
Sid#2121: that *whole* tensor is going to be on every core
bmk#1476: oh i understand
Sid#2121: am i making sense? i feel like my point isn't getting across
bmk#1476: ok let's try it with that removed
Sid#2121: I can't replicate the oom where i got the nice printout unfortunately but yeah
Sid#2121: that's my hypothesis
bmk#1476: are there any other reshapes causing a similar amount of trouble
Sid#2121: idk man, can't get the printout again ;_; as far as i remember from debugging, it was that one, and then some bfloat thing which i fixed
Sid#2121: Idk if having a stray bfloat would affect the memory, or if it was just that op that happened to be bfloat
bmk#1476: bfloat thing?
bmk#1476: oh
bmk#1476: what would it take to convert everything to bfloat?
Sid#2121: there's a commit
Sid#2121: i mean, i think theoretically not a lot
Sid#2121: just anywhere we specify dtype change to bfloat and i guess change the input?
bmk#1476: *theoretically* this is tpus everything is hard
Sid#2121: in toy_model.py they just set a master dtype and use that
Sid#2121: so i'd say we do that
bmk#1476: ok
Sid#2121: but if that bfloat didn't break things, presumably that means mixed precision is ok
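(A sketch of that master-dtype pattern; the helper is hypothetical, and only the mtf.get_variable call mirrors the snippet quoted below, assuming the repo's TF1-style environment.)
```
import mesh_tensorflow as mtf
import tensorflow as tf

MASTER_DTYPE = tf.bfloat16   # one switch; flip to tf.float32 to debug

def bias_variable(mesh, nf):
    # hypothetical helper: initializer and variable dtype always agree
    return mtf.get_variable(
        mesh, "b", [nf],
        initializer=tf.constant_initializer(0, dtype=MASTER_DTYPE),
        dtype=MASTER_DTYPE)
```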
Daj#7482: I would be wholly unsurprised if a single bfloat caused some kind of weird padding or intermediate dtype conversions mid graph
Daj#7482: I have no proof of this, just a hunch
bmk#1476: oh right speaking of padding that reminds me
bmk#1476: right now heads are 64 per
bmk#1476: we probably want to make each head 128
Daj#7482: Ah yeah, TPUs multiply in batches of 128
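(The head-size arithmetic, 128 being the MXU tile width:)
```
n_embd = 2048
print(n_embd // 32)   # 32 heads -> 64 per head: pads to 128, wastes half
print(n_embd // 16)   # 16 heads -> 128 per head: fills the tile exactly
```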
Sid#2121: So i changed this on line 256 ``` b = mtf.get_variable(x.mesh, 'b', [nf], initializer=tf.constant_initializer(0, dtype=tf.bfloat16), dtype=dt)
``` to tf.float32
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/732974205687300126/unknown.png
bmk#1476: these too?
bmk#1476: why did you change
Daj#7482: lol the tensor is 32 but initializer is 16?
Daj#7482: Seems like a bug/typo
Sid#2121: huh? i don't think i changed that
bmk#1476: o.O
Sid#2121: from what to what
Sid#2121: what was it before
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/732974993259888731/unknown.png
Sid#2121: I.... wow i must have been half asleep
Sid#2121: i must have just took it out idk, i literally don't remember that
bmk#1476: haha its ok
Sid#2121: wait, when was that commit lol? it wasn't yesterday
Sid#2121: thought i was going mad
Sid#2121: i may have done that several days ago, yeah.
bmk#1476: anyways lets add it back to help transition to bf16
Sid#2121: yep. I wanna look into MOEEEE tho
bmk#1476: okok
bmk#1476: go for it
Sid#2121: i mean
Sid#2121: let's do priorities ok
bmk#1476: i'll deal with bf16ification
Sid#2121: i'm so bad at self organisation
Sid#2121: and i still need to read the moe paper
Sid#2121: I'll do the kanban as first priority
Sid#2121: then i shall use the kanban to devise my next priority, hah
Daj#7482: Have we tested whether the reshapes help with the memory use?
Sid#2121: AH! i replicated the oom error
bmk#1476: after removing that reshape, i'm currently testing
Daj#7482: Cool cool
Sid#2121: (assuming you didn't change gpt_moe.py, it should be the same as unfixed gpt_mesh.py)
bmk#1476: i did not change
Sid#2121: https://cdn.discordapp.com/attachments/729741769738158194/732976692632617010/message.txt
Sid#2121: what can we devise from this
Sid#2121: 1) there's some more bfloats coming from somewhere
Sid#2121: no clue where
Sid#2121: uhhh is this something that scopes will give us more info about
Daj#7482: btw colab TPUs are v2
Daj#7482: They have 8GB
bmk#1476: which op is einsum_21/einsum/XlaEinsum ??? o.O
Daj#7482: v3s have 16GB
Daj#7482: just to explain why it OOMed on 13GB of memory use
Sid#2121: i have no idea which op that is, but i can fix the reshape and see if the error's the same
Sid#2121: einsum does the same thing? O.o
Sid#2121: praise the lord for einsum
Sid#2121: if it does the same thing that should be our way to do reshapes, since you can keep the batch name the same
Sid#2121: or wait, did you just take it out for testing. gotta read this through properly
Sid#2121: ah ok i get it, einsum can do the reshape op internally anyway
Sid#2121: that seems like a much cleaner way to do it
Sid#2121: and I can't see why it would differ
bmk#1476: einsum is magic
bmk#1476: give it what you have and tell it what you want
Sid#2121: 🙏 praise einsum🙏
Sid#2121: ah i errored
Sid#2121: you do need the output dims to have a different name, i think
Sid#2121: but we just need to make sure to split along those too ?
Sid#2121: ```new_name = "tmp_dim_cumsum"
new_dim = Dimension(new_name, dim.size)
new_shape = x.shape.rename_dimension(dim.name, new_name)
comparator = less if exclusive else less_equal
m = cast(
comparator(mtf_range(x.mesh, dim, dtype=tf.float32),
mtf_range(x.mesh, new_dim, dtype=tf.float32)), x.dtype)
ret = einsum([x, m], output_shape=new_shape)``` like this (mtf.cumsum)
bmk#1476: I think I know why they did the even odd layers now
Sid#2121: altho i don't know if *renaming* a dimension means we need to split that too
Sid#2121: or if mtf handles that
Sid#2121: oh yeah?
bmk#1476: You can't have same name both input and output for obvious reasons
bmk#1476: Currently we rename inputs to tmp_something every time
bmk#1476: But what if we had a different shape for every other layer
bmk#1476: No reshaping required
bmk#1476: If only they had ***documented*** this
Sid#2121: i'm not sure i understand
Sid#2121: also i think this is how our einsum should work, about to test
Sid#2121: ``` new_name = "tmp_batch"
new_dim = Dimension(new_name, batch_dim.size)
new_shape = h.shape.rename_dimension(batch_dim.name, new_name)
new_name = "tmp_seq"
new_dim = Dimension(new_name, sequence_dim.size)
new_shape = h.shape.rename_dimension(sequence_dim.name, new_name)
new_name = "tmp_vocab"
new_dim = Dimension(new_name, vocab_dim.size)
new_shape = h.shape.rename_dimension(vocab_dim.name, new_name)
logits = mtf.einsum([h, wte], output_shape=[batch_dim, sequence_dim, vocab_dim])```
bmk#1476: Note to self: add to passive aggressive blog post
Sid#2121: it's growing https://cdn.discordapp.com/attachments/729741769738158194/732981299798867998/Screenshot_2020-07-15_at_17.25.39.png
Sid#2121: also we should be in #gpt-neox-devs
Sid#2121: I would like the best practice for this sort of thing cleared up so adding that to noam email
bmk#1476: best practice for what?
Sid#2121: selecting names, how to split when you're using tmp_dimensions, if you rename to a tmp_dimension, does that mean the dimension stops getting split? etc etc
Sid#2121: also the code above works, just need to add mtf.Dimension
Sid#2121: gna push
bmk#1476: i think i already have a reasonable feel for how it works
bmk#1476: the main thing we'd ask noam to do is probably look for things in our code we didnt even know to think about
Sid#2121: oh no i jumped the gun
Sid#2121: doesn't work
bmk#1476: which by definition we cant really ask about
bmk#1476: the what happened
Daj#7482: If we ask someone publishing papers at Google to look at our code we won't get a response
Sid#2121: same error
bmk#1476: hmm
Sid#2121: i guess rename_dimension doesn't rename inplace?
bmk#1476: what do we ask him?
Sid#2121: i probably need to explicitly do it
Daj#7482: We need to ask _very specific_ questions that fit in maximum 1-2 paragraphs
bmk#1476: if we know what is possibly the issue, we can fix it without their help
Daj#7482: That's my experience with cold emailing busy people
Daj#7482: Asking "Why doesn't this work?" or "Is this the correct way to do this?" is fine
bmk#1476: like, if you can make that 1-2 paragraphs i bet i could fix the issue before the email comes back
bmk#1476: the problem is we dont know what we dont know
Daj#7482: Yes lol
Sid#2121: ```def rename_dimension(x, old_name, new_name):
"""Reshape a Tensor, renaming one dimension.
Args:
x: a Tensor
old_name: a string
new_name: a string
Returns:
a Tensor
"""
return reshape(x, x.shape.rename_dimension(old_name, new_name))``` am i going mad, isn't this recursive
bmk#1476: the things that are causing us problems wont even fit in those 2 paragraphs
Daj#7482: Before we can formulate our questions it's probably not worth emailing imho
Daj#7482: Haha
Daj#7482: Fair
Daj#7482: Just my 2ct
bmk#1476: ok so
Sid#2121: i agree fwiw
bmk#1476: let's put off the email thing
bmk#1476: archivist on the other hand can be contacted anytime, i'd say
Sid#2121: yep
Daj#7482: Yea
bmk#1476: i'm a bit of a datahoarder so i've heard of him before
Daj#7482: Though again we should do polite due diligence, know what he has that we want and ask very specifically
bmk#1476: yeah
Daj#7482: Cool so you might be best to contact him
Sid#2121: AGH I HATE THIS LIBRARY
bmk#1476: oh no i hate *interacting with people*
Sid#2121: mtf.rename_dimension() does a different thing to tensor.shape.rename_dimension()
bmk#1476: @Sid
bmk#1476: here's how i do it
Daj#7482: lol if you really don't want to contact him I can make a reddit account sometime and do it
bmk#1476: ``` x = mtf.reshape(x, x.shape.rename_dimension(x.shape[-1].name, 'tmp_channels'))
```
Sid#2121: poifect thanks
bmk#1476: nah ill do it
Sid#2121: *let's get him in the discord*
Daj#7482: Pinned a message.
Daj#7482: Passive aggressive blog post material
Sid#2121: well let's email them before we blog about them, tho
Daj#7482: For sure haha
Daj#7482: It's just a meme
Sid#2121: so we're not really gonna write a passive agressive blog post??
Sid#2121: for shame
bmk#1476: idk, it would be a great post
Daj#7482: Oh no we will
bmk#1476: im all for writing one
Daj#7482: But we will give everyone a polite headsup
Daj#7482: imo
Daj#7482: Otherwise feels kinda rude
bmk#1476: yeah that's fair
Daj#7482: And we will be polite in the blogpost ofc
Daj#7482: Just frustrated lol
Sid#2121: @bmk i think you can just do this tho ``` x = mtf.rename_dimension(x, old_name, new_name)```
bmk#1476: so polite, in fact, you might call it
bmk#1476: *passive aggressive*
bmk#1476: you can
Daj#7482: Point taken
bmk#1476: i just dont feel like replacing mine
Sid#2121: hah
Sid#2121: i'll do it
Sid#2121: i don't want any loose straws
Sid#2121: it *might* do something different lol
Sid#2121: you never know
Sid#2121: I'm still not clear on "if we rename a dimension, do we also have to split along the temporary rename for the splits to be effective"
Sid#2121: I mean that's github issue stuff right?
bmk#1476: when you rename it might do inter device comms
Sid#2121: it does
bmk#1476: and yes you do
Sid#2121: because internally, it's a reshape
bmk#1476: but it's a bit more complicated than that
bmk#1476: conv1d takes dim_in -> dim_out
bmk#1476: they *cannot be equal*
Sid#2121: O.o
bmk#1476: and you *cannot split both simultaneously*
Sid#2121: huh??
bmk#1476: it makes sense if you think about it
bmk#1476: there needs to be a [dim_in, dim_out] matrix
bmk#1476: and obviously you cannot split the same tensor twice along the same mesh dim
bmk#1476: this is why the even odd is necessary
Sid#2121: it's still taking some time to click in my brain unfortunately
Sid#2121: will keep reading that sentence until i understand lol
bmk#1476: look at the diagrams in appendix a of tfmesh paper
Sid#2121: oh man i didn't see that there were diagrams in this appendix
Sid#2121: thought it was just biblio and model samples
Sid#2121: ok i kinda get it
Sid#2121: AH
Sid#2121: yes
Sid#2121: the click happened
Sid#2121: alright woah ok
Sid#2121: so it wasn't just poc
Sid#2121: that's clever
Sid#2121: let's odd even
bmk#1476: *click*
Sid#2121: so if i'm understanding correctly: dim_in and dim_out need to differ
bmk#1476: yeah
Sid#2121: *but* you can go dim_in --> dim_out --> dim_in ---> dim_out etc etc indefinitely?
bmk#1476: yep
Sid#2121: so you need the odd even?
bmk#1476: yep
Sid#2121: yessss
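(The even/odd trick as just worked out, sketched with a hypothetical `transformer_block` helper and illustrative dimension names:)
```
import mesh_tensorflow as mtf

n_embd, n_layer = 2048, 32
embd_even = mtf.Dimension("embd_even", n_embd)
embd_odd = mtf.Dimension("embd_odd", n_embd)

# x: [batch, sequence, embd_even] coming out of the embedding layer
d_in, d_out = embd_even, embd_odd
for layer in range(n_layer):
    x = transformer_block(x, d_in, d_out)  # hypothetical block helper
    d_in, d_out = d_out, d_in              # names alternate each layer
# every weight is [d_in, d_out] with d_in != d_out, and no rename or
# reshape is ever needed; the same-mesh-axis split restriction still
# applies inside each individual weight matrix.
```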
Sid#2121: wow you'd think that'd be quite a key thing to actually document instead of just shove in the appendix of a paper
bmk#1476: they didnt even put it in the appendix did they
Sid#2121: well, not really
Sid#2121: you have to infer lol
bmk#1476: this is just in their toy model
Sid#2121: ok this is gonna help a lot
Sid#2121: ah we have so many optimizations to do
bmk#1476: so dims we can eliminate: `tmp_channels`
Sid#2121: i love it when something clicks. this good
bmk#1476: actually that's the main one we can eliminate i think
bmk#1476: there's one thing that i'm miffed about the lack of support for: the ability to assign multiple mesh dimensions to a dimension
bmk#1476: like say your mesh is a:16,b:32
bmk#1476: and you want to temporarily spread a dimension across all 512 cores
bmk#1476: you cant
bmk#1476: at all
bmk#1476: and there's not really a good reason for it
bmk#1476: i guess you can manually split the tensor and then do the thing
Sid#2121: hm yeah
bmk#1476: but that's annoying
Sid#2121: well no actually
Sid#2121: a tensor is multidimensional
Sid#2121: it's not like it's 1d
bmk#1476: no like imagine you have [batch, seq, embd]
Sid#2121: so if you have a 10 * 10 tensor and you split it 2 ways one way and 5 ways the other
Sid#2121: you are splitting across all cores
bmk#1476: you want to split embd across both the a and the b
bmk#1476: you cant
Sid#2121: how would that even work
Sid#2121: like, imagine the tensor as a square like in the diagrams
bmk#1476: just send one chunk to each processor along both of those dimensions
bmk#1476: yeah
Sid#2121: how would you draw the dividing line
bmk#1476: you draw the dividing line the same
Sid#2121: but if you're dividing across another dimension too
bmk#1476: but you put it elsewhere on the *mesh*
Sid#2121: that means you're dividing 512 * n times
bmk#1476: you can only draw lines to cut each dimension to the same number of pieces as that mesh dimension right?
bmk#1476: but if you're not using the other mesh dimension then that dimension is just doing nothing
bmk#1476: what if you could use up both of those dimensions
Sid#2121: nah, i don't understand
Sid#2121: also i can't get this einsum you pushed to work
bmk#1476: what if you could temporarily use two mesh dimensions as one big dimension
bmk#1476: ?
Sid#2121: is it working for you? did you test it?
Sid#2121: ``` # equivalent to tf.matmul
new_name = "tmp_batch"
old_name = h.shape[0].name
h = mtf.rename_dimension(h, old_name, new_name)
new_name = "tmp_seq"
old_name = h.shape[1].name
h = mtf.rename_dimension(h, old_name, new_name)
new_name = "tmp_vocab"
old_name = h.shape[2].name
h = mtf.rename_dimension(h, old_name, new_name)
logits = mtf.einsum([h, wte], output_shape=[batch_dim, sequence_dim, vocab_dim])``` like, this should work right
Sid#2121: i'm getting the einsum has lhs dimension but no corresponding rhs dimension thing
bmk#1476: well no
bmk#1476: you're renaming all the dimensions
bmk#1476: ofc einsum doesnt know what you want
bmk#1476: why are you renaming? o.O
Sid#2121: but what's the advantage of doing that, as opposed to splitting one tensor dimension across one mesh dimension, and the other across a different mesh dimension
Sid#2121: because you need to rename for einsum no ?? i'm confused
Sid#2121: did you run the code you pushed?
bmk#1476: because sometimes you *dont have* the "other" dimension
bmk#1476: yes my code works
Sid#2121: hm
bmk#1476: it breaks because of the rename to tmp_
Sid#2121: ah
bmk#1476: why are you renaming? o.O
Sid#2121: i know what's going on, my bad. i thought that was the thing to do but i know what the problem is now
Sid#2121: does the einsum not need new names as the output shape tho? i thought it did
Sid#2121: oh
Sid#2121: ok
Sid#2121: answering my own question in my own brain
bmk#1476: rubbergeduckt
Sid#2121: > does the einsum not need new names as the output shape tho? i thought it did
but can u sanity check this tho
bmk#1476: no
Sid#2121: ok
bmk#1476: einsum needs exactly the same names in output
bmk#1476: or else it has no idea what's going on
bmk#1476: conv1d needs different names
bmk#1476: (only for the output dim)
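(Those two rules applied to the snippet above, as a sketch: don't rename anything before the einsum.)
```
# mtf.einsum matches dimensions by name: names shared with output_shape
# pass through; names absent from output_shape are contracted. With
# h: [batch, sequence, embd] and wte: [vocab, embd], no tmp_* renames:
logits = mtf.einsum([h, wte],
                    output_shape=[batch_dim, sequence_dim, vocab_dim])
# "embd" is contracted; batch/sequence/vocab pass through, so the batch
# split survives because the name never changed.
```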
Sid#2121: https://tenor.com/view/screaming-internally-dead-inside-screaming-snapped-gif-8097478
bmk#1476: err @Daj what does this mean https://cdn.discordapp.com/attachments/729741769738158194/732992864321142934/unknown.png
Daj#7482: Uhh probably preempted, or corrupted state
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/732992985704431657/unknown.png
bmk#1476: hm.
bmk#1476: how do recreate
Daj#7482: Yea preempted
bmk#1476: pu recreate?
Daj#7482: `pu recreate`
Daj#7482: Yea should work I think
Daj#7482: btw do we need the v3-8? TFRC wants people to keep TPUs unreserved whenever possible
bmk#1476: im not using it at least
bmk#1476: I'm fine with releasing it
Daj#7482: alrighty I'll delete it for the time being
Sid#2121: really thinking about implementing a selenium browser that starts up and takes and then posts a screenshot of the kanban every time we msg !kanban
Sid#2121: is this
Sid#2121: bikeshedding
Isaac McHorse#2007: OI! ARE YOU WORKING?
Sid#2121: ah i'd have to log into my github on the server, wouldn't work
Sid#2121: @bmk i can't for the life of me understand why my einsum op isn't working. (I'm testing on gpt_moe.py). can you tell me what i'm doing wrong
Sid#2121: code: ``` print('INTO EINSUM SHAPE:')
print(h)
print(wte)
logits = mtf.einsum([h, wte], output_shape=[batch_dim, sequence_dim, output_dim])```
Sid#2121: prints: ```INTO EINSUM SHAPE:
Tensor[ln_f/add_2:0, Shape[batch=32, sequence=128, moe_out=768], <dtype: 'float32'>]
Tensor[wte_dropout/einsum:0, Shape[vocab=50257, embd=768], <dtype: 'float32'>]
```
Sid#2121: (moe_out is the name of output_dim)
bmk#1476: er
bmk#1476: 2 things
bmk#1476: moe_out and embd need to have the same name if you want them to get joined
bmk#1476: output_dim needs to be the same name as vocab
bmk#1476: @Sid
Sid#2121: the *same name* as vocab
bmk#1476: yes
bmk#1476: they need to be the same dimension
Sid#2121: but they are
Sid#2121: different shapes
bmk#1476: right now youre giving it two dimensions with different names how is einsum supposed to know theyre actually the same
bmk#1476: why is that dim called moe_out anyways
bmk#1476: why not just make it the same as output_dim
Sid#2121: it *is* output_dim
bmk#1476: ?
Sid#2121: like, moe_out is the name of output_dim
bmk#1476: ah
Sid#2121: i'm so confused
bmk#1476: that threw me for a loop
Sid#2121: yeah sorry, bad naming conventions
Sid#2121: will change but this was 4am last night coding
bmk#1476: still
bmk#1476: output_dim and vocab are the same object too?
bmk#1476: ok
bmk#1476: i get it
bmk#1476: you want moe_out and embd to be the same dim
Sid#2121: yes it should be called embd_dim or whatever
Sid#2121: that's my bad
bmk#1476: output_dim needs to be the same as *vocab*
Sid#2121: the same shape?
bmk#1476: the same dim
bmk#1476: name
Sid#2121: O.o
bmk#1476: this naming is garbage
Sid#2121: ok you've explained enough lol, thanks
Sid#2121: but i still don't get it
Sid#2121: i'll keep reading
bmk#1476: please change every name to the same as the dim in it
Sid#2121: i don't wanna make you explain the same thing over and over
bmk#1476: otherwise this is impossible to debug
Sid#2121: yeah ok
Sid#2121: i'm going out for a bit anyway, don't worry about the moe stuff i'll fix it
bmk#1476: ok
Sid#2121: @bmk did you test btw? did it run?
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/733029398717923328/unknown.png
Daj#7482: That's...strange? Probably corrupt
bmk#1476: what do
Daj#7482: I can manually delete and recreate it
bmk#1476: hows that different from pu recreate
Daj#7482: It shouldn't be?
bmk#1476: o.O
Daj#7482: You can try another pu recreate
bmk#1476: ok
Daj#7482: #the-faraday-cage-archive is now our go to spot for putting any fun GPT/GAN/etc stuff
aquajet#7800: Joined the server.
Pasha Khosravi#1571: Joined the server.
Daj#7482: Hello! Welcome to our AGI Breeding Program! Check the channel description for our current status and don't hesitate to ask questions :)
pashmak#7502: Joined the server.
Daj#7482: 👋
pashmak#7502: Hi 😄
arfa#0882: Uh oh https://twitter.com/julien_c/status/1283649267203280897?s=20
Daj#7482: Yep Shawn already mentioned that
arfa#0882: Where?
Daj#7482: I mean, if Hugging Face has some spare CPUs, we have code
arfa#0882: You guys need to chat less so I can unmute the server
Daj#7482: #off-topic
Daj#7482: Haha chatting is kinda the point of a server
zphang#7252: true, they have that VC money
Daj#7482: You can mute individual channels too
Daj#7482: It would be a dream to work with Hugging Face
arfa#0882: I'm jealous
Daj#7482: Join us!
arfa#0882: :feck2:
TD-3#9327: Joined the server.
Daj#7482: Don't be too jealous, if you knew the horrors we've been through with TFM haha
Daj#7482: Hello @TD-3 ! Welcome to the LM Grazing Fields! Check the channel topic for our current status and don't hesitate to ask questions!
shawwn#3694: could the webring be moved higher?
arfa#0882: Not unless you move it higher in TPU Podcast :thonk:
Daj#7482: sure
Daj#7482: Or I could put it in resources
shawwn#3694: I moved it to the bottom since it was stuck at the bottom here, but community seems more important than that.
Daj#7482: I really don't care
Daj#7482: Open to whatever people think looks best
Sid#2121: I mean let's not have it *right* at the top lol. general should be first
zphang#7252: Are the data processing scripts in the GPTNeo repo / could you add me?
shawwn#3694: sigh
Daj#7482: Personally I'd put communities in resources? What do you think?
Sid#2121: i think
Sid#2121: bikeshedding
Isaac McHorse#2007: ? JUST DO IT !
shawwn#3694: I think it should be at the top, because it's the first thing people see when they join the server. why does every decision have to be analyzed forever?
Daj#7482: Data processing scripts sans tfrecords encoding are in various other repos
zphang#7252: oh they're not centralized yet
Sid#2121: @zphang all the data processing scripts are in #datascripts yeah
Sid#2121: ah
Sid#2121: we should centralize
Daj#7482: Nope it's a bit of a mess atm
Daj#7482: iirc bmk wanted individual scripts in different repos
arfa#0882: IMO the stuff people are likely to look at once and then mute/collapse section should be below thing people are likely to look at regularly
Daj#7482: We might spin off a The Pile™️ repo
arfa#0882: I don't want to have to scroll past stuff I've muted/collapsed to see unread messages
Daj#7482: I like #communities in resources since those channels are all for looking up things, but as said I really don't care so open to feedback ¯\_(ツ)_/¯
Sid#2121: eh i preferred it with it's own section
Daj#7482: I'll fiddle with it when I'm back home
arfa#0882: Yeah. I mean, different people use Discord differently. I think shawwn never mutes anything, for example, so :idk: how he sees things
shawwn#3694: I mute most servers
arfa#0882: Oh
Sid#2121: oh please can we message huggingface
Sid#2121: @Daj I feel like you're best placed to do this since famuz
Sandeep#0543: Joined the server.
Daj#7482: >>>>>famous
Daj#7482: As if
arfa#0882: Well FWIW, if 1997 Webring is at the bottom and I *haven't* muted it, whenever a new server gets added there I'll be sure to check it out because it'll be my only ping
shawwn#3694: Yeah, I really think the webring should be its own category
Daj#7482: Hello @Sandeep ! Welcome to the Society For Ethical Treatment of Language Models! Please check the channel topic for our current status and don't hesitate to ask questions!
shawwn#3694: "general should be first because it's first" doesn't make much sense
shawwn#3694: obviously people are going to find general no matter what.
arfa#0882: General should be first because it's most important
Daj#7482: Can we move this to #off-topic please?
Daj#7482: Since we're getting new people to greet
dikshant#5563: Joined the server.
ucalyptus#2377: Joined the server.
Sid#2121: Hey @dikshant & @ucalyptus ! Welcome to The open-er version of OpenAI a.k.a Chonky Language Model Makers aka Libre AI. Let us know if you have any questions or can offer any help to the project. Check the google doc linked at the top of the channel for an overview of what we need.
pb#8994: Joined the server.
Daj#7482: Hi @pb welcome to The Worker's LM Factory! (Man these silly alt titles are getting ridiculous) Please check the channel topic for our current status and don't hesitate to ask questions!
Sid#2121: @Daj you have gpt now right
Sid#2121: git us sum welcomes
Daj#7482: Oh yeah I probably generated enough samples by now lol
Sid#2121: I can put a prompt together |
Daj#7482: Though test time now, talk to you guys later!
Sid#2121: ah! good luck!
ucalyptus#6163: Joined the server.
Sid#2121: Hey (again ?) @ucalyptus , welcome to H.A.L aka Help Access Language-Models aka LibreAI. Check the channel description for info, and please shoot any questions you have our way.
Polytropian#8925: Joined the server.
bla15e#3588: Joined the server.
Sid#2121: Hey @Polytropian & @bla15e . Welcome to LibreAI, where we waste time thinking up unique welcome messages instead of working on our main project of replicating GPT-3 + variants. Please ping us if you have any questions or can offer any help to the project
Science.Stanley#8720: Joined the server.
BadBuddhist#6590: Joined the server.
bla15e#3588: Hey! Pleasure to be here
Sid#2121: To all the new people joining - we have a super small group at the core of the project and (if it isn't clear enough) would love any help we can get. Particularly needed is cpu power for data processing, or any novel ideas that may mean we need to use less cpu.
mysterefrank#4834: Joined the server.
Science.Stanley#8720: Heck ya! @Sid
Glad to be here, and hope I can find a way to contribute.
Where could I look to find out particulars on the CPU-compute needs? 🚀
mysterefrank#2954: Joined the server.
Sid#2121: @Science.Stanley check the google doc in the description
Sid#2121: altho i think we may have changed the calculations ever so slightly since then as we're changing the way we filter CC
Sid#2121: welcome @mysterefrank @mysterefrank (?) and @BadBuddhist 👋 |
mysterefrank#2954: 🙏
shawwn#3694: I knew it. it's called pray, not high-five. Everyone always says that's a high-five
cc_#1010: Joined the server.
mysterefrank#2954: oh damn I never saw the high five
cc_#1010: oi
Sid#2121: oi oi
shawwn#3694: o/ ceeps
cc_#1010: i wave hellow
Sid#2121: welcome @cc_ pls halp. pls gib cpu. can everyone tell i'm struggling to come up with new welcome messages
cc_#1010: i will if i can spare it
Sid#2121: check the channel description for a project overview and let me know if you have any questions 🙂
cc_#1010: the sooner i can give drilbot some kind of gpt-3 access the better
Sid#2121: oooh r u kingdomacrylic ?
Sid#2121: was that the tag
cc_#1010: no
cc_#1010: i'm his successor, i run @drilbot_neo
Sid#2121: oh he killed drilbot v-1 didn't he
Sid#2121: nice
cc_#1010: and gptmicrofic and mtg_gpt2 and dril_eaboo, the last of which is not really GPT related but still notable
Sid#2121: did you message OA for beta access lol? are they too serious to run drilbot 2 |
Sid#2121: awesome! welcome
cc_#1010: i did like... 6 ish days ago?
cc_#1010: dunno how long the waiting list is
Sid#2121: I think they're just being selective
Sid#2121: also they're gonna start making people pay soon-ish
cc_#1010: hmm
cc_#1010: thats lame
cc_#1010: also it was ten days ago, time is an illusion
cc_#1010: https://twitter.com/drilbot_neo/status/1280001644219052032
Sid#2121: i mean, incredible they've been able to do the inference they have done for free so far, but yeah. that's why we're here
Sid#2121: @cc_ i'm guessing you have data gathering experience then? that's one place we need some workers. If you do happen to have any experience coding for tpus that'd be most helpful tho
cc_#1010: hahaha
hoho#4821: Joined the server.
cc_#1010: i have no code knowledge when it comes to this sort of thing whatsoever
Sid#2121: ah ok well, welcome, lurk, 👀
cc_#1010: i use max woolf's colab notebook for everything
Sid#2121: Hey Hey @hoho
Sid#2121: I need to get our discord bot to greet everyone so i can actually get to work haha
cc_#1010: how do i donate cpu time
Sid#2121: what kinda cpu do you have access to? |
cc_#1010: uhhh i have
wintermute#5623: Joined the server.
cc_#1010: two linodes lmao
cc_#1010: and
cc_#1010: a 2015 macbook pro
cc_#1010: and
cc_#1010: intel core i7 9750H with 2.6 ghz
Sid#2121: Hello @wintermute ! Welcome to the tensorflow mesh wastelands! Read the Channel description for more information on the project and how you can help out.
shgidi#5693: Joined the server.
arfa#0882: I can donate 2000 bitcoins for everyone who sends me 1000 bitcoins :heck:
Sid#2121: ok cool @cc_ , best to ask @bmk about this stuff when he's awake. We have a few people who offered us cpu time so we'll need to distribute processing when we're ready. I'll maybe start a section in the google doc and add your name to the list if that's ok
Sid#2121: @arfa elon???
cc_#1010: sure
arfa#0882: Oh no my disguise is busted :heck:
cc_#1010: but yeah im mostly a code baby still working on my first discord bot for goofy shit
cc_#1010: so im not sure if i'll be of much material help besides donating spare cpu cycles
Sid#2121: @arfa this u? https://i.insider.com/5d5302c6cd97841bc207b2e4?width=1100&format=jpeg&auto=webp
arfa#0882: N-n-uhh
Sid#2121: @cc_ well that is very much material help indeed
cc_#1010: oh! right |
cc_#1010: my parents are also rich and i can siphon money off them lmao
cc_#1010: im not sure if i can single-handedly manage 6 terabytes' worth of money but i can probably put a dent in it
Sid#2121: > my parents are also rich and i can siphon money off them lmao
@cc_ lmao
Sid#2121: i hope you're not joking and i will take advantage of this
cc_#1010: i am not
Sid#2121: we'll tell them it's an investment or a time share or sth
cc_#1010: i make 45k in a cushy office post-grad job doing very little and i spend it on very little and my parents pay my rent so i am effectively siphoning money from them
cc_#1010: lmao
cc_#1010: i've got money to burn
cc_#1010: i dont pay bills
Sid#2121: @cc_ we love u already
Sid#2121: (not just for the money i love drilbot)
ghost2718#1993: Joined the server.
cc_#1010: :powercry1:
cc_#1010: y'know i could probably signal boost the discord link with drilbot
Sid#2121: maybe at some point
cc_#1010: but that might overwhelm us right now lmao
cc_#1010: maybe once we're in "please donate your cycles" mode
Sid#2121: idk if we need more eyes in the server rn |
Sid#2121: yeah
cc_#1010: advantage #3 of me: i have a moderate amount of clout?
cc_#1010: lol
Sid#2121: @ghost2718 👋 👋 welcome we make big GPT pls halp us
cc_#1010: but also its 4 am so im going to bed
Sid#2121: well nice, welcome in, i'm sure we could use your clout at some point for sure
cc_#1010: y'know you could probably just download any old twitter bot and get a welcome function lmao
Sid#2121: we already have @Isaac McHorse we just need the prompts
Sid#2121: if you like bots check out #the-faraday-cage-archive
Sid#2121: we have some scp generation going
ghost2718#1993: @Sid I'll try my best
cc_#1010: haha, nice
cc_#1010: i was considering if i got into gpt-3 trying to hook a discord bot up to the API
Sid#2121: he is gpt-3 powered but
Sid#2121: he just grabs from a bucket of pre-generated ones
Sid#2121: bc it's not my api key
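(For the curious, the "bucket of pre-generated ones" pattern is easy to sketch in discord.py. This is purely illustrative — `welcomes.txt` and everything else here is hypothetical, not Isaac McHorse's actual source:)

```python
import random
import discord

intents = discord.Intents.default()
intents.members = True  # newer discord.py versions need this for on_member_join
client = discord.Client(intents=intents)

# welcomes.txt: one pre-generated GPT welcome per line, sampled offline
with open("welcomes.txt") as f:
    WELCOMES = [line.strip() for line in f if line.strip()]

@client.event
async def on_member_join(member):
    channel = member.guild.system_channel  # or whichever channel greets people
    if channel is not None:
        await channel.send(f"Hey {member.mention}! {random.choice(WELCOMES)}")

client.run("BOT_TOKEN")  # placeholder token
```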
Sid#2121: @ghost2718 check out the google doc above for an overview of what needs doing. pls let us know if you have experience in any of the tasks listed
cc_#1010: my bot project is very nerdy
cc_#1010: its basically an interface for archive of our own
cc_#1010: (which is a fanfic website) |
cc_#1010: lets you do searches, scrape metadata, etc.
maxime#4123: Joined the server.
cc_#1010: working on an in-discord reader with tts options for people who need that
cc_#1010: bookmark exact spots in the page
cc_#1010: the like
Sid#2121: that sounds cool
cc_#1010: anyway im gonna head to bed now because i could probably talk for hours given the opportunity
Sid#2121: is code open sourced?
cc_#1010: no
Sid#2121: hah ok i won't keep you up too long
Sid#2121: 👋
cc_#1010: idk i know its like
cc_#1010: whats the word
cc_#1010: it does not matter and a competent coder could figure out what i did in a week
cc_#1010: and nobody's "stealing" it
cc_#1010: but it still feels like its mine y'know
Sid#2121: i like the tts idea a lot
cc_#1010: and i get weird feelings when i think about people forking it and running it as their own bot
Sid#2121: MAH BABY
cc_#1010: cuz its My Bot |
cc_#1010: Not Yours
cc_#1010: i spent 8 hours today prying around with mongodb so i could keep track of server statuses (so no kids can accidentally search for the porn) and user win/loss records because it has art prompts too
cc_#1010: so that was gratifying
Sid#2121: Can we get drilbot in here lol
cc_#1010: i mean
cc_#1010: there is no drilbot... bot
Sid#2121: there could be
cc_#1010: if i had a gpt-3 key yes
cc_#1010: or some gpt-2 api
Sid#2121: you could just pre-generate a ton, but yeah. I'm really getting distracted. there's so much to do, i haven't even had coffee
cc_#1010: eh
cc_#1010: pre-generation feels...
cc_#1010: kinda like cheating lmao
cc_#1010: go all the way or don't go
cc_#1010: no in-betweening it
Sid#2121: shh don't let @Isaac McHorse hear you
cc_#1010: anyway
cc_#1010: bed
dmytrodee#9629: Joined the server.
shawwn#3694: bikeshedding |
Isaac McHorse#2007: WHY WOULD I PLAY?! YOU ARE THE SOBBING ONE
Sid#2121: Hey there @dmytrodee ! Welcome to git clone openai; git branch LibreAI
Sid#2121: hahahaha ok shawwn
Sid#2121: you are correct
Manju#1531: Joined the server.
Perditus#2503: Joined the server.
Daj#7482: Hello @Manju @Perditus welcome to the Tensorflow Anonymous Self Help Group aka LibreAI. Please check the channel topic for info on our status and don't hesitate to ask questions :)
Sid#2121: @Daj you're back? how'd the test go hah
Daj#7482: It was ez. Medicine students can't do math so it was pretty trivial, I put too much effort into studying haha
Sid#2121: nice
JuniorK#2145: Joined the server.
arfa#0882: Test? Wat?
Daj#7482: University test
Daj#7482: Hello @JuniorK ! Welcome to GNU/GPT! Check Out the channel topic for info and don't hesitate to ask questions!
Manju#1531: Hello @Daj
rusquant#3367: Joined the server.
Daj#7482: Hello @rusquant ! Welcome to AI@Home aka LibreAI! Check out the channel topic for info and don't hesitate to ask questions!
Eddh👽#7290: Joined the server.
Daj#7482: Hello @Eddh👽 ! Welcome to A Collection Of People That Are Definitely Human And Not AIs Trying To Blend In aka LibreAI! Check out the channel topic for info and don't hesitate to ask questions!
Narsil#9151: Joined the server. |
Daj#7482: Hello @Narsil ! Welcome to The Merry Band of LM Trainers aka LibreAI! Check out the channel topic for info and don't hesitate to ask questions!
Daj#7482: Man at this point it's just sport to see how many more of these I can come up with
Narsil#9151: @Daj Don't you have a model to generate these? 😄 Thanks btw!
Daj#7482: Not yet haha
Daj#7482: But soon™️
P-Dog#9402: Joined the server.
Daj#7482: Hello @P-Dog ! Welcome to Mom: We have OpenAI at home, OpenAI at home: LibreAI! Check out the channel topic for info and don't hesitate to ask questions!
semantic#5274: Joined the server.
ifh#0340: Joined the server.
Anirudh#6162: Joined the server.
Daj#7482: Hello @semantic @ifh @Anirudh ! Welcome to DIY AGI! Check out the channel topic for info and don't hesitate to ask questions!
unkowncandy#0790: Joined the server.
Daj#7482: Hello @unkowncandy ! Welcome to the Data Mines! Check out the channel topic for info and don't hesitate to ask questions!
vishalr#6172: Joined the server.
Daj#7482: Hello @vishalr ! Welcome to the Custom Introductions Lab! Check out the channel topic for info and don't hesitate to ask questions!
tyrion#9377: Joined the server.
DragonPG#2864: Joined the server.
DanielH#9648: Joined the server.
Sid#2121: Greetings @tyrion , @DragonPG , @DanielH ! Welcome to the AGI Faraday Cage! Check out the channel description for some info on what we're doing 🙂 we're always looking for people to help out, if you have anything to offer
lugig#2397: Joined the server. |
pragmaticml#1730: Joined the server.
justhoughts#6515: Joined the server.
Daj#7482: Hello @lugig @pragmaticml @justhoughts ! Welcome to the Home For Wayward Language Models! Check out the channel topic for info and don't hesitate to ask questions!
BalGadot#9361: Joined the server.
AlexM#2612: Joined the server.
Sid#2121: hoo boy i think that twitter post is blowing up
Sid#2121: Hey @BalGadot , @AlexM !
BalGadot#9361: Oh indeed, hey there!
Daj#7482: Here have your customary custom introduction: Welcome to the LM Rebel HQ!
BalGadot#9361: Feels good, thanks again!
acakir#5963: Joined the server.
Daj#7482: Check the channel topic for info and please feel free to ask if you have any questions or would like to help out :)
Daj#7482: And welcome @acakir to the Cathedral and Bazaar of LMs!
pwang99#3791: Joined the server.
acakir#5963: Happ tobe here! Will do
natn#2898: Joined the server.
mathew#7618: Joined the server.
Daj#7482: Welcome @pwang99 @natn @mathew to the TPU Abuse Survivors Support Group! Please see the channel topic for info and don't hesitate to ask questions!
pwang99#3791: 👍
pwang99#3791: I’m just here as a fly on the wall, for the moment 😉 |
mathew#7618: Hello everyone glad to be here!
pwang99#3791: I have only two GPUs to spare
Daj#7482: Lurkers are always welcome! of course people wanting to help is even better
Daj#7482: Our bottleneck is currently CPU to preprocess data funnily enough
Sid#2121: > I have only two GPUs to spare
@pwang99 we're using TPUs for training but we need a lot of compute for preprocessing
Sid#2121: ah
Sid#2121: ^
Daj#7482: And data collectors/TPU coders
Sid#2121: mainly tpu coders 😶
pwang99#3791: I’m trying to learn more about the actual process for the training and preprocessing
pwang99#3791: Why TPUs specifically? Is that just what the openai codebase Targets?
Sid#2121: it's what we have access to
Sid#2121: TFRC creds
Sid#2121: plus speed
pwang99#3791: Ah. Got it
Daj#7482: Yea, any equivalent amount of GPU would be unaffordable
pwang99#3791: What is the ballpark estimate of TPU compute-hours needed
Daj#7482: Uhm I think @bmk made some estimates
Daj#7482: Thousands of TPU months iirc |
Daj#7482: GPT3 is _big_
Daj#7482: (and we want bigger!)
lillux#2099: Joined the server.
Sri Harsha M End-October 2021#1627: Joined the server.
adam90#4807: Joined the server.
binal#2982: Joined the server.
Sid#2121: Hey @lillux , @Sri Harsha M End-October 2021 , @adam90 , @binal ! Welcome to the AI Fallout Shelter ™️
Sid#2121: please check the channel description for a general project overview and ping us with any questions
Sid#2121: *wipes sweat* so much welcoming
Daj#7482: Maybe I shouldn't have started the customized welcoming tradition lol
Daj#7482: GPT3 generated welcomes soon
Sid#2121: nah i love it
Sid#2121: nice little creativity exercise
lillux#2099: Hi, I found this on Twitter and wanted to see if I could give a hand, but I've not used TPU before
lillux#2099: Interesting project, i'm reading the docs
murbard#5141: Joined the server.
Sid#2121: there's plenty of places we need help @lillux
Sid#2121: what are your skills?
Sid#2121: if you're a quick learner and know how to program, we could do with someone else who knows how to work with TF-mesh tbh
Sid#2121: otherwise, data collection is invaluable |
Sid#2121: Hey @murbard , welcome to OpenAI's even more open younger brother, LibreAI ™️ ! check the google doc in the project description for more info
murbard#5141: > Hey @murbard , welcome to OpenAI's even more open younger brother, LibreAI ™️ ! check the google doc in the project description for more info
@Sid ty
Skyros#0881: Joined the server.
sri#3423: Joined the server.
lillux#2099: Actually I do research on molecular dynamics, simulating self-assembling peptides and analyzing their topology to develop biomaterials. I'm learning data science. I can code in python and I use pytorch almost daily, not for machine learning but as an engine for tensor operations. I have used keras in the past, but to do pretty basic stuff, MLPs and small CNNs
Daj#7482: Oh that's neat, I always wondered if anyone used these libraries for anything but ML. Well if you ever feel like diving into some harder stuff or managing/downloading/encoding (mostly cleaning/encoding needed atm) huge datasets, hit us up 👍
Daj#7482: And welcome @Skyros @sri to the Filming of the first Entirely Real Black Mirror Episode! Please check the channel topic for info and don't hesitate to ask questions!
aswin#9114: Joined the server.
Daj#7482: Hey @aswin ! Welcome to the Ineffable Language Models Association (ILMA) You are in the channel for ALMA - Artificially Literal Mental Assignments. Check the channel topic for info on what we're doing and what you can do to help, if you want.
(this message generated by GPT3)
masapasa#9576: Joined the server.
Daj#7482: Hey @masapasa ! Welcome to the AI Program of the International Society for Ethical Treatment of Language Models! Check the channel topic for info on what we're doing and what you can do to help, if you want.
(this message generated by GPT3)
password#1329: Joined the server.
Daj#7482: Hello @password ! Welcome to The Large Language Model Autonomous Zone! Please see the channel topic for info and don't hesitate to ask questions!
wobbithobbit#2197: Joined the server.
MarcAK#7665: Joined the server. |
Daj#7482: Hi @wobbithobbit @MarcAK ! Welcome to the OpenAI of OpenAI! Please see the channel topic for info and don't hesitate to ask questions!
Ivanc2#9346: Joined the server.
bmk#1476: Wow so many new folks!
Daj#7482: Hello @Ivanc2 ! Welcome to the Library of Babylon (Compressed into latent space for TPU compatibility)! Please see the channel topic for info and don't hesitate to ask questions!
bmk#1476: we can use all the help we can get lol
Daj#7482: Yea bmk, founder of Hugging Face tweeted about open source GPT3 and shawn commented with our discord
Daj#7482: lets hope some of the new people stick around and help :)
bmk#1476: niiiice
bmk#1476: did anyone from HF join?
Daj#7482: Not that I know of
Ivanc2#9346: Replicating GPT2 was pretty fun - but I figure it needs more parameters 😂
shawwn#3694: it's also weird that lots of people seem to have joined and immediately left. But I guess it's not that strange. I expected to see more than 19 online
shawwn#3694: probably just early morning though.
Daj#7482: @Ivanc2 It brings some new challenges since GPT2 is about the biggest that fits on one core haha
Daj#7482: Yea I've noticed that too shawwn
Daj#7482: Noticed a few odd things in general
Daj#7482: e.g. I was followed by a twitter profile that literally only follows me
Ivanc2#9346: Everyone wants to know how close it is to being done
bmk#1476: haha
bmk#1476: it *would* be closer if everyone pitched in |
Daj#7482: We're making real progress for sure
Daj#7482: But it's a big project
Daj#7482: I'm surprised we got this far tbh
bmk#1476: does anyone of the recently joined have access to a lot of cpu power
shawwn#3694: we have 64 cores, though I'm not sure that's "lots"
Daj#7482: Better than our 8 lol
bmk#1476: Er, about an order of magnitude more would be my very rough estimate
Daj#7482: I mean, it's not that less isn't worth it
bmk#1476: ^
Daj#7482: Since we can be training smaller models while data processing runs
bmk#1476: yeah
Ivanc2#9346: @Skylion has access to the brown grid
shawwn#3694: oh, did skylion pitch in some resources?
shawwn#3694: neat
bmk#1476: also unlike OA, who only sampled a small amount of data from CC, I want to process *all* of CC and then sample from that
Daj#7482: We haven't heard much from Skylion so far
Ivanc2#9346: I’m saying he can, won’t offer for him
Daj#7482: Which is fine, anyone can contribute that wants to
Ivanc2#9346: I could also try and reactivate my Brown U research account if my advisor is interested
bmk#1476: so if we ever want to go bigger we can just sample more instead of firing up the CC code again |
Daj#7482: Would be super cool Vanya! Happy to spin out some research papers too
Daj#7482: We have plenty of experiments worth writing up
bmk#1476: oh yes, research papers would be awesome
shawwn#3694: by the way, I've been thinking of making my own attempt at GPT-3. Feel free to revoke my access to avoid conflict of interest, though to be honest I doubt I'll get anywhere either way.
shawwn#3694: mesh tensorflow is interesting
Daj#7482: Why not work together with us?
shawwn#3694: I spent some time going through the examples
shawwn#3694: mostly for fun.
Daj#7482: I mean, sure do what you want
Ivanc2#9346: It seems like this project would benefit greatly from Shawn’s community
Daj#7482: We came from Shawn's community lol
Daj#7482: Just a few of us spun out because we needed more space
Daj#7482: We'd love to have you Shawn, shame you don't want to cooperate, but I wish you luck whatever you do :)
bmk#1476: I think we have a much better chance of succeeding if shawn works with us rather than redoing our work
bmk#1476: us = any of us getting one working
lillux#2099: @Daj i can work on dataset in my spare time, i'll read the specific thread
shawwn#3694: I didn't say I wasn't going to cooperate. I said I was thinking of making my own attempt, and wanted to mention it ahead of time so it doesn't come as a surprise.
Daj#7482: I misunderstood then, either way I look forward to what you do!
Daj#7482: Feel free to use any of our stuff we're _Libre_ AI after all
Daj#7482: @lillux Great! I think @bmk is the most familiar with the data pipeline so feel free to talk to him |
shawwn#3694: hm, why's it closed then? just curious
bmk#1476: I believe the original intent was to not put out unfinished code
Daj#7482: Yup
shawwn#3694: ah
Daj#7482: I literally never deny a request to see it
Daj#7482: We'll release it the second we're comfortable saying it works
Daj#7482: Which might be soon
spiantino#6702: Joined the server.
Daj#7482: Hello @spiantino ! Welcome to the Cult of TPUs Go Brrrr! Please see the channel topic for info and don't hesitate to ask questions!
bmk#1476: I think I'm also the one most familiar with mesh tf so if you want to contribute to that also ask me i guess
Daj#7482: bmk runs half of this place :D
bmk#1476: haha
Daj#7482: The other half is Sid, and I make memey introductions
thesofakillers#8353: Joined the server.
jackclark#7956: Joined the server.
Teven#6831: Joined the server.
Daj#7482: @jackclark Are you _that_ Jack Clark?
Daj#7482: If you were I was going to send you another email after I talked to Rosie Campbell haha
spiantino#6702: Hi all - nice to meet you all. I'm curious whether the training is mostly a CPU- or GPU-bound workload, and if you have a rough estimate of how many you'd need
Daj#7482: Currently we're actually CPU bound for the data cleaning |
Daj#7482: Since TPUs do the GPU work for us
Daj#7482: Also a formal hello to @thesofakillers @jackclark @Teven ! Welcome to the AGI Proving Grounds! Please see the channel topic for info and don't hesitate to ask questions!
Teven#6831: Wait you're not a bot you actually come up with the memey introductions? pretty impressive tbh
Daj#7482: Haha thanks
Daj#7482: I've tried to get GPT3 to make them but it's not the same
Teven#6831: Ha
Teven#6831: yeah anyway I actually work at Hugging Face as a research scientist (although roles are always pretty flexible at HF)
Daj#7482: Oh wow that's so cool! Thanks SO much for the awesome stuff you make!
jackclark#7956: > @jackclark Are you _that_ Jack Clark?
@Daj Yes, I am _that_ Jack Clark. Hello!
bmk#1476: awesome! I also absolutely love the work HF does, especially the transformers library
Teven#6831: Ha thank you too, a lot of stuff is made by the community, that's the whole point 😉
Daj#7482: @jackclark Hey man! How are you doing? I'm not sure if I should be apologizing for this or not lol
jackclark#7956: > @jackclark Hey man! How are you doing? I'm not sure if I should be apologizing for this or not lol
@Daj I don't think so - we're all part of the same community going forward in time. I'm mostly interested in grokking the ethical frame/lens you all have on this, as I suspect I have a different opinion and I'm always on the hunt for things that can help me better understand where others are coming from
helen 🐳#5160: Joined the server.
Daj#7482: Well we have an #alignment-general channel where you can see the main contributors long discussions about this. I hope I didn't misrepresent OpenAI/you
Daj#7482: I've briefly talked to Anders Sandberg during a meetup about his take on the safety and was going to email you, Rosie, Miles and anyone else I can think of once I was sure this would actually work
Daj#7482: Our TLDR is that we want to do our due diligence
Daj#7482: Hello @helen 🐳 ! Welcome to the Society For Advancement of Language Models! Please see the channel topic for info and don't hesitate to ask questions! |
Daj#7482: Also @jackclark and anyone else here for research/ethics/policy reasons, I (and almost surely the others) would be happy to answer any questions, discuss issues or hop on a call or whatever any time. We wanna be open after all :)
bmk#1476: I'd sure love to discuss anytime
sunrisetofu#6997: Joined the server.
mesfas#6224: Joined the server.
Sid#2121: Hey @jackclark , Super exciting to have you here. I personally am still fully formulating my ethical stance on this whole thing but lean on the side of fully open releases. @Daj has been giving me some excellent readings and we’ve had lots of discussion over in our #alignment-general channel. Would love to hear your take on our project and any potential dangers to look out for
Sid#2121: Ah, I see you’ve already posted over there. Lots of activity here, need to do some catching up
Sid#2121: > Hi all - nice to meet you all. I’m curious if the training is a mostly CPU or GPU required workload. And if you have a rough estimate of how many you’d need
@spiantino Hey spiantino - as you can see we've been pretty busy here today - sorry this has gone unanswered. Training will be on TPUs. the bottleneck right now is CPU for data preprocessing
neurosyft#1798: Joined the server.
Daj#7482: Hello @neurosyft ! Welcome to Applied AI Ethics! Please see the channel topic for info and don't hesitate to ask questions!
Deleted User#0000: Joined the server.
Sid#2121: Hey there @Deleted User ! Welcome to the AGI Meme Factory! (aka LibreAI ™️)
Sid#2121: let us know if you have any questions - more info in the google doc pinned to the channel
old#3101: Woah looks like hf is getting serious about this ;).
Brouz#6768: Joined the server.
Daj#7482: Hi @Brouz ! Welcome to The Leaky AGI Box! Please see the channel topic for info and don't hesitate to ask questions!
Brouz#6768: sweet
zap#3181: Joined the server.
Deleted User#0000: Joined the server.
bmk#1476: Welcome to the land of LibreAI! |
Daj#7482: Hello @zap @Deleted User ! Welcome to the Shitposting AIs Containment Zone! Please see the channel topic for info and don't hesitate to ask questions!
shawwn#3694: (Is it really a shitposting AI containment zone when there isn't a #memes channel?)
shawwn#3694: (or perhaps it's the anti-shitposting channel, which is fair)
bmk#1476: #off-topic is the shitpost area
Daj#7482: _We_ are the shitposting area
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/733343426979954699/tfchad.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/733343463411548191/fbc-tf.png
shawwn#3694: bikeshedding
Isaac McHorse#2007: I'M NOT WORK ING! I'M JUST PLAYING!
shawwn#3694: see, you've trained two opposites
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/733343586753446018/freemonoid.png
Daj#7482: We have some pretty dank OC memes, thanks to bmk
macVIRII#5337: Joined the server.
Daj#7482: Hey @macVIRII ! Welcome to the LibreAI School for Gifted LMs! Please see the channel topic for info and don't hesitate to ask questions!
mojosmojo#4687: Joined the server.
bmk#1476: this reminds me of Profession by Asimov
bmk#1476: The LibreAI House for Feeble-minded AIs
Daj#7482: Hey @mojosmojo ! Welcome to the Unsure Whether To Panic About LMs Yet Or Not Society! Please see the channel topic for info and don't hesitate to ask questions!
bmk#1476: http://www.inf.ufpr.br/renato/profession.html
turmeric13#4738: Joined the server. |
Daj#7482: Hey @turmeric13 ! Welcome to the Wild World of Word Models! Please see the channel topic for info and don't hesitate to ask questions!
goolulusaurs#1571: One thing I've been thinking about is that with models like iGPT, people are finding ways to turn other types of data into a sequence for prediction. However, everything on a computer is already stored as a sequence of bits. I wonder how well it would work to train a large transformer on sequences of bits/hex, as a way of representing and predicting any kind of data, whether its text, executable, image, sound, etc.
Daj#7482: I mean, all data arriving in the human brain is neural spikes
Daj#7482: Same idea
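(A minimal sketch of the byte-level idea — hypothetical, nothing in the repo does this: any file is already a byte sequence, so a 256-symbol vocabulary covers text, images, audio, executables, all of it:)

```python
# turn any file into a byte-level token sequence a transformer could model;
# the vocabulary is just 256 ids, one per possible byte value
def bytes_to_tokens(path):
    with open(path, "rb") as f:
        return list(f.read())  # each byte 0-255 becomes one token id

def tokens_to_bytes(tokens):
    return bytes(tokens)  # exact inverse: lossless for any file type

tokens = bytes_to_tokens(__file__)  # works identically for .png, .wav, .exe ...
```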
ragha123#2283: Joined the server.
Daj#7482: Hey @ragha123 ! Welcome to the Place Where That One Guy Is Running Out of Custom Intro Ideas! Please see the channel topic for info and don't hesitate to ask questions!
Sid#2121: hahahaha
Sid#2121: welcome to the place where if the man who's running out of custom intro ideas sends me gpt generated ones he can get a robot to do it!!
Daj#7482: I did!
Sid#2121: moarrr!!
Daj#7482: GPT3 is uncooperative and I should be working on other things haha
Sid#2121: ok 😦
Sid#2121: i mean we both should tbh
Daj#7482: Yes
Daj#7482: _Bikeshedding_
Isaac McHorse#2007: OH HELL NO! F*$K YES! WORK!
bmk#1476: *Kleinigkeitstreiten*
razzor#2262: Joined the server.
Daj#7482: Hey @razzor ! Welcome to the SCP Foundations Nerdier Cousin! Please see the channel topic for info and don't hesitate to ask questions!
razzor#2262: Thanks 😊 |
gsastry#9119: Joined the server.
baragonaru#7305: Joined the server.
Daj#7482: Hey @gsastry @baragonaru ! Welcome to The International LM Watch Organization! Please see the channel topic for info and don't hesitate to ask questions!
jeffrafter#8838: Joined the server.
fnord#5810: Joined the server.
Sid#2121: Hello @jeffrafter @fnord ! Welcome to AI Hygiene Council! Please check out the channel description for moar info
pujaarajan#2893: Joined the server.
Daj#7482: Hey @pujaarajan ! Welcome to the Where The Naughty ML Engineers Get Sent: Tensorflow Hell! Please see the channel topic for info and don't hesitate to ask questions!
Deleted User#0000: Joined the server.
Daj#7482: Hey @Deleted User ! Welcome to Where The Naughty Developers Go: Tensorflow Hell! Please see the channel topic for info and don't hesitate to ask questions!
jack4566#9782: Joined the server.
Daj#7482: Hey @jack4566 ! Welcome to the Foundation for Reparations For Those Driven to Madness By Tensorflow Documentation! Please see the channel topic for info and don't hesitate to ask questions!
Daj#7482: I need to batch my custom welcomes more if I want to keep this up lol
nick1234#7440: Joined the server.
mdlockyer#4683: Joined the server.
ko#7147: Joined the server.
Sid#2121: Hey @nick1234 , @mdlockyer , @ko ! Welcome to the AI pizza parlor, zio pepe's! serving up recipes generated by large Language Models!
Sid#2121: beat you to it buddy 😉
Daj#7482: Hah I appreciate the chance to reuse mine next time
Sid#2121: please let us know if you have any questions, read the doc in the channel description for more info on the project |
yurak#0640: Joined the server.
Daj#7482: Hi @yurak ! Welcome to Bargain Bin OpenAI! Please see the channel topic for info and don't hesitate to ask questions!
tushar#8521: Joined the server.
Daj#7482: Hi @tushar ! Welcome to the Mountain of ~~Madness~~ Tensorflow Documentation! Please see the channel topic for info and don't hesitate to ask questions!
shawwn#3694: Hmm, you need CPU encoding power? We have some spare
Sid#2121: that would be *super* helpful @shawwn
Sid#2121: how much do you have
bmk#1476: that would be awesome
Sid#2121: also *where* did ya get it from
shawwn#3694: Two servers, 32 cores each. Probably just one server for now since the one in Europe is hosting a production model
shawwn#3694: It also has a 24TB NAS, of which 6GB is used. Might be a nice place to stick your data
shawwn#3694: Ingress into GCE is free, so, it’s a useful launching point
bmk#1476: that's awesome
shawwn#3694: If you post some ssh keys, I can add access. The NAS is at /mnt/nas-ca/
shawwn#3694: It has a 50MB/s limit, but the server also has a 500GB SSD
shawwn#3694: Both servers have 3 1080 Tis too.
bmk#1476: damn, unfortunately we can't take advantage of gpus rn, it's purely cpu bound
shawwn#3694: I’ve been in a similar position. Funny having spare GPUs laying around
Daj#7482: That's super awesome shawn thanks so much!
bmk#1476: ```ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCY8sc/XEQfligFlp93OkziLJtbTWX7EW4YXleWEk14aJ+DUrVlhroZJ+7pM3PABxRyxREj5yM1wOXPqhpT95m6bdnSB4VLMYeVcd86mR9+or6IY7A7c62JufRg3gF3t/dMNVRiXNgpb7aq1qOdzynBec6RJdssrt9ezH7YnqdW3wQO6W1mc0I3oxq+6A4+/yCYMLN54nfqbcN/Zvq7vyAldfOXiempMldBtrinwtOj4oGQ4yVbbBbQzXMBXc32MuGNcZUeKlXGPm10fPe3nULIR6hjzaH36xlc3u+mbcyi3VSolotN7/2CjLCrqPoOrscbjHj+iQsxD2PswAmbz1yn user@aegis |
```
Daj#7482: I just got my encoding script fixed hah
Daj#7482: `ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCbYsj64DQ3sII+I65MjTalQ9cqPp1avh1n4IMfvV2ZhHCXiBVM+bOj1KtjC5+fxPbwJcksSlszhLtt0le3mGFhBYkBlaYhQfQO0xqRU46lfLWSkzrdSoya8OrMnhZZBNXdYsFn28fYpyMJTw17TnJojQ5+D+rIJlzbPE3I25qep4VkqBq6hKvayDjsEWjpTCSJczy5kCxpTshicTyHJnD9Gsc+GLDVprmzVkTnNit59BB/GDrhPATDRCAIgKT49g8JPKxNNRFFptFdhZgmoAa93e81fDU71GZj7rFUeMe0vCqnQQioR2mBnsodQ8ih/22KVbayZEqEqe5Vq/dpCUlWxlUMyOA/XDHffEpiNzb2wJz1eaWm5AnFv5z6iH3nUHsDfQEM8XgUCa+oBAb7pOhVQNe+VrKDz+j1/wpN5cgWGiK8ivTo/1sdoAnU6MPt6LJwFqm2//JeFw/WzxsOK0ljvNZqn1SGK088svvb/fELTuKgnj2XNm9gWSnOAjfAQmYN59W5WaMto71VhCBABdGXIfwMaHrkStBW942kR6CrHO3IlX+pijw1PyBVpeztGXLdKr3pRrKkTHM9qERVJFX/13l9KqBV/esWGXvbB8vRKmrdHIcwCh/CDQAJk70xoz1RYfBA4p8SLWn6llJcq+AXnQzymqBv5awIDuLLwil8nQ== connor@connor-laptop`
shawwn#3694: Alright, I’ll add you when I’m downstairs. Maybe within an hour or two.
bmk#1476: thanks! 👍
shawwn#3694: You’ll have sudo apt-get privs, but not sudo in general. That way people can chmod 700 their home dir and log in to GCE without worrying about security issues
Daj#7482: That's a good solution
shawwn#3694: Keep in mind that I have sudo, but obviously I won’t be poking around other people’s dirs. There are a few of us using the server now
shawwn#3694: If you need anything that isn’t on it and can’t be apt-get’d, let me know and I’ll add it.
Daj#7482: As long as python works we're probably good 👍
shawwn#3694: Yup, TF1.15 is on it iirc
Ruby#6688: Joined the server.
Sid#2121: @shawwn ```ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC17MdgViRnQMRSSSUu1+wbbORxmyjQ9EduHSsAHyRGOJmmdLkJVnodjm8blbsNpa/7IsAba+P7P1FQQ7APUv0d+8UR9Jc47x6zGJ/CNtffZWrcdV9nEwBlsdX+VT8c+fjK7EBq2ootS8qccb+eh5QufMOqEkYdfiKQlRKM4P5+avKwQgz6ufLgY2Zz+yTnL4k4BOyelJhtV4Qspw/WSxErqQXVDqkY4k4XO1mQGHpoScXqGyUU3e1bvmFW88D0BZW0tODiIAkvqc6glhxvBj1261yOv2aOFckVeqSdUFCGurOxzdt9oGBSMjdgGRaz8Ni5+GJG0D870PphoXxROKU8BzYcfCBjIefgXKK8+cvY2FcXNsdUlOdD082RJ5URzFZ5vvmk3teBtiB8iSAXtivTyah37FB8aehFIbhn7CIWiXJA1CF0GGmPgtw+eiBnb3sgpx3ZF+W+7oXVBjQI7Q9P9WPVMZOb9e4DZoz0WmmczQZaYCqP0PnGQaDg6+Yc8LvCkMcgjvmctByqr6j+Ln5uXwfTrlxl7qVggDmpbOkVQFgo4JD4228WRo2dhp13bB6Lyq1ZRs9sanZz0S6zM0On1QdV6Ua5Feppq6ZpfCCKUD2wr8HKAtYui9cMoE90OnvfmAGexU3NGuFArhEgM4H3MSYmHijRyHlveqcAGN6ZPQ== sidblack@Sids-MacBook-Pro-2.local```
Sid#2121: Hey @Ruby ! Welcome to the AI Foundry! please read the google doc in the channel description for more info on what we're working on, and let us know if you can offer any help, or have any questions
Ruby#6688: Hi! Awesome project!
Ruby#6688: Why not make the github repo public?
Sid#2121: we're not done yet 🙂
Sid#2121: very much work in progress and we don't want to release unfinished code
Sid#2121: it will be, though
Ruby#6688: Gotcha |
Ruby#6688: I can contribute around 200$ worth of cpu if that helps.
Ruby#6688: on AWS.
Sid#2121: oh awesome
Sid#2121: it may well do. @bmk would be the person to ask about that. I'm not 100% sure how we're going to distribute processing yet since we've had several people reach out
Sid#2121: can i put your name on the doc?
Ruby#6688: Sure
Sid#2121: ❤️ we're very grateful. thanks for the offer.
wobbithobbit#2197: > Hi @wobbithobbit @MarcAK ! Welcome to the OpenAI of OpenAI! Please see the channel topic for info and don't hesitate to ask questions!
@Daj Thanks for the welcome! The community looks fabulous! Looking forward to contributing in any way I can 🙂
bmk#1476: I'm making a system to coordinate data collection
bmk#1476: still WIP
shawwn#3694: Lol “OpenAI of OpenAI”
entangledothers#3311: Joined the server.
clem#3783: Joined the server.
Odysseus#0766: Joined the server.
vladdy#8776: Joined the server.
shawwn#3694: @entangledothers @clem @Odysseus @vladdy
shawwn#3694: ha ha I was first
Daj#7482: Hey @entangledothers @clem @Odysseus @vladdy ! Welcome to the Grim Dark Future of ML Research, where there is only Tensorflow! Please see the channel topic for info and don't hesitate to ask questions!
shawwn#3694: (Welcome to the server everyone. It’s the place to replicate GPT-3) |
Daj#7482: You didn't do a silly custom message though :D
shawwn#3694: True
Sid#2121: aw i had a good one 😦
Ronaldo#4812: Joined the server.
Daj#7482: @Sid your chance!!!
Sid#2121: Hey @Ronaldo ! Welcome to the AGI Wizards' Meeting Hall! Let us know if you have any questions, or can spare any galleons
shawwn#3694: @Ronaldo Welcome to the server, where we do what we must, even though we can’t
Daj#7482: Both pretty good hah!
Commutative Conjecture#6969: Joined the server.
Sid#2121: mine was semi-cribbed from the gpt ones
Commutative Conjecture#6969: Hi
Ronaldo#4812: Ohh Thanks guys
Ronaldo#4812: Looking forward to contr7
Sid#2121: Hey @Commutative Conjecture ! welcome to the MOMA aka museum of memetic AI
Daj#7482: Hey @Commutative Conjecture ! Welcome to The LibreAI School of Tensorflow and Wizardry!
Ronaldo#4812: Contributing*
Daj#7482: Man it's becoming a competition lol
Daj#7482: Awesome Ronaldo!
Commutative Conjecture#6969: I just realized I should've joined a server like this a while ago
Daj#7482: You can take a look at the channel topic for some info on where we're at, and don't hesitate to ask questions! |
Daj#7482: Our current bottlenecks are mostly CPUs for data processing and people willing to brave Tensorflow Mesh/TPU coding
Daj#7482: Applies to everyone ofc heh
Commutative Conjecture#6969: Where can I find more details?
Are there people with new projects?
CPU as in money?
What's required wrt coding?
Commutative Conjecture#6969: I'd like to check out https://github.com/ConnorJL/GPTNeo
Daj#7482: > Where can I find more details?
> Are there people with new projects?
> CPU as in money?
> What's required wrt coding?
@Commutative Conjecture Channel topic/gdoc, various channels, @Daj @bmk @Sid , roughly in that order
It's mostly the three of us pushing for what needs to get done for GPT3+, but we've had some ideas for spin off projects when we find the time
Or just someone with access to a lot of cores to run the scripts on
Best to ask @Sid @bmk about the exact status, depends on what your skills are! If you can do TPU/TFM type stuff or are a quick learner, that'd probably be the most useful
Daj#7482: > I'd like to check out https://github.com/ConnorJL/GPTNeo
@Commutative Conjecture Send me your github username and I'll invite you!
zphang#7252: could I get an invite as well? same username
Commutative Conjecture#6969: I worked on many exotic models of computation, so I wouldn't mind starting TPU stuff
Daj#7482: > could I get an invite as well? same username |
@zphang I see several zphangs?
Commutative Conjecture#6969: Also, sorry, I didn't notice that the channel topic was collapsed and I missed most of it
Daj#7482: All good we're delighted to help any willing contributors onboard :)
georgejrjrjr#0817: Joined the server.
shawwn#3694: Just make the repo open
Daj#7482: Yea at this point it might make sense
shawwn#3694: @georgejrjrjr welcome to the place to be
Sid#2121: eh idk
Sid#2121: not yet
Daj#7482: We'll have a PoC soon
Sid#2121: it's really not done, i'm embarrassed by the code
Commutative Conjecture#6969: Is there any estimate of funding needs?
Sid#2121: B I G
shawwn#3694: The more the better
Sid#2121: nah idk
Sid#2121: most of the money would be going into cpu time
Sid#2121: and we've had a lot of people reach out and offer us cpu today
Sid#2121: our TPUs are part of TFRC
Sid#2121: so we're getting them free
shawwn#3694: Hmm. It’s surprising to hear someone turn down funding. Interesting tactic... |
Daj#7482: Money can make things complicated too
Daj#7482: We're not sayig no, but it is extra overhead
Sid#2121: @shawwn please don't take that as me turning down funding lol
Sid#2121: wasn't what i meant
Sid#2121: I just don't think anyone is that sure of how much we'll need right now
Commutative Conjecture#6969: @Sid
Thx for the answer
Sid#2121: but, yes, the more the better
bmk#1476: how much funding do we need?
***the more the better***
bmk#1476: we have an everything shortage rn
dvs#3865: Joined the server.
clem#3783: keep up the great work everyone, if we can help with hugging face at any point, let us know!
shawwn#3694: Got about tree fiddy? (More seriously, a server would be nice)
Don#5000: Joined the server.
Sid#2121: @clem are you from huggingface? apologies, hard to keep track of everyone at this point
Daj#7482: We'll be in contact hopefully @clem ! We've said what we're short on (which is many things, but mostly cores and TPU talent)
Sid#2121: Hey @dvs ! great to have you here
Daj#7482: Hello @dvs and @Don ! Welcome to the Council of LM Relations! Please see the channel topic for info and don't hesitate to ask questions! |
Sid#2121: you get an *even* more customised non formulaic welcome message from me @dvs because ya make great vids
dvs#3865: aw thanks 😊
Daj#7482: link said vids pls
dvs#3865: lets see if you still feel that way when I add ads to the videos
dvs#3865: https://www.youtube.com/channel/UCaZuPdmZ380SFUMKHVsv_AA
Commutative Conjecture#6969: Any recommended links for all relevant arch&tricks stuff?
Daj#7482: Depends on what your current level is Champiz
Daj#7482: As in level of understanding
Sid#2121: @Commutative Conjecture there's lots in the resources section
Sid#2121: #tfmesh is the most relevant
Daj#7482: Cool stuff @dvs will check it out 👍
Sid#2121: > lets see if you still feel that way when I add ads to the videos
@dvs man's gotta eat
dvs#3865: its mostly stuff for the classes I teach to artists so may or may not be helpful depending on how technical you are
dvs#3865: mans got gpus to pay for
Commutative Conjecture#6969: @Sid
Thanks, I was looking at documentation instead
Daj#7482: Documentation is kinda a mess
gstqtfr#2728: Joined the server.
Daj#7482: The gdoc or kanban on the repo are probably the best sources for what needs doing |
Daj#7482: Hey @gstqtfr ! Welcome to the World's Most Disorganized AI Lab! Please see the channel topic for info and don't hesitate to ask questions!
bmk#1476: aaaaa so much happening
bmk#1476: I'm writing up the CC data stuff once and for all
Sid#2121: @dvs lmao at the big bernie at the top of your channel
shawwn#3694: bikeshedding
Isaac McHorse#2007: ALL PLAY AND NO WORK MEANS I'M GOING TO BE AWFUL IN LIFE.
bmk#1476: shikebedding
Daj#7482: tfw so much to do no time left to give Isaac more silly features
Sid#2121: but i wanna mek the logo spin
Daj#7482: soon
bmk#1476: srsly tho
Daj#7482: Soon 1.5B will live
Sid#2121: eh i might do it later, as a treat
bmk#1476: no bikeshedding
Isaac McHorse#2007: WHY WOULD I PLAY?! YOU ARE THE SOBBING ONE
Daj#7482: Yikes that one's aggressive haha
Sid#2121: coding in processing is like a holiday compared to tfmesh
Daj#7482: We'll give ourselves a holiday once we have the first full model training
Sid#2121: okok
Daj#7482: :D |
bmk#1476: no u gotta help me with CCTC
Sid#2121: I can go wherever
Sid#2121: what do you need
bmk#1476: (after mtf)
Sid#2121: @bmk i thought we were doing odd-even?
bmk#1476: ?
bmk#1476: I'm working on CCTC
Sid#2121: ah ok
bmk#1476: i was saying after mtf is up
Sid#2121: wait so
Sid#2121: you're working on mtf or cctc
bmk#1476: cctc
bmk#1476: not mtf
Sid#2121: i can do odd even i just need you to point me to the right dims to change, you did all the coding for the layers and i'm not 100% sure which ones you intend to change
step#7364: Joined the server.
shawwn#3694: @step welcome to the server, where ten people greet you simultaneously
shawwn#3694: *ahem*
shawwn#3694: when they're not slacking
randomuser#6167: Joined the server.
Daj#7482: Wow Shawn, we do our best :D |
Sid#2121: sorry, we were slacking from the important work of greeting people by working on our silly model 😆
shawwn#3694: alright, I'm back on laptop. Let me get your SSH set up...
shawwn#3694: _pulls up pubkeys_
Daj#7482: Hey @step @randomuser ! Welcome to the Back Alley LM Market! Please see the channel topic for info and don't hesitate to ask questions!
bmk#1476: anyone wanna help with CCTC
bmk#1476: i could use some help rn
Sid#2121: I mean, sure, i just offered
bmk#1476: but mtf
bmk#1476: isnt that more important
Sid#2121: but idk where to look ;___;
bmk#1476: ok
bmk#1476: come do cctc then
Sid#2121: no i mean, you are right
shawwn#3694: @bmk `ssh bmk@nuck.tensorfork.com` (setting up daj and sid now)
Sid#2121: just tell me which layers to change
bmk#1476: idk tho
Sid#2121: ah ok
bmk#1476: i'll have to pore over it slowly some time
Sid#2121: well i'll have to get deep into it then
bmk#1476: https://github.com/leogao2/LLMD-CommonCrawl/blob/master/v2/commoncrawl.py#L59 |
krysis#2720: Joined the server.
bmk#1476: CCTC: see where it says English
bmk#1476: https://github.com/miso-belica/jusText/tree/dev/justext/stoplists
bmk#1476: these are the languages it supports
Sid#2121: nice
bmk#1476: https://github.com/miso-belica/jusText/tree/dev/justext/stoplists
bmk#1476: this is a language detector
Sid#2121: are you asking which we want?
bmk#1476: plug b into a
Sid#2121: ah k
Sid#2121: well, i'm gonna do mesh
bmk#1476: ok
Daj#7482: Hi @krysis ! Welcome to Cyberpunk OpenAI! Please see the channel topic for info and don't hesitate to ask questions!
bmk#1476: wait i just realised i pasted the wrong link lol
bmk#1476: https://becominghuman.ai/a-handy-pre-trained-model-for-language-identification-cadd89db9db8
bmk#1476: there we go
bmk#1476: its blogspam but meh
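(Roughly what "plug b into a" means in code — a sketch only: `lid.176.bin` is fastText's published language-ID model, but the ISO-code-to-stoplist mapping and the function name here are stand-ins, not the real CCTC code:)

```python
import fasttext  # language-ID model from fasttext.cc
import justext

lang_model = fasttext.load_model("lid.176.bin")

# map fastText ISO codes onto jusText stoplist names (illustrative subset)
STOPLISTS = {"en": "English", "de": "German", "fr": "French", "es": "Spanish"}

def extract_text(html, plain_text):
    # fastText's predict chokes on newlines, so flatten the text first
    labels, _probs = lang_model.predict(plain_text.replace("\n", " "))
    iso = labels[0].replace("__label__", "")
    if iso not in STOPLISTS:
        return []  # language jusText has no stoplist for: skip the page
    paragraphs = justext.justext(html, justext.get_stoplist(STOPLISTS[iso]))
    return [p.text for p in paragraphs if not p.is_boilerplate]
```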
Daj#7482: #the-pile so we don't clog up general please :D
bmk#1476: ok
aegis#2320: Joined the server. |
Daj#7482: Hey @aegis ! Welcome to 2 Devs, One Tensorflow! Please see the channel topic for info and don't hesitate to ask questions!
Sid#2121: lmao
Daj#7482: @aegis Through the TFRC we've got access to a ton of TPUs
Daj#7482: It's still a _huge_ beast of a model to train but it's not _completely_ infeasible
aegis#2320: oh cool
aegis#2320: is this tpu pod(s)?
Daj#7482: Yep
Daj#7482: We currently run on v3-512s
aegis#2320: do you have the weight distribution technique working or are you still training in memory?
Daj#7482: You should read the gdoc haha
aegis#2320: lol, on it
Daj#7482: We use Tensorflow Mesh for model parallelism
Daj#7482: It's...sorta working so far haha
Daj#7482: GPT3 can never fit on a single core, so it has to be split
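(A toy sketch of what that splitting looks like in Mesh TensorFlow — a two-layer network in the style of the mtf examples, not the actual GPTNeo code: every tensor dimension gets a name, and a layout maps dimension names onto the hardware mesh:)

```python
import mesh_tensorflow as mtf
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()  # mtf builds classic TF graphs

graph = mtf.Graph()
mesh = mtf.Mesh(graph, "my_mesh")

batch_dim = mtf.Dimension("batch", 8)
io_dim = mtf.Dimension("io", 784)
hidden_dim = mtf.Dimension("hidden", 4096)
classes_dim = mtf.Dimension("classes", 10)

images = tf.zeros([8, 784])  # stand-in for a real input batch
x = mtf.import_tf_tensor(mesh, images, shape=mtf.Shape([batch_dim, io_dim]))
w1 = mtf.get_variable(mesh, "w1", mtf.Shape([io_dim, hidden_dim]))
w2 = mtf.get_variable(mesh, "w2", mtf.Shape([hidden_dim, classes_dim]))
hidden = mtf.relu(mtf.einsum([x, w1], output_shape=mtf.Shape([batch_dim, hidden_dim])))
logits = mtf.einsum([hidden, w2], output_shape=mtf.Shape([batch_dim, classes_dim]))

# model parallelism lives entirely in the layout: splitting "hidden" across an
# 8-core mesh means each core holds 1/8 of w1 and w2 (lowering the graph onto
# actual TPU devices is omitted here)
mesh_shape = mtf.convert_to_shape("all:8")
layout_rules = mtf.convert_to_layout_rules("hidden:all")
```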
Casey#6294: Joined the server.
GPTForMe#6009: Joined the server.
Daj#7482: Hey @Casey @GPTForMe ! Welcome to the Text Farms! Please see the channel topic for info and don't hesitate to ask questions!
Deleted User#0000: Joined the server.
Daj#7482: Hey @Deleted User ! Welcome to the All Dev Moderate Amounts of Memes No Bikeshedding Zone! Please see the channel topic for info and don't hesitate to ask questions!
Isaac McHorse#2007: I DON'T HAVE TIME FOR THAT! |
GptForMe#9886: Joined the server.
GptForMe#9886: @Daj Cores? As in CPU OR gpu?
Daj#7482: CPU atm
Daj#7482: We don't use GPUs
Daj#7482: We need CPU to process the dataset, we train on TPUs
aegis#2320: do you have an idea of what hardware you'll need for inference yet?
GptForMe#9886: @Daj Your own or in the cloud? Are the cloud platforms offering TPU's now?
aegis#2320: based on openai's estimated price I think they have a way to stream computation without having enough gpu/tpu memory for the weights
Daj#7482: We're not currently spending much time thinking about inference, that's long off
aegis#2320: your arch might affect that though
Daj#7482: There are several ways to do it like L2L
Daj#7482: @GPTForMe We have a bunch of TPUs from the TFRC Program
Daj#7482: for free
Daj#7482: > your arch might affect that though
@aegis Unlikely, sampling is really not that hard of a problem compared to training
eigenjoy#5649: Joined the server.
Daj#7482: Sampling _fast_ is a totally different story, but again a story for another day hah
aegis#2320: yeah that's what I meant 😛
GptForMe#9886: @Daj Nice! Well done. Ok, still can't find a use for my 64-core (CPU) box. No TPU's. 🙂
aegis#2320: sure you can sample on cpu with an nvme ssd if you want to wait |
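(A toy illustration of the streaming idea — just the shape of it, nothing like the real L2L implementation: keep one layer's weights in RAM at a time and pull the rest off disk as you go, trading speed for memory:)

```python
import numpy as np

def stream_forward(x, layer_files):
    """layer_files: paths to .npz files, each holding one layer's weight matrix "w"."""
    for path in layer_files:
        w = np.load(path)["w"]      # load just this layer from the SSD
        x = np.maximum(x @ w, 0.0)  # apply it (toy ReLU MLP standing in for a block)
        del w                       # free it before touching the next layer
    return x
```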
aegis#2320: do you have a lot of internet bandwidth gptforme?
Daj#7482: Hey @eigenjoy ! Welcome to the Freerange Tensor Farm! Please see the channel topic for info and don't hesitate to ask questions!
Daj#7482: > @Daj Nice! Well done. Ok, still can't find a use for my 64-core (CPU) box. No TPU's. 🙂
@GptForMe We can definitely put those cores to use for crunching the training data! We're trying to filter 30PB of Common Crawl data down to ~10TB
Skylion#0368: FastText is good and more than sufficient for language detection
GptForMe#9886: Does anyone have any approximate heuristics for how susceptible the GPT transformer architecture is to "forgetting existing training" when subjected to subsequent training, compared to pre-deep-learning architectures? Is it not a problem due to the huge number of parameters? Or is it something you still really have to struggle with?
aegis#2320: you are talking about catastrophic forgetting?
Daj#7482: There's a lot of people experimenting with GPT finetuning
aegis#2320: the whitepaper said they basically only saw every input sequence once
Daj#7482: You should be able to find plenty info with some googling I think
aegis#2320: so I don't think it's a huge issue
Daj#7482: In general, GPT remembers _really_ well
Daj#7482: As aegis said
GptForMe#9886: Yes, where it at least badly smears existing associations.
GptForMe#9886: Thanks Daj, good to know.
bmk#1476: all cpus are appreciated
bmk#1476: we need 40000 core-days of compute in total to process CCTC
bmk#1476: obviously we dont need all of it for GPT3 or 1T itself but we're producing data useful for other researchers too
aegis#2320: https://cdn.discordapp.com/attachments/729741769738158194/733405139582582924/Screen_Shot_2020-07-16_at_12.30.05_PM.png
bmk#1476: so we need 100 cores to finish this in a year |
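(For concreteness, the arithmetic behind those two figures, using only the numbers quoted above:)

```python
core_days = 40_000        # bmk's estimate for processing all of CCTC
cores = 100
print(core_days / cores)  # -> 400.0 days, i.e. a bit over a year of wall time
```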
bmk#1476: what we're doing is collecting way more than we need for future use basically
shawwn#3694: I would recommend doing the math on how much of this data you're going to be able to train on
shawwn#3694: yes
shawwn#3694: by "way more" you mean "far, far more than the model could feasibly be trained on"
bmk#1476: we already did
aegis#2320: how are you estimating 40k core-days?
shawwn#3694: ah. I retract that then
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/733405463105896458/unknown.png
aegis#2320: I saw python in #the-pile? at this scale, isn't optimizing the filter (e.g. native code) reasonable?
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/733405510061260950/unknown.png
bmk#1476: we dont have the developer time to do that lol
Daj#7482: If someone has the skills for that @aegis yes probably
bmk#1476: if you're a C++/Rust/Go/Haskell dev your skills would be greatly appreciated
aegis#2320: some of everything yes
bmk#1476: generic CCTC info above
aegis#2320: have you at least tried pypy? I do most of my corpus filtering with that out of laziness lol
Sid#2121: > Sampling _fast_ is a totally different story, but again a story for anothe rday hah
@Daj tfmesh should actually help us optimize sampling, too
aegis#2320: it's a fair bit faster than cpython for the filtering I've done
Daj#7482: Yea PyPy or Cython would be interesting to test |
aegis#2320: I use it for my openwebtext work
Daj#7482: We're just really at/beyond the limit of our available dev time lol
aegis#2320: it's way faster than cpython at the basic stuff I've been doing
Daj#7482: We're pouring every free minute we have into this and need more people!
aegis#2320: you can literally just run pypy in place of cpython if you manage to install the same packages (python3 script -> pypy3 script)
bmk#1476: look right now we're stretched unimaginably thin
Daj#7482: I haven't looked into it, we will add it to the list
bmk#1476: if you think you can do it better ***please do it for us, we can use the help***
Daj#7482: Any and all help appreciated, we have plenty of tasks in all levels of difficulty and obscurity
bmk#1476: I'm taking a break rn
bmk#1476: i'm fried
Daj#7482: Sounds good, you deserve it haha! Today was a crazy day and a lot got done
Sid#2121: > Push code, not yourself
@Daj @bmk
Zach Dwiel#0475: Joined the server.
dmrd#2321: Joined the server.
guru4777#2745: Joined the server.
Daj#7482: Hey @Zach Dwiel @dmrd @guru4777 ! Welcome to The Little ML Lab That Could! Please see the channel topic for info and don't hesitate to ask questions!
Merzmensch#9934: Joined the server.
Daj#7482: Hey @Merzmensch ! Welcome to The Blockchain™️ Enabled Cloud-native™️ Decentralized™️ AI™️ Lab LibreAI™️! Please see the channel topic for info and don't hesitate to ask questions! |
tapanc#8821: Joined the server.
Daj#7482: Hey @tapanc ! Welcome to A Mutually Abusive Relationship Between A Few Devs And Their TPUs! Please see the channel topic for info and don't hesitate to ask questions!
shawwn#3694: cool, so the server has more people in two days than I had in two months
Daj#7482: Today was a pretty wild day, guess GPT and HF have a lot of name power
Daj#7482: Lets see how many people stick, so far few have stepped up to actually help haha
aegis#2320: my full time thing is speech recognition, so I'm very invested in better language modeling corpora no matter what
Daj#7482: Cool stuff, well we do hope to eventually publish a really good dataset from all this!
bmk#1476: if all goes well we will have a positively massive corpus of very high quality text data for you to work with!
aegis#2320: I have a few servers and a _lot_ of disk space but I think my main limit is bandwidth (total gigabytes, not line rate)
Daj#7482: We'll take what we can get I think
Daj#7482: Kinda our scrappy modus operandi
aegis#2320: if I temporarily solved the bandwidth problem I could do a significant amount of text processing very cheaply and store the result
Daj#7482: That would be _awesome_!
raf#5075: Joined the server.
Daj#7482: Stick around and once our pipelines are a bit more fleshed out we'll put any compute to good use
peterjliu#7734: Joined the server.
shawwn#3694: @raf welcome to the riff
aegis#2320: my main server is colo'd and has about 40tb disk free, the datacenter and server both have 10gbit, so if there was funding to enable an unmetered 10gbit pipe I could probably saturate it for a while
Daj#7482: Hey @raf @peterjliu ! Welcome to the LM Farm Upstate! Please see the channel topic for info and don't hesitate to ask questions!
shawwn#3694: @aegis ssh key? |
Daj#7482: We don't really have any funding or figured out how we wanna handle money, but definitely interesting @aegis
Daj#7482: We'll see how everything develops
shawwn#3694: the dataset might become more valuable than the project, depending on how the training goes
Daj#7482: Yea I think that's a likely outcome
shawwn#3694: it'd be worth securing a spot for it. It's hard to store TB's of data for extended periods
shawwn#3694: I don't have any ideas yet, but it's in the back of my mind.
aegis#2320: if we need to store 10tb of data I can mirror it but can't serve it up very often
Daj#7482: Yea same, back of the mind atm
Daj#7482: Torrents? lol
Daj#7482: We'll figure something out, maybe bug Google about it
bmk#1476: Same here
bmk#1476: I'm willing to host 10-20 TB of data at home and seed the torrent
Daj#7482: It feels appropriate for our data to be on a torrent lol
bmk#1476: My upload bandwidth is somewhat limited though
Daj#7482: But yeah, bridge to cross when we come to it
bmk#1476: It would take me a month and a half to upload 10TB
bmk#1476: So I can't be the *only* seeder
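(Sanity-checking that figure with rough arithmetic — the 45 days is just "a month and a half" taken literally:)

```python
terabytes = 10
days = 45
mbit_per_s = terabytes * 1e12 * 8 / (days * 86400) / 1e6
print(round(mbit_per_s))  # -> 21, i.e. ~21 Mbit/s of sustained upload
```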
Daj#7482: We'll figure something out
Daj#7482: I could imagine Google/TFRC lending a hand
Daj#7482: Lets just get the dataset done first hah |
shawwn#3694: fwiw, torrents almost always die, except for super popular datasets like imagenet
Zach Dwiel#0475: you might also check out dat
shawwn#3694: did dat ever go anywhere?
shawwn#3694: I briefly heard about it like, two years ago
shawwn#3694: is it really suitable for storing TB's of data?
Zach Dwiel#0475: They have made quite a bit of progress, but the command line tool has lagged a bit
Zach Dwiel#0475: I'm pretty sure it was designed with at least TB's of data in mind, but i am not 100% sure
Daj#7482: Lets put it on 𝓣𝓗𝓔 𝓑𝓛𝓞𝓒𝓚𝓒𝓗𝓐𝓘𝓝
shawwn#3694: translation error; message not received
Daj#7482: Really? Does Discord not support Unicode?
aegis#2320: works here
aegis#2320: probably a font issue
Daj#7482: Ah yeah probably
Daj#7482: > Lets put it on THE BLOCKCHAIN
shawwn#3694: https://cdn.discordapp.com/attachments/729741769738158194/733414422386704444/unknown.png
shawwn#3694: perhaps others can read that; my brain refused to process it
Daj#7482: Hah so the font is just terrible
shawwn#3694: is it? hm.
shawwn#3694: apple, tut tut.
Daj#7482: That "I" does not look like an I lol |
shawwn#3694: yes. definite F
Daj#7482: To be fair, I and l is the worst
shawwn#3694: bikeshedding
Isaac McHorse#2007: WHAT ARE YOU DOING BIKESHEDDING? BACK TO WORK!
shawwn#3694: man I love that bot.
Daj#7482: Haha
Daj#7482: I'm gonna go to bed soon anyways
shawwn#3694: what other features would you add to McHorseFace? Is the code up somewhere?
Daj#7482: I wanted it to say something sarcastic whenever we say something _should_ work
Daj#7482: Automatic unique welcome messages
Daj#7482: Automatically constantly change the server icon to slight variations
Daj#7482: If we ever find the time to do anything like that haha. Sid is the one doing the bot
aegis#2320: add gpt3 to it 🤔
Daj#7482: Thought about that, but also didn't want to abuse it
Daj#7482: Might do so anyways we'll see hah
aegis#2320: for welcome messages at least
Daj#7482: Those are actually tricky to generate, I tried earlier
Daj#7482: Needs a lot of handpicking
Daj#7482: Maybe I'll just write like 100 myself
shawwn#3694: I'm sure if you mess up the welcome message, people would get offended and immediately leave /s |
Daj#7482: I may have already done that many lol
Daj#7482: Haha yeah I know, but it's a funny little tradition
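(A rough sketch of the GPT-3-generated welcome messages being discussed here, using the completion API as it existed at the time; the few-shot examples are lifted from welcomes earlier in this chat, and the handpicking Daj mentions would still be needed.)

```python
import openai  # 2020-era API; needs openai.api_key set beforehand

# Few-shot prompt seeded with real welcome messages from this server.
prompt = (
    "Welcome to the Large Language Model Appreciation Society!\n"
    "Welcome to the LM Farm Upstate!\n"
    "Welcome to A Mutually Abusive Relationship Between A Few Devs And Their TPUs!\n"
    "Welcome to"
)
resp = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=30,
    temperature=0.9,
    stop="\n",
)
print("Welcome to" + resp.choices[0].text)  # handpick before posting!
```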
Deleted User#0000: Joined the server.
shawwn#3694: @Deleted User Hi, welcome to Ethi
Daj#7482: Hey @Deleted User ! Welcome to the Large Language Model Appreciation Society! Please see the channel topic for info and don't hesitate to ask questions!
Deleted User#0000: Hey, thanks! Took 5 mins before I could post, so I got to read around. Enjoyed the discussion of whether or not to automate welcome messages above... it definitely makes me wonder who really sent those 😉
shawwn#3694: there's a 5min cooldown?
shawwn#3694: no wonder most people join and then leave... Hm.
Daj#7482: There is?
Daj#7482: Oh
Sid#2121: yeah in settings
Sid#2121: thought you knew @Daj
Daj#7482: Yea the account needs to be 5 min old
Sid#2121: is this an awful thing? lol
Daj#7482: Sorry, should I turn that off? I forgot I turned it on
shawwn#3694: *shrug*
Daj#7482: It seemed very reasonable
Sid#2121: seems fair to me
Sid#2121: have a read around first
shawwn#3694: I guess if people from OpenAI and HF are joining and staying, it can't be too bad. |
Daj#7482: It definitely was a nuisance today, I'll turn it down for now, thanks for alerting us @Deleted User !
Daj#7482: Yea it seemed to not cause too much trouble, we'll see what a lower setting does _shrug_
shawwn#3694: one thing that keeps me from lurking on this server more is that there's no real place to show off one's own work
shawwn#3694: but I lurk often enough.
Daj#7482: Well, _technically_ #the-faraday-cage-archive is show off, but yea this is a very project focused discord
shawwn#3694: and that's not really the point of this server anyway.
shawwn#3694: yeah.
Deleted User#0000: No worries. Guess it stops proper spammers; doubt it would stop people genuinely interested - how hard is it to find 5 mins of things to do on the internet? More likely to get distracted than to intentionally leave
Daj#7482: If you want a channel for your project we can set that up
Daj#7482: True Jelly, appreciate the patience :D
shawwn#3694: ehh, I don't really have a project. It's mostly things like the current BigGAN run https://cdn.discordapp.com/attachments/729741769738158194/733418748991766598/individualImage.png
shawwn#3694: "behold the blobs"
Daj#7482: Put it in #the-faraday-cage-archive or #art !
shawwn#3694: works
Daj#7482: I love them haha
shawwn#3694: I didn't realize TFC was for anything other than !scp
es#4913: Joined the server.
Daj#7482: Yea we recently repurposed it and didn't really advertise it
shawwn#3694: @es welcome to the server where you can't talk for 60 seconds, nyah-nyah.
Daj#7482: Hey @es ! Welcome to the A~~u~~rtistic AI Containment Zone! Please see the channel topic for info and don't hesitate to ask questions! |
Daj#7482: Haha I turned the wait time off for now shawwn
Daj#7482: I'mma be heading to bed (read: Continue checking Discord until I fall asleep like a degenerate). Crazy day today, thanks for everyone being so awesome and can't wait to see where this project goes next 👍
Sid#2121: man, you have a better sleep schedule than i do
Sid#2121: night!
Sid#2121: and, echoing that statement
superguy#8832: Joined the server.
kevinw#4330: Joined the server.
Sid#2121: Hey @superguy , @kevinw ! Welcome to LLMA (Large Language Modelholics Anonymous ™️ ). Check the channel description for an overview of our project, please ask if you have any questions!
shawwn#3694: gpt-3 when?
bmk#1476: 1T when?
shawwn#3694: got a tensorboard link yet?
Sid#2121: @shawwn we're still waiting for our data to encode. also idk how to set up a public tensorboard
shawwn#3694: hmm, those two statements seem unrelated. is a tensorboard running?
kevinw#4330: thanks will do
Sid#2121: yeah, tensorboard works fine now
Sid#2121: but also, there's no point looking at it bc it's not really properly training yet
bmk#1476: I believe he means we need the data to start training
shawwn#3694: it's nice to have; makes things feel more real. But yeah, not much point I suppose
bmk#1476: Are you just looking for memory consumption info?
shawwn#3694: I set up DNS to specific server IP addresses; e.g. my current biggan run is at http://goku.shawwn.com:1088/#images |
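(On Sid's public-tensorboard question: a minimal sketch, assuming the machine's port is reachable from outside like shawwn's goku.shawwn.com:1088 box; the logdir and port below are placeholders.)

```python
from tensorboard import program

# Bind TensorBoard to all interfaces so it's reachable via the box's public IP/DNS.
tb = program.TensorBoard()
tb.configure(argv=[
    None,
    "--logdir", "gs://our-bucket/run1",  # hypothetical GCS log location
    "--host", "0.0.0.0",
    "--port", "1088",
])
url = tb.launch()  # runs in a background thread; keep the process alive
print(f"TensorBoard serving at {url}")
```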