Aran Komatsuzaki#5714: not perfect robustness or anything
bmk#1476: GPT3 is already more robust than many people
Daj#7482: I just wanted to make a meme at humans
Aran Komatsuzaki#5714: ok got it
Daj#7482: Don't get me wrong, some of my best friends are human
Ravna#1831: The "algorithm improvements" may as well just be overfitting the test set which is small and reused a lot.
bmk#1476: Pinned a message.
Daj#7482: I also have some suspicions along what you're saying, @Ravna
Aran Komatsuzaki#5714: i think overfitting to test set is inevitable, so we need to keep expanding the list of eval tasks.
Aran Komatsuzaki#5714: gpt-3 was evaluated on more diverse tasks than gpt-2
Aran Komatsuzaki#5714: those who are still sticking with cifar-10 are suffering from the exact problem, i guess
Daj#7482: New arxiv paper: "Solving MNIST with GPT3"
Aran Komatsuzaki#5714: mnist shouldn't exist in 2020
Aran Komatsuzaki#5714: should've died
Daj#7482: You're right
Daj#7482: We should be using FizzBuzz
goolulusaurs#1571: I am really interested in systems that generate their own tasks, like POET and some kinds of multi agent RL.
goolulusaurs#1571: I wonder if GPT3 could be prompted in the right way to generate new kinds of language tasks. I am not sure what it would mean to evaluate it on them though.
aquajet#7800: I think there is some short term value to distillation. One of the biggest downsides (to me) to large models is that you're dependent on an Internet connection and access to a large datacenter. Distillation helps in running lms on smaller devices and makes it more accessible to people. The better solution though would be to make tpus and gpus cheaper with more compute and memory but that takes a lot of dev time. Also Moore's law is slowing down
Ravna#1831: The ultimate domain-randomization RL: random world, random reward. It's equivalent to training it on the multiverse, which includes our world as a special instance. :brr:
Daj#7482: It's provably impossible to have an agent that performs better than random on average in every possible universe
goolulusaurs#1571: There has been some pretty amazing stuff using nns with fixed random weights, e.g. reservoir computing.
Ravna#1831: Yes, but NNs have some inherent priors, and that would hopefully be enough for it to choose to optimize harder on some of the universes.
Daj#7482: I feel like we don't need to randomize that hard lol
Daj#7482: Or maybe that will spawn multiverse NN god
Daj#7482: That promptly makes us into paperclips, because it turns out that _is_ in fact the ultimate true goal
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/744244293266440232/unknown.png
bmk#1476: so close and yet so far
goolulusaurs#1571: is that from the Solving FizzBuzz with tensorflow article from a few years ago?
bmk#1476: no this is gpt3
Daj#7482: > so close and yet so far
@bmk Strawman Alignment Skeptic: "And so we have proven GPT3 is less dangerous than a FizzBuzz program!"
Ravna#1831: Yeah as for reservoir computing, there's an experiment report on replicating the World Model paper. Turns out it doesn't matter if the LSTM part is trained or not. The initial random one is as good as the trained one. But it might just be because the problem it is supposed to solve is too easy.
Daj#7482: Is there a reference for that, @Ravna ?
bmk#1476: assume GPT3 exists -> GPT3 cant do FizzBuzz -> things unable to do FizzBuzz cant possibly write beautiful moving poetry -> GPT3 can write beautiful moving poetry -> By contradiction, GPT3 does not exist. QED
goolulusaurs#1571: Yes, I've worked on the same thing with world models.
Ravna#1831: @Daj https://ctallec.github.io/world-models/
ankit#1191: Joined the server.
Daj#7482: > assume GPT3 exists -> GPT3 is less dangerous than a FizzBuzz program -> things less dangerous than FizzBuzz cant possibly write beautiful moving poetry -> GPT3 can write beautiful moving poetry -> By contradiction, GPT3 does not exist. QED
@bmk I feel like I've heard worse arguments on Twitter
bmk#1476: lol
goolulusaurs#1571: I've also done some stuff with evolving masks on the weights of a randomly weighted network. What's cool about it is that if each mask is an individual in the population, then you can evaluate the behavior of the whole population with a single batched forward pass.
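A minimal numpy sketch of that batched-population trick (the dimensions, tanh activation, and all names here are made up for illustration, not from any actual project code): the weights are fixed and shared, each individual is a binary mask, and broadcasting evaluates every masked network in one pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random weights, shared by the whole population (never trained).
in_dim, hidden, out_dim, pop_size = 8, 32, 4, 16
W1 = rng.normal(size=(in_dim, hidden))
W2 = rng.normal(size=(hidden, out_dim))

# Each individual in the population is a binary mask over the weights.
masks1 = rng.integers(0, 2, size=(pop_size, in_dim, hidden))
masks2 = rng.integers(0, 2, size=(pop_size, hidden, out_dim))

def forward_population(x):
    """Evaluate every masked network on input batch x in one shot.

    x: (batch, in_dim) -> (pop_size, batch, out_dim)
    """
    # masks * W broadcasts the shared weights against every mask,
    # then einsum batches the matmul over the population axis.
    h = np.tanh(np.einsum("bi,pih->pbh", x, masks1 * W1))
    return np.einsum("pbh,pho->pbo", h, masks2 * W2)

x = rng.normal(size=(5, in_dim))
out = forward_population(x)
print(out.shape)  # (16, 5, 4): one output per individual per input
```

The fitness of all 16 individuals can then be scored from `out` at once, which is what makes the evolutionary loop cheap.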
Ravna#1831: > spawn multiverse NN god
Ravna#1831: Scott Alexander half jokingly wrote an article. It suggests that sufficiently powerful superintelligent agents will do acausal trades with their multiverse counterparts. So their utility functions will all converge somewhat.🤣
Semantic Aberration#3692: > GPT3 is not cheap for its current value per se. The real value lies in its future versions.
@Ravna Obvious about future versions, but at several cents per 100 pages of text, e.g. summaries of complicated legal documents/passages it's a bargain.
Daj#7482: Acausal trade is a legitimate infohazard haha
Daj#7482: Please consume your acausal information responsibly
Semantic Aberration#3692: Thankfully AI alignment is side-stepped by autoregressive objective, barring some meta-opt issues (which are purely conjectural for now)
Daj#7482: I strongly disagree with that assessment @Semantic Aberration but I don't have time to make the full argument right this moment sorry heh
Daj#7482: It's been made like three times in #alignment-general but I don't fault anyone for not wanting to sift through that
Semantic Aberration#3692: @Daj Ok, I will look into ethics' log for past arguments
Ravna#1831: My argument right now is that GPT3, like most DL projects in the past 5 years, can only be trusted in categories where if you are right you get a lot and if you are wrong you lose little. Like generating faces, where bad faces don't harm anyone.
Semantic Aberration#3692: @Ravna Sure, "confusion matrix is your product", works for some niches
Daj#7482: > @Daj Ok, I will look into ethics' log for past arguments
@Semantic Aberration I'll be compiling it into a blog post (and a SSC meetup talk) in the near future
Semantic Aberration#3692: @Ravna
> like most DL projects in the past 5 years, can only be trusted in categories where if you are right you get a lot and if you are wrong you lose little.
Worse than human level radiologist is better than none, for people who can't afford a medschooled doctor to attend to their issues/bone photos. Classical SV VC argument (lol from its assumptions and implications, but it has some truth to it)
Semantic Aberration#3692: I guess the sad truth is that GPT3 is already beating many humans on some cognitive axes. Having a smarter-than-you advice giver in your smartphone could be very valuable, I guess.
Semantic Aberration#3692: For 150 IQ people like Scott though, it's all premature, they better think for themselves.
Aran Komatsuzaki#5714: GPT3 can definitely already write better than me
Ravna#1831: If only we could accept non-artificially-high safety standards... We could have flying cars already. Crashing to the ground from the sky wouldn't kill as many people as, say, coal mining.
Aran Komatsuzaki#5714: in writing some stuffs
Semantic Aberration#3692: @Ravna High safety standards are a Western feature/reality; it is not like that in the rest of the world.
Ravna#1831: We could also have much cheaper nuclear power if we cut all this waste-processing nonsense.
Daj#7482: A friend of mine used to work on a nuclear submarine. After a hurricane on Guam, they docked and offered to provide the city with electricity from their reactor. The mayor refused as he didn't want "radiation to leak through the electricity cables"
bmk#1476: wat
bmk#1476: **wat**
lugosch#4764: Joined the server.
Daj#7482: This is very much common everywhere
Daj#7482: lol
Semantic Aberration#3692: I think it would be very interesting/high profile to train GPT2.5 on med textbooks and distill/quantize it so it can be run on, say, snapdragon 865. Though westerners and textbook IP holders would be against [the assumptions and implications of] the whole ordeal, so I won't do it for now.
Semantic Aberration#3692: There is so much knowledge locked up in the books vast majority of people cannot read, it could speak for itself 🤔
Ravna#1831: Yeah it would be a good upgrade to google-based self-diagnosis that I am frequently guilty of
Daj#7482: Sounds like something other's wouldn't do and would make The Serious People™️ very upset despite having clear benefits
Daj#7482: in other words, just our niche
bmk#1476: the one big blocking issue is we need someone piratey and with the cpus and disk space necessary
bmk#1476: unfortunately while hetzner is cheap, germany doesnt like pirates
Semantic Aberration#3692: @Daj I'd be careful with the med textbook IP mafia if I sought legitimacy, so I won't do it on your behalf for now. But private fine-tuning is possible and likely.
Semantic Aberration#3692: @bmk That's a solvable problem, I could buy used xeon server for that.
Semantic Aberration#3692: Also I don't think parsing several GBs of textbooks is a problem, for me
bmk#1476: i mean
bmk#1476: its a *bit* more than several
Daj#7482: I'm happy to hear other people's views here but I feel like we're the exact amount of pirate to publicly claim we of _course_ did not do that, but Someone Who Isn't Us might have and this might be that exact model
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/744251504927441007/unknown.png
Semantic Aberration#3692: @bmk I don't think there is more than a couple dozen GB valuable medical textbooks around, even that is including many older and low quality ones. Of course I only mean english, for convenience.
bmk#1476: oh
bmk#1476: i was talking about all the books
bmk#1476: are those not usually in libgen?
Semantic Aberration#3692: @Daj Ok then, I'm onboard with pdf parsing
Daj#7482: That's definitely one of the projects that could have big upsides and be worth publishing
Semantic Aberration#3692: @bmk I love libgen but it contains a lot of copies, OCRs and older not factually valuable books
bmk#1476: but its *so big*
bmk#1476: (big = good)
Ravna#1831: you just need to prompt it right so that it won't look for the old and wrong stuff😆
Daj#7482: "**The cure for my symptoms is:** Bloodletting and leeches."
bmk#1476: at least libgen is 100x better than the interwobs in terms of quality
Semantic Aberration#3692: I think a pretty obvious corpus enhancement is a short prepended string encoding source, year and topic, say, for a random half of the dataset. It's a hook for future prompt programming
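A hypothetical sketch of what that prepended-metadata scheme could look like (the header format, field names, and 50% tagging rate are just the suggestion above, not a tested recipe):

```python
import random

def tag_document(text, meta, tag_prob=0.5, rng=None):
    """Prepend a short metadata header with probability tag_prob.

    meta: e.g. {"source": "pubmed", "year": 2019, "topic": "cardiology"}
    Tagging only a random half of the corpus keeps the model usable
    without a header while still learning the conditioning hook.
    """
    rng = rng or random.Random(0)
    if rng.random() >= tag_prob:
        return text
    header = " | ".join(f"{k}: {v}" for k, v in meta.items())
    return f"[{header}]\n{text}"
```

At inference time the same header becomes a prompt-programming handle, e.g. starting a prompt with `[source: pubmed | year: 2019 | topic: cardiology]` to steer generation toward that slice of the corpus.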
Ravna#1831: @Daj https://www.sciencedaily.com/releases/2020/06/200615115724.htm
Semantic Aberration#3692: ^ ^ ^ this, and Peter Thiel drank blood not knowing this . . .
Daj#7482: Just consume the blood of virgins
Daj#7482: This post brought to you by Peter Thiel
Semantic Aberration#3692: Peter Thiel could give us $ with some low probability, I guess, it's in his style. But we have to show something before that.
Semantic Aberration#3692: He gave $ here and there via his **breakout labs** fund
Semantic Aberration#3692: https://cdn.discordapp.com/attachments/729741769738158194/744253586329829476/unknown.png
Ravna#1831: The Leela Zero and Leela Chess Zero projects are done with distributed efforts of fans donating their spare computations of their own machines. But that's only possible because the AlphaZero algorithm requires very little bandwidth between nodes.
Aran Komatsuzaki#5714: yup exactly
fractalego#9377: Joined the server.
Some Point Process#3793: Joined the server.
shawwn#3694: Why is everyone suddenly talking about turning this into a company? What changed?
shawwn#3694: A company requires a profit focus (I’ve said this from the beginning) or else no investor will invest for any reason except philanthropy
BeatriceBernardo#5504: Joined the server.
XMaster96#7538: Joined the server.
thenightocean#6100: imho this shouldnt try to be a company as this would add a lot of constraints. It should stay a playground of hackers cause if u get to the AGI you will get all utility u need.
Deleted User#0000: people think money will bring them legitimacy and security
Deleted User#0000: taking investor money
Deleted User#0000: but it isn't true. always measure the impact of what you create by your own ruler
Aran Komatsuzaki#5714: i think people are talking about non-profits.
Semantic Aberration#3692: Taking VCs' money is diluting your soul with the Devil's will.
bmk#1476: i dont think anyone here wants to turn this into a company
Deleted User#0000: exactly Semantic
Deleted User#0000: i know a thing or two, having been in the valley
Aran Komatsuzaki#5714: i want to sell AGI tshirts, though.
bmk#1476: i want to as well
bmk#1476: heck i want one to wear to conferences
Deleted User#0000: go ahead! nothing is stopping you
bmk#1476: group buy is cheaper
Aran Komatsuzaki#5714: yeah i'll tweet about it!
Deleted User#0000: get that tee-spring account created
bmk#1476: bulk discount is surprisingly big
shawwn#3694: Re: company https://twitter.com/arankomatsuzaki/status/1294472769116049409?s=21
shawwn#3694: To be clear, I think this can work. It just needs to focus on turning a profit
bmk#1476: I can't control what others say but I am not currently seriously considering turning this into a company
Deleted User#0000: same, im not at all
Deleted User#0000: non-profit is fine
bmk#1476: We're ok with taking goodwill donations and maybe doing something similar to OA API to recoup a bit of costs but the goal will never be to make profit
Deleted User#0000: like what someone once said during a heated meeting where a bunch of MBA types were discussing how to divide equity at an early startup where I was doing most of the coding
Aran Komatsuzaki#5714: haha
Deleted User#0000: a fraction of 0 is 0
Deleted User#0000: that said, if we pretend its a company, and everyone who makes a legit code commit gets 0.1%
Deleted User#0000: maybe we'd get there quite quickly 🙂
shawwn#3694: Well, we are, if anyone here is interested. We have a couple contracts lined up. It’s straightforward to be profitable from day one.
bmk#1476: so thats a big decision we need to make
bmk#1476: do we ~~go over to the dark side~~ become more company-like
Aran Komatsuzaki#5714: non-profit sounds simpler
bmk#1476: i think it might make sense to go over to the dark side if someday we really need the money and donations wont cover it
bmk#1476: *but* we cant compromise on openness, etc
bmk#1476: and we should try to plow any profits back into things that actually benefit everyone
Deleted User#0000: i think, like 'bikeshedding', there should be a term for when this kind of conversation comes up
Deleted User#0000: it happens too often, for any project that has a small slight take off
shawwn#3694: It’s because the opportunity exists now, and probably won’t within one year.
shawwn#3694: The same could have been said about search engines back when google first formed
shawwn#3694: “It’s a research project” “we don’t need money” and so on
Deleted User#0000: i would agree, and i think the economic system pushes us to think this way
Deleted User#0000: but this project also carries some significance..
Deleted User#0000: i mean, it's heralding a new era
Deleted User#0000: think about what we are trying to do?
shawwn#3694: What are you trying to do?
Deleted User#0000: well, the way i see it, we are scaling an emergent computational phenomenon in the hopes of capturing a broad spectrum of human intelligence from the data on the internet
Deleted User#0000: and then sharing that
Deleted User#0000: is that a wrong assessment?
Aran Komatsuzaki#5714: btw my tweet wasn't expressing any intent to move this group in a direction of my interest. i was just describing what people seemed to be discussing, from an outsider's perspective (i'm new here anyway). I simply misinterpreted what @bmk was trying to imply. It was my mistake in English reading comprehension.
shawwn#3694: It's accurate. Are you sure being a for-profit company precludes that?
Deleted User#0000: yes
Deleted User#0000: it doesn't preclude it, but it makes it more difficult
Deleted User#0000: like i said, i've been around the valley
shawwn#3694: Why?
Deleted User#0000: experience. i worked at Uber during their asymptotic rise
Deleted User#0000: saw how the organization changed
bmk#1476: id love to hear more about that
shawwn#3694: yes, a for-profit company entails change. But that isn't related
Deleted User#0000: you're entitled to your opinion
shawwn#3694: Logically, there is no reason a for-profit company can't be just as open as a non-profit
shawwn#3694: OpenAI is arguably an instance of this.
Deleted User#0000: anyways, that's my 2cents, just want to share that this other researcher and i actually tried one of the techniques here https://arxiv.org/abs/2008.03156
Deleted User#0000: and it worked!
shawwn#3694: @Aran Komatsuzaki for what it's worth, I think you have the right idea
shawwn#3694: the only modification is that it can be profit-focused.
Deleted User#0000: i didn't even know about representational collapse, but it seems like people have been making progress there
Aran Komatsuzaki#5714: @shawwn that's good to know!
bmk#1476: basically at the end of the day i dont want to pull an openai where we still claim to be open but we're actually not
shawwn#3694: agreed.
shawwn#3694: I think everyone feels that way, luckily.
bmk#1476: if anyone wants to invest in us on the condition that everything we make stays open, im 100% ok
shawwn#3694: ditto.
shawwn#3694: it's a really good idea. It's also an old idea, with a proven model: webdev has been doing it for decades.
shawwn#3694: what matters is company culture, and aims
shawwn#3694: I agree that the company is likely to get distracted with working on for-profit contracts, but that seems like a very nice distraction to have (since it necessarily means that revenue is growing)
Deleted User#0000: @bmk well, you don't need to be an organization to open source good deep learning code
shawwn#3694: neither did Google
Deleted User#0000: anyways, i think it's fine, what you are doing. i understand its financially insecure times and everyone needs a way to make a living
Deleted User#0000: just understand there is a certain strength that comes from being able to create something without relying on the financial support of others
Deleted User#0000: you have complete freedom
shawwn#3694: nah, I think everyone here is in a pretty comfortable position financially. It's not about making a living
shawwn#3694: it's about influence
Deleted User#0000: what do you mean by influence?
shawwn#3694: consider why openai and huggingface are relevant: they're an organization. And OpenAI has already shown that non-profit has basically no future
Deleted User#0000: you mean legitimacy?
shawwn#3694: if you need more and more resources to achieve the company's aims, then at a certain point, it's infeasible to do that via philanthropic investment
shawwn#3694: nah, simply accomplishing the goals.
Deleted User#0000: well, you can accomplish goals without being an organization
aquajet#7800: i dont think were at that point where resources is an issue
Deleted User#0000: in fact, i'd argue accomplishing goals is more important than doing something else to then think that will help you accomplish them
aquajet#7800: or rather the most important issue
Deleted User#0000: which is what a majority of early valley startups fall for
shawwn#3694: project forward five years. Think about where this group might end up. The logical outcome is either that selling shirts was an effective way forward, or it wasn't.
Deleted User#0000: im honestly hoping for the singularity. won't even make that a secret
shawwn#3694: I am too.
Deleted User#0000: i want a complete dismantling of the economic system brought about by some future iteration of GPT-x
shawwn#3694: yes.
Sid#2121: i think if we spent more time bug fixing and less time theorizing we might get further :thonk:
shawwn#3694: but the way to do this is to subvert a system, not to oppose it
Aran Komatsuzaki#5714: being a phd student, i'm not pretty comfortable financially, so i want agi to destroy the capitalism.
Sid#2121: > but the way to do this is to subvert a system, not to oppose it
@shawwn debatable
Sid#2121: also strong disagree
bmk#1476: For the record I'm ok with stuff like selling API access for an open model, sort of like OSS projects that also sell managed solutions
shawwn#3694: @bmk exactly.
Deleted User#0000: > i think if we spent more time bug fixing and less time theorizing we might get further :thonk:
@Sid exactly
shawwn#3694: jinx, no jinxback
shawwn#3694: redhat is an instance of that
shawwn#3694: and there was only ~one redhat.
Deleted User#0000: i think i'll name this phenomenon 'christmas-lighting' in the same vein as 'bike-shedding'
Deleted User#0000: it comes up again and again
bmk#1476: what is the definition
bmk#1476: im not sure i understand which phenomenon youre referring to
Deleted User#0000: it's when you start discussing how to decorate your house to be better perceived by your neighbors
Deleted User#0000: comes up again and again in early stage projects
shawwn#3694: heh. I think I see why you have such a distaste for the idea @Deleted User
shawwn#3694: yes, that would be lame
bmk#1476: I mean we're about a month in and this project is looking like it might actually succeed
shawwn#3694: it's not about playing house. it's about effectiveness
Sid#2121: our model still doesn't work
Sid#2121: at this point all we are is a congregation of intelligent people who enjoy chatting theory
goolulusaurs#1571: I want to help with the model but I don't know how.
Sid#2121: @goolulusaurs happy to help you through anything you need 🙂 there's lots of resources in #tfmesh
bmk#1476: i mean at least data is coming along really well
shawwn#3694: @goolulusaurs one of the key steps is data collection. In fact, I'm skeptical that the engineering re: the model is the important bit
shawwn#3694: the dataset seems far more important
shawwn#3694: and that has a much lower threshold for helpfulness
bmk#1476: we have a pipeline and enough cpu power (nuck+test) to gather enough CC for The Pile, though not HUMONGOUS
shawwn#3694: if you can figure out ways of turning html into useful training data, that would be extremely valuable.
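For a sense of what even a naive extractor involves, here is a stdlib-only Python sketch (purely illustrative and nowhere near trafilatura or Diffbot quality): it drops script/style content and keeps whatever visible text remains.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Crude HTML -> text: keep text, skip script/style/noscript."""
    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self.chunks = []
        self.skip_depth = 0  # > 0 while inside a skipped tag

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth and data.strip():
            self.chunks.append(data.strip())

def html_to_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)

text = html_to_text("<p>Hello <b>world</b></p><script>var x = 1;</script>")
# script contents are dropped; only the visible text remains
```

The hard part — and why tools like Diffbot are valuable — is everything this ignores: boilerplate navigation, ads, comment sections, and deciding which text is actually the article.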
bmk#1476: but HUMONGOUS was always a moonshot anyways
Aran Komatsuzaki#5714: probably 90% of chatting theory is my fault.
shawwn#3694: @Aran Komatsuzaki a good fault to have!
bmk#1476: @shawwn that's basically what htmltotext-benchmark has been
bmk#1476: but alas getting contributors to it is hard
Deleted User#0000: @Aran Komatsuzaki would be great to get your opinion on this second-order optimizer https://arxiv.org/abs/2006.00719
shawwn#3694: yeah
bmk#1476: so i might just settle on trafilatura and just be done with it
Deleted User#0000: i talked to another researcher, and he was optimistic on it. it comes with a 2x memory price
Deleted User#0000: but another optimizer guy claims it is like 'a cruise missile'
Aran Komatsuzaki#5714: there comes another opportunity to chat theory!
aquajet#7800: did you run it on the benchmark yet @bmk
Deleted User#0000: for what it is worth
bmk#1476: yeah it comes out ahead *but* the data is tiny
bmk#1476: we need more data to make an informed decision
shawwn#3694: yes
shawwn#3694: and IMO the gold standard for html->text is https://www.diffbot.com/
aquajet#7800: ill grind out some transcripts today then
shawwn#3694: it's worth trying their free trial just to be shocked at how good it is
Deleted User#0000: @Aran Komatsuzaki anyways we can chat on twitter too
shawwn#3694: (and possibly to generate benchmarks)
Aran Komatsuzaki#5714: let me see
bmk#1476: if you want you can make a pr with diffbot
shawwn#3694: heh. might be worth setting up a foo@eleuther.ai email https://cdn.discordapp.com/attachments/729741769738158194/744275459877109881/unknown.png
bmk#1476: That is a really weird restriction
shawwn#3694: they used to be very liberal about free trials
shawwn#3694: now they seem to verify phone numbers
shawwn#3694: a shame.
aquajet#7800: 14 day free trial
shawwn#3694: yeah. it used to be easy to get another 14 days. but maybe we can just share access keys
shawwn#3694: ```js
#!/usr/bin/env node
var fs = require("fs");
var child_process = require("child_process");

var write = function (x) {
  var __out = process.stdout;
  return __out.write(x);
};

var argv = process.argv.slice(2);

var run = function (command) {
  return child_process.execSync(command).toString();
};

// Quote a string for the shell, escaping embedded single quotes.
var shellquote = function (s) {
  return " '" + s.replace(/'/g, "'\"'\"'") + "'";
};

var shellcmd = function (cmd, args) {
  return cmd + args.map(shellquote).join(" ");
};

var shell = function (cmd) {
  var args = Array.prototype.slice.call(arguments, 1);
  return run(shellcmd(cmd, args));
};

// Fetch the Diffbot article extraction for the url in argv[0] and
// print the extracted HTML.
write(JSON.parse(shell("curl", "-fsSL", "http://www.diffbot.com/api/article?token=4d4cfb6335b3f10bf2f014112fbafa47&url=" + encodeURIComponent(argv[0]))).html);
```
shawwn#3694: this was my html-to-text script. And it worked amazingly well
shawwn#3694: you could throw basically any url at it and it would return excellent results 90% of the time
shawwn#3694: (obviously, that token no longer works. But you can substitute your own)
thenightocean#6100: there might be other npms to do that: https://www.npmjs.com/package/html-to-text
thenightocean#6100: I was too lazy to check before for some stupid reason
aquajet#7800: if diffbot is really good can we just use that as the extractor
aquajet#7800: @thenightocean add it to #datascripts
shawwn#3694: can someone bankroll it?
shawwn#3694: $299/mo isn't expensive, but it's not too cheap
shawwn#3694: I assume the 250k monthly credits might also evaporate quickly with our use case
shawwn#3694: (and yes, I think it's worth it -- they essentially have the best ML model for turning HTML into text, and leveraging that might be a big boost)
Fras#3538: Joined the server.
Daj#7482: fwiw @Deleted User I strongly agree with you on the topic of incorporation and stuff, and I love the 'christmas-lighting' coinage. If anyone wants to go make an AGI company, go ahead, but that's not what I want this place to be, and if profit ever becomes the main goal I'm out.
But continue the other discussion, I need to go shower and head to bed anyways. We'll figure this all out, I wouldn't worry; let's focus on the work
shawwn#3694: @Fras Welcome to the official GPT-3 onlyfans server! Check the channel description for the project roadmap
Deleted User#0000: @Daj totally with you there
shawwn#3694: @Daj if you leave, would you be nuking the server, or transferring ownership?
Daj#7482: If there is an unambiguous majority that wants such a transfer to happen I commit to being cooperative and transferring ownership
Daj#7482: But you can bet I'll be founding something new somewhere else to continue the spirit of what I hope to build here
bmk#1476: 2021: Eleutherai forks into Eleutherai and Eleutherai Cash
Sid#2121: yeah i heavily don't want this project to be profit driven. It seems like the majority of us actually doing the work feel the same, so. I don't see the problem, personally.
shawwn#3694: I'd totally buy into EleCoin
Daj#7482: This place, like any other, is a reflection of the people that create and run it and their goals and ambitions. I want this to be a place that is actively distancing itself from the Silicon Valley "failure modes" of incentives where possible. I don't oppose anyone going for that, just do it somewhere else without me, that's all. I think we're all in agreement here for the most part
thenightocean#6100: IMHO, if the projects succeeds in the long run on the level I hope it will, money might become less relevant in general.
Daj#7482: We'll cross that bridge with the caution it deserves once we come to it
StellaAthena#3530: @Daj did you transfer ownership of the GPT Neo GitHub to the GitHub org?
Daj#7482: Uhh not yet I should do that
Daj#7482: I'll figure out how to do that tomorrow morning
StellaAthena#3530: Also, is #legal a project? Should it have a page on the website?
Daj#7482: It's not at that stage yet but it is where we were planning to discuss the licensing stuff
Daj#7482: I need to write that professor an email right fuck I forgot that
Daj#7482: Champiz is keeping me busy with math lessons and good food lol
shawwn#3694: can we add a #music channel under off topic?
bmk#1476: While we're at it should we have a #math under discussion
bmk#1476: Also what would we use the music channel for?
shawwn#3694: the #music channel would be for posting math theorems, and the #math channel would be for posting youtube links to music
StellaAthena#3530: What media platforms do we have besides slack and GitHub?
bmk#1476: We have a slack? |
Deleted User#0000: > IMHO, if the projects succeeds in the long run on the level I hope it will, money might become less relevant in general.
@thenightocean exactly.
StellaAthena#3530: Oh I meant to say discord
bmk#1476: Oh uh I don't think so
bmk#1476: We might have a Twitter?
bmk#1476: I don't think we've actually done much setting up of it though
StellaAthena#3530: Gotcha
tp#3502: Joined the server.
StellaAthena#3530: If someone wants to write up content for the “about us” page and DM it to me that would be great
Daj#7482: I was intending to take a shot at writing down my vision/hope/mission statement for this place, can try to find the time tomorrow and see if we put it or a variant on the website @StellaAthena
StellaAthena#3530: That would be perfect
Onibaku#0666: Joined the server.
Yaki#1179: Joined the server.
zoulock#8356: Joined the server.
StellaAthena#3530: Welcome @Onibaku @Yaki @zoulock
bmk#1476: damn you beat me to it
StellaAthena#3530: You can write a pithy one-liner for them
zoulock#8356: Hello! 👋
bmk#1476: Welcome to the AGI Incubator!
zoulock#8356: I've heard some good things about this server |
StellaAthena#3530: What brought y’all here?
zoulock#8356: A friend of mine is very excited about the work you're doing here, and convinced me to join it as well
bmk#1476: who would that happen to be?
zoulock#8356: He's a bit of a lurker hahaha
StellaAthena#3530: ... okay
StellaAthena#3530: Why don’t you tell us about yourself
zoulock#8356: Well I'm a CS student on my last year
zoulock#8356: I've seen what GPT can do and I'd like to exploit the potential it has
zoulock#8356: Maybe even use it for a project for my University
zoulock#8356: I've been told I may be able to help. I know python and I've done some things with Neural networks
bmk#1476: the main areas that need help are with the model itself and with the data
bmk#1476: which of those interests you most?
zoulock#8356: What channels should I read to get a hang of things?
bmk#1476: https://docs.google.com/document/d/1wfCZBd18DMNt6YcC6boPNMd9qzzH3zpHHfKj4dezk0g/edit here's the main doc
zoulock#8356: Thanks!
bmk#1476: #gpt-neox-devs is the channel for model work, #the-pile is the channel for data work
zoulock#8356: Is there a main GitHub account?
StellaAthena#3530: Yes, but we’re in the process of migrating it
StellaAthena#3530: I think @Daj is asleep, and he has to add you to it (which is why we are changing how it’s set up)
StellaAthena#3530: If you would like to do something right now, you can pop into #the-rad-lab and read the pinned posts |
zoulock#8356: I'm currently away from home, but I'll be back next week. Hopefully I'll see where I can help the most by then
zoulock#8356: I'll just read around for now
StellaAthena#3530: Cool
NaleRaphael#1308: Joined the server.
StellaAthena#3530: Our website is live! Still need to flesh out the content but we have the basics of a website 🙂 Currently hosted at https://sites.google.com/view/eleutherai/ though we'll be moving to a personalized domain (eleuther.ai) soon. Let me know what you think about the layout and if you want to volunteer to write content for it. The major thing we need is detailed descriptions of each project, including recent milestones and future goals. Most of the pictures are things I pulled from sample images, and if you have suggestions for better images or want to do some graphic design and create them that would be awesome.
Sid#2121: Hey @NaleRaphael ! Welcome to the next word prediction tent! Bring yourself up to speed by going over the google doc in the channel description, and feel free to reach out if you have any questions
shgidi#0284: @StellaAthena wow cool! What did you use to build it?
StellaAthena#3530: @shgidi Google sites has a very good in-page editor. It’s just done in that.
Tanya#1006: Joined the server.
yutarochan#3903: Joined the server.
superguy#8832: How many TFLOPS would a V100, and a TPU v3 provide realistically for GPT3 transformer operations?
StellaAthena#3530: Welcome @Tanya @yutarochan! Please take a moment to introduce yourselves.
amaranth#3344: Joined the server.
bmk#1476: any ideas where i can get my hands on the exact loss numbers for gpt3? or do i have to just eyeball their chart?
AlOrozco53#8773: Joined the server.
David-CT#9054: Joined the server.
shgidi#0284: > any ideas where i can get my hands on the exact loss numbers for gpt3? or do i have to just eyeball their chart?
@bmk What numbers are you interested in?
bmk#1476: validation loss numbers
bmk#1476: though i found them so no more help needed |
shgidi#0284: What do you guys think of adding a trello board or the like for tasks, schedule etc?
Sid#2121: > What do you guys think of adding a trello board or the like for tasks, schedule etc?
@shgidi We have a kanban on our git page, but no one ever updates it apart from me anymore lol. Best to just ask what needs doing.
samantha_bot#8943: Joined the server.
Deleted User#0000: Joined the server.
StellaAthena#3530: Bumping for visibility: Our website is live! Still need to flesh out the content, but we have the basics of a website 🙂 Currently hosted at https://sites.google.com/view/eleutherai/ though we'll be moving to a personalized domain (eleuther.ai) soon.
Let me know what you think about the layout and if you want to volunteer to write content for it. The major thing we need is detailed descriptions of each project, including recent milestones and future goals.
Most of the pictures are things I pulled from sample images, and if you have suggestions for better images or want to do some graphic design and create them that would be awesome.
Sid#2121: Hey @samantha_bot , @Deleted User ! Welcome to something something i've run out of custom intros. We do research into GPT-like language models. Check the google doc for an overview of our projects and please ask if you have any questions 🙂
BadBuddhist#6590: Hi, all. I'm the guy who trained Russian GPT-2 https://github.com/mgrankin/ru_transformers. May I help with the code?
Sid#2121: Hey @BadBuddhist ! Yes we'd absolutely love the help. atm we have our models mostly written but we have a persistent bug which means our results are not very good. Maybe you could help by looking over our code, checking Hparams, etc? @Daj can you invite to the repo? or give me the powers to invite people, hah
chirp#4545: hi everyone 👋
a little bit about me -
i've been following ML at a distance for a while but I've never trained or used a model.
I was super impressed by GPT-3, and now I want to dive into ML. Don't have great ML skills yet, but happy to help out wherever I can! I can write and I can code
Sid#2121: Hey @chirp ! Welcome! Keep asking questions and experimenting and you'll be up to speed in no time. I'd suggest trying to train or finetune some smaller ML models as a learning exercise
JC#3653: Joined the server. |
Noa Nabeshima#0290: Have any of you run into this error?
Noa Nabeshima#0290: Could not find a version that satisfies the requirement tensorflow==1.15.2 (from -r requirements.txt (line 6)) (from versions: 0.12.1, 1.0.0, 1.0.1, 1.1.0rc0, 1.1.0rc1, 1.1.0rc2, 1.1.0, 1.2.0rc0, 1.2.0rc1, 1.2.0rc2, 1.2.0, 1.2.1, 1.3.0rc0, 1.3.0rc1, 1.3.0rc2, 1.3.0, 1.4.0rc0, 1.4.0rc1, 1.4.0, 1.4.1, 1.5.0rc0, 1.5.0rc1, 1.5.0, 1.5.1, 1.6.0rc0, 1.6.0rc1, 1.6.0, 1.7.0rc0, 1.7.0rc1, 1.7.0, 1.7.1, 1.8.0rc0, 1.8.0rc1, 1.8.0, 1.9.0rc0, 1.9.0rc1, 1.9.0rc2, 1.9.0, 1.10.0rc0, 1.10.0rc1, 1.10.0, 1.10.1, 1.11.0rc0, 1.11.0rc1, 1.11.0rc2, 1.11.0, 1.12.0rc0, 1.12.0rc1, 1.12.0rc2, 1.12.0, 1.12.2, 1.12.3, 1.13.0rc0, 1.13.0rc1, 1.13.0rc2, 1.13.1, 1.13.2, 1.14.0rc0, 1.14.0rc1, 1.14.0, 2.0.0a0, 2.0.0b0, 2.0.0b1)
No matching distribution found for tensorflow==1.15.2 (from -r requirements.txt (line 6))
bmk#1476: o.O
Noa Nabeshima#0290: > No matching distribution found for tensorflow==1.15.2 (from -r requirements.txt (line 6))
Found the bug! TF 1.15 is not supported in python 3.8
Noa Nabeshima#0290: https://cdn.discordapp.com/attachments/729741769738158194/744620274099552276/unknown.png
Noa Nabeshima#0290: I'm running it on my own TPU
Noa Nabeshima#0290: Are the arguments wrong or is it important to use the server you guys are using because of something with sacred or something else?
Noa Nabeshima#0290: I'm not sure how accessible you want the project to be -- for example, I haven't started seriously poking around the main file and trying to figure it out and maybe that's the issue, but examples of running the program can be really helpful for subtle ambiguities.
bmk#1476: Oh this is on your own server?
Noa Nabeshima#0290: Yeah
bmk#1476: Yeah it's because of sacred
bmk#1476: Same command but with main.py
bmk#1476: This is the only exception I'm willing to make wrt sacred: if you're running on a different server
bmk#1476: Otherwise PLS USE SACRED
Noa Nabeshima#0290: Will do 🙂
Noa Nabeshima#0290: You do have examples!
Noa Nabeshima#0290: Can someone send me the Omniboard info?
Noa Nabeshima#0290: Got it, thanks |
Noa Nabeshima#0290: Will this break anything?
Noa Nabeshima#0290: connor@sparse:~/noa$ python3 run_experiment.py --tpu chonk --model ~/configs/GPT_NEO_TEST_128.json
bmk#1476: er
bmk#1476: make sure the model dir is empty first
Sid#2121: yes also that
Julian Felix Flury#6513: Joined the server.
JACKHAHA#8714: Joined the server.
Yuchen Lu#4548: Joined the server.
Yuchen Lu#4548: Hey guys, this is Yuchen. I feel excited to know that people are finally organizing to build a real "open" AI. Is there a place to learn about the current status of the project/codebase/documentation? Much appreciated.
bmk#1476: Hey @Yuchen Lu @JACKHAHA @Julian Felix Flury ! Welcome to the EleutherAI Enrichment Center! Check out the doc in the channel description for info on the project, and don't hesitate to ask questions!
Yuchen Lu#4548: Got it @bmk
brianweet#6542: Joined the server.
Deleted User#0000: Joined the server.
AI_WAIFU#2844: Hello @brianweet and @Deleted User. Welcome to the centre for the chain rule of probability and its applications. Check out the pinned messages and read the doc. If you have any questions, feel free to ask.
bmk#1476: we're using *all* the chain rules
bmk#1476: https://leogao.dev/2020/08/17/Building-AGI-Using-Language-Models/ post is finally out of draft
bmk#1476: i have a sneaking suspicion that this post will be 10x more controversial than the gpt3 post
M-online#1362: Joined the server.
StellaAthena#3530: This is you, right?
lugosch#4764: https://twitter.com/karpathy/status/1295410274095095810?s=20 |
Deleted User#0000: https://github.com/karpathy/mingpt
Deleted User#0000: looks interesting
Deleted User#0000: thomwolf from HF already did this a long time ago https://gist.github.com/thomwolf/ca135416a30ea387aa20edaa9b21f0ed
bmk#1476: hf has done everything already
Deleted User#0000: i think it is worth reinforcing the point that the code is so short. emergence
Deleted User#0000: it will keep you up at night
bmk#1476: ~~low kolmogorov complexity~~
Deleted User#0000: i was watching a documentary on the great physicist Bohm https://www.youtube.com/watch?v=XDpurdHKpb8 towards the latter years of his life, he started to look for things beyond reductionism
Deleted User#0000: I wonder what great new theories of the universe he would have had if he had lived to witness DL
bmk#1476: [insert lw post about emergence here]
Arbot360#5033: Joined the server.
Sid#2121: Hey @Arbot360 ! Welcome to the AGI seed! Please check the google doc in this channel's description for info about what we're working and please reach out to any blue names if you have questions 🙂
Daj#7482: Why did that not go to #deleted-channel darn
bmk#1476: it was before the server change i think
Daj#7482: Ah
Arbot360#5033: Hello all. I'm Arda Pekis, from Georgia Tech, currently hiding in my parent's basement while completing my MS degree. Looking forward to solving AGI so that I can go back to watching YouTube videos all day while it does my job.
Deleted User#0000: @Arbot360 nice! we have two other Georgia Tech people here. @Aran Komatsuzaki and @Louis
Deleted User#0000: both phd candidates
Daj#7482: Welcome @Arbot360 ! Great to have you here, seems quite a few folks from Georgia Tech hang around here hah
aquajet#7800: I'm an undergrad from GT |
bmk#1476: why are there *so many*
Deleted User#0000: lol, wat is going on
Daj#7482: We're too poor for the Harvard crowd
aquajet#7800: yellow jackets travel in swarms
Arbot360#5033: Ivy league engineers for the budget constrained.
Daj#7482: Building AGI on pocket change and hope (and massive computational resources from Google haha)
StellaAthena#3530: I’m a masters student at Georgia Tech in my free time.
bmk#1476: ~~all hail Overlord Google~~
Daj#7482: inb4 we need a GT role
bmk#1476: ok srsly wtf is going on here
bmk#1476: is there an Internal GT Intertubes where this is being shared around?
Deleted User#0000: the researcher i'm in touch with for Routing Transformers is also GT
Deleted User#0000: lol
bmk#1476: is there a good explanation
Arbot360#5033: @aquajet and I are both from The Agency, a secretive, fully transparent group of intelligent Agents at GT.
Deleted User#0000: maybe GT has good deep learning course offerings?
bmk#1476: or is it just a big coinkidinkle
Arbot360#5033: So basically we are feeding the GT ML dept. with undergrads as tribute.
StellaAthena#3530: I joined because @Louis invited me, but we know each other from DEF CON’s AI Village and not GT
bmk#1476: @Arbot360 how did you find this place? |
Arbot360#5033: I was invited by @aquajet
bmk#1476: and you know each other because of GT?
Arbot360#5033: Yeah.
bmk#1476: huh
Sid#2121: is GT a particularly moar params = moar good institution?
Arbot360#5033: I am his senpai uwu
Arbot360#5033: I think we are a more robots more good institution
StellaAthena#3530: GT is very into robotics
Arbot360#5033: We do lots of robots.
Arbot360#5033: rotors too
StellaAthena#3530: Source: I TA our into to AI for Robotics course
bmk#1476: ok i made a GT role
Arbot360#5033: We are a top aerospace school
Arbot360#5033: Actually my high school classmate went to GT for Aerospace.
bmk#1476: lmk if you also go to GT but dont have the role
StellaAthena#3530: We should make a roles bot
StellaAthena#3530: Especially so people can opt in to get @‘d in project channels
Daj#7482: Well, if I ever wanna go to grad school at GT at least I know a small army of people to badger their advisors to take me lol
Daj#7482: > We should make a roles bot
@StellaAthena yes this makes a lot of sense for us |
Louis#0144: Hi I just bought a bike
Arbot360#5033: Congrats
StellaAthena#3530: But is it an info hazard bike?
bmk#1476: Yay gratz
bmk#1476: Did it cost exactly $10
Louis#0144: No
Louis#0144: It was 2300
Louis#0144: Pricy
Louis#0144: Didn’t wanna spend that much
Louis#0144: Couldn’t find alternatives
Louis#0144: https://cdn.discordapp.com/attachments/729741769738158194/745032663345135616/image0.jpg
bmk#1476: I should get a not garbage bike eventually
bmk#1476: That's an order of magnitude more expensive than mine though
shawwn#3694: > We can represent the agent state as natural language, too. Since the agent state is just a compressed representation of the observations, we can ask the language model to summarize the important information of any observations for its own internal world state. The language model could be used to periodically prune (i.e forget) the information inside its state, too, to make room for more observations.
shawwn#3694: what you're describing here is memory
shawwn#3694: adding memory to GPT has been a long time ambition of mine. I hope someone does it
shawwn#3694: I was previously thinking that the memory would take the form of modifying the weights at runtime, i.e. inferencing from the model causes changes to the model
shawwn#3694: but, your paragraph gives a clue for an alternative: have something similar to the "pasts" activations, but persistent over many inferences
shawwn#3694: so the memory would vanish if the TPU is shut off, but that's not necessarily a terrible thing. you could even save the memory periodically
shawwn#3694: the tricky question, though, is how to train in such a way that it knows how to take advantage of that memory |
bmk#1476: i think that modifying the weights just gets infeasible, since inference is much cheaper than training
shawwn#3694: perhaps. but if someone's going to fall in love with an ML model someday, someone absolutely must solve the problem of giving the model human-like long term memory
bmk#1476: theres absoltely no reason why that has to happen to the weights
shawwn#3694: yes. but, if it doesn't affect the weights, it must be residual, i.e. it's basically a cached activation that then gets applied later on as an "overlay" or something
shawwn#3694: and in that situation, how could the training process be designed so that the model learns how to use that "extra stuff"?
shawwn#3694: it would almost have to be reinforcement learning
shawwn#3694: it'd basically simulate some kind of persistent world, where memory can be advantageous, then reward the model for figuring out how to take advantage of the memory activations
shawwn#3694: but it seems hard to combine that with regular GPT-style training
shawwn#3694: maybe the model could have two different kinds of weights, one of which is frozen during GPT-style training, and the other is frozen during RL-style training
shawwn#3694: something like that.
shawwn#3694: but yes, someone really should do some kind of world simulation with GPT
bmk#1476: i dont think RL-like training would be necessary, though it would certainly help
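A minimal toy sketch of the persistent-activations idea being discussed, assuming NumPy; the class name `ActivationMemory` and all shapes are made up for illustration, not anything GPT actually implements. It's just a cache of activations that survives across inference calls, gets pruned (forgets) when full, and can be saved so it doesn't vanish when the TPU shuts off:

```python
import numpy as np

class ActivationMemory:
    """Toy sketch: cached activations that persist across many inference
    calls, like the "pasts" activations but long-lived."""

    def __init__(self, n_embd, max_tokens):
        self.max_tokens = max_tokens
        self.cache = np.zeros((0, n_embd))  # [tokens_so_far, n_embd]

    def append(self, activations):
        # activations: [new_tokens, n_embd] from the latest inference step
        self.cache = np.concatenate([self.cache, activations], axis=0)
        if len(self.cache) > self.max_tokens:
            # crude pruning: forget the oldest entries to make room
            self.cache = self.cache[-self.max_tokens:]

    def save(self, path):
        # persist periodically so the memory survives a TPU shutdown
        np.save(path, self.cache)

mem = ActivationMemory(n_embd=4, max_tokens=8)
for _ in range(5):
    mem.append(np.random.randn(3, 4))  # 15 tokens total, capped at 8
print(mem.cache.shape)  # (8, 4)
```

The open question from the conversation remains untouched here: how to *train* a model to actually use such a cache.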
Sid#2121: > I'm just about through the annotated transformer tutorial. Question regarding GPT3, does it actually have a decoder stack? If so why? Original transformer paper had an encoder/decoder to deal with multi language translation. Whereas I would have thought GPT is just self attention and predict next symbol?
@researcher2 GPT *is* just the decoder stack
Sid#2121: the encoding happens pre-training
shawwn#3694: wait, GPT-1 did tokenization as a part of the model itself?
shawwn#3694: (or some kind of equivalent)
researcher2#9294: encoding = embedding?
shawwn#3694: well, no
Sid#2121: nope, different things |
researcher2#9294: when you say "pre-training" I get confused haha
shawwn#3694: an embedding is like... a parallel universe. you transform an input into the parallel universe by multiplying it with a weight matrix
shawwn#3694: and the result is an embedding
shawwn#3694: because the input has been "embedded" into that new space
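Concretely, "multiplying an input with a weight matrix" reduces to a plain row lookup when the input is a one-hot token id, which is exactly what `tf.gather` does. A small NumPy illustration (vocab size and dimensions are arbitrary):

```python
import numpy as np

np.random.seed(0)
vocab, n_embd = 10, 4
W = np.random.randn(vocab, n_embd)  # embedding weight matrix

token = 7
one_hot = np.zeros(vocab)
one_hot[token] = 1.0

# multiplying a one-hot vector by the weight matrix...
via_matmul = one_hot @ W
# ...picks out the same row as a direct lookup (what tf.gather does)
via_lookup = W[token]

assert np.allclose(via_matmul, via_lookup)
```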
Sid#2121: http://dugas.ch/artificial_curiosity/GPT_architecture.html @researcher2 this post is really good for understanding the basic architecture
shawwn#3694: as for encoding, I'm not sure whether it has a specific meaning, but normally in GPT when people say "encoding", they're referring to encoding english text into tokens
researcher2#9294: ah the diagram on the left
researcher2#9294: ok
shawwn#3694: GPT works like this: you take english text, and turn it into a sequence of numbers
shawwn#3694: called tokens
researcher2#9294: does gpt use something like glove for embeddings or just a linear layer?
Sid#2121: it depends on the embedding, there's two types. Positional, and word embedding
Sid#2121: ^ the post i linked above explains it neatly, but the positional embedding uses some sinusoidal stuff that i confess to not fully understanding. The word / vocab embedding embeds the chosen word as a one hot vector over all the possible words in the vocabulary.
researcher2#9294: ok I read on further, looks lke linear
researcher2#9294: I'll stop being a bad student and go read 😄
Sid#2121: and the sequence
Sid#2121: http://dugas.ch/artificial_curiosity/img/GPT_architecture/encoding2.png
shawwn#3694: hmmm
researcher2#9294: > ^ the post i linked above explains it neatly, but the positional embedding uses some sinusoidal stuff that i confess to not fully understanding. The word / vocab embedding embeds the chosen word as a one hot vector over all the possible words in the vocabulary.
@Sid Yeah my brain never liked waves. |
shawwn#3694: where does the one-hot come into play?
shawwn#3694: in the cross entropy step at the end?
bmk#1476: gpt doesnt use any sinusoidal stuff
Sid#2121: oh it doesn't?
bmk#1476: not that im aware
Sid#2121: well the og transformer does, idk how gpt's works then
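For reference, the og transformer's "sinusoidal stuff" is just a fixed sin/cos table indexed by position (no learning involved); GPT-2 instead *learns* its position table, the `wpe` variable discussed below. A NumPy sketch of the original formula, assuming an even embedding dimension:

```python
import numpy as np

def sinusoidal_pe(n_ctx, n_embd):
    # PE[pos, 2i]   = sin(pos / 10000^(2i / n_embd))
    # PE[pos, 2i+1] = cos(pos / 10000^(2i / n_embd))
    pos = np.arange(n_ctx)[:, None]          # [n_ctx, 1]
    i = np.arange(0, n_embd, 2)[None, :]     # [1, n_embd/2]
    angles = pos / np.power(10000.0, i / n_embd)
    pe = np.zeros((n_ctx, n_embd))
    pe[:, 0::2] = np.sin(angles)             # even dims
    pe[:, 1::2] = np.cos(angles)             # odd dims
    return pe

pe = sinusoidal_pe(1024, 8)
print(pe.shape)  # (1024, 8)
```

Intuitively each dimension pair oscillates at a different wavelength, so every position gets a unique fingerprint without any trained parameters.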
shawwn#3694: afaik the only nonlinearity it uses is gelu in the middle of the mlp
Sid#2121: > where does the one-hot come into play?
@shawwn the vocab embedding i think
shawwn#3694: I'm not sure there's a one-hot step anywhere
bmk#1476: not in implementation but its helpful to think of it that way
shawwn#3694: ```py
# wpe: learned position embeddings, wte: learned token embeddings
wpe = init_variable('wpe', [hparams.n_ctx, hparams.n_embd],)
wte = init_variable('wte', [hparams.n_vocab, hparams.n_embd],)
# one row looked up per token id, plus one row per position
h = tf.gather(wte, X) + tf.gather(wpe, positions_for(X, past_length))
```
Sid#2121: it's how it's represented, though. there's not an actual one hot operation, you're right
shawwn#3694: positions_for is a meshgrid, so it's just numbers like `1, 2, 3, 4, 5 ....`, then cloned vertically
Sid#2121: that's positional
shawwn#3694: it doesn't seem like a pedantic point to say it's not a one-hot encoding |
shawwn#3694: you're right, the vocab dimension is only related to `wte`
shawwn#3694: and `wte` is even simpler. it's just `tf.gather(wte, X)`
shawwn#3694: so it extracts a row from wte depending on what the token value is
bmk#1476: wpe is horribly overcomplicated
bmk#1476: it could literally just be a constant that gets sliced
shawwn#3694: I still haven't figured out what's going on with wpe, yeah.
Sid#2121: yep that's the one part of the model that still goes totally over my head, glad i'm not alone lol
bmk#1476: wpe is horribly overengineered
bmk#1476: but i dont feel like changing it lol in case i break something
shawwn#3694: what I'd like to figure out is, suppose you have 1,024 input tokens, but they're all zero, except for the first 5. What's the proper way to mask the computations such that only the first 5 tokens influence anything?
shawwn#3694: i.e. so that the output result is completely identical to feeding in X[0:5] instead of X[0:1024]
shawwn#3694: it's important for getting sampling working on TPUs, because TPUs require fixed shapes
shawwn#3694: but I'm not quite sure where to mask it. maybe the probabilities or something.
shawwn#3694: or maybe it's as simple as throwing out all the logits except the first 5
Sid#2121: @shawwn you should read through the mesh sampling code. It does something like this with the read/write priority variables that then get fed into the attention bias
shawwn#3694: dunno.
shawwn#3694: ah, thanks for the reference.
Sid#2121: it would be good for us to figure out, currently we do it super inefficiently by just generating x[0:1024], cutting out x[0:1], generating [1:1024], cutting out x[1:2] ... etc
Sid#2121: lol
shawwn#3694: I tried that. I got an error saying that the strided slice can't depend on the loop variable |
shawwn#3694: i.e. if you try to slice x[0:i], where i is the current sequence number that ranges from 1 to `length`
shawwn#3694: and x[0:i] is in a while loop
shawwn#3694: then when you try to run that on TPU cores, it barfs
bmk#1476: you can just mess with the attention mask right?
Sid#2121: @bmk yes, this is the proper way to do it
shawwn#3694: hm
shawwn#3694: ```py
def attention_mask(nd, ns, *, dtype):
"""1's in the lower triangle, counting from the lower right corner.
Same as tf.matrix_band_part(tf.ones([nd, ns]), -1, ns-nd), but doesn't produce garbage on TPUs.
"""
i = tf.range(nd)[:,None]
j = tf.range(ns)
m = i >= j - ns + nd
return tf.cast(m, dtype)
```
shawwn#3694: I see
shawwn#3694: ```py |
def mask_attn_weights(w):
# w has shape [batch, heads, dst_sequence, src_sequence], where information flows from src to dst.
_, _, nd, ns = shape_list(w)
b = attention_mask(nd, ns, dtype=w.dtype)
b = tf.reshape(b, [1, 1, nd, ns])
w = w*b - tf.cast(65500 if w.dtype != tf.float32 else 1e10, w.dtype)*(1-b)
return w
```
shawwn#3694: hmm... that's ... not straightforward to visualize
shawwn#3694: ```py
def multihead_attn(q, k, v):
# q, k, v have shape [batch, heads, sequence, features]
w = tf.matmul(q, k, transpose_b=True)
w = w * tf.rsqrt(tf.cast(v.shape[-1].value, w.dtype))
w = mask_attn_weights(w)
w = softmax(w) |
w = dropout(w, hparams.attn_dropout)
a = tf.matmul(w, v)
return a
```
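A single-head NumPy sketch of the masking logic in the snippets above (no batch/heads/dropout), which may help with visualizing it: the mask zeroes the logit for every future position and replaces it with a huge negative number, so the softmax gives future tokens zero weight. Position 0 can only attend to itself, so its output is exactly `v[0]`:

```python
import numpy as np

def causal_mask(nd, ns):
    # NumPy port of attention_mask: 1s in the lower triangle,
    # counting from the lower-right corner
    i = np.arange(nd)[:, None]
    j = np.arange(ns)
    return (i >= j - ns + nd).astype(np.float32)

def masked_attention(q, k, v):
    # q, k, v: [sequence, features]
    w = q @ k.T / np.sqrt(q.shape[-1])
    b = causal_mask(*w.shape)
    w = w * b - 1e10 * (1 - b)            # future positions -> ~ -inf
    w = np.exp(w - w.max(-1, keepdims=True))
    w = w / w.sum(-1, keepdims=True)      # softmax over allowed positions
    return w @ v

np.random.seed(0)
q, k, v = (np.random.randn(5, 4) for _ in range(3))
out = masked_attention(q, k, v)
assert np.allclose(out[0], v[0])  # token 0 attends only to itself
```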
bmk#1476: i think you should be able to set everything in the attention_mask after the `n-k`th row to zero
bmk#1476: to get the desired effect
Sid#2121: @shawwn https://github.com/tensorflow/mesh/blob/ecdee994a4a91d1781e22e9dfc6543479365c47c/mesh_tensorflow/transformer/transformer.py#L1155
Sid#2121: https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/transformer_layers.py#L279
shawwn#3694: @bmk thanks! that was a helpful tip. I'll try it
Sid#2121: this isn't very helpful bc tf-mesh code is awful to follow but, i know you love that stuff 🙂
bmk#1476: (there might be an off by one in that idea somewhere but im too tired to figure that out rn)
shawwn#3694: I get high on google documentation
shawwn#3694: godda have it
shawwn#3694: chasing down random undocumented code is clearly the best way to live life
bmk#1476: I think it should be zeroing after and including the n-kth
shawwn#3694: ahhh
bmk#1476: Honestly I'm not too sure
shawwn#3694: that makes sense
shawwn#3694: so it's literally restricting its attention to the first `k` tokens
bmk#1476: Yeah |
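A hedged NumPy sketch of why this padding question may be simpler than it looks: with a causal mask, positions before `k` can never attend to the padding anyway, so the outputs for the first `k` tokens of a fixed-length padded run match a short-sequence run exactly, and slicing the result suffices (this is a toy single-head check, not the mesh-tensorflow mechanism; `1e10` stands in for -inf):

```python
import numpy as np

def attend(q, k, v, mask):
    # masked softmax attention, as in the snippets above
    w = q @ k.T / np.sqrt(q.shape[-1])
    w = w * mask - 1e10 * (1 - mask)
    w = np.exp(w - w.max(-1, keepdims=True))
    return (w / w.sum(-1, keepdims=True)) @ v

np.random.seed(0)
n, n_real = 8, 5          # fixed padded length 8, only first 5 tokens real
q, k, v = (np.random.randn(n, 4) for _ in range(3))

# full padded run with an ordinary causal (lower-triangular) mask...
out_long = attend(q, k, v, np.tril(np.ones((n, n))))
# ...vs running on just the real tokens
out_short = attend(q[:n_real], k[:n_real], v[:n_real],
                   np.tril(np.ones((n_real, n_real))))

# causality means positions < n_real never saw the padding
assert np.allclose(out_long[:n_real], out_short)
```

Zeroing extra mask rows/columns as suggested would additionally keep the padding positions from producing garbage, which may matter if anything downstream reads them.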
shawwn#3694: @Sid you're right, that last link does seem very related
shawwn#3694: deciphering it might be nontrivial though
bmk#1476: https://www.reddit.com/r/MachineLearning/comments/ibpn5l/d_building_agi_using_language_models/
Pls upvote
StellaAthena#3530: The post has been removed, it seems
Deleted User#0000: https://www.reddit.com/r/MachineLearning/comments/ib4rth/d_why_does_models_like_gpt3_or_bert_dont_have/ @Aran Komatsuzaki the first commenter definitely read your paper. https://arxiv.org/abs/1906.06669
researcher2#9294: > https://www.reddit.com/r/MachineLearning/comments/ibpn5l/d_building_agi_using_language_models/
>
> Pls upvote
@bmk This is exactly my interest. But it looks like a blank post.
bmk#1476: ??
bmk#1476: Screenshot pls
researcher2#9294: Ah, better now.
helen 🐳#5160: > it's important for getting sampling working on TPUs, because TPUs require fixed shapes
@shawwn i don't know about tf1, but tf2 has `tf.autograph.experimental.set_loop_options(shape_invariants=[(output, tf.TensorShape([None, None]))])` which effectively allows you to circumvent that restriction. (i implemented autoregressive sampling on TPUs with that)
Aran Komatsuzaki#5714: @Deleted User haha the last sentence convinces me lol
Deleted User#0000: it pays to have your paper title follow the meme
bmk#1476: shit i missed the opportunity to name my blog post "AGI: LM is all you need"
Aran Komatsuzaki#5714: My new paper has extremely long title, essentially a concatenation of key phrases, to make it informative. The downside of "all you need" format is that it's too uninformative. |
Aran Komatsuzaki#5714: So it's only applicable to a paper with a single central theme.
Deleted User#0000: "The unreasonable effectiveness of retrieval methods"?
Aran Komatsuzaki#5714: It was so long I don't remember lol
Aran Komatsuzaki#5714: That sounds good tho
Deleted User#0000: yes, that's another long running meme
Deleted User#0000: lol
bmk#1476: yes finally my hn post has a comment!
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/745110177648345088/unknown.png
bmk#1476: explanation: [link to post]
researcher2#9294: haha
researcher2#9294: got about 3/4 of the way through the article then went off to have lunch and got distracted
researcher2#9294: 😄
researcher2#9294: https://cdn.discordapp.com/attachments/729741769738158194/745113001488547939/unknown.png
researcher2#9294: a "steering model" is what I had in mind
researcher2#9294: "To handle input, you could have an input module that turns various modalities of observations into summarized text with respect to the current agent state. For instance, you could use something like iGPT to input camera images or screenshots, or raw HTML from webpages that the agent requests. How exactly this is done is tangential to the point; all that matters is that somehow the inputs are all converted to text and added to the agent state. The examples I have provided are just to convince you that it’s absolutely not insurmountable."
researcher2#9294: Interesting
researcher2#9294: I was thinking that rather than trying to exist purely in text world, the steering model could be more general in that it receives inputs of all types, but then learns to talk to GPT. Some sort of RL agent with GPT as both input and output, a whole bunch of world sensors as input, and some base drives like "novelty seeking" or "moar paperclips".
researcher2#9294: After a while it would get a feeling for when GPT is useful (or not), kinda like humans would I guess.
researcher2#9294: https://cdn.discordapp.com/attachments/729741769738158194/745114775217766420/unknown.png
researcher2#9294: yes, give it shell access and await domination! |
researcher2#9294: https://cdn.discordapp.com/attachments/729741769738158194/745114944264994866/unknown.png
researcher2#9294: Not tomorrow, but highly likely within a year or two?
bmk#1476: i give it 5 years absolute tops
bmk#1476: ive heard that oa may have something in store for us soon™ though
bmk#1476: then again they might not
bmk#1476: who knows
researcher2#9294: I think it's highly likely that whatever thoughts we had, they had 3 months ago haha
researcher2#9294: helps to be at the absolute cutting edge with hardware and stuff
researcher2#9294: that and the 160 IQ big brains that work there
bmk#1476: im not surprised if 10T is done within a year, possibly less
bmk#1476: (assuming gpt3-but-bigger)
bmk#1476: if we also accept moe and similar, 100T in the same timeframe
bmk#1476: otherwise, 100T within a decade
bmk#1476: my timeline just keeps shrinking
researcher2#9294: so hardware wise i dont see a 1000x jump in a year or two
bmk#1476: i keep thinkin gthat theres no possible way xyz could be done within x years
bmk#1476: and then i see something that changes my mind
bmk#1476: 10T is already possible today
researcher2#9294: but there are ways around this with model optimization you think?
researcher2#9294: well I suppose if you threw a billion dollars at it |
bmk#1476: moe params arent directly comparable
bmk#1476: no, i mean 10T is just doable without that much funding
bmk#1476: if you had billions to play with, honestly, idk
bmk#1476: possibly faster?
bmk#1476: my number is based on hardware we already know exists
researcher2#9294: ok, so 14 million gave us a one epoch 175 billion param model?
researcher2#9294: can that be done again for much less or what?
researcher2#9294: openai used v100s right?
researcher2#9294: are google tpu cheaper?
aquajet#7800: I might be wrong but a Google TPU is more expensive than a V100? The benefit of a TPU comes from its constrained use: it only computes nn propagations, so you need a smaller variety of ops in a tpu compared to a gpu (and gpus need fewer than a cpu) due to the specialization. So tpus have much simpler and smaller alus. Since they are smaller, google can fit massive amounts of these alus in parallel, allowing for massive parallelization
aquajet#7800: Pls correct me if I'm wrong
kindiana#1016: at the same volume, tpus should be cheaper than v100s, because they are just 90% bf16 MAC units with all the fat trimmed off, however v100s most likely have much larger volume than tpus, so its uncertain which is cheaper when you amortize the NRE cost
bmk#1476: pet peeve: when people cite the 5 or 12 Mio estimate
bmk#1476: those estimates are completely back-of-the-envelope
bmk#1476: Unfortunately people just take it as fact now
researcher2#9294: oh right?
researcher2#9294: just rumor mill stuff?
doodle#1078: Pardon the small interruption. I run a thing called Pioneer and Andrej Karpathy and others are hosting a “GPT Demo Day” on Wednesday that might be of interest to folks here. Maybe a chance to get funding for related work. https://pioneer.app/gpt3
bmk#1476: woah, interesting
researcher2#9294: nice! |
bmk#1476: @Daj @Sid does this sound interesting?
doodle#1078: We’re picking finalists tomorrow at 11am PT, so if you’re interested please click away on the form before then.
doodle#1078: Should take 2 minutes to fill out.
bmk#1476: The other two thirds of the core team are currently asleep and I do not constitute a quorum
bmk#1476: anyways, it sounds super interesting and i personally at least am interested
doodle#1078: 👍
bmk#1476: though im not sure what we're doing is as much a "GPT-3 project" as "GPT-3"
StellaAthena#3530: Yeah you should definitely put us down @bmk
researcher2#9294: haha
doodle#1078: @bmk definition is as broad as the mind can imagine, I wouldn’t feel constrained.
doodle#1078: TPG can also apply.
researcher2#9294: any chance this is being recorded? wrong timezone for me
doodle#1078: Yes. And if you can’t present due to timezone you can also pre-record a Loom.
bmk#1476: @doodle by applying do i commit to presenting. or is it not so strict?
doodle#1078: No commitment!
bmk#1476: ok
doodle#1078: Other than submitting a Google Form!
bmk#1476: well,, i'll wait till the other members are awake
chirp#4545: random beginner question: why don't we hear more about stuff like language models but for videos?
chirp#4545: is it just too computationally intensive? |
chirp#4545: no idea how to estimate that
aquajet#7800: Like video generation?
chirp#4545: yeah
chirp#4545: i know people publish about it, but nothing has made a big splash, at least nothing i've heard of
bmk#1476: way way expensive
bmk#1476: lms for images is already slow and low-res
Deleted User#0000: @chirp https://openreview.net/pdf?id=H1e5GJBtDr
Deleted User#0000: their technique is simple and lovely, ive used it for images before
Deleted User#0000: https://github.com/lucidrains/axial-attention
Deleted User#0000: @bmk is right, it is still costly at the moment, but i imagine, with hardware improvements, eventually we will attend to everything.
chirp#4545: wait it was rejected??
Deleted User#0000: don't judge a paper by whether it was accepted or not. Transformer-xl was not accepted, for example
Deleted User#0000: turned out to be one of the most useful variants
Deleted User#0000: another example is linear attention, which i ignored because it was from a no-name independent researcher
Deleted User#0000: half a year later, EPFL, Deepmind confirmed that it works
Deleted User#0000: with their own paper
Deleted User#0000: i've stopped trusting paper acceptance as the sole signal
chirp#4545: from my quick skim, i'm surprised by the reviewers... they're all like "this only works for generative models"
Deleted User#0000: results from an experiment i did with axial attention, where i used encoder / decoder to have the attention network translate pixels to json data https://cdn.discordapp.com/attachments/729741769738158194/745144986072252436/Screenshot_from_2019-12-11_18-52-53.png
Deleted User#0000: it works. |
Deleted User#0000: https://cdn.discordapp.com/attachments/729741769738158194/745145050383515768/Screenshot_from_2019-12-11_18-51-50.png
Deleted User#0000: it even learned to read the y-axis, which is rotated lmao
Deleted User#0000: no preprocessing
Deleted User#0000: just, axial attention to decoder (gpt)
Deleted User#0000: i was an instant believer in the technology after that project
chirp#4545: wow didn't even know it was possible to train that sort of thing end to end
Deleted User#0000: attention is all you need
chirp#4545: 👀
bmk#1476: *what*
bmk#1476: this is dark magic
Deleted User#0000: yea, it is. the first time i woke up to the results, i was astounded. i mean, i knew something was going on with Transformers and attention
Deleted User#0000: but i had no idea it was that powerful
Aran Komatsuzaki#5714: Routing Transformer was accepted, whereas Reformer was oral presentation. Turned out RT is superior to Reformer.
Aran Komatsuzaki#5714: *Routing Transformer was rejected
Deleted User#0000: yeah i know, Reformer had some big warts once you really dove into it
Aran Komatsuzaki#5714: i know you know that
Deleted User#0000: yea, i stopped trusting the paper review process
Aran Komatsuzaki#5714: i just assume all reviewers and acs have alzheimer's
Deleted User#0000: yea, looking at the generated text Aurko sent me
Deleted User#0000: RT is definitely superior |
Aran Komatsuzaki#5714: i think workshop is a better place to submit papers to
Aran Komatsuzaki#5714: they are less retarded
bmk#1476: @Deleted User can we plug this into PDF to text
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/745147320588566578/unknown.png
Deleted User#0000: yea, if you have a big corpus of figures and the text, it could be done
Deleted User#0000: doesn't Amazon already have a service that does this kind of thing?
bmk#1476: not even text honestly
bmk#1476: any kind of representation of the data would be better than nothing
Deleted User#0000: ohh i see, i think summarizing the main ideas in the graph will be hard to get data for
Deleted User#0000: the benefit of my toy project was i could use matplotlib
bmk#1476: yeah but getting anything at all about the graphs down
Deleted User#0000: and instantly generate infinite data
Deleted User#0000: i don't really know, but i do know i'd do the same approach, and probably even add linear attention on top
archivus#7382: > just, axial attention to decoder (gpt)
@Deleted User you have a repo I can see?
Deleted User#0000: @archivus it's essentially the image to caption code https://github.com/lucidrains/reformer-pytorch#examples
Deleted User#0000: except for the encoder, i use axial attention
Deleted User#0000: it really was that simple
Deleted User#0000: @archivus are you at Beirut? I think I saw your comment on twitter, but wasn't sure if that was you
researcher2#9294: > this is dark magic |
@bmk what we were talking about earlier. When I saw axial attention I was like 😋
researcher2#9294: Now that I've figured out transformer I can hopefully comprehend lucid's repo.
Deleted User#0000: it's such a clever idea, i really don't understand why it was rejected
Deleted User#0000: except it was an 'under your nose' kind of idea
Ravna#1831: Most new transformer architecture papers use Reformer as a punching bag in their benchmarks😔
researcher2#9294: damn it now I have to read another paper
researcher2#9294: lol
researcher2#9294: we are gonna need bci pretty soon....
researcher2#9294: > results from an experiment i did with axial attention, where i used encoder / decoder to have the attention network translate pixels to json data
@Deleted User Does that feed the json into the decoder side and use axial attention in the encoder stack?
researcher2#9294: have you gone through bigbird yet?
researcher2#9294: https://arxiv.org/pdf/2007.14062.pdf
chirp#4545: @Deleted User i'm curious what the axial transformer added. did the other image encoders work poorly?
Deleted User#0000: @chirp i did it mainly because it was more efficient and I could take a visual embedding from the resnet close to the initial layers without running out of memory
Deleted User#0000: yup exactly, the json was the sequence for the decoder
Deleted User#0000: and the encoder took resnet embedding a couple layers down and processed it axially
Deleted User#0000: yea, Bigbird mainly reaffirmed Longformer's approach
Deleted User#0000: it isn't too helpful in the auto-regressive case because the global tokens don't work in the causal context
researcher2#9294: oh you used resnet for... transfer learning?
researcher2#9294: silly me, I was just gonna feed images straight into the axial attention |
Deleted User#0000: yup, if your images are small enough you certainly can!
Deleted User#0000: i was dealing with 600x600 images
researcher2#9294: ok, I was hoping that even at that scale, the separation of dimensions would solve the quadratic issues?
researcher2#9294: step one is mnist, but after that I'd like big images lol
Deleted User#0000: it's better, but still not enough
Deleted User#0000: you can always try linear attention though https://github.com/lucidrains/linear-attention-transformer i haven't revisited my old project and tried it on that task yet
researcher2#9294: what hardware did you use in the "not enough" context?
Deleted User#0000: and someone using it claims really good results
researcher2#9294: I only have a couple of gpu
Deleted User#0000: which surprises me, because in my tests, linear attention comes with a performance downgrade, around 25% or so
Deleted User#0000: but it kinda works
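The "kinda works" trick being discussed can be sketched in a few lines of numpy (an assumption-laden illustration, not any specific repo's code): replace softmax(qkᵀ)v with φ(q)(φ(k)ᵀv), which is linear in sequence length n because the (d×d) key-value summary is computed once.

```python
import numpy as np

def phi(x):
    # elu(x) + 1 feature map: strictly positive, a common choice for this trick
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(q, k, v):
    # O(n * d^2) instead of O(n^2 * d) in sequence length n
    q, k = phi(q), phi(k)
    kv = k.T @ v                    # (d, d) summary, independent of n
    z = q @ k.sum(axis=0)           # per-query normalizer
    return (q @ kv) / z[:, None]

rng = np.random.default_rng(0)
n, d = 16, 8
q, k, v = rng.normal(size=(3, n, d))
out = linear_attention(q, k, v)

# sanity check: by associativity this equals the explicit O(n^2) form
w = phi(q) @ phi(k).T
explicit = (w / w.sum(-1, keepdims=True)) @ v
assert np.allclose(out, explicit)
```

The performance hit mentioned above presumably comes from the feature map being a cruder similarity kernel than a true softmax, not from any approximation in the matrix algebra itself.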
Ravna#1831: I have another stupid idea: Use transformers to generate the approximate distribution of the latent vector that we need. In this way we can ditch the "V" part of the VAE and forget about how to force it into a normal-like distribution. Just vanilla AE + transformer.
Deleted User#0000: https://magenta.tensorflow.org/transformer-autoencoder
Deleted User#0000: like that?
researcher2#9294: oh wow.....
researcher2#9294: that second performance is generated?!?!?!?
Ravna#1831: yeah like that
Ravna#1831: But I'm more interested in how it compares to VAEs and GANs.
archivus#7382: > @archivus are you at Beirut? I think I saw your comment on twitter, but wasn't sure if that was you
@Deleted User yup! Shit’s about to go down today as well. At least that’s what the rumors say |
archivus#7382: I’m awaiting my Covid test results before resuming work at the hospital. Was exposed to a patient suspected of Covid which the triage nurse missed so I’m ‘safe’ at home and bored
archivus#7382: Fixed the front door and cleaned up the debris. Now just waiting for glass and stuff to come in to get installed so covered windows with nylon
archivus#7382: and fixed parts of the ceiling
researcher2#9294: shit bro... did you get hit by shockwave? second thing I thought of when hearing about the disaster was there's gonna be a lot of head trauma
archivus#7382: > @archivus it's essentially the image to caption code https://github.com/lucidrains/reformer-pytorch#examples
@Deleted User perfect, I have a dataset I want to try this on
archivus#7382: > shit bro... did you get hit by shockwave? second thing I thought of when hearing about the disaster was there's gonna be a lot of head trauma
@researcher2 not directly i was at home at the time studying but like windows shattered near me and pieces of the ceiling fell. I was luckily unharmed
archivus#7382: If I’d been in the living room i would’ve died - the glass shards were embedded everywhere from the ceiling to the walls across the room
archivus#7382: Had a thick 3” wooden door split in half
researcher2#9294: fuck... glad you're ok
archivus#7382: Others weren’t as lucky so tried to help out as much as I could in the hospital
archivus#7382: There’s not much I can do as a third year medical student though - and I was rotating in internal medicine. Surgery and the ER had a lot of cases
archivus#7382: Especially after the subsequent protests
Deleted User#0000: @archivus so crazy, hope you and your friends and family well 🙏
Ravna#1831: I find those claims of "linear" transformer variants, how to say it, at least a bit dishonest? They basically change n * n into m * n but in both practice and their experiments, the sizes of m are chosen to be at the same magnitude of n. If you correlate m with n like that, you can't say your m is a constant.
Deleted User#0000: @Ravna yup you are exactly right, the q(kv) attention shifts the quadratic to the dimension of each head
Deleted User#0000: the Linformer type linear attention does what you describe. they just claim m can be quite small
Deleted User#0000: and make some theoretic justification, but really only tested on length of 4096
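Ravna's n·n vs m·n point, made concrete with a Linformer-style sketch (illustrative only: single head, a random projection standing in for the learned one). The score matrix is n×m rather than n×n, so the "linear" claim hinges entirely on m staying constant as n grows.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def linformer_attention(q, k, v, E):
    # E: (m, n) projection of the sequence axis; learned in the paper,
    # random here for illustration
    k_p, v_p = E @ k, E @ v                      # (m, d)
    scores = q @ k_p.T / np.sqrt(q.shape[-1])    # (n, m), not (n, n)
    return softmax(scores) @ v_p

rng = np.random.default_rng(0)
n, m, d = 64, 8, 16
q, k, v = rng.normal(size=(3, n, d))
E = rng.normal(size=(m, n)) / np.sqrt(n)
out = linformer_attention(q, k, v, E)
assert out.shape == (n, d)
```

If m has to be scaled with n to hold quality (as Ravna suggests the experiments imply), the n×m matrix is quadratic in disguise.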
Sid#2121: > @Daj @Sid does this sound interesting? |
@bmk if ‘replicating gpt-3’ counts as a gpt-3 demo I say we go for it
Daj#7482: That seems like a weird thing to present lol
archivus#7382: https://www.stl-tsl.org/en/watch-the-hearing
archivus#7382: In case you guys want to watch the trial
archivus#7382: Of the assassination of Hariri in 2005
Sid#2121: 🤷 the organiser invited us, our project might just stand out amongst all the react / grammar correction apps lol
archivus#7382: @Sid what’s the project?
Sid#2121: Ours? Replicating gpt-3
Sid#2121: We have a couple side projects but that’s the main one
shgidi#0284: Seems strange to arrange a demo day when access to GPT3 is very limited
Daj#7482: > 🤷 the organiser invited us, our project might just stand out amongst all the react / grammar correction apps lol
@Sid Who would present? I can do it I guess
Sid#2121: I was assuming it would be you lol but I can also step up if we need.
Daj#7482: All good, I'll look into it in a bit
Sid#2121: > All good, I'll look into it in a bit
@Daj 9 hrs left to submit - should i write the submission?
Daj#7482: Sure
Sid#2121: uhhh should i make a video of me banging my head against a table while hunting for bugs? https://cdn.discordapp.com/attachments/729741769738158194/745222967914528818/Screenshot_2020-08-18_at_12.09.26.png
Daj#7482: Yea This is kind of why I was unsure whether we're the kind off thing to be pitching (also since it's not like we're a startup looking for funding or anything)
Sid#2121: It seems like a good opportunity to get important peoples eyes on the project regardless |
Aran Komatsuzaki#5714: Life of a paper:
1. Appears on arXiv
1.001. @ak92501 and I tweet
1.002. @Deleted User makes a repo
2. The author tweets
3. Appears on ML subreddit
4. @hardmaru tweets
5. @ykilcher makes a video
Aleph-0. Rejected by reviewers for "lack of novelty"
0. Conceived by Jurgen in 90s
researcher2#9294: lol @ 0
Aran Komatsuzaki#5714: My goal as an AI journalist is to tweet a paper before it appears on arXiv.
doodle#1078: @Sid go for it! Head banging fine for Loom.
doodle#1078: (Organizer here.)
bmk#1476: @AI_WAIFU https://cdn.discordapp.com/attachments/729741769738158194/745300251136491660/unknown.png
bmk#1476: it's *so much better*
bmk#1476: although interestingly theres overlap near the beginning
Louis#0144: What specifically
aquajet#7800: thats weird |
aquajet#7800: why didnt they converge like the 1024s
shawwn#3694: Ironic
shawwn#3694: What's this application @Sid is considering?
AI_WAIFU#2844: @bmk can you pass me the data, I'll try smoothing.
AI_WAIFU#2844: Also odd that GPT-2 XL has that pathology.
AI_WAIFU#2844: It seems to be the outlier and GPT-3 doesn't seem to have the same problem.
bmk#1476: To be fair gpt3 was likely trained on gutenberg
bmk#1476: I say likely because it's probably embedded in books2
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/745350342782484571/loss-gpt3-3000.npy
bmk#1476: @AI_WAIFU all 4 are packed into the same npy
AI_WAIFU#2844: I'm busy rn, but I'll have some results later today.
bmk#1476: great
bmk#1476: should i kill the data collecting soon and collect data for some other datasets?
bmk#1476: (text8, openwebtext, libgen are all at the top of my list)
AI_WAIFU#2844: I think text8 would be nice, we know that the articles are much shorter there.
bmk#1476: i think owt would be very interesting
AI_WAIFU#2844: We can always come back and collect more on gutenberg later.
bmk#1476: also libgen would give us a hint as to whether oa trained on lg
AI_WAIFU#2844: I'll leave it up to you.
bmk#1476: aight |
shawwn#3694: did we snag books2 yet?
shawwn#3694: if so, could it be tokenized?
StellaAthena#3530: In a fascinating look at AI failure modes, this algorithm accidentally added Ryan Gosling’s face to an image when upscaling: https://petapixel.com/2020/08/17/gigapixel-ai-accidentally-added-ryan-goslings-face-to-this-photo/
shawwn#3694: apparently it's still speculative https://twitter.com/vboykis/status/1290030614410702848
Ravna#1831: Do we now know what exact architecture GPT-3 is using? This reply of one of the authors shows that apparently even the "dense" layer is sparser than the standard one and the paper doesn't cover the details. https://www.reddit.com/r/MachineLearning/comments/hxvts0/d_breaking_the_quadratic_attention_bottleneck_in/fzh7bpd/
shawwn#3694: nice find.
Ravna#1831: gwern found it first and i was just curious about if there's any follow-up
StellaAthena#3530: I asked a follow-up question directly to the author.
Deleted User#0000: @Ravna i'm not completely certain, but they could just be doing matrix factorization on their weight matrices
Deleted User#0000: i don't think they got rid of the N^2
Deleted User#0000: probably good to run this by Aran
Deleted User#0000: ```python
# full-rank projection: one dim x dim weight matrix
queries = nn.Linear(dim, dim)(seq)
```
Deleted User#0000: w/ matrix factorization
Deleted User#0000: ```python
# factorized through a rank-(dim // 8) bottleneck: ~4x fewer parameters
a = nn.Linear(dim, dim // 8)
b = nn.Linear(dim // 8, dim)
queries = b(a(seq))
```
bmk#1476: >for people who want to replicate the work
@StellaAthena not subtle at all, eh? 😛
StellaAthena#3530: Subtly is overrated
Deleted User#0000: matrix factorization is a common trick in deep learning, used in a lot of places
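The appeal is just the parameter count — a quick sanity check of the factorization sketched above (biases ignored; dim = 1024 is an arbitrary example):

```python
# parameter counts for a dense vs factorized projection, biases ignored
dim = 1024
full = dim * dim                  # one dim x dim projection
factored = 2 * dim * (dim // 8)   # down-projection plus up-projection
assert factored * 4 == full       # 4x fewer parameters, rank capped at dim // 8
```

Note the trade-off: the factorized product can never exceed rank dim // 8, which is exactly the "rank reduction" language in the Reddit comment — but rank reduction alone is not sparsity.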
Deleted User#0000: i think Aran has ran experiments on this, but i'm not sure
Deleted User#0000: we'll ask him once he's on
Ravna#1831: > The "dense" attention in GPT-3 is actually sparsely factorized across the heads resulting in a rank reduction of about 8x.
Ravna#1831: The matrix reduction trick above is a standard information-bottleneck move, and there's nothing sparse about it. But he specifically used the adverb "sparsely".
shawwn#3694: yes, that caught my attention too.
Ravna#1831: Also if we do a matrix reduction explicitly by using dim // 8, we won't say "about" 8x.
Deleted User#0000: yea exactly, then that would not be 'dense'
shawwn#3694: if you find hints as to what the sparseness refers to, please post it @Ravna.
shawwn#3694: knowing openai, it might be in reference to their blocksparse work
shawwn#3694: (https://github.com/openai/blocksparse)
shawwn#3694: but I'm not sure what sparse factorization is
Deleted User#0000: @Ravna why wouldn't they have reached for greater sequence lengths if their whole architecture were sparse?
Deleted User#0000: i think there must be some terminology mixup. should reach out to that commenter
shawwn#3694: @StellaAthena mentioned that they asked a followup question.
Deleted User#0000: unless there's some trick out there that i'm not aware of :thonk:
StellaAthena#3530: I just asked if there’s other model details omitted |
StellaAthena#3530: I didn’t ask about the sparsification
shawwn#3694: ah.
shawwn#3694: the sparsification might deserve a separate followup. It sounds very interesting
Ravna#1831: The same author also said a year ago that the OpenAI blocksparse is just a primitive that can be used in all kinds of specific sparsification strategies. The GPT-3 paper doesn't say what kind of sparse attention they do. https://www.reddit.com/r/MachineLearning/comments/cuz215/r_facebookai_releases_adaptive_attention_span_and/ey3soq3/?context=3
shawwn#3694: one way to solve this is to theorize from first principles
shawwn#3694: suppose you were OpenAI. Suppose some profiling, experiment, or other factor led you to turn your own attention to the attention heads
shawwn#3694: what sort of problem or bottleneck might be solved with this "sparsification"?
shawwn#3694: that would give the answer.
shawwn#3694: and the name might be a hint toward one of two aims: either it's sparse updates (a performance detail), or sparse attention (blocksparse)
shawwn#3694: and if it's sparse attention, then the particular strategy would depend on what problems they were seeing, or what they hoped to improve by changing that part of the model
shawwn#3694: so: what aspects of the effectiveness of GPT-2 are determined by the attention heads? probably "all of it" -- attention heads are so fundamental that they affect all aspects of the final result
shawwn#3694: but it could be some specific thing. if so, then it would be possible to reverse engineer what they meant.
Deleted User#0000: > so: what aspects of the effectiveness of GPT-2 are determined by the attention heads? probably "all of it" -- attention heads are so fundamental that they affect all aspects of the final result
@shawwn nothing is really final yet https://arxiv.org/pdf/2008.00623.pdf they do away with attention heads altogether here
Deleted User#0000: i'll run that comment by Aran later
Deleted User#0000: https://www.youtube.com/watch?v=VgqHitvEbR0
Deleted User#0000: lmao
Deleted User#0000: love it
Deleted User#0000: if i go to europe, i'm going to visit Yannic
bmk#1476: [handshake meme] |
Researchers
Conspiracy theorists
Peer review is broken
Daj#7482: https://twitter.com/fchollet/status/1295829787164807168?s=19
Not sure if relevant to us
bmk#1476: >keras
bmk#1476: time to spend several weeks working out what it does
shawwn#3694: um...
shawwn#3694: if "graph execution" refers to "session.run call", then all they're saying is that a while-loop is important
shawwn#3694: and that people often underestimate how important a while-loop is
shawwn#3694: which is true, but I'm shocked that keras might have made that mistake
AI_WAIFU#2844: It looks like unlike GPT-2 ,the benefit of a bigger model seems to be mostly uniformly distributed across tokens for GPT-3. https://cdn.discordapp.com/attachments/729741769738158194/745413491040452728/Figure_12.png
AI_WAIFU#2844: With the shape of the advantage curve being uniformly flat for all models regardless of size.
shawwn#3694: what's the X axis refer to?
shawwn#3694: context size?
AI_WAIFU#2844: yes, I messed up the screen cap
shawwn#3694: nah, sequence position is correct
shawwn#3694: I'm surprised that wasn't true of GPT-2
AI_WAIFU#2844: https://cdn.discordapp.com/attachments/729741769738158194/745413911872012298/Figure_13.png |
AI_WAIFU#2844: For some reason it breaks down for the 1.5B model
shawwn#3694: (there's a typo in the word, by the way; "Postion" instead of "Position")
shawwn#3694: interesting. Do you have a graph of that?
AI_WAIFU#2844: BMK posted it upthread, look at the red curve
AI_WAIFU#2844: at 9:17
pragmaticml#1730: I thought this was going to be a plot of loss advantage vs token rarity which would also be interesting. I.e. do rare tokens disproportionately benefit from large model size?
shawwn#3694: https://cdn.discordapp.com/attachments/729741769738158194/745414675843383388/unknown.png
AI_WAIFU#2844: I don't have the data to tell you that, but it's a good idea for an experiment.
shawwn#3694: why is the graph different? it seems to plot davinci here too
AI_WAIFU#2844: Log
shawwn#3694: ah
AI_WAIFU#2844: also I'm looking at the smoothed difference between the curves
AI_WAIFU#2844: not the curves themselves
shawwn#3694: oh
shawwn#3694: so if I'm reading this right, 1.5B actually has the advantage over ada for short sequences
shawwn#3694: I wonder what specifically the ada architecture is...
AI_WAIFU#2844: Yup, on gutenberg
AI_WAIFU#2844: Might also be the training data
AI_WAIFU#2844: @bmk these plots are very noisy, even with smoothing. If there's a dependence on the loss advantage with model size we'll need more data to isolate it.
pragmaticml#1730: Re: earlier convo and Scott Gray's comment on sparse attention, I agree with @shawwn he's almost certainly referring to their blocksparse library (which he largely wrote -- he's OpenAI's CUDA guy) rather than weight matrix factorization. I think this detail is partially mentioned in the paper when they note that every other attention layer is block local. |
> We use the same model and architecture as GPT-2 [RWC+19], including the modified initialization, pre-normalization,
> and reversible tokenization described therein, with the exception that we use alternating dense and locally banded sparse
> attention patterns in the layers of the transformer, similar to the Sparse Transformer [CGRS19].
Blocksparse attention is one of the few transformer add-ons that OpenAI claims has held up to the power law test (and still provides benefit at upper end of model size)
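The "locally banded" pattern quoted above is easy to picture with a mask (a sketch only — the real Sparse Transformer patterns also include strided/global components, and every other layer stays dense):

```python
import numpy as np

def banded_causal_mask(n, w):
    # True where query i may attend to key j: causal, and within a band of width w
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    return (j <= i) & (i - j < w)

mask = banded_causal_mask(6, 3)
# position 5 sees positions 3..5 but not 2; nothing attends to the future
assert mask[5, 3] and mask[5, 5] and not mask[5, 2] and not mask[2, 4]
```

With a band of width w, each layer's score matrix has only n·w nonzero entries instead of n², which is what blocksparse kernels exploit.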
shawwn#3694: that's ... a *huge* detail to omit from the paper!
pragmaticml#1730: Yeah absolutely agree
bmk#1476: whats more valuable, more gutenberg or libgen
shawwn#3694: libgen.
AI_WAIFU#2844: Start with libgen
shawwn#3694: reasoning as follows: libgen has more diversity
AI_WAIFU#2844: But we should revisit gutenberg later
bmk#1476: ok
bmk#1476: also im logging every single api response i get
bmk#1476: 20 gb so far, lol
AI_WAIFU#2844: It kinda looks like d^2loss/(dtoken*d#parameters) is positive but we can't know for sure yet.
shawwn#3694: api response from which? |
AI_WAIFU#2844: soon we'll have enough data to imitation learn GPT-3
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/745419805305077790/0.json.zst
bmk#1476: a sample
bmk#1476: lemme know if anything in the data looks useful
bmk#1476: also i just figured out how to batch, which lets me extract 2x the data in the same window of time
bmk#1476: squeezing gpt3 like a lemon for that sweet sweet data
AI_WAIFU#2844: API go brrrr
bmk#1476: unfortunately the api only lets you get top 100 logits
bmk#1476: trust me, i tried
AI_WAIFU#2844: That's more than enough to get a strong imitation learning signal.
shawwn#3694: you can get logits?
shawwn#3694: yes
AI_WAIFU#2844: You just need the right loss function to deal with the missing info
shawwn#3694: you're saying you've extracted 20GB of logits?
bmk#1476: @Noa Nabeshima you have api access too right?
bmk#1476: we can *double* the data collecting
bmk#1476: 20gb of compressed json data
bmk#1476: not sure how much of that is the stuff we need
bmk#1476: but im keeping it all for now and sifting thru later
bmk#1476: im now going to try doubling the rate and seeing if oa throttles me |
shawwn#3694: very interesting. where are you downloading to?
bmk#1476: my computer
bmk#1476: i jsut sifted through the terms of use and im not actually sure if its legally ok for me to do this
bmk#1476: should i check in with OA and see if theyre ok?
bmk#1476: that seems like a reasonable course of action right?
JC#3653: what does the TOS say?
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/745423004107931658/unknown.png
bmk#1476: so sticking in libgen is definitely not ok
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/745423194881523764/unknown.png
bmk#1476: im not sure if this counts as scraping
bmk#1476: i dont think so
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/745423313744166942/unknown.png
bmk#1476: they explicitly forbid cloning via logits
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/745423438511865916/unknown.png
bmk#1476: so @AI_WAIFU until further notice im shutting off all the data stuff
shawwn#3694: what was the scraping script like? just "ask for a completion from random parts of libgen"?
bmk#1476: yeah i was asking for a completion of 1 token and also asking for all the logits
bmk#1476: i mean that doesnt count as scraping right?
bmk#1476: im using their api in a supported way
bmk#1476: maybe theyll be ok if we promise only to use the data for calculating logits by context length? |
shawwn#3694: in general, if you have to reason like "well, technically this isn't X, right?" then you probably know the answer 🙂
bmk#1476: ill ask in the slack
shawwn#3694: wait.
AI_WAIFU#2844: I think it'd be best to communicate our intentions to OpenAI
bmk#1476: ok
shawwn#3694: *shrug*
shawwn#3694: go for it. maybe they'll say yes.
shawwn#3694: or maybe they'll lock down the API more tightly, and this opportunity won't exist in the future.
AI_WAIFU#2844: If they say don't clone with logits then we won't clone with logits, but I see nothing wrong with making the plots we've been making
bmk#1476: yeah ok
shawwn#3694: "Hi, I was wondering if the thing you say not to do is actually permitted?" will probably have a predictable outcome
shawwn#3694: which may include the revocation of your beta key.
bmk#1476: i mean we dont need to clone withl ogits if we're planning on building gpt3 from scratch anyways lol
bmk#1476: beta ends in a few weeks anyways
shawwn#3694: it's not that simple. I have always suspected GPT-3's strength is the training data, not necessarily the model
shawwn#3694: and OpenAI has not ever gone in to explicit detail about all of their training data
shawwn#3694: exfiltrating the logits = exfiltrating the data necessary to clone the model, i.e. getting the model without even having to gather the right training data
shawwn#3694: it's a very powerful technique
shawwn#3694: suppose it's true that OpenAI's GPT-3 is so good solely because of some training data that they haven't revealed. What then?
shawwn#3694: you won't have any chance of success no matter how much you're going to replicate GPT-3. |
bmk#1476: for the record im not interested in cloning their model
AI_WAIFU#2844: If you want to stay on the safe side, just keep the logits vs context length without the prompting tokens. That's all the data we need. And I don't think that breaks the ToS
bmk#1476: we can build our own from scratch anyways
AI_WAIFU#2844: ~~We can do it better~~
shawwn#3694: true enough.
shawwn#3694: having github scrapes would be interesting
shawwn#3694: I might fire up the github downloader script.
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/745425817852444762/unknown.png
shawwn#3694: heh. nicely phrased.
bmk#1476: its not even wrong
bmk#1476: im *not* interested in cloning their model
bmk#1476: and the reason im holding onto the data is because if like somewhere down the line oh no i actually need xyz data then id have to hit their api 100k times again per model per dataset
bmk#1476: honestly this is all a distraction i should be working on our model anyways
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/745426673368825966/unknown.png
AI_WAIFU#2844: If you can I would also give them a link to a plot we've made. Post it on imgur or somethimg
bmk#1476: https://media.discordapp.net/attachments/729741769738158194/745414675843383388/unknown.png this one?
AI_WAIFU#2844: yes
shawwn#3694: that plot is pro, by the way
shawwn#3694: I thought it was from a research paper
AI_WAIFU#2844: If we're quick it will be. |
bmk#1476: lol
AI_WAIFU#2844: ICLR deadline is Sept 28th right.
AI_WAIFU#2844: ?
bmk#1476: https://aideadlin.es
AI_WAIFU#2844: I needed this
bmk#1476: yes
shawwn#3694: there's no deadline to writing it yourself 🙂 https://www.docdroid.net/faDq8Bu/swarm-training-v01a.pdf
bmk#1476: indeed it is the 28th
shawwn#3694: of course, you'd probably want to actually finish it...
bmk#1476: why do all the good conference deadlines cluster together
bmk#1476: Most good conferences in ML are almost interchangeable anyways
Noa Nabeshima#0290: > @Noa Nabeshima you have api access too right?
@bmk yeah
bmk#1476: ive changed my mind about this and im waiting for the go ahead from OA before i do anything further
Noa Nabeshima#0290: 👍
StellaAthena#3530: @shawwn when I go to read that, it asks me to download a VM app from the App Store. Is that legit or sketch?
shawwn#3694: oh god, really?
shawwn#3694: and till now the link has served me well.
StellaAthena#3530: Just went to duplicate it and got this instead https://cdn.discordapp.com/attachments/729741769738158194/745443629358776420/image0.png
StellaAthena#3530: Is there a reason you don’t use Overleaf? |
shawwn#3694: https://www.shawwn.com/docs/2020-01-swarm-training.pdf
shawwn#3694: if that 404's, then https://www.shawwn.com/docs/2020-01-swarm-training.pdf?1
shawwn#3694: the reason I liked docdroid is that sharing the link always gives an inline preview (i.e. that discord embed)
shawwn#3694: ending up at a weird sketchy security update url is unfortunate
bmk#1476: Weird, the doc droid link works for me no issues
shawwn#3694: yeah, I've never seen that before
bmk#1476: Maybe it's user agent targeted?
shawwn#3694: it might be a malicious ad. Those pop up occasionally
shawwn#3694: ads which use JS to hijack the page
bmk#1476: Oh right I have a pi hole and all
helen 🐳#5160: chiming in on that mysterious `experimental_steps_per_execution` linked earlier: he’s referring to the way that training steps move between the TPU and the host. a lot of random TPU code on the internet is actually very bad and moves between the TPU and the host with every training step, which is very slow. in keras this value is set to 1 by default, which means it's moving between the TPU and the host every single step. with a custom training loop in tf2 you can control this by putting `strategy.run` into a tf.range loop. you can recognize whether this is happening by the way that in the cloud profiler it will show up as a mysterious chunk of dead time per step.
(i build big neural nets on TPUs professionally and my teammates are close to some peeps at Brain. i’m mostly following this discord because it’s one of the few places where people are using TPUs at scale 🤠)
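helen's point can be captured with a toy cost model (all numbers invented purely for illustration, not real TPU timings): each host↔device dispatch pays a fixed overhead, which batching k steps per dispatch amortizes away.

```python
def total_time(num_steps, steps_per_execution, step_time=1.0, dispatch_overhead=5.0):
    # each dispatch pays a fixed host<->device overhead; running k steps per
    # dispatch divides the number of dispatches (and thus the overhead) by ~k
    dispatches = -(-num_steps // steps_per_execution)  # ceil division
    return num_steps * step_time + dispatches * dispatch_overhead

# one step per dispatch: overhead dominates
assert total_time(1000, 1) == 1000 + 5 * 1000
# 100 steps per dispatch: overhead is nearly gone
assert total_time(1000, 100) == 1000 + 5 * 10
```

The "mysterious chunk of dead time per step" in the profiler corresponds to the per-dispatch overhead term in this model.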
bmk#1476: Since we're using TF1 is it still an issue?
bmk#1476: I thought with TF1 the graph is compiled once and left on tpu
Aran Komatsuzaki#5714: @Deleted User Matrix factorization generally doesn't work. For example, if you matrix-factorize part of the attn layer or the ffn, that part becomes a perf bottleneck.
helen 🐳#5160: in tf2 the graph is also compiled to XLA by default and put onto the TPU (i don't even know if it's possible to use eager on TPUs, or why you would)
helen 🐳#5160: i thought tf1 had something like experimental_host_call_steps or whatever, though i'm not actually sure. i've never run on TPUs w tf1.
bmk#1476: Interesting, so what can we do to take advantage of this optimization? We use Estimator if that matters
bmk#1476: Ah |
bmk#1476: also do you have any ~~insider~~ info on whether GPT-like (autoregressive, text generation) models are being worked on at google?
bmk#1476: We've heard that the mtf guys have tried GPT3 *size* models before but nothing more specific about whether thay're actually thinking about making a GPT-3 like model
bmk#1476: (or bigger, ofc)
helen 🐳#5160: i don't use estimator unfortunately, i find the tf ops and a custom training loop to be much easier to optimize 😦
helen 🐳#5160: i can't comment on the other question unfortunately!
bmk#1476: I'm not actually sure why we use Estimator, but nobody wants to break the code by changing it lol
bmk#1476: And haha it was worth a shot
StellaAthena#3530: > I'm not actually sure why we use Estimator, but nobody wants to break the code by changing it lol
@bmk There are plenty of governments deploying code they don’t understand for this reason lol.
bmk#1476: i dont doubt it
bmk#1476: also @AI_WAIFU the oa scaling paper has a graph thats basically identical to ours
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/745451135262916768/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/745451178631888986/unknown.png
bmk#1476: the odd thing is that this looks very different from our graph
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/745451311960555530/2U02S0y.png
bmk#1476: oh maybe not very different
bmk#1476: but slightly different
bmk#1476: so im not sure what kind of novel analysis we can do
StellaAthena#3530: @bmk what paper is this from?
bmk#1476: https://arxiv.org/abs/2001.08361 |
StellaAthena#3530: Is it easy to generate the second plot from that paper with our data?
AI_WAIFU#2844: Well shit, I missed that. I think the difference comes from the data distribution we're testing on. They've probably got a smaller fraction of long documents. They also go up to a CTX of 1024. Although it's nice to see that the slope of the curves in figure 20 get more aggressive as the #parameters goes up.
bmk#1476: so do we still have anything novel or nah
Aran Komatsuzaki#5714: > @bmk if you're talking about the optimal transport for long range context stuff, it's intellectually interesting but I would be surprised if it went anywhere. Apparently many of the long term context methods were tested by OpenAI and intentionally ignored because the scaling coefs they computed from experiments (https://arxiv.org/abs/2001.08361) on a range of small model sizes indicated that it wouldnt provide benefit at the extreme upper end of params count.
@pragmaticml I hope they'll publish the results soon. My paper alone won't kill off all the researchers spending countless human/GPU-hours on yet another efficient-attention variant.
Aran Komatsuzaki#5714: But I'm glad Aurko Roy (the first author of Routing Transformer) is also working on MARGE. We exchanged some insight about it.
Aran Komatsuzaki#5714: @Deleted User
Deleted User#0000: Let's build it Aran! After I figure out what's going on here lol
Deleted User#0000: Closing in on it..
Deleted User#0000: He told me he was working on Realm like stuff
AI_WAIFU#2844: Eh, not really. It might be interesting to compare how other architectures behave with large contexts as this would be hidden by reporting a single number. But that's a lot of work. This is useful for the original reason I started this which is to choose the context length, but beyond that the only thing novel would be to compare the curves on different datasets.
Aran Komatsuzaki#5714: Yeah, Realm \sim MARGE
bmk#1476: @Deleted User speaking of which 13 diverged
Deleted User#0000: @bmk check the #gpt-neox-devs channel
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/745467823358214164/unknown.png
Deleted User#0000: i made some progress 😄
Deleted User#0000: yea, I think Realm will take a lot of resources to work out, but maybe it can be trained on a smaller representative task?
Deleted User#0000: just to see proof of concept. i'm curious about the neural retriever
Aran Komatsuzaki#5714: after finishing the experiments i said i was too lazy to do, i'm going to work on finishing the extended marge and see if it works or not
Deleted User#0000: are you using faiss for fetching? |
Aran Komatsuzaki#5714: imo realm's integration of different samples is inferior to that of marge
Aran Komatsuzaki#5714: yes faiss
Aran Komatsuzaki#5714: it combines at the output softmax level
Deleted User#0000: are you doing this on your own rig?
Deleted User#0000: or one of georgia tech's servers?
Aran Komatsuzaki#5714: marge combines at each layer ... using more layers for integration should be superior
Aran Komatsuzaki#5714: i'm using my meager single v100
Deleted User#0000: i see, i need to read more papers on retrieval so i can abstract it correctly
Aran Komatsuzaki#5714: from aws lol
Deleted User#0000: in my head first
Aran Komatsuzaki#5714: you don't really have many papers to read other than marge and a few others for this, since it's very new.
Deleted User#0000: ok, well, once i get things underway here, we'll build something together. retrieval like systems actually require some engineering
Deleted User#0000: other than tensor work
Deleted User#0000: if you think about it, we are essentially giving neural nets a search engine lol
Deleted User#0000: DL mimics life..
Aran Komatsuzaki#5714: yeah i guess google was doing ai before it became hot
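A minimal sketch of the faiss-style fetching discussed above, using brute-force numpy in place of a real faiss index (the function name `retrieve` and the toy dimensions are illustrative, not from MARGE; faiss replaces the exhaustive matmul with an approximate index):

```python
import numpy as np

def retrieve(query_vecs, memory_vecs, k=4):
    """Brute-force inner-product retrieval: for each query embedding,
    return the indices of the k most similar memory embeddings."""
    scores = query_vecs @ memory_vecs.T          # (n_query, n_memory)
    topk = np.argsort(-scores, axis=1)[:, :k]    # highest scores first
    return topk

rng = np.random.default_rng(0)
memory = rng.normal(size=(1000, 64)).astype("float32")
# slightly perturbed copies of rows 0..2 should retrieve themselves first
queries = memory[:3] + 0.01 * rng.normal(size=(3, 64)).astype("float32")
idx = retrieve(queries, memory, k=4)
print(idx[:, 0])
```

With faiss, the same lookup would be `index = faiss.IndexFlatIP(64); index.add(memory); index.search(queries, k)`.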
Deleted User#0000: what are Aurko's thoughts on sparse attention
Deleted User#0000: is he laying that to rest?
Aran Komatsuzaki#5714: we didn't talk about sparse attn, but he took a look at my paper
Aran Komatsuzaki#5714: given that he's working on retrieval stuffs, he should have a similar idea as ours |
Aran Komatsuzaki#5714: also madison said openai folks are also interested in retrieval
Aran Komatsuzaki#5714: and given what he said, they aren't interested in most efficient attn models
Aran Komatsuzaki#5714: probably except for block sparse one, but they'll change their mind if so
Deleted User#0000: yea, with fusion-in-decoder results, and people complaining that GPT-3 cannot answer some questions accurately, they must be trying to solve that
Aran Komatsuzaki#5714: yup
Aran Komatsuzaki#5714: recently, i haven't got any good paper to tweet to get more followers to get more voice on twitter, so i had to come up with a meme to tweet. testing the quality of meme is important, and i use this channel to see if a given meme is good or not by observing the reaction from people.
Deleted User#0000: lol
Deleted User#0000: you are starting to become the new hardmaru
Aran Komatsuzaki#5714: well google brain tokyo office needs more people
Aran Komatsuzaki#5714: i recently read a paper that argues being tweeted by people with more followers is important for a paper to be more influential, so i'm trying to get more followers to make my paper and whatever interesting to me influential.
JC#3653: Just use GPT-3 to make memes for you. I heard it is good at following trends.
Aran Komatsuzaki#5714: as soon as they perform human-level i'll use that lol
bmk#1476: you can borrow my memes
bmk#1476: as long as you give attribution
Aran Komatsuzaki#5714: when i use your meme, i'll credit you so that you'll get followers, too
Aran Komatsuzaki#5714: yes
bmk#1476: yeah awesome
bmk#1476: https://twitter.com/nabla_theta/status/1290842239363473409 an especially suitable one since its in japanese
Aran Komatsuzaki#5714: my favorite meme of yours is that vitamin one, but that's japanese, so my followers don't understand lol
Aran Komatsuzaki#5714: yeah this one |
Deleted User#0000: Elon once hired some meme-lord to do social media
Deleted User#0000: so it's become a legit profession in some sense lol
bmk#1476: lol
Aran Komatsuzaki#5714: i never tweet in japanese, since there aren't many japanese people working on transformer
bmk#1476: im too polyglot to *not* tweet in other languages once in a while
bmk#1476: its more fun that way
bmk#1476: spice it up
Deleted User#0000: when i visited Japan, i went to a javascript meetup
Deleted User#0000: in Tokyo somewhere
Aran Komatsuzaki#5714: cool
Deleted User#0000: it was fun. everyone bowed to each other
Aran Komatsuzaki#5714: haha
bmk#1476: @shawwn just saw your post here https://twitter.com/theshawwn/status/1295873546242023424
bmk#1476: and i have to say, man, i really do think oa is barking up the wrong tree
bmk#1476: their intentions are good but
shawwn#3694: yes.
shawwn#3694: it was a mistake to give in to social pressures.
bmk#1476: the second order effect here is it pours more fuel on the push for actually-open gpt3 (i.e us) and will probably hurt their goals in the long run
bmk#1476: also i think this whole thing is a red herring
bmk#1476: we need to be focussing on alignment, not models saying offensive stuff |
bmk#1476: gpt-x isnt very good at astroturfing, at least not good enough to replace human astroturfers
bmk#1476: then again im one of the few who actually think this is a path to agi
shawwn#3694: funny you mention that. GPT-3 is getting pretty close to beating human astroturfers
shawwn#3694: it surprised me.
bmk#1476: pretty close but
shawwn#3694: my experience was with GPT-2 1.5B
shawwn#3694: it still requires human massaging though, yeah
bmk#1476: wouldnt be surprised if bad actors still decide it's just not worth it
bmk#1476: there are cheaper ways to generate floods of propaganda
bmk#1476: and there are better ways to generate high quality propaganda
shawwn#3694: your world state theory is still one of the more interesting theories in ML, by the way. you should pursue it
shawwn#3694: try adding some sort of world simulation to GPT
bmk#1476: and i think gpt3 doesnt really come out on top anywhere useful in the price vs quality curve
shawwn#3694: i.e. actually implement your blog post
bmk#1476: hmm
bmk#1476: im not sure gpt3 is even good enough
bmk#1476: certainly gpt2 isnt
shawwn#3694: pytorch-but-actually-tensorflow is nice to have, but world simulation would really open doors
shawwn#3694: dunno. it can play chess
bmk#1476: hmm |
bmk#1476: what if we make a world-modelling dataset
bmk#1476: That would make my idea much more feasible
shawwn#3694: that's an interesting idea. What might it look like?
bmk#1476: Essentially the things I described in my post but as an explicit curriculum
bmk#1476: lemme find an example
bmk#1476: >“I go to ebay. I look up paperclips, sorted by price ascending. I spend $100 on the first item on the list. How many paperclips will I have?”
bmk#1476: that kind of thing
bmk#1476: how do we build a dataset of queries like that and the answers
shawwn#3694: conceptually, it's similar to lucidrains' predict-number-plus-one task
bmk#1476: also speaking of the blog post it didnt really take off
shawwn#3694: have a start token, then the state, then an action token, then the results
shawwn#3694: don't sweat it. it takes some time for people to notice
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/745490254214987874/unknown.png
bmk#1476: next to the gpt3 post its a tiny bump
bmk#1476: i dont even like my gpt3 post particularly
bmk#1476: i rushed it out in an afternoon
bmk#1476: posts i actually put thought into never seem to take off
shawwn#3694: if you keep writing thoughtful posts, they will.
shawwn#3694: and then when people notice that next post, they'll notice your old thoughtful work too.
bmk#1476: hmm |
bmk#1476: hopefully
shawwn#3694: (if it's somewhat easy to get to.)
shawwn#3694: anyway. I may not be the best researcher in the ML scene, but I'm not the worst, and I liked it. Gwern liked it too, I think, otherwise he would have said harsh things about it.
shawwn#3694: write for yourself, not the world. external validation feels nice but ultimately it's just noise.
bmk#1476: good point
bmk#1476: also my opinions are generally very gwern-leaning
shawwn#3694: I think the paperclips dataset might look like:
START <list of prices> ACTION <wallet amount> <paperclips bought> END
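shawwn's proposed format can be sketched as a tiny example generator; the field names (`wallet=`, `bought=`) and the greedy buy-the-cheapest rule are assumptions for illustration, mirroring bmk's ebay-paperclips query:

```python
def make_example(prices, wallet=100):
    """Render one synthetic world-model training example in the
    START/ACTION/END format sketched above (format is hypothetical).
    The 'answer' is how many items the wallet buys at the lowest price."""
    prices = sorted(prices)
    bought = wallet // prices[0]   # spend it all on the cheapest listing
    return f"START {prices} ACTION wallet={wallet} bought={bought} END"

print(make_example([7, 3, 12], wallet=100))
# → START [3, 7, 12] ACTION wallet=100 bought=33 END
```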
bmk#1476: oh, i dont want to make a paperclips dataset
bmk#1476: i mean more general
shawwn#3694: yeah. but focusing on specific cases can lead to generality
shawwn#3694: what are some other examples you'd want to handle?
bmk#1476: also i dont want the format to be too rigid
bmk#1476: that kind of defeats the entire point
bmk#1476: of using gptx
bmk#1476: the whole idea is you can use language as a really flexible, unconstrained generic Idea Conveyer™
shawwn#3694: the interesting thing might be, can we think of a way for it to remember things it's seen before?
shawwn#3694: where "seen" is built up at runtime |
shawwn#3694: arguably that's the context window
shawwn#3694: so one way to do that might be to say, the first half of the context window is devoted to inputs it's seen before
shawwn#3694: but then you'd want to let the model decide when to remove and add things
shawwn#3694: from that first half
shawwn#3694: and also where.
bmk#1476: I still think the world model is the crucial exciting bit
shawwn#3694: hm. doesn't memory lead to world modelling?
shawwn#3694: it almost implies world modeling by definition
shawwn#3694: but, fair enough -- what would a world modeling gpt look like, in detail?
shawwn#3694: if I understand you correctly, the theory is that you might be able to solve it by generating some sort of dataset that helps it conceptualize the world
shawwn#3694: perhaps if it encodes enough tasks, it can do that.
bmk#1476: The idea is that eventually it'll just be able to do so, but maybe with a custom dataset it can be learned with a smaller model
shawwn#3694: yeah. it's an exciting idea, worth trying.
bmk#1476: Should I xpost to alignment forum
shawwn#3694: dunno. most people shoot down interesting new ideas
shawwn#3694: not sure about the alignment forum, but, suppose the idea is good. is posting it there really going to change anything?
bmk#1476: Alignment forum will most likely be like "yeah this is just an edge case of xyz we've known about forever"
shawwn#3694: a demo would sell the point and refine the idea. though that's easier said than done
bmk#1476: Alignment forum is full of people theorizing
shawwn#3694: r/MachineLearning might like it. Was it submitted there? |
bmk#1476: 2 upvotes lol
shawwn#3694: Hmm
bmk#1476: also it made it to hn front page but only very briefly, unlike my gpt3 post which ignited hundreds of comments worth of debate
bmk#1476: honestly im surprised
bmk#1476: i thought "HOW TO BUILD AGI!!!111" would be so much more debate sparking
shawwn#3694: looking over the subreddit, that's actually about on par with most research posts
bmk#1476: lol
shawwn#3694: big players like google get lots of upvotes
shawwn#3694: the rest seem to get ~5
Aran Komatsuzaki#5714: Yeah that's why I don't often post on r/ml.
bmk#1476: it feels like number of upvotes is inversely correlated to interestingness
Aran Komatsuzaki#5714: Yeah. You can get more upvotes with more beginner stuffs
bmk#1476: the latest schmidhuber drama: 5000 upvotes
someone makes simple demo using gpt3: 2000 upvotes
google paper: 100 upvotes
regular paper: 3 upvotes
Aran Komatsuzaki#5714: Exactly
bmk#1476: its actually even worse
bmk#1476: top this month: https://cdn.discordapp.com/attachments/729741769738158194/745496984948965376/unknown.png
Aran Komatsuzaki#5714: Also I want to interact with real researchers with experience, not some random undergrad. |
bmk#1476: unfortunately this discord has a high concentration of random undergrads
Aran Komatsuzaki#5714: The point is that I want to see their real name.
bmk#1476: n=12 https://cdn.discordapp.com/attachments/729741769738158194/745497183159320576/unknown.png
bmk#1476: i mean, except for yall from GT
Aran Komatsuzaki#5714: 😂
shawwn#3694: What's GT?
bmk#1476: georgia tech
bmk#1476: apparently a large number of people here are from gt
shawwn#3694: oh. you went to school with lucid?
Aran Komatsuzaki#5714: Yeah lol
aquajet#7800: > unfortunately this discord has a high concentration of random undergrads
Someone asked for me?
bmk#1476: yeah but youre gt so you get a pass
Aran Komatsuzaki#5714: Yeah
bmk#1476: this is so sad half of yall are europeans and can get literally anywhere in europe cheaply and easily, and the other half are from gt and then im just out here in the middle of nowhere
bmk#1476: if we ever have meetups after The Plague Times™ i will be very lonely
kindiana#1016: at least you are not in australia 😛
bmk#1476: fair
bmk#1476: it could be worse
bmk#1476: although these parts arent much denser than aus |
kindiana#1016: (I am atm lmao)
Aran Komatsuzaki#5714: Maybe Australians are happy that they don't live in america
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/745498635504189600/unknown.png
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/745498659130572840/unknown.png
bmk#1476: basically the same
kindiana#1016: australia is pretty good for The Plague Times, but otherwise pretty boring (I would be studying in the us rn if not for _second wave_)
bmk#1476: logit function wave
Aran Komatsuzaki#5714: I'm in ft but live in Japan to avoid the downside.
Aran Komatsuzaki#5714: *gt
bmk#1476: also yeah i aspire to not live in canada after The Plague Times are over
Aran Komatsuzaki#5714: Living in a warmer place is highly recommended
bmk#1476: yes
bmk#1476: -38 while waiting for your bus outdoors which is an hour late is not recommended
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/745500659133972480/unknown.png
Noa Nabeshima#0290: > unfortunately this discord has a high concentration of random undergrads
oy
Noa Nabeshima#0290: Do any of you older folks have career/life advice, things to do or not do?
Noa Nabeshima#0290: I have no idea what DL at scale career paths look like outside of grad school.
Noa Nabeshima#0290: I imagine myself becoming a programmer and then building up legible experience for ML?
Noa Nabeshima#0290: I'd actually love to hear your life stories, probably in off topic |
bmk#1476: ~~the dream: eleutherai becomes wildly successful, we all get showered with job opportunities~~
StellaAthena#3530: > I have no idea what DL at scale career paths look like outside of grad school.
@Noa Nabeshima 90% of DL jobs are the same as any other SWE job: maintaining systems.
StellaAthena#3530: Eh, that’s unfair. 90% is either maintaining systems or using buzzwords to confuse people and then sneakily solving problems via a linear regression.
cagoose#3438: Any chance I can contribute compute? I have some GCP credits that I’ve been using for gpt-2 training, but definitely want to help with gpt-3 as well. How do I get started?
shawwn#3694: Hey surya 🙂 nice to see you here.
shawwn#3694: There's a project roadmap in the channel description
shawwn#3694: most of the project work happens in #gpt-neox-devs
shawwn#3694: It's the middle of the night, so not many people are around. But the pinned message in #deleted-channel has some info: https://discordapp.com/channels/729741769192767510/745018020069638174/745022007963287712
shawwn#3694: To get started, ask @Daj for access to the repository when he's around. (Post your github username)
shawwn#3694: the code is mesh tensorflow, which is a bit different than openai's GPT-2 codebase
shawwn#3694: In terms of training resources, it's done via TPU pods provided by TFRC. But other resources are useful, such as CPU cores for gathering and cleaning lots of training data. @bmk heads up that effort
shawwn#3694: beyond that, people contribute in various ways. @kevinw has made some helpful graphs; @StellaAthena is working on https://sites.google.com/view/eleutherai/ ; and @thenightocean made a slick mockup for a TPU dashboard https://cdn.discordapp.com/attachments/729741769738158194/745536541056565249/eai_dashboard.png
shawwn#3694: @Sid, @Deleted User, and @Daj have done much of the coding, and they probably have the deepest understanding of mesh tensorflow at the moment.
bmk#1476: hey dont forget i also did a lot of work on mtf
bmk#1476: even though i absolutely hate it
shawwn#3694: sorry, yes
shawwn#3694: as you can see, my contributions consist of being clueless.
shawwn#3694: and the occasional tweet.
cagoose#3438: Sick; I’ll dm them in the morning! |
Aran Komatsuzaki#5714: Don't forget that @Aran Komatsuzaki also contributes by chatting random nonsense.
Deleted User#0000: John Hopfield himself... https://arxiv.org/abs/2008.06996
Aran Komatsuzaki#5714: Pretty much everything we use in neural net is attention
Aran Komatsuzaki#5714: feedforward is attention, too
Aran Komatsuzaki#5714: MoE can be thought of as attention
Aran Komatsuzaki#5714: PKM too, and so is retrieval as hard attention
Aran Komatsuzaki#5714: Vaswani et al. said attention is all you need, since almost everything we use is some sort of attention.
Ravna#1831: Attention is just softmax and softmax is basically a continuous if-then-else.
Aran Komatsuzaki#5714: you can use other activations like relu and still claim it's "attention"
Ravna#1831: Attention can also be seen as an optimization trick to improve MLP because all it does is reduce the width of the FC layer from "context size" * "embedding size" to just "embedding size".
Ravna#1831: Everything is MLP plus optimization tricks.
Deleted User#0000: Yup, agreed. Feedforwards and MoEs are a kind of implicit attention
Aran Komatsuzaki#5714: Yeah that's why I call them implicit memory
Aran Komatsuzaki#5714: lol
Deleted User#0000: Relu and Softmaxes are both maxes, one in activation space, and the other for scalar
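The "attention is a continuous if-then-else" reading above can be made concrete: single-query dot-product attention is just a softmax-weighted average of the value rows, and a large score gap pushes the weights toward a hard one-hot selection. A minimal numpy sketch (toy shapes, not any particular model):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, K, V):
    """Single-query scaled dot-product attention: softmax scores over
    the keys select a convex combination of the value rows."""
    w = softmax(q @ K.T / np.sqrt(K.shape[1]))
    return w @ V, w

K = np.eye(3)                      # 3 orthogonal keys in a 3-d space
V = np.array([[1.0], [2.0], [3.0]])
q = 10.0 * K[1]                    # query strongly matching key 1
out, w = attention(q, K, V)
print(w.round(3), out.round(3))    # weights ≈ one-hot on key 1, out ≈ 2
```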
Aran Komatsuzaki#5714: wait, that's not why.
Aran Komatsuzaki#5714: nvm
Deleted User#0000: The hopfield network connection suggests the attention is an energy update rule to an old biological model for associative memory
Deleted User#0000: Which really makes my head spin. I've always wondered why LMs have such a prodigious memory
Aran Komatsuzaki#5714: well, i guess i can mention something like that in my paper to justify some of my claims. |
Aran Komatsuzaki#5714: motivation to call some stuffs memory etc
Aran Komatsuzaki#5714: i'm not sure if hopfield net is a good way to understand memory tho
Deleted User#0000: Yea, the connection is a bit ad-hoc
Aran Komatsuzaki#5714: well i guess there was some biological study of hopfield, so i guess biologists would be happy.
Aran Komatsuzaki#5714: happy about the analogy of hopfield ~ attn
Deleted User#0000: Haha, yea, it makes the world feel a bit more certain
Deleted User#0000: It feels good to be certain of something
Deleted User#0000: K ttyl
shawwn#3694: Nice bug hunting
Aran Komatsuzaki#5714: good night
Semantic Aberration#3692: @shawwn
> adding memory to GPT has been a long time ambition of mine. I hope someone does it
Some people tried and failed https://arxiv.org/abs/2006.11527 while the Transformer-XL and Compressive Transformer crews somewhat delivered
shawwn#3694: hah https://cdn.discordapp.com/attachments/729741769738158194/745598972508241920/unknown.png
shawwn#3694: interesting.
shawwn#3694: so they tried devoting a portion of the context window to be long-term memory
shawwn#3694: Oh, nope.
shawwn#3694: https://cdn.discordapp.com/attachments/729741769738158194/745599392773439498/unknown.png
shawwn#3694: it's the attention layer.
shawwn#3694: er. yeah, it's context. so part of the tokens in the context window are memory, and the rest are input |
shawwn#3694: https://cdn.discordapp.com/attachments/729741769738158194/745600271471869962/unknown.png
shawwn#3694: but... if it has memory, they'd need to change how the training happens
shawwn#3694: it can't just randomly sample the dataset
shawwn#3694: in order for the model to learn to make associations over time, it would need to train sequentially
shawwn#3694: the first 1024 tokens out of N, then tokens 2 through 1025, and so on
shawwn#3694: like, of course this failed. they don't give the model any opportunity to learn how to use its memory
shawwn#3694: choosing random samples isn't going to make it figure out how to update the memory tokens
Aran Komatsuzaki#5714: @shawwn You want to check the performance. It's not so impressive.
Semantic Aberration#3692: @shawwn
> it would need to train sequentially
> the first 1024 tokens out of N, then tokens 1 through 1025, and so on
That's Transformer-XL, it worked but gains are not great and training is likely hard https://arxiv.org/abs/1901.02860
Semantic Aberration#3692: I like the idea of memory too. It needs to be done properly (how?).
shawwn#3694: ahh.
shawwn#3694: @Semantic Aberration thank you for pointing that out
Aran Komatsuzaki#5714: nvm didn't notice what you said
Aran Komatsuzaki#5714: this kind of memory can be considered as a recurrent model at TBPTT-level
Aran Komatsuzaki#5714: Compressive Transformer is another one
kindiana#1016: @shawwn a cheap trick is to randomly sample batches which are much larger than context window, and then train on those sequentially
shawwn#3694: Oh, good point. You could sample 2048 rather than 1024, then turn that into a bunch of batches. |
shawwn#3694: the thing is, I'm not sure you'd want to batch
shawwn#3694: batching implies averaging the gradients, and it seems like it needs to understand how to use memory in a sequential way, not a parallel way
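kindiana's trick above can be sketched as a minimal chunker (function name and shapes are illustrative): sample a span longer than the context window, then feed its ctx-sized chunks in order, so recurrent state or memory tokens can carry forward within the span, as in Transformer-XL.

```python
def sequential_chunks(tokens, ctx=1024, n_chunks=2):
    """Take a sampled span of n_chunks * ctx tokens and yield its
    ctx-sized chunks in document order for sequential training."""
    span = tokens[: ctx * n_chunks]
    for i in range(0, len(span), ctx):
        yield span[i : i + ctx]

toks = list(range(10))
print(list(sequential_chunks(toks, ctx=3, n_chunks=2)))
# → [[0, 1, 2], [3, 4, 5]]
```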
Aran Komatsuzaki#5714: @Deleted User @bmk @Sid @StellaAthena @thenightocean
Btw Delip said as follows regarding EleutherAI:
> Hey, pretty vibrant community. Glad I joined and look forward to participating/helping. I haven’t turned on notifications for the discord app on my phone, but if you want to bring something to my attention in a timely way, this DM works great. Cheers.
If you guys want to bring something to his attention, please let me (probably @Daj or @shawwn also works) know, so that one of us can contact him in a timely manner. I'm not really sure how he can help, but if any of you has any idea, that would be great.
aquajet#7800: I contribute by asking questions and suggesting emotes
Deleted User#0000: @Semantic Aberration I think I have a decent implementation of transformers with memory https://github.com/lucidrains/memory-transformer-xl it is basically compressive transformers, except i use linear attention itself to update the memory
Deleted User#0000: i also do gating when updating the memory states
Deleted User#0000: Tried running Karpathy's mingpt on colab
Deleted User#0000: So far the only way I managed to make it work was to turn the gpt into a 25 iq version of itself, with just 2 layers and 2 heads.
Deleted User#0000: Everything else says cuda out of memory.
Sid#2121: Lol. What’s supposed to be the advantage of mingpt vs regular gpt?
Aran Komatsuzaki#5714: @Sid It's Karpathy brand.
Aran Komatsuzaki#5714: Same reason why people buy Apple products
Sid#2121: *yeah, what freaking losers.* He types on his apple macbook
Arbot360#5033: Next: training GPT on a corpus of GPT implementations...
StellaAthena#3530: > Next: training GPT on a corpus of GPT implementations... |
@Arbot360 Sometimes I wonder what percentage of GPT-3’s training data was produced by GPT-2
StellaAthena#3530: And what kinds of training biases this could cause
Arbot360#5033: Sounds like a paper to write, finding the fixed point of GPT self-training.
Arbot360#5033: If only OpenAI had #the-rad-lab , they could calculate that percentage.
StellaAthena#3530: Something something MCMC
Arbot360#5033: How convenient: https://github.com/openai/gpt-2-output-dataset
StellaAthena#3530: Hmmm
StellaAthena#3530: @bmk how much of the web-scraped GPT-3 training data is publicly available?
bmk#1476: ?
bmk#1476: afaik none of it
bmk#1476: oa is very *closed* about this kinda stuff
StellaAthena#3530: That’s what I thought, but I figured you’d know better than me
Arbot360#5033: #ClosedAI
StellaAthena#3530: #COVIDAI
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/746174151647297646/unknown.png
Noa Nabeshima#0290: Any predictions about the cost per completion?
bmk#1476: ¯\\_(ツ)\_/¯
bmk#1476: My guess is that they're not going to price it too exorbitantly expensive
bmk#1476: As to what "too exorbitantly expensive" actually means..
bmk#1476: ¯\\_(ツ)\_/¯ |
bmk#1476: Maybe a few cents per 2048 tokens?
bmk#1476: That would recoup their running costs probably
bmk#1476: Anything more than 10 cents per 2048 tokens is "too much" imo
bmk#1476: Of course this is all assuming OA isn't trying to rake in all the cash they can
bmk#1476: If that's their goal I could imagine the oom of $1 per 2048 as not too unrealistic
bmk#1476: But I still believe they've got *some* nonprofit still left in em
kindiana#1016: would people really pay $1 per 2048?
bmk#1476: yes absolutely
bmk#1476: would *I* pay that? no
bmk#1476: but there most certainly are (enough) people out there who would
bmk#1476: thankfully i dont think oa is that greedy
StellaAthena#3530: There absolutely exist people who would pay that
bmk#1476: i think what really matters here is whether OA is going for the market equilibrium or whether they have altruistic motivations to make access as widespread as possible
bmk#1476: also once we get GPT3 working i think we should try to figure out how to get it inferencing on reasonably commodity hardware (i.e much much cheaper than the cluster that oa is probably using) and run our own api service
bmk#1476: (i know there was some opposition to this idea earlier)
kindiana#1016: it should take roughly on the order of 0.1 gpu seconds to generate a single token
kindiana#1016: if my math is right
kindiana#1016: params * 2 flops * 50% eff
bmk#1476: you could probably inference off a single gpu tbh
bmk#1476: im confident you could make it work |
bmk#1476: the bottleneck here really is read speed
bmk#1476: RAM has a read speed of like 10GB/s
bmk#1476: that's a minute to inference (!!)
bmk#1476: ;-;
StellaAthena#3530: A minute to infer what? A single token?
bmk#1476: Yeah
bmk#1476: So my idea ain't gonna work
aquajet#7800: I'm for getting it running on commodity hardware. Even if we don't run our own api other people can run their own instances of gpt3
aquajet#7800: ***Open***
bmk#1476: its going to be absurdly slow though
bmk#1476: like, a minute per token??
bmk#1476: and this is assuming you have, what, 700GB of ram
bmk#1476: that aint cheap
kindiana#1016: ram is ~50GBps on a commodity system, but it is possible to get 10GBps with like... not too much money worth of ssds
bmk#1476: thats still, what, 14s
bmk#1476: and i guess you could have a big raid to get the data into memory where it queues up to enter the gpu
kindiana#1016: its really not worth it to do bs=1 inference lol
kindiana#1016: bs=2 will be like the exact same speed
bmk#1476: latency unreasonable
bmk#1476: how many channels in a server board? |
bmk#1476: and does that result in much higher memory speeds
kindiana#1016: one ddr4 channel is ~20GB/s
kindiana#1016: and it scales ~linearly
bmk#1476: hmm so you could get that down to like 8 seconds
kindiana#1016: I think you are gonna need everything in vram to run inference at any reasonable speed lol
kindiana#1016: like 30k worth of gpus 🤔
bmk#1476: shit
bmk#1476: what if we spread it across multiple machines
bmk#1476: its counterintuitive but that increases the effective memory bandwidth
kindiana#1016: yes that works
kindiana#1016: the activations are tiny relative to weights
bmk#1476: and the network time of a few hundred ms is dwarfed by the 8 seconds or whatever memory time
AI_WAIFU#2844: You can also stream the computation, you don't need to store the activations, so you can spread the weights across several machines and have them all working at full speed.
AI_WAIFU#2844: There'll be lag, but throughput won't be affected.
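The back-of-envelope numbers in this thread can be bundled into one helper (all figures are the rough estimates quoted above, not measurements): per-token bs=1 decoding latency is the slower of the compute bound (~2 flops per parameter, kindiana's estimate) and the memory bound (one full read of the weights per token, bmk's bandwidth argument).

```python
def token_latency_seconds(n_params, flops_per_s, mem_bw_bytes_per_s,
                          bytes_per_param=2, efficiency=0.5):
    """Rough bs=1 decoding latency: max of the compute-bound time
    (2 flops/param at the given efficiency) and the memory-bound time
    (streaming every weight once per token)."""
    compute = 2 * n_params / (flops_per_s * efficiency)
    memory = n_params * bytes_per_param / mem_bw_bytes_per_s
    return max(compute, memory)

# GPT-3-sized model, fp16 weights, weights streamed from 10 GB/s RAM
t = token_latency_seconds(175e9, flops_per_s=100e12,
                          mem_bw_bytes_per_s=10e9)
print(round(t, 1))  # ~35 s/token: heavily memory-bound, in line with bmk's estimate
```

Spreading the weights across machines raises the aggregate `mem_bw_bytes_per_s`, which is why sharding helps even though it adds network hops.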
Nax#8383: @shawwn , would it be possible to fit GPT-3 6B model on TPUv2 using following strategy - this should allow 128GB model to be fit easily on a single TPUv3-8 https://cdn.discordapp.com/attachments/729741769738158194/746302937277661274/unknown.png
kindiana#1016: throughput will drop a lot unless you do some fancy stuff to keep all the cores active
Nax#8383: yeah something like pipelining
kindiana#1016: there might be a problem with the memory required to buffer the activations for the backwards pass
Nax#8383: TPU's 300GB should be enough for that i guess
kindiana#1016: well cpu->tpu transfer bw might be an issue in that case, but it shouldn't be too extreme |
kindiana#1016: ideally you would use reversible layers so you don't need to cache activations
Nax#8383: yeah thats good idea, i ll benchmark and share some results
aquajet#7800: https://twitter.com/sharifshameem/status/1296755366944862208?s=09
Daj#7482: :firealarm:
Daj#7482: This reminds me we should maybe have some explanation of common Eleuther memes in the onboarding doc
Daj#7482: e.g. firealarm, bitter lesson go brrr, x/s risk, infohazard
thenightocean#6100: or maybe familiarity about this idea should be a filter to decide who is allowed in.
thenightocean#6100: 😄
Daj#7482: Nah I think one of our comparative advantages is attracting wild hacker types and exposing them to the ideas in a friendly manner
Daj#7482: I'm happy to make other people's intro into alignment less arduous than mine lol
bmk#1476: > https://twitter.com/sharifshameem/status/1296755366944862208?s=09
@aquajet holy shit
bmk#1476: https://twitter.com/nabla_theta/status/1296860238289629184
bmk#1476: :firealarm:
thenightocean#6100: dammit
thenightocean#6100: maybe we should train it to play this game next: https://www.decisionproblem.com/paperclips/
thenightocean#6100: what could go wrong?
bmk#1476: that would make a funny meme but i dont think that would be actually scary
snoozie#7259: @StellaAthena https://youtu.be/SY5PvZrJhLE seems like it’s common crawler dataset - at 3:00 min mark.
bmk#1476: What specifically is this in response to? |
Louis#0144: Reeeeee yannic
Louis#0144: 😡
StellaAthena#3530: @snoozie I’m not sure what question I asked that you’re answering? I’ve asked a lot of Q’s recently so it probably just slipped my mind.
Eddh👽#7290: I am a layman in NLP but I have a question. Would it be possible later to translate from the semantic space of GPT to images ? Making pictures that can illustrate the text and vice versa. This kind of goal could also help with world modelling ?
StellaAthena#3530: @Eddh👽 here is a paper on the topic: https://arxiv.org/abs/1808.04538
The code is on GitHub here: https://github.com/CSC2548/text2image2textGAN
Eddh👽#7290: Thanks!
coozamano#5333: Hey guys, I stumbled on this group by accident. I am a software engineer, how can I help make this happen?
Sid#2121: Hey @coozamano , onboarding is currently in progress hah, but please check the pinned post and google doc in #deleted-channel if you haven't already
Sid#2121: our data gathering effort, #the-pile , is probably where we could use the most help right now, unless you're super proficient in language models and/or tensorflow mesh
coozamano#5333: checking
Noa Nabeshima#0290: My computer is broken so until I fix it on Monday would it be helpful if I carefully read over scaling laws for natural language models and wrote down what might be important for our models?
Noa Nabeshima#0290: Don't know how much knowledge is in the community there
Noa Nabeshima#0290: It also might not matter because OAI's training schemes for various sizes are public?
Basedblue#9138: someone posted this on 4chan lol https://cdn.discordapp.com/attachments/729741769738158194/747076688256434226/1598169376933.png
Ravna#1831: GPT3 is not AGI -> DL failed and symbolic AI is the only way
Ravna#1831: GPT3 is not AGI -> guys, don't train larger models any more
Ravna#1831: Bitterness is real
thenightocean#6100: Most of the sources I see are super anti-hype and feel the need to overcorrect to perceived enthusiasm about DL possibilities. |
Standard template in the most news articles about some new AI success is like “yes this x thing is kinda impressive, but we are still super-duper far from anything close to true AGI, so don't worry and especially don't listen to those LessWrong low-status nerds who saw Terminator movie too many times”.
I know that history of AI is full of failed attempts and overenthusiastic periods, but I would say that these days the pendulum has swung too far in the other direction and everyone is super cautious and doesn't want to lose weirdness points by claiming it's closer than the high-status and respected people are saying it is.
Ravna#1831: On the other hand, "Humans are special", "Qualia is not formalizable", "Consciousness is not computable", are applause lights for many people.
thenightocean#6100: correct
Ravna#1831: If you repeat these lines using your own impressive language, you gain status.
thenightocean#6100: all I am saying is AI is too powerful and too dangerous technology to play annoying status games that disincentivize potentially competent people from taking the short-term risks seriously and doing something about it.
thenightocean#6100: just had a similar discussion on a FB group where someone used your standard Gary Marcus rants as an ultimate takedown against those insolent kids who think DL actually did some useful stuff in the last 5-10 years.
Ravna#1831: The Gary Marcus style argument is actually subtle enough that I'm not going to strongly take a side.
Ravna#1831: What annoys me the most are the Chinese Room argument and Roger Penrose quotes.
Ravna#1831: 🤦
Ravna#1831: DL did do something useful: machine translation for daily conversations of tourists it is.
Ravna#1831: But I think the past achievements are indeed exaggerated. The really useful stuff lies in the future of the exponential curve.
Ravna#1831: Not in 2012-2020's CV moments.
thenightocean#6100: > The Gary Marcus style argument is actually subtle enough that I'm not going to strongly take a side.
@Ravna didn't he say something like: ‘gpt and transformers are a dead end and it would be for the best to shut down this research altogether.‘ Not really subtle.
researcher2#9294: oof
researcher2#9294: it's pretty hard to tell what's going to be a dead end tbh
researcher2#9294: nlp looked kinda meh a few years ago and now it's blown up |
Ravna#1831: Wow he did say that? I guess my impression of his argument is still on his debate with Bengio.
Ravna#1831: I remember that about 2 years ago someone famous in the DL community said to Marcus that "if you want to complain, write a paper, don't start wars on popular media articles". Someone in the LW community (I think it's Sarah Constantin) commented that this is a defensive guild behavior and I agreed with her. The DL community was the unreasonable one then.
Ravna#1831: Now it seems that it flips.
thenightocean#6100: > Wow he did say that? I guess my impression of his argument is still on his debate with Bengio.
@Ravna oh I watched that. He was much more reasonable there.
thenightocean#6100: I guess its Motte-Bailey theatre
bmk#1476: > On the other hand, "Humans are special", "Qualia is not formalizable", "Consciousness is not computable", are applause lights for many people.
@Ravna to be fair "humans are not special", "consciousness is not special" is also often applause lights, just for a very different crowd
bmk#1476: Also for anyone who says that GPT3 will never ever lead to AGI just link them to the AGI using LMs post
bmk#1476: (totally not just self promotion)
thenightocean#6100: > "humans are not special", "consciousness is not special" 👏 👏 👏
thenightocean#6100: oh wait
Sid#2121: > Also for anyone who says that GPT3 will never ever lead to AGI just link them to the AGI using LMs post
@bmk if you want some serious promotion get Gary Marcus to tweet to his followers how wrong your post is 😈
bmk#1476: Lmao
bmk#1476: That already happened to me once with that one tweet
bmk#1476: My blog would be completely swamped by traffic if Marcus tweets "no. this post is completely wrong."
Aran Komatsuzaki#5714: haha that would be awesome lol
Aran Komatsuzaki#5714: maybe aspiring agi researchers should always tag Marcus on their tweet.
bmk#1476: lol |
bmk#1476: if you see a post by marcus talking about "even assuming it can world model, gpt3 isn't an agent and it never will be" please lmk so i can respond to it, haha
bmk#1476: unfortunately he seems to be currently stuck on "gpt3 cant model the world"
Aran Komatsuzaki#5714: or post AGI memes trolling Marcus's argument with him tagged
bmk#1476: see but i want to actually refute his ideas, not strawman them
Aran Komatsuzaki#5714: yeah understood
Aran Komatsuzaki#5714: but i don't think he'll change his opinion even if he sees the actual AGI
bmk#1476: oh, my goal isn't to convince *him*
Aran Komatsuzaki#5714: i see. makes sense.
Deleted User#0000: > someone posted this on 4chan lol
@Basedblue it's not that we don't recognize that its a camera, it's that the results makes us question if *we are* just a camera
Deleted User#0000: there's a whole line of work that studies predictive coding in neuroscience
Deleted User#0000: that's actually a poor analogy to build on top of lol
Edward#0927: I heard there was a gpt3 talk by "Conner", like a lecture happening with Q&A on Sept 13?
Edward#0927: That's the only lead I have; I'm sorry. :<
bmk#1476: Where did you see this?
bmk#1476: Link pls
bmk#1476: @Edward
Edward#0927: I was told this verbally. I think I remember seeing it as an event as well.
StellaAthena#3530: Well @Daj is Conner, so presumably he can provide the information you’re looking for.
Daj#7482: > I heard there was a gpt3 talk by "Conner", like a lecture happening with Q&A on Sept 13? |
@Edward That is correct, on the SSC meetup
researcher2#9294: Did you guys end up presenting at the GPT3 Demo day?
researcher2#9294: I was keen to watch any videos from the event.
Sid#2121: No, I don’t think ‘replicating gpt-3’ was really the kind of project they were looking for 😦 plus we aren’t a business, can’t be invested in, which I think was the kind of thing they wanted
Sid#2121: @ShyTeaSeb presented his and @Daj ‘s game, it was cool as hell. I think I saw it posted here somewhere?
Daj#7482: I don't think we posted the video, but glad you thought it was cool!
Daj#7482: Me and @Commutative Conjecture were sitting on the couch laughing our asses off at how shitty most of the other projects were lol
Sid#2121: Must have seen it on Twitter hah
Sid#2121: I just watched @ShyTeaSeb ‘s presentation then the music one after it, then got bored 😂
Daj#7482: You didn't miss much haha
researcher2#9294: Is there anywhere I can watch all the demos? The organizer said everything would be recorded but I can't see it on the site.
researcher2#9294: What is this about a game?
researcher2#9294: GAMES
Daj#7482: @ShyTeaSeb and me have been working on a GPT powered game for a while
Daj#7482: Some stuff from it is in #art
ShyTeaSeb#3037: @researcher2
https://youtu.be/WJnjX-O3WbE
Here's a link to the unlisted YouTube video. Our game is around the 1:08:30 mark
researcher2#9294: Thanks! So don't share with general public?
StellaAthena#3530: Ooooo I love game design and the complexity theory of games. I’ll definitely check out your presentation @Daj @ShyTeaSeb |
ShyTeaSeb#3037: @researcher2 Uhhh 🤷🏻♂️ They weren't clear tbh, it's probably fine
@StellaAthena Thanks! Let us know what you think 😃
Louis#0144: https://twitter.com/mark_riedl/status/1298054078703042561?s=21 @Daj I told my advisor about this group during a video meeting
Louis#0144: He wrote a post about it
Louis#0144: @StellaAthena you’ll like this too
bmk#1476: semi relevant: the post he mentions as having quoted him has some.. issues https://discordapp.com/channels/729741769192767510/730095596861521970/747627888320446584
StellaAthena#3530: Direct link to Riedl’s blog post: https://medium.com/@mark_riedl/ai-democratization-in-the-era-of-gpt-3-8b91891f91cb
bmk#1476: Riedl gets it right!
>It is so large that it cannot be trained without hundreds of thousands of dollars worth of cloud computing resources
bmk#1476: (ok, i might be a bit too enthusiastic)
bmk#1476: still, it's so great to see someone not using 4.6M or 12M as their estimate
StellaAthena#3530: This is a pretty good blog post
bmk#1476: >there are very few groups that can expend the resources for replicability purposes.
:mesh:
bmk#1476: (sorry, i'm in more of a playful than a serious mood right now)
mistobaan#2737: Hello Everyone, I am Fabrizio, a Google Machine Learning Developer Expert. I have just been invited by Brett, another fellow ML-GDE, into this group. I have experience with TPUs and Tensorflow Mesh. I am interested in helping out with the GPT-3 replica idea. I have access to GPT-3 itself.
asparagui#6391: hi fabrizio 🙂
asparagui#6391: jump around and read some of the threads
StellaAthena#3530: Welcome Fabrizio! We are always excited to have more people with TPU experience
asparagui#6391: there's a github repo that daj can invite you to |
mistobaan#2737: where is the main discussion channel? gpt-neo?
StellaAthena#3530: #gpt-neox-devs is for our GPT-3 replication efforts
bmk#1476: woah, awesome!
bmk#1476: we could really use more tf mesh people
StellaAthena#3530: #the-pile is for our building our collection of training data, as OpenAI’s training data is not publicly available
mistobaan#2737: did you already use C4 ?
StellaAthena#3530: And yeah. We have... two?... people with prior TF mesh experience I think?
bmk#1476: we decided not to use C4
bmk#1476: three
bmk#1476: well, four now haha
bmk#1476: let's pop over to #gpt-neox-devs to continue this
Louis#0144: Yeah the reaction to using C4 was rather explosive tbh
Louis#0144: Really heated
bmk#1476: C4 is actually a recursive acronym which stands for Союз Советских Социалистических СССC
ssp3ll#6042: Hi Stella thx for the invite...
StellaAthena#3530: By the way, here’s the updated and reorganized in-doc document @O5 https://docs.google.com/document/d/1yOnxEMlU57M8YFlQC3XNOvyMVX2EpU5LeIWhEBcwQNk/edit
bmk#1476: awesome
bmk#1476: some minor points:
bmk#1476: getting the evaluation system all set up is looking to be nontrivial in complexity
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/748282389226192906/unknown.png |
bmk#1476: this thing
bmk#1476: it's looking like we'll be using the pre-extracted libgen instead of this https://cdn.discordapp.com/attachments/729741769738158194/748282722933407815/unknown.png
bmk#1476: we might do our own pdf extraction eventually but probably not right now
bmk#1476: since we don't really have the resources to do so
bmk#1476: this is something that would be an obvious area to improve on later down the line though
StellaAthena#3530: Sure, I’ll take that into account.
bmk#1476: so the great news is that the majority of the data as outlined in the original paper is pretty much done
StellaAthena#3530: I think edits can be proposed if you follow the link? I’m happy to approve whatever changes you make
bmk#1476: CC - currently downloading, it's hands off from now until whenever we do filtering
bmk#1476: i'm not sure how to organize it, lol
bmk#1476: i'd probably just append everything to the end which is not good for organization [citation needed]
StellaAthena#3530: Fair enough
bmk#1476: libgen - we can just use the one that someone else extracted and worry about cleaning later
StellaAthena#3530: > i'd probably just append everything to the end which is not good for organization [citation needed]
@bmk Yes, this is why I had to redo the entire google doc 😛
bmk#1476: webtext2 - we have openwebtext which is half the size of wt2
bmk#1476: books1: done
wiki: done
bmk#1476: and that's it
bmk#1476: everything else is random stuff that we thought we wanted to add |
mistobaan#2737: I tried to go to sleep early and have a good night sleep, but I literally slept 40min and I am fresh new again unable to sleep. 😐
Daj#7482: > I tried to go to sleep early and have a good night sleep, but I literally slept 40min and I am fresh new again unable to sleep. 😐
@mistobaan Welcome to real euro times lol
mistobaan#2737: 😄
mistobaan#2737: I kind of like the quiet of the night
Commutative Conjecture#6969: > I tried to go to sleep early and have a good night sleep, but I literally slept 40min and I am fresh new again unable to sleep.
Just work until you collapse back from exhaustion 👌
mistobaan#2737: I am on it boss
Ken#8338: AMBERT: A Pre-trained Language Model with Multi-Grained Tokenization https://arxiv.org/abs/2008.11869
Deleted User#0000: starting to see more and more works out of TikTok AI labs
Deleted User#0000: text to full animated avatars would be interesting
Deleted User#0000: coming to your tiktok feed
bmk#1476: cursed
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/749084721627004978/noetherian.png
StellaAthena#3530: What
bmk#1476: it's a *no etherian* ring
bmk#1476: i know, it's a shitty pun
StellaAthena#3530: Go sit in the corner and think about what you’ve done
bmk#1476: my memes are all shitty puns
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/749085957600313344/asymptomatic.png |
StellaAthena#3530: If each of your puns is worse than the previous one, does that imply the existence of a maximal ideal pun?
bmk#1476: shit u got me
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/749088488552661083/tputree.png
Louis#0144: Explain
Louis#0144: Pls
bmk#1476: my memes mostly hinge on weird inside jokes designed to appeal to as few people as possible
Louis#0144: Space station
Louis#0144: Vs TPU
Louis#0144: hm
Ravna#1831: > If each of your puns is worse than the previous one, does that imply the existence of a maximal ideal pun?
Ravna#1831: This assumes that "worse" has transitivity.
Ravna#1831: It's only true if the language user has maximum rationality all the time.
bmk#1476: id say worseness has transitivity
bmk#1476: its a preorder
archivus#7382: I have a proposition. You guys should add one extra trainable parameter to GPTNeo just so you can say you have a bigger language model than GPT3
archivus#7382: 175 000 000 001 parameters
kindiana#1016: its unlikely that the final model will be an exact copy of gpt3 anyways, so slightly bigger would be good if just for the PR lol
bmk#1476: we barely had 100B clunking along
bmk#1476: 175B + eps is going to be really damn hard
bmk#1476: but there's literally no reason not to try and go further and aim for 1T once we can get 175 + eps |
bmk#1476: (well, except for it being almost an order of magnitude harder but small deal)
Ken#8338: A good write-up of advances in NLP over the years: https://eugeneyan.com/writing/nlp-supervised-learning-survey/
researcher2#9294: thanks @Ken, will have a read - is this your work?
Ken#8338: @researcher2 no, not my work.
mistobaan#2737: I finally read what sacred does. I like parts of it, but it does too much and too invasively.
mistobaan#2737: I think it should just be a tiny wrapper inside a docker image that you submit to execute.
bmk#1476: I think adding docker adds so much more complexity
mistobaan#2737: docker snapshots your environment in one asset. sacred does not do that
mistobaan#2737: indeed also the google ML api are starting to support that
mistobaan#2737: that as in: docker training
mistobaan#2737: I need a good name for this tool. hloop? hyperloop the super train cmd line 😄
Louis#0144: stanfordnlp REEEEEE
Louis#0144: god its such a friggen nightmare to use
Louis#0144: the documentation is awful
Louis#0144: how did THIS become popular
mistobaan#2737: is written in java and by the Manning mafia
Louis#0144: like no joke a large portion of the documentation is actually wrong
Louis#0144: its really weird
Louis#0144: and people point out the errors on the github
Louis#0144: but no one fixes it |
Louis#0144: LOL
Louis#0144: Atleast tf tries to fix wrong documentation
mistobaan#2737: why are you trying to use s nlp is the question 😄
Louis#0144: legacy parser
Louis#0144: old phd student wrote it like 7 yrs ago or something
Louis#0144: idk
Louis#0144: ok I should say the parser is maintained
Louis#0144: but it requires setting up this weird backend
mistobaan#2737: yeah I feel the pain. anything Java is hard to change
mistobaan#2737: or setup
mistobaan#2737: but is fast
mistobaan#2737: damn fast compared to python
StellaAthena#3530: I feel your pain. I’m working with the “open source” implementation of a paper that literally does not run.
Louis#0144: >java
Louis#0144: >fast
Louis#0144: i spent so much of my time in HS writting fortran and C
Louis#0144: dont u come at me with "oh man java is so fast"
Louis#0144: LMAO
mistobaan#2737: the JVM JIT has its years of optimization
mistobaan#2737: so yeah. I think it can beat C in few cases |
Louis#0144: just memory allocation
mistobaan#2737: specially these byte shifting inner routines
Louis#0144: but python can also beat C occasionally
Louis#0144: so that doesnt say much
Louis#0144: 😛
Louis#0144: Python can beat tf out of fortran
Louis#0144: its actually really funny
mistobaan#2737: doubt that if you exclude the optimized c libraries
Louis#0144: oh for sure
Louis#0144: but even optimized fortran
Louis#0144: python is infinitely better at handling strings than even the best fortran code
Louis#0144: its weird
mistobaan#2737: I can see that
Louis#0144: but yeah java beats C when it comes to malloc
Louis#0144: JVM has good memory management
Kazumi#1297: so CycleGAN was great because it could do cycle consistent image <=> image translation with unpaired images, here's a paper about how to do cycle consistent text <=> image translation. Imagine being able to do unpaired image <=> text translation
https://arxiv.org/abs/1903.05854
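The cycle-consistency idea being discussed can be shown with a toy example: for generators G: A→B and F: B→A, the loss penalizes ||F(G(x)) − x|| and ||G(F(y)) − y||. Here `G` and `F` are hand-picked exact inverses rather than learned networks, so the loss is essentially zero; real CycleGAN-style generators only approximate this, which is what the loss trains them toward.

```python
# Toy illustration of a cycle-consistency loss. G and F stand in for
# learned generators between two domains; here they are exact inverses,
# so translating A -> B -> A recovers the input.
import numpy as np

def G(x):          # stand-in "A -> B" generator
    return 2.0 * x + 1.0

def F(y):          # stand-in "B -> A" generator
    return (y - 1.0) / 2.0

def cycle_loss(x, y):
    # L1 penalty on round-trips in both directions.
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()

x = np.random.randn(8)   # "domain A" samples
y = np.random.randn(8)   # "domain B" samples
print(cycle_loss(x, y))  # ~0.0 (up to float rounding)
```

The same structure applies whether the two domains are image↔image (CycleGAN) or text↔image (the MirrorGAN-style direction above); the hard part is learning G and F when no paired data ties the domains together.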
Eddh👽#7290: With things like mirrorgan, doesn't it implies the possibility to have a universal concept space, which could be mapped to text or image or video ? You could encode any of these data types into the concept space and decode it into another type
Eddh👽#7290: Could the brain be doing something similar when an image of a dog comes to mind when we hear the word "dog" ?
Kazumi#1297: I'm still in the process of reading it, but I don't think MirrorGAN is exploring that yet. I want to see cycle consistent multimodal translations more though |
Kazumi#1297: getting paired dataset is not easy
Ken#8338: Good talk that might help you understand the connections between the brain and deep learning https://www.youtube.com/watch?v=QBN6shA6FpA
mistobaan#2737: 2Petabyte of data for one cubic mm 😄
Ken#8338: This might be something this project might find interesting (at least in principle) ...Folding@Home, but for AI https://arxiv.org/pdf/2002.04013.pdf
Daj#7482: For anyone interested, I will be speaking at the SSC meetup on 13.9, sign up here: https://old.reddit.com/r/slatestarcodex/comments/ik0rhz/next_ssclesswrong_meetup_guest_speaker_connor/
Commutative Conjecture#6969: weeeeeeeeee
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/750425123638476881/unknown.png
bmk#1476: https://www.reddit.com/r/GPT3/comments/ikorgs/oa_api_preliminary_beta_pricing_announced/
bmk#1476: so my estimate was kinda off
Sid#2121: I mean that works out to about $1 per 20k tokens so
Sid#2121: that's not far off
Sid#2121: oh i was going off your 10 cents comment
Sid#2121: okay, an order of magnitude off 🤷
Sid#2121: it's not cheap at all. Can't wait to open source this shit.
bmk#1476: https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3090/
bmk#1476: I've been saving up for this moment
bmk#1476: Also it's 12 cents so a bit over my 10 cents threshold
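A quick back-of-envelope using only the rough figure quoted above (~$1 per 20k tokens) — this is the conversation's estimate, not official pricing, which was tiered:

```python
# Back-of-envelope API cost from the ~$1-per-20k-tokens figure above.
price_per_20k_tokens = 1.00            # USD, rough figure from this chat
cost_per_1k = price_per_20k_tokens / 20
tokens_per_request = 2048              # a full GPT-3 context window
cost_per_request = cost_per_1k * tokens_per_request / 1000
print(cost_per_1k)                     # 0.05 USD per 1k tokens
print(round(cost_per_request, 4))      # ~0.10 USD per full-context call
```

At roughly a dime per full-context call, it's easy to see why the pricing looked steep for high-volume applications.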
ForgottenOrb#6802: it seems like it’ll be hard for applications to use gpt3 at that price
AI_WAIFU#2844: The pass through airflow thing is dope.
bmk#1476: does it handle stacking gpus well |
bmk#1476: i know the 2080 was really bad at this because of the fan
kindiana#1016: triple slot will make stacking hard lol
kindiana#1016: hopefully there will be some custom 2 slot cards
bmk#1476: full 3 slots or 2.5?
kindiana#1016: the 3090 is full 3 slot
bmk#1476: huh
bmk#1476: my current card is a 2.5
bmk#1476: uses up 2 brackets but protrudes into the third
bmk#1476: so will there be *any* breathing room between cards?
researcher2#9294: > i know the 2080 was really bad at this because of the fan
@bmk This, my sli setup is farked
bmk#1476: Can someone help me find a motherboard that I can put 4x 3090 on without needing risers and stuff
Deleted User#0000: > https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3090/
@bmk the one bright side to Sutton's bitter lesson is that an AI winter is less likely. opportunities will open up as hardware gets better and better
bmk#1476: Sure hope so
bmk#1476: In any event I need to find a way to stick four of these on a board without everything melting down
researcher2#9294: not sure if joking, but custom build with risers
bmk#1476: i dont like risers they're annoying
Louis#0144: "gaming" experience
Louis#0144: lmao |
Louis#0144: sure
Louis#0144: nvidia knows their fucking market
Louis#0144: no one is using 24gb for gaming
Louis#0144: LMAO
Louis#0144: why not just be honest with yourself nvidia, this is the rtx titan replacement
Louis#0144: @researcher2 never buy founders edition
Louis#0144: theyre awful cards
Louis#0144: consistently
researcher2#9294: Never had one, I have been told the cooling normally isn't great.
bmk#1476: so i should wait for 3090 vendor cards to come out?
bmk#1476: or what
researcher2#9294: What turned me off most was the price.
bmk#1476: i want to build an ML rig
bmk#1476: 4x 3090
researcher2#9294: Wouldn't you be better off getting a tesla? How do they stack up
bmk#1476: arent those more expensive though
bmk#1476: like, an order of magnitude more so
bmk#1476: im not made of money
researcher2#9294: Well I saw a v100 selling for 7k the other day
researcher2#9294: around the same order as 4x3090 |
researcher2#9294: just do a comparison of flops/$ and post in here plz 😄
bmk#1476: > 7k
researcher2#9294: https://www.amazon.com/PNY-TCSV100MPCIE-PB-Nvidia-Tesla-v100/dp/B076P84525
bmk#1476: 3090 is 1.5k
bmk#1476: 4x 3090 is less than a v100
bmk#1476: and you better believe it's faster
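The requested flops/$ comparison, sketched with approximate published peak FP32 figures and the prices quoted in this conversation (3090 at ~$1.5k, V100 PCIe at ~$7k) — treat the spec numbers as ballpark:

```python
# Rough flops-per-dollar comparison. Peak FP32 TFLOPS are approximate
# published specs; prices are the ones quoted in this chat.
cards = {
    "rtx_3090":  {"tflops_fp32": 35.6, "price_usd": 1500},
    "v100_pcie": {"tflops_fp32": 14.0, "price_usd": 7000},
}

for name, c in cards.items():
    per_1k_usd = c["tflops_fp32"] / c["price_usd"] * 1000
    print(f"{name}: {per_1k_usd:.1f} TFLOPS per $1k")
```

On raw FP32 per dollar the 3090 comes out roughly an order of magnitude ahead, though tensor-core throughput, memory capacity, ECC, and NVLink all complicate a real comparison.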
researcher2#9294: Do the workstation gpus lag the gaming ones?
bmk#1476: theyre always more expensive
bmk#1476: period
researcher2#9294: never bought before
researcher2#9294: $/performance?
researcher2#9294: why would anybody buy that then
researcher2#9294: weird
bmk#1476: which brings me back to the original thing
bmk#1476: i want a 4x 3090
kindiana#1016: the workstation gpus have more double precision performance and usually ecc memory
bmk#1476: the price/perf is amazing
kindiana#1016: sometimes more memory too
researcher2#9294: isn't the big thing in ml using fp16? see that popup in various codes
researcher2#9294: obviously workstation is not just for ml |
kindiana#1016: yeah workstation cards are not for ml
bmk#1476: @Louis would you recommend i wait until vendor 3090s
kindiana#1016: for scientific computing and like cad stuff
researcher2#9294: kk
bmk#1476: also where the hell am i gonna get a mobo for this
bmk#1476: risers sounds too complicated
kindiana#1016: doing 4x anything isn't going to be easy lol
Louis#0144: idk man I dont think a 3090 is worth it
researcher2#9294: can you just build two machines and distributed
Louis#0144: cloud compute is cheap
bmk#1476: why not?
researcher2#9294: ?
bmk#1476: >cloud compute is cheap
you cant say that with a straight face
Louis#0144: thats almost 8k of cloud compute
Louis#0144: dude colab is cheap
Louis#0144: cant you do most big tests on colab
bmk#1476: im not using colab
Louis#0144: and small tests on a single 3090 |