Unsupervised learning with unlabeled data

#32
by rbgreenway - opened

I'm very much still learning my way around LLMs, but I see that some services (e.g., CustomGPT, IngestAI) let you fine-tune simply by providing raw, unlabeled text. For example, the training manuals for operating some complex system can supposedly be used to fine-tune an LLM so it gives advice about operating that system. Maybe I have the training manuals for flying a small airplane... :-)

Question: Can falcon-7b and/or falcon-40b be fine-tuned using this kind of raw, unlabeled data? If so, where can I find instructions or an example of doing so?

I apologize for the rudimentary question.
Thanks in advance for any suggestions that anyone may have.

I think we can fine-tune the model on unlabelled text using masked or causal language modelling. There the labels are just the masked tokens or the next token, so no manual labelling is needed. I'm trying to do the same with one of the pretrained models but haven't gotten sufficient memorisation of the training data, so I'm not sure how well it works.
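
For anyone who lands here later, this is roughly what the causal-LM approach looks like with the Hugging Face `Trainer`. With `DataCollatorForLanguageModeling(mlm=False)` the collator sets the labels to the input ids themselves (the model shifts them by one position internally), so raw text is all you provide. This is just a minimal sketch: `manuals.txt` and the hyperparameters are placeholders, not tested values.

```python
# Minimal causal-LM fine-tuning sketch for Falcon on raw text.
# "manuals.txt" and all hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Falcon ships without a pad token

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,  # needed for the original Falcon modeling code
)

# Each line of the text file becomes one training example.
raw = load_dataset("text", data_files={"train": "manuals.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_ds = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

# mlm=False => causal LM: labels are the input ids, shifted internally,
# so no manual labeling is required.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="falcon-manuals",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=3,
    learning_rate=2e-5,
    logging_steps=10,
)

Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=collator,
).train()
```

Fair warning: full fine-tuning of the 7B model needs far more GPU memory than inference. Parameter-efficient methods (e.g., LoRA via the peft library) are the usual workaround, but the data handling above is the same either way.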

Any solution?

Not by me. I wonder if I've got a square-peg/round-hole problem here. Everywhere I look, fine-tuning on these LLMs is done with labeled data. I'm very new to all this, so I'm not sure I'm even asking good questions. I've got another potential application for Falcon that I'm banging my head on, too. This stuff is incredible, but not for the faint of heart.

So far, the only solution I have found is for older models like mT5 and BERT. You can find more information and examples at the following link:

GitHub - Hugging Face Transformers Examples

If anyone has a link for unsupervised training specifically for Falcon, please let us know. I would greatly appreciate it!
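
One note on that link: unless I'm mistaken, the ready-made language-modeling scripts in that repo (run_clm.py in particular) accept any causal-LM checkpoint via --model_name_or_path, so they should work for Falcon as well, not just the older models. And as a quick, informal check on the memorisation issue mentioned above, you can prompt the tuned model with the opening of a passage from the training text and see whether it reproduces the continuation. A rough sketch, where the path and prompt are made up for illustration:

```python
# Informal memorisation check: prompt the tuned model with the start of a
# training passage and inspect the continuation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "falcon-manuals"  # output_dir from the training sketch above
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Before starting the engine, complete the following checklist:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=100, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Greedy decoding (do_sample=False) makes the check repeatable; if the continuation has nothing to do with the source material, more epochs or a different learning rate may be worth trying.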
