
Maykeye

AI & ML interests

Image Gen, TextGen, training silliness from scratch

Recent Activity

liked a model 5 months ago
Zyphra/Zamba2-7B-Instruct
commented on a paper 5 months ago
Differential Transformer

Organizations

None yet

Maykeye's activity

New activity in Maykeye/TinyLLama-v0 2 days ago
reacted to MonsterMMORPG's post with 👀 6 months ago
I have done extensive multi-GPU FLUX Full Fine-Tuning / DreamBooth training experiments on RunPod, using 2x A100 80 GB GPUs (PCIe), since this was commonly requested of me.

Full article here: https://medium.com/@furkangozukara/multi-gpu-flux-fu

Image 1
Image 1 shows that just the first part of the Kohya GUI installation took 30 minutes on such a powerful machine, on a very expensive Secure Cloud pod (3.28 USD per hour).
There was also a part 2, so the installation alone took a very long time.
On Massed Compute, it would take around 2–3 minutes.
This is why I suggest using Massed Compute over RunPod; RunPod machines have terrible hard disk speeds, and getting a good one is a lottery.



Image 2, 3 and 4
Image 2 shows the speed of our very best FLUX Fine-Tuning config (shared below) when doing 2x multi-GPU training.
https://www.patreon.com/posts/kohya-flux-fine-112099700
The config used is: Quality_1_27500MB_6_26_Second_IT.json
Image 3 shows the VRAM usage of this config during 2x multi-GPU training.
Image 4 shows the GPUs of the pod.
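For readers curious what 2x multi-GPU data-parallel training looks like in code, below is a minimal, hypothetical sketch using Hugging Face accelerate. It is not the actual Kohya sd-scripts training loop or the config above; the model, data, and hyperparameters are placeholders.

```python
# Minimal sketch of 2x data-parallel training with Hugging Face accelerate.
# NOT the actual Kohya/FLUX training loop; model, data, and hyperparameters
# are placeholders. Launch with: accelerate launch --num_processes 2 train.py
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

# Stand-ins for the FLUX transformer and the DreamBooth image dataset.
model = torch.nn.Linear(1024, 1024)
dataset = TensorDataset(torch.randn(256, 1024), torch.randn(256, 1024))
loader = DataLoader(dataset, batch_size=8, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# prepare() wraps the model in DDP and shards the dataloader across the GPUs,
# so each GPU holds a full model copy (similar per-GPU VRAM) while overall
# throughput roughly doubles.
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

model.train()
for inputs, targets in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    accelerator.backward(loss)  # handles gradient all-reduce between the GPUs
    optimizer.step()
```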


Image 5 and 6
Image 5 shows the speed of our very best FLUX Fine-Tuning config (shared below) when doing single-GPU training.
https://www.patreon.com/posts/kohya-flux-fine-112099700
The config used is: Quality_1_27500MB_6_26_Second_IT.json
Image 6 shows the VRAM usage of this setup.


Image 7 and 8
Image 7 shows the speed of our very best FLUX Fine-Tuning config (shared below) when doing single-GPU training with gradient checkpointing disabled.
https://www.patreon.com/posts/kohya-flux-fine-112099700
The config used is: Quality_1_27500MB_6_26_Second_IT.json
Image 8 shows the VRAM usage of this setup.
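As a side note on what disabling gradient checkpointing changes, here is a generic PyTorch sketch of the trade-off (not the Kohya/FLUX implementation): with checkpointing, activations are recomputed during the backward pass, which lowers VRAM usage at the cost of slower iterations.

```python
# Generic PyTorch sketch of the gradient-checkpointing trade-off (speed vs. VRAM);
# not the actual Kohya/FLUX code. The toy blocks stand in for the transformer layers.
import torch
from torch.utils.checkpoint import checkpoint_sequential

device = "cuda" if torch.cuda.is_available() else "cpu"
blocks = torch.nn.Sequential(*[torch.nn.Linear(1024, 1024) for _ in range(8)]).to(device)
x = torch.randn(16, 1024, device=device, requires_grad=True)

use_gradient_checkpointing = True  # False = faster iterations but higher VRAM (Images 7/8)

if use_gradient_checkpointing:
    # Only segment boundaries keep activations; the rest are recomputed in backward.
    out = checkpoint_sequential(blocks, 4, x)
else:
    # All intermediate activations stay in memory: faster per step, more VRAM.
    out = blocks(x)

out.sum().backward()
```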


....
reacted to kz919's post with 👀 6 months ago
reacted to TuringsSolutions's post with 👀 6 months ago
ChatGPT does better at math if you prompt it to think like Captain Picard from Star Trek. Scientifically proven fact, lol. This got me thinking: LLMs probably 'think' about the world in weird ways, far different from ours. That sent me down a rabbit hole of thinking about different concepts, but for LLMs. Somewhere along the way, Python Chemistry was born. To an LLM, there is a strong connection between Python and chemistry, and it is easier for an LLM to understand exactly how Python works if you frame it in terms of chemistry.

Don't believe me? Ask Python-Chemistry-GPT yourself: https://chatgpt.com/g/g-dzjYhJp4U-python-chemistry-gpt

Want to train your own Python-GPT and prove this concept actually works? Here is the dataset: https://huggingface.co/.../TuringsSolu.../PythonChemistry400
replied to enzostvs's post 6 months ago
Being called a king and being told I can be more is not exactly a hurtful roast. Feels more like a pep talk. 🤪

reacted to enzostvs's post with 🔥 6 months ago
What if we asked the AI what it thought of our Hugging Face profile? 👹
I've released a new space capable of doing it... watch out, it hits hard! 🥊

Try it now ➡️ enzostvs/hugger-roaster

Share your roast below 👇
reacted to Fizzarolli's post with 👍 10 months ago
Is anyone looking into some sort of decentralized/federated dataset generation or classification done by humans rather than synthetically?

From my experience trying models, a *lot* of modern finetunes are trained on what amounts to, in essence, GPT-4-generated slop that makes everything sound like a GPT-4 rip-off (see, e.g., the Dolphin finetunes). I have a feeling this is a big part of why people haven't been quite as successful as Meta's instruct tunes of Llama 3.