Nicolas

xpgx1

AI & ML interests

I'm just another LLM enthusiast, in it for the thrill of this new field! Local LLMs and generative AI (Stable Diffusion) are a new era of supercharged creativity for me. I'm merely using the frameworks, enjoying the hard work of all model trainers and project maintainers. Keep on keeping on - we need reliable, unbiased and LOCAL models in all areas!

Organizations

None yet

xpgx1's activity

Reacted to m-ric's post with 👀 28 days ago
🧠 CLEAR: first multimodal benchmark to make models forget what we want them to forget

With privacy concerns rising, we sometimes need our models to "forget" specific information - like a person's data - while keeping everything else intact. Researchers just released CLEAR, the first benchmark to test how well this works with both text and images.

โŒย Bad news: Current methods either fail to truly forget or end up forgetting way too much. It's like trying to remove a single ingredient from a baked cake!

✨ But there's hope: Adding simple mathematical constraints (L1 regularization) during the forgetting process significantly improves results.

🎯 Key insights:

✅ The benchmark tests forgetting on 200 fictional personas
‣ 3,770 visual Q&A pairs
‣ 4,000 textual Q&A pairs
‣ Additional real-world tests

🛑 Most current forgetting methods don't work well with both text and images
‣ They either remember what they should forget
‣ Or they forget too much unrelated information

✨ Simple mathematical constraints work surprisingly well
‣ L1 regularization prevents excessive forgetting
‣ Works especially well with the LLMU method (see the sketch below)
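
A minimal sketch of that idea (not the paper's code), assuming a PyTorch/Transformers-style model whose forward pass returns a loss: an LLMU-style unlearning step does gradient ascent on the forget set, keeps a term that preserves behaviour on retained data, and adds an L1 penalty on the trainable weights so the model doesn't over-forget. Names like unlearning_step, l1_lambda and the batch structure are illustrative assumptions.

def l1_penalty(model, l1_lambda=1e-4):
    # Sum of absolute values of the trainable parameters (e.g. LoRA adapters).
    return l1_lambda * sum(
        p.abs().sum() for p in model.parameters() if p.requires_grad
    )

def unlearning_step(model, forget_batch, retain_batch, optimizer, l1_lambda=1e-4):
    optimizer.zero_grad()
    # Gradient ascent on the forget set: push the model away from the data
    # it should forget; the retain term keeps everything else intact.
    forget_loss = -model(**forget_batch).loss
    retain_loss = model(**retain_batch).loss
    # The L1 term keeps the trainable weights small/sparse, which is what
    # limits collateral forgetting of unrelated knowledge.
    loss = forget_loss + retain_loss + l1_penalty(model, l1_lambda)
    loss.backward()
    optimizer.step()
    return loss.item()

Because the penalty only touches parameters with requires_grad=True, a LoRA-style setup would regularize just the adapter weights rather than the whole model.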

👉 Read the full paper here: CLEAR: Character Unlearning in Textual and Visual Modalities (2410.18057)