I'm just another LLM enthusiast, in it for the thrill of this new field! Local LLMs and generative AI (Stable Diffusion) are a new era of supercharged creativity for me. I'm merely using the frameworks, enjoying the hard work of all model trainers and project maintainers.
Keep on keeping on - we need reliable, unbiased and LOCAL models in all areas!
With privacy concerns rising, we sometimes need our models to "forget" specific information - like a person's data - while keeping everything else intact. Researchers just released CLEAR, the first benchmark to test how well this "machine unlearning" works with both text and images.
❌ Bad news: Current methods either fail to truly forget or end up forgetting way too much. It's like trying to remove a single ingredient from a baked cake!
✨ But there's hope: Adding simple mathematical constraints (L1 regularization) during the forgetting process significantly improves results.
🎯 Key insights:
✅ The benchmark tests forgetting on 200 fictional personas
 ➣ 3,770 visual Q&A pairs
 ➣ 4,000 textual Q&A pairs
 ➣ Additional real-world tests
🔍 Most current forgetting methods don't work well with both text and images
 ➣ They either remember what they should forget
 ➣ Or they forget too much unrelated information
✨ Simple mathematical constraints work surprisingly well
 ➣ L1 regularization prevents excessive forgetting
 ➣ Works especially well with the LLMU method
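The L1 idea is simple enough to sketch in a few lines: while you run gradient ascent on the forget set (pushing the model to unlearn it), an L1 penalty on the drift from the original weights pulls everything else back toward the pre-unlearning model. Here's a minimal toy version on a plain linear model - the data, `lam`, and `lr` are illustrative choices of mine, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference weights: the model BEFORE unlearning.
w_ref = rng.normal(size=5)
w = w_ref.copy()

# Forget set: examples whose influence we want removed.
X_f = rng.normal(size=(8, 5))
y_f = X_f @ w_ref + 0.1 * rng.normal(size=8)

lam = 0.05  # L1 strength (hypothetical value)
lr = 0.01

for _ in range(100):
    # Gradient ASCENT on the forget-set squared error: push predictions
    # on the forget data away from their targets (i.e., "unlearn" them).
    grad_forget = X_f.T @ (X_f @ w - y_f) / len(y_f)
    # Subgradient of the L1 penalty |w - w_ref|: pulls weights back
    # toward the reference model, preventing excessive forgetting.
    grad_l1 = lam * np.sign(w - w_ref)
    w += lr * grad_forget - lr * grad_l1

drift = float(np.abs(w - w_ref).sum())
```

Without the `grad_l1` term the ascent step can wander arbitrarily far from `w_ref`, wrecking unrelated knowledge; the L1 pull caps that drift, which is the over-forgetting failure the benchmark flags.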