Reproducing DeepSeek R1 for Text-to-Graph Extraction
I've been working on replicating DeepSeek R1, focusing on zero-shot text-to-graph extraction: a challenging task where language models extract entities and relations from text based on predefined types.
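To make the task concrete, here is a minimal sketch of what a text-to-graph output and a type-constrained validator might look like. The JSON schema (an `entities` list and `relations` triples with `head`/`tail`/`type` fields) is an assumption for illustration, not the format the model actually emits.

```python
import json

def validate_graph(raw, entity_types, relation_types):
    """Parse a model's JSON output and keep only entities/relations
    whose types appear in the predefined type sets.

    The schema below is hypothetical, chosen for illustration."""
    try:
        graph = json.loads(raw)
    except json.JSONDecodeError:
        return None  # invalid JSON: nothing to extract
    entities = {e["text"] for e in graph.get("entities", [])
                if e.get("type") in entity_types}
    relations = [r for r in graph.get("relations", [])
                 if r.get("type") in relation_types
                 and r.get("head") in entities
                 and r.get("tail") in entities]
    return {"entities": sorted(entities), "relations": relations}

raw = json.dumps({
    "entities": [{"text": "DeepSeek", "type": "organization"},
                 {"text": "R1", "type": "model"}],
    "relations": [{"head": "DeepSeek", "tail": "R1", "type": "developed"}],
})
print(validate_graph(raw, {"organization", "model"}, {"developed"}))
```

Constraining the output to predefined types this way is exactly where zero-shot models tend to drift, which motivates the RL approach below.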
Key Insight: Language models struggle when constrained by entity and relation types. Supervised training alone isn't enough, but reinforcement learning (RL), specifically Group Relative Policy Optimization (GRPO), shows promise.
Why GRPO? It trains the model to generate structured graphs by optimizing multiple reward functions at once (format adherence, JSON validity, and extraction accuracy). The model learns dynamically from both positive and hard negative examples, and the reward mix can be tuned to emphasize relation extraction.
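The three reward components can be sketched as simple scoring functions over a completion. Everything here is illustrative: the `<answer>` tag convention, the triple representation, and the equal weighting are assumptions, not the original training setup.

```python
import json
import re

def format_reward(completion):
    """Reward completions that wrap the graph in <answer>...</answer> tags
    (the tag convention is a hypothetical choice for this sketch)."""
    return 1.0 if re.search(r"<answer>.*</answer>", completion, re.DOTALL) else 0.0

def json_reward(completion):
    """Reward completions whose answer section parses as valid JSON."""
    m = re.search(r"<answer>(.*)</answer>", completion, re.DOTALL)
    if not m:
        return 0.0
    try:
        json.loads(m.group(1))
        return 1.0
    except json.JSONDecodeError:
        return 0.0

def accuracy_reward(predicted, gold):
    """F1 overlap between predicted and gold (head, relation, tail) triples."""
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(predicted), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

completion = '<answer>{"relations": [["DeepSeek", "developed", "R1"]]}</answer>'
total = (format_reward(completion)
         + json_reward(completion)
         + accuracy_reward({("DeepSeek", "developed", "R1")},
                           {("DeepSeek", "developed", "R1")}))
print(total)  # 3.0
```

In a GRPO setup, rewards like these would be summed (or reweighted, e.g. to emphasize relation extraction) and used to score groups of sampled completions against each other.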
Early Results: Even with limited training, F1 scores consistently improved, and we saw clear benefits from RL-based optimization. More training = better performance!
Next Steps: We're scaling up experiments with larger models and high-quality data. Stay tuned for updates! Meanwhile, check out one of our experimental models here: Ihor/Text2Graph-R1-Qwen2.5-0.5b
Welcome the New and Improved GLiNER-Multitask!
Since the release of our beta version, GLiNER-Multitask has received many positive responses and has been embraced in consulting, research, and production environments. Thank you, everyone, for your feedback; it helped us rethink the strengths and weaknesses of the first model, and we are excited to present the next iteration of this multi-task information extraction model.
What's New? Here are the key improvements in this latest version:
- Expanded Task Support: Now includes text classification and other new capabilities.
- Enhanced Relation Extraction: Significantly improved accuracy and robustness.
- Improved Prompt Understanding: Optimized for open-information extraction tasks.
- Better Named Entity Recognition (NER): More accurate and reliable results.
How We Made It Better: These advancements were made possible by:
- Leveraging a better and more diverse dataset.
- Using a larger backbone model for increased capacity.
- Implementing advanced model-merging techniques.
- Employing self-learning strategies for continuous improvement.
- Better training strategies and hyperparameter tuning.