John Smith (John6666)

AI & ML interests

None yet

Recent Activity

updated a collection about 3 hours ago
Spaces for Model / Space / useful Utilities in Hugging Face
updated a collection about 3 hours ago
Spaces for LLM / VLM / NLP
liked a Space about 3 hours ago
ehristoforu/solatium-advanced-search

Organizations

open/acc · FashionStash Group meeting

John6666's activity

reacted to aaditya's post with 🔥 about 5 hours ago
Last Week in Medical AI: Top Research Papers/Models 🔥
🏅 (December 15 – December 21, 2024)

Medical LLM & Other Models
- MedMax: Mixed-Modal Biomedical Assistant
  - Advanced multimodal instruction tuning
  - Enhanced biomedical knowledge integration
  - Comprehensive assistant capabilities
- MGH Radiology Llama 70B
  - Specialized radiology focus
  - State-of-the-art performance
  - Enhanced report generation capabilities
- HC-LLM: Historical Radiology Reports
  - Context-aware report generation
  - Historical data integration
  - Improved accuracy in diagnostics

Frameworks & Methods
- ReflecTool: Reflection-Aware Clinical Agents
- Process-Supervised Clinical Notes
- Federated Learning with RAG
- Query Pipeline Optimization

Benchmarks & Evaluations
- Multi-OphthaLingua
  - Multilingual ophthalmology benchmark
  - Focus on healthcare in LMICs
  - Bias assessment framework
- ACE-M3 Evaluation Framework
  - Multimodal medical model testing
  - Comprehensive capability assessment
  - Standardized evaluation metrics

LLM Applications
- Patient-Friendly Video Reports
- Medical Video QA Systems
- Gene Ontology Annotation
- Healthcare Recommendations

Special Focus: Medical Ethics & AI
- Clinical Trust Impact Study
- Mental Health AI Challenges
- Hospital Monitoring Ethics
- Radiology AI Integration

Now you can watch and listen to the latest Medical AI papers daily on our YouTube and Spotify channels as well!

- Full thread in detail:
https://x.com/OpenlifesciAI/status/1870504774162063760
- YouTube: youtu.be/SbFp4fnuxjo
- Spotify: https://t.co/QPmdrXuWP9
reacted to nicolay-r's post with 👀 about 7 hours ago
📢 If you're working in the relation extraction / character network domain, the following post may be relevant.
Excited to share the most recent milestone: the release of ARElight 0.25.0 🎊

Core library: https://github.com/nicolay-r/ARElight
Server: https://github.com/nicolay-r/ARElight-server

🔎 What is ARElight? It is a granular viewer of sentiments between entities in massively large documents and collections of texts.
In short, it extracts contexts with mentioned object pairs for downstream prompting / classification.
In the slides below we illustrate the application of ARElight to sentiment classification between object pairs in context.

We exploit DeepPavlov NER models + Google Translate + a BERT-based classifier in the demo. The bash script for launching the quick demo illustrates how these components fit together.

The new update provides a series of new features:
✅ SQLite support for storing all the extracted samples
✅ Support for an enhanced GUI for content investigation
✅ Switch to external no-string projects for NER and Translator

Supplementary materials:
📜 Paper: https://link.springer.com/chapter/10.1007/978-3-031-56069-9_23
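
For readers who want the gist in code, here is a rough sketch of the core idea (collecting sentence contexts that mention an entity pair) using a generic transformers NER pipeline as a stand-in, not ARElight's actual API; every name here is illustrative.

```python
# Not ARElight's API: a rough sketch of extracting contexts that mention an
# entity pair, using a generic transformers NER pipeline as a stand-in.
from itertools import combinations
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")  # downloads a default NER model

def entity_pair_contexts(sentences):
    """Yield (entity_a, entity_b, sentence) triples for later prompting / classification."""
    for sent in sentences:
        persons = {e["word"] for e in ner(sent) if e["entity_group"] == "PER"}
        for a, b in combinations(sorted(persons), 2):
            yield a, b, sent

for triple in entity_pair_contexts(["Alice praised Bob after the meeting."]):
    print(triple)
```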
reacted to singhsidhukuldeep's post with 🧠 about 7 hours ago
Exciting breakthrough in AI: @Meta's new Byte Latent Transformer (BLT) revolutionizes language models by eliminating tokenization!

The BLT architecture introduces a groundbreaking approach that processes raw bytes instead of tokens, achieving state-of-the-art performance while being more efficient and robust. Here's what makes it special:

>> Key Innovations
Dynamic Patching: BLT groups bytes into variable-sized patches based on entropy, allocating more compute power where the data is more complex. This results in up to 50% fewer FLOPs during inference compared to traditional token-based models.

Three-Component Architecture:
• Lightweight Local Encoder that converts bytes to patch representations
• Powerful Global Latent Transformer that processes patches
• Local Decoder that converts patches back to bytes

>> Technical Advantages
• Matches performance of Llama 3 at 8B parameters while being more efficient
• Superior handling of non-English languages and rare character sequences
• Remarkable 99.9% accuracy on spelling tasks
• Better scaling properties than token-based models

>> Under the Hood
The system uses an entropy model to determine patch boundaries, cross-attention mechanisms for information flow, and hash n-gram embeddings for improved representation. The architecture allows simultaneous scaling of both patch and model size while maintaining fixed inference costs.
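
To make the patching idea concrete, here is a toy sketch of entropy-based dynamic patching, assuming a `next_byte_probs` callable standing in for the small entropy model; the threshold is a placeholder, not Meta's value.

```python
# Toy sketch of entropy-based dynamic patching: start a new patch whenever the
# small byte-level model is "surprised" (next-byte entropy crosses a threshold).
import math

def entropy(probs):
    """Shannon entropy (in nats) of a next-byte distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def dynamic_patches(byte_seq, next_byte_probs, threshold=2.0):
    """next_byte_probs(prefix) -> distribution over 256 next-byte values (placeholder)."""
    patches, current = [], []
    for i, b in enumerate(byte_seq):
        current.append(b)
        if entropy(next_byte_probs(byte_seq[: i + 1])) > threshold:
            patches.append(bytes(current))  # high entropy: close the patch here
            current = []
    if current:
        patches.append(bytes(current))
    return patches
```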

This is a game-changer for multilingual AI and could reshape how we build future language models. Excited to see how this technology evolves!
reacted to etemiz's post with 👀 about 24 hours ago
What if human alignment is easy:
- Get a list of humans who really care about other humans
- Feed what they say into an LLM
reacted to nroggendorff's post with 🔥 about 24 hours ago
Has anyone else noticed that ZeroGPU quota is per space, not per user as of a few weeks ago?
replied to nroggendorff's post about 24 hours ago

Really? 🙀
I think it changed around the time the quota bar appeared a few weeks ago. Or was it even earlier?

If that's the case, though, some things seem odd. For example, won't the popular ZeroGPU Spaces created by HF staff reach their limit and stop working? And does the behavior differ between Public and Private Spaces?

reacted to Jaward's post with 👀 1 day ago
reacted to davanstrien's post with 🔥 1 day ago
Introducing FineWeb-C 🌐🎓, a community-built dataset for improving language models in ALL languages.

Inspired by FineWeb-Edu, the community is labelling the educational quality of texts in many languages.

318 annotators, 32K+ annotations, 12 languages - and growing! 🌍

data-is-better-together/fineweb-c
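
As a quick sketch, one language split can be loaded with datasets; the config name and column name below are assumptions based on the dataset card, so check the card before relying on them.

```python
# Minimal sketch, assuming "arb_Arab" is a valid FineWeb-C config and that the
# annotations live in an "educational_value_labels" column (check the card).
from datasets import load_dataset

ds = load_dataset("data-is-better-together/fineweb-c", "arb_Arab", split="train")
print(ds[0]["text"][:200])
print(ds[0]["educational_value_labels"])
```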
reacted to Abhaykoul's post with 🚀 1 day ago
🔥 BIG ANNOUNCEMENT: THE HELPINGAI API IS LIVE! 🔥

Yo, the moment you’ve all been waiting for is here! 🚀 The HelpingAI API is now LIVE and ready to level up your projects! 🔥 We’re bringing that next-level AI goodness straight to your fingertips. 💯

No more waiting: it's time to build something epic! 🙌

From now on, you can integrate our cutting-edge AI models into your own applications, workflows, and everything in between. Whether you’re a developer, a creator, or just someone looking to make some serious moves, this is your chance to unlock the full potential of emotional intelligence and adaptive AI.

👉 Check out the docs and start building (https://helpingai.co/docs)
👉 Visit the HelpingAI website (https://helpingai.co/)
reacted to InferenceIllusionist's post with 🔥 1 day ago
MilkDropLM-32b-v0.3: Unlocking Next-Gen Visuals ✨

Stoked to release the latest iteration of our MilkDropLM project! This new release is based on the powerful Qwen2.5-Coder-32B-Instruct model using the same great dataset that powered our 7b model.

What's new?

- Genome Unlocked: Deeper understanding of preset relationships for more accurate and creative generations.

- Preset Revival: Breathe new life into old presets with our upgraded model!

- Loop-B-Gone: Say goodbye to pesky loops and hello to smooth generation.

- Natural Chats: Engage in more natural sounding conversations with our LLM than ever before.

Released under Apache 2.0, because sharing is caring!

Try it out: InferenceIllusionist/MilkDropLM-32b-v0.3
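
A minimal generation sketch, assuming the model loads as a standard causal LM through transformers (a 32B model needs substantial GPU memory or quantization); the prompt is illustrative.

```python
# Minimal sketch: generate a preset via a standard text-generation pipeline.
# Assumes enough GPU memory for the 32b weights (or add quantization).
from transformers import pipeline

gen = pipeline(
    "text-generation",
    model="InferenceIllusionist/MilkDropLM-32b-v0.3",
    device_map="auto",
)
print(gen("Write a MilkDrop preset with slowly swirling waves.",
          max_new_tokens=512)[0]["generated_text"])
```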

Shoutout to @superwatermelon for his invaluable insights and collaboration, and to all the courageous community members who have tested and provided feedback!
reacted to MoritzLaurer's post with 👀 1 day ago
Quite excited by the ModernBERT release! Small at 0.15/0.4B, 2T tokens of modern pre-training data (tokenizer handles code), an 8k context window: a great, efficient model for embeddings & classification!

This will probably be the basis for many future SOTA encoders! And I can finally stop using DeBERTav3 from 2021 :D

Congrats @answerdotai , @LightOnIO and collaborators like @tomaarsen !

Paper and models here 👇 https://huggingface.co/collections/answerdotai/modernbert-67627ad707a4acbf33c41deb
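
For anyone who wants to poke at it, a minimal fill-mask sketch, assuming a transformers version recent enough to include ModernBERT.

```python
# Minimal fill-mask sketch with ModernBERT-base; requires a recent transformers.
from transformers import pipeline

fill = pipeline("fill-mask", model="answerdotai/ModernBERT-base")
for pred in fill("Paris is the [MASK] of France."):
    print(pred["token_str"], round(pred["score"], 3))
```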
reacted to suayptalha's post with 🔥 2 days ago
🚀 FastLlama Series is Live!

🦾 Experience faster, lighter, and smarter language models! The new FastLlama makes Meta's LLaMA models work with smaller file sizes, lower system requirements, and higher performance. The model supports 8 languages, including English, German, and Spanish.

🤖 Built on the LLaMA 3.2-1B-Instruct model, fine-tuned with Hugging Face's SmolTalk and MetaMathQA-50k datasets, and powered by LoRA (Low-Rank Adaptation) for groundbreaking mathematical reasoning.

💻 Its compact size makes it versatile for a wide range of applications!
💬 Chat with the model:
🔗 Chat Link: suayptalha/Chat-with-FastLlama
🔗 Model Link: suayptalha/FastLlama-3.2-1B-Instruct
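
A minimal chat sketch via transformers, assuming a version whose text-generation pipeline accepts chat-style messages; the question is illustrative.

```python
# Minimal chat sketch; recent transformers pipelines accept chat-style messages.
from transformers import pipeline

chat = pipeline("text-generation", model="suayptalha/FastLlama-3.2-1B-Instruct")
messages = [{"role": "user", "content": "What is 17 * 23? Show your steps."}]
print(chat(messages, max_new_tokens=200)[0]["generated_text"])
```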
reacted to ginipick's post with 🚀🔥 2 days ago
🌟 Digital Odyssey: AI Image & Video Generation Platform 🎨
Welcome to our all-in-one AI platform for image and video generation! 🚀
✨ Key Features

🎨 High-quality image generation from text
🎥 Video creation from still images
🌐 Multi-language support with automatic translation
🛠️ Advanced customization options

💫 Unique Advantages

⚡ Fast and accurate results using FLUX.1-dev and Hyper-SD models
🔒 Robust content safety filtering system
🎯 Intuitive user interface
🛠️ Extended toolkit including image upscaling and logo generation

🎮 How to Use

1. Enter your image or video description
2. Adjust settings as needed
3. Click generate
4. Save and share your results automatically

🔧 Tech Stack

FluxPipeline
Gradio
PyTorch
OpenCV
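
As a rough sketch of how the FLUX part of this stack is typically wired with diffusers (model id from the public FLUX.1-dev release; settings illustrative):

```python
# Minimal text-to-image sketch with diffusers' FluxPipeline; needs a GPU with
# plenty of memory, hence the CPU-offload call.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trade speed for lower GPU memory use
image = pipe("a lighthouse at dawn, watercolor", num_inference_steps=28).images[0]
image.save("out.png")
```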

link: ginigen/Dokdo

Turn your imagination into reality with AI! ✨
#AI #ImageGeneration #VideoGeneration #MachineLearning #CreativeTech
replied to OFT's post 2 days ago

Sorry, there was a weird UI bug (caused by a typo). I've fixed it. That said, it should be easy to run even locally, except for the torch installation! 😅

pip install -U diffusers transformers sentencepiece "numpy<2" safetensors accelerate huggingface_hub

You can convert it using the script I wrote above; if it doesn't work properly, just post here again and I'll get a notification.
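
For reference, a minimal sketch of loading a converted checkpoint with diffusers; the local folder path is hypothetical.

```python
# Minimal sketch: load a locally converted diffusers folder (path hypothetical).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("./converted_model", torch_dtype=torch.float16)
pipe.to("cuda")
pipe("test prompt").images[0].save("test.png")
```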

reacted to FranckAbgrall's post with 🔥 2 days ago
🆕 It should now be easier to identify discussions or pull requests where repository owners are participating on HF. Let us know if that helps 💬🤗
reacted to prithivMLmods's post with 🤗 2 days ago
Qwen2VL Models: Vision and Language Processing 🍉

📍Fine-tuned variants: [ LaTeX OCR, Math Parsing, Text Analogy OCRTest ]

❄️Demo: prithivMLmods/Qwen2-VL-2B. The demo includes the Qwen2VL 2B base model.

🎯The Space documents content from the input image as standardized plain text. It includes adjustment tools with over 30 font styles, file-format support for PDF and DOCX, text alignment, font-size adjustment, and line-spacing controls.

📄PDFs are rendered using the ReportLab toolkit.

🧵Models :
+ prithivMLmods/Qwen2-VL-OCR-2B-Instruct
+ prithivMLmods/Qwen2-VL-Ocrtest-2B-Instruct
+ prithivMLmods/Qwen2-VL-Math-Prase-2B-Instruct

🚀Sample Document :
+ https://drive.google.com/file/d/1Hfqqzq4Xc-3eTjbz-jcQY84V5E1YM71E/view?usp=sharing

📦Collection :
+ prithivMLmods/vision-language-models-67639f790e806e1f9799979f
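
A minimal transcription sketch with one of the models above, assuming a transformers version with Qwen2-VL support; the image path is hypothetical.

```python
# Minimal OCR-style sketch; "document.png" is a hypothetical input image.
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "prithivMLmods/Qwen2-VL-OCR-2B-Instruct"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Transcribe the text in this image."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[Image.open("document.png")],
                   return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```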

@prithivMLmods 🤗
reacted to anton-l's post with 🚀 2 days ago
Introducing 📐𝐅𝐢𝐧𝐞𝐌𝐚𝐭𝐡: the best public math pre-training dataset with 50B+ tokens!
HuggingFaceTB/finemath

Math remains challenging for LLMs and by training on FineMath we see considerable gains over other math datasets, especially on GSM8K and MATH.

We build the dataset by:
🛠️ carefully extracting math data from Common Crawl;
🔎 iteratively filtering and recalling high-quality math pages using a classifier trained on synthetic annotations to identify mathematical reasoning and deduction.

We conducted a series of ablations comparing the performance of Llama-3.2-3B-Base after continued pre-training on FineMath and observed notable gains compared to the baseline model and other public math datasets.

We hope this helps advance the performance of LLMs on math and reasoning! 🚀
We’re also releasing all the ablation models as well as the evaluation code.

HuggingFaceTB/finemath-6763fb8f71b6439b653482c2
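
A minimal streaming sketch with datasets; the "finemath-4plus" config name is an assumption based on the dataset card.

```python
# Minimal streaming sketch; config name assumed from the dataset card.
from datasets import load_dataset

ds = load_dataset("HuggingFaceTB/finemath", "finemath-4plus",
                  split="train", streaming=True)
for row in ds.take(3):
    print(row["text"][:200])
```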
reacted to m-ric's post with 🔥 2 days ago
After 6 years, BERT, the workhorse of encoder models, finally gets a replacement: 𝗪𝗲𝗹𝗰𝗼𝗺𝗲 𝗠𝗼𝗱𝗲𝗿𝗻𝗕𝗘𝗥𝗧! 🤗

We talk a lot about ✨Generative AI✨, meaning the decoder version of the Transformer architecture, but this is only one way to build LLMs: encoder models, which turn a sentence into a vector, are maybe even more widely used in industry than generative models.

The workhorse for this category has been BERT since its release in 2018 (that's prehistory for LLMs).

It's not a fancy 100B-parameter supermodel (just a few hundred million parameters), but it's an excellent workhorse, kind of a Honda Civic of LLMs.

Many applications use BERT-family models; the top models in this category accumulate millions of downloads on the Hub.

➡️ Now a collaboration between Answer.AI and LightOn just introduced BERT's replacement: ModernBERT.

𝗧𝗟;𝗗𝗥:
🏛️ Architecture changes:
⇒ First, standard modernizations:
- Rotary positional embeddings (RoPE)
- Replace GeLU with GeGLU (see the sketch below)
- Use Flash Attention 2
✨ The team also introduced innovative techniques like alternating attention instead of full attention, and sequence packing to get rid of padding overhead.
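
To make the GeGLU change concrete, a minimal PyTorch sketch of the gated feed-forward block (dimensions illustrative, not ModernBERT's exact layer):

```python
# Minimal GeGLU feed-forward sketch: split the up-projection into a value half
# and a GELU-activated gate half, then multiply them elementwise.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeGLUFeedForward(nn.Module):
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.up = nn.Linear(dim, 2 * hidden)  # value and gate in one projection
        self.down = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        value, gate = self.up(x).chunk(2, dim=-1)
        return self.down(value * F.gelu(gate))

# quick shape check
print(GeGLUFeedForward(768, 3072)(torch.randn(2, 16, 768)).shape)
```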

🥇 As a result, the model tops the game of encoder models:
It beats the previous standard, DeBERTaV3, with 1/5th the memory footprint, and runs 4x faster!

Read the blog post 👉 https://huggingface.co/blog/modernbert
reacted to akhaliq's post with 🚀 2 days ago
Google drops Gemini 2.0 Flash Thinking

a new experimental model that unlocks stronger reasoning capabilities and shows its thoughts. The model plans (with its thoughts visible), can solve complex problems at Flash speeds, and more.

Now available in anychat; try it out: akhaliq/anychat