Clem 🤗 PRO

clem

AI & ML interests

multi-modal, time-series, biology and chemistry

clem's activity

reacted to fdaudens's post with ❤️👍 about 5 hours ago
My new favorite bookmark: AnyChat. The ultimate AI Swiss Army knife that lets you switch between ChatGPT, Gemini, Claude, LLaMA, Grok, and more, all in one place!

Really cool work by @akhaliq

akhaliq/anychat
reacted to mgubri's post with 🔥 about 5 hours ago
🎉 We're excited to announce, in collaboration with @kaleidophon, the release of the models from our Apricot 🍑 paper, "Apricot: Calibrating Large Language Models Using Their Generations Only," accepted at ACL 2024! Reproducibility is essential in science, and we've worked hard to make it as seamless as possible.
parameterlab/apricot-models-673d2cae40b6ff437a86f0bf
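
For readers who want to poke at the calibration theme of the paper, here is a minimal, self-contained sketch of expected calibration error (ECE), the standard metric in this area. It is a generic illustration, not the Apricot method itself (which trains an auxiliary model to predict calibrated confidences from a model's generations alone); the helper name is my own.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence; average |accuracy - confidence| per bin,
    weighted by bin size. `confidences` are floats in [0, 1], `correct` is 0/1."""
    total = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Half-open bins [lo, hi); the last bin also includes confidence 1.0.
        idx = [i for i, c in enumerate(confidences)
               if lo <= c < hi or (b == n_bins - 1 and c == hi)]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        conf = sum(confidences[i] for i in idx) / len(idx)
        ece += len(idx) / total * abs(acc - conf)
    return ece

# A model that is 95% confident but always right (and 5% confident but always
# wrong) is slightly miscalibrated in both bins:
print(expected_calibration_error([0.95, 0.95, 0.05, 0.05], [1, 1, 0, 0]))
```

A well-calibrated model drives this toward zero; the Apricot models aim to produce confidences with low ECE without access to logits.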
reacted to fdaudens's post with 👍👀🤯 about 5 hours ago
🚀 DeepSeek just dropped DeepSeek-R1-Lite-Preview with "reasoning" capability.

- Matches OpenAI o1-preview on AIME & MATH benchmarks.
- Transparent process output
- Open-source model to be released

Try it out: https://chat.deepseek.com/
reacted to rwightman's post with 🚀🚀 about 5 hours ago
Want to validate some hparams or figure out what timm model to use before committing to downloading or training with a large dataset? Try mini-imagenet: timm/mini-imagenet

I had this sitting on my drive and forgot where I pulled it together from. It's 100 classes of ImageNet, 50k train and 10k val images (from the ImageNet-1k train set), and 5k test images (from the ImageNet-1k val set). 7.4GB instead of > 100GB for the full ImageNet-1k. This version is not reduced resolution like some other 'mini' versions. Super easy to use with the timm train/val scripts; check out the dataset card.

I often check fine-tuning with even smaller datasets like:
* timm/resisc45
* timm/oxford-iiit-pet

But those are a bit small to train any modest-size model without starting from pretrained weights.
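
As a tiny illustration of the kind of hparam sanity check a small dataset enables, the sketch below derives optimizer steps per epoch from the split size quoted above (50k train images). The helper is my own, not part of timm; it just makes the schedule arithmetic explicit before you commit to a run.

```python
def steps_per_epoch(num_images: int, batch_size: int, drop_last: bool = True) -> int:
    """Optimizer steps in one pass over the data.
    drop_last=True floors (the last partial batch is skipped);
    drop_last=False rounds up via negative floor division."""
    return num_images // batch_size if drop_last else -(-num_images // batch_size)

TRAIN_IMAGES = 50_000  # mini-imagenet train split, per the post

# Compare a few batch sizes to pick a warmup/decay schedule that fits:
for bs in (128, 256, 512):
    print(f"batch={bs}: {steps_per_epoch(TRAIN_IMAGES, bs)} steps/epoch")
```

At batch size 256, an epoch is 195 steps, so a 5-epoch warmup is under 1k steps; the same check on full ImageNet-1k would give ~25x more.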
reacted to openfree's post with 🚀🔥 about 5 hours ago
MOUSE-I: Transform a Prompt into a Live Web Service
"From Prompt to Global Service in 60 Seconds"
The Future of Web Development
MOUSE-I revolutionizes web development by converting a single prompt into a fully functional, globally deployed web service through AI automation and enterprise-grade infrastructure.
⚡ Lightning-Fast Pipeline (60 Seconds)
1. AI Prompt Enhancement (5s)

Instant requirement analysis
Tech stack optimization
Development spec generation

2. Code Creation (49s)

Production-ready code
Responsive design
Performance-optimized

3. Live Rendering (1s)

Instant visualization
Real-time testing

4. Global Deployment (5s)

Vercel infrastructure
Global CDN
Automatic HTTPS

🎯 Key Differentiators

Instant Results: From idea to live URL in 60 seconds
Enterprise Quality: Production-grade code and infrastructure
Zero Configuration: No setup or technical knowledge required
40+ Templates: Ready-to-use solutions for games, dashboards, and apps

💫 Perfect For

Startups needing quick MVPs
Developers prototyping ideas
Non-technical founders building web services
Educators creating interactive tools

🚀 Get Started

Visit MOUSE-I Gallery
Enter your prompt
Get your live service in 60 seconds

💡 Connect

🌐 MOUSE-I Gallery
https://huggingface.co/spaces/VIDraft/mouse1
💬 discord.gg/openfreeai
📧 arxivgpt@gmail.com
reacted to AlonzoLeeeooo's post with 🚀 about 5 hours ago
🎉 We are excited to announce our latest research on video editing - StableV2V!
💭 StableV2V performs video editing with shape consistency aligned to the user prompt, even when the edit introduces significant shape differences.
📚 We also curate a testing benchmark for video editing, namely DAVIS-Edit, comprising both text-based and image-based applications.
🚀 We have open-sourced our paper, code, model weights, and DAVIS-Edit; you can find more details on StableV2V at the following links:

- arXiv paper: https://arxiv.org/abs/2411.11045
- Project page: https://alonzoleeeooo.github.io/StableV2V/
- GitHub: https://github.com/AlonzoLeeeooo/StableV2V
- HuggingFace model repo: AlonzoLeeeooo/StableV2V
- HuggingFace dataset repo: AlonzoLeeeooo/DAVIS-Edit
reacted to huzaifas-sidhpurwala's post with 👀 about 5 hours ago
As AI models become more widespread, it is essential to address their potential risks and vulnerabilities. Open-source AI is poised to be a driving force behind tomorrow's innovations in this field. This paper examines the current landscape of security and safety in open-source AI models and outlines concrete measures to monitor and mitigate associated risks effectively.

Building Trust: Foundations of Security, Safety and Transparency in AI (2411.12275)

replied to elliesleightholm's post about 5 hours ago
reacted to elliesleightholm's post with 👀🚀🔥🤗 about 5 hours ago
reacted to hbseong's post with 🔥👀 about 5 hours ago
🚨🔥 New Release Alert! 🔥🚨

Introducing the 435M model that outperforms Llama-Guard-3-8B while slashing 75% of the computation cost! 💻💥
👉 Check it out: hbseong/HarmAug-Guard (Yes, INFERENCE CODE INCLUDED! 💡)

More details in our paper: https://arxiv.org/abs/2410.01524 📜

#HarmAug #LLM #Safety #EfficiencyBoost #Research #AI #MachineLearning
reacted to takarajordan's post with ❤️ about 5 hours ago
First post here goes!

takarajordan/CineDiffusion

Super excited to announce CineDiffusion 🎥! It creates images up to 4.2 megapixels in cinematic ultrawide formats like:
- 2.39:1 (Modern Widescreen)
- 2.76:1 (Ultra Panavision 70)
- 3.00:1 (Experimental Ultra-wide)
- 4.00:1 (Polyvision)
- 2.55:1 (CinemaScope)
- 2.20:1 (Todd-AO)
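
For a rough sense of what those formats mean at the stated 4.2-megapixel budget, here is a back-of-envelope sketch. The helper is my own, not part of CineDiffusion, and snapping dimensions to multiples of 16 is an assumption (many diffusion pipelines require dimensions divisible by 8 or 16).

```python
import math

def max_dims(ratio: float, budget_px: int = 4_200_000, multiple: int = 16):
    """Largest width x height for a given aspect ratio that stays under the
    pixel budget, with both sides snapped down to a multiple of `multiple`."""
    height = int(math.sqrt(budget_px / ratio)) // multiple * multiple
    width = int(height * ratio) // multiple * multiple
    return width, height

# Rough maximum canvas sizes for a few of the listed formats:
for name, r in [("2.39:1", 2.39), ("2.76:1", 2.76), ("4.00:1", 4.0)]:
    w, h = max_dims(r)
    print(f"{name}: {w}x{h} ({w * h / 1e6:.2f} MP)")
```

For example, 2.39:1 works out to roughly 3120x1312, i.e. the extra width of an ultrawide frame comes at the cost of vertical resolution under a fixed pixel budget.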

More to come soon!!

Thanks to @John6666 and @Resoldjew for your early support <3

And thanks to the team at ShuttleAI for their brand new Shuttle-3 model, what an amazing job.

shuttleai/shuttle-3-diffusion