Did a fun experiment: What are the main themes emerging from the 100+ Nieman Journalism Lab predictions for 2025?
I used natural language processing to cluster and map them, which really helps spot patterns that weren't obvious when reading the predictions one by one. So what will shape journalism next year? A lot of AI and US politics (surprise!), but there's also a horizontal axis that spans from industry strategies to deep reflections on how to talk to the public.
Click any dot to explore the original prediction. What themes surprise/interest you the most?
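For the curious, here is a minimal sketch of that kind of pipeline. The embedding model, cluster count, and t-SNE projection are stand-in assumptions of mine, not necessarily the exact setup behind the map:

```python
# Minimal sketch: embed each prediction, cluster the embeddings into themes,
# then project to 2D to get one dot per prediction on the map.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

predictions = [f"prediction text {i}" for i in range(100)]  # stand-in for the real 100+ texts

embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(predictions)

themes = KMeans(n_clusters=8, random_state=0).fit_predict(embeddings)    # theme label per prediction
coords = TSNE(n_components=2, random_state=0).fit_transform(embeddings)  # (x, y) per dot
```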
Coming back to Paris Friday to open our new Hugging Face office!
We're at capacity for the party, but add your name to the waiting list: we're trying to privatize the passage du Caire for extra space for robots 🤖🦾🦿
In the past seven days, the Diffusers team has shipped:
1. Two new video models
2. One new image model
3. Two new quantization backends
4. Three new fine-tuning scripts
5. Multiple fixes and library QoL improvements
Coffee on me if someone can guess 1-4 correctly.
Key Idea: A data-dependent weighted average for pooling and communication, enabling flexible and powerful neural network connections.
Breakthrough: Bahdanau's "soft search" mechanism (softmax + weighted averaging) solved encoder-decoder bottlenecks in machine translation.
Transformer Revolution: Attention Is All You Need (1706.03762) (2017) by @ashishvaswanigoogle et al. simplified architectures by stacking attention layers, introducing multi-headed attention and positional encodings.
Legacy: Attention replaced RNNs, driving modern AI systems like ChatGPT. It emerged independently but was influenced by contemporaneous work like Alex Graves's Neural Turing Machines (1410.5401) and Jason Weston's Memory Networks (1410.3916).
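To make the key idea concrete, here is a minimal sketch (my own illustration, not code from any of the cited papers) of attention as a data-dependent, softmax-weighted average:

```python
import numpy as np

def attention(query, keys, values):
    """Return a weighted average of `values`, with weights that depend on the data."""
    scores = keys @ query                    # one relevance score per input position
    weights = np.exp(scores - scores.max())  # numerically stable softmax...
    weights /= weights.sum()                 # ...turning scores into weights summing to 1
    return weights @ values                  # the weighted average

keys = values = np.random.randn(5, 8)     # 5 input positions, dimension 8
query = np.random.randn(8)
context = attention(query, keys, values)  # shape (8,): a summary focused where the query "looks"
```

Learned projections for queries, keys, and values, plus many heads of this operation stacked in layers, is essentially what the Transformer added on top.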
Attention to history: Jürgen Schmidhuber claims his 1992 Fast Weight Programmers anticipated modern attention mechanisms. While conceptually similar, the term "attention" was absent, and there's no evidence it influenced Bahdanau, Cho, and Bengio's 2014 work. Paying attention (!) to history might have brought us to genAI earlier, but credit for the breakthrough still goes to Montreal.
Who else deserves recognition in this groundbreaking narrative of innovation? Let's ensure every contributor gets the credit they deserve. Leave a comment below 👇🏻🤗
Quick update from week 1 of the smol course. The community is taking the driver's seat and using the material for their own projects. If you want to do the same, join in!
- We have ongoing translation projects in Korean, Vietnamese, Portuguese, and Spanish.
- 3 chapters are ready for students, on topics like instruction tuning, preference alignment, and parameter-efficient fine-tuning.
- 3 chapters are in progress, covering evaluation, vision language models, and synthetic data.
- Around 780 people have forked the repo to use it for learning, teaching, and sharing.
Next step is to support people who want to use the course for teaching, content creation, internal knowledge sharing, or anything else. If you're into this, drop an issue or PR.