Alex Chen

alexchen4ai

AI & ML interests

NLP

Organizations

Nexa AI, Nexa AI ToB Enterprise, nexaai-unreleased, nexa-collaboration, Nexa AI Enterprise Early Access

alexchen4ai's activity

New activity in NexaAIDev/OmniVLM-968M 19 days ago

Regarding Model Weights (#12, opened 19 days ago by BimsaraRad)
New activity in NexaAIDev/OmniVLM-968M 24 days ago

9x token reduction (#10, opened 25 days ago by Sijuade)
New activity in NexaAIDev/OmniVLM-968M 27 days ago

Error loading model (#9, opened 28 days ago by iojvsuynv)
reacted to thomwolf's post with πŸ‘ about 1 month ago
A Little guide to building Large Language Models in 2024

This is a recording of a 75-minute lecture I gave two weeks ago on how to train an LLM from scratch in 2024. I tried to keep it short and comprehensive, focusing on concepts that are crucial for training a good LLM but are often hidden in tech reports.

In the lecture, I introduce the students to all the important concepts, tools, and techniques for training a high-performing LLM:
* finding, preparing and evaluating web-scale data
* understanding model parallelism and efficient training
* fine-tuning/aligning models
* fast inference
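The first bullet, preparing web-scale data, usually comes down to running cheap quality heuristics over billions of documents. As a minimal sketch (the thresholds and heuristic names here are my own illustrative choices, not from the lecture), a document-level quality filter might look like:

```python
def quality_filter(doc: str,
                   min_words: int = 50,
                   max_symbol_ratio: float = 0.1,
                   max_dup_line_ratio: float = 0.3) -> bool:
    """Return True if a web document passes simple quality heuristics.

    All thresholds are illustrative, not tuned values from any real pipeline.
    """
    words = doc.split()
    if len(words) < min_words:
        return False
    # Documents dominated by symbols/punctuation are usually boilerplate.
    symbols = sum(1 for ch in doc if not ch.isalnum() and not ch.isspace())
    if symbols / max(len(doc), 1) > max_symbol_ratio:
        return False
    # Heavy line duplication suggests navigation menus or templates.
    lines = [ln.strip() for ln in doc.splitlines() if ln.strip()]
    if lines:
        dup_ratio = 1 - len(set(lines)) / len(lines)
        if dup_ratio > max_dup_line_ratio:
            return False
    return True
```

Real pipelines layer many such filters (language ID, perplexity scoring, fuzzy deduplication) on top of heuristics like these.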

There are of course many things and details missing that I should have added; don't hesitate to tell me your most frustrating omission and I'll add it in a future part. In particular, I think I'll add more focus on how to filter topics well and extensively, and maybe more practical anecdotes and details.

Now that I've recorded it, I've been thinking this could be part 1 of a two-part series, with a second, fully hands-on video on how to run all these steps with some libraries and recipes we've released recently at HF around LLM training (and which could easily be adapted to other frameworks anyway):
* datatrove for all things web-scale data preparation: https://github.com/huggingface/datatrove
* nanotron for lightweight 4D parallelism LLM training: https://github.com/huggingface/nanotron
* lighteval for in-training fast parallel LLM evaluations: https://github.com/huggingface/lighteval
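These libraries share a staged-pipeline style: documents stream through a sequence of processing steps. As a rough illustration of that idea only (a toy sketch in plain Python, not datatrove's actual API), a pipeline of stages can be composed like this:

```python
from typing import Callable, Iterable, Iterator

# A stage consumes a stream of documents and yields a (possibly filtered) stream.
Stage = Callable[[Iterator[str]], Iterator[str]]

def run_pipeline(docs: Iterable[str], stages: list[Stage]) -> list[str]:
    """Stream documents through a sequence of processing stages."""
    stream: Iterator[str] = iter(docs)
    for stage in stages:
        stream = stage(stream)
    return list(stream)

def dedup(stream: Iterator[str]) -> Iterator[str]:
    """Drop exact duplicate documents."""
    seen: set[str] = set()
    for doc in stream:
        if doc not in seen:
            seen.add(doc)
            yield doc

def min_length(n: int) -> Stage:
    """Keep only documents with at least n words."""
    def stage(stream: Iterator[str]) -> Iterator[str]:
        return (doc for doc in stream if len(doc.split()) >= n)
    return stage

cleaned = run_pipeline(
    ["hello world", "hello world", "hi"],
    [dedup, min_length(2)],
)
# cleaned == ["hello world"]
```

Because each stage is a lazy generator, the pipeline never materializes the full corpus in memory, which is the property that matters at web scale.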

Here is the link to watch the lecture on YouTube: https://www.youtube.com/watch?v=2-SPH9hIKT8
And here is the link to the Google slides: https://docs.google.com/presentation/d/1IkzESdOwdmwvPxIELYJi8--K3EZ98_cL6c5ZcLKSyVg/edit#slide=id.p

Enjoy, and I'm happy to hear feedback on it and on what to add, correct, or extend in a second part.
liked a Space about 1 month ago
New activity in NexaAIDev/OmniVLM-968M about 1 month ago

about ocr (#1, opened about 1 month ago by MiaHawthorne)

Text/vision parameter split (#3, opened about 1 month ago by AlexThompson)