Benny boo

rbn2008k

AI & ML interests

Open source LLMs

Recent Activity

updated a Space about 1 month ago
rbn2008k/Scarlett

Organizations

None yet

rbn2008k's activity

updated a Space about 1 month ago
New activity in m-ric/rate_coolness 2 months ago

Update app.py

#1 opened 2 months ago by rbn2008k
liked a Space 3 months ago
reacted to singhsidhukuldeep's post with 👍 3 months ago
1 hour of OpenAI o1, here are my thoughts...

Here are a few of my observations:

- Slower response times: o1 can take 10+ seconds to answer some questions, as it spends more time "thinking" through problems. In my case, it took over 50 seconds.

- Less likely to admit ignorance: The models are reported to be less likely to admit when they don't know the answer to a question.

- Higher pricing: o1-preview is significantly more expensive than GPT-4o, costing 3x more for input tokens and 4x more for output tokens in the API. With more thinking and more tokens, this could require houses to be mortgaged! (A rough cost sketch follows right after this list.)

- Do we need this?: While it's better than GPT-4o for complex reasoning, on many common business tasks, its performance is just equivalent.

- Not a big deal: No comparisons to Anthropic's models or Google DeepMind's Gemini are mentioned or included.

- This model tries to think and iterate over its response on its own! Think of it as built-in CoT (chain-of-thought) on steroids! Would love a technical review paper on the training process. (A toy prompt-level sketch of this draft-and-revise idea is at the end of this post.)
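
Since the pricing multipliers above invite a bit of arithmetic, here is a minimal back-of-the-envelope cost sketch in Python. The GPT-4o base rates, the token counts, and the padding for hidden reasoning tokens are assumptions for illustration only; the o1-preview rates simply apply the 3x/4x multipliers from the post, so check OpenAI's pricing page for current numbers.

```python
# Rough cost comparison for the "3x input / 4x output" point above.
# GPT-4o rates below ($5 / $15 per 1M tokens) are assumed for illustration;
# o1-preview rates are derived from the 3x/4x multipliers mentioned in the post.

GPT4O_INPUT_PER_M = 5.00    # USD per 1M input tokens (assumed)
GPT4O_OUTPUT_PER_M = 15.00  # USD per 1M output tokens (assumed)

O1_INPUT_PER_M = 3 * GPT4O_INPUT_PER_M    # 3x more for input tokens
O1_OUTPUT_PER_M = 4 * GPT4O_OUTPUT_PER_M  # 4x more for output tokens


def cost_usd(input_tokens: int, output_tokens: int,
             in_rate: float, out_rate: float) -> float:
    """Return the API cost in USD for a single request."""
    return input_tokens / 1_000_000 * in_rate + output_tokens / 1_000_000 * out_rate


if __name__ == "__main__":
    # Hypothetical request: 2k prompt tokens, 4k completion tokens.
    # o1 also bills hidden "reasoning" tokens as output tokens, so pad the
    # output side (the amount here is a guess).
    prompt, completion, reasoning = 2_000, 4_000, 10_000

    gpt4o = cost_usd(prompt, completion, GPT4O_INPUT_PER_M, GPT4O_OUTPUT_PER_M)
    o1 = cost_usd(prompt, completion + reasoning, O1_INPUT_PER_M, O1_OUTPUT_PER_M)

    print(f"GPT-4o:     ${gpt4o:.4f}")
    print(f"o1-preview: ${o1:.4f}  (~{o1 / gpt4o:.0f}x more for this request)")
```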

A must-read paper: https://cdn.openai.com/o1-system-card.pdf
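
As a footnote to the "built-in CoT on steroids" point, below is a toy, prompt-level sketch of the draft-and-revise idea using the OpenAI Python client. This is not how o1 actually works internally (its reasoning is trained into the model and hidden from the API); the model name, prompts, and number of rounds are placeholders for illustration.

```python
# Toy prompt-level "think, then iterate" loop: draft an answer with explicit
# step-by-step reasoning, then have the model critique and revise it.
# NOT o1's actual mechanism; just the pattern the post alludes to.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str, model: str = "gpt-4o") -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def think_and_iterate(question: str, rounds: int = 2) -> str:
    # First pass: plain chain-of-thought prompting.
    answer = ask(f"Think step by step, then answer:\n{question}")
    # Iterate: the model reviews and improves its own previous answer.
    for _ in range(rounds):
        answer = ask(
            "Review the answer below for mistakes and produce an improved answer.\n"
            f"Question: {question}\nPrevious answer:\n{answer}"
        )
    return answer


if __name__ == "__main__":
    print(think_and_iterate(
        "How many times do the hour and minute hands of a clock overlap in 24 hours?"
    ))
```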