Digital Clockwork

AI & ML interests

None defined yet.

Recent Activity

DigitalClockwork's activity

MrOvkill
posted an update 6 months ago
Hello!

I've been in the lab synthesizing captions with my trusty sidekick BLIP, and along the way I had an interesting idea: an incredibly simple model that accepts simple instruction pairs, specifically adjective-noun pairs, and outputs 2D vertices.
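To picture the input/output contract, here's a hedged sketch: a regular-polygon generator stands in for the learned model, and every name, vocabulary entry, and vertex convention below is illustrative, not taken from the actual project.

```python
import json
import math

# Hypothetical vocabulary: the adjective scales a radius, the noun picks a shape.
ADJECTIVES = {"small": 0.5, "large": 2.0}
NOUNS = {"triangle": 3, "square": 4, "hexagon": 6}

def polygon_vertices(n_sides, radius=1.0):
    """Return the 2D vertices of a regular polygon with the given radius."""
    return [(round(radius * math.cos(2 * math.pi * k / n_sides), 4),
             round(radius * math.sin(2 * math.pi * k / n_sides), 4))
            for k in range(n_sides)]

def pair_to_vertices(adjective, noun):
    """Map an adjective-noun instruction pair to a list of 2D vertices."""
    return polygon_vertices(NOUNS[noun], radius=ADJECTIVES[adjective])

print(json.dumps({"instruction": "large triangle",
                  "vertices": pair_to_vertices("large", "triangle")}))
```

A trained model would replace the lookup tables with learned embeddings, but the shape of the task stays the same: tiny text in, coordinates out.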

I wrote the current implementation myself, then ran it past Claude, not because I'm incompetent, but because I recognize that tools written by experts may show more technique than my newbie self.

As with all projects, this will be updated in proportion to the feedback received. If someone's using it and wants to keep using it, I'm happy to keep working on anything. Thanks, all! 🤗

-<3

https://colab.research.google.com/gist/SMeyersMrOvkill/8d4686db803f6c5f43fafc1c94b1c8c6/polypathdelement.ipynb
MrOvkill
posted an update 6 months ago
Hello!

I've been in the lab. I think one or two of you saw my furtive attempts to create a dolphinized 2B Gemma, which is still waiting for more funding; I get paid in a week.

Once that funding ran out, I dropped my last pinch of API credits to work on this:

DigitalClockwork/spatial_instruct_v1

It's an instruct dataset for spatial interactions with color tokens; I'm planning to tune a TBD model. I've been experimenting with Gemma, but I'm open to ( smaller! ) model suggestions. If you think your favorite 0.5/0.75/1/2B can handle numbers, distances, or especially colors well, most especially community-enhanced models... I'm listening to the comments, intently!
Have a great day, and enjoy! This one was fun! 🤗
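For a rough sense of what one record in such a dataset could look like, here's a hypothetical JSONL row; the field names and the color-token convention are my illustration, not the actual schema of DigitalClockwork/spatial_instruct_v1.

```python
import json

# One hypothetical spatial-instruction record with color tokens.
record = {
    "instruction": "Place the <red> square two units left of the <blue> circle.",
    "response": "<red> square: (-2, 0); <blue> circle: (0, 0)",
}

line = json.dumps(record)   # JSONL: one JSON object per line
parsed = json.loads(line)   # round-trips cleanly for data processing
```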

-<3
MrOvkill
posted an update 6 months ago
Hello!

I've been playing with Claude, and we decided to tackle a real thorn in my side.

"The Truthiness Model" - Analyze arbitrary input text for "truthiness", or likelihood of containing true information according to seed text.
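The actual Truthiness Model is trained; as a toy illustration of the idea of scoring arbitrary text against trusted seed text, a simple vocabulary-overlap baseline (entirely my own stand-in, not the model) looks like this:

```python
def truthiness_score(text, seed_text):
    """Toy baseline: Jaccard overlap between the input's vocabulary and a
    trusted seed text's vocabulary, in [0, 1]. Higher means the input shares
    more wording with the seed text."""
    a = set(text.lower().split())
    b = set(seed_text.lower().split())
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)
```

A learned model would of course capture meaning rather than shared words, but the interface is the same: text plus seed text in, a score out.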

P.S. Yes, v1 was broken. I saw the loss going down and got excited. Anyway, it just needed some data and a rollback; Claude and I got WAY too carried away trying to tack on features.

Anyway, fixed now, and working! :D

http://samuelmeyerscode.serveblog.net/?p=49
MrOvkill
posted an update 6 months ago
Hello!

https://www.youtube.com/watch?v=6NyDkpfNfUs

I had some feedback recently that perhaps it would be beneficial to expand the fallacy dataset. I took this deeply to heart, and exploded it 10x.

MrOvkill/fallacies-fallacy-base

Produced synthetically with *ALL* the Gemini models on Vertex AI.

*phew* This was a rush. It took over 8 hours, might have been more like 16, of straight prompt/copy/paste/fix/re-splice/fix/prompt again/chug caffeine/repeat, but we got there! Thanks for egging me on, all! I appreciate being driven to work! So much better than boredom! 🤗

Have fun!

MrOvkill
posted an update 6 months ago
Hello!

I've been in the lab playing with various data formats today, and jammed out with some plain text to produce a nice list of fallacies and their solutions from Wikipedia's List of fallacies as JSONL for data processing.
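The plain-text-to-JSONL step is simple to sketch. The two rows and their field names below are my own illustration in the spirit of Wikipedia's list, not the dataset's actual schema:

```python
import json

# Hypothetical rows: each fallacy paired with a short counter ("solution").
fallacies = [
    {"name": "Ad hominem",
     "description": "Attacking the person instead of the argument.",
     "solution": "Redirect discussion to the argument's actual claims."},
    {"name": "Straw man",
     "description": "Refuting a distorted version of the opponent's position.",
     "solution": "Restate the original position and address it directly."},
]

# JSONL: one JSON object per line, easy to stream during data processing.
with open("fallacies.jsonl", "w", encoding="utf-8") as f:
    for row in fallacies:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")

rows = [json.loads(line) for line in open("fallacies.jsonl", encoding="utf-8")]
```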

Had some bumps along the way, but Gemini 1.5 Pro and I got there in the end. I really must learn to work with Gemini 1.5 Flash more effectively in the future.

MrOvkill/fallacies-list-wikipedia

Enjoy!


-<3

MrOvkill
posted an update 6 months ago
Hello!

I've made a little evaluation dataset for LLMs that requires advanced and convoluted logical reasoning. It's composed of 81 unique paradoxes, with admittedly a couple in the same category ( absolutes ). It's available here: MrOvkill/pdox

**Update**: I have upgraded the dataset to v3 ( don't worry about v2, it can be forgotten... ) and placed it in a separate repo here:
MrOvkill/pdox-reversed

Enjoy & Have fun!
-<3
MrOvkill
posted an update 7 months ago
I propose a novel approach to training large language models (LLMs), inspired by the layered learning process observed in humans. Instead of training on all data simultaneously, this method would introduce increasingly complex information in stages, prioritizing foundational knowledge and relevance to the modern world. This "back-to-front" training approach could potentially improve the efficiency and effectiveness of LLM training. I've outlined the concept in more detail in this Gist:
https://gist.github.com/SMeyersMrOvkill/14ff37ffb955831897b177fb3d2d540e.
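A minimal sketch of the staged ordering, assuming a per-example complexity score; the Gist's actual proposal may differ in detail. Each stage re-includes earlier material so foundational knowledge persists as complexity grows:

```python
import math

def staged_curriculum(examples, complexity, n_stages=3):
    """Yield training pools in stages of increasing complexity; each stage
    keeps all earlier (simpler) examples and adds harder ones."""
    ordered = sorted(examples, key=complexity)
    stage_size = math.ceil(len(ordered) / n_stages)
    for stage in range(1, n_stages + 1):
        yield ordered[: stage * stage_size]  # grow the pool stage by stage

# Toy corpus: string length stands in for a real complexity score.
docs = ["a b c", "a", "a b c d e", "a b", "a b c d"]
stages = list(staged_curriculum(docs, complexity=len, n_stages=3))
```

In a real run, each yielded pool would feed one training phase, with the final stage covering the full corpus.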

While the core idea and solutions presented in the Gist are my own, I'd like to acknowledge the valuable assistance I received from a language model in refining the presentation of this concept, making it clearer and more engaging for the community. I'm eager to hear your thoughts and feedback!
MrOvkill
posted an update 7 months ago
Hello, all!

I was up late experimenting with Gemini, and we came across the need for circular arithmetic. We couldn't find anything that accomplished it reliably in the way we wanted, behaving identically across different transformations, and so on...

Therefore I wrote, with Gemini's assistance, an _rclamp function you can add as an attribute or just use as-is on a PyTorch tensor. It's 1D right now; I didn't want to implement the dimensionality wrong with my "newbie skills".

Have fun!
- <3

https://colab.research.google.com/drive/1aj_iAp0eyfPMznzF-aC1UXrQPTNOeKUj?usp=sharing
MrOvkill
posted an update 8 months ago
StarCoder 15b Instruct v0.1 Space w/ Llama.cpp & Code Completion!

Hey all! I made a little StarCoder Space; it's up for fun, have at it. Please make tons of PRs and requests, I love improving my work for you all!

MrOvkill/starcoder-15b-instruct