BramVanroy posted an update May 14
All my models seem to be plagued by infinite lists. When you ask a question that requires the model to write a list, it most often keeps adding bullet points or enumerated items without stopping. I am wondering whether this is a result of using chatty GPT-4 outputs as the preferred responses in DPO. Any thoughts?

Personally, I find that ChatGPT has been trained for multiple purposes. First of all it acts as an intent detector, leapfrogging RASA (the chatbot website component) and its slot-based architecture. It also has access to functions, so for some tasks it is already operating inside a RAG system, and it can always have a guardrailing system to stop bad outputs. In other words, it has input systems and output systems, and the final response is extracted from between them. Bing, for instance, uses the internet to do RAG first and then responds from that retrieved content: basically it uses the internet for the RAG and its model only for the chat. A minimal sketch of that input/output guardrail pattern is below.
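Here is that guardrail pattern as a minimal sketch; `check_input`, `check_output` and `call_model` are hypothetical placeholders, not any specific library's API:

```python
# A minimal sketch of the input-system / output-system idea: guardrail checks
# wrap the model call on both sides, and the final response is extracted from
# whatever passes. All three functions below are hypothetical placeholders.
def check_input(user_query: str) -> bool:
    # Input system: e.g. block disallowed topics or malformed requests.
    return "forbidden" not in user_query.lower()

def check_output(response: str) -> bool:
    # Output system: e.g. filter unsafe or off-purpose completions.
    return len(response.strip()) > 0

def call_model(user_query: str) -> str:
    # Your model (or RAG backend) goes here.
    raise NotImplementedError

def guarded_respond(user_query: str) -> str:
    if not check_input(user_query):
        return "Sorry, I can't help with that."
    response = call_model(user_query)
    if not check_output(response):
        return "Sorry, I can't help with that."
    return response
```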

Understanding this, it is clear why people send so many specific queries to the model via the API!

Local language models need to be trained for these tasks, such as producing a JSON list from data. One approach is to use ChatGPT to perform the task and generate the data, then train a model to function in the same way (essentially you are not hitting ChatGPT the model, you are hitting the RAG system called ChatGPT, which is what all the online models offer). When training for function calling and the tasks that would otherwise be performed by ChatGPT, you should always use the ML format of messages and history, because when you host your model on Ollama, LM Studio, llama.cpp server and the like, you will talk to the model in exactly that way. Models react differently when hosted than when wrapped in applications and UIs, and differently again when driven from code! A sketch of this message formatting follows.
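For example, here is a minimal sketch of formatting a JSON-extraction sample in the messages format with `apply_chat_template`, so the training text matches what the inference server will render; the model name is just an assumption, swap in your own base model:

```python
# Render a training sample through the same chat template the hosted server
# (Ollama, LM Studio, llama.cpp server, ...) will apply at serving time.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

sample = [
    {"role": "user", "content": "Extract the people mentioned as a JSON list:\n"
                                "Alice met Bob and Carol at the station."},
    {"role": "assistant", "content": '["Alice", "Bob", "Carol"]'},
]

# tokenize=False returns the templated text, so you can inspect exactly what
# the model will see during training and at inference.
text = tokenizer.apply_chat_template(sample, tokenize=False)
print(text)
```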

So if you need the model to produce a specific output type, you should first do a text dump of the data shape, i.e. train it on a plain text-generation task over the data shapes themselves (lots of JSON data), then go back to task-based data examples, and it will produce much better answers. A sketch of this two-stage setup is below.
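A minimal sketch of that two-stage data setup, assuming simple JSONL files (the file names and record fields are illustrative):

```python
# Stage 1: "text dump" samples -- the model just sees lots of the data shape.
# Stage 2: task-based samples -- the same shape, now tied to an instruction.
import json

shape_samples = [
    {"text": json.dumps({"name": "Alice", "age": 30, "tags": ["admin"]})},
    {"text": json.dumps({"name": "Bob", "age": 25, "tags": []})},
]

task_samples = [
    {
        "prompt": "Return the user record as JSON: Alice, 30, admin.",
        "completion": json.dumps({"name": "Alice", "age": 30, "tags": ["admin"]}),
    },
]

with open("stage1_shapes.jsonl", "w") as f:
    for s in shape_samples:
        f.write(json.dumps(s) + "\n")

with open("stage2_tasks.jsonl", "w") as f:
    for s in task_samples:
        f.write(json.dumps(s) + "\n")
```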

Obviously, when dumping text like this: if the loss is above 1, the knowledge is only on the surface; once it drops below 1, it is beginning to be recallable; and at around 0.5 it is confirmed knowledge that will be recalled as well as interrogated. So it is a little better to deeply embed the dump of text. When fitting the task, aim for a loss somewhere between 2 and 0.5, as anything below that can overfit the task, unless you want to CLOSE the DOMAIN, i.e. KEEP the model on PURPOSE. Data whose loss is higher than 1 will need a higher temperature to retrieve (so think of the loss as the temperature you will need to retrieve the task; each sample has its own temperature, some low, some high).
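If you want to act on that heuristic during training, here is a minimal sketch using a `transformers` `TrainerCallback` that stops once the logged loss drops below the 0.5 floor mentioned above; the threshold is this rule of thumb, not a library default:

```python
from transformers import TrainerCallback

class LossBandCallback(TrainerCallback):
    """Stop training once the logged loss drops under a target floor,
    to avoid overfitting the task (unless you want a closed domain)."""

    def __init__(self, stop_below: float = 0.5):
        self.stop_below = stop_below

    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs and logs.get("loss") is not None and logs["loss"] < self.stop_below:
            control.should_training_stop = True
        return control

# Usage: Trainer(..., callbacks=[LossBandCallback(stop_below=0.5)])
```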
Repeated data which is similar will be chosen from the pool of top-k candidates, which increases the probability that other tasks, calculations and predictions surface, rather than just the memorised examples being recalled.
We NEED the model to be a predictor. Hallucinations are a sign that the model is attempting to predict based on its known data instead of trying to recall information (it is not a database); it is a collection of probability matrices mapping input sequences to output sequences. Hence, for a specific task, KEEP THE SAME PROMPT for every sample, so that when you use the model to perform the task in app-based or RAG systems, the exact same prompt can be used, as sketched below.
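A minimal sketch of the fixed-prompt idea; the prompt text and helper names are illustrative assumptions:

```python
# One fixed prompt per task: stamped on every training sample and reused
# verbatim at inference time.
TASK_PROMPT = "Extract all person names from the text and return a JSON list."

def make_training_sample(text: str, answer: str) -> list[dict]:
    return [
        {"role": "system", "content": TASK_PROMPT},
        {"role": "user", "content": text},
        {"role": "assistant", "content": answer},
    ]

def make_inference_messages(text: str) -> list[dict]:
    # Exactly the same prompt the model was trained on.
    return [
        {"role": "system", "content": TASK_PROMPT},
        {"role": "user", "content": text},
    ]
```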
So if you feel your prompt is not working (it has too much detail), then you need to train on the specific prompt you will use for your task, i.e. teach the model to produce the output suggested by your prompt and examples (then it can park the car!).

The language model is very multipurpose: after you have created this seasoned model, you can train it on any task, given examples! You can allow it to remain a general predictor, or focus it on a task with a single one-command prompt, hence CHAINS! Agents are the same: each performs one purpose, whether that is formatting the text output of one model for another, or performing a single function (write code based on ...). A sketch of such a chain is below.
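A minimal sketch of such a chain, where `call_model` is a hypothetical stand-in for whatever inference client you use (an OpenAI-compatible API, Ollama, a local pipeline):

```python
def call_model(messages: list[dict]) -> str:
    # Hypothetical: send `messages` to your inference endpoint, return text.
    raise NotImplementedError

def chain(document: str) -> str:
    # Step 1: one single-purpose call -- extract facts.
    facts = call_model([
        {"role": "user", "content": f"List the key facts in this text as JSON:\n{document}"},
    ])
    # Formatting step: reshape the output of one model for the next.
    prompt = "Write a one-paragraph summary from these facts:\n" + facts
    # Step 2: a second single-purpose call -- write the summary.
    return call_model([{"role": "user", "content": prompt}])
```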

The models have not been trained on maths, but they can be, with a set of examples (mental arithmetic). Later the model can calculate deeper maths, but those first blocks need to be laid first; a sketch of generating such examples follows.
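A minimal sketch of generating those mental-arithmetic building blocks as training samples; the format and number ranges are illustrative assumptions:

```python
import json
import random

def arithmetic_samples(n: int = 1000, seed: int = 0):
    """Yield simple mental-arithmetic prompt/completion pairs."""
    rng = random.Random(seed)
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}
    for _ in range(n):
        a, b = rng.randint(0, 99), rng.randint(0, 99)
        op, fn = rng.choice(list(ops.items()))
        yield {"prompt": f"What is {a} {op} {b}?", "completion": str(fn(a, b))}

with open("mental_arithmetic.jsonl", "w") as f:
    for sample in arithmetic_samples():
        f.write(json.dumps(sample) + "\n")
```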

Maybe?