
Greetings, Enthusiast. If you are reading this, you are just like us. So, here's the thing: we built this. What does it do? WE DON'T KNOW. What do we know? It's a 70-billion-parameter model with an 8k context length, and it can use up to 5k of context with essentially no loss in precision. Most of the degradation in precision and contextual coherence happens between 5k and 7k tokens. How was it made? Random things. This is an experimental model, but we didn't conduct the experiment; our experiment conducted this experiment.
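If you want to try it, here is a minimal sketch of how you might load and query the model with the Hugging Face transformers library. This is an assumption on our part, not an official recipe: it presumes the repository id deepnight-research/SaiLy_experiment_v1 and a standard causal-LM layout, and a 70B model stored in F32 will not fit on a single consumer GPU, so downcast the dtype, shard across GPUs, or quantize as your hardware allows.

```python
# Minimal usage sketch (assumptions: standard causal-LM layout, repo id as listed on this page).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepnight-research/SaiLy_experiment_v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # downcast from F32 to roughly quarter the memory footprint
    device_map="auto",          # shard across available GPUs (requires accelerate)
)

prompt = "Explain what an experimental language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Keep prompt + generation well under ~5k tokens, where the model behaves most reliably.
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```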

Now, everything we know about this model, you know too. Also, yes, it is uncensored; please use it responsibly.

Cheers!

Model details: Safetensors · 69B params · tensor type F32
