OGAI-STEM-7B is a LoRA fine-tuned Mathstral-7B model, designed specifically for oil and gas engineering, scientific computing, and technical problem-solving. It is optimized for numerical accuracy, complex engineering calculations, and technical document understanding.
The model is an integral part of GainEnergy's Upstrima AI Platform, enhancing workflows with pragmatic AI agents, scientific computing tools, and retrieval-augmented generation (RAG) for document analysis.
| Version | Memory Requirement | Performance |
|---|---|---|
| OGAI-STEM-7B-GGUF | CPU optimized | Suitable for edge computing |
```bash
python -m vllm.entrypoints.openai.api_server \
    --model GainEnergy/ogai-stem-7b \
    --tensor-parallel-size 2
```
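Once launched, the server exposes an OpenAI-compatible REST endpoint (port 8000 by default). A minimal client sketch using only the standard library, assuming a local deployment and the standard `/v1/completions` route; `build_request` and `query` are illustrative helpers, not part of the model's API:

```python
import json
import urllib.request

# Default vLLM OpenAI-compatible endpoint (assumed local deployment)
API_URL = "http://localhost:8000/v1/completions"

def build_request(prompt: str, max_tokens: int = 100) -> dict:
    """Build an OpenAI-style completion payload for the served model."""
    return {
        "model": "GainEnergy/ogai-stem-7b",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.2,  # low temperature for numerical accuracy
    }

def query(prompt: str) -> str:
    """POST the payload to the server and return the generated text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]
```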
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "GainEnergy/ogai-stem-7b"

# Load the tokenizer and model, sharding weights across available devices
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Run a sample engineering query
prompt = "Calculate the pressure drop in a 500m pipeline with a 10,000 BPD flow rate."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
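The sample prompt asks for a pipeline pressure-drop calculation; the kind of answer expected can be sketched with the Darcy-Weisbach equation. The pipe diameter, fluid density, and viscosity below are illustrative assumptions, since the prompt does not specify them:

```python
import math

def pressure_drop_pa(length_m, flow_bpd, diameter_m, density, viscosity):
    """Darcy-Weisbach pressure drop for single-phase turbulent pipe flow."""
    # Convert barrels per day to m^3/s (1 bbl = 0.158987 m^3)
    q = flow_bpd * 0.158987 / 86400.0
    area = math.pi * diameter_m ** 2 / 4.0
    v = q / area  # mean flow velocity, m/s
    re = density * v * diameter_m / viscosity  # Reynolds number
    # Blasius correlation for the friction factor
    # (smooth pipes, roughly 4e3 < Re < 1e5)
    f = 0.316 / re ** 0.25
    return f * (length_m / diameter_m) * density * v ** 2 / 2.0

# 500 m pipeline at 10,000 BPD; diameter and fluid properties are assumed
dp = pressure_drop_pa(
    length_m=500.0,
    flow_bpd=10_000.0,
    diameter_m=0.25,   # assumed 10-inch line
    density=850.0,     # kg/m^3, typical crude oil
    viscosity=5e-3,    # Pa.s
)
print(f"Pressure drop: {dp / 1000:.1f} kPa")
```

For these assumed properties the drop comes out to a few kPa; a real engineering answer would also account for pipe roughness, elevation change, and fluid characterization.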
```bibtex
@article{ogai_stem_7b_2025,
  title={OGAI-STEM-7B: AI Model for Oil & Gas Scientific Computing},
  author={GainEnergy AI Team},
  year={2025},
  publisher={Hugging Face Models}
}
```
Base model: mistralai/Mathstral-7B-v0.1