---
license: apache-2.0
datasets:
  - GainEnergy/SMoE-Training
  - GainEnergy/reasoner
  - GainEnergy/ogai-8x7B
  - GainEnergy/oilandgas-engineering-dataset
  - GainEnergy/ogdataset
  - GainEnergy/upstrimacentral
  - open-r1/OpenR1-Math-220k
  - unsloth/LaTeX_OCR
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
  - oil-gas
  - drilling-engineering
  - mixtral-8x7b
  - lora
  - fine-tuned
  - energy-ai
  - pragmatic-ai
  - gguf
  - text-generation-inference
  - text-generation
model-index:
  - name: OGAI-8x7B
    results:
      - task:
          type: text-generation
          name: Drilling Engineering AI
        dataset:
          name: GainEnergy Oil & Gas Corpus
          type: custom
        metrics:
          - name: Drilling Calculations Accuracy
            type: accuracy
            value: 95.2
          - name: Engineering Document Retrieval Precision
            type: precision
            value: 91.8
          - name: Context Retention
            type: contextual-coherence
            value: High
variants:
  - name: OGAI-8x7B-4bit
    pipeline_tag: text-generation
    repo_name: GainEnergy/ogai-8x7b-4bit
  - name: OGAI-8x7B-8bit-32k
    pipeline_tag: text-generation
    repo_name: GainEnergy/ogai-8x7b-8bit-32k
  - name: OGAI-8x7b-Q4_K_M-GGUF
    pipeline_tag: text-generation
    repo_name: GainEnergy/OGAI-8x7b-Q4_K_M-GGUF
library_name: transformers
language:
  - en
widget:
  - text: >-
      User: What is the recommended mud weight for drilling a well with a
      formation pressure of 8,000 psi?


      AI:
    example_title: Mud Weight Calculation
  - text: >-
      User: Explain the differences between rotary steerable systems and
      conventional directional drilling tools.


      AI:
    example_title: Directional Drilling Technologies
  - text: >-
      User: How does drilling fluid viscosity impact hole cleaning efficiency
      in horizontal wells?


      AI:
    example_title: Drilling Fluid Rheology
  - text: >-
      User: What are the key factors affecting wellbore stability during
      drilling operations?


      AI:
    example_title: Wellbore Stability Analysis
  - text: >-
      User: Describe the process of well control in the event of a kick during
      drilling operations.


      AI:
    example_title: Well Control Procedures
pipeline_tag: text-generation
---

# OGAI-8x7B: Oil & Gas AI Model for Drilling Process Optimization

![OGAI-8x7B Oil & Gas AI Model]()

[License](LICENSE)

## Model Description

**OGAI-8x7B** is a **LoRA fine-tuned Mixtral-8x7B model** engineered for **oil and gas engineering applications**, optimized for **drilling process automation, technical document understanding, and engineering problem-solving**.

The model is a core component of **GainEnergy's Upstrima AI Platform**, which provides **pragmatic AI agents, advanced workflows, and retrieval-augmented generation (RAG)-enhanced document processing**.

## Technical Architecture

### Base Model Specifications

- **Architecture**: Mixtral-8x7B Sparse Mixture of Experts (SMoE)
- **Parameters**: 8 experts of 7B parameters each (46.7B total, 12.9B active per token; see the config sketch below)
- **Context Length**: 32,768 tokens, for handling long technical documentation
- **Attention Mechanism**: Sliding Window Attention, with the fine-tune additionally emphasizing specialized oil & gas terminology
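
The expert count, routing width, and context window above can be checked directly against the published configuration. A minimal sketch, assuming the repository exposes a standard `MixtralConfig`:

```python
from transformers import AutoConfig

# Fetch only the config (no weights) and print the architecture fields
# referenced in the specification list above.
config = AutoConfig.from_pretrained("GainEnergy/ogai-8x7b")

print(config.num_local_experts)        # expected: 8 experts
print(config.num_experts_per_tok)      # expected: 2 experts routed per token
print(config.max_position_embeddings)  # expected: 32768-token context window
print(config.sliding_window)           # sliding-window attention span
```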

### Fine-tuning Approach

- **Method**: Low-Rank Adaptation (LoRA) with rank 16 (a configuration sketch follows below)
- **Training Dataset**: 2.8M specialized oil & gas engineering datapoints, of which 800K are drilling-specific
- **Hardware**: 16x NVIDIA A100 80GB GPUs
- **Training Time**: 2,800 GPU hours
- **Special Features**: Enhanced numerical precision for engineering calculations, with reduced hallucination rates
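
For reference, a rank-16 LoRA setup of the kind described above can be expressed with the `peft` library. This is a minimal sketch: only `r=16` comes from this card, and every other hyperparameter (alpha, dropout, target modules) is an illustrative assumption, not the actual training recipe.

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,             # LoRA rank stated above
    lora_alpha=32,    # assumed scaling factor
    lora_dropout=0.05,  # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
```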

### Performance Optimizations

- **Quantization**: Custom 4-bit and 8-bit quantization schemes that preserve numerical accuracy
- **Inference Speed**: Optimized KV-cache management for real-time drilling advisory
- **Memory Footprint**: Reduced to 12GB with 4-bit quantization while maintaining 95%+ calculation accuracy
- **Speculative Decoding**: 2.7x faster response generation on technical queries (see the assisted-generation sketch below)
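
Speculative decoding of this kind can be approximated in stock `transformers` via assisted generation, where a small draft model proposes tokens that the main model verifies. A sketch under assumptions: the draft model named here is not GainEnergy's, merely a vocabulary-compatible stand-in.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("GainEnergy/ogai-8x7b")
model = AutoModelForCausalLM.from_pretrained("GainEnergy/ogai-8x7b", device_map="auto")
# Assumed draft model; any smaller model sharing the Mistral tokenizer should work.
draft = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1", device_map="auto"
)

inputs = tokenizer("Estimate the ECD at 12,000 ft TVD.", return_tensors="pt").to(model.device)
# assistant_model enables assisted (speculative-style) generation.
outputs = model.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```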
## Deployment-Optimized Versions

For flexible deployment options, we provide **quantized versions** of this model (an on-the-fly alternative is sketched after the list):

- **[4-bit Quantized Version (NF4)](https://huggingface.co/GainEnergy/ogai-8x7b-4bit)** – Lower memory usage with efficient performance.
- **[8-bit Quantized Version (Int8)](https://huggingface.co/GainEnergy/ogai-8x7b-8bit-32k)** – Balanced model size and precision.
- **[GGUF Version for llama.cpp](https://huggingface.co/GainEnergy/OGAI-8x7b-Q4_K_M-GGUF)** – Optimized for CPU-based inference and edge computing.
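
If you prefer to quantize the base repository on the fly rather than pull a prequantized variant, a standard bitsandbytes NF4 load is one option. A sketch under assumptions: this uses stock NF4 rather than the custom schemes of the prequantized repos, and the compute dtype is a guess.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # standard NF4, not the custom scheme
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed compute dtype
)

model = AutoModelForCausalLM.from_pretrained(
    "GainEnergy/ogai-8x7b",
    quantization_config=bnb_config,
    device_map="auto",
)
```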

### Using the GGUF Model with llama.cpp

To serve OGAI-8x7B with llama.cpp's `llama-server`, pulling the GGUF weights directly from the Hub:

```bash
./llama-server \
  --hf-repo GainEnergy/OGAI-8x7b-Q4_K_M-GGUF \
  --hf-file ogai-8x7b-q4_k_m.gguf \
  -np 8
```

## Deployment Options

This model supports one-click deployment to various platforms directly from the Hugging Face Hub:

### Hugging Face Inference Endpoints

Click the "Deploy" button and select "Inference Endpoints" to deploy on Hugging Face's managed infrastructure.

### Local Deployment with vLLM

```bash
python -m vllm.entrypoints.openai.api_server \
  --model GainEnergy/ogai-8x7b \
  --tensor-parallel-size 2
```
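
Once running, the server exposes an OpenAI-compatible API (on port 8000 by default). A minimal client-side sketch, assuming the default host and port:

```python
from openai import OpenAI

# vLLM's OpenAI-compatible endpoint; the api_key value is a placeholder.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.completions.create(
    model="GainEnergy/ogai-8x7b",
    prompt="List the primary well control barriers for a drilling operation.",
    max_tokens=128,
)
print(response.choices[0].text)
```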

## OGAI Model Family & Expansion Roadmap

OGAI-8x7B is part of the OGAI Model Family, designed for a range of energy-sector applications.

| **Model Name** | **Base Model** | **Fine-Tuning** | **Domain Focus** |
|----------------|----------------|-----------------|------------------|
| **OGAI 3.1 Engineer** | Custom Framework | Full fine-tuning | AI-powered Oil & Gas Engineering Assistance |
| **OGAI-Quantum** | Hybrid AI-Quantum | Hybrid Fine-Tuning | Reservoir Simulation & Seismic Data Processing |
| **OGAI-R1** | TinyR1-32B | Fine-tuned | Engineering AI Reasoning & Logical Analysis |
| **OGMOE** | Mixtral-8x7B MoE | Fine-tuned | Mixture of Experts (MoE) for Drilling Optimization |
| **OGAI-8x22B** | Mixtral-8x22B | Full fine-tuning | High-Performance LLM for Engineering Reasoning |
| **OGAI-8x7B** | Mixtral-8x7B | LoRA fine-tuning | Drilling Optimization, Well Planning & Document RAG |

## How to Use

### Run Inference in Python

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "GainEnergy/ogai-8x7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = "Calculate the required casing depth for a well in a high-pressure formation."
# With device_map="auto", send inputs to the model's device rather than
# hard-coding "cuda".
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
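
Because the base model is Mixtral-8x7B-Instruct, prompts generally behave better when wrapped in its instruction format. A sketch using the tokenizer's chat template, assuming the fine-tune kept the base model's template; it reuses `tokenizer` and `model` from the snippet above:

```python
messages = [
    {"role": "user",
     "content": "What is the recommended mud weight for a formation pressure of 8,000 psi?"}
]
# apply_chat_template wraps the message in the model's [INST] ... [/INST] format.
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```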

## Citing OGAI-8x7B

```bibtex
@article{ogai8x7b2025,
  title={OGAI-8x7B: An AI Model for Oil & Gas Drilling Engineering},
  author={GainEnergy AI Team},
  year={2025},
  publisher={Hugging Face Models}
}
```