Model Card for Mistral 7B - Time Series Predictor

The Mistral 7B - Time Series Predictor is a fine-tuned large language model designed to analyze server performance metrics and forecast potential failures. It processes time-series data and predicts failure probabilities, offering actionable insights for predictive maintenance and operational risk assessment.

Model Details

Model Description

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

  • Developed by: Sivakrishna Yaganti and Shankar Jayaratnam
  • Funded by: Esperanto Technologies
  • Model type: Causal Language Model, fine-tuned for time-series forecasting
  • Finetuned from model: Mistral 7B

Model Sources [optional]

  • Repository: [More Information Needed]
  • Paper [optional]: [More Information Needed]
  • Demo [optional]: [More Information Needed]

Uses

Direct Use

The model can be directly used to:

  • Forecast server health from time-series metrics such as temperature, power consumption, utilization, and throughput.
  • Predict potential causes of failures using historical data.

Downstream Use [optional]

The model is ideal for integration into platforms such as Splunk and Grafana (see the serving sketch after the list below) to:

  • Monitor server health in real-time.
  • Support decision-making in preventive maintenance.
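
As a rough illustration of such an integration, the sketch below wraps the model in a small HTTP service that a Splunk or Grafana dashboard could poll. FastAPI is used here purely for illustration and is not a stated dependency of this model; the endpoint name and payload fields are assumptions.

```python
# Minimal serving sketch: expose predictions over HTTP so a monitoring
# dashboard can poll them. Endpoint name, port, and payload schema are
# illustrative assumptions, not part of the released model.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Esperanto/Mistral-7B-TimeSeriesReasoner"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

app = FastAPI()

class Query(BaseModel):
    server: str  # e.g. "ET-1"
    date: str    # e.g. "07/11/24"

@app.post("/forecast")
def forecast(query: Query):
    prompt = (
        f"What is the failure probability and Cause for Server '{query.server}' "
        f"on Date : [{query.date}]?"
    )
    input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
    output = model.generate(input_ids=input_ids, max_new_tokens=100)
    text = tokenizer.decode(output[0], skip_special_tokens=True)
    # Return the raw generated text; a dashboard panel can display it, or a
    # downstream parser can extract the probability for alerting.
    return {"server": query.server, "date": query.date, "prediction": text}
```

Assuming the file is saved as forecast_service.py, it could be run with `uvicorn forecast_service:app` and queried from a dashboard panel or alert rule.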

Out-of-Scope Use

  • This model is not designed for general time-series forecasting outside server health monitoring.
  • It may not perform well on non-server-related data or domains significantly different from its training dataset.

Bias, Risks, and Limitations

Bias

  1. Performance may vary on datasets with metrics significantly different from those in the training data.
  2. Predictions are most accurate when used within the context of server health monitoring.

Risks

  1. Relying solely on the model without validating its predictions may result in inaccurate failure forecasts.
  2. Model outputs are probabilistic and should be interpreted cautiously in critical systems.

Limitations

  1. Limited to time-series metrics related to server health (e.g., temperature, power, throughput).
  2. Performance may degrade for very sparse or noisy datasets.

Recommendations

  1. Use the model in conjunction with other predictive maintenance tools.
  2. Validate model predictions against domain knowledge to ensure accuracy.

How to Get Started with the Model

The Mistral 7B - Time Series Predictor can process time-series queries such as server health metrics and predict failure probabilities and causes. The following Python script demonstrates how to load the model and generate responses.

Code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub.
model_name = "Esperanto/Mistral-7B-TimeSeriesReasoner"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Time-series query phrased as natural language.
prompt = "What is the failure probability and Cause for Server 'x' on Date : [mm/dd/yy]?"

# Tokenize, generate up to 100 new tokens, and decode the response.
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
output = model.generate(input_ids=input_ids, max_new_tokens=100)
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
```

Example Prompt

  • Prompt: What is the failure probability and Cause for Server 'x' on Date : [mm/dd/yy]?
  • Expected Output: The failure probability for ET-1 on 11th July is 0.72. The likely cause is overheating due to sustained high temperatures over the past week.

Requirements

Dependencies:

  • pip install torch transformers

Training Details

Training Data

  • Source: Synthetic and real-world server metrics from Esperanto servers.
  • Dataset: Synthetic data generated with periodic patterns (e.g., cosine functions) combined with operational zones (green, yellow, red); a minimal generation sketch follows below.
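
The card does not publish the generation script; the sketch below only illustrates the described recipe (a cosine-shaped periodic metric plus noise, bucketed into green/yellow/red zones). The amplitudes, noise level, and zone thresholds are assumptions.

```python
# Illustrative synthetic-data sketch: a cosine-shaped daily temperature
# pattern with noise, labeled by operational zone. The numeric values below
# are assumptions, not the exact parameters used for training.
import numpy as np

def synthetic_temperature(hours: int = 24 * 7, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    t = np.arange(hours)
    base = 55 + 15 * np.cos(2 * np.pi * t / 24)   # daily periodic pattern (°C)
    noise = rng.normal(0, 2, size=hours)          # measurement noise
    return base + noise

def zone(temp_c: float) -> str:
    # Hypothetical operational-zone thresholds.
    if temp_c < 65:
        return "green"
    if temp_c < 80:
        return "yellow"
    return "red"

temps = synthetic_temperature()
labels = [zone(v) for v in temps]
print(list(zip(np.round(temps[:5], 1), labels[:5])))
```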

Training Procedure

Preprocessing [optional]

Numerical to Textual Conversion:

All numerical metrics (e.g., temperature, power consumption, throughput) were converted into descriptive text so that the language model could interpret them; a minimal conversion sketch follows the example below. For example:

  • Numerical Input: {"temperature": [40, 42, 43]}
  • Converted Text: "The temperature increased steadily from 40°C to 43°C over the last three readings."
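
A minimal sketch of this conversion step is shown below. The phrasing template and trend rule are assumptions about the general approach, not the exact preprocessing code.

```python
# Sketch of the numerical-to-textual conversion step. The wording template is
# an assumption; the training pipeline may have used different phrasing.
from typing import Sequence

def describe_temperature(readings: Sequence[float]) -> str:
    first, last = readings[0], readings[-1]
    if last > first:
        trend = "increased steadily"
    elif last < first:
        trend = "decreased steadily"
    else:
        trend = "remained stable"
    return (
        f"The temperature {trend} from {first}°C to {last}°C "
        f"over the last {len(readings)} readings."
    )

print(describe_temperature([40, 42, 43]))
# -> The temperature increased steadily from 40°C to 43°C over the last 3 readings.
```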

Domain-Specific Context:

Prompts were carefully designed to incorporate domain knowledge, guiding the model to focus on server health indicators and operational risks.

  • Example prompts include:
  1. "Analyze the following server performance metrics and predict potential failures."
  2. "Based on the provided metrics, forecast failure probabilities and identify potential causes."

These prompts ensured the model understood the critical relationships between input metrics and their operational implications.
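
As a sketch of how such a prompt might be assembled from the converted metric descriptions, consider the following; the instruction wording and layout are illustrative, not the exact training format.

```python
# Illustrative prompt assembly: a domain-specific instruction followed by the
# textual metric summaries. The wording and layout are assumptions.
instruction = (
    "Analyze the following server performance metrics and predict potential "
    "failures. Report a failure probability and the most likely cause."
)

metric_summaries = [
    "The temperature increased steadily from 40°C to 43°C over the last three readings.",
    "Power consumption stayed in the green zone throughout the window.",
]

prompt = instruction + "\n\n" + "\n".join(f"- {s}" for s in metric_summaries)
print(prompt)
```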

Training Hyperparameters

  • Training regime: [More Information Needed]

Speeds, Sizes, Times [optional]

  • Training time: ~30 hours on NVIDIA A100 GPUs
  • Model size: ~7B parameters

Evaluation

Testing Data, Factors & Metrics

Testing Data

Validation set: 10% of synthetic and real-world server performance data.

Factors

The model was evaluated on:

  • Failure prediction accuracy, including identification of the likely cause (an illustrative scoring sketch follows below).
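
The card does not spell out how accuracy with cause was computed. The sketch below shows one plausible scoring rule (parse the generated probability, threshold it, and keyword-match the cause); the regex, threshold, and matching rule are assumptions.

```python
# One possible way to score generated answers for "failure prediction accuracy
# with cause". The regex, threshold, and keyword-match rule are assumptions;
# the model card does not specify the exact metric.
import re

def parse_probability(text: str):
    """Pull the first decimal number that looks like a probability, if any."""
    match = re.search(r"\b[01]\.\d+\b", text)
    return float(match.group(0)) if match else None

def is_correct(generated: str, true_failed: bool, true_cause: str,
               threshold: float = 0.5) -> bool:
    prob = parse_probability(generated)
    if prob is None:
        return False
    predicted_failed = prob >= threshold
    # A cause only needs to match when a failure is actually expected.
    cause_ok = (not true_failed) or (true_cause.lower() in generated.lower())
    return predicted_failed == true_failed and cause_ok

example = ("The failure probability for ET-1 on 11th July is 0.72. "
           "The likely cause is overheating due to sustained high temperatures.")
print(is_correct(example, true_failed=True, true_cause="overheating"))  # True
```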

Results

(Results are reported as a figure in the original model card; the image is not reproduced here.)


Model Examination [optional]

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: [More Information Needed]
  • Hours used: [More Information Needed]
  • Cloud Provider: [More Information Needed]
  • Compute Region: [More Information Needed]
  • Carbon Emitted: [More Information Needed]

Technical Specifications [optional]

Hardware

Runs on both NVIDIA A100 GPUs and the Esperanto ET-SoC.

Software

Built with PyTorch and the Hugging Face Transformers library.
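
For example, loading the model on a single A100 in half precision with PyTorch and Transformers might look like the sketch below. The dtype and device choices are common conventions for a 7B model rather than requirements stated here; ET-SoC deployment goes through Esperanto's own toolchain and is not shown.

```python
# Sketch: loading the model on an NVIDIA A100 in half precision.
# torch.float16 and the device choice are common defaults for a 7B model,
# not requirements stated by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Esperanto/Mistral-7B-TimeSeriesReasoner"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to(device)

input_ids = tokenizer("Analyze the following server performance metrics and predict potential failures.",
                      return_tensors="pt")["input_ids"].to(device)
output = model.generate(input_ids=input_ids, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```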

Citation [optional]

Esperanto Blog: [More Information Needed]

Model Card Authors [optional]

Sivakrishna Yaganti and Shankar Jayaratnam

Model Card Contact

shankar.jayaratnam@esperantotech.com
