---
library_name: transformers
tags:
- chaos-engineering
- IT-infrastructure
- H.A.N.D.S
- python
- AI
---

# Model Card for phi-2-chaos-gen
This model card describes phi-2-chaos-gen, a fine-tuned version of Microsoft's Phi-2 model that specializes in generating insights and strategies for chaos engineering in IT infrastructures.
## Model Details

### Model Description
phi-2-chaos-gen is a fine-tuned version of Microsoft's Phi-2 model, developed to assist with chaos engineering for IT infrastructures. It uses a methodology called H.A.N.D.S (Hardware, Application, Network, Data, Security) to generate relevant strategies and insights, with the aim of providing chaos-engineering recommendations that cover each of these aspects of an IT infrastructure.
- Developed by: Webnizam
- Model type: Text generation
- Language(s) (NLP): English
- License: MIT License
- Finetuned from model: Microsoft's Phi-2
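To make the methodology concrete, the sketch below groups typical chaos-engineering scenario types under the five H.A.N.D.S categories. The scenario names are illustrative assumptions, not output from this model:

```python
# Illustrative only: common chaos-engineering scenario types grouped by the
# five H.A.N.D.S categories; these examples are not generated by the model.
hands_scenarios = {
    "Hardware": ["CPU/memory stress on a host", "disk-full simulation"],
    "Application": ["kill a critical process", "inject API latency"],
    "Network": ["drop packets between services", "simulate DNS failure"],
    "Data": ["corrupt a database replica", "restore-from-backup drill"],
    "Security": ["revoke a service credential", "expire a TLS certificate"],
}

for category, scenarios in hands_scenarios.items():
    print(f"{category}: {', '.join(scenarios)}")
```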
### Model Sources
- Repository: webnizam/phi-2-chaos-gen
## Uses

### Direct Use
The model can be directly used by IT professionals and organizations to generate strategies and insights for chaos engineering in their IT infrastructure, focusing on hardware, application, network, data, and security aspects.
## Bias, Risks, and Limitations
The model, while capable, may have limitations in understanding highly specialized or newly emerging IT concepts. Users should verify its recommendations against current IT standards and practices.
### Recommendations
It's recommended to use this model as a starting point or a complement to existing chaos engineering practices, not as a sole source of truth.
## How to Get Started with the Model
To use the phi-2-chaos-gen model, follow these steps:
Installation: Install the required libraries with pip. In addition to transformers, the examples below use peft (to load the fine-tuned adapter), bitsandbytes (for 8-bit loading), and accelerate (for `device_map="auto"`):

```bash
pip install torch transformers peft bitsandbytes accelerate
```
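Before continuing, you can optionally confirm that the libraries import correctly and that a CUDA device is visible, since the generation example below moves its inputs to `"cuda"`:

```python
import torch
import transformers

# Quick environment check before loading the model.
print("transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())
```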
Loading the Model: First load the Phi-2 base model and its tokenizer with the transformers package. Ensure you have an internet connection, as the weights are downloaded the first time you run this code.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

base_model_id = "microsoft/phi-2"

# Load the Phi-2 base model in 8-bit to reduce GPU memory usage.
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    trust_remote_code=True,
    load_in_8bit=True,
    torch_dtype=torch.float16,
)
base_model.config.use_cache = True

# Tokenizer for the base model; a BOS token is prepended to every prompt.
eval_tokenizer = AutoTokenizer.from_pretrained(
    base_model_id,
    add_bos_token=True,
    trust_remote_code=True,
    use_fast=False,
)
```
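If bitsandbytes or a supported GPU is not available, a plain half-precision load is a possible fallback; the sketch below simply drops the 8-bit quantization used above and therefore needs more memory:

```python
# Sketch: load the Phi-2 base model without 8-bit quantization.
# Requires enough memory to hold the full fp16 weights.
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    torch_dtype=torch.float16,
    trust_remote_code=True,
)
```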
Using the Model: Load the fine-tuned adapter with peft and generate text. For example:
```python
from peft import PeftModel

# Attach the fine-tuned PEFT adapter on top of the Phi-2 base model.
ft_model = PeftModel.from_pretrained(base_model, "Falconsai/phi-2-chaos")

eval_prompt = """Give me a list of chaos-engineering scenarios to execute in an IT infrastructure, list the results in H.A.N.D.S. I need it as a list with sub headings and contents."""

model_input = eval_tokenizer(eval_prompt, return_tensors="pt").to("cuda")

ft_model.eval()
with torch.no_grad():
    output = ft_model.generate(
        **model_input, max_new_tokens=300, repetition_penalty=1.11
    )
    print(eval_tokenizer.decode(output[0], skip_special_tokens=True))
```
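Building on the example above, a hypothetical helper such as the one below can query the model once per H.A.N.D.S category; the `generate_for_category` name and the per-category prompt wording are assumptions for illustration, not part of the released model:

```python
HANDS_CATEGORIES = ["Hardware", "Application", "Network", "Data", "Security"]

def generate_for_category(category: str, max_new_tokens: int = 200) -> str:
    # Hypothetical helper: asks the fine-tuned model for chaos-engineering
    # scenarios in a single H.A.N.D.S category and returns the decoded text.
    prompt = (
        f"Give me a list of chaos-engineering scenarios for the {category} "
        "aspect of an IT infrastructure, as a list with sub headings and contents."
    )
    inputs = eval_tokenizer(prompt, return_tensors="pt").to("cuda")
    with torch.no_grad():
        output = ft_model.generate(
            **inputs, max_new_tokens=max_new_tokens, repetition_penalty=1.11
        )
    return eval_tokenizer.decode(output[0], skip_special_tokens=True)

for category in HANDS_CATEGORIES:
    print(generate_for_category(category))
```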
## Results
The model showed proficiency in generating relevant and practical strategies for different scenarios within the scope of H.A.N.D.S.
## Technical Specifications

### Model Architecture and Objective
The model follows the architecture of the Phi-2 model, fine-tuned for text generation in the context of chaos engineering.