---
tags:
- autotrain
- tabular
- classification
- tabular-classification
datasets:
- dark-gbf-xgboost2/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Tabular classification
## Validation Metrics
- logloss: 0.08323427141158712
- accuracy: 0.98
- mlogloss: 0.08323427141158712
- f1_macro: 0.8266666666666665
- f1_micro: 0.98
- f1_weighted: 0.9793333333333333
- precision_macro: 0.8666666666666666
- precision_micro: 0.98
- precision_weighted: 0.9833333333333333
- recall_macro: 0.8333333333333333
- recall_micro: 0.98
- recall_weighted: 0.98
- loss: 0.08323427141158712
## Best Params
- learning_rate: 0.16433034910560887
- reg_lambda: 3.7914578973926436
- reg_alpha: 2.806649620056883e-07
- subsample: 0.7396301555452317
- colsample_bytree: 0.9137471530067593
- max_depth: 6
- early_stopping_rounds: 383
- n_estimators: 15000
- eval_metric: mlogloss
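For reference, the sketch below (not part of the original AutoTrain pipeline) shows how these hyperparameters map onto the XGBoost scikit-learn API. The synthetic dataset and train/validation split are placeholders for the private training data, and passing `early_stopping_rounds` to the constructor assumes XGBoost 1.6 or later.
```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic multi-class data as a stand-in for the original (private) training set
X, y = make_classification(n_samples=500, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=0)

clf = XGBClassifier(
    learning_rate=0.16433034910560887,
    reg_lambda=3.7914578973926436,
    reg_alpha=2.806649620056883e-07,
    subsample=0.7396301555452317,
    colsample_bytree=0.9137471530067593,
    max_depth=6,
    n_estimators=15000,
    early_stopping_rounds=383,
    eval_metric="mlogloss",
)

# Early stopping monitors mlogloss on the validation split
clf.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], verbose=False)
```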
## Usage
```python
import json

import joblib
import pandas as pd

# Load the trained model and the AutoTrain configuration shipped with this repo
model = joblib.load("model.joblib")
config = json.load(open("config.json"))

# Keep only the feature columns the model was trained on, in the expected order
features = config["features"]
data = pd.read_csv("data.csv")  # replace with the path to your own data
data = data[features]

# Class predictions; use model.predict_proba(data) for class probabilities
predictions = model.predict(data)
# Predicted class indices can be mapped back to the original labels with label_encoders.pkl
```
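To recover the original label names, a hypothetical decoding step is sketched below. The exact contents of `label_encoders.pkl` depend on the AutoTrain version; this assumes it holds a fitted scikit-learn `LabelEncoder` for the target column.
```python
import joblib
import numpy as np

# Assumption: label_encoders.pkl stores a fitted sklearn LabelEncoder for the target
# column; some AutoTrain versions save a dict of encoders keyed by column name instead.
target_encoder = joblib.load("label_encoders.pkl")

predictions = np.array([0, 2, 1])  # integer class indices from model.predict(data)
decoded = target_encoder.inverse_transform(predictions)
print(decoded)
```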