## Quickstart
Build and start the Machine Learning backend on `http://localhost:9090`:
```bash
docker-compose up
```
Check if it works:
```bash
$ curl http://localhost:9090/health
{"status":"UP"}
```
Then connect the running backend to Label Studio via the Machine Learning settings.
## Writing your own model
1. Place your scripts for model training & inference inside the root directory. Follow the [API guidelines](#api-guidelines) described below. You can put everything in a single file, or create two separate ones, say `my_training_module.py` and `my_inference_module.py` (a minimal skeleton of these two modules is sketched after these steps)
2. List your Python dependencies in `requirements.txt`
3. Open `wsgi.py` and configure the `init_model_server` arguments:
```python
from my_training_module import training_script
from my_inference_module import InferenceModel
init_model_server(
    create_model_func=InferenceModel,
    train_script=training_script,
    ...
)
```
4. Make sure you have Docker and docker-compose installed on your system, then run:
```bash
docker-compose up --build
```
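To make the wiring explicit, here is a minimal skeleton of the two modules that `wsgi.py` imports above. It simply mirrors the [API guidelines](#api-guidelines) below: `InferenceModel` is the `BaseModel` subclass passed as `create_model_func`, and `training_script` is the function passed as `train_script`; the `Image`/`Choices` tag types are just example values.
```python
# --- my_inference_module.py ---
from htx.base_model import BaseModel

class InferenceModel(BaseModel):
    INPUT_TYPES = ('Image',)      # example object tag type
    OUTPUT_TYPES = ('Choices',)   # example control tag type

    def load(self, resources, **kwargs):
        ...  # restore the model from the resources returned by training_script

    def predict(self, tasks, **kwargs):
        ...  # return a list of predictions, one per task


# --- my_training_module.py ---
def training_script(input_iterator, working_dir, **kwargs):
    ...  # train the model on the labeled examples
    return {}  # JSON-serializable resources consumed by load()
```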
## API guidelines
#### Inference module
To create an inference module, declare the following class:
```python
from htx.base_model import BaseModel

# use BaseModel inheritance provided by the pyheartex SDK
class MyModel(BaseModel):

    # Describe input types (Label Studio object tag names)
    INPUT_TYPES = ('Image',)

    # Describe output types (Label Studio control tag names)
    OUTPUT_TYPES = ('Choices',)

    def load(self, resources, **kwargs):
        """Load the model into memory; `resources` is the dict returned by the training script."""
        self.model_path = resources["model_path"]
        self.labels = resources["labels"]

    def predict(self, tasks, **kwargs):
        """Build a list of model results in Label Studio's prediction format, task by task."""
        predictions = []
        for task in tasks:
            # run inference here and build the prediction for this task
            task_prediction = ...
            predictions.append(task_prediction)
        return predictions
```
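As a concrete illustration, `predict()` for a simple image classification setup might build each entry in Label Studio's prediction format like the sketch below. This is a hedged example: the `from_name`/`to_name` values (`choice`, `image`) are assumptions tied to a hypothetical labeling config with a `Choices` tag named `choice` and an `Image` tag named `image`, and the random "inference" is a stand-in for your real model.
```python
import random

class MyImageClassifier(MyModel):

    def predict(self, tasks, **kwargs):
        predictions = []
        for task in tasks:
            # Stand-in for real inference on the task's input data
            predicted_label = random.choice(self.labels)
            score = random.random()

            predictions.append({
                # One classification result in Label Studio's prediction format;
                # "from_name"/"to_name" must match the tag names in your labeling config
                'result': [{
                    'from_name': 'choice',
                    'to_name': 'image',
                    'type': 'choices',
                    'value': {'choices': [predicted_label]}
                }],
                'score': score
            })
        return predictions
```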
#### Training module
Training can run in a separate environment. The only convention is that the training function takes a data iterator and a working directory as input arguments, and returns JSON-serializable resources that are later consumed by the `load()` method of the inference module.
```python
def train(input_iterator, working_dir, **kwargs):
    """Here you gather input examples and output labels and train your model"""
    resources = {"model_path": "some/model/path", "labels": ["aaa", "bbb", "ccc"]}
    return resources
```
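For a fuller picture, the sketch below fills in the body of `train()`. The structure of the items yielded by `input_iterator` (a dict with `input` and `output` keys holding one example and one label) is an assumption that depends on your pyheartex version and tag configuration, and the JSON file standing in for a trained model is purely illustrative; the firm part of the contract is that the returned dict is JSON-serializable and matches what `load()` reads.
```python
import os
import json

def train(input_iterator, working_dir, **kwargs):
    """Collect labeled examples, train a model, and return resources for load()."""
    inputs, outputs = [], []
    for item in input_iterator:
        # Assumed item structure: adapt to whatever your iterator actually yields
        inputs.append(item['input'])
        outputs.append(item['output'])

    # Train your real model here; this toy version only remembers the label set
    labels = sorted(set(outputs))

    # Persist whatever load() will need under working_dir
    model_path = os.path.join(working_dir, 'model.json')
    with open(model_path, 'w') as f:
        json.dump({'labels': labels}, f)

    # Keys must match what load() expects: resources["model_path"], resources["labels"]
    return {"model_path": model_path, "labels": labels}
```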