# Build any downstream model from this backbone

## Embedding

```python
from genbio_finetune.tasks import Embed

model = Embed.from_config({"model.backbone": "dna300m"})
collated_batch = model.collate({"sequences": ["ACGT", "ACGT"]})
embedding = model(collated_batch)
print(embedding.shape)
print(embedding)
```

## Sequence-Level Classification

```python
import torch
from genbio_finetune.tasks import SequenceClassification

model = SequenceClassification.from_config({"model.backbone": "dna300m", "model.n_classes": 2})
collated_batch = model.collate({"sequences": ["ACGT", "ACGT"]})
logits = model(collated_batch)
print(logits)
print(torch.argmax(logits, dim=-1))
```

## Token-Level Classification

```python
import torch
from genbio_finetune.tasks import TokenClassification

model = TokenClassification.from_config({"model.backbone": "dna300m", "model.n_classes": 3})
collated_batch = model.collate({"sequences": ["ACGT", "ACGT"]})
logits = model(collated_batch)
print(logits)
print(torch.argmax(logits, dim=-1))
```

## Regression

```python
from genbio_finetune.tasks import SequenceRegression

model = SequenceRegression.from_config({"model.backbone": "dna300m"})
collated_batch = model.collate({"sequences": ["ACGT", "ACGT"]})
predictions = model(collated_batch)
print(predictions)
```

## Or use our one-liner CLI to finetune or evaluate any of the above!

```bash
gbft fit --model SequenceClassification --model.backbone dna300m --data SequenceClassification --data.path
gbft test --model SequenceClassification --model.backbone dna300m --data SequenceClassification --data.path
```

For more information, visit [Model Generator](https://github.com/genbio-ai/test).
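
## Pooling Embeddings for Downstream Use

The `Embed` example above prints the model output directly. If you want one fixed-size vector per sequence (e.g., as features for clustering or a separate classifier), a common approach is to mean-pool across tokens. The sketch below is illustrative rather than part of the `genbio_finetune` API: it assumes the task returns a token-level tensor of shape `(batch, tokens, hidden)`, which may differ by backbone.

```python
import torch
from genbio_finetune.tasks import Embed

model = Embed.from_config({"model.backbone": "dna300m"})
collated_batch = model.collate({"sequences": ["ACGT", "ACGT"]})
with torch.no_grad():
    embedding = model(collated_batch)  # assumed shape: (batch, tokens, hidden)

# Mean-pool over the token dimension to get one vector per input sequence.
sequence_embedding = embedding.mean(dim=1)
print(sequence_embedding.shape)  # assumed: (2, hidden)
```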