optimum.amd.ryzenai.pipeline
( task, model: Optional[Any] = None, vaip_config: Optional[str] = None, model_type: Optional[str] = None, feature_extractor: Union[str, "PreTrainedFeatureExtractor", None] = None, image_processor: Union[str, BaseImageProcessor, None] = None, use_fast: bool = True, token: Union[bool, str, None] = None, revision: Optional[str] = None, **kwargs ) → Pipeline
Parameters
- task (str) — The task defining which pipeline will be returned. Available tasks include:
  - “image-classification”
  - “object-detection”
- model (Optional[Any], defaults to None) — The model that will be used by the pipeline to make predictions. This can be a model identifier or an actual instance of a pretrained model. If not provided, the default model for the specified task will be loaded.
- vaip_config (Optional[str], defaults to None) — Runtime configuration file for inference with the Ryzen IPU. A default config file can be found in the Ryzen AI VOE package, extracted during installation under the name vaip_config.json.
- model_type (Optional[str], defaults to None) — The model type of the model, for example “yolox” for YOLO object-detection models.
- feature_extractor (Union[str, "PreTrainedFeatureExtractor"], defaults to None) — The feature extractor that will be used by the pipeline to encode data for the model. This can be a model identifier or an actual pretrained feature extractor.
- image_processor (Union[str, BaseImageProcessor], defaults to None) — The image processor that will be used by the pipeline for image-related tasks.
- use_fast (bool, defaults to True) — Whether or not to use a fast tokenizer, if one is available.
- token (Union[str, bool], defaults to None) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
- revision (str, defaults to None) — The specific model version to use, specified as a branch name, tag name, or commit id.
- **kwargs — Additional keyword arguments passed to the underlying pipeline class.
Returns
Pipeline
An instance of the specified pipeline for the given task and model.
Utility function to build a pipeline for various RyzenAI tasks.
This function creates a pipeline for the specified task, using the given model or loading the default model for the task. The pipeline includes the components needed for inference, such as an image processor and the model.
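For instance, a minimal sketch (assuming a default model is registered for the task, and that the vaip_config.json extracted during Ryzen AI VOE installation is available in the working directory):
from optimum.amd.ryzenai import pipeline

# No model is given, so the default model for the task is loaded; the
# vaip_config path is an assumption and should point to your extracted config.
pipe = pipeline("image-classification", vaip_config="vaip_config.json")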
Computer vision
class optimum.amd.ryzenai.pipelines.TimmImageClassificationPipeline
( model: Union[PreTrainedModel, TFPreTrainedModel], tokenizer: Optional[PreTrainedTokenizer] = None, feature_extractor: Optional[SequenceFeatureExtractor] = None, image_processor: Optional[BaseImageProcessor] = None, processor: Optional[ProcessorMixin] = None, modelcard: Optional[ModelCard] = None, framework: Optional[str] = None, task: str = '', args_parser: ArgumentHandler = None, device: Union[int, torch.device] = None, torch_dtype: Union[str, torch.dtype, None] = None, binary_output: bool = False, **kwargs )
Example usage:
import requests
from PIL import Image

from optimum.amd.ryzenai import pipeline

# Load a sample image from the COCO validation set.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Quantized timm ResNet-18 model exported to ONNX for Ryzen AI.
model_id = "mohitsha/timm-resnet18-onnx-quantized-ryzen"
pipe = pipeline("image-classification", model=model_id, vaip_config="vaip_config.json")
print(pipe(image))
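Assuming the pipeline follows the usual transformers image-classification output format, the printed predictions are a list of dictionaries of the form [{"label": ..., "score": ...}, ...]; the exact labels depend on the model.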
class optimum.amd.ryzenai.pipelines.YoloObjectDetectionPipeline
( model: Union[PreTrainedModel, TFPreTrainedModel], tokenizer: Optional[PreTrainedTokenizer] = None, feature_extractor: Optional[SequenceFeatureExtractor] = None, image_processor: Optional[BaseImageProcessor] = None, processor: Optional[ProcessorMixin] = None, modelcard: Optional[ModelCard] = None, framework: Optional[str] = None, task: str = '', args_parser: ArgumentHandler = None, device: Union[int, torch.device] = None, torch_dtype: Union[str, torch.dtype, None] = None, binary_output: bool = False, **kwargs )
Supported model types
- yolox
- yolov3
- yolov5
- yolov8
Example usage:
import requests
from PIL import Image

from optimum.amd.ryzenai import pipeline

# Load a sample image from the COCO validation set...
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# ...or open a local image instead:
# image = Image.open("image.jpg")

model_id = "amd/yolox-s"
# model_type selects the YOLO variant ("yolox" here).
detector = pipeline("object-detection", model=model_id, vaip_config="vaip_config.json", model_type="yolox")
print(detector(image))
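Assuming the usual transformers object-detection output format, each detection is a dictionary with a score, a label, and a bounding box, e.g. [{"score": ..., "label": ..., "box": {"xmin": ..., "ymin": ..., "xmax": ..., "ymax": ...}}, ...].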