---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet
- imagenet-21k
---

# Vision Transformer (base-sized model)

Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights used here come from the [timm repository](https://github.com/rwightman/pytorch-image-models), where Ross Wightman had already converted them from JAX to PyTorch. Credits go to him.

This repo contains a Core ML version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224).

## Usage instructions

Create a `VNCoreMLRequest` that loads the ViT model:

```swift
import CoreML
import Vision

lazy var classificationRequest: VNCoreMLRequest = {
  do {
    // Use all available compute units (CPU, GPU, and Neural Engine).
    let config = MLModelConfiguration()
    config.computeUnits = .all
    let coreMLModel = try ViT(configuration: config)
    let visionModel = try VNCoreMLModel(for: coreMLModel.model)

    let request = VNCoreMLRequest(model: visionModel, completionHandler: { [weak self] request, error in
      if let results = request.results as? [VNClassificationObservation] {
        /* do something with the results */
      }
    })

    // Crop to the center square and scale to the model's 224x224 input.
    request.imageCropAndScaleOption = .centerCrop
    return request
  } catch {
    fatalError("Failed to create VNCoreMLModel: \(error)")
  }
}()
```

Perform the request:

```swift
func classify(image: UIImage) {
  guard let ciImage = CIImage(image: image) else {
    print("Unable to create CIImage")
    return
  }

  // Vision requests can be expensive, so run them off the main thread.
  DispatchQueue.global(qos: .userInitiated).async {
    let handler = VNImageRequestHandler(ciImage: ciImage, orientation: .up)
    do {
      try handler.perform([self.classificationRequest])
    } catch {
      print("Failed to perform classification: \(error)")
    }
  }
}
```
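Each `VNClassificationObservation` carries an `identifier` (the class label) and a `confidence`. As a minimal sketch of what the completion handler above might do with the results (the `processResults` helper name and the dispatch back to the main queue are illustrative, not part of this repo):

```swift
import Vision

func processResults(_ results: [VNClassificationObservation]) {
  // Observations arrive sorted by confidence, highest first.
  guard let top = results.first else { return }

  DispatchQueue.main.async {
    // Update the UI (or log) on the main thread.
    print("\(top.identifier): \(String(format: "%.1f%%", top.confidence * 100))")
  }
}
```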