---
library_name: keras-hub
license: apache-2.0
tags:
- image-classification
pipeline_tag: image-classification
---
### Model Overview
VGG is a convolutional neural network (CNN) architecture designed for image recognition and classification tasks. It was developed by the Visual Geometry Group at the University of Oxford and introduced in the 2014 paper "Very Deep Convolutional Networks for Large-Scale Image Recognition" by Karen Simonyan and Andrew Zisserman. This model is supported in both KerasCV and KerasHub; KerasCV is no longer actively developed, so please use KerasHub.



## Links
* [VGG paper](https://arxiv.org/abs/1409.1556)

## Installation

Keras and KerasHub can be installed with:

```
pip install -U -q keras-hub
pip install -U -q "keras>=3"
```

JAX, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the [Keras Getting Started](https://keras.io/getting_started/) page.
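
Keras 3 picks its backend from the `KERAS_BACKEND` environment variable, which must be set before Keras is imported. A minimal sketch (`"jax"` is just one of the supported values):

```python
import os

# Select the backend before importing Keras or KerasHub.
# Valid values are "jax", "tensorflow", and "torch".
os.environ["KERAS_BACKEND"] = "jax"

import keras
import keras_hub

print(keras.backend.backend())  # prints "jax"
```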

## Presets

The following model checkpoints are provided by the Keras team. Weights have been ported from https://huggingface.co/timm.

| Preset Name      | Parameters | Description                                                    |
|------------------|------------|----------------------------------------------------------------|
| vgg_11_imagenet  | 9.22M      | 11-layer VGG model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. |
| vgg_13_imagenet  | 9.40M      | 13-layer VGG model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. |
| vgg_16_imagenet  | 14.71M     | 16-layer VGG model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. |
| vgg_19_imagenet  | 20.02M     | 19-layer VGG model pre-trained on the ImageNet 1k dataset at a 224x224 resolution. |
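
Any preset name from the table can be passed to `ImageClassifier.from_preset` for end-to-end classification. The sketch below is illustrative: `"elephant.jpg"` is a placeholder path, and it assumes the preset task model bundles its own image preprocessing (resizing/rescaling), so a raw pixel array can be passed directly.

```python
import numpy as np
import keras
import keras_hub

# Load a classifier from one of the presets listed above.
classifier = keras_hub.models.ImageClassifier.from_preset("vgg_16_imagenet")

# "elephant.jpg" is a placeholder; substitute any local image file.
image = keras.utils.load_img("elephant.jpg", target_size=(224, 224))
batch = np.expand_dims(keras.utils.img_to_array(image), axis=0)

# One score vector per image over the 1000 ImageNet classes.
scores = classifier.predict(batch)
print("Predicted ImageNet class id:", int(np.argmax(scores[0])))
```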

## Example Usage
```python
import numpy as np
import keras_hub

input_data = np.ones(shape=(2, 224, 224, 3))

# Pretrained backbone
model = keras_hub.models.VGGBackbone.from_preset("vgg_19_imagenet")
model(input_data)

# Randomly initialized backbone with a custom config
model = keras_hub.models.VGGBackbone(
    stackwise_num_repeats=[2, 3, 3, 2],
    stackwise_num_filters=[64, 128, 256, 512],
)
model(input_data)

# Use VGG for the image classification task
model = keras_hub.models.ImageClassifier.from_preset("vgg_19_imagenet")

# Use Timm presets directly from Hugging Face
model = keras_hub.models.ImageClassifier.from_preset("hf://timm/vgg11.tv_in1k")
```
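
The classifier can also be re-headed and fine-tuned on a custom dataset. A minimal sketch, assuming `from_preset` accepts a `num_classes` override as in other KerasHub task models; the random `x_train`/`y_train` arrays are placeholders for real data:

```python
import numpy as np
import keras_hub

# Placeholder data: 16 RGB images with binary labels.
x_train = np.random.uniform(size=(16, 224, 224, 3)).astype("float32")
y_train = np.random.randint(0, 2, size=(16,))

# Re-head the pretrained backbone for 2 classes.
classifier = keras_hub.models.ImageClassifier.from_preset(
    "vgg_16_imagenet",
    num_classes=2,
)

# Task models ship with default compilation; call `compile()` first
# if you want a different optimizer or loss.
classifier.fit(x_train, y_train, batch_size=4, epochs=1)
```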

## Example Usage with Hugging Face URI

```python
import numpy as np
import keras_hub

input_data = np.ones(shape=(2, 224, 224, 3))

# Pretrained backbone
model = keras_hub.models.VGGBackbone.from_preset("hf://keras/vgg_19_imagenet")
model(input_data)

# Randomly initialized backbone with a custom config
model = keras_hub.models.VGGBackbone(
    stackwise_num_repeats=[2, 3, 3, 2],
    stackwise_num_filters=[64, 128, 256, 512],
)
model(input_data)

# Use VGG for the image classification task
model = keras_hub.models.ImageClassifier.from_preset("hf://keras/vgg_19_imagenet")

# Use Timm presets directly from Hugging Face
model = keras_hub.models.ImageClassifier.from_preset("hf://timm/vgg11.tv_in1k")
```