simonMadec committed
Commit 9273d47
1 Parent(s): f81ed4f

Update README.md

Files changed (1):
  1. README.md +100 -63

README.md CHANGED
@@ -1,26 +1,23 @@
- # VegAnn
- ![Alt text](images/vegann-logo.png "Vegann-logo")

  ### **Vegetation Annotation of a large multi-crop RGB Dataset acquired under diverse conditions for image semantic segmentation**

- # Table of contents
- 1. [Keypoints](#key)
- 2. [Abstract](#abs)
- 3. [DIY with Google Colab](#colab)
- 4. [Pytorch Data Loader](#loader)
- 5. [Baseline Results](#res)
- 6. [Citing](#cite)
- 7. [Paper](#paper)
- 8. [Meta-Information](#meta)
- 9. [Model inference](#model)
- 10. [Licence](#licence)
- 11. [Credits](#credits)
-
- ## ⏳ Keypoints <a name="key"></a>
-
- - The dataset can be accessed at https://doi.org/10.5281/zenodo.7636408.
  - VegAnn contains 3775 images
  - Images are 512*512 pixels
  - Corresponding binary masks are 0 for soil + crop residues (background) and 255 for vegetation (foreground)
@@ -28,68 +25,108 @@
  - VegAnn was compiled using a variety of outdoor images captured with different acquisition systems and configurations
  - For more information about VegAnn, details, labeling rules and potential uses see https://doi.org/10.1038/s41597-023-02098-y

- ## 📚 Abstract <a name="abs"></a>

- Applying deep learning to images of cropping systems provides new knowledge and insights in research and commercial applications. Semantic segmentation, or pixel-wise classification, of RGB images acquired at the ground level into vegetation and background is a critical step in the estimation of several canopy traits. Current state-of-the-art methodologies based on convolutional neural networks (CNNs) are trained on datasets acquired under controlled or indoor environments. These models are unable to generalize to real-world images and hence need to be fine-tuned using new labelled datasets. This motivated the creation of the VegAnn - **Veg**etation **Ann**otation - dataset, a collection of 3775 multi-crop RGB images acquired for different phenological stages using different systems and platforms in diverse illumination conditions. We anticipate that VegAnn will help improve segmentation algorithm performance, facilitate benchmarking and promote large-scale crop vegetation segmentation research.

- ## Google Colab <a name="colab"></a>
- Example code for VegAnn (Unet) inference here: https://t.co/LkI1esLzqu

- ## 📦 Pytorch Data Loader <a name="loader"></a>
- We provide a Python dataloader that loads the data as PyTorch tensors. With the dataloader, users can select the desired images using metadata such as species, camera system, and training/validation/test split.

- ### 🍲 Example use:

- Here is an example use of the dataloader with our custom dataset class:

- ```python
- from segmentation_models_pytorch.encoders import get_preprocessing_fn
- from utils.dataset import DatasetVegAnn
- from torch.utils.data import DataLoader
-
- train_dataset = DatasetVegAnn(images_dir=veganpath, species=["Wheat", "Maize"], system=["Handheld Cameras", "Phone Camera"], tvt="Training")
- train_dataloader = DataLoader(train_dataset, batch_size=16, shuffle=True, pin_memory=False, num_workers=10)
- ```
- By using this dataloader, you can easily load the desired images as PyTorch tensors; see utils/dataset.py for more details.

- ## 👀 Baseline Results <a name="res"></a>

- Metrics are computed at the dataset level for the 5 test sets of VegAnn

- Method | Encoder | IOU | F1
- --- | --- | --- | ---
- Unet | ResNet34 | 89.7 ±1.4 | 94.5 ±0.8
- DeepLabV3 | ResNet34 | 89.5 ±0.2 | 94.5 ±0.2
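For reference, the IoU and F1 scores in the table above are standard pixel-wise metrics for binary masks. The following is an illustrative sketch (not the paper's evaluation code) computed on toy 2x2 masks:

```python
import numpy as np

def iou_f1(pred, ref):
    """Pixel-wise IoU and F1 (Dice) for binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    iou = inter / union
    f1 = 2 * inter / (pred.sum() + ref.sum())  # F1 on masks equals the Dice coefficient
    return iou, f1

# Toy example: prediction covers two pixels, reference covers one of them.
pred = np.array([[1, 1], [0, 0]])
ref = np.array([[1, 0], [0, 0]])
iou, f1 = iou_f1(pred, ref)
print(round(iou, 3), round(f1, 3))  # 0.5 0.667
```

In the table, metrics are aggregated at the dataset level rather than averaged per image.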
- ## 📝 Citing <a name="cite"></a>

- If you find this dataset useful, please cite:

- @article{madec2023,
-   title={VegAnn: Vegetation Annotation of multi-crop RGB images acquired under diverse conditions for segmentation},
-   author={Madec, Simon and Irfan, Kamran and Velumani, Kaaviya and Baret, Frederic and David, Etienne and Daubige, Gaetan and Samatan, Lucas and Serouart, Mario and Smith, Daniel and James, Chris and Camacho, Fernando and Guo, Wei and De Solan, Benoit and Chapman, Scott and Weiss, Marie},
-   url={https://doi.org/10.5281/zenodo.7636408},
-   year={2023}
- }

- ## 📖 Paper <a name="paper"></a>
- https://doi.org/10.1038/s41597-023-02098-y

- ## ☸️ Model inference <a name="model"></a>
- Model weights here: https://drive.google.com/uc?id=1azagsinfW4btSGaTi0XJKsRnFR85Gtaw (Unet, ResNet34 weights initialized on ImageNet and fine-tuned on VegAnn)
- Please open an issue for any feature request

- ## 📑 Licence <a name="licence"></a>
- The dataset is under the CC-BY licence.
- This repository is under the MIT licence.

- ## 👫 Credits <a name="credits"></a>
- This work was supported by the projects Phenome-ANR-11-INBS-0012, P2S2-CNES-TOSCA-4500066524, GRDC UOQ2002-08RTX, GRDC UOQ2003-011RTX, JST AIP Acceleration Research JPMJCR21U3 and the French Ministry of Agriculture and Food (LITERAL CASDAR project).

- We thank all the people involved in the labelling review, including F. Venault, M. Debroux and G. Studer

- ---
- license: mit
- ---
+ ---
+ language:
+ - en
+ tags:
+ - vegetation
+ - segmentation
+ size_categories:
+ - 1K<n<10K
+ ---
+
+ # VegAnn Dataset

  ### **Vegetation Annotation of a large multi-crop RGB Dataset acquired under diverse conditions for image semantic segmentation**

+ ## Keypoints ⏳
+
  - VegAnn contains 3775 images
  - Images are 512*512 pixels
  - Corresponding binary masks are 0 for soil + crop residues (background) and 255 for vegetation (foreground)
  - VegAnn was compiled using a variety of outdoor images captured with different acquisition systems and configurations
  - For more information about VegAnn, details, labeling rules and potential uses see https://doi.org/10.1038/s41597-023-02098-y
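The 0/255 mask convention above is typically mapped to class indices {0, 1} before training a segmentation model. A minimal sketch with a made-up toy mask:

```python
import numpy as np

# Toy 2x2 mask following the VegAnn convention:
# 0 = soil + crop residues (background), 255 = vegetation (foreground).
mask = np.array([[0, 255], [255, 0]], dtype=np.uint8)

# Map to class indices: 1 = vegetation, 0 = background.
labels = (mask == 255).astype(np.uint8)

print(labels.tolist())  # [[0, 1], [1, 0]]
```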
 
+ ## Dataset Description 📚
+
+ VegAnn, short for Vegetation Annotation, is a meticulously curated collection of 3,775 multi-crop RGB images aimed at enhancing research in crop vegetation segmentation. These images span various phenological stages and were captured using diverse systems and platforms under a wide range of illumination conditions. By aggregating sub-datasets from different projects and institutions, VegAnn represents a broad spectrum of measurement conditions, crop species, and development stages.
+
+ ### Languages 🌐
+
+ The annotations and documentation are primarily in English.
+
+ ## Dataset Structure 🏗
+
+ ### Data Instances 📸
+
+ A VegAnn data instance consists of a 512x512 pixel RGB image patch derived from larger raw images. These patches are designed to provide sufficient detail for distinguishing between vegetation and background, crucial for applications in semantic segmentation and other forms of computer vision analysis in agricultural contexts.
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/645a05f09e55477fff862881/O-iKRqn8FRZnY9hBzmaU5.png)
+
+ ### Data Fields 📋
+
+ - `Name`: Unique identifier for each image patch.
+ - `System`: The imaging system used to acquire the photo (e.g., Handheld Cameras, DHP, UAV).
+ - `Orientation`: The camera's orientation during image capture (e.g., Nadir, 45 degrees).
+ - `latitude` and `longitude`: Geographic coordinates where the image was taken.
+ - `date`: Date of image acquisition.
+ - `LocAcc`: Location accuracy flag (1 for high accuracy, 0 for low or uncertain accuracy).
+ - `Species`: The crop species featured in the image (e.g., Wheat, Maize, Soybean).
+ - `Owner`: The institution or entity that provided the image (e.g., Arvalis, INRAe).
+ - `Dataset-Name`: The sub-dataset or project from which the image originates (e.g., Phenomobile, Easypcc).
+ - `TVT-split1` to `TVT-split5`: Fields indicating the train/validation/test split configurations, facilitating various experimental setups.
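The metadata fields above lend themselves to simple filtering when assembling a subset. The sketch below uses made-up records and a hypothetical `select` helper; it is not part of the official dataset tooling:

```python
# Made-up metadata records using the field names documented above.
records = [
    {"Name": "p001", "Species": "Wheat", "System": "Handheld Cameras", "TVT-split1": "Training"},
    {"Name": "p002", "Species": "Maize", "System": "UAV", "TVT-split1": "Test"},
    {"Name": "p003", "Species": "Wheat", "System": "DHP", "TVT-split1": "Training"},
]

def select(records, species, split):
    """Keep records matching the requested species and train/val/test split."""
    return [r for r in records if r["Species"] in species and r["TVT-split1"] == split]

train = select(records, species={"Wheat"}, split="Training")
print([r["Name"] for r in train])  # ['p001', 'p003']
```

The same filtering applies to any of the `TVT-split1` to `TVT-split5` columns, depending on which split configuration an experiment uses.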
+ ### Data Splits 📊
+
+ The dataset is structured into multiple splits (as indicated by `TVT-split` fields) to support different training, validation, and testing scenarios in machine learning workflows.
+
+ ## Dataset Creation 🛠
+
+ ### Curation Rationale 🤔
+
+ The VegAnn dataset was developed to address the gap in available datasets for training convolutional neural networks (CNNs) for the task of semantic segmentation in real-world agricultural environments. By incorporating images from a wide array of conditions and stages of crop development, VegAnn aims to enhance the performance of segmentation algorithms, promote benchmarking, and foster research on large-scale crop vegetation segmentation.
+ ### Source Data 🌱
+
+ #### Initial Data Collection and Normalization
+
+ Images within VegAnn were sourced from various sub-datasets contributed by different institutions, each under specific acquisition configurations. These were then standardized into 512x512 pixel patches to maintain consistency across the dataset.
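The standardization described above amounts to cutting each raw image into fixed-size patches. A toy sketch of that kind of tiling (an illustration under assumed non-overlapping tiles, not the actual preprocessing pipeline):

```python
import numpy as np

def tile(image, patch=512):
    """Cut an image into non-overlapping patch x patch tiles, dropping remainders."""
    h, w = image.shape[:2]
    return [
        image[y:y + patch, x:x + patch]
        for y in range(0, h - patch + 1, patch)
        for x in range(0, w - patch + 1, patch)
    ]

raw = np.zeros((1024, 1536, 3), dtype=np.uint8)  # toy stand-in for a raw RGB image
patches = tile(raw)
print(len(patches), patches[0].shape)  # 6 (512, 512, 3)
```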
 
+ #### Who are the source data providers?
+
+ The data was provided by a collaboration of institutions including Arvalis, INRAe, The University of Tokyo, University of Queensland, NEON, and EOLAB, among others.
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/645a05f09e55477fff862881/W7rF7P9oexd-Q7oBGV6aF.png)
+
+ ### Annotations 📝
+
+ #### Annotation process
+
+ Annotations for the dataset were focused on distinguishing between vegetation and background within the images. The process ensured that the images offered sufficient spatial resolution to allow for accurate visual segmentation.
+
+ #### Who are the annotators?
+
+ The annotations were performed by a team comprising researchers and domain experts from the contributing institutions.
+
+ ## Considerations for Using the Data 🤓
+
+ ### Social Impact of Dataset 🌍
+
+ The VegAnn dataset is expected to significantly impact agricultural research and commercial applications by enhancing the accuracy of crop monitoring, disease detection, and yield estimation through improved vegetation segmentation techniques.
+
+ ### Discussion of Biases 🧐
+
+ Given the diverse sources of the images, there may be inherent biases towards certain crop types, geographical locations, and imaging conditions. Users should consider this diversity in applications and analyses.
+
+ ### Licensing Information 📄
+
+ Please refer to the specific licensing agreements of the contributing institutions or contact the dataset providers for more information on usage rights and restrictions.
+
+ ## Citation Information 📚
+
+ If you use the VegAnn dataset in your research, please cite the following:
+
+ ```
+ @article{madec_vegann_2023,
+   title = {{VegAnn}, {Vegetation} {Annotation} of multi-crop {RGB} images acquired under diverse conditions for segmentation},
+   volume = {10},
+   issn = {2052-4463},
+   url = {https://doi.org/10.1038/s41597-023-02098-y},
+   doi = {10.1038/s41597-023-02098-y},
+   number = {1},
+   journal = {Scientific Data},
+   author = {Madec, Simon and Irfan, Kamran and Velumani, Kaaviya and Baret, Frederic and David, Etienne and Daubige, Gaetan and Samatan, Lucas Bernigaud and Serouart, Mario and Smith, Daniel and James, Chrisbin and Camacho, Fernando and Guo, Wei and De Solan, Benoit and Chapman, Scott C. and Weiss, Marie},
+   month = may,
+   year = {2023},
+   pages = {302},
+ }
+ ```
+
+ ## Additional Information
+
+ - **Dataset Curators**: Simon Madec et al.
+ - **Version**: 1.0
+ - **License**: Specified by each contributing institution
+ - **Contact**: TBD