phyloforfun committed
Commit c7d3ccd
1 Parent(s): e91ac58

Major update. Support for 15 LLMs, World Flora Online taxonomy validation, geolocation, 2 OCR methods, significant UI changes, stability improvements, consistent JSON parsing

Files changed (1)
  1. README.md +10 -340
README.md CHANGED
@@ -1,343 +1,13 @@
1
- # VoucherVision
2
-
3
- [![VoucherVision](https://LeafMachine.org/img/VV_Logo.png "VoucherVision")](https://LeafMachine.org/)
4
-
5
- Table of Contents
6
- =================
7
-
8
- * [Table of Contents](#table-of-contents)
9
- * [About](#about)
10
- * [Roadmap and New Features List](#roadmap-and-new-features-list)
11
- * [Try our public demo!](#try-our-public-demo)
12
- * [Installing VoucherVision](#installing-VoucherVision)
13
- * [Prerequisites](#prerequisites)
14
- * [Installation - Cloning the VoucherVision Repository](#installation---cloning-the-VoucherVision-repository)
15
- * [About Python Virtual Environments](#about-python-virtual-environments)
16
- * [Installation - Windows 10+](#installation---windows-10)
17
- * [Virtual Environment](#virtual-environment-1)
18
- * [Installing Packages](#installing-packages-1)
19
- * [Troubleshooting CUDA](#troubleshooting-cuda)
20
- * [Create a Desktop Shortcut to Launch VoucherVision GUI](#create-a-desktop-shortcut-to-launch-vouchervision-gui)
21
- * [Run VoucherVision](#run-vouchervision)
22
- * [Setting up API key](#setting-up-api-key)
23
- * [Check GPU](#check-gpu)
24
- * [Run Tests](#run-tests)
25
- * [Starting VoucherVision](#starting-vouchervision)
26
- * [Azure Instances of OpenAI](#azure-instances-of-openai)
27
- * [Custom Prompt Builder](#custom-prompt-builder)
28
- * [Load, Build, Edit](#load-build-edit)
29
- * [Instructions](#instructions)
30
- * [Defining Column Names Field-Specific Instructions](#defining-column-names-field-specific-instructions)
31
- * [Prompting Structure](#prompting-structure)
32
- * [Mapping Columns for VoucherVisionEditor](#mapping-columns-for-vouchervisioneditor)
33
- * [Expense Reporting](#expense-reporting)
34
- * [Expense Report Dashboard](#expense-report-dashboard)
35
- * [User Interface Images](#user-interface-images)
36
-
37
- ---
38
-
39
- # About
40
- ## **VoucherVision** - In Beta Testing Phase πŸš€
41
-
42
- For inquiries, feedback (or if you want to get involved!) [please complete our form](https://docs.google.com/forms/d/e/1FAIpQLSe2E9zU1bPJ1BW4PMakEQFsRmLbQ0WTBI2UXHIMEFm4WbnAVw/viewform?usp=sf_link).
43
-
44
- ## **Overview:**
45
- Initiated by the **University of Michigan Herbarium**, VoucherVision harnesses the power of large language models (LLMs) to transform the transcription process of natural history specimen labels. Our workflow is as follows:
46
- - Text extraction from specimen labels with **LeafMachine2**.
47
- - Text interpretation using **Google Vision OCR**.
48
- - LLMs, including ***GPT-3.5***, ***GPT-4***, ***PaLM 2***, and Azure instances of OpenAI models, standardize the OCR output into a consistent spreadsheet format. This data can then be integrated into various databases like Specify, Symbiota, and BRAHMS.
49
-
50
- For ensuring accuracy and consistency, the [VoucherVisionEditor](https://github.com/Gene-Weaver/VoucherVisionEditor) serves as a quality control tool.
51
-
52
- ## Roadmap and New Features List
53
-
54
- #### VoucherVision
55
- - [X] Update to GPT 1106 builds
56
- - [ ] Option to zip output files for simpler import into VVE
57
- [ ] Instead of saving a copy of the original image in place of the OCR/collage images when they are not selected, just change the path to point to the original image.
58
- - [x] Expense tracking
59
- - [x] Dashboard
60
- - [X] More granular support for different GPT versions
61
- [x] Project-based and cumulative tracking
62
- - [x] Hugging Face Spaces
63
- - [x] Working and refactored
64
- - [ ] Visualize locations on a map (verbatim and decimal)
65
- - [x] Tested with batch of 300 images
66
- - [x] GPT 3.5
67
- - [ ] GPT 4
68
- - [ ] PaLM 2
69
- [ ] Optimize for 300+ images at a time
70
- - [x] Modular Prompt Builder
71
- - [x] Build, save, load, submit to VV library
72
- [ ] Assess whether column order matters
73
- - [ ] Assess shorter prompt effectiveness
74
- - [ ] Restrict special columns to conform with VVE requirements (catalog_number, coordinates)
75
- - [ ] Option to load existing OCR into VoucherVision workflow
76
- #### Supported LLM APIs
77
- - [x] OpenAI
78
- - [x] GPT 4
79
- - [x] GPT 4 Turbo 1106-preview
80
- - [x] GPT 4 32k
81
- - [x] GPT 3.5
82
- - [x] GPT 3.5 Instruct
83
- - [x] OpenAI (Microsoft Azure Endpoints)
84
- - [x] GPT 4
85
- - [x] GPT 4 Turbo 1106-preview
86
- - [x] GPT 4 32k
87
- - [x] GPT 3.5
88
- - [x] GPT 3.5 Instruct
89
- - [x] MistralAI
90
- - [x] Mistral Tiny
91
- - [x] Mistral Small
92
- - [x] Mistral Medium
93
- - [x] Google PaLM2
94
- - [x] text-bison@001
95
- - [x] text-bison@002
96
- - [x] text-unicorn@001
97
- - [x] Google Gemini
98
- - [x] Gemini-Pro
99
- #### Supported Locally Hosted LLMs
100
- - [x] MistralAI (24GB+ VRAM GPU Required)
101
- - [x] Mixtral 8x7B Instruct v0.1
102
- [x] Mistral 7B Instruct v0.2
103
- [x] MistralAI (CPU Inference) (can run on almost any computer!)
104
- [x] Mistral 7B Instruct v0.2 GGUF via llama.cpp
105
- - [x] Meta-Llama2 7B
106
- - [ ] Llama2 7B chat hf
107
-
108
- #### VoucherVisionEditor
109
- - [ ] Streamline the startup procedure
110
- - [ ] Add configurable dropdown menus for certain fields
111
- [ ] Make sure that VVE can accommodate arbitrary column names
112
- - [ ] Remove legacy support (version 1 prompts)
113
- - [ ] Taxonomy validation helper
114
- - [x] Visualize locations on a map (verbatim and decimal)
115
- - [ ] More support for datum and verbatim coordinates
116
- - [ ] Compare raw OCR to values in form to flag hallucinations/generated content
117
- - [ ] Accept zipped folders as input
118
- - [ ] Flag user when multiple people/names/determinations are present
119
-
120
- ### **Package Information:**
121
- The main VoucherVision tool and the VoucherVisionEditor are packaged separately. This separation ensures that lower-performance computers can still install and utilize the editor. While VoucherVision is optimized to function smoothly on virtually any modern system, maximizing its capabilities (like using LeafMachine2 label collages or running Retrieval Augmented Generation (RAG) prompts) mandates a GPU.
122
-
123
- > ***NOTE:*** You can absolutely run VoucherVision on non-GPU systems, but RAG will not be possible (luckily, the apparently best prompt, 'Version 2+', does not use RAG).
124
-
125
- ---
126
-
127
- # Try our public demo!
128
- Our public demo, while lacking several quality control and reliability features found in the full VoucherVision module, provides an exciting glimpse into its capabilities. Feel free to upload your herbarium specimen and see what happens!
129
- [VoucherVision Demo](https://huggingface.co/spaces/phyloforfun/VoucherVision)
130
-
131
- ---
132
-
133
- # Installing VoucherVision
134
-
135
- ## Prerequisites
136
- - Python 3.10 or later
137
- - Optional: an Nvidia GPU + CUDA for running LeafMachine2
138
-
139
- ## Installation - Cloning the VoucherVision Repository
140
- 1. First, install Python 3.10 or greater on your machine of choice. We have validated up to Python 3.11.
141
- - Make sure that you can use `pip` to install packages on your machine, or at least inside of a virtual environment.
142
- - Simply type `pip` into your terminal or PowerShell. If you see a list of options, you are all set. Otherwise, see
143
- either this [PIP Documentation](https://pip.pypa.io/en/stable/installation/) or [this help page](https://www.geeksforgeeks.org/how-to-install-pip-on-windows/)
144
- 2. Open a terminal window and `cd` into the directory where you want to install VoucherVision.
145
- 3. In the [Git BASH terminal](https://gitforwindows.org/), clone the VoucherVision repository from GitHub by running the command:
146
- <pre><code class="language-python">git clone https://github.com/Gene-Weaver/VoucherVision.git</code></pre>
147
- <button class="btn" data-clipboard-target="#code-snippet"></button>
148
- 4. Move into the VoucherVision directory by running `cd VoucherVision` in the terminal.
149
- 5. To run VoucherVision we need to install its dependencies inside of a Python virtual environment. Follow the instructions below for your operating system.
150
-
151
- ## About Python Virtual Environments
152
- A virtual environment keeps the dependencies required by different projects in separate places by creating an isolated Python environment for each one. This avoids conflicts between the packages you have installed for different projects and makes it easier to maintain different package versions per project.
153
-
154
- For more information about virtual environments, please see [Creation of virtual environments](https://docs.python.org/3/library/venv.html)
155
-
156
- ---
157
-
158
- ## Installation - Windows 10+
159
- Installation is essentially the same on Linux.
160
- ### Virtual Environment
161
-
162
- 1. Still inside the VoucherVision directory, confirm that no venv is currently active
163
- <pre><code class="language-python">python --version</code></pre>
164
- <button class="btn" data-clipboard-target="#code-snippet"></button>
165
- 2. Then create the virtual environment (venv_VV is the name of our new virtual environment)
166
- <pre><code class="language-python">python3 -m venv venv_VV</code></pre>
167
- <button class="btn" data-clipboard-target="#code-snippet"></button>
168
- Or depending on your Python version...
169
- <pre><code class="language-python">python -m venv venv_VV</code></pre>
170
- <button class="btn" data-clipboard-target="#code-snippet"></button>
171
- 3. Activate the virtual environment
172
- <pre><code class="language-python">.\venv_VV\Scripts\activate</code></pre>
173
- <button class="btn" data-clipboard-target="#code-snippet"></button>
174
- 4. Confirm that the venv is active (should be different from step 1)
175
- <pre><code class="language-python">python --version</code></pre>
176
- <button class="btn" data-clipboard-target="#code-snippet"></button>
177
- 5. If you want to exit the venv later for some reason, deactivate the venv using
178
- <pre><code class="language-python">deactivate</code></pre>
179
- <button class="btn" data-clipboard-target="#code-snippet"></button>
180
-
181
- ### Installing Packages
182
-
183
- 1. Install the required dependencies to use VoucherVision
184
- - Option A - If you are using Windows PowerShell:
185
- <pre><code class="language-python">pip install wheel streamlit streamlit-extras plotly pyyaml Pillow pandas matplotlib matplotlib-inline tqdm openai langchain tiktoken openpyxl google-generativeai google-cloud-storage google-cloud-vision opencv-python chromadb chroma-migrate InstructorEmbedding transformers sentence-transformers seaborn dask psutil py-cpuinfo azureml-sdk azure-identity ; if ($?) { pip install numpy -U } ; if ($?) { pip install -U scikit-learn } ; if ($?) { pip install --upgrade numpy scikit-learn streamlit google-generativeai google-cloud-storage google-cloud-vision azureml-sdk azure-identity openai langchain }</code></pre>
186
- <button class="btn" data-clipboard-target="#code-snippet"></button>
187
-
188
- - Option B:
189
- <pre><code class="language-python">pip install wheel streamlit streamlit-extras plotly pyyaml Pillow pandas matplotlib matplotlib-inline tqdm openai langchain tiktoken openpyxl google-generativeai google-cloud-storage google-cloud-vision opencv-python chromadb chroma-migrate InstructorEmbedding transformers sentence-transformers seaborn dask psutil py-cpuinfo azureml-sdk azure-identity</code></pre>
190
- <button class="btn" data-clipboard-target="#code-snippet"></button>
191
-
192
- 2. Upgrade important packages. Run this if there is an update to VoucherVision.
193
- <pre><code class="language-python">pip install --upgrade numpy scikit-learn streamlit google-generativeai google-cloud-storage google-cloud-vision azureml-sdk azure-identity openai langchain</code></pre>
194
- <button class="btn" data-clipboard-target="#code-snippet"></button>
195
-
196
- 3. Install PyTorch
197
- - The LeafMachine2 machine learning algorithm requires PyTorch. If your computer does not have a GPU, then please install a version of PyTorch that is for CPU only. If your computer does have an Nvidia GPU, then please determine which version of PyTorch matches your current CUDA version. Please see [Troubleshooting CUDA](#troubleshooting-cuda) for help. PyTorch is large and will take a bit to install.
198
-
199
- - WITH GPU (or visit [PyTorch.org](https://pytorch.org/get-started/locally/) to find the appropriate version of PyTorch for your CUDA version)
200
- <pre><code class="language-python">pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113</code></pre>
201
- <button class="btn" data-clipboard-target="#code-snippet"></button>
202
- - WITHOUT GPU, CPU ONLY
203
- <pre><code class="language-python">pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cpu</code></pre>
204
- <button class="btn" data-clipboard-target="#code-snippet"></button>
205
-
206
-
207
- > If you need help, please submit an inquiry in the form at [LeafMachine.org](https://LeafMachine.org/)
208
-
209
- ---
210
-
211
- ## Troubleshooting CUDA
212
-
213
- - If your system already has another version of CUDA (e.g., CUDA 11.7) then it can be complicated to switch to CUDA 11.3.
214
- - The simplest solution is to install PyTorch with CPU only, avoiding the CUDA problem entirely.
215
- - Alternatively, you can install the [latest PyTorch release](https://pytorch.org/get-started/locally/) for your specific system, either using the CPU-only versions (`pip3 install torch`, `pip3 install torchvision`, `pip3 install torchaudio`) or by matching the PyTorch version to your CUDA version.
216
- - We have not validated CUDA 11.6 or CUDA 11.7, but our code is likely to work with them too. If you have success with other versions of CUDA/pytorch, let us know and we will update our instructions.
217
-
218
- ---
219
-
220
- # Create a Desktop Shortcut to Launch VoucherVision GUI
221
- We can create a desktop shortcut to launch VoucherVision. In the `../VoucherVision/` directory is a file called `create_desktop_shortcut.py`. In the terminal, move into the `../VoucherVision/` directory and type:
222
- <pre><code class="language-python">python create_desktop_shortcut.py</code></pre>
223
- <button class="btn" data-clipboard-target="#code-snippet"></button>
224
- Or...
225
- <pre><code class="language-python">python3 create_desktop_shortcut.py</code></pre>
226
- <button class="btn" data-clipboard-target="#code-snippet"></button>
227
- Follow the instructions, select where you want the shortcut to be created, then where the virtual environment is located.
228
-
229
  ---
230
-
231
- # Run VoucherVision
232
- 1. In the terminal, make sure that you `cd` into the `VoucherVision` directory and that your virtual environment is active (you should see venv_VV on the command line).
233
- 2. Type:
234
- <pre><code class="language-python">python run_VoucherVision.py</code></pre>
235
- <button class="btn" data-clipboard-target="#code-snippet"></button>
236
- or depending on your Python installation:
237
- <pre><code class="language-python">python3 run_VoucherVision.py</code></pre>
238
- <button class="btn" data-clipboard-target="#code-snippet"></button>
239
- 3. If you ever see an error that says that a "port is not available", open `run_VoucherVision.py` in a plain text editor and change the `--port` value to something different but close, like 8502.
240
-
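If hand-editing `run_VoucherVision.py` feels fragile, the port-conflict workaround above can also be automated. This is a minimal sketch, not part of VoucherVision: `find_open_port` is a hypothetical helper that probes for the first free port at or above Streamlit's usual default of 8501, whose result you could pass as the `--port` value.

```python
# Sketch: probe for a free port near Streamlit's default (8501) instead of
# guessing a new --port value by hand. Helper name is illustrative only.
import socket

def find_open_port(start_port: int = 8501, max_tries: int = 10) -> int:
    """Return the first port >= start_port that accepts a local bind."""
    for port in range(start_port, start_port + max_tries):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
                return port  # bind succeeded, so the port is currently free
            except OSError:
                continue  # port in use; try the next one
    raise RuntimeError("no open port found")

if __name__ == "__main__":
    print(find_open_port())
```

Note that a port found this way can still be taken by another process before Streamlit binds it; this is only a convenience, not a guarantee.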
241
- ## Setting up API key
242
- VoucherVision requires access to Google Vision OCR and at least one of the following LLMs: the OpenAI API, Google PaLM 2, or a private instance of OpenAI through Microsoft Azure. On first startup, you will see a page with instructions on how to get these API keys. ***Nothing will work until*** you provide at least the Google Vision OCR API key and one LLM API key.
243
-
244
- ## Check GPU
245
- Press the "Check GPU" button to see if you have a GPU available. If you know that your computer has an Nvidia GPU but the check fails, then you need to install a different version of PyTorch in the virtual environment.
246
-
247
- ## Run Tests
248
- Once you have provided API keys, you can test all available prompts and LLMs by pressing the test buttons. Every combination of LLM, prompt, and LeafMachine2 collage will run on the images in the `../VoucherVision/demo/demo_images` folder. A grid will appear letting you know which combinations work on your system.
249
-
250
- ## Starting VoucherVision
251
- 1. "Run name" - Set a run name for your project. This will be the name of the new folder that contains the output files.
252
- 2. "Output directory" - Paste the full file path of where you would like to save the folder that will be created in step 1.
253
- 3. "Input images directory" - Paste the full file path of the folder containing the input images. This folder may contain only JPG or JPEG images.
254
- 4. "Select an LLM" - Pick the LLM you want to use to parse the unstructured OCR text.
255
- - As of Nov. 1, 2023 PaLM 2 is free to use.
256
- 5. "Prompt Version" - Pick your prompt version. We recommend "Version 2" for production use, but you can experiment with our other prompts.
257
- 6. "Cropped Components" - Check the box to use LeafMachine2 collage images as the input file. LeafMachine2 can often find small handwritten text that may be missed by Google Vision OCR's text detection algorithm, but the difference in performance is modest; you will still get good performance without the LeafMachine2 collage images.
258
- 7. "Domain Knowledge" is only used for "Version 1" prompts.
259
- 8. "Component Detector" sets basic LeafMachine2 parameters, but the default is likely good enough.
260
- 9. "Processing Options"
261
- - The image file name defines the row name in the final output spreadsheet.
262
- - We provide some basic options to clean/parse the image file name to produce the desired output.
263
- - For example, if the input image name is `MICH-V-3819482.jpg` but the desired name is just `3819482` you can add `MICH-V-` to the "Remove prefix from catalog number" input box. Alternatively, you can check the "Require Catalog..." box and achieve the same result.
264
-
265
- 10. ***Finally*** you can press the start processing button.
266
-
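The file-name cleaning in "Processing Options" amounts to stripping an institutional prefix and the file extension. The sketch below illustrates the idea with the `MICH-V-` example from the text; the function name and exact behavior are assumptions, not VoucherVision's implementation.

```python
# Sketch of catalog-number cleaning: "MICH-V-3819482.jpg" -> "3819482".
# Hypothetical helper; mirrors the "Remove prefix from catalog number" option.
from pathlib import Path

def clean_catalog_number(filename: str, prefix: str = "MICH-V-") -> str:
    stem = Path(filename).stem          # drop the ".jpg"/".jpeg" extension
    if stem.startswith(prefix):
        stem = stem[len(prefix):]       # drop the institutional prefix
    return stem

print(clean_catalog_number("MICH-V-3819482.jpg"))  # -> 3819482
```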
267
- ## Azure Instances of OpenAI
268
- If your institution has an enterprise instance of OpenAI's services, [like at the University of Michigan](https://its.umich.edu/computing/ai), you can use Azure instead of the OpenAI servers. Your institution should be able to provide you with the required keys (there are 5 required keys for this service).
269
-
270
- # Custom Prompt Builder
271
- VoucherVision empowers individual institutions to customize the format of the LLM output. Using our pre-defined prompts you can transcribe the label text into 20 columns, but using our Prompt Builder you can load one of our default prompts and adjust the output to meet your needs. More instructions will come soon, but for now here are a few more details.
272
-
273
- ### Load, Build, Edit
274
-
275
- The Prompt Builder creates a prompt in the structure that VoucherVision expects. This information is stored as a configuration yaml file in `../VoucherVision/custom_prompts/`. We provide a few versions to get started. You can load one of our examples and then use the Prompt Builder to edit or add new columns.
276
-
277
- ![prompt_1](https://LeafMachine.org/img/prompt_1.PNG)
278
-
279
- ### Instructions
280
-
281
- Right now, the prompting instructions are not configurable, but that may change in the future.
282
-
283
- ![prompt_2](https://LeafMachine.org/img/prompt_1.PNG)
284
-
285
- ### Defining Column Names Field-Specific Instructions
286
-
287
- The central JSON object shows the structure of the columns that you are requesting the LLM to create and populate with information from the specimen's labels. These will become the columns in the final xlsx file that VoucherVision generates. You can pick formatting instructions, set default values, and give detailed instructions.
288
-
289
- > Note: formatting instructions are not always followed precisely by the LLM. For example, GPT-4 is capable of granular instructions like converting ALL CAPS TEXT to sentence-case, but GPT-3.5 and PaLM 2 might not be capable of following that instruction every time (which is why we have the VoucherVisionEditor and are working to link these instructions so that humans editing the output can quickly/easily fix these errors).
290
-
291
- ![prompt_3](https://LeafMachine.org/img/prompt_3.PNG)
292
-
293
- ### Prompting Structure
294
-
295
- The rightmost JSON object is the entire prompt structure. If you load the `required_structure.yaml` prompt, you will see the bare-bones version of what VoucherVision expects. All of the parts are there for a reason. The Prompt Builder UI may be a little unruly right now thanks to quirks with Streamlit, but we still recommend using the UI to build your own prompts to make sure that all of the required components are present.
296
-
297
- ![prompt_4](https://LeafMachine.org/img/prompt_4.PNG)
298
-
299
- ### Mapping Columns for VoucherVisionEditor
300
-
301
- Finally, we need to map columns to a VoucherVisionEditor category.
302
-
303
- ![prompt_5](https://LeafMachine.org/img/prompt_5.PNG)
304
-
305
- # Expense Reporting
306
- VoucherVision logs the number of input and output tokens (using [tiktoken](https://github.com/openai/tiktoken)) from every call. We store the publicly listed prices of the LLM APIs in `../VoucherVision/api_cost/api_cost.yaml`, then do some simple math to estimate the cost of a run, which is stored inside your project's output directory `../run_name/Cost/run_name.csv`; all runs are accumulated in a csv file stored in `../VoucherVision/expense_report/expense_report.csv`. VoucherVision only manages `expense_report.csv`, so if you want to split costs by month/quarter, copy and rename `expense_report.csv`. Deleting `expense_report.csv` resets the accumulated stats.
307
-
308
- > This should be treated as an estimate. The true cost may be slightly different.
309
-
310
- This is an example of the stats that we track:
311
- | run | date | api_version | total_cost | n_images | tokens_in | tokens_out | rate_in | rate_out | cost_in | cost_out |
312
- |----------------------------|--------------------------|-------------|------------|----------|-----------|------------|---------|----------|-----------|----------|
313
- | GPT4_test_run1 | 2023_11_05__17-44-31 | GPT_4 | 0.23931 | 2 | 6749 | 614 | 0.03 | 0.06 | 0.20247 | 0.03684 |
314
- | GPT_3_5_test_run | 2023_11_05__17-48-48 | GPT_3_5 | 0.0189755 | 4 | 12033 | 463 | 0.0015 | 0.002 | 0.0180495 | 0.000926 |
315
- | PALM2_test_run | 2023_11_05__17-50-35 | PALM2 | 0 | 4 | 13514 | 771 | 0 | 0 | 0 | 0 |
316
- | GPT4_test_run2 | 2023_11_05__18-49-24 | GPT_4 | 0.40962 | 4 | 12032 | 811 | 0.03 | 0.06 | 0.36096 | 0.04866 |
317
-
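The "simple math" behind these rows can be reproduced from the table itself: the listed rates are USD per 1,000 tokens, applied separately to input and output tokens. This sketch recomputes the `GPT4_test_run1` row; the function is illustrative, not VoucherVision's actual code.

```python
# Sketch of the cost estimate: rates are USD per 1K tokens, as in the
# api_cost.yaml-style pricing table above.
def estimate_cost(tokens_in: int, tokens_out: int,
                  rate_in: float, rate_out: float) -> float:
    cost_in = tokens_in / 1000 * rate_in    # e.g. 6749 * 0.03 / 1000
    cost_out = tokens_out / 1000 * rate_out  # e.g. 614 * 0.06 / 1000
    return cost_in + cost_out

# Reproduces the GPT4_test_run1 row: total_cost = 0.23931
print(round(estimate_cost(6749, 614, 0.03, 0.06), 5))  # -> 0.23931
```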
318
- ## Expense Report Dashboard
319
- The sidebar in VoucherVision displays summary stats taken from `expense_report.csv`.
320
- ![Expense Report Dashboard](https://LeafMachine.org/img/expense_report.PNG)
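The dashboard's summary stats are aggregations over `expense_report.csv`. A minimal sketch of that kind of roll-up, assuming the column names from the example table above (the aggregation choices are ours, not necessarily what the sidebar shows):

```python
# Sketch: summarize an expense_report.csv-style file. Column names follow the
# example table in this README; the specific aggregates are assumptions.
import csv
import io

def summarize(report_csv: str) -> dict:
    rows = list(csv.DictReader(io.StringIO(report_csv)))
    return {
        "runs": len(rows),
        "images": sum(int(r["n_images"]) for r in rows),
        "total_cost": round(sum(float(r["total_cost"]) for r in rows), 5),
    }

example = """run,date,api_version,total_cost,n_images,tokens_in,tokens_out
GPT4_test_run1,2023_11_05__17-44-31,GPT_4,0.23931,2,6749,614
PALM2_test_run,2023_11_05__17-50-35,PALM2,0,4,13514,771"""
print(summarize(example))
```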
321
-
322
- # User Interface Images
323
- Validation test when the OpenAI key is not provided, but keys for PaLM 2 and Azure OpenAI are present:
324
- ![Validation 1](https://LeafMachine.org/img/validation_1.PNG)
325
-
326
  ---
327
 
328
- Validation test when all versions of the OpenAI keys are provided:
329
- ![Validation GPT](https://LeafMachine.org/img/validation_gpt.PNG)
330
-
331
- ---
332
-
333
- A successful GPU test:
334
- ![Validation GPU](https://LeafMachine.org/img/validation_gpu.PNG)
335
-
336
- ---
337
-
338
- Successful PaLM 2 test:
339
- ![Validation PaLM](https://LeafMachine.org/img/validation_palm.PNG)
340
-
341
-
342
-
343
-
1
  ---
2
+ title: VoucherVision
3
+ emoji: πŸ“ˆ
4
+ colorFrom: blue
5
+ colorTo: green
6
+ sdk: streamlit
7
+ sdk_version: 1.28.1
8
+ app_file: app.py
9
+ pinned: false
10
+ license: cc-by-nc-4.0
11
  ---
12
 
13
+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference