tyriaa committed
Commit 4f59b81 · Parent: b7fdbeb

Initialisation

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. .DS_Store +0 -0
  2. Dockerfile +17 -0
  3. README.md +3 -3
  4. app.log +0 -0
  5. app.py +797 -0
  6. dataset/.DS_Store +0 -0
  7. dataset/images/.DS_Store +0 -0
  8. dataset/images/train/02_JPG.rf.d6063f8ca200e543da7becc1bf260ed5.jpg +0 -0
  9. dataset/images/train/03_JPG.rf.2ca107348e11cdefab68044dba66388d.jpg +0 -0
  10. dataset/images/train/04_JPG.rf.b0b546ecbc6b70149b8932018e69fef0.jpg +0 -0
  11. dataset/images/train/05_jpg.rf.46241369ebb0749c40882400f82eb224.jpg +0 -0
  12. dataset/images/train/08_JPG.rf.1f81e954a3bbfc49dcd30e3ba0eb5e98.jpg +0 -0
  13. dataset/images/train/09_JPG.rf.9119efd8c174f968457a893669209835.jpg +0 -0
  14. dataset/images/train/10_JPG.rf.6745a7b3ea828239398b85182acba199.jpg +0 -0
  15. dataset/images/train/11_JPG.rf.3aa3109a1838549cf273cdbe8b2cafeb.jpg +0 -0
  16. dataset/images/train/12_jpg.rf.357643b374df92f81f9dee7c701b2315.jpg +0 -0
  17. dataset/images/train/14_jpg.rf.d91472c724e7c34da4d96ac5e504044c.jpg +0 -0
  18. dataset/images/train/15_jpg.rf.284413e4432b16253b4cd19f0c4f01e2.jpg +0 -0
  19. dataset/images/train/15r_jpg.rf.2da1990173346311d3a3555e23a9164a.jpg +0 -0
  20. dataset/images/train/16_jpg.rf.9fdb4f56ec8596ddcc31db5bbffc26a0.jpg +0 -0
  21. dataset/images/train/18_jpg.rf.4d241aab78af17171d83f3a50f1cf1aa.jpg +0 -0
  22. dataset/images/train/20_jpg.rf.4a45f799ba16b5ff81ab1929f12a12b1.jpg +0 -0
  23. dataset/images/train/21_jpg.rf.d1d6dd254d2e5f396589ccc68a3c8536.jpg +0 -0
  24. dataset/images/train/22_jpg.rf.a72964a78ea36c7bebe3a09cf27ef6ba.jpg +0 -0
  25. dataset/images/train/25_jpg.rf.893f4286e0c8a3cef2efb7612f248147.jpg +0 -0
  26. dataset/images/train/26_jpg.rf.a03c550707ff22cd50ff7f54bebda7ab.jpg +0 -0
  27. dataset/images/train/29_jpg.rf.931769b58ae20d18d1f09d042bc44176.jpg +0 -0
  28. dataset/images/train/31_jpg.rf.f31137f793efde0462ed560d426dcd24.jpg +0 -0
  29. dataset/images/train/7-Figure14-1_jpg.rf.1c6cb204ed1f37c8fed44178a02e9058.jpg +0 -0
  30. dataset/images/train/LU-F_mod_jpg.rf.fc594179772346639512f891960969bb.jpg +0 -0
  31. dataset/images/train/Solder_Voids_jpg.rf.d40f1b71d8a801f084067fde7f33fb08.jpg +0 -0
  32. dataset/images/train/gc10_lake_voids_260-31_jpg.rf.479f3d9dda8dd22097d3d93c78f7e11d.jpg +0 -0
  33. dataset/images/train/images_jpg.rf.675b31c5e1ba2b77f0fa5ca92e2391b0.jpg +0 -0
  34. dataset/images/train/qfn-voiding_0_jpg.rf.2945527db158e9ff4943febaf9cd3eab.jpg +0 -0
  35. dataset/images/train/techtips_3_jpg.rf.ad88af637816f0999f4df0b18dfef293.jpg +0 -0
  36. dataset/images/val/025_JPG.rf.b2cdc2d984adff593dc985f555b8d280.jpg +0 -0
  37. dataset/images/val/06_jpg.rf.a94e0a678df372f5ea1395a8d888a388.jpg +0 -0
  38. dataset/images/val/07_JPG.rf.324d17a87726bd2a9614536c687c6e68.jpg +0 -0
  39. dataset/images/val/23_jpg.rf.8e9afa6b3b471e10c26637d47700f28b.jpg +0 -0
  40. dataset/images/val/24_jpg.rf.4caa996d97e35f6ce4f27a527ea43465.jpg +0 -0
  41. dataset/images/val/27_jpg.rf.3475fce31d283058f46d9f349c04cb1a.jpg +0 -0
  42. dataset/images/val/28_jpg.rf.50e348d807d35667583137c9a6c162ca.jpg +0 -0
  43. dataset/images/val/30_jpg.rf.ed72622e97cf0d884997585686cfe40a.jpg +0 -0
  44. dataset/test/.DS_Store +0 -0
  45. dataset/test/images/17_jpg.rf.ec31940ea72d0cf8b9f38dba68789fcf.jpg +0 -0
  46. dataset/test/images/19_jpg.rf.2c5ffd63bd0ce6b9b0c80fef69d101dc.jpg +0 -0
  47. dataset/test/images/32_jpg.rf.f3e33dcf611a8754c0765224f7873d8b.jpg +0 -0
  48. dataset/test/images/normal-reflow_jpg.rf.2c4fbc1fda915b821b85689ae257e116.jpg +0 -0
  49. dataset/test/images/techtips_31_jpg.rf.673cd3c7c8511e534766e6dbc3171b39.jpg +0 -0
  50. dataset/test/labels/.DS_Store +0 -0
.DS_Store ADDED
Binary file (6.15 kB).
 
Dockerfile ADDED
@@ -0,0 +1,17 @@
+ # Use a lightweight Python base image
+ FROM python:3.9-slim
+
+ # Set the working directory
+ WORKDIR /app
+
+ # Copy the required files into the container
+ COPY . /app
+
+ # Install the dependencies
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Expose port 7860 for the Flask server
+ EXPOSE 7860
+
+ # Command to start Flask
+ CMD ["python", "app.py"]
README.md CHANGED
@@ -1,8 +1,8 @@
  ---
  title: Segmentation Project
- emoji:
- colorFrom: blue
- colorTo: yellow
+ emoji: 😻
+ colorFrom: red
+ colorTo: purple
  sdk: docker
  pinned: false
  ---
app.log ADDED
File without changes
app.py ADDED
@@ -0,0 +1,797 @@
+ from flask import Flask, render_template, request, jsonify
+ from flask_socketio import SocketIO
+ import os
+ import json    # needed by persist_training_status / load_training_status below
+ import random  # needed by prepare_yolo_labels below
+ import shutil
+ import numpy as np
+ from PIL import Image
+ from utils.predictor import Predictor
+ from utils.helpers import (
+     blend_mask_with_image,
+     save_mask_as_png,
+     convert_mask_to_yolo,
+ )
+ import torch
+ from ultralytics import YOLO
+ import threading
+ from threading import Lock
+ import subprocess
+ import time
+ import logging
+ import multiprocessing
+
+
+ # Initialize Flask app and SocketIO
+ app = Flask(__name__)
+ socketio = SocketIO(app)
+
+ # Define Base Directory
+ BASE_DIR = os.path.abspath(os.path.dirname(__file__))
+
+ # Configure logging early so the startup messages below are captured
+ logging.basicConfig(
+     level=logging.INFO,
+     format='%(asctime)s [%(levelname)s] %(message)s',
+     handlers=[
+         logging.StreamHandler(),
+         logging.FileHandler(os.path.join(BASE_DIR, "app.log"))  # Log to a file
+     ]
+ )
+
+ # Folder structure with absolute paths
+ UPLOAD_FOLDERS = {
+     'input': os.path.join(BASE_DIR, 'static/uploads/input'),
+     'segmented_voids': os.path.join(BASE_DIR, 'static/uploads/segmented/voids'),
+     'segmented_chips': os.path.join(BASE_DIR, 'static/uploads/segmented/chips'),
+     'mask_voids': os.path.join(BASE_DIR, 'static/uploads/mask/voids'),
+     'mask_chips': os.path.join(BASE_DIR, 'static/uploads/mask/chips'),
+     'automatic_segmented': os.path.join(BASE_DIR, 'static/uploads/segmented/automatic'),
+ }
+
+ HISTORY_FOLDERS = {
+     'images': os.path.join(BASE_DIR, 'static/history/images'),
+     'masks_chip': os.path.join(BASE_DIR, 'static/history/masks/chip'),
+     'masks_void': os.path.join(BASE_DIR, 'static/history/masks/void'),
+ }
+
+ DATASET_FOLDERS = {
+     'train_images': os.path.join(BASE_DIR, 'dataset/train/images'),
+     'train_labels': os.path.join(BASE_DIR, 'dataset/train/labels'),
+     'val_images': os.path.join(BASE_DIR, 'dataset/val/images'),
+     'val_labels': os.path.join(BASE_DIR, 'dataset/val/labels'),
+     'temp_backup': os.path.join(BASE_DIR, 'temp_backup'),
+     'models': os.path.join(BASE_DIR, 'models'),
+     'models_old': os.path.join(BASE_DIR, 'models/old'),
+ }
+
+ # Ensure all folders exist
+ for folder_name, folder_path in {**UPLOAD_FOLDERS, **HISTORY_FOLDERS, **DATASET_FOLDERS}.items():
+     os.makedirs(folder_path, exist_ok=True)
+     logging.info(f"Ensured folder exists: {folder_name} -> {folder_path}")
+
+ training_process = None
+
+
+ def initialize_training_status():
+     """Initialize global training status."""
+     global training_status
+     training_status = {'running': False, 'cancelled': False}
+
+ def persist_training_status():
+     """Save training status to a file."""
+     with open(os.path.join(BASE_DIR, 'training_status.json'), 'w') as status_file:
+         json.dump(training_status, status_file)
+
+ def load_training_status():
+     """Load training status from a file."""
+     global training_status
+     status_path = os.path.join(BASE_DIR, 'training_status.json')
+     if os.path.exists(status_path):
+         with open(status_path, 'r') as status_file:
+             training_status = json.load(status_file)
+     else:
+         training_status = {'running': False, 'cancelled': False}
+
+ load_training_status()
+
+ os.environ["TORCH_CUDNN_SDPA_ENABLED"] = "0"
+
+ # Initialize SAM Predictor
+ MODEL_CFG = r"C:\codes\sam2\segment-anything-2\sam2\configs\sam2.1\sam2.1_hiera_l.yaml"
+ CHECKPOINT = r"C:\codes\sam2\segment-anything-2\checkpoints\sam2.1_hiera_large.pt"
+ DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ predictor = Predictor(MODEL_CFG, CHECKPOINT, DEVICE)
+
+ # Initialize YOLO-seg
+ YOLO_CFG = os.path.join(DATASET_FOLDERS['models'], "best.pt")
+ yolo_model = YOLO(YOLO_CFG)
+
+
+ @app.route('/')
+ def index():
+     """Serve the main UI."""
+     return render_template('index.html')
+
+ @app.route('/upload', methods=['POST'])
+ def upload_image():
+     """Handle image uploads."""
+     if 'file' not in request.files:
+         return jsonify({'error': 'No file uploaded'}), 400
+     file = request.files['file']
+     if file.filename == '':
+         return jsonify({'error': 'No file selected'}), 400
+
+     # Save the uploaded file to the input folder
+     input_path = os.path.join(UPLOAD_FOLDERS['input'], file.filename)
+     file.save(input_path)
+
+     # Set the uploaded image in the predictor
+     image = np.array(Image.open(input_path).convert("RGB"))
+     predictor.set_image(image)
+
+     # Return a web-accessible URL instead of the file system path
+     web_accessible_url = f"/static/uploads/input/{file.filename}"
+     print(f"Image uploaded and set for prediction: {input_path}")
+     return jsonify({'image_url': web_accessible_url})
+
+ @app.route('/segment', methods=['POST'])
+ def segment():
+     """
+     Perform segmentation and return the blended image URL.
+     """
+     try:
+         # Extract data from request
+         data = request.json
+         points = np.array(data.get('points', []))
+         labels = np.array(data.get('labels', []))
+         current_class = data.get('class', 'voids')  # Default to 'voids' if class not provided
+
+         # Ensure predictor has an image set
+         if not predictor.image_set:
+             raise ValueError("No image set for prediction.")
+
+         # Perform SAM prediction
+         masks, _, _ = predictor.predict(
+             point_coords=points,
+             point_labels=labels,
+             multimask_output=False
+         )
+
+         # Check if masks exist and have non-zero elements
+         if masks is None or masks.size == 0:
+             raise RuntimeError("No masks were generated by the predictor.")
+
+         # Define output paths based on class
+         mask_folder = UPLOAD_FOLDERS.get(f'mask_{current_class}')
+         segmented_folder = UPLOAD_FOLDERS.get(f'segmented_{current_class}')
+
+         if not mask_folder or not segmented_folder:
+             raise ValueError(f"Invalid class '{current_class}' provided.")
+
+         os.makedirs(mask_folder, exist_ok=True)
+         os.makedirs(segmented_folder, exist_ok=True)
+
+         # Save the raw mask
+         mask_path = os.path.join(mask_folder, 'raw_mask.png')
+         save_mask_as_png(masks[0], mask_path)
+
+         # Generate blended image
+         blend_color = [34, 139, 34] if current_class == 'voids' else [30, 144, 255]  # Green for voids, blue for chips
+         blended_image = blend_mask_with_image(predictor.image, masks[0], blend_color)
+
+         # Save blended image
+         blended_filename = f"blended_{current_class}.png"
+         blended_path = os.path.join(segmented_folder, blended_filename)
+         Image.fromarray(blended_image).save(blended_path)
+
+         # Return URL for frontend access
+         segmented_url = f"/static/uploads/segmented/{current_class}/{blended_filename}"
+         logging.info(f"Segmentation completed for {current_class}. Points: {points}, Labels: {labels}")
+         return jsonify({'segmented_url': segmented_url})
+
+     except ValueError as ve:
+         logging.error(f"Value error during segmentation: {ve}")
+         return jsonify({'error': str(ve)}), 400
+
+     except Exception as e:
+         logging.error(f"Unexpected error during segmentation: {e}")
+         return jsonify({'error': 'Segmentation failed', 'details': str(e)}), 500
+
+ @app.route('/automatic_segment', methods=['POST'])
+ def automatic_segment():
+     """Perform automatic segmentation using YOLO."""
+     if 'file' not in request.files:
+         return jsonify({'error': 'No file uploaded'}), 400
+     file = request.files['file']
+     if file.filename == '':
+         return jsonify({'error': 'No file selected'}), 400
+
+     input_path = os.path.join(UPLOAD_FOLDERS['input'], file.filename)
+     file.save(input_path)
+
+     try:
+         # Perform YOLO segmentation
+         results = yolo_model.predict(input_path, save=False, save_txt=False)
+         output_folder = UPLOAD_FOLDERS['automatic_segmented']
+         os.makedirs(output_folder, exist_ok=True)
+
+         chips_data = []
+         chips = []
+         voids = []
+
+         # Process results and save segmented images
+         for result in results:
+             annotated_image = result.plot()[..., ::-1]  # result.plot() returns a BGR array; flip to RGB for PIL
+             result_filename = f"{file.filename.rsplit('.', 1)[0]}_pred.jpg"
+             result_path = os.path.join(output_folder, result_filename)
+             Image.fromarray(annotated_image).save(result_path)
+
+             # Separate chips and voids
+             for i, label in enumerate(result.boxes.cls):  # YOLO labels
+                 label_name = result.names[int(label)]  # Get label name (e.g., 'chip' or 'void')
+                 box = result.boxes.xyxy[i].cpu().numpy()  # Bounding box (x1, y1, x2, y2)
+                 area = float((box[2] - box[0]) * (box[3] - box[1]))  # Calculate area
+
+                 if label_name == 'chip':
+                     chips.append({'box': box, 'area': area, 'voids': []})
+                 elif label_name == 'void':
+                     voids.append({'box': box, 'area': area})
+
+         # Assign voids to chips based on proximity
+         for void in voids:
+             void_centroid = [
+                 (void['box'][0] + void['box'][2]) / 2,  # x centroid
+                 (void['box'][1] + void['box'][3]) / 2   # y centroid
+             ]
+             for chip in chips:
+                 # Check if void centroid is within chip bounding box
+                 if (chip['box'][0] <= void_centroid[0] <= chip['box'][2] and
+                         chip['box'][1] <= void_centroid[1] <= chip['box'][3]):
+                     chip['voids'].append(void)
+                     break
+
+         # Calculate metrics for each chip
+         for idx, chip in enumerate(chips):
+             chip_area = chip['area']
+             total_void_area = sum([float(void['area']) for void in chip['voids']])
+             max_void_area = max([float(void['area']) for void in chip['voids']], default=0)
+
+             void_percentage = (total_void_area / chip_area) * 100 if chip_area > 0 else 0
+             max_void_percentage = (max_void_area / chip_area) * 100 if chip_area > 0 else 0
+
+             chips_data.append({
+                 "chip_number": int(idx + 1),
+                 "chip_area": round(chip_area, 2),
+                 "void_percentage": round(void_percentage, 2),
+                 "max_void_percentage": round(max_void_percentage, 2)
+             })
+
+         # Return the segmented image URL and table data
+         segmented_url = f"/static/uploads/segmented/automatic/{result_filename}"
+         return jsonify({
+             "segmented_url": segmented_url,  # Use the URL for frontend access
+             "table_data": {
+                 "image_name": file.filename,
+                 "chips": chips_data
+             }
+         })
+
+     except Exception as e:
+         print(f"Error in automatic segmentation: {e}")
+         return jsonify({'error': 'Segmentation failed.'}), 500
+
+ @app.route('/save_both', methods=['POST'])
+ def save_both():
+     """Save both the image and masks into the history folders."""
+     data = request.json
+     image_name = data.get('image_name')
+
+     if not image_name:
+         return jsonify({'error': 'Image name not provided'}), 400
+
+     try:
+         # Ensure image_name is a pure file name
+         image_name = os.path.basename(image_name)  # Strip any directory path
+         print(f"Sanitized Image Name: {image_name}")
+
+         # Correctly resolve the input image path
+         input_image_path = os.path.join(UPLOAD_FOLDERS['input'], image_name)
+         if not os.path.exists(input_image_path):
+             print(f"Input image does not exist: {input_image_path}")
+             return jsonify({'error': f'Input image not found: {input_image_path}'}), 404
+
+         # Copy the image to history/images
+         image_history_path = os.path.join(HISTORY_FOLDERS['images'], image_name)
+         os.makedirs(os.path.dirname(image_history_path), exist_ok=True)
+         shutil.copy(input_image_path, image_history_path)
+         print(f"Image saved to history: {image_history_path}")
+
+         # Backup void mask
+         void_mask_path = os.path.join(UPLOAD_FOLDERS['mask_voids'], 'raw_mask.png')
+         if os.path.exists(void_mask_path):
+             void_mask_history_path = os.path.join(HISTORY_FOLDERS['masks_void'], f"{os.path.splitext(image_name)[0]}.png")
+             os.makedirs(os.path.dirname(void_mask_history_path), exist_ok=True)
+             shutil.copy(void_mask_path, void_mask_history_path)
+             print(f"Voids mask saved to history: {void_mask_history_path}")
+         else:
+             print(f"Voids mask not found: {void_mask_path}")
+
+         # Backup chip mask
+         chip_mask_path = os.path.join(UPLOAD_FOLDERS['mask_chips'], 'raw_mask.png')
+         if os.path.exists(chip_mask_path):
+             chip_mask_history_path = os.path.join(HISTORY_FOLDERS['masks_chip'], f"{os.path.splitext(image_name)[0]}.png")
+             os.makedirs(os.path.dirname(chip_mask_history_path), exist_ok=True)
+             shutil.copy(chip_mask_path, chip_mask_history_path)
+             print(f"Chips mask saved to history: {chip_mask_history_path}")
+         else:
+             print(f"Chips mask not found: {chip_mask_path}")
+
+         return jsonify({'message': 'Image and masks saved successfully!'}), 200
+
+     except Exception as e:
+         print(f"Error saving files: {e}")
+         return jsonify({'error': 'Failed to save files.', 'details': str(e)}), 500
+
+ @app.route('/get_history', methods=['GET'])
+ def get_history():
+     try:
+         saved_images = os.listdir(HISTORY_FOLDERS['images'])
+         return jsonify({'status': 'success', 'images': saved_images}), 200
+     except Exception as e:
+         return jsonify({'status': 'error', 'message': f'Failed to fetch history: {e}'}), 500
+
+
+ @app.route('/delete_history_item', methods=['POST'])
+ def delete_history_item():
+     data = request.json
+     image_name = data.get('image_name')
+
+     if not image_name:
+         return jsonify({'error': 'Image name not provided'}), 400
+
+     try:
+         image_path = os.path.join(HISTORY_FOLDERS['images'], image_name)
+         if os.path.exists(image_path):
+             os.remove(image_path)
+
+         void_mask_path = os.path.join(HISTORY_FOLDERS['masks_void'], f"{os.path.splitext(image_name)[0]}.png")
+         if os.path.exists(void_mask_path):
+             os.remove(void_mask_path)
+
+         chip_mask_path = os.path.join(HISTORY_FOLDERS['masks_chip'], f"{os.path.splitext(image_name)[0]}.png")
+         if os.path.exists(chip_mask_path):
+             os.remove(chip_mask_path)
+
+         return jsonify({'message': f'{image_name} and associated masks deleted successfully.'}), 200
+     except Exception as e:
+         return jsonify({'error': f'Failed to delete files: {e}'}), 500
+
+ # Lock for training status updates
+ status_lock = Lock()
+
+ def update_training_status(key, value):
+     """Thread-safe update for training status."""
+     with status_lock:
+         training_status[key] = value
+
+ @app.route('/retrain_model', methods=['POST'])
+ def retrain_model():
+     """Handle retrain model workflow."""
+     global training_status
+
+     if training_status.get('running', False):
+         return jsonify({'error': 'Training is already in progress'}), 400
+
+     try:
+         # Update training status
+         update_training_status('running', True)
+         update_training_status('cancelled', False)
+         logging.info("Training status updated. Starting training workflow.")
+
+         # Backup masks and images
+         backup_masks_and_images()
+         logging.info("Backup completed successfully.")
+
+         # Prepare YOLO labels
+         prepare_yolo_labels()
+         logging.info("YOLO labels prepared successfully.")
+
+         # Start YOLO training in a separate thread
+         threading.Thread(target=run_yolo_training).start()
+         return jsonify({'message': 'Training started successfully!'}), 200
+
+     except Exception as e:
+         logging.error(f"Error during training preparation: {e}")
+         update_training_status('running', False)
+         return jsonify({'error': f"Failed to start training: {e}"}), 500
+
+ def prepare_yolo_labels():
+     """Convert all masks into YOLO-compatible labels and copy images to the dataset folder."""
+     images_folder = HISTORY_FOLDERS['images']  # Use history images as the source
+     train_labels_folder = DATASET_FOLDERS['train_labels']
+     train_images_folder = DATASET_FOLDERS['train_images']
+     val_labels_folder = DATASET_FOLDERS['val_labels']
+     val_images_folder = DATASET_FOLDERS['val_images']
+
+     # Ensure destination directories exist
+     os.makedirs(train_labels_folder, exist_ok=True)
+     os.makedirs(train_images_folder, exist_ok=True)
+     os.makedirs(val_labels_folder, exist_ok=True)
+     os.makedirs(val_images_folder, exist_ok=True)
+
+     try:
+         all_images = [img for img in os.listdir(images_folder) if img.endswith(('.jpg', '.png'))]
+         random.shuffle(all_images)  # Shuffle the images for randomness
+
+         # Determine split index
+         split_idx = int(len(all_images) * 0.8)  # 80% for training, 20% for validation
+
+         # Split images into train and validation sets
+         train_images = all_images[:split_idx]
+         val_images = all_images[split_idx:]
+
+         # Process training images
+         for image_name in train_images:
+             process_image_and_mask(
+                 image_name,
+                 source_images_folder=images_folder,
+                 dest_images_folder=train_images_folder,
+                 dest_labels_folder=train_labels_folder
+             )
+
+         # Process validation images
+         for image_name in val_images:
+             process_image_and_mask(
+                 image_name,
+                 source_images_folder=images_folder,
+                 dest_images_folder=val_images_folder,
+                 dest_labels_folder=val_labels_folder
+             )
+
+         logging.info("YOLO labels prepared, and images split into train and validation successfully.")
+
+     except Exception as e:
+         logging.error(f"Error in preparing YOLO labels: {e}")
+         raise
+
+
+ def process_image_and_mask(image_name, source_images_folder, dest_images_folder, dest_labels_folder):
+     """
+     Process a single image and its masks, saving them in the appropriate YOLO format.
+     """
+     try:
+         image_path = os.path.join(source_images_folder, image_name)
+         label_file_path = os.path.join(dest_labels_folder, f"{os.path.splitext(image_name)[0]}.txt")
+
+         # Copy image to the destination images folder
+         shutil.copy(image_path, os.path.join(dest_images_folder, image_name))
+
+         # Clear the label file if it exists
+         if os.path.exists(label_file_path):
+             os.remove(label_file_path)
+
+         # Process void mask
+         void_mask_path = os.path.join(HISTORY_FOLDERS['masks_void'], f"{os.path.splitext(image_name)[0]}.png")
+         if os.path.exists(void_mask_path):
+             convert_mask_to_yolo(
+                 mask_path=void_mask_path,
+                 image_path=image_path,
+                 class_id=0,  # Void class
+                 output_path=label_file_path
+             )
+
+         # Process chip mask
+         chip_mask_path = os.path.join(HISTORY_FOLDERS['masks_chip'], f"{os.path.splitext(image_name)[0]}.png")
+         if os.path.exists(chip_mask_path):
+             convert_mask_to_yolo(
+                 mask_path=chip_mask_path,
+                 image_path=image_path,
+                 class_id=1,  # Chip class
+                 output_path=label_file_path,
+                 append=True  # Append chip annotations
+             )
+
+         logging.info(f"Processed {image_name} into YOLO format.")
+     except Exception as e:
+         logging.error(f"Error processing {image_name}: {e}")
+         raise
+
+ def backup_masks_and_images():
+     """Backup current masks and images from history folders."""
+     temp_backup_paths = {
+         'voids': os.path.join(DATASET_FOLDERS['temp_backup'], 'masks/voids'),
+         'chips': os.path.join(DATASET_FOLDERS['temp_backup'], 'masks/chips'),
+         'images': os.path.join(DATASET_FOLDERS['temp_backup'], 'images')
+     }
+
+     # Prepare all backup directories
+     for path in temp_backup_paths.values():
+         if os.path.exists(path):
+             shutil.rmtree(path)
+         os.makedirs(path, exist_ok=True)
+
+     try:
+         # Backup images from history
+         for file in os.listdir(HISTORY_FOLDERS['images']):
+             src_image_path = os.path.join(HISTORY_FOLDERS['images'], file)
+             dst_image_path = os.path.join(temp_backup_paths['images'], file)
+             shutil.copy(src_image_path, dst_image_path)
+
+         # Backup void masks from history
+         for file in os.listdir(HISTORY_FOLDERS['masks_void']):
+             src_void_path = os.path.join(HISTORY_FOLDERS['masks_void'], file)
+             dst_void_path = os.path.join(temp_backup_paths['voids'], file)
+             shutil.copy(src_void_path, dst_void_path)
+
+         # Backup chip masks from history
+         for file in os.listdir(HISTORY_FOLDERS['masks_chip']):
+             src_chip_path = os.path.join(HISTORY_FOLDERS['masks_chip'], file)
+             dst_chip_path = os.path.join(temp_backup_paths['chips'], file)
+             shutil.copy(src_chip_path, dst_chip_path)
+
+         logging.info("Masks and images backed up successfully from history.")
+     except Exception as e:
+         logging.error(f"Error during backup: {e}")
+         raise RuntimeError("Backup process failed.")
+
+ def run_yolo_training(num_epochs=10):
+     """Run YOLO training process."""
+     global training_process
+
+     try:
+         device = "cuda" if torch.cuda.is_available() else "cpu"
+         data_cfg_path = os.path.join(BASE_DIR, "models/data.yaml")  # Ensure correct YAML path
+
+         logging.info(f"Starting YOLO training on {device} with {num_epochs} epochs.")
+         logging.info(f"Using dataset configuration: {data_cfg_path}")
+
+         training_command = [
+             "yolo",
+             "train",
+             f"data={data_cfg_path}",
+             f"model={os.path.join(DATASET_FOLDERS['models'], 'best.pt')}",
+             f"device={device}",
+             f"epochs={num_epochs}",
+             "project=runs",
+             "name=train"
+         ]
+
+         training_process = subprocess.Popen(
+             training_command,
+             stdout=subprocess.PIPE,
+             stderr=subprocess.STDOUT,
+             text=True,
+             env=os.environ.copy(),
+         )
+
+         # Display and log output in real time
+         for line in iter(training_process.stdout.readline, ''):
+             print(line.strip())
+             logging.info(line.strip())
+             socketio.emit('training_update', {'message': line.strip()})  # Send updates to the frontend
+
+         training_process.wait()
+
+         if training_process.returncode == 0:
+             finalize_training()  # Finalize successfully completed training
+         else:
+             raise RuntimeError("YOLO training process failed. Check logs for details.")
+     except Exception as e:
+         logging.error(f"Training error: {e}")
+         restore_backup()  # Restore the dataset and masks
+
+         # Emit training error event to the frontend
+         socketio.emit('training_status', {'status': 'error', 'message': f"Training failed: {str(e)}"})
+     finally:
+         update_training_status('running', False)
+         training_process = None  # Reset the process
+
+
+ @socketio.on('cancel_training')
+ def handle_cancel_training():
+     """Cancel the YOLO training process."""
+     global training_process, training_status
+
+     if not training_status.get('running', False):
+         socketio.emit('button_update', {'action': 'retrain'})  # Update button to retrain
+         return
+
+     try:
+         training_process.terminate()
+         training_process.wait()
+         training_status['running'] = False
+         training_status['cancelled'] = True
+
+         restore_backup()
+         cleanup_train_val_directories()
+
+         # Emit button state change
+         socketio.emit('button_update', {'action': 'retrain'})
+         socketio.emit('training_status', {'status': 'cancelled', 'message': 'Training was canceled by the user.'})
+     except Exception as e:
+         logging.error(f"Error cancelling training: {e}")
+         socketio.emit('training_status', {'status': 'error', 'message': str(e)})
+
+ def finalize_training():
+     """Finalize training by promoting the new model and cleaning up."""
+     try:
+         # Locate the most recent training directory
+         runs_dir = os.path.join(BASE_DIR, 'runs')
+         if not os.path.exists(runs_dir):
+             raise FileNotFoundError("Training runs directory does not exist.")
+
+         # Get the latest training run folder
+         latest_run = max(
+             [os.path.join(runs_dir, d) for d in os.listdir(runs_dir)],
+             key=os.path.getmtime
+         )
+         weights_dir = os.path.join(latest_run, 'weights')
+         best_model_path = os.path.join(weights_dir, 'best.pt')
+
+         if not os.path.exists(best_model_path):
+             raise FileNotFoundError(f"'best.pt' not found in {weights_dir}.")
+
+         # Backup the old model
+         old_model_folder = DATASET_FOLDERS['models_old']
+         os.makedirs(old_model_folder, exist_ok=True)
+         existing_best_model = os.path.join(DATASET_FOLDERS['models'], 'best.pt')
+
+         if os.path.exists(existing_best_model):
+             timestamp = time.strftime("%Y%m%d_%H%M%S")
+             shutil.move(existing_best_model, os.path.join(old_model_folder, f"old_{timestamp}.pt"))
+             logging.info(f"Old model backed up to {old_model_folder}.")
+
+         # Move the new model to the models directory
+         new_model_dest = os.path.join(DATASET_FOLDERS['models'], 'best.pt')
+         shutil.move(best_model_path, new_model_dest)
+         logging.info(f"New model saved to {new_model_dest}.")
+
+         # Notify frontend that training is completed
+         socketio.emit('training_status', {
+             'status': 'completed',
+             'message': 'Training completed successfully! Model saved as best.pt.'
+         })
+
+         # Clean up train/val directories
+         cleanup_train_val_directories()
+         logging.info("Train and validation directories cleaned up successfully.")
+
+     except Exception as e:
+         logging.error(f"Error finalizing training: {e}")
+         # Emit error status to the frontend
+         socketio.emit('training_status', {'status': 'error', 'message': f"Error finalizing training: {str(e)}"})
+
+ def restore_backup():
+     """Restore the dataset and masks from the backup."""
+     try:
+         temp_backup = DATASET_FOLDERS['temp_backup']
+         shutil.copytree(os.path.join(temp_backup, 'masks/voids'), UPLOAD_FOLDERS['mask_voids'], dirs_exist_ok=True)
+         shutil.copytree(os.path.join(temp_backup, 'masks/chips'), UPLOAD_FOLDERS['mask_chips'], dirs_exist_ok=True)
+         shutil.copytree(os.path.join(temp_backup, 'images'), UPLOAD_FOLDERS['input'], dirs_exist_ok=True)
+         logging.info("Backup restored successfully.")
+     except Exception as e:
+         logging.error(f"Error restoring backup: {e}")
+
+ @app.route('/cancel_training', methods=['POST'])
+ def cancel_training():
+     global training_process
+
+     if training_process is None:
+         logging.error("No active training process to terminate.")
+         return jsonify({'error': 'No active training process to cancel.'}), 400
+
+     try:
+         training_process.terminate()
+         training_process.wait()
+         training_process = None  # Reset the process after termination
+
+         # Update training status
+         update_training_status('running', False)
+         update_training_status('cancelled', True)
+
+         # Check if the model is already saved as best.pt
+         best_model_path = os.path.join(DATASET_FOLDERS['models'], 'best.pt')
+         if os.path.exists(best_model_path):
+             logging.info(f"Model already saved as best.pt at {best_model_path}.")
+             socketio.emit('button_update', {'action': 'revert'})  # Notify frontend to revert button state
+         else:
+             logging.info("Training canceled, but no new model was saved.")
+
+         # Restore backup if needed
+         restore_backup()
+         cleanup_train_val_directories()
+
+         # Emit status update to frontend
+         socketio.emit('training_status', {'status': 'cancelled', 'message': 'Training was canceled by the user.'})
+         return jsonify({'message': 'Training canceled and data restored successfully.'}), 200
+
+     except Exception as e:
+         logging.error(f"Error cancelling training: {e}")
+         return jsonify({'error': f"Failed to cancel training: {e}"}), 500
+
+ @app.route('/clear_history', methods=['POST'])
+ def clear_history():
+     try:
+         for folder in [HISTORY_FOLDERS['images'], HISTORY_FOLDERS['masks_chip'], HISTORY_FOLDERS['masks_void']]:
+             shutil.rmtree(folder, ignore_errors=True)
+             os.makedirs(folder, exist_ok=True)  # Recreate the empty folder
+         return jsonify({'message': 'History cleared successfully!'}), 200
+     except Exception as e:
+         return jsonify({'error': f'Failed to clear history: {e}'}), 500
+
+ @app.route('/training_status', methods=['GET'])
+ def get_training_status():
+     """Return the current training status."""
+     if training_status.get('running', False):
+         return jsonify({'status': 'running', 'message': 'Training in progress.'}), 200
+     elif training_status.get('cancelled', False):
+         return jsonify({'status': 'cancelled', 'message': 'Training was cancelled.'}), 200
+     return jsonify({'status': 'idle', 'message': 'No training is currently running.'}), 200
+
+ def cleanup_train_val_directories():
+     """Clear the train and validation directories."""
+     try:
+         for folder in [DATASET_FOLDERS['train_images'], DATASET_FOLDERS['train_labels'],
+                        DATASET_FOLDERS['val_images'], DATASET_FOLDERS['val_labels']]:
+             shutil.rmtree(folder, ignore_errors=True)  # Remove folder contents
+             os.makedirs(folder, exist_ok=True)  # Recreate empty folders
+         logging.info("Train and validation directories cleaned up successfully.")
+     except Exception as e:
+         logging.error(f"Error cleaning up train/val directories: {e}")
+
+ if __name__ == '__main__':
+     multiprocessing.set_start_method('spawn')  # 'spawn' behaves consistently across platforms (and is the only mode on Windows)
+     # Serve through Flask-SocketIO so websocket events work; bind to 0.0.0.0:7860 to match the Dockerfile's EXPOSE
+     socketio.run(app, host='0.0.0.0', port=7860, debug=True, use_reloader=False)
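For reference, a minimal client-side sketch of the two segmentation endpoints defined in app.py above. It assumes the server is reachable at http://localhost:7860 (the port exposed by the Dockerfile); the file name sample_board.jpg and the click coordinates are hypothetical placeholders.

import requests

BASE_URL = "http://localhost:7860"  # assumed local address; port matches the Dockerfile's EXPOSE

# 1) Upload an X-ray image; the server loads it into the SAM predictor
with open("sample_board.jpg", "rb") as f:  # hypothetical file name
    upload = requests.post(f"{BASE_URL}/upload", files={"file": f})
print(upload.json())  # e.g. {'image_url': '/static/uploads/input/sample_board.jpg'}

# 2) Interactive segmentation: one positive click, labelled as a void
payload = {
    "points": [[120, 85]],  # (x, y) pixel coordinates of the click
    "labels": [1],          # 1 = foreground point, 0 = background point
    "class": "voids",       # 'voids' or 'chips', per UPLOAD_FOLDERS
}
seg = requests.post(f"{BASE_URL}/segment", json=payload)
print(seg.json())  # e.g. {'segmented_url': '/static/uploads/segmented/voids/blended_voids.png'}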
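The convert_mask_to_yolo helper imported from utils.helpers is not among the 50 files shown in this view. Judging only from its call sites in process_image_and_mask (mask_path, image_path, class_id, output_path, append), an implementation along these lines would produce YOLO-seg polygon labels; this is an illustrative sketch assuming OpenCV, not the repository's actual helper:

import cv2

def convert_mask_to_yolo(mask_path, image_path, class_id, output_path, append=False):
    """Hypothetical sketch: trace each region of a binary mask as a normalized YOLO-seg polygon."""
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    height, width = cv2.imread(image_path).shape[:2]
    # Binarize, then extract the outer contour of each connected region
    _, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    with open(output_path, "a" if append else "w") as f:
        for contour in contours:
            if len(contour) < 3:  # a polygon needs at least three vertices
                continue
            # One label line per region: "class_id x1 y1 x2 y2 ..." normalized to [0, 1]
            coords = (contour.reshape(-1, 2) / [width, height]).flatten()
            f.write(f"{class_id} " + " ".join(f"{c:.6f}" for c in coords) + "\n")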
dataset/.DS_Store ADDED
Binary file (8.2 kB).
 
dataset/images/.DS_Store ADDED
Binary file (6.15 kB).
 
dataset/images/train/02_JPG.rf.d6063f8ca200e543da7becc1bf260ed5.jpg ADDED
dataset/images/train/03_JPG.rf.2ca107348e11cdefab68044dba66388d.jpg ADDED
dataset/images/train/04_JPG.rf.b0b546ecbc6b70149b8932018e69fef0.jpg ADDED
dataset/images/train/05_jpg.rf.46241369ebb0749c40882400f82eb224.jpg ADDED
dataset/images/train/08_JPG.rf.1f81e954a3bbfc49dcd30e3ba0eb5e98.jpg ADDED
dataset/images/train/09_JPG.rf.9119efd8c174f968457a893669209835.jpg ADDED
dataset/images/train/10_JPG.rf.6745a7b3ea828239398b85182acba199.jpg ADDED
dataset/images/train/11_JPG.rf.3aa3109a1838549cf273cdbe8b2cafeb.jpg ADDED
dataset/images/train/12_jpg.rf.357643b374df92f81f9dee7c701b2315.jpg ADDED
dataset/images/train/14_jpg.rf.d91472c724e7c34da4d96ac5e504044c.jpg ADDED
dataset/images/train/15_jpg.rf.284413e4432b16253b4cd19f0c4f01e2.jpg ADDED
dataset/images/train/15r_jpg.rf.2da1990173346311d3a3555e23a9164a.jpg ADDED
dataset/images/train/16_jpg.rf.9fdb4f56ec8596ddcc31db5bbffc26a0.jpg ADDED
dataset/images/train/18_jpg.rf.4d241aab78af17171d83f3a50f1cf1aa.jpg ADDED
dataset/images/train/20_jpg.rf.4a45f799ba16b5ff81ab1929f12a12b1.jpg ADDED
dataset/images/train/21_jpg.rf.d1d6dd254d2e5f396589ccc68a3c8536.jpg ADDED
dataset/images/train/22_jpg.rf.a72964a78ea36c7bebe3a09cf27ef6ba.jpg ADDED
dataset/images/train/25_jpg.rf.893f4286e0c8a3cef2efb7612f248147.jpg ADDED
dataset/images/train/26_jpg.rf.a03c550707ff22cd50ff7f54bebda7ab.jpg ADDED
dataset/images/train/29_jpg.rf.931769b58ae20d18d1f09d042bc44176.jpg ADDED
dataset/images/train/31_jpg.rf.f31137f793efde0462ed560d426dcd24.jpg ADDED
dataset/images/train/7-Figure14-1_jpg.rf.1c6cb204ed1f37c8fed44178a02e9058.jpg ADDED
dataset/images/train/LU-F_mod_jpg.rf.fc594179772346639512f891960969bb.jpg ADDED
dataset/images/train/Solder_Voids_jpg.rf.d40f1b71d8a801f084067fde7f33fb08.jpg ADDED
dataset/images/train/gc10_lake_voids_260-31_jpg.rf.479f3d9dda8dd22097d3d93c78f7e11d.jpg ADDED
dataset/images/train/images_jpg.rf.675b31c5e1ba2b77f0fa5ca92e2391b0.jpg ADDED
dataset/images/train/qfn-voiding_0_jpg.rf.2945527db158e9ff4943febaf9cd3eab.jpg ADDED
dataset/images/train/techtips_3_jpg.rf.ad88af637816f0999f4df0b18dfef293.jpg ADDED
dataset/images/val/025_JPG.rf.b2cdc2d984adff593dc985f555b8d280.jpg ADDED
dataset/images/val/06_jpg.rf.a94e0a678df372f5ea1395a8d888a388.jpg ADDED
dataset/images/val/07_JPG.rf.324d17a87726bd2a9614536c687c6e68.jpg ADDED
dataset/images/val/23_jpg.rf.8e9afa6b3b471e10c26637d47700f28b.jpg ADDED
dataset/images/val/24_jpg.rf.4caa996d97e35f6ce4f27a527ea43465.jpg ADDED
dataset/images/val/27_jpg.rf.3475fce31d283058f46d9f349c04cb1a.jpg ADDED
dataset/images/val/28_jpg.rf.50e348d807d35667583137c9a6c162ca.jpg ADDED
dataset/images/val/30_jpg.rf.ed72622e97cf0d884997585686cfe40a.jpg ADDED
dataset/test/.DS_Store ADDED
Binary file (6.15 kB).
 
dataset/test/images/17_jpg.rf.ec31940ea72d0cf8b9f38dba68789fcf.jpg ADDED
dataset/test/images/19_jpg.rf.2c5ffd63bd0ce6b9b0c80fef69d101dc.jpg ADDED
dataset/test/images/32_jpg.rf.f3e33dcf611a8754c0765224f7873d8b.jpg ADDED
dataset/test/images/normal-reflow_jpg.rf.2c4fbc1fda915b821b85689ae257e116.jpg ADDED
dataset/test/images/techtips_31_jpg.rf.673cd3c7c8511e534766e6dbc3171b39.jpg ADDED
dataset/test/labels/.DS_Store ADDED
Binary file (6.15 kB).