ItchyFingaz committed on
Commit c2ded79
1 Parent(s): 2578602

Upload 2 files

wassComprehensiveNode_wasNodeSuiteV101Patch/WAS_License.txt ADDED
@@ -0,0 +1,7 @@
+ Copyright 2023 Jordan Thompson (WASasquatch)
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
wassComprehensiveNode_wasNodeSuiteV101Patch/WAS_Node_Suite.py ADDED
@@ -0,0 +1,2430 @@
1
+ # By WASasquatch (Discord: WAS#0263)
2
+ #
3
+ # Copyright 2023 Jordan Thompson (WASasquatch)
4
+ #
5
+ # Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to
6
+ # deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
7
+ # and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
8
+ #
9
+ # The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
10
+ #
11
+ # THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
12
+ # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
13
+ # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
14
+ # THE SOFTWARE.
15
+
16
+
17
+ import torch, os, sys, subprocess, random, math, hashlib, json, time
18
+ import torch.nn as nn
19
+ import torchvision.transforms as transforms
20
+ import numpy as np
21
+ from PIL import Image, ImageFilter, ImageEnhance, ImageOps, ImageDraw, ImageChops
22
+ from PIL.PngImagePlugin import PngInfo
23
+ from urllib.request import urlopen
24
+
25
+ sys.path.insert(0, os.path.join(os.path.dirname(os.path.realpath(__file__)), "comfy"))
26
+ sys.path.append('../ComfyUI')
27
+
28
+ import comfy.samplers
29
+ import comfy.sd
30
+ import comfy.utils
31
+
32
+ import comfy_extras.clip_vision
33
+
34
+ import model_management
35
+ import importlib
36
+
37
+ import nodes
38
+
39
+ # GLOBALS
40
+ MIDAS_INSTALLED = False
41
+
42
+ #! FUNCTIONS
43
+
44
+ # Freeze PIP modules
45
+ def packages():
46
+ import sys, subprocess
47
+ return [r.decode().split('==')[0] for r in subprocess.check_output([sys.executable, '-m', 'pip', 'freeze']).split()]
48
+
49
+ # Tensor to PIL
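# These two helpers assume ComfyUI-style IMAGE tensors: float32 values in the 0..1 range with a
# leading batch dimension (B, H, W, C). squeeze() drops that batch axis for PIL, and
# unsqueeze(0) restores it on the way back.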
50
+ def tensor2pil(image):
51
+ return Image.fromarray(np.clip(255. * image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8))
52
+
53
+ # Convert PIL to Tensor
54
+ def pil2tensor(image):
55
+ return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0)
56
+
57
+ # PIL Hex
58
+ def pil2hex(image):
59
+ return hashlib.sha256(np.array(tensor2pil(image)).astype(np.uint16).tobytes()).hexdigest()
60
+
61
+ # Median Filter
62
+ def medianFilter(img, diameter, sigmaColor, sigmaSpace):
63
+ import cv2 as cv
64
+ diameter = int(diameter); sigmaColor = int(sigmaColor); sigmaSpace = int(sigmaSpace)
65
+ img = img.convert('RGB')
66
+ img = cv.cvtColor(np.array(img), cv.COLOR_RGB2BGR)
67
+ img = cv.bilateralFilter(img, diameter, sigmaColor, sigmaSpace)
68
+ img = cv.cvtColor(np.array(img), cv.COLOR_BGR2RGB)
69
+ return Image.fromarray(img).convert('RGB')
70
+
71
+ # INSTALLATION CLEANUP
72
+ # Delete legacy nodes
73
+ legacy_was_nodes = ['fDOF_WAS.py','Image_Blank_WAS.py','Image_Blend_WAS.py','Image_Canny_Filter_WAS.py', 'Canny_Filter_WAS.py','Image_Combine_WAS.py','Image_Edge_Detection_WAS.py', 'Image_Film_Grain_WAS.py', 'Image_Filters_WAS.py', 'Image_Flip_WAS.py','Image_Nova_Filter_WAS.py','Image_Rotate_WAS.py','Image_Style_Filter_WAS.py','Latent_Noise_Injection_WAS.py','Latent_Upscale_WAS.py','MiDaS_Depth_Approx_WAS.py','NSP_CLIPTextEncoder.py','Samplers_WAS.py']
74
+ legacy_was_nodes_found = []
75
+ f_disp = False
76
+ for f in legacy_was_nodes:
77
+ node_path_dir = os.getcwd()+'/ComfyUI/custom_nodes/'
78
+ file = f'{node_path_dir}{f}'
79
+ if os.path.exists(file):
80
+ import zipfile
81
+ if not f_disp:
82
+ print('\033[34mWAS Node Suite:\033[0m Found legacy nodes. Archiving legacy nodes...')
83
+ f_disp = True
84
+ legacy_was_nodes_found.append(file)
85
+ if legacy_was_nodes_found:
86
+ from os.path import basename
87
+ archive = zipfile.ZipFile(f'{node_path_dir}WAS_Legacy_Nodes_Backup_{round(time.time())}.zip', "w")
88
+ for f in legacy_was_nodes_found:
89
+ archive.write(f, basename(f))
90
+ try:
91
+ os.remove(f)
92
+ except OSError:
93
+ pass
94
+ archive.close()
95
+ if f_disp:
96
+ print('\033[34mWAS Node Suite:\033[0m Legacy cleanup complete.')
97
+
98
+ #! IMAGE FILTER NODES
99
+
100
+ # IMAGE FILTER ADJUSTMENTS
101
+
102
+ class WAS_Image_Filters:
103
+ def __init__(self):
104
+ pass
105
+
106
+ @classmethod
107
+ def INPUT_TYPES(cls):
108
+ return {
109
+ "required": {
110
+ "image": ("IMAGE",),
111
+ "brightness": ("FLOAT", {"default": 0.0, "min": -1.0, "max": 1.0, "step": 0.01}),
112
+ "contrast": ("FLOAT", {"default": 1.0, "min": -1.0, "max": 2.0, "step": 0.01}),
113
+ "saturation": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 5.0, "step": 0.01}),
114
+ "sharpness": ("FLOAT", {"default": 1.0, "min": -5.0, "max": 5.0, "step": 0.01}),
115
+ "blur": ("INT", {"default": 0, "min": 0, "max": 16, "step": 1}),
116
+ "gaussian_blur": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1024.0, "step": 0.1}),
117
+ "edge_enhance": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.01}),
118
+ },
119
+ }
120
+
121
+ RETURN_TYPES = ("IMAGE",)
122
+ FUNCTION = "image_filters"
123
+
124
+ CATEGORY = "WAS Suite/Image"
125
+
126
+ def image_filters(self, image, brightness, contrast, saturation, sharpness, blur, gaussian_blur, edge_enhance):
127
+
128
+ pil_image = None
129
+
130
+ # Apply NP Adjustments
131
+ if brightness > 0.0 or brightness < 0.0:
132
+ # Apply brightness
133
+ image = np.clip(image + brightness, 0.0, 1.0)
134
+
135
+ if contrast > 1.0 or contrast < 1.0:
136
+ # Apply contrast
137
+ image = np.clip(image * contrast, 0.0, 1.0)
138
+
139
+ # Apply PIL Adjustments
140
+ if saturation > 1.0 or saturation < 1.0:
141
+ #PIL Image
142
+ pil_image = tensor2pil(image)
143
+ # Apply saturation
144
+ pil_image = ImageEnhance.Color(pil_image).enhance(saturation)
145
+
146
+ if sharpness > 1.0 or sharpness < 1.0:
147
+ # Assign or create PIL Image
148
+ pil_image = pil_image if pil_image else tensor2pil(image)
149
+ # Apply sharpness
150
+ pil_image = ImageEnhance.Sharpness(pil_image).enhance(sharpness)
151
+
152
+ if blur > 0:
153
+ # Assign or create PIL Image
154
+ pil_image = pil_image if pil_image else tensor2pil(image)
155
+ # Apply blur
156
+ for _ in range(blur):
157
+ pil_image = pil_image.filter(ImageFilter.BLUR)
158
+
159
+ if gaussian_blur > 0.0:
160
+ # Assign or create PIL Image
161
+ pil_image = pil_image if pil_image else tensor2pil(image)
162
+ # Apply Gaussian blur
163
+ pil_image = pil_image.filter(ImageFilter.GaussianBlur(radius = gaussian_blur))
164
+
165
+ if edge_enhance > 0.0:
166
+ # Assign or create PIL Image
167
+ pil_image = pil_image if pil_image else tensor2pil(image)
168
+ # Edge Enhancement
169
+ edge_enhanced_img = pil_image.filter(ImageFilter.EDGE_ENHANCE_MORE)
170
+ # Blend Mask
171
+ blend_mask = Image.new(mode = "L", size = pil_image.size, color = (round(edge_enhance * 255)))
172
+ # Composite Original and Enhanced Version
173
+ pil_image = Image.composite(edge_enhanced_img, pil_image, blend_mask)
174
+ # Clean-up
175
+ del blend_mask, edge_enhanced_img
176
+
177
+ # Output image
178
+ out_image = ( pil2tensor(pil_image) if pil_image else image )
179
+
180
+ return ( out_image, )
181
+
182
+
183
+
184
+ # IMAGE STYLE FILTER
185
+
186
+ class WAS_Image_Style_Filter:
187
+ def __init__(self):
188
+ pass
189
+
190
+ @classmethod
191
+ def INPUT_TYPES(cls):
192
+ return {
193
+ "required": {
194
+ "image": ("IMAGE",),
195
+ "style": ([
196
+ "1977",
197
+ "aden",
198
+ "brannan",
199
+ "brooklyn",
200
+ "clarendon",
201
+ "earlybird",
202
+ "gingham",
203
+ "hudson",
204
+ "inkwell",
205
+ "kelvin",
206
+ "lark",
207
+ "lofi",
208
+ "maven",
209
+ "mayfair",
210
+ "moon",
211
+ "nashville",
212
+ "perpetua",
213
+ "reyes",
214
+ "rise",
215
+ "slumber",
216
+ "stinson",
217
+ "toaster",
218
+ "valencia",
219
+ "walden",
220
+ "willow",
221
+ "xpro2"
222
+ ],),
223
+ },
224
+ }
225
+
226
+ RETURN_TYPES = ("IMAGE",)
227
+ FUNCTION = "image_style_filter"
228
+
229
+ CATEGORY = "WAS Suite/Image"
230
+
231
+ def image_style_filter(self, image, style):
232
+
233
+ # Install Pilgram
234
+ if 'pilgram' not in packages():
235
+ print("\033[34mWAS NS:\033[0m Installing Pilgram...")
236
+ subprocess.check_call([sys.executable, '-m', 'pip', '-q', 'install', 'pilgram'])
237
+
238
+ # Import Pilgram module
239
+ import pilgram
240
+
241
+ # Convert image to PIL
242
+ image = tensor2pil(image)
243
+
244
+ # Apply blending
245
+ match style:
246
+ case "1977":
247
+ out_image = pilgram._1977(image)
248
+ case "aden":
249
+ out_image = pilgram.aden(image)
250
+ case "brannan":
251
+ out_image = pilgram.brannan(image)
252
+ case "brooklyn":
253
+ out_image = pilgram.brooklyn(image)
254
+ case "clarendon":
255
+ out_image = pilgram.clarendon(image)
256
+ case "earlybird":
257
+ out_image = pilgram.earlybird(image)
258
+ case "gingham":
259
+ out_image = pilgram.gingham(image)
260
+ case "hudson":
261
+ out_image = pilgram.hudson(image)
262
+ case "inkwell":
263
+ out_image = pilgram.inkwell(image)
264
+ case "kelvin":
265
+ out_image = pilgram.kelvin(image)
266
+ case "lark":
267
+ out_image = pilgram.lark(image)
268
+ case "lofi":
269
+ out_image = pilgram.lofi(image)
270
+ case "maven":
271
+ out_image = pilgram.maven(image)
272
+ case "mayfair":
273
+ out_image = pilgram.mayfair(image)
274
+ case "moon":
275
+ out_image = pilgram.moon(image)
276
+ case "nashville":
277
+ out_image = pilgram.nashville(image)
278
+ case "perpetua":
279
+ out_image = pilgram.perpetua(image)
280
+ case "reyes":
281
+ out_image = pilgram.reyes(image)
282
+ case "rise":
283
+ out_image = pilgram.rise(image)
284
+ case "slumber":
285
+ out_image = pilgram.slumber(image)
286
+ case "stinson":
287
+ out_image = pilgram.stinson(image)
288
+ case "toaster":
289
+ out_image = pilgram.toaster(image)
290
+ case "valencia":
291
+ out_image = pilgram.valencia(image)
292
+ case "walden":
293
+ out_image = pilgram.walden(image)
294
+ case "willow":
295
+ out_image = pilgram.willow(image)
296
+ case "xpro2":
297
+ out_image = pilgram.xpro2(image)
298
+ case _:
299
+ out_image = image
300
+
301
+ out_image = out_image.convert("RGB")
302
+
303
+ return ( torch.from_numpy(np.array(out_image).astype(np.float32) / 255.0).unsqueeze(0), )
304
+
305
+
306
+ # COMBINE NODE
307
+
308
+ class WAS_Image_Blending_Mode:
309
+ def __init__(self):
310
+ pass
311
+
312
+ @classmethod
313
+ def INPUT_TYPES(cls):
314
+ return {
315
+ "required": {
316
+ "image_a": ("IMAGE",),
317
+ "image_b": ("IMAGE",),
318
+ "mode": ([
319
+ "add",
320
+ "color",
321
+ "color_burn",
322
+ "color_dodge",
323
+ "darken",
324
+ "difference",
325
+ "exclusion",
326
+ "hard_light",
327
+ "hue",
328
+ "lighten",
329
+ "multiply",
330
+ "overlay",
331
+ "screen",
332
+ "soft_light"
333
+ ],),
334
+ },
335
+ }
336
+
337
+ RETURN_TYPES = ("IMAGE",)
338
+ FUNCTION = "image_blending_mode"
339
+
340
+ CATEGORY = "WAS Suite/Image"
341
+
342
+ def image_blending_mode(self, image_a, image_b, mode):
343
+
344
+ # Install Pilgram
345
+ if 'pilgram' not in packages():
346
+ print("\033[34mWAS NS:\033[0m Installing Pilgram...")
347
+ subprocess.check_call([sys.executable, '-m', 'pip', '-q', 'install', 'pilgram'])
348
+
349
+ # Import Pilgram module
350
+ import pilgram
351
+
352
+ # Convert images to PIL
353
+ img_a = tensor2pil(image_a)
354
+ img_b = tensor2pil(image_b)
355
+
356
+ # Apply blending
357
+ match mode:
358
+ case "color":
359
+ out_image = pilgram.css.blending.color(img_a, img_b)
360
+ case "color_burn":
361
+ out_image = pilgram.css.blending.color_burn(img_a, img_b)
362
+ case "color_dodge":
363
+ out_image = pilgram.css.blending.color_dodge(img_a, img_b)
364
+ case "darken":
365
+ out_image = pilgram.css.blending.darken(img_a, img_b)
366
+ case "difference":
367
+ out_image = pilgram.css.blending.difference(img_a, img_b)
368
+ case "exclusion":
369
+ out_image = pilgram.css.blending.exclusion(img_a, img_b)
370
+ case "hard_light":
371
+ out_image = pilgram.css.blending.hard_light(img_a, img_b)
372
+ case "hue":
373
+ out_image = pilgram.css.blending.hue(img_a, img_b)
374
+ case "lighten":
375
+ out_image = pilgram.css.blending.lighten(img_a, img_b)
376
+ case "multiply":
377
+ out_image = pilgram.css.blending.multiply(img_a, img_b)
378
+ case "add":
379
+ out_image = pilgram.css.blending.normal(img_a, img_b)
380
+ case "overlay":
381
+ out_image = pilgram.css.blending.overlay(img_a, img_b)
382
+ case "screen":
383
+ out_image = pilgram.css.blending.screen(img_a, img_b)
384
+ case "soft_light":
385
+ out_image = pilgram.css.blending.soft_light(img_a, img_b)
386
+ case _:
387
+ out_image = img_a
388
+
389
+ out_image = out_image.convert("RGB")
390
+
391
+ return ( pil2tensor(out_image), )
392
+
393
+ # IMAGE BLEND NODE
394
+
395
+ class WAS_Image_Blend:
396
+ def __init__(self):
397
+ pass
398
+
399
+ @classmethod
400
+ def INPUT_TYPES(cls):
401
+ return {
402
+ "required": {
403
+ "image_a": ("IMAGE",),
404
+ "image_b": ("IMAGE",),
405
+ "blend_percentage": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
406
+ },
407
+ }
408
+
409
+ RETURN_TYPES = ("IMAGE",)
410
+ FUNCTION = "image_blend"
411
+
412
+ CATEGORY = "WAS Suite/Image"
413
+
414
+ def image_blend(self, image_a, image_b, blend_percentage):
415
+
416
+ # Convert images to PIL
417
+ img_a = tensor2pil(image_a)
418
+ img_b = tensor2pil(image_b)
419
+
420
+ # Blend image
421
+ blend_mask = Image.new(mode = "L", size = img_a.size, color = (round(blend_percentage * 255)))
422
+ blend_mask = ImageOps.invert(blend_mask)
423
+ img_result = Image.composite(img_a, img_b, blend_mask)
424
+
425
+ del img_a, img_b, blend_mask
426
+
427
+ return ( pil2tensor(img_result), )
428
+
429
+
430
+
431
+ # IMAGE THRESHOLD NODE
432
+
433
+ class WAS_Image_Threshold:
434
+ def __init__(self):
435
+ pass
436
+
437
+ @classmethod
438
+ def INPUT_TYPES(cls):
439
+ return {
440
+ "required": {
441
+ "image": ("IMAGE",),
442
+ "threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
443
+ },
444
+ }
445
+
446
+ RETURN_TYPES = ("IMAGE",)
447
+ FUNCTION = "image_threshold"
448
+
449
+ CATEGORY = "WAS Suite/Image"
450
+
451
+ def image_threshold(self, image, threshold=0.5):
452
+ return ( pil2tensor(self.apply_threshold(tensor2pil(image), threshold)), )
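# NOTE: image_threshold above calls self.apply_threshold, which this patch never defines, so the
# node would raise an AttributeError as uploaded. The method below is a minimal sketch of one
# plausible implementation (an assumption, not taken from the uploaded file): a simple grayscale
# point threshold.
def apply_threshold(self, input_image, threshold=0.5):
    # Convert to grayscale, then map pixels at or above the cutoff to white and the rest to black
    grayscale_image = input_image.convert('L')
    threshold_value = int(threshold * 255)
    return grayscale_image.point(lambda x: 255 if x >= threshold_value else 0)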
453
+
454
+
455
+
456
+ # IMAGE CHROMATIC ABERRATION NODE
457
+
458
+ class WAS_Image_Chromatic_Aberration:
459
+
460
+ def __init__(self):
461
+ pass
462
+
463
+ @classmethod
464
+ def INPUT_TYPES(cls):
465
+ return {
466
+ "required": {
467
+ "image": ("IMAGE",),
468
+ "red_offset": ("INT", {"default": 2, "min": -255, "max": 255, "step": 1}),
469
+ "green_offset": ("INT", {"default": -1, "min": -255, "max": 255, "step": 1}),
470
+ "blue_offset": ("INT", {"default": 1, "min": -255, "max": 255, "step": 1}),
471
+ "intensity": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
472
+ },
473
+ }
474
+
475
+ RETURN_TYPES = ("IMAGE",)
476
+ FUNCTION = "image_chromatic_aberration"
477
+
478
+ CATEGORY = "WAS Suite/Image"
479
+
480
+ def image_chromatic_aberration(self, image, red_offset=4, green_offset=2, blue_offset=0, intensity=1):
481
+ return ( pil2tensor(self.apply_chromatic_aberration(tensor2pil(image), red_offset, green_offset, blue_offset, intensity)), )
482
+
483
+
484
+ def apply_chromatic_aberration(self, img, r_offset, g_offset, b_offset, intensity):
485
+ # split the channels of the image
486
+ r, g, b = img.split()
487
+
488
+ # apply the offset to each channel
489
+ r_offset_img = ImageChops.offset(r, r_offset, 0)
490
+ g_offset_img = ImageChops.offset(g, 0, g_offset)
491
+ b_offset_img = ImageChops.offset(b, 0, b_offset)
492
+
493
+ # blend the original image with the offset channels
494
+ blended_r = ImageChops.blend(r, r_offset_img, intensity)
495
+ blended_g = ImageChops.blend(g, g_offset_img, intensity)
496
+ blended_b = ImageChops.blend(b, b_offset_img, intensity)
497
+
498
+ # merge the channels back into an RGB image
499
+ result = Image.merge("RGB", (blended_r, blended_g, blended_b))
500
+
501
+ return result
502
+
503
+
504
+
505
+ # IMAGE BLOOM FILTER
506
+
507
+ class WAS_Image_Bloom_Filter:
508
+ def __init__(self):
509
+ pass
510
+
511
+ @classmethod
512
+ def INPUT_TYPES(cls):
513
+ return {
514
+ "required": {
515
+ "image": ("IMAGE",),
516
+ "radius": ("FLOAT", {"default": 10, "min": 0.0, "max": 1024, "step": 0.1}),
517
+ "intensity": ("FLOAT", {"default": 1, "min": 0.0, "max": 1.0, "step": 0.1}),
518
+ },
519
+ }
520
+
521
+ RETURN_TYPES = ("IMAGE",)
522
+ FUNCTION = "image_bloom"
523
+
524
+ CATEGORY = "WAS Suite/Image"
525
+
526
+ def image_bloom(self, image, radius=0.5, intensity=1.0):
527
+ return ( pil2tensor(self.apply_bloom_filter(tensor2pil(image), radius, intensity)), )
528
+
529
+ def apply_bloom_filter(self, input_image, radius, bloom_factor):
530
+ # Apply a blur filter to the input image
531
+ blurred_image = input_image.filter(ImageFilter.GaussianBlur(radius=radius))
532
+
533
+ # Subtract the blurred image from the input image to create a high-pass filter
534
+ high_pass_filter = ImageChops.subtract(input_image, blurred_image)
535
+
536
+ # Create a blurred version of the bloom filter
537
+ bloom_filter = high_pass_filter.filter(ImageFilter.GaussianBlur(radius=radius*2))
538
+
539
+ # Adjust brightness and levels of bloom filter
540
+ bloom_filter = ImageEnhance.Brightness(bloom_filter).enhance(2.0)
541
+
542
+ # Multiply the bloom image with the bloom factor
543
+ bloom_filter = ImageChops.multiply(bloom_filter, Image.new('RGB', input_image.size, (int(255 * bloom_factor), int(255 * bloom_factor), int(255 * bloom_factor))))
544
+
545
+ # Multiply the bloom filter with the original image using the bloom factor
546
+ blended_image = ImageChops.screen(input_image, bloom_filter)
547
+
548
+ return blended_image
549
+
550
+
551
+
552
+ # IMAGE REMOVE COLOR
553
+
554
+ class WAS_Image_Remove_Color:
555
+ def __init__(self):
556
+ pass
557
+
558
+ @classmethod
559
+ def INPUT_TYPES(cls):
560
+ return {
561
+ "required": {
562
+ "image": ("IMAGE",),
563
+ "target_red": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
564
+ "target_green": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
565
+ "target_blue": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
566
+ "replace_red": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
567
+ "replace_green": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
568
+ "replace_blue": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
569
+ "clip_threshold": ("INT", {"default": 10, "min": 0, "max": 255, "step": 1}),
570
+ },
571
+ }
572
+
573
+ RETURN_TYPES = ("IMAGE",)
574
+ FUNCTION = "image_remove_color"
575
+
576
+ CATEGORY = "WAS Suite/Image"
577
+
578
+ def image_remove_color(self, image, clip_threshold=10, target_red=255, target_green=255, target_blue=255, replace_red=255, replace_green=255, replace_blue=255):
579
+ return ( pil2tensor(self.apply_remove_color(tensor2pil(image), clip_threshold, (target_red, target_green, target_blue), (replace_red, replace_green, replace_blue))), )
580
+
581
+ def apply_remove_color(self, image, threshold=10, color=(255, 255, 255), rep_color=(0, 0, 0)):
582
+ # Create a color image with the same size as the input image
583
+ color_image = Image.new('RGB', image.size, color)
584
+
585
+ # Calculate the difference between the input image and the color image
586
+ diff_image = ImageChops.difference(image, color_image)
587
+
588
+ # Convert the difference image to grayscale
589
+ gray_image = diff_image.convert('L')
590
+
591
+ # Apply a threshold to the grayscale difference image
592
+ mask_image = gray_image.point(lambda x: 255 if x > threshold else 0)
593
+
594
+ # Invert the mask image
595
+ mask_image = ImageOps.invert(mask_image)
596
+
597
+ # Apply the mask to the original image
598
+ result_image = Image.composite(Image.new('RGB', image.size, rep_color), image, mask_image)
599
+
600
+ return result_image
601
+
602
+
603
+ # IMAGE BLEND MASK NODE
604
+
605
+ class WAS_Image_Blend_Mask:
606
+ def __init__(self):
607
+ pass
608
+
609
+ @classmethod
610
+ def INPUT_TYPES(cls):
611
+ return {
612
+ "required": {
613
+ "image_a": ("IMAGE",),
614
+ "image_b": ("IMAGE",),
615
+ "mask": ("IMAGE",),
616
+ "blend_percentage": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}),
617
+ },
618
+ }
619
+
620
+ RETURN_TYPES = ("IMAGE",)
621
+ FUNCTION = "image_blend_mask"
622
+
623
+ CATEGORY = "WAS Suite/Image"
624
+
625
+ def image_blend_mask(self, image_a, image_b, mask, blend_percentage):
626
+
627
+ # Convert images to PIL
628
+ img_a = tensor2pil(image_a)
629
+ img_b = tensor2pil(image_b)
630
+ mask = ImageOps.invert(tensor2pil(mask).convert('L'))
631
+
632
+ # Mask image
633
+ masked_img = Image.composite(img_a, img_b, mask.resize(img_a.size))
634
+
635
+ # Blend image
636
+ blend_mask = Image.new(mode = "L", size = img_a.size, color = (round(blend_percentage * 255)))
637
+ blend_mask = ImageOps.invert(blend_mask)
638
+ img_result = Image.composite(img_a, masked_img, blend_mask)
639
+
640
+ del img_a, img_b, blend_mask, mask
641
+
642
+ return ( pil2tensor(img_result), )
643
+
644
+
645
+ # IMAGE BLANK NODE
646
+
647
+
648
+ class WAS_Image_Blank:
649
+ def __init__(self):
650
+ pass
651
+
652
+ @classmethod
653
+ def INPUT_TYPES(s):
654
+ return {
655
+ "required": {
656
+ "width": ("INT", {"default": 512, "min": 8, "max": 4096, "step": 1}),
657
+ "height": ("INT", {"default": 512, "min": 8, "max": 4096, "step": 1}),
658
+ "red": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
659
+ "green": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
660
+ "blue": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
661
+ }
662
+ }
663
+ RETURN_TYPES = ("IMAGE",)
664
+ FUNCTION = "blank_image"
665
+
666
+ CATEGORY = "WAS Suite/Image"
667
+
668
+ def blank_image(self, width, height, red, green, blue):
669
+
670
+ # Ensure multiples
671
+ width = ( width // 8 ) * 8
672
+ height = ( height // 8 ) * 8
673
+
674
+ # Blend image
675
+ blank = Image.new(mode = "RGB", size = (width, height), color = (red, green, blue))
676
+
677
+ return ( pil2tensor(blank), )
678
+
679
+
680
+ # IMAGE HIGH PASS
681
+
682
+ class WAS_Image_High_Pass_Filter:
683
+ def __init__(self):
684
+ pass
685
+
686
+ @classmethod
687
+ def INPUT_TYPES(s):
688
+ return {
689
+ "required": {
690
+ "image": ("IMAGE",),
691
+ "radius": ("INT", {"default": 10, "min": 1, "max": 500, "step": 1}),
692
+ "strength": ("FLOAT", {"default": 1.5, "min": 0.0, "max": 255.0, "step": 0.1})
693
+ }
694
+ }
695
+ RETURN_TYPES = ("IMAGE",)
696
+ FUNCTION = "high_pass"
697
+
698
+ CATEGORY = "WAS Suite/Image"
699
+
700
+ def high_pass(self, image, radius=10, strength=1.5):
701
+ hpf = tensor2pil(image).convert('L')
702
+ return ( pil2tensor(self.apply_hpf(hpf.convert('RGB'), radius, strength)), )
703
+
704
+ def apply_hpf(self, img, radius=10, strength=1.5):
705
+
706
+ # pil to numpy
707
+ img_arr = np.array(img).astype('float')
708
+
709
+ # Apply a Gaussian blur with the given radius
710
+ blurred_arr = np.array(img.filter(ImageFilter.GaussianBlur(radius=radius))).astype('float')
711
+
712
+ # Apply the High Pass Filter
713
+ hpf_arr = img_arr - blurred_arr
714
+ hpf_arr = np.clip(hpf_arr * strength, 0, 255).astype('uint8')
715
+
716
+ # Convert the numpy array back to a PIL image and return it
717
+ return Image.fromarray(hpf_arr, mode='RGB')
718
+
719
+
720
+ # IMAGE LEVELS NODE
721
+
722
+ class WAS_Image_Levels:
723
+ def __init__(self):
724
+ pass
725
+
726
+ @classmethod
727
+ def INPUT_TYPES(s):
728
+ return {
729
+ "required": {
730
+ "image": ("IMAGE",),
731
+ "black_level": ("FLOAT", {"default": 0.0, "min": 0.0, "max":255.0, "step": 0.1}),
732
+ "mid_level": ("FLOAT", {"default": 127.5, "min": 0.0, "max": 255.0, "step": 0.1}),
733
+ "white_level": ("FLOAT", {"default": 255, "min": 0.0, "max": 255.0, "step": 0.1}),
734
+ }
735
+ }
736
+ RETURN_TYPES = ("IMAGE",)
737
+ FUNCTION = "apply_image_levels"
738
+
739
+ CATEGORY = "WAS Suite/Image"
740
+
741
+ def apply_image_levels(self, image, black_level, mid_level, white_level):
742
+
743
+ # Convert image to PIL
744
+ image = tensor2pil(image)
745
+
746
+ #apply image levels
747
+ #image = self.adjust_levels(image, black_level, mid_level, white_level)
748
+
749
+ levels = self.AdjustLevels(black_level, mid_level, white_level)
750
+ image = levels.adjust(image)
751
+
752
+ # Return adjust image tensor
753
+ return ( pil2tensor(image), )
754
+
755
+ def adjust_levels(self, image, black=0.0, mid=1.0, white=255):
756
+ """
757
+ Adjust the black, mid, and white levels of an RGB image.
758
+ """
759
+ # Create a new empty image with the same size and mode as the original image
760
+ result = Image.new(image.mode, image.size)
761
+
762
+ # Check that the mid value is within the valid range
763
+ if mid < 0 or mid > 1:
764
+ raise ValueError("mid value must be between 0 and 1")
765
+
766
+ # Create a lookup table to map the pixel values to new values
767
+ lut = []
768
+ for i in range(256):
769
+ if i < black:
770
+ lut.append(0)
771
+ elif i > white:
772
+ lut.append(255)
773
+ else:
774
+ lut.append(int(((i - black) / (white - black)) ** mid * 255.0))
775
+
776
+ # Split the image into its red, green, and blue channels
777
+ r, g, b = image.split()
778
+
779
+ # Apply the lookup table to each channel
780
+ r = r.point(lut)
781
+ g = g.point(lut)
782
+ b = b.point(lut)
783
+
784
+ # Merge the channels back into an RGB image
785
+ result = Image.merge("RGB", (r, g, b))
786
+
787
+ return result
788
+
789
+ class AdjustLevels:
790
+ def __init__(self, min_level, mid_level, max_level):
791
+ self.min_level = min_level
792
+ self.mid_level = mid_level
793
+ self.max_level = max_level
794
+
795
+ def adjust(self, im):
796
+ # load the image
797
+
798
+ # convert the image to a numpy array
799
+ im_arr = np.array(im)
800
+
801
+ # apply the min level adjustment
802
+ im_arr[im_arr < self.min_level] = self.min_level
803
+
804
+ # apply the mid level adjustment
805
+ im_arr = (im_arr - self.min_level) * (255 / (self.max_level - self.min_level))
806
+ im_arr[im_arr < 0] = 0
807
+ im_arr[im_arr > 255] = 255
808
+ im_arr = im_arr.astype(np.uint8)
809
+
810
+ # apply the max level adjustment
811
+ im = Image.fromarray(im_arr)
812
+ im = ImageOps.autocontrast(im, cutoff=self.max_level)
813
+
814
+ return im
815
+
816
+
817
+ # FILM GRAIN NODE
818
+
819
+ class WAS_Film_Grain:
820
+ def __init__(self):
821
+ pass
822
+
823
+ @classmethod
824
+ def INPUT_TYPES(s):
825
+ return {
826
+ "required": {
827
+ "image": ("IMAGE",),
828
+ "density": ("FLOAT", {"default": 1.0, "min": 0.01, "max": 1.0, "step": 0.01}),
829
+ "intensity": ("FLOAT", {"default": 1.0, "min": 0.01, "max": 1.0, "step": 0.01}),
830
+ "highlights": ("FLOAT", {"default": 1.0, "min": 0.01, "max": 255.0, "step": 0.01}),
831
+ "supersample_factor": ("INT", {"default": 4, "min": 1, "max": 8, "step": 1})
832
+ }
833
+ }
834
+ RETURN_TYPES = ("IMAGE",)
835
+ FUNCTION = "film_grain"
836
+
837
+ CATEGORY = "WAS Suite/Image"
838
+
839
+ def film_grain(self, image, density, intensity, highlights, supersample_factor):
840
+ return ( pil2tensor(self.apply_film_grain(tensor2pil(image), density, intensity, highlights, supersample_factor)), )
841
+
842
+ def apply_film_grain(self, img, density=0.1, intensity=1.0, highlights=1.0, supersample_factor = 4):
843
+ """
844
+ Apply grayscale noise with specified density, intensity, and highlights to a PIL image.
845
+ """
846
+ # Convert the image to grayscale
847
+ img_gray = img.convert('L')
848
+
849
+ # Super Resolution noise image
850
+ original_size = img.size
851
+ img_gray = img_gray.resize(((img.size[0] * supersample_factor), (img.size[1] * supersample_factor)), Image.Resampling(2))
852
+
853
+ # Calculate the number of noise pixels to add
854
+ num_pixels = int(density * img_gray.size[0] * img_gray.size[1])
855
+
856
+ # Create a list of noise pixel positions
857
+ noise_pixels = []
858
+ for i in range(num_pixels):
859
+ x = random.randint(0, img_gray.size[0]-1)
860
+ y = random.randint(0, img_gray.size[1]-1)
861
+ noise_pixels.append((x, y))
862
+
863
+ # Apply the noise to the grayscale image
864
+ for x, y in noise_pixels:
865
+ value = random.randint(0, 255)
866
+ img_gray.putpixel((x, y), value)
867
+
868
+ # Convert the grayscale image back to RGB
869
+ img_noise = img_gray.convert('RGB')
870
+
871
+ # Blur noise image
872
+ img_noise = img_noise.filter(ImageFilter.GaussianBlur(radius = 0.125))
873
+
874
+ # Downsize noise image
875
+ img_noise = img_noise.resize(original_size, Image.Resampling(1))
876
+
877
+ # Sharpen super resolution result
878
+ img_noise = img_noise.filter(ImageFilter.EDGE_ENHANCE_MORE)
879
+
880
+ # Blend the noisy color image with the original color image
881
+ img_final = Image.blend(img, img_noise, intensity)
882
+
883
+ # Adjust the highlights
884
+ enhancer = ImageEnhance.Brightness(img_final)
885
+ img_highlights = enhancer.enhance(highlights)
886
+
887
+ # Return the final image
888
+ return img_highlights
889
+
890
+
891
+ # IMAGE FLIP NODE
892
+
893
+ class WAS_Image_Flip:
894
+ def __init__(self):
895
+ pass
896
+
897
+ @classmethod
898
+ def INPUT_TYPES(cls):
899
+ return {
900
+ "required": {
901
+ "image": ("IMAGE",),
902
+ "mode": (["horizontal", "vertical",],),
903
+ },
904
+ }
905
+
906
+ RETURN_TYPES = ("IMAGE",)
907
+ FUNCTION = "image_flip"
908
+
909
+ CATEGORY = "WAS Suite/Image"
910
+
911
+ def image_flip(self, image, mode):
912
+
913
+ # PIL Image
914
+ image = tensor2pil(image)
915
+
916
+ # Rotate Image
917
+ if mode == 'horizontal':
918
+ image = image.transpose(0)
919
+ if mode == 'vertical':
920
+ image = image.transpose(1)
921
+
922
+ return ( pil2tensor(image), )
923
+
924
+
925
+ class WAS_Image_Rotate:
926
+ def __init__(self):
927
+ pass
928
+
929
+ @classmethod
930
+ def INPUT_TYPES(cls):
931
+ return {
932
+ "required": {
933
+ "image": ("IMAGE",),
934
+ "mode": (["transpose", "internal",],),
935
+ "rotation": ("INT", {"default": 0, "min": 0, "max": 360, "step": 90}),
936
+ "sampler": (["nearest", "bilinear", "bicubic"],),
937
+ },
938
+ }
939
+
940
+ RETURN_TYPES = ("IMAGE",)
941
+ FUNCTION = "image_rotate"
942
+
943
+ CATEGORY = "WAS Suite/Image"
944
+
945
+ def image_rotate(self, image, mode, rotation, sampler):
946
+
947
+ # PIL Image
948
+ image = tensor2pil(image)
949
+
950
+ # Check rotation
951
+ if rotation > 360:
952
+ rotation = int(360)
953
+ if (rotation % 90 != 0):
954
+ rotation = int((rotation//90)*90);
955
+
956
+ # Set Sampler
957
+ match sampler:
958
+ case 'nearest':
959
+ sampler = Image.NEAREST
960
+ case 'bicubic':
961
+ sampler = Image.BICUBIC
962
+ case 'bilinear':
963
+ sampler = Image.BILINEAR
964
+
965
+ # Rotate Image
966
+ if mode == 'internal':
967
+ image = image.rotate(rotation, sampler)
968
+ else:
969
+ rot = int(rotation / 90)
970
+ for _ in range(rot):
971
+ image = image.transpose(2)
972
+
973
+ return ( torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0), )
974
+
975
+
976
+ # IMAGE NOVA SINE FILTER
977
+
978
+ class WAS_Image_Nova_Filter:
979
+ def __init__(self):
980
+ pass
981
+
982
+ @classmethod
983
+ def INPUT_TYPES(cls):
984
+ return {
985
+ "required": {
986
+ "image": ("IMAGE",),
987
+ "amplitude": ("FLOAT", {"default": 0.1, "min": 0.0, "max": 1.0, "step": 0.001}),
988
+ "frequency": ("FLOAT", {"default": 3.14, "min": 0.0, "max": 100.0, "step": 0.001}),
989
+ },
990
+ }
991
+
992
+ RETURN_TYPES = ("IMAGE",)
993
+ FUNCTION = "nova_sine"
994
+
995
+ CATEGORY = "WAS Suite/Image"
996
+
997
+ def nova_sine(self, image, amplitude, frequency):
998
+
999
+ # Convert image to numpy
1000
+ img = tensor2pil(image)
1001
+
1002
+ # Convert the image to a numpy array
1003
+ img_array = np.array(img)
1004
+
1005
+ # Define a sine wave function
1006
+ def sine(x, freq, amp):
1007
+ return amp * np.sin(2 * np.pi * freq * x)
1008
+
1009
+ # Calculate the sampling frequency of the image
1010
+ resolution = img.info.get('dpi') # PPI
1011
+ physical_size = img.size # pixels
1012
+
1013
+ if resolution is not None:
1014
+ # Convert PPI to pixels per millimeter (PPM)
1015
+ ppm = 25.4 / resolution
1016
+ physical_size = tuple(int(pix * ppm) for pix in physical_size)
1017
+
1018
+ # Set the maximum frequency for the sine wave
1019
+ max_freq = img.width / 2
1020
+
1021
+ # Ensure frequency isn't outside visual representable range
1022
+ if frequency > max_freq:
1023
+ frequency = max_freq
1024
+
1025
+ # Apply levels to the image using the sine function
1026
+ for i in range(img_array.shape[0]):
1027
+ for j in range(img_array.shape[1]):
1028
+ for k in range(img_array.shape[2]):
1029
+ img_array[i,j,k] = int(sine(img_array[i,j,k]/255, frequency, amplitude) * 255)
1030
+
1031
+ return ( torch.from_numpy(img_array.astype(np.float32) / 255.0).unsqueeze(0), )
1032
+
1033
+
1034
+ # IMAGE CANNY FILTER
1035
+
1036
+
1037
+ class WAS_Canny_Filter:
1038
+ def __init__(self):
1039
+ pass
1040
+
1041
+ @classmethod
1042
+ def INPUT_TYPES(cls):
1043
+ return {
1044
+ "required": {
1045
+ "image": ("IMAGE",),
1046
+ "enable_threshold": (['false', 'true'],),
1047
+ "threshold_low": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.01}),
1048
+ "threshold_high": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
1049
+ },
1050
+ }
1051
+
1052
+ RETURN_TYPES = ("IMAGE",)
1053
+ FUNCTION = "canny_filter"
1054
+
1055
+ CATEGORY = "WAS Suite/Image"
1056
+
1057
+ def canny_filter(self, image, threshold_low, threshold_high, enable_threshold):
1058
+
1059
+ self.install_opencv()
1060
+
1061
+ if enable_threshold == 'false':
1062
+ threshold_low = None
1063
+ threshold_high = None
1064
+
1065
+ image_canny = Image.fromarray(self.Canny_detector(255. * image.cpu().numpy().squeeze(), threshold_low, threshold_high)).convert('RGB')
1066
+
1067
+ return ( pil2tensor(image_canny), )
1068
+
1069
+ # Defining the Canny Detector function
1070
+ # From: https://www.geeksforgeeks.org/implement-canny-edge-detector-in-python-using-opencv/
1071
+
1072
+ # here weak_th and strong_th are thresholds for
1073
+ # double thresholding step
1074
+ def Canny_detector(self, img, weak_th = None, strong_th = None):
1075
+
1076
+ import cv2
1077
+
1078
+ # conversion of image to grayscale
1079
+ img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
1080
+
1081
+ # Noise reduction step
1082
+ img = cv2.GaussianBlur(img, (5, 5), 1.4)
1083
+
1084
+ # Calculating the gradients
1085
+ gx = cv2.Sobel(np.float32(img), cv2.CV_64F, 1, 0, 3)
1086
+ gy = cv2.Sobel(np.float32(img), cv2.CV_64F, 0, 1, 3)
1087
+
1088
+ # Conversion of Cartesian coordinates to polar
1089
+ mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees = True)
1090
+
1091
+ # setting the minimum and maximum thresholds
1092
+ # for double thresholding
1093
+ mag_max = np.max(mag)
1094
+ if not weak_th:weak_th = mag_max * 0.1
1095
+ if not strong_th:strong_th = mag_max * 0.5
1096
+
1097
+ # getting the dimensions of the input image
1098
+ height, width = img.shape
1099
+
1100
+ # Looping through every pixel of the grayscale
1101
+ # image
1102
+ for i_x in range(width):
1103
+ for i_y in range(height):
1104
+
1105
+ grad_ang = ang[i_y, i_x]
1106
+ grad_ang = abs(grad_ang-180) if abs(grad_ang)>180 else abs(grad_ang)
1107
+
1108
+ # selecting the neighbours of the target pixel
1109
+ # according to the gradient direction
1110
+ # In the x axis direction
1111
+ if grad_ang<= 22.5:
1112
+ neighb_1_x, neighb_1_y = i_x-1, i_y
1113
+ neighb_2_x, neighb_2_y = i_x + 1, i_y
1114
+
1115
+ # top right (diagonal-1) direction
1116
+ elif grad_ang>22.5 and grad_ang<=(22.5 + 45):
1117
+ neighb_1_x, neighb_1_y = i_x-1, i_y-1
1118
+ neighb_2_x, neighb_2_y = i_x + 1, i_y + 1
1119
+
1120
+ # In y-axis direction
1121
+ elif grad_ang>(22.5 + 45) and grad_ang<=(22.5 + 90):
1122
+ neighb_1_x, neighb_1_y = i_x, i_y-1
1123
+ neighb_2_x, neighb_2_y = i_x, i_y + 1
1124
+
1125
+ # top left (diagonal-2) direction
1126
+ elif grad_ang>(22.5 + 90) and grad_ang<=(22.5 + 135):
1127
+ neighb_1_x, neighb_1_y = i_x-1, i_y + 1
1128
+ neighb_2_x, neighb_2_y = i_x + 1, i_y-1
1129
+
1130
+ # Now it restarts the cycle
1131
+ elif grad_ang>(22.5 + 135) and grad_ang<=(22.5 + 180):
1132
+ neighb_1_x, neighb_1_y = i_x-1, i_y
1133
+ neighb_2_x, neighb_2_y = i_x + 1, i_y
1134
+
1135
+ # Non-maximum suppression step
1136
+ if width>neighb_1_x>= 0 and height>neighb_1_y>= 0:
1137
+ if mag[i_y, i_x]<mag[neighb_1_y, neighb_1_x]:
1138
+ mag[i_y, i_x]= 0
1139
+ continue
1140
+
1141
+ if width>neighb_2_x>= 0 and height>neighb_2_y>= 0:
1142
+ if mag[i_y, i_x]<mag[neighb_2_y, neighb_2_x]:
1143
+ mag[i_y, i_x]= 0
1144
+
1145
+ weak_ids = np.zeros_like(img)
1146
+ strong_ids = np.zeros_like(img)
1147
+ ids = np.zeros_like(img)
1148
+
1149
+ # double thresholding step
1150
+ for i_x in range(width):
1151
+ for i_y in range(height):
1152
+
1153
+ grad_mag = mag[i_y, i_x]
1154
+
1155
+ if grad_mag<weak_th:
1156
+ mag[i_y, i_x]= 0
1157
+ elif strong_th>grad_mag>= weak_th:
1158
+ ids[i_y, i_x]= 1
1159
+ else:
1160
+ ids[i_y, i_x]= 2
1161
+
1162
+ # finally returning the magnitude of
1163
+ # gradients of edges
1164
+ return mag
1165
+
1166
+ def install_opencv(self):
1167
+ if 'opencv-python' not in packages():
1168
+ print("\033[34mWAS NS:\033[0m Installing CV2...")
1169
+ subprocess.check_call([sys.executable, '-m', 'pip', '-q', 'install', 'opencv-python'])
1170
+
1171
+
1172
+ # IMAGE EDGE DETECTION
1173
+
1174
+ class WAS_Image_Edge:
1175
+ def __init__(self):
1176
+ pass
1177
+
1178
+ @classmethod
1179
+ def INPUT_TYPES(cls):
1180
+ return {
1181
+ "required": {
1182
+ "image": ("IMAGE",),
1183
+ "mode": (["normal", "laplacian"],),
1184
+ },
1185
+ }
1186
+
1187
+ RETURN_TYPES = ("IMAGE",)
1188
+ FUNCTION = "image_edges"
1189
+
1190
+ CATEGORY = "WAS Suite/Image"
1191
+
1192
+ def image_edges(self, image, mode):
1193
+
1194
+ # Convert image to PIL
1195
+ image = tensor2pil(image)
1196
+
1197
+ # Detect edges
1198
+ match mode:
1199
+ case "normal":
1200
+ image = image.filter(ImageFilter.FIND_EDGES)
1201
+ case "laplacian":
1202
+ image = image.filter(ImageFilter.Kernel((3, 3), (-1, -1, -1, -1, 8,
1203
+ -1, -1, -1, -1), 1, 0))
1204
+ case _:
1205
+ image = image
1206
+
1207
+ return ( torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0), )
1208
+
1209
+
1210
+ # IMAGE FDOF NODE
1211
+
1212
+ class WAS_Image_fDOF:
1213
+ def __init__(self):
1214
+ pass
1215
+
1216
+ @classmethod
1217
+ def INPUT_TYPES(cls):
1218
+ return {
1219
+ "required": {
1220
+ "image": ("IMAGE",),
1221
+ "depth": ("IMAGE",),
1222
+ "mode": (["mock","gaussian","box"],),
1223
+ "radius": ("INT", {"default": 8, "min": 1, "max": 128, "step": 1}),
1224
+ "samples": ("INT", {"default": 1, "min": 1, "max": 3, "step": 1}),
1225
+ },
1226
+ }
1227
+
1228
+ RETURN_TYPES = ("IMAGE",)
1229
+ FUNCTION = "fdof_composite"
1230
+
1231
+ CATEGORY = "WAS Suite/Image"
1232
+
1233
+ def fdof_composite(self, image, depth, radius, samples, mode):
1234
+
1235
+ if 'opencv-python' not in packages():
1236
+ print("\033[34mWAS NS:\033[0m Installing CV2...")
1237
+ subprocess.check_call([sys.executable, '-m', 'pip', '-q', 'install', 'opencv-python'])
1238
+
1239
+ import cv2 as cv
1240
+
1241
+ #Convert tensor to a PIL Image
1242
+ i = 255. * image.cpu().numpy().squeeze()
1243
+ img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
1244
+ d = 255. * depth.cpu().numpy().squeeze()
1245
+ depth_img = Image.fromarray(np.clip(d, 0, 255).astype(np.uint8))
1246
+
1247
+ #Apply Fake Depth of Field
1248
+ fdof_image = self.portraitBlur(img, depth_img, radius, samples, mode)
1249
+
1250
+ return ( torch.from_numpy(np.array(fdof_image).astype(np.float32) / 255.0).unsqueeze(0), )
1251
+
1252
+ def portraitBlur(self, img, mask, radius=5, samples=1, mode = 'mock'):
1253
+ mask = mask.resize(img.size).convert('L')
1254
+ if mode == 'mock':
1255
+ bimg = medianFilter(img, radius, (radius * 1500), 75)
1256
+ elif mode == 'gaussian':
1257
+ bimg = img.filter(ImageFilter.GaussianBlur(radius = radius))
1258
+ elif mode == 'box':
1259
+ bimg = img.filter(ImageFilter.BoxBlur(radius))
1260
+ bimg.convert(img.mode)
1261
+ rimg = None
1262
+ if samples > 1:
1263
+ for i in range(samples):
1264
+ if i == 0:
1265
+ rimg = Image.composite(img, bimg, mask)
1266
+ else:
1267
+ rimg = Image.composite(rimg, bimg, mask)
1268
+ else:
1269
+ rimg = Image.composite(img, bimg, mask).convert('RGB')
1270
+
1271
+ return rimg
1272
+
1273
+ # TODO: Implement lens_blur mode attempt
1274
+ def lens_blur(img, radius, amount, mask=None):
1275
+ """Applies a lens shape blur effect on an image.
1276
+
1277
+ Args:
1278
+ img (numpy.ndarray): The input image as a numpy array.
1279
+ radius (float): The radius of the lens shape.
1280
+ amount (float): The amount of blur to be applied.
1281
+ mask (numpy.ndarray): An optional mask image specifying where to apply the blur.
1282
+
1283
+ Returns:
1284
+ numpy.ndarray: The blurred image as a numpy array.
1285
+ """
1286
+ # Create a lens shape kernel.
1287
+ kernel = cv2.getGaussianKernel(ksize=int(radius * 10), sigma=0)
1288
+ kernel = np.dot(kernel, kernel.T)
1289
+
1290
+ # Normalize the kernel.
1291
+ kernel /= np.max(kernel)
1292
+
1293
+ # Create a circular mask for the kernel.
1294
+ mask_shape = (int(radius * 2), int(radius * 2))
1295
+ mask = np.ones(mask_shape) if mask is None else cv2.resize(mask, mask_shape, interpolation=cv2.INTER_LINEAR)
1296
+ mask = cv2.GaussianBlur(mask, (int(radius * 2) + 1, int(radius * 2) + 1), radius / 2)
1297
+ mask /= np.max(mask)
1298
+
1299
+ # Adjust kernel and mask size to match input image.
1300
+ ksize_x = img.shape[1] // (kernel.shape[1] + 1)
1301
+ ksize_y = img.shape[0] // (kernel.shape[0] + 1)
1302
+ kernel = cv2.resize(kernel, (ksize_x, ksize_y), interpolation=cv2.INTER_LINEAR)
1303
+ kernel = cv2.copyMakeBorder(kernel, 0, img.shape[0] - kernel.shape[0], 0, img.shape[1] - kernel.shape[1], cv2.BORDER_CONSTANT, value=0)
1304
+ mask = cv2.resize(mask, (ksize_x, ksize_y), interpolation=cv2.INTER_LINEAR)
1305
+ mask = cv2.copyMakeBorder(mask, 0, img.shape[0] - mask.shape[0], 0, img.shape[1] - mask.shape[1], cv2.BORDER_CONSTANT, value=0)
1306
+
1307
+ # Apply the lens shape blur effect on the image.
1308
+ blurred = cv2.filter2D(img, -1, kernel)
1309
+ blurred = cv2.filter2D(blurred, -1, mask * amount)
1310
+
1311
+ if mask is not None:
1312
+ # Apply the mask to the original image.
1313
+ mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
1314
+ img_masked = img * mask
1315
+ # Combine the masked image with the blurred image.
1316
+ blurred = img_masked * (1 - mask) + blurred
1317
+
1318
+ return blurred
1319
+
1320
+
1321
+ # IMAGE MEDIAN FILTER NODE
1322
+
1323
+ class WAS_Image_Median_Filter:
1324
+ def __init__(self):
1325
+ pass
1326
+
1327
+ @classmethod
1328
+ def INPUT_TYPES(cls):
1329
+ return {
1330
+ "required": {
1331
+ "image": ("IMAGE",),
1332
+ "diameter": ("INT", {"default": 2.0, "min": 0.1, "max": 255, "step": 1}),
1333
+ "sigma_color": ("FLOAT", {"default": 10.0, "min": -255.0, "max": 255.0, "step": 0.1}),
1334
+ "sigma_space": ("FLOAT", {"default": 10.0, "min": -255.0, "max": 255.0, "step": 0.1}),
1335
+ },
1336
+ }
1337
+
1338
+ RETURN_TYPES = ("IMAGE",)
1339
+ FUNCTION = "apply_median_filter"
1340
+
1341
+ CATEGORY = "WAS Suite/Image"
1342
+
1343
+ def apply_median_filter(self, image, diameter, sigma_color, sigma_space):
1344
+
1345
+ # Numpy Image
1346
+ image = tensor2pil(image)
1347
+
1348
+ # Apply Median Filter effect
1349
+ image = medianFilter(image, diameter, sigma_color, sigma_space)
1350
+
1351
+ return ( pil2tensor(image), )
1352
+
1353
+ # IMAGE SELECT COLOR
1354
+
1355
+ class WAS_Image_Select_Color:
1356
+ def __init__(self):
1357
+ pass
1358
+
1359
+ @classmethod
1360
+ def INPUT_TYPES(cls):
1361
+ return {
1362
+ "required": {
1363
+ "image": ("IMAGE",),
1364
+ "red": ("INT", {"default": 255.0, "min": 0.0, "max": 255.0, "step": 0.1}),
1365
+ "green": ("INT", {"default": 255.0, "min": 0.0, "max": 255.0, "step": 0.1}),
1366
+ "blue": ("INT", {"default": 255.0, "min": 0.0, "max": 255.0, "step": 0.1}),
1367
+ "variance": ("INT", {"default": 10, "min": 0, "max": 255, "step": 1}),
1368
+ },
1369
+ }
1370
+
1371
+ RETURN_TYPES = ("IMAGE",)
1372
+ FUNCTION = "select_color"
1373
+
1374
+ CATEGORY = "WAS Suite/Image"
1375
+
1376
+ def select_color(self, image, red=255, green=255, blue=255, variance=10):
1377
+
1378
+ if 'opencv-python' not in packages():
1379
+ print("\033[34mWAS NS:\033[0m Installing CV2...")
1380
+ subprocess.check_call([sys.executable, '-m', 'pip', '-q', 'install', 'opencv-python'])
1381
+
1382
+ image = self.color_pick(tensor2pil(image), red, green, blue, variance)
1383
+
1384
+ return ( pil2tensor(image), )
1385
+
1386
+
1387
+ def color_pick(self, image, red=255, green=255, blue=255, variance=10):
1388
+ # Convert image to RGB mode
1389
+ image = image.convert('RGB')
1390
+
1391
+ # Create a new black image of the same size as the input image
1392
+ selected_color = Image.new('RGB', image.size, (0,0,0))
1393
+
1394
+ # Get the width and height of the image
1395
+ width, height = image.size
1396
+
1397
+ # Loop through every pixel in the image
1398
+ for x in range(width):
1399
+ for y in range(height):
1400
+ # Get the color of the pixel
1401
+ pixel = image.getpixel((x,y))
1402
+ r,g,b = pixel
1403
+
1404
+ # Check if the pixel is within the specified color range
1405
+ if ((r >= red-variance) and (r <= red+variance) and
1406
+ (g >= green-variance) and (g <= green+variance) and
1407
+ (b >= blue-variance) and (b <= blue+variance)):
1408
+ # Set the pixel in the selected_color image to the RGB value of the pixel
1409
+ selected_color.putpixel((x,y),(r,g,b))
1410
+
1411
+ # Return the selected color image
1412
+ return selected_color
1413
+
1414
+ # IMAGE CONVERT TO CHANNEL
1415
+
1416
+ class WAS_Image_Select_Channel:
1417
+ def __init__(self):
1418
+ pass
1419
+
1420
+ @classmethod
1421
+ def INPUT_TYPES(cls):
1422
+ return {
1423
+ "required": {
1424
+ "image": ("IMAGE",),
1425
+ "channel": (['red','green','blue'],),
1426
+ },
1427
+ }
1428
+
1429
+ RETURN_TYPES = ("IMAGE",)
1430
+ FUNCTION = "select_channel"
1431
+
1432
+ CATEGORY = "WAS Suite/Image"
1433
+
1434
+ def select_channel(self, image, channel='red'):
1435
+
1436
+ image = self.convert_to_single_channel(tensor2pil(image), channel)
1437
+
1438
+ return ( pil2tensor(image), )
1439
+
1440
+
1441
+ def convert_to_single_channel(self, image, channel='red'):
1442
+
1443
+ # Convert to RGB mode to access individual channels
1444
+ image = image.convert('RGB')
1445
+
1446
+ # Extract the desired channel and convert to greyscale
1447
+ if channel == 'red':
1448
+ channel_img = image.split()[0].convert('L')
1449
+ elif channel == 'green':
1450
+ channel_img = image.split()[1].convert('L')
1451
+ elif channel == 'blue':
1452
+ channel_img = image.split()[2].convert('L')
1453
+ else:
1454
+ raise ValueError("Invalid channel option. Please choose 'red', 'green', or 'blue'.")
1455
+
1456
+ # Convert the greyscale channel back to RGB mode
1457
+ channel_img = Image.merge('RGB', (channel_img, channel_img, channel_img))
1458
+
1459
+ return channel_img
1460
+
1461
+
1462
+
1463
+ # IMAGE RGB CHANNEL MERGE
1464
+
1465
+ class WAS_Image_RGB_Merge:
1466
+ def __init__(self):
1467
+ pass
1468
+
1469
+ @classmethod
1470
+ def INPUT_TYPES(cls):
1471
+ return {
1472
+ "required": {
1473
+ "red_channel": ("IMAGE",),
1474
+ "green_channel": ("IMAGE",),
1475
+ "blue_channel": ("IMAGE",),
1476
+ },
1477
+ }
1478
+
1479
+ RETURN_TYPES = ("IMAGE",)
1480
+ FUNCTION = "merge_channels"
1481
+
1482
+ CATEGORY = "WAS Suite/Image"
1483
+
1484
+ def merge_channels(self, red_channel, green_channel, blue_channel):
1485
+
1486
+ # Apply mix rgb channels
1487
+ image = self.mix_rgb_channels(tensor2pil(red_channel).convert('L'), tensor2pil(green_channel).convert('L'), tensor2pil(blue_channel).convert('L'))
1488
+
1489
+ return ( pil2tensor(image), )
1490
+
1491
+
1492
+ def mix_rgb_channels(self, red, green, blue):
1493
+ # Create an empty image with the same size as the channels
1494
+ width, height = red.size; merged_img = Image.new('RGB', (width, height))
1495
+
1496
+ # Merge the channels into the new image
1497
+ merged_img = Image.merge('RGB', (red, green, blue))
1498
+
1499
+ return merged_img
1500
+
1501
+
1502
+ # Image Save (NSP Compatible)
1503
+ # Originally From ComfyUI/nodes.py
1504
+
1505
+ class WAS_Image_Save:
1506
+ def __init__(self):
1507
+ self.output_dir = os.path.join(os.getcwd()+'/ComfyUI', "output")
1508
+
1509
+ @classmethod
1510
+ def INPUT_TYPES(s):
1511
+ return {
1512
+ "required": {
1513
+ "images": ("IMAGE", ),
1514
+ "output_path": ("STRING", {"default": './ComfyUI/output', "multiline": False}),
1515
+ "filename_prefix": ("STRING", {"default": "ComfyUI"}),
1516
+ "extension": (['png', 'jpeg', 'tiff', 'gif'], ),
1517
+ "quality": ("INT", {"default": 100, "min": 1, "max": 100, "step": 1}),
1518
+ },
1519
+ "hidden": {
1520
+ "prompt": "PROMPT", "extra_pnginfo": "EXTRA_PNGINFO"
1521
+ },
1522
+ }
1523
+
1524
+ RETURN_TYPES = ()
1525
+ FUNCTION = "save_images"
1526
+
1527
+ OUTPUT_NODE = True
1528
+
1529
+ CATEGORY = "WAS Suite/IO"
1530
+
1531
+ def save_images(self, images, output_path='', filename_prefix="ComfyUI", extension='png', quality=100, prompt=None, extra_pnginfo=None):
1532
+ def map_filename(filename):
1533
+ prefix_len = len(filename_prefix)
1534
+ prefix = filename[:prefix_len + 1]
1535
+ try:
1536
+ digits = int(filename[prefix_len + 1:].split('_')[0])
1537
+ except:
1538
+ digits = 0
1539
+ return (digits, prefix)
1540
+
1541
+ # Setup custom path or default
1542
+ if output_path.strip() != '':
1543
+ if not os.path.exists(output_path.strip()):
1544
+ print(f'\033[34mWAS NS\033[0m Error: The path `{output_path.strip()}` specified doesn\'t exist! Defaulting to `{self.output_dir}` directory.')
1545
+ else:
1546
+ self.output_dir = os.path.normpath(output_path.strip())
1547
+ print(self.output_dir)
1548
+
1549
+ # Define counter for files found
1550
+ try:
1551
+ counter = max(filter(lambda a: a[1][:-1] == filename_prefix and a[1][-1] == "_", map(map_filename, os.listdir(self.output_dir))))[0] + 1
1552
+ except ValueError:
1553
+ counter = 1
1554
+ except FileNotFoundError:
1555
+ os.mkdir(self.output_dir)
1556
+ counter = 1
1557
+
1558
+ paths = list()
1559
+ for image in images:
1560
+ i = 255. * image.cpu().numpy()
1561
+ img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
1562
+ metadata = PngInfo()
1563
+ if prompt is not None:
1564
+ metadata.add_text("prompt", json.dumps(prompt))
1565
+ if extra_pnginfo is not None:
1566
+ for x in extra_pnginfo:
1567
+ metadata.add_text(x, json.dumps(extra_pnginfo[x]))
1568
+ file = f"{filename_prefix}_{counter:05}_.{extension}"
1569
+ if extension == 'png':
1570
+ img.save(os.path.join(self.output_dir, file), pnginfo=metadata, optimize=True)
1571
+ elif extension == 'webp':
1572
+ img.save(os.path.join(self.output_dir, file), quality=quality)
1573
+ elif extension == 'jpeg':
1574
+ img.save(os.path.join(self.output_dir, file), quality=quality, optimize=True)
1575
+ elif extension == 'tiff':
1576
+ img.save(os.path.join(self.output_dir, file), quality=quality, optimize=True)
1577
+ else:
1578
+ img.save(os.path.join(self.output_dir, file))
1579
+ paths.append(file)
1580
+ counter += 1
1581
+ return { "ui": { "images": paths } }
1582
+
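+ # Note on the filename counter (a sketch, assuming the default prefix "ComfyUI"):
+ # map_filename("ComfyUI_00012_.png") returns (12, "ComfyUI_"), so the next save
+ # becomes "ComfyUI_00013_.png". Filenames that do not start with the prefix are
+ # filtered out before the maximum counter is taken.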
1583
+
1584
+ # LOAD IMAGE NODE
1585
+ class WAS_Load_Image:
1586
+
1587
+ def __init__(self):
1588
+ self.input_dir = os.path.join(os.getcwd()+'/ComfyUI', "input")
1589
+
1590
+ @classmethod
1591
+ def INPUT_TYPES(s):
1592
+ return {"required":
1593
+ {"image_path": ("STRING", {"default": './ComfyUI/input/example.png', "multiline": False}),}
1594
+ }
1595
+
1596
+ CATEGORY = "WAS Suite/IO"
1597
+
1598
+ RETURN_TYPES = ("IMAGE", "MASK")
1599
+ FUNCTION = "load_image"
1600
+ def load_image(self, image_path):
1601
+ try:
1602
+ i = Image.open(image_path)
1603
+ except OSError:
1604
+ print(f'\033[34mWAS NS\033[0m Error: The image `{image_path}` specified doesn\'t exist! Using a blank 512x512 image instead.')
1605
+ i = Image.new(mode='RGB', size=(512,512), color=(0,0,0))
1606
+ image = i.convert("RGB")
1607
+ image = np.array(image).astype(np.float32) / 255.0
1608
+ image = torch.from_numpy(image)[None,]
1609
+ if 'A' in i.getbands():
1610
+ mask = np.array(i.getchannel('A')).astype(np.float32) / 255.0
1611
+ mask = 1. - torch.from_numpy(mask)
1612
+ else:
1613
+ mask = torch.zeros((64,64), dtype=torch.float32, device="cpu")
1614
+ return (image, mask)
1615
+
1616
+ @classmethod
1617
+ def IS_CHANGED(s, image_path):
1618
+ m = hashlib.sha256()
1619
+ with open(image_path, 'rb') as f:
1620
+ m.update(f.read())
1621
+ return m.digest().hex()
1622
+
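+ # Loading sketch (illustrative; the path is hypothetical):
+ #   image, mask = WAS_Load_Image().load_image('./ComfyUI/input/example.png')
+ # `image` is a 1xHxWx3 float tensor in [0, 1]; `mask` is the inverted alpha channel
+ # as an HxW tensor, or a 64x64 zero tensor when the file has no alpha band.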
1623
+
1624
+ # TENSOR TO IMAGE NODE
1625
+
1626
+ class WAS_Tensor_Batch_to_Image:
1627
+ def __init__(self):
1628
+ pass
1629
+
1630
+ @classmethod
1631
+ def INPUT_TYPES(cls):
1632
+ return {
1633
+ "required": {
1634
+ "images_batch": ("IMAGE",),
1635
+ "batch_image_number": ("INT", {"default": 0, "min": 0, "max": 64, "step": 1}),
1636
+ },
1637
+ }
1638
+
1639
+ RETURN_TYPES = ("IMAGE",)
1640
+ FUNCTION = "tensor_batch_to_image"
1641
+
1642
+ CATEGORY = "WAS Suite/Latent"
1643
+
1644
+ def tensor_batch_to_image(self, images_batch=None, batch_image_number=0):
1645
+
1646
+ count = 0
1647
+ for _ in images_batch:
1648
+ if batch_image_number == count:
1649
+ return ( images_batch[batch_image_number].unsqueeze(0), )
1650
+ count = count+1
1651
+
1652
+ print(f"\033[34mWAS NS\033[0m Error: Batch number `{batch_image_number}` is not defined, returning last image")
1653
+ return( images_batch[-1].unsqueeze(0), )
1654
+
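+ # Equivalent one-liner (sketch): picking image N from a batch tensor of shape
+ # (B, H, W, C) is just images_batch[N].unsqueeze(0), which restores the leading
+ # batch dimension so downstream nodes still receive a 4D tensor.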
1655
+
1656
+ #! LATENT NODES
1657
+
1658
+ # IMAGE TO MASK
1659
+
1660
+ class WAS_Image_To_Mask:
1661
+
1662
+ def __init__(s):
1663
+ pass
1664
+
1665
+ @classmethod
1666
+ def INPUT_TYPES(s):
1667
+ return {"required":
1668
+ {"image": ("IMAGE",),
1669
+ "channel": (["alpha", "red", "green", "blue"], ),}
1670
+ }
1671
+
1672
+ CATEGORY = "WAS Suite/Latent"
1673
+
1674
+ RETURN_TYPES = ("MASK",)
1675
+
1676
+ FUNCTION = "image_to_mask"
1677
+
1678
+ def image_to_mask(self, image, channel):
1679
+
1680
+ img = tensor2pil(image)
1681
+
1682
+ mask = None
1683
+ c = channel[0].upper()
1684
+ if c in img.getbands():
1685
+ mask = np.array(img.getchannel(c)).astype(np.float32) / 255.0
1686
+ mask = torch.from_numpy(mask)
1687
+ if c == 'A':
1688
+ mask = 1. - mask
1689
+ else:
1690
+ mask = torch.zeros((64,64), dtype=torch.float32, device="cpu")
1691
+
1692
+ return ( mask, )
1693
+
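+ # Channel-to-mask sketch (illustrative): for channel 'red' the lookup is the PIL
+ # band 'R', so the mask is np.array(img.getchannel('R')) / 255.0; only the alpha
+ # band is inverted (1. - mask), mirroring the alpha handling in the load node above.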
1694
+
1695
+ # LATENT UPSCALE NODE
1696
+
1697
+ class WAS_Latent_Upscale:
1698
+ def __init__(self):
1699
+ pass
1700
+
1701
+ @classmethod
1702
+ def INPUT_TYPES(s):
1703
+ return {"required": { "samples": ("LATENT",), "mode": (["bilinear", "bicubic", "trilinear"],),
1704
+ "factor": ("FLOAT", {"default": 2.0, "min": 0.1, "max": 8.0, "step": 0.1}),
1705
+ "align": (["true", "false"], )}}
1706
+ RETURN_TYPES = ("LATENT",)
1707
+ FUNCTION = "latent_upscale"
1708
+
1709
+ CATEGORY = "WAS Suite/Latent"
1710
+
1711
+ def latent_upscale(self, samples, mode, factor, align):
1712
+ s = samples.copy()
1713
+ s["samples"] = torch.nn.functional.interpolate(s['samples'], scale_factor=factor, mode=mode, align_corners=( True if align == 'true' else False ))
1714
+ return (s,)
1715
+
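+ # Upscale sketch (illustrative, sizes are examples): a 512x512 image encodes to a
+ # 64x64 latent, so factor=2.0 with mode='bilinear' interpolates samples['samples']
+ # from (B, 4, 64, 64) to (B, 4, 128, 128). Note that 'trilinear' expects 5D input
+ # and will raise on standard 4D latents.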
1716
+ # LATENT NOISE INJECTION NODE
1717
+
1718
+ class WAS_Latent_Noise:
1719
+ def __init__(self):
1720
+ pass
1721
+
1722
+ @classmethod
1723
+ def INPUT_TYPES(s):
1724
+ return {
1725
+ "required": {
1726
+ "samples": ("LATENT",),
1727
+ "noise_std": ("FLOAT", {"default": 0.1, "min": 0.0, "max": 1.0, "step": 0.01}),
1728
+ }
1729
+ }
1730
+
1731
+ RETURN_TYPES = ("LATENT",)
1732
+ FUNCTION = "inject_noise"
1733
+
1734
+ CATEGORY = "WAS Suite/Latent"
1735
+
1736
+ def inject_noise(self, samples, noise_std):
1737
+ s = samples.copy()
1738
+ noise = torch.randn_like(s["samples"]) * noise_std
1739
+ s["samples"] = s["samples"] + noise
1740
+ return (s,)
1741
+
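+ # Noise injection sketch: with noise_std=0.1 each latent value receives independent
+ # Gaussian noise, i.e. samples + torch.randn_like(samples) * 0.1; a std of 0.0
+ # leaves the latent unchanged.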
1742
+
1743
+ # MIDAS DEPTH APPROXIMATION NODE
1744
+
1745
+ class MiDaS_Depth_Approx:
1746
+ def __init__(self):
1747
+ pass
1748
+
1749
+ @classmethod
1750
+ def INPUT_TYPES(cls):
1751
+ return {
1752
+ "required": {
1753
+ "image": ("IMAGE",),
1754
+ "use_cpu": (["false", "true"],),
1755
+ "midas_model": (["DPT_Large", "DPT_Hybrid", "DPT_Small"],),
1756
+ "invert_depth": (["false", "true"],),
1757
+ },
1758
+ }
1759
+
1760
+ RETURN_TYPES = ("IMAGE",)
1761
+ FUNCTION = "midas_approx"
1762
+
1763
+ CATEGORY = "WAS Suite/Image"
1764
+
1765
+ def midas_approx(self, image, use_cpu, midas_model, invert_depth):
1766
+
1767
+ global MIDAS_INSTALLED
1768
+
1769
+ if not MIDAS_INSTALLED:
1770
+ self.install_midas()
1771
+
1772
+ import cv2 as cv
1773
+
1774
+ # Convert the input image tensor to a PIL Image
1775
+ i = 255. * image.cpu().numpy().squeeze()
1776
+ img = i
1777
+
1778
+ print("\033[34mWAS NS:\033[0m Downloading and loading MiDaS Model...")
1779
+ midas = torch.hub.load("intel-isl/MiDaS", midas_model, trust_repo=True)
1780
+ device = torch.device("cuda") if torch.cuda.is_available() and use_cpu == 'false' else torch.device("cpu")
1781
+
1782
+ print('\033[34mWAS NS:\033[0m MiDaS is using device:', device)
1783
+
1784
+ midas.to(device).eval()
1785
+ midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
1786
+
1787
+ if midas_model == "DPT_Large" or midas_model == "DPT_Hybrid":
1788
+ transform = midas_transforms.dpt_transform
1789
+ else:
1790
+ transform = midas_transforms.small_transform
1791
+
1792
+ img = cv.cvtColor(img, cv.COLOR_BGR2RGB)
1793
+ input_batch = transform(img).to(device)
1794
+
1795
+ print('\033[34mWAS NS:\033[0m Approximating depth from image.')
1796
+
1797
+ with torch.no_grad():
1798
+ prediction = midas(input_batch)
1799
+ prediction = torch.nn.functional.interpolate(
1800
+ prediction.unsqueeze(1),
1801
+ size=img.shape[:2],
1802
+ mode="bicubic",
1803
+ align_corners=False,
1804
+ ).squeeze()
1805
+
1806
+ if invert_depth == 'true':
1807
+ depth = ( 255 - prediction.cpu().numpy().astype(np.uint8) )
1808
+ depth = depth.astype(np.float32)
1809
+ else:
1810
+ depth = prediction.cpu().numpy().astype(np.float32)
1811
+ depth = depth * 255 / (np.max(depth)) / 255
1812
+ # Convert the single-channel depth map to RGB
1813
+ depth = cv.cvtColor(depth, cv.COLOR_GRAY2RGB)
1814
+
1815
+ tensor = torch.from_numpy( depth )[None,]
1816
+ tensors = ( tensor, )
1817
+
1818
+ del midas, device, midas_transforms
1819
+ del transform, img, input_batch, prediction
1820
+
1821
+ return tensors
1822
+
1823
+ def install_midas(self):
1824
+ global MIDAS_INSTALLED
1825
+ if 'timm' not in packages():
1826
+ print("\033[34mWAS NS:\033[0m Installing timm...")
1827
+ subprocess.check_call([sys.executable, '-m', 'pip', '-q', 'install', 'timm'])
1828
+ if 'opencv-python' not in packages():
1829
+ print("\033[34mWAS NS:\033[0m Installing CV2...")
1830
+ subprocess.check_call([sys.executable, '-m', 'pip', '-q', 'install', 'opencv-python'])
1831
+ MIDAS_INSTALLED = True
1832
+
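+ # Depth output sketch (illustrative, hypothetical call): the node returns a tuple with
+ # one RGB tensor holding the normalized (optionally inverted) depth, e.g.
+ #   (depth,) = MiDaS_Depth_Approx().midas_approx(image, 'false', 'DPT_Small', 'false')
+ # The model is fetched via torch.hub on first use, so the first run needs network access.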
1833
+ # MIDAS REMOVE BACKGROUND/FOREGROUND NODE
1834
+
1835
+ class MiDaS_Background_Foreground_Removal:
1836
+ def __init__(self):
1837
+ pass
1838
+
1839
+ @classmethod
1840
+ def INPUT_TYPES(cls):
1841
+ return {
1842
+ "required": {
1843
+ "image": ("IMAGE",),
1844
+ "use_cpu": (["false", "true"],),
1845
+ "midas_model": (["DPT_Large", "DPT_Hybrid", "DPT_Small"],),
1846
+ "remove": (["background", "foregroud"],),
1847
+ "threshold": (["false", "true"],),
1848
+ "threshold_low": ("FLOAT", {"default": 10, "min": 0, "max": 255, "step": 1}),
1849
+ "threshold_mid": ("FLOAT", {"default": 200, "min": 0, "max": 255, "step": 1}),
1850
+ "threshold_high": ("FLOAT", {"default": 210, "min": 0, "max": 255, "step": 1}),
1851
+ "smoothing": ("FLOAT", {"default": 0.25, "min": 0.0, "max": 16.0, "step": 0.01}),
1852
+ "background_red": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
1853
+ "background_green": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
1854
+ "background_blue": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
1855
+ },
1856
+ }
1857
+
1858
+ RETURN_TYPES = ("IMAGE","IMAGE")
1859
+ FUNCTION = "midas_remove"
1860
+
1861
+ CATEGORY = "WAS Suite/Image"
1862
+
1863
+ def midas_remove(self,
1864
+ image,
1865
+ midas_model,
1866
+ use_cpu='false',
1867
+ remove='background',
1868
+ threshold='false',
1869
+ threshold_low=0,
1870
+ threshold_mid=127,
1871
+ threshold_high=255,
1872
+ smoothing=0.25,
1873
+ background_red=0,
1874
+ background_green=0,
1875
+ background_blue=0):
1876
+
1877
+ global MIDAS_INSTALLED
1878
+
1879
+ if not MIDAS_INSTALLED:
1880
+ self.install_midas()
1881
+
1882
+ import cv2 as cv
1883
+
1884
+ # Convert the input image tensor to a numpy and PIL Image
1885
+ i = 255. * image.cpu().numpy().squeeze()
1886
+ img = i
1887
+ # Original image
1888
+ img_original = tensor2pil(image).convert('RGB')
1889
+
1890
+ print("\033[34mWAS NS:\033[0m Downloading and loading MiDaS Model...")
1891
+ midas = torch.hub.load("intel-isl/MiDaS", midas_model, trust_repo=True)
1892
+ device = torch.device("cuda") if torch.cuda.is_available() and use_cpu == 'false' else torch.device("cpu")
1893
+
1894
+ print('\033[34mWAS NS:\033[0m MiDaS is using device:', device)
1895
+
1896
+ midas.to(device).eval()
1897
+ midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
1898
+
1899
+ if midas_model == "DPT_Large" or midas_model == "DPT_Hybrid":
1900
+ transform = midas_transforms.dpt_transform
1901
+ else:
1902
+ transform = midas_transforms.small_transform
1903
+
1904
+ img = cv.cvtColor(img, cv.COLOR_BGR2RGB)
1905
+ input_batch = transform(img).to(device)
1906
+
1907
+ print('\033[34mWAS NS:\033[0m Approximating depth from image.')
1908
+
1909
+ with torch.no_grad():
1910
+ prediction = midas(input_batch)
1911
+ prediction = torch.nn.functional.interpolate(
1912
+ prediction.unsqueeze(1),
1913
+ size=img.shape[:2],
1914
+ mode="bicubic",
1915
+ align_corners=False,
1916
+ ).squeeze()
1917
+
1918
+ # Invert depth map
1919
+ if remove == 'foreground':
1920
+ depth = ( 255 - prediction.cpu().numpy().astype(np.uint8) )
1921
+ depth = depth.astype(np.float32)
1922
+ else:
1923
+ depth = prediction.cpu().numpy().astype(np.float32)
1924
+ depth = depth * 255 / (np.max(depth)) / 255
1925
+ depth = Image.fromarray(np.uint8(depth * 255))
1926
+
1927
+ # Threshold depth mask
1928
+ if threshold == 'true':
1929
+ levels = self.AdjustLevels(threshold_low, threshold_mid, threshold_high)
1930
+ depth = levels.adjust(depth.convert('RGB')).convert('L')
1931
+ if smoothing > 0:
1932
+ depth = depth.filter(ImageFilter.GaussianBlur(radius=smoothing))
1933
+ depth = depth.resize(img_original.size).convert('L')
1934
+
1935
+ # Validate background color arguments
1936
+ background_red = int(background_red) if isinstance(background_red, (int, float)) else 0
1937
+ background_green = int(background_green) if isinstance(background_green, (int, float)) else 0
1938
+ background_blue = int(background_blue) if isinstance(background_blue, (int, float)) else 0
1939
+
1940
+ # Create background color tuple
1941
+ background_color = ( background_red, background_green, background_blue )
1942
+
1943
+ # Create background image
1944
+ background = Image.new(mode="RGB", size=img_original.size, color=background_color)
1945
+
1946
+ # Composite final image
1947
+ result_img = Image.composite(img_original, background, depth)
1948
+
1949
+ del midas, device, midas_transforms
1950
+ del transform, img, img_original, input_batch, prediction
1951
+
1952
+ return ( pil2tensor(result_img), pil2tensor(depth.convert('RGB')) )
1953
+
1954
+ class AdjustLevels:
1955
+ def __init__(self, min_level, mid_level, max_level):
1956
+ self.min_level = min_level
1957
+ self.mid_level = mid_level
1958
+ self.max_level = max_level
1959
+
1960
+ def adjust(self, im):
1961
1962
+
1963
+ # convert the image to a numpy array
1964
+ im_arr = np.array(im)
1965
+
1966
+ # apply the min level adjustment
1967
+ im_arr[im_arr < self.min_level] = self.min_level
1968
+
1969
+ # apply the mid level adjustment
1970
+ im_arr = (im_arr - self.min_level) * (255 / (self.max_level - self.min_level))
1971
+ im_arr[im_arr < 0] = 0
1972
+ im_arr[im_arr > 255] = 255
1973
+ im_arr = im_arr.astype(np.uint8)
1974
+
1975
+ # apply the max level adjustment
1976
+ im = Image.fromarray(im_arr)
1977
+ im = ImageOps.autocontrast(im, cutoff=self.max_level)
1978
+
1979
+ return im
1980
+
1981
+ def install_midas(self):
1982
+ global MIDAS_INSTALLED
1983
+ if 'timm' not in packages():
1984
+ print("\033[34mWAS NS:\033[0m Installing timm...")
1985
+ subprocess.check_call([sys.executable, '-m', 'pip', '-q', 'install', 'timm'])
1986
+ if 'opencv-python' not in packages():
1987
+ print("\033[34mWAS NS:\033[0m Installing CV2...")
1988
+ subprocess.check_call([sys.executable, '-m', 'pip', '-q', 'install', 'opencv-python'])
1989
+ MIDAS_INSTALLED = True
1990
+
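+ # Levels sketch (illustrative, assuming threshold_low=10 and threshold_high=210):
+ # AdjustLevels clamps depth values below 10, then rescales 10..210 to 0..255
+ # (mid_level is currently unused) before the mask is optionally blurred and used
+ # with Image.composite(original, background, mask).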
1991
+
1992
+ #! CONDITIONING NODES
1993
+
1994
+
1995
+ # NSP CLIPTextEncode NODE
1996
+
1997
+ class WAS_NSP_CLIPTextEncoder:
1998
+ def __init__(self):
1999
+ pass
2000
+
2001
+ @classmethod
2002
+ def INPUT_TYPES(s):
2003
+ return {
2004
+ "required": {
2005
+ "noodle_key": ("STRING", {"default": '__', "multiline": False}),
2006
+ "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
2007
+ "text": ("STRING", {"multiline": True}),
2008
+ "clip": ("CLIP",),
2009
+ }
2010
+ }
2011
+
2012
+ OUTPUT_NODE = True
2013
+ RETURN_TYPES = ("CONDITIONING",)
2014
+ FUNCTION = "nsp_encode"
2015
+
2016
+ CATEGORY = "WAS Suite/Conditioning"
2017
+
2018
+ def nsp_encode(self, clip, text, noodle_key = '__', seed = 0):
2019
+
2020
+ # Fetch the NSP Pantry
2021
+ local_pantry = os.getcwd()+'/ComfyUI/custom_nodes/nsp_pantry.json'
2022
+ if not os.path.exists(local_pantry):
2023
+ response = urlopen('https://raw.githubusercontent.com/WASasquatch/noodle-soup-prompts/main/nsp_pantry.json')
2024
+ tmp_pantry = json.loads(response.read())
2025
+ # Dump JSON locally
2026
+ pantry_serialized = json.dumps(tmp_pantry, indent=4)
2027
+ with open(local_pantry, "w") as f:
2028
+ f.write(pantry_serialized)
2029
+ del response, tmp_pantry
2030
+
2031
+ # Load local pantry
2032
+ with open(local_pantry, 'r') as f:
2033
+ nspterminology = json.load(f)
2034
+
2035
+ # The original guard (seed > 0 or seed < 1) is always true, so seed the RNG unconditionally
+ random.seed(seed)
2037
+
2038
+ # Parse Text
2039
+ new_text = text
2040
+ for term in nspterminology:
2041
+ # Target Noodle
2042
+ tkey = f'{noodle_key}{term}{noodle_key}'
2043
+ # How many occurrences?
2044
+ tcount = new_text.count(tkey)
2045
+ # Apply random results for each noodle counted
2046
+ for _ in range(tcount):
2047
+ new_text = new_text.replace(tkey, random.choice(nspterminology[term]), 1)
2048
+ seed = seed+1
2049
+ random.seed(seed)
2050
+
2051
+ print('\033[34mWAS NS\033[0m CLIPTextEncode NSP:', new_text)
2052
+
2053
+ return ([[clip.encode(new_text), {}]],{"ui":{"prompt":new_text}})
2054
+
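+ # Noodle Soup sketch (illustrative; available terms depend on the downloaded
+ # nsp_pantry.json): with noodle_key '__', a prompt containing a hypothetical term
+ # such as "__color__" has every occurrence replaced by a random entry from that
+ # term's list, reseeding after each substitution so repeated noodles can differ.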
2055
+
2056
+ #! SAMPLING NODES
2057
+
2058
+ # KSAMPLER
2059
+
2060
+ class WAS_KSampler:
2061
+ @classmethod
2062
+ def INPUT_TYPES(s):
2063
+ return {"required":
2064
+ {"model": ("MODEL",),
2065
+ "seed": ("SEED",),
2066
+ "steps": ("INT", {"default": 20, "min": 1, "max": 10000}),
2067
+ "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}),
2068
+ "sampler_name": (comfy.samplers.KSampler.SAMPLERS, ),
2069
+ "scheduler": (comfy.samplers.KSampler.SCHEDULERS, ),
2070
+ "positive": ("CONDITIONING", ),
2071
+ "negative": ("CONDITIONING", ),
2072
+ "latent_image": ("LATENT", ),
2073
+ "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
2074
+ }
2075
+ }
2076
+
2077
+ RETURN_TYPES = ("LATENT",)
2078
+ FUNCTION = "sample"
2079
+
2080
+ CATEGORY = "WAS Suite/Sampling"
2081
+
2082
+ def sample(self, model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=1.0):
2083
+ return nodes.common_ksampler(model, seed['seed'], steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
2084
+
2085
+ # SEED NODE
2086
+
2087
+ class WAS_Seed:
2088
+ @classmethod
2089
+ def INPUT_TYPES(s):
2090
+ return {"required":
2091
+ {"seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff})}
2092
+ }
2093
+
2094
+
2095
+ RETURN_TYPES = ("SEED",)
2096
+ FUNCTION = "seed"
2097
+
2098
+ CATEGORY = "WAS Suite/Constant"
2099
+
2100
+ def seed(self, seed):
2101
+ return ( {"seed": seed,}, )
2102
+
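+ # The SEED type is just a dict wrapper; WAS_KSampler above reads it back with
+ # seed['seed'] before delegating to nodes.common_ksampler, so any node that emits
+ # {"seed": <int>} would be compatible.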
2103
+
2104
+ #! TEXT NODES
2105
+
2106
+ # Text Multiline Node
2107
+
2108
+ class WAS_Text_Multiline:
2109
+ def __init__(s):
2110
+ pass
2111
+
2112
+ @classmethod
2113
+ def INPUT_TYPES(s):
2114
+ return {
2115
+ "required": {
2116
+ "text": ("STRING", {"default": '', "multiline": True}),
2117
+ }
2118
+ }
2119
+ RETURN_TYPES = ("ASCII",)
2120
+ FUNCTION = "text_multiline"
2121
+
2122
+ CATEGORY = "WAS Suite/Text"
2123
+
2124
+ def text_multiline(self, text):
2125
+ return ( text, )
2126
+
2127
+
2128
+ # Text String Node
2129
+
2130
+ class WAS_Text_String:
2131
+ def __init__(s):
2132
+ pass
2133
+
2134
+ @classmethod
2135
+ def INPUT_TYPES(s):
2136
+ return {
2137
+ "required": {
2138
+ "text": ("STRING", {"default": '', "multiline": False}),
2139
+ }
2140
+ }
2141
+ RETURN_TYPES = ("ASCII",)
2142
+ FUNCTION = "text_string"
2143
+
2144
+ CATEGORY = "WAS Suite/Text"
2145
+
2146
+ def text_string(self, text):
2147
+ return ( text, )
2148
+
2149
+
2150
+ # Text Random Line
2151
+
2152
+ class WAS_Text_Random_Line:
2153
+ def __init__(s):
2154
+ pass
2155
+
2156
+ @classmethod
2157
+ def INPUT_TYPES(s):
2158
+ return {
2159
+ "required": {
2160
+ "text": ("ASCII",),
2161
+ "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
2162
+ }
2163
+ }
2164
+
2165
+ RETURN_TYPES = ("ASCII",)
2166
+ FUNCTION = "text_random_line"
2167
+
2168
+ CATEGORY = "WAS Suite/Text"
2169
+
2170
+ def text_random_line(self, text, seed):
2171
+ lines = text.split("\n")
2172
+ random.seed(seed)
2173
+ choice = random.choice(lines)
2174
+ print('\033[34mWAS NS\033[0m Random Line:', choice)
2175
+ return ( choice, )
2176
+
2177
+
2178
+ # Text Concatenate
2179
+
2180
+ class WAS_Text_Concatenate:
2181
+ def __init__(s):
2182
+ pass
2183
+
2184
+ @classmethod
2185
+ def INPUT_TYPES(s):
2186
+ return {
2187
+ "required": {
2188
+ "text_a": ("ASCII",),
2189
+ "text_b": ("ASCII",),
2190
+ "linebreak_addition": (['true','false'], ),
2191
+ }
2192
+ }
2193
+
2194
+ RETURN_TYPES = ("ASCII",)
2195
+ FUNCTION = "text_concatenate"
2196
+
2197
+ CATEGORY = "WAS Suite/Text"
2198
+
2199
+ def text_concatenate(self, text_a, text_b, linebreak_addition):
2200
+ return ( text_a + ("\n" if linebreak_addition == 'true' else '') + text_b, )
2201
+
2202
+
2203
+ # Text Search and Replace
2204
+
2205
+ class WAS_Search_and_Replace:
2206
+ def __init__(s):
2207
+ pass
2208
+
2209
+ @classmethod
2210
+ def INPUT_TYPES(s):
2211
+ return {
2212
+ "required": {
2213
+ "text": ("ASCII",),
2214
+ "find": ("STRING", {"default": '', "multiline": False}),
2215
+ "replace": ("STRING", {"default": '', "multiline": False}),
2216
+ }
2217
+ }
2218
+
2219
+ RETURN_TYPES = ("ASCII",)
2220
+ FUNCTION = "text_search_and_replace"
2221
+
2222
+ CATEGORY = "WAS Suite/Text"
2223
+
2224
+ def text_search_and_replace(self, text, find, replace):
2225
+ return ( self.replace_substring(text, find, replace), )
2226
+
2227
+ def replace_substring(self, text, find, replace):
2228
+ import re
2229
+ text = re.sub(find, replace, text)
2230
+ return text
2231
+
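+ # Caveat (sketch): re.sub treats `find` as a regular expression, so literal text
+ # containing regex metacharacters should be escaped first, e.g.
+ #   re.sub(re.escape('1girl (solo)'), 'portrait', text)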
2232
+
2233
+ # Text Parse NSP
2234
+
2235
+ class WAS_Text_Parse_NSP:
2236
+ def __init__(s):
2237
+ pass
2238
+
2239
+ @classmethod
2240
+ def INPUT_TYPES(s):
2241
+ return {
2242
+ "required": {
2243
+ "noodle_key": ("STRING", {"default": '__', "multiline": False}),
2244
+ "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
2245
+ "text": ("ASCII",),
2246
+ }
2247
+ }
2248
+
2249
+ OUTPUT_NODE = True
2250
+ RETURN_TYPES = ("ASCII",)
2251
+ FUNCTION = "text_parse_nsp"
2252
+
2253
+ CATEGORY = "WAS Suite/Text"
2254
+
2255
+ def text_parse_nsp(self, text, noodle_key = '__', seed = 0):
2256
+
2257
+ # Fetch the NSP Pantry
2258
+ local_pantry = os.getcwd()+'/ComfyUI/custom_nodes/nsp_pantry.json'
2259
+ if not os.path.exists(local_pantry):
2260
+ response = urlopen('https://raw.githubusercontent.com/WASasquatch/noodle-soup-prompts/main/nsp_pantry.json')
2261
+ tmp_pantry = json.loads(response.read())
2262
+ # Dump JSON locally
2263
+ pantry_serialized = json.dumps(tmp_pantry, indent=4)
2264
+ with open(local_pantry, "w") as f:
2265
+ f.write(pantry_serialized)
2266
+ del response, tmp_pantry
2267
+
2268
+ # Load local pantry
2269
+ with open(local_pantry, 'r') as f:
2270
+ nspterminology = json.load(f)
2271
+
2272
+ # The original guard (seed > 0 or seed < 1) is always true, so seed the RNG unconditionally
+ random.seed(seed)
2274
+
2275
+ # Parse Text
2276
+ new_text = text
2277
+ for term in nspterminology:
2278
+ # Target Noodle
2279
+ tkey = f'{noodle_key}{term}{noodle_key}'
2280
+ # How many occurrences?
2281
+ tcount = new_text.count(tkey)
2282
+ # Apply random results for each noodle counted
2283
+ for _ in range(tcount):
2284
+ new_text = new_text.replace(tkey, random.choice(nspterminology[term]), 1)
2285
+ seed = seed+1
2286
+ random.seed(seed)
2287
+
2288
+ print('\033[34mWAS NS\033[0m Text Parse NSP:', new_text)
2289
+
2290
+ return ( new_text, )
2291
+
2292
+
2293
+ # Text Save to File
2294
+
2295
+ class WAS_Text_Save:
2296
+ def __init__(s):
2297
+ pass
2298
+
2299
+ @classmethod
2300
+ def INPUT_TYPES(s):
2301
+ return {
2302
+ "required": {
2303
+ "text": ("ASCII",),
2304
+ "path": ("STRING", {"default": '', "multiline": False}),
2305
+ "filename": ("STRING", {"default": f'text_[time]', "multiline": False}),
2306
+ }
2307
+ }
2308
+
2309
+ OUTPUT_NODE = True
2310
+ RETURN_TYPES = ()
2311
+ FUNCTION = "save_text_file"
2312
+
2313
+ CATEGORY = "WAS Suite/Text"
2314
+
2315
+ def save_text_file(self, text, path, filename):
2316
+
2317
+ # Ensure path exists
2318
+ if not os.path.exists(path):
2319
+ print(f'\033[34mWAS NS\033[0m Error: The path `{path}` doesn\'t exist!')
2320
+
2321
+ # Ensure content to save
2322
+ if text.strip() == '':
2323
+ print(f'\033[34mWAS NS\033[0m Error: There is no text specified to save! Text is empty.')
2324
+
2325
+ # Replace tokens
2326
+ tokens = {
2327
+ '[time]': f'{round(time.time())}',
2328
+ }
2329
+ for k in tokens.keys():
2330
+ # Use plain string replacement here; re.sub would treat '[time]' as a regex character class
+ text = text.replace(k, tokens[k])
2331
+
2332
+ # Write text file
2333
+ self.writeTextFile(os.path.join(path, filename + '.txt'), text)
2334
+
2335
+ return( text, )
2336
+
2337
+ # Save text to a file
2338
+ def writeTextFile(self, file, content):
2339
+ try:
2340
+ with open(file, 'w') as f:
2341
+ f.write(content)
2342
+ except OSError:
2343
+ print(f'\033[34mWAS Node Suite\033[0m Error: Unable to save file `{file}`')
2344
+
2345
+
2346
+ # Replace a substring
2347
+ def replace_substring(self, text, find, replace):
2348
+ import re
2349
+ text = re.sub(find, replace, text)
2350
+ return text
2351
+
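+ # Token sketch (illustrative): a filename of "text_[time]" expands to something like
+ # "text_1679000000.txt", since "[time]" is swapped for round(time.time()) before the
+ # file is written to os.path.join(path, filename + '.txt').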
2352
+
2353
+ # Text to Conditioning
2354
+
2355
+ class WAS_Text_to_Conditioning:
2356
+ def __init__(s):
2357
+ pass
2358
+
2359
+ @classmethod
2360
+ def INPUT_TYPES(s):
2361
+ return {
2362
+ "required": {
2363
+ "clip": ("CLIP",),
2364
+ "text": ("ASCII",),
2365
+ }
2366
+ }
2367
+
2368
+ RETURN_TYPES = ("CONDITIONING",)
2369
+ FUNCTION = "text_to_conditioning"
2370
+
2371
+ CATEGORY = "WAS Suite/Text"
2372
+
2373
+ def text_to_conditioning(self, clip, text):
2374
+ return ( [[clip.encode(text), {}]], )
2375
+
2376
+
2377
+ # NODE MAPPING
2378
+
2379
+ NODE_CLASS_MAPPINGS = {
2380
+ # IMAGE
2381
+ "Image Filter Adjustments": WAS_Image_Filters,
2382
+ "Image Style Filter": WAS_Image_Style_Filter,
2383
+ "Image Blending Mode": WAS_Image_Blending_Mode,
2384
+ "Image Blend": WAS_Image_Blend,
2385
+ "Image Blend by Mask": WAS_Image_Blend_Mask,
2386
+ "Image Remove Color": WAS_Image_Remove_Color,
2387
+ "Image Threshold": WAS_Image_Threshold,
2388
+ "Image Chromatic Aberration": WAS_Image_Chromatic_Aberration,
2389
+ "Image Bloom Filter": WAS_Image_Bloom_Filter,
2390
+ "Image Blank": WAS_Image_Blank,
2391
+ "Image Film Grain": WAS_Film_Grain,
2392
+ "Image Flip": WAS_Image_Flip,
2393
+ "Image Rotate": WAS_Image_Rotate,
2394
+ "Image Nova Filter": WAS_Image_Nova_Filter,
2395
+ "Image Canny Filter": WAS_Canny_Filter,
2396
+ "Image Edge Detection Filter": WAS_Image_Edge,
2397
+ "Image fDOF Filter": WAS_Image_fDOF,
2398
+ "Image Median Filter": WAS_Image_Median_Filter,
2399
+ "Image Save": WAS_Image_Save,
2400
+ "Image Load": WAS_Load_Image,
2401
+ "Image Levels Adjustment": WAS_Image_Levels,
2402
+ "Image High Pass Filter": WAS_Image_High_Pass_Filter,
2403
+ "Tensor Batch to Image": WAS_Tensor_Batch_to_Image,
2404
+ "Image Select Color": WAS_Image_Select_Color,
2405
+ "Image Select Channel": WAS_Image_Select_Channel,
2406
+ "Image Mix RGB Channels": WAS_Image_RGB_Merge,
2407
+ # LATENT
2408
+ "Latent Upscale by Factor (WAS)": WAS_Latent_Upscale,
2409
+ "Latent Noise Injection": WAS_Latent_Noise,
2410
+ "Image to Latent Mask": WAS_Image_To_Mask,
2411
+ # MIDAS
2412
+ "MiDaS Depth Approximation": MiDaS_Depth_Approx,
2413
+ "MiDaS Mask Image": MiDaS_Background_Foreground_Removal,
2414
+ # CONDITIONING
2415
+ "CLIPTextEncode (NSP)": WAS_NSP_CLIPTextEncoder,
2416
+ # SAMPLING
2417
+ "KSampler (WAS)": WAS_KSampler,
2418
+ "Seed": WAS_Seed,
2419
+ # TEXT
2420
+ "Text Multiline": WAS_Text_Multiline,
2421
+ "Text String": WAS_Text_String,
2422
+ "Text Random Line": WAS_Text_Random_Line,
2423
+ "Text to Conditioning": WAS_Text_to_Conditioning,
2424
+ "Text Concatenate": WAS_Text_Concatenate,
2425
+ "Text Find and Replace": WAS_Search_and_Replace,
2426
+ "Text Parse Noodle Soup Prompts": WAS_Text_Parse_NSP,
2427
+ "Save Text File": WAS_Text_Save,
2428
+ }
2429
+
2430
+ print('\033[34mWAS Node Suite: \033[92mLoaded\033[0m')