Datasets:
question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---
79,320,289 | 2024-12-31 | https://stackoverflow.com/questions/79320289/why-cant-i-wrap-lgbm | I'm using LGBM to forecast the relative change of a numerical quantity. I'm using the MSLE (Mean Squared Log Error) loss function to optimize my model and to get the correct scaling of errors. Since MSLE isn't native to LGBM, I have to implement it myself. But lucky me, the math can be simplified a ton. This is my implementation; class MSLELGBM(LGBMRegressor): def __init__(self, **kwargs): super().__init__(**kwargs) def predict(self, X): return np.exp(super().predict(X)) def fit(self, X, y, eval_set=None, callbacks=None): y_log = np.log(y.copy()) print(super().get_params()) # This doesn't print any kwargs if eval_set: eval_set = [(X_eval, np.log(y_eval.copy())) for X_eval, y_eval in eval_set] super().fit(X, y_log, eval_set=eval_set, callbacks=callbacks) As you can see, it's very minimal. I basically just need to apply a log transform to the model target, and exponentiate the predictions to return to our own non-logarithmic world. However, my wrapper doesn't work. I call the class with; model = MSLELGBM(**lgbm_params) model.fit(data[X_cols_all], data[y_col_train]) And I get the following exception; --------------------------------------------------------------------------- KeyError Traceback (most recent call last) Cell In[31], line 38 32 callbacks = [ 33 lgbm.early_stopping(10, verbose=0), 34 lgbm.log_evaluation(period=0), 35 ] 37 model = MSLELGBM(**lgbm_params) ---> 38 model.fit(data[X_cols_all], data[y_col_train]) 40 feature_importances_df = pd.DataFrame([model.booster_.feature_importance(importance_type='gain')], columns=X_cols_all).T.sort_values(by=0, ascending=False) 41 feature_importances_df.iloc[:30] Cell In[31], line 17 15 if eval_set: 16 eval_set = [(X_eval, np.log(y_eval.copy())) for X_eval, y_eval in eval_set] ---> 17 super().fit(X, y_log, eval_set=eval_set, callbacks=callbacks) File c:\X\.venv\lib\site-packages\lightgbm\sklearn.py:1189, in LGBMRegressor.fit(self, X, y, sample_weight, init_score, eval_set, eval_names, eval_sample_weight, eval_init_score, eval_metric, feature_name, categorical_feature, callbacks, init_model) 1172 def fit( # type: ignore[override] 1173 self, 1174 X: _LGBM_ScikitMatrixLike, (...) 1186 init_model: Optional[Union[str, Path, Booster, LGBMModel]] = None, 1187 ) -> "LGBMRegressor": 1188 """Docstring is inherited from the LGBMModel.""" ... --> 765 if isinstance(params["random_state"], np.random.RandomState): 766 params["random_state"] = params["random_state"].randint(np.iinfo(np.int32).max) 767 elif isinstance(params["random_state"], np.random.Generator): KeyError: 'random_state' I have no idea how random_state is missing from the fit method, as it isnt even required for that function. I get the impression that this is a complicated software engineering issue that's above my head. Anybody knows whats up? If it's of any help, I tried illustrating what I want using a simpler non-lgbm structure; I just want to pass whatever parameters I provide to the MSLELGBM to the original LGBM, but I'm running into a ton of issues doing so. | Root Cause scikit-learn expects that each of the keyword arguments to an estimator's __init__() will exactly correspond to a public attribute on instances of the class. Per https://scikit-learn.org/stable/developers/develop.html every keyword argument accepted by __init__ should correspond to an attribute on the instance. 
Scikit-learn relies on this to find the relevant attributes to set on an estimator when doing model selection Its .get_params() method on estimators take advantage of this by inspecting the signature of __init__() to figure out which attributes to expect (scikit-learn / sklearn / base.py). lightgbm's estimators call .get_params() and then expect the key "random_state" to exist in the dictionary it returns... because that parameter is in the keyword arguments to LGBMRegressor (LightGBM / python-package / lightgbm / sklearn.py). Your estimator's __init__() does not have random_state as a keyword argument, so when self.get_params() is called it returns a dictionary that does not contain "random_state", leading to the error your observed. How to fix this If you do not need to add any other custom parameters, then just do not define an __init__() method on your subclass. Here's a minimal, reproducible example that works with lightgbm 4.5.0 and Python 3.11: import numpy as np from lightgbm import LGBMRegressor from sklearn.datasets import make_regression class MSLELGBM(LGBMRegressor): def predict(self, X): return np.exp(super().predict(X)) def fit(self, X, y, eval_set=None, callbacks=None): y_log = np.log(y.copy()) if eval_set: eval_set = [(X_eval, np.log(y_eval.copy())) for X_eval, y_eval in eval_set] super().fit(X, y_log, eval_set=eval_set, callbacks=callbacks) # modifying bias and tail_strength to ensure every value in 'y' is positive X, y = make_regression( n_samples=5_000, n_features=3, bias=500.0, tail_strength=0.001, random_state=708, ) reg = MSLELGBM(num_boost_round=5) # print params (you'll see all the LGBMRegressor params) reg.get_params() # fit the model reg.fit(X, y) If you do need to define any custom parameters, then for lightgbm<=4.5.0: add an __init__() on your subclass copy all of the parameters from the signature of lightgbm.LGBMModel.__init__() into that __init__() call super().__init__() in your subclass's __init__(), and pass it all of the keyword arguments explicitly 1 at a time with = Like this: class MSLELGBM(LGBMRegressor): # just including 'random_state' to keep it short... you # need to include more params here, depending on LightGBM version def __init__(self, random_state=None, **kwargs): super().__init__( random_state=random_state, **kwargs ) | 1 | 1 |
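The following is a minimal sketch of the `get_params()` contract described in the answer above (the class name `BrokenMSLELGBM` is an illustrative assumption; it assumes lightgbm and scikit-learn are installed). scikit-learn derives the parameter list by inspecting the subclass's `__init__()` signature, so a constructor that only accepts `**kwargs` hides `random_state` from the dictionary that LightGBM later indexes in `fit()`.

```python
from lightgbm import LGBMRegressor

class BrokenMSLELGBM(LGBMRegressor):
    # Funnelling everything through **kwargs removes the named parameters
    # from the signature that scikit-learn's get_params() inspects.
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

print("random_state" in LGBMRegressor().get_params())   # True
print("random_state" in BrokenMSLELGBM().get_params())  # False -> KeyError('random_state') in fit()
```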
79,320,303 | 2024-12-31 | https://stackoverflow.com/questions/79320303/artifacts-with-pygame-when-trying-to-update-visible-sprites-only | I'm learning the basics of the pygame library and already struggling. The "game" at this point only has a player and walls. There are 2 main surfaces: "world" (the actual game map) and "screen" (which serves as a viewport for "view_src" w/ scaling & scrolling, "viewport" is the corresponding rect). Here's the problem: I want to implement at least basic optimisation and only render sprites that are actually visible, so I'm filtering the "all" group to whatever collides with the viewport. That acts as expected. But when I call the rendering functions on the "visible" ad hoc group I get artifacts whereas calling them on "all" works just fine. Here's the relevant snippet from the game loop: # clear old sprites all.clear(world, background) # this should clear the OLD position of all sprites, right? # handle input and generic game logic here if player.move(key_state, walls) != (0,0): # moves the player's rect if possible scroll_view(world, player.last_move, view_src) # shifts view_src if applicable # this does very little and should be unrelated to the issue all.update() # draw the new scene visible = pg.sprite.Group([ spr for spr in all.sprites() if view_src.colliderect(spr.rect) ]) print(visible.sprites()) # confirms the visible sprites are chosen correctly visible.draw(world) # results in drawing each sprite in its new AND old position #all.draw(world) # acts as it should if used instead scaled = pg.transform.scale(world.subsurface(view_src), viewport.size) screen.blit(scaled, viewport.topleft) pg.display.flip() (I do .empty() the "visible" group at the end of the loop) Even if I determine "visible" earlier and call visible.clear(world, background) and then go all.draw(world) I get the exact same issue, it only works if both .clear() and .draw() are called on "all". This is already after consulting an AI which told me this works just fine so hopefully a good old fashioned human can point me in the right direction. | Found the problem and the fix thanks to Kingsley's nudge. The issue: Group.clear() clears the sprites drawn by the last .draw() of that exact same group. So using a different group for .clear() and .draw() doesn't work, and the continuity it needs to function is also lost by re-assigning the "visible" group each time. The solution: Initialise "visible" before the loop, persist it between iterations and add/remove sprites as needed. Fixed code: # clear old sprites visible.clear(world, background) # clears sprites from the last .draw() # handle input and generic game logic here if player.move(key_state, walls) != (0,0): scroll_view(world, player, view_src) # "step event", update positions etc here all.update() # draw the new scene visible.empty() visible.add([ spr for spr in all if view_src.colliderect(spr.rect) ]) visible.draw(world) render_view(screen, world, view_src, viewport) # this is still what it was before | 2 | 0 |
79,316,973 | 2024-12-30 | https://stackoverflow.com/questions/79316973/improve-computational-time-and-memory-usage-of-the-calculation-of-a-large-matrix | I want to calculate a Matrix G that its elements is a scalar and are calculated as: I want to calculated this matrix for a large n > 10000, d>30. My code is below but it has a huge overhead and it still takes very long time. How can I make this computation at the fastest possible way? Without using GPU and Minimize the memory usage. import numpy as np from sklearn.gaussian_process.kernels import Matern from tqdm import tqdm from joblib import Parallel, delayed # Pre-flattened computation to minimize data transfer overhead def precompute_differences(R, Z): n, d = R.shape R_diff_flat = (R[:, None, :] - R[None, :, :]).reshape(n * n, d) Z_diff = Z[:, None, :] - Z[None, :, :] return R_diff_flat, Z_diff def compute_G_row(i, R_diff_flat, Z_diff, W, gamma_val, kernel, n, d): """ Compute the i-th row for j >= i and store them in a temporary array. """ row_values = np.zeros(n) for j in range(i, n): Z_ij = gamma_val * Z_diff[i, j].reshape(1, d) K_flat = kernel(R_diff_flat, Z_ij) K_ij = K_flat.reshape(n, n) row_values[j] = np.sum(W * K_ij) return i, row_values def compute_G(M, gamma, R, Z, nu=1.5, length_scale=1.0, use_parallel=True): """ Compute the G matrix with fewer kernel evaluations by exploiting symmetry: G[i,j] = G[j,i]. We only compute for j >= i, then mirror the result. """ R = np.asarray(R) Z = np.asarray(Z) M = np.asarray(M).reshape(-1, 1) # ensure (n,1) n, d = R.shape # Precompute data R_diff_flat, Z_diff = precompute_differences(R, Z) W = M @ M.T # Weight matrix G = np.zeros((n, n)) kernel = Matern(length_scale=length_scale, nu=nu) if use_parallel and n > 1: # Parallel computation results = Parallel(n_jobs=-1)( delayed(compute_G_row)(i, R_diff_flat, Z_diff, W, gamma, kernel, n, d) for i in tqdm(range(n), desc="Computing G matrix") ) else: # Single-threaded computation results = [] for i in tqdm(range(n), desc="Computing G matrix"): row_values = np.zeros(n) for j in range(i, n): Z_ij = gamma * Z_diff[i, j].reshape(1, d) K_flat = kernel(R_diff_flat, Z_ij) K_ij = K_flat.reshape(n, n) row_values[j] = np.sum(W * K_ij) results.append((i, row_values)) # Sort and fill final G by symmetry results.sort(key=lambda x: x[0]) for i, row_vals in results: for j in range(i, n): G[i, j] = row_vals[j] G[j, i] = row_vals[j] # mirror for symmetry # Delete auxiliary variables to save memory del R_diff_flat, Z_diff, W, kernel, results # Optional checks is_symmetric = np.allclose(G, G.T, atol=1e-8) eigenvalues = np.linalg.eigvalsh(G) is_semi_positive_definite = np.all(eigenvalues >= -1e-8) print(f"G is semi-positive definite: {is_semi_positive_definite}") print(f"G is symmetric: {is_symmetric}") # Delete all local auxiliary variables except G to save memory local_vars = list(locals().keys()) for var_name in local_vars: if var_name not in ["G"]: del locals()[var_name] return G Toy Example # Example usage: if __name__ == "__main__": __spec__ = None n = 20 d = 10 gamma = 0.9 R = np.random.rand(n, d) Z = np.random.rand(n, d) M = np.random.rand(n, 1) G = compute_G(M, gamma, R, Z, nu=1.5, length_scale=1.0, use_parallel=True) print("G computed with shape:", G.shape) | A convenient way is to note that each entry could also be written as : with above notation the computation could be much easier and: import numpy as np from tqdm import tqdm from sklearn.gaussian_process.kernels import Matern from yaspin import yaspin import time from memory_profiler import profile 
##----------------- @profile def G_einsum_block(M, gamma, R, Z, nu=1.5, length_scale=1.0, block_size=100): n, d = R.shape G = np.zeros((n, n)) Gamma = M.ravel() # Ensure shape is (n,) # Initialize the Matern kernel kernel = Matern(length_scale=length_scale, nu=nu) # with yaspin(text="Computing Matrix G", spinner="dots") as spinner: # Iterate over chunks of ell for ell_start in tqdm(range(0, n, block_size), desc="Computing G by ell-Chunks"): ell_end = min(ell_start + block_size, n) ell_indices = np.arange(ell_start, ell_end) Gamma_ell = Gamma[ell_indices] # Compute shifted points for current ell chunk # Shape: (n, block_size, d) X_ell = gamma * Z[:, np.newaxis, :] + R[ell_indices] # Iterate over chunks of m for m_start in range(0, n, block_size): m_end = min(m_start + block_size, n) m_indices = np.arange(m_start, m_end) Gamma_m = Gamma[m_indices] # Compute shifted points for current m chunk # Shape: (n, block_size, d) X_m = gamma * Z[:, np.newaxis, :] + R[m_indices] # Reshape for kernel computation # Each pair (i, ell) and (j, m) needs to be compared # We compute pairwise distances between X_ell and X_m # To vectorize, reshape to (n * block_size, d) X_i_ell = X_ell.reshape(n * (ell_end - ell_start), d) X_j_m = X_m.reshape(n * (m_end - m_start), d) # Compute the kernel matrix for the current chunks # Shape: (n * block_size, n * block_size) K_chunk = kernel(X_i_ell, X_j_m) # Reshape K_chunk to (n, ell_chunk, n, m_chunk) K_chunk = K_chunk.reshape(n, ell_end - ell_start, n, m_end - m_start) # Multiply by M for current chunks # Shape: (ell_chunk,) and (m_chunk,) # Use broadcasting in einsum # 'iljm,l,m->ij' corresponds to: # i: row index of G # j: column index of G # l: current ell chunk # m: current m chunk G += np.einsum('iljm,l,m->ij', K_chunk, Gamma_ell, Gamma_m) # spinner.ok("✔") print("") # Optional checks is_symmetric = np.allclose(G, G.T, atol=1e-8) eigenvalues = np.linalg.eigvalsh(G) is_semi_positive_definite = np.all(eigenvalues >= -1e-8) print(f"G is semi-positive definite: {is_semi_positive_definite}") print(f"G is symmetric: {is_symmetric}") return G #%% ##-------------------------------------------------------------- ### --- Example usage --- #### if __name__ == "__main__": # Example dimensions n = 20 d = 10 gamma = 0.9 # Generate dummy data R = np.random.rand(n, d) Z = np.random.rand(n, d) M = np.random.rand(n, 1) # Compute G with a progress bar G = G_einsum_block(M, gamma, R, Z, nu=1.5, length_scale=1.0) print("Shape of G:", G.shape) | 1 | 2 |
79,313,502 | 2024-12-28 | https://stackoverflow.com/questions/79313502/extracting-owner-s-username-from-nested-page-on-huggingface | I am scraping the HuggingFace research forum (https://discuss.huggingface.co/c/research/7/l/latest) using Selenium. I am able to successfully extract the following attributes from the main page of the forum: Activity Date View Count Replies Count Title URL However, I am encountering an issue when trying to extract the owner’s username from the individual topic pages. The owner’s username is located on a nested page that is accessible via the URL found in the main page’s topic link. For example, on the main page, I have the following HTML snippet for a topic: from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC import time # Set up Chrome options to use headless mode (for Colab) chrome_options = Options() chrome_options.add_argument("--headless") # Run in headless mode chrome_options.add_argument("--no-sandbox") chrome_options.add_argument("--disable-dev-shm-usage") chrome_options.add_argument("--disable-gpu") chrome_options.add_argument("--window-size=1920,1080") chrome_options.add_argument("--disable-infobars") chrome_options.add_argument("--disable-popup-blocking") chrome_options.add_argument("--ignore-certificate-errors") chrome_options.add_argument("--incognito") chrome_options.add_argument("user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36") chrome_options.add_experimental_option("excludeSwitches", ["enable-automation"]) chrome_options.add_experimental_option("useAutomationExtension", False) # Set the path to chromedriver explicitly (installed by apt) chrome_path = "/usr/bin/chromedriver" # Initialize the WebDriver with the updated path driver = webdriver.Chrome(options=chrome_options) # Open the HuggingFace page url = "https://discuss.huggingface.co/c/research/7/l/latest" # URL for HuggingFace Issues driver.get(url) # Wait for the page to load time.sleep(6) def scrape_huggingface_issues(): titles_and_links = [] seen_titles_and_links = set() owner = [] replies = [] views = [] activity = [] while True: try: # Find all issue rows (elements in the table) elements = driver.find_elements(By.CSS_SELECTOR, 'tr.topic-list-item') # Extract and store the titles, links, and other data for elem in elements: topic_id = elem.get_attribute("data-topic-id") if topic_id in seen_titles_and_links: continue seen_titles_and_links.add(topic_id) # Extract title and link selected_title = elem.find_element(By.CSS_SELECTOR, 'a.title.raw-link.raw-topic-link') title = selected_title.text.strip() relative_link = selected_title.get_attribute('href') # Get the relative URL from the href attribute full_link = relative_link # Construct the absolute URL (if needed) # Extract replies count try: replies_elem = elem.find_element(By.CSS_SELECTOR, 'button.btn-link.posts-map.badge-posts') replies_count = replies_elem.find_element(By.CSS_SELECTOR, 'span.number').text.strip() except: replies_count = "0" # Extract views count try: views_elem = elem.find_element(By.CSS_SELECTOR, 'td.num.views.topic-list-data') views_count = views_elem.find_element(By.CSS_SELECTOR, 'span.number').text.strip() except: views_count = "0" # Extract activity (last activity) try: activity_elem = 
elem.find_element(By.CSS_SELECTOR, 'td.num.topic-list-data.age.activity') activity_text = activity_elem.get_attribute('title').strip() except: activity_text = "N/A" # Use the helper function to get the owner info from the topic page owner_text = scrape_issue_details(relative_link) # Store the extracted data in the lists titles_and_links.append((title, full_link, owner_text, replies_count, views_count, activity_text)) seen_titles_and_links.add((title, full_link)) # Add to the seen set to avoid duplicates # Scroll down to load more content (if the forum uses infinite scroll) driver.find_element(By.TAG_NAME, "body").send_keys(Keys.END) time.sleep(3) # Adjust based on loading speed # Check if the "Next" button is available and click it try: next_button = driver.find_element(By.CSS_SELECTOR, 'a.next.page-numbers') next_button.click() time.sleep(3) # Wait for the next page to load except: # If there's no "Next" button, exit the loop print("No more pages to scrape.") break except Exception as e: print(f"Error occurred: {e}") continue return titles_and_links def scrape_issue_details(url): """ Navigate to the topic page and scrape additional details like the owner's username. """ # Go to the topic page driver.get(url) time.sleep(3) # Wait for the page to load # Extract the owner's username try: owner_elem = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, 'span.first.username.new-user'))) owner_username_fetch = owner_elem.find_element(By.CSS_SELECTOR, 'a').text.strip() owner_username = owner_elem.text.strip() # Extract the username from the link except Exception as e: owner_username = "N/A" # Default value if no owner found return owner_username # Scrape the HuggingFace issues across all pages issues = scrape_huggingface_issues() # Print the titles, links, and additional data (owner, replies, views, activity) print("Scraped Titles, Links, Owner, Replies, Views, Activity:") for i, (title, link, owner_text, replies_count, views_count, activity_text) in enumerate(issues, 1): print(f"{i}: {title} - {link} - Owner: {owner_text} - Replies: {replies_count} - Views: {views_count} - Activity: {activity_text}") # Close the browser driver.quit() Problem: I cannot fetch the owner’s username from the individual topic page. After following the URL, I am unable to locate and extract the owner’s username even though I know its location in the HTML. <a href="/t/model-that-can-generate-both-text-and-image-as-output/132209" role="heading" aria-level="2" class="title raw-link raw-topic-link" data-topic-id="132209">Model that can generate both text and image as output</a> The owner’s username is located on the topic’s individual page at the following HTML snippet: <span class="first username new-user"><a href="/u/InsertOPUsername" data-user-card="InsertOPUsername" class="">InsertOPUsername</a></span> What I’ve Tried: I used driver.get(url) to navigate to the individual topic pages. I attempted to locate the username using WebDriverWait and the correct CSS selector (span.first.username.new-user a). I am successfully scraping other details like Activity, Views, and Replies from the main page but unable to retrieve the owner’s username from the topic page. | All the data you're after comes from two API endpoints. Most of what you already have can be fetched from the frist one. If you follow the post, you'll get even more data and you'll find the posters section, there you can find your owner aka Original Poster. This is just to push you in the right direction (and no selenium needed!). 
Once you know the endpoints you can massage the data to whatever you like it to be. import requests from tabulate import tabulate API_ENDPOINT = "https://discuss.huggingface.co/c/research/7/l/latest.json?filter=latest" TRACK_ENDPOINT = "https://discuss.huggingface.co/t/{}.json?track_visit=true&forceLoad=true" HEADERS = { "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36 Edg/131.0.0.0", "Accept": "application/json", "X-Requested-With": "XMLHttpRequest" } def get_posters(track_id: str, current_session: requests.Session) -> dict: track = current_session.get(TRACK_ENDPOINT.format(track_id), headers=HEADERS) posts = track.json()["post_stream"]["posts"] return { "owner": posts[0]["username"], "owner_name": posts[0]["name"], "owner_id": posts[0]["id"], "posters": [p["name"] for p in posts], } with requests.Session() as session: response = session.get(API_ENDPOINT, headers=HEADERS) topics_data = response.json()["topic_list"]["topics"] topics = [] for topic in topics_data: posters = get_posters(topic["id"], session) topics.append( [ topic["title"], f"https://discuss.huggingface.co/t/{topic['slug']}/{topic['id']}", topic["posts_count"], topic["views"], topic["like_count"], topic["id"], posters["owner_name"], posters["owner_id"], # ", ".join(posters["posters"]), ] ) columns = ["Title", "URL", "Posts", "Views", "Likes", "ID", "Owner", "Owner ID"] table = tabulate(topics, headers=columns, tablefmt="pretty", stralign="left") print(table) You should get this table: +----------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------+-------+-------+-------+--------+------------------------+----------+ | Title | URL | Posts | Views | Likes | ID | Owner | Owner ID | +----------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------+-------+-------+-------+--------+------------------------+----------+ | Merry Christmas & We have released "Awesome-Neuro-Symbolic-Learning-with-LLM" | https://discuss.huggingface.co/t/merry-christmas-we-have-released-awesome-neuro-symbolic-learning-with-llm/133045 | 1 | 36 | 4 | 133045 | Lan-Zhe Guo | 191786 | | Why do some commits have zero insertions and zero deletions? 
| https://discuss.huggingface.co/t/why-do-some-commits-have-zero-insertions-and-zero-deletions/132603 | 1 | 12 | 0 | 132603 | Sandra | 191238 | | Model that can generate both text and image as output | https://discuss.huggingface.co/t/model-that-can-generate-both-text-and-image-as-output/132209 | 5 | 73 | 7 | 132209 | Bibhuti Bhusan Padhi | 190689 | | Using mixup on RoBERTa | https://discuss.huggingface.co/t/using-mixup-on-roberta/306 | 8 | 2228 | 8 | 306 | FRAN Valero | 576 | | Seeking Guidance on Training a Model for Generating Gregorian Chant Music | https://discuss.huggingface.co/t/seeking-guidance-on-training-a-model-for-generating-gregorian-chant-music/131700 | 3 | 21 | 4 | 131700 | Martim Ramos | 189949 | | Interest in Contributing PEFT Educational Resources - Seeking Community Input | https://discuss.huggingface.co/t/interest-in-contributing-peft-educational-resources-seeking-community-input/131143 | 3 | 30 | 6 | 131143 | Jen Wei | 188941 | | LLM for analysing JSON data | https://discuss.huggingface.co/t/llm-for-analysing-json-data/130407 | 2 | 67 | 2 | 130407 | S. Gow | 188022 | | Models for Document Image Annotation Without OCR | https://discuss.huggingface.co/t/models-for-document-image-annotation-without-ocr/129604 | 2 | 109 | 3 | 129604 | Pavel Spirin | 186986 | | Get gaierror when trying to access HF Token for login | https://discuss.huggingface.co/t/get-gaierror-when-trying-to-access-hf-token-for-login/128870 | 3 | 36 | 3 | 128870 | S. Gow | 186043 | | Evaluation metrics for BERT-like LMs | https://discuss.huggingface.co/t/evaluation-metrics-for-bert-like-lms/1256 | 5 | 4455 | 1 | 1256 | Vladimir Blagojevic | 3083 | | Introducing ClearerVoice-Studio: Your One-Stop Speech Processing Platform! | https://discuss.huggingface.co/t/introducing-clearervoice-studio-your-one-stop-speech-processing-platform/129193 | 3 | 92 | 0 | 129193 | Alibaba_Speech_Lab_SG | 186434 | | Seeking Advice on Building a Custom Virtual Try-On Model Using Pre-Existing Models | https://discuss.huggingface.co/t/seeking-advice-on-building-a-custom-virtual-try-on-model-using-pre-existing-models/128946 | 1 | 44 | 1 | 128946 | Abeer Ilyas | 186127 | | LLM Hackathon in Ecology | https://discuss.huggingface.co/t/llm-hackathon-in-ecology/128906 | 1 | 35 | 0 | 128906 | Jennifer D'Souza | 186080 | | Retrieving Meta Data on Models for Innovation Research | https://discuss.huggingface.co/t/retrieving-meta-data-on-models-for-innovation-research/128646 | 1 | 33 | 1 | 128646 | Fabian F | 185762 | | (Research/Personal) Projects Ideas | https://discuss.huggingface.co/t/research-personal-projects-ideas/71651 | 3 | 1410 | 0 | 71651 | HeHugging | 111782 | | Understanding Technical Drawings | https://discuss.huggingface.co/t/understanding-technical-drawings/78903 | 2 | 287 | 1 | 78903 | Yakoi | 121186 | | Ionic vs. React Native vs. Flutter | https://discuss.huggingface.co/t/ionic-vs-react-native-vs-flutter/128132 | 1 | 97 | 0 | 128132 | yaw | 185084 | | Choosing Benchmarks for Fine-Tuned Models in Emotion Analysis | https://discuss.huggingface.co/t/choosing-benchmarks-for-fine-tuned-models-in-emotion-analysis/127106 | 1 | 38 | 1 | 127106 | Pavol | 183654 | | I have a project Skin Lens Please can you fill the form | https://discuss.huggingface.co/t/i-have-a-project-skin-lens-please-can-you-fill-the-form/108980 | 2 | 48 | 2 | 108980 | Soopramanien | 158453 | | How does an API work? 
| https://discuss.huggingface.co/t/how-does-an-api-work/121828 | 5 | 102 | 2 | 121828 | riddhi patel | 176354 | | More expressive attention with negative weights | https://discuss.huggingface.co/t/more-expressive-attention-with-negative-weights/119667 | 2 | 252 | 4 | 119667 | AngLv | 173243 | | Biases in AI Hallucinations Based on Context | https://discuss.huggingface.co/t/biases-in-ai-hallucinations-based-on-context/117082 | 1 | 28 | 1 | 117082 | That Prommolmard | 169443 | | RAG performance | https://discuss.huggingface.co/t/rag-performance/116048 | 1 | 59 | 1 | 116048 | Salah Ghalyon | 168143 | | Gangstalkers AI harassment voice to skull | https://discuss.huggingface.co/t/gangstalkers-ai-harassment-voice-to-skull/115897 | 1 | 87 | 0 | 115897 | Andrew Cruz AKA OmegaT | 167944 | | How Pika Effects works? 🤔 | https://discuss.huggingface.co/t/how-pika-effects-works/115760 | 1 | 45 | 0 | 115760 | JiananZHU | 167769 | | An idea about LLMs | https://discuss.huggingface.co/t/an-idea-about-llms/115462 | 1 | 56 | 1 | 115462 | Garrett Johnson | 167279 | | Different response from different UI's | https://discuss.huggingface.co/t/different-response-from-different-uis/115192 | 3 | 49 | 2 | 115192 | Marvin Snell | 166941 | | Gradio is more than UI? | https://discuss.huggingface.co/t/gradio-is-more-than-ui/114715 | 5 | 62 | 4 | 114715 | Zebra | 166264 | | Narrative text generation | https://discuss.huggingface.co/t/narrative-text-generation/114869 | 2 | 43 | 1 | 114869 | QUANGDUC | 166472 | | Say goodbye to manual testing of your LLM-based apps – automate with EvalMy.AI beta! 🚀 | https://discuss.huggingface.co/t/say-goodbye-to-manual-testing-of-your-llm-based-apps-automate-with-evalmy-ai-beta/114533 | 1 | 38 | 1 | 114533 | Petr Pascenko | 166007 | +----------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------+-------+-------+-------+--------+------------------------+----------+ Bonus: To get more of the latest you can paginate the API by adding the page=<PAGE_VALUE> parameter to the first endpoint. For example, latest.json?page=2 | 2 | 2 |
79,319,663 | 2024-12-31 | https://stackoverflow.com/questions/79319663/fastapi-apache-409-response-from-fastapi-is-converted-to-502-what-can-be-the | I have a FastAPI application, which, in general, works fine. My setup is Apache as a proxy and FastAPI server behind it. This is the apache config: ProxyPass /fs http://127.0.0.1:8000/fs retry=1 acquire=3000 timeout=600 Keepalive=On disablereuse=ON ProxyPassReverse /fs http://127.0.0.1:8000/fs I have one endpoint that can return 409 HTTP response, if an object exists. FastAPI works correctly. I can see in logs: INFO: 172.**.0.25:0 - "PUT /fs/Automation/123.txt HTTP/1.1" 409 Conflict But the final response to the client is "502 Bad Gateway". Apache error log has a record for this: [Tue Dec 31 04:45:54.545972 2024] [proxy:error] [pid 3019178:tid 140121168807680] (32)Broken pipe: [client 172.31.0.25:63759] AH01084: pass request body failed to 127.0.0.1:8000 (127.0.0.1), referer: https://10.100.21.13/fs/view/Automation [Tue Dec 31 04:45:54.545996 2024] [proxy_http:error] [pid 3019178:tid 140121168807680] [client 172.31.0.25:63759] AH01097: pass request body failed to 127.0.0.1:8000 (127.0.0.1) from 172.31.0.25 (), referer: https://10.100.21.13/fs/view/Automation What can be the reason? Another interesting thing is that it doesn't happen for any PUT request. How can I debug this? Maybe FastAPI has to return something else, some header? Or it returns too much , some extra data? How to catch this? | So, i have found the reason. When there is file upload you need to read the input buffer in any case, even if you want to return the error. In my case i had to add try: except: to empty the buffer when exception happens. Something like try: ... my original code except Exception as e: # Empty input buffer here to avoid proxy problems await request.body() raise e | 1 | 0 |
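A hypothetical minimal endpoint sketching the workaround from the answer above (the route and the in-memory `_store` are illustrative placeholders, not the original application): the unread upload is drained with `await request.body()` before the 409 is raised, so Apache never sees a broken pipe while it is still streaming the request body.

```python
from fastapi import FastAPI, HTTPException, Request

app = FastAPI()
_store: dict[str, bytes] = {}  # stand-in for the real file storage

@app.put("/fs/{path:path}")
async def put_file(path: str, request: Request):
    if path in _store:
        await request.body()  # consume the pending upload so the proxy connection stays clean
        raise HTTPException(status_code=409, detail="Object already exists")
    _store[path] = await request.body()
    return {"stored": path}
```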
79,316,958 | 2024-12-30 | https://stackoverflow.com/questions/79316958/mlagents-learn-help-is-giving-errors-python-3-11-3-10-3-9-3-8 | I am trying to install mlagents. I got to the part in python but after creating a virtual enviorment with pyenv and setting the local version to 3.10, 3.9, and 3.8 it works on none of them. I upgraded pip, installed mlagents, then torch,torchvision, and torchaudio. Then I tested mlagents-learn --help and then because of a error installed protobuf 3.20.3. I then tested again to get the following error (venv) D:\Unity\AI Ecosystem>mlagents-learn --help Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "D:\Unity\AI Ecosystem\venv\Scripts\mlagents-learn.exe\__main__.py", line 4, in <module> File "D:\Unity\AI Ecosystem\venv\Lib\site-packages\mlagents\trainers\learn.py", line 2, in <module> from mlagents import torch_utils File "D:\Unity\AI Ecosystem\venv\Lib\site-packages\mlagents\torch_utils\__init__.py", line 1, in <module> from mlagents.torch_utils.torch import torch as torch # noqa ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Unity\AI Ecosystem\venv\Lib\site-packages\mlagents\torch_utils\torch.py", line 6, in <module> from mlagents.trainers.settings import TorchSettings File "D:\Unity\AI Ecosystem\venv\Lib\site-packages\mlagents\trainers\settings.py", line 644, in <module> class TrainerSettings(ExportableSettings): File "D:\Unity\AI Ecosystem\venv\Lib\site-packages\mlagents\trainers\settings.py", line 667, in TrainerSettings cattr.register_structure_hook( File "D:\Unity\AI Ecosystem\venv\Lib\site-packages\cattr\converters.py", line 207, in register_structure_hook self._structure_func.register_cls_list([(cl, func)]) File "D:\Unity\AI Ecosystem\venv\Lib\site-packages\cattr\dispatch.py", line 55, in register_cls_list self._single_dispatch.register(cls, handler) File "C:\Users\Ebrah\AppData\Local\Programs\Python\Python311\Lib\functools.py", line 864, in register raise TypeError( TypeError: Invalid first argument to `register()`. typing.Dict[mlagents.trainers.settings.RewardSignalType, mlagents.trainers.settings.RewardSignalSettings] is not a class or union type. I tried installing cattrs 1.5.0 but the error remains. As I said before I also tried in 3.11, 3.10, 3.9 and 3.8 and got the same error in all of them. My unity version is 2022.3.5f1 but I don't see how that would make a difference. My pyenv version is 3.1.1. I am on windows 11 and am using pyenv-win. | Try deleting your unity project and making a new one. Unity says to use conda so try that too. Use python 3.9. | 1 | 2 |
79,318,540 | 2024-12-30 | https://stackoverflow.com/questions/79318540/django-model-foreign-key-to-whichever-model-calls-it | I am getting back into Django after a few years, and am running into the following problem. I am making a system where there are 2 models; a survey, and an update. I want to make a notification model that would automatically have an object added when I add a survey object or update object, and the notification object would have a foreign key to the model object which caused it to be added. However I am running into a brick wall figuring out how I would do this, to have a model with a foreign key which can be to one of two models, which would be automatically set to the model object which creates it. Any help with this would be appreciated. I am trying to make a model that looks something like this (psuedocode): class notification(models.model): source = models.ForeignKey(to model that created it) #this is what I need help with start_date = models.DateTimeField(inherited from model that created it) end_date = models.DateTimeField(inherited from model that created it) Also, just to add some context to the question and in case I am looking at this from the wrong angle, I am wanting to do this because both surveys and updates will be displayed on the same page, so my plan is to query the notification model, and then have the view do something like this: from .models import notification notifications = notification.objects.filter(start_date__lte=now, end_date__gte=now).order_by('-start_date') for notification in notifications: if notification.__class__.__name__ == "survey_question": survey = notification.survey_question.all() question = survey.question() elif notification.__class__.__name__ == "update": update = notification.update.all() update = update.update() I am also doing this instead of combining the 2 queries and then sorting them by date as I want to have notifications for each specific user anyways, so my plan is (down the road) to have a notification created for each user. Here are my models (that I reference in the question): from django.db import models from datetime import timedelta from django.utils import timezone def tmrw(): return timezone.now() + timedelta(days=1) class update(models.Model): update = models.TextField() start_date = models.DateTimeField(default=timezone.now, null=True, blank=True) end_date = models.DateTimeField(default=tmrw, null=True, blank=True) class Meta: verbose_name = 'Update' verbose_name_plural = f'{verbose_name}s' class survey_question(models.Model): question = models.TextField() start_date = models.DateTimeField(default=timezone.now, null=True, blank=True) end_date = models.DateTimeField(default=tmrw, null=True, blank=True) class Meta: verbose_name = 'Survey' verbose_name_plural = f'{verbose_name}s' | GenericForeignKey to the rescue: A normal ForeignKey can only “point to” one other model, which means that if the TaggedItem model used a ForeignKey it would have to choose one and only one model to store tags for. 
The contenttypes application provides a special field type (GenericForeignKey) which works around this and allows the relationship to be with any model from django.contrib.contenttypes.fields import GenericForeignKey from django.contrib.contenttypes.models import ContentType class notification(models.model): content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE) object_id = models.PositiveIntegerField() source = GenericForeignKey("content_type", "object_id") EDIT: How to use survey_object = survery_question.objects.first() notification_for_survey = notification.objects.create(source=survey_object) update_object = update.objects.first() notification_for_update = notification.objects.create(source=update_object) | 2 | 2 |
79,319,263 | 2024-12-31 | https://stackoverflow.com/questions/79319263/why-does-geopandas-dissolve-function-keep-working-forever | All, I am trying to use the Geopandas dissolve function to aggregate a few countries; the function countries.dissolve keeps running forever. Here is a minimal script. import geopandas as gpd shape='/Volumes/TwoGb/shape/fwdshapfileoftheworld/' countries=gpd.read_file(shape+'TM_WORLD_BORDERS-0.3.shp') # Add columns countries['wmosubregion'] = '' countries['dummy'] = '' country_count = len(countries) # If the country list is empty then use all countries. country_list=['SO','SD','DJ','KM'] default = 'Null' for i in range(country_count): countries.at[i, 'wmosubregion'] = default if countries.ISO2[i] in country_list: countries.at[i, 'wmosubregion'] = "EAST_AFRICA" print(countries.ISO2[i]) region_shapes = countries.dissolve(by='wmosubregion') I am using the TM_WORLD_BORDERS-0.3 shape files, which is freely accessible. You can get the shape files (TM_WORLD_BORDERS-0.3.shp, TM_WORLD_BORDERS-0.3.dbf, TM_WORLD_BORDERS-0.3.shx, TM_WORLD_BORDERS-0.3.shp ) from the following GitHub https://github.com/rmichnovicz/Sick-Slopes/tree/master Thanks | Dissolve is working when I try it, it finishes in a few seconds. My Geopandas version is 1.0.1. import geopandas as gpd df = gpd.read_file(r"C:\Users\bera\Downloads\TM_WORLD_BORDERS-0.3.shp") df.plot(column="NAME") df2 = df.dissolve() df2.plot() There are some invalid geometries that might cause problems for you? Try fixing them: #df.geometry.is_valid.all() #np.False_ #Four geometries are invalid df.loc[~df.geometry.is_valid] # FIPS ISO2 ... LAT geometry # 23 CA CA ... 59.081 MULTIPOLYGON (((-65.61362 43.42027, -65.61972 ... # 32 CI CL ... -23.389 MULTIPOLYGON (((-67.21278 -55.89362, -67.24695... # 154 NO NO ... 61.152 MULTIPOLYGON (((8.74361 58.40972, 8.73194 58.4... # 174 RS RU ... 61.988 MULTIPOLYGON (((131.87329 42.95694, 131.82413 ... # [4 rows x 12 columns] df.geometry = df.geometry.make_valid() #df.geometry.is_valid.all() #np.True_ | 1 | 2 |
79,318,939 | 2024-12-31 | https://stackoverflow.com/questions/79318939/loaded-keras-model-throws-error-while-predicting-likely-issues-with-masking | I am currently developing and testing a RNN that relies upon a large amount of data for training, and so have attempted to separate my training and testing files. I have one file where I create, train, and save a tensorflow.keras model to a file 'model.keras' I then load this model in another file and predict some values, but get the following error: Failed to convert elements of {'class_name': '__tensor__', 'config': {'dtype': 'float64', 'value': [0.0, 0.0, 0.0, 0.0]}} to Tensor. Consider casting elements to a supported type. See https://www.tensorflow.org/api_docs/python/tf/dtypes for supported TF dtypes By the way, I have tried running model.predict with this exact same data in the file where I train the model, and it works smoothly. The model loading must be the problem, not the data used to predict. This mysterious float64 tensor is the value I passed into the masking layer. I don't understand why keras is unable to recognize this JSON object as a Tensor and apply the masking operation as such. I have included snippets of my code below, edited for clarity and brevity: model_generation.py: # Create model model = tf.keras.Sequential([ tf.keras.layers.Input((352, 4)), tf.keras.layers.Masking(mask_value=tf.convert_to_tensor(np.array([0.0, 0.0, 0.0, 0.0]))), tf.keras.layers.GRU(50, return_sequences=True, activation='tanh'), tf.keras.layers.Dropout(0.2), tf.keras.layers.GRU(50,activation='tanh'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(units=1, activation='sigmoid')]) # Compile Model... # Train Model... model.save('model.keras') model.predict(data) # Line works here model_testing.py model = tf.keras.models.load_model('model.keras') model.predict(data) # this line generates the error EDIT: Moved the load command into the same file as the training, still receiving the exact same error message. | That error is due to the mask_value that you pass into tf.keras.layers.Masking not getting serialized compatibly for deserialization. But because you masking layer is a tensor containing all 0s anyway, you can instead just pass a scalar value like this and it will eliminate the need to serialize a tensor while storing the model tf.keras.layers.Masking(mask_value=0.0) and it broadcasts it to effectively make it equivalent to comparing it against the tensor containing all 0s. Here is the source where the mask is applied like this ops.any(ops.not_equal(inputs, self.mask_value), axis=-1, keepdims=True) and ops.not_equal supports broadcasting. | 1 | 1 |
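A minimal sketch of the scalar-mask fix described in the answer above (layer sizes are copied from the question; it assumes TensorFlow/Keras is installed): the scalar `mask_value=0.0` serializes cleanly, and broadcasting makes it equivalent to comparing each timestep against a vector of zeros.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input((352, 4)),
    tf.keras.layers.Masking(mask_value=0.0),  # scalar instead of a float64 tensor
    tf.keras.layers.GRU(50, return_sequences=True, activation="tanh"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.GRU(50, activation="tanh"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(units=1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

model.save("model.keras")
reloaded = tf.keras.models.load_model("model.keras")  # loads without the tensor conversion error
```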
79,320,886 | 2024-12-31 | https://stackoverflow.com/questions/79320886/numpy-einsum-why-did-this-happen | Can you explain why this happened? import numpy as np a = np.array([[1,2], [3,4], [5,6] ]) b = np.array([[2,2,2], [2,2,2]]) print(np.einsum("xy,zx -> yx",a,b)) and output of the code is:[[ 4 12 20] [ 8 16 24]] Which means the answer is calculated like this : [1*2+1*2 , 3*2+3*2 , ...] But I expected it to be calculated like this: [[1*2 , 3*2 , 5*2],[2*2 , 4*2 , 6*2]] Where did I make a mistake? | Your code is equivalent to: (a[None] * b[..., None]).sum(axis=0).T You start with a (x, y) and b (z, x). First let's align the arrays: # a[None] shape: (1, x, y) array([[[1, 2], [3, 4], [5, 6]]]) # b[..., None] shape: (z, x, 1) array([[[2], [2], [2]], [[2], [2], [2]]]) and multiply: # a[None] * b[..., None] shape: (z, x, y) array([[[ 2, 4], [ 6, 8], [10, 12]], [[ 2, 4], [ 6, 8], [10, 12]]]) sum over axis = 0 (z): # (a[None] * b[..., None]).sum(axis=0) shape: (x, y) array([[ 4, 8], [12, 16], [20, 24]]) Swap x and y: # (a[None] * b[..., None]).sum(axis=0).T shape: (y, x) array([[ 4, 12, 20], [ 8, 16, 24]]) What you want is np.einsum('yx,xy->xy', a, b): array([[ 2, 6, 10], [ 4, 8, 12]]) | 1 | 1 |
79,320,784 | 2024-12-31 | https://stackoverflow.com/questions/79320784/bot-not-responding-to-channel-posts-in-telegram-bot-api-python-telegram-bot | I'm developing a Telegram bot using python-telegram-bot to handle and reply to posts in a specific channel. The bot starts successfully and shows "Bot is running...", but it never replies to posts in the channel. Here's the relevant code for handling channel posts: async def handle_channel_post(self, update: Update, context: ContextTypes.DEFAULT_TYPE): """Handle new channel posts by adding the message link as a reply.""" try: # Get the message and channel info message = update.channel_post or update.message if not message: return # Verify this is from our target channel if message.chat.username != self.channel_username: return channel_id = message.chat.id message_id = message.message_id # Construct the message link if str(channel_id).startswith("-100"): # Private channels (or supergroups) link = f"https://t.me/c/{str(channel_id)[4:]}/{message_id}" else: # Public channels link = f"https://t.me/{self.channel_username.replace('@', '')}/{message_id}" # Create the reply text reply_text = f"View message: [Click here]({link})" # Reply to the channel post await context.bot.send_message( chat_id=channel_id, text=reply_text, reply_to_message_id=message_id, parse_mode="Markdown" ) except Exception as e: print(f"Error handling channel post: {e}") This is the main method: async def main(): BOT_TOKEN = "<MT_BOT_TOKEN>" CHANNEL_USERNAME = "@TestTGBot123" bot = ChannelBot(BOT_TOKEN, CHANNEL_USERNAME) await bot.start() I tried with another channel and different channel types but still not working. The bot is admin and also has privileges to post in the channel. | The issue is with this part of the code: if message.chat.username != self.channel_username: return The message.chat.username returns the channel username without the '@' and your self.channel.username includes '@' Try this: if message.chat.username != self.channel_username.replace("@", ""): return It removes '@' from self.channel.username and your bot should work as expected. | 3 | 2 |
79,318,200 | 2024-12-30 | https://stackoverflow.com/questions/79318200/return-placeholder-values-with-formatting-if-a-key-is-not-found | I want to silently ignore KeyErrors and instead replace them with placeholders if values are not found. For example: class Name: def __init__(self, name): self.name = name self.capitalized = name.capitalize() def __str__(self): return self.name "hello, {name}!".format(name=Name("bob")) # hello, bob! "greetings, {name.capitalized}!".format(name=Name("bob")) # greetings, Bob! # but, if no name kwarg is given... "hello, {name}!".format(age=34) # hello, {name}! "greetings, {name.capitalized}!".format(age=34) # greetings, {name.capitalized}! My goal with this is that I'm trying to create a custom localization package for personal projects (I couldn't find existing ones that did everything I wanted to). Messages would be user-customizable, but I want users to have a flawless experience, so for example, if they make a typo and insert {nmae} instead of {name}, I don't want users to have to deal with errors, but I want to instead signal to them that they made a typo by giving them the placeholder value. I found several solutions on stackoverflow, but none of them can handle attributes. My first solution was this: class Default(dict): """A dictionary that returns the key itself wrapped in curly braces if the key is not found.""" def __missing__(self, key: str) -> str: return f"{{{key}}}" But this results in an error when trying to use it with attributes: AttributeError: 'str' object has no attribute 'capitalized', it does print "hello, {name}!" with no issues. Same goes for my second solution using string.Formatter: class CustomFormatter(string.Formatter): def get_value(self, key, args, kwargs): try: value = super().get_value(key, args, kwargs) except KeyError: value = f'{{{key}}}' except AttributeError: value = f'{{{key}}}' return value formatter.format("hello, {name}!", name=Name("bob")) # hello, bob! formatter.format("greetings, {name.capitalized}!", name=Name("bob")) # greetings, Bob! formatter.format("hello, {name}!", age=42) # hello, {name}! formatter.format("greetings, {name.capitalized}!", age=42) # AttributeError: 'str' object has no attribute 'capitalized' So how could I achieve something like this? "hello, {name}!".format(name=Name("bob")) # hello, bob! "greetings, {name.capitalized}!".format(name=Name("bob")) # greetings, Bob! # but, if no name kwarg is given... "hello, {name}!".format(age=34) # hello, {name}! "greetings, {name.capitalized}!".format(age=34) # greetings, {name.capitalized}! | TL;DR The best solution is to override get_field instead of get_value in CustomFormatter: class CustomFormatter(string.Formatter): def get_field(self, field_name, args, kwargs): try: return super().get_field(field_name, args, kwargs) except (AttributeError, KeyError): return f"{{{field_name}}}", None Kuddos to @blhsing for suggesting this solution. Details The issue is that the AttributeError gets raised when formatter.get_field() is called, not in get_value(), so you also need to override get_field(). By adding this function to your CustomFormatter class, I was able to get the behaviour you want with {name.capitalized} shown when you pass name="bob" or name=34 instead of name=Name("bob"): def get_field(self, field_name, args, kwargs): try: return super().get_field(field_name, args, kwargs) except AttributeError: return f"{{{field_name}}}", None The return value is a tuple, to respect get_field's return value: a tuple with the result, and the key used. 
In action >>> formatter = CustomFormatter() >>> formatter.format("greetings, {name.capitalized}!", name="bob") 'greetings, {name.capitalized}!' >>> formatter.format("greetings, {name.capitalized}!", name=34) 'greetings, {name.capitalized}!' >>> formatter.format("greetings, {name.capitalized}!", name=Name("bob")) 'greetings, Bob!' >>> formatter.format("{name.capitalized}, you are {age} years old.", name=Name("bob")) 'Bob, you are {age} years old.' Tracing the code for deeper understanding When I added some debugging print statements, namely: class CustomFormatter(string.Formatter): def get_value(self, key, args, kwargs): print(f"get_value({key=}, {args=}, {kwargs=}") ... def get_field(self, field_name, args, kwargs): print(f"get_field({field_name=}, {args=}, {kwargs=}") ... I could see this log when using name=Name("bob"): get_field(field_name='name.capitalized', args=(), kwargs={'name': <__main__.Name object at 0x000002818AA8E8D0>} get_value(key='name', args=(), kwargs={'name': <__main__.Name object at 0x000002818AA8E8D0>} and this log with for age=34 and leaving out name: get_field(field_name='name', args=(), kwargs={'age': 34} get_value(key='name', args=(), kwargs={'age': 34} so you see it's your overriden get_value that handles the wrong key, and my overriden get_field that handles the missing attribute. Making the code more concise As @blhsing pointed out, if you also catch the KeyError in get_field, then you don't need to override get_value at all, leading to the final solution in the TL;DR above. | 2 | 2 |
79,320,041 | 2024-12-31 | https://stackoverflow.com/questions/79320041/python-flask-blueprint-parameter | I need to pass a parameter (some_url) from the main app to the blueprint using Flask This is my (oversimplified) app app = Flask(__name__) app.register_blueprint(my_bp, url_prefix='/mybp', some_url ="http....") This is my (oversimplified) blueprint my_bp = Blueprint('mybp', __name__, url_prefix='/mybp') @repositories_bp.route('/entrypoint', methods=['GET', 'POST']) def entrypoint(): some_url = ???? Not sure this is the way to go, but I parsed countless threads, I just cannot find any Info about this Thanks for your help | you can use g object for the current request which stores temporary data, or you can use session to maintain data between multiple requests which usually stores this data in the client browser as a cookie, or you can store the data in the app.config to maintain a constant value. | 1 | 0 |
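An illustrative sketch of the `app.config` option mentioned in the answer (the key `SOME_URL` and the example value are assumptions, not from the original post); `g` and `session` work the same way when the value is per-request or per-user rather than constant.

```python
from flask import Blueprint, Flask, current_app

my_bp = Blueprint("mybp", __name__, url_prefix="/mybp")

@my_bp.route("/entrypoint", methods=["GET", "POST"])
def entrypoint():
    some_url = current_app.config["SOME_URL"]  # read the value set by the main app
    return f"configured url: {some_url}"

app = Flask(__name__)
app.config["SOME_URL"] = "http://example.com"  # the parameter the main app wants to pass down
app.register_blueprint(my_bp)
```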
79,318,743 | 2024-12-30 | https://stackoverflow.com/questions/79318743/how-to-create-combinations-from-dataframes-for-a-specific-combination-size | Say I have a dataframe with 2 columns, how would I create all possible combinations for a specific combination size? Each row of the df should be treated as 1 item in the combination rather than 2 unique separate items. I want the columns of the combinations to be appended to the right. The solution should ideally be efficient since it takes long to generate all the combinations with a large list. For example, I want to create all possible combinations with a combination size of 3. import pandas as pd df = pd.DataFrame({'A':['a','b','c','d'], 'B':['1','2','3','4']}) How would I get my dataframe to look like this? A B A B A B 0 a 1 b 2 c 3 1 a 1 b 2 d 4 2 a 1 c 3 d 4 3 b 2 c 3 d 4 | An approach is itertools to generate the combinations. Define the combination size and generate all possible combinations of rows using itertools.combinations Flatten each combination into a single list of values using itertools.chain. combination_df is created from the flattened combinations and the columns are dynamically generated to repeat 'A' and 'B' for each combination Sample import itertools combination_size = 3 combinations = list(itertools.combinations(df.values, combination_size)) combination_df = pd.DataFrame( [list(itertools.chain(*comb)) for comb in combinations], columns=[col for i in range(combination_size) for col in df.columns] ) ) EDIT : Optimisation as suggested by @ouroboros1 combination_df = pd.DataFrame( (chain.from_iterable(c) for c in combinations), columns=np.tile(df.columns, combination_size) ) Output A B A B A B 0 a 1 b 2 c 3 1 a 1 b 2 d 4 2 a 1 c 3 d 4 3 b 2 c 3 d 4 | 1 | 1 |
79,319,708 | 2024-12-31 | https://stackoverflow.com/questions/79319708/confused-by-documentation-about-behavior-of-globals-within-a-function | Per the Python documentation of globals(): For code within functions, this is set when the function is defined and remains the same regardless of where the function is called. I understood this as calling globals() from within a function returns an identical dict to the one that represented the global namespace when the function was defined, even if there have been modifications to the global namespace since then. However, my experiment below showed that my understanding is apparently incorrect. What does the documentation mean, then? (In the example below I expected the second call of foo() to give the same result as the first. Of course, if that was the case I would question the utility of globals(), but that seems to be what the documentation means.) def foo(): if 'x' in globals(): print(f"Found x: {globals()['x']}") else: global x x = 1 print(f"Not found. Set x = {x}.") foo() # Not found. Set x = 1. foo() # Found x: 1 | In fact this problem is only loosely related to the globals() builtin function but more closely related to the behaviour of mutable objects. Long story made short, your observation is correct, and the documentation is absolutely correct and accurate. The underlying cause, is that Python variables are only references to the actual objects. Let us look at an example: a = {'a': 1, 'b': 2} b = a # ok, we take a "copy" print(b) {'a': 1, 'b': 2} # no surprise here a['c'] = 3 # let us MUTATE the original object print(b) {'a': 1, 'b': 2, 'c': 3} What happens here is that both variable are references to the very same object, what can be confirmed with print(id(a), id(b)) But if we use a different object: a = {'a': 1, 'b': 2, 'c': 3} # a is now a new and distinct object # even if it has the same value a['d'] = 4 print(a, b) {'a': 1, 'b': 2, 'c': 3, 'd': 4} {'a': 1, 'b': 2, 'c': 3} b is still a reference to the original object, so the new changes to a are not accessible through b. You can confirm that they are now distinct objects with print(id(a), id(b)). The documentation is just a warning that if for any reason the global directory is changed to a new and different object(*), the function will still keep a reference of the object that existed when the function was defined. (*) AFAIK, the specification of the language has no guarantee that the global directory will be the very same object during all the program lifetime | 1 | 1 |
79,319,434 | 2024-12-31 | https://stackoverflow.com/questions/79319434/duplicate-null-columns-created-during-pivot-in-polars | I have this example dataframe in polars: df_example = pl.DataFrame( { "DATE": ["2024-11-11", "2024-11-11", "2024-11-12", "2024-11-12", "2024-11-13"], "A": [None, None, "option1", "option2", None], "B": [None, None, "YES", "YES", "NO"], } ) Which looks like this: DATE A B 0 2024-11-11 1 2024-11-11 2 2024-11-12 option1 YES 3 2024-11-12 option2 YES 4 2024-11-13 NO As you can see this is a long format dataframe. I want to have it in a wide format, meaning that I want the DATE to be unique per row and for each other column several columns have to be created. What I want to achieve is: DATE A_option1 A_option2 B_YES B_NO 2024-11-11 Null Null Null Null 2024-11-12 True True True Null 2024-11-13 Null Null Null True I have tried doing the following: df_example.pivot( index="DATE", on=["A", "B"], values=["A", "B"], aggregate_function="first" ) However, I get this error: DuplicateError: column with name 'null' has more than one occurrence Which is logical, as it tries to create a column for the Null values in columns A, and a column for the Null values in column B. I am looking for a clean solution to this problem. I know I can impute the nulls per column with something unique and then do the pivot. Or by pivoting per column and then dropping the Null columns. However, this will create unnecessary columns. I want something more elegant. | I ended up with: ( df_example.pipe( lambda df: df.group_by("DATE").agg( [ pl.col(col).eq(val).any().alias(f"{col}_{val}") for col in df.select(pl.exclude("DATE")).columns for val in df.get_column(col).unique().drop_nulls() ] ) ).sort("DATE") ) | 2 | 1 |
79,319,156 | 2024-12-31 | https://stackoverflow.com/questions/79319156/how-to-add-python-type-annotations-to-a-class-that-inherits-from-itself | I'm trying add type annotations to an ElementList object that inherits from list and can contain either Element objects or other ElementGroup objects. When I run the following code through mypy: from typing import Self class Element: pass class ElementList(list[Element | Self]): pass elements = ElementList( [ Element(), Element(), ElementList( [ Element(), Element(), ] ), ] ) I get the following error: element.py:8: error: Self type is only allowed in annotations within class definition [misc] Found 1 error in 1 file (checked 1 source file) What's the recommended way to add typing annotations to this so that mypy doesn't throw an error? | Your sample list argument to the ElementList constructor contains not just Elements and ElementLists but also actual lists, so a workaround of class ElementList(list["Element | ElementList"]): ... would not have worked, as @dROOOze pointed out in the comment, because list is not a subtype of ElementList. You can work around this limitation with a type alias, which can refer to itself without creating a subtype: class Element: pass type ElementListType[T] = Element | T | list[ElementListType[T]] class ElementList(list[ElementListType["ElementList"]]): pass elements = ElementList( [ Element(), Element(), [ Element(), ElementList( [ Element(), Element(), ] ) ], ] ) Demo with mypy here Demo with pyright here | 1 | 1 |
79,317,395 | 2024-12-30 | https://stackoverflow.com/questions/79317395/multi-columns-legend-in-geodataframe | I tried to plot Jakarta's map based on the district. fig, ax = plt.subplots(1, figsize=(4.5,10)) jakarta_mandiri_planar.plot(ax=ax, column='Kecamatan', legend=True, legend_kwds={'loc':'center left'}) leg= ax.get_legend() leg.set_bbox_to_anchor((1.04, 0.5)) I plotted the legend on the right of the map, but I think it's too long. Can I make the legend into two or three columns? If so, how? | Use the ncols keyword: df.plot(column="NAME", cmap="tab20", legend=True, figsize=(8,8)) df.plot(column="NAME", cmap="tab20", legend=True, figsize=(10,10), legend_kwds={"ncols":2, "loc":"lower left"}) | 1 | 1 |
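For categorical plots, geopandas forwards legend_kwds to Matplotlib's ax.legend(), so the multi-column behaviour can be sketched in plain Matplotlib as well (the labels below are made up; ncol is the long-standing keyword name, while ncols also works on Matplotlib 3.6+):

import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(4.5, 10))
for name in ["Gambir", "Menteng", "Tebet", "Kemayoran", "Senen", "Cengkareng"]:
    ax.plot([0, 1], [0, 1], label=name)  # stand-in for the district polygons
ax.legend(loc="center left", bbox_to_anchor=(1.04, 0.5), ncol=2)  # two-column legend
plt.show()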
79,315,937 | 2024-12-29 | https://stackoverflow.com/questions/79315937/in-ta-lib-cython-compiler-errors-internalerror-internal-compiler-error-com | While running a program on pycharm I am getting below error while running on pycharm using python. Unable to run the program due to below error: ERROR: Failed building wheel for TA-Lib-Precompiled ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (TA-Lib-Precompiled) > Package :TA-Lib-Precompiled > Python Version : Python 3.12.1 > Cython version 3.0.11 Please help in finding the solution !! Below are the logs : > Collecting TA-Lib-Precompiled Using cached TA-Lib-Precompiled-0.4.25.tar.gz (276 kB) Preparing metadata (setup.py): started Preparing metadata (setup.py): finished with status 'done' Requirement already satisfied: numpy in c:\python312\lib\site-packages (from TA-Lib-Precompiled) (1.26.4) Building wheels for collected packages: TA-Lib-Precompiled Building wheel for TA-Lib-Precompiled (setup.py): started Building wheel for TA-Lib-Precompiled (setup.py): finished with status 'error' Running setup.py clean for TA-Lib-Precompiled Failed to build TA-Lib-Precompiled ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "Cython\\Utils.py", line 129, in Cython.Utils.cached_method.wrapper File "C:\Python312\Lib\site-packages\Cython\Build\Dependencies.py", line 574, in cimports_externs_incdirs for include in self.included_files(filename): ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "Cython\\Utils.py", line 129, in Cython.Utils.cached_method.wrapper File "C:\Python312\Lib\site-packages\Cython\Build\Dependencies.py", line 556, in included_files include_path = self.context.find_include_file(include, source_file_path=filename) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Python312\Lib\site-packages\Cython\Compiler\Main.py", line 299, in find_include_file error(pos, "'%s' not found" % filename) File "C:\Python312\Lib\site-packages\Cython\Compiler\Errors.py", line 178, in error raise InternalError(message) Cython.Compiler.Errors.InternalError: Internal compiler error: '_common.pxi' not found [end of output] > note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for TA-Lib-Precompiled ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (TA-Lib-Precompiled) | The stable release of TA-Lib-Precompiled only has wheels for Python 3.8 - 3.11 for Linux. You can install The Windows Subsystem for Linux (WSL) which provides a Linux environment on your Windows machine and then use a supported Python version such as Python 3.11. See How to install Linux on Windows with WSL for detailed instructions on this. | 2 | 1 |
79,317,602 | 2024-12-30 | https://stackoverflow.com/questions/79317602/python-selenium-need-help-in-locating-username-and-password | i am new to selenium . i am trying to scrape financial data on tradingview. i am trying to log into https://www.tradingview.com/accounts/signin/ . i understand that i am facing a timeout issue right now, is there any way to fix this? thank you to anybody helping. much appreciated. however, i am facing alot of errors with logging in. the error i am facing is --------------------------------------------------------------------------- TimeoutException Traceback (most recent call last) <ipython-input-29-7f9f0236fad7> in <cell line: 24>() 22 # Login process (replace with your email and password) 23 # Locate the email/username field using the 'id' or 'name' attribute ---> 24 email_field = wait.until(EC.presence_of_element_located((By.ID, "id_username"))) 25 email_field.send_keys("tom@gmail.com") # Replace with your email 26 /usr/local/lib/python3.10/dist-packages/selenium/webdriver/support/wait.py in until(self, method, message) 103 break 104 time.sleep(self._poll) --> 105 raise TimeoutException(message, screen, stacktrace) 106 107 def until_not(self, method: Callable[[D], T], message: str = "") -> Union[T, Literal[True]]: TimeoutException: Message: Stacktrace: #0 0x5677df5f58fa <unknown> #1 0x5677df106d20 <unknown> #2 0x5677df155a66 <unknown> #3 0x5677df155d01 <unknown> this is my code over here. from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC import time # Set up Selenium WebDriver for Colab options = webdriver.ChromeOptions() options.add_argument('--headless') # Run Chrome in headless mode options.add_argument('--no-sandbox') # Needed for Colab options.add_argument('--disable-dev-shm-usage') # Overcome resource limitations options.add_argument('--disable-gpu') # Disable GPU for compatibility options.add_argument('--window-size=1920x1080') # Set a default window size driver = webdriver.Chrome(options=options) # Example: Open the TradingView login page driver.get("https://www.tradingview.com/accounts/signin/") # Wait for the login page to load wait = WebDriverWait(driver, 15) # Login process (replace with your email and password) # Locate the email/username field using the 'id' or 'name' attribute email_field = wait.until(EC.presence_of_element_located((By.ID, "id_username"))) email_field.send_keys("tom@gmail.com") # Replace with your email # Locate the password field using the 'id' or 'name' attribute password_field = driver.find_element(By.ID, "id_password") password_field.send_keys("Fs5u+exxxx1") # Replace with your password # Locate and click the login button login_button = driver.find_element(By.XPATH, "//button[@type='submit']") login_button.click() # Wait for login to complete (adjust sleep time as necessary) time.sleep(5) if "Sign In" in driver.page_source: print("Login failed. 
Check your credentials.") else: print("Login successful!") # Navigate to a chart page (e.g., btc chart) driver.get("https://www.tradingview.com/chart/kvfFlBvq/?symbol=INDEX%3ABTCUSD") time.sleep(5) # Example: Extract data from a visible container try: data_container = driver.find_element(By.CLASS_NAME, "container") print("Extracted Data:") print(data_container.text) except Exception as e: print("Failed to extract data:", e) # Close the browser driver.quit() | To locate the login form on the sign-in page, it is necessary to click the "Email" button first in order to proceed with submitting the login form. I have included the following two lines in the script to accomplish this. email_button = driver.find_element(By.XPATH, "//button[@name='Email']") email_button.click() The login form does not contain a button of type "submit." Instead, there is only a button without a specified type. To perform the login action, I used the span text "Sign in" to identify and click the appropriate button. Your code: login_button = driver.find_element(By.XPATH, "//button[@type='submit']") Updated code by me: login_button = driver.find_element(By.XPATH, "//span[text()='Sign in']") The login process was successful. However, after logging in, the system is unable to locate any container elements. I trust you will be able to address this issue. The complete code, with corrections applied, is presented below: from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC import time # Set up Selenium WebDriver for Colab options = webdriver.ChromeOptions() options.add_argument('--headless') # Run Chrome in headless mode options.add_argument('--no-sandbox') # Needed for Colab options.add_argument('--disable-dev-shm-usage') # Overcome resource limitations options.add_argument('--disable-gpu') # Disable GPU for compatibility options.add_argument('--window-size=1920x1080') # Set a default window size driver = webdriver.Chrome(options=options) # Example: Open the TradingView login page driver.get("https://www.tradingview.com/accounts/signin/") # Wait for the login page to load wait = WebDriverWait(driver, 15) # newly added by me email_button = driver.find_element(By.XPATH, "//button[@name='Email']") email_button.click() # Login process (replace with your email and password) # Locate the email/username field using the 'id' or 'name' attribute email_field = wait.until(EC.presence_of_element_located((By.ID, "id_username"))) email_field.send_keys("tom@gmail.com") # Replace with your email # Locate the password field using the 'id' or 'name' attribute password_field = driver.find_element(By.ID, "id_password") password_field.send_keys("Fs5u+exxxx1") # Replace with your password # Locate and click the login button # Edited by me login_button = driver.find_element(By.XPATH, "//span[text()='Sign in']") login_button.click() # Wait for login to complete (adjust sleep time as necessary) time.sleep(5) if "Sign In" in driver.page_source: print("Login failed. Check your credentials.") else: print("Login successful!") # Navigate to a chart page (e.g., btc chart) driver.get("https://www.tradingview.com/chart/kvfFlBvq/? 
symbol=INDEX%3ABTCUSD") time.sleep(5) # Example: Extract data from a visible container try: data_container = driver.find_element(By.CLASS_NAME, "container") print("Extracted Data:") print(data_container.text) except Exception as e: print("Failed to extract data:", e) # Close the browser driver.quit() | 1 | 1 |
79,317,247 | 2024-12-30 | https://stackoverflow.com/questions/79317247/how-to-do-a-clean-install-of-python-from-source-in-a-docker-container-image-ge | Currently I have to create Docker images that build python from source (for example we do need two different python versions in a container, one python version for building and one for testing the application, also we need to exactly specify the python version we want to install and newer versions are not supported via apt install for example). My Problem is a.t.m. that the size of the image gets really large if you build python from source and yet I do not fully understand why. Let's take the following image as an example: # we start with prebuild python image to set system python to 3.13 FROM WWW.SOMEURL.COM/python:3.13-slim-bullseye # now we install the build dependencies required to build python from source RUN apt update -y &&\ apt upgrade -y &&\ apt-get install --no-install-recommends --yes \ build-essential \ zlib1g-dev \ libncurses5-dev \ libgdbm-dev \ libnss3-dev \ libssl-dev \ libreadline-dev \ libffi-dev \ libsqlite3-dev \ libbz2-dev \ git \ wget &&\ apt-get clean # next we altinstall another python version by building it from source RUN cd /usr/src &&\ wget "https://www.python.org/ftp/python/3.11.11/Python-3.11.11.tgz" &&\ tar xzf "Python-3.11.11.tgz" &&\ cd "Python-3.11.11" &&\ ./configure &&\ make altinstall # finally we remove the build dependencies to safe some space RUN apt-get remove --purge -y \ build-essential \ zlib1g-dev \ libncurses5-dev \ libgdbm-dev \ libnss3-dev \ libssl-dev \ libreadline-dev \ libffi-dev \ libsqlite3-dev \ libbz2-dev \ git \ wget &&\ apt-get autoremove --purge -y &&\ apt-get autoclean -y # verify installation RUN echo "DEBUG: Path to alt python: $(which python3.11) which has version $(python3.11 --version)" For me this process results in a very large image, while the python installation itself should not be that large (~150-200 MB on a local machine). However, it seems like the pure installation of python from source adds around 800MB to the image. Why is this the case? Thank you for your help! 
New Dockerfile according to the answers, which greatly reduces (~50%) the final size of the image: # we start with a prebuilt python image to set system python to 3.13; if you don't need that you can just use any other image and perform the same steps (maybe swap altinstall to install) FROM WWW.SOMEURL.COM/python:3.13-slim-bullseye # install and remove build dependencies in a single stage RUN bash install_build_deps.sh && \ bash altinstall_python.sh && \ bash remove_build_deps.sh # verify installation RUN echo "DEBUG: Path to alt python: $(which python3.11) which has version $(python3.11 --version)" Script install_build_deps.sh (addition of removing /var/lib/apt/lists/*): apt-get update -y apt-get upgrade -y apt-get install --no-install-recommends --yes build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev libsqlite3-dev libbz2-dev wget rm -rf /var/lib/apt/lists/* apt-get clean Script altinstall_python.sh (delete tarball and added files to /usr/local/src): cd "/usr/local/src" wget "https://www.python.org/ftp/python/3.11.11/Python-3.11.11.tgz" tar xzf "Python-3.11.11.tgz" cd "Python-3.11.11" ./configure make altinstall cd .. rm "Python-3.11.11.tgz" rm -r Python-3.11.11 Script remove_build_deps.sh: apt-get remove --purge -y \ build-essential \ zlib1g-dev \ libncurses5-dev \ libgdbm-dev \ libnss3-dev \ libssl-dev \ libreadline-dev \ libffi-dev \ libsqlite3-dev \ libbz2-dev \ wget &&\ apt-get autoremove --purge -y &&\ apt-get autoclean -y Thanks a lot for the help. If there are further optimizations, let me know and I will update this in case somebody wants to use it as a reference. | Research and read dockerfile best practices, for example https://docs.docker.com/build/building/best-practices/#apt-get . Remove the src directory and any build artifacts after you are done installing. Remove packages in the same stage as you install them. Additionally, you might be interested in the pyenv project that streamlines python compilation. Do not use /usr/src for your stuff, it's a system directory. Research the linux FHS. I usually use the home directory in docker, but I guess /usr/local/src also looks fine. | 2 | 1 |
79,317,098 | 2024-12-30 | https://stackoverflow.com/questions/79317098/python-logging-filter-works-with-console-but-still-writes-to-file | I am saving the logs to a text file and displaying them to the console at the same time. I would like to apply a filter on the logs, so that some logs neither make it to the text file nor the console output. However, with this code, the logs that I would like to filter out are still being saved to the text file. The filter only seems to work on the console output. How can I apply the filter to the text file and the console output? Thank you very much import logging class applyFilter(logging.Filter): def filter(self, record): return not record.getMessage().startswith('Hello') logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s', filename='log_file.txt', filemode='a') console = logging.StreamHandler() console.addFilter(applyFilter()) console.setLevel(logging.INFO) formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s') console.setFormatter(formatter) logging.getLogger('').addHandler(console) logging.info('Hello world') | basicConfig created a FileHandler and a StreamHandler was also created and added to the logger. The filter was only applied to the StreamHandler. To filter both handlers, add the filter to the logger instead: import logging class applyFilter(logging.Filter): def filter(self, record): return not record.getMessage().startswith('Hello') logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s', filename='log_file.txt', filemode='a') console = logging.StreamHandler() # console.addFilter(applyFilter()) # not here console.setLevel(logging.INFO) formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s') console.setFormatter(formatter) logger = logging.getLogger('') logger.addFilter(applyFilter()) # here logger.addHandler(console) logging.info('Hello world') logging.info('world only') Output (console): 2024-12-30 00:44:31,702 - INFO - world only Output (log_file.txt): 2024-12-30 00:44:31,702 - INFO - world only | 1 | 0 |
79,316,851 | 2024-12-30 | https://stackoverflow.com/questions/79316851/sympy-integration-with-cosine-function-under-a-square-root | I am trying to solve the integration integrate( sqrt(1 + cos(2 * x)), (x, 0, pi) ) Clearly, through pen and paper this is not hard, and the result is: But when doing this through Sympy, something does not seem correct. I tried the sympy codes as below. from sympy import * x = symbols("x", real=True) integrate(sqrt(1 + cos(2 * x)), (x, 0, pi)).doit() It then gives me a ValueError saying something in the complex domain not defined. But I've already defined the symbol x as a variable in the real domain. Here is the full error: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[7], line 4 1 from sympy import * 3 x = symbols("x", real=True) ----> 4 integrate(sqrt(1 + cos(2 * x)), (x, 0, pi)).doit() File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\integrals\integrals.py:1567, in integrate(meijerg, conds, risch, heurisch, manual, *args, **kwargs) 1564 integral = Integral(*args, **kwargs) 1566 if isinstance(integral, Integral): -> 1567 return integral.doit(**doit_flags) 1568 else: 1569 new_args = [a.doit(**doit_flags) if isinstance(a, Integral) else a 1570 for a in integral.args] File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\integrals\integrals.py:499, in Integral.doit(self, **hints) 497 if reps: 498 undo = {v: k for k, v in reps.items()} --> 499 did = self.xreplace(reps).doit(**hints) 500 if isinstance(did, tuple): # when separate=True 501 did = tuple([i.xreplace(undo) for i in did]) File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\integrals\integrals.py:710, in Integral.doit(self, **hints) 707 uneval = Add(*[eval_factored(f, x, a, b) 708 for f in integrals]) 709 try: --> 710 evalued = Add(*others)._eval_interval(x, a, b) 711 evalued_pw = piecewise_fold(Add(*piecewises))._eval_interval(x, a, b) 712 function = uneval + evalued + evalued_pw File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\core\expr.py:956, in Expr._eval_interval(self, x, a, b) 953 domain = Interval(b, a) 954 # check the singularities of self within the interval 955 # if singularities is a ConditionSet (not iterable), catch the exception and pass --> 956 singularities = solveset(self.cancel().as_numer_denom()[1], x, 957 domain=domain) 958 for logterm in self.atoms(log): 959 singularities = singularities | solveset(logterm.args[0], x, 960 domain=domain) File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:2252, in solveset(f, symbol, domain) 2250 if symbol not in _rc: 2251 x = _rc[0] if domain.is_subset(S.Reals) else _rc[1] -> 2252 rv = solveset(f.xreplace({symbol: x}), x, domain) 2253 # try to use the original symbol if possible 2254 try: File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:2276, in solveset(f, symbol, domain) 2273 f = f.xreplace({d: e}) 2274 f = piecewise_fold(f) -> 2276 return _solveset(f, symbol, domain, _check=True) File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:1060, in _solveset(f, symbol, domain, _check) 1057 result = Union(*[solver(m, symbol) for m in f.args]) 1058 elif _is_function_class_equation(TrigonometricFunction, f, symbol) or \ 1059 _is_function_class_equation(HyperbolicFunction, f, symbol): -> 1060 result = _solve_trig(f, symbol, domain) 1061 elif isinstance(f, arg): 1062 a = f.args[0] File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:612, in _solve_trig(f, symbol, domain) 610 sol = None 611 
try: --> 612 sol = _solve_trig1(f, symbol, domain) 613 except _SolveTrig1Error: 614 try: File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:688, in _solve_trig1(f, symbol, domain) 685 if g.has(x) or h.has(x): 686 raise _SolveTrig1Error("change of variable not possible") --> 688 solns = solveset_complex(g, y) - solveset_complex(h, y) 689 if isinstance(solns, ConditionSet): 690 raise _SolveTrig1Error("polynomial has ConditionSet solution") File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:2284, in solveset_complex(f, symbol) 2283 def solveset_complex(f, symbol): -> 2284 return solveset(f, symbol, S.Complexes) File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:2252, in solveset(f, symbol, domain) 2250 if symbol not in _rc: 2251 x = _rc[0] if domain.is_subset(S.Reals) else _rc[1] -> 2252 rv = solveset(f.xreplace({symbol: x}), x, domain) 2253 # try to use the original symbol if possible 2254 try: File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:2276, in solveset(f, symbol, domain) 2273 f = f.xreplace({d: e}) 2274 f = piecewise_fold(f) -> 2276 return _solveset(f, symbol, domain, _check=True) File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:1110, in _solveset(f, symbol, domain, _check) 1106 result += _solve_radical(equation, u, 1107 symbol, 1108 solver) 1109 elif equation.has(Abs): -> 1110 result += _solve_abs(f, symbol, domain) 1111 else: 1112 result_rational = _solve_as_rational(equation, symbol, domain) File C:\Dev_Tools\Anaconda3\Lib\site-packages\sympy\solvers\solveset.py:918, in _solve_abs(f, symbol, domain) 916 """ Helper function to solve equation involving absolute value function """ 917 if not domain.is_subset(S.Reals): --> 918 raise ValueError(filldedent(''' 919 Absolute values cannot be inverted in the 920 complex domain.''')) 921 p, q, r = Wild('p'), Wild('q'), Wild('r') 922 pattern_match = f.match(p*Abs(q) + r) or {} ValueError: Absolute values cannot be inverted in the complex domain. How do I properly integrate this using Sympy? | Adding a simplification in there will produce the correct result, but I'm not sure why it is having an issue in the first place. integrate(sqrt(1+cos(2*x)).simplify(), (x, 0, pi)) # 2*sqrt(2) | 5 | 3 |
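As a quick sanity check on the accepted answer, the unevaluated Integral can also be evaluated numerically with evalf(), which uses quadrature and sidesteps the symbolic code path that raised the error; it should come out at roughly 2.8284, i.e. 2*sqrt(2):

from sympy import Integral, Symbol, cos, pi, sqrt

x = Symbol("x", real=True)
num = Integral(sqrt(1 + cos(2 * x)), (x, 0, pi)).evalf()  # numerical quadrature
print(num)                     # approximately 2.82842712474619
print((2 * sqrt(2)).evalf())   # the closed-form answer, for comparison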
79,316,346 | 2024-12-29 | https://stackoverflow.com/questions/79316346/how-to-include-exception-handling-within-a-python-pool-starmap-multiprocess | I'm using the metpy library to do weather calculations. I'm using the multiprocessing library to run them in parallel, but I get rare exceptions, which completely stop the program. I am not able to provide a minimal, reproducible example because I can't replicate the problems with the metpy library functions and because there is a huge amount of code that runs before the problem occurs that I can't put here. I want to know how to write multiprocessing code to tell the pool.starmap function to PASS if it encounters an error. The first step in my code produces an argument list, which then gets passed to the pool.starmap function, along with the metpy function (metpy.ccl, in this case). The argument list for metpy.ccl includes a list of pressure levels, air temperatures, and dew point values. ccl_pooled = pool.starmap(mpcalc.ccl, ccl_argument_list) I tried to write a generalized function that would take the metpy function I pass to it and tell it to pass when it encounters an error. def run_ccl(p,t,td): try: result = mpcalc.ccl(p,t,td) except IndexError: pass Is there a way for me to write the "run_ccl" function so I can check for errors in my original code line - something like this: ccl_pooled = pool.starmap(run_ccl, ccl_argument_list) If not, what would be the best way to do this? EDIT: To clarify, these argument lists are thousands of data points long. I want to pass on the data point that causes the problem (and enter a nan in the result, "ccl_pooled", for that data point), and keep going. | You can generalize run_ccl with a wrapper function that suppresses specified exceptions and returns NaN as a default value: from contextlib import suppress def suppressor(func, *exceptions): def wrapper(*args, **kwargs): with suppress(*exceptions): return func(*args, **kwargs) return float('nan') return wrapper with which you can then rewrite the code into something like: ccl_pooled = pool.starmap(suppressor(mpcalc.ccl, IndexError), ccl_argument_list) | 1 | 2 |
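One practical caveat with the wrapper above: a nested function returned by suppressor may not survive pickling when tasks are shipped to pool workers, so a module-level helper combined with functools.partial is a safer variant. A sketch in which safe_call is a made-up name and the mpcalc.ccl usage mirrors the question:

import math
from functools import partial

def safe_call(func, exceptions, *args):
    # Module-level, hence picklable; returns NaN when one of `exceptions` is raised.
    try:
        return func(*args)
    except exceptions:
        return float("nan")

# Hypothetical pool usage mirroring the question:
# ccl_pooled = pool.starmap(partial(safe_call, mpcalc.ccl, (IndexError,)), ccl_argument_list)

# Quick local demonstration without a pool:
print(safe_call(lambda a, b: a / b, (ZeroDivisionError,), 10, 2))              # 5.0
print(math.isnan(safe_call(lambda a, b: a / b, (ZeroDivisionError,), 10, 0)))  # True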
79,316,278 | 2024-12-29 | https://stackoverflow.com/questions/79316278/is-there-a-more-elegant-rewrite-for-this-python-enum-value-of-implementation | I would like to get a value_of implementation for the StrEnum (Python 3.9.x). For example: from enum import Enum class StrEnum(str, Enum): """Enum with str values""" pass class BaseStrEnum(StrEnum): """Base Enum""" @classmethod def value_of(cls, value): try: return cls[value] except KeyError: try: return cls(value) except ValueError: return None and then can use it like this: class Fruits(BaseStrEnum): BANANA = "Banana" PEA = "Pea" APPLE = "Apple" print(Fruits.value_of('BANANA')) print(Fruits.value_of('Banana')) it is just that the nested try-except doesn't look amazing, is there a better more idiomatic rewrite? | Since upon success of the first try block the function will return and won't execute the code that follows, there is no need to nest the second try block in the error handler of the first try block to begin with: def value_of(cls, value): try: return cls[value] except KeyError: pass try: return cls(value) except ValueError: return None And since both of the error handlers are really meant to ignore the respective exceptions, you can use contextlib.suppress to simply suppress those errors: from contextlib import suppress def value_of(cls, value): with suppress(KeyError): return cls[value] with suppress(ValueError): return cls(value) # return None Note that a function returns None by default so you don't have to explicitly return None as a fallback unless you want to make it perfectly clear. | 2 | 2 |
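A quick usage check of the suppress-based value_of, assuming the Fruits enum defined in the question:

print(Fruits.value_of("BANANA"))  # matched by member name -> the BANANA member
print(Fruits.value_of("Banana"))  # name lookup fails, matched by value -> the BANANA member
print(Fruits.value_of("Cherry"))  # no match by name or value -> None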
79,316,309 | 2024-12-29 | https://stackoverflow.com/questions/79316309/how-does-this-code-execute-the-finally-block-even-though-its-never-evaluated-to | def divisive_recursion(n): try: if n <= 0: return 1 else: return n + divisive_recursion(n // divisive_recursion(n - 1)) except ZeroDivisionError: return -1 finally: if n == 2: print("Finally block executed for n=2") elif n == 1: print("Finally block executed for n=1") print(divisive_recursion(5)) Here, divisive_recursion(1) results in 1 + (1 // divisive_recursion(0)), then divisive_recursion(0) returns 1 and it goes into an infinite recursion, where divisive_recursion(1) and divisive_recursion(0) get executed repeatedly. I expected the code to raise a RecursionError due to this, which it does, but somehow the finally blocks get executed before that. I know that they always get executed, but for the prints to be printed n would have to be equal to 1 or 2, which it never is due to the infinite recursion. So how come both print statements are printed when the condition inside them is never evaluated to be True? | In one of the comments, you ask "does that mean once the program encounters the crash, it will execute all the finally blocks upward the recursion before it finally crashes". And the answer is basically "yes". An exception isn't really a "crash", or perhaps think of it as a controlled way of crashing. Here is a simple example to illustrate, in this case where the exception is caught and handled: >>> def foo(n): ... if n == 0: ... raise RuntimeError ... try: ... foo(n - 1) ... except: ... print(f'caught exception at {n=}') ... finally: ... print(f'in finally at {n=}') ... >>> foo(5) caught exception at n=1 in finally at n=1 in finally at n=2 in finally at n=3 in finally at n=4 in finally at n=5 And perhaps even more clarifying, here is a case with an uncaught exception: >>> def foo(n): ... if n == 0: ... raise RuntimeError ... try: ... foo(n - 1) ... except ZeroDivisionError: ... print(f'caught exception at {n=}') ... finally: ... print(f'in finally at {n=}') ... >>> foo(5) in finally at n=1 in finally at n=2 in finally at n=3 in finally at n=4 in finally at n=5 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 5, in foo File "<stdin>", line 5, in foo File "<stdin>", line 5, in foo [Previous line repeated 2 more times] File "<stdin>", line 3, in foo RuntimeError | 2 | 3 |
79,316,399 | 2024-12-29 | https://stackoverflow.com/questions/79316399/how-do-i-remove-an-image-overlay-in-matplotlib | Using matplotlib and python, I have a grey-scale image of labeled objects, on which I want to draw a homogeneously coloured overlay image with a position and shape based on a changeable input parameter - an object identifier. Basically an outline and enhancement of on of the objects in the image. I can generate the overlay, and re-generate it correctly (I think) every time the input value changes. But I don't know how to clear the previous overlay before drawing the new one. So, in the end, the grey-scale image is overlaid with multiple overlays. This is what I have tried, and it doesn't work. 'overlay',and 'object_data' are defined and used in the calling function: def overlay_object(object_num): try: overlay except NameError: # nothing pass else: # remove old overlay for handle in overlay: handle.remove() # color the selected object componentMask = (object_data == object_num) masked = ma.masked_where(componentMask == 0, componentMask) overlay = ax.imshow(masked, 'jet', interpolation='none', alpha=0.5) return overlay Edit: This is the creation of the grey-scale image in the main program: fig, ax = plt.subplots() ax.imshow(object_data, cmap='gray') ax.axis('off') | If you’re trying to update overlay on a grayscale without accumulating overlays, you should use this approach: import matplotlib.pyplot as plt import numpy as np import numpy.ma as ma def create_interactive_overlay(object_data): """ Creates a figure with a grayscale base image and functions to update overlays. Parameters: object_data : numpy.ndarray The labeled image data where each object has a unique integer value Returns: fig : matplotlib.figure.Figure The figure object update_overlay : function Function to call to update the overlay """ # Create the figure and base image fig, ax = plt.subplots() ax.imshow(object_data, cmap='gray') ax.axis('off') # Initialize overlay as None overlay_artist = [None] # Use list to allow modification in nested function def update_overlay(object_num): """ Updates the overlay to highlight a specific object number. 
Parameters: object_num : int The object identifier to highlight """ # Remove existing overlay if it exists if overlay_artist[0] is not None: overlay_artist[0].remove() # Create mask for selected object component_mask = (object_data == object_num) masked = ma.masked_where(component_mask == 0, component_mask) # Create new overlay overlay_artist[0] = ax.imshow(masked, cmap='jet', interpolation='none', alpha=0.5) # Redraw the figure fig.canvas.draw_idle() return fig, update_overlay # Example usage: """ # Create sample data object_data = np.zeros((100, 100)) object_data[20:40, 20:40] = 1 object_data[60:80, 60:80] = 2 # Create interactive figure fig, update_overlay = create_interactive_overlay(object_data) # Update overlay for different objects update_overlay(1) # Highlight object 1 plt.pause(1) # Pause to see the change update_overlay(2) # Highlight object 2 plt.show() """ In the above solution, the overlay management is handled by: Keeping track of the overlay artist using a list (to allow modification in the nested function) Properly removing the previous overlay before creating a new one Using draw_idle() to update the figure The code is structured to separate the setup from the update functionality: create_interactive_overlay handles the initial setup update_overlay handles the dynamic updates To use this in your code, you would do something like this: fig, update_overlay = create_interactive_overlay(object_data) # When you want to highlight a different object: update_overlay(new_object_number) Hope this helps | 1 | 1 |
79,306,760 | 2024-12-25 | https://stackoverflow.com/questions/79306760/how-to-get-full-traceback-messages-when-the-open-syscall-is-banned | I am working on providing an environment for running users' untrusted python code. I use the python bindings of libseccomp library to avoid triggering unsafe system calls, and the service is running in a docker container. Here is the script that will be executed in my environment. P.S. The list of banned syscalls is from this project: https://github.com/langgenius/dify-sandbox/blob/f40de1f6bc5f87d0e847cbf52076280bf61c05d5/internal/static/python_syscall/syscalls_amd64.go import sys from seccomp import * import json import requests import datetime import math import re import os import signal import urllib.request allowed_syscalls_str = "syscall.SYS_NEWFSTATAT, syscall.SYS_IOCTL, syscall.SYS_LSEEK, syscall.SYS_GETDENTS64,syscall.SYS_WRITE, syscall.SYS_CLOSE, syscall.SYS_OPENAT, syscall.SYS_READ,syscall.SYS_FUTEX,syscall.SYS_MMAP, syscall.SYS_BRK, syscall.SYS_MPROTECT, syscall.SYS_MUNMAP, syscall.SYS_RT_SIGRETURN,syscall.SYS_MREMAP,syscall.SYS_SETUID, syscall.SYS_SETGID, syscall.SYS_GETUID,syscall.SYS_GETPID, syscall.SYS_GETPPID, syscall.SYS_GETTID,syscall.SYS_EXIT, syscall.SYS_EXIT_GROUP,syscall.SYS_TGKILL, syscall.SYS_RT_SIGACTION, syscall.SYS_IOCTL,syscall.SYS_SCHED_YIELD,syscall.SYS_SET_ROBUST_LIST, syscall.SYS_GET_ROBUST_LIST, syscall.SYS_RSEQ,syscall.SYS_CLOCK_GETTIME, syscall.SYS_GETTIMEOFDAY, syscall.SYS_NANOSLEEP,syscall.SYS_EPOLL_CREATE1,syscall.SYS_EPOLL_CTL, syscall.SYS_CLOCK_NANOSLEEP, syscall.SYS_PSELECT6,syscall.SYS_TIME,syscall.SYS_RT_SIGPROCMASK, syscall.SYS_SIGALTSTACK, syscall.SYS_CLONE,syscall.SYS_MKDIRAT,syscall.SYS_MKDIR,syscall.SYS_SOCKET, syscall.SYS_CONNECT, syscall.SYS_BIND, syscall.SYS_LISTEN, syscall.SYS_ACCEPT, syscall.SYS_SENDTO, syscall.SYS_RECVFROM,syscall.SYS_GETSOCKNAME, syscall.SYS_RECVMSG, syscall.SYS_GETPEERNAME, syscall.SYS_SETSOCKOPT, syscall.SYS_PPOLL, syscall.SYS_UNAME,syscall.SYS_SENDMSG, syscall.SYS_SENDMMSG, syscall.SYS_GETSOCKOPT,syscall.SYS_FSTAT, syscall.SYS_FCNTL, syscall.SYS_FSTATFS, syscall.SYS_POLL, syscall.SYS_EPOLL_PWAIT" allowed_syscalls_tmp = allowed_syscalls_str.split(',') L = [] for item in allowed_syscalls_tmp: item = item.strip() parts = item.split('.')[1][4:].lower() L.append(parts) # create a filter object with a default KILL action f = SyscallFilter(defaction=KILL) for item in L: f.add_rule(ALLOW, item) f.add_rule(ALLOW, 307) f.add_rule(ALLOW, 318) f.add_rule(ALLOW, 334) f.load() #User's code, triggers ZeroDivision a = 10 / 0 However, since the syscall open is banned, I can't provide the full error message for users. Is it safe to provide both open and write for users? Or is there another way to get the full traceback message? Thanks. | EDIT: You will have to grant write access to stdout and stderr. Since these files are opened as the process is started, you can selectively restrict write access to these files only without having to worry about untrusted code modifying other files. You can add write permissions to stdout and stderr in your code like this: f.add_rule(ALLOW, "open") f.add_rule(ALLOW, "close") f.add_rule(ALLOW, "write", Arg(0, EQ, sys.stdout.fileno())) f.add_rule(ALLOW, "write", Arg(0, EQ, sys.stderr.fileno())) In case you would like read access from stdin, it can be added as: f.add_rule(ALLOW, "read", Arg(0, EQ, sys.stdin.fileno())) You can see an example using these filter rules from the seccomp library source code here. 
You might also find this blog on python sandboxes useful too. Original approach: You can use the traceback library for this. It has a try/except block where you can place the user's code in try and catch and print any exceptions. This code example shows the use of this library with your example: # importing module import traceback try: a = 10/0 except: # printing stack trace traceback.print_exc() The output would be similar to: Traceback (most recent call last): File "example.py", line 5, in <module> a = 10/0 ZeroDivisionError: division by zero | 1 | 1 |
79,311,280 | 2024-12-27 | https://stackoverflow.com/questions/79311280/dask-var-and-std-with-ddof-in-groupby-context-and-other-aggregations | Suppose I want to compute variance and/or standard deviation with non-default ddof in a groupby context, I can do: df.groupby("a")["b"].var(ddof=2) If I want that to happen together with other aggregations, I can use: df.groupby("a").agg(b_var = ("b", "var"), c_sum = ("c", "sum")) My understanding is that to be able to have non default ddof I should create a custom aggregation. Here what I got so far: def var(ddof: int = 1) -> dd.Aggregation: import dask.dataframe as dd return dd.Aggregation( name="var", chunk=lambda s: (s.count(), s.sum(), (s.pow(2)).sum()), agg=lambda count, sum_, sum_sq: (count.sum(), sum_.sum(), sum_sq.sum()), finalize=lambda count, sum_, sum_sq: (sum_sq - (sum_ ** 2 / count)) / (count - ddof), ) Yet, I encounter a RuntimeError: df.groupby("a").agg({"b": var(2)}) RuntimeError('Failed to generate metadata for DecomposableGroupbyAggregation(frame=df, arg={‘b’: <dask.dataframe.groupby.Aggregation object at 0x7fdfb8469910>} What am I missing? Is there a better way to achieve this? Replacing s.pow(2) with s**2 also results in an error. Full script: import dask.dataframe as dd data = { "a": [1, 1, 1, 1, 2, 2, 2], "b": range(7), "c": range(10, 3, -1), } df = dd.from_dict(data, 2) def var(ddof: int = 1) -> dd.Aggregation: import dask.dataframe as dd return dd.Aggregation( name="var", chunk=lambda s: (s.count(), s.sum(), (s.pow(2)).sum()), agg=lambda count, sum_, sum_sq: (count.sum(), sum_.sum(), sum_sq.sum()), finalize=lambda count, sum_, sum_sq: (sum_sq - (sum_ ** 2 / count)) / (count - ddof), ) df.groupby("a").agg(b_var = ("b", "var"), c_sum = ("c", "sum")) # <- no issue df.groupby("a").agg(b_var = ("b", var(2)), c_sum = ("c", "sum")) # <- RuntimeError | As answered in Dask Discourse Forum, I don't think your custom Aggregation implementation is correct. However, a simpler solution can be applied: import dask.dataframe as dd import functools data = { "a": [1, 1, 1, 1, 2, 2, 2], "b": range(7), "c": range(10, 3, -1), } df = dd.from_dict(data, 2) var_ddof_2 = functools.partial(dd.groupby.DataFrameGroupBy.var, ddof=2) df.groupby("a").agg(b_var = ("b", var_ddof_2), c_sum = ("c", "sum")) | 2 | 3 |
79,314,406 | 2024-12-28 | https://stackoverflow.com/questions/79314406/n-unique-aggregation-using-duckdb-relational-api | Say I have import duckdb rel = duckdb.sql('select * from values (1, 4), (1, 5), (2, 6) df(a, b)') rel Out[3]: ┌───────┬───────┐ │ a │ b │ │ int32 │ int32 │ ├───────┼───────┤ │ 1 │ 4 │ │ 1 │ 5 │ │ 2 │ 6 │ └───────┴───────┘ I can group by a and find the mean of 'b' by doing: rel.aggregate( [duckdb.FunctionExpression('mean', duckdb.ColumnExpression('b'))], group_expr='a', ) ┌─────────┐ │ mean(b) │ │ double │ ├─────────┤ │ 4.5 │ │ 6.0 │ └─────────┘ which works wonderfully Is there a similar way to create a "n_unique" aggregation? I'm looking for something like rel.aggregate( [duckdb.FunctionExpression('count_distinct', duckdb.ColumnExpression('b'))], group_expr='a', ) but that doesn't exist. Is there something that does? | updated. I couldn't find proper way of doing count distinct, but you could use combination of array_agg() and array_unique() functions: rel.aggregate( [duckdb.FunctionExpression( 'array_unique', duckdb.FunctionExpression( 'array_agg', duckdb.ColumnExpression('b') ) )], group_expr='a', ) ┌────────────────────────────┐ │ array_unique(array_agg(b)) │ │ uint64 │ ├────────────────────────────┤ │ 1 │ │ 2 │ └────────────────────────────┘ old. you can pre-select distinct a and b columns? ( rel.select(*[duckdb.ColumnExpression('a'), duckdb.ColumnExpression('b')]) .distinct() .aggregate( [duckdb.FunctionExpression('count', duckdb.ColumnExpression('b'))], group_expr='a', ) ) | 2 | 1 |
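For comparison, the relational API also accepts plain SQL expression strings in aggregate(), so an ordinary count(distinct ...) can be written directly instead of the array trick; a sketch against the same rel:

import duckdb

rel = duckdb.sql("select * from values (1, 4), (1, 5), (2, 6) df(a, b)")
rel.aggregate("a, count(distinct b) as n_unique_b", group_expr="a").show()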
79,314,321 | 2024-12-28 | https://stackoverflow.com/questions/79314321/use-an-expression-dictionary-to-calculate-row-wise-based-on-a-column-in-polars | I want to use an expression dictionary to perform calculations for a new column. I have this Polars dataframe: df = pl.DataFrame( { "col1": ["a", "b", "a"], "x": [1,2,3], "y": [2,2,5] } ) And I have an expression dictionary: expr_dict = { "a": pl.col("x") * pl.col("y"), "b": pl.col("x"), } I want to create a column where each value is calculated based on a key in another column, but I do not know how. I want to have a result like this: >>> df.with_columns(r=pl.col("col1").apply(lambda x: expr_dict[x])) >>> shape: (3, 4) ┌──────┬─────┬─────┬─────┐ │ col1 ┆ x ┆ y ┆ r │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 ┆ i64 │ ╞══════╪═════╪═════╪═════╡ │ a ┆ 1 ┆ 2 ┆ 2 │ │ b ┆ 2 ┆ 2 ┆ 4 │ │ a ┆ 3 ┆ 5 ┆ 15 │ └──────┴─────┴─────┴─────┘ Is this possible? | pl.when() for conditional expressions. pl.coalesce() to combine the conditional expressions together. df.with_columns( r = pl.coalesce( pl.when(pl.col.col1 == k).then(v) for k, v in expr_dict.items() ) ) shape: (3, 4) ┌──────┬─────┬─────┬─────┐ │ col1 ┆ x ┆ y ┆ r │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 ┆ i64 │ ╞══════╪═════╪═════╪═════╡ │ a ┆ 1 ┆ 2 ┆ 2 │ │ b ┆ 2 ┆ 2 ┆ 2 │ │ a ┆ 3 ┆ 5 ┆ 15 │ └──────┴─────┴─────┴─────┘ | 1 | 2 |
79,310,142 | 2024-12-26 | https://stackoverflow.com/questions/79310142/how-to-extract-sub-arrays-from-a-larger-array-with-two-start-and-two-stop-1-d-ar | I am looking for a way to vectorize the following code, # Let cube have shape (N, M, M) sub_arrays = np.empty(len(cube), 3, 3) row_start = ... # Shape (N,) and are integers in range [0, M-2] row_end = ... # Shape (N,) and are integers in range [1, M-1] col_start = ... # Shape (N,) and are integers in range [0, M-2] col_end = ... # Shape (N,) and are integers in range [1, M-1] # extract sub arrays from cube and put them in sub_arrays for i in range(len(cube)): # Note that the below is extracting a (3, 3) sub array from cube sub_arrays[i] = cube[i, row_start[i]:row_end[i], col_start[i]:col_end[i]] Instead of the loop, I would like to do something like, sub_arrays = cube[:, row_start:row_end, col_start:col_end] But this throws the exception, TypeError: only integer scalar arrays can be converted to a scalar index Is there instead some valid way to vectorize the loop? | I believe this question is a duplicate of the one about Slicing along axis with varying indices. However, since it may not be obvious, I think it's okay to provide the answer in a new context with a somewhat different approach. From what I can see, you want to extract data from the cube using a sliding window of a fixed size (3×3 in this case), applied to a separate slice along the first axis with varying shifts within the slices. In contrast to the previously mentioned approach using as_strided, let's use sliding_window_view this time. As a result, we get two additional axes for row_start and col_start, followed by the window dimensions. Note that row_end and col_end appear as if they are equal to the corresponding starting points increased by a fixed square window side, which is 3 in this case: from numpy.lib.stride_tricks import sliding_window_view cube_view = sliding_window_view(cube, window_shape=(3, 3), axis=(1, 2)) output = cube_view[range(cube.shape[0]), row_start, col_start].copy() That's all. But to be sure, let's compare the output with the original code, using test data: import numpy as np from numpy.random import randint from numpy.lib.stride_tricks import sliding_window_view n, m, w = 100, 10, 3 # w - square window size row_start = randint(m-w+1, size=n) col_start = randint(m-w+1, size=n) # Test cube cube = np.arange(n*m*m).reshape(n, m, m) # Data to compare with sub_arrays = np.empty((n, w, w), dtype=cube.dtype) for i in range(cube.shape[0]): sub_arrays[i] = cube[i, row_start[i]:row_start[i]+w, col_start[i]:col_start[i]+w] # Subarrays from the sliding window view cube_view = sliding_window_view(cube, window_shape=(w, w), axis=(1, 2)) output = cube_view[range(cube.shape[0]), row_start, col_start].copy() # No exceptions should occur at this step assert np.equal(output, sub_arrays).all() | 3 | 1 |
79,313,103 | 2024-12-28 | https://stackoverflow.com/questions/79313103/asof-join-with-multiple-inequality-conditions | I have two dataframes: a (~600M rows) and b (~2M rows). What is the best approach for joining b onto a, when using 1 equality condition and 2 inequality conditions on the respective columns? a_1 = b_1 a_2 >= b_2 a_3 >= b_3 I have explored the following paths so far: Polars: join_asof(): only allows for 1 inequality condition join_where() with filter(): even with a small tolerance window, the standard Polars installation runs out of rows (4.3B row limit) during the join, and the polars-u64-idx installation runs out of memory (512GB) DuckDB: ASOF LEFT JOIN: also only allows for 1 inequality condition Numba: As the above didn't work, I tried to create my own join_asof() function - see code below. It works fine but with increasing lengths of a, it becomes prohibitively slow. I tried various different configurations of for/ while loops and filtering, all with similar results. Now I'm running a bit out of ideas... What would be a more efficient way to implement this? Thank you import numba as nb import numpy as np import polars as pl import time @nb.njit(nb.int32[:](nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:]), parallel=True) def join_multi_ineq(a_1, a_2, a_3, b_1, b_2, b_3, b_4): output = np.zeros(len(a_1), dtype=np.int32) for i in nb.prange(len(a_1)): for j in range(len(b_1) - 1, -1, -1): if a_1[i] == b_1[j]: if a_2[i] >= b_2[j]: if a_3[i] >= b_3[j]: output[i] = b_4[j] break return output length_a = 5_000_000 length_b = 2_000_000 start_time = time.time() output = join_multi_ineq(a_1=np.random.randint(1, 1_000, length_a, dtype=np.int32), a_2=np.random.randint(1, 1_000, length_a, dtype=np.int32), a_3=np.random.randint(1, 1_000, length_a, dtype=np.int32), b_1=np.random.randint(1, 1_000, length_b, dtype=np.int32), b_2=np.random.randint(1, 1_000, length_b, dtype=np.int32), b_3=np.random.randint(1, 1_000, length_b, dtype=np.int32), b_4=np.random.randint(1, 1_000, length_b, dtype=np.int32)) print(f"Duration: {(time.time() - start_time):.2f} seconds") | Using Numba here is a good idea since the operation is particularly expensive. That being said, the complexity of the algorithm is O(n²) though it is not easy to do much better (without making the code much more complex). Moreover, the array b_1, which might not fit in the L3 cache, is fully read 5_000_000 times making the code rather memory bound. We can strongly speed up the code by building an index so not to travel the whole array b_1, but only the values where a_1[i] == b_1[j]. This is not enough to improve the complexity since a lot of j values fulfil this condition. We can improve the (average) complexity by building a kind of tree for all nodes of the index but in practice, this makes the code much more complex and the time to build the tree would be so big that it actually does not worth doing that in practice. Indeed, a basic index is enough to strongly reduce the execution time on the provided random dataset (with uniformly distributed numbers). 
Here is the resulting code: import numba as nb import numpy as np import time length_a = 5_000_000 length_b = 2_000_000 a_1=np.random.randint(1, 1_000, length_a, dtype=np.int32) a_2=np.random.randint(1, 1_000, length_a, dtype=np.int32) a_3=np.random.randint(1, 1_000, length_a, dtype=np.int32) b_1=np.random.randint(1, 1_000, length_b, dtype=np.int32) b_2=np.random.randint(1, 1_000, length_b, dtype=np.int32) b_3=np.random.randint(1, 1_000, length_b, dtype=np.int32) b_4=np.random.randint(1, 1_000, length_b, dtype=np.int32) IntList = nb.types.ListType(nb.types.int32) @nb.njit(nb.int32[:](nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:], nb.int32[:]), parallel=True) def join_multi_ineq_fast(a_1, a_2, a_3, b_1, b_2, b_3, b_4): output = np.zeros(len(a_1), dtype=np.int32) b1_indices = nb.typed.Dict.empty(key_type=nb.types.int32, value_type=IntList) for j in range(len(b_1)): val = b_1[j] if val in b1_indices: b1_indices[val].append(j) else: lst = nb.typed.List.empty_list(item_type=np.int32) lst.append(j) b1_indices[val] = lst kmean = 0 for i in nb.prange(len(a_1)): if a_1[i] in b1_indices: indices = b1_indices[a_1[i]] v2 = a_2[i] v3 = a_3[i] for k in range(len(indices) - 1, -1, -1): j = indices[np.uint32(k)] #assert a_1[i] == b_1[j] if v2 >= b_2[j] and v3 >= b_3[j]: output[i] = b_4[j] break return output %time join_multi_ineq_fast(a_1, a_2, a_3, b_1, b_2, b_3, b_4) Note that, in average, only 32 k values are tested (which is reasonable enough not to build a more efficient/complicated data structure). Also please note that the result is strictly identical to the one provided by the naive implementation. Benchmark Here are results on my i5-9600KF CPU (6 cores): Roman's code: >120.00 sec (require a HUGE amount of RAM: >16 GiB) Naive Numba code: 24.85 sec This implementation: 0.83 sec <----- Thus, this implementation is about 30 times faster than the initial code. | 5 | 2 |
79,313,133 | 2024-12-28 | https://stackoverflow.com/questions/79313133/sqlalchemy-one-or-more-mappers-failed-to-initialize | I know this Question has been asked a lot and believe me I checked the answers and to me my code looks fine even tough it gives error so it's not. Basically, I was trying to set up a relationship between two Entities: User and Workout. from sqlalchemy import Integer,VARCHAR,TIMESTAMP from sqlalchemy.orm import mapped_column,relationship from sqlalchemy.sql import func from app.schemas.baseschema import Base class User(Base): __tablename__="users" id=mapped_column(Integer,primary_key=True,autoincrement=True) username=mapped_column(VARCHAR(255),unique=True,nullable=False) email=mapped_column(VARCHAR(50),unique=True,nullable=False) created_at=mapped_column(TIMESTAMP(timezone=True),default=func.current_timestamp()) updated_at=mapped_column(TIMESTAMP(timezone=True)) password_hash=mapped_column(VARCHAR(255),nullable=False) workouts=relationship("Workout",back_populates="user") from sqlalchemy import Integer,DATE,TEXT,ForeignKey from sqlalchemy.orm import mapped_column,relationship from sqlalchemy.sql import func from app.schemas.baseschema import Base from sqlalchemy.schema import ForeignKeyConstraint class Workout(Base): __tablename__="workouts" id=mapped_column(Integer,primary_key=True,autoincrement=True) date=mapped_column(DATE,default=func.current_date) notes=mapped_column(TEXT) user_id=mapped_column(Integer,ForeignKey("users.id"),nullable=False) user=relationship("User",back_populates="workouts") the error I'm getting is this one: InvalidRequestError("When initializing mapper Mapper[User(users)], expression 'Workout' failed to locate a name ('Workout'). If this is a class name, consider adding this relationship() to the <class 'app.schemas.userschemas.User'> class after both dependent classes have been defined."). Can someone help me identify the issue? To me, it looks like there's the Workout class and that it has an user field. I never used sql alchemy and I'm new to Python as well. I checked both classes to see If i spelled the relationship wrong but it looked fine to me. I also tried to compare other answers given here to the same problem and tried to contextualize them to my situation but I didn't succeed | This is sort of a weird problem that I have not seen a perfect solution to. SQLAlchemy allows this "deferred" referencing of other models/etc by str name so that you don't end up with circular imports, ie. User must import Workout and Workout must import User. The problem that happens is that by not directly referencing them they might not ever be loaded/executed and do not end up in the registry and in this example sqlalchemy cannot find "Workout". Some options to mitigate this: Put all the models in the same file, if you import User then Workout will also be executed and included in the registry because it the whole module is loaded and it exists in the same module. (This is the easiest) """ models.py """ class User(Base): #... pass class Workout(Base): #... pass Import all the models into a "middle" module and use models from that registry therefore forcing all the models to be loaded/registered. (Now you have to remember to do this song-and-dance every time you make a new class) """ models/classes.py """ from .user import User from .workout import Workout #... """ handlers.py """ # Load Workout as a side-effect. 
from .models.classes import User def handle_user_request(request): return to_json([u.id for u in request.db.scalars(select(User))]) Carefully import only the necessary models (this is not a great solution) Another option I don't know about | 2 | 0 |
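A self-contained sketch of option 1 (both models in one module, so both names are registered before the first use), here with an in-memory SQLite engine and SQLAlchemy 2.0-style declarations; the table and column names mirror the question but are trimmed down:

from sqlalchemy import Integer, ForeignKey, create_engine
from sqlalchemy.orm import DeclarativeBase, Session, mapped_column, relationship

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "users"
    id = mapped_column(Integer, primary_key=True, autoincrement=True)
    workouts = relationship("Workout", back_populates="user")

class Workout(Base):
    __tablename__ = "workouts"
    id = mapped_column(Integer, primary_key=True, autoincrement=True)
    user_id = mapped_column(Integer, ForeignKey("users.id"), nullable=False)
    user = relationship("User", back_populates="workouts")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(workouts=[Workout()]))
    session.commit()  # both mappers configure fine: the string "Workout" resolves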
79,313,343 | 2024-12-28 | https://stackoverflow.com/questions/79313343/how-to-fix-setuptools-scm-file-finders-git-listing-git-files-failed | I am using pyproject.toml to build a package. I use setuptools_scm to automatically determine the version number. I use python version 3.11.2, setuptools 66.1.1 and setuptools-scm 8.1.0. Here are the relevant parts of pyproject.toml # For a discussion on single-sourcing the version, see # https://packaging.python.org/guides/single-sourcing-package-version/ dynamic = ["version"] [tool.setuptools_scm] # can be empty if no extra settings are needed, presence enables setuptools-scm I build the project with python3 -m build When I run the build command, I see ERROR setuptools_scm._file_finders.git listing git files failed - pretending there aren't any What I've Tried: There is a .git directory at the root of my project. It's readable by all users. Git is installed and accessible from my PATH. I've committed changes to ensure there's Git history available. How can I fix this error? Are there additional configurations or checks I should perform to ensure setuptools_scm can correctly interact with Git for version determination? Reproducible example cd /tmp/ mkdir setuptools_scm_example cd setuptools_scm_example git init touch .gitignore git add . git commit -m "Initial commit" Add the following to pyproject.toml [build-system] requires = ["setuptools>=61.0", "setuptools_scm>=7.0"] build-backend = "setuptools.build_meta" [project] name = "example_package" dynamic = ["version"] [tool.setuptools_scm] # No additional configuration needed, but can add if needed Create and build a python package mkdir -p example_package touch example_package/__init__.py echo "print('Hello from example package')" > example_package/__init__.py python3 -m build I see the error ERROR setuptools_scm._file_finders.git listing git files failed - pretending there aren't any | python3 -m build builds in 2 phases: 1st it builds sdist and then it builds wheel from the sdist in an isolated environment where there is no .git directory. It doesn't matter because at the wheel building phase version is already set in sdist and build gets the version from sdist, not from setuptools_scm. In short: you may safely ignore the error. Reference: https://github.com/pypa/setuptools-scm/issues/997 . Found in https://github.com/pypa/setuptools-scm/issues?q=is%3Aissue+setuptools_scm._file_finders.git Another approach to try: prevent build isolation, install build dependencies into the current environment and build sdist and wheel explicitly: python3 -m pip install build setuptools-scm python3 -m build --no-isolation --sdist --wheel | 1 | 1 |
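A related way to double-check that setuptools_scm can actually see the Git metadata, independent of the build front-end, is to ask it for the version directly from the repository root; get_version is part of its public API:

from setuptools_scm import get_version

# Run from the repository root (or pass root="path/to/repo").
print(get_version())  # e.g. something like 0.1.dev1+g<hash> right after the first commit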
79,313,112 | 2024-12-28 | https://stackoverflow.com/questions/79313112/combine-two-pandas-dataframes-side-by-side-with-resulting-length-being-maxdf1 | Essentially, what I described in the title. I am trying to combine two dataframes (i.e. df1 & df2) where they have different amounts of columns (df1=3, df2=8) with varying row lengths. (The varying row lengths stem from me having a script that breaks main two excel lists into blocks based on a date condition). My goal is to combine the two length-varying dataframes into one dataframe, where they both start at index 0 instead of one after the other. What is currently happening: A B C D 0 1 2 nan nan 1 3 4 nan nan 2 nan nan 5 6 3 nan nan 7 8 4 nan nan 9 10 This is how I would like it to be: A B C D 0 1 2 5 6 1 3 4 7 8 2 nan nan 9 10 I tried many things, but this is the last code that worked (but with wrong results): import pandas as pd hours_df = pd.read_excel("hours.xlsx").fillna("") hours_columns = hours_df.columns material_df = pd.read_excel("material.xlsx").fillna("") material_df = material_df.rename(columns={'Date': 'Material Date'}) material_columns = material_df.columns breaker = False temp = [] combined_df = pd.DataFrame() last_date = "1999-01-01" for _, row in hours_df.iterrows(): if row["Date"] != "": block_df = pd.DataFrame(temp, columns=hours_columns) if temp: cell_a1 = block_df.iloc[0,0] filtered_df = material_df.loc[ (material_df["Material Date"] < cell_a1) & (material_df["Material Date"] >= last_date)] last_date = cell_a1 combined_block = pd.concat([block_df, filtered_df], axis=1) combined_df = pd.concat([combined_df, combined_block], ignore_index=True) temp = [] temp.append(row) if temp: block_df = pd.DataFrame(temp, columns=hours_columns) combined_df = pd.concat([combined_df, block_df], ignore_index=True) print(combined_df) I am not getting any errors. Just stacked output -- like the one I showed above. | Your issue arises because you are concatenating dataframes vertically rather than horizontally. To achieve the desired output, you need to align rows from df1 and df2 with the same index and then concatenate horizontally. Here’s the updated code that would produce the output you want. I have added comments on the places where I've made the changes. import pandas as pd # Loading dataframes hours_df = pd.read_excel("hours.xlsx").fillna("") material_df = pd.read_excel("material.xlsx").fillna("") material_df = material_df.rename(columns={'Date': 'Material Date'}) temp = [] combined_df = pd.DataFrame() last_date = "1999-01-01" for _, row in hours_df.iterrows(): if row["Date"] != "": block_df = pd.DataFrame(temp, columns=hours_df.columns) if temp: # Filter material_df based on the date range first_date_in_block = block_df.iloc[0, 0] filtered_df = material_df.loc[ (material_df["Material Date"] < first_date_in_block) & (material_df["Material Date"] >= last_date) ] last_date = first_date_in_block # Reset indices for horizontal alignment block_df.reset_index(drop=True, inplace=True) filtered_df.reset_index(drop=True, inplace=True) # Concatenate horizontally combined_block = pd.concat([block_df, filtered_df], axis=1) combined_df = pd.concat([combined_df, combined_block], ignore_index=True) temp = [] temp.append(row) # Handling the remaining block if temp: block_df = pd.DataFrame(temp, columns=hours_df.columns) combined_df = pd.concat([combined_df, block_df], ignore_index=True) print(combined_df) | 4 | 3 |
79,312,644 | 2024-12-27 | https://stackoverflow.com/questions/79312644/extracting-substring-between-optional-substrings | I need to extract a substring which is between two other substrings. But I would like to make the border substrings optional - if no substrings found then the whole string should be extracted. patt = r"(?:bc)?(.*?)(?:ef)?" a = re.sub(patt, r"\1", "bcdef") # d - as expected a = re.sub(patt, r"\1", "abcdefg") # adg - as expected # I'd like to get `d` only without `a` and `g` # Trying to remove `a`: patt = r".*(?:bc)?(.*?)(?:ef)?" a = re.sub(patt, r"\1", "bcdef") # empty !!! a = re.sub(patt, r"\1", "abcdef") # empty !!! # make non-greedy patt = r".*?(?:bc)?(.*?)(?:ef)?" a = re.sub(patt, r"\1", "bcdef") # d - as expected a = re.sub(patt, r"\1", "abcdef") # `ad` instead of `d` - `a` was not captured # make `a` non-captured patt = r"(?:.*?)(?:bc)?(.*?)(?:ef)?" a = re.sub(patt, r"\1", "abcdef") # ad !!! `a` still not captured I also tried to use re.search without any success. How can I extract d only (a substring between optional substrings bc and ef) from abcdefg? The same pattern should return hij when applied to hij. | By making the bc and ef patterns optional, you'll get into situations where the one is matched, while the other is not. Yet, you'd need both of them or neither. The requirement that you need the whole input to match when these delimiters are not present really overcomplicates it. Realise that if there is no match, sub will not alter the input, and so that would actually achieve the desired result. In other words, don't make these delimiter patterns optional -- make them mandatory. When there is a match, you'll want to replace all of the input with the captured group. This means you should also match what follows ef, so it gets replaced (removed) too. Bringing all that together, you could use: patt = r".*?bc(.*?)ef.*" Be aware that this will only match the first occurrence of the bc...ef pattern. If the input string has more occurrences of those, the sub call will only return the first delimited text. | 3 | 3 |
79,312,133 | 2024-12-27 | https://stackoverflow.com/questions/79312133/getting-all-leaf-words-reverse-stemming-into-one-python-list | On the same lines as the solution provided in this link, I am trying to get all leaf words of one stem word. I am using the community-contributed (@Divyanshu Srivastava) package get_word_forms Imagine I have a shorter sample word list as follows: my_list = [' jail', ' belief',' board',' target', ' challenge', ' command'] If I work it manually, I do the following (which is go word-by-word, which is very time-consuming if I have a list of 200 words): get_word_forms("command") and get the following output: {'n': {'command', 'commandant', 'commandants', 'commander', 'commanders', 'commandership', 'commanderships', 'commandment', 'commandments', 'commands'}, 'a': set(), 'v': {'command', 'commanded', 'commanding', 'commands'}, 'r': set()} 'n' is noun, 'a' is adjective, 'v' is verb, and 'r' is adverb. If I try to reverse-stem the entire list in one go: [get_word_forms(word) for word in sample] I fail at getting any output: [{'n': set(), 'a': set(), 'v': set(), 'r': set()}, {'n': set(), 'a': set(), 'v': set(), 'r': set()}, {'n': set(), 'a': set(), 'v': set(), 'r': set()}, {'n': set(), 'a': set(), 'v': set(), 'r': set()}, {'n': set(), 'a': set(), 'v': set(), 'r': set()}, {'n': set(), 'a': set(), 'v': set(), 'r': set()}, {'n': set(), 'a': set(), 'v': set(), 'r': set()}] I think I am failing at saving the output to the dictionary. Eventually, I would like my output to be a list without breaking it down into noun, adjective, adverb, or verb: something like: ['command','commandant','commandants', 'commander', 'commanders', 'commandership', 'commanderships','commandment', 'commandments', 'commands','commanded', 'commanding', 'commands', 'jail', 'jailer', 'jailers', 'jailor', 'jailors', 'jails', 'jailed', 'jailing'.....] .. and so on. | One solution using nested list comprehensions after stripping forgotten spaces: all_words = [setx for word in my_list for setx in get_word_forms(word.strip()).values() if len(setx)] # Flatten the list of sets all_words = [word for setx in all_words for word in setx] # Remove the repetitions and sort the set all_words = sorted(set(all_words)) print(all_words) ['belief', 'beliefs', 'believabilities', 'believability', 'believable', 'believably', 'believe', 'believed', 'believer', 'believers', 'believes', 'believing', 'board', 'boarded', 'boarder', 'boarders', 'boarding', 'boards', 'challenge', 'challengeable', 'challenged', 'challenger', 'challengers', 'challenges', 'challenging', 'command', 'commandant', 'commandants', 'commanded', 'commander', 'commanders', 'commandership', 'commanderships', 'commanding', 'commandment', 'commandments', 'commands', 'jail', 'jailed', 'jailer', 'jailers', 'jailing', 'jailor', 'jailors', 'jails', 'target', 'targeted', 'targeting', 'targets'] | 1 | 1 |
79,313,107 | 2024-12-28 | https://stackoverflow.com/questions/79313107/how-to-have-pyright-infer-type-from-an-enum-check | Can types be associated with enums, so that Pyright can infer the type from an equality check? (Without cast() or isinstance().) from dataclasses import dataclass from enum import Enum, auto class Type(Enum): FOO = auto() BAR = auto() @dataclass class Foo: type: Type @dataclass class Bar: type: Type item = next(i for i in (Foo(Type.FOO), Bar(Type.BAR)) if i.type == Type.BAR) reveal_type(item) # How to have this be `Bar` instead of `Foo | Bar`? | You want a discriminated union (also known as tagged union). In a discriminated union, there exists a discriminator (also known as a tag field) which can be used to differentiate the members. You currently have an union of Foo and Bar, and you want to discriminate them using the .type attribute. However, this field cannot be the discriminator since it isn't different for each member of the union. (playgrounds: Pyright, Mypy) for i in (Foo(Type.FOO), Bar(Type.BAR)): reveal_type(i) # Foo | Bar mischievous_foo = Foo(Type.BAR) # This is valid naughty_bar = Bar(Type.FOO) # This too for i in (mischievous_foo, naughty_bar): if i.type == Type.FOO: reveal_type(i) # Runtime: Bar, not Foo If Foo.type can only ever be Type.FOO and Bar.Type be Type.BAR, then it is important that you reflect this in the types: (Making type a dataclass field no longer makes sense at this point, but I'm assuming they are only dataclasses for the purpose of this question.) @dataclass class Foo: type: Literal[Type.FOO] @dataclass class Bar: type: Literal[Type.BAR] As Literal[Type.FOO] and Literal[Type.BAR] are disjoint types, i will then be narrowable by checking for the type of .type: (playgrounds: Pyright, Mypy) for i in (Foo(Type.FOO), Bar(Type.BAR)): if i.type == Type.FOO: reveal_type(i) # Foo Foo(Type.BAR) # error Bar(Type.FOO) # error ...even in a generator, yes: item = next(i for i in (Foo(Type.FOO), Bar(Type.BAR)) if i.type == Type.BAR) reveal_type(item) # Bar | 2 | 2 |
79,312,774 | 2024-12-27 | https://stackoverflow.com/questions/79312774/inconsistent-url-error-in-django-from-following-along-to-beginner-yt-tutorial | As you can see in the first screenshot, /products/new isn't showing up as a valid URL although I followed the coding tutorial from YouTube exactly. For some reason there's a blank character before "new" but no blank space in the current path I'm trying to request. I don't know if that's normal or not. I'm using django version 2.1 if that matters The URL does work for products/salt/. What's weird is the URL used to be products/trending/ but I got the same error as with products/new so I randomly changed the URL to products/salt and it started working for me. [Page not found (404) Request Method: GET Request URL: http://127.0.0.1:8000/products/new/ Using the URLconf defined in pyshop.urls, Django tried these URL patterns, in this order: admin/ products/ products/ salt products/ new The current path, products/new/, didn't match any of these.]1 from django.http import HttpResponse from django.shortcuts import render def index(request): return HttpResponse('Hello World') def trending(request): return HttpResponse('Trending Products') def new(request): return HttpResponse('New Products')[2] from django.urls import path from . import views urlpatterns = [ path('', views.index), path('salt', views.trending), path('new', views.new)[3] | Add a trailing slash / to your URLpatterns to resolve this issue i.e. new/ and trending/. Also as mentioned in my comment, I would suggest you upgrade to a secure version of Django to access newer features. | 3 | 2 |
79,310,840 | 2024-12-27 | https://stackoverflow.com/questions/79310840/pil-generate-an-image-from-applying-a-gradient-to-a-numpy-array | I have a 2d NumPy array with values from 0 to 1. I want to turn this array into a Pillow image. I can do the following, which gives me a nice greyscale image: arr = np.random.rand(100,100) img = Image.fromarray((255 * arr).astype(np.uint8)) Now, instead of making a greyscale image, I'd like to apply a custom gradient. To clarify, instead of drawing bands of colors in a linear gradient as in this example, I'd like to specify apply a gradient colormap to an existing 2d array and turn it into a 3d array. Example: If my gradient is [color1, color2, color3], then all 0s should be color1, all 1s should be color3, and 0.25 should be somewhere in between color1 and color2. I was already able to write a simple function that does this: gradient = [(0, 0, 0), (255, 80, 0), (0, 200, 255)] # black -> orange -> blue def get_color_at(x): assert 0 <= x <= 1 n = len(gradient) if x == 1: return gradient[-1] pos = x * (n - 1) idx1 = int(pos) idx2 = idx1 + 1 frac = pos - idx1 color1 = gradient[idx1] color2 = gradient[idx2] color_in_between = [round(color1[i] * (1 - frac) + color2[i] * frac) for i in range(3)] return tuple(color_in_between) So get_color_at(0) returns (0,0,0) and get_color_at(0.75) equals (153, 128, 102), which is this tan/brownish color in between orange and blue. Now, how can I apply this to the original NumPy array? I shouldn't apply get_color_at directly to the NumPy array, since that would still give a 2d array, where each element is a 3-tuple. Instead, I think I want an array whose shape is (n, m, 3), so I can feed that to Pillow and create an RGB image. If possible, I'd prefer to use vectorized operations whenever possible - my input arrays are quite large. If there is builtin-functionality to use a custom gradient, I would also love to use that instead of my own get_color_at function, since my implementation is pretty naive. Thanks in advance. | Method 1: vectorization of your code Your code is almost already vectorized. Almost all operations of it can work indifferently on a float or on an array of floats Here is a vectorized version def get_color_atArr(arr): assert (arr>=0).all() and (arr<=1).all() n=len(gradient) gradient.append(gradient[-1]) gradient=np.array(gradient, dtype=np.uint8) pos = arr*(n-1) idx1 = pos.astype(np.uint8) idx2 = idx1+1 frac = (pos - idx1)[:,:,None] color1 = gradient[idx1] color2 = gradient[idx2] color_in_between = np.round(color1*(1-frac) + color2*frac).astype(np.uint8) Basically, the changes are, the assert (can't use a<b<c notation with numpy arrays). Note that this assert iterates all values of array to check for assertion. That is not for free. So I included it because you did. But you need to be aware that this is not a compile-time verification. It does run code to check all values, which is a non-negligible part of all execution time of the code. more an implementation choice than a vectorization step (a pure translation of your code would have translated that if x==1 into some np.where, or masks. But I am never comfortable with usage of == on floats any way. So I prefer my way. Which costs nothing. It is not another iteration on the image. It adds a sentinel (In Donald Kuth sense of "sentinel": a few bytes that avoid special cases) to the gradient color. So that, in the unlikely even that arr is really 1.0, the gradient happen between last color and last color). 
frac is broadcasted in 3D array, so that it can be used as a coefficient on 3d arrays color1 and color2 Plus of course, int or floor can't be used on numpy arrays Method 2: not reinventing the wheel Matplotlib (and, I am certain, many other libraries) already have a whole colormap module to deal with this kind of transformations. Let's use it thresh=np.linspace(0,1,len(gradient)) cmap=LinearSegmentedColormap.from_list('mycmap', list(zip(thresh, np.array(gradient)/255.0)), N=256*len(gradient)) arr2 = cmap(arr)[:,:,:3] This is building a custom colormap, using LinearSegmentedColormap, which takes, as 2nd argument, a list of pair (threshold, color). Such as [(0, (0,0,0)), (0.3, (1,0,0)), (0.8, (0,1,0)), (1, (0,0,1))] for a color map that goes from black to red when x goes from 0 tom 0.3, then from red to green when x goes from 0.3 to 0.8, then from green to blue. In this case, your gradient can be transformed to such a list, with just a zip with a linspace. It takes a N= argument, since it creates a discretization of all possible colors (with interpolation in between). Here I take an exaggerated option (my N is more than the maximum number of different colors than can exist, once uint8d) Also since it returns a RGBA array, and to remain strictly identical to what you did, I drop the A using [:,:,:3]. Of course, both method need the final translation into PIL, but you already know how to do that. For this one, it also needs mapping between 0 and 255, which I can do with your own code: Image.fromarray((255 * arr).astype(np.uint8)) Note that, while using matplotlib colormap, you may want to take a tour at what that module has to offer. For example some of the zillions of already existing colormaps may suit you. Or some other way to build colors map non-linearly. | 2 | 2 |
79,311,978 | 2024-12-27 | https://stackoverflow.com/questions/79311978/how-can-i-optimize-python-code-for-analysis-of-a-large-sales-dataset | I’m working on a question where I have to process a large set of sales transactions stored in a CSV file and summarize the results. The code is running slower than expected and taking too much time for execution, especially as the size of the dataset increases. I am using pandas to load and process the data, are there any optimizations I can make to reduce computational time and get the output faster. Here is the code i am using: import pandas as pd import numpy as np # Sample dataset n = 10**6 # million rows np.random.seed(0) transaction_ids = np.arange(1, n+1) customer_ids = np.random.randint(100, 200, n) sale_amounts = np.random.uniform(50, 500, n) transaction_dates = pd.date_range('2023-01-01', periods=n, freq='T') # DataFrame df = pd.DataFrame({ 'transaction_id': transaction_ids, 'customer_id': customer_ids, 'sale_amount': sale_amounts, 'transaction_date': transaction_dates }) # Categorization function def categorize_transaction(sale_amount): if sale_amount > 400: return 'High Value' elif sale_amount > 200: return 'Medium Value' else: return 'Low Value' category_map = { 'High Value': (df['sale_amount'] > 400), 'Medium Value': (df['sale_amount'] > 200) & (df['sale_amount'] <= 400), 'Low Value': (df['sale_amount'] <= 200) } df['category'] = np.select( [category_map['High Value'], category_map['Medium Value'], category_map['Low Value']], ['High Value', 'Medium Value', 'Low Value'], default='Unknown' ) # Aggregation category_summary = df.groupby('category')['sale_amount'].agg( total_sales='sum', avg_sales='mean', transaction_count='count' ).reset_index() # Additional optimization using 'transaction_date' for time-based grouping df['transaction_month'] = df['transaction_date'].dt.to_period('M') monthly_summary = df.groupby(['transaction_month', 'category'])['sale_amount'].agg( total_sales='sum', avg_sales='mean', transaction_count='count' ).reset_index() print(category_summary.head()) print(monthly_summary.head()) | First of all, the df['category'] = np.select(...) line is slow because of the implicit conversion of all strings to a list of string objects. You can strongly speed this up by creating a categorical column rather than string-based one, since strings are inherently slow to compute. df['category'] = pd.Categorical.from_codes(np.select( [category_map['High Value'], category_map['Medium Value'], category_map['Low Value']], [0, 1, 2], default=3 ), ['High Value', 'Medium Value', 'Low Value', 'Unknown']) This create a categorical column with 4 possible values (integers associated to predefined strings). This is about 8 times faster on my machine. Once you use the above code, the aggregation is also running much faster (about 5 times) because Pandas operates on integers rather than slow string objets. It also speed up the very-last operation (about twice faster). The df['transaction_date'].dt.to_period('M') is particularly slow. Directly using Numpy (with .astype('datetime64[M]')) does not make this faster. Since this operation is compute bound, you can parallelize it. Alternatively, you can write your own (parallel) implementation with Numba (or Cython) though this is tedious to write since one need to case about leap years (and possibly even leap seconds). Update: You can make the first code even faster thanks to 8-bit integers (assuming there are less than 128 categories). 
This can be done by replacing [0, 1, 2] to np.array([0, 1, 2], dtype=np.int8). This is about 35% faster than the default 32-bit categories. | 1 | 3 |
79,311,933 | 2024-12-27 | https://stackoverflow.com/questions/79311933/how-to-solve-multiple-and-nested-discriminators-with-pydantic-v2 | I am trying to validate Slack interaction payloads, that look like these: type: block_actions container: type: view ... type: block_actions container: type: message ... type: view_submission ... I use 3 different models for payloads coming to the same interaction endpoint: class MessageContainer(BaseModel): type: Literal["message"] ... class ViewContainer(BaseModel): type: Literal["view"] ... class MessageActions(ActionsBase): type: Literal["block_actions"] container: MessageContainer ... class ViewActions(ActionsBase): type: Literal["block_actions"] container: ViewContainer ... class ViewSubmission(BaseModel): type: Literal["view_submission"] ... and I was planning to use BlockActions = Annotated[ MessageActions | ViewActions, Field(discriminator="container.type"), ] SlackInteraction = Annotated[ ViewSubmission | BlockActions, Field(discriminator="type"), ] SlackInteractionAdapter = TypeAdapter(SlackInteraction) but cannot make it work with v2.10.4. Do I have to dispatch them manually or there is a way to solve it with Pydantic? | Not sure it's possible to use 2 discriminators to resolve one type (as you are trying to do). I can suggest you 3 options: 1. Split block_actions into block_message_actions and block_view_actions: from typing import Annotated, Literal from pydantic import BaseModel, Field, TypeAdapter class MessageContainer(BaseModel): pass class ViewContainer(BaseModel): pass class ActionsBase(BaseModel): pass class MessageActions(ActionsBase): type: Literal["block_message_actions"] container: MessageContainer class ViewActions(ActionsBase): type: Literal["block_view_actions"] container: ViewContainer class ViewSubmission(BaseModel): type: Literal["view_submission"] SlackInteraction = Annotated[ ViewSubmission | ViewActions | MessageActions, Field(discriminator="type"), ] SlackInteractionAdapter = TypeAdapter(SlackInteraction) a = SlackInteractionAdapter.validate_python({"type": "view_submission"}) assert isinstance(a, ViewSubmission) b = SlackInteractionAdapter.validate_python( {"type": "block_message_actions", "container": {}}, ) assert isinstance(b, MessageActions) assert isinstance(b.container, MessageContainer) c = SlackInteractionAdapter.validate_python( {"type": "block_view_actions", "container": {}}, ) assert isinstance(c, ViewActions) assert isinstance(c.container, ViewContainer) 2. Use Discriminated Unions with callable Discriminator: def get_discriminator_value(v: Any) -> str: if isinstance(v, dict): if v["type"] == "view_submission": return "view_submission" return "message_action" if v["container"]["type"] == "message" else "view_action" if v.type == "view_submission": return "view_submission" return "message_action" if v.container.type == "message" else "view_action" SlackInteraction = Annotated[ Union[ Annotated[ViewSubmission, Tag("view_submission")], Annotated[MessageActions, Tag("message_action")], Annotated[ViewActions, Tag("view_action")], ], Discriminator(get_discriminator_value), ] SlackInteractionAdapter = TypeAdapter(SlackInteraction) 3. 
Use nested discriminated unions: from typing import Annotated, Literal from pydantic import BaseModel, Field, TypeAdapter class MessageContainer(BaseModel): type: Literal["message"] class ViewContainer(BaseModel): type: Literal["view"] ActionContainer = Annotated[ MessageContainer | ViewContainer, Field(discriminator="type"), ] class BlockActions(BaseModel): type: Literal["block_actions"] container: ActionContainer class ViewSubmission(BaseModel): type: Literal["view_submission"] SlackInteraction = Annotated[ ViewSubmission | BlockActions, Field(discriminator="type"), ] SlackInteractionAdapter = TypeAdapter(SlackInteraction) a = SlackInteractionAdapter.validate_python({"type": "view_submission"}) assert isinstance(a, ViewSubmission) b = SlackInteractionAdapter.validate_python( {"type": "block_actions", "container": {"type": "message"}}, ) assert isinstance(b, BlockActions) assert isinstance(b.container, MessageContainer) c = SlackInteractionAdapter.validate_python( {"type": "block_actions", "container": {"type": "view"}}, ) assert isinstance(c, BlockActions) assert isinstance(c.container, ViewContainer) | 1 | 2 |
79,309,271 | 2024-12-26 | https://stackoverflow.com/questions/79309271/pandas-series-subtract-pandas-dataframe-strange-result | I'm wondering why pandas Series subtract a pandas dataframe produce such a strange result. df = pd.DataFrame(np.arange(10).reshape(2, 5), columns='a-b-c-d-e'.split('-')) df.max(axis=1) - df[['b']] What are the steps for pandas to produce the result? b 0 1 0 NaN NaN NaN 1 NaN NaN NaN | By default an operation between a DataFrame and a Series is broadcasted on the DataFrame by column, over the rows. This makes it easy to perform operations combining a DataFrame and aggregation per column: # let's subtract the DataFrame to its max per column df.max(axis=0) - df[['b']] a b c d e b NaN 5 NaN NaN NaN 1 NaN 0 NaN NaN NaN Here, since you're aggregating per row, this is no longer possible. You should use rsub with the parameter axis=0: df[['b']].rsub(df.max(axis=1), axis=0) Output: b 0 3 1 3 Note that using two Series would also align the values: df.max(axis=1) - df['b'] Output: 0 3 1 3 dtype: int64 Why 3 columns with df.max(axis=1) - df[['b']]? First, let's have a look at each operand: # df.max(axis=1) 0 4 1 9 dtype: int64 # df[['b']] b 0 1 1 6 Since df[['b']] is 2D (DataFrame), and df.max(axis=1) is 1D (Series), df.max(axis=1) will be used as if it was a "wide" DataFrame: # df.max(axis=1).to_frame().T 0 1 0 4 9 There are no columns in common, thus the output is only NaNs with the union of column names ({'b'}|{0, 1} -> {'b', 0, 1}). If you replace the NaNs that are used in the operation by 0 this makes it obvious how the values are used: df[['b']].rsub(df.max(axis=1).to_frame().T, fill_value=0) b 0 1 0 -1.0 4.0 9.0 1 -6.0 NaN NaN Now let's check a different example in which one of the row indices has the same name as one of the selected columns: df = pd.DataFrame(np.arange(10).reshape(2, 5), columns=['a', 'b', 'c', 'd', 'e'], index=['b', 0] ) df.max(axis=1) - df[['b']] Now the output only has 2 columns, b the common indice and 1 the second index in the Series ({'b', 1}|{'b'} -> {'b', 1}): 1 b b NaN 3 1 NaN -2 | 1 | 1 |
79,310,713 | 2024-12-27 | https://stackoverflow.com/questions/79310713/how-to-apply-the-capitalize-with-condition | I'm wondering how to use the capitalize function when another column has a specific value. For example, I want to change the first letter of students with Master's degree. # importing pandas as pd import pandas as pd # creating a dataframe df = pd.DataFrame({ 'A': ['john', 'bODAY', 'minA', 'peter', 'nicky'], 'B': ['Masters', 'Graduate', 'Graduate', 'Masters', 'Graduate'], 'C': [27, 23, 21, 23, 24] }) # Expected result # A B C #0 John Masters 27 #1 bODAY Graduate 23 #2 minA Graduate 21 #3 Peter Masters 23 #4 nicky Graduate 24 I tried it like this, but it didn't apply well. df[df['B']=='Masters']['A'].str = df[df['B']=='Masters']['A'].str.capitalize() | Here is the complete code: import pandas as pd # Creating the DataFrame df = pd.DataFrame({ 'A': ['john', 'bODAY', 'minA', 'peter', 'nicky'], 'B': ['Masters', 'Graduate', 'Graduate', 'Masters', 'Graduate'], 'C': [27, 23, 21, 23, 24] }) # Capitalize column A conditionally based on B df['A'] = df.apply(lambda row: row['A'].capitalize() if row['B'] == 'Masters' else row['A'], axis=1) # Display the updated DataFrame print(df) Output: A B C 0 John Masters 27 1 bODAY Graduate 23 2 minA Graduate 21 3 Peter Masters 23 4 nicky Graduate 24 | 1 | 1 |
79,309,886 | 2024-12-26 | https://stackoverflow.com/questions/79309886/parsing-units-out-of-column | I've got some data I'm reading into Python using Pandas and want to keep track of units with the Pint package. The values have a range of scales, so have mixed units, e.g. lengths are mostly meters but some are centimeters. For example the data: what,length foo,5.3 m bar,72 cm and I'd like to end up with the length column in some form that Pint understands. Pint's Pandas integration suggests that it only supports the whole column having the same datatype, which seems reasonable. I'm happy with some arbitrary unit being picked (e.g. the first, most common, or just SI base unit) and everything expressed in terms of that. I was expecting some nice way of getting from the data I have to what's expected, but I don't see anything. import pandas as pd import pint_pandas length = pd.Series(['5.3 m', "72 cm"], dtype='pint[m]') Doesn't do the correct thing at all, for example: length * 2 outputs 0 5.3 m5.3 m 1 72 cm72 cm dtype: pint[meter] so it's just leaving things as strings. Calling length.pint.convert_object_dtype() doesn't help and everything stays as strings. | Going through the examples, it looks like pint_pandas is expecting numbers rather than strings. You can use apply to do the conversion: from pint import UnitRegistry ureg = UnitRegistry() df["length"].apply(lambda i: ureg(i)).astype("pint[m]") However, why keep the column as Quantity objects instead of just plain float numbers? | 1 | 2 |
79,309,190 | 2024-12-26 | https://stackoverflow.com/questions/79309190/numpy-convention-for-storing-time-series-of-vectors-and-matrices-items-in-rows | I'm working with discrete-time simulations of ODEs with time varying parameters. I have time series of various data (e.g. time series of state vectors generated by solve_ivp, time series of system matrices generated by my control algorithm, time series of system matrices in modal form, and so on). My question: in what order should I place the indices? My intuition is that since numpy arrays are (by default) stored in row-major order, and I want per-item locality, each row should contain the "item" (i.e. a vector or matrix), and so the number of rows is the number of time points, and the number of columns is the dimension of my vector, e.g.: x_k = np.array((5000, 4)) # a time series of 5000, 4-vectors display(x_k[25]) # the 26th timepoint Or for matrices I might use: A_k = np.array((5000, 4, 4)) # a time series of 5000, 4x4-matrices However, solve_ivp appears to do the opposite and returns a row-major array with the time series in columns (sol.y shape is (4, 5000)). Furthermore, transposing the result with .T just flips a flag to column-major so it is not really clear what the developers of solve_ivp and numpy intend me to do to write cache efficient code. What are the conventions? Should I use the first index for the time index, as in my examples above, or last index as solve_ivp does? | This is strongly dependent of the algorithms applied on your dataset. This problem is basically known as AoS versus SoA. For algorithm that does not benefit much from SIMD operations and accessing all fields, AoS can be better, otherwise SoA is often better. The optimal data structure is often AoSoA, but it is often a pain to manipulate (so it is rare in Numpy codes). On top of that, Numpy is not efficient to operate on arrays having a very small last axis because of the way it is currently implemented (more specifically because internal generators, unneeded function calls, and lack of function specialization which is hard to do due to the high number of possible combinations). Example Here is a first practical example showing this (center of a point cloud): aos = np.random.rand(1024*1024, 2) %timeit -n 10 aos.mean(axis=0) # 17.6 ms ± 503 µs per loop soa = np.random.rand(2, 1024*1024) %timeit -n 10 soa.mean(axis=1) # 1.7 ms ± 77 µs per loop Here we can see that the SoA version is much faster (about 10 times). This is because the SoA version benefit from SIMD instruction while the former does not and suffer from the internal Numpy iterator overhead. Technically, please note that the AoS version could be implemented to be nearly as fast as the SoA version here but Numpy is not able to optimize this yet (nor any similar cases which are actually not so easy to optimize). In your case For matrices, Numpy can call BLAS functions on contiguous arrays, which is good for performance. However, a 4x4 matrix-vector operation takes a tiny amount of time: only few nanoseconds on mainstream CPUs (for AoS). Indeed, multiplying the 4x4 matrix rows by a vector takes only 4 AVX instructions that can typically be computed in only 2 cycles. Then comes the sum reduction which takes few nanoseconds too (~4 cycles per line for a naive hadd reduction that is 16 cycles for the whole matrix). Meanwhile, a BLAS function call from Numpy and the management of internal iterators takes significantly more than 10 ns per matrix to compute. 
This means most of the time will be spent in Numpy overheads with a AoS layout. Thus, a np.array((5000, 4, 4)) will certainly not be so bad, but clearly far from being optimal. You can strongly reduce these overheads by writing your own specialized implementation (with Cython/Numba) specifically designed for 4x4 matrices. Here is an example of relatively-fast AoS computation using Numba. With a SoA data layout (i.e. (4, 4, 5000)), you can write your own vectorized operations (e.g. SoA-based matrix multiplication). A naive implementation will certainly not be very efficient either because creating/filling temporary Numpy array is expensive. However, temporary arrays can often be preallocated/reused and operations can be often done in-place so to reduce the overheads. On top of that, you can tune the size of the temporary array so it can fit in the L1 cache (though this is tedious to do since it makes the code more complex so generally Numpy users don't want to do that). That being said, calling Numpy functions from CPython also has a significant overhead (generally 0.2-3.0 µs on my i5-9600KF CPU). This is a problem since doing basic computation on 5000 double-precision floating-point numbers in the L1 cache typically takes less than 1 µs. As a result, there is a good chance for most of the time to be spent in CPython/Numpy overheads with a SoA array having only 5000 items manipulated only using Numpy. Here again, Cython/Numba can be used to nearly remove these overheads. The resulting Cython/Numba code should be faster on SoA than AoS arrays (mainly because of horizontal SIMD operations are generally inefficient and AoS operations tends to be hard to optimize, especially on modern CPUs with wide SIMD instruction set). Conclusion This is a complicated topic. In your specific case, I expect both SoA and AoS to be inefficient if you only use Numpy (but the SoA version might be a bit faster) : most of the time will be spent in overheads. As a result, the speed of the best implementation is dependent of the exact algorithm implementation and even the CPU used (so the best is to try which one is better in practice in practice). That being said, I think using SoA is significantly better performance-wise than AoS. Indeed, codes operating on SoA arrays can be optimized more easily and further than AoS ones (see Cython/Numba or even native C code). On top of that, SoA-based codes are much more likely to benefit from accelerators like GPUs. Indeed, GPUs are massively-SIMD hardware devices operating on wide SIMD vectors (e.g. 32 items at once). 4x4 contiguous AoS matrix operation are generally pretty inefficient on them, meanwhile SIMD-friendly SoA ones are cheap. I advise you to write a clean/simple Numpy code first while preferring a SoA layout for your array, and then optimize slow parts of the code later (possibly with Cython/Numba/native codes). This strategy often results in relatively-clean codes that are simple to optimize. | 1 | 2 |
79,309,025 | 2024-12-26 | https://stackoverflow.com/questions/79309025/why-does-summing-data-grouped-by-df-iloc-0-also-sum-up-the-column-names | I have a DataFrame with a species column and four arbitrary data columns. I want to group it by species and sum up the four data columns for each one. I've tried to do this in two ways: once by grouping by df.columns[0] and once by grouping by df.iloc[:, 0]. data = { 'species': ['a', 'b', 'c', 'd', 'e', 'rt', 'gh', 'ed', 'e', 'd', 'd', 'q', 'ws', 'f', 'fg', 'a', 'a', 'a', 'a', 'a'], 's1': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 's2': [9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9], 's3': [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], 's4': [10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10] } df = pd.DataFrame(data) grouped_df1 = df.groupby(df.columns[0], as_index=False).sum() grouped_df2 = df.groupby(df.iloc[:, 0], as_index=False).sum() Both methods correctly sum the data in the four rightmost columns. But for some reason, the second method also sums up the names of the species, concatenating them into one long, repeating string. Here's the result from the first method, which is what I'm looking for: print(grouped_df1) species s1 s2 s3 s4 0 a 91 54 97 60 1 b 2 9 3 10 2 c 3 9 4 10 3 d 25 27 28 30 4 e 14 18 16 20 5 ed 8 9 9 10 6 f 14 9 15 10 7 fg 15 9 16 10 8 gh 7 9 8 10 9 q 12 9 13 10 10 rt 6 9 7 10 11 ws 13 9 14 10 And here's the result from the df.iloc method, which incorrectly sums up the species data: print(grouped_df2) species s1 s2 s3 s4 0 aaaaaa 91 54 97 60 1 b 2 9 3 10 2 c 3 9 4 10 3 ddd 25 27 28 30 4 ee 14 18 16 20 5 ed 8 9 9 10 6 f 14 9 15 10 7 fg 15 9 16 10 8 gh 7 9 8 10 9 q 12 9 13 10 10 rt 6 9 7 10 11 ws 13 9 14 10 Why is the second method summing up the species names as well as the numerical data? | In groupby - column name is treated as an intrinsic grouping key, while a Series is treated as an external key. Reference - https://pandas.pydata.org/docs/reference/groupby.html When using df.iloc[:, 0]: Pandas considers the string values in the species column as a separate grouping key independent of the DataFrame structure. When using df.columns[0]: Pandas directly uses the column 'species' within the DataFrame as the grouping key. This allows Pandas to manage the grouping and summation correctly. Code COrrection You should always reference the column name explicitly grouped_df1 = df.groupby('species', as_index=False).sum() Or this also works grouped_df1 = df.groupby(df[df.columns[0]], as_index=False).sum() | 2 | 0 |
79,308,731 | 2024-12-26 | https://stackoverflow.com/questions/79308731/safest-way-to-incrementally-append-to-a-file | I'm performing some calculations to generate chaotic solutions to a mathematical function. I have an infinite loop that looks something like this: f = open('solutions.csv', 'a') while True: x = generate_random_parameters() # x is a list of floats success = test_parameters(x) if success: print(','.join(map(str, x)), file=f, flush=True) The implementation of generate_random_parameters() and test_parameters() is not very important here. When I want to stop generating solutions I want to ^C, but I want to ensure that solutions.csv keeps its integrity/doesn't get corrupted/etc, in case I happen to interrupt when the file is being written to. So far I haven't observed this happening, but I'd like to remove any possibility that this could occur. Additionally, since the program will never terminate on its own I don't have a corresponding f.close() -- this should be fine, correct? Appreciate any clarification. | One simple approach to ensuring that the current call to print finishes before the program exits from a keyboard interrupt is to use a signal handler to unset a flag on which the while loop runs. Set the signal handler only when you're about to call print and reset the signal handler to the original when print returns, so that the preceding code in the loop can be interrupted normally: import signal def interrupt_handler(signum, frame): global running running = False text = 'a' * 99999 running = True with open('solutions.csv', 'a') as f: while running: ... # your calculations original_handler = signal.signal(signal.SIGINT, interrupt_handler) print(text, file=f, flush=True) # your output signal.signal(signal.SIGINT, original_handler) Also note that it is more idiomatic to use open as a context manager to handle the closure of an open file when exiting the block for any reason. | 3 | 2 |
79,307,295 | 2024-12-25 | https://stackoverflow.com/questions/79307295/what-is-the-best-way-to-avoid-detecting-words-as-lines-in-opencv-linedetector | I am using OpenCV LineDetector class in order to parse tables. However, I face an issue when I try to detect lines inside the table. for the following image: I use img = cv2.imread(TABLE_PATH) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) lsd = cv2.createLineSegmentDetector(cv2.LSD_REFINE_ADV, sigma_scale=0.6) dlines = lsd.detect(gray) lines = (Line(x0, y0, x1, y1) for x0, y0, x1, y1 in dlines[0][:, 0]) in order to detect line segments. However, the results are lousy. these are the lines it detects: How can I make sure that words are not detected as lines. I cannot use hardcoded thresholds since they would work for one example but not for the other. Solutions in python or java would be appreciated | You got some lines detected, but that set contained some undesirable ones. You could just filter the set of lines for line length. If you do that, you can easily exclude the very short lines coming from the text in that picture. Implementation: that's a list comprehension, only including lines that are long enough. Write a predicate function that gives you the length of one line, then you can use that in the list comprehension. That is independent of how you scraped lines out of the picture. the LSD is one, but there are routines based on the Hough transform too, which might fare better or worse than what you have. You probably also noticed that your approach didn't find some lines that it should have. You might want to tweak the parameters you pass to your line detector. Or try another line detection approach. | 3 | 0 |
79,332,328 | 2025-1-6 | https://stackoverflow.com/questions/79332328/pydantic-model-how-to-exclude-field-from-being-hashed-eq-compared | I have the following hashable pydantic model: class TafReport(BaseModel, frozen=True): download_date: dt icao: str issue_time: dt validity_time_start: dt validity_time_stop: dt raw_report: str Now I don't want these reports to be considered different just because their download date is different (I insert that with the datetime.now()). How can i exclude download_date from being considered in the __hash__ and __eq__ functions so that I can do stunts like: tafs = list(set(tafs)) and have a unique set of tafs even though two might have differing download date? I'm looking for a solution where I don't have to overwrite the __hash__ and __eq__ methods... I checked out this topic but it only answers how to exclude a field from the model in general (so it doesn't show up in the json dumps), but I do want it to show up in the json dump. | Unfortunately there is no built-in option at the moment, but there are two options that you can consider: Changing from BaseModel to a Pydantic dataclass: from dataclasses import field from datetime import datetime as dt from pydantic import TypeAdapter from pydantic.dataclasses import dataclass @dataclass(frozen=True) class TafReport: download_date: dt = field(compare=False) icao: str issue_time: dt validity_time_start: dt validity_time_stop: dt raw_report: str TafReportAdapter = TypeAdapter(TafReport) SameTime = dt.now() TafReport1 = TafReport(download_date=dt.now(), icao='icao', issue_time=SameTime, validity_time_start=SameTime, validity_time_stop=SameTime, raw_report='raw_report') TafReport2 = TafReport(download_date=dt.now(), icao='icao', issue_time=SameTime, validity_time_start=SameTime, validity_time_stop=SameTime, raw_report='raw_report') print(TafReportAdapter.dump_json(TafReport1), hash(TafReport1)) print(TafReportAdapter.dump_json(TafReport2), hash(TafReport2)) This will give the same hash while the download_date is different. Exclude the download_date from the model and allow extra fields: from datetime import datetime as dt from pydantic import BaseModel class TafReport(BaseModel, frozen=True, extra='allow'): icao: str issue_time: dt validity_time_start: dt validity_time_stop: dt raw_report: str SameTime = dt.now() TafReport1 = TafReport(icao='icao', issue_time=SameTime, validity_time_start=SameTime, validity_time_stop=SameTime, raw_report='raw_report', download_date=dt.now()) TafReport2 = TafReport(icao='icao', issue_time=SameTime, validity_time_start=SameTime, validity_time_stop=SameTime, raw_report='raw_report', download_date=dt.now()) print(TafReport1.model_dump(), hash(TafReport1)) print(TafReport2.model_dump(), hash(TafReport2)) In this case the hash function is build based on the fields provided in the model. But allowing extra fields without defining them in the model gives you the ability to add the download_date without affecting the hash function build in the model. | 5 | 1 |
79,336,604 | 2025-1-7 | https://stackoverflow.com/questions/79336604/failed-creating-mock-folders-with-pyfakefs | I'm working on a project that uses pyfakefs to mock my filesystem to test folder creation and missing folders in a previously defined tree structure. I'm using Python 3.13 on Windows and get this output from the terminal after running my test: Terminal output: (Does anyone have a tip for formatting terminal output without getting automatic syntax highlighting?) E ====================================================================== ERROR: test_top_folders_exist (file_checker.tests.file_checker_tests.TestFolderCheck.test_top_folders_exist) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Users\juank\dev\projects\python\gamedev_eco\file_checker\tests\file_checker_tests.py", line 20, in test_top_folders_exist self.fs.create_dir(Path.cwd() / "gdd") ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^ File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyfakefs\fake_filesystem.py", line 2191, in create_dir dir_path = self.absnormpath(dir_path) File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyfakefs\fake_filesystem.py", line 1133, in absnormpath path = self.replace_windows_root(path) File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyfakefs\fake_filesystem.py", line 1418, in replace_windows_root if path and self.is_windows_fs and self.root_dir: ^^^^^^^^^^^^^ File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyfakefs\fake_filesystem.py", line 357, in root_dir return self._mount_point_dir_for_cwd() ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^ File "C:\Users\juank\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyfakefs\fake_filesystem.py", line 631, in _mount_point_dir_for_cwd if path.startswith(str_root_path) and len(str_root_path) > len(mount_path): ^^^^^^^^^^^^^^^ AttributeError: 'WindowsPath' object has no attribute 'startswith' ---------------------------------------------------------------------- Ran 1 test in 0.011s FAILED (errors=1) Test: from pyfakefs.fake_filesystem_unittest import TestCase class TestFolderCheck(TestCase): """Test top folders = gdd marketing business""" @classmethod def setUp(cls): cls.setUpClassPyfakefs() cls.fake_fs().create_dir(Path.cwd() / "gamedev_eco") cls.fake_fs().cwd = Path.cwd() / "gamedev_eco" def test_top_folders_exist(self): self.fs.create_dir(Path.cwd() / "gdd") What is confusing for me is that the Setup class method can create a folder and change cwd to that new folder but I'm not able to create a folder inside a test. Does anyone have experience working with pyfakefs? Can anyone lend me a hand with this issue please? | The issue has been acknowledged, fixed, and the fix has been included in the 5.7.4 release of pyfakefs. No workaround should thus be necessary, any longer. | 1 | 1 |
79,321,826 | 2025-1-1 | https://stackoverflow.com/questions/79321826/seleniumbase-cdp-mode-opening-new-tabs | I am currently writing a python program which uses a seleniumbase web bot with CDP mode activated: with SB(uc=True, test=True, xvfb=True, incognito=True, agent=<user_agent>, headless=True) as sb: temp_email_gen_url = "https://temp-mail.org/en" sb.activate_cdp_mode(temp_email_gen_url) ... I need to be able to create new tab and switch between the new and original tab. I have read the CDP docs but have not seen a solution to this, does anybody know how this can be done? | For better or worse there isn't an "open tab" feature in CDP mode. The main developer of seleniumbase suggests using a separate driver in CDP mode for each tab as follows, equivalent to using "open in new window" on every link: from seleniumbase import SB # opens all links on the target page with a second driver with SB(uc=True, test=True) as sb: temp_email_gen_url = "https://temp-mail.org/en" sb.driver.uc_open_with_reconnect(temp_email_gen_url) links = sb.get_unique_links() for link in links: driver2 = sb.get_new_driver(undetectable=True) driver2.uc_open_with_reconnect(link) print(driver2.title) sb.quit_extra_driver() You may want to consider reusing the second driver for each link instead of creating and destroying a driver for each link. It would be faster and more efficient, but it's possible that the site could use cookies and session storage to detect a suspicious number of page accesses coming from the same browser session. To elaborate on a question in the comments: there is indeed a way in non-CDP mode to open tabs but I don't recommend it. Connecting the WebDriver leaves traces that bot detection scripts can find, both obvious (in years past hard-coded variable names were added to the JS environment) and subtle such as exploiting obscure behavior around how stack traces and logging commands are buffered and normally run lazily, but not if WebDriver is connected. seleniumbase's UC mode was an attempt at addressing this by using a WebDriver most of the time, but disconnecting for a while just before doing something that can result in detection then waiting until the danger is assumed to have passed before reconnecting. It worked for a while but hosting platforms have adapted. CDP mode is a relatively new entrant in this cat-and-mouse game that is much harder to detect. The growing counter to CDP mode is to track requests and UI interactions such as mouse movements and clicks over time and deploy models like recaptcha v3 that predict the probability a browser is a bot. The counter to that will be increased reliance on pyautogui and similar tools to simulate human interaction with the UI. | 1 | 1 |
End of preview.
Description
- This dataset contains question-answer pairs extracted from Stack Overflow using the Stack Exchange API v2.3 and several of its endpoints (a representative query is sketched below).
- Covers questions created from 2020 January 1 to 2025 February 5.
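The card does not list the exact endpoints or filters that were used, so the snippet below is only an illustrative sketch of a Stack Exchange API v2.3 query matching the stated collection criteria (the `python` tag, the 2020-01-01 to 2025-02-05 window, question score ≥ 1, and an accepted answer present); the `withbody` filter, sorting, and pagination choices are assumptions, not the authors' actual pipeline.

```python
import datetime as dt
import requests

API = "https://api.stackexchange.com/2.3/questions"
params = {
    "site": "stackoverflow",
    "tagged": "python",
    # Unix timestamps for the collection window (assumed UTC).
    "fromdate": int(dt.datetime(2020, 1, 1, tzinfo=dt.timezone.utc).timestamp()),
    "todate": int(dt.datetime(2025, 2, 5, tzinfo=dt.timezone.utc).timestamp()),
    "sort": "creation",
    "order": "asc",
    "pagesize": 100,
    "filter": "withbody",  # built-in filter that includes the question body
}

page = requests.get(API, params=params, timeout=30).json()
for q in page["items"]:
    # Keep only questions matching the dataset's criteria.
    if q.get("score", 0) >= 1 and "accepted_answer_id" in q:
        print(q["question_id"], q["score"], q["link"])
```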
1. Dataset description:
   - Contains only `python` tagged question-answer pairs.
   - Each question has a vote greater than or equal to 1.
   - Only contains the questions that have accepted answers, together with the corresponding accepted answers.
   - Can contain a small number (~30) of accepted answers with negative votes.
2. Column description:
   - `question_id`: the question ID from Stack Overflow.
   - `creation_date`: the date when the question was created.
   - `link`: link to the Stack Overflow page corresponding to that question-answer pair.
   - `question`: the question text.
   - `accepted_answer`: the accepted answer text.
   - `question_vote`: score/vote given for the `question` by the Stack Overflow community.
   - `answer_vote`: score/vote given for the `accepted_answer` by the Stack Overflow community.
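A minimal usage sketch with the Hugging Face `datasets` library follows; the repository id and split name are placeholders, since neither appears in this preview.

```python
from datasets import load_dataset

# Placeholder repository id — replace with this dataset's actual "<user>/<name>".
ds = load_dataset("your-username/python-stackoverflow-qa", split="train")

print(ds.column_names)
# Expected: ['question_id', 'creation_date', 'link', 'question',
#            'accepted_answer', 'question_vote', 'answer_vote']

# Example: keep well-voted questions and drop the ~30 negative-vote accepted answers.
filtered = ds.filter(lambda row: row["question_vote"] >= 5 and row["answer_vote"] >= 0)
print(len(filtered), "rows after filtering")
```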