qid (int64, 46k–74.7M) | question (stringlengths 54–37.8k) | date (stringlengths 10–10) | metadata (sequencelengths 3–3) | response_j (stringlengths 29–22k) | response_k (stringlengths 26–13.4k) | __index_level_0__ (int64, 0–17.8k)
---|---|---|---|---|---|---
63,442,333 | When running `npm start`, my app shows a blank page (no favicon either), and the browser console shows
```
Loading failed for the <script> with source “http://localhost:3000/short_text_understanding/static/js/bundle.js”. bundle.js:23:1
Loading failed for the <script> with source “http://localhost:3000/short_text_understanding/static/js/0.chunk.js”. bundle.js:23:1
Loading failed for the <script> with source “http://localhost:3000/short_text_understanding/static/js/main.chunk.js”. bundle.js:23:1
```
If it helps debugging: my code worked previously, but after `npm audit`, my package.json changed
```
- "react-scripts": "3.4.1"
+ "react-scripts": "^3.4.3"
```
My package.json
```
{
"name": "short_text_understand",
"version": "0.1.0",
"private": true,
"dependencies": {
"@testing-library/jest-dom": "^4.2.4",
"@testing-library/react": "^9.5.0",
"@testing-library/user-event": "^7.2.1",
"react": "^16.13.1",
"react-dom": "^16.13.1",
"react-scripts": "^3.4.3"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"test:debug": "react-scripts --inspect-brk test --runInBand --no-cache",
"eject": "react-scripts eject",
"lint": "eslint .",
"predeploy": "npm run build",
"deploy": "gh-pages -d build"
},
"eslintConfig": {
"extends": "react-app"
},
"proxy": "http://localhost:5000",
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
},
"devDependencies": {
"eslint-config-airbnb": "^18.2.0",
"eslint-config-prettier": "^6.11.0",
"eslint-plugin-jsx-a11y": "^6.3.1",
"eslint-plugin-prettier": "^3.1.4",
"gh-pages": "^3.1.0",
"prettier": "^2.0.5"
}
}
```
Structure of source is
```
.
├── Dockerfile
├── LICENSE
├── README.md
├── docker-compose.yml
├── nginx
│   ├── Dockerfile
│   └── nginx.conf
├── package-lock.json
├── package.json
├── public
│   ├── android-chrome-192x192.png
│   ├── android-chrome-512x512.png
│   ├── apple-touch-icon.png
│   ├── favicon-16x16.png
│   ├── favicon-32x32.png
│   ├── favicon.ico
│   ├── index.html
│   ├── manifest.json
│   ├── robots.txt
│   └── site.webmanifest
├── src
│   ├── index.js
│   ├── normalize.css
│   ├── skeleton.css
│   └── style.css
└── src_python
    ├── Dockerfile
    └── __pycache__
```
What I do not understand is why `bundle.js` disappears and why my favicon is not loaded, even though I did not move the `public` folder and things worked before.
I am totally new to this; please tell me if you need any other information | 2020/08/16 | [
"https://Stackoverflow.com/questions/63442333",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1780570/"
] | ```
/// <summary>
/// Passengers array
/// </summary>
public Passenger[] Passengers = new Passenger[10];
public class Passenger
{
public int Age { get; set; }
public Passenger(int age)
{
Age = age;
}
}
public void AddPassenger()
{
// Get the number of passengers
int passengerCount = Passengers.Count(p => p != null);
if (passengerCount == Passengers.Length)
Console.WriteLine("Maximum number of passengers");
else
{
Console.WriteLine("How old are the passengers?");
int age = int.Parse(Console.ReadLine());
// Add passenger
Passengers[passengerCount] = new Passenger(age);
}
}
``` | You can try:
```
public void AddPassengers(Passenger[] passengers)
{
int i = Array.IndexOf(passengers, null);
if (i < 0)
{
Console.WriteLine("The array is full.");
return;
}
Console.WriteLine("How old is the passenger?");
int age = Int32.Parse(Console.ReadLine());
passengers[i] = new Passenger(age);
}
```
Hope this helps. | 1,939 |
73,225,062 | I am trying to use `multiprocessing.Queue` to manage some tasks that are sent by the main process and picked up by "worker" processes (`multiprocessing.Process`). The workers then run the task and put the results into a result queue.
Here is my main script:
```py
from multiprocessing import Process, Queue, freeze_support
import time

import auxiliaries as aux
import functions
if __name__ == '__main__':
freeze_support()
start = time.perf_counter()
# number of processes
nprocs = 3
# define the tasks
tasks = [(functions.get_stats_from_uniform_dist, (2**23, i)) for i in range(600)]
# start the queues
task_queue = Queue()
result_queue = Queue()
# populate task queue
for task in tasks:
task_queue.put(task)
# after all tasks are in the queue, send a message to stop picking...
for _ in range(nprocs):
task_queue.put('STOP')
# start workers
procs = []
for _ in range(nprocs):
p = Process(target=aux.worker, args=(task_queue, result_queue))
p.start()
procs.append(p)
for p in procs:
p.join()
# print what's in the result queue
while not result_queue.empty():
print(result_queue.get())
```
The imported modules are
**auxiliaries.py**
```py
from multiprocessing import current_process
def calculate(func, args):
"""
Calculates a certain function for a list of arguments. Returns a string with the result.
Arguments:
- func (string): function name
- args (list): list of arguments
"""
result = func(*args)
string = current_process().name
string = string + " says " + func.__name__ + str(args)
string = string + " = " + str(result)
return string
def worker(inputQueue, outputQueue):
"""
Picks up work from the inputQueue and outputs result to outputQueue.
Inputs:
- inputQueue (multiprocessing.Queue)
- outputQueue (multiprocessing.Queue)
"""
for func, args in iter(inputQueue.get, 'STOP'):
result = calculate(func, args)
outputQueue.put(result)
```
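The `for func, args in iter(inputQueue.get, 'STOP')` loop above uses the two-argument form of `iter`: it calls `inputQueue.get()` repeatedly and stops as soon as the returned value equals the sentinel. A quick illustration with a plain (non-multiprocessing) queue:

```python
from queue import Queue

q = Queue()
for item in [('f', (1,)), ('g', (2,)), 'STOP', ('h', (3,))]:
    q.put(item)

# iter(callable, sentinel) keeps calling q.get() until it returns 'STOP';
# anything queued after the sentinel is never consumed
consumed = list(iter(q.get, 'STOP'))
print(consumed)  # [('f', (1,)), ('g', (2,))]
```

This is why the main script puts one `'STOP'` per worker: each sentinel terminates exactly one consumer loop.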
and
**functions.py**
```py
import numpy as np
def get_stats_from_uniform_dist(nDraws, seed):
"""
Calculates average and standard deviation of nDraws from NumPy's random.rand().
Arguments:
- nDraws (int): number of elements to draw
- seed (int): random number generator's seed
Returns:
- results (list): [average, std]
"""
np.random.seed(seed)
x = np.random.rand(nDraws)
return [x.mean(), x.std()]
```
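As a sanity check on what that helper should return: draws from a uniform [0, 1) distribution have mean 0.5 and standard deviation 1/√12 ≈ 0.289. The same check using only the stdlib (`random` instead of NumPy, an assumption made here just to keep it dependency-free):

```python
import random
import statistics

random.seed(0)
draws = [random.random() for _ in range(100_000)]
avg = statistics.fmean(draws)
std = statistics.pstdev(draws)
print(round(avg, 2), round(std, 2))  # close to 0.5 and 0.29
```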
This is entirely based on <https://docs.python.org/3/library/multiprocessing.html#multiprocessing-examples>
Everything runs okay with up to ~500 tasks. After that, the code hangs. It looks like one of the processes never finishes, so it gets stuck when I join them.
It does not look like the queues are getting full. I suspect that one of the processes is not finding the "STOP" entry in the `task_queue`, so it keeps trying to `.get()` forever, but I can't understand how and why that would happen. Any ideas on what could be causing the lock? Thanks! | 2022/08/03 | [
"https://Stackoverflow.com/questions/73225062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19102984/"
] | ```
import pandas as pd
import numpy as np
rng = np.random.default_rng(92)
df = pd.DataFrame({'a':rng.integers(0,5, 10),
'b':rng.integers(0,5, 10),
'c':rng.integers(0,5, 10)})
df
###
a b c
0 2 3 1
1 3 4 0
2 4 1 1
3 0 0 1
4 2 3 3
5 1 0 2
6 2 2 2
7 1 3 2
8 3 0 3
9 0 0 2
```
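The next block filters out the zeros before rolling; the key detail is that filtering keeps the original index, so assigning the result back to the frame aligns on that index and leaves `NaN` on the excluded rows. In miniature (pandas assumed available):

```python
import pandas as pd

mini = pd.DataFrame({'a': [2, 3, 4, 0, 2]})
# rows where a == 0 drop out of the rolling window entirely,
# then reappear as NaN when the result is aligned back
mini['roll_a'] = mini[mini['a'] != 0]['a'].rolling(window=3).mean()
print(mini['roll_a'].tolist())  # [nan, nan, 3.0, nan, 3.0]
```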
```
df['rollMeanColumn_a'] = df[df['a'] != 0]['a'].rolling(window=3).mean()
df['rollMeanColumn_b'] = df['b'].replace(0,np.nan).dropna().rolling(window=3).mean()
df['rollMeanColumn_c'] = df.query('c != 0')['c'].rolling(3).mean()
df
###
a b c rollMeanColumn_a rollMeanColumn_b rollMeanColumn_c
0 2 3 1 NaN NaN NaN
1 3 4 0 NaN NaN NaN
2 4 1 1 3.000000 2.666667 NaN
3 0 0 1 NaN NaN 1.000000
4 2 3 3 3.000000 2.666667 1.666667
5 1 0 2 2.333333 NaN 2.000000
6 2 2 2 1.666667 2.000000 2.333333
7 1 3 2 1.333333 2.666667 2.000000
8 3 0 3 2.000000 NaN 2.333333
9 0 0 2 NaN NaN 2.333333
``` | Here is one way to do it. If you had posted the data to reproduce, I would have posted the result set.
```
window=5
df[df['Column']!=0]['Column'].rolling(window).mean()
``` | 1,943 |
60,155,158 | I'm using selenium in python and trying to click an element that is not a button class. I'm using Google Chrome as my browser/web driver
Here is my code:
```
from selenium import webdriver
from bs4 import BeautifulSoup
driver = webdriver.Chrome(executable_path="/Users/ep9k/Desktop/SeleniumTest/drivers/chromedriver")
driver.get('http://tax.watgov.org/WataugaNC/search/commonsearch.aspx?mode=address')
driver.find_element_by_name('btAgree').click() #clicks 'Agree' button to agree to site's terms
driver.find_element_by_name('inpNumber').send_keys('190')
driver.find_element_by_name('inpStreet').send_keys('ELI HARTLEY')
driver.find_element_by_name('btSearch').click()
```
This takes me to this page:
[![enter image description here](https://i.stack.imgur.com/9r5Gs.jpg)](https://i.stack.imgur.com/9r5Gs.jpg)
I can parse the results HTML (with Beautiful Soup for example), but I want to Click on them. If I inspect the first row of elements, I see this is kept in a div element, with a style of "margin-left:3px;".
But this is not a button element, so the normal click() function does not work. Is there a way to click on this?
For example, If I click on the first row of results, I am taken to this page with more information (which is what I really want):
[![enter image description here](https://i.stack.imgur.com/1SiKK.jpg)](https://i.stack.imgur.com/1SiKK.jpg) | 2020/02/10 | [
"https://Stackoverflow.com/questions/60155158",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9305645/"
] | The element doesn't need to be a button to be clickable.
after I ran your code, I've added:
```py
results = driver.find_elements_by_class_name('SearchResults')
first_result = results[0]
first_result.click()
```
And it worked perfectly fine for me.
Most probably you tried to click on some different element and that's why it didn't work
EDIT:
Just to be more precise, most probably you tried to click on a div element inside `<tr>` tag.
while the `<tr>` tag contains `javascript:selectSearchRow('../Datalets/Datalet.aspx?sIndex=1&idx=1')`, so your script should click this tag, not the `<div>`. | Clicking the first row with XPath - see below.
Assuming you want to parse each of the results (parcels) after that, make use of the navigation buttons; this is a structure you could use:
```
table = driver.find_elements_by_xpath("//table[@id='searchResults']")
table[0].click()
# Extract the total number of parcels from string e.g. "1 of 24"
string=driver.find_element_by_xpath("//input[@name='DTLNavigator$txtFromTo']").get_attribute('value')
# split string in separate words; last word i.e. [-1] is the total number of parcels e.g. "24"
total_parcels=string.split(' ')[-1]
for record in range(int(total_parcels)):
# >>> parse record here <<<
driver.find_element_by_xpath("//input[@name='DTLNavigator$imageNext']").click()
time.sleep(0.5) # be considerate to your source and don't load their server with numerous quick requests
``` | 1,945 |
34,645,978 | I am new to python and would like to have a script that looks at a feature class and compares the values in two text fields and then populates a third field with a `Y` or `N` depending on if the values are the same or not. I think I need to use an UpdateCursor with an if statement. I have tried the following but I get a syntax error when I try to run it. I am using ArcGIS 10.1 and know that the daCursor is better but I am just trying to wrap my head around cursors and thought I would try and keep it simple for now.
```
#import system modules
import arcpy
from arcpy import env
import os
import sys
#set environment settings
working_fc = sys.argv[1]
working_gdb = os.path.split(working_fc)[0]
#use an update cursor to populate the field BEC_UPDATED based on the result of a query
#query = ("SELECT * FROM working_fc" "WHERE [BEC_LABEL] = [BEC_V9]")
#if the query is true, then BEC_UPDATED should be popluated with "N"
#if the query is false, then BEC_UPDATED should be populated with "Y"
rows = arcpy.UpdateCursor (working_fc)
for row in rows:
if row.getValue("BEC_LABEL") == row.getValue("BEC_V9")
row.BEC_UPDATED = "N"
else
row.BEC_UPDATED = "Y"
rows.updateRow(row)
print "BEC_UPDATED field populated"
``` | 2016/01/07 | [
"https://Stackoverflow.com/questions/34645978",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5755063/"
] | Try splitting up the for loop that goes through each item and the actual `get_multi` call itself. So something like:
```
all_values = ndb.get_multi(all_keys)
for counter in all_values:
# Insert amazeballs codes here
```
I have a feeling it's one of these:
1. The generator pattern (yield from for loop) is causing something funky with `get_multi` execution paths
2. Perhaps the number of items you are expecting doesn't match actual result counts, which could reveal a problem with `GeneralCounterShardConfig.all_keys(name)`
3. The number of shards is set too high. I've realized that anything over 10 shards causes performance issues. | When I've dug into similar issues, one thing I've learned is that `get_multi` can cause multiple RPCs to be sent from your application. It looks like the default in the SDK is set to 1000 keys per get, but the batch size I've observed in production apps is much smaller: something more like 10 (going from memory).
I suspect the reason it does this is that at some batch size, it actually is better to use multiple RPCs: there is more RPC overhead for your app, but there is more Datastore parallelism. In other words: this is still probably the best way to read a lot of datastore objects.
However, if you don't need to read the absolute most current value, you can try setting the `db.EVENTUAL_CONSISTENCY` option, but that seems to only be available in the older `db` library and not in `ndb`. (Although it also appears to be available via the [Cloud Datastore API](https://cloud.google.com/datastore/docs/reference/rpc/google.datastore.v1#google.datastore.v1.LookupRequest)).
**Details**
If you look at the Python code in the App Engine SDK, specifically the file `google/appengine/datastore/datastore_rpc.py`, you will see the following lines:
```
max_count = (Configuration.max_get_keys(config, self.__config) or
self.MAX_GET_KEYS)
...
if is_read_current and txn is None:
max_egs_per_rpc = self.__get_max_entity_groups_per_rpc(config)
else:
max_egs_per_rpc = None
...
pbsgen = self._generate_pb_lists(indexed_keys_by_entity_group,
base_req.ByteSize(), max_count,
max_egs_per_rpc, config)
rpcs = []
for pbs, indexes in pbsgen:
rpcs.append(make_get_call(base_req, pbs,
self.__create_result_index_pairs(indexes)))
```
My understanding of this:
* Set `max_count` from the configuration object, or `1000` as a default
* If the request must read the current value, set `max_egs_per_rpc` from the configuration, or `10` as a default
* Split the input keys into individual RPCs, using both `max_count` and `max_egs_per_rpc` as limits.
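In other words, before the calls go out, the key list is sliced into batches no larger than those limits; conceptually (a simplification, not the SDK's actual code):

```python
def batch_keys(keys, max_count):
    # split the key list into consecutive chunks of at most max_count
    return [keys[i:i + max_count] for i in range(0, len(keys), max_count)]

batches = batch_keys(list(range(25)), 10)
print([len(b) for b in batches])  # [10, 10, 5]
```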
So, this is being done by the Python Datastore library. | 1,947 |
61,135,030 | I'm running a few tasks on the same terminal in bash. Is there a way I can stream all the logs I'm seeing on the bash terminal to a log file? I know can technically pipe the logs of individual tasks, but wondering if there's a more elegant way. So far this is what I'm doing:
```
$> python background1.py > logs/bg1.log & \
python background2.py > logs/bg2.log & \
python foreground.py | tee logs/fg.log
```
Is there a way I can somehow capture everything together? (somewhat similar to how CI/CD tools show all of the terminal output in the browser). | 2020/04/10 | [
"https://Stackoverflow.com/questions/61135030",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/499363/"
] | Sorry I didn't understand your question :D
For this case you can use an input to specify what you need:
```
choice = int(input("What Type Of Def You Want To Use? "))
```
And then you can put an `if` on your selection (renamed from `type` to `choice` so the builtin `type` isn't shadowed):
```
if choice == 1:
    command.a(args)
elif choice == 2:
    command.b(args)
elif choice == 3:
    command.c(args)
else:
    print("Invalid Command.. Use:[1,2,3]")
```
Hope it works for you this time :D | You can use `input`:
```
name = input ("What's your name")
print ("Hello, ", name )
```
If you're writing a command line tool, it's very doable with the [click package](https://click.palletsprojects.com/en/7.x/). See their Hello World example:
```
import click
@click.command()
@click.option('--count', default=1, help='Number of greetings.')
@click.option('--name', prompt='Your name',
help='The person to greet.')
def hello(count, name):
"""Simple program that greets NAME for a total of COUNT times."""
for x in range(count):
click.echo('Hello %s!' % name)
if __name__ == '__main__':
hello()
```
Running `python hello.py --count=3` will give you the output below:
```
Your name: John
Hello John!
Hello John!
Hello John!
``` | 1,948 |
69,555,581 | This might be heavily related to similar questions such as [Python 3.3: Split string and create all combinations](https://stackoverflow.com/questions/22911367/python-3-3-split-string-and-create-all-combinations/22911505), but I can't infer a pythonic solution from them.
Question is:
Let there be a str such as `'hi|guys|whats|app'`, and I need all permutations of splitting that str by a separator. Example:
```
#splitting only once
['hi','guys|whats|app']
['hi|guys','whats|app']
['hi|guys|whats','app']
#splitting only twice
['hi','guys','whats|app']
['hi','guys|whats','app']
#splitting only three times
...
etc
```
I could write a backtracking algorithm, but does python (itertools, e.g.) offer a library that simplifies this algorithm?
Thanks in advance!! | 2021/10/13 | [
"https://Stackoverflow.com/questions/69555581",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6042172/"
] | An approach, once you have split the string, is to use `itertools.combinations` to define the split points in the list; the other positions should be fused again.
```
def lst_merge(lst, positions, sep='|'):
'''merges a list on points other than positions'''
'''A, B, C, D and 0, 1 -> A, B, C|D'''
a = -1
out = []
for b in list(positions)+[len(lst)-1]:
out.append('|'.join(lst[a+1:b+1]))
a = b
return out
def split_comb(s, split=1, sep='|'):
from itertools import combinations
l = s.split(sep)
return [lst_merge(l, pos, sep=sep)
for pos in combinations(range(len(l)-1), split)]
```
#### examples
```
>>> split_comb('hi|guys|whats|app', 0)
[['hi|guys|whats|app']]
>>> split_comb('hi|guys|whats|app', 1)
[['hi', 'guys|whats|app'],
['hi|guys', 'whats|app'],
['hi|guys|whats', 'app']]
>>> split_comb('hi|guys|whats|app', 2)
[['hi', 'guys', 'whats|app'],
['hi', 'guys|whats', 'app'],
['hi|guys', 'whats', 'app']]
>>> split_comb('hi|guys|whats|app', 3)
[['hi', 'guys', 'whats', 'app']]
>>> split_comb('hi|guys|whats|app', 4)
[] ## impossible
```
#### rationale
```
ABCD -> A B C D
0 1 2
combinations of split points: 0/1 or 0/2 or 1/2
0/1 -> merge on 2 -> A B CD
0/2 -> merge on 1 -> A BC D
1/2 -> merge on 0 -> AB C D
```
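The split points in that rationale come straight from `itertools.combinations` over the gap indices between words, e.g. choosing 2 of the 3 gaps:

```python
from itertools import combinations

# 3 gaps between 4 words; every pair of gaps is one way to split twice
pairs = list(combinations(range(3), 2))
print(pairs)  # [(0, 1), (0, 2), (1, 2)]
```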
#### generic function
Here is a generic version, working like above but also taking `-1` as parameter for `split`, in which case it will output all combinations
```
def lst_merge(lst, positions, sep='|'):
a = -1
out = []
for b in list(positions)+[len(lst)-1]:
out.append('|'.join(lst[a+1:b+1]))
a = b
return out
def split_comb(s, split=1, sep='|'):
from itertools import combinations, chain
l = s.split(sep)
if split == -1:
pos = chain.from_iterable(combinations(range(len(l)-1), r)
for r in range(len(l)+1))
else:
pos = combinations(range(len(l)-1), split)
return [lst_merge(l, pos, sep=sep)
for pos in pos]
```
example:
```
>>> split_comb('hi|guys|whats|app', -1)
[['hi|guys|whats|app'],
['hi', 'guys|whats|app'],
['hi|guys', 'whats|app'],
['hi|guys|whats', 'app'],
['hi', 'guys', 'whats|app'],
['hi', 'guys|whats', 'app'],
['hi|guys', 'whats', 'app'],
['hi', 'guys', 'whats', 'app']]
``` | One approach using [`combinations`](https://docs.python.org/3/library/itertools.html#itertools.combinations) and [`chain`](https://docs.python.org/3/library/itertools.html#itertools.chain)
```
from itertools import combinations, chain
def partition(alist, indices):
# https://stackoverflow.com/a/1198876/4001592
pairs = zip(chain([0], indices), chain(indices, [None]))
return (alist[i:j] for i, j in pairs)
s = 'hi|guys|whats|app'
delimiter_count = s.count("|")
splits = s.split("|")
for i in range(1, delimiter_count + 1):
print("split", i)
for combination in combinations(range(1, delimiter_count + 1), i):
res = ["|".join(part) for part in partition(splits, combination)]
print(res)
```
**Output**
```
split 1
['hi', 'guys|whats|app']
['hi|guys', 'whats|app']
['hi|guys|whats', 'app']
split 2
['hi', 'guys', 'whats|app']
['hi', 'guys|whats', 'app']
['hi|guys', 'whats', 'app']
split 3
['hi', 'guys', 'whats', 'app']
```
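Taken on its own, the `partition` helper pairs a zero-prefixed copy of the indices with a `None`-suffixed copy to get slice bounds, so each index becomes a cut point:

```python
from itertools import chain

def partition(alist, indices):
    # e.g. indices (1, 3) -> slice pairs (0, 1), (1, 3), (3, None)
    pairs = zip(chain([0], indices), chain(indices, [None]))
    return (alist[i:j] for i, j in pairs)

parts = list(partition(['hi', 'guys', 'whats', 'app'], (1, 3)))
print(parts)  # [['hi'], ['guys', 'whats'], ['app']]
```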
The idea is to generate all the ways to pick (or remove) a delimiter 1, 2, 3 times and generate the partitions from there. | 1,949 |
64,163,749 | I have an asyncio crawler that visits URLs and collects new URLs from HTML responses. I was inspired by this great tool: <https://github.com/aio-libs/aiohttp/blob/master/examples/legacy/crawl.py>
Here is a very simplified piece of the workflow, showing how it works:
```
import asyncio
import aiohttp
class Requester:
def __init__(self):
self.sem = asyncio.BoundedSemaphore(1)
async def fetch(self, url, client):
async with client.get(url) as response:
data = (await response.read()).decode('utf-8', 'replace')
print("URL:", url, " have code:", response.status)
return response, data
async def run(self, urls):
async with aiohttp.ClientSession() as client:
for url in urls:
await self.sem.acquire()
task = asyncio.create_task(self.fetch(url, client))
task.add_done_callback(lambda t: self.sem.release())
def http_crawl(self, _urls_list):
loop = asyncio.get_event_loop()
crawl_loop = asyncio.ensure_future(self.run(_urls_list))
loop.run_until_complete(crawl_loop)
r = Requester()
_url_list = ['https://www.google.com','https://images.google.com','https://maps.google.com','https://mail.google.com','https://news.google.com','https://video.google.com','https://books.google.com']
r.http_crawl(_url_list)
```
What I need now is to add a very slow BeautifulSoup-based function. I need that function to not block the main loop and to work as a background process. For instance, it will handle HTTP responses.
I read python docs about it and found that: <https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor>
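Stripped down, the pattern from those docs looks like this (a minimal sketch using the default thread pool via `None`; a `ProcessPoolExecutor` instance goes in the same argument slot for CPU-bound work):

```python
import asyncio

def blocking_work(n):
    # stand-in for a slow, blocking function
    return sum(i * i for i in range(n))

async def main():
    loop = asyncio.get_running_loop()
    # offload the blocking call so the event loop stays responsive
    result = await loop.run_in_executor(None, blocking_work, 10)
    return result

print(asyncio.run(main()))  # 285
```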
I tried to add it to my code, but it does not work as it should (I use `cpu_bound` only for demo):
```
import asyncio
import aiohttp
import concurrent.futures
def cpu_bound():
return sum(i * i for i in range(10 ** 7))
class Requester:
def __init__(self):
self.sem = asyncio.BoundedSemaphore(1)
async def fetch(self, url, client):
async with client.get(url) as response:
data = (await response.read()).decode('utf-8', 'replace')
print("URL:", url, " have code:", response.status)
####### Blocking operation #######
loop = asyncio.get_running_loop()
with concurrent.futures.ProcessPoolExecutor() as pool:
result = await loop.run_in_executor(pool, cpu_bound)
print('custom process pool', result)
#################################
return response, data
async def run(self, urls):
async with aiohttp.ClientSession() as client:
for url in urls:
await self.sem.acquire()
task = asyncio.create_task(self.fetch(url, client))
task.add_done_callback(lambda t: self.sem.release())
def http_crawl(self, _urls_list):
loop = asyncio.get_event_loop()
crawl_loop = asyncio.ensure_future(self.run(_urls_list))
loop.run_until_complete(crawl_loop)
r = Requester()
_url_list = ['https://www.google.com','https://images.google.com','https://maps.google.com','https://mail.google.com','https://news.google.com','https://video.google.com','https://books.google.com']
r.http_crawl(_url_list)
```
For now, it doesn't work as expected, it blocks HTTP requests every time:
```
URL: https://www.google.com have code: 200
custom process pool 333333283333335000000
URL: https://images.google.com have code: 200
custom process pool 333333283333335000000
URL: https://maps.google.com have code: 200
custom process pool 333333283333335000000
URL: https://mail.google.com have code: 200
custom process pool 333333283333335000000
URL: https://news.google.com have code: 200
custom process pool 333333283333335000000
URL: https://video.google.com have code: 200
custom process pool 333333283333335000000
```
How to correctly put the task in the background inside the main asyncio process?
Are there best practices on how to do that in a simple way, or I should use Redis for task planning? | 2020/10/01 | [
"https://Stackoverflow.com/questions/64163749",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14376515/"
] | I have created an example in Node.js that is based on the steps from my previous answer to this question.
The first action expects a valid apikey in `params.apikey` as input parameter and returns a bearer token:
```js
/**
*
* main() will be run when you invoke this action
*
* @param Cloud Functions actions accept a single parameter, which must be a JSON object.
*
* @return The output of this action, which must be a JSON object.
*
*/
function main(params) {
const axios = require('axios');
const querystring = require('querystring');
return axios.post('https://iam.cloud.ibm.com/cloudfoundry/login/us-south/oauth/token',
querystring.stringify({
grant_type: "password",
username: "apikey",
password: params.apikey
}), {
auth: {
username: 'cf'
}
})
.then(res => {
console.log(`statusCode: ${res.status}`);
console.log(res.data);
return {
token: res.data.access_token
};
})
.catch(error => {
console.error(error);
return {
error: error.message
};
})
}
```
The second action expects a valid bearer token in `params.token` and is then executing an API call against the IBM Cloud CF Public API. In this example a get request against /v2/organizations:
```js
/**
*
* main() will be run when you invoke this action
*
* @param Cloud Functions actions accept a single parameter, which must be a JSON object.
*
* @return The output of this action, which must be a JSON object.
*
*/
function main(params) {
const axios = require('axios');
axios.defaults.headers.common['Authorization'] = "bearer " + params.token;
const querystring = require('querystring');
return axios.get('https://api.us-south.cf.cloud.ibm.com/v2/organizations')
.then(res => {
console.log(`statusCode: ${res.status}`);
console.log(res.data);
return {
organizations: res.data.resources
};
})
.catch(error => {
console.error(error);
return {
error: error.message
};
})
}
```
Now you can put both actions into a sequence, so that the output from the first action (the bearer token) is used as token within the second action. | I can't guide you the full way right now, but I hope the information that I can provide will guide you into the right direction.
First you'll need to identify the authorization endpoint:
`curl http://api.us-south.cf.cloud.ibm.com/info`
With that and a valid IAM API token for your account you can get the bearer token that will work against the IBM Cloud CF Public API:
`curl -v -X POST "https://iam.cloud.ibm.com/cloudfoundry/login/us-south/oauth/token" -d "grant_type=password&scope=&username=apikey&password=<yourApiKey>" --user "cf:"`
Note that you need to append `/oauth/token` to the authorization endpoint that you received in step 1.
The response contains the access token that you need. For this example, just put it into an environment variable:
`export TOKEN=<yourAccessToken>`
Next try a command against the IBM Cloud CF Public API:
`curl "https://api.us-south.cf.cloud.ibm.com/v2/organizations" -X GET -H "Authorization: bearer $TOKEN"`
I hope once you have followed these steps in your command line, you will be able to do the same steps in your IBM Cloud Function and you'll reach your goal. | 1,953 |
53,098,413 | I am storing discount codes with different prefixes and unique digits at the end (`10OFF<abc>`, `25OFF<abc>`, `50OFF<abc>`, etc.) in a file, and then loading that file into a list.
I am trying to make a function so that when they are redeemed, they are removed from the list, and the file is overwritten. Right now what I am doing looks like this:
```
for x in range(0, 5):
total += codes[0] + '\n'
codes.remove(codes[0])
with open('codes.txt', 'w') as f:
for code in codes:
f.write(code+'\n')
```
For one thing, I don't think this is a very pythonic way of doing things, and it feels dirty. And for another, there's not really a way for me to specify which discount code to select and remove - doing it this way I would have to make separate files for the `10OFF`, `25OFF`, and `50OFF` codes.
Does anyone have any suggestions? | 2018/11/01 | [
"https://Stackoverflow.com/questions/53098413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10142229/"
] | this should do what you want
```
with open('codes.txt', 'rt') as f:
    list_of_codes = f.read().splitlines()
while True:
    code = input('enter code to remove: ')
    if code in list_of_codes:
        break
    print('code you entered is not in the list')
list_of_codes.remove(code)
with open('codes.txt', 'wt') as f:
    for item in list_of_codes:
        f.write(item + '\n')
``` | This code prompts the user to input all the codes they wish to remove; it then reads the current file and overwrites it with only the codes that were NOT input by the user. The file treats the whole content of a line as a code (must contain prefix + unique digits).
The code also leaves the old file as a backup, so you can inspect the changes afterwards.
```
import datetime
import shutil
def remove(base_file_name, codes_to_remove):
now_ = datetime.datetime.now()
current_file_name = '{}.txt'.format(base_file_name)
backup_file_name = '{}-{}.txt'.format(base_file_name, now_.strftime('%Y%m%d-%H%M%S'))
# copy current file to the new name, which will also be kept as backup
shutil.copy2(current_file_name, backup_file_name)
with open(backup_file_name, 'r') as fr, open(current_file_name, 'w') as fw:
for line in fr:
line = line.strip()
if len(line) > 0:
# 'line' will be an individual code
if line not in codes_to_remove:
fw.write('{}\n'.format(line))
if __name__ == '__main__':
code_list = input('Enter codes to remove (separated by spaces): ').split()
remove('my-codes', code_list)
```
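Stripped of the timestamped backup and the file handling, the core of `remove` above is just a membership filter over the lines:

```python
codes = ['25OFF123456', '50OFF111112', '50OFF111113']
to_remove = {'50OFF111112', '50OFF111114'}  # the second one simply has no effect

kept = [c for c in codes if c not in to_remove]
print(kept)  # ['25OFF123456', '50OFF111113']
```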
Suppose the file **my-codes.txt** contains the following lines:
```
25OFF123456
25OFF123457
25OFF123458
50OFF111112
50OFF111113
```
When you run this code, and when prompted you input a few codes:
```
Enter codes to remove (separated by spaces): 50OFF111112 50OFF111114
Process finished with exit code 0
```
Then afterwards the file **my-codes.txt** will contain one less code (the second code from the prompt `50OFF111114` does not exist in the file and will have no effect):
```
25OFF123456
25OFF123457
25OFF123458
50OFF111113
```
You also will be left with a new file called **my-codes-20181101-120102.txt** which will contain the original 5 codes from before running the script.
---
Notes:
* if you have thousands of codes and are looking for more speed, then you should look into using databases. But for only a few hunderd codes this should sufice.
* Your promo codes can be of any format, as long as the file contains a single code per line, and you have to input the code into the prompt exactly like it is in the file. Of course, you can modify this script to fit your specific use case. | 1,954 |
40,703,228 | I am trying to run a Flask REST service on CentOS Apache2 using WSGI. The REST service requires a very small storage. So i decided to use SQLite with `sqlite3` python package. The whole application worked perfectly well on my local system and on the CentOS server when ran using `app.run()`. But when i used WSGI to host the application on Apache, i am getting
```
OperationalError: attempt to write a readonly database
```
I have checked the permissions of the file. The user and group of the file are set to apache (under which the server is running) using `chown` and`chgrp`. Also, the file has `rwx` permission. Still i am getting read-only database error. Following is what i get by running `ls -al` on the db file:
```
-rwxrwxrwx. 1 apache apache 8192 Nov 19 01:39 dbfile.db
```
My Apache Configuration:
```
<VirtualHost *>
ServerName wlc.host.com
WSGIDaemonProcess wlcd
WSGIScriptAlias / /var/www/html/wlcd.wsgi
RewriteEngine on
RewriteCond %{HTTP:Authorization} ^(.*)
RewriteRule .* - [e=HTTP_AUTHORIZATION:%1]
<Directory /var/www/html/>
WSGIProcessGroup wlcd
WSGIApplicationGroup %{GLOBAL}
Order deny,allow
Allow from all
Require all granted
</Directory>
``` | 2016/11/20 | [
"https://Stackoverflow.com/questions/40703228",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6935236/"
] | In addition to changing the database file permissions, you also need to change the permissions of the directory that hosts the database file. You can try the following command:
```
chmod 775 /path/to/your/directory/
```
You can also change the directory's owner as follows:
```
chown apache:apache /path/to/your/directory/
``` | What worked for me (I don't have sudo) was removing the database file and all migrations and starting again, as described here: [How do I delete DB (sqlite3) in Django 1.9 to start from scratch?](https://stackoverflow.com/questions/42150499/how-do-i-delete-db-sqlite3-in-django-1-9-to-start-from-scratch/42150639) | 1,956 |
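One more thing worth checking on the directory point above: SQLite creates its journal file next to the database when writing, so the *directory* needs write and execute permission for the apache user, not just the file itself. A small diagnostic sketch (the temp-dir setup is only there to make the example self-contained; point `db_path` at your real file):

```python
import os
import sqlite3
import tempfile

def sqlite_write_checks(db_path):
    """Return which of the write permissions SQLite needs are present."""
    directory = os.path.dirname(os.path.abspath(db_path))
    return {
        'file_writable': os.access(db_path, os.W_OK),
        # journal/-wal files are created beside the db, so the directory
        # must be writable and traversable too
        'dir_writable': os.access(directory, os.W_OK | os.X_OK),
    }

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, 'dbfile.db')
    sqlite3.connect(path).close()  # creates an empty database file
    checks = sqlite_write_checks(path)
    print(checks)  # both True in a writable temp dir
```

If `dir_writable` comes back `False` for the apache user, that alone reproduces the "attempt to write a readonly database" error even with `rwx` on the file.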
13,984,423 | I am very new to python, this is my first program that I am trying.
This function reads the password from the standard input.
```
def getPassword() :
passwordArray =[]
while 1:
char = sys.stdin.read(1)
if char == '\\n':
break
passwordArray.append(char)
return passwordArray
print (username)
print (URL)
```
getting this error:
```
Problem invoking WLST - Traceback (innermost last):
(no code object) at line 0
File "/scratch/aime/work/stmp/wlstCommand.py", line 10
while 1:
^
SyntaxError: invalid syntax
``` | 2012/12/21 | [
"https://Stackoverflow.com/questions/13984423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1731553/"
] | Your indentation is not correct. Your `while` should be indented the same as the line above it. | Python uses indentation to "separate" stuff, and the thing with that is you need to have the same kind of indentation across the file. Having a fixed kind of indentation in the code you write is good practice. You might want to consider a tab or four spaces (the latter being the suggestion in the PEP 8 style guide). | 1,959 |
43,380,783 | I wrote a MoviePy script that takes an input video, does some processing, and outputs a video file. I want to run this through an entire folder of videos. Any help or direction is appreciated.
Here's what I tried...
```
for f in *; do python resize.py $f; done
```
and resize.py source code here:
```
from moviepy.editor import *
clip = VideoFileClip(input)
clip1 = clip.rotate(270)
clip2 = clip1.crop(x_center=540,y_center=960,width=1080,height=608)
clip3 = clip2.resize(width=1920)
clip3.write_videofile(output,codec='libx264')
```
Really wasn't sure what to put for "input" and "output" in my .py file.
Thanks,
Evan | 2017/04/12 | [
"https://Stackoverflow.com/questions/43380783",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7859068/"
] | I know you have an answer [on Github](https://github.com/Zulko/moviepy/issues/542#issuecomment-293735347), but I'll add my own solution.
First, you'll want to put your code inside a function:
```
def process_video(input, output):
    """Parameters input and output should be strings with full paths for videos"""
    clip = VideoFileClip(input)
    clip1 = clip.rotate(270)
    clip2 = clip1.crop(x_center=540, y_center=960, width=1080, height=608)
    clip3 = clip2.resize(width=1920)
    clip3.write_videofile(output, codec='libx264')
```
Then, you can have a function that returns a list of file paths, and a list of final file names to use with the above function (note that the final file names will be the same as the original file names but with "output" in front):
```
import os
def get_video_paths(folder_path):
"""
Parameter folder_path should look like "Users/documents/folder1/"
Returns a list of complete paths
"""
file_name_list = os.listdir(folder_path)
path_name_list = []
final_name_list = []
for name in file_name_list:
# Put any sanity checks here, e.g.:
if name == ".DS_Store":
pass
else:
path_name_list.append(folder_path + name)
# Change the format of the output file names below
final_name_list.append(folder_path + "output" + name)
return path_name_list, final_name_list
```
Finally, at the bottom, we get the input folder, and utilise the above two functions:
```
if __name__ == "__main__":
video_folder = input("What folder would you like to process? ")
path_list, final_name_list = get_video_paths(video_folder)
for path, name in zip(path_list, final_name_list):
process_video(path, name)
print("Finished")
```
Just watch out, because this will crash if there are any files in the folder that can't be read as a movie. For instance, on mac, the OS puts a ".DS\_Store" file in each folder, which will crash the program. I've put an area for a sanity check to ignore certain filenames.
Complete code:
```
import os
from moviepy.editor import *
def process_video(input, output):
"""Parameter input should be a string with the full path for a video"""
clip = VideoFileClip(input)
clip1 = clip.rotate(270)
clip2 = clip1.crop(x_center=540,y_center=960,width=1080,height=608)
clip3 = clip2.resize(width=1920)
clip3.write_videofile(output,codec='libx264')
def get_video_paths(folder_path):
"""
Parameter folder_path should look like "Users/documents/folder1/"
Returns a list of complete paths
"""
file_name_list = os.listdir(folder_path)
path_name_list = []
final_name_list = []
for name in file_name_list:
# Put any sanity checks here, e.g.:
if name == ".DS_Store":
pass
else:
path_name_list.append(folder_path + name)
final_name_list.append(folder_path + "output" + name)
return path_name_list, final_name_list
if __name__ == "__main__":
video_folder = input("What folder would you like to process? ")
path_list, final_name_list = get_video_paths(video_folder)
for path, name in zip(path_list, final_name_list):
process_video(path, name)
print("Finished")
``` | I responded on your [Github issue #542](https://github.com/Zulko/moviepy/issues/542#issuecomment-293843765), but I copied it here for future reference!
First off, the below example isn't ironclad, but it should do what you need.
You can achieve this via something like this:
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Convert all media assets located in a specified directory."""
import glob
import os
from optparse import OptionParser
from moviepy.editor import VideoFileClip
def get_dir_files(dir_path, patterns=None):
"""Get all absolute paths for pattern matched files in a directory.
Args:
dir_path (str): The path to of the directory containing media assets.
patterns (list of str): The list of patterns/file extensions to match.
Returns:
(list of str): A list of all pattern-matched files in a directory.
"""
if not patterns or type(patterns) != list:
print('No patterns list passed to get_dir_files, defaulting to patterns.')
patterns = ['*.mp4', '*.avi', '*.mov', '*.flv']
files = []
    for pattern in patterns:
        # build the glob pattern in a separate variable so dir_path
        # is not clobbered on each pass through the loop
        glob_path = os.path.join(os.path.abspath(dir_path), pattern)
        files.extend(glob.glob(glob_path))
return files
def modify_clip(path, output):
"""Handle conversion of a video file.
Args:
path (str): The path to the directory of video files to be converted.
output (str): The filename to associate with the converted file.
"""
clip = VideoFileClip(path)
clip = clip.rotate(270)
clip = clip.crop(x_center=540, y_center=960, width=1080, height=608)
clip = clip.resize(width=1920)
clip.write_videofile(output, codec='libx264')
print('File: {} should have been created.'.format(output))
if __name__ == '__main__':
status = 'Failed!'
parser = OptionParser(version='%prog 1.0.0')
parser.add_option('-p', '--path', action='store', dest='dir_path',
default='.', type='string',
help='the path of the directory of assets, defaults to .')
options, args = parser.parse_args()
print('Running against directory path: {}'.format(options.dir_path))
path_correct = raw_input('Is that correct?').lower()
if path_correct.startswith('y'):
dir_paths = get_dir_files(options.dir_path)
for dir_path in dir_paths:
output_filename = 'converted_' + os.path.basename(dir_path)
modify_clip(path=dir_path, output=output_filename)
status = 'Successful!'
print('Conversion {}'.format(status))
```
With the above example, you can simply drop that into the directory of assets you wish to convert and run: `python this_file.py` and it should convert the files for you in the same directory with the name prepended with: `converted_`
Likewise, you can drop that file anywhere and run it against an absolute path:
`python this_file.py -p /Users/thisguy/media` and it will convert all files with the extensions: `['*.mp4', '*.avi', '*.mov', '*.flv']`
Either way, let me know if you have any questions (or if this resolves your issue) and I'll do my best to help you out!
Thanks for using moviepy! | 1,961 |
37,124,342 | I am "using" `Statsmodel` for less than 2 days and am not at all familiar with the import commands etc. I want to run a simple `variance_inflation_factor` from [here](http://statsmodels.sourceforge.net/devel/generated/statsmodels.stats.outliers_influence.variance_inflation_factor.html) but am having some issues. My code follows:
```
from numpy import *
import numpy as np
import pandas as pd
from pandas import DataFrame, Series
import statsmodels.formula.api as sm
from sklearn.linear_model import LinearRegression
import scipy, scipy.stats
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')
from statsmodels.api import add_constant
from numpy import linalg as LA
import statsmodels as sm
## I have been adding libraries and modules/packages with the intention of erring on the side of caution
a = df1.years_exp
b = df1.leg_totalbills
c = df1.log_diff_rgdp
d = df1.unemployment
e = df1.expendituresfor
f = df1.direct_expenditures
g = df1.indirect_expenditures
sm.variance_inflation_factor((['a', 'b', 'c', 'd', 'e', 'f']), g)
```
then I get the following error:
```
AttributeError Traceback (most recent call last)
<ipython-input-61-bb126535eadd> in <module>()
----> 1 sm.variance_inflation_factor((['a', 'b', 'c', 'd', 'e', 'f']), g)
AttributeError: module 'statsmodels' has no attribute 'variance_inflation_factor'
```
Can someone direct me to the proper syntax for loading and executing this module? If it is more convenient that I post a link to some source code please ask. However, I have a feeling that this is just a simple syntax issue. | 2016/05/09 | [
"https://Stackoverflow.com/questions/37124342",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5211377/"
] | The function `variance_inflation_factor` lives in `statsmodels.stats.outliers_influence` as seen [in the docs](http://statsmodels.sourceforge.net/devel/_modules/statsmodels/stats/outliers_influence.html), so you must import it correctly; one option would be
```
from statsmodels.stats import outliers_influence
# code here
outliers_influence.variance_inflation_factor((['a', 'b', 'c', 'd', 'e', 'f']), g)
``` | ```
a = df1.years_exp
b = df1.leg_totalbills
c = df1.log_diff_rgdp
d = df1.unemployment
e = df1.expendituresfor
f = df1.direct_expenditures
g = df1.indirect_expenditures
ck=np.array([a,b,c,d,e,f,g])
outliers_influence.variance_inflation_factor(ck, 6)
``` | 1,962 |
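For reference, `variance_inflation_factor(exog, exog_idx)` expects a numeric design matrix plus a column index (not a list of column-name strings), and it computes 1/(1 − R²) from regressing that column on the others. A rough pure-numpy sketch of that idea, on made-up data:

```python
import numpy as np

def vif(exog, idx):
    # regress column idx on the remaining columns, then 1 / (1 - R^2)
    y = exog[:, idx]
    X = np.delete(exog, idx, axis=1)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r_squared = 1 - resid.var() / y.var()
    return 1.0 / (1.0 - r_squared)

const = np.ones(6)
a = np.array([1., 2., 3., 4., 5., 6.])
b = np.array([1., -1., 1., -1., 1., -1.])

exog = np.column_stack([const, a, b])
print(vif(exog, 2))   # close to 1: b is nearly independent of a

exog2 = np.column_stack([const, a, a + 0.001 * b])
print(vif(exog2, 2))  # huge: last column is almost collinear with a
```

The real statsmodels function does the R² step via OLS, so results can differ slightly in edge cases, but the interpretation is the same: values far above 1 signal collinearity.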
11,809,643 | I have some python code with many lines like this:
```
print "some text" + variables + "more text and special characters .. etc"
```
I want to modify this to put everything after print within brackets, like this:
```
print ("some text" + variables + "more text and special characters .. etc")
```
How to do this in vim using regex? | 2012/08/04 | [
"https://Stackoverflow.com/questions/11809643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1338814/"
] | Use this substitute:
```
%s/print \(.*$\)/print (\1)
```
`\(.*$\)` matches everything up to the end of the line and captures it in a group using the escaped parentheses. The replacement includes this group using `\1`, surrounded by literal parentheses. | ```
:%s/print \(.*\)/print(\1)/c
```
OR if you visually select multiple lines
```
:'<,'>s/print \(.*\)/print(\1)/c
```
`%` - every line
`'<,'>` - selected lines
`s` - substitute
`c` - confirm - show you what matched before you convert
`print \(.*\)` - exactly match print followed by a space then group everything between the `\(` and `\)`
`print(\1)` - replace with print(<first match>)
Vim has its own rules for regex syntax; you can do `:help substitute` or `:help regex` to see what they are. | 1,964 |
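As an aside, the same capture-group idea works outside vim with Python's `re.sub`, e.g. for batch-converting files from a script (a sketch; the pattern is as naive as the vim one and will not handle multi-line print statements):

```python
import re

line = 'print "some text" + variables + "more text"'
# group 1 keeps the indentation, group 2 keeps everything after "print "
converted = re.sub(r'^(\s*)print (.*)$', r'\1print(\2)', line)
print(converted)  # print("some text" + variables + "more text")
```

For a real Python 2 → 3 migration, `2to3 -f print` parses the code instead of pattern-matching it.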
20,553,695 | I'm fairly green in Python and trying to get django working to build a simple website. I've installed Django 1.6 under Python 2.7.6 but can't get django-admin to run. According to the tutorial I should create a project as follows, but I get a syntax error:
```
Python 2.7.6 (default, Nov 10 2013, 19:24:18) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import django
>>> import sys
>>> print(django.VERSION)
(1, 6, 0, 'final', 0)
>>> django-admin.py startproject Nutana
File "<stdin>", line 1
django-admin.py startproject Nutana
^
SyntaxError: invalid syntax
>>>
```
I've created a .pth file in the site-packages directory with this:
```
c:\python27\lib\site-packages\django
c:\python27\lib\site-packages\django\bin
```
but that doesn't help. I've tried it with relative paths as well, and with the slashes going the other way.
I've also tried straight from the command line:
```
Z:\Nutana GeophysicsXXX\Web_Django>python django-admin.py startproject Nutana
python: can't open file 'django-admin.py': [Errno 2] No such file or directory
```
Where have I gone wrong?? | 2013/12/12 | [
"https://Stackoverflow.com/questions/20553695",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1324833/"
] | ```
django-admin.py startproject Nutana
```
should be run in the command line, and not in the django shell.
If the second case is not working
1. If you are using a virtual-env, did you forget to activate it ?
2. Make sure you add `C:\Python27\Scripts` to the path, and you would not face this issue. | Try this `$ django-admin.py startproject mysite`
You don't need the python statement in front. | 1,965 |
26,721,113 | I have an equation 'a\*x+logx-b=0,(a and b are constants)', and I want to solve x. The problem is that I have numerous constants a(accordingly numerous b). How do I solve this equation by using python? | 2014/11/03 | [
"https://Stackoverflow.com/questions/26721113",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4211557/"
] | You could check out something like
<http://docs.scipy.org/doc/scipy-0.13.0/reference/optimize.nonlin.html>
which has tools specifically designed for these kinds of equations. | Cool - today I learned about Python's numerical solver.
```
from math import log
from scipy.optimize import brentq
def f(x, a, b):
return a * x + log(x) - b
for a in range(1,5):
for b in range(1,5):
result = brentq(lambda x:f(x, a, b), 1e-10, 20)
print a, b, result
```
`brentq` provides an estimate of where the function crosses the x-axis. You need to give it two points: one where the function is definitely negative and one where it is definitely positive. For the negative point, choose a number smaller than exp(-B), where B is the maximum value of `b`. For the positive point, choose a number bigger than B.
If you cannot predict the range of `b` values, you can use a solver instead. This will probably produce a solution, but that is not guaranteed.
```
from scipy.optimize import fsolve
for a in range(1,5):
for b in range(1,5):
result = fsolve(f, 1, (a,b))
print a, b, result
``` | 1,966 |
26,906,586 | **Background:**
I have an OpenShift Python 2.7 gear containing my Django 1.6 application. I used django-openshift-quickstart.git as a starting point for my own project and it works well.
However, if I have a syntax error in my code or some other exception I have no way of finding it. I can do a tail of the logs via:
```
rhc tail -a appname
```
However, this only shows me that a 500 error occurred. I never see any exceptions or details other than:
```
10.137.24.60, x.x.x.x - - [13/Nov/2014:17:12:27 -0500] "GET /snapper/snapshots HTTP/1.1" 500 27 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.104 Safari/537.36"
```
The client web browser reports:
```
Server Error (500)
```
I turned on the DEBUG setting (DEBUG = True) in my settings.py but that made no difference. I still see no exceptions in the logs or in the browser.
I believe the container (gear) is using haproxy + apache + mod\_wsgi + python2.7.
I'd dearly love to start getting Django exceptions reporting to my browser.
**Question:**
Why do I not see Django exceptions in my browser (or log files) under OpenShift when DEBUG is set to True ?
*I realise this appears similar to the existing question [How to debug Django exceptions in OpenShift applications](https://stackoverflow.com/questions/20586363/how-to-debug-django-exceptions-in-openshift-applications) but "rhc tail -a" simply displays the 500 error lines - I still see no Django exceptions.*
AtDhVaAnNkCsE
Doug | 2014/11/13 | [
"https://Stackoverflow.com/questions/26906586",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3538533/"
] | I know zilch about OpenShift, but
1. you may have to configure your loggers (<https://docs.djangoproject.com/en/1.6/topics/logging/#configuring-logging>) and
2. you have to restart the wsgi processes when you make some changes to your settings.
Now I strongly advise you to NOT set `DEBUG=True` on a production server - with a properly configured logger and SMTP server you should get all unhandled exceptions by mail.
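For point 1, a minimal `LOGGING` setting that mails unhandled request exceptions might look like this sketch (the handler classes and the `django.request` logger name are Django's own; the rest is illustrative, and `ADMINS` plus the `EMAIL_*` settings must be configured as well):

```python
# settings.py (sketch)
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',  # mails ADMINS
        },
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        # Django logs unhandled view exceptions to this logger
        'django.request': {
            'handlers': ['mail_admins', 'console'],
            'level': 'ERROR',
            'propagate': True,
        },
    },
}
```

With this in place you get full tracebacks by mail (or in the server log via the console handler) without ever enabling `DEBUG` in production.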
As a last point: if you have something like a syntax error or such, you may not even get to the point where Django can do any logging. In this case what info you can get is up to the server process itself. BUT there's no reason you should get a SyntaxError on a production server anyway, cause you shouldn't be editing code on production... | To my horror... it turns out I simply hadn't set DEBUG=True! I could have sworn I had set it in settings.py at some point but my commit history strongly suggests I'm wrong.
With DEBUG=True in my wsgi/settings.py I can now debug my application on OpenShift.
Apologies for the noise.
Doug | 1,967 |
63,802,423 | I have troubles checking the user token inside of middleware. I'm getting token from cookies and then I need to query database to check if this token exists and belongs to user that made a request.
**routing.py**
```
from channels.routing import ProtocolTypeRouter, URLRouter
import game.routing
from authentication.utils import TokenAuthMiddlewareStack
application = ProtocolTypeRouter({
# (http->django views is added by default)
'websocket': TokenAuthMiddlewareStack(
URLRouter(
game.routing.websocket_urlpatterns
)
),
})
```
**middleware.py**
```
from rest_framework.authentication import TokenAuthentication
from rest_framework.exceptions import AuthenticationFailed
from rest_auth.models import TokenModel
from channels.auth import AuthMiddlewareStack
from django.contrib.auth.models import AnonymousUser
from django.db import close_old_connections
...
class TokenAuthMiddleware:
"""
Token authorization middleware for Django Channels 2
"""
def __init__(self, inner):
self.inner = inner
def __call__(self, scope):
close_old_connections()
headers = dict(scope['headers'])
if b'Authorization' in headers[b'cookie']:
try:
cookie_str = headers[b'cookie'].decode('utf-8')
try: # no cookie Authorization=Token in the request
token_str = [x for x in cookie_str.split(';') if re.search(' Authorization=Token', x)][0].strip()
except IndexError:
scope['user'] = AnonymousUser()
return self.inner(scope)
token_name, token_key = token_str.replace('Authorization=', '').split()
if token_name == 'Token':
token = TokenModel.objects.get(key=token_key)
scope['user'] = token.user
except TokenModel.DoesNotExist:
scope['user'] = AnonymousUser()
return self.inner(scope)
TokenAuthMiddlewareStack = lambda inner: TokenAuthMiddleware(AuthMiddlewareStack(inner))
```
And this gives me
```
django.core.exceptions.SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async.
```
I also tried the following approaches
```
async def __call__(self, scope):
...
if token_name == 'Token':
token = await self.get_token(token_key)
scope['user'] = token.user
...
# approach 1
@sync_to_async
def get_token(self, token_key):
return TokenModel.objects.get(key=token_key)
# approach 2
@database_sync_to_async
def get_token(self, token_key):
return TokenModel.objects.get(key=token_key)
```
Those approaches give the following error
```
[Failure instance: Traceback: <class 'TypeError'>: 'coroutine' object is not callable
/Users/nikitatonkoshkur/Documents/work/svoya_igra/venv/lib/python3.8/site-packages/autobahn/websocket/protocol.py:2847:processHandshake
/Users/nikitatonkoshkur/Documents/work/svoya_igra/venv/lib/python3.8/site-packages/txaio/tx.py:366:as_future
/Users/nikitatonkoshkur/Documents/work/svoya_igra/venv/lib/python3.8/site-packages/twisted/internet/defer.py:151:maybeDeferred
/Users/nikitatonkoshkur/Documents/work/svoya_igra/venv/lib/python3.8/site-packages/daphne/ws_protocol.py:72:onConnect
--- <exception caught here> ---
/Users/nikitatonkoshkur/Documents/work/svoya_igra/venv/lib/python3.8/site-packages/twisted/internet/defer.py:151:maybeDeferred
/Users/nikitatonkoshkur/Documents/work/svoya_igra/venv/lib/python3.8/site-packages/daphne/server.py:206:create_application
]
``` | 2020/09/08 | [
"https://Stackoverflow.com/questions/63802423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9631956/"
] | Works on my machine.
Hard to say without knowing what the data looks like, so I took a stab below:
```js
const value = {
expenses: [{amount: 1}, {amount: 2}]
}
const totalExpense = value.expenses.length > 0 ? (
value.expenses.reduce((acc, curr) => {
acc += curr.amount
return acc
}, 0)) : 0;
console.log(totalExpense);
console.log(value.expenses);
```
You can also simplify it further with the following changes:
```js
const value = {
expenses: [{amount: 1}, {amount: 2}]
}
// no need to check length
const totalExpense = value.expenses.reduce((acc, curr) => acc + curr.amount, 0);
console.log(totalExpense);
console.log(value.expenses);
``` | ```
{value => {
const totalExpense = value.expenses.length > 0 ? (
value.expenses.reduce((acc, curr) => {
acc += parseInt(curr.amount)
return acc
}, 0)) : 0;
console.log(totalExpense);
console.log(value.expenses);
```
curr.amount wasn't coming across as an integer, so it needed to be parseInt'd | 1,968 |
48,688,693 | New to Django framework. Mostly reading through documentations.
But this one i am unable to crack.
Trying to add a URL to an headline, that will be forwarded to the 'headlines' post.
The Error:
>
> NoReverseMatch at / Reverse for 'assignment\_detail' with arguments
> '('',)' not found. 1 pattern(s) tried: ['assignment\_detail/'] Request
> Method: GET Request URL: <http://127.0.0.1:8000/> Django Version: 2.0.2
> Exception Type: NoReverseMatch Exception Value: Reverse for
> 'assignment\_detail' with arguments '('',)' not found. 1 pattern(s)
> tried: ['assignment\_detail/'] Exception
> Location: C:\Users\internit\Dropbox\Python\codepython\env\lib\site-packages\django\urls\resolvers.py
> in \_reverse\_with\_prefix, line 632 Python
> Executable: C:\Users\internit\Dropbox\Python\codepython\env\Scripts\python.exe
> Python Version: 3.6.2 Python Path:
>
> ['C:\Users\internit\Dropbox\Python\codepython\codepython',
> 'C:\Users\internit\Dropbox\Python\codepython\env\Scripts\python36.zip',
> 'C:\Users\internit\Dropbox\Python\codepython\env\DLLs',
> 'C:\Users\internit\Dropbox\Python\codepython\env\lib',
> 'C:\Users\internit\Dropbox\Python\codepython\env\Scripts',
> 'c:\program files (x86)\python36-32\Lib', 'c:\program files
> (x86)\python36-32\DLLs',
> 'C:\Users\internit\Dropbox\Python\codepython\env',
> 'C:\Users\internit\Dropbox\Python\codepython\env\lib\site-packages']
> Server time: Thu, 8 Feb 2018 14:53:07 +0000 Error during template
> rendering In template
> C:\Users\internit\Dropbox\Python\codepython\codepython\codepython\templates\base.html,
> error at line 0
>
>
> Reverse for 'assignment\_detail' with arguments '('',)' not found. 1
> pattern(s) tried: ['assignment\_detail/'] 1 {% load static %}
> 2 3 4 5 6 7 8 9 10 CODEPYTHON.NET
> Traceback Switch to copy-and-paste view
> C:\Users\internit\Dropbox\Python\codepython\env\lib\site-packages\django\core\handlers\exception.py
> in inner
> response = get\_response(request) ... βΆ Local vars C:\Users\internit\Dropbox\Python\codepython\env\lib\site-packages\django\core\handlers\base.py
> in \_get\_response
> response = self.process\_exception\_by\_middleware(e, request) ... βΆ Local vars
> C:\Users\internit\Dropbox\Python\codepython\env\lib\site-packages\django\core\handlers\base.py
> in \_get\_response
> response = wrapped\_callback(request, \*callback\_args, \*\*callback\_kwargs) ... βΆ Local vars C:\Users\internit\Dropbox\Python\codepython\codepython\home\views.py
> in home
> return render(request, 'home.html', {'post':post}) ... βΆ Local vars
>
>
>
home/urls.py
```
from django.conf.urls import url
from django.conf import settings
from django.conf.urls.static import static
from codepython.posts import views
from posts import views as ps
app_name ='home'
urlpatterns = [
url(r'^$/', views.create, name='create'),
url(r'(?P<pk>\d+)/$', views.home, name='home'),
url(r'(?P<pk>\d+)/$', views.userposts, name='userposts'),
url(r'^posts/(?P<post_id>[0-9]+)/$', ps.assignment_detail, name='assignment_detail'),
]+ static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
```
home/views.py
```
from django.shortcuts import render, get_object_or_404
from django.apps import apps
# Create your views here.
def home(request):
posts = apps.get_model("posts", "Post")
post = posts.objects.all().order_by('-pub_date')[0:6]
return render(request, 'home.html', {'post':post})
def assignment_detail(request, post_id):
posts = apps.get_model('posts', 'Post')
post = get_object_or_404(posts, pk=post_id)
return render(request, "assignment_detail.html", {'post': post})
```
home.html
```
<div class="row">
{% for post in post.all %}
<div class="col-md-4">
<div class="thumbnail">
<div class="caption">
<p>Level: {{post.assignment_level}}</p>
<a href="{% url 'assignment_detail' post_id %}"><h3>{{ post.title }}</h3></a>
<p>by {{post.author}} from {{post.pub_date}}</p>
<h4>{{post.assignment_body}}</h4>
<p><a href="#" class="btn btn-primary" role="button">Read...</a></p>
</div>
</div>
</div>
{% endfor %}
</div>
{% endblock%}
```
myproject/urls.py
```
url(r'^assignment_detail/', views.assignment_detail,name='assignment_detail'),
```
What am I missing here.
Thank you in advance. | 2018/02/08 | [
"https://Stackoverflow.com/questions/48688693",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8929670/"
] | Your url does not imply that you have to pass an id, but you're passing one in the template:
```
<a href="{% url 'assignment_detail' post_id %}"><h3>{{ post.title }}</h3></a>
```
It should be:
```
url(r'^assignment_detail/(?P<post_id>[0-9]+)', views.assignment_detail,name='assignment_detail'),
``` | That error is Django telling you that it can't find any URLs named 'assignment\_detail' that have an argument to pass in.
This is because your url entry in `myproject/urls.py` is missing the argument (`post_id`) that you use in your view. You'll need to update that url line to something similar to this:
```
url(r'^assignment_detail/(?P<post_id>[0-9]+)/$', views.assignment_detail, name='assignment_detail'),
```
The change at the end of the URL adds a named regular expression to capture the `post_id` value which will then be passed into the view.
Looking at your template code, you'll need to update your {% url %} block to use `post.id` (notice period) not `post_id` | 1,969 |
36,620,175 | I am receiving a warning and I want to check if this will break. I am using np.where like this in a lot of cases (it is similar, for me, to an if statement in excel). Is there a better or more pythonic or pandas way to do this? I'm trying to turn one dimension into something I can easily do mathematical operations on.
```
df['closed_item'] = np.where(df['result']=='Action Taken', 1, 0)
FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
result = getattr(x, name)(y)
INSTALLED VERSIONS
------------------
python: 3.5.1.final.0
python-bits: 64
OS: Windows
OS-release: 10
pandas: 0.18.0
nose: 1.3.7
pip: 8.1.0
setuptools: 20.2.2
Cython: 0.23.4
numpy: 1.11.0
scipy: 0.17.0
statsmodels: 0.6.1
xarray: None
IPython: 4.0.0
sphinx: 1.3.1
patsy: 0.4.0
dateutil: 2.4.2
pytz: 2015.7
blosc: None
bottleneck: None
tables: 3.2.2
numexpr: 2.5.1
matplotlib: 1.5.1
openpyxl: 2.2.6
xlrd: 0.9.4
xlwt: 1.0.0
xlsxwriter: 0.7.7
lxml: 3.4.4
bs4: 4.4.1
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: 1.0.9
pymysql: None
psycopg2: None
jinja2: 2.8
boto: 2.38.0
``` | 2016/04/14 | [
"https://Stackoverflow.com/questions/36620175",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3966601/"
] | This warning occurs when comparing "int" and "str" in your dataset. Add `.astype(str)` to the compared column.
Try:
```
df['closed_item'] = np.where(df['result'].astype(str)=='Action Taken', 1, 0)
``` | The issue that you mentioned is actually quite complex, so let me divide it into parts using your words:
>
> I am receiving a warning and I want to check if this will *break*
>
>
>
A `Warning` is a statement that is telling you to be cautious with how you handle your coding logic. A well-designed warning is not going to break your code; if that were the case, it would be an `Exception`.
While you need to be concerned if there are problems with your output or performance, often you may ignore a warning *ceteris paribus*. So in your case, if everything else is OK and you do not plan to update the software, you do not need to do anything to suppress the warning. However, if you need to, you may use the following snippet:
```
import warnings
with warnings.catch_warnings():
warnings.filterwarnings('ignore', r'elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison(.*)')
```
>
> I am using np.where like this in a lot of cases (it is similar, for me, to an if statement in excel).
>
>
>
Note that there is a `DataFrame.where` method in pandas.
>
> Is there a better or more pythonic or pandas way to do this?
>
>
>
Yes, there are two ways that you can use to make your code more pandas-like:
If you want to get a number of columns that will work like dummies, you may use
```
pd.get_dummies(df.result)
```
It will produce a data frame with all possible dummy values it could find in a series.
If this sounds to you like an overkill, do not worry there are ways to single out just one such variable.
---
In pandas boolean `True` and `False` are commonly used to binary classify matches within a series or a dataframe so in your case, one could perform the following operation:
```py
df.closed_item = df.result == 'Action Taken'
```
>
> I'm trying to turn one dimension into something I can easily do mathematical operations on.
>
>
>
However, if you want the output to contain integer values so that it matches yours, you may use this piece of code:
```py
df.closed_item = (df.result == 'Action Taken').astype(int)
```
---
*As a side note, I do not think this warning propagates to newer versions, i.e. `0.13` and above (as expected, since it is a future warning), so you may also consider an update.* | 1,970 |
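A minimal, self-contained illustration of both fixes on made-up data (the column mixes strings and an int on purpose to force the ambiguous comparison):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'result': ['Action Taken', 'Open', 7]})  # mixed str/int column

# casting to str first keeps the elementwise comparison well-defined
df['closed_item'] = np.where(df['result'].astype(str) == 'Action Taken', 1, 0)

# the boolean route, converted to 0/1 integers
df['closed_item_bool'] = (df['result'].astype(str) == 'Action Taken').astype(int)

print(df['closed_item'].tolist())       # [1, 0, 0]
print(df['closed_item_bool'].tolist())  # [1, 0, 0]
```

Both columns come out identical, so the choice between `np.where` and the boolean cast is mostly a matter of style.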
4,976,776 | In my vim plugin, I have two files:
```
myplugin/plugin.vim
myplugin/plugin_helpers.py
```
I would like to import plugin\_helpers from plugin.vim (using the vim python support), so I believe I first need to put the directory of my plugin on python's sys.path.
How can I (in vimscript) get the path to the currently executing script? In python, this is `__file__`. In ruby, it's `__FILE__`. I couldn't find anything similar for vim by googling, can it be done?
**Note:** I am not looking for the currently *edited* file ("%:p" and friends). | 2011/02/12 | [
"https://Stackoverflow.com/questions/4976776",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/144135/"
] | ```
" Relative path of script file:
let s:path = expand('<sfile>')
" Absolute path of script file:
let s:path = expand('<sfile>:p')
" Absolute path of script file with symbolic links resolved:
let s:path = resolve(expand('<sfile>:p'))
" Folder in which script resides: (not safe for symlinks)
let s:path = expand('<sfile>:p:h')
" If you're using a symlink to your script, but your resources are in
" the same directory as the actual script, you'll need to do this:
" 1: Get the absolute path of the script
" 2: Resolve all symbolic links
" 3: Get the folder of the resolved absolute file
let s:path = fnamemodify(resolve(expand('<sfile>:p')), ':h')
```
I use that last one often because my `~/.vimrc` is a symbolic link to a script in a git repository. | Found it:
```
let s:current_file=expand("<sfile>")
``` | 1,971 |
74,266,511 | I am making a blackjack simulator with Python and am having problems with when the player wants another card. To begin with, the player gets a random sample of two numbers from a list and then gets the option to take another card or not. When the answer is yes, another card is added to the random sample, but it gets added as a list inside of the list.
This is the line when the answer is yes to another card.
```
if svar == "JA":
handspelare.append(random.sample(kortlek,1))
print(handspelare)
```
This returns `[5, 10, [13]]`, and it is this list inside of the list I want to get rid of so I can sum the numbers. Any suggestions on how I can get rid of this? | 2022/10/31 | [
"https://Stackoverflow.com/questions/74266511",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19321160/"
] | `random.sample(kortlek,1)`
`random.sample` returns a list, so you end up `append`ing a list to `handspelare` (which creates the sublists).
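A quick demonstration of the difference (the deck and card values here are made up):

```python
import random

kortlek = list(range(1, 14))  # a made-up 13-card deck

hand = [5, 10]
hand.append(random.sample(kortlek, 1))  # appends a one-element LIST, e.g. [5, 10, [13]]
print(isinstance(hand[-1], list))       # True -> sum(hand) would fail

hand = [5, 10]
hand.append(random.choice(kortlek))     # appends a plain number, e.g. [5, 10, 13]
print(sum(hand))                        # summing now works
```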
You could change `append` to `extend`, but `random.sample(..., 1)` is just `random.choice`, so it makes more sense to use `handspelare.append(random.choice(kortlek))`. | Use list concatenation rather than append.
```
handspelare += random.sample(kortlek,1)
```
`append` will not unbundle its argument
```
a = [1]
a.append([2]) # [1, [2]]
a = [1]
a += [2] # [1, 2]
``` | 1,977 |
19,616,168 | I am new to Django and I am trying to follow the official tutorial. Since I want to connect to MySQL (installed on my computer, and I checked that the mysql module does exist in the Python command line), I set the ENGINE in settings.py to be django.db.backends.mysql, and then I tried to run
```
python manage.py syncdb
```
then I got error message like this:
```
Error loading MySQLdb module
```
and I cannot run
```
pip install mysql-python
```
the error msg is:
```
Unable to find vcvarsall.bat
```
So what is this error? And honestly I am not sure about the difference between mysql-python and mysql-connector-python, since I tried "pip install mysql-connector-python" and it tells me that the requirement is already satisfied... | 2013/10/27 | [
"https://Stackoverflow.com/questions/19616168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2379736/"
] | You need to download the [Windows binary installer](http://www.lfd.uci.edu/~gohlke/pythonlibs/#mysql-python) for the MySQL drivers for Python. Installing from source will not work, since you do not have the development headers on Windows. | You need to install the MySQL Python connector:
```
sudo apt-get install python-mysqldb
``` | 1,980 |
13,877,907 | ```
# python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os,sys
>>> import setup
..........
..........
..........
>>> reload(setup)
<module 'setup' from 'setup.pyc'>
>>>
```
But after executing reload, it's not picking up the updated 'setup' module.
For example:
After making a change to the 'setup' file in another session and reloading in interpreter mode, I am still unable to use the updated 'setup'.
Could anyone help me work out how to overcome this issue, or where I am going wrong?
Thanks in Advance
Abhishek | 2012/12/14 | [
"https://Stackoverflow.com/questions/13877907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1468198/"
] | `reload` reloads a module, but doesn't recompile it.
```
>>> reload(setup)
<module 'setup' from 'setup.pyc'>
```
It is reloading from the compiled `setup.pyc`, not `setup.py`. The easiest way to get around this is simply to delete `setup.pyc` after making changes. Then when it reloads `setup.py` it will first recompile it. | Try assigning the value returned by `reload` to the same variable:
```
setup = reload(setup)
``` | 1,981 |
49,844,925 | I have the following Python code to write processed words into an Excel file. There are about 7729 words.
```
from openpyxl import *
book=Workbook ()
sheet=book.active
sheet.title="test"
for x in range (7729):
sheet.cell (row=1,column=x+1).value=x
book.save ('test.xlsx')
```
This is what the code I used looks like, but when I run it, it gives me an error that says
```
openpyxl.utils.exceptions.IllegalCharacterError
```
This is my first time using this module, I would appreciate any kind of help. | 2018/04/15 | [
"https://Stackoverflow.com/questions/49844925",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8616724/"
] | **Try this:**
This code works for me.
```
from openpyxl import *
book=Workbook ()
sheet=book.active
sheet.title="test"
x = 0
with open("temp.txt") as myfile :
text = myfile.readline()
while text !="":
sheet.cell (row=1,column=x+1).value=str(text).encode("ascii",errors="ignore")
x+=1
text = myfile.readline()
book.save ('test.xlsx')
``` | You forgot to assign a value to the cell: `sheet.cell (row=1,column=x+1).value =`
Try like this
```
from openpyxl import *
book = Workbook ()
sheet = book.active
sheet.title = "test"
for x in range (7):
sheet.cell (row=1,column=x+1).value = "Hello"
book.save ('test.xlsx')
``` | 1,982 |
58,007,418 | I've got a CASIO fx-CG50 with python running extended version of micropython 1.9.4
Decided to make a game but I really need a sleep function, I cannot use any imports as everything is pretty barebones. Any help would be greatly appreciated.
I've tried downloading utilities but they're just extra applications, nothing seems to really exist for the casio.
Cheers! | 2019/09/19 | [
"https://Stackoverflow.com/questions/58007418",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8402836/"
] | If you cannot import time (or utime) in your code, you could always implement a simple function that loops for a certain number of steps:
```
def wait(step):
for i in range(step):
pass
wait(999999)
```
In that case, the actual time spent in the function will depend on the computational power of your device. | I am trying to do the same exact thing, and I was trying to benchmark a wait function by animating a square across the screen. Here is what I have come up with:
```
from casioplot import *
def wait(milli):
time = milli*50
for i in range(time):
pass
def drawSquare(x,y,l):
for i in range(l):
for j in range(l):
set_pixel(x+j,y+i,(0,0,0))
draw_string(0, 0, "start.", (0, 0, 0,), "small")
show_screen()
waitMillis = 1000
screenWidth = 384
screenHeight = 192
xPos = 0
yPos = 0
squareSide = 24
while xPos + squareSide < screenWidth:
clear_screen()
drawSquare(xPos,yPos,squareSide)
show_screen()
wait(waitMillis)
xPos = xPos + 5
draw_string(int(screenWidth/2), 0, "done", (0, 0, 0,), "small")
show_screen()
```
So when just using a manual stopwatch app, I noticed that the wait function has a 1 second turnaround time for every 50k iterations. So I set a ratio to the value in order
to have a comfortable millisecond parameter. Unfortunately, performance degrades drastically, probably due to the drawing utilities and the constant clearing and setting of pixels. The performance degradation is exponential and it is hard to capture and control. The square moves well up to half of the screen, after which it moves very sluggishly. I am not sure if we can fix this problem easily... | 1,988 |
65,559,632 | Seems to be impossible currently with Anaconda as well as with Xcode 12. Via idle, it runs via Rosetta. There seems to be no discussion of this so either I'm quite naive or maybe this will be useful to others as well.
Python says: "As of 3.9.1, Python now fully supports building and running on macOS 11.0 (Big Sur) and on Apple Silicon Macs (based on the ARM64 architecture). A new universal build variant, universal2, is now available to natively support both ARM64 and Intel 64 in one set of executables" <https://docs.python.org/3/whatsnew/3.9.html>
Please help a newbie figure out how to take advantage of his recent impulse-buy. | 2021/01/04 | [
"https://Stackoverflow.com/questions/65559632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14936216/"
] | You can now install Python 3.9.1 through multiple pathways, but the most comprehensive build environment for the full data-science suite for Python at the moment (Feb 2021) on M1 ARM architecture is via miniforge.
e.g.
```
brew install --cask miniforge
conda init zsh
conda activate
conda install numpy scipy scikit-learn
``` | I am using python3.9.4. I installed it using homebrew only.
```
brew install python@3.9
``` | 1,989 |
23,936,239 | Strings are iterable.
Lists are iterable.
And with a List of Strings, both the List and the Strings can be iterated through with a nested loop.
For Example:
```
input = [ 'abcdefg', 'hijklmn', 'opqrstu']
for item in input:
for letter in item:
print letter
```
Out:
```
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
```
It seems we can then iterate within the iteration over the string, i.e. iterate over each letter of 'abcdefg'. Is there a more pythonic way to iterate as above, possibly in one statement?
I've re-read through Learning Python Chapter 4, Chapter 1 of the Python Cookbook, and looked here [in python: iterate over each string in a list](https://stackoverflow.com/questions/20968823/in-python-iterate-over-each-string-in-a-list), and here [Displaying and iterating a list within a string?](https://stackoverflow.com/questions/23146299/displaying-and-iterating-a-list-within-a-string). | 2014/05/29 | [
"https://Stackoverflow.com/questions/23936239",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1851961/"
] | You can use [`itertools.chain.from_iterable()`](https://docs.python.org/3/library/itertools.html#itertools.chain.from_iterable):
```
>>> from itertools import chain
>>> input = ['abcdefg', 'hijklmn', 'opqrstu']
>>>
>>> for letter in chain.from_iterable(input):
... print letter
...
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
``` | Use `itertools.chain`:
```
for letter in itertools.chain(*input):
print letter
``` | 1,994 |
14,610,459 | Go's time package claims to give nanosecond precision.
<http://golang.org/src/pkg/time/time.go>
I'm wondering how it is implemented and if I can trust it. My doubts come from Python, which clearly documents its difficulties and limitations with time:
>
> *From <http://docs.python.org/2/library/time.html>*
>
>
> The precision of the various real-time functions may be less than
> suggested by the units in which their value or argument is expressed.
> E.g. on most Unix systems, the clock "ticks" only 50 or 100 times a
> second.
>
>
> On the other hand, the precision of time() and sleep() is better than
> their Unix equivalents: times are expressed as floating point numbers,
> time() returns the most accurate time available (using Unix
> gettimeofday() where available), and sleep() will accept a time with a
> nonzero fraction (Unix select() is used to implement this, where
> available).
>
>
>
Since the operating systems are giving Python such a hard time, how does Go achieve its nanosecond precision? | 2013/01/30 | [
"https://Stackoverflow.com/questions/14610459",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/520159/"
] | Well as for the implementation, `time.Now()` falls back to a function implemented in the runtime.
You can review the [C time implementation](http://code.google.com/p/go/source/browse/src/pkg/runtime/time.goc) and the implementation for [`time·now` in assembly](http://code.google.com/p/go/source/browse/src/pkg/runtime/sys_linux_amd64.s?name=go1.1.2#103) (linux amd64 in this case). This then uses [`clock_gettime`](http://linux.die.net/man/3/clock_gettime), which provides nanosecond resolution. On Windows, this is realized by calling `GetSystemTimeAsFileTime`, which [too generates nanoseconds](https://stackoverflow.com/a/11743614/1643939) (not as high res but nanoseconds).
So yes, the resolution depends on the operating system and you can't expect it to be accurate on every OS, but the developers are trying to make it as good as it can be. For example, in go1.0.3, `time·now` for FreeBSD [used `gettimeofday`](http://code.google.com/p/go/source/browse/src/pkg/runtime/sys_freebsd_386.s?name=go1.0.3#110) instead of `clock_gettime`, which only offers millisecond precision. You can see this by looking at the value stored in `AX`, as it is the [syscall id](http://www.acsu.buffalo.edu/~charngda/freebsd_syscalls.html). If you take a look at the referenced assembly, you can see that the ms value is multiplied by 1000 to get the nanoseconds. However, this is fixed [now](http://code.google.com/p/go/source/browse/src/pkg/runtime/sys_freebsd_386.s?name=go1.1#134).
If you want to be sure, check the corresponding implementations in the runtime source code and consult the manuals of your operating system. | One of the problems with Python's [time.time](http://docs.python.org/2/library/time.html#time.time) function is that it returns a [float](http://docs.python.org/2/library/functions.html#float). A float is an [IEEE 754 double-precision number](http://en.wikipedia.org/wiki/Double-precision_floating-point_format) which has 53 bits of precision.
Since it is now more than 2\*\*30 seconds since 1970-01-01 (the epoch) you need 61 (31 + 30) bits of precision to store time accurate to the nanosecond since 1970-01-01.
Unfortunately that is 7 or 8 bits short of what you can store in a python float, meaning that python floats will always be less precise than go time.
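In current Python this can be checked directly with `math.ulp` (Python 3.9+), which gives the spacing between adjacent doubles near a value:

```python
import math

# Near a present-day epoch timestamp (~2e9 seconds), adjacent float64
# values are about 2**-22 s apart, i.e. a couple hundred nanoseconds.
t = 2_000_000_000.0
print(math.ulp(t))  # roughly 2.4e-07
```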
To quantify that, the demonstration below shows that Python time is at best accurate to about 100 ns, just due to the limitations of the `float` type.
```
>>> t = time()
>>> t
1359587524.591781
>>> t == t + 1E-6
False
>>> t == t + 1E-7
True
```
So Go, starting with an `int64` and counting in ns, doesn't have these limitations and is limited only by the precision of the underlying OS, as explained very well by nemo. | 1,999 |
12,172,934 | (not to be confused with itertools.chain)
I was reading the following:
<http://en.wikipedia.org/wiki/Method_chaining>
My question is: **what is the best way to implement method chaining in python?**
Here is my attempt:
```
class chain():
def __init__(self, my_object):
self.o = my_object
def __getattr__(self, attr):
x = getattr(self.o, attr)
if hasattr(x, '__call__'):
method = x
return lambda *args: self if method(*args) is None else method(*args)
else:
prop = x
return prop
list_ = chain([1, 2, 3, 0])
print list_.extend([9, 5]).sort().reverse()
"""
C:\Python27\python.exe C:/Users/Robert/PycharmProjects/contests/sof.py
[9, 5, 3, 2, 1, 0]
"""
```
One problem is if calling `method(*args)` modifies `self.o` but doesn't return `None`. (then should I return `self` or return what `method(*args)` returns).
Does anyone have better ways of implementing chaining? There are probably many ways to do it.
Should I just assume a method always returns `None` so I may always return `self.o` ? | 2012/08/29 | [
"https://Stackoverflow.com/questions/12172934",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/632088/"
] | There is a very handy [`Pipe`](https://github.com/JulienPalard/Pipe) library which may be the answer to your question. For example:
```
seq = fib() | take_while(lambda x: x < 1000000) \
| where(lambda x: x % 2) \
| select(lambda x: x * x) \
| sum()
``` | There isn't going to be any general way of allowing any method of any object to be chained, since you can't know what sort of value that method returns and why without knowing how that particular method works. Methods might return `None` for any reason; it doesn't always mean the method has modified the object. Likewise, methods that do return a value still might not return a value that can be chained. There's no way to chain a method like `list.index`: `fakeList.index(1).sort()` can't have much hope of working, because the whole point of `index` is it returns a number, and that number means something, and can't be ignored just to chain on the original object.
If you're just fiddling around with Python's builtin types to chain certain specific methods (like sort and remove), you're better off just wrapping those particular methods explicitly (by overriding them in your wrapper class), instead of trying to do a general mechanism with `__getattr__`. | 2,002 |
61,748,604 | I have two pandas series with DateTimeIndex. I'd like to join these two series such that the resulting DataFrame uses the index of the first series and "matches" the values from the second series accordingly (using a linear interpolation in the second series).
First Series:
```
2020-03-01 1
2020-03-03 2
2020-03-05 3
2020-03-07 4
```
Second Series:
```
2020-03-01 20
2020-03-02 22
2020-03-05 25
2020-03-06 35
2020-03-07 36
2020-03-08 45
```
Desired Output:
```
2020-03-01 1 20
2020-03-03 2 23
2020-03-05 3 25
2020-03-07 4 36
```
---
Code for generating the input data:
```python
import pandas as pd
import datetime as dt
s1 = pd.Series([1, 2, 3, 4])
s1.index = pd.to_datetime([dt.date(2020, 3, 1), dt.date(2020, 3, 3), dt.date(2020, 3, 5), dt.date(2020, 3, 7)])
s2 = pd.Series([20, 22, 25, 35, 36, 45])
s2.index = pd.to_datetime([dt.date(2020, 3, 1), dt.date(2020, 3, 2), dt.date(2020, 3, 5), dt.date(2020, 3, 6), dt.date(2020, 3, 7), dt.date(2020, 3, 8)])
``` | 2020/05/12 | [
"https://Stackoverflow.com/questions/61748604",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5554921/"
] | Use [`concat`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html) with inner join:
```
df = pd.concat([s1, s2], axis=1, keys=('s1','s2'), join='inner')
print (df)
s1 s2
2020-03-01 1 20
2020-03-05 3 25
2020-03-07 4 36
```
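The same snippet can be run end-to-end with the question's data (a self-contained sketch):

```python
import datetime as dt

import pandas as pd

s1 = pd.Series([1, 2, 3, 4], index=pd.to_datetime(
    [dt.date(2020, 3, 1), dt.date(2020, 3, 3),
     dt.date(2020, 3, 5), dt.date(2020, 3, 7)]))
s2 = pd.Series([20, 22, 25, 35, 36, 45], index=pd.to_datetime(
    [dt.date(2020, 3, 1), dt.date(2020, 3, 2), dt.date(2020, 3, 5),
     dt.date(2020, 3, 6), dt.date(2020, 3, 7), dt.date(2020, 3, 8)]))

# Inner join keeps only the dates present in both indexes
df = pd.concat([s1, s2], axis=1, keys=('s1', 's2'), join='inner')
print(df)
```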
Solution with interpolate of `s2` Series and then removed rows with missing values:
```
df = (pd.concat([s1, s2], axis=1, keys=('s1','s2'))
.assign(s2 = lambda x: x.s2.interpolate('index'))
.dropna())
print (df)
s1 s2
2020-03-01 1.0 20.0
2020-03-03 2.0 23.0
2020-03-05 3.0 25.0
2020-03-07 4.0 36.0
``` | ### Construct combined dataframe
```
# there are many ways to construct a dataframe from series, this uses the constructor:
df = pd.DataFrame({'s1': s1, 's2': s2})
s1 s2
2020-03-01 1.0 20.0
2020-03-02 NaN 22.0
2020-03-03 2.0 NaN
2020-03-05 3.0 25.0
2020-03-06 NaN 35.0
2020-03-07 4.0 36.0
2020-03-08 NaN 45.0
```
### Interpolate
```
df = df.interpolate()
s1 s2
2020-03-01 1.0 20.0
2020-03-02 1.5 22.0
2020-03-03 2.0 23.5
2020-03-05 3.0 25.0
2020-03-06 3.5 35.0
2020-03-07 4.0 36.0
2020-03-08 4.0 45.0
```
### Restrict rows
```
# Only keep the rows that were in s1's index.
# Several ways to do this, but this example uses .filter
df = df.filter(s1.index, axis=0)
s1 s2
2020-03-01 1.0 20.0
2020-03-03 2.0 23.5
2020-03-05 3.0 25.0
2020-03-07 4.0 36.0
```
### Convert numbers back to int64
```
df = df.astype('int64')
s1 s2
2020-03-01 1 20
2020-03-03 2 23
2020-03-05 3 25
2020-03-07 4 36
```
One-liner:
```
df = pd.DataFrame({'s1': s1, 's2': s2}).interpolate().filter(s1.index, axis=0).astype('int64')
```
Documentation links:
* [interpolate](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.interpolate.html)
* [filter](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.filter.html) | 2,010 |
73,386,405 | ```
infile = open('results1', 'r')
lines = infile.readlines()
import re
for line in lines:
if re.match("track: 1,", line):
print(line)
```
Question solved by using the Python regex approach below. | 2022/08/17 | [
"https://Stackoverflow.com/questions/73386405",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19783767/"
] | I suggest you use the regular expressions library (re), which gives you everything you need to extract the data from text files. I ran some simple code to solve your current problem:
```
import re
# Customize path as the file's address on your system
text_file = open('path/sample.txt','r')
# Read the file line by line using .readlines(), so that each line will be a continuous long string in the "file_lines" list
file_lines = text_file.readlines()
```
Depending on how your target is located in each line, detailed process from here on could be a little different but the overall approach is the same in every scenario.
I have assumed your only condition is that the line starts with "Id of the track" and we are looking to extract all the values between parentheses all in one place.
```
# A list to append extracted data
list_extracted_data = []
for line in file_lines:
# Flag is True if the line starts (special character for start: \A) with 'Id of the track'
flag = re.search(r'\AId of the track', line)
if flag:
searched_phrase = re.search(r'\B\(.*',line)
start_index, end_index = searched_phrase.start(), searched_phrase.end()
# Select the indices from each line as it contains our extracted data
list_extracted_data.append(line[start_index:end_index])
print(list_extracted_data)
```
>
> ['(0.8835006455995176, -0.07697617837544447)', '(0.8835006455995176, -0.07697617837544447)', '(0.8835006455995176, -0.07697617837544447)', '(0.8835006455995176, -0.07697617837544447)', '(0.8755597308669424, -0.23473345870373538)', '(0.8835006455995176, -0.07697617837544447)', '(0.8755597308669424, -0.23473345870373538)', '(6.4057079727806485, -0.6819141582566414)', '(1.1815888836384334,
> -0.35535274681454954)']
>
>
>
you can do all sorts of things after you've selected the data from each line, including convert it to numerical type or separating the two numbers inside the parentheses.
I assume your intention was to add each of the numbers inside into a different column in a dataFrame:
```
final_df = pd.DataFrame(columns=['id','X','Y'])
for K, pair in enumerate(list_extracted_data):
# split by comma, select the left part, exclude the '(' at the start
this_X = float(pair.split(',')[0][1:])
# split by comma, select the right part, exclude the ')' at the end
this_Y = float(pair.split(',')[1][:-1])
final_df = final_df.append({'id':K,'X':this_X,'Y':this_Y},ignore_index=True)
```
[![enter image description here](https://i.stack.imgur.com/AiBt8.png)](https://i.stack.imgur.com/AiBt8.png) | Given that all your target lines follow the exact same pattern, a much simpler way to extract the value between parentheses would be:
```
from ast import literal_eval as make_tuple
infile = open('results1', 'r')
lines = infile.readlines()
import re
for line in lines:
if re.match("Id of the track: 1,", line):
values_slice = line.split(": ")[-1]
values = make_tuple(values_slice) # stored as tuple => (0.8835006455995176, -0.07697617837544447)
```
Now you can use/manipulate/store the values whichever way you want. | 2,011 |
5,738,339 | I have a specific use. I am preparing for the GRE. Every time a new word comes up, I look it up at
www.mnemonicdictionary.com for its meanings and mnemonics. I want to write a script, in Python preferably (or if someone could provide a pointer to an already existing tool, as I don't know much Python but I am learning now), which takes a list of words from a text file, looks each one up at this site, fetches just the relevant portion (meaning and mnemonics), and stores it in another text file for offline use. Is it possible to do so? I tried to look at the source of these pages as well, but along with HTML tags, they also have some AJAX functions.
Could someone provide a complete way to go about this?
Example: for word impecunious:
the related html source is like this
```
<ul class='wordnet'><li><p>(adj.) not having enough money to pay for necessities</p><u>synonyms</u> : <a href='http://www.mnemonicdictionary.com/word/hard up' onclick="ajaxSearch('hard up','click'); return false;">hard up</a> , <a href='http://www.mnemonicdictionary.com/word/in straitened circumstances' onclick="ajaxSearch('in straitened circumstances','click'); return false;">in straitened circumstances</a> , <a href='http://www.mnemonicdictionary.com/word/penniless' onclick="ajaxSearch('penniless','click'); return false;">penniless</a> , <a href='http://www.mnemonicdictionary.com/word/penurious' onclick="ajaxSearch('penurious','click'); return false;">penurious</a> , <a href='http://www.mnemonicdictionary.com/word/pinched' onclick="ajaxSearch('pinched','click'); return false;">pinched</a><p></p></li></ul>
```
but the web page renders like this:
**•(adj.) not having enough money to pay for necessities
synonyms : hard up , in straitened circumstances , penniless , penurious , pinched** | 2011/04/21 | [
"https://Stackoverflow.com/questions/5738339",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/169210/"
] | If you have Bash (version 4+) and `wget`, an example
```
#!/bin/bash
template="http://www.mnemonicdictionary.com/include/ajaxSearch.php?word=%s&event=search"
while read -r word
do
url=$(printf "$template" "$word")
data=$(wget -O- -q "$url")
data=${data#* }
echo "$word: ${data%%<*}"
done < file
```
Sample output
```
$> more file
synergy
tranquil
jester
$> bash dict.sh
synergy: the working together of two things (muscles or drugs for example) to produce an effect greater than the sum of their individual effects
tranquil: (of a body of water) free from disturbance by heavy waves
jester: a professional clown employed to entertain a king or nobleman in the Middle Ages
```
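Since the question asked for Python preferably, the trimming the script does with `${data#* }` and `${data%%<*}` can be sketched in Python too (the sample string mimics the assumed response shape; the network fetch is left out):

```python
# "sample" stands in for the body returned by ajaxSearch.php (an assumption).
sample = "impecunious (adj.) not having enough money to pay for necessities<ul>...</ul>"
after_first_space = sample.split(" ", 1)[1]   # like ${data#* }
meaning = after_first_space.split("<", 1)[0]  # like ${data%%<*}
print(meaning)
```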
Update: Include mnemonic
```
template="http://www.mnemonicdictionary.com/include/ajaxSearch.php?word=%s&event=search"
while read -r word
do
url=$(printf "$template" "$word")
data=$(wget -O- -q "$url")
data=${data#* }
m=${data#*class=\'mnemonic\'}
m=${m%%</p>*}
m="${m##* }"
echo "$word: ${data%%<*}, mneumonic: $m"
done < file
``` | Use [curl](http://curl.haxx.se/) and sed from a Bash shell (either Linux, Mac, or Windows with Cygwin).
If I get a second I will write a quick script ... gotta give the baby a bath now though. | 2,012 |
49,766,071 | I'm new to python, and I know there must be a better way to do this, especially with numpy, and without appending to arrays. Is there a more concise way to do something like this in python?
```py
def create_uniform_grid(low, high, bins=(10, 10)):
"""Define a uniformly-spaced grid that can be used to discretize a space.
Parameters
----------
low : array_like
Lower bounds for each dimension of the continuous space.
high : array_like
Upper bounds for each dimension of the continuous space.
bins : tuple
Number of bins along each corresponding dimension.
Returns
-------
grid : list of array_like
A list of arrays containing split points for each dimension.
"""
range1 = high[0] - low[0]
range2 = high[1] - low[1]
steps1 = range1 / bins[0]
steps2 = range2 / bins[1]
arr1 = []
arr2 = []
for i in range(0, bins[0] - 1):
if(i == 0):
arr1.append(low[0] + steps1)
arr2.append(low[1] + steps2)
else:
arr1.append(round((arr1[i - 1] + steps1), 1))
arr2.append(arr2[i - 1] + steps2)
return [arr1, arr2]
low = [-1.0, -5.0]
high = [1.0, 5.0]
create_uniform_grid(low, high)
# [[-0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8],
# [-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0]]
``` | 2018/04/11 | [
"https://Stackoverflow.com/questions/49766071",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1097028/"
] | `np.ogrid` is similar to your function. Differences: 1) It will keep the endpoints; 2) It will create a column and a row, so its output is 'broadcast ready':
```
>>> np.ogrid[-1:1:11j, -5:5:11j]
[array([[-1. ],
[-0.8],
[-0.6],
[-0.4],
[-0.2],
[ 0. ],
[ 0.2],
[ 0.4],
[ 0.6],
[ 0.8],
[ 1. ]]), array([[-5., -4., -3., -2., -1., 0., 1., 2., 3., 4., 5.]])]
``` | Maybe `numpy.meshgrid` is what you want.
Here is an example to create the grid and do math on it:
```
#!/usr/bin/python3
# 2018.04.11 11:40:17 CST
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(-5, 5, 0.1)
y = np.arange(-5, 5, 0.1)
xx, yy = np.meshgrid(x, y, sparse=True)
z = np.sin(xx**2 + yy**2) / (xx**2 + yy**2)
#h = plt.contourf(x,y,z)
plt.imshow(z)
plt.show()
```
[![enter image description here](https://i.stack.imgur.com/iwSMU.png)](https://i.stack.imgur.com/iwSMU.png)
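For the exact output shape in the question (interior split points only), `np.linspace` can also replace the hand-rolled loops (a sketch):

```python
import numpy as np

def create_uniform_grid(low, high, bins=(10, 10)):
    # linspace gives bins+1 edges including both endpoints; [1:-1] keeps
    # only the interior split points, matching the question's output.
    return [np.linspace(low[d], high[d], bins[d] + 1)[1:-1]
            for d in range(len(low))]

grid = create_uniform_grid([-1.0, -5.0], [1.0, 5.0])
print(grid[0])  # nine interior points from -0.8 to 0.8
```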
---
Refer:
1. <https://docs.scipy.org/doc/numpy/reference/generated/numpy.meshgrid.html> | 2,013 |
25,067,927 | So I have a line here that is meant to dump frames from a movie via python and ffmpeg.
```
subprocess.check_output([ffmpeg, "-i", self.moviefile, "-ss 00:01:00.000 -t 00:00:05 -vf scale=" + str(resolution) + ":-1 -r", str(framerate), "-qscale:v 6", self.processpath + "/" + self.filetitles + "-output%03d.jpg"])
```
And currently it's giving me the error:
```
'CalledProcessError: Command ... returned non-zero exit status 1'
```
The command python SAYS it's running is:
```
'['/var/lib/openshift/id/app-root/data/programs/ffmpeg/ffmpeg', '-i', u'/var/lib/openshift/id/app-root/data/moviefiles/moviename/moviename.mp4', '-ss 00:01:00.000 -t 00:00:05 -vf scale=320:-1 -r', '10', '-qscale:v 6', '/var/lib/openshift/id/app-root/data/process/moviename/moviename-output%03d.jpg']'
```
But when I run the following command via ssh...
```
'/var/lib/openshift/id/app-root/data/programs/ffmpeg/ffmpeg' -i '/var/lib/openshift/id/app-root/data/moviefiles/moviename/moviename.mp4' -ss 00:01:00.000 -t 00:00:05 -vf scale=320:-1 -r 10 -qscale:v 6 '/var/lib/openshift/id/app-root/data/process/moviename/moviename-output%03d.jpg'
```
It works just fine. What am I doing wrong? I think I'm misunderstanding the way subprocess field parsing works... | 2014/07/31 | [
"https://Stackoverflow.com/questions/25067927",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | The subprocess module almost never allows whitespace inside a single argument element, unless you run it in shell mode. Try this:
```
subprocess.check_output(["ffmpeg", "-i", self.moviefile, "-ss", "00:01:00.000", "-t", "00:00:05", "-vf", "scale=" + str(resolution) + ":-1", "-r", str(framerate), "-qscale:v", "6", self.processpath + "/" + self.filetitles + "-output%03d.jpg"])
```
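The splitting behaviour can be demonstrated with a self-contained sketch (using `sys.executable` in place of ffmpeg):

```python
import subprocess
import sys

child = "import sys; print(sys.argv[1:])"
# Each list element becomes exactly one argv entry in the child process:
print(subprocess.check_output([sys.executable, "-c", child, "-ss", "00:01:00.000"]))
# A single element containing spaces arrives as ONE argument, which is the bug:
print(subprocess.check_output([sys.executable, "-c", child, "-ss 00:01:00.000"]))
```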
Here is a quote from [the Python docs](https://docs.python.org/2/library/subprocess.html#popen-constructor):
*"Note in particular that options (such as -input) and arguments (such as eggs.txt) that are separated by whitespace in the shell go in separate list elements, while arguments that need quoting or backslash escaping when used in the shell (such as filenames containing spaces or the echo command shown above) are single list elements."* | The argument array you pass to `check_call` is not correctly formatted. Every argument to `ffmpeg` needs to be a single element in the argument list, for example
```
... "-ss 00:01:00.000 -t 00:00:05 -vf ...
```
should be
```
... "-ss", "00:01:00.000", "-t", "00:00:05", "-vf", ...
```
The complete resulting args array should be:
```
['ffmpeg', '-i', '/var/lib/openshift/id/app-root/data/moviefiles/moviename/moviename.mp4', '-ss', '00:01:00.000', '-t', '00:00:05', '-vf', 'scale=320:-1', '-r', '10', '-qscale:v', '6', '/var/lib/openshift/id/app-root/data/process/moviename/moviename-output%03d.jpg']
``` | 2,014 |
4,002,660 | In my MySQL database I have dates going back to the mid 1700s which I need to convert somehow to ints in a format similar to Unix time. The value of the int isn't important, so long as I can take a date from either my database or from user input and generate the same int. I need to use MySQL to generate the int on the database side, and python to transform the date from the user.
Normally, the [UNIX\_TIMESTAMP function](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_unix-timestamp) would accomplish this in MySQL, but for dates before 1970 it always returns zero.
The [TO\_DAYS MySQL function](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_to-days) could also work, but I can't take a date from user input and use Python to create the same values as this function creates in MySQL.
So basically, I need a function like UNIX\_TIMESTAMP that works in MySQL and Python for dates between 1700-01-01 and 2100-01-01.
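For what it's worth, one candidate mapping I've been looking at (treat the constant offset as an assumption to verify; 719528 is MySQL's documented `TO_DAYS('1970-01-01')`) pairs `TO_DAYS` with Python's `date.toordinal()`:

```python
from datetime import date

# MySQL's TO_DAYS counts from year 0, Python's toordinal() from 0001-01-01,
# so the two differ by a constant 365 for dates in the Gregorian range
def to_days(d):
    return d.toordinal() + 365

assert to_days(date(1970, 1, 1)) == 719528  # matches MySQL TO_DAYS('1970-01-01')
print(to_days(date(1700, 1, 1)))
```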
Put another way, this MySQL pseudo-code:
```
select 1700_UNIX_TIME(date) from table;
```
Must equal this Python code:
```
1700_UNIX_TIME(date)
``` | 2010/10/23 | [
"https://Stackoverflow.com/questions/4002660",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/64911/"
] | This is my idea: create a filter in your web application. When you receive a request like
`/area.jsp?id=1`, forward the request to `http://example.com/newyork` in the `doFilter` method.
In `web.xml`:
```
<filter>
<filter-name>RedirectFilter</filter-name>
<filter-class>
com.filters.RedirectFilter
</filter-class>
</filter>
<filter-mapping>
<filter-name>RedirectFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
```
Write the following class and place it in `WEB-INF/classses`:
```
class RedirectFilter implements Filter
{
    public void doFilter(ServletRequest request,
                         ServletResponse response,
                         FilterChain chain)
        throws IOException, ServletException
    {
        HttpServletRequest req = (HttpServletRequest) request;
        String scheme = req.getScheme();            // http
        String serverName = req.getServerName();    // example.com
        int serverPort = req.getServerPort();       // 80
        String contextPath = req.getContextPath();  // /mywebapp
        String servletPath = req.getServletPath();  // /servlet/MyServlet
        String pathInfo = req.getPathInfo();        // area.jsp?id=1
        String queryString = req.getQueryString();
        if (pathInfo != null && pathInfo.indexOf("area.jsp") != -1)
        {
            pathInfo = "/newyork";
            request.getRequestDispatcher(pathInfo).forward(request, response);
        } else
        {
            chain.doFilter(request, response);
            return;
        }
    }
}
``` | In your database where you store these area IDs, add a column called "slug" and populate it with the names you want to use. The "slug" for id 1 would be "newyork". Now when a request comes in for one of these URLs, look up the row by "slug" instead of by id. | 2,015 |
64,415,588 | Given 2 data frames like the link example, I need to add to df1 the "index income" from df2. I need to search by the df1 combined key in df2 and, if there is a match, return the value into a new column in df1. There is not an equal number of instances in df1 and df2; there are about 700 rows in df1 and 1000 rows in df2.
I was able to do this in excel with a vlookup but I am trying to apply it to python code now.
![Data Frame images](https://i.stack.imgur.com/YUh1H.png) | 2020/10/18 | [
"https://Stackoverflow.com/questions/64415588",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14473305/"
] | This should solve your issue:
```
df1.merge(df2, how='left', on='combined_key')
```
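As a self-contained illustration of the join (frame contents and column names invented here, not taken from your data):

```python
import pandas as pd

df1 = pd.DataFrame({"combined_key": ["A1", "B2", "C3"]})
df2 = pd.DataFrame({"combined_key": ["A1", "C3", "D4"],
                    "index_income": [100, 300, 400]})

# every df1 row survives; keys missing from df2 come back as NaN,
# much like a VLOOKUP miss in Excel
out = df1.merge(df2, how="left", on="combined_key")
print(out)
```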
This (`left` join) will give you all the records of `df1` and matching records from `df2`. | <https://www.geeksforgeeks.org/how-to-do-a-vlookup-in-python-using-pandas/>
Here is an answer using joins. I modified my df2 to only include useful columns, then used a pandas left join.
```
Left_join = pd.merge(df,
                     zip_df,
                     on='State County',
                     how='left')
``` | 2,018 |
66,996,373 | I'm trying to install and use Pillow with Python 3.9.2 (managed with pyenv). I'm using Poetry to manage my virtual environments and dependencies, so I ran `poetry add pillow`, which successfully added `Pillow = "^8.2.0"` to my pyproject.toml. Per the Pillow docs, I added `from PIL import Image` in my script, but when I try to run it, I get:
```
File "<long/path/to/file.py>", line 3, in <module>
from PIL import Image
ModuleNotFoundError: No module named 'PIL'
```
When I look in the venv Poetry is creating for me, I can see a PIL directory (`/long/path/lib/python3.9/site-packages/PIL/`) and an Image.py file inside it.
What am I missing here? I've tried:
* Downcasing to `from pil import Image` per [this](https://github.com/python-pillow/Pillow/issues/3851#issuecomment-568223993); did not work
* Downgrading to lower versions of Python and PIL; works, but defeats the purpose
* ETA: Exporting a requirements.txt file from Poetry, creating a virtualenv with venv, and installing the packages manually; works, but cuts me off from using Poetry/pyproject.toml
Any help would be tremendously appreciated. | 2021/04/08 | [
"https://Stackoverflow.com/questions/66996373",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4430379/"
] | I couldn't find a way to solve this either (using poetry 1.1.13).
Ultimately, I resorted to a workaround of `poetry add pillow && pip install pillow` so I could move on with my life. :P
`poetry add pillow` gets the dependency into the TOML, so consumers of the package *should* be OK. | Capitalizing "Pillow" solved it for me:
`poetry add Pillow` | 2,019 |
30,558,917 | Using the pandas library for python I am reading a csv, then grouping the results with a sum.
```
grouped = df[['Organization Name','Views']].groupby('Organization Name').sum().sort(columns='Views',ascending=False).head(10)
#Bar Chart Section
print grouped.to_string()
```
Unfortunately I get the following result for the table:
```
Views
Organization Name
Test1 112
Test2 114
Test3 115
```
It seems that the column headers are going on two separate rows. | 2015/05/31 | [
"https://Stackoverflow.com/questions/30558917",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1760634/"
] | Because you grouped on 'Organization Name', it is being used as the name for your index; you can set it to `None` using:
```
grouped.index.name = None
```
This will then remove the line; it is just a display issue, your data is not in some funny shape.
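To see the effect on a toy frame (a sketch with made-up numbers; note that modern pandas removed `.sort(columns=...)`, so this uses its replacement `sort_values`):

```python
import pandas as pd

df = pd.DataFrame({"Organization Name": ["Test1", "Test2", "Test1"],
                   "Views": [50, 114, 62]})
grouped = df.groupby("Organization Name").sum().sort_values("Views", ascending=False)
grouped.index.name = None   # drops the extra header line when printing
print(grouped.to_string())
```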
Alternatively if you don't want 'Organization Name' to become the index then pass `as_index=False` to `groupby`:
```
grouped = df[['Organization Name','Views']].groupby('Organization Name', as_index=False).sum().sort(columns='Views',ascending=False).head(10)
``` | `grouped.reset_index()` should fix this. This happened because you have grouped the data and aggregated on a column. | 2,020 |
67,511,611 | I am new to Python socket server programming and am following this [example](https://docs.python.org/3/library/socketserver.html#examples) to set up a server using the socketserver framework. Based on the comment, pressing Ctrl-C will stop the server, but when I try to run it again, I get
`OSError: [Errno 98] Address already in use`
which makes me have to kill the process manually using the terminal.
Based on my understanding, KeyboardInterrupt is considered one type of exception in Python, and when an exception happens in a `with` block, Python will also call the `__exit__()` function to clean up. I have tried to create an `__exit__()` function in the TCP handler class, but that does not seem to fix the problem.
Does anyone know a way to unbind the socket when an exception is raised?
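For reference, the closest workaround I've come across while searching (an assumption on my part, not verified against my exact setup) is enabling `SO_REUSEADDR` through the server class's `allow_reuse_address` attribute. A minimal sketch:

```python
import socketserver

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        self.request.sendall(self.request.recv(1024))

# allow_reuse_address sets SO_REUSEADDR before bind(), which lets a restarted
# process rebind a port whose previous socket is still lingering in TIME_WAIT
class ReusableTCPServer(socketserver.TCPServer):
    allow_reuse_address = True

server = ReusableTCPServer(("localhost", 0), EchoHandler)  # port 0: OS picks a free port
port = server.server_address[1]
server.server_close()

server = ReusableTCPServer(("localhost", port), EchoHandler)  # rebind the same port
server.server_close()
print(port)
```

If this is the right lever, I could subclass once and reuse it for the real handler instead of killing the process manually.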
server.py
```
import socketserver
from threading import Thread
class MyTCPHandler(socketserver.BaseRequestHandler):
"""
The request handler class for our server.
It is instantiated once per connection to the server, and must
override the handle() method to implement communication to the
client.
"""
def handle(self):
# self.request is the TCP socket connected to the client
self.data = self.request.recv(1024).strip()
print("{} wrote:".format(self.client_address[0]))
print(self.data)
# just send back the same data, but upper-cased
self.request.sendall(self.data.upper())
# Self-written function to try to make Python close the server properly
def __exit__(self):
shutdown_thread = Thread(target=server.shutdown)
shutdown_thread.start()
if __name__ == "__main__":
HOST, PORT = "localhost", 9999
# Create the server, binding to localhost on port 9999
with socketserver.TCPServer((HOST, PORT), MyTCPHandler) as server:
# Activate the server; this will keep running until you
# interrupt the program with Ctrl-C
server.serve_forever()
``` | 2021/05/12 | [
"https://Stackoverflow.com/questions/67511611",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10733376/"
] | Just split the string, map over the `stringArray`, and add `<b>` just before the `beginOffset` and `</b>` after the `endOffset`.
```js
var indices = [{
beginOffset: 2,
endOffset: 8,
},
{
beginOffset: 42,
endOffset: 48,
},
{
beginOffset: 58,
endOffset: 63,
},
];
var teststring =
"a lovely day at the office to meet such a lovely woman. I loved her so much";
let stringArray = teststring.split("");
indices.forEach(({
beginOffset: begin,
endOffset: end
}) => {
stringArray = stringArray.map((l, index) => {
if (index === begin - 1) {
return [l, `<b>`];
} else if (index === end - 1) {
return [l, `</b>`];
} else return l;
});
});
console.log(stringArray.flat().join(""));
``` | Sort the indices from highest to lowest. Then when you insert `<b>` and `</b>` it won't affect the indexes in subsequent iterations.
```js
var indices = [{
beginOffset: 2,
endOffset: 8
},
{
beginOffset: 42,
endOffset: 48
},
{
beginOffset: 58,
endOffset: 63
}
];
var teststring = "a lovely day at the office to meet such a lovely woman. I loved her so much";
indices.sort((a, b) => b.beginOffset - a.beginOffset).forEach(({
beginOffset,
endOffset
}) => teststring = teststring.substring(0, beginOffset) + '<b>' + teststring.substring(beginOffset, endOffset) + '</b>' + teststring.substr(endOffset));
console.log(teststring);
``` | 2,021 |
45,477,478 | I have a group of images and some separate heatmap data which (imperfectly) explains where subject of the image is. The heatmap data is in a numpy array with shape (224,224,3). I would like to generate bounding box data from this heatmap data.
The heatmaps are not always perfect, so I guess I'm wondering if anyone can think of an intelligent way to do this.
Here are some examples of what happens when I apply the heatmap data to the image:
[![Image of a cat with a heatmap illuminating the subject of the image](https://github.com/jacobgil/keras-grad-cam/raw/master/examples/persian_cat.jpg?raw=true)](https://github.com/jacobgil/keras-grad-cam/raw/master/examples/persian_cat.jpg?raw=true)
[![enter image description here](https://i.stack.imgur.com/JZqJn.jpg)](https://i.stack.imgur.com/JZqJn.jpg)
I found a solution to this in matlab, but I have no idea how to read this code! I am a python programmer, unfortunately.
<https://github.com/metalbubble/CAM/tree/master/bboxgenerator>
Anyone have any ideas about how to approach something like this? | 2017/08/03 | [
"https://Stackoverflow.com/questions/45477478",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3539683/"
] | This is not a good piece of code; I would not know where to start on the bad practices...
This function defines an inner function that is not reachable from any other scope, and not reusable, just to return its call with the data argument. The outer return could be as simple as
```
return self.change('groupTo', groupExp, data);
``` | If you call the `getData()` function without passing any parameter, then the value of the `data` variable in the function is `undefined`.
So on this line a ternary operator is used:
```
data = (data === undefined) ? this.defaultData() : data;
```
It checks the condition `data === undefined`, which is true, and therefore assigns the value of `this.defaultData()` to the `data` attribute.
In short, when the value of `data` is `undefined`, the following is the case:
```
data = this.defaultData()
```
Otherwise, if `data` has a value (i.e. the function was called with a parameter, like `getData("Hi")`), it is evaluated as
```
data = data // data = Hi
```
Now, `var self = this;` is used here to preserve the context of `this` inside the nested function shown below.
```
return (function parse(group) {
return self.change('groupTo', groupExp, group);
}(data));
```
Without `self = this`, trying to use `this` in the nested function would point to the global object, i.e. the `window` object in JS.
In the following code, `arg` is passed into the IIFE call, so it is available inside the function to pass on to the `doSomething` function.
```
(function (local_arg) {
doSomething(local_arg);
})(arg);
``` | 2,023 |
34,490,117 | C code:
```
#include "Python.h"
#include <windows.h>
__declspec(dllexport) PyObject* getTheString()
{
auto p = Py_BuildValue("s","hello");
char * s = PyString_AsString(p);
MessageBoxA(NULL,s,"s",0);
return p;
}
```
Python code:
```
import ctypes
import sys
sys.path.append('./')
dll = ctypes.CDLL('pythonCall_test.dll')
print type(dll.getTheString())
```
Result:
```
<type 'int'>
```
How can I get `pytype(str)'hello'` from C code? Or is there any Pythonic way to translate this `pytype(int)` to `pytype(str)`?
It looks like no matter what I change, the returned `PyObject*` is a `pytype(int)` and nothing else | 2015/12/28 | [
"https://Stackoverflow.com/questions/34490117",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5680359/"
] | >
> By default functions are assumed to return the C `int` type. Other
> return types can be specified by setting the `restype` attribute of the
> function object.
> [(ref)](https://docs.python.org/2/library/ctypes.html#return-types)
>
>
>
Define the return type of your function like this:
```
>>> from ctypes import c_char_p
>>> dll.getTheString.restype = c_char_p # c_char_p is a pointer to a string
>>> print type(dll.getTheString())
``` | `int` is the default return type, to specify another type you need to set the function object's `restype` attribute. See [Return types](https://docs.python.org/2/library/ctypes.html#return-types) in the `ctype` docs for details. | 2,025 |
61,959,745 | I want to merge all files with the extension `.asc` in my current working directory into a file called `outfile.asc`.
My problem is, I don't know how to exclude a specific file (`"BigTree.asc"`) and how to overwrite an existing `"outfile.asc"` if there is one in the directory.
```
if len(sys.argv) < 2:
print("Please supply the directory of the ascii files and an output-file as argument:")
print("python merge_file.py directory outfile")
exit()
directory = sys.argv[1]
os.chdir(directory)
currwd = os.getcwd()
filename = sys.argv[2]
fileobj_out = open(filename, "w")
starttime = time.time()
read_files = glob.glob(currwd+"\*.asc")
with open("output.asc", "wb") as outfile:
for f in read_files:
with open(f, "rb") as infile:
if f == "BigTree.asc":
continue
else:
outfile.write(infile.read())
endtime = time.time()
runtime = int(endtime-starttime)
sys.stdout.write("The script took %i sec." %runtime)
``` | 2020/05/22 | [
"https://Stackoverflow.com/questions/61959745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13461656/"
] | As suggested in a comment, here's my simplified (simplistic?) solution to make it such that specific flask end points in google app engine are only accessible by application code or app engine service accounts. The answer is based on the documentation regarding [validating cron requests](https://cloud.google.com/appengine/docs/standard/python3/scheduling-jobs-with-cron-yaml#validating_cron_requests) and [validating task requests](https://cloud.google.com/tasks/docs/creating-appengine-handlers#reading_app_engine_task_request_headers).
Basically, we write a decorator that will validate whether or not `X-Appengine-Cron: true` is in the headers (implying that the end point is being called by your code, not a remote user). If the header is not found, then we do not run the protected function.
```
# python
# main.py
from flask import Flask, request, redirect, render_template
app = Flask(__name__)
# Define the decorator to protect your end points
def validate_cron_header(protected_function):
def cron_header_validator_wrapper(*args, **kwargs):
# https://cloud.google.com/appengine/docs/standard/python3/scheduling-jobs-with-cron-yaml#validating_cron_requests
header = request.headers.get('X-Appengine-Cron')
# If you are validating a TASK request from a TASK QUEUE instead of a CRON request, then use 'X-Appengine-TaskName' instead of 'X-Appengine-Cron'
# example:
# header = request.headers.get('X-Appengine-TaskName')
# Other possible headers to check can be found here: https://cloud.google.com/tasks/docs/creating-appengine-handlers#reading_app_engine_task_request_headers
# If the header does not exist, then don't run the protected function
if not header:
# here you can raise an error, redirect to a page, etc.
return redirect("/")
# Run and return the protected function
return protected_function(*args, **kwargs)
# The line below is necessary to allow the use of the wrapper on multiple endpoints
# https://stackoverflow.com/a/42254713
cron_header_validator_wrapper.__name__ = protected_function.__name__
return cron_header_validator_wrapper
@app.route("/example/protected/handler")
@validate_cron_header
def a_protected_handler():
# Run your code here
your_response_or_error_etc = "text"
return your_response_or_error_etc
@app.route("/yet/another/example/protected/handler/<myvar>")
@validate_cron_header
def another_protected_handler(some_var=None):
# Run your code here
return render_template("my_sample_template", some_var=some_var)
``` | It still works in Python 3.x; I use the original approach in my own Flask App Engine app running Python 3.8.
Here is a simplified version of my `app.yaml` with everything you need:
```
runtime: python38
app_engine_apis: true
handlers:
- url: /admin/.*
secure: always
script: auto
login: admin
- url: /.*
secure: always
script: auto
```
Both scripts are set to auto and point to main.py by default.
In main.py, I define my routes and all routes starting with /admin will force the user to login with a Google Account which has owner/admin rights for the application.
Just make sure you include `app_engine_apis: true` in your `app.yaml` file as it is required for login to work. | 2,026 |
49,625,350 | I have a zip file structure like - B.zip/org/note.txt
I want to directly list the files inside the org folder without going into the other folders in B.zip.
I have written the following code, but it is listing all the files and directories available inside the B.zip file:
```
f = zipfile.ZipFile('D:\python\B.jar')
for name in f.namelist():
print '%s: %r' % (name, f.read(name))
``` | 2018/04/03 | [
"https://Stackoverflow.com/questions/49625350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You can filter the names with the `startswith` function (using Python 3):
```
import os
import zipfile
with zipfile.ZipFile('D:\python\B.jar') as z:
for filename in z.namelist():
if filename.startswith("org"):
print(filename)
``` | How to list all files that are inside ZIP files of a certain folder
-------------------------------------------------------------------
>
> Every time I came to this post I was asking a similar question, but a different one at the same time. Because of this, I think other users may have the same doubt. If you got to this post trying this....
>
>
>
```py
import os
import zipfile
# Use your folder path
path = r'set_yoor_path'
for file in os.listdir(os.chdir(path)):
if file[-3:].upper() == 'ZIP':
for item in zipfile.ZipFile(file).namelist():
print(item)
```
If someone feels that this post has to be deleted, please let me know. Thanks. | 2,027 |
53,605,066 | I know there are lots of Q&As about extracting a datetime from a string, such as [dateutil.parser](https://stackoverflow.com/questions/3276180/extracting-date-from-a-string-in-python):
```
import dateutil.parser as dparser
dparser.parse('something sep 28 2017 something',fuzzy=True).date()
output: datetime.date(2017, 9, 28)
```
but my question is how to know which part of the string results in this extraction, e.g. I want a function that also returns 'sep 28 2017':
```
datetime, datetime_str = get_date_str('something sep 28 2017 something')
outputs: datetime.date(2017, 9, 28), 'sep 28 2017'
```
Any clue or any direction that I can search around? | 2018/12/04 | [
"https://Stackoverflow.com/questions/53605066",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1165964/"
] | Extending the discussion with @Paul and following the solution from @alecxe, I have put together the following solution, which works on a number of test cases; I've made the problem slightly more challenging:
**Step 1: get excluded tokens**
```
import dateutil.parser as dparser
ostr = 'something sep 28 2017 something abcd'
_, excl_str = dparser.parse(ostr,fuzzy_with_tokens=True)
```
gives outputs of:
```
excl_str: ('something ', ' ', 'something abcd')
```
**Step 2: rank tokens by length**
```
excl_str = list(excl_str)
excl_str.sort(reverse=True,key = len)
```
gives a sorted token list:
```
excl_str: ['something abcd', 'something ', ' ']
```
**Step 3: delete tokens and ignore space element**
```
for i in excl_str:
if i != ' ':
ostr = ostr.replace(i,'')
return ostr
```
gives a final output
```
ostr: 'sep 28 2017 '
```
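Putting the three steps together as one helper (a sketch; the exact tokens dateutil skips can differ between versions):

```python
import dateutil.parser as dparser

def get_date_str(ostr):
    parsed, skipped = dparser.parse(ostr, fuzzy_with_tokens=True)
    # delete longer tokens first; otherwise a shorter token that is a
    # substring of a longer one leaves leftovers behind
    for token in sorted(skipped, key=len, reverse=True):
        if token.strip():
            ostr = ostr.replace(token.strip(), "")
    return parsed.date(), ostr.strip()

print(get_date_str("something sep 28 2017 something abcd"))
```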
***Note:*** step 2 is required, because it will cause a problem if any shorter token is a subset of a longer one. e.g., in this case, if deletion follows the order `('something ', ' ', 'something abcd')`, the replacement process will remove `something` from `something abcd`, and `abcd` will never get deleted, ending up with `'sep 28 2017 abcd'` | Interesting problem! There is no direct way to get the parsed out date string out of the bigger string with `dateutil`. The problem is that the `dateutil` parser does not even have this string available as an intermediate result, as it really builds parts of the future `datetime` object on the fly and character by character ([source](https://github.com/dateutil/dateutil/blob/master/dateutil/parser/_parser.py#L732-L856)).
It, though, also collects a list of skipped tokens which is probably your best bet. As this list is ordered, you can loop over the tokens and replace the first occurrence of the token:
```
from dateutil import parser
s = 'something sep 28 2017 something'
parsed_datetime, tokens = parser.parse(s, fuzzy_with_tokens=True)
for token in tokens:
s = s.replace(token.lstrip(), "", 1)
print(s) # prints "sep 28 2017"
```
I am though not 100% sure if this would work in all the possible cases, especially, with the different whitespace characters (notice how I had to workaround things with `.lstrip()`). | 2,028 |
16,874,010 | I am trying to write out a line to a new file based on input from a csv file, with elements from different rows and different columns for example
test.csv:
```
name1, value1, integer1, integer1a
name2, value2, integer2, integer2a
name3, value3, integer3, integer3a
```
desired output:
```
command integer1:integer1a moretext integer2:integer2a
command integer2:integer2a moretext integer3:integer3a
```
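Here's the shape of loop I think I'm after (a sketch on the sample above; column meanings are assumed from it):

```python
import csv
import io

data = """name1, value1, integer1, integer1a
name2, value2, integer2, integer2a
name3, value3, integer3, integer3a
"""

rows = [[c.strip() for c in row] for row in csv.reader(io.StringIO(data))]
# pair each row with the next one: (row0, row1), (row1, row2), ...
lines = ["command %s:%s moretext %s:%s" % (cur[2], cur[3], nxt[2], nxt[3])
         for cur, nxt in zip(rows, rows[1:])]
print("\n".join(lines))
```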
I realize this will probably require some type of loop; I am just getting lost in the references for loop iteration and Python maps | 2013/06/01 | [
"https://Stackoverflow.com/questions/16874010",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2443424/"
] | For an array you can use the std::vector class.
```
std::vector<account *>MyAccounts;
MyAccounts.push_back(new account());
```
Then you can use it like an array, accessing it normally.
```
MyAccounts[i]->accountFunction();
```
**update**
I don't know enough about your code, so I will just give some general examples here.
In your bank class you have a member like the one shown above (`MyAccounts`). Now whenever you add a new account to your bank, you can do it with the push_back function.
For example, to add a new account and set an initial amount of 100 money units:
```
MyAccounts.push_back(new account());
size_t i = MyAccounts.size() - 1; // index of the account just added
MyAccounts[i]->setAmount(100);
``` | You can do something like below
```
class Bank
{
public:
void AddAccount(Account act) { m_vecAccts.push_back(act); }
....
private:
...
std::vector<Account> m_vecAccts;
};
```
Update:
This is just a Bank class with a vector of accounts as a private member variable. AddAccount is a public function which adds an account to the vector. | 2,029 |
45,823,884 | So I'm working on a quiz in Python as a project for an Intro to Programming course.
My quiz works as intended, except that the quiz variable is not being affected by the new values of the blank array. In the run_quiz function I want the quiz variable to update itself by changing the blanks to the correct answer after the user has provided it.
Here's my code:
```
#Declaration of variables
blank = ["___1___", "___2___", "___3___", "___4___"]
answers = []
tries = 5
difficulty = ""
quiz = ""
#Level 1: Easy
quiz1 = "Python is intended to be a highly " + blank[0] + " language. It is designed to have an uncluttered " + blank[1] + " layout, often using English " + blank[2] + " where other languages use " + blank[3] + ".\n"
#Level 2: Medium
quiz2 = "Python interpreters are available for many " + blank[0] + " allowing Python code to run on a wide variety of systems. " + blank[1] + " the reference implementation of Python, is " + blank[2] + " software and has a community-based development model, as do nearly all of its variant implementations. " + blank[1] + " is managed by the non-profit " + blank[3] + ".\n"
#Level 3: Hard
quiz3 = "Python features a " + blank[0] + " system and automatic " + blank[1] + " and supports multiple " + blank[2] + " including object-oriented, imperative, functional programming, and " + blank[3] + " styles. It has a large and comprehensive standard library.\n"
#Answer and quiz assignment
def assign():
global difficulty
global quiz
x = 0
while x == 0:
user_input = raw_input("Select a difficulty, Press 1 for Easy, 2 for Medium or 3 for Hard.\n")
if user_input == "1":
answers.extend(["readable", "visual", "keywords", "punctuation"])
difficulty = "Easy"
quiz = quiz1
x = 1
elif user_input == "2":
answers.extend(["operating systems", "cpython", "open source", "python software foundation"])
difficulty = "Medium"
quiz = quiz2
x = 1
elif user_input == "3":
answers.extend(["dynamic type", "memory management", "programming paradigms", "procedural"])
difficulty = "Hard"
quiz = quiz3
x = 1
else:
print "Error: You must select 1, 2 or 3.\n"
x = 0
def run_quiz():
n = 0
global tries
global blank
print "Welcome to the Python Quiz! This quiz follows a fill in the blank structure. You will have 5 tries to replace the 4 blanks on the difficulty you select. Let's begin!\n"
assign()
print "You have slected " + difficulty + ".\n"
print "Read the paragraph carefully and prepare to provide your answers.\n"
while n < 4 and tries > 0:
print quiz
user_input = raw_input("What is your answer for " + blank[n] + "? Remember, you have " + str(tries) + " tries left.\n")
if user_input.lower() == answers[n]:
print "That is correct!\n"
blank[n] = answers[n]
n += 1
else:
print "That is the wrong answer. Try again!\n"
tries -= 1
if n == 4 or tries == 0:
if n == 4:
print "Congratulations! You are an expert on Python!"
else:
print "You have no more tries left! You can always come back and play again!"
run_quiz()
```
I know my code has many areas of improvement but this is my first Python project so I guess that's expected. | 2017/08/22 | [
"https://Stackoverflow.com/questions/45823884",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8501849/"
] | The problem is that your variable, `quiz`, is just a fixed string, and although it looks like it has something to do with `blanks`, it actually doesn't. What you want is 'string interpolation'. Python allows this with the `.format` method of `str` objects. This is really the crux of your question, and using string interpolation it's easy to do. I'd advise you to take some time to learn `.format`, it's an incredibly helpful function in almost any script.
I've also updated your code a bit not to use global variables, as this is generally bad practice and can lead to confusing, difficult to track bugs. It may also impair the uncluttered visual layout :). Here is your modified code, which should be working now:
```
quizzes = [
("""\
Python is intended to be a highly {} language.\
It is designed to have an uncluttered {} layout,\
often using English {} where other languages use {}
""", ["readable", "visual", "keywords", "punctuation"], "Easy"),
("""\
Python interpreters are available for many {}\
allowing Python code to run on a wide variety of systems.\
{} the reference implementation of Python, is {}\
software and has a community-based development model, as\
do nearly all of its variant implementations. {} is managed by the non-profit {}
""", ["operating systems", "cpython", "open source", "python software foundation"], "Medium"),
("""\
Python features a {} system and automatic {} and\
supports multiple {} including object-oriented,\
imperative, functional programming, and\
{} styles. It has a large and comprehensive standard library.
""", ["dynamic type", "memory management", "programming paradigms", "procedural"], "Hard")
]
#Answer and quiz assignment
def assign():
while True:
user_input = raw_input("Select a difficulty, Press 1 for Easy, 2 for Medium or 3 for Hard.\n")
if user_input == "1":
return quizzes[0]
elif user_input == "2":
return quizzes[1]
elif user_input == "3":
return quizzes[2]
else:
print "Error: You must select 1, 2 or 3.\n"
continue
break
def run_quiz():
n = 0
#Declaration of variables
blank = ["___1___", "___2___", "___3___", "___4___"]
tries = 5
print "Welcome to the Python Quiz! This quiz follows a fill in the blank structure. You will have 5 tries to replace the 4 blanks on the difficulty you select. Let's begin!\n"
quiz, answers, difficulty = assign()
print "You have selected {}.\n".format(difficulty)
print "Read the paragraph carefully and prepare to provide your answers.\n"
while n < 4 and tries > 0:
print quiz.format(*blank)
user_input = raw_input("What is your answer for {}? Remember, you have {} tries left.\n".format(blank[n], tries))
if user_input.lower() == answers[n]:
print "That is correct!\n"
blank[n] = answers[n]
n += 1
else:
print "That is the wrong answer. Try again!\n"
tries -= 1
if n == 4 or tries == 0:
if n == 4:
print "Congratulations! You are an expert on Python!"
else:
print "You have no more tries left! You can always come back and play again!"
run_quiz()
```
A little more on string interpolation:
You're doing a lot of `"start of string " + str(var) + " end of string"`. This can be achieved quite simply with `"start of string {} end of string".format(var)` - it even automatically does the `str` conversion. I've changed your `quiz` variables to have `"{}"` where either `"__1__"` etc should be displayed or the user's answer. You can then do `quiz.format(*blank)` to print the 'most recent' version of the quiz. `*` here 'unpacks' the elements of blank into separate arguments for `format`.
If you find it more easy to learn with example usage, here are two usages of `format` in a simpler context:
```
>>> "the value of 2 + 3 is {}".format(2 + 3)
'the value of 2 + 3 is 5'
>>> a = 10
>>> "a is {}".format(a)
'a is 10'
```
I've also stored the information about each quiz in a `list` of `tuple`s, and assign now has a `return` value, rather than causing side effects. Apart from that, your code is still pretty much intact. Your original logic hasn't changed at all.
Regarding your comment about objects:
Technically, yes, `quizzes` is an object. However, as Python is a 'pure object oriented language', *everything* in Python is an object. `2` is an object. `"abc"` is an object. `[1, 2, 3]` is an object. Even functions are objects. You may be thinking in terms of JavaScript - with all of the brackets and parentheses, it kind of resembles a JS Object. However, `quizzes` is nothing more than a list (of tuples). You might also be thinking of instances of custom classes, but it's not one of those either. Instances require you to define a class first, using `class ...`.
A bit more on what `quizzes` actually is - it's a list of tuples of strings, lists of strings and strings. This is a kind of complicated type signature, but it's just a lot of nested container types really. It firstly means that each element of `quizzes` is a 'tuple'. A tuples is pretty similar to a list, except that it can't be changed in place. Really, you could almost always use a list instead of a tuple, but my rule of thumb is that a heterogenous collection (meaning stuff of different types) should generally be a tuple. Each tuple has the quiz text, the answers, and the difficulty. I've put it in an object like this as it means it can be accessed by indexing (using `quiz[n]`), rather than by a bunch of if statements which then refer to `quiz1`, `quiz2`, etc. Generally, if you find yourself naming more than about two variables which are semantically similar like this, it would be a good idea to put them in a list, so you can index, and iterate etc. | Only now have I read your question properly.
You first make your strings quiz1, quiz2 and quiz3.
You only do that once.
After that you change your blanks array.
But you don't reconstruct your strings.
So they still have the old values.
Note that a copy of elements of the blanks array is made into e.g. quiz1.
That copy doesn't change automagically after the fact.
If you want to program it like this, you'll have to rebuild your quiz1, quiz2 and quiz3 strings explicitly each time you change your blanks array.
General advice: Don't use so many globals. Use function parameters instead. But for a first attempt I guess it's OK.
[edit]
A simple modification would be:
Replace your quiz, quiz1, quiz2 and quiz3 by functions get_quiz(), get_quiz1(), etc. that build the most recent version, including the altered elements of blanks.
This modification doesn't make this an elegant program. But you'll come to that with a bit more experience.
A long shot in case you wonder (but don't try to bridge that gap in one step):
In the end Quiz will probably be a class with methods and attributes, of which you have instances.
To be sure: I think that experimenting like this will make you a good programmer, more than copying some ready to go code! | 2,031 |
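A minimal sketch of the "rebuild the string each time" advice above (the names and quiz text here are illustrative, not from the original code):

```python
# blanks is the global list the quiz strings are built from
blanks = ["___", "___"]

def get_quiz():
    # Rebuilt on every call, so it always reflects the current blanks.
    return "Python is {} and {}.".format(*blanks)

print(get_quiz())      # Python is ___ and ___.
blanks[0] = "fun"
print(get_quiz())      # Python is fun and ___.
```

Because the string is constructed inside the function, there is no stale copy left over after `blanks` changes.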
72,011,497 | I am reading data remote .dat files for EDI data processing.
Original Data is some string bytes:
```
b'MDA1MDtWMjAxOS44LjAuMDtWMjAxOS44LjAuMDsyMDIwMD.........'
```
Used decode as below...
```
byte_data = base64.b64decode(byte_data)
```
That gave me the byte data below. Is there a better way to process this bytes data into a Python list?
```
b"0050;V2019.8.0.0;V2019.8.0.0;20200407;184821\r\n0070;;7;0;7\r\n0080;11;50;bot.pdf;Driss;C:\\Dat\\Abl\\\r\n0090;1;Z;Zub\xf6r;0;0;0;Zub\xf6r;;;Zub\xf6r\r\n
```
Tried decoding with utf-8, didn't work.
```
byte_data.decode('utf-8')
```
Tried to convert to string and read it as CSV, but that did not help; I landed back on the original data. I need to keep some of the string as it is and convert \xf6r \r \n
```
data = io.StringIO(above_data)
data.seek(0)
csv_reader = csv.reader(data, delimiter=";")
``` | 2022/04/26 | [
"https://Stackoverflow.com/questions/72011497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6837224/"
] | It didn't work with 'utf-8' because it's not 'utf-8', it's probably 'ISO-8859-1' (latin-1)
```py
text = byte_data.decode('ISO-8859-1')
```
because `\xf6` is `ΓΆ` in 'ISO-8859-1' | Is it definitely utf-8 encoded?
This might help guide to what decoder to use:
```
import chardet
print(chardet.detect(byte_data))
``` | 2,032 |
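Putting the two answers together, a runnable sketch (assuming, as the ISO-8859-1 answer suggests, that the payload is Latin-1 text with `;`-separated fields; the sample data here is abbreviated, not the real file):

```python
import base64
import csv
import io

# Abbreviated stand-in for the question's base64 payload.
raw = base64.b64encode("0050;V2019.8.0.0;20200407\r\n0090;1;Z;Zub\xf6r".encode("latin-1"))

text = base64.b64decode(raw).decode("latin-1")  # not utf-8: \xf6 is Latin-1 'ö'
rows = list(csv.reader(io.StringIO(text, newline=""), delimiter=";"))
print(rows)  # [['0050', 'V2019.8.0.0', '20200407'], ['0090', '1', 'Z', 'Zubör']]
```

The `\r\n` terminators are consumed by the csv reader, so each record ends up as a clean list of fields.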
45,209,068 | I'm new to python, and now I need to use it to work with some data in a txt file.
Here is a sample data, where after each `'&'`, is a new index:
```
uid=aaa&sid=bbb&bid=ccc&cid=ddd&pid=eee&ver=fff...
uid=aaa2&sid=bbb2&bid=ccc2&cid=ddd2&pid=eee2&ver=fff2...
...
```
The end result is to have a DataFrame (with pandas) with `columns=['uid', 'sid', 'bid', 'cid', 'pid', 'ver'...]` and the content of `uid` as index.
My idea is: to strip out `aaa`, `bbb`, and `ccc`, etc. from the string, and insert them into the dataframe.
I've tried:
```
st1 = gif?uid=aaa&sid=bbb&bid=ccc&cid=ddd&pid=eee&ver=fff......HTTPasfawfaw
(st1 is the original string)
st2 = st1.split("gif?")[1].split("HTTP")[0]
st3 = st2.split('&')
```
My question is:
1. how can I only take the string after the `=` out and put them in Dataframe?
2. I need to deal with huge data files, is there a better way to do this with less time and takes less memory?
Thank you in advance for your help! | 2017/07/20 | [
"https://Stackoverflow.com/questions/45209068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8336506/"
] | This is a URL querystring. You should use the `urllib` module in the standard library to parse it.
```
from urllib.parse import parse_qs # python3
from urlparse import parse_qs # python2
parse_qs('uid=aaa2&sid=bbb2&bid=ccc2&cid=ddd2&pid=eee2&ver=fff2')
```
Output:
```
{'bid': ['ccc2'],
'cid': ['ddd2'],
'pid': ['eee2'],
'sid': ['bbb2'],
'uid': ['aaa2'],
'ver': ['fff2']}
``` | You can use `regex` to create a `list` of all the columns and values and then use it to create your `dataframe`, for example:
```
import re
st = 'uid=aaa&sid=bbb&bid=ccc&cid=ddd&pid=eee&ver=fffuid=aaa2&sid=bbb2&bid=ccc2&cid=ddd2&pid=eee2&ver=fff2'
myData = re.findall(r'(\wid)=(\w+)', st)
print(myData)
```
output:
```
[('uid', 'aaa'), ('sid', 'bbb'), ('bid', 'ccc'), ('cid', 'ddd'), ('pid', 'eee'), ('uid', 'aaa2'), ('sid', 'bbb2'), ('bid', 'ccc2'), ('cid', 'ddd2'), ('pid', 'eee2')]
``` | 2,033 |
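To get from the parsed lines to the requested DataFrame, a stdlib-only sketch using `parse_qsl` (the final pandas call is left as a comment, since pandas may not be available):

```python
from urllib.parse import parse_qsl

lines = [
    "uid=aaa&sid=bbb&bid=ccc&cid=ddd&pid=eee&ver=fff",
    "uid=aaa2&sid=bbb2&bid=ccc2&cid=ddd2&pid=eee2&ver=fff2",
]
# One dict per line; parse_qsl yields (key, value) pairs in order.
records = [dict(parse_qsl(line)) for line in lines]
print(records[0]["uid"], records[1]["sid"])  # aaa bbb2
# pandas.DataFrame(records).set_index("uid") then gives the frame with
# columns sid, bid, cid, pid, ver and the uid content as the index.
```

This streams line by line, so memory use stays proportional to the number of records, not the raw file size.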
37,061,089 | I installed Jupyter notebooks in Ubuntu 14.04 via Anaconda earlier, and just now I installed TensorFlow. I would like TensorFlow to work regardless of whether I am working in a notebook or simply scripting. In my attempt to achieve this, I ended up installing TensorFlow twice, once using Anaconda, and once using pip. The Anaconda install works, but I need to preface any call to python with "source activate tensorflow". And the pip install works nicely: if I start python the standard way (in the terminal) then tensorflow loads just fine.
My question is: how can I also have it work in the Jupyter notebooks?
This leads me to a more general question: it seems that my python kernel in Jupyter/Anaconda is separate from the python kernel (or environment? not sure about the terminology here) used system wide. It would be nice if these coincided, so that if I install a new python library, it becomes accessible to all the varied ways I have of running python. | 2016/05/05 | [
"https://Stackoverflow.com/questions/37061089",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4556722/"
] | I installed PIP with Conda `conda install pip` instead of `apt-get install python-pip python-dev`.
Then installed tensorflow
use [Pip Installation](https://www.tensorflow.org/versions/r0.9/get_started/os_setup.html#test-the-tensorflow-installation):
```
# Ubuntu/Linux 64-bit, CPU only, Python 2.7
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.9.0-cp27-none-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled, Python 2.7
# Requires CUDA toolkit 7.5 and CuDNN v4. For other versions, see "Install from sources" below.
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.9.0-cp27-none-linux_x86_64.whl
```
...
`pip install --upgrade $TF_BINARY_URL`
Then it will work in jupyter notebook. | ```
pip install tensorflow
```
This worked for me in my conda virtual environment.
I was trying to use `conda install tensorflow` in a conda virtual environment where jupyter notebooks was already installed, resulting in many conflicts and failure. But pip install worked fine. | 2,035 |
10,559,144 | I am trying to use `suptitle` to print a title, and I want to occasionally replace this title. Currently I am using:
```
self.ui.canvas1.figure.suptitle(title)
```
where figure is a matplotlib figure (canvas1 is an mplCanvas, but that is not relevant) and title is a python string.
Currently, this works, except for the fact that when I run this code again later, it just prints the new text on top of the old, resulting in a garbled, unreadable title.
How do you replace the old `suptitle` of a figure, instead of just printing over?
Thanks,
Tyler | 2012/05/11 | [
"https://Stackoverflow.com/questions/10559144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/402632/"
] | `figure.suptitle` returns a `matplotlib.text.Text` instance. You can save it and set the new title:
```
txt = fig.suptitle('A test title')
txt.set_text('A better title')
plt.draw()
``` | Resurrecting this old thread because I recently ran into this. There is a reference to the Text object returned by the original setting of suptitle in figure.texts. You can use this to change the original until this is fixed in matplotlib. | 2,045
12,451,124 | So I've already graduated and received all credits for my compsci degree. But my professor from my last quarter just sent me an email saying he found something interesting in one of my homework assignments. I forget the context, but I don't think it matters. I'll post the email exchange.
---
From: PROF
To: ME
Subject: RE: LDA
Hi STUDENT,
I noticed something odd in one of your homework answers:
```
def swap(i,j):
tmp = i
i = j
j = tmp
return i,j
```
This works, but what is wrong with it?
---
From: ME
To: PROF
Subject: RE: LDA
oh yea, I was thinking in C, wouldn't it work in python to do:
```
def swap(i,j):
i,j = j,i
return i,j
```
Python will then optimize that swap on the stack eliminating the need for an tmp variable at all.
---
From: PROF
To: ME
Subject: RE: LDA
that's true, but there is a more subtle conceptual confusion
So my question is: what am I missing here? | 2012/09/16 | [
"https://Stackoverflow.com/questions/12451124",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1676273/"
] | Think about how you would call `swap` in Python, versus how you would call a swap function in C.
For example, in C,
```
swap(&a, &b);
```
is valid and swaps the memory in `a` with the memory in `b` (assuming the implementation of `swap` is right).
But, in Python,
```
swap(a, b)
```
...does nothing! You'd have to assign the result:
```
a,b = swap(a,b)
```
but then why don't you just do
```
a,b = b,a
```
and ditch the swap() function completely?
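A quick runnable check of that point:

```python
def swap(i, j):
    i, j = j, i
    return i, j

a, b = 1, 2
swap(a, b)                # return value discarded: a and b are unchanged
assert (a, b) == (1, 2)
a, b = swap(a, b)         # only rebinding the names in the caller changes them
assert (a, b) == (2, 1)
```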
If you really understand the difference between Python and C, you will be able to explain why the Python swap function cannot swap two variables without assigning the result. | I guess his point is that inside a function there's no need to do the swap at all - because the return values of the function aren't tied to the values passed in, so this would do as well:
```
def swap(i, j):
return j, i
```
So in fact there's no point in having the function, it doesn't add anything at all. You'd have to call `i, j = swap(i, j)` - which is exactly the same as `j, i = i, j`. | 2,048 |
32,328,778 | Suppose I want to match a string like this:
>
> 123(432)123(342)2348(34)
>
>
>
I can match digits like `123` with `[\d]*` and `(432)` with `\([\d]+\)`.
How can I match the whole string by repeating either of the 2 patterns?
*I tried `[[\d]* | \([\d]+\)]+`, but this is incorrect.*
*I am using python re module.* | 2015/09/01 | [
"https://Stackoverflow.com/questions/32328778",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/954376/"
] | I think you need this regex:
```
"^(\d+|\(\d+\))+$"
```
and to avoid catastrophic backtracking you need to change it to a regex like this:
```
"^(\d|\(\d+\))+$"
``` | You can use a character class to match the whole string:
```
[\d()]+
```
But if you want to match the separate parts in separate groups, you can use `re.findall` with a special regex based on your need, for example:
```
>>> import re
>>> s="123(432)123(342)2348(34)"
>>> re.findall(r'\d+\(\d+\)',s)
['123(432)', '123(342)', '2348(34)']
>>>
```
Or :
```
>>> re.findall(r'(\d+)\((\d+)\)',s)
[('123', '432'), ('123', '342'), ('2348', '34')]
```
Or you can just use `\d+` to get all the numbers :
```
>>> re.findall(r'\d+',s)
['123', '432', '123', '342', '2348', '34']
```
If you want to match the patter `\d+\(\d+\)` repeatedly you can use following regex :
```
(?:\d+\(\d+\))+
``` | 2,053 |
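A quick check of the repeated-alternation idea with `re.fullmatch` (pattern adapted from the answers above):

```python
import re

pattern = r"(?:\d+|\(\d+\))+"
assert re.fullmatch(pattern, "123(432)123(342)2348(34)")
assert re.fullmatch(pattern, "(12)34")
assert not re.fullmatch(pattern, "123(432)x")   # trailing junk is rejected
print("all checks passed")
```

`fullmatch` anchors the pattern at both ends, which is exactly the "match the whole string" requirement without writing `^` and `$` explicitly.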
32,870,262 | I am trying to create a program in python in which the user enters a sentence and the reversed sentenced is printed. The code I have so far is:
```
sentence = raw_input('Enter the sentence')
length = len(sentence)
for i in sentence[length:0:-1]:
a = i
print a,
```
When the program is run it misses out the last letter so if the word was 'hello' it would print 'olle'. Can anyone see my mistake? | 2015/09/30 | [
"https://Stackoverflow.com/questions/32870262",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5342974/"
] | You need to remove the `0` from your indices range, but instead you can use :
```
sentence[length::-1]
```
Also note that you then don't need to loop over your string, use extra assignments, or even compute the `length`; you can simply print the reversed string.
So the following code will do the job for you :
```
print sentence[::-1]
```
Demo :
```
>>> s="hello"
>>> print s[::-1]
'olleh'
``` | The second argument of the slice notation means "up to, but not including", so `sentence[length:0:-1]` will loop up to 0, but not at 0.
The fix is to leave the stop value out entirely (writing `-1` instead of 0 would not work, since negative indices count from the end of the string).
```
for i in sentence[::-1]:
``` | 2,056 |
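A compact check of both slices discussed above:

```python
s = "hello"
assert s[len(s):0:-1] == "olle"   # stop index 0 is excluded, so s[0] is lost
assert s[::-1] == "olleh"         # omitting the stop keeps every character
print(s[::-1])
```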
10,621,615 | I was playing around with iterables and more specifically the `yield` operator in Python. While using test driven development to start writing a new iterable, I wondered what is the shortest code that could make this simple test for an iterable to pass:
```py
def test():
for x in my_iterable():
pass
```
The shortest version I could think of was:
```py
def my_iterable():
for i in []:
yield i
```
Is it possible to write a simpler, shorter or more beautiful (pythonic) version? | 2012/05/16 | [
"https://Stackoverflow.com/questions/10621615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2457/"
] | Yes, there is:
```
return iter([])
``` | ```
def do_yield():
return
yield None
```
if usage of `yield` is important to you; otherwise, use one of the other answers. | 2,063
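For completeness, a small sketch showing that both variants above satisfy the original test:

```python
def via_iter():
    return iter([])

def via_yield():
    return
    yield  # unreachable, but its presence makes this a generator function

for my_iterable in (via_iter, via_yield):
    for x in my_iterable():   # the original test: loops zero times
        raise AssertionError("should be empty")
    assert list(my_iterable()) == []
print("both pass")
```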
64,523,282 | I installed anaconda from the [official website](https://www.anaconda.com/) and I want to integrate it with sublime text 3. I tried to build a sublime-build json file like this:
```
{
"cmd": ["C:/Users/Minh Duy/anaconda3/python.exe", "-u", "$file"],
"file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
"selector": "source.python"
}
```
But I got errors:
```
C:\Users\Minh Duy\anaconda3\lib\site-packages\numpy\__init__.py:138: UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service
from . import _distributor_init
Traceback (most recent call last):
File "C:\Users\Minh Duy\anaconda3\lib\site-packages\numpy\core\__init__.py", line 22, in <module>
from . import multiarray
File "C:\Users\Minh Duy\anaconda3\lib\site-packages\numpy\core\multiarray.py", line 12, in <module>
from . import overrides
File "C:\Users\Minh Duy\anaconda3\lib\site-packages\numpy\core\overrides.py", line 7, in <module>
from numpy.core._multiarray_umath import (
ImportError: DLL load failed while importing _multiarray_umath: The specified module could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Minh Duy\Documents\Self-study\Python\Exercise\test_code.py", line 1, in <module>
import numpy as np
File "C:\Users\Minh Duy\anaconda3\lib\site-packages\numpy\__init__.py", line 140, in <module>
from . import core
File "C:\Users\Minh Duy\anaconda3\lib\site-packages\numpy\core\__init__.py", line 48, in <module>
raise ImportError(msg)
ImportError:
```
I didn't add anaconda to PATH, but everything works fine on spyder and anaconda prompt.
I don't really know if there is anything wrong with the way I set up anaconda or something else.
Can someone help me with this issue? | 2020/10/25 | [
"https://Stackoverflow.com/questions/64523282",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12074366/"
] | The DLLs of the mkl-service that it's tried to load are by default located in the following directory:
**C:/Users/<username>/anaconda3/Library/bin**
since that path isn't in the PATH Environment Variable, it can't find them and raises the ImportError.
To fix this, you can:
1. Add the mentioned path to the PATH Environment Variable:
Open the start menu search, type *env*, click *edit environment variables for your account*, select path from the list at the top, click Edit then New, enter the mentioned path, and click OK.
This isn't the best method, as it makes this directory available globally, while you need it only when you are building with Anaconda.
2. Configure your custom Sublime Text build system to add the directory to PATH every time you use that build system (temporarily for the duration of that run).
This can be done simply by adding one line to the build system file, and it should look like this:
```
{
"cmd": ["C:/Users/<username>/anaconda3/python.exe", "-u", "$file"],
"file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
"selector": "source.python",
"env": {
"PYTHONIOENCODING": "utf-8",
"PATH": "$PATH;C:/Users/<username>/anaconda3/Library/bin"},
}
```
This should work, however, to make it more error resistant you should consider adding some other paths too:
* C:/Users/<username>/anaconda3
* C:/Users/<username>/anaconda3/Library/mingw-w64/bin
* C:/Users/<username>/anaconda3/Library/usr/bin
* C:/Users/<username>/anaconda3/Scripts
* C:/Users/<username>/anaconda3/bin
* C:/Users/<username>/anaconda3/condabi
3. If you have more than one Anaconda environment and want more control from inside Sublime Text, then consider installing the [Conda](https://docs.anaconda.com/anaconda/user-guide/tasks/integration/sublime/) [package](https://packagecontrol.io/packages/Conda) for Sublime Text.
Press Shift+Control+P to open command palette inside Sublime Text, search for Conda and click to install; once installed, change the build system to Conda from Menu -> Tools -> Build System. Then you can open the command palette and use the commands that start with Conda to manage your Anaconda Environments.
Note that you need to activate an environment before using Ctrl+B to build. | first configure it with python. write python in your cmd to get python path. then configure it with anaconda.
```
{
"cmd": ["C:/Users/usr_name/AppData/Local/Programs/Python/Python37-32/python.exe", "-u", "$file"],
"file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
"selector": "source.python"
}
``` | 2,073 |
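What the `"env"` key in the build system does can be emulated in plain Python — a sketch (the Anaconda path is an assumption about the default install location; adjust `<username>`):

```python
import os

# Hypothetical DLL directory, the same one used in the build system's "env" key.
conda_bin = r"C:/Users/<username>/anaconda3/Library/bin"

# Append it to PATH for this process only, just as the build system does
# for the duration of a single build.
os.environ["PATH"] = os.environ.get("PATH", "") + os.pathsep + conda_bin
assert conda_bin in os.environ["PATH"]
# With the real path in place, `import numpy` can then locate the MKL DLLs.
```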
64,708,800 | I have been able to successfully detect an object (face and eye) using a haar cascade classifier in python using opencv. When the object is detected, a rectangle is shown around the object. I want to get the coordinates of the midpoint of the two eyes and store them in an array. Can anyone help me? How can I do this? Any guide is appreciated. | 2020/11/06 | [
"https://Stackoverflow.com/questions/64708800",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11828549/"
Haskell doesn't allow this because it would be ambiguous. The value constructor `Const` is effectively a function, which may be clearer if you ask GHCi about its type:
```
> :t Const
Const :: Bool -> Prop
```
If you attempt to add one more `Const` constructor in the same module, you'd have two 'functions' called `Const` in the same module. You can't have that. | This is somewhat horrible, but will basically let you do what you want:
```hs
{-# LANGUAGE PatternSynonyms, TypeFamilies, ViewPatterns #-}
data Prop = PropConst Bool
| PropVar Char
| PropNot Prop
| PropOr Prop Prop
| PropAnd Prop Prop
| PropImply Prop Prop
data Formula = FormulaConst Bool
| FormulaVar Prop
| FormulaNot Formula
| FormulaAnd Formula Formula
| FormulaOr Formula Formula
| FormulaImply Formula Formula
class PropOrFormula t where
type Var t
constructConst :: Bool -> t
deconstructConst :: t -> Maybe Bool
constructVar :: Var t -> t
deconstructVar :: t -> Maybe (Var t)
constructNot :: t -> t
deconstructNot :: t -> Maybe t
constructOr :: t -> t -> t
deconstructOr :: t -> Maybe (t, t)
constructAnd :: t -> t -> t
deconstructAnd :: t -> Maybe (t, t)
constructImply :: t -> t -> t
deconstructImply :: t -> Maybe (t, t)
instance PropOrFormula Prop where
type Var Prop = Char
constructConst = PropConst
deconstructConst (PropConst x) = Just x
deconstructConst _ = Nothing
constructVar = PropVar
deconstructVar (PropVar x) = Just x
deconstructVar _ = Nothing
constructNot = PropNot
deconstructNot (PropNot x) = Just x
deconstructNot _ = Nothing
constructOr = PropOr
deconstructOr (PropOr x y) = Just (x, y)
deconstructOr _ = Nothing
constructAnd = PropAnd
deconstructAnd (PropAnd x y) = Just (x, y)
deconstructAnd _ = Nothing
constructImply = PropImply
deconstructImply (PropImply x y) = Just (x, y)
deconstructImply _ = Nothing
instance PropOrFormula Formula where
type Var Formula = Prop
constructConst = FormulaConst
deconstructConst (FormulaConst x) = Just x
deconstructConst _ = Nothing
constructVar = FormulaVar
deconstructVar (FormulaVar x) = Just x
deconstructVar _ = Nothing
constructNot = FormulaNot
deconstructNot (FormulaNot x) = Just x
deconstructNot _ = Nothing
constructOr = FormulaOr
deconstructOr (FormulaOr x y) = Just (x, y)
deconstructOr _ = Nothing
constructAnd = FormulaAnd
deconstructAnd (FormulaAnd x y) = Just (x, y)
deconstructAnd _ = Nothing
constructImply = FormulaImply
deconstructImply (FormulaImply x y) = Just (x, y)
deconstructImply _ = Nothing
pattern Const x <- (deconstructConst -> Just x) where
Const x = constructConst x
pattern Var x <- (deconstructVar -> Just x) where
Var x = constructVar x
pattern Not x <- (deconstructNot -> Just x) where
Not x = constructNot x
pattern Or x y <- (deconstructOr -> Just (x, y)) where
Or x y = constructOr x y
pattern And x y <- (deconstructAnd -> Just (x, y)) where
And x y = constructAnd x y
pattern Imply x y <- (deconstructImply -> Just (x, y)) where
Imply x y = constructImply x y
{-# COMPLETE Const, Var, Not, Or, And, Imply :: Prop #-}
{-# COMPLETE Const, Var, Not, Or, And, Imply :: Formula #-}
```
If <https://gitlab.haskell.org/ghc/ghc/-/issues/8583> were ever done, then this could be substantially cleaned up. | 2,074 |
53,014,961 | It seems like a trivial task; however, I can't find a solution for doing this using Python.
Given the following string:
```
"Lorem/ipsum/dolor/sit amet consetetur"
```
I would like to output
```
"Lorem/ipsum/dolor/sit ametconsetetur"
```
Hence, removing the single whitespace between `amet` and `consetetur`.
Using `.replace(" ","")` replaces all whitespaces, giving me:
```
"Lorem/ipsum/dolor/sitametconsetetur"
```
which is not what I want. How can I solve this? | 2018/10/26 | [
"https://Stackoverflow.com/questions/53014961",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6341510/"
] | use regex and word boundary:
```
>>> s="Lorem/ipsum/dolor/sit amet consetetur"
>>> import re
>>> re.sub(r"\b \b","",s)
'Lorem/ipsum/dolor/sit ametconsetetur'
>>>
```
This technique also handles the more general case:
```
>>> s="Lorem/ipsum/dolor/sit amet consetetur adipisci velit"
>>> re.sub(r"\b \b","",s)
'Lorem/ipsum/dolor/sit ametconsetetur adipiscivelit'
```
for start & end spaces, you'll have to work slightly harder, but it's still doable:
```
>>> s=" Lorem/ipsum/dolor/sit amet consetetur adipisci velit "
>>> re.sub(r"(^|\b) (\b|$)","",s)
'Lorem/ipsum/dolor/sit ametconsetetur adipiscivelit'
```
Just for fun, a last variant: use `re.split` with a multiple space separation, preserve the split char using a group, then join the strings again, removing the spaces only if the string has some non-space in it:
```
"".join([x if x.isspace() else x.replace(" ","") for x in re.split("( {2,})",s)])
```
(I suppose that this is slower because of list creation & join though) | ```
s[::-1].replace(' ', '', 1)[::-1]
```
* Reverse the string
* Delete the first space
* Reverse the string back | 2,075 |
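Note: the question's string almost certainly contains a *double* space after "sit" (collapsed by markdown rendering above) — otherwise the desired output would be unreachable. Under that assumption, both answers can be checked side by side:

```python
import re

s = "Lorem/ipsum/dolor/sit  amet consetetur"      # double space after "sit"
want = "Lorem/ipsum/dolor/sit  ametconsetetur"

assert re.sub(r"\b \b", "", s) == want             # removes single spaces only
assert s[::-1].replace(" ", "", 1)[::-1] == want   # drops the last space
print(want)
```

The `\b \b` pattern cannot match inside a run of two spaces (there is no word boundary between two space characters), which is why the double space survives.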
68,588,398 | I would like to define a Python function which takes a list of dictionaries, in which some values could be lists, and then returns a list of lists of dictionaries, in which each value is a single item, corresponding to all the combinations of options (an option is picking a single value from each list).
Consider the following input:
```
input = [
{
"name": "A",
"option1": [1, 2],
"option2": ["a1", "a2"]
},
{
"name": "B",
"option1": [3, 4],
"option2": "b1"
}
]
```
Given this input, the desired output would be:
```
output = [[{"name": "A", "option1": 1, "option2": "a1"}, {"name": "B", "option1": 3, "option2": "b1"}],
          [{"name": "A", "option1": 1, "option2": "a1"}, {"name": "B", "option1": 4, "option2": "b1"}],
          [{"name": "A", "option1": 1, "option2": "a2"}, {"name": "B", "option1": 3, "option2": "b1"}],
          [{"name": "A", "option1": 1, "option2": "a2"}, {"name": "B", "option1": 4, "option2": "b1"}],
          [{"name": "A", "option1": 2, "option2": "a1"}, {"name": "B", "option1": 3, "option2": "b1"}],
          [{"name": "A", "option1": 2, "option2": "a1"}, {"name": "B", "option1": 4, "option2": "b1"}],
          [{"name": "A", "option1": 2, "option2": "a2"}, {"name": "B", "option1": 3, "option2": "b1"}],
          [{"name": "A", "option1": 2, "option2": "a2"}, {"name": "B", "option1": 4, "option2": "b1"}]]
``` | 2021/07/30 | [
"https://Stackoverflow.com/questions/68588398",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13613091/"
] | If you have all vec lists in a single list of lists using, you can unpack this list when passing it to the product function:
```
list_vecs = [vec, vec2, vec3, vec4]
list(product(*list_vecs, repeat=1))
```
Concerning the \* (star-notation) see the python docs [here](https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists):
>
> For instance, the built-in range() function expects separate start and stop arguments. If they are not available separately, write the function call with the \*-operator to unpack the arguments out of a list or tuple:
>
>
>
```
>>> list(range(3, 6)) # normal call with separate arguments
[3, 4, 5]
>>> args = [3, 6]
>>> list(range(*args)) # call with arguments unpacked from a list
[3, 4, 5]
```
In case `vec4` is only defined later, just append it to the `list_vecs`: `list_vecs.append(vec4)` | This solution is almost the same as @mcsoini, but a little more explanation:
Here,
```
vec=[['A1','A2','A3'],
['B1','B2'],
['C1','C2','C3'],vec4]
```
`vec` is a list of lists. The first 3 lists are `vec1,2,3`. `vec4` can be added later on. Also, you can add more lists to `vec` using `vec.append(<list>)`
Now, instead of doing `vec[0],vec[1]...`, we will simply use the `*` for unpacking the list. This will pass all the lists into `itertools.product()`.
```
list(product(*vec,repeat=1))
```
Also, this takes care of the number of lists, because doing `vec[0]...` is not only tedious but can also lead to errors if an index is out of range, or will only consider those lists which are indexed.
```
vec=[['A1','A2','A3'],
['B1','B2'],
['C1','C2','C3'],vec4]
result = list(product(*vec,repeat=1))
``` | 2,078 |
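Applying the `product(*lists)` idea to the question's actual list-of-dicts input — a sketch (the `expand` helper is mine, not from the answers):

```python
from itertools import product

def expand(d):
    """Yield one dict per combination of the list-valued options."""
    keys = list(d)
    alternatives = [v if isinstance(v, list) else [v] for v in d.values()]
    for combo in product(*alternatives):
        yield dict(zip(keys, combo))

inp = [
    {"name": "A", "option1": [1, 2], "option2": ["a1", "a2"]},
    {"name": "B", "option1": [3, 4], "option2": "b1"},
]
output = [list(combo) for combo in product(*(list(expand(d)) for d in inp))]
print(len(output))  # 4 variants of A x 2 variants of B = 8 rows
```

Each row of `output` is a list with one fully-resolved dict per input entry, matching the shape asked for in the question.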
44,948,661 | I am new to python and word2vec and keep getting a "you must first build vocabulary before training the model" error. What is wrong with my code?
Here is my code:
```
file_object=open("SupremeCourt.txt","w")
from gensim.models import word2vec
data = word2vec.Text8Corpus('SupremeCourt.txt')
model = word2vec.Word2Vec(data, size=200)
out=model.most_similar()
print(out[1])
print(out[2])
``` | 2017/07/06 | [
"https://Stackoverflow.com/questions/44948661",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8264914/"
] | Have a look at this: <https://tweepy.readthedocs.io/en/v3.5.0/cursor_tutorial.html>
And try this:
```
import tweepy
auth = tweepy.OAuthHandler(CONSUMER_TOKEN, CONSUMER_SECRET)
api = tweepy.API(auth)
for tweet in tweepy.Cursor(api.search, q='#python', rpp=100).items():
# Do something
pass
```
In your case you have a max number of tweets to get, so as per the linked tutorial you could do:
```
import tweepy
MAX_TWEETS = 5000000000000000000000
auth = tweepy.OAuthHandler(CONSUMER_TOKEN, CONSUMER_SECRET)
api = tweepy.API(auth)
for tweet in tweepy.Cursor(api.search, q='#python', rpp=100).items(MAX_TWEETS):
# Do something
pass
```
If you want tweets after a given ID, you can also pass that argument. | Check the twitter api documentation; it probably allows just 300 tweets to parse.
I would recommend forgetting the api and doing it with requests with streaming. The api is an implementation of requests with limitations. | 2,079
55,013,809 | OK
I was afraid to use the terminal, so I installed the python-3.7.2-macosx10.9 package downloaded from python.org
Ran the certificate and shell profile scripts, everything seems fine.
Now the "which python3" has changed the path from 3.6 to the new 3.7.2
So everything seems fine, correct?
My question (of 2) is what's going on with the old python3.6 folder still in the applications folder. Can you just delete it safely? Why when you install a new version does it not at least ask you if you want to update or install and keep both versions?
Second question, how would you do this from the terminal?
I see the first step is to sudo to the root.
I've forgotten the rest.
But from the terminal, would this simply add the new version and leave
the older one like the package installer?
It's pretty simple to use the package installer and then delete a folder.
So, thanks in advance. I'm new to python and have not much confidence
using the terminal and all the powerful shell commands.
And yeah I see all the Brew enthusiasts. I DON'T want to use Brew for the moment.
The python snakes nest of pathways is a little confusing, for the moment.
I don't want to get lost with a zillion pathways from Brew because it's
confusing for the moment.
I love Brew, leave me alone. | 2019/03/06 | [
"https://Stackoverflow.com/questions/55013809",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9291766/"
Yes, you can install Python 3.7 or Python 3.8 using the installer that you can download from [python.org](https://www.python.org/downloads/). It doesn't automatically delete the older version, so you can keep using the older version.
For example, if you have `python3.7` and `python3.8`, you can run either one on your terminal.
On the other hand, it is quite easy to install using Homebrew; you can follow the instructions in this [article on how to install Python3 on MacOS](https://jun711.github.io/devops/how-to-install-python3-on-mac-os/#homebrew) | Each version of the Python installation is independent of the others. So it's safe to delete the version you don't want, but be cautious of this because it can lead to broken dependencies :-).
You can run any version by adding the specific version, i.e. `$ python3.6` or `$ python3.7`.
The best approach is to use virtual environments for your projects to enhance consistency; see pipenv. | 2,085
32,736,350 | I did find quite a lot about this error, but somehow none of the suggested solutions resolved the problem.
I am trying to use JNA bindings for libgphoto2 under Ubuntu in Eclipse (moderate experience with Java on Eclipse, none whatsoever on Ubuntu, I'm afraid). The bindings in question I want to use are here:
<http://angryelectron.com/projects/libgphoto2-jna/>
I followed the steps described on that page, and made a simple test client that failed with the above error. So I reduced the test client until the only thing I tried to do was to instantiate a GPhoto2 object, which still produced the error. The test client looks like this:
```
import com.angryelectron.gphoto2.*;
public class test_class
{
public static void main(String[] args)
{
GPhoto2 cam = new GPhoto2();
}
}
```
The errors I get take up considerably more space:
```
Exception in thread "main" java.lang.NoClassDefFoundError: com/sun/jna/Structure
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at test_class.main(test_class.java:12)
Caused by: java.lang.ClassNotFoundException: com.sun.jna.Structure
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 13 more
```
libgphoto2 itself is installed, it runs from the command line, I even have the development headers and am able to call GPhoto2 functions from python, so the problem can't be located there.
When I looked at the .class files in Eclipse, however, they didn't have any definitions. So I figured that might be the problem, especially since there was an error when building the whole thing with ant (although the .jar was successfully exported; from what I could make out, the error concerned only the generation of documentation).
So I loaded the source into Eclipse and built the .jar myself. On that occasion Eclipse stated there were warnings during the build (though no errors), but didn't show me the actual warnings. If anyone could tell me where the hell the build log went, that might already help something. I searched for it everywhere without success, and if I click on "details" in Eclipse it merely tells me where the warnings occurred, not what they were.
Be that as it may, a warning isn't necessarily devastating, so I imported the resulting Jar into the above client. I checked the .class files, this time they contained all the code. But I still get the exact same list of errors (yes, I have made very sure that the old library was removed from the classpath and the new ones added. I repeated the process several times, just in case).
Since I don't have experience with building jars, I made a small helloworld jar, just to see if I could call that from another program or if I'd be getting similar errors. It worked without a hitch. I even tried to reproduce the problem deliberately by exporting it with various options, but it still worked. I tried re-exporting the library I actually need with the settings that had worked during my experiment, but they still wouldn't run. I'm pretty much stuck by now. Any hints that help me resolve the problem would be greatly appreciated. | 2015/09/23 | [
"https://Stackoverflow.com/questions/32736350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4428658/"
] | In addition to what @Paul Whelan has said, you might have better luck by just getting the missing jar directly.
Get the missing library [here](https://github.com/java-native-access/jna), set the classpath, and then re-run the application to see whether it runs fine. | What version of Java are you using? `com/sun/jna/Structure` may only work with certain JVMs.
In general, packages such as sun.*, that are outside of the Java platform, can be different across OS platforms (Solaris, Windows, Linux, Macintosh, etc.) and can change at any time without notice with SDK versions (1.2, 1.2.1, 1.2.3, etc). Programs that contain direct calls to the sun.* packages are not 100% Pure Java.
More details [here](http://www.oracle.com/technetwork/java/faq-sun-packages-142232.html) | 2,086 |
45,384,065 | I am looking for a way to run a method every second, regardless of how long it takes to run. In looking for help with that, I ran across
[Run certain code every n seconds](https://stackoverflow.com/questions/3393612/run-certain-code-every-n-seconds)
and in trying it, found that it doesn't work correctly. It appears to have the very problem I'm trying to avoid: drift. I tried adding a `sleep(0.5)` after the print, and while it does in fact slow down the loop, the interval stays at roughly 1.003 seconds.
Is there a way to fix this, to do what I want?
```
(venv) 20170728-153445 mpeck@bilbo:~/dev/whiskerlabs/aphid/loadtest$ cat a.py
import threading
import time
def woof():
threading.Timer(1.0, woof).start()
print "Hello at %s" % time.time()
woof()
(venv) 20170728-153449 mpeck@bilbo:~/dev/whiskerlabs/aphid/loadtest$ python a.py
Hello at 1501281291.84
Hello at 1501281292.85
Hello at 1501281293.85
Hello at 1501281294.85
Hello at 1501281295.86
Hello at 1501281296.86
Hello at 1501281297.86
Hello at 1501281298.87
Hello at 1501281299.87
Hello at 1501281300.88
Hello at 1501281301.88
Hello at 1501281302.89
Hello at 1501281303.89
Hello at 1501281304.89
Hello at 1501281305.89
Hello at 1501281306.9
Hello at 1501281307.9
Hello at 1501281308.9
Hello at 1501281309.91
Hello at 1501281310.91
Hello at 1501281311.91
Hello at 1501281312.91
Hello at 1501281313.92
Hello at 1501281314.92
Hello at 1501281315.92
Hello at 1501281316.93
``` | 2017/07/29 | [
"https://Stackoverflow.com/questions/45384065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8217211/"
] | 1. Don't use a `threading.Timer` if you don't actually need a new thread each time; to run a function periodically, a `sleep` in a loop will do (possibly in a single separate thread).
2. Whatever method you use to schedule the next execution, don't wait for the exact amount of time you use as the interval - execution of the other statements takes time, so the result is drift, as you can see. Instead, write down the initial time in a variable, and at each iteration calculate the next time you want to schedule execution for, then sleep for the difference between now and then.
```
interval = 1.
next_t = time.time()
while True:
next_t += interval
time.sleep(next_t - time.time())
# do whatever you want to do
```
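The same loop can also be packaged as a small reusable helper. This is just a sketch: the injectable `clock`/`sleep` parameters and the stop-on-`False` convention are my additions (they make the scheduler easy to test), not part of the original recipe:

```python
import time

def every(interval, func, clock=time.monotonic, sleep=time.sleep):
    """Call func every `interval` seconds without accumulating drift."""
    next_t = clock()
    while True:
        next_t += interval
        delay = next_t - clock()
        if delay > 0:        # if the callback overran, skip sleeping and catch up
            sleep(delay)
        if func() is False:  # returning False stops the loop
            break
```

Because each deadline is derived from the original start time, a slow iteration delays that one call but the overall schedule never drifts.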
(of course you may refine it for better overall accuracy, but this at least should avoid drift) | I'm pretty sure the problem with that code is that it takes Python some time (apparently around .3s) to execute the call to your function `woof`, instantiate a new `threading.Timer` object, and print the current time. So basically, after your first call to the function, and the creation of a `threading.Timer`, Python waits exactly 1s, then calls the function `woof` (a decisecond or so), creates a new `Timer` object (yet another decisecond at least), and finally prints the current time with some delay.
The solution to actually run a program every second seems to be the Twisted library, as said on [this other post](https://stackoverflow.com/a/474570/8232125), but I didn't really try it myself...
**Edit:** I would mark the question as possible duplicate but I apparently don't have enough reputation to do that yet... If someone can be kind enough to do so with at least the link I provided, it would be cool :) | 2,088 |
6,686,576 | What I'm trying to achieve is playing a guitar chord from my Python application. I know (or can calculate) the frequencies in the chord if needed.
I'm thinking that even if I do the low-level legwork of producing multiple sine waves at the right frequencies, it won't sound right, because the envelope needs to be correct as well; otherwise it won't sound like a guitar so much as a hum.
Tantalisingly, the Linux sox command play can produce a pretty convincing individual note with:
`play -n synth 0 pluck E3`
So really what I'm asking is,
a) is it possible to shoehorn the play command to do a whole chord (ideally with slightly differing start times to simulate the plectrum string stroke)? I've not been able to do this, but maybe there's some bash fairy dust that'll fork a process or such so it sounds right. If this is possible I'd settle for just calling out to a bash command from my code (I don't like reinventing the wheel).
b) (even better) is there a way in Python of achieving this (a guitar chord sound)? I've seen a few accessible Python MIDI libraries, but frankly MIDI isn't a good fit for the sound I want, as far as I can tell. | 2011/07/13 | [
"https://Stackoverflow.com/questions/6686576",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/384388/"
] | a) The hackish way is to spawn a background subprocess to run each `play` command. Since a background subprocess doesn't make the shell wait for it to finish, you can have multiple `play`s running at once. Something like this would work:
```
for p in "C3" "E3" "G3"; do ( play -n synth 3 pluck $p & ); done
```
I see that ninjagecko posted basically the same thing as I'm writing this.
b) The key point to realize about MIDI data is that it's more like a high-level recipe for producing a sound, not the sound itself. In other words, each MIDI note is expressed as a pitch, a dynamic level, start and stop times, and assorted other metadata. The actual sound is produced by a synthesizer, and different synthesizers do the job with different levels of quality. If you don't like the sound you're getting from your MIDI files, it's not a problem with MIDI, it's a problem with your synthesizer, so you just need to find a better one. (In practice, that usually takes $$$; most free or cheap synthesizers are pretty bad.)
An alternative would be to actually dig under the hood, so to speak, and implement an algorithm to create your own guitar sound. For that you'd want to look into [digital signal processing](http://en.wikipedia.org/wiki/Digital_signal_processing), in particular something like the [Karplus-Strong algorithm](http://en.wikipedia.org/wiki/Karplus-Strong_string_synthesis) (one of many ways to create a synthetic plucked string sound). It's a fascinating subject, but if your only exposure to sound synthesis is at the level of `play` and creating MIDI files, you'd have a bit of learning to do. Additionally, Python probably isn't the best choice of language, since execution speed is pretty critical.
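To give a flavour of what that involves, here is a bare-bones Karplus-Strong pluck in pure Python. This is only a sketch of the algorithm — the decay constant is a plausible guess, and real use would write the samples out as a WAV file or stream them to an audio device:

```python
import random

def karplus_strong(freq, sample_rate=44100, duration=1.0, decay=0.996):
    """Synthesize a plucked-string tone as a list of floats in [-1, 1]."""
    period = int(sample_rate / freq)  # delay-line length sets the pitch
    buf = [random.uniform(-1, 1) for _ in range(period)]  # initial noise burst
    out = []
    for i in range(int(sample_rate * duration)):
        # average two adjacent samples (a crude low-pass filter) and feed back
        buf[i % period] = decay * 0.5 * (buf[i % period] + buf[(i + 1) % period])
        out.append(buf[i % period])
    return out
```

Summing several such sample lists (with a small start offset per string) approximates a strummed chord.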
If you're curious about DSP, you might want to download and play with [ChucK](http://chuck.cs.princeton.edu/). | *a) is it possible to shoehorn the play command to do a whole chord... ?*
If your sound architecture supports it, you can run multiple commands that output audio at the same time. If you're using ALSA, you need dmix or other variants in your `~/.asoundrc`. Use `subprocess.Popen` to spawn many child processes. If this were hypothetically a bash script, you could do:
```
command1 &
command2 &
...
```
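The Python equivalent of the above can be sketched with `subprocess` — this assumes SoX's `play` is on the `PATH`, and the small `delay` between spawns (my addition) imitates the plectrum stroke:

```python
import subprocess
import time

def pluck_command(note, duration=3):
    """Build the argv for one sox `play` pluck."""
    return ["play", "-n", "synth", str(duration), "pluck", note]

def strum(notes, duration=3, delay=0.05, spawn=subprocess.Popen):
    """Start one `play` process per note, slightly staggered, then wait for all."""
    procs = []
    for note in notes:
        procs.append(spawn(pluck_command(note, duration)))
        time.sleep(delay)  # stagger the notes like a plectrum stroke
    for p in procs:
        p.wait()

# strum(["E3", "B3", "E4", "G#4", "B4", "E5"])   # an open E major chord
```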
*b) (even better) is there a way in python of achieving this (a guitar chord sound)?*
Compile to MIDI and output via a software synthesizer like FluidSynth. | 2,089 |
27,643,383 | I am trying to install the elastic beanstalk CLI on an EC2 instance (running AMI) using these instructions:
<http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-getting-started.html>
I have Python 2.7.9 installed, along with pip and eb. However, when I try to run eb I get the error below. It looks like it is still using Python 2.6. How do you fix that?
Thanks!
```
Traceback (most recent call last):
File "/usr/bin/eb", line 9, in <module>
load_entry_point('awsebcli==3.0.10', 'console_scripts', 'eb')()
File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 473, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 2568, in load_entry_point
return ep.load()
File "/usr/lib/python2.6/site-packages/pkg_resources/__init__.py", line 2259, in load
['__name__'])
File "/usr/lib/python2.6/site-packages/ebcli/core/ebcore.py", line 23, in <module>
from ..controllers.initialize import InitController
File "/usr/lib/python2.6/site-packages/ebcli/controllers/initialize.py", line 16, in <module>
from ..core.abstractcontroller import AbstractBaseController
File "/usr/lib/python2.6/site-packages/ebcli/core/abstractcontroller.py", line 21, in <module>
from ..core import io, fileoperations, operations
File "/usr/lib/python2.6/site-packages/ebcli/core/operations.py", line 762
vars = {n['OptionName']: n['Value'] for n in settings
^
SyntaxError: invalid syntax
``` | 2014/12/25 | [
"https://Stackoverflow.com/questions/27643383",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1536188/"
] | Pip is probably set up with Python 2.6 instead of Python 2.7.
```
pip --version
```
You can reinstall pip with Python 2.7, then reinstall the eb CLI:
```
pip uninstall awsebcli
wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py
pip install awsebcli
``` | The "smartest" solution for me was to install the python-dev tools:
sudo apt install python-dev
found here:
<http://ericbenson.azurewebsites.net/deployment-on-aws-elastic-beanstalk-for-ubuntu/> | 2,092 |
69,437,836 | I was trying to make a program that can classify runways and taxiways using Mask R-CNN. After importing a custom dataset in JSON format, I am getting a KeyError.
```
class CustomDataset(utils.Dataset):
def load_custom(self, dataset_dir, subset):
"""Load a subset of the Horse-Man dataset.
dataset_dir: Root directory of the dataset.
subset: Subset to load: train or val
"""
# Add classes. We have only one class to add.
self.add_class("object", 1, "runway")
self.add_class("object", 2, "taxiway")
# self.add_class("object", 3, "xyz") #likewise
# Train or validation dataset?
assert subset in ["trainn", "vall"]
dataset_dir = os.path.join(dataset_dir, subset)
# Load annotations
# VGG Image Annotator saves each image in the form:
# { 'filename': '28503151_5b5b7ec140_b.jpg',
# 'regions': {
# '0': {
# 'region_attributes': {},
# 'shape_attributes': {
# 'all_points_x': [...],
# 'all_points_y': [...],
# 'name': 'polygon'}},
# ... more regions ...
# },
# 'size': 100202
# }
# We mostly care about the x and y coordinates of each region
annotations1 = json.load(open(os.path.join(dataset_dir, "f11_json.json")))
# print(annotations1)
annotations = list(annotations1.values()) # don't need the dict keys
# The VIA tool saves images in the JSON even if they don't have any
# annotations. Skip unannotated images.
annotations = [a for a in annotations if a['regions']]
# Add images
for a in annotations:
# print(a)
            # Get the x, y coordinates of points of the polygons that make up
            # the outline of each object instance. These are stored in the
# shape_attributes (see json format above)
polygons = [r['shape_attributes'] for r in a['regions']]
objects = [s['region_attributes']['names'] for s in a['regions']]
print("objects:",objects)
name_dict = {"runway": 1,"taxiway": 2} #,"xyz": 3}
# key = tuple(name_dict)
num_ids = [name_dict[a] for a in objects]
# num_ids = [int(n['Event']) for n in objects]
# load_mask() needs the image size to convert polygons to masks.
# Unfortunately, VIA doesn't include it in JSON, so we must read
# the image. This is only managable since the dataset is tiny.
print("numids",num_ids)
image_path = os.path.join(dataset_dir, a['filename'])
image = skimage.io.imread(image_path)
height, width = image.shape[:2]
self.add_image(
"object", ## for a single class just add the name here
image_id=a['filename'], # use file name as a unique image id
path=image_path,
width=width, height=height,
polygons=polygons,
num_ids=num_ids
)
def load_mask(self, image_id):
"""Generate instance masks for an image.
Returns:
masks: A bool array of shape [height, width, instance count] with
one mask per instance.
class_ids: a 1D array of class IDs of the instance masks.
"""
# If not a Horse/Man dataset image, delegate to parent class.
image_info = self.image_info[image_id]
if image_info["source"] != "object":
return super(self.__class__, self).load_mask(image_id)
# Convert polygons to a bitmap mask of shape
# [height, width, instance_count]
info = self.image_info[image_id]
if info["source"] != "object":
return super(self.__class__, self).load_mask(image_id)
num_ids = info['num_ids']
mask = np.zeros([info["height"], info["width"], len(info["polygons"])],
dtype=np.uint8)
for i, p in enumerate(info["polygons"]):
# Get indexes of pixels inside the polygon and set them to 1
rr, cc = skimage.draw.polygon(p['all_points_y'], p['all_points_x'])
mask[rr, cc, i] = 1
# Return mask, and array of class IDs of each instance. Since we have
# one class ID only, we return an array of 1s
# Map class names to class IDs.
num_ids = np.array(num_ids, dtype=np.int32)
return mask, num_ids #np.ones([mask.shape[-1]], dtype=np.int32)
def image_reference(self, image_id):
"""Return the path of the image."""
info = self.image_info[image_id]
if info["source"] == "object":
return info["path"]
else:
super(self.__class__, self).image_reference(image_id)
```
error
```
objects: ['runway', 'runway', 'taxiway', 'taxiway', 'taxiway',
'taxiway', 'taxiway']
numids [1, 1, 2, 2, 2, 2, 2]
objects: ['runway', 'runway', 'taxiway', 'taxiway']
numids [1, 1, 2, 2]
error
<ipython-input-8-fac8e3d87b86> in <listcomp>(.0)
45 # shape_attributes (see json format above)
46 polygons = [r['shape_attributes'] for r in a['regions']]
---> 47 objects = [s['region_attributes']['names'] for s in a['regions']]
48 print("objects:",objects)
49 name_dict = {"runway": 1,"taxiway": 2} #,"xyz": 3}
KeyError: 'names'
```
I have tried all possible changes but am still getting the same error. Basically, I am doing image classification on a custom dataset, for which I imported the dataset's JSON annotations file. | 2021/10/04 | [
"https://Stackoverflow.com/questions/69437836",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16702137/"
] | I think it should be `name`, not `names`, based on the file format in the comment:
```
{ 'filename': '28503151_5b5b7ec140_b.jpg',
'regions': {
'0': {
'region_attributes': {},
'shape_attributes': {
'all_points_x': [...],
'all_points_y': [...],
'name': 'polygon'}},
... more regions ...
},
'size': 100202
}
```
>
> `'name': 'polygon'}},`
>
>
> | I resolved this error by rechecking my annotations in the VGG tool and found that I had double-labeled (wrongly labeled) two files.
So my suggestion is to recheck all files in the VGG Annotation Tool, looking for missing or multiply-labelled files.
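One way to make such problems surface immediately is a small tolerant lookup in the loader — a sketch (the `name`/`names` keys are taken from the traceback above) that reports exactly which region is missing its label:

```python
def region_label(region, keys=("name", "names")):
    """Return the label of a VIA region, whichever attribute key it uses."""
    attrs = region.get("region_attributes", {})
    for key in keys:
        if key in attrs:
            return attrs[key]
    raise KeyError("region has no label under %s: %r" % (keys, attrs))

# in load_custom():  objects = [region_label(r) for r in a['regions']]
```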
Thanks | 2,095 |
13,352,296 | The following works and returns a list of all users
```
ldapsearch -x -b "ou=lunchbox,dc=office,dc=lbox,dc=com" -D "OFFICE\Administrator" -h ad.office.lbox.com -p 389 -W "(&(objectcategory=person)(objectclass=user))"
```
I'm trying to do the same in Python and I'm getting `Invalid credentials`
```
#!/usr/bin/env python
import ldap
dn = "cn=Administrator,dc=office,dc=lbox,dc=com"
pw = "**password**"
con = ldap.initialize('ldap://ad.office.lbox.com')
con.simple_bind_s( dn, pw )
base_dn = 'ou=lunchbox,dc=office,dc=lbox,dc=com'
filter = '(objectclass=person)'
attrs = ['sn']
con.search_s( base_dn, ldap.SCOPE_SUBTREE, filter, attrs )
```
Any suggestions to make this work would be great. I'm trying to learn `python-ldap`. Thanks!
EDIT
This is the full error I get:
```
`ldap.INVALID_CREDENTIALS: {'info': '80090308: LdapErr: DSID-0C0903A9, comment: AcceptSecurityContext error, data 52e, v1db1', 'desc': 'Invalid credentials'}`
```
The `LDAP` server is an Active Directory on Windows Server 2008 R2 | 2012/11/12 | [
"https://Stackoverflow.com/questions/13352296",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1039166/"
] | You're using different credentials for the bind from the command line and the python script.
The command line is using the bind dn of `OFFICE\Administrator` while the script is using the bind dn of `cn=Administrator,dc=office,dc=lbox,dc=com`
On Active Directory, the built-in account `Administrator` doesn't reside at the top-level of the `AD` forest, it typically resides under at least the `Users` `OU`, so the dn you *probably* should be using is: `CN=Administrator,CN=Users,dc=office,dc=lbox,dc=com`.
The easiest way to find the proper entry for the user is to actually use the account name in a search from the command line, e.g.
```
ldapsearch -x -b "ou=lunchbox,dc=office,dc=lbox,dc=com" -D "OFFICE\Administrator" -h ad.office.lbox.com -p 389 -W '(samaccountname=Administrator)' dn
```
and use the `dn` returned from the command-line query in your Python code as the `dn` for the bind. | The python-ldap library does not parse the user name, and neither does ldapsearch. In your code, simply use the same username `OFFICE\Administrator` and let Active Directory handle it.
Also, it is not uncommon for Active Directory to refuse a simple bind over plain LDAP; you must use LDAPS. Add this line to bypass certificate checking:
```
ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_NEVER)
```
So the whole code might look like this:
```
#!/usr/bin/env python
import ldap
dn = "OFFICE\Administrator"
pw = "**password**"
ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_NEVER)
con = ldap.initialize('ldaps://ad.office.lbox.com')
con.simple_bind_s( dn, pw )
base_dn = 'ou=lunchbox,dc=office,dc=lbox,dc=com'
filter = '(objectclass=person)'
attrs = ['sn']
con.search_s( base_dn, ldap.SCOPE_SUBTREE, filter, attrs )
``` | 2,096 |
53,157,921 | Please excuse my silly question as I am really new to python.
I have 20 different .txt files (e.g. `"myfile_%s"` with `s` an integer in range(1, 21)). So I load them as follows:
```
runs=range(1,21)
for i in runs:
Myfile=np.loadtxt("myfile_%s.txt" %i, delimiter=',', unpack=True)
```
Hence, they're being loaded into a variable of "float64" type.
I would like to load them into 20 different lists (so as to find the maximum value of each etc.).
Thank you in advance!
PS: I would be happy to hear any textbook recommendations for python beginners. | 2018/11/05 | [
"https://Stackoverflow.com/questions/53157921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10042405/"
] | You can split using your delimiter and load into a native python list:
```
my_files = []
for i in range(1,21):
    with open("myfile_{0}.txt".format(i), 'r') as f:
my_files.append(f.read().split(','))
```
Now you have a list of lists. You can get the max overall, or get the max of each list, like so:
```
# max of each
max_values = [max(map(float,my_list)) for my_list in my_files]
# max overall
max_overall = max(max_values)
``` | Are your lists of equal length? If yes, you can do everything in one numpy array:
```
a = np.zeros((20, 100))   # one row per file; assumes each file holds 100 values
for i in range(1, 21):
    a[i-1, :] = np.loadtxt("myfile_%s.txt" % i, delimiter=',', unpack=True)
```
Now you can do all `numpy` functions on the resulting array such as
```
b = np.sum(a,axis=0)
``` | 2,097 |
56,066,816 | I have several data frames (with equal # columns but different names). I'm trying to create one data frame with rows stacked below each other. I don't care now about the column names (I can always rename them later). I saw different SO links but they don't address this problem completely.
Note I have 21 data frames, so scalability is important. I was looking at
[this](https://stackoverflow.com/questions/45590866/python-pandas-concat-dataframes-with-different-columns-ignoring-column-names)
[![enter image description here](https://i.stack.imgur.com/U5W0x.jpg)](https://i.stack.imgur.com/U5W0x.jpg)
How I get df:
```
df = []
for f in files:
data = pd.read_csv(f, usecols = [0,1,2,3,4])
df.append(data)
``` | 2019/05/09 | [
"https://Stackoverflow.com/questions/56066816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9473446/"
] | Assuming your DataFrames are stored in some list `df_l`:
Rename the columns and concat:
```
df_l = [df1, df2, df3]
for df in df_l:
df.columns = df_l[0].columns # Just chose any DataFrame
pd.concat(df_l) # Columns named with above DataFrame
# Index is preserved
```
Or construct a new DataFrame:
```
pd.DataFrame(np.vstack([df.to_numpy() for df in df_l])) # Columns are RangeIndex
# Index is RangeIndex
``` | Once you put all the data frames into a list, try this code.
```
import pandas as pd

frames = [df1, df2, df3]           # don't reuse the name `df` for both the list and the loop variable
result = pd.DataFrame(columns=df1.columns)
for f in frames:
    f.columns = df1.columns        # rename() expects a mapping, so assign the columns directly
    result = pd.concat([result, f], ignore_index=True)
``` | 2,098 |
6,774,561 | I have Python code that processes a lot of Apache logs (decompressing, parsing, crunching numbers, regexping, etc.): one parent process takes a list of files (up to a few million) and sends files to parse to the workers, using a multiprocessing pool.
I wonder if there are any guidelines / benchmarks / advice which can help me to **estimate the ideal number of child processes**? I.e., is having one process per core better than launching a few hundred of them?
Currently 3/4 of the script's execution time is spent reading and decompressing files, and in terms of resources the CPU is 100% loaded, with memory and I/O being ok. So I assume there is a lot which can be done with proper multiprocessing settings. The script will be running on different machines / OSes, so OS-specific hints are welcome, too.
Also, is there any benefit in using threads rather than multiple processes? | 2011/07/21 | [
"https://Stackoverflow.com/questions/6774561",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/129289/"
] | >
> I wonder if there are any guidelines / benchmarks / advice which can help me to estimate the ideal number of child processes?
>
>
>
No.
>
> is having one process per core better than launching a few hundred of them?
>
>
>
You can never know *in advance*.
There are too many degrees of freedom.
You can only discover it empirically by running experiments until you get the level of performance you desire.
>
> Also, is there any benefit in using threads rather than multiple processes?
>
>
>
Rarely.
Threads don't help much. Multiple threads doing I/O will be locked up waiting while the process (as a whole) waits for the O/S to finish the I/O request.
Your operating system does a very, very good job of scheduling processes. When you have I/O intensive operations, you really want multiple processes. | I'll address the last question first. In CPython, it is next to impossible to make sizeable performance gains by distributing CPU-bound load across threads. This is due to the [Global Interpreter Lock](http://en.wikipedia.org/wiki/Global_Interpreter_Lock). In that respect [`multiprocessing`](http://docs.python.org/library/multiprocessing.html) is a better bet.
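A minimal sketch of that approach (the body of `parse` is only a placeholder for the real decompress/parse/crunch work):

```python
import multiprocessing

def parse(path):
    # stand-in for the real work: decompress, parse, regexp, count...
    return len(path)

if __name__ == "__main__":
    files = ["a.log.gz", "bb.log.gz", "ccc.log.gz"]
    # Pool() defaults to os.cpu_count() workers -- a sensible starting
    # point for CPU-bound work; tune the number empirically from there.
    with multiprocessing.Pool() as pool:
        results = pool.map(parse, files)
```

Swapping `multiprocessing.Pool` for `multiprocessing.dummy.Pool` gives a thread-based version with the same API, which makes the process-vs-thread experiment a one-line change.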
As to estimating the ideal number of workers, here is my advice: run some experiments with your code, your data, your hardware and a varying number of workers, and see what you can glean from that in terms of speedups, bottlenecks etc. | 2,101 |
28,191,221 | I used SQL to convert a social security number to an MD5 hash. I am wondering if there is a module or function in Python/pandas that can do the same thing.
My SQL script is:
```
CREATE OR REPLACE FUNCTION MD5HASH(STR IN VARCHAR2) RETURN VARCHAR2 IS
V_CHECKSUM VARCHAR2(32);
BEGIN
V_CHECKSUM := LOWER(RAWTOHEX(UTL_RAW.CAST_TO_RAW(SYS.DBMS_OBFUSCATION_TOOLKIT.MD5(INPUT_ST RING => STR))));
RETURN V_CHECKSUM;
EXCEPTION
WHEN NO_DATA_FOUND THEN
NULL;
WHEN OTHERS THEN
RAISE;
END MD5HASH;
SELECT HRPRO.MD5HASH('555555555') FROM DUAL
```
thanks.
I apologize, now that I read back over my initial question it is quite confusing.
I have a data frame that contains the following headings:
```
df[['ssno','regions','occ_ser','ethnicity','veteran','age','age_category']][:10]
```
Where `ssno` is personal information that I would like to convert to an MD5 hash and then add as a new column in the dataframe.
thanks... sorry for the confusion.
Right now I have to send my file to Oracle to convert the SSN to a hash, and then export it back out so that I can continue working with it in pandas. I want to eliminate this step. | 2015/01/28 | [
"https://Stackoverflow.com/questions/28191221",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2201603/"
] | Using the standard hashlib module:
```
import hashlib
hash = hashlib.md5()
hash.update('555555555')
print hash.hexdigest()
```
**output**
```
3665a76e271ada5a75368b99f774e404
```
As mentioned in timkofu's comment, you can also do this more simply, using
```
print hashlib.md5('555555555').hexdigest()
```
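Applied to the question's dataframe this becomes a per-row map (a sketch — note that on Python 3 the string must be encoded to bytes first, and `ssno` is assumed to hold strings):

```python
import hashlib

def md5hash(value):
    """Python twin of the SQL MD5HASH: lowercase hex digest of the input string."""
    return hashlib.md5(str(value).encode("utf-8")).hexdigest()

# with pandas:  df["ssno_md5"] = df["ssno"].map(md5hash)
```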
The `.update()` method is useful when you want to generate a checksum in stages. Please see the [hashlib documentation](https://docs.python.org/2/library/hashlib.html) (or the [Python 3 version](https://docs.python.org/3/library/hashlib.html)) for further details. | hashlib with `md5` might be of interest to you.
```
import hashlib
hashlib.md5("Nobody inspects the spammish repetition").hexdigest()
```
output:
```
bb649c83dd1ea5c9d9dec9a18df0ffe9
```
Constructors for hash algorithms that are always present in this module are `md5(), sha1(), sha224(), sha256(), sha384(), and sha512()`.
If you want a longer (and stronger) digest, then you may try the `sha` series.
output for `sha224`:
```
'a4337bc45a8fc544c03f52dc550cd6e1e87021bc896588bd79e901e2'
```
For more details: [hashlib](https://docs.python.org/2/library/hashlib.html) | 2,105 |
39,361,496 | I am a Python coder but recently started a foray into Java. I am trying to understand a specific piece of code but am running into difficulties which I believe are associated with not knowing Java too well, yet.
Something that stood out to me is that sometimes inside class definitions methods are called twice. I am wondering why that is? For example:
The following code is taken from a file called ApplicationCreator.java. I noticed that the public class ApplicationCreator essentially instantiates itself twice, or am I missing something here?
```
public class ApplicationCreator<MR> implements
IResourceObjectCreator<BinaryRuleSet<MR>> {
private String type;
public ApplicationCreator() {
this("rule.application");
}
public ApplicationCreator(String type) {
this.type = type;
}
```
So here are my questions:
1) Why would the class instantiate itself inside the class?
2) Why would it do so twice? Or is this a way to set certain parameters of the ApplicationCreator class to new values?
Any advice would be highly appreciated. | 2016/09/07 | [
"https://Stackoverflow.com/questions/39361496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2439540/"
] | The class is really not instantiating itself twice. Rather, the default constructor `ApplicationCreator()` (i.e. the one which takes no parameters) is simply calling the constructor which accepts an input string.
This ensures that an `ApplicationCreator` object will always have a type. When a type is not specified the default value `rule.application` will be used.
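Since you come from Python, a rough analogue (my sketch, not from the original code) is a default argument doing the job of the delegating constructor:

```python
class ApplicationCreator:
    # Java needs two constructors for this; in Python a default value suffices
    def __init__(self, type="rule.application"):
        self.type = type

ApplicationCreator()          # type is "rule.application"
ApplicationCreator("custom")  # type is "custom"
```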
This is an example of overloaded constructors. | Here this class has two constructors.
When a class and a "method" inside it have the same name, you can tell those are constructors.
Here the constructor is overloaded: based on the parameters passed, the class is instantiated accordingly, so the user has a choice based on need. | 2,106 |
4,088,471 | I have a dictionary in the view layer that I am passing to my templates. The dictionary values are (mostly) lists, although a few scalars also reside in the dictionary. The lists, if present, are initialized with None values.
The None values are being printed as 'None' in the template, so I wrote this little function to clean out the Nones before passing the dictionary of lists to the template. Since I am new to Python, I am wondering if there could be a more pythonic way of doing this?
```
# Clean the table up and turn Nones into ''
for k, v in table.items():
#debug_str = 'key: %s, value: %s' % (k,v)
#logging.debug(debug_str)
try:
for i, val in enumerate(v):
if val == None: v[i] = ''
except TypeError:
continue;
``` | 2010/11/03 | [
"https://Stackoverflow.com/questions/4088471",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/461722/"
] | Have you looked at `defaultdict` within collections? You'd have a dictionary formed via
```
defaultdict(list)
```
which initializes an empty list when a key is queried and that key does not exist. | ```
filtered_dict = dict((k, v) for k, v in table.items() if v is not None)
```
or in Python 2.7+, use the dictionary comprehension syntax:
```
filtered_dict = {k: v for k, v in table.items() if v is not None}
``` | 2,116 |
45,125,441 | I have a dataframe that has a column of boroughs visited (among many other columns):
```
Index User Boroughs_visited
0 Eminem Manhattan, Bronx
1 BrSpears NaN
2 Elvis Brooklyn
3 Adele Queens, Brooklyn
```
**I want to create a third column that shows which User visited Brooklyn**, so I wrote the slowest code possible in python:
```
df['Brooklyn']= 0
def borough():
for index,x in enumerate(df['Boroughs_visited']):
if pd.isnull(x):
continue
elif re.search(r'\bBrooklyn\b',x):
df_vols['Brooklyn'][index]= 1
borough()
```
Resulting in:
```
Index User Boroughs_visited Brooklyn
0 Eminem Manhattan, Bronx 0
1 BrSpears NaN 0
2 Elvis Brooklyn 1
3 Adele Queens, Brooklyn 1
```
**It took my computer 15 seconds to run this for 2000 rows. Is there a faster way of doing this?** | 2017/07/16 | [
"https://Stackoverflow.com/questions/45125441",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8005777/"
] | Let's use the `.str` accessor with `contains` and `fillna`:
```
df['Brooklyn'] = (df.Boroughs_visited.str.contains('Brooklyn') * 1).fillna(0)
```
Or another format of the same statement:
```
df['Brooklyn'] = df.Boroughs_visited.str.contains('Brooklyn').mul(1, fill_value=0)
```
Output:
```
Index User Boroughs_visited Brooklyn
0 0 Eminem Manhattan, Bronx 0
1 1 BrSpears NaN 0
2 2 Elvis Brooklyn 1
3 3 Adele Queens, Brooklyn 1
``` | You can get all Boroughs for the price of one
```
df.join(df.Boroughs_visited.str.get_dummies(sep=', '))
Index User Boroughs_visited Bronx Brooklyn Manhattan Queens
0 0 Eminem Manhattan, Bronx 1 0 1 0
1 1 BrSpears NaN 0 0 0 0
2 2 Elvis Brooklyn 0 1 0 0
3 3 Adele Queens, Brooklyn 0 1 0 1
```
But if you really, really just wanted Brooklyn
```
df.join(df.Boroughs_visited.str.get_dummies(sep=', ').Brooklyn)
Index User Boroughs_visited Brooklyn
0 0 Eminem Manhattan, Bronx 0
1 1 BrSpears NaN 0
2 2 Elvis Brooklyn 1
3 3 Adele Queens, Brooklyn 1
``` | 2,117 |
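One hedged caveat on both answers: a plain substring test like `str.contains('Brooklyn')` would also flag a hypothetical value such as "East Brooklynville", which the question's original `\bBrooklyn\b` regex deliberately avoids. The same word-boundary check in plain Python:

```python
import re

boroughs = ["Manhattan, Bronx", None, "Brooklyn", "Queens, Brooklyn", "East Brooklynville"]
flags = [
    0 if b is None else int(bool(re.search(r"\bBrooklyn\b", b)))
    for b in boroughs
]
print(flags)  # [0, 0, 1, 1, 0]
```

With pandas you can keep the word-boundary behavior by passing the same pattern, e.g. `df.Boroughs_visited.str.contains(r'\bBrooklyn\b', na=False)` (assuming the column from the question).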
13,409,559 | I'm trying to replace all single quotes with double quotes, but leave behind all escaped single quotes. Does anyone know a simple way to do this with python regexs?
```
Input:
"{ 'name': 'Skrillex', 'Genre':'Dubstep', 'Bass': 'Heavy', 'thoughts': 'this\'s ahmazing'}"
output:
"{ "name": "Skrillex", "Genre": "Dubstep", "Bass": "Heavy", "thoughts": "this\'s ahmazing"}"
``` | 2012/11/16 | [
"https://Stackoverflow.com/questions/13409559",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1432960/"
] | This is kind of...odd, but it may work. Remember to preface your string with `r` to denote a raw string so that we can remove the backslashes:
```
In [19]: s = r"{ 'name': 'Skrillex', 'Genre':'Dubstep', 'Bass': 'Heavy', 'thoughts': 'this\'s ahmazing'}"
In [20]: s.replace("\\'", 'REPLACEMEOHYEAH').replace("'", '"').replace('REPLACEMEOHYEAH', "\\'")
Out[20]: '{ "name": "Skrillex", "Genre":"Dubstep", "Bass": "Heavy", "thoughts": "this\'s ahmazing"}'
```
The `REPLACEMEOHYEAH` token is the placeholder, so it needs to be something that is not going to appear in your actual string. The response format looks like something that could be parsed in a more natural way, but if that isn't an option this should work. | 1. replace all the \' into a magic word
2. replace all the ' into "
3. replace all the magic words back to \' | 2,118 |
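A runnable sketch of the sentinel ("magic word") approach both answers describe, using `'\0'` here on the assumption that it never occurs in the real input:

```python
s = r"{ 'name': 'Skrillex', 'thoughts': 'this\'s ahmazing'}"

sentinel = "\0"  # assumed absent from the real data
out = s.replace("\\'", sentinel).replace("'", '"').replace(sentinel, "\\'")
print(out)  # { "name": "Skrillex", "thoughts": "this\'s ahmazing"}
```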
68,570,102 | Basically, I'm trying to build a code to get the largest number from the user's inputs. This is my 1st time using a for loop and I'm pretty new to python. This is my code:
```
session_live = True
numbers = []
a = 0
def largest_num(arr, n):
#Create a variable to hold the max number
max = arr[0]
#Using for loop for 1st time to check for largest number
for i in range(1, n):
if arr[i] > max:
max = arr[i]
#Returning max's value using return
return max
while session_live:
print("Tell us a number")
num = int(input())
numbers.insert(a, num)
a += 1
print("Continue? (Y/N)")
confirm = input()
if confirm == "Y":
pass
elif confirm == "N":
session_live = False
#Now I'm running the function
arr = numbers
n = len(arr)
ans = largest_num(arr, n)
print("Largest number is", ans)
else:
print(":/")
session_live = False
```
When I try running my code this is what happens:
```
Tell us a number
9
Continue? (Y/N)
Y
Tell us a number
8
Continue? (Y/N)
Y
Tell us a number
10
Continue? (Y/N)
N
Largest number is 9
```
Any fixes? | 2021/07/29 | [
"https://Stackoverflow.com/questions/68570102",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16420917/"
] | The error in your `largest_num` function is that it returns in the first iteration -- hence it will only return the larger of the first two numbers.
Using the builtin `max()` function makes life quite a bit easier; any time you reimplement a function that already exists, you're creating work for yourself and (as you've just discovered) it's another place for bugs to creep into your program.
Here's the same program using `max()` instead of `largest_num()`, and removing a few unnecessary variables:
```
numbers = []
while True:
print("Tell us a number")
numbers.append(int(input()))
print("Continue? (Y/N)")
confirm = input()
if confirm == "Y":
continue
if confirm == "N":
print(f"Largest number is {max(numbers)}")
else:
print(":/")
break
``` | I made it without using the built-in function 'max'.
It works by comparing each element in the for loop and updating the 'maxNum' variable whenever a larger number is found.
```py
numbers = []
while True:
print("Tell us a number")
numbers.append(int(input()))
print("Continue? (Y/N)")
confirm = input()
if confirm == "Y":
continue
if confirm == "N":
maxNum = numbers[0]
for i in numbers:
if i > maxNum:
maxNum = i
print("Largest number is", maxNum)
else:
print(":/")
break
``` | 2,119 |
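For completeness, a hedged sketch of the fix the first answer describes: the `return` must sit outside the loop, and tracking a running maximum means no index bookkeeping is needed at all:

```python
def largest_num(arr):
    # Start from the first element and only return after the loop finishes.
    largest = arr[0]
    for value in arr[1:]:
        if value > largest:
            largest = value
    return largest

print(largest_num([9, 8, 10]))  # 10
```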
5,633,067 | I have a pylons project where I need to update some in-memory structures periodically. This should be done on-demand. I decided to come up with a signal handler for this. User sends `SIGUSR1` to the main pylons thread and it is handled by the project.
This works, except that after handling the signal, the server crashes with the following exception:
```
File "/usr/lib/python2.6/SocketServer.py", line 264, in handle_request
fd_sets = select.select([self], [], [], timeout)
select.error: (4, 'Interrupted system call')
```
Is it possible to fix this?
TIA. | 2011/04/12 | [
"https://Stackoverflow.com/questions/5633067",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/408426/"
] | Yes, it is possible, but not easy using the stock Python libraries. This is due to Python translating all OS errors to exceptions. However, EINTR should really cause a retry of the system call used. Whenever you start using signals in Python you will see this error sporadically.
I have [code that fixes this](http://code.google.com/p/pycopia/source/browse/trunk/aid/pycopia/socket.py) (SafeSocket), by forking Python modules and adding that functionality. But it needs to be added everywhere system calls are used. So it's possible, but not easy. But you can use my open-source code, it may save you years of work. ;-)
The basic pattern is this (implemented as a system call decorator):
```
# decorator to make system call methods safe from EINTR
def systemcall(meth):
# have to import this way to avoid a circular import
from _socket import error as SocketError
def systemcallmeth(*args, **kwargs):
while 1:
try:
rv = meth(*args, **kwargs)
except EnvironmentError as why:
if why.args and why.args[0] == EINTR:
continue
else:
raise
except SocketError as why:
if why.args and why.args[0] == EINTR:
continue
else:
raise
else:
break
return rv
return systemcallmeth
```
You could also just use that around your select call. | A fix that at least works for me, from a [12-year-old python-dev list post](http://mail.python.org/pipermail/python-dev/2000-October/009671.html)
```
while True:
try:
readable, writable, exceptional = select.select(inputs, outputs, inputs, timeout)
except select.error, v:
if v[0] != errno.EINTR: raise
else: break
```
The details of the actual select line aren't important... your "fd\_sets = select.select([self], [], [], timeout)" line should work exactly the same.
The important bit is to check for EINTR and retry/loop if that is caught.
Oh, and don't forget to import errno. | 2,121 |
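A self-contained sketch of the retry-on-EINTR pattern both answers use. Note that since Python 3.5, PEP 475 makes the interpreter retry interrupted system calls automatically, so this matters mainly on Python 2.x and early 3.x; the `flaky` function below just simulates a single interruption:

```python
import errno

def retry_on_eintr(func, *args, **kwargs):
    # Keep retrying the call while it fails with EINTR; re-raise anything else.
    while True:
        try:
            return func(*args, **kwargs)
        except (OSError, IOError) as exc:
            if exc.errno != errno.EINTR:
                raise

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] == 1:
        raise OSError(errno.EINTR, "Interrupted system call")
    return "ok"

print(retry_on_eintr(flaky))  # ok
```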
57,507,832 | I'm facing an issue with allocating huge arrays in numpy on Ubuntu 18 while not facing the same issue on MacOS.
I am trying to allocate memory for a numpy array with shape `(156816, 36, 53806)`
with
```
np.zeros((156816, 36, 53806), dtype='uint8')
```
and while I'm getting an error on Ubuntu OS
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
numpy.core._exceptions.MemoryError: Unable to allocate array with shape (156816, 36, 53806) and data type uint8
```
I'm not getting it on MacOS:
```
>>> import numpy as np
>>> np.zeros((156816, 36, 53806), dtype='uint8')
array([[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
...,
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]]], dtype=uint8)
```
I've read somewhere that `np.zeros` shouldn't really allocate all the memory needed for the array, but only the memory for the non-zero elements. And the Ubuntu machine has 64gb of memory, while my MacBook Pro has only 16gb.
versions:
```
Ubuntu
os -> ubuntu mate 18
python -> 3.6.8
numpy -> 1.17.0
mac
os -> 10.14.6
python -> 3.6.4
numpy -> 1.17.0
```
PS: also failed on Google Colab | 2019/08/15 | [
"https://Stackoverflow.com/questions/57507832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5123537/"
] | I had this same problem on Window's and came across this solution. So if someone comes across this problem in Windows the solution for me was to increase the [pagefile](https://whatis.techtarget.com/definition/pagefile) size, as it was a Memory overcommitment problem for me too.
Windows 8
1. On the Keyboard Press the WindowsKey + X then click System in the popup menu
2. Tap or click Advanced system settings. You might be asked for an admin password or to confirm your choice
3. On the Advanced tab, under Performance, tap or click Settings.
4. Tap or click the Advanced tab, and then, under Virtual memory, tap or click Change
5. Clear the Automatically manage paging file size for all drives check box.
6. Under Drive [Volume Label], tap or click the drive that contains the paging file you want to change
7. Tap or click Custom size, enter a new size in megabytes in the initial size (MB) or Maximum size (MB) box, tap or click Set, and then tap or click OK
8. Reboot your system
Windows 10
1. Press the Windows key
2. Type SystemPropertiesAdvanced
3. Click Run as administrator
4. Under Performance, click Settings
5. Select the Advanced tab
6. Select Change...
7. Uncheck Automatically managing paging file size for all drives
8. Then select Custom size and fill in the appropriate size
9. Press Set then press OK then exit from the Virtual Memory, Performance Options, and System Properties Dialog
10. Reboot your system
Note: I did not have enough memory on my system for the ~282GB in this example, but for my particular case this worked.
**EDIT**
From [here](https://www.geeksinphoenix.com/blog/post/2016/05/10/how-to-manage-windows-10-virtual-memory.aspx) the suggested recommendations for page file size:
>
> There is a formula for calculating the correct pagefile size. Initial size is one and a half (1.5) x the amount of total system memory. Maximum size is three (3) x the initial size. So let's say you have 4 GB (1 GB = 1,024 MB x 4 = 4,096 MB) of memory. The initial size would be 1.5 x 4,096 = 6,144 MB and the maximum size would be 3 x 6,144 = 18,432 MB.
>
>
>
Some things to keep in mind from [here](https://www.computerhope.com/issues/ch001293.htm):
>
> However, this does not take into consideration other important factors and system settings that may be unique to your computer. Again, let Windows choose what to use instead of relying on some arbitrary formula that worked on a different computer.
>
>
>
Also:
>
> Increasing page file size may help prevent instabilities and crashing in Windows. However, a hard drive read/write times are much slower than what they would be if the data were in your computer memory. Having a larger page file is going to add extra work for your hard drive, causing everything else to run slower. Page file size should only be increased when encountering out-of-memory errors, and only as a temporary fix. A better solution is to adding more memory to the computer.
>
>
Changing the data type to one that uses less memory works. For me, I changed the data type to numpy.uint8:
```
data['label'] = data['label'].astype(np.uint8)
``` | 2,122 |
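A back-of-the-envelope check of why the allocation fails: `np.zeros` really does request the whole buffer, and for this shape and dtype that is roughly 283 GiB, more than either machine has, so the difference between Ubuntu and macOS comes down to how eagerly each OS commits the requested memory:

```python
shape = (156816, 36, 53806)
itemsize = 1  # uint8 = one byte per element

n_bytes = shape[0] * shape[1] * shape[2] * itemsize
print(n_bytes, "bytes =", round(n_bytes / 1024**3, 1), "GiB")
```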
10,643,982 | Is there a way in python to truncate the decimal part at 5 or 7 digits?
If not, how can I keep a float like e\*\*(-x) from getting too big?
Thanks | 2012/05/17 | [
"https://Stackoverflow.com/questions/10643982",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1308318/"
] | Either catch the `OverflowError` or use the `decimal` module. Python is not going to assume you were okay with the overflow.
```
>>> 0.0000000000000000000000000000000000000000000000000000000000000001**-30
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: (34, 'Result too large')
>>> d = decimal.Decimal(0.0000000000000000000000000000000000000000000000000000000000000001)
>>> d**-30
Decimal('1.000000000000001040827834994E+1920')
``` | The "Result too large" doesn't refer to the number of characters in the decimal representation of the number, it means that the number that resulted from your exponential function is large enough to overflow whatever type python uses internally to store floating point values.
You need to either use a different type to handle your floating point calculations, or rework your code so that e\*\*(-x) doesn't overflow or underflow. | 2,132
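To address the first half of the question directly, truncating (not rounding) at a fixed number of decimal digits, here is a hedged sketch with the `decimal` module:

```python
from decimal import Decimal, ROUND_DOWN

def truncate(x, places):
    # Quantize with ROUND_DOWN to drop everything past `places` digits.
    exponent = Decimal(1).scaleb(-places)  # e.g. Decimal('1E-5') for 5 places
    return Decimal(str(x)).quantize(exponent, rounding=ROUND_DOWN)

print(truncate(3.14159265, 5))   # 3.14159
print(truncate(2.718281828, 7))  # 2.7182818
```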
56,814,981 | the following code gives me the python error 'failed to parse' addon.xml:
(I've used an online checker and it says "error on line 33 at column 15: Opening and ending tag mismatch: description line 0 and extension" - which is the very end of the /extension end tag at the end of the document).
Any advice would be appreciated. This worked yesterday and I have no idea why it's not working at all
```
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<addon id="plugin.audio.criminalpodcast" name="Criminal Podcast" version="1.1.0" provider-name="leopheard">
<requires>
<import addon="xbmc.python" version="2.1.0"/>
<import addon="script.module.xbmcswift2" version="2.4.0"/>
<import addon="script.module.beautifulsoup4" version="4.3.1"/>
<import addon="script.module.requests" version="1.1.0"/>
<import addon="script.module.routing" version="0.2.0"/> </requires>
```
```
<provides>audio</provides> </extension>
<extension point="xbmc.addon.metadata">
<platform>all</platform>
<language></language>
<summary lang="en"></summary>
<description lang="en">description </description>
<license>The MIT License (MIT)</license>
<forum>https://forum.kodi.tv/showthread.php?tid=344790</forum>
<email>leopheard@gmail.com</email>
<source>https://github.com/leopheard/criminalpodcast</source>
<website>http://www.thisiscriminal.com</website>
<audio_guide></audio_guide>
<assets>
<icon>icon.png</icon>
<fanart>fanart.jpg</fanart>
<screenshot>resources/media/Criminal_SocialShare_2.png</screenshot>
<screenshot>resources/media/Criminal_SocialShare_3.png</screenshot>
<screenshot>resources/media/Radiotopia-logo.png</screenshot>
</assets>
``` | 2019/06/29 | [
"https://Stackoverflow.com/questions/56814981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11611598/"
] | Your "XML" file is not well-formed, so it cannot be parsed. Find out how it was created, correct the process so the problem does not occur again, and then regenerate the file.
Files that are vaguely XML-like but not well-formed are pretty well useless. Repair is sometimes possible if the errors are very systematic, but that doesn't appear to be the case here. | Most of the time a "failed to parse" error message is due to the XML file itself.
Check your XML file for correct formatting.
I once forgot the root tag and had the same error message. | 2,135 |
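If you want to pinpoint such errors from Python itself, the standard library parser reports the first well-formedness violation. The snippet here is a deliberately broken stand-in, since the original file is not repairable as posted:

```python
import xml.etree.ElementTree as ET

snippet = "<extension><description>description</extension>"  # mismatched closing tag

try:
    ET.fromstring(snippet)
except ET.ParseError as exc:
    print("Not well-formed:", exc)
```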
55,197,425 | Ok so here is what I am trying to achieve:
1. Call a URL with a list of dynamically filtered search results
2. Click on the first search result (5/page)
3. Scrape the headlines, paragraphs and images and store them as a json object in a separate file e.g.
{
"Title": "Headline element of the individual entry",
 "Content" : "Paragraphs and images in DOM order of the individual entry"
}
4. Navigate back to the search results overview page and repeat steps 2 - 3
5. After 5/5 results have been scraped go to the next page (click pagination link)
6. Repeat steps 2 - 5 until no entry is left
To visualize once more what is intended:
[![enter image description here](https://i.stack.imgur.com/QJPSA.png)](https://i.stack.imgur.com/QJPSA.png)
What I have so far is:
```
#import libraries
from selenium import webdriver
from bs4 import BeautifulSoup
#URL
url = "https://URL.com"
#Create a browser session
driver = webdriver.Chrome("PATH TO chromedriver.exe")
driver.implicitly_wait(30)
driver.get(url)
#click consent btn on destination URL ( overlays rest of the content )
python_consentButton = driver.find_element_by_id('acceptAllCookies')
python_consentButton.click() #click cookie consent btn
#Selenium hands the page source to Beautiful Soup
soup_results_overview = BeautifulSoup(driver.page_source, 'lxml')
for link in soup_results_overview.findAll("a", class_="searchResults__detail"):
#Selenium visits each Search Result Page
searchResult = driver.find_element_by_class_name('searchResults__detail')
searchResult.click() #click Search Result
#Ask Selenium to go back to the search results overview page
driver.back()
#Tell Selenium to click paginate "next" link
    #probably needs to be in a surrounding for loop?
paginate = driver.find_element_by_class_name('pagination-link-next')
paginate.click() #click paginate next
driver.quit()
```
**Problem**
The list count resets every time Selenium navigates back to the search results overview page
so it clicks the first entry 5 times, navigates to the next 5 items and stops
This is probably a predestined case for a recursive approach; not sure.
Any advice on how to tackle this issue is appreciated. | 2019/03/16 | [
"https://Stackoverflow.com/questions/55197425",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4536968/"
] | You can use only `requests` and `BeautifulSoup` to scrape, without Selenium. It will be much faster and will consume much less resources:
```
import json
import requests
from bs4 import BeautifulSoup
# Get 1000 results
params = {"$filter": "TemplateName eq 'Application Article'", "$orderby": "ArticleDate desc", "$top": "1000",
"$inlinecount": "allpages", }
response = requests.get("https://www.cst.com/odata/Articles", params=params).json()
# iterate 1000 results
articles = response["value"]
for article in articles:
article_json = {}
article_content = []
# title of article
article_title = article["Title"]
# article url
article_url = str(article["Url"]).split("|")[1]
print(article_title)
# request article page and parse it
article_page = requests.get(article_url).text
page = BeautifulSoup(article_page, "html.parser")
# get header
header = page.select_one("h1.head--bordered").text
article_json["Title"] = str(header).strip()
# get body content with images links and descriptions
content = page.select("section.content p, section.content img, section.content span.imageDescription, "
"section.content em")
# collect content to json format
for x in content:
if x.name == "img":
article_content.append("https://cst.com/solutions/article/" + x.attrs["src"])
else:
article_content.append(x.text)
article_json["Content"] = article_content
# write to json file
with open(f"{article_json['Title']}.json", 'w') as to_json_file:
to_json_file.write(json.dumps(article_json))
print("the end")
``` | You aren't using your link variable anywhere in your loop, just telling the driver to find the top link and click it. (When you use the singular find\_element selector and there are multiple results selenium just grabs the first one). I think all you need to do is replace these lines
```
searchResult = driver.find_element_by_class_name('searchResults__detail')
searchResult.click()
```
With
```
link.click()
```
Does that help?
OK.. with regard to the pagination you could use the following strategy since the 'Next' button disappears:
```
paginate = driver.find_element_by_class_name('pagination-link-next')
while paginate.is_displayed():
for link in soup_results_overview.findAll("a", class_="searchResults__detail"):
#Selenium visits each Search Result Page
        link.click() #click Search Result
#Scrape the form with a function defined elsewhere
scrape()
#Ask Selenium to go back to the search results overview page
driver.back()
#Click pagination button after executing the for loop finishes on each page
paginate.click()
``` | 2,136 |
34,771,191 | I just upgraded to the latest stable release of `matplotlib` (1.5.1) and everytime I import matplotlib I get this message:
```
/usr/local/lib/python2.7/dist-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
```
... which always stalls for a few seconds.
Is this the expected behaviour? Was it the same also before, but just without the printed message? | 2016/01/13 | [
"https://Stackoverflow.com/questions/34771191",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/497180/"
] | This worked for me on Ubuntu **16.04 LTS** with **Python 3.5.2 | Anaconda 4.2.0 (64-bit)**. I deleted all of the files in `~/.cache/matplotlib/`.
```
sudo rm -r fontList.py3k.cache tex.cache
```
At first I thought it wouldn't work, because I got the warning afterward. But after the cache files were rebuilt the warning went away. So close the session and reopen it, and the warning is gone. | This worked for me:
```
sudo apt-get install libfreetype6-dev libxft-dev
``` | 2,140 |
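If you would rather clear the cache from Python than from the shell, the pattern is just `shutil.rmtree` on the cache directory (matplotlib reports the real location via `matplotlib.get_cachedir()`; a temporary stand-in path is used here so the sketch is safe to run anywhere):

```python
import os
import shutil
import tempfile

# Stand-in for matplotlib.get_cachedir(); on a real system use that value.
cache_dir = os.path.join(tempfile.mkdtemp(), "matplotlib")
os.makedirs(cache_dir)
open(os.path.join(cache_dir, "fontList.cache"), "w").close()

shutil.rmtree(cache_dir)  # the cache is rebuilt on the next matplotlib import
print(os.path.exists(cache_dir))  # False
```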
68,616,659 | I am trying to find all instances of a number within an equation. And for that, I wrote this python script:
```
re.findall(fr"([\-\+\*\/\(]|^)({val})([\-\+\*\/\)]|$)", equation)
```
Now, when I give it this: `20+5-20`, and search for `20`, the output is as expected: `[('', '20', '+'), ('-', '20', '')]`
But, when I simply do `20+20-5`, it doesn't work anymore and I only get the first instance: `[('', '20', '+')]`
I don't understand why, it's not even a problem of `20` being at start and end, for example, this `5-20*4-20/3` will still match `20` very well. It just doesn't work when the value is repeated consecutively
how do I fix this?
Thank you | 2021/08/02 | [
"https://Stackoverflow.com/questions/68616659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8754028/"
] | The reason your pattern initially does not work for `20+20-5` is that the character class after matching the first occurrence of 20 actually consumes the `+`
After consuming it, for the second occurrence of 20 right after it, this part of the pattern `[\-\+\*\/\(]|^)` can not match as there is no character to match with the character class, and it is not at the start of the string.
Using 20 for example at the place of `{val}` you can use lookarounds, which do not consume the value but only assert that it is present.
Note that you don't have to escape the values in the character class, and for the last assertion you don't have to add another non capture group.
```
(?:(?<=[-+*/(])|^)20(?=[-+*/)]|$)
```
[Regex demo](https://regex101.com/r/dFXhl0/1)
```
import re
strings = [
"20+5-20",
"20+20-5"
]
val = 20
pattern = fr"(?:(?<=[-+*/(])|^){val}(?=[-+*/)]|$)"
for equation in strings:
print(re.findall(pattern, equation))
```
Output
```
['20', '20']
['20', '20']
``` | I suggest just searching for all numbers (integer + decimal) in your expression, and then filtering for certain values:
```py
inp = "20+5-20*3.20"
matches = re.findall(r'\d+(?:\.\d+)?', inp)
matches = [x for x in matches if x == '20']
print(matches) # ['20', '20']
```
Every number in your formula should *only* be surrounded by either arithmetic symbols, parentheses, or whitespace, all of which are non word characters. | 2,150 |
51,132,025 | I want to create a folder after an hour of the current time in python. I know how to get the current time and date and to create a folder. But how to create a folder at a time specified by me. Any help would be appreciated.
```
from datetime import datetime
from datetime import timedelta
import os
while True:
now = datetime.now ()
#print(now.strftime("%H:%M:%S"))
y = datetime.now () + timedelta (hours = 1)
#print(y.strftime("%H:%M:%S"))
if now== y:
os.makedirs (y.strftime ("%H/%M/%S"))
```
will this work?
EDIT :- I have to run the code continuously i.e. creating folders at every instant of time | 2018/07/02 | [
"https://Stackoverflow.com/questions/51132025",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10020438/"
] | Try this simple code
```
import os
import time
while True:
time.sleep(3600) # pending for 1 hour (3600 seconds)
os.makedirs(your directory) # create the directory
```
EDIT (using parallel programming)
```
import os
import time
from datetime import datetime
from multiprocessing import Pool
def create_folder(now):
# you can manipulate variable "now" as you wish
time.sleep(3600) # pending for 1 hour (3600 seconds)
os.makedirs(your directory) # create the directory
return
while True:
pool = Pool()
now = datetime.now()
result = pool.apply_async(create_folder, [now]) # asynchronously evaluate 'create_folder(now)'
```
this may consume many of your computer's resources | Check this post for a better explanation. You can create a function that will run after a given time, and use that function to create a folder with a simple one-line call:
`os.makedirs("path\directory name")`
[Python - Start a Function at Given Time](https://stackoverflow.com/questions/11523918/python-start-a-function-at-given-time?noredirect=1&lq=1) | 2,152 |
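A hedged alternative to the sleep loop: `threading.Timer` schedules the folder creation without blocking the rest of the program. The delay here is shortened from 3600 seconds so the sketch finishes quickly, and the path is a temporary stand-in:

```python
import os
import tempfile
import threading

target = os.path.join(tempfile.mkdtemp(), "hourly_folder")

def make_folder(path):
    os.makedirs(path, exist_ok=True)

timer = threading.Timer(0.1, make_folder, args=[target])  # use 3600 for one hour
timer.start()
timer.join()  # only needed here so the sketch can verify the result
print(os.path.isdir(target))  # True
```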
42,696,635 | I am trying to use the owlready library in Python. I downloaded the file from link(<https://pypi.python.org/pypi/Owlready>) but when I am importing owlready I am getting following error:
```
>>> from owlready import *
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named 'owlready'
```
I tried running:
```
pip install owlready
```
I am get the error:
```
error: could not create '/usr/local/lib/python3.4/dist-packages/owlready': Permission denied
``` | 2017/03/09 | [
"https://Stackoverflow.com/questions/42696635",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5879314/"
] | Try installing it using `pip` instead.
Run the command `pip install <module name here>` to do so. If you are using python3, run `pip3 install <module name here>`.
If neither of these work you may also try:
`python -m pip install <module name here>`
or
`python3 -m pip install <module name here>`
If you don't yet have `pip`, you should probably get it. It is a very commonly used Python package manager. [Here](https://stackoverflow.com/questions/4750806/how-do-i-install-pip-on-windows) are some details on how to set the tool up. | You need to install the library:
```
C:\PythonX.X\Scripts
pip install owlready
Successfully installed Owlready-0.3
``` | 2,154 |
69,969,792 | So, I have to write a code in python that will draw four squares under a function called draw\_square that will take four arguments: the canvas on which the square will be drawn, the color of the square, the side length of the square, and the position of the center of the square. This function should draw the square and return the handle of the square. The create\_rectangle method should only be used inside the draw\_square function. This is what I have so far:
```
from tkinter import*
root = Tk()
my_canvas = Canvas(root, width=900, height=900, bg="white")
my_canvas.pack(pady=30)
def draw_square():
draw_square.create_rectangle(0, 0, 150, 150, fill = "orange",
outline = "orange")
draw_square.create_rectangle(750, 0, 900, 150, fill = "green",
outline = "green")
draw_square.create_rectangle(0, 750, 150, 900, fill = "blue",
outline = "blue")
draw_square.create_rectangle(750, 750, 900, 900, fill = "black",
outline = "black")
draw_square()
```
Please let me know what to do so my code can work. | 2021/11/15 | [
"https://Stackoverflow.com/questions/69969792",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17414982/"
] | Use `my_canvas.create_rectangle(...)`.
You were calling a draw rectangle from your function rather than the canvas itself.
Extra info: [Tkinter Canvas creating rectangle](https://stackoverflow.com/questions/42039564/tkinter-canvas-creating-rectangle) | You need to do the following:
`my_canvas.create_rectangle(...)`
`my_canvas.pack()`
...
...
After you finish drawing and packing all 4 squares, you need to call the function like this:
`draw_square()`
`root.mainloop()` | 2,155
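Since the assignment wants `draw_square(canvas, color, side, center)` to return the handle, the only real work is converting a center point plus side length into the corner coordinates that `create_rectangle` expects. The geometry alone is shown here so it can run without a display; inside the real function you would call `canvas.create_rectangle(*square_coords(center, side), fill=color)` and return its result (the helper name is illustrative):

```python
def square_coords(center, side):
    # Corners for Canvas.create_rectangle: (x0, y0, x1, y1).
    cx, cy = center
    half = side / 2
    return (cx - half, cy - half, cx + half, cy + half)

print(square_coords((75, 75), 150))    # (0.0, 0.0, 150.0, 150.0)
print(square_coords((825, 825), 150))  # (750.0, 750.0, 900.0, 900.0)
```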
50,505,067 | I have a simple DAG
```
from airflow import DAG
from airflow.contrib.operators.bigquery_operator import BigQueryOperator
with DAG(dag_id='my_dags.my_dag') as dag:
start = DummyOperator(task_id='start')
end = DummyOperator(task_id='end')
sql = """
SELECT *
FROM 'another_dataset.another_table'
"""
bq_query = BigQueryOperator(bql=sql,
                                   destination_dataset_table='my_dataset.my_table20180524',
task_id='bq_query',
bigquery_conn_id='my_bq_connection',
use_legacy_sql=False,
write_disposition='WRITE_TRUNCATE',
create_disposition='CREATE_IF_NEEDED',
query_params={})
start >> bq_query >> end
```
When executing the `bq_query` task the SQL query gets saved in a sharded table. I want it to get saved in a daily partitioned table. In order to do so, I only changed `destination_dataset_table` to `my_dataset.my_table$20180524`. I got the error below when executing the `bq_task`:
```
Partitioning specification must be provided in order to create partitioned table
```
How can I tell BigQuery to save the query result to a daily partitioned table? My first guess was to use `query_params` in `BigQueryOperator`,
but I didn't find any example on how to use that parameter.
**EDIT:**
I'm using `google-cloud==0.27.0` python client ... and it's the one used in Prod :( | 2018/05/24 | [
"https://Stackoverflow.com/questions/50505067",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5715610/"
] | You first need to create an Empty partitioned destination table. Follow instructions here: [link](https://cloud.google.com/bigquery/docs/creating-column-partitions#creating_an_empty_partitioned_table_with_a_schema_definition) to create an empty partitioned table
and then run below airflow pipeline again.
You can try code:
```py
import datetime
from airflow import DAG
from airflow.contrib.operators.bigquery_operator import BigQueryOperator
today_date = datetime.datetime.now().strftime("%Y%m%d")
table_name = 'my_dataset.my_table' + '$' + today_date
with DAG(dag_id='my_dags.my_dag') as dag:
start = DummyOperator(task_id='start')
end = DummyOperator(task_id='end')
sql = """
SELECT *
FROM 'another_dataset.another_table'
"""
bq_query = BigQueryOperator(bql=sql,
                                   destination_dataset_table='{{ params.t_name }}',
task_id='bq_query',
bigquery_conn_id='my_bq_connection',
use_legacy_sql=False,
write_disposition='WRITE_TRUNCATE',
create_disposition='CREATE_IF_NEEDED',
query_params={'t_name': table_name},
dag=dag
)
start >> bq_query >> end
```
So what I did is that I created a dynamic table name variable and passed to the BQ operator. | The main issue here is that I don't have access to the new version of google cloud python API, the prod is using version [0.27.0](https://gcloud-python.readthedocs.io/en/stable/bigquery/usage.html).
So, to get the job done, I made something bad and dirty:
* saved the query result in a sharded table, let it be `table_sharded`
* got `table_sharded`'s schema, let it be `table_schema`
* saved `" SELECT * FROM dataset.table_sharded"` query to a partitioned table providing `table_schema`
All this is abstracted in one single operator that uses a hook. The hook is responsible for creating/deleting tables/partitions, getting the table schema and running queries on BigQuery.
Have a look at the [code](https://gist.github.com/MassyB/be4555a5fc8e6c433766d71e9d760f91). If there is any other solution, please let me know. | 2,156 |
69,795,302 | I am a beginner in python so please be gentle and if you do have an answer please provide details.
I just installed the most recent python version 3.10 after making sure to delete all previous installations (including anaconda). I am positive my system is clear of any prior installation.
after installing python 3.10 I open my terminal and run the following:
```
pip list
```
which outputs:
```
pip list
Package Version
---------- -------
pip 21.2.3
setuptools 57.4.0
```
Then I install pipenv
```
pip install pipenv
```
which outputs
```
WARNING: The script virtualenv-clone.exe is installed in 'C:\Users\Giulio\AppData\Roaming\Python\Python310\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script virtualenv.exe is installed in 'C:\Users\Giulio\AppData\Roaming\Python\Python310\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The scripts pipenv-resolver.exe and pipenv.exe are installed in 'C:\Users\Giulio\AppData\Roaming\Python\Python310\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed backports.entry-points-selectable-1.1.0 certifi-2021.10.8 distlib-0.3.3 filelock-3.3.2 pipenv-2021.5.29 platformdirs-2.4.0 six-1.16.0 virtualenv-20.10.0 virtualenv-clone-0.5.7
```
Finally:
```
pipenv
'pipenv' is not recognized as an internal or external command,
operable program or batch file.
```
Now I can see that the terminal spits out 3 warnings concerning paths not included in the PATH environment variable.
I don't understand why pipenv gets installed in user folders.
Indeed my python installation is in C:\Program Files (as I made sure to set up during installation):
```
where python
C:\Program Files\Python310\python.exe
```
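(Editor's aside, not part of the original question:) the per-user directories those pip warnings point at can be printed with the standard library, which helps when deciding what to add to PATH; a small sketch:

```python
import site
import sysconfig

# Per-user base directory: pip's --user installs land under this tree,
# e.g. %APPDATA%\Python on Windows or ~/.local on Linux.
print(site.getuserbase())

# Scripts directory of the current install scheme; on Windows this is a
# ...\Scripts folder like the one the warnings ask you to put on PATH.
print(sysconfig.get_path("scripts"))
```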
If I run:
```
python -m pipenv
```
pipenv does his thing.
So OK, I resolve to use it like this (even though all the tutorials make it look easy).
I proceed to create a virtual environment in a given folder
```
python -m pipenv shell
```
Everything works and I see the output:
```
Successfully created virtual environment!
Virtualenv location: C:\Users\Giulio\.virtualenvs\project-dhMbrBv2
```
Finally, I inspect the .virtualenvs related folder:
```
01/11/2021 10:58 <DIR> .
01/11/2021 10:58 <DIR> ..
01/11/2021 10:54 42 .gitignore
01/11/2021 10:54 38 .project
01/11/2021 10:58 0 contents.txt
01/11/2021 10:54 <DIR> Lib
01/11/2021 10:54 319 pyvenv.cfg
01/11/2021 10:54 <DIR> Scripts
4 File(s) 399 bytes
4 Dir(s) 660,409,012,224 bytes free
```
Now... shouldn't there be a BIN folder as well?
For instance I would like to set the interpreter in VSCode.
I cannot understand why I am getting all of these small inconsistencies.
Gladly appreciate any help!
EDIT (1):
So apparently there is no `\bin` folder because I am using windows:
In windows the `\Scripts` folder is created instead.
But the problem of pipenv not running without the preemptive call to python persists. | 2021/11/01 | [
"https://Stackoverflow.com/questions/69795302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5159404/"
] | You can refer to the highest-upvoted solution in this answer: [Windows reports error when trying to install package using pipenv](https://stackoverflow.com/questions/46041719/windows-reports-error-when-trying-to-install-package-using-pipenv/46041892#46041892)
Or refer to this GitHub issue on pipenv - <https://github.com/pypa/pipenv/issues/3101>
1. First, remove your current version of virtualenv: `pip uninstall virtualenv`
2. Then, remove your current version of pipenv: `pip uninstall pipenv`
3. When you are asked `Proceed (y/n)?`, just enter `y`. This will give you a clean slate.
4. Finally, you can once again install pipenv and its dependencies: `pip install pipenv`
5. Check installation with `pipenv --version` | I did follow the suggested steps, but it did not work.
Later, I added `C:\Users\xxxxxxx\AppData\Roaming\Python\Python310\Scripts` to the "PATH" environment variable and relaunched cmd.
It worked like a charm...
Note: during the installation itself, it warns you to add `C:\Users\xxxxxxx\AppData\Roaming\Python\Python310\Scripts` to the "PATH" environment variable | 2,159 |
20,590,331 | On my local PC I can do "python manage.py runserver" and the site runs perfectly, CSS and all. I just deployed the site to a public server and while most things work, CSS (and the images) are not loading into the templates.
I found some other questions with a similar issue, but my code did not appear to suffer from any of the same problems.
Within the Django project settings the same python function is being used to allow the app to see the templates and the static CSS / image files. The templates are being found by the views and are loading without issue.
Both from settings.py:
```
STATICFILES_DIRS = (
    os.path.join(os.path.dirname(__file__), 'templates/css').replace('\\','/'),
    os.path.join(os.path.dirname(__file__), 'content').replace('\\','/'),
)

TEMPLATE_DIRS = (
    os.path.join(os.path.dirname(__file__), 'templates').replace('\\','/'),
)
```
In the base.html file which the rest of the templates all extend:
```
<head>
    {% load staticfiles %}
    <link rel="stylesheet" type="text/css" href="{% static "style.css" %}" media="screen">
</head>
```
Directory structure:
```
|project_root/
|--manage.py
|--project/
| |--settings.py
| |--__init__.py
| |--content/
| | |--header.jpg
| |--templates/
| | |--base.html
| | |--css/
| | | |--style.css
```
My first thought when the CSS didn't load was that Django couldn't find the style.css file, but since I am using the same `os.path.dirname(__file__)` technique as with the templates, I am not sure this is the case.
What do I have wrong here?
Edit:
I neglected to mention that both the PC and server are running Python 2.7.5 and Django 1.5.5. | 2013/12/15 | [
"https://Stackoverflow.com/questions/20590331",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1803100/"
] | In WinForms (and even in WPF), only the thread that created a control can update it, so you should make your code thread-safe.
For this reason the runtime raises an InvalidOperationException with the message "Control *control name* accessed from a thread other than the thread it was created on", which is delivered wrapped in an AggregateException, because tasks wrap all their exceptions in an AggregateException.
You can use this code to iterate through all the exceptions in the AggregateException raised by the task:
```
try
{
    t.Wait();
}
catch (AggregateException ae)
{
    // Assume we know what's going on with this particular exception.
    // Rethrow anything else. AggregateException.Handle provides
    // another way to express this. See later example.
    foreach (var e in ae.InnerExceptions)
    {
        if (e is MyCustomException)
        {
            Console.WriteLine(e.Message);
        }
        else
        {
            throw;
        }
    }
}
```
To make your code thread-safe, just do something like this:
```
// If the calling thread is different from the thread that
// created the pictureBox control, this method creates a
// SetImageCallback and calls itself asynchronously using the
// Invoke method.

// This delegate enables asynchronous calls for setting
// the text property on a TextBox control.
delegate void SetPictureBoxCallback(Image image);

// If the calling thread is the same as the thread that created
// the PictureBox control, the Image property is set directly.
private void SetPictureBox(Image image)
{
    // InvokeRequired compares the thread ID of the
    // calling thread to the thread ID of the creating thread.
    // If these threads are different, it returns true.
    if (this.picturebox1.InvokeRequired)
    {
        SetPictureBoxCallback d = new SetPictureBoxCallback(SetPictureBox);
        this.Invoke(d, new object[] { image });
    }
    else
    {
        picturebox1.Image = image;
    }
}
``` | Another option for using a Task result on the calling thread is the `async`/`await` keywords. This way the compiler does the work of capturing the right `TaskScheduler` for you; see the code below. You need to add `try/catch` statements for exception handling.
This way the code is still asynchronous but reads like synchronous code; remember that code should be readable.
```
var _image = await Task<Image>.Factory.StartNew(InvertImage, TaskCreationOptions.LongRunning);
pictureBox1.Image = _image;
``` | 2,164 |
69,628,226 | I have made a browser with Python. I converted it into an exe file with pyinstaller, but its size is 109,426 KB!!! I need to upload it to some places, and they show "Please try to upload files under 25mb". What can I do? How can I turn this big exe into a file under 25 MB? | 2021/10/19 | [
"https://Stackoverflow.com/questions/69628226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15622728/"
] | If you have task that is re-run with the same "Execution Date", using Airflow Variables is your best choice. XCom will be deleted by definition when you re-run the same task with the same execution date and it won't change.
Basically, what you want to do is store the "state" of a task execution, which goes somewhat against Airflow's principle of idempotent tasks (re-running a task should produce the same "final" result every time you run it). You, on the other hand, want to store the task's state between re-runs and have it behave differently on subsequent re-runs, based on the stored state.
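To make that concrete (an illustrative sketch only, not Airflow code), persisting a small state object between re-runs boils down to reading and writing one JSON blob; here a local file stands in for the Variable or external store:

```python
import json
import pathlib

# Hypothetical state location; in practice this would be an Airflow Variable
# key or an S3 object named by some (dag_id, task_id, execution_date) convention.
STATE_PATH = pathlib.Path("my_task_state.json")

def load_state(default):
    if STATE_PATH.exists():
        return json.loads(STATE_PATH.read_text())
    return default

def save_state(state):
    STATE_PATH.write_text(json.dumps(state))

state = load_state(default={"runs": 0})
state["runs"] += 1  # a re-run sees the previous run's state and can branch on it
save_state(state)
```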
Another option you could use is to store the state in external storage (for example, an object in S3). This might be better performance-wise if you do not want to load your DB too much. You could come up with a naming "convention" for such a state object, pull it at the start of the task and push it when you finish. | You could use XComs with the `include_prior_dates` parameter. The [Docs](https://airflow.apache.org/docs/apache-airflow/stable/_api/airflow/models/taskinstance/index.html#airflow.models.taskinstance.TaskInstance.xcom_pull) state the following:
>
> **include\_prior\_dates** (bool) -- If False, only XComs from the current execution\_date are returned. If True, XComs from previous dates are returned as well.
>
>
>
(Default value is `False`)
Then you would do: `xcom_pull(task_ids='previous_task', include_prior_dates=True)`
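To illustrate what the flag buys you, here is a toy model of the lookup semantics (not Airflow's actual implementation):

```python
# XComs conceptually live in a store keyed by (task_id, execution_date).
xcoms = {
    ("previous_task", "2021-10-27"): "state-A",
    ("previous_task", "2021-10-28"): "state-B",
}

def toy_xcom_pull(task_id, execution_date, include_prior_dates=False):
    if not include_prior_dates:
        return xcoms.get((task_id, execution_date))
    # With include_prior_dates=True, fall back to the most recent value
    # at or before the current execution_date.
    hits = sorted((d, v) for (t, d), v in xcoms.items()
                  if t == task_id and d <= execution_date)
    return hits[-1][1] if hits else None

print(toy_xcom_pull("previous_task", "2021-10-29"))                            # None
print(toy_xcom_pull("previous_task", "2021-10-29", include_prior_dates=True))  # state-B
```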
I haven't tried out personally but looks like this may be a good solution to your case. | 2,166 |
68,653,388 | I want to replace the values in manifest.json. My manifest.json file looks like
```
{
"uat1": {
"database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"services1": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_services": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"Build_d": {
"artifact_version": "0.0.1",
"date": "sysdate"
}
},
"uat2": {
"database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"services1": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_services": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"Build_d": {
"artifact_version": "0.0.1",
"date": "sysdate"
}
}
}
```
Whenever there will be any update on uat1 database (or any other component), it will update the manifest file with version and sysdate. My output manifest.json will look like
```
{
"uat1": {
"database": {
"artifact_version": "12.0.3",
"date": "04/08/2021 19:50:14"
},
"services1": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_services": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"Build_d": {
"artifact_version": "0.0.1",
"date": "sysdate"
}
},
"uat2": {
"database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"services1": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_database": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"p_services": {
"artifact_version": "0.0.1",
"date": "sysdate"
},
"Build_d": {
"artifact_version": "0.0.1",
"date": "sysdate"
}
}
}
```
I am writing a python code, but values not getting properly displayed:
I am running python like **test.py 12.0.3 uat1 database**
My code looks like:
```
import sys
import json
from datetime import datetime

version = str(sys.argv[1])
env = str(sys.argv[2])
script = str(sys.argv[3])

now = datetime.now()
sdate = now.strftime("%d/%m/%Y %H:%M:%S")
print(sdate)
print("%s %s %s" % (version, env, script))

with open("C:/Users/lohapri/PycharmProjects/RFOS/manifest.json", "r") as f1:
    data = json.load(f1)
f1.close()
#print(data)
for k1, v1 in data.items():
    if k1 == env:
        for k2, v2 in v1.items():
            if k2 == script:
                v2['artifact_version'] = version
                v2['date'] = sdate
                print(v2)
with open("C:/Users/lohapri/PycharmProjects/RFOS/manifest.json", "w") as f2:
    for i in k1:
        json.dump(v2, f2, indent=4)
```
Th Output in manifest.json I am getting is:
```
{
"artifact_version": "0.0.1",
"date": "sysdate"
}{
"artifact_version": "0.0.1",
"date": "sysdate"
}{
"artifact_version": "0.0.1",
"date": "sysdate"
}{
"artifact_version": "0.0.1",
"date": "sysdate"
}
```
Please tell me how should I proceed. | 2021/08/04 | [
"https://Stackoverflow.com/questions/68653388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4632240/"
] | Just parse it, update the necessary value, and write it back to the file.
```
import json
import os
import tempfile

with open("manifest.json") as f:
    d = json.load(f)

d[env][script] = {"artifact_version": ..., "date": ...}

# Open the temp file in text mode, in the same directory as the target, so
# json.dump works and the final rename stays on the same filesystem.
with tempfile.NamedTemporaryFile("w", dir=".", delete=False) as f:
    try:
        json.dump(d, f)
    except Exception:
        raise
    else:
        os.rename(f.name, "manifest.json")
```
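A self-contained round trip of this parse-update-write pattern, using a throwaway directory and a tiny stand-in manifest (illustrative values only):

```python
import json
import os
import tempfile

manifest = {"uat1": {"database": {"artifact_version": "0.0.1", "date": "sysdate"}}}

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "manifest.json")
    with open(path, "w") as f:
        json.dump(manifest, f, indent=4)

    # Load, update one nested entry, then write the whole structure back.
    with open(path) as f:
        d = json.load(f)
    d["uat1"]["database"] = {"artifact_version": "12.0.3",
                             "date": "04/08/2021 19:50:14"}
    with open(path, "w") as f:
        json.dump(d, f, indent=4)

    with open(path) as f:
        updated = json.load(f)

print(updated["uat1"]["database"]["artifact_version"])  # 12.0.3
```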
If you aren't concerned about `manifest.json` being truncated before successfully writing the new data, you can reduce the third step to
```
with open("manifest.json", "w") as f:
json.dump(d, f)
``` | No, to 'edit' a `json` file, you have to load the whole file in with `data = json.load(f1)`, then perform the transform, then write the whole lot out again:
```py
with open("C:/Users/lohapri/PycharmProjects/RFOS/manifest.json", "r") as f1:
    data = json.load(f1)
# no close needed
#print(data)
for k1, v1 in data.items():
    if k1 == env:
        for k2, v2 in v1.items():
            if k2 == script:
                v2['artifact_version'] = version
                v2['date'] = sdate
                print(v2)
with open("C:/Users/lohapri/PycharmProjects/RFOS/manifest.json", "w") as f2:
    json.dump(data, f2, indent=4)
``` | 2,167 |
59,939,819 | I am trying to run Django unit tests in the VSCode Test Explorer, also, I want the CodeLens 'Run Tests' button to appear above each test.
[enter image description here](https://i.stack.imgur.com/kTTjN.png)
However, in the Test Explorer, When I press the Play button, an error displays:
"No Tests were Ran" [No Tests were Ran](https://i.stack.imgur.com/mMlI0.png)
My directory structure is:
* Workspace\_Folder
+ settings.json
+ repo
- python\_module\_1
* sub\_module
+ tests
- test\_a.py
I am using the unittest framework.
My Settings.json looks like this:
```
{
"python.pythonPath": "/Users/nbonilla/.local/share/virtualenvs/koku-iTLe243o/bin/python",
"python.testing.unittestArgs": [
"-v",
"-s",
"${workspaceFolder}/python_module_1/sub_module/"
],
"python.testing.pytestEnabled": false,
"python.testing.nosetestsEnabled": false,
"python.testing.unittestEnabled": true,
}
```
When I press the green "Play" button [Test Explorer Play Button](https://i.stack.imgur.com/oeJ8U.png)
The Python Test Log Output shows the message "Unhandled exception in thread started by"
[Unhandled Exception in thread started by](https://i.stack.imgur.com/04HUt.png)
I am using a pipenv virtual environment.
How do I run these Django Tests in the VSCode Test Explorer?
I saw that using pyTest is an alternative to unittest, how can this be set up easily as a replacement? | 2020/01/27 | [
"https://Stackoverflow.com/questions/59939819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12064691/"
] | Please consider the following checks:
1. you should have `__init__.py` in your test directory
2. in vscode on test configuration use pytest framework
3. use: `pip install pytest-django`
4. copy `pytest.ini` in the root with this content:
```
# -- FILE: pytest.ini (or tox.ini)
[pytest]
DJANGO_SETTINGS_MODULE = <your-web-project-name>.settings (like mysite.settings)
# -- recommended but optional:
python_files = tests.py test_*.py *_tests.py
```
Now it should work as you wish.
You can see [this stackoverflow link](https://stackoverflow.com/questions/55837922/vscode-pytest-test-discovery-fails) | I've been looking into this as well. The thing is that plain Python unittest, pytest, and nose are not drop-in alternatives to Django tests, because they would not be able to load everything Django's test runner does.
Django Test Runner might work for you:
<https://marketplace.visualstudio.com/items?itemName=Pachwenko.django-test-runner>
I was still having trouble with this since my project root does not directly contain my app(s), but judging by your project structure it may work for you. | 2,170 |
33,551,878 | I'm having a problem reading partitioned parquet files generated by Spark in Hive. I'm able to create the external table in Hive, but when I try to select a few rows, Hive returns only an "OK" message with no rows.
I'm able to read the partitioned parquet files correctly in Spark, so I'm assuming that they were generated correctly.
I'm also able to read these files when I create an external table in hive without partitioning.
Does anyone have a suggestion?
**My Environment is:**
* Cluster EMR 4.1.0
* Hive 1.0.0
* Spark 1.5.0
* Hue 3.7.1
* Parquet files are stored in a S3 bucket (s3://staging-dev/test/ttfourfieldspart2/year=2013/month=11)
**My Spark config file has the following parameters(/etc/spark/conf.dist/spark-defaults.conf):**
```
spark.master yarn
spark.driver.extraClassPath /etc/hadoop/conf:/etc/hive/conf:/usr/lib/hadoop/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop-mapreduce/*:/usr/lib/hadoop-yarn/*:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*
spark.driver.extraLibraryPath /usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native
spark.executor.extraClassPath /etc/hadoop/conf:/etc/hive/conf:/usr/lib/hadoop/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop-mapreduce/*:/usr/lib/hadoop-yarn/*:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*
spark.executor.extraLibraryPath /usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native
spark.eventLog.enabled true
spark.eventLog.dir hdfs:///var/log/spark/apps
spark.history.fs.logDirectory hdfs:///var/log/spark/apps
spark.yarn.historyServer.address ip-10-37-161-246.ec2.internal:18080
spark.history.ui.port 18080
spark.shuffle.service.enabled true
spark.driver.extraJavaOptions -Dlog4j.configuration=file:///etc/spark/conf/log4j.properties -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=512M -XX:OnOutOfMemoryError='kill -9 %p'
spark.executor.extraJavaOptions -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError='kill -9 %p'
spark.executor.memory 4G
spark.driver.memory 4G
spark.dynamicAllocation.enabled true
spark.dynamicAllocation.maxExecutors 100
spark.dynamicAllocation.minExecutors 1
```
**Hive config file has the following parameters(/etc/hive/conf/hive-site.xml):**
```
<configuration>
<!-- Hive Configuration can either be stored in this file or in the hadoop configuration files -->
<!-- that are implied by Hadoop setup variables. -->
<!-- Aside from Hadoop setup variables - this file is provided as a convenience so that Hive -->
<!-- users do not have to edit hadoop configuration files (that may be managed as a centralized -->
<!-- resource). -->
<!-- Hive Execution Parameters -->
<property>
<name>hbase.zookeeper.quorum</name>
<value>ip-10-xx-xxx-xxx.ec2.internal</value>
<description>http://wiki.apache.org/hadoop/Hive/HBaseIntegration</description>
</property>
<property>
<name>hive.execution.engine</name>
<value>mr</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://ip-10-xx-xxx-xxx.ec2.internal:8020</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://ip-10-xx-xxx-xxx.ec2.internal:9083</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://ip-10-xx-xxx-xxx.ec2.internal:3306/hive?createDatabaseIfNotExist=true</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>org.mariadb.jdbc.Driver</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>1R72JFCDG5XaaDTB</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>mapred.reduce.tasks</name>
<value>-1</value>
</property>
<property>
<name>mapred.max.split.size</name>
<value>256000000</value>
</property>
<property>
<name>hive.metastore.connect.retries</name>
<value>5</value>
</property>
<property>
<name>hive.optimize.sort.dynamic.partition</name>
<value>true</value>
</property>
<property><name>hive.exec.dynamic.partition</name><value>true</value></property>
<property><name>hive.exec.dynamic.partition.mode</name><value>nonstrict</value></property>
<property><name>hive.exec.max.dynamic.partitions</name><value>10000</value></property>
<property><name>hive.exec.max.dynamic.partitions.pernode</name><value>500</value></property>
</configuration>
```
**My python code that reads the partitioned parquet file:**
```
from pyspark import *
from pyspark.sql import *
from pyspark.sql.types import *
from pyspark.sql.functions import *
df7 = sqlContext.read.parquet('s3://staging-dev/test/ttfourfieldspart2/')
```
**The parquet file schema printed by Spark:**
```
>>> df7.schema
StructType(List(StructField(transactionid,StringType,true),StructField(eventts,TimestampType,true),StructField(year,IntegerType,true),StructField(month,IntegerType,true)))
>>> df7.printSchema()
root
|-- transactionid: string (nullable = true)
|-- eventts: timestamp (nullable = true)
|-- year: integer (nullable = true)
|-- month: integer (nullable = true)
>>> df7.show(10)
+--------------------+--------------------+----+-----+
| transactionid| eventts|year|month|
+--------------------+--------------------+----+-----+
|f7018907-ed3d-49b...|2013-11-21 18:41:...|2013| 11|
|f6d95a5f-d4ba-489...|2013-11-21 18:41:...|2013| 11|
|02b2a715-6e15-4bb...|2013-11-21 18:41:...|2013| 11|
|0e908c0f-7d63-48c...|2013-11-21 18:41:...|2013| 11|
|f83e30f9-950a-4b9...|2013-11-21 18:41:...|2013| 11|
|3425e4ea-b715-476...|2013-11-21 18:41:...|2013| 11|
|a20a6aeb-da4f-4fd...|2013-11-21 18:41:...|2013| 11|
|d2f57e6f-889b-49b...|2013-11-21 18:41:...|2013| 11|
|46f2eda5-408e-44e...|2013-11-21 18:41:...|2013| 11|
|36fb8b79-b2b5-493...|2013-11-21 18:41:...|2013| 11|
+--------------------+--------------------+----+-----+
only showing top 10 rows
```
**The create table in Hive:**
```
create external table if not exists t3(
transactionid string,
eventts timestamp)
partitioned by (year int, month int)
stored as parquet
location 's3://staging-dev/test/ttfourfieldspart2/';
```
**When I try to select some rows in Hive, it doesn't return any rows:**
```
hive> select * from t3 limit 10;
OK
Time taken: 0.027 seconds
hive>
``` | 2015/11/05 | [
"https://Stackoverflow.com/questions/33551878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5529573/"
] | I finally found the problem. When you create tables in Hive, where partitioned data already exists in S3 or HDFS, you need to run a command to update the Hive Metastore with the table's partition structure. Take a look here:
<https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RecoverPartitions(MSCKREPAIRTABLE)>
The commands are:

```
MSCK REPAIR TABLE table_name;
```

And on Hive running in Amazon EMR you can use:

```
ALTER TABLE table_name RECOVER PARTITIONS;
``` | Even though this question was answered already, the following point may also help users who are still not able to solve the issue just with `MSCK REPAIR TABLE table_name;`
I have an hdfs file system which is partitioned as below:
`<parquet_file>/<partition1>/<partition2>`
eg: `my_file.pq/column_5=test/column_6=5`
I created a hive table with partitions
eg:
```sql
CREATE EXTERNAL TABLE myschema.my_table(
`column_1` int,
`column_2` string,
`column_3` string,
`column_4` string
)
PARTITIONED BY (`column_5` string, `column_6` int) STORED AS PARQUET
LOCATION
'hdfs://u/users/iamr/my_file.pq'
```
After this, I repaired the schema partitions using the following command
`MSCK REPAIR TABLE myschema.my_table;`
After this it started working for me.
Another thing I noticed was that **while writing PARQUET files from Spark, you should name the columns in lower case**, otherwise Hive may not be able to map them. For me, after renaming the columns in the PARQUET file, it started working.
For example, `my_file.pq/COLUMN_5=test/COLUMN_6=5` didn't work for me,
but `my_file.pq/column_5=test/column_6=5` worked | 2,173 |
12,177,405 | Dear python 3 experts,
with python2, one could do the following (I know this is a bit hairy, but that's not the point here :p):
```
class A(object):
    def method(self, other):
        print self, other

class B(object): pass

B.method = types.MethodType(A().method, None, B)
B.method() # print both A and B instances
```
with python3, there is no more unbound methods, only functions. If I want the same behaviour, it sounds like I've to introduce a custom descriptor such as:
```
class UnboundMethod:
    """unbound method wrapper necessary for python3 where we can't turn
    arbitrary object into a method (no more unbound method and only function
    are turned automatically to method when accessed through an instance)
    """
    def __init__(self, callable):
        self.callable = callable

    def __get__(self, instance, objtype):
        if instance is None:
            return self.callable
        return types.MethodType(self.callable, instance)
```
so I can do :
```
B.method = UnboundMethodType(A().method)
B.method() # print both A and B instances
```
Is there any other way to do that without writing such descriptor ?
TIA | 2012/08/29 | [
"https://Stackoverflow.com/questions/12177405",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/853679/"
] | ```
B.method = lambda o: A.method(o,A())
b = B()
b.method()
```
The line `b.method()` then calls `A.method(b, A())`. This means a new `A` is instantiated on each call. To avoid this:
```
a = A()
B.method = lambda o: A.method(o,a)
```
Now every time you call `b.method()` on any instance of `B`, the same instance of `A` is passed as the second argument. | Well, your code doesn't work in Python 2 either, but I get what you are trying to do. And you can use lambda, as in Sheena's answer, or functools.partial.
```
>>> import types
>>> from functools import partial
>>> class A(object):
... def method(self, other):
... print self, other
...
>>> class B(object): pass
...
>>> B.method = partial(A().method, A())
>>> B().method()
<__main__.A object at 0x112f590> <__main__.A object at 0x1132190>
``` | 2,174 |
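(Editor's addition, hedged:) in Python 3, `functools.partialmethod` gives the same effect as the lambda/partial tricks above, and, being a descriptor, it binds the `B` instance automatically:

```python
from functools import partialmethod

class A:
    def method(self, other):
        return (self, other)

class B:
    pass

a = A()
# partialmethod pre-binds `a` into the `other` slot; the B instance that the
# method is looked up on fills the `self` slot.
B.method = partialmethod(A.method, a)

self_arg, other_arg = B().method()
print(type(self_arg).__name__, type(other_arg).__name__)  # B A
```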
46,395,273 | First post here at stack overflow. Please forgive my posting errors.
I have spent a lot of time at this. I started with the 500 server error.
The log states that the Python file was not found. My app is JS, CSS, and HTML only (at this point). I have included the yaml, because through my research I can't rule out errors there myself.
Pointers are greatly appreciated.
Thanks.
My `app.yaml`:
```
application: application
version: secureable
runtime: python27
api_version: 1
threadsafe: false

handlers:
- url: /(.*\.(gif|png|jpg|ico|js|css))
  static_files: \1
  upload: (.*\.(gif|png|jpg|ico|js|css))
- url: /robots.txt
  static_files: robots.txt
  upload: robots.txt
- url: .*
  script: main.py

inbound_services:
- mail
```
The error:
```
httpRequest: {
status: 500
0: {
logMessage: "File referenced by handler not found: main.py"
severity: "WARNING"
time: "2017-09-24T21:12:30.191830Z"
}
]
megaCycles: "2"
method: "GET"
requestId: resource: "/index.html"
startTime: "2017-09-24T21:12:30.138333Z"
status: 500
traceId: "618d060203d57aea2bfddc905e350698"
urlMapEntry: "main.py"
userAgent: "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:55.0) Gecko/20100101 Firefox/55.0"
versionId: "secureable"
}
receiveTimestamp: "2017-09-24T21:12:30.926277443Z"
resource: {
labels: {
module_id: "default"
project_id: "Application"
version_id: "secureable"
zone: "us9"
}
type: "gae_app"
}
severity: "WARNING"
timestamp: "2017-09-24T21:12:30.138333Z"
}
``` | 2017/09/24 | [
"https://Stackoverflow.com/questions/46395273",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4907940/"
] | If your app is only HTML, CSS, and JS, you can remove the catch-all pointer to the Python script all together and instead use an `app.yaml` format like the one shown in the [Hosting a Static Website on App Engine tutorial](https://cloud.google.com/appengine/docs/standard/python/getting-started/hosting-a-static-website#creating_the_appyaml_file):
```
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /
  static_files: www/index.html
  upload: www/index.html
- url: /(.*)
  static_files: www/\1
  upload: www/(.*)
```
Later if you want to add server-side logic with a Python module, you can add in a handler with a `script` associated with it. When you take that step, you use an import style pointer in the form of `[script_name].[var_pointing_to_wsgi_application_in_script]`. So if you have `main.py` and within that a variable called `application` that is set to your WSGI application, then you would use `script: main.application`.
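For reference (an editor's sketch, not from the tutorial), the smallest `main.py` satisfying that contract is a plain WSGI callable with no framework at all; `script: main.application` would point at it:

```python
# main.py
def application(environ, start_response):
    # `application` is the module-level variable that app.yaml's `script:` refers to.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from main.application\n"]
```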
Commonly a WSGI application is either webapp2 ([example](https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/appengine/standard/hello_world/main.py#L24)) or Flask ([example](https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/appengine/standard/flask/hello_world/main.py#L21)). | Your `script: main.py` statement in the `handlers` section of the `app.yaml` file is wrong, it should be `script: main.app`.
From the `script` row in the [Handlers element](https://cloud.google.com/appengine/docs/standard/python/config/appref#handlers_element) table (sadly not properly formatted, including the quote from the page source to make it readable):
>
> **script**
>
>
> A `script:` directive must be a python import path, for example,
> `package.module.app` that points to a WSGI application. The last
> component of a `script:` directive using a **Python module** path is
> the name of a global variable in the module: that variable must be a
> WSGI app, and is usually called `app` by convention.
>
>
> | 2,175 |
61,206,895 | the python script does execute well manually through the terminal:
```
sudo python3 /home/pi/Documents/AlarmClock/alarm.py
```
but it does not work automatically by the crontab. Here is the cronjob (crontab -e) in the /tmp/crontab.iGf7md/crontab file:
```
32 13 2 * * sudo python3 /home/pi/Documents/AlarmClock/alarm.py
```
The alarm.py script contains no print command. It only lights up an LED strip connected to the GPIO pin, which works fine.
Does anyone know my mistake? | 2020/04/14 | [
"https://Stackoverflow.com/questions/61206895",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You can use `array_keys` with a search value ([PHP Doc](https://www.php.net/manual/en/function.array-keys.php))
[Demo](https://3v4l.org/kfTZH)
```
array_keys($arr,3)
```
---
>
> `array_keys()` returns the keys, numeric and string, from the array.
>
>
> If a search\_value is specified, then only the keys for that value are
> returned. Otherwise, all the keys from the array are returned.
>
>
> | With `array_filter` you can build more complex filters. Here every value is compared to the number three (the `===` operator); elements that fail the comparison are dropped, and the keys of the surviving elements are the indexes you are after.
```
$a = [1,2,3,4,3,3,5,6];
$threes = array_filter($a, function($v) {
    return $v === 3; // keep only the elements whose value is exactly 3
});
```
`$threes` is an array whose keys are the indexes that hold the value 3 (`array_filter` preserves keys), so `array_keys($threes)` gives you those indexes.
>
> array(3) { [2]=> int(3) [4]=> int(3) [5]=> int(3) }
>
>
> | 2,176 |
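For anyone doing the same in Python, the index collection is a one-line comprehension (a sketch, not part of the original answers):

```python
# Collect the indexes whose value is 3, mirroring array_keys($arr, 3).
a = [1, 2, 3, 4, 3, 3, 5, 6]
threes = [i for i, v in enumerate(a) if v == 3]
print(threes)  # -> [2, 4, 5]
```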
43,967,051 | What is an alternative to firebase for user management/auth for python apps. I know I can use node.js w/ firebase but, I would rather authenticate users through a managed 3rd party API in python using HTTPS requests,if possible. Appery.io has this feature but, I do not need all that comes with appery.io | 2017/05/14 | [
"https://Stackoverflow.com/questions/43967051",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7317396/"
] | Check out [Amazon Cognito](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwjphrjN7-PXAhUEhuAKHSABA14QFggnMAA&url=https%3A%2F%2Faws.amazon.com%2Fcognito%2F&usg=AOvVaw0IxXy-fQjM_msyj67tH2wG). They offer quite a nice package for small projects. [Backendless](http://backendless.com) is also a fantastic service, providing authentication and a database with very helpful documentation, plus SDKs for different platforms including iOS, Android, JavaScript, REST API, Angular, React and React Native. I have been using Backendless for a couple of months and I highly recommend you use it, too. | You could try using [Auth0](https://auth0.com/) for pure authentication management. The Auth0 Python package can be found [here](https://github.com/auth0/auth0-python). | 2,178
16,973,236 | I recently installed Emacs 24.3 and am trying to use it for coding in Python (v3.3.2, x86-64 MSI installer). (I'm new to Emacs.) I then tried to install emacs-for-python by unpacking the zip to the
```
"C:\Users\mmsc\AppData\Roaming\.emacs.d\emacs-for-python"
```
folder and adding
```
(load-file "~/.emacs.d/emacs-for-python/epy-init.el")
```
into C:\Users\mmsc\AppData\Roaming\.emacs
After I launch Emacs, I see this error:
>
> Warning (initialization): An error occurred while loading
> `c:/Users/Klein/AppData/Roaming/.emacs':
>
>
> error: Pymacs helper did not start within 30 seconds
>
>
> To ensure normal operation, you should investigate and remove the
> cause of the error in your initialization file. Start Emacs with the
> `--debug-init' option to view a complete error backtrace.
>
>
>
with the "--debug-init", I saw below information but I have little knowledge about Emacs/Lisp, so I can't locate the problem easily.
```
Debugger entered--Lisp error: (error "Pymacs helper did not start within 30 seconds")
signal(error ("Pymacs helper did not start within 30 seconds"))
pymacs-report-error("Pymacs helper did not start within %d seconds" 30)
(if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper did not start within %d seconds" pymacs-timeout-at-start))
(while (progn (goto-char (point-min)) (not (re-search-forward "<\\([0-9]+\\) " nil t))) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper did not start within %d seconds" pymacs-timeout-at-start)))
(let ((process (apply (quote start-process) "pymacs" buffer (let ((python (getenv "PYMACS_PYTHON"))) (if (or (null python) (equal python "")) pymacs-python-command python)) "-c" (concat "import sys;" " from Pymacs import main;" " main(*sys.argv[1:])") (append (and (>= emacs-major-version 24) (quote ("-f"))) (mapcar (quote expand-file-name) pymacs-load-path))))) (pymacs-kill-without-query process) (while (progn (goto-char (point-min)) (not (re-search-forward "<\\([0-9]+\\) " nil t))) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper did not start within %d seconds" pymacs-timeout-at-start))) (let ((marker (process-mark process)) (limit-position (+ (match-end 0) (string-to-number (match-string 1))))) (while (< (marker-position marker) limit-position) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper probably was interrupted at start")))))
(progn (let ((process (apply (quote start-process) "pymacs" buffer (let ((python ...)) (if (or ... ...) pymacs-python-command python)) "-c" (concat "import sys;" " from Pymacs import main;" " main(*sys.argv[1:])") (append (and (>= emacs-major-version 24) (quote ...)) (mapcar (quote expand-file-name) pymacs-load-path))))) (pymacs-kill-without-query process) (while (progn (goto-char (point-min)) (not (re-search-forward "<\\([0-9]+\\) " nil t))) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper did not start within %d seconds" pymacs-timeout-at-start))) (let ((marker (process-mark process)) (limit-position (+ (match-end 0) (string-to-number (match-string 1))))) (while (< (marker-position marker) limit-position) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper probably was interrupted at start"))))) (goto-char (match-end 0)) (let ((reply (read (current-buffer)))) (if (and (pymacs-proper-list-p reply) (= (length reply) 2) (eq (car reply) (quote version))) (if (string-equal (cadr reply) "0.25") nil (pymacs-report-error "Pymacs Lisp version is 0.25, Python is %s" (cadr reply))) (pymacs-report-error "Pymacs got an invalid initial reply"))))
(unwind-protect (progn (let ((process (apply (quote start-process) "pymacs" buffer (let (...) (if ... pymacs-python-command python)) "-c" (concat "import sys;" " from Pymacs import main;" " main(*sys.argv[1:])") (append (and ... ...) (mapcar ... pymacs-load-path))))) (pymacs-kill-without-query process) (while (progn (goto-char (point-min)) (not (re-search-forward "<\\([0-9]+\\) " nil t))) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper did not start within %d seconds" pymacs-timeout-at-start))) (let ((marker (process-mark process)) (limit-position (+ (match-end 0) (string-to-number ...)))) (while (< (marker-position marker) limit-position) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper probably was interrupted at start"))))) (goto-char (match-end 0)) (let ((reply (read (current-buffer)))) (if (and (pymacs-proper-list-p reply) (= (length reply) 2) (eq (car reply) (quote version))) (if (string-equal (cadr reply) "0.25") nil (pymacs-report-error "Pymacs Lisp version is 0.25, Python is %s" (cadr reply))) (pymacs-report-error "Pymacs got an invalid initial reply")))) (set-match-data save-match-data-internal (quote evaporate)))
(let ((save-match-data-internal (match-data))) (unwind-protect (progn (let ((process (apply (quote start-process) "pymacs" buffer (let ... ...) "-c" (concat "import sys;" " from Pymacs import main;" " main(*sys.argv[1:])") (append ... ...)))) (pymacs-kill-without-query process) (while (progn (goto-char (point-min)) (not (re-search-forward "<\\([0-9]+\\) " nil t))) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper did not start within %d seconds" pymacs-timeout-at-start))) (let ((marker (process-mark process)) (limit-position (+ ... ...))) (while (< (marker-position marker) limit-position) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper probably was interrupted at start"))))) (goto-char (match-end 0)) (let ((reply (read (current-buffer)))) (if (and (pymacs-proper-list-p reply) (= (length reply) 2) (eq (car reply) (quote version))) (if (string-equal (cadr reply) "0.25") nil (pymacs-report-error "Pymacs Lisp version is 0.25, Python is %s" (cadr reply))) (pymacs-report-error "Pymacs got an invalid initial reply")))) (set-match-data save-match-data-internal (quote evaporate))))
(save-current-buffer (set-buffer buffer) (erase-buffer) (buffer-disable-undo) (pymacs-set-buffer-multibyte nil) (set-buffer-file-coding-system (quote raw-text)) (let ((save-match-data-internal (match-data))) (unwind-protect (progn (let ((process (apply ... "pymacs" buffer ... "-c" ... ...))) (pymacs-kill-without-query process) (while (progn (goto-char ...) (not ...)) (if (accept-process-output process pymacs-timeout-at-start) nil (pymacs-report-error "Pymacs helper did not start within %d seconds" pymacs-timeout-at-start))) (let ((marker ...) (limit-position ...)) (while (< ... limit-position) (if ... nil ...)))) (goto-char (match-end 0)) (let ((reply (read ...))) (if (and (pymacs-proper-list-p reply) (= ... 2) (eq ... ...)) (if (string-equal ... "0.25") nil (pymacs-report-error "Pymacs Lisp version is 0.25, Python is %s" ...)) (pymacs-report-error "Pymacs got an invalid initial reply")))) (set-match-data save-match-data-internal (quote evaporate)))))
(let ((buffer (get-buffer-create "*Pymacs*"))) (save-current-buffer (set-buffer buffer) (erase-buffer) (buffer-disable-undo) (pymacs-set-buffer-multibyte nil) (set-buffer-file-coding-system (quote raw-text)) (let ((save-match-data-internal (match-data))) (unwind-protect (progn (let ((process ...)) (pymacs-kill-without-query process) (while (progn ... ...) (if ... nil ...)) (let (... ...) (while ... ...))) (goto-char (match-end 0)) (let ((reply ...)) (if (and ... ... ...) (if ... nil ...) (pymacs-report-error "Pymacs got an invalid initial reply")))) (set-match-data save-match-data-internal (quote evaporate))))) (if (not pymacs-use-hash-tables) (setq pymacs-weak-hash t) (if pymacs-used-ids (progn (let ((pymacs-transit-buffer buffer) (pymacs-forget-mutability t) (pymacs-gc-inhibit t)) (pymacs-call "zombie_python" pymacs-used-ids)) (setq pymacs-used-ids nil))) (setq pymacs-weak-hash (make-hash-table :weakness (quote value))) (if (boundp (quote post-gc-hook)) (add-hook (quote post-gc-hook) (quote pymacs-schedule-gc)) (setq pymacs-gc-timer (run-at-time 20 20 (quote pymacs-schedule-gc))))) (setq pymacs-transit-buffer buffer) (let ((modules pymacs-load-history)) (setq pymacs-load-history nil) (if (and modules (yes-or-no-p "Reload modules in previous session? ")) (progn (mapc (function (lambda (args) (condition-case err ... ...))) modules)))))
pymacs-start-services()
(if (and pymacs-transit-buffer (buffer-name pymacs-transit-buffer) (get-buffer-process pymacs-transit-buffer)) nil (if pymacs-weak-hash (progn (if (or (eq pymacs-auto-restart t) (and (eq pymacs-auto-restart (quote ask)) (yes-or-no-p "The Pymacs helper died. Restart it? "))) nil (pymacs-report-error "There is no Pymacs helper!")))) (pymacs-start-services))
pymacs-serve-until-reply("eval" (pymacs-print-for-apply (quote "pymacs_load_helper") (quote ("ropemacs" "rope-" nil))))
pymacs-call("pymacs_load_helper" "ropemacs" "rope-" nil)
(let ((lisp-code (pymacs-call "pymacs_load_helper" module prefix noerror))) (cond (lisp-code (let ((result (eval lisp-code))) (add-to-list (quote pymacs-load-history) (list module prefix noerror) (quote append)) (message "Pymacs loading %s...done" module) (run-hook-with-args (quote pymacs-after-load-functions) module) result)) (noerror (message "Pymacs loading %s...failed" module) nil)))
pymacs-load("ropemacs" "rope-")
setup-ropemacs()
(progn (setup-ropemacs) (autoload (quote virtualenv-activate) "virtualenv" "Activate a Virtual Environment specified by PATH" t) (autoload (quote virtualenv-workon) "virtualenv" "Activate a Virtual Environment present using virtualenvwrapper" t) (add-hook (quote python-mode-hook) (lambda nil (if (buffer-file-name) (flymake-mode)))) (defun workon-postactivate (virtualenv) (require (quote virtualenv)) (virtualenv-activate virtualenv) (desktop-change-dir virtualenv)))
(lambda nil (progn (setup-ropemacs) (autoload (quote virtualenv-activate) "virtualenv" "Activate a Virtual Environment specified by PATH" t) (autoload (quote virtualenv-workon) "virtualenv" "Activate a Virtual Environment present using virtualenvwrapper" t) (add-hook (quote python-mode-hook) (lambda nil (if (buffer-file-name) (flymake-mode)))) (defun workon-postactivate (virtualenv) (require (quote virtualenv)) (virtualenv-activate virtualenv) (desktop-change-dir virtualenv))))()
funcall((lambda nil (progn (setup-ropemacs) (autoload (quote virtualenv-activate) "virtualenv" "Activate a Virtual Environment specified by PATH" t) (autoload (quote virtualenv-workon) "virtualenv" "Activate a Virtual Environment present using virtualenvwrapper" t) (add-hook (quote python-mode-hook) (lambda nil (if (buffer-file-name) (flymake-mode)))) (defun workon-postactivate (virtualenv) (require (quote virtualenv)) (virtualenv-activate virtualenv) (desktop-change-dir virtualenv)))))
eval((funcall (quote (lambda nil (progn (setup-ropemacs) (autoload (quote virtualenv-activate) "virtualenv" "Activate a Virtual Environment specified by PATH" t) (autoload (quote virtualenv-workon) "virtualenv" "Activate a Virtual Environment present using virtualenvwrapper" t) (add-hook (quote python-mode-hook) (lambda nil (if (buffer-file-name) (flymake-mode)))) (defun workon-postactivate (virtualenv) (require (quote virtualenv)) (virtualenv-activate virtualenv) (desktop-change-dir virtualenv)))))))
eval-after-load(python (progn (setup-ropemacs) (autoload (quote virtualenv-activate) "virtualenv" "Activate a Virtual Environment specified by PATH" t) (autoload (quote virtualenv-workon) "virtualenv" "Activate a Virtual Environment present using virtualenvwrapper" t) (add-hook (quote python-mode-hook) (lambda nil (if (buffer-file-name) (flymake-mode)))) (defun workon-postactivate (virtualenv) (require (quote virtualenv)) (virtualenv-activate virtualenv) (desktop-change-dir virtualenv))))
eval-buffer(#<buffer *load*-819053> nil "c:/Users/mmsc/AppData/Roaming/.emacs.d/emacs-for-python/epy-python.el" nil t) ; Reading at buffer position 4662
load-with-code-conversion("c:/Users/mmsc/AppData/Roaming/.emacs.d/emacs-for-python/epy-python.el" "c:/Users/mmsc/AppData/Roaming/.emacs.d/emacs-for-python/epy-python.el" nil t)
require(epy-python)
eval-buffer(#<buffer *load*-283406> nil "c:/Users/mmsc/AppData/Roaming/.emacs.d/emacs-for-python/epy-init.el" nil t) ; Reading at buffer position 476
load-with-code-conversion("c:/Users/mmsc/AppData/Roaming/.emacs.d/emacs-for-python/epy-init.el" "c:/Users/mmsc/AppData/Roaming/.emacs.d/emacs-for-python/epy-init.el" nil nil)
load("c:/Users/mmsc/AppData/Roaming/.emacs.d/emacs-for-python/epy-init.el" nil nil t)
load-file("C:\\Users\\mmsc\\AppData\\Roaming\\.emacs.d\\emacs-for-python\\epy-init.el")
eval-buffer(#<buffer *load*> nil "c:/Users/mmsc/AppData/Roaming/.emacs" nil t) ; Reading at buffer position 656
load-with-code-conversion("c:/Users/mmsc/AppData/Roaming/.emacs" "c:/Users/mmsc/AppData/Roaming/.emacs" t t)
load("~/.emacs" t t)
```
I have tried searching for help on the Internet, but most results are for Linux/Unix environments. Is there anyone using Emacs with Python under Windows who knows what this means and how I can fix it?
Thanks! | 2013/06/06 | [
"https://Stackoverflow.com/questions/16973236",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2118555/"
] | This was a little too much for a comment:
```
(let ((process
(apply 'start-process "pymacs" buffer
(let ((python (getenv "PYMACS_PYTHON")))
(if (or (null python) (equal python ""))
pymacs-python-command
python))
"-c" (concat "import sys;"
" from Pymacs import main;"
" main(*sys.argv[1:])")
(append
(and (>= emacs-major-version 24) '("-f"))
(mapcar 'expand-file-name pymacs-load-path)))))
```
This is the bit of Pymacs code which starts the process in the `*Pymacs*` buffer. You can infer from this that Pymacs will first check the environment variable `$PYMACS_PYTHON`, and if that doesn't exist or its value is an empty string, it will then try `pymacs-python-command`, which by default is `"python"`. So, it will make this call:
```
$ python -c 'import sys; from Pymacs import main; main(*sys.argv[1:])'
```
There's a problem with `-f` - I don't know what version of Python accepts this argument, but the one that I have doesn't. The intention of this code is quite clear - probably it has to load the files on `pymacs-load-path`, but for me the value of this variable is `nil` - so I don't think this code ever runs. Anyway, this argument doesn't seem to do any harm, as for me it launches with or without it just the same.
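The interpreter-selection logic described above can be sketched in Python itself (an illustration of the elisp fallback, not Pymacs source code):

```python
import os

def choose_python(pymacs_python_command="python"):
    """Mirror of the elisp fallback: use $PYMACS_PYTHON unless it is
    unset or an empty string, otherwise fall back to the default."""
    python = os.environ.get("PYMACS_PYTHON")
    if python is None or python == "":
        return pymacs_python_command
    return python
```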
So, if you try running the above command in console, and get something like:
```
(version "0.25")
```
Then this code works fine; otherwise, you'd get some error, and that would help you identify the problem. Remember that it may not be just `python`. It is either `$PYMACS_PYTHON` or `pymacs-python-command`. | I had the same symptoms, but my problem turned out to be an old pymacs.el and a new Pymacs. Evidently Pymacs changed the module interface, and I had to go hunt down the stray pymacs.el, which apt-get had installed in an odd location. You have to make sure the byte-code file is gone too. | 2,181
55,784,213 | Noob here, trying to create a simple form and validate its inputs. However, I don't know how to properly select each input in JS, so nothing is happening. I am just learning HTML, Bootstrap, and JavaScript, so simpler (pythonic) answers are preferred to more complex ones.
I've read the documentation, and a number of other stackoverflow posts on this exact topic, which would have likely answered my question, were I not a Noob.
```
<div class="form-group">
<label for="first_name">First Name</label>
<input autocomplete="off" autofocus="" class="form-control" name="first_name" placeholder="First Name" type="text">
<small id="first_name_Help" class="form-text text-muted">* First Name is Mandatory.</small>
</div>
<div class="form-group">
<label for="last_name">Last Name</label>
<input autocomplete="off" autofocus="" class="form-control" name="last_name" placeholder="Last Name" type="text">
<small id="last_name_Help" class="form-text text-muted">* Last Name is Mandatory.</small>
</div>
<p>Select Your Country of Residence Below</p>
<div class="form-group">
<select name="country">
<option disabled selected value="">Country</option>
<option value="Canada">Canada</option></option>
<option value="USA">USA</option></option>
<option value="Mexico">Mexico</option>
<option value="None of the Above">None of the Above</option>
</select>
</div>
<script>
document.querySelector('form').onsubmit = function() {
if (!document.querySelector('input.first_name').value) {
alert('You must provide your name!');
return false;
}
};
</script>
``` | 2019/04/21 | [
"https://Stackoverflow.com/questions/55784213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8519006/"
] | The reason for the partial match is that the engine doesn't know exactly where it should start, given your requirements. You tell the engine by including `\d` in the character class:
```
(?<![[:space:][:punct:]\d])\d+
^^
``` | [This RegEx](https://regex101.com/r/ruSstp/1/) might help you to divide your string input into two groups, where the second group (`$2`) is the target number and group one (`$1`) is the non-digit part before it:
```
([A-Za-z_+-]+)([0-9]+)
```
[![RegEx](https://i.stack.imgur.com/ubaKl.png)](https://i.stack.imgur.com/ubaKl.png)
This approach is also convenient if you want to use the captured groups for further text processing. | 2,182
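Both patterns above carry over to Python with one caveat: Python's `re` module has no POSIX classes like `[[:punct:]]`, so the lookbehind needs ordinary classes (here `\W` stands in for space-or-punctuation - an approximation, not an exact translation):

```python
import re

# Capture-group approach: works unchanged in Python.
m = re.match(r"([A-Za-z_+-]+)([0-9]+)", "foo_bar42")
print(m.group(1), m.group(2))  # -> foo_bar 42

# Lookbehind approach: \W approximates [[:space:][:punct:]].
# Matches digit runs only when preceded by a word character that
# is not itself a digit (or by nothing at all).
nums = re.findall(r"(?<![\W\d])\d+", "abc123 456 x9")
print(nums)  # -> ['123', '9']
```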
58,211,638 | I want to connect to a Twitch server, but Godot adds binary characters in front of my data, as you can see in the pictures. This happens every time, no matter the data type. Why is this happening, and how can I prevent it?
[![python socket server output image](https://i.stack.imgur.com/14N2l.png)](https://i.stack.imgur.com/14N2l.png)
[code](https://paste.ubuntu.com/p/5h6h5vXfPx/) | 2019/10/03 | [
"https://Stackoverflow.com/questions/58211638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10558295/"
] | You can use shapes with your background modifier as well, instead of using a Color.
Change
```
}.overlay(
RoundedRectangle(cornerRadius: 40)
.stroke(Color.green, lineWidth: 1)
).background(Color.gray)
```
to
```
}.overlay(
RoundedRectangle(cornerRadius: 40)
.stroke(Color.green, lineWidth: 1)
).background(RoundedRectangle(cornerRadius: 40).fill(Color.pink))
```
and it will work.
Of course the pink color is only to make the area more visible. | What you need is one more modifier to cut off anything outside the thin green outline. Add this after `.background`:
```
.clipShape(RoundedRectangle(cornerRadius: 40))
```
**EDIT**
Capsule is a better shape to use in place of RoundedRectangle to achieve matching curves:
```
var body: some View {
HStack {
Text("Login")
.font(.headline)
.foregroundColor(showLogin ? Color.white : .black)
.padding()
.frame(minWidth: 0, maxWidth: .infinity)
.background(Capsule().fill(showLogin ? Color.green : .gray))
.onTapGesture { self.showLogin = true }
Text("Join")
.font(.headline)
.foregroundColor(!showLogin ? Color.white : .black)
.padding()
.frame(minWidth: 0, maxWidth: .infinity)
.background(Capsule().fill(!showLogin ? Color.green : .gray))
.onTapGesture { self.showLogin = false }
} .background(Capsule().fill(Color.gray))
.overlay(Capsule().stroke(Color.green, lineWidth: 1))
}
``` | 2,183 |
31,154,087 | I am developing a Flask app. I made one table which should be populated with JSON data. For the front end I am using AngularJS and for the back end I am using Flask. But I am not able to populate the table, and I am getting an error like "**UndefinedError: 'task' is undefined.**"
**Directory of flask project**
```
flask_project/
    rest-server.py
    templates/
        index.html
```
**rest-server.py**
```
#!flask/bin/python
import six
from flask import Flask, jsonify, abort, request, make_response, url_for, render_template
from flask_httpauth import HTTPBasicAuth  # needed for the HTTPBasicAuth() call below
app = Flask(__name__, static_url_path="")
auth = HTTPBasicAuth()
tasks = [
{
'id': 1,
'title': u'Buy groceries',
'description': u'Milk, Cheese, Pizza, Fruit, Tylenol',
'done': False
},
{
'id': 2,
'title': u'Learn Python',
'description': u'Need to find a good Python tutorial on the web',
'done': False
}
]
@app.route('/')
def index():
return render_template('index.html')
@app.route('/todo/api/v1.0/tasks', methods=['GET'])
def get_tasks():
return jsonify({'tasks': [make_public_task(task) for task in tasks]})
```
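The snippet calls a `make_public_task` helper that isn't shown. Judging from the JSON output further down (each task's numeric `id` is replaced by a `uri` field), it presumably looks something like this hypothetical reconstruction (the helper name comes from the question; the body is a guess):

```python
def make_public_task(task, base="http://127.0.0.1:5000/todo/api/v1.0/tasks"):
    """Hypothetical sketch: swap the internal numeric id for a public
    uri field, matching the JSON the poster shows."""
    public = {}
    for field, value in task.items():
        if field == "id":
            public["uri"] = "%s/%d" % (base, value)
        else:
            public[field] = value
    return public
```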
I am successfully able to get json data using
<http://127.0.0.1:5000/todo/api/v1.0/tasks>
**Json array is**
```
{
"tasks":
[
{
"description": "Milk, Cheese, Pizza, Fruit, Tylenol",
"done": false,
"title": "Buy groceries",
"uri": "http://127.0.0.1:5000/todo/api/v1.0/tasks/1"
},
{
"description": "Need to find a good Python tutorial on the web",
"done": false,
"title": "Learn Python",
"uri": "http://127.0.0.1:5000/todo/api/v1.0/tasks/2"
}
]
}
```
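This payload parses straightforwardly, which is essentially what the Angular controller does with `data.tasks` (a quick check in Python against the same JSON shape; the payload here is abbreviated from the output above):

```python
import json

# Same structure as the endpoint's response: a "tasks" key holding a list.
payload = """
{"tasks": [
  {"description": "Milk, Cheese, Pizza, Fruit, Tylenol", "done": false,
   "title": "Buy groceries", "uri": "http://127.0.0.1:5000/todo/api/v1.0/tasks/1"},
  {"description": "Need to find a good Python tutorial on the web", "done": false,
   "title": "Learn Python", "uri": "http://127.0.0.1:5000/todo/api/v1.0/tasks/2"}
]}
"""
tasks = json.loads(payload)["tasks"]   # corresponds to data.tasks in the controller
titles = [t["title"] for t in tasks]
print(titles)  # -> ['Buy groceries', 'Learn Python']
```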
**Index.html**
```
<!DOCTYPE html>
<html ng-app="app">
<head>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.19/angular.min.js"></script>
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.2.0/css/bootstrap.min.css" rel="stylesheet">
</head>
<body data-ng-app="app">
<!--our controller-->
<div ng-controller="ItemController">
<button id="get-items-button" ng-click="getItems()">Get Items</button>
<p>Look at the list of tasks!</p>
<!--this table shows the items we get from our service-->
<table cellpadding="0" cellspacing="0">
<thead>
<tr>
<th>Description</th>
<th>Done</th>
<th>Title</th>
<th>URI</th>
</tr>
</thead>
<tbody>
<!--repeat this table row for each item in items-->
<tr ng-repeat="task in tasks">
<td>{{task.description}}</td>
<td>{{task.done}}</td>
<td>{{task.title}}</td>
<td>{{task.uri}}</td>
</tr>
</tbody>
</table>
</div>
<script>
(function () {
//create our module
angular.module('app', [])
//add controller
.controller('ItemController', function ($scope, $http) {
//declare an array of items. this will get populated with our ajax call
$scope.tasks = [];
//declare an action for our button
$scope.getItems = function () {
//perform ajax call.
$http({
url: "/todo/api/v1.0/tasks",
method: "GET"
}).success(function (data, status, headers, config) {
//copy the data we get to our items array. we need to use angular.copy so that
//angular can track the object and bind it automatically.
angular.copy(data.tasks, $scope.tasks);
}).error(function (data, status, headers, config) {
//something went wrong
alert('Error getting data');
});
}
});
//console.log($scope.tasks);
})();
</script>
</body>
</html>
``` | 2015/07/01 | [
"https://Stackoverflow.com/questions/31154087",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4884941/"
] | I think it's because you have two ng-app definitions in your index.html (one on the `<html>` tag and one on the `<body>` tag).
Remove the definition in your html tag and try again: change
```
<html ng-app="tableJson">
```
into
```
<html>
``` | Try this
```
$scope.tasks = data;
```
It works for me. | 2,184