Dataset schema: qid (int64, 46k–74.7M) · question (string, 54–37.8k chars) · date (string, 10 chars) · metadata (sequence, length 3) · response_j (string, 29–22k chars) · response_k (string, 26–13.4k chars) · __index_level_0__ (int64, 0–17.8k)
33,715,198
I am new to python. I was trying to make a random # generator but nothing works except for the else statement. I cannot tell what the issue is. Please Help! ``` import random randomNum = random.randint(1, 10) answer = int(raw_input("Try to guess a random number between 1 and 10. ")) if (answer > randomNum) and (answer < randomNum): if (answer == randomNum + 1): print "Super Close" print randomNum elif (answer == randomNum + 2): print "Pretty Close" print randomNum elif (answer == randomNum + 3): print "Fairly Close" print randomNum elif (answer == randomNum + 4): print "Not Really Close" print randomNum elif (answer == randomNum + 5): print "Far" print randomNum elif (answer == randomNum - 5): print "Far" print randomNum elif (answer == randomNum - 4): print "Not Really Close" print randomNum elif (answer == randomNum - 3): print "Fairly Close" print randomNum elif (answer == randomNum - 2): print "Pretty Close" print randomNum elif (answer == randomNum - 1): print "Super Close" print randomNum else: print "Good Job!" print randomNum ```
2015/11/15
[ "https://Stackoverflow.com/questions/33715198", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5563197/" ]
Your first `if` statement logic is incorrect. `answer` cannot *at the same time* be both smaller and larger than `randomNum`, yet that is what your test asks for. You want to use `or` instead of `and` there, if the `answer` value is larger *or* smaller than `randomNum`: ``` if (answer > randomNum) or (answer < randomNum): ``` or simply use `!=` to test for inequality: ``` if answer != randomNum: ```
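As a side note (a sketch, not part of the original answer): once the comparison is fixed, the long `elif` ladder in the question can be collapsed by measuring the distance with `abs()`. The labels are taken from the question; the original code printed nothing for guesses more than 5 away, so treating those as `'Far'` here is an assumption:

```
import random

# distance from the answer -> message, mirroring the question's ladder
labels = {1: 'Super Close', 2: 'Pretty Close', 3: 'Fairly Close',
          4: 'Not Really Close', 5: 'Far'}

randomNum = random.randint(1, 10)
answer = int(raw_input("Try to guess a random number between 1 and 10. "))
diff = abs(answer - randomNum)
if diff == 0:
    print 'Good Job!'
else:
    print labels.get(diff, 'Far')  # assumption: 5+ away counts as 'Far'
print randomNum
```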
**I used this code for my random number generator and it works; I hope it helps.** You can change the highest and lowest random numbers to generate by editing `(0,20)`. ``` import random maths_operator_list=['+','-','*'] maths_operator = random.choice(maths_operator_list) number_one = random.randint(0,20) number_two = random.randint(0,20) correct_answer = 0 print(str(number_one), str(maths_operator), number_two) if maths_operator == '+': correct_answer = number_one + number_two elif maths_operator == '-': correct_answer = number_one - number_two elif maths_operator == '*': correct_answer = number_one * number_two print(correct_answer) ```
1,244
68,737,471
I have a dict with some values in it. Now, at the time of fetching a value, I want to check whether the value is `None` and replace it with `""`. Is there any single-statement way to do this? ``` a = {'x' : 1, 'y': None} x = a.get('x', 3) # if x is not present init x with 3 z = a.get('z', 1) # Default value of z to 1 y = a.get('y', 1) # It will initialize y with None # current solution I'm checking if y is None: y = "" ``` What I want is something single line (pythonic way). I can do it the way below ``` # This is one way I can write but y = "" if a.get('y',"") is None else a.get('y', "") ``` But from what I know there must be some better way of doing this. Any help is appreciated.
2021/08/11
[ "https://Stackoverflow.com/questions/68737471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8595891/" ]
If the only falsy values you care about are `None` and `''`, you could do: ``` y = a.get('y', 1) or '' ```
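One caveat worth making concrete (a small illustration of the "only falsy values" point above, not part of the original answer): `or` falls through on *every* falsy value, so a legitimate `0` or `''` stored in the dict gets replaced too:

```
>>> a = {'x': 1, 'y': None}
>>> a.get('y', 1) or ''          # None is replaced, as intended
''
>>> {'y': 0}.get('y', 1) or ''   # but a stored 0 is swallowed as well
''
```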
``` y = a.get('y',"") or "" ```
1,245
55,231,300
I am using Django Rest Framework. I have an existing database (cannot make any changes to it). I have defined a serializer - ReceiptLog with no model, which should create entries in TestCaseCommandRun and TestCaseCommandRunResults when a post() request is made to the ReceiptLog api endpoint. Receipt log doesn't exist in the database; I am using it just as an endpoint to accept a combined payload and create entries in the underlying tables. Post() to TestCaseCommandRunResults and TestCaseCommandRun works independently; however, when I try to post through ReceiptLog it throws the error below. Error Traceback: ``` File "/usr/local/lib/python3.6/dist-packages/django/core/handlers/exception.py" in inner 35. response = get_response(request) File "/usr/local/lib/python3.6/dist-packages/django/core/handlers/base.py" in _get_response 128. response = self.process_exception_by_middleware(e, request) File "/usr/local/lib/python3.6/dist-packages/django/core/handlers/base.py" in _get_response 126. response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/usr/lib/python3.6/contextlib.py" in inner 52. return func(*args, **kwds) File "/usr/local/lib/python3.6/dist-packages/django/views/decorators/csrf.py" in wrapped_view 54. return view_func(*args, **kwargs) File "/usr/local/lib/python3.6/dist-packages/django/views/generic/base.py" in view 69. return self.dispatch(request, *args, **kwargs) File "/usr/local/lib/python3.6/dist-packages/rest_framework/views.py" in dispatch 495. response = self.handle_exception(exc) File "/usr/local/lib/python3.6/dist-packages/rest_framework/views.py" in handle_exception 455. self.raise_uncaught_exception(exc) File "/usr/local/lib/python3.6/dist-packages/rest_framework/views.py" in dispatch 492. response = handler(request, *args, **kwargs) File "/usr/local/lib/python3.6/dist-packages/rest_framework/generics.py" in post 192. return self.create(request, *args, **kwargs) File "/usr/local/lib/python3.6/dist-packages/rest_framework/mixins.py" in create 21. self.perform_create(serializer) File "/usr/local/lib/python3.6/dist-packages/rest_framework/mixins.py" in perform_create 26. serializer.save() File "/usr/local/lib/python3.6/dist-packages/rest_framework/serializers.py" in save 216. '`create()` did not return an object instance.' Exception Type: AssertionError at /dqf_api/ReceiptLog/ Exception Value: `create()` did not return an object instance.
``` models.py ``` class TestCaseCommandRun(models.Model): # fields ..Doesn't have id field as the database doesn't have it class Meta: managed = False db_table = 'test_case_command_run' unique_together = (('team_name', 'suite_name', 'suite_run_id', 'case_name', 'command_name'),) class TestCaseCommandRunResults(models.Model): # fields ..Doesn't have id field as the database doesn't have it class Meta: managed = False db_table = 'test_case_command_run_results' unique_together = (('suite_run_id', 'command_run_id', 'rule_name', 'result_id'),) ``` views.py ``` class TestCaseCommandRunViewSet(viewsets.ModelViewSet): queryset = models.TestCaseCommandRunViewSet.objects.values('team_name','suite_name','suite_run_id', 'case_name','command_name','command_run_id','run_start','run_end','result','run_status') serializer_class = serializers.TestCaseCommandRunViewSet class TestCaseCommandRunResultsViewSet(viewsets.ModelViewSet): queryset = models.TestCaseCommandRunResultsViewSet.objects.values('suite_run_id','command_run_id','rule_name', 'result_id', 'result','expected_values','actual_values','report_values','extended_values') serializer_class = serializers.TestCaseCommandRunResultsViewSet class ReceiptLogViewSet(CreateAPIView): serializer_class = serializers.ReceiptLogSerializer.ReceiptLogSerializerClass ``` serializers.py ``` class TestCaseCommandRunResultsViewSet(serializers.ModelSerializer): class Meta: model = models.TestCaseCommandRunResultsViewSet fields = ['suite_run_id','command_run_id','rule_name', 'result_id','result','expected_values','actual_values','report_values','extended_values'] class TestCaseCommandRunSerializer(serializers.ModelSerializer): class Meta: model = models.TestCaseCommandRunSerializer fields = ['team_name','suite_name','suite_run_id', 'case_name','command_name','command_run_id','run_start','run_end','result','run_status'] class ReceiptLogSerializerClass(serializers.Serializer): team_name = serializers.CharField(max_length=30) suite_name = serializers.CharField(max_length=100) suite_run_id = serializers.CharField(max_length=50,required=False, allow_blank=True, default=datetime.now().strftime('%Y%m%d%H%M%S')) case_name = serializers.CharField(max_length=50) command_name = serializers.CharField(max_length=50) command_run_id = serializers.CharField(max_length=50,required=False, allow_blank=True, default='Not Applicable') run_start = serializers.DateTimeField(default=datetime.now, required=False) run_end = serializers.DateTimeField(default=datetime.now, required=False) result = serializers.CharField(max_length=10, default='Not Applicable') run_status = serializers.CharField(max_length=10) rule_name = serializers.CharField( max_length=50, required=False, allow_blank=True, default='Not Applicable') expected_values = serializers.CharField(max_length=200, allow_blank=True) actual_values = serializers.CharField(max_length=200, allow_blank=True) report_values = serializers.CharField(max_length=200, allow_blank=True) extended_values = serializers.CharField(max_length=200, allow_blank=True) def create(self, validated_data): # command_run_data_list = [] command_run_results_data_list = [] raw_data_list = [] many = isinstance(validated_data, list) if many: raw_data_list = validated_data else: raw_data_list.append(validated_data) result_id = 1 for data_row in raw_data_list: new_command_run_entry = { 'team_name': data_row.get('team_name'), 'suite_name': data_row.get('suite_name'), 'suite_run_id': data_row.get('suite_run_id'), 'case_name': data_row.get('case_name'), 'command_name': 
data_row.get('command_name'), 'command_run_id': data_row.get('command_run_id'), 'run_start': data_row.get('run_start'), 'run_end': data_row.get('run_end'), 'result': data_row.get('result'), 'run_status': data_row.get('run_status') } command_run_data_list.append(new_command_run_entry) new_command_run_result_entry = { 'suite_run_id': data_row.get('suite_run_id'), 'command_run_id': data_row.get('command_run_id'), 'rule_name': data_row.get('rule_name'), 'result_id': result_id, 'result': data_row.get('result'), # PASS or FAIL 'expected_values': data_row.get('expected_values'), 'actual_values': data_row.get('actual_values'), 'report_values': data_row.get('report_values'), 'extended_values': data_row.get('extended_values'), } command_run_results_data_list.append(new_command_run_result_entry) result_id += 1 for item in command_run_results_data_list: response_run_results = models.TestCaseCommandRunResults.objects.create(**item) for item in command_run_data_list: response_run = models.TestCaseCommandRun.objects.create(**item) ``` urls.py ``` router = routers.DefaultRouter() router.register(r'test_case_command_runs', views.TestCaseCommandRunViewSet) router.register(r'test_case_command_run_results', views.TestCaseCommandRunResultsViewSet) urlpatterns = [ url(r'^buildInfo', views.build_info), url(r'^isActive', views.is_active), url(r'^dqf_api/', include(router.urls)), url(r'^dqf_api/ReceiptLog/', views.ReceiptLogView.ReceiptLogViewSet.as_view(), name='ReceiptLog')] ``` Any help is really appreciated.I am new to Django and DRF
2019/03/18
[ "https://Stackoverflow.com/questions/55231300", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6885999/" ]
Your serializer's `create` method MUST return an instance of the object it represents. Also, you should not iterate inside the serializer to create instances; that should be done in the view: you iterate through the data, calling the serializer on each iteration.
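To make the required shape concrete, here is a minimal sketch (an illustration reusing the question's field names, not code from the answer) of a `create()` that saves both rows for a single payload and returns an object instance:

```
RUN_FIELDS = ['team_name', 'suite_name', 'suite_run_id', 'case_name',
              'command_name', 'command_run_id', 'run_start', 'run_end',
              'result', 'run_status']
RESULT_FIELDS = ['suite_run_id', 'command_run_id', 'rule_name', 'result',
                 'expected_values', 'actual_values', 'report_values',
                 'extended_values']

class ReceiptLogSerializerClass(serializers.Serializer):
    # ... field declarations as in the question ...

    def create(self, validated_data):
        run = models.TestCaseCommandRun.objects.create(
            **{f: validated_data.get(f) for f in RUN_FIELDS})
        models.TestCaseCommandRunResults.objects.create(
            result_id=1,  # assumption: single-payload case
            **{f: validated_data.get(f) for f in RESULT_FIELDS})
        return run  # an object instance, which is what DRF asserts on
```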
Updated the serializers.py file to include the code below ``` class ReceiptLogSerializerClass(serializers.Serializer): #Fields def create(self, validated_data): raw_data_list = [] many = isinstance(validated_data, list) if many: raw_data_list = validated_data else: raw_data_list.append(validated_data) result_id = 1 for data_row in raw_data_list: new_command_run_entry = { 'team_name': data_row.get('team_name'), 'suite_name': data_row.get('suite_name'), 'suite_run_id': data_row.get('suite_run_id'), 'case_name': data_row.get('case_name'), 'command_name': data_row.get('command_name'), 'command_run_id': data_row.get('command_run_id'), 'run_start': data_row.get('run_start'), 'run_end': data_row.get('run_end'), 'result': data_row.get('result'), 'run_status': data_row.get('run_status') } response = TestCaseCommandRunSerializer.create(TestCaseCommandRunSerializer(),validated_data= new_command_run_entry) new_command_run_result_entry = { 'suite_run_id': data_row.get('suite_run_id'), 'command_run_id': data_row.get('command_run_id'), 'rule_name': data_row.get('rule_name'), 'result_id': result_id, 'result': data_row.get('result'), # PASS or FAIL 'expected_values': data_row.get('expected_values'), 'actual_values': data_row.get('actual_values'), 'report_values': data_row.get('report_values'), 'extended_values': data_row.get('extended_values'), } response = TestCaseCommandRunResultsSerializer.create(TestCaseCommandRunResultsSerializer(),validated_data= new_command_run_result_entry) logger.info(" new_command_run_result_entry response %s" % response) result_id += 1 return validated_data ``` I was not de-serializing the data correctly and hence ran into multiple issues. Returning `validated_data` rectified all the errors, and now I am able to post() data to multiple models through a single API. For posting multiple payloads in a single API call, I added the lines below to ReceiptLogViewSet ``` def get_serializer(self, *args, **kwargs): if "data" in kwargs: data = kwargs["data"] if isinstance(data, list): kwargs["many"] = True return super(ReceiptLogViewSet, self).get_serializer(*args, **kwargs) ``` Ref: [Django rest framework cannot deal with multple objects in model viewset](https://stackoverflow.com/questions/43525860/django-rest-framework-cannot-deal-with-multple-objects-in-model-viewset)
1,248
55,052,811
I've got this basic python3 server but can't figure out how to serve a directory. ``` class SimpleHTTPRequestHandler(BaseHTTPRequestHandler): def do_GET(self): print(self.path) if self.path == '/up': self.send_response(200) self.end_headers() self.wfile.write(b'Going Up') if self.path == '/down': self.send_response(200) self.end_headers() self.wfile.write(B'Going Down') httpd = socketserver.TCPServer(("", PORT), SimpleHTTPRequestHandler) print("Server started on ", PORT) httpd.serve_forever() ``` If, instead of the custom class above, I simply pass `Handler = http.server.SimpleHTTPRequestHandler` into `TCPServer()`, the default functionality is to serve a directory, but I want to serve that directory *and* keep the functionality of my two GETs above. As an example, if someone were to go to localhost:8080/index.html, I'd want that file to be served to them.
2019/03/07
[ "https://Stackoverflow.com/questions/55052811", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1240649/" ]
If you are using 3.7, you can simply serve up the directory where your html files (e.g. index.html) live: ``` python -m http.server 8080 --bind 127.0.0.1 --directory /path/to/dir ``` See the [docs](https://docs.python.org/3/library/http.server.html).
The simple way -------------- You want to *extend* the functionality of `SimpleHTTPRequestHandler`, so you **subclass** it! Check for your special condition(s); if none of them apply, call `super().do_GET()` and let it do the rest. Example: ``` class MyHandler(http.server.SimpleHTTPRequestHandler): def do_GET(self): if self.path == '/up': self.send_response(200) self.end_headers() self.wfile.write(b'up') else: super().do_GET() ``` The long way ------------ To serve files, you basically just have to open them, read the contents and send them. To serve directories (indexes), use `os.listdir()`. (If you want, when receiving a directory request you can first check for an index.html and then, if that fails, serve an index listing.) Putting this into your code will give you: ``` class MyHandler(http.server.BaseHTTPRequestHandler): def do_GET(self): print(self.path) if self.path == '/up': self.send_response(200) self.end_headers() self.wfile.write(b'Going up') elif os.path.isdir(self.path): try: self.send_response(200) self.end_headers() self.wfile.write(str(os.listdir(self.path)).encode()) except Exception: self.send_response(500) self.end_headers() self.wfile.write(b'error') else: try: with open(self.path, 'rb') as f: data = f.read() self.send_response(200) self.end_headers() self.wfile.write(data) except FileNotFoundError: self.send_response(404) self.end_headers() self.wfile.write(b'not found') except PermissionError: self.send_response(403) self.end_headers() self.wfile.write(b'no permission') except Exception: self.send_response(500) self.end_headers() self.wfile.write(b'error') ``` This example has a lot of error handling. You might want to move it somewhere else. The problem is **this serves from *your root* directory**. To stop this, you'll have to (easy way) just add the serving directory to the beginning of `self.path`. Also check whether `..` causes you to land higher than you want. A way to do this is `os.path.abspath(serve_from+self.path).startswith(serve_from)` Putting this inside (after the check for /up): ``` class MyHandler(http.server.BaseHTTPRequestHandler): def do_GET(self): print(self.path) path = serve_from + self.path if self.path == '/up': self.send_response(200) self.end_headers() self.wfile.write(b'Going up') elif not os.path.abspath(path).startswith(serve_from): self.send_response(403) self.end_headers() self.wfile.write(b'Private!') elif os.path.isdir(path): try: self.send_response(200) self.end_headers() self.wfile.write(str(os.listdir(path)).encode()) except Exception: self.send_response(500) self.end_headers() self.wfile.write(b'error') else: try: with open(path, 'rb') as f: data = f.read() self.send_response(200) self.end_headers() self.wfile.write(data) # error handling skipped except Exception: self.send_response(500) self.end_headers() self.wfile.write(b'error') ``` Note that you define `path` and use it subsequently; otherwise you will still serve from `/`.
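A follow-up sketch (my addition, assuming Python 3.7+): `SimpleHTTPRequestHandler` accepts a `directory` keyword, so the subclass from "The simple way" can be pointed at a specific directory instead of the process's working directory; `/path/to/dir` and the port are placeholders:

```
import functools
import http.server
import socketserver

PORT = 8080
# bind the handler from above to a fixed directory (Python 3.7+)
Handler = functools.partial(MyHandler, directory='/path/to/dir')

with socketserver.TCPServer(("", PORT), Handler) as httpd:
    httpd.serve_forever()
```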
1,253
40,749,737
Currently I have an Arduino hooked up to a Raspberry Pi. The Arduino controls a water level detection circuit in service to an automatic pet water bowl. The program on the Arduino has several "serial.println()" statements to update the user on the status of the water bowl, filling or full. I have the Arduino connected to the Raspberry Pi via USB. The small python program on the Pi that captures the serial data from the Arduino is as follows: ``` import serial ser = serial.Serial('/dev/ttyUSB0',9600) file = open('index.html', 'a+') message1 = """<html> <head><meta http-equiv="refresh" content="1"/></head> <body><p>""" message2 = """</p></body> </html>""" while 1: line=ser.readline() messagefinal1 = message1 + line + message2 print(line) file.write(messagefinal1) file.close() ``` As you can see it captures the serial data coming over USB, creates an html page, and inserts the data into the page. I am using a service called "dataplicity" (<https://www.dataplicity.com>), more specifically their "Wormhole" tool (<https://docs.dataplicity.com/docs/host-a-website-from-your-pi>), to view that html file over the web at a link that the wormhole tool generates. The problem I am having is this: ``` Traceback (most recent call last): File "commprog.py", line 15, in <module> file.write(messagefinal1) ValueError: I/O operation on closed file ``` I want to continuously update the html page with the status of the water bowl (it is constantly printing its status). However once I close the page using "file.close()" I can't access it again, presumably because the wormhole service is accessing it. If I don't include .close() I have to end the process manually using ctrl c. Is there a way I can continuously update the html file?
2016/11/22
[ "https://Stackoverflow.com/questions/40749737", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4086994/" ]
After the first iteration of your while loop, you close the file and never open it again for editing. When you try to append to a file that is closed, you get an error. You could instead move the open statement inside your loop like so: ``` while 1: line=ser.readline() messagefinal1 = message1 + line + message2 print(line) file = open('index.html', 'a+') file.write(messagefinal1) file.close() ```
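A variant of the same fix (a sketch; it assumes the page should be *replaced* on every reading rather than appended to, which seems to be the intent): use mode `'w'` so the file is truncated each time, and a `with` block so it is closed even if the write fails:

```
import serial

ser = serial.Serial('/dev/ttyUSB0', 9600)
message1 = "<html><head><meta http-equiv='refresh' content='1'/></head><body><p>"
message2 = "</p></body></html>"

while True:
    line = ser.readline()
    # 'w' truncates, so the page never grows; the with-block closes it again
    with open('index.html', 'w') as f:
        f.write(message1 + line + message2)
```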
If you want to continuously update your webpage you have a couple of options. I don't know how you serve your page, but you might want to look at using the Flask web framework for python and think about using a templating language such as jinja2. A templating language will let you create variables in your html files that can be updated straight from a python script. This is useful if you want the user to see new data on your page each time after they refresh it or perform some operation on the page. Alternatively you might want to think about using websockets, similarly to the example I made [here](https://www.hackster.io/dataplicity/control-raspberry-pi-gpios-with-websockets-af3d0c?ref=user&ref_id=95392&offset=2) . Websockets are useful if you want your user to see new updates on your page in real time. I appreciate that this doesn't answer your question directly, but what I'm offering here is a clean and easy-to-maintain solution once you set it up.
1,259
62,002,462
I'm trying to prune a pre-trained model: **MobileNetV2** and I got this error. Tried searching online and couldn't understand. I'm running on **Google Colab**. **These are my imports.** ``` import tensorflow as tf import tensorflow_model_optimization as tfmot import tensorflow_datasets as tfds from tensorflow import keras import os import numpy as np import matplotlib.pyplot as plt import tempfile import zipfile ``` ***This is my code.*** ``` model_1 = keras.Sequential([ basemodel, keras.layers.GlobalAveragePooling2D(), keras.layers.Dense(1) ]) model_1.compile(optimizer='adam', loss=keras.losses.BinaryCrossentropy(from_logits=True), metrics=['accuracy']) model_1.fit(train_batches, epochs=5, validation_data=valid_batches) prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude pruning_params = { 'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0.50, final_sparsity=0.80, begin_step=0, end_step=end_step) } model_2 = prune_low_magnitude(model_1, **pruning_params) model_2.compile(optmizer='adam', loss=keres.losses.BinaryCrossentropy(from_logits=True), metrics=['accuracy']) ``` ***This is the error i get.*** ``` ---> 12 model_2 = prune_low_magnitude(model, **pruning_params) ValueError: Please initialize `Prune` with a supported layer. Layers should either be a `PrunableLayer` instance, or should be supported by the PruneRegistry. You passed: <class 'tensorflow.python.keras.engine.training.Model'> ```
2020/05/25
[ "https://Stackoverflow.com/questions/62002462", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12540447/" ]
I believe you are following the `Pruning in Keras` example and jumped into the `Fine-tune pre-trained model with pruning` section without setting your prunable layers. You have to reinstantiate the model and set the layers you wish to make prunable. Follow this guide for further information on how to set prunable layers. <https://www.tensorflow.org/model_optimization/guide/pruning/comprehensive_guide.md>
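For illustration, a minimal sketch of what that guide describes (my rendering, untested against the question's setup): wrap individual supported layers via `clone_model` instead of passing the whole `Model` to `prune_low_magnitude`:

```
import tensorflow as tf
import tensorflow_model_optimization as tfmot

def apply_pruning(layer):
    # assumption: prune only Dense layers; extend the isinstance check
    # to other layer types the PruneRegistry supports
    if isinstance(layer, tf.keras.layers.Dense):
        return tfmot.sparsity.keras.prune_low_magnitude(layer)
    return layer

# model_1 is the fitted Sequential model from the question
model_2 = tf.keras.models.clone_model(model_1, clone_function=apply_pruning)
```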
I faced the same issue with: * tensorflow version: `2.2.0` Just updating the version of tensorflow to `2.3.0` solved the issue, I think Tensorflow added support to this feature in 2.3.0.
1,260
42,010,684
I have a script written in python 2.7 that calls for a thread. But, whatever I do, the thread won't call the function. The function it calls: ``` def siren_loop(): while running: print 'dit is een print' ``` The way I tried to call it: ``` running = True t = threading.Thread(target=siren_loop) t.start() ``` or: ``` running = True thread.start_new_thread( siren_loop, () ) ``` I even tried to add arguments to siren_loop to see if that would work, but no change. I just can't get it to print the lines in the siren_loop function. I also tried many other strange things, which obviously didn't work. What am I doing wrong? edit: Since people said it worked, I tried to call the thread from another function. So it looked something like this: ``` def start_sirene(): running = True t = threading.Thread(target=siren_loop) t.start() ``` And then that part was called from: ``` if zwaailichtbool == False: start_sirene() print 'zwaailicht aan' zwaailichtbool = True sleep(0.5) ``` Maybe that could cause the problem? The print statement in the last one works, and when I added a print before or after the thread statement it also worked.
2017/02/02
[ "https://Stackoverflow.com/questions/42010684", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5837270/" ]
So, after trying various things for hours and hours, I found a solution but still don't understand the problem. Apparently the program didn't like the many steps. I took one step away (the start_sirene method) but used the exact same code, and suddenly it worked. Still no clue why that was the problem. If anybody knows, please enlighten me xD
`running` is a local variable in your code. Add `global running` to `start_sirene()`
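Spelled out against the code in the question (a sketch): without `global`, the `running = True` inside `start_sirene()` only creates a local name, so the module-level flag the thread reads never becomes `True` and the `while` loop exits immediately:

```
import threading

running = False

def siren_loop():
    while running:
        print 'dit is een print'

def start_sirene():
    global running          # write the module-level flag, not a local
    running = True
    t = threading.Thread(target=siren_loop)
    t.start()
```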
1,264
20,794,258
I have an appengine app that I want to use as a front end to some existing web services. How can I consume those web services from my app? I'm using python.
2013/12/27
[ "https://Stackoverflow.com/questions/20794258", "https://Stackoverflow.com", "https://Stackoverflow.com/users/251154/" ]
You are calling `string.replace` without assigning the output anywhere. The function does not modify the original string - it creates a new one - but you are not storing the returned value. Try this: ``` ... str = str.replace(/\r?\n|\r/g, " "); ... ``` --- However, if you actually want to remove *all* whitespace from around the input (not just newline characters at the end), you should use [`trim`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/Trim): ``` ... str = str.trim(); ... ``` It will likely be more efficient since it is already implemented in the Node.js binary.
You need to convert the data into **JSON** format with **JSON.parse(data)**. That will remove all newline characters and leave the data in **JSON** format.
1,266
55,010,607
I am new to machine learning and have spent some time learning python. I have started to learn TensorFlow and Keras for machine learning, and I literally have no clue nor any understanding of the process of making a model. How do you know which models to use? Which activation functions to use? The number of layers and the dimensions of the output space? I've noticed most models were the Sequential type and tend to have 3 layers; why is that? I couldn't find any resources that explain which to use, why we use them, and when. The best I could find was tensorflow's function details. Any elaboration or any resources to clarify would be greatly appreciated. Thanks.
2019/03/05
[ "https://Stackoverflow.com/questions/55010607", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9919507/" ]
`*m` is the same as `m[0]`, i.e. the first element of the array pointed to by `m` which is the character `'s'`. By using the `%d` format specifier, you're printing the given argument as an integer. The ASCII value of `'s'` is 115, which is why you get that value. If you want to print the string, use the `%s` format specifier (which expects a `char *` argument) instead and pass the pointer `m`. ``` printf("%s\n", m); ```
You have a few problems here. The first one is that you're trying to add three bytes to a char; a char is one byte. The second problem is that `char *m` is a pointer to an address and is not a modifiable lvalue. The only time you should use pointers is when you are trying to point to data, for example: ``` char byte = "A"; //WRONG char byte = 0x61; //OK char byte = 'A'; //OK //notice that when setting char byte = "A" you will receive an error, //the double quotations are for character arrays whereas single quotes //are used to identify a char char str[] = "ABCDEF"; char *m = str; printf("%02x", *m); //you are pointing to str[0] printf("%02x", *m[1]); //you are pointing to str[1] printf("%02x", *(m + 1)); //you are still pointing to str[1] //m is a pointer to the address in memory; calling it like // *m[1] or *(m + 1) is saying address + one byte //these examples are the same as calling (more or less) printf("%02x", str[0]); or printf("%02x", str[1]); ```
1,269
18,787,722
Is there a way to change the user directory according to the username, something like ``` os.chdir('/home/arn/cake/') ``` But imagine that I don't know the username on that system. How do I find out what the username is? I know that python doesn't have variables, so it's hard for me to get the username without a variable.
2013/09/13
[ "https://Stackoverflow.com/questions/18787722", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2641084/" ]
``` pwd.getpwnam(username).pw_dir ``` is the home directory of `username`. The user executing the program has username `os.getlogin()`. "I know that python doesn't have variables" -- that's nonsense. You obviously mean environment variables, which you can access using `os.getenv` or `os.environ`.
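Putting that together (a sketch; `'arn'` is just the username from the question's example path, and `os.path.expanduser` is another standard way to get the current user's home):

```
import os
import pwd

print(pwd.getpwnam('arn').pw_dir)      # home directory of a named user
print(os.path.expanduser('~'))         # current user's home directory
os.chdir(os.path.expanduser('~'))      # chdir without hard-coding a name
```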
Maybe there is a better answer but you can always use command calls: ``` import commands user_dir = commands.getoutput("cd; pwd") ```
1,271
32,702,954
I am trying to rename multiple mp3 files I have in a folder. They start with something like "1 Hotel California - The Eagles" and so on. I would like it to be just "Hotel California - The Eagles". Also, there could be a "05 Hotel California - The Eagles" as well, which means removing the number from different files would create duplicates, which is the problem I am facing. I want it to replace existing files/overwrite/delete one of them or whatever a solution might be. P.S. Adding "3" to the "1234567890 " would remove the "3" from the .mp3 extension. I am new to python, but here is the code I am using to implement this ``` import os def renamefiles(): list = os.listdir(r"E:\NEW") print(list) path = os.getcwd() print(path) os.chdir(r"E:\NEW") for name in list: os.rename(name, name.translate(None, "124567890 ")) os.chdir(path) renamefiles() ``` And here is the error I get: ``` WindowsError: [Error 183] Cannot create a file when that file already exists ``` Any help on how I could rename the files correctly would be highly appreciated!
2015/09/21
[ "https://Stackoverflow.com/questions/32702954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3803648/" ]
You need to verify that the names being changed actually changed. If the name doesn't have digits or spaces in it, the `translate` will return the same string, and you'll try to rename `name` to `name`, which Windows rejects. Try: ``` for name in list: newname = name.translate(None, "124567890 ") if name != newname: os.rename(name, newname) ``` Note, this will still fail if the file target exists, which you'd probably want if you were accidentally collapsing two names into one. But if you want silent replace behavior, if you're on Python 3.3 or higher, you can change `os.rename` to `os.replace` to silently overwrite; on earlier Python, you can explicitly `os.remove` before calling `os.rename`.
You just need to change directory to where the *.mp3 files are located and execute the 2 lines below with python: ``` import os,re for filename in os.listdir(): os.rename(filename, filename.strip(re.search("[0-9]{2}", filename).group(0))) ```
1,272
27,821,776
I want to set a value in an editbox of an android app using appium. And I am using a python script to automate it. But I am always getting some errors. My python script is ``` import os import unittest import time from appium import webdriver from time import sleep from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By import uiautomator import math element = self.driver.find_element_by_class_name('android.widget.EditText') element.set_value('qwerty') element = self.driver.find_element_by_name("Let's get started!") element.click() time.sleep(5) ``` Whenever I run it, I always get this error: ``` AttributeError: 'WebElement' object has no attribute 'set_value' ```
2015/01/07
[ "https://Stackoverflow.com/questions/27821776", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4429165/" ]
To type a value into a WebElement, use the Selenium WebDriver method `send_keys`: ``` element = self.driver.find_element_by_class_name('android.widget.EditText') element.send_keys('qwerty') ``` See the [Selenium Python Bindings documentation](http://selenium-python.readthedocs.org/en/latest/api.html?highlight=send_keys#selenium.webdriver.remote.webelement.WebElement.send_keys) for more details.
It's as simple as the error says: the element type has no `set_value(str)` or `setValue(str)` method. Maybe you meant ``` .setText('qwerty')? ``` because there is a setText method in an EditText widget: <http://developer.android.com/reference/android/widget/EditText.html>
1,282
44,307,988
I'm really new to python and trying to build a Hangman Game for practice. I'm using Python 3.6.1. The user can enter a letter and I want to tell him if there is any occurrence of that letter in the word and where it is. I get the total number of occurrences by using `occurrences = currentWord.count(guess)`. I have `firstLetterIndex = (currentWord.find(guess))` to get the index. Now I have the index of the first letter, but what if the word has this letter multiple times? I tried `secondLetterIndex = (currentWord.find(guess[firstLetterIndex, currentWordlength]))`, but that doesn't work. Is there a better way to do this? Maybe a built-in function I can't find?
2017/06/01
[ "https://Stackoverflow.com/questions/44307988", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7952215/" ]
One way to do this is to find the indices using a list comprehension: ``` currentWord = "hello" guess = "l" occurrences = currentWord.count(guess) indices = [i for i, a in enumerate(currentWord) if a == guess] print(indices) ``` output: ``` [2, 3] ```
I would maintain a second list of Booleans indicating which letters have been correctly matched. ``` >>> word_to_guess = "thicket" >>> matched = [False for c in word_to_guess] >>> for guess in "te": ... matched = [m or (guess == c) for m, c in zip(matched, word_to_guess)] ... print(list(zip(matched, word_to_guess))) ... [(True, 't'), (False, 'h'), (False, 'i'), (False, 'c'), (False, 'k'), (False, 'e'), (True, 't')] [(True, 't'), (False, 'h'), (False, 'i'), (False, 'c'), (False, 'k'), (True, 'e'), (True, 't')] ```
1,283
51,411,244
I have the following dictionary: ``` equipment_element = {'equipment_name', [0,0,0,0,0,0,0]} ``` I can't figure out what is wrong with this list. I'm trying to work backwards from this post [Python: TypeError: unhashable type: 'list'](https://stackoverflow.com/questions/13675296/python-typeerror-unhashable-type-list) but my key is not a list, my value is. What am I doing wrong?
2018/07/18
[ "https://Stackoverflow.com/questions/51411244", "https://Stackoverflow.com", "https://Stackoverflow.com/users/84885/" ]
It's not a dictionary, it's a set.
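To make that concrete (a small illustration, not part of the answer): `{a, b}` is a set literal and must hash every element, including the list, which is exactly where the "unhashable type" error comes from; `{a: b}` with a colon is the dict the question intended:

```
>>> {'equipment_name', [0, 0, 0]}   # set literal: every item is hashed
TypeError: unhashable type: 'list'
>>> {'equipment_name': [0, 0, 0]}   # dict literal: only the key is hashed
{'equipment_name': [0, 0, 0]}
```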
Maybe you were looking for this syntax ``` equipment_element = {'equipment_name': [0,0,0,0,0,0,0]} ``` or ``` equipment_element = dict(equipment_name=[0,0,0,0,0,0,0]) ``` or ``` equipment_element = dict([('equipment_name', [0,0,0,0,0,0,0])]) ``` This syntax is for creating a set: ``` equipment_element = {'equipment_name', [0,0,0,0,0,0,0]} ```
1,285
69,782,728
I am trying to read an image URL from the internet and be able to get the image onto my machine via python. I used the example from this blog post <https://www.geeksforgeeks.org/how-to-open-an-image-from-the-url-in-pil/>, which was <https://media.geeksforgeeks.org/wp-content/uploads/20210318103632/gfg-300x300.png>; however, when I try my own example it just doesn't seem to work. I've tried the HTTP version and it still gives me the 403 error. Does anyone know what the cause could be? ``` import urllib.request urllib.request.urlretrieve( "http://image.prntscr.com/image/ynfpUXgaRmGPwj5YdZJmaw.png", "gfg.png") ``` Output: urllib.error.HTTPError: HTTP Error 403: Forbidden
2021/10/30
[ "https://Stackoverflow.com/questions/69782728", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16956765/" ]
The server at `prntscr.com` is actively rejecting your request. There are many reasons why that could be. Some sites will check the caller's user agent to see if that's the case. In my case, I used [httpie](https://httpie.io/docs) to test whether it would allow me to download through a non-browser app. It worked. So then I simply made up a user-agent header to see if the problem was just the lack of one. ``` import urllib.request opener = urllib.request.build_opener() opener.addheaders = [('User-Agent', 'MyApp/1.0')] urllib.request.install_opener(opener) urllib.request.urlretrieve( "http://image.prntscr.com/image/ynfpUXgaRmGPwj5YdZJmaw.png", "gfg.png") ``` It worked! Now I don't know what logic the server uses. For instance, I tried a standard `Mozilla/5.0` and that did not work. You won't always encounter this issue (most sites are pretty lax in what they allow as long as you are reasonable), but when you do, try playing with the user-agent. If nothing works, try using the same user-agent as your browser, for instance.
I had the same problem and it was due to an expired URL. I checked the response text and I was getting "URL signature expired" which is a message you wouldn't normally see unless you checked the response text. This means some URLs just expire, usually for security purposes. Try to get the URL again and update the URL in your script. If there isn't a new URL for the content you're trying to scrape, then unfortunately you can't scrape for it.
1,287
3,757,738
Ok, say I have a string in python: ``` str="martin added 1 new photo to the <a href=''>martins photos</a> album." ``` *the string contains a lot more css/html in real world use* What is the fastest way to change the 1 (`'1 new photo'`) to say `'2 new photos'`? Of course, later the `'1'` may say `'12'`. Note, I don't know what the number is, so doing a replace is not acceptable. I also need to change `'photo'` to `'photos'` but I can just do a `.replace(...)`. Unless there is a neater, easier solution to modify both?
2010/09/21
[ "https://Stackoverflow.com/questions/3757738", "https://Stackoverflow.com", "https://Stackoverflow.com/users/258236/" ]
It sounds like this is what you want (although *why* is another question :^) ``` import re def add_photos(s,n): def helper(m): num = int(m.group(1)) + n plural = '' if num == 1 else 's' return 'added %d new photo%s' % (num,plural) return re.sub(r'added (\d+) new photo(s?)',helper,s) s = "martin added 0 new photos to the <a href=''>martins photos</a> album." s = add_photos(s,1) print s s = add_photos(s,5) print s s = add_photos(s,7) print s ``` ### Output ``` martin added 1 new photo to the <a href=''>martins photos</a> album. martin added 6 new photos to the <a href=''>martins photos</a> album. martin added 13 new photos to the <a href=''>martins photos</a> album. ```
since you're not parsing html, just use a regular expression ``` import re exp = "{0} added ([0-9]*) new photo".format(name) number = int(re.findall(exp, strng)[0]) ``` This assumes that you will always pass it a string with the number in it. If not, you'll get an `IndexError`. I would store the number and the format string though, in addition to the formatted string. When the number changes, remake the format string and replace your stored copy of it. This will be much mo'bettah' than trying to parse a string to get the count. In response to your question about the html mattering, I don't think so. You are not trying to extract information that the html is encoding, so you are not parsing html with regular expressions. This is just a string as far as that concern goes.
1,288
70,884,314
I'm trying to match all of the items in one list (list1) with some items in another list (list2). ``` list1 = ['r','g','g',] list2 = ['r','g','r','g','g'] ``` For each successive object in list1, I want to find all indices where that pattern shows up in list2: Essentially, I'd hope the result to be something along the lines of: "r is at indices 0,2 in list2" "r,g is at indices 1,3 in list2" (I only want to find the last index in the pattern) "r,g,g is at index 4 in list2" As for things I've tried: Well... a lot. The one that has gotten closest is this: `print([x for x in list1 if x not in set(list2)])` This doesn't work for me because it doesn't look for a **group of objects**, it only tests for **one object** in list1 being in list2. I don't really need the answer to be pythonic or even that fast. As long as it works! Any help is greatly appreciated! Thanks!
2022/01/27
[ "https://Stackoverflow.com/questions/70884314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18050991/" ]
Here's an attempt: ```py list1 = ['r','g','g'] list2 = ['r','g','r','g','g'] def inits(lst): for i in range(1, len(lst) + 1): yield lst[:i] def rolling_windows(lst, length): for i in range(len(lst) - length + 1): yield lst[i:i+length] for sublen, sublst in enumerate(inits(list1), start=1): inds = [ind for ind, roll in enumerate(rolling_windows(list2, sublen), start=sublen) if roll == sublst] print(f"{sublst} is in list2 at indices: {inds}") # ['r'] is in list2 at indices: [1, 3] # ['r', 'g'] is in list2 at indices: [2, 4] # ['r', 'g', 'g'] is in list2 at indices: [5] ``` Basically, it generates relevant sublists using two functions (`inits` and `rolling_windows`) and then compare them.
Pure python solution which is going to be pretty slow for big lists: ``` def ind_of_sub_list_in_list(sub: list, main: list) -> list[int]: indices: list[int] = [] for index_main in range(len(main) - len(sub) + 1): for index_sub in range(len(sub)): if main[index_main + index_sub] != sub[index_sub]: break else: # `sub` fits completely in `main` indices.append(index_main) return indices list1 = ["r", "g", "g"] list2 = ["r", "g", "g", "r", "g", "g"] print(ind_of_sub_list_in_list(sub=list1, main=list2)) # [0, 3] ``` Naive implementation with two for loops that check entry by entry the two lists.
1,290
1,454,941
I have run into a few examples of managing threads with the threading module (using Python 2.6). What I am trying to understand is how this example calls the "run" method, and where. I do not see it anywhere. The ThreadUrl class gets instantiated in the main() function as "t" and this is where I would normally expect the code to start the "run" method. Maybe this is not the preferred way of working with threads? Please enlighten me: ``` #!/usr/bin/env python import Queue import time import urllib2 import threading import datetime hosts = ["http://example.com/", "http://www.google.com"] queue = Queue.Queue() class ThreadUrl(threading.Thread): """Threaded Url Grab""" def __init__(self, queue): threading.Thread.__init__(self) self.queue = queue def run(self): while True: #grabs host from queue host = self.queue.get() #grabs urls of hosts and prints first 1024 bytes of page url = urllib2.urlopen(host) print url.read(10) #signals to queue job is done self.queue.task_done() start = time.time() def main(): #spawn a pool of threads, and pass them queue instance for i in range(1): t = ThreadUrl(queue) t.setDaemon(True) t.start() for host in hosts: queue.put(host) queue.join() main() print "Elapsed time: %s" % (time.time() - start) ```
2009/09/21
[ "https://Stackoverflow.com/questions/1454941", "https://Stackoverflow.com", "https://Stackoverflow.com/users/89528/" ]
The method run() is called behind the scenes by `threading.Thread` (Google the inheritance and polymorphism concepts of OOP). The invocation happens just after t.start() has been called. If you have access to threading.py (find it in your python folder), you will see a class named Thread. In that class there is a method called start(). start() calls `_start_new_thread(self.__bootstrap, ())`, a low-level thread start-up which runs a wrapper method called `__bootstrap()` in a new thread. `__bootstrap()` then calls `__bootstrap_inner()`, which does some more preparation before finally calling run(). Read the source, you can learn a lot. :D
`t.start()` creates a new thread in the OS and when this thread begins it will call the thread's `run()` method (or a different function if you provide a `target` in the `Thread` constructor)
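A minimal Python 2 sketch (my illustration, not code from the question) of both paths the answers describe: `start()` spawns the thread, and the thread then calls either the overridden `run()` or the supplied `target`:

```
import threading

class Demo(threading.Thread):
    def run(self):
        print 'run() was invoked by start()'

t = Demo()
t.start()       # spawns the thread, which then calls t.run()
t.join()

def target_func():
    print 'target() was invoked instead of an overridden run()'

w = threading.Thread(target=target_func)
w.start()
w.join()
```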
1,296
55,508,028
I'm trying to use the OCR method from Computer Vision to extract all the text from a specific image. Nevertheless, it doesn't return the info I know is there, because when I analyze the image directly with the option available on this page <https://azure.microsoft.com/es-es/services/cognitive-services/computer-vision/>, it does return the data. This is the image I'm trying to get the data from: <https://bitbucket.org/miguel_acevedo_ve/python-stream/raw/086279ad6885a490e521785ba288914ed98cfd1d/test.jpg> I have followed all the python tutorials available in the azure documentation site. ``` import matplotlib.pyplot as plt from matplotlib.patches import Rectangle from PIL import Image from io import BytesIO subscription_key = "<Subscription Key>" assert subscription_key vision_base_url = "https://westcentralus.api.cognitive.microsoft.com/vision/v2.0/" ocr_url = vision_base_url + "ocr" image_url = "https://bitbucket.org/miguel_acevedo_ve/python-stream/raw/086279ad6885a490e521785ba288914ed98cfd1d/test.jpg" '''image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/a/af/" + \ "Atomist_quote_from_Democritus.png/338px-Atomist_quote_from_Democritus.png" ''' headers = {'Ocp-Apim-Subscription-Key': subscription_key} params = {'mode' : 'Printed'} data = {'url': image_url} response = requests.post(ocr_url, headers=headers, params=params, json=data) response.raise_for_status() analysis = response.json() print(analysis) ``` and this is my current output: ``` {u'regions': [], u'textAngle': 0.0, u'orientation': u'NotDetected', u'language': u'unk'} ``` UPDATE: The solution is to use recognizeText, not the ocr function from Computer Vision.
2019/04/04
[ "https://Stackoverflow.com/questions/55508028", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11309249/" ]
I'll try to describe my thought process so you can follow. This function fits the pattern of creating an output list (here a string) from an input seed (here a string) by repeated function application (here dropping some elements). Thus I choose an implementation with `Data.List.unfoldr`. ``` unfoldr :: (b -> Maybe (a, b)) -> b -> [a] ``` Okay so, I need to turn the seed `b` into (`Maybe`) an output `a` and the rest of the string. I'll call this subfunction `f` and pass it into `unfoldr`. ``` printing s n = unfoldr f s where f b = case drop n b of [] -> Nothing (x:xs) -> Just (x,xs) ``` It turns out that attempting to take the head off the front of the list and returning a `Maybe` is also a common pattern. It's `Data.List.uncons`, so ``` printing s n = unfoldr (uncons . drop n) s ``` Very smooth! So I test it out, and the output is wrong! Your specified output actually eg. for `n=2` selects every 2nd character, ie. drops `(n-1)` characters. ``` printing s n = unfoldr (uncons . drop (n-1)) s ``` I test it again and it matches the desired output. Phew!
To demonstrate the Haskell language some alternative solutions to the accepted answer. Using **list comprehension**: ``` printing :: Int -> String -> String printing j ls = [s | (i, s) <- zip [1 .. ] ls, mod i j == 0] ``` Using **recursion**: ``` printing' :: Int -> String -> String printing' n ls | null ls' = [] | otherwise = x : printing' n xs where ls' = drop (n - 1) ls (x : xs) = ls' ``` In both cases I flipped the arguments so it is easier to do partial application: `printing 5` for example is a new function and will give each 5th character when applied to a string. Note with a minor modification they will work for any list ``` takeEvery :: Int -> [a] -> [a] ```
1,299
7,550,823
I've caught myself using this in place of a traditional for loop: ``` _.each(_.range(count), function(i){ ... }); ``` The disadvantage being creating an unnecessary array of size count. Still, I prefer the semantics of, for example, `_.each(_.range(10,0,-1), ...);` when iterating backwards. Is there any way to do a lazy iteration over range, as with Python's xrange?
2011/09/26
[ "https://Stackoverflow.com/questions/7550823", "https://Stackoverflow.com", "https://Stackoverflow.com/users/374943/" ]
Considering the [source of underscore.js](http://documentcloud.github.com/underscore/underscore.js) says the following about `range`: > > Generate an integer Array containing an arithmetic progression > > > I doubt there is a way to do lazy iteration without modifying the source.
If you don't mind getting your hands dirty, dig into the sources of the older but stable and feature-complete [MochiKit](http://mochi.github.com/mochikit/)'s [Iter](http://mochi.github.com/mochikit/doc/html/MochiKit/Iter.html) module. It tries to create something along the lines of Python's [itertools](http://docs.python.org/library/itertools.html#module-itertools).
1,300
23,175,165
In python, I am trying to check if a given list of values is currently sorted in increasing order and if there are adjacent duplicates in the list. If there are, the code should return True. I am not sure why this code does not work. Any ideas? Thanks in advance!! ``` def main(): values = [1, 4, 9, 16, 25] print("Return true if list is currently sorted in increasing order: ", increasingorder(values)) print("Return true if list contains two adjacent duplicate elements: ", twoadjacentduplicates(values)) def increasingorder(values): hlist = values a = hlist.sort() if a == hlist: return True else: return False def twoadjacentduplicates(values): ilist = values true = 0 for i in range(1, len(ilist)-1): if ilist[i] == ilist[i - 1] or ilist[i] == ilist[i + 1] : true = true + 1 if true == 0: return False if true > 0: return True main() ```
2014/04/19
[ "https://Stackoverflow.com/questions/23175165", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3495872/" ]
Your `increasingorder` function will almost certainly not work, because Python uses references, and the `sort` function modifies a list in-place and returns `None`. That means that after your call `a = hlist.sort()`, `hlist` will be sorted and `a` will be `None`, so they will not compare equal. You probably meant to do the following, which will return a sorted list instead. ``` a = sorted(hlist) ``` This function works: ``` def increasingorder(values): hlist = values a = sorted(hlist) if a == hlist: return True else: return False ``` You can of course simplify this down to a single line. ``` def increasingorder(values): return sorted(values) == values ``` Your second function looks logically correct, but can be simplified down to the following. ``` def twoadjacentduplicates(values): for i in range(0, len(values)-1): if values[i] == values[i + 1] : return True return False ```
Try creating a True/False function for each value check you want done, taking the list as a parameter. Then call each function in an "if 1 and 2, print 3" format. That may make thinking through the flow a little easier. Is this kind of what you were wanting? ``` def isincreasing(values): if values==sorted(values): return True return False def has2adjdup(values): for x in range(len(values)-1): if values[x]==values[x+1]: return True return False if isincreasing(values) and has2adjdup(values): print "True" ```
1,302
20,529,457
Please excuse this naive question of mine. I am trying to monitor memory usage of my python code, and have come across the promising [`memory_profiler`](https://pypi.python.org/pypi/memory_profiler) package. I have a question about interpreting the output generated by the @profile decorator. Here is a sample output that I get by running my dummy code below: **dummy.py** ``` from memory_profiler import profile @profile def my_func(): a = [1] * (10 ** 6) b = [2] * (2 * 10 ** 7) del b return a if __name__ == '__main__': my_func() ``` Calling dummy.py with "python dummy.py" returns the table below. ``` Line # Mem usage Increment Line Contents ======================================== 3 8.2 MiB 0.0 MiB @profile 4 def my_func(): 5 15.8 MiB 7.6 MiB a = [1] * (10 ** 6) 6 168.4 MiB 152.6 MiB b = [2] * (2 * 10 ** 7) 7 15.8 MiB -152.6 MiB del b 8 15.8 MiB 0.0 MiB return a ``` My question is: what does the 8.2 MiB in the first line of the table correspond to? My guess is that it is the initial memory usage by the python interpreter itself, but I am not sure. If that is the case, is there a way to have this baseline usage automatically subtracted from the memory usage of the script? Many thanks for your time and consideration! Noushin
2013/12/11
[ "https://Stackoverflow.com/questions/20529457", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1146372/" ]
According to [the docs](https://pypi.python.org/pypi/memory_profiler): > > The first column represents the line number of the code that has been profiled, the second column (Mem usage) the memory usage of the Python interpreter after that line has been executed. The third column (Increment) represents the difference in memory of the current line with respect to the last one. > > > So, that 8.2 MiB is the memory usage after the first line has been executed. That includes the memory needed to start up Python, load your script and all of its imports (including `memory_profiler` itself), and so on. There don't appear to be any documented options for removing that from each entry. But it wouldn't be too hard to post-process the results. Alternatively, do you really need to do that? The third column shows how much additional memory has been used after each line, and either that, or the sum of that across a range of lines, seems more interesting than the difference between each line's second column and the start.
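Since post-processing is suggested, one possible shape of it (entirely a sketch; the column layout is taken from the table in the question, and `report.txt` is a hypothetical saved copy of the profiler output):

```
# subtract the first "Mem usage" reading from every data row
baseline = None
with open('report.txt') as f:
    for line in f:
        parts = line.split()
        # data rows look like: <lineno> <num> MiB <num> MiB <code...>
        if len(parts) >= 5 and parts[2] == 'MiB':
            usage = float(parts[1])
            if baseline is None:
                baseline = usage
            print('%4s %10.1f MiB   %s'
                  % (parts[0], usage - baseline, ' '.join(parts[5:])))
```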
The difference in memory between lines is given in the second column or you could write a small script to process the output.
1,303
67,395,047
I have written several python scripts that will backtest trading strategies. I am attempting to deploy these through docker compose. The feeder container copies test files to a working directory where the backtester containers will pick them up and process them. The processed test files are then sent to a "completed work" folder. Some CSV files that the backtester outputs are then written to an NFS share on another computer. I should mention that this is all running on Ubuntu 20.04. As far as I can tell everything should be working, but for some reason the "docker-compose up" command hangs up on "Attaching to". There should be further output with print statements (I've unbuffered the Dockerfiles so those should show up). I've also left it running for a while to see if anything was getting processed and it looks like the containers never started up. I've looked at all the other threads dealing with this and have not found a solution that has worked to resolve this. Any insight is very very appreciated. Thanks. Here is the docker-compose file: ``` version: '3.4' services: feeder: image: feeder build: context: . dockerfile: ./Dockerfile.feeder volumes: - /home/danny/io:/var/lib/io worker1: image: backtester build: context: . dockerfile: ./Dockerfile volumes: - /home/danny/io/input/workers/worker1:/var/lib/io/input - /home/danny/io/input/completedwork:/var/lib/io/archive - /nfs/tests:/var/lib/tests worker2: image: backtester build: context: . dockerfile: ./Dockerfile volumes: - /home/danny/io/input/workers/worker2:/var/lib/io/input - /home/danny/io/input/completedwork:/var/lib/io/archive - /nfs/tests:/var/lib/tests worker3: image: backtester build: context: . dockerfile: ./Dockerfile volumes: - /home/danny/io/input/workers/worker3:/var/lib/io/input - /home/danny/io/input/completedwork:/var/lib/io/archive - /nfs/tests:/var/lib/tests ``` Here is the Dockerfile for the backtester: ``` # For more information, please refer to https://aka.ms/vscode-docker-python FROM python:3.8-slim-buster # Keeps Python from generating .pyc files in the container ENV PYTHONDONTWRITEBYTECODE=1 # Turns off buffering for easier container logging ENV PYTHONUNBUFFERED=1 # Install pip requirements COPY requirements.txt . RUN python -m pip install -r requirements.txt WORKDIR /app COPY . /app # Creates a non-root user with an explicit UID and adds permission to access the /app folder # For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers # RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app # USER appuser # During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug CMD ["python", "backtester5.py"] ``` Here is the Dockerfile for the feeder: ``` # For more information, please refer to https://aka.ms/vscode-docker-python FROM python:3.8-slim-buster # Keeps Python from generating .pyc files in the container ENV PYTHONDONTWRITEBYTECODE=1 # Turns off buffering for easier container logging ENV PYTHONUNBUFFERED=1 # Install pip requirements COPY requirements.txt . RUN python -m pip install -r requirements.txt WORKDIR /app COPY . /app # Creates a non-root user with an explicit UID and adds permission to access the /app folder # For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers # RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app # USER appuser # During debugging, this entry point will be overridden. 
For more information, please refer to https://aka.ms/vscode-docker-python-debug CMD ["python", "feeder.py"] ``` Here is the last message that shows up: ``` Attaching to project4_feeder_1, project4_worker1_1, project4_worker2_1, project4_worker3_1 ```
2021/05/05
[ "https://Stackoverflow.com/questions/67395047", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15838493/" ]
It's been three weeks with no responses, but I just wanted to update with what I've found. In all cases where I've left "docker-compose up" running it eventually started. At times it took 30 minutes, but it started every time.
I faced the same problem and fixed it with this tip: > > resolved It turns out if I run my docker command with "python3 -u" it will force python to run unbuffered. It was a buffering issue. > > > source: <https://www.reddit.com/r/docker/comments/gk262t/comment/fqos8j8/?utm_source=share&utm_medium=web2x&context=3>
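In case it helps, a minimal sketch of applying that tip to the Dockerfiles above (the script name is the one from the question; everything else would stay unchanged):

```
# Run the interpreter unbuffered so print output reaches the container log immediately
CMD ["python", "-u", "backtester5.py"]
```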
1,304
15,152,174
I am using python version 3. For homework, I am trying to allow five digits of input from the user, then find the average of those digits. I have figured that part out (spent an hour learning about the map function, very cool). The second part of the problem is to compare each individual element of the list to the average, then return the ones greater than the average. I think the "if any" at the bottom will compare the numbers in the list to the Average, but I have no idea how to pull that value out to print it. You guys rock for all the help. ``` #Creating a list my_numbers = [input("Enter a number: ") for i in range(5)] #Finding sum Total = sum(map(int, my_numbers)) #Finding the average Average = Total/5 print ("The average is: ") print (Average) print ("The numbers greater than the average are: ") if any in my_numbers > Average: ```
2013/03/01
[ "https://Stackoverflow.com/questions/15152174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2121886/" ]
You can add your custom class with your own CSS property like below:

```
$this->addElement(new Zend_Form_Element_Button(
                'send',
                array(
                    'label' => 'registrieren',
                    'class' => 'button-red',
                    'type' => 'submit',
                    'escape' => false,
                    'required' => false,
                    'ignore' => false,
                )
        ));
```
Yes just apply the appropriate CSS to the 'div', 'tag' or 'class' as required. [22 CSS Button Styling Tutorials and Techniques](http://speckyboy.com/2009/05/27/22-css-button-styling-tutorials-and-techniques/) may help.
1,305
2,683,810
I'd ideally like a vim answer to this: I want to change ``` [*, 1, *, *] to [*, 2, *, *] ``` Here the stars refer to individual characters in the substring, which I would like to keep unchanged. For example ``` [0, 1, 0, 1] to [0, 2, 0, 1] [1, 1, 1, 1] to [1, 2, 1, 1] ``` If people know how to do this in perl or python or whatever, that would be equally good. Cheers
2010/04/21
[ "https://Stackoverflow.com/questions/2683810", "https://Stackoverflow.com", "https://Stackoverflow.com/users/194675/" ]
The following should do what you want: ``` :%s/\(\[[^,]*, *\)\(\d\)\([^]]*\]\)/\=submatch(1) . (submatch(2)+1) . submatch(3)/ ``` In Vim, that is.
If those are strings in Python ``` >>> a = "[0, 1, 0, 1]" >>> b = a[:4] + '2' + a[5:] >>> b '[0, 2, 0, 1]' ``` Lists are a little more trivial: ``` >>> c = [0, 1, 0, 1] >>> c[1] = 2 >>> c [0, 2, 0, 1] >>> ```
1,306
30,083,603
Alright, here's a question that's eating me from inside, so any help is appreciated. I have a web service that returns a list of items. The number of items returned is governed by two variables, 'page' and 'per_page'. So a URL like

```
abc.com?page=10&per_page=100
```

will show the 10th page with 100 items in it. I have to query this service efficiently and only get items that were added after the last fetch. So say I have cached all the items up to #1024 and then 12 more items were added to it, making the count 1036. How do I calculate the page and per_page values so that I get all the added items in a single page, while keeping the per_page quantity as close to the number of newly added items as possible? For example, in this case per_page should be as close to, but no less than, 12. I already know the last count of the cache and the current total number of items. It's okay if the fetched page contains the previously cached items. I am trying to find the most optimum response and not the most accurate. The language I am using is python, but just an algorithm or pseudo-code would be very welcome.

Note: the service gives me the earliest items first. So the latest entries are always added to the last page
2015/05/06
[ "https://Stackoverflow.com/questions/30083603", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4484709/" ]
Is there a compelling reason to not over-request? `abc.com?page=2&per_page=1024` Just always set `page=2` and `per_page` = number of items cached. The only weird case is when the number of added elements is greater than the number of items cached, in which case you have to `abc.com?page=1&per_page=99999`
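A small sketch of that approach in Python (the function and variable names here are illustrative, not from the question):

```
def fetch_params(cached_count, total_count):
    """Return (page, per_page) covering every item added since the last fetch."""
    new_items = total_count - cached_count
    if new_items <= cached_count:
        # With per_page = cached_count, page 2 spans items
        # cached_count+1 .. 2*cached_count, which includes all new items.
        return 2, cached_count
    # More new items than cached ones: grab everything in one page.
    return 1, total_count
```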
Here's the code with a small bug fix to give the most optimal page size (the suggested code wouldn't return a page size that exactly divides the total count).

```
import itertools

def items_per_page(total_item_count, new_item_count):
    # smallest page size >= new_item_count whose last page holds all new items
    for i in itertools.count(new_item_count):
        if total_item_count % i >= new_item_count or total_item_count % i == 0:
            return i

total_count = 136  # including the new items
new_count = 12

ct_per_page = items_per_page(total_count, new_count)
page_num = -(-total_count // ct_per_page)  # ceiling division: the last page

print my_url + "?page=" + str(page_num) + "&per_page=" + str(ct_per_page)
```
1,313
44,967,366
Working in Python with ESRI's arcpy, I'm trying to sum values across multiple fields with an arcpy UpdateCursor. I'm trying to convert the None items to a 0, but I can't figure out a way to do it. I'm open to anything.

```
with arcpy.da.UpdateCursor(feature_class, score_fields) as cursor:
    for row in cursor:
        [0 if x==None else x+4 for x in row]
        print row
        row[len(score_fields)-1] = sum(row[i] for i in range(len(score_fields)))
        cursor.updateRow(row)
```

Returns:

```
[-4, -4, None, None, -4, None, -4, -4]
```

with error:

```
TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
```

Thanks!
2017/07/07
[ "https://Stackoverflow.com/questions/44967366", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8269863/" ]
You should update `row` by assigning the result of the list comprehension to `row`:

```
for row in cursor:
    row = [0 if x is None else x+4 for x in row]
```

Also, note that since there is only one `None` object, it's better to test with `is` than with `==`; that's more Pythonic and more performant.
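For completeness, a sketch of the whole loop with that fix applied; it mirrors the question's field handling, so treat it as untested:

```
with arcpy.da.UpdateCursor(feature_class, score_fields) as cursor:
    for row in cursor:
        # replace None with 0; real values get +4, as in the question
        row = [0 if x is None else x + 4 for x in row]
        # write the total into the last field, as the original code does
        row[len(score_fields) - 1] = sum(row[i] for i in range(len(score_fields)))
        cursor.updateRow(row)
```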
You can use an if statement to filter the `None` values in your list: ``` sum(row[i] for i in range(len(score_fields)) if row[i] is not None) ```
1,315
7,720,435
What is the best way to implement a tree structure (generic - not binary) in python? My intuition would have the following skeleton: ``` class TNode(self, data): #enter things for each individual node class TStructure(self): #enter code for implementing nodes that reference each other. ```
2011/10/11
[ "https://Stackoverflow.com/questions/7720435", "https://Stackoverflow.com", "https://Stackoverflow.com/users/382906/" ]
Any function which is going to be inlined must have its full source in the .di file. Any function which is going to be used in CTFE must not only have its full source in the .di file, but the full source of every function that it uses - directly or indirectly - must be available to the compiler. Also, because of how templates work, their full source must be in the .di file as well (which is the same as how templates must be in header files in C++). So, there are a number of cases where you *need* stuff to be in a .di file. Under exactly what circumstances the compiler chooses to strip stuff or not, I don't know (aside from the fact that templates automatically end up in .di files in their entirety because they *have* to). It could change depending on the compiler's current implementation and what optimizations it does. But at minimum, it's going to have to leave in small function bodies if it's going to do any inlining. Large function bodies and the bodies of small virtual functions (which can't be inlined anyway) will likely be stripped out however. But your example gives a small, non-virtual function, so dmd likely left it in so that it could inline any calls to it. If you want to see dmd strip a lot of stuff when generating a .di file, then you probably need to have large functions and/or use classes.
> > Hardly a paragon of optimization. > > > No, that **is** an optimization. The compiler will leave the implementation in the interface file if the implementation is small enough that it can later be inlined.
1,316
69,255,736
`TypeError: unsupported operand type(s) for /: 'str' and 'float'` I'm making a football game, and I get this whenever I try to run the following code to determine how far a play will go. `playdistance = round(random.uniform(float(rbs.get(possession)[-2:]/float(30.0))-2.5,float(rbs.get(possession)[-2:]/float(30.0))+5.5))` "rbs" is a dictionary containing all of the teams' running backs and overall stored like `'NYG':'Saquon Barkley 99'` where it contains the name and then how good that player is on a scale from 0-99. I stored it like this so that I can use [-2:] to get how good the player is, and [:-2] to get the name of the player. "possession" is the team that has the ball, so that I can pull the running back's name and skill from the dictionary previously mentioned. What I'm confused about is how I'm getting the previously mentioned error when both of the arguments in the division are floats, and neither are strings. I've tried converting the 30 divisor into a string, a float, and I've also done that for the first argument. I'm sure this is a pretty dumb question as I am pretty new to coding and python, but if someone could help me out that would be awesome.
2021/09/20
[ "https://Stackoverflow.com/questions/69255736", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16957998/" ]
You can't combine strings and numbers like that in Python. `rbs.get(possession)[-2:]` gives you a string, e.g. `'99'`, and `float(30.0)` gives you a number. The division of strings by numbers is not defined. You must convert the '99' to a number first before you can divide it by anything. Technically speaking, you only need to switch the parentheses around in your expression. Broken: ``` round(random.uniform(float(rbs.get(possession)[-2:]/float(30.0))-2.5,float(rbs.get(possession)[-2:]/float(30.0))+5.5)) ``` Working: ``` round(random.uniform(float(rbs.get(possession)[-2:])/float(30.0)-2.5,float(rbs.get(possession)[-2:])/float(30.0)+5.5)) ``` but practically speaking, use variables. The stuff above is all but unreadable. ``` player_rating = float(rbs.get(possession)[-2:]) low = player_rating / 30 - 2.5 high = player_rating / 30 + 5.5 playdistance = round(random.uniform(low, high)) ``` Also, once a variable in a calculation is a float, such as `player_rating` here, the *entire calculation* will yield a float. Things like `float(30.0)` are completely unnecessary.
When I indent your code to make it more readable, the problem becomes evident ``` playdistance = round( random.uniform( float( rbs.get(possession)[-2:] / float(30.0) # error 1 ) - 2.5, float( rbs.get(possession)[-2:] / float(30.0) # error 2 ) + 5.5 ) ) ``` `rbs.get(possession)[-2:]` is a string. I see you're *trying* to convert it to a float, but that needs to be done with ``` float(rbs.get(possession)[-2:]) / float(30.0) - 2.5 # ^ ``` not ``` float(rbs.get(possession)[-2:] / float(30.0)) - 2.5 # ^ ``` There's no need to parenthesize the division, because `/` has higher precedence than `-` (remember your order of operations)
1,317
12,948,935
```
$ ps aux | grep file1.py
xyz    6103  0.0  0.1  33476  6480 pts/1    S+   12:00   0:00 python file1.py
xyz    6188  0.0  0.1  33476  6472 pts/2    S+   12:05   0:00 python file1.py
xyz    7294  0.0  0.0   8956   872 pts/4    S+   12:49   0:00 grep --color=auto file1.py
```

Process 6103 started at 12:00, and 5 minutes later process 6188 started. I need to find both processes, 6103 and 6188.

pid_finder.py
=============

```
import psutil

PROCNAME = "file1.py"
process = []
for proc in psutil.process_iter():
    if proc.name == PROCNAME:
        print proc
```

But the above script printed out nothing. Does the "psutil" module have another option to find the pid of a script's process? psutil.test() gave the following output...

```
xyz   6103  0.0  0.2  33476  6480  /dev/pts/1  13:23  30:00  python
xyz   6188  0.0  0.2  33476  6480  /dev/pts/2  13:23  30:00  python
xyz   8831  0.0  1.0  430612 39796 ?           13:31  30:03  gedit
xyz   8833  0.0  ?    14540  808   ?           13:31  30:00  gnome-pty-helper
xyz   8835  0.0  0.1  23636  5008  /dev/pts/5  13:31  30:00  bash
xyz   9367  0.0  0.2  51580  7740  /dev/pts/4  13:42  30:00  python
```
2012/10/18
[ "https://Stackoverflow.com/questions/12948935", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1051068/" ]
What about something like this, if you are not worried about using **os.popen()**:

```
#!/usr/bin/python

import os

PROCNAME = "file1.py"

pids = []
for proc_data in os.popen('/bin/ps -eo pid,comm,args'):
    bits = proc_data.strip().split()
    (pid, comm) = bits[0:2]
    args = " ".join(bits[3:])
    if args == PROCNAME:
       pids.append(pid)

print pids
```

This should let you find things based on the args of the process. You could change it so that comm and args were one string if required:

```
pid = bits[0]
comm_and_args = " ".join(bits[1:])
```
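If you would rather stay with psutil, here is a hedged sketch that checks the full command line instead of the process name (with recent psutil versions `cmdline()` is a method; in very old versions it was an attribute):

```
import psutil

PROCNAME = "file1.py"

pids = []
for proc in psutil.process_iter():
    try:
        if PROCNAME in " ".join(proc.cmdline()):
            pids.append(proc.pid)
    except psutil.Error:
        pass  # the process vanished or we lack permission

print pids
```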
please read up on `pidof`: ``` man pidof ```
1,320
58,461,785
While studying data types in Python, I encountered the range data type and assigned one to a variable. However, using the type function to inspect it still reports a list data type. Am I missing something here? Please guide me. Thank you so much.

```
x = range(3)
print(type(x))
```

Output is as shown below:

```
C:\Python27>python.exe learn.py
<type 'list'>
```
2019/10/19
[ "https://Stackoverflow.com/questions/58461785", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8808220/" ]
It seems you are mixing up Python 3 and 2. These are two major versions of Python, and Python 3000 introduced many intentionally backwards-incompatible changes, including the workings of the range function.

In Python 2, the range function immediately expanded out to a list: `list_range = list(range(3))`.

In Python 3, range returns a lazy `range` object that produces its values on demand instead of materializing them all, which saves memory for large ranges.

Check the official docs here: <https://docs.python.org/3.0/whatsnew/3.0.html>
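A quick demonstration of the difference (run it under each interpreter):

```
x = range(3)
print(type(x))  # Python 2: <type 'list'>    Python 3: <class 'range'>
print(list(x))  # [0, 1, 2] under both
```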
With Python 2, range returned a list. If you run your code with Python 3, it returns the 'range' type as the output instead.
1,321
2,913,626
I need to parse a string `'Open URN: 100000 LA: '` and get 100000 from it. In Python the regexp `(?<=Open URN: )[0-9]+(?= LA:)` works fine, but in PHP it gives the following error:

```
preg_match(): Unknown modifier '['
```

I need it working in PHP, so please help me solve this problem and tell me about the differences between Python and PHP regexps.
2010/05/26
[ "https://Stackoverflow.com/questions/2913626", "https://Stackoverflow.com", "https://Stackoverflow.com/users/350981/" ]
You have to use [delimiters](http://www.php.net/manual/en/regexp.reference.delimiters.php) when you are using the [*Perl Compatible Regular Expressions* (PCRE) functions](http://www.php.net/manual/en/book.pcre.php) in PHP (to which [`preg_match()`](http://php.net/manual/en/function.preg-match.php) belongs). From the [documentation](http://www.php.net/manual/en/regexp.reference.delimiters.php): > > When using the PCRE functions, it is required that the pattern is enclosed by ***delimiters***. A delimiter can be any non-alphanumeric, non-backslash, non-whitespace character. > > > The reason for using delimiters is that you can add [**pattern modifiers**](http://www.php.net/manual/en/reference.pcre.pattern.modifiers.php) after the last delimiter, e.g. to make an case-insensitive match: ``` #[a-z]#i // # is the delimiter. ``` --- **Back to your problem:** In your case, PHP thinks the brackets `()` are your delimiters (yes, opening and closing brackets are valid delimiters, see the [documentation](http://www.php.net/manual/en/regexp.reference.delimiters.php)) and `?<=Open URN:` is your pattern . Then it encounters `[` and treats it as [pattern modifier](http://www.php.net/manual/en/reference.pcre.pattern.modifiers.php), but it is not a valid one. Your pattern with delimiter `%`: ``` preg_match('%(?<=Open URN: )[0-9]+(?= LA:)%', 'Open URN: 100000 LA: '); ``` There are a lot examples in the [documentation of `preg_match()`](http://php.net/manual/en/function.preg-match.php) --- **Python vs PHP** The only thing I found regarding regular expressions in Python is, that [Perl syntax is used](http://docs.python.org/howto/regex.html#regex-howto) but I don't know if the full syntax is supported. As already mentioned, PHP uses PCRE. [Description of the differences between PCRE and Perl regex.](http://www.php.net/manual/en/reference.pcre.pattern.differences.php)
Besides the differences already mentioned, I found one more: `re.match(r"\s", "a b")` in Python finds no match, while `preg_match("/\s/", "a b")` finds the space symbol. The reason is that Python's `re.match` only matches at the *beginning* of the string; the equivalent of PHP's `preg_match`, which searches anywhere in the string, is `re.search`.
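A short demonstration of that anchoring difference:

```
import re

print(re.match(r"\s", "a b"))   # None: match() is anchored at position 0
print(re.search(r"\s", "a b"))  # a match object: search() scans the whole string
```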
1,322
22,286,332
I am parsing log files of size 1 to 10 GB using Python 3.2. I need to search for a line matching a specific regex (some kind of timestamp), and I want to find the last occurrence. I have tried to use:

```
for line in reversed(list(open("filename")))
```

which resulted in very bad performance (in the good cases) and a MemoryError in the bad cases. In the thread [Read a file in reverse order using python](https://stackoverflow.com/questions/2301789/read-a-file-in-reverse-order-using-python) I did not find any good answer. I found the following solution, [python head, tail and backward read by lines of a text file](https://stackoverflow.com/questions/5896079/python-head-tail-and-backward-read-by-lines-of-a-text-file/5896210#5896210), very promising; however, it does not work on Python 3.2, failing with the error:

```
NameError: name 'file' is not defined
```

I later tried to replace File(file) with File(TextIOWrapper), as this is the object the built-in function open() returns; however, that resulted in several more errors (I can elaborate if someone suggests this is the right way :))
2014/03/09
[ "https://Stackoverflow.com/questions/22286332", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3399166/" ]
Looks like you have a mistake in your code:

```
echo $row['PersonnelD'];
```

Shouldn't it be the following?

```
echo $row['PersonnelID'];
```
Check the mysql_fetch_assoc() call; maybe its parameter is empty, so the code never enters the while loop.
1,323
38,092,236
(windows 7, python 2.7.3) Here is my code:

```
from Tkinter import *
root = Tk()
root.geometry('400x400')
Frame(root, width=20, height=20, bg='red').pack(expand=NO, fill=None, side=LEFT)
Label(root, width=20, height=20, bg='black').pack(expand=NO, fill=None, side=LEFT)
root.mainloop()
```

And the result is like this:

[![enter image description here](https://i.stack.imgur.com/VuQoV.png)](https://i.stack.imgur.com/VuQoV.png)

I set the same width and height on the Frame and the Label, but they show different sizes. What's more, the Label is not even a square. Please explain this to me, and show me how to make them the same size.
2016/06/29
[ "https://Stackoverflow.com/questions/38092236", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6032273/" ]
**Short answer:** 20 is the same as 20, but 20 meters is not the same as 20 kilometers. **Long answer:** The result you got is not as weird as you may think because the `width` and `height` options of `Tkinter.Frame()` are measured in terms of **pixels** whereas in `Tkinter.Label()`: * `width`: defines the width of the label in **characters** * `height`: defines the height of the label in **lines** [Reference.](http://infohost.nmt.edu/tcc/help/pubs/tkinter/web/label.html)
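If you want the Label to be exactly 20x20 pixels like the Frame, one hedged trick is to give it an image: when a Label displays an image, its width and height are interpreted in pixels. A sketch, not tested on every platform:

```
from Tkinter import *

root = Tk()
root.geometry('400x400')
px = PhotoImage(width=1, height=1)  # tiny placeholder image
Frame(root, width=20, height=20, bg='red').pack(side=LEFT)
Label(root, image=px, width=20, height=20, bg='black').pack(side=LEFT)
root.mainloop()
```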
As far as I know, Label is used for text, so Label() and Frame() may interpret the width and height parameters differently; correct me if I am wrong. Example: change width and height inside Label() to 1 and you will see the space for one character filled with black in the Tk window, like `Label(root, width=1, height=1, bg='black').pack(expand=NO, fill=None, side=LEFT)`.
1,328
59,028,392
I have my docker containers up and running. There is one container running some python code and I found that it is causing some bug. I want to add some lines of code (mainly more logs) to a python script within that particular container. I want to just go into the container by `docker exec -ti container_name bash` and start to edit code by `nano my_python_script.py`. Does the running container pick up these changes automatically, on-the-fly? Or do I need to do something for these changes to come into effect, i.e. to print the new logging information?
2019/11/25
[ "https://Stackoverflow.com/questions/59028392", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3703783/" ]
A couple of facts about docker containers:

1. A Docker container usually lives only as long as the process it runs.
2. Docker container is immutable, so whatever changes you do in filesystem of the container itself won't survive the restart of container (I'm not talking about volumes, that's more advanced stuff).

Based on these facts:

The question basically boils down to whether the changes in `my_python_script.py` that you make "on-the-fly" require a restart of the Python process. And this really depends on what/how exactly you run Python.

If it requires a restart, then no, you won't be able to see the logs. The restart won't help either, because of fact "2": you'll lose the changes (the additional log prints in this case).

If Python is able to dynamically reload the script and run it **within the same process** (without a restart of the container) then you can do that.
Answering because there's some misinformation in other answers here. The correct answer is in the comment from [MyTwoCents](https://stackoverflow.com/questions/59028392/do-docker-containers-pick-up-code-changes-on-the-fly#comment104300981_59028392):

> It will behave same way as it would when you do it on your system. So if you edit nano my_python_script.py will it do change automatically on your system?

Put quite simply, if your application dynamically updates itself when the files on the filesystem change, then the same will happen when you modify those files inside of a container.

---

The misinformation in Mark's answer concerns me:

> Docker container is immutable, so whatever changes you do in filesystem of the container itself won't survive the restart of container

**That is not accurate**. Docker images are stored as a collection of filesystem layers that are immutable. Containers are ephemeral; files changed inside the container will revert when the container is deleted and replaced with a new container. However, restarting a container effectively stops and restarts the process inside the container with the same filesystem state of that container. Any changes that happen to the container's filesystem are maintained for the life of the container. Volumes are used when you want changes to survive beyond the life of a single container.

---

The correct answer is that you most likely need to restart the python process inside the container to see the filesystem changes, and stopping the python process will stop the container. It becomes an awkward process to keep `exec`'ing into a container to make changes and then exit to restart the container for the changes to appear. Therefore, a more typical developer workflow is to mount the filesystem of code into the container so you can make changes directly on your host, where those changes will not be lost, and then restart the container. Since python does not require compiling your code into a binary, you'll see your changes on every restart.

If your Dockerfile contained lines like:

```
FROM python
COPY . /code
CMD python /code/my_python_script.py
```

You could make that a volume with:

```
docker run --name container_name -v "$(pwd):/code" image_name
```

And then to restart with changes:

```
docker restart container_name
```
1,329
17,771,131
While following a tutorial for python, I got to know that we can use *print* for a variable name, and it works fine. But after assigning the print variable, how do we get back the original print function? ``` >>> print("Hello World!!") Hello World!!! >>> print = 5 >>> print("Hi") ``` Now, the last call gives the error **TypeError: 'int' object is not callable**, since now print has the integer value 5. But, how do we get back the original functionality of print now? Should we use the class name for the print function or something? As in, `SomeClass.print("Hi")`? Thanks in advance.
2013/07/21
[ "https://Stackoverflow.com/questions/17771131", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1936532/" ]
``` >>> print = 5 >>> print = __builtins__.print >>> print("hello") hello ```
You can actually delete the variable so the built-in function will work again: ``` >>> print = 5 >>> print('cabbage') Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'int' object is not callable >>> del print >>> print('cabbage') cabbage ```
1,330
66,730,834
I have created a conda environment and activated it already. Then inside the `use_cases/` directory I execute: `pip install -e use_case_b` (<https://github.com/geoHeil/dagster-demo/tree/master/use_cases>): ``` ... ... Installing collected packages: use-case-b Attempting uninstall: use-case-b Found existing installation: use-case-b 0.0.0 Uninstalling use-case-b-0.0.0: Successfully uninstalled use-case-b-0.0.0 Running setup.py develop for use-case-b Successfully installed use-case-b ``` Now when inside the `use_cases/` directory: ``` python import use_case_b ``` works fine. When switching to a different directory like: `/` (= the root of the repository I get an error message of: ``` ModuleNotFoundError: No module named 'use_case_b' ``` Why is it working once and failing in the second place? Could it be that it is not even working in the first place and only importing the sub\_directory due to the `__init__.py` file? How can I get the python package properly install into the virtual environment? FYI: here you can find the full project <https://github.com/geoHeil/dagster-demo>
2021/03/21
[ "https://Stackoverflow.com/questions/66730834", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2587904/" ]
When you get an error, please post it along with your question. When you are getting an error, it means that something is wrong with your code, and most likely not the Flutter engine. Both are important for debugging: the error and your code.

Try changing this

```
QuerySnapshot _getPost = await _firestore
    .collection('post')
    .doc(widget.currentUser)
    .collection('userPost')
    .orderBy('timeStamp', descending: true)
    .get();

setState(() {
  pleaseWait = true;
  postLenght = _getPost.docs.length;
  post = _getPost.docs.map((e) => Post.fromDocument(e)).toList();
  print(postLenght);
});
```

into this:

```
QuerySnapshot _getPost = await _firestore
    .collection('post')
    .doc(widget.currentUser)
    .collection('userPost')
    .orderBy('timeStamp', descending: true)
    .get();

if (_getPost.docs.isNotEmpty) {
  List<Post> tempPost = _getPost.docs.map((e) => Post.fromDocument(e)).toList();
  setState(() {
    pleaseWait = true;
    postLenght = _getPost.docs.length;
    post = tempPost;
    print(postLenght);
  });
} else {
  print('The List is empty');
}
```

You are not checking whether the query result has data or not. If it's empty, you will pass an empty List `post` down your tree, and you will get the error you are having.
For people facing similar issues, let me tell you what I found in my code:

***The error says that children is null, not empty!***

So if you are getting the children for a parent widget like Row or Column from a separate method, ***just check that you are returning the constructed child widget from the method***.

```
Row(
  children: getMyRowChildren()
)
.
.
.
getMyRowChildren(){
  Widget childWidget = ... //constructed

  return childWidget; //don't forget to return!
}
```

Else it would return null, which results in the children being null, and we get the error mentioned above!
1,332
55,355,504
I have a txt file that contains "blocks of consecutive lines", each block representing one observation, whereas the different lines within each block represent the value of one variable of the corresponding observation. I worked my way to here using python and I would like to read the .txt file into Stata. Therefore, I would like to remove the line breaks within each block to get one single line containing all the information for one block/observation (delimited by commas). The line breaks between blocks/observations, however, should persist. The order of the information on variables is the same for all blocks/observations, but the number of variables per observation varies (at the lower end).

my .txt (encoding = 'ascii') file looks like this:

```
obs1_var1,
obs1_var2,
obs1_var3,
obs2_var1,
obs2_var2,
obs2_var3,
obs2_var4,
obs3_var1,
obs3_var2,
obs3_var3,
```
2019/03/26
[ "https://Stackoverflow.com/questions/55355504", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11259841/" ]
Try ``` with open('my_file.txt','r') as f: # lines should hold the data with no new lines lines = [l.strip() for l in f.readlines()] ```
You can extend balderman's answer:

```
with open('filename.txt','r') as f:
    lines = [l.strip() for l in f.readlines()]
```

This part creates the list of lines of the whole file. To build a single line for the variables in each block, you can use a dictionary to group the lines by block. Example:

```
block_vars = {}
for line in lines:
    block_name = line[:4]
    if block_name not in block_vars:
        block_vars[block_name] = []  # one list per block to store its lines
    block_vars[block_name].append(line)  # append the line to the list for its block
```

The `block_vars` dictionary contains the list of lines associated with each particular block. You can then use *'delimiter'.join(list_name)* to get a single-line output.
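As a hedged continuation of that idea (still assuming, as above, that the first four characters identify the block), the final one-line-per-block output could look like:

```
for block_name, block_lines in block_vars.items():
    # each source line already ends with a comma, so strip it before joining
    print(', '.join(l.rstrip(',') for l in block_lines))
```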
1,333
45,715,062
I am trying to create a domain filter that should look like this:

```
(Followup date < today) AND (customer = TRUE OR user_id = user.id)
```

I did it like the following:

```
[('follow_up_date', '<=', datetime.datetime.now().strftime('%Y-%m-%d 00:00:00')),['|', ('customer', '=', 'False'),('user_id', '=', 'user.id')]]
```

The first part (the time filter) works great if it stands alone, but when I connect it with the second part like I did in the example above, it gives me this error:

```
File "/usr/lib/python2.7/dist-packages/openerp/osv/expression.py", line 308, in distribute_not
    elif token in DOMAIN_OPERATORS_NEGATION:
TypeError: unhashable type: 'list'
```

What's wrong? How can I express what I want as a correct domain filter?

Thank you for your help in advance :)
2017/08/16
[ "https://Stackoverflow.com/questions/45715062", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7126858/" ]
Odoo uses the [polish notation](https://en.wikipedia.org/wiki/Polish_notation). If you'd like to use the logical expression `(A) AND (B OR C)` as a domain, that means you will have to use: `AND A OR B C`. If you'd like more information about polish notation please check the link. This means that, if I understand the question correctly, you will need this:

```
['&', ('follow_up_date', '<=', datetime.datetime.now().strftime('%Y-%m-%d 00:00:00')),'|', ('customer', '=', 'False'),('user_id', '=', 'user.id')]
```
Try without brackets in the second expression:

```
[('follow_up_date', '<=', datetime.datetime.now().strftime('%Y-%m-%d 00:00:00')),'|', ('customer', '=', 'False'),('user_id', '=', 'user.id')]
```

I hope this helps you.
1,334
66,029,297
I parsed a function from python which converts for ex. "5m" to 300 seconds (integer). My question is about the regex expression I did, because I know it's slow compared to anything else. What is the best way to get the integer part of the `timeframe` and the string part as well into a separate string? Basically, what I did, but efficiently. Not like it really matters in my situation, but I like it to be strict. ```py def parse_timeframe(timeframe): amount = int(timeframe[0:-1]) unit = timeframe[-1] if 'y' == unit: scale = 60 * 60 * 24 * 365 elif 'M' == unit: scale = 60 * 60 * 24 * 30 elif 'w' == unit: scale = 60 * 60 * 24 * 7 elif 'd' == unit: scale = 60 * 60 * 24 elif 'h' == unit: scale = 60 * 60 elif 'm' == unit: scale = 60 elif 's' == unit: scale = 1 else: raise NotSupported('timeframe unit {} is not supported'.format(unit)) return amount * scale ``` ```cs public static int ParseTimeFrameToSeconds(this string timeframe) { var amount = Convert.ToInt32(Regex.Match(timeframe, @"\d+").Value); var unit = Regex.Match(timeframe, @"[a-zA-Z]+").Value; int scale; if (unit == "y") scale = 60 * 60 * 24 * 365; else if (unit == "M") scale = 60 * 60 * 24 * 30; else if (unit == "w") scale = 60 * 60 * 24 * 7; else if (unit == "d") scale = 60 * 60 * 24; else if (unit == "h") scale = 60 * 60; else if (unit == "m") scale = 60; else if (unit == "s") scale = 1; else throw new NotSupportedException($"Timeframe unit {unit} is not supported."); return amount * scale; } ```
2021/02/03
[ "https://Stackoverflow.com/questions/66029297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13677853/" ]
There's no need to use [regexes](https://blog.codinghorror.com/regular-expressions-now-you-have-two-problems/) for this. Just translate *what the existing python code does*: access substrings of your input.

```
var amount = int.Parse(timeframe.Substring(0, timeframe.Length - 1));
var unit = timeframe.Substring(timeframe.Length - 1);
```
Alternatively, if you have a properly formatted time span string, you can parse it to `TimeSpan` and then use the `TotalSeconds` prop. That will also get rid of all the if-else ifs that you have.

```
if (TimeSpan.TryParse(timeframe, out var timeSpan))
{
    Console.WriteLine(timeSpan.TotalSeconds);
}
```

\*Edit: As is, you assume every year is 365 days and every month is 30 days. Your calculation might be misleading anyway.
1,335
8,296,617
I hope the title itself was quite clear. I am solving the 2D lid-driven cavity (square domain) problem using the fractional step method, finite difference formulation (Navier-Stokes primitive variable form). I have got the u and v components of velocity over the entire domain; without manually calculating streamlines, is there a command or plotting tool which does the job for me? I hope this question is relevant enough to programming, as I need a tool for plotting streamlines without explicitly calculating them. I have solved the same problem in stream-vorticity NS form, where I just had to take a contour plot of the stream function to get the streamlines. I hope that tool or plotter is a Python library, and moreover installable in Fedora (I can compromise and use Mint) without much fuss! I would be grateful if someone points out the library and the relevant command (it would save a lot of time).
2011/11/28
[ "https://Stackoverflow.com/questions/8296617", "https://Stackoverflow.com", "https://Stackoverflow.com/users/616809/" ]
Have a look at [Tom Flannaghan's `streamplot` function](http://www.atm.damtp.cam.ac.uk/people/tjf37/streamplot.py). The [relevant thread on the user's list is here](http://old.nabble.com/Any-update-on-streamline-plot-td30902670.html), and there's also another [similar code snippet by Ray Speth](http://web.mit.edu/speth/Public/streamlines.py) that does things slightly differently.

If you have problems with speed, it might be more efficient to use some of scipy's integration functionality instead of the pure-`numpy` integration functions used in both of these examples. I haven't tried it, though, and these deliberately avoid a dependency on `scipy`. (`scipy` is a rather heavy dependency compared to `numpy`.)

From its example plot:

```
import matplotlib.pyplot as plt
import numpy as np
from streamplot import streamplot

x = np.linspace(-3,3,100)
y = np.linspace(-3,3,100)
u = -1-x**2+y[:,np.newaxis]
v = 1+x-y[:,np.newaxis]**2
speed = np.sqrt(u*u + v*v)

plt.figure()
plt.subplot(121)
streamplot(x, y, u, v, density=1, INTEGRATOR='RK4', color='b')
plt.subplot(122)
streamplot(x, y, u, v, density=(1,1), INTEGRATOR='RK4', color=u,
           linewidth=5*speed/speed.max())
plt.show()
```

![enter image description here](https://i.stack.imgur.com/lay5c.png)

Another option is to use `VTK`. It's accelerated 3D plotting, so making a 2D plot will require setting the camera properly (which isn't too hard), and you won't be able to get vector output. Mayavi, tvtk, and mlab provide pythonic wrappers for VTK. It has lots of functionality along these lines.

The easiest way to use VTK to plot streamlines from numpy arrays is to use [`mayavi.mlab.flow`](http://github.enthought.com/mayavi/mayavi/auto/mlab_helper_functions.html#mayavi.mlab.flow). I'll skip an example for the moment, but if you want to explore using VTK to do this, I can add one.
Have a look at `matplotlib`'s `quiver`: <http://matplotlib.sourceforge.net/examples/pylab_examples/quiver_demo.html>
1,336
71,825,406
I tried to search for the answer on Stack Overflow and to find something in the GitHub [issues](https://github.com/googleapis/python-logging/issues), but I found nothing. Can anyone give me a tip to solve the problem? I get the following error when trying to install Google Cloud Logging by pip with docker:

```
test_web | import google.cloud.logging
test_web | File "/root/site/lib/python3.8/site-packages/google/cloud/logging/__init__.py", line 18, in <module>
test_web | from google.cloud.logging_v2 import __version__
test_web | File "/root/site/lib/python3.8/site-packages/google/cloud/logging_v2/__init__.py", line 25, in <module>
test_web | from google.cloud.logging_v2.client import Client
test_web | File "/root/site/lib/python3.8/site-packages/google/cloud/logging_v2/client.py", line 25, in <module>
test_web | from google.cloud.logging_v2._helpers import _add_defaults_to_filter
test_web | File "/root/site/lib/python3.8/site-packages/google/cloud/logging_v2/_helpers.py", line 25, in <module>
test_web | from google.cloud.logging_v2.entries import LogEntry
test_web | File "/root/site/lib/python3.8/site-packages/google/cloud/logging_v2/entries.py", line 33, in <module>
test_web | from google.iam.v1.logging import audit_data_pb2 # noqa: F401
test_web | File "/root/site/lib/python3.8/site-packages/google/iam/v1/logging/audit_data_pb2.py", line 32, in <module>
test_web | from google.iam.v1 import policy_pb2 as google_dot_iam_dot_v1_dot_policy__pb2
test_web | File "/root/site/lib/python3.8/site-packages/google/iam/v1/policy_pb2.py", line 39, in <module>
test_web | _POLICY = DESCRIPTOR.message_types_by_name["Policy"]
test_web | AttributeError: 'NoneType' object has no attribute 'message_types_by_name'
```

I have it running in a virtual environment with this in requirements.txt: `google-cloud-logging==3.0.0`
2022/04/11
[ "https://Stackoverflow.com/questions/71825406", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11296376/" ]
Upgrade your protobuf to 3.20.1. I'm unsure why it's happening. Here's the git issue: <https://github.com/googleapis/python-iam/issues/185>
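For reference, the pinned upgrade with pip would be (version taken from the answer above):

```
pip install --upgrade protobuf==3.20.1
```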
I had the same error while using `google-cloud-secret-manager` and `poetry`. Removing unused `gcloud` dependency as well as `google-cloud-secret-manager` and reinstalling `google-cloud-secret-manager` solved it. ``` poetry remove gcloud poetry remove google-cloud-secret-manager poetry add google-cloud-secret-manager ```
1,338
8,618,984
The server only allows access to the videos if the user agent is QT. How do I add it to this script?

```
#!/usr/bin/env python
from os import pardir, rename, listdir, getcwd
from os.path import join
from urllib import urlopen, urlretrieve, FancyURLopener

class MyOpener(FancyURLopener):
    version = 'QuickTime/7.6.2 (verqt=7.6.2;cpu=IA32;so=Mac 10.5.8)'

def main():
    # Set up file paths.
    data_dir = 'data'
    ft_path = join(data_dir, 'titles.txt')
    fu_path = join(data_dir, 'urls.txt')

    # Open the files
    try:
        f_titles = open(ft_path, 'r')
        f_urls = open(fu_path, 'r')
    except:
        print "Make sure titles.txt and urls.txt are in the data directory."
        exit()

    # Read file contents into lists.
    titles = []
    urls = []
    for l in f_titles:
        titles.append(l)
    for l in f_urls:
        urls.append(l)

    # Create a dictionary and download the files.
    downloads = dict(zip(titles, urls))
    for title, url in downloads.iteritems():
        fpath = join(data_dir, title.strip().replace('\t',"").replace(" ", "_"))
        fpath += ".mov"
        urlretrieve(url, fpath)

if __name__ == "__main__":
    main()
```
2011/12/23
[ "https://Stackoverflow.com/questions/8618984", "https://Stackoverflow.com", "https://Stackoverflow.com/users/891489/" ]
I found the solution. Practically, I needed to set the layout_width of each container that has the weight property to 0px.
From what I can tell, it seems your weightSum should be 12, not 10. First LinearLayout has weight=2, the second weight=8 and the third weight=2. It might solve your problem!
1,339
29,985,453
I am getting this strange to me error when installing Keras on an Ubuntu server: ``` Cythonizing /tmp/easy_install-qQggXs/h5py-2.5.0/h5py/utils.pyx In file included from /usr/local/lib/python2.7/dist-packages/numpy/core/include/numpy/ndarraytypes.h:1804:0, from /usr/local/lib/python2.7/dist-packages/numpy/core/include/numpy/ndarrayobject.h:17, from /usr/local/lib/python2.7/dist-packages/numpy/core/include/numpy/arrayobject.h:4, from /tmp/easy_install-qQggXs/h5py-2.5.0/h5py/api_compat.h:26, from /tmp/easy_install-qQggXs/h5py-2.5.0/h5py/defs.c:287: /usr/local/lib/python2.7/dist-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp] #warning "Using deprecated NumPy API, disable it by " \ ^ In file included from /tmp/easy_install-qQggXs/h5py-2.5.0/h5py/defs.c:287:0: /tmp/easy_install-qQggXs/h5py-2.5.0/h5py/api_compat.h:27:18: fatal error: hdf5.h: No such file or directory #include "hdf5.h" ^ compilation terminated. error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 ``` Any ideas how to fix this issue? I've downloaded Keras repository from <https://github.com/fchollet/keras>, and used this command to install it: ``` sudo python setup.py install ``` My Linux specifications are: * *Distributor ID:* Ubuntu * *Description:* Ubuntu 14.04.2 LTS * *Release:* 14.04 * *Codename:* trusty
2015/05/01
[ "https://Stackoverflow.com/questions/29985453", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4262897/" ]
you can use sparql query to Dbpedia to get result for you particular resource, which here is [Vienna](http://dbpedia.org/page/Vienna). To get all property and their value of resource Vienna use can use ``` select ?property ?value where { <http://dbpedia.org/resource/Vienna> ?property ?value } ``` [Check here](http://dbpedia.org/sparql/?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=select%20%3Fproperty%20%3Fvalue%20where%20%7B%20%0D%0A%20%3Chttp%3A%2F%2Fdbpedia.org%2Fresource%2FVienna%3E%20%3Fproperty%20%3Fvalue%0D%0A%20filter%28langmatches%28lang%28%3Fvalue%29%2C%22en%22%29%29%0D%0A%7D&format=text%2Fhtml&timeout=30000&debug=on) You can choose specific properties of resource using sparql query like this . ``` select ?country ?density ?timezone ?thumbnail where { <http://dbpedia.org/resource/Vienna> dbpedia-owl:country ?country; dbpedia-owl:populationDensity ?density; dbpedia-owl:timeZone ?timezone; dbpedia-owl:thumbnail ?thumbnail. } ``` [Check](http://dbpedia.org/sparql?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=select%20%3Fcountry%20%3Fdensity%20%3Ftimezone%20%3Fthumbnail%20where%20%7B%20%0D%0A%20%3Chttp%3A%2F%2Fdbpedia.org%2Fresource%2FVienna%3E%20dbpedia-owl%3Acountry%20%3Fcountry%3B%0D%0Adbpedia-owl%3ApopulationDensity%20%3Fdensity%3B%0D%0Adbpedia-owl%3AtimeZone%20%3Ftimezone%3B%0D%0Adbpedia-owl%3Athumbnail%20%3Fthumbnail.%0D%0A%0D%0A%7D&format=text%2Fhtml&timeout=30000&debug=on)
> > *But what I want is something … to get all items where any property fits "Vienna"[.]* > > > In SPARQL this is very easy. E.g., on [DBpedia's SPARQL endpoint](http://dbpedia.org/sparql/): ``` select ?resource where { ?resource ?property dbpedia:Vienna } ``` [SPARQL results (limited to 100)](http://dbpedia.org/sparql/?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=select%20%3Fresource%20where%20%7B%0D%0A%20%20%3Fresource%20%3Fproperty%20dbpedia%3AVienna%20%0D%0Afilter%28%20%3Fproperty%20!%3D%20owl%3AsameAs%20%29%0D%0A%7D%0D%0Alimit%20100&format=text%2Fhtml&timeout=30000&debug=on)
1,341
31,466,769
Similar to this question [How to add an empty column to a dataframe?](https://stackoverflow.com/questions/16327055/how-to-add-an-empty-column-to-a-dataframe), I am interested in knowing the best way to add a column of empty lists to a DataFrame. What I am trying to do is basically initialize a column and, as I iterate over the rows to process some of them, add a filled list in this new column to replace the initialized value. For example, if below is my initial DataFrame:

```
df = pd.DataFrame(d = {'a': [1,2,3], 'b': [5,6,7]}) # Sample DataFrame
>>> df
   a  b
0  1  5
1  2  6
2  3  7
```

Then I want to ultimately end up with something like this, where each row has been processed separately (sample results shown):

```
>>> df
   a  b           c
0  1  5      [5, 6]
1  2  6      [9, 0]
2  3  7   [1, 2, 3]
```

Of course, if I try to initialize like `df['e'] = []` as I would with any other constant, it thinks I am trying to add a sequence of items with length 0, and hence fails. If I try initializing a new column as `None` or `NaN`, I run into the following issues when trying to assign a list to a location.

```
df['d'] = None
>>> df
   a  b     d
0  1  5  None
1  2  6  None
2  3  7  None
```

Issue 1 (it would be perfect if I can get this approach to work! Maybe something trivial I am missing):

```
>>> df.loc[0,'d'] = [1,3]
...
ValueError: Must have equal len keys and value when setting with an iterable
```

Issue 2 (this one works, but not without a warning because it is not guaranteed to work as intended):

```
>>> df['d'][0] = [1,3]
C:\Python27\Scripts\ipython:1: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame
```

Hence I resort to initializing with empty lists and extending them as needed. There are a couple of methods I can think of to initialize this way, but is there a more straightforward way?

Method 1:

```
df['empty_lists1'] = [list() for x in range(len(df.index))]
>>> df
   a  b empty_lists1
0  1  5           []
1  2  6           []
2  3  7           []
```

Method 2:

```
df['empty_lists2'] = df.apply(lambda x: [], axis=1)
>>> df
   a  b empty_lists1 empty_lists2
0  1  5           []           []
1  2  6           []           []
2  3  7           []           []
```

**Summary of questions:**

Is there any minor syntax change that can be addressed in Issue 1 that can allow a list to be assigned to a `None`/`NaN` initialized field?

If not, then what is the best way to initialize a new column with empty lists?
2015/07/17
[ "https://Stackoverflow.com/questions/31466769", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4403872/" ]
One more way is to use [`np.empty`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.empty.html): ``` df['empty_list'] = np.empty((len(df), 0)).tolist() ``` --- You could also knock off `.index` in your "Method 1" when trying to find `len` of `df`. ``` df['empty_list'] = [[] for _ in range(len(df))] ``` --- Turns out, `np.empty` is faster... ``` In [1]: import pandas as pd In [2]: df = pd.DataFrame(pd.np.random.rand(1000000, 5)) In [3]: timeit df['empty1'] = pd.np.empty((len(df), 0)).tolist() 10 loops, best of 3: 127 ms per loop In [4]: timeit df['empty2'] = [[] for _ in range(len(df))] 10 loops, best of 3: 193 ms per loop In [5]: timeit df['empty3'] = df.apply(lambda x: [], axis=1) 1 loops, best of 3: 5.89 s per loop ```
EDIT: the commenters caught the bug in my answer ``` s = pd.Series([[]] * 3) s.iloc[0].append(1) #adding an item only to the first element >s # unintended consequences: 0 [1] 1 [1] 2 [1] ``` So, the correct solution is ``` s = pd.Series([[] for i in range(3)]) s.iloc[0].append(1) >s 0 [1] 1 [] 2 [] ``` OLD: I timed all the three methods in the accepted answer, the fastest one took 216 ms on my machine. However, this took only 28 ms: ``` df['empty4'] = [[]] * len(df) ``` Note: Similarly, `df['e5'] = [set()] * len(df)` also took 28ms.
1,345
27,138,716
I am new to Jquery and Javascript. I've only done the intros for codeacademy and I have what I remembered from my python days. I saw this tutorial: <http://www.codecademy.com/courses/a-simple-counter/0/1> I completed the tutorial and thought: "I should learn how to do this with Jquery". So I've been trying to use what I understand to do so. My issue is that I don't know how to pass an argument for a variable from HTML to Jquery(javascript). Here is my code: **HTML** ``` <body> <label for="qty">Quantity</label> <input id="qty" type = "number" value = 0 /> <button class = "botton">-1</button> <button class = "botton">+1</button> <script type="text/javascript" src="jquery-1.11.1.min.js"></script> <script type="text/javascript" src="test.js"></script> </body> ``` **Jquery/Javascript:** ``` //create a function that adds or subtracts based on the button pressed function modify_qty(x) { //on click add or subtract $('.botton').click(function(){ //get the value of input field id-'qty' var qty = $('#qty').val(); var new_qty = qty + x; //i don't want to go below 0 if (new_qty < 0) { new_qty = 0; } //put new value into input box id-'qty' $('#qty').html(new_qty) }) }; $(document).ready(modify_qty); ``` How do I pass an argument of 1 or -1 to the function? I was using onClick() but that seemed redundant because of the $('.botton').click(function(){}). Thank you
2014/11/25
[ "https://Stackoverflow.com/questions/27138716", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4293758/" ]
If you want to read the whole file with the spaces removed, `f.read()` is on the right track—unlike your other attempts, that gives you the whole file as a single string, not one line at a time. But you still need to replace the spaces. Which you need to do explicitly. For example: ``` f.read().replace(' ', '') ``` Or, if you want to replace all whitespace, not just spaces: ``` ''.join(f.read().split()) ```
This line: ``` f = open("clues.txt") ``` will open the file - that is, it returns a filehandle that you can read from This line: ``` open("clues.txt").read().replace(" ", "") ``` will open the file and return its contents, with all spaces removed.
1,348
65,682,339
This is not working and I can't figure it out... I want it to print either the error sentence or break. I wanted to do it in a try/except, but that was not so good. And I'm new to Python :-)

```py
while True:
    unitFrom = input("Enter unit of temperature, either Fahrenheit, Kelvin or Celsius:")
    list = ["Fahrenheit" , "Celsius" , "Kelvin"]
    if unitFrom.lower() in list:
        break
    else:
        print ("Wrong unit, try again")
        break
```
2021/01/12
[ "https://Stackoverflow.com/questions/65682339", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14990031/" ]
1. Never shadow built-in names such as `list` with your own variables.
2. Take the list outside the loop to avoid re-creating it on each iteration.
3. You need to have the list in lowercase, since you're checking the lower-cased input against it.

Hence:

```
x_units = ["fahrenheit" , "celsius" , "kelvin"]
# or x_units = [x.lower() for x in x_units] if you do not wish to change the original list

while True:
    unitFrom = input("Enter unit of temperature, either Fahrenheit, Kelvin or Celsius:")
    if unitFrom.lower() in x_units:
        break
    else:
        print ("Wrong unit, try again")
        break
```
Do you want to print "break" or do you want to execute `break`? Also, `list = ["Fahrenheit" , "Celsius" , "Kelvin"]` is created anew on every iteration: create it before `while True:`. And use something other than `list` as the name for the array, since `list` is a built-in type. Note that the entries also need to be lowercase so that `unitFrom.lower()` can match them:

```
answer_list = ["fahrenheit", "celsius", "kelvin"]  # lowercase, to match unitFrom.lower()
while True:
    unitFrom = input("Enter unit of temperature, either Fahrenheit, Kelvin or Celsius:")
    if unitFrom.lower() in answer_list:
        break
    else:
        print ("Wrong unit, try again")
        break
```
1,351
68,179,964
I am trying to check if all the objects in a specified bucket are public or not, using the boto3 module in python. I have tried using the `client.get_object()` and `client.list_objects()` methods, but I am unable to figure out what exactly I should search for as I am new to boto3 and AWS in general. Also, since my organization prefers using `client` over `resource`, so I'm preferably looking for a way to do it using `client`.
2021/06/29
[ "https://Stackoverflow.com/questions/68179964", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16081945/" ]
I think the best way to test if an object is public or not is to make an anonymous request to that object URL. ```py import boto3 import botocore import requests bucket_name = 'example-bucket' object_key = 'example-key' config = botocore.client.Config(signature_version=botocore.UNSIGNED) object_url = boto3.client('s3', config=config).generate_presigned_url('get_object', Params={'Bucket': bucket_name, 'Key': object_key}) resp = requests.get(object_url) if resp.status_code == 200: print('The object is public.') else: print('Nope! The object is private or inaccessible.') ``` Note: You can use `requests.head` instead of `requests.get` to save some data transfer.
Maybe a combination of these will tell the full story for each object:

```
client = boto3.client('s3')
bucket = 'my-bucket'
key = 'my-key'
client.get_object_acl(Bucket=bucket, Key=key)
client.get_bucket_acl(Bucket=bucket)
client.get_bucket_policy(Bucket=bucket)
```
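Building on that, a hedged sketch of inspecting the object ACL for a public grant (the AllUsers group URI is the standard S3 one):

```
acl = client.get_object_acl(Bucket=bucket, Key=key)
for grant in acl['Grants']:
    grantee = grant.get('Grantee', {})
    if grantee.get('URI') == 'http://acs.amazonaws.com/groups/global/AllUsers':
        print('Object is public:', grant['Permission'])
```

Note that even with a private ACL an object can still be exposed through a bucket policy, so `get_bucket_policy` is worth checking too.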
1,355
70,222,086
Well, I would like to get calendar data from Outlook. My purpose is making a small service in Python which can read & write someone's calendar in an Outlook account; of course, I assume that I was provided access to it in Azure Active Directory. Before writing this, I read a lot of guides on how to do this. I also tried to find similar issues on [github](https://github.com/O365/python-o365/issues) and learned some things from that. So I've made the following steps. I wish you would point out my mistakes and tell me how to fix them.

* [![enter image description here](https://i.stack.imgur.com/lLamL.png)](https://i.stack.imgur.com/lLamL.png)
* [![enter image description here](https://i.stack.imgur.com/H0usv.png)](https://i.stack.imgur.com/H0usv.png)
* [![enter image description here](https://i.stack.imgur.com/uaO99.png)](https://i.stack.imgur.com/uaO99.png)
* [![enter image description here](https://i.stack.imgur.com/fmzVc.png)](https://i.stack.imgur.com/fmzVc.png)
* [![enter image description here](https://i.stack.imgur.com/BxIIF.png)](https://i.stack.imgur.com/BxIIF.png)
* [![enter image description here](https://i.stack.imgur.com/1WFgi.png)](https://i.stack.imgur.com/1WFgi.png)

Then I've written this code in Python with O365:

```
from O365 import Account

CLIENT_ID = 'xxxx'
SECRET_ID = 'xxxx'
TENANT_ID = 'xxxx'

credentials = (CLIENT_ID, SECRET_ID)

account = Account(credentials, auth_flow_type='credentials', tenant_id=TENANT_ID)
if account.authenticate():
    print('Authenticated!')

schedule = account.schedule(resource='user@domain')

calendar = schedule.get_default_calendar()
events = calendar.get_events(include_recurring=False)
for event in events:
    print(event)
```

Then, if I use the email which is shown in the user contact info in my directory as the resource, I get this error:

```
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://graph.microsoft.com/v1.0/users/user@domain/calendar | Error Message: The tenant for tenant guid 'xxxx' does not exist.
```

Then, if I use the 'User Principal Name', which is shown in the user identity in my directory, as the resource, I get this error:

```
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://graph.microsoft.com/v1.0/users/xxxx#EXT%23@xxxx.onmicrosoft.com/calendar | Error Message: Insufficient privileges to complete the operation.
```

Then, if I use 'me' as the resource, I also get an error:

```
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://graph.microsoft.com/v1.0/me/calendar | Error Message: /me request is only valid with delegated authentication flow.
```

Could you tell me what I should provide as a resource to get someone's calendar, or maybe what I should fix?
2021/12/04
[ "https://Stackoverflow.com/questions/70222086", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12961561/" ]
Let's go through the error messages one by one:

1. `Error Message: The tenant for tenant guid 'xxxx' does not exist.`

I am assuming that this is an Office 365 Business account, and that you are logged into Azure with your company address. Then under "App registrations > test feature > Overview" you should find the valid TenantID.

[![Tenant ID](https://i.stack.imgur.com/f0U0y.png)](https://i.stack.imgur.com/f0U0y.png)

2. `Error Message: Insufficient privileges to complete the operation.`

I can't find a scope in your example, e.g.:

```
from O365 import Account

CLIENT_ID = 'xxxx'
SECRET_ID = 'xxxx'
TENANT_ID = 'xxxx'

credentials = (CLIENT_ID, SECRET_ID)

scopes = ['https://graph.microsoft.com/Calendar.ReadWrite', 'https://graph.microsoft.com/User.Read', 'https://graph.microsoft.com/offline_access']

account = Account(credentials, tenant_id=TENANT_ID)
if account.authenticate(scopes=scopes):
    print('Authenticated!')
```

you can [check all scopes here](https://learn.microsoft.com/en-us/graph/api/user-list-calendars?view=graph-rest-1.0&tabs=http)

3. `Error Message: /me request is only valid with delegated authentication flow.`

I also assume a scope error here.

General recommendations:

* Add "offline_access" as a scope; this way you don't need to log in for each run.
* Set each scope as delegated; this limits the rights to user level, which reduces the security risk for a company.
For me, joining the [Microsoft Developer Program](https://developer.microsoft.com/en-us/microsoft-365/dev-program) and using its Azure directory fixed the issues.
1,357
45,402,049
I've been working on a website for the past year and used Python and Flask to build it. Recently I encountered a lot of errors and problems and decided to start a new project (in PyCharm). I figured I could copy pieces of code into the new project until I encountered a problem, and then I'd know what the problem is. I created three new files as follows:

`__init__.py`

```
from flask import Flask

app = Flask(__name__)
app.config.from_object('config')
```

config.py

```
SECRET_KEY = 'you-will-never-guess'
```

run.py

```
from app import app
app.run()
```

If I run this, however, the old website shows up (I expected it to be an empty page). How do I start a clean new Flask project?
2017/07/30
[ "https://Stackoverflow.com/questions/45402049", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8380994/" ]
Move the `if` outside the `echo` and assign it to a variable, then use the variable in the `echo`. I'd also use the ternary operator for this. It also looks like you are comparing the full array, not the index you want to be comparing. Try:

```
while ($row = $result->fetch_assoc()) {
    $style = $row['move'] > 0 ? ' style="color:green;"' : '';
    $row['move'] = $row['move'] == 0 ? '-' : $row['move'];
    echo "<tr><td style='text-align:left'>".$row["rank"]."</td><td style='text-align:left'>".$row["team_name"]."</td><td>".$row["record"]."</td><td>".$row["average"]."</td><td{$style}>".$row["move"]."</td><td>".$row["hi"]."/".$row["lo"]."</td></tr>";
}
```

The `?:` is known as the [ternary operator](http://php.net/manual/en/language.operators.comparison.php#language.operators.comparison.ternary); it is a shorthand conditional. You can read more here, <https://en.wikipedia.org/wiki/%3F:#PHP>.
Untested, but I think this will work: ``` while ($row = $result->fetch_assoc()) { echo "<tr><td style='text-align:left'>".$row["rank"]."</td><td style='text-align:left'>".$row["team_name"]."</td><td>".$row["record"]."</td><td>".$row["average"]."</td><td ".($row["move"] > 0 ? 'style="color:green;"' : '').">".$row["move"]."</td><td>".$row["hi"]."/".$row["lo"]."</td></tr>"; } ``` This expression: ``` ($row["move"] > 0 ? 'style="color:green;"' : '') ``` returns `'style="color:green"'` if `$row["move"] > 0` and `''` otherwise. (It uses the "ternary operator.") **EDIT** Changed to `$row["move"]` above per @chris85's comment.
1,358
53,178,013
I have a project for which I'd now like to use pipenv. I want to symlink this from my main bin directory, so I can run it from another directory (where it interacts with local files) but nevertheless run it in the pipenv with the appropriately installed files. Can I do something like

```
pipenv run python /PATH/TO/MY/CODE/foo.py localfile.conf
```

Or is that not going to pick up the pipenv environment defined by /PATH/TO/MY/CODE/Pipfile?
2018/11/06
[ "https://Stackoverflow.com/questions/53178013", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8482/" ]
Not sure if this was relevant with the version of pipenv you used in 2018, but as of current versions you can use the [PIPENV\_PIPFILE](https://pipenv.kennethreitz.org/en/latest/advanced/#pipenv.environments.PIPENV_PIPFILE) environment variable. You will end up with a wrapper shell script that looks something like: ```sh export PIPENV_PIPFILE=/my/project/dir/Pipfile exec pipenv run command ... ``` (Answering even though I'm half a year late because this was the first relevant search result, so I can find it again next time.)
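If you would rather keep the wrapper in Python, the same idea can be expressed with a small launcher script. This is a sketch; the paths are assumptions:

```python
import os
import subprocess
import sys

# point pipenv at the project's Pipfile regardless of the current directory
env = dict(os.environ, PIPENV_PIPFILE='/PATH/TO/MY/CODE/Pipfile')

# forward any extra arguments (e.g. localfile.conf) to the wrapped script
subprocess.run(
    ['pipenv', 'run', 'python', '/PATH/TO/MY/CODE/foo.py', *sys.argv[1:]],
    env=env, check=True,
)
```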
pipenv is a wrapper for virtualenv which keeps the virtualenv files in a folder in your home directory. I found them in `/home/MYUSERNAME/.local/share/virtualenvs`. So I wrote a `small_script.sh`:

```
#!/bin/bash
source /home/MYUSERNAME/.local/share/virtualenvs/MYCODE-9as8Da87/bin/activate
python /PATH/TO/MY/CODE/foo.py localfile.conf
deactivate
```

It activates the virtualenv for itself, then runs your Python code in your local directory and finally deactivates the virtualenv. You can replace `localfile.conf` with `$@`, which lets you run `./small_script.sh localfile.conf`.
1,361
1,976,622
I'm using python-dbus and cherrypy to monitor USB devices and provide a REST service that will maintain status on the inserted USB devices. I have written and debugged these services independently, and they work as expected. Now, I'm merging the services into a single application. My problem is: I cannot seem to get both services (cherrypy and dbus) to start together. One or the other blocks or goes out of scope, or doesn't get initialized. I've tried encapsulating each in its own thread and just calling start on them. This has some bizarre issues.

```
class RESTThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
    def run(self):
        cherrypy.config.update({ 'server.socket_host': HVR_Common.DBUS_SERVER_ADDR, 'server.socket_port': HVR_Common.DBUS_SERVER_PORT, })
        cherrypy.quickstart(USBRest())

class DBUSThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
    def run(self):
        DBusGMainLoop(set_as_default=True)
        loop = gobject.MainLoop()
        DeviceAddedListener()
        print 'Starting DBus'
        loop.run()
        print 'DBus Python Started'

if __name__ == '__main__':
    # Start up REST
    print 'Starting REST'
    rs = RESTThread()
    rs.start()
    db = DBUSThread()
    db.start()
    #cherrypy.config.update({ 'server.socket_host': HVR_Common.DBUS_SERVER_ADDR, 'server.socket_port': HVR_Common.DBUS_SERVER_PORT, })
    #cherrypy.quickstart(USBRest())
    while True:
        x = 1
```

When this code is run, the cherrypy code doesn't fully initialize. When a USB device is inserted, cherrypy continues to initialize (as if the threads were linked somehow), but doesn't work (doesn't serve up data or even accept connections on the port). What is happening? I've looked at CherryPy's wiki pages but haven't found a way to start up cherrypy in such a way that it inits and returns, so I can init the DBus stuff and be able to get this out the door. My ultimate question is: is there a way to get cherrypy to start without blocking but continue working? I want to get rid of the threads in this example and init both cherrypy and dbus in the main thread.
2009/12/29
[ "https://Stackoverflow.com/questions/1976622", "https://Stackoverflow.com", "https://Stackoverflow.com/users/179157/" ]
You should be able to use .NET's Reflection to analyze your application AppDomain(s) and dump a list of loaded assemblies and locations. ``` var loadedAssemblies = AppDomain.CurrentDomain.GetAssemblies(); foreach (var assembly in loadedAssemblies) { Console.WriteLine(assembly.GetName().Name); Console.WriteLine(assembly.Location); } ```
Check out [`AppDomain.GetAssemblies`](http://msdn.microsoft.com/en-us/library/system.appdomain.getassemblies.aspx)

```
For Each asm As Reflection.Assembly In AppDomain.CurrentDomain.GetAssemblies()
    Console.WriteLine(asm.ToString())
Next
```
1,362
1,777,862
I am trying to rewrite the following program in C instead of C# (which is less portable). It is obvious that "int system ( const char \* command )" will be necessary to complete the program. Starting it with "int main ( int argc, char \* argv[] )" will allow getting the command-line arguments, but there is still a problem that is difficult to understand. How do you successfully escape arguments with spaces in them? In the program below, arguments with spaces in them (example: screensaver.scr "this is a test") will be passed to the script as separate arguments (example: screensaver.scr this is a test) and could easily cause problems. ``` namespace Boids_Screensaver { static class Program { [STAThread] static void Main(string[] args) { System.Diagnostics.Process python = new System.Diagnostics.Process(); python.EnableRaisingEvents = false; python.StartInfo.FileName = "C:\\Python31\\pythonw.exe"; python.StartInfo.Arguments = "boids.pyw"; foreach (string arg in args) { python.StartInfo.Arguments += " " + arg; } python.Start(); } } } ```
2009/11/22
[ "https://Stackoverflow.com/questions/1777862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/216356/" ]
Windows is all messed up: there is no single escaping standard, because the command line is handed to each program as one raw string and every program parses it with its own rules.
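As a rough illustration of the most common convention (the Microsoft C runtime rules), Python's `subprocess.list2cmdline` implements that quoting, so you can use it to see how an argument list would have to be escaped. This is a sketch for illustration, not part of the original C# program:

```python
import subprocess

args = ['boids.pyw', 'this is a test', 'plain']

# list2cmdline follows the MS C runtime rules: arguments containing
# spaces are wrapped in double quotes; embedded quotes are backslash-escaped
print(subprocess.list2cmdline(args))
# boids.pyw "this is a test" plain
```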
```
screensaver.scr "spaced argument" nonspaced_argument

argc = 3
argv[0] = "screensaver.scr"
argv[1] = "spaced argument"
argv[2] = "nonspaced_argument"
```

Sorry for my English :).
1,368
17,161,552
I have read that while writing functions it is good practice to copy the arguments into other variables, because it is not always clear whether the variable is immutable or not. [I don't remember where, so don't ask.] I have been writing functions accordingly. As I understand it, creating a new variable takes some overhead. It may be small, but it is there. So what should be done? Should I create new variables to hold the arguments or not? I have read [this](https://stackoverflow.com/questions/8056130/immutable-vs-mutable-types-python) and [this](https://stackoverflow.com/questions/986006/python-how-do-i-pass-a-variable-by-reference). I am confused as to why floats and ints are immutable if they can be changed this easily.

EDIT: I am writing simple functions. I'll post an example. I wrote the first one after I read that in Python arguments should be copied, and the second one after I realized by trial and error that it wasn't needed.

```
#When I copied arguments into another variable
def zeros_in_fact(num):
    '''Returns the number of zeros at the end of factorial of num'''
    temp = num
    if temp < 0:
        return 0
    fives = 0
    while temp:
        temp /= 5
        fives += temp
    return fives

#When I did not copy arguments into another variable
def zeros_in_fact(num):
    '''Returns the number of zeros at the end of factorial of num'''
    if num < 0:
        return 0
    fives = 0
    while num:
        num /= 5
        fives += num
    return fives
```
2013/06/18
[ "https://Stackoverflow.com/questions/17161552", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2235567/" ]
I think it's best to keep it simple in questions like these. The second link in your question is a really good explanation; in summary: Methods take parameters which, as pointed out in that explanation, are passed "by value". The parameters in functions take the value of variables passed in. For primitive types like strings, ints, and floats, the value of the variable is a pointer (the arrows in the following diagram) to a space in memory that represents the number or string. ``` code | memory | an_int = 1 | an_int ----> 1 | ^ another_int = 1 | another_int / ``` When you reassign within the method, you change where the arrow points. ``` an_int = 2 | an_int -------> 2 | another_int --> 1 ``` The numbers themselves don't change, and since those variables have scope only inside the functions, outside the function, the variables passed in remain the same as they were before: 1 and 1. But when you pass in a list or object, for example, you can change the values they point to *outside of the function*. ``` a_list = [1, 2, 3] | 1 2 3 | a_list ->| ^ | ^ | ^ | | 0 2 3 a_list[0] = 0 | a_list ->| ^ | ^ | ^ | ``` Now, you can change where the arrows in the list, or object, point to, but the list's pointer still points to the same list as before. (There should probably actually only be one 2 and 3 in the diagram above for both sets of arrows, but the arrows would have gotten difficult to draw.) So what does the actual code look like? ``` a = 5 def not_change(a): a = 6 not_change(a) print(a) # a is still 5 outside the function b = [1, 2, 3] def change(b): b[0] = 0 print(b) # b is now [0, 2, 3] outside the function ``` Whether you make a copy of the lists and objects you're given (ints and strings don't matter) and thus return new variables or change the ones passed in depends on what functionality you need to provide.
If you are **rebinding** the name, then the mutability of the object it refers to is irrelevant. Only if you perform **mutating** operations must you create a copy. (And if you read between the lines, that indirectly says "don't mutate objects passed to you".)
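A short sketch of the distinction:

```python
def rebind(num):
    # rebinding: the local name now points at a new object;
    # the caller's variable is unaffected, so no copy is needed
    num = num * 2
    return num

def mutate(items):
    # mutating: this changes the caller's list in place
    items.append(99)

def mutate_safely(items):
    # if you must mutate, work on a copy instead
    items = list(items)
    items.append(99)
    return items

n, lst = 5, [1, 2, 3]
rebind(n)    # n is still 5
mutate(lst)  # lst is now [1, 2, 3, 99]
```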
1,370
73,419,189
I am learning the continue statement in a Python while loop. If I run the following code, the output starts from 2 instead of 1. Why?

```
a = 1
while a <= 8:
    a += 1
    if a == 5:
        continue
    print(a)
```
2022/08/19
[ "https://Stackoverflow.com/questions/73419189", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8507620/" ]
You can use the HPA (Horizontal Pod Autoscaler). Here is what a typical YAML configuration looks like:

```
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa_name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: deployment_name_to_autoscale
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 80
```

You can monitor the scaling using `kubectl get hpa`.
In GKE, you can achieve this with [Horizontal Pod Autoscaler (HPA)](https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler). The autoscaling event can be configured to be triggered by system (eg. cpu or memory) or custom metrics (eg. pubsub queued messages count). You can also set the minimum and maximum number of pods to scale up to. Here is a link from GCP for a [sample HPA yaml file](https://cloud.google.com/kubernetes-engine/docs/how-to/horizontal-pod-autoscaling#kubectl-apply)
1,373
55,599,993
I want to write a program in Python which takes a C program as input, executes it against the test cases (which are also inputs), and prints the output for each test case. I am using Windows. I tried `subprocess.run`, but it does not accept inputs at runtime (i.e. dynamically):

```py
from subprocess import *

p1 = run("rah.exe", input=input(), stdout=PIPE, stderr=STDOUT, universal_newlines=True)
print(p1.stdout)
```

C code:

```c
#include<stdio.h>
void main()
{
    printf("Enter a number");
    int a;
    scanf("%d",&a);
    for(int i=0;i<a;i++)
    {
        printf("%d",i);
    }
}
```

Expected output in the Python IDLE:

```
Enter a number
5
01234
```

Actual output:

```
5
Enter a number 01234
```
2019/04/09
[ "https://Stackoverflow.com/questions/55599993", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11315221/" ]
I agree with @juanpa.arrivillaga's suggestion. You can use `subprocess.Popen` and `communicate()` for that:

```
import subprocess
import sys

p = subprocess.Popen('rah.exe', stdout=sys.stdout, stderr=sys.stderr)
p.communicate()
```

**Update:** The script above won't work in IDLE because IDLE changes the IO objects `sys.stdout` and `sys.stderr`, which breaks the `fileno()` function. If possible, you should put the code into a Python file (for example `script.py`) and then run it from the Windows command line using the command:

```
python script.py
```

or, if not, you can get something similar to IDLE on the Windows command line by entering the command:

```
python
```

which will start a console similar to IDLE but without the changed IO objects. There you should enter the following lines to get a similar result:

```
import subprocess
import sys
_ = subprocess.Popen('rah.exe', stdout=sys.stdout,stderr=sys.stderr).communicate()
```
> > I tried with subprocess.run but it is not accepting inputs at runtime (i.e dynamically) > > > If you don't do anything, the subprocess will simply inherit their parent's stdin. That aside, because you're intercepting the output of the subprocess and printing it afterwards you won't get the interleaving you're writing in your "expected output": the input is not going to echo back, so you just get what's being written to the subprocess's stdout, which is both printfs. If you want to dynamically interact with the subprocess, you'll have to create a proper Popen object, pipe everything, and use stdin.write() and stdout.read(). And because you'll have to deal with pipe buffering it's going to be painful. If you're going to do that a lot, you may want to take a look at [pexpect](https://pexpect.readthedocs.io/en/stable/): "interactive" subprocesses is pretty much its bread and butter. This about works: ```py from subprocess import Popen, PIPE from fcntl import fcntl, F_GETFL, F_SETFL from os import O_NONBLOCK import select p = Popen(['./a.out'], stdin=PIPE, stdout=PIPE) fcntl(p.stdout, F_SETFL, fcntl(p.stdout, F_GETFL) | O_NONBLOCK) select.select([p.stdout], [], []) print(p.stdout.read().decode()) d = input() p.stdin.write(d.encode() + b'\n') p.stdin.flush() select.select([p.stdout], [], []) print(p.stdout.read().decode()) ``` ```c #include<stdio.h> int main() { printf("Enter a number"); fflush(stdout); int a; scanf("%d",&a); for(int i=0;i<a;i++) { printf("%d",i); } fflush(stdout); return 0; } ``` Note that it requires explicitly flushing the subprocess's stdout and the writing *and* configuring the stdout to be non-blocking in the caller *and* explicitly select()ing for data on the pipe. Alternatively you could create the subprocess with unbuffered pipes (bufsize=0) then select() and read bytes one by one.
1,378
54,371,847
I'm trying to install TensorFlow, but it does not support Python 3.7 yet, so I want to get Python 3.6 instead, without using Anaconda. Any suggestions?
2019/01/25
[ "https://Stackoverflow.com/questions/54371847", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10968495/" ]
I have done this multiple times. My first tip is to use [virtual environments](https://realpython.com/python-virtual-environments-a-primer/). That way you can use Python 3.6 for whatever project requires that version, and Python 3.7 for other projects that need it. On Windows, these are the best steps:

1.) Uninstall Python 3.7 from your computer using the command prompt.

2.) Double-check your Program Files folder to see if there are any lingering Python 3.7 folders you need to delete. Do not delete any site-packages folders, or you will need to reinstall the packages you have deleted.

3.) Go to <https://www.python.org/downloads/>, download and install Python 3.6, and make sure you add it to your PATH when installing.

4.) Open the command prompt and type `python -V` or simply `python` to check which version you have installed. If you type just `python`, you can use the command `exit()` afterwards to exit.

But I suggest starting to use [Virtual Environments](https://realpython.com/python-virtual-environments-a-primer/) to avoid this issue, or keeping different Python versions side by side based on specific library needs, as sketched below.

**UPDATE**

Regarding the point about not deleting site-packages folders: some of your packages may not be compatible with lower versions of Python. This may not be a huge issue for some people, but it is best to check your most commonly used packages for their compatible Python versions before continuing with the downgrade.
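For example, with both interpreters installed side by side on Windows, the bundled `py` launcher lets you pin a project to 3.6 without uninstalling 3.7. This is a sketch; `tf-env` is just an assumed environment name:

```
py -3.6 -m venv tf-env
tf-env\Scripts\activate
pip install tensorflow
```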
Consider using [pyenv-win](https://github.com/pyenv-win/pyenv-win) to manage your global and (per-project) local Python versions. Unlike the original pyenv, which only works on Unix-like systems (or under the Windows Subsystem for Linux), pyenv-win is a native Windows port.
1,379
66,975,127
I'm trying to install a simple Django package in a Docker container. Here is my Dockerfile:

```
FROM python:3.8

ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

WORKDIR /app

COPY Pipfile Pipfile.lock /app/
RUN pip install pipenv && pipenv install --system

COPY . /app/
```

And here is my docker-compose file:

```
version: '3.7'

services:
  web:
    build: .
    command: python /app/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - 8000:8000
    depends_on:
      - db
  db:
    image: postgres:11
    volumes:
      - /Users/ruslaniv/Documents/Docker/djangoapp:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=XXX
      - POSTGRES_PASSWORD=XXX
      - POSTGRES_DB=djangoapp
volumes:
  djangoapp:
```

So, I start my container with

```
docker-compose up
```

then install a package and rebuild the image:

```
docker-compose exec web pipenv install django-crispy-forms
docker-compose down
docker-compose up -d --build
```

Then I add `'crispy_forms'` to the local `settings.py`, register the crispy forms tags in a local html file with `{% load crispy_forms_tags %}`, and then use them for the form with `{{ form|crispy }}`. But the form is not rendered properly. Since the package itself and its usage are very simple, I think the problem lies in how the package is installed in the container. So the question is: how do I properly install a Django package in a Docker container, and am I doing it correctly here?
2021/04/06
[ "https://Stackoverflow.com/questions/66975127", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3678257/" ]
You can use the [present](https://laravel.com/docs/8.x/validation#rule-present) validation rule. The field under validation must be present in the input data but can be empty.

```
'topics' => 'present|array'
```
Validating array based form input fields doesn't have to be a pain. You may use "dot notation" to validate attributes within an array. For example, if the incoming HTTP request contains a `photos[profile]` field, you may validate it like so: ``` use Illuminate\Support\Facades\Validator; $validator = Validator::make($request->all(), [ 'photos.profile' => 'required|image', ]); ``` You may also validate each element of an array. For example, to validate that each email in a given array input field is unique, you may do the following: ``` $validator = Validator::make($request->all(), [ 'person.*.email' => 'email|unique:users', 'person.*.first_name' => 'required_with:person.*.last_name', ]); ``` In your case, if you want to validate that the elements inside your array are not empty, the following would suffice: ``` 'topics' => 'required|array', 'topics.*' => 'sometimes|integer', // <- for example. ```
1,382
47,370,718
Suppose we have * an n-dimensional numpy.array A * a numpy.array B with dtype=int and shape of (n, m) How do I index A by B so that the result is an array of shape (m,), with values taken from the positions indicated by the columns of B? For example, consider this code that does what I want when B is a python list: ``` >>> a = np.arange(27).reshape(3,3,3) >>> a[[0, 1, 2], [0, 0, 0], [1, 1, 2]] array([ 1, 10, 20]) # the result we're after >>> bl = [[0, 1, 2], [0, 0, 0], [1, 1, 2]] >>> a[bl] array([ 1, 10, 20]) # also works when indexing with a python list >>> a[bl].shape (3,) ``` However, when B is a numpy array, the result is different: ``` >>> b = np.array(bl) >>> a[b].shape (3, 3, 3, 3) ``` Now, I can get the desired result by casting B into a tuple, but surely that cannot be the proper/idiomatic way to do it? ``` >>> a[tuple(b)] array([ 1, 10, 20]) ``` Is there a numpy function to achieve the same without casting B to a tuple?
2017/11/18
[ "https://Stackoverflow.com/questions/47370718", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1073784/" ]
One alternative would be converting to linear indices and then indexing with `np.take`, or indexing into the flattened version -

```
np.take(a,np.ravel_multi_index(b, a.shape))
a.flat[np.ravel_multi_index(b, a.shape)]
```

**Custom `np.ravel_multi_index` for performance boost**

We could implement a custom version to simulate the behaviour of `np.ravel_multi_index` to boost the performance, like so -

```
def ravel_index(b, shp):
    return np.concatenate((np.asarray(shp[1:])[::-1].cumprod()[::-1],[1])).dot(b)
```

Using it, the desired output would be found in two ways -

```
np.take(a,ravel_index(b, a.shape))
a.flat[ravel_index(b, a.shape)]
```

### Benchmarking

Additionally incorporating the `tuple`-based method from the question and the `map`-based one from @Kanak's post.

Case #1 : dims = 3

```
In [23]: a = np.random.randint(0,9,([20]*3))

In [24]: b = np.random.randint(0,20,(a.ndim,1000000))

In [25]: %timeit a[tuple(b)]
    ...: %timeit a[map(np.ravel, b)]
    ...: %timeit np.take(a,np.ravel_multi_index(b, a.shape))
    ...: %timeit a.flat[np.ravel_multi_index(b, a.shape)]
    ...: %timeit np.take(a,ravel_index(b, a.shape))
    ...: %timeit a.flat[ravel_index(b, a.shape)]
100 loops, best of 3: 6.56 ms per loop
100 loops, best of 3: 6.58 ms per loop
100 loops, best of 3: 6.95 ms per loop
100 loops, best of 3: 9.17 ms per loop
100 loops, best of 3: 6.31 ms per loop
100 loops, best of 3: 8.52 ms per loop
```

Case #2 : dims = 6

```
In [29]: a = np.random.randint(0,9,([10]*6))

In [30]: b = np.random.randint(0,10,(a.ndim,1000000))

In [31]: %timeit a[tuple(b)]
    ...: %timeit a[map(np.ravel, b)]
    ...: %timeit np.take(a,np.ravel_multi_index(b, a.shape))
    ...: %timeit a.flat[np.ravel_multi_index(b, a.shape)]
    ...: %timeit np.take(a,ravel_index(b, a.shape))
    ...: %timeit a.flat[ravel_index(b, a.shape)]
10 loops, best of 3: 40.9 ms per loop
10 loops, best of 3: 40 ms per loop
10 loops, best of 3: 20 ms per loop
10 loops, best of 3: 29.9 ms per loop
100 loops, best of 3: 15.7 ms per loop
10 loops, best of 3: 25.8 ms per loop
```

Case #3 : dims = 10

```
In [32]: a = np.random.randint(0,9,([4]*10))

In [33]: b = np.random.randint(0,4,(a.ndim,1000000))

In [34]: %timeit a[tuple(b)]
    ...: %timeit a[map(np.ravel, b)]
    ...: %timeit np.take(a,np.ravel_multi_index(b, a.shape))
    ...: %timeit a.flat[np.ravel_multi_index(b, a.shape)]
    ...: %timeit np.take(a,ravel_index(b, a.shape))
    ...: %timeit a.flat[ravel_index(b, a.shape)]
10 loops, best of 3: 60.7 ms per loop
10 loops, best of 3: 60.1 ms per loop
10 loops, best of 3: 27.8 ms per loop
10 loops, best of 3: 38 ms per loop
100 loops, best of 3: 18.7 ms per loop
10 loops, best of 3: 29.3 ms per loop
```

So, it makes sense to look for alternatives when working with higher-dimensional inputs and with large data.
Another alternative that fits your need involves the use of [`np.ravel`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html) ``` >>> a[map(np.ravel, b)] array([ 1, 10, 20]) ``` However not fully [`numpy`](http://www.numpy.org/)-based. --- ***Performance-concerns.*** *Updated following the comments below.* Be that as it may, your approach is better than mine, but not better than any of @Divakar's. ``` import numpy as np import timeit a = np.arange(27).reshape(3,3,3) bl = [[0, 1, 2], [0, 0, 0], [1, 1, 2]] b = np.array(bl) imps = "from __main__ import np,a,b" reps = 100000 tup_cas_t = timeit.Timer("a[tuple(b)]", imps).timeit(reps) map_rav_t = timeit.Timer("a[map(np.ravel, b)]", imps).timeit(reps) fla_rp1_t = timeit.Timer("np.take(a,np.ravel_multi_index(b, a.shape))", imps).timeit(reps) fla_rp2_t = timeit.Timer("a.flat[np.ravel_multi_index(b, a.shape)]", imps).timeit(reps) print tup_cas_t/map_rav_t ## 0.505382211881 print tup_cas_t/fla_rp1_t ## 1.18185817386 print tup_cas_t/fla_rp2_t ## 1.71288705886 ```
1,383
5,373,195
When I try to parse a csv which was exported by an MS SQL 2005 Express Edition query, the string Python gives me is totally unexpected. For example, if the line in the csv file is "aaa,bbb,ccc,dddd", then when Python parses it as a string, it becomes something like "a a a a , b b b , c c c , d d d d". What is happening? I tried to remove the spaces in the code, but it doesn't work.

```
import os
import random

f1 = open('a.txt', 'r')
f2 = open('dec_sql.txt', 'w')
text = 'abc'

while text != '':
    text = f1.readline()
    if text == '':
        break
    splited = text.split(',')
    for i in range(0, 32):
        splited[i] = splited[i].replace(' ', '')

    # build the quoted VALUES list for all 33 fields in one go
    values = ', '.join("'" + field + "'" for field in splited[:33])
    sql = 'insert into dbo.INBOUND_RATED_DEC2010 values (' + values + ')'

    print sql
    f2.write(sql + '\n')

f2.close()
f1.close()
```
2011/03/21
[ "https://Stackoverflow.com/questions/5373195", "https://Stackoverflow.com", "https://Stackoverflow.com/users/612678/" ]
Sounds to me like the output of the MS SQL 2005 query is a unicode file. The python [csv module](http://docs.python.org/library/csv.html) cannot handle unicode files, but there is some [sample code](http://docs.python.org/library/csv.html#csv-examples) in the documentation for the csv module describing how to work around the problem. Alternately, some text editors allow you to save a file with a different encoding. For example, I opened the results of a MS SQL 2005 query in Notepad++ and it told me the file was UCS-2 encoded and I was able to convert it to UTF-8 from the Encoding menu.
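If the export really is UTF-16 (UCS-2) encoded, a minimal sketch of reading it directly in Python, assuming the file name `a.txt` from the question:

```python
import io

# decode the UTF-16 export while reading; each line then splits normally
with io.open('a.txt', encoding='utf-16') as f:
    for line in f:
        fields = [field.strip() for field in line.split(',')]
        print(fields)
```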
Try to open the file in notepad and use the replace all function to replace `' '` with `''`
1,385
47,544,183
I'm trying to use multiprocessing, but I keep getting this error:

```
AttributeError: Can't get attribute 'processLine' on <module '__main__' ...>
```

(The `processLine` function returns `word`, so I guess the problem is there, but I don't know how to get around it.)

```
import multiprocessing as mp

pool = mp.Pool(4)
jobs = []
Types =[]

def processLine(line):
    line = line.split()
    word = line[0].strip()
    return word

with open("1.txt", "r", encoding = "utf-8") as f:
    for line in f:
        word = (jobs.append(pool.apply_async(processLine,(line))))
        Types.append(word)

filtered_words=[]
with open("2.txt", "r", encoding = "utf-8") as f:
    for line in f:
        word = jobs.append(pool.apply_async(processLine,(line)))
        if word in Types:
            filtered_words = "".join(line)
            print(filtered_words)

for job in jobs:
    job.get()

pool.close()
```

And this is what I get:

```
Process ForkPoolWorker-1:
Process ForkPoolWorker-2:
Process ForkPoolWorker-3:
Process ForkPoolWorker-4:
Traceback (most recent call last):
  File "/Users/user/anaconda/lib/python3.6/multiprocessing/process.py", line 249, in _bootstrap
    self.run()
  File "/Users/user/anaconda/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/user/anaconda/lib/python3.6/multiprocessing/pool.py", line 108, in worker
    task = get()
  File "/Users/user/anaconda/lib/python3.6/multiprocessing/queues.py", line 345, in get
    return _ForkingPickler.loads(res)
AttributeError: Can't get attribute 'processLine' on <module '__main__' ...>
```

(the same traceback is repeated once for each of the four worker processes)
2017/11/29
[ "https://Stackoverflow.com/questions/47544183", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8138305/" ]
The `multiprocessing` module needs to be able to import your module safely. Any code not inside a function or class should be protected by the standard Python import guard: ``` if __name__ == '__main__': ...code goes here... ``` But there are other problems with your code. For example, you've got: ``` word = jobs.append(pool.apply_async(processLine,(line))) ``` ...but `append` doesn't return a value, so this will always assign `None` to `word`. Rather than using a `for` loop to repeatedly call `pool.apply_async`, you may want to consider using `pool.map_async` instead, or just `pool.map` if you don't actually need the asynchronous behavior.
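Putting those pieces together, a minimal sketch of the fixed script might look like this. It uses the same file names as the question; the duplicate-filtering logic is kept deliberately simple:

```python
import multiprocessing as mp

def processLine(line):
    # defined at module top level so worker processes can import it
    return line.split()[0].strip()

if __name__ == '__main__':
    pool = mp.Pool(4)

    with open("1.txt", "r", encoding="utf-8") as f:
        types = pool.map(processLine, f)   # one word per line of 1.txt

    with open("2.txt", "r", encoding="utf-8") as f:
        words = pool.map(processLine, f)

    pool.close()
    pool.join()

    for word in words:
        if word in types:
            print(word)
```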
I worked around the AttributeError issue by using VS Code in administrator mode to run it instead of Anaconda Spyder.
1,388
10,688,389
I have an ever-growing csv file that looks like:

```
143100, 2012-05-21 09:52:54.165852
125820, 2012-05-21 09:53:54.666780
109260, 2012-05-21 09:54:55.144712
116340, 2012-05-21 09:55:55.642197
125640, 2012-05-21 09:56:56.094999
122820, 2012-05-21 09:57:56.546567
124770, 2012-05-21 09:58:57.046050
103830, 2012-05-21 09:59:57.497299
114120, 2012-05-21 10:00:58.000978
-31549410, 2012-05-21 10:01:58.063470
90390, 2012-05-21 10:02:58.108794
81690, 2012-05-21 10:03:58.161329
80940, 2012-05-21 10:04:58.227664
102180, 2012-05-21 10:05:58.289882
99750, 2012-05-21 10:06:58.322063
87000, 2012-05-21 10:07:58.391256
92160, 2012-05-21 10:08:58.442438
80130, 2012-05-21 10:09:58.506494
```

The negative numbers occur when the service that generates the file has an API connection failure. I'm already using matplotlib to graph the data; however, the artificial negative numbers badly distort the graph. I would like to locate all negative entries and remove the corresponding lines. At no point is a negative number actually representative of any real data. In Bash I would do something like:

```
awk '{print $1}' original.csv | sed '/-/d' > new.csv
```

but that's messy and tends to be slow, and I don't really want to embed bash commands in my Python graphing script if I can help it. Can anyone point me in the right direction?

Edit: Here's the code I'm using to read/plot the data:

```
import matplotlib
matplotlib.use('Agg')
from matplotlib.mlab import csv2rec
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from pylab import *

output_image_name='tpm.png'
data = csv2rec('counter.log', names=['packets', 'time'])
rcParams['figure.figsize'] = 10, 5
rcParams['font.size'] = 8
fig = plt.figure()
plt.plot(data['packets'], data['time'])
ax = fig.add_subplot(111)
ax.plot(data['time'], data['tweets'])
hours = mdates.HourLocator()
fmt = mdates.DateFormatter('%D - %H:%M')
ax.xaxis.set_major_locator(hours)
ax.xaxis.set_major_formatter(fmt)
ax.grid()
plt.ylabel("packets")
plt.title("Packet Log: Packets Per Minute")
fig.autofmt_xdate(bottom=0.2, rotation=90, ha='left')
plt.savefig(output_image_name)
```
2012/05/21
[ "https://Stackoverflow.com/questions/10688389", "https://Stackoverflow.com", "https://Stackoverflow.com/users/672387/" ]
The Python idiom would be to use a generator expression to filter the lines: ``` sys.stdout.writelines(line for line in sys.stdin if not line.startswith('-')) ``` Or in a processing context: ``` filtered = (line for line in sys.stdin if not line.startswith('-')) for line in filtered: # ... ```
Instead of rewriting the files, I would filter the data on read, i.e. just before plotting.
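With the `csv2rec` call from the question, that filtering is a one-line boolean-index operation on the record array (a sketch using the same column names):

```python
data = csv2rec('counter.log', names=['packets', 'time'])
# keep only the rows whose packet count is non-negative
data = data[data['packets'] >= 0]
```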
1,389
16,881,955
I am trying to learn Python, and for that purpose I made a simple addition program using Python 2.7.3:

```
print("Enter two Numbers\n")
a = int(raw_input('A='))
b = int(raw_input('B='))
c=a+b
print ('C= %s' %c)
```

I saved the file as *add.py*, and when I double-click and run it, the program runs and exits instantaneously without showing the answer. Then I tried the code from this question: [Simple addition calculator in python](https://stackoverflow.com/questions/4665558/simple-addition-calculator-in-python). It accepts user input, but after entering both numbers Python exits without showing the answer. Any suggestions for the above code? Thanks in advance for the help.
2013/06/02
[ "https://Stackoverflow.com/questions/16881955", "https://Stackoverflow.com", "https://Stackoverflow.com/users/996366/" ]
add an empty `raw_input()` at the end to pause until you press `Enter` ``` print("Enter two Numbers\n") a = int(raw_input('A=')) b = int(raw_input('B=')) c=a+b print ('C= %s' %c) raw_input() # waits for you to press enter ``` Alternatively run it from `IDLE`, command line, or whichever editor you use.
Run your file from the command line. This way you can see exceptions. Execute `cmd`, then in the "DOS box" type:

```
python myfile.py
```

Or on Windows likely just:

```
myfile.py
```
1,392
34,777,676
So I have created a function in my program that allows the user to save whatever he/she draws on the Turtle canvas as a Postscript file with his/her own name. However, there have been issues with some colors not appearing in the output as per the nature of Postscript files, and also, Postscript files just won't open on some other platforms. So I have decided to save the postscript file as a JPEG image since the JPEG file should be able to be opened on many platforms, can hopefully display all the colors of the canvas, and it should have a higher resolution than the postscript file. So, to do that, I have tried, using the PIL, the following in my save function: ``` def savefirst(): cnv = getscreen().getcanvas() global hen fev = cnv.postscript(file = 'InitialFile.ps', colormode = 'color') hen = filedialog.asksaveasfilename(defaultextension = '.jpg') im = Image.open(fev) print(im) im.save(hen + '.jpg') ``` However, whenever I run this, I get this error: ``` line 2391, in savefirst im = Image.open(fev) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/PIL/Image.py", line 2263, in open fp = io.BytesIO(fp.read()) AttributeError: 'str' object has no attribute 'read' ``` Apparently it cannot read the postscript file since it's **not**, according to what I know, an image in itself, so it has to first be converted into an image, THEN read as an image, and then finally converted and saved as a JPEG file. **The question is, how would I be able to first *convert the postscript file* to an *image file* INSIDE the program possibly using the Python Imaging Library?** Looking around SO and Google has been no help, so any help from the SO users is greatly appreciated! **EDIT:** Following `unubuntu's` advice, I have now have this for my save function: ``` def savefirst(): cnv = getscreen().getcanvas() global hen ps = cnv.postscript(colormode = 'color') hen = filedialog.asksaveasfilename(defaultextension = '.jpg') im = Image.open(io.BytesIO(ps.encode('utf-8'))) im.save(hen + '.jpg') ``` However, now whenever I run that, I get this error: ``` line 2395, in savefirst im.save(hen + '.jpg') File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/PIL/Image.py", line 1646, in save self.load() File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/PIL/EpsImagePlugin.py", line 337, in load self.im = Ghostscript(self.tile, self.size, self.fp, scale) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/PIL/EpsImagePlugin.py", line 143, in Ghostscript stdout=subprocess.PIPE) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/subprocess.py", line 950, in __init__ restore_signals, start_new_session) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/subprocess.py", line 1544, in _execute_child raise child_exception_type(errno_num, err_msg) FileNotFoundError: [Errno 2] No such file or directory: 'gs' ``` **What is `'gs'` and why am I getting this error now?**
2016/01/13
[ "https://Stackoverflow.com/questions/34777676", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5661257/" ]
[If you don't supply the file parameter](https://www.tcl.tk/man/tcl8.4/TkCmd/canvas.htm#M60) in the call to `cnv.postscript`, then a `cnv.postscript` returns the PostScript as a (unicode) string. You can then convert the unicode to bytes and feed that to `io.BytesIO` and feed that to `Image.open`. [`Image.open`](http://pillow.readthedocs.org/en/3.0.x/reference/Image.html#PIL.Image.open) can accept as its first argument any file-like object that implements `read`, `seek` and `tell` methods. ``` import io def savefirst(): cnv = getscreen().getcanvas() global hen ps = cnv.postscript(colormode = 'color') hen = filedialog.asksaveasfilename(defaultextension = '.jpg') im = Image.open(io.BytesIO(ps.encode('utf-8'))) im.save(hen + '.jpg') ``` --- For example, borrowing heavily from [A. Rodas' code](https://stackoverflow.com/a/17885207/190597), ``` import Tkinter as tk import subprocess import os import io from PIL import Image class App(tk.Tk): def __init__(self): tk.Tk.__init__(self) self.line_start = None self.canvas = tk.Canvas(self, width=300, height=300, bg="white") self.canvas.bind("<Button-1>", lambda e: self.draw(e.x, e.y)) self.button = tk.Button(self, text="save", command=self.save) self.canvas.pack() self.button.pack(pady=10) def draw(self, x, y): if self.line_start: x_origin, y_origin = self.line_start self.canvas.create_line(x_origin, y_origin, x, y) self.line_start = x, y def save(self): ps = self.canvas.postscript(colormode='color') img = Image.open(io.BytesIO(ps.encode('utf-8'))) img.save('/tmp/test.jpg') app = App() app.mainloop() ```
Adding to unutbu's answer, you can also write the data again to a BytesIO object, but you have to seek to the beginning of the buffer after doing so. Here's a flask example that displays the image in browser: ```python @app.route('/image.png', methods=['GET']) def image(): """Return png of current canvas""" ps = tkapp.canvas.postscript(colormode='color') out = BytesIO() Image.open(BytesIO(ps.encode('utf-8'))).save(out, format="PNG") out.seek(0) return send_file(out, mimetype='image/png') ```
1,395
73,025,430
I am currently running a function using Python's concurrent.futures library. It looks like this (I am using Python 3.10.1):

```
with concurrent.futures.ThreadPoolExecutor() as executor:
    future_results = [executor.submit(f.get_pdf_multi_thread, ssn) for ssn in ssns]
    for future in concurrent.futures.as_completed(future_results):
        try:
            future.result()
        except Exception as exc:
            # If there is one exception in a thread stop all threads
            for future in future_results:
                future.cancel()
            raise exc
```

**The aim of this is that, if any one of the threads raises an exception, the remaining ones are stopped and the exception is re-raised.** However, I don't know whether this is doing what it's supposed to do: sometimes it takes a long time to raise the exception I expect, and other times it raises it quickly. Could you help me with this? Thank you.
2022/07/18
[ "https://Stackoverflow.com/questions/73025430", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8972132/" ]
To find the functions and procedures where a table is referenced, you can scan the `routine_definition` column of the `sysibm.routines` view for the table name. Use the `regexp_instr` function to look for the pattern `FROM|UPDATE|INSERT INTO` followed by the table name.

```
with t1 as (
  SELECT char(ROUTINE_SCHEMA,10) libname,
         char(ROUTINE_NAME,30) routine_name,
         cast(a.routine_definition as varchar(9999)) routine_defn
  FROM sysibm.routines a
  where routine_schema = 'YOURLIB'
), t2 as (
  select a.routine_name,
         regexp_instr(a.routine_defn, '(FROM|UPDATE|INSERT INTO)\s+YOUR_TABLE',1,1) pos,
         a.routine_defn
  from t1 a
)
select a.routine_name, a.pos, substr(a.routine_defn,pos,20) text
from t2 a
where a.pos > 0
```
You can use the IBM `RELATED_OBJECTS` SQL table function: <https://www.ibm.com/docs/en/i/7.3?topic=services-related-objects-table-function>
1,396
26,129,650
I am a beginner in Python. I want to ask the user to input his first name. The name should only contain letters A-Z; if not, I want to display an error and ask the user to enter the name again until the name is correct. Here is the code I am trying. However, the string is not checked even when it contains numbers and special characters. Where am I going wrong?

```
def get_first_name():
    try_again = True
    while(try_again==True):
        first_name = raw_input("Please enter your first name.")
        if (re.match("^[A-Za-z]+$", first_name)==False):
            try_again = True
        else:
            try_again = False
            return first_name
```
2014/09/30
[ "https://Stackoverflow.com/questions/26129650", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1671718/" ]
You don't need `re`, just use [str.isalpha](http://www.tutorialspoint.com/python/string_isalpha.htm) ``` def get_first_name(): while True: first_name = raw_input("Please enter your first name.") if not first_name.isalpha(): # if not all letters, ask for input again print "Invalid entry" continue else: # else all is good, return first_name return first_name In [12]: "foo".isalpha() Out[12]: True In [13]: "foo1".isalpha() Out[13]: False ```
``` if (re.match("^[A-Za-z]+$", first_name)==False): ``` re.match is returning None when there is no match. None does not equal False. You could write it like this: ``` if not re.match("^[A-Za-z]+$", first_name): ```
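You can see the difference in a quick sketch:

```python
import re

print(re.match("^[A-Za-z]+$", "Anna1"))  # None - but None == False is False!
print(re.match("^[A-Za-z]+$", "Anna"))   # a match object, e.g. <_sre.SRE_Match ...>

if not re.match("^[A-Za-z]+$", "Anna1"):
    print("invalid name")                # this branch now runs as expected
```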
1,397
63,498,826
[![Image of cone](https://i.stack.imgur.com/NuZNp.jpg)](https://i.stack.imgur.com/NuZNp.jpg) How do I make everything in the image grayscale except the orange cone, using OpenCV in Python?
2020/08/20
[ "https://Stackoverflow.com/questions/63498826", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13991355/" ]
You can achieve your goal by using the `bitwise_and()` function and `thresholding`. Steps:

* generate a `mask` for the required region (here `thresholding` is used, but other methods can also be used).
* extract the required `regions` using `bitwise_and` (image & mask).
* Add the `masked regions` to get the output.

Here's some sample code:

```
import cv2
import numpy as np

img = cv2.imread('input.jpg')

# creating a mask by thresholding the `red` channel
# (it is better to use a histogram to pick the threshold value)
# I have used 200 as the threshold value; it can differ for other images
ret, mask = cv2.threshold(img[:, :,2], 200, 255, cv2.THRESH_BINARY)

mask3 = np.zeros_like(img)
mask3[:, :, 0] = mask
mask3[:, :, 1] = mask
mask3[:, :, 2] = mask

# extracting the `orange` region using `bitwise_and`
orange = cv2.bitwise_and(img, mask3)

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)

# extracting the non-orange region
gray = cv2.bitwise_and(img, 255 - mask3)

# orange masked output
out = gray + orange

cv2.imwrite('orange.png', orange)
cv2.imwrite('gray.png', gray)
cv2.imwrite("output.png", out)
```

Results:

### masked orange image

[![masked orange image](https://i.stack.imgur.com/8G34M.png)](https://i.stack.imgur.com/8G34M.png)

### masked gray image

[![enter image description here](https://i.stack.imgur.com/QieOO.png)](https://i.stack.imgur.com/QieOO.png)

### output image

[![enter image description here](https://i.stack.imgur.com/dEU4K.png)](https://i.stack.imgur.com/dEU4K.png)
Here is an alternate way to do that in Python/OpenCV. * Read the input * Threshold on color using cv2.inRange() * Apply morphology to clean it up and fill in holes as a mask * Create a grayscale version of the input * Merge the input and grayscale versions using the mask via np.where() * Save the results Input: [![enter image description here](https://i.stack.imgur.com/VVO1L.jpg)](https://i.stack.imgur.com/VVO1L.jpg) ``` import cv2 import numpy as np img = cv2.imread("orange_cone.jpg") # threshold on orange lower = (0,60,200) upper = (110,160,255) thresh = cv2.inRange(img, lower, upper) # apply morphology and make 3 channels as mask kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5)) mask = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel) mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel) mask = cv2.merge([mask,mask,mask]) # create 3-channel grayscale version gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) gray = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR) # blend img with gray using mask result = np.where(mask==255, img, gray) # save images cv2.imwrite('orange_cone_thresh.jpg', thresh) cv2.imwrite('orange_cone_mask.jpg', mask) cv2.imwrite('orange_cone_result.jpg', result) # Display images cv2.imshow("thresh", thresh) cv2.imshow("mask", mask) cv2.imshow("result", result) cv2.waitKey(0) ``` Threshold image: [![enter image description here](https://i.stack.imgur.com/wPag1.jpg)](https://i.stack.imgur.com/wPag1.jpg) Mask image: [![enter image description here](https://i.stack.imgur.com/eOrJA.jpg)](https://i.stack.imgur.com/eOrJA.jpg) Merged result: [![enter image description here](https://i.stack.imgur.com/TtVXP.jpg)](https://i.stack.imgur.com/TtVXP.jpg)
1,398
54,450,504
I was looking for this information for a while, but as additional packages and Python versions can be installed through `homebrew` and `pip`, I have the feeling that my environment is messed up. Furthermore, a long time ago I installed some stuff with `sudo pip install` as well as `sudo python ~/get-pip.py`. Is there a trivial way of removing all dangling dependencies and getting Python back to how it was when I first got the machine, or at least to only the packages that are delivered with the Mac distro?
2019/01/30
[ "https://Stackoverflow.com/questions/54450504", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1767754/" ]
Technically inline JavaScript with a `<script>` tag could do what you are asking. You could even look into the many templating solutions available via JavaScript libraries. That would not actually provide any benefit, though. JavaScript changes what is ultimately displayed, not the file itself. Since your use case does not change the display it wouldn't actually be useful. It would be more efficient to consider why `&nbsp;` is appearing in the first place and fix that.
This …

> My html file contains in many places the code `&nbsp;&nbsp;&nbsp;`

… is actually what is wrong in your file! `&nbsp;` is not meant to be used for layout purposes; you should fix that and use CSS instead to lay things out correctly. `&nbsp;` is meant to stop words at the end of a line that are separated by a space from being broken apart. For example, numbers and their unit: `5 liters` can end up with `5` at the end of one line and `liters` on the next line ([Example](http://jsfiddle.net/fL564sa0/)). To keep them together you would use `5&nbsp;liters`. That's what you use `&nbsp;` for and nothing else, especially **not** for layout purposes.

---

To still answer your question: HTML is a *markup language*, not a *programming language*. That means it is descriptive/static, not functional/dynamic. If you want to generate HTML dynamically, you would need to use something like PHP or JavaScript.
1,399
55,223,059
May I know why I get the error message - NameError: name 'X_train_std' is not defined

```
from sklearn.linear_model import LogisticRegression

lr = LogisticRegression(C=1000.0, random_state=0)
lr.fit(X_train_std, y_train)

plot_decision_regions(X_combined_std, y_combined, classifier=lr, test_idx=range(105,150))
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()

lr.predict_proba(X_test_std[0,:])

weights, params = [], []
for c in np.arange(-5, 5):
    lr = LogisticRegression(C=10**c, random_state=0)
    lr.fit(X_train_std, y_train)
    weights.append(lr.coef_[1])
    params.append(10**c)

weights = np.array(weights)
plt.plot(params, weights[:, 0], label='petal length')
plt.plot(params, weights[:, 1], linestyle='--', label='petal width')
plt.ylabel('weight coefficient')
plt.xlabel('C')
plt.legend(loc='upper left')
plt.xscale('log')
plt.show()
```

Please see the links -

<https://www.freecodecamp.org/forum/t/how-to-modify-my-python-logistic-regression/265795>

<https://bytes.com/topic/python/answers/972352-why-i-get-x_train_std-not-defined#post3821849>

<https://www.researchgate.net/post/Why_I_get_the_X_train_std_is_not_defined>
2019/03/18
[ "https://Stackoverflow.com/questions/55223059", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9658840/" ]
I want to share a text about dependency injection; it made us change our minds about using dependency injection:

***Do not (always) use DI: Injectables versus newables***

> Something that was immensely useful to me when learning about DI frameworks was the realisation that using a DI framework does not mean that you have to use DI to initialise all of your objects. As a rule of thumb: inject objects that you know of at compile time and that have static relations to other objects; do not inject runtime information.
>
> I think this is a good post on the subject. It introduces the concept of 'newables' and 'injectables'.
>
> Injectables are the classes near the root of your DI graph. Instances of these classes are the kind of objects that you expect your DI framework to provide and inject. Manager- or service-type objects are typical examples of injectables. Newables are objects at the fringes of your DI graph, or that are not even really part of your DI graph at all. Integer, Address etc. are examples of newables. Broadly speaking, newables are passive objects, and there is no point in injecting or mocking them. They typically contain the "data" that is in your application and that is only available at runtime (e.g. your address). Newables should not keep references to injectables or vice versa (something the author of the post refers to as "injectable/newable-separation").
>
> In reality, I have found that it is not always easy or possible to make a clear distinction between injectables and newables. Still, I think that they are nice concepts to use as part of your thinking process. Definitely think twice before adding yet another factory to your project!

**We decided to remove the injection of ArrayList, LinearLayoutManager and DividerItemDecoration. We create these classes with `new` instead of injecting them.**
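To make the injectable/newable distinction concrete, here is a small sketch in Python; the class names are illustrative only, not from the quoted post:

```python
class PaymentGateway:
    # injectable: a service known at compile time, with static relations
    def charge(self, address, amount):
        print(f"charging {amount} for delivery to {address.city}")

class Address:
    # newable: runtime data, just construct it with a plain constructor
    def __init__(self, street, city):
        self.street = street
        self.city = city

class CheckoutService:
    def __init__(self, gateway: PaymentGateway):
        # the injectable is passed in (by hand or by a DI framework)
        self.gateway = gateway

    def checkout(self, street, city, amount):
        # the newable is created where the runtime data becomes available
        address = Address(street, city)
        self.gateway.charge(address, amount)

CheckoutService(PaymentGateway()).checkout("Main St 1", "Oslo", 42)
```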
We can inject the same class more than once by qualifying it with `@Named`:

```
@Provides
@Named(CMS.Client.DELIVERY_API_CLIENT)
fun provideCMSClient(): CDAClient {
    return CDAClient.builder()
        .setSpace("4454")
        .setToken("777")
        .build()
}

@Provides
@Named(CMS.Client.SYNC_API_CLIENT)
fun provideCMSSyncClient(): CDAClient {
    return CDAClient.builder()
        .setSpace("1234")
        .setToken("456")
        .build()
}
```
1,404
67,867,496
Please forgive my ignorant question. I'm in the infant stage of learning Python. I want to convert Before_text into After_text.

```
<Before_text>
Today, I got up early, so I’m absolutely exhausted. I had breakfast: two slices \n
of cold toast and a disgusting coffee, then I left the house at 8 o’clock still \n
feeling half asleep. Honestly, London’s killing me!
```

```
<After_text>
Today, I got up early, so I’m absolutely exhausted.
I had breakfast: two slices of cold toast and a disgusting coffee, then I left the house at 8 o’clock still feeling half asleep.
Honestly, London’s killing me!
```

In fact, regardless of the code, I only need to get this result (After_text). I used this code:

```
import sys, fileinput
from nltk.tokenize import sent_tokenize

if __name__ == "__main__":
    buf = []

    for line in fileinput.input():
        if line.strip() != "":
            buf += [line.strip()]
            sentences = sent_tokenize(" ".join(buf))

            if len(sentences) > 1:
                buf = sentences[1:]
                sys.stdout.write(sentences[0] + '\n')

    sys.stdout.write(" ".join(buf) + "\n")
```

The following error is produced:

```
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-1-ef8b2fcb97ad> in <module>()
      5     buf = []
      6
----> 7     for line in fileinput.input():
      8         if line.strip() != "":
      9             buf += [line.strip()]

-------------------------1 frames--------------------------------------------------

/usr/lib/python3.7/fileinput.py in _readline(self)
    362                 self._file = self._openhook(self._filename, self._mode)
    363             else:
--> 364                 self._file = open(self._filename, self._mode)
    365         self._readline = self._file.readline  # hide FileInput._readline
    366         return self._readline()

FileNotFoundError: [Errno 2] No such file or directory: '-f'
```

What is causing this error? Where is the bug in this code? And how and where do I load and save a text file? Please teach me~
2021/06/07
[ "https://Stackoverflow.com/questions/67867496", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14949033/" ]
If you want to use [fileinput.input()](https://docs.python.org/3/library/fileinput.html), you should provide the input filenames as command-line arguments (`sys.argv`). A simple example: if you have the following `cat.py`

```
import fileinput

for line in fileinput.input():
    print(line, end='')
```

and text files `file1.txt`, `file2.txt`, `file3.txt` in the same catalog, then the usage is:

```
python cat.py file1.txt file2.txt file3.txt
```
According to the docs, `fileinput.input()` is a shortcut that takes the filenames given on the command line and tries to open them one at a time, or, if nothing is specified, uses `stdin` as its input. Please show us how you are invoking your script. I suspect you have an `-f` in there that the function is trying to open.
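For context, a common way to hit exactly this error (an assumption based on the `ipython-input` frames in the traceback): Jupyter/IPython kernels are launched with arguments such as `-f /path/to/kernel.json` in `sys.argv`, and `fileinput.input()` then tries to open `-f` as a filename. A minimal sketch that sidesteps this by passing the input file explicitly (`before.txt` is a placeholder name):

```
import fileinput

buf = []
# Pass the file(s) explicitly instead of relying on sys.argv.
for line in fileinput.input(files=['before.txt']):
    if line.strip():
        buf.append(line.strip())

print(" ".join(buf))
```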
1,405
60,174,534
I know this is kind of stupid since BigQueryML now provides Kmeans with good initialization. Nonetheless I was required to train a model in tensorflow and then pass it to BigQuery for prediction. I saved my model and everything works fine, until I try to upload it to bigquery. I get the following error: ``` TensorFlow SavedModel output output has an unsupported shape: unknown_rank: true ``` So my question is: Is it impossible to use a tensorflow trained kmeans algorithm in BigQuery? **Edit**: Creating the model: ``` kmeans = tf.compat.v1.estimator.experimental.KMeans(num_clusters=8, use_mini_batch = False, initial_clusters=KMEANS_PLUS_PLUS_INIT, seed=1234567890, relative_tolerance=.001) ``` Serving function: ``` def serving(): inputs = {} # for feat in df.columns: # inputs[feat] = tf.placeholder(shape=[None], dtype = tf.float32) inputs = tf.placeholder(shape=[None,9], dtype = tf.float32) return tf.estimator.export.TensorServingInputReceiver(inputs,inputs) ``` Saving the model: ``` kmeans.export_saved_model("gs://<bicket>/tf_clustering_model", serving_input_receiver_fn=serving, checkpoint_path='/tmp/tmpdsleqpi3/model.ckpt-19', experimental_mode=tf.estimator.ModeKeys.PREDICT) ``` Loading to BigQuery: ``` query=""" CREATE MODEL `<project>.<dataset>.kmeans_tensorflow` OPTIONS(MODEL_TYPE='TENSORFLOW', MODEL_PATH='gs://<bucket>/tf_clustering_model/1581439348/*') """ job = bq.Client().query(query) job.result() ``` **Edit2**: The output of the saved\_model\_cli command is the following: ``` jupyter@tensorflow-20200211-182636:~$ saved_model_cli show --dir . --all MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs: signature_def['all_distances']: The given SavedModel SignatureDef contains the following input(s): inputs['input'] tensor_info: dtype: DT_FLOAT shape: (-1, 9) name: Placeholder:0 The given SavedModel SignatureDef contains the following output(s): outputs['output'] tensor_info: dtype: DT_FLOAT shape: unknown_rank name: add:0 Method name is: tensorflow/serving/predict signature_def['cluster_index']: The given SavedModel SignatureDef contains the following input(s): inputs['input'] tensor_info: dtype: DT_FLOAT shape: (-1, 9) name: Placeholder:0 The given SavedModel SignatureDef contains the following output(s): outputs['output'] tensor_info: dtype: DT_INT64 shape: unknown_rank name: Squeeze_1:0 Method name is: tensorflow/serving/predict signature_def['serving_default']: The given SavedModel SignatureDef contains the following input(s): inputs['input'] tensor_info: dtype: DT_FLOAT shape: (-1, 9) name: Placeholder:0 The given SavedModel SignatureDef contains the following output(s): outputs['output'] tensor_info: dtype: DT_INT64 shape: unknown_rank name: Squeeze_1:0 Method name is: tensorflow/serving/predict ``` All seem to have unknown rank for the output shapes. How can I set up the export of this particular estimator or is there something I can search to help me? **Final Edit:** This really seems to be unsupported at least as far as I can take it. My approaches varied, but at the end of the day, I saw myself without much more choice than get the code from the source of the KmeansClustering class (and the remaining code from [github](https://github.com/tensorflow/estimator/blob/master/tensorflow_estimator/python/estimator/canned/kmeans.py)) and attempt to reshape the outputs somehow. In the process, I realized the object of the results, was actually a tuple with some different Tensor class, that seemed to be used to construct the graphs alone. 
Interestingly enough, if I took this tuple and did something like:

```
model_predictions[0][0]...[0]
```

the object was always some weird Tensor. I went up to sixty something in the three dots and eventually gave up. From there I tried to get the class that was giving these outputs to KmeansClustering, called Kmeans in clustering ops (and the surrounding code in [github](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/clustering_ops.py)). Again I had no success in changing the datatype, but I did understand why the name of the output was set to Squeeze something: in there the output had a squeeze operation. I thought this could be the problem and attempted to remove the squeeze operation among other things... I failed :(

Finally I realized that this output seemed to actually come from the estimator.py file, and at this point I just gave up on it.

Thank you to all who commented, I would not have come this far. Cheers
2020/02/11
[ "https://Stackoverflow.com/questions/60174534", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12880545/" ]
You can check the shape in the SavedModel file by using the command-line program saved\_model\_cli that ships with TensorFlow. Make sure your export signature in TensorFlow specifies the shape of the output tensor.
The main issue is that the output tensor shape of TF built-in KMeans estimator model has unknown rank in the saved model. Two possible ways to solve this: * Try training the KMeans model on BQML directly. * Reimplement the TF KMeans estimator model to reshape the output tensor into a specific tensor shape.
1,406
14,101,852
I have this text file: www2.geog.ucl.ac.uk/~plewis/geogg122/python/delnorte.dat

I want to extract columns 3 and 4. I am using np.loadtxt and getting the error:

```
ValueError: invalid literal for float(): 2000-01-01
```

I am only interested in the year 2005. How can I extract both columns?
2012/12/31
[ "https://Stackoverflow.com/questions/14101852", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1860229/" ]
You can provide a custom conversion function for a specific column to `loadtxt`. Since you are only interested in the year, I use a `lambda` function to split the date on `-` and convert the first part to an `int`:

```
data = np.loadtxt('delnorte.dat', 
                  usecols=(2,3), 
                  converters={2: lambda s: int(s.split('-')[0])}, 
                  skiprows=27)

array([[ 2000.,   190.],
       [ 2000.,   170.],
       [ 2000.,   160.],
       ..., 
       [ 2010.,   185.],
       [ 2010.,   175.],
       [ 2010.,   165.]])
```

To then filter for the year `2005`, you can use [logical indexing](http://docs.scipy.org/doc/numpy/user/basics.indexing.html#boolean-or-mask-index-arrays) in numpy:

```
data_2005 = data[data[:,0] == 2005]

array([[ 2005.,   210.],
       [ 2005.,   190.],
       [ 2005.,   190.],
       [ 2005.,   200.],
       ....])
```
You should not use `numpy.loadtxt` to read these values; rather, use the [`csv` module](http://pastebin.com/JyVC4XfF) to load the file and read its data.
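As a hedged sketch of that suggestion (our code, not the answer's): it assumes the same file layout the accepted answer relies on, namely 27 header lines, tab-separated columns, and the date in the third column.

```
import csv

with open('delnorte.dat') as f:
    for _ in range(27):          # skip the header block
        next(f)
    reader = csv.reader(f, delimiter='\t')
    # keep columns 3 and 4 (indices 2 and 3) for rows from 2005
    data_2005 = [(row[2], row[3]) for row in reader
                 if len(row) > 3 and row[2].startswith('2005')]

print(data_2005[:3])
```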
1,408
20,005,173
Print the maximum, the minimum and the total count of numbers entered, using Python. For example:

```
>>>maxmin()
Enter integers, one per line, terminated by -10 :
2
1
3
7
8
-10
Output : total =5, minimum=1, maximum = 8
```

Here is my code. I need some help with this.

```
def maxmin():
    minimum = None
    maximum = None
    while (num != -10):
        num = input('Please enter a number, or -10 to stop: ' )
        if num == -10:
            break
        if (minimum) is None or (num < minimum):
            minimum = num
        if (maximum) is None or (num > maximum):
            maximum = num
    print ("Maximum: ", maximum)
    print ("Minimum: ", minimum)
```
2013/11/15
[ "https://Stackoverflow.com/questions/20005173", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1725323/" ]
```
def maxmintotal():
    num = 0
    numbers = []
    while True:
        num = int(input('Please enter a number, or -10 to stop: ' ))
        if num == -10:
            break
        numbers.append(num)

    print('Numbers:', len(numbers))
    print('Maximum:', max(numbers))
    print('Minimum:', min(numbers))
```
You have to define `num` before you use it in the `while`, also your nested `if` should be out of the other `if`: ``` def maxmin(): minimum = None maximum = None num = None while True: num = input('Please enter a number, or -10 to stop: ') if num == -10: break if (minimum) is None or (num < minimum): minimum = num if (maximum) is None or (num > maximum): maximum = num print ("Maximum: ", maximum) print ("Minimum: ", minimum) maxmin() ```
1,410
71,197,496
I have a Python script that creates a Dataflow template in the specified GCS path. I have tested the script using my GCP Free Trial and it works perfectly.

My question: using the same code in a production environment, I want to generate a template, but I cannot use Cloud Shell as there are restrictions, and I cannot directly run the Python script that uses the SA keys. I also cannot create a VM and use it to generate a template in GCS.

Considering the above restrictions, is there any option to generate the Dataflow template?
2022/02/20
[ "https://Stackoverflow.com/questions/71197496", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2458847/" ]
First, you can do this with RXJS, or promises, but either way intentionally makes room for asynchronous programming, so when your `transform` method synchronously returns `this.value` at the end, I don't think you're getting what you're expecting? I'm guessing the reason it is compiling but you don't think it is working is that it is working, but the correct value is not computed before you're using it. To stay in observables, it should return an observable. ```js transform(key: string, args?: any): Observable<string | null> { if (!this.localizationChanges) { this.localizationChanges = this.fluentService.localizationChanges; } return this.localizationChanges.pipe( switchMap(() => this.fluentService .translationObservable(key, args, getLocale(args)) ) ); } ``` and then in your template, chain `| async` to the end of it. The `async` pipe will take care of subscribing, unsubscribing, telling the component to refresh every time the source observable emits or is changed, etc. `switchMap` causes any still-waiting results of `fluentService.translationObservable` to be dropped each time `localizationChanges` emits, replacing with a new call. If you only want one value emitted, then promises are an alternative. In that case, you'd probably want ```js async transform(key: string, args?: any): Promise<string | null> { if (!this.localizationChanges) { this.localizationChanges = this.fluentService.localizationChanges; } return this.localizationChanges.toPromise().then( () => this.fluentService.translate( key, args, getLocale(args) ) ); } ``` and then in your template, chain `| async` to the end of it rather than recreate that piece of code.
If I understand the problem well, your code has a fundamental problem, and the fact that "*Everything works well, if I use Observables*" is the fruit of a very special case.

Let's look at this very stripped-down version of your code

```
function translationObservable(key) {
  return of("Obs translates key: " + key);
}

function transform(key: string, args?: any): string | null {
  let retValue: string;
  const localizationChanges: BehaviorSubject<null> = new BehaviorSubject<null>(
    null
  );
  localizationChanges.subscribe(() => {
    translationObservable(key).subscribe((value) => (retValue = value));
  });
  return retValue;
}

console.log(transform("abc")); // prints "Obs translates key: abc"
```

If you run this code you actually end up with a message printed out on the console. There is one reason why this code works, and the reason is that we are using Observables **synchronously**. In other words, the code is executed line after line and therefore the assignment `retValue = value` occurs before `retValue` is returned.

**Promises though are intrinsically asynchronous**. So, whatever logic you pass to the `then` method gets executed asynchronously, i.e. in a subsequent cycle of the Javascript engine.

This means that if we use a Promise instead of an Observable in the same example as above, we will not get any message printed

```
function translationPromise(key) {
  return Promise.resolve("Promise translates key: " + key);
}

let retValue: string;
function transform(key: string, args?: any): string | null {
  const localizationChanges: BehaviorSubject<null> = new BehaviorSubject<null>(
    null
  );
  localizationChanges.subscribe(() => {
    translationPromise(key).then((value) => (retValue = value));
  });
  return retValue;
}

console.log(transform("abc")); // prints undefined

// some time later it prints the value returned using Promises
setTimeout(() => {
  console.log(retValue);
}, 100);
```

The summary is that your code probably works only when you use the `of` operator to create the Observable (which works synchronously since there is no async operation involved), but I doubt it works when it has to invoke async functionality like fetching a locale file.

If you want to build an Angular pipe which works asynchronously you need to follow the answer of @JSmart523

As a last side note, in your code you are subscribing to an Observable within a subscription. This is not considered idiomatic with Observables.
1,415
73,066,287
I have a project hosted on Microsoft Azure. It has Azure Functions that are Python code and they recently stopped working (500 Internal Server Error). The code has errors I haven't had before and no known changes were made (but the possibility exists because people from other teams could have changed a configuration somewhere without telling anyone). Here's some log : ``` 2022-07-21T08:41:14.226884682Z: [INFO] info: Function.AllCurveApi[1] 2022-07-21T08:41:14.226994383Z: [INFO] Executing 'Functions.AllCurveApi' (Reason='This function was programmatically called via the host APIs.', Id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx) 2022-07-21T08:41:14.277076231Z: [INFO] fail: Function.AllCurveApi[3] 2022-07-21T08:41:14.277143831Z: [INFO] Executed 'Functions.AllCurveApi' (Failed, Id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, Duration=6ms) 2022-07-21T08:41:14.277932437Z: [INFO] Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: Functions.AllCurveApi 2022-07-21T08:41:14.277948737Z: [INFO] ---> Microsoft.Azure.WebJobs.Script.Workers.Rpc.RpcException: Result: Failure 2022-07-21T08:41:14.277953937Z: [INFO] Exception: ImportError: libpython3.6m.so.1.0: cannot open shared object file: No such file or directory. Troubleshooting Guide: https://aka.ms/functions-modulenotfound 2022-07-21T08:41:14.277957637Z: [INFO] Stack: File "/azure-functions-host/workers/python/3.6/LINUX/X64/azure_functions_worker/dispatcher.py", line 318, in _handle__function_load_request 2022-07-21T08:41:14.277961437Z: [INFO] func_request.metadata.entry_point) 2022-07-21T08:41:14.277991237Z: [INFO] File "/azure-functions-host/workers/python/3.6/LINUX/X64/azure_functions_worker/utils/wrappers.py", line 42, in call 2022-07-21T08:41:14.277995937Z: [INFO] raise extend_exception_message(e, message) 2022-07-21T08:41:14.277999337Z: [INFO] File "/azure-functions-host/workers/python/3.6/LINUX/X64/azure_functions_worker/utils/wrappers.py", line 40, in call 2022-07-21T08:41:14.278020737Z: [INFO] return func(*args, **kwargs) 2022-07-21T08:41:14.278024237Z: [INFO] File "/azure-functions-host/workers/python/3.6/LINUX/X64/azure_functions_worker/loader.py", line 85, in load_function 2022-07-21T08:41:14.278027837Z: [INFO] mod = importlib.import_module(fullmodname) 2022-07-21T08:41:14.278031337Z: [INFO] File "/usr/local/lib/python3.6/importlib/__init__.py", line 126, in import_module 2022-07-21T08:41:14.278277039Z: [INFO] return _bootstrap._gcd_import(name[level:], package, level) 2022-07-21T08:41:14.278289939Z: [INFO] File "<frozen importlib._bootstrap>", line 994, in _gcd_import 2022-07-21T08:41:14.278294939Z: [INFO] File "<frozen importlib._bootstrap>", line 971, in _find_and_load 2022-07-21T08:41:14.278298639Z: [INFO] File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked 2022-07-21T08:41:14.278302439Z: [INFO] File "<frozen importlib._bootstrap>", line 665, in _load_unlocked 2022-07-21T08:41:14.278305939Z: [INFO] File "<frozen importlib._bootstrap_external>", line 678, in exec_module 2022-07-21T08:41:14.278309639Z: [INFO] File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed 2022-07-21T08:41:14.278313239Z: [INFO] File "/home/site/wwwroot/AllCurveApi/__init__.py", line 9, in <module> 2022-07-21T08:41:14.278317039Z: [INFO] import pyodbc 2022-07-21T08:41:14.278320439Z: [INFO] 2022-07-21T08:41:14.278554841Z: [INFO] at Microsoft.Azure.WebJobs.Script.Description.WorkerFunctionInvoker.InvokeCore(Object[] parameters, FunctionInvocationContext context) in 
/src/azure-functions-host/src/WebJobs.Script/Description/Workers/WorkerFunctionInvoker.cs:line 96
2022-07-21T08:41:14.278568241Z: [INFO]          at Microsoft.Azure.WebJobs.Script.Description.FunctionInvokerBase.Invoke(Object[] parameters) in /src/azure-functions-host/src/WebJobs.Script/Description/FunctionInvokerBase.cs:line 82
2022-07-21T08:41:14.278583841Z: [INFO]          at Microsoft.Azure.WebJobs.Script.Description.FunctionGenerator.Coerce[T](Task`1 src) in /src/azure-functions-host/src/WebJobs.Script/Description/FunctionGenerator.cs:line 225

[...] Then it goes on for many more lines; I'm not sure they're interesting
```

And here's an example of a Python file; the error triggers on line 9, `import pyodbc`:

```
import simplejson as json
import azure.functions as func
from azure.keyvault import KeyVaultClient
from azure.common.credentials import ServicePrincipalCredentials
from datetime import datetime
import os
import pyodbc
import logging

# And then code
```

To me it looks like the server has difficulty accessing some Python resources or dependencies; it has to do with `libpython3.6`, but at this point I'm not sure what to do on the Azure Portal to fix the problem.
2022/07/21
[ "https://Stackoverflow.com/questions/73066287", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6126073/" ]
I was facing the same issue last Thursday. We tried most of the solutions available on the Internet, but none of them helped. In the end, we just updated the Azure Functions runtime from Python 3.6 to 3.7 and boom, it's working. Moreover, we also noticed that when we tried to create a new Linux-based Azure Function App, we were no longer able to select Python 3.6 as the runtime stack.

Thanks again guys.
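For reference, one way to bump the runtime version from the Azure CLI is shown below; this is a sketch where `<app-name>` and `<resource-group>` are placeholders for your own values:

```
az functionapp config set \
  --name <app-name> \
  --resource-group <resource-group> \
  --linux-fx-version "PYTHON|3.7"
```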
We had the exact same issue on Friday. What worked for us was to replace pyodbc with pypyodbc. We did this so that we didn't have to change it in our code: ``` import pypyodbc as pyodbc ``` Also, we upgraded our Azure Functions to use Python 3.7 (will probably update to 3.9 soon). Azure will not be supporting Python 3.6 as of September 30, 2022 anyways: <https://azure.microsoft.com/en-us/updates/azure-functions-support-for-python-36-is-ending-on-30-september-2022/>
1,416
60,520,118
I want to scrape a phone number, but the number is only displayed after it is clicked, so is it possible to scrape the phone number directly using Python? My code scrapes the phone number, but masked with stars (\*\*\*). Here is the link from where I want to scrape the phone number: <https://hipages.com.au/connect/abcelectricservicespl/service/126298>

Please guide me! Here is my code:

```
import requests
from bs4 import BeautifulSoup

def get_page(url):
    response = requests.get(url)

    if not response.ok:
        print('server responded:', response.status_code)
    else:
        soup = BeautifulSoup(response.text, 'lxml')
    return soup

def get_detail_data(soup):

    try:
        title = (soup.find('h1', class_="sc-AykKI",id=False).text)
    except:
        title = 'Empty Title'
    print(title)

    try:
        contact_person = (soup.findAll('span', class_="Contact__Item-sc-1giw2l4-2 kBpGee",id=False)[0].text)
    except:
        contact_person = 'Empty Person'
    print(contact_person)

    try:
        location = (soup.findAll('span', class_="Contact__Item-sc-1giw2l4-2 kBpGee",id=False)[1].text)
    except:
        location = 'Empty location'
    print(location)

    try:
        cell = (soup.findAll('span', class_="Contact__Item-sc-1giw2l4-2 kBpGee",id=False)[2].text)
    except:
        cell = 'Empty Cell No'
    print(cell)

    try:
        phone = (soup.findAll('span', class_="Contact__Item-sc-1giw2l4-2 kBpGee",id=False)[3].text)
    except:
        phone = 'Empty Phone No'
    print(phone)

    try:
        Verify_ABN = (soup.find('p', class_="sc-AykKI").text)
    except:
        Verify_ABN = 'Empty Verify_ABN'
    print(Verify_ABN)

    try:
        ABN = (soup.find('div', class_="box__Box-sc-1u3aqjl-0").find('a'))
    except:
        ABN = 'Empty ABN'
    print(ABN)

def main():
    #get data of detail page
    url = "https://hipages.com.au/connect/abcelectricservicespl/service/126298"
    #get_page(url)
    get_detail_data(get_page(url))

if __name__ == '__main__':
    main()
```
2020/03/04
[ "https://Stackoverflow.com/questions/60520118", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7514427/" ]
Find the id or class of that element and use jQuery to change the text. If the element is an anchor tag, use the code below.

```
jQuery(document).ready(function($) {
    $("#button_id").text("New Text");
});
```

If the element is a button, use the code below based on the type of button.

```
<input type='button' value='Add' id='button_id'>

jQuery(document).ready(function($) {
    $("#button_id").attr('value', 'New Text');
}):

<input type='button' value='Add' id='button_id'>

jQuery(document).ready(function($) {
    $("#button_id").prop('value', 'New Text');
});

<!-- Different button types-->
<button id='button_id' type='button'>Add</button>

jQuery(document).ready(function($) {
    $("#button_id").html('New Text');
});
```
From the screenshot, it seems like you are trying to change the text of a menu link (My Account). If so, make sure that you haven't given a custom name to the My Account page in the WordPress navigation. Inspect the page using developer tools and find the class/id of that element. Then you can use jQuery to alter the content using the code below.

**Using ID Selector:**

```
jQuery(document).ready(function($) {
    $("#myaccountbuttonId").text("New Button Text");
});
```

**Using Class Selector**

```
jQuery(document).ready(function($) {
    $(".myaccountbuttonClass").text("New Button Text");
});
```

If you want to change the text of the Add to Cart button, use the code below.

```
// To change add to cart text on single product page
add_filter( 'woocommerce_product_single_add_to_cart_text', 'woocommerce_custom_single_add_to_cart_text' ); 
function woocommerce_custom_single_add_to_cart_text() {
    return __( 'Buy Now', 'woocommerce' ); 
}

// To change add to cart text on product archives(Collection) page
add_filter( 'woocommerce_product_add_to_cart_text', 'woocommerce_custom_product_add_to_cart_text' );  
function woocommerce_custom_product_add_to_cart_text() {
    return __( 'Buy Now', 'woocommerce' );
}
```
1,417
4,300,979
I am defining a function, MyFunction(argument, \*args), that does something to argument[arg] for each arg in \*args. If \*args is empty, the function doesn't do anything, but I want to make the default behavior 'use the entire set if the length of \*args == 0'.

```
def Export(source, target, *args, sep=','):

    for item in source:
        SubsetOutput(WriteFlatFile(target), args).send(item[0])
```

I don't want to check the length of args on every iteration, and I can't access the keys of item in source until the iteration begins... so I could

```
if len(args) != 0:
    for item in source:

else
    for item in source:
```

which will probably work but doesn't seem 'pythonic' enough. Is this (is there) a standard way to approach \*args or \*\*kwargs and default behavior when either is empty?

More Code:

```
def __coroutine(func):
    """
    a decorator for coroutines to automatically prime the routine
    code and method from 'curious course on coroutines and concurrency'
    by david beazley  www.dabeaz.com
    """
    def __start(*args, **kwargs):
        cr = func(*args, **kwargs)
        next(cr)
        return cr
    return __start


def Export(source, target, *args, sep=','):

    if args:
        for item in source:
            SubsetOutput(WriteFlatFile(target, sep), args).send(item)
    else:
        for item in source:
            WriteFlatFile(target, sep).send(item)


@__coroutine
def SubsetOutput(target, *args):
    """
    take *args from the results and pass to target

    TODO
    ----
    raise exception when arg is not in result[0]
    """
    while True:
        result = (yield)
        print([result.arg for arg in result.dict() if arg in args])
        target.send([result.arg for arg in result.dict if arg in args])


@__coroutine
def WriteFlatFile(target, sep):
    """
    take set of results to a flat file

    TODO
    ----
    """
    filehandler = open(target, 'a')
    while True:
        result = (yield)
        line = (sep.join([str(result[var]) for var in
               result.keys()])).format(result)+'\n'
        filehandler.write(line)
```
2010/11/29
[ "https://Stackoverflow.com/questions/4300979", "https://Stackoverflow.com", "https://Stackoverflow.com/users/508440/" ]
Is there a way to pass an "entire set" argument to `SubsetOutput`, so you can bury the conditional inside its call rather than have an explicit `if`? This could be `None` or `[]`, for example. ``` # Pass None to use full subset. def Export(source, target, *args, sep=','): for item in source: SubsetOutput(WriteFlatFile(target), args or None).send(item[0]) # Pass an empty list [] to use full subset. Even simpler. def Export(source, target, *args, sep=','): for item in source: SubsetOutput(WriteFlatFile(target), args).send(item[0]) ``` If not, I would go with the two loop solution, assuming the loop really is a single line. It reads well and is a reasonable use case for a little bit of code duplication. ``` def Export(source, target, *args, sep=','): if args: for item in source: SubsetOutput(WriteFlatFile(target), args).send(item[0]) else: for item in source: FullOutput(WriteFlatFile(target)).send(item[0]) ```
Just check that it's not empty; you don't have to create a separate argument:

```
def test(*args):
    if not args:
        return  # break out
    return True  # or whatever you want
```
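For illustration (hypothetical calls, not from the answer), this is how the falsy check behaves:

```
print(test())         # no args: hits the early return, prints None
print(test(1, 2, 3))  # prints True
```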
1,418
52,172,821
I'm currently trying to convert a nested dict into a list of objects with "children" and "leaf".

Input:

```
{
    "a": {
        "aa": {}
    },
    "b": {
        "c": {
            "d": {
                'label': 'yoshi'
            }
        },
        "e": {},
        "f": {}
    }
}
```

I try to obtain this:

```
[
    {
        "text": "a",
        "children": [
            {
                "text": "aa",
                "leaf": "true"
            }
        ]
    },
    {
        "text": "b",
        "children": [
            {
                "text": "c",
                "children": [
                    {
                        "text": "d",
                        "leaf": "true",
                        "label": "yoshi"
                    }
                ]
            },
            {
                "text": "e",
                "leaf": "true"
            },
            {
                "text": "f",
                "leaf": "true"
            }
        ]
    }
]
```

I've tried a few unflatten Python libraries on PyPI, but none of them seem to be able to output a list in this format.
2018/09/04
[ "https://Stackoverflow.com/questions/52172821", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2688174/" ]
I have commented the function as I feel necessary.

```
def convert(d):
    children = []
    # iterate over each child's name and its dict (the child's children)
    for child, childs_childs in d.items():
        # check that it is not a leaf node
        if childs_childs and \
           all(isinstance(v,dict) for k,v in childs_childs.items()):
            # recursively call ourselves to get the child's children
            children.append({'text': child, 
                             'children': convert(childs_childs)})
        else:
            # if the child is a leaf, append it to children as necessary;
            # the **-expansion accommodates the 'label':'yoshi' item
            children.append({'text': child, 'leaf': True, **childs_childs})
    return children
```

which gives:

```
[
    {
        "text": "a",
        "children": [
            {
                "text": "aa",
                "leaf": true
            }
        ]
    },
    {
        "text": "b",
        "children": [
            {
                "text": "c",
                "children": [
                    {
                        "text": "d",
                        "leaf": true,
                        "label": "yoshi"
                    }
                ]
            },
            {
                "text": "e",
                "leaf": true
            },
            {
                "text": "f",
                "leaf": true
            }
        ]
    }
]
```
Try this solution (`data` is your input dictionary): ``` def walk(text, d): result = {'text': text} # get all children children = [walk(k, v) for k, v in d.items() if k != 'label'] if children: result['children'] = children else: result['leaf'] = True # add label if exists label = d.get('label') if label: result['label'] = label return result [walk(k, v) for k, v in data.items()] ``` Output: ``` [{'text': 'a', 'children': [{'text': 'aa', 'leaf': True}]}, {'text': 'b', 'children': [{'text': 'c', 'children': [{'text': 'd', 'leaf': True, 'label': 'yoshi'}]}, {'text': 'e', 'leaf': True}, {'text': 'f', 'leaf': True}]}] ```
1,421
44,382,348
I am just 4 days into learning Python. I am trying to understand the root \_\_init\_\_.py import functionality. I Googled a lot to understand it, but was not able to find one useful link (maybe my search keywords were not relevant). Please share some links.

I am getting the error "ImportError: cannot import name Person".

Below is the structure

```
Example(directory)
    model
       __init__.py (empty)
       myclass.py
    __init__.py
    run.py
```

myclass.py

```
class Person(object):
    def __init__(self):
        self.name = "Raja"

    def print_name(self):
        print self.name
```

\_\_init\_\_.py

```
from model.myclass import Person
```

run.py

```
from model import Person

def donext():
    person = Person()
    person.print_name()

if __name__ == '__main__':
    donext()
```
2017/06/06
[ "https://Stackoverflow.com/questions/44382348", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6346252/" ]
Either, as @gonczor suggested, you can simply leave \_\_init\_\_ empty (moreover, you don't need the root one) and import directly from the package:

```
from model.myclass import Person
```

Or, if you intentionally want to flatten the interface of the package, it is as simple as this:

model/\_\_init\_\_.py

```
from myclass import Person
```

run.py

```
from model import Person
```

(Note that the implicit relative import in `model/__init__.py` works on Python 2, which your `print self.name` suggests you are using; on Python 3 it would be `from .myclass import Person`.)
The error basically says the interpreter can't find anything that would match `Person` in a given namespace, in your case the `model` package. It's because it's in the `model.myclass` module, but it's imported into the root package and not into `run`.

Packages in Python are basically directories with an `__init__.py` script. But it's tricky to import anything at root level from `__init__.py`. And moreover, it's not necessary.

OK, so this means the solution is either to import directly from the `model` package, or from the root-level `__init__.py`. I would recommend the former, since it's more commonly used. You can do it this way:

```
from model.myclass import Person

def donext():
    person = Person()
    person.print_name()

if __name__ == '__main__':
    donext()
```

And leave `__init__.py` empty. They are used only for initialization, so there is no need to import everything into them. You could put something in your `model/__init__.py` and then import it in `myclass.py` like:

`__init__.py`:

```
something = 0
```

`myclass.py`:

```
from . import something

print something
```
1,423
55,668,648
I need to find the starting index of specific sequences (sequences of strings) in a list in Python. For example:

```
list = ['In', 'a', 'gesture', 'sure', 'to', 'rattle', 'the', 'Chinese', 'Government', ',', 'Steven', 'Spielberg', 'pulled', 'out', 'of', 'the', 'Beijing', 'Olympics', 'to', 'protest', 'against', 'China', '_s', 'backing', 'for', 'Sudan', '_s', 'policy', 'in', 'Darfur', '.']
```

With, for example:

```
seq0 = "Steven Spielberg"
seq1 = "the Chinese Government"
seq2 = "the Beijing Olympics"
```

The output should be like:

```
10
6
15
```
2019/04/13
[ "https://Stackoverflow.com/questions/55668648", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3404345/" ]
You could simply iterate over the list of your words and check at every index whether the following words match any of your sequences.

```
words = ['In', 'a', 'gesture', 'sure', 'to', 'rattle', 'the', 'Chinese', 'Government', ',', 'Steven', 'Spielberg', 'pulled', 'out', 'of', 'the', 'Beijing', 'Olympics', 'to', 'protest', 'against', 'China', '_s', 'backing', 'for', 'Sudan', '_s', 'policy', 'in', 'Darfur', '.']

seq0 = "Steven Spielberg"
seq1 = "the Chinese Government"
seq2 = "the Beijing Olympics"

sequences = {'seq{}'.format(idx): i.split() for idx, i in enumerate([seq0, seq1, seq2])}

for idx in range(len(words)):
    for k, v in sequences.items():
        if idx + len(v) <= len(words) and words[idx: idx+len(v)] == v:
            print(k, idx)
```

**Output:**

```
seq1 6
seq0 10
seq2 15
```
**You can do something like:**

```
def find_sequence(seq, list_):
    seq_list = seq.split()
    all_occurrence = [idx for idx in [i for i, x in enumerate(list_) if x == seq_list[0]]
                      if seq_list == list_[idx:idx+len(seq_list)]]
    return -1 if not all_occurrence else all_occurrence[0]
```

---

**Output:**

```
for seq in [seq0, seq1, seq2]:
    print(find_sequence(seq, list_))
```

> 
> 10
> 
> 
> 6
> 
> 
> 15
> 
> 

**Note**, if the sequence is not found you will get **-1**.
1,424
69,682,188
For the sake of practice, I am writing a class BankAccount to learn OOP in Python. In an attempt to make my program more robust, I am trying to write a test function `test_BankBankAccount()` to practice how to write test functions as well.

The test function `test_BankBankAccount()` is supposed to test that the methods `deposit()`, `withdraw()`, `transfer()` and `get_balance()` work as intended.

However, the test function fails because calls like `computed_deposit = test_account.deposit(400)`, `computed_transfer = test_account.transfer(test_account2, 200)` and so on don't seem to store the values I assign to them.

**This is the error message I receive (which is exactly the one I am trying to avoid)**

```
assert success1 and success2 and success3 and success4, (msg1, msg2, msg3, msg4)
AssertionError: ('computet deposit = None is not 400', 'computet transfer = None is not 200', 'computet withdrawal = None is not 200', 'computet balance = 0 is not 0')
```

**Here is a snippet of much of the code I have written so far**

```
class BankAccount:
    def __init__(self, first_name, last_name, number, balance):
        self._first_name = first_name
        self._last_name = last_name
        self._number = number
        self._balance = balance

    def deposit(self, amount):
        self._balance += amount

    def withdraw(self, amount):
        self._balance -= amount

    def get_balance(self):
        return self._balance

    def transfer(self,other_account, transfer_amount):
        self.withdraw(transfer_amount)
        other_account.deposit(transfer_amount)

    def print_info(self):
        first = self._first_name
        last = self._last_name
        number = self._number
        balance = self._balance
        s = f"{first} {last}, {number}, balance: {balance}"

        print(s)


def main():
    def test_BankBankAccount():
        test_account = BankAccount("Dude", "man", "1234", 0)
        test_account2 = BankAccount("Dude2", "man2","5678", 0)
        expected_deposit = 400
        expected_withdrawal = 200
        expected_transfer = 200
        expected_get_balance = 0
        computed_deposit = test_account.deposit(400)
        computed_transfer = test_account.transfer(test_account2, 200)
        computed_withdrawal = test_account.withdraw(200)
        computed_get_balance = test_account.get_balance()

        #tol = 1E-17
        success1 = abs(expected_deposit == computed_deposit) #< tol
        success2 = abs(expected_transfer == computed_transfer) #< tol
        success3 = abs(expected_withdrawal == computed_withdrawal) #< tol
        success4 = abs(expected_get_balance == computed_get_balance) #<tol
        msg1 = f"computet deposit = {computed_deposit} is not {expected_deposit}"
        msg2 = f"computet transfer = {computed_transfer} is not {expected_transfer}"
        msg3 = f"computet withdrawal = {computed_withdrawal} is not {expected_withdrawal}"
        msg4 = f"computet balance = {computed_get_balance} is not {expected_get_balance}"

        assert success1 and success2 and success3 and success4, (msg1, msg2, msg3, msg4)

    test_BankBankAccount()
```

**My question is:**

* Is there anyone who is kind enough to help me fix this and spot my mistakes? All help is welcome and appreciated.
2021/10/22
[ "https://Stackoverflow.com/questions/69682188", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16925420/" ]
@KoenLostrie is absolutely correct as to *why your trigger does not fire on Insert*. But that is just half the problem; the other issue stems from the same misconception: NULL values.

The call to `check_salary` passes `:old.job_id`, but it is still null, resulting in the cursor (`for i in (select ...)`) returning no rows when it effectively runs `where job_id = null`. However, there is no exception when a cursor returns no rows; the loop is simply not entered. You need to pass `:new.job_id`. You would also want the new job id on Update as well. Imagine an employee gets a promotion; the update is likely to be something like:

```
update employee
   set job_id = 1011
     , salary = 50000.00
 where employee = 115;
```

Finally, processing a cursor here is dangerous at best. Doing so at least implies you allow multiple rows in `jobs` for a given job\_id. What happens when those rows have different `min_salary` and `max_salary`? You can update the procedure, or just do everything in the trigger and eliminate the procedure.

```
create or replace trigger check_salary_trg
    before insert or update on employees
    for each row
declare
    e_invalid_salary_range exception;
    l_job                  jobs%rowtype;
begin
    select *
      into l_job
      from jobs
     where job_id = :new.job_id;

    if :new.salary < l_job.min_salary
    or :new.salary > l_job.max_salary
    then
        raise e_invalid_salary_range;
    end if;
exception
    when e_invalid_salary_range then
        raise_application_error(-20001,
               'Invalid salary ' || :new.salary
            || '. Salaries for job ' || :new.job_id
            || ' must be between ' || l_job.min_salary
            || ' and ' || l_job.max_salary);
end check_salary_trg;
```

You could add handling for `no_data_found` and `too_many_rows` in the exception block, but those are better handled with constraints.
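As a hedged illustration of that last point (assuming `job_id` is the primary key of `jobs`, which the data model implies): with a foreign key in place, an invalid `job_id` is rejected before the trigger's `select ... into` could raise `no_data_found`, and the primary key guarantees `too_many_rows` cannot occur.

```
alter table employees
  add constraint employees_jobs_fk
  foreign key (job_id)
  references jobs (job_id);
```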
Congrats on the well documented question. The issue is with the `WHEN` clause of the trigger. On insert, the old value is `NULL`, and in Oracle you can't compare to NULL using "=" or "!=". Check this example:

```
PDB1--KOEN>create table trigger_test
  2  (name VARCHAR2(10));

Table TRIGGER_TEST created.

PDB1--KOEN>CREATE OR REPLACE trigger trigger_test_t1
  2  BEFORE INSERT OR UPDATE ON trigger_test
  3  FOR EACH ROW
  4  WHEN (new.name != old.name)
  5  BEGIN
  6    RAISE_APPLICATION_ERROR(-20001, 'Some error !');
  7  END trigger_test_t1;
  8  /

Trigger TRIGGER_TEST_T1 compiled

PDB1--KOEN>INSERT INTO trigger_test (name) values ('koen');

1 row inserted.

PDB1--KOEN>UPDATE trigger_test set name = 'steven';

Error starting at line : 1 in command -
UPDATE trigger_test set name = 'steven'
Error report -
ORA-20001: Some error !
ORA-06512: at "KOEN.TRIGGER_TEST_T1", line 2
ORA-04088: error during execution of trigger 'KOEN.TRIGGER_TEST_T1'
```

That is exactly the behaviour you're seeing in your code. On insert, the trigger doesn't seem to fire. Well... it doesn't, because in Oracle `'x' != NULL` does not yield true (it evaluates to NULL rather than TRUE or FALSE). See the info at the bottom of this answer.

Here is the proof. Let's recreate the trigger with an `NVL` function wrapped around the old value.

```
PDB1--KOEN>CREATE OR REPLACE trigger trigger_test_t1
  2  BEFORE INSERT OR UPDATE ON trigger_test
  3  FOR EACH ROW
  4  WHEN (new.name != NVL(old.name,'x'))
  5  -- above is similar to this
  6  --WHEN (new.name <> old.name or
  7  --     (new.name is null and old.name is not NULL) or
  8  --     (new.name is not null and old.name is NULL) )
  9  BEGIN
 10    RAISE_APPLICATION_ERROR(-20001, 'Some error !');
 11  END trigger_test_t1;
 12  /

Trigger TRIGGER_TEST_T1 compiled

PDB1--KOEN>INSERT INTO trigger_test (name) values ('jennifer');

Error starting at line : 1 in command -
INSERT INTO trigger_test (name) values ('jennifer')
Error report -
ORA-20001: Some error !
ORA-06512: at "KOEN.TRIGGER_TEST_T1", line 2
ORA-04088: error during execution of trigger 'KOEN.TRIGGER_TEST_T1'
```

There you go. It now fires on insert. Now why is this happening? According to the docs:

*Because null represents a lack of data, a null cannot be equal or unequal to any value or to another null. However, Oracle considers two nulls to be equal when evaluating a DECODE function.*

Check the docs or read up on it in this *20 year old* answer on [asktom](https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:1274000564279)
1,425
38,797,047
I'm running Ubuntu 12.04 and usually use Python 2.7, but I need a Python package that was built with Python 3.4 and that uses lxml. After updating aptitude, I can install Python 3.2 and lxml, but the package I want only works with 3.4. After installing Python 3.4, I try to install the lxml dependencies using

```
pip3 install libxml2-dev
```

I get the error:

```
No matching distribution found for libxml2-dev
```

`pip3 install lxml` doesn't work either and asks for libxml2:

```
Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?
```

Any ideas on how to install lxml? Thanks.
2016/08/05
[ "https://Stackoverflow.com/questions/38797047", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1106278/" ]
You are running

```
pip3 install libxml2-dev
```

when you should be running

```
sudo apt-get install libxml2 libxml2-dev
```

(you may need `libxslt` and its dev version as well)

`pip` doesn't install system libraries; `apt` and friends do that.
See <http://www.lfd.uci.edu/~gohlke/pythonlibs/#libxml-python>. Download the package and then do a `pip install <package.whl>`.
1,428
28,627,414
Welcome... I'm creating a project where I parse xlsx files with the xlrd library. Everything works just fine. Then I configured RabbitMQ and Celery, and created some tasks in the main folder which work and can be accessed from IPython.

The problems start when I'm in my application (an application created earlier in my project) and I try to import tasks from my app in my views.py. I tried to import it with all possible paths, but every time it throws an error.

The official documentation shows the right way of importing tasks from other applications. It looks like this:

`from project.myapp.tasks import mytask`

But it doesn't work at all.

In addition, when I'm in IPython I can import tasks with the command

`from tango.tasks import add`

And it works perfectly.

Just below I'm posting my files and the error printed out by the console.

views.py

```
# these are the imports that I was trying that seemed the most reasonable, but none of them worked

# import tasks
# from new_tango_project.tango.tasks import add
# from new_tango_project.tango import tasks
# from new_tango_project.new_tango_project.tango.tasks import add
# from new_tango_project.new_tango_project.tango import tasks
# from tango import tasks

#function to parse files
def parse_file(request, file_id):
    xlrd_file = get_object_or_404(xlrdFile, pk = file_id)
    if xlrd_file.status == False:
        #this is some basic task that I want to enter to
        tasks.add.delay(321,123)
```

settings.py

```
#I've just posted things directly connected to celery
import djcelery

INSTALLED_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'tango',
    'djcelery',
    'celery',
)

BROKER_URL = "amqp://sebrabbit:seb@localhost:5672/myvhost"
BROKER_HOST = "127.0.0.1"
BROKER_PORT = 5672
BROKER_VHOST = "myvhost"
BROKER_USER = "sebrabbit"
BROKER_PASSWORD = "seb"

CELERY_RESULT_BACKEND = 'amqp://'
CELERY_TASK_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT=['json']
CELERY_TIMEZONE = 'Europe/Warsaw'
CELERY_ENABLE_UTC = False
```

celery.py (in my main folder `new_tango_project` )

```
from __future__ import absolute_import

import os
from celery import Celery
import djcelery
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'new_tango_project.settings')

app = Celery('new_tango_project')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
# CELERY_IMPORTS = ['tango.tasks']

# Optional configuration, see the application user guide. 
app.conf.update( CELERY_TASK_RESULT_EXPIRES=3600, CELERY_RESULT_BACKEND='djcelery.backends.cache:CacheBackend', ) if __name__ == '__main__': app.start() ``` tasks.py (in my main project folder `new_tango_project`) ``` from __future__ import absolute_import from celery import Celery from celery.task import task app = Celery('new_tango_project', broker='amqp://sebrabbit:seb@localhost:5672/myvhost', backend='amqp://', include=['tasks']) @task def add(x, y): return x + y @task def mul(x, y): return x * y @task def xsum(numbers): return sum(numbers) @task def parse(file_id, xlrd_file): return "HAHAHAHHHAHHA" ``` tasks.py in my application folder ``` from __future__ import absolute_import from celery import Celery from celery.task import task # app = Celery('tango') @task def add(x, y): return x + y @task def asdasdasd(x, y): return x + y ``` celery console when starting ``` -------------- celery@debian v3.1.17 (Cipater) ---- **** ----- --- * *** * -- Linux-3.2.0-4-amd64-x86_64-with-debian-7.8 -- * - **** --- - ** ---------- [config] - ** ---------- .> app: new_tango_project:0x1b746d0 - ** ---------- .> transport: amqp://sebrabbit:**@localhost:5672/myvhost - ** ---------- .> results: amqp:// - *** --- * --- .> concurrency: 8 (prefork) -- ******* ---- --- ***** ----- [queues] -------------- .> celery exchange=celery(direct) key=celery ``` Finally my console log... ``` [2015-02-20 11:19:45,678: ERROR/MainProcess] Received unregistered task of type 'new_tango_project.tasks.add'. The message has been ignored and discarded. Did you remember to import the module containing this task? Or maybe you are using relative imports? Please see http://bit.ly/gLye1c for more information. The full contents of the message body was: {'utc': True, 'chord': None, 'args': (123123123, 123213213), 'retries': 0, 'expires': None, 'task': 'new_tango_project.tasks.add', 'callbacks': None, 'errbacks': None, 'timelimit': (None, None), 'taskset': None, 'kwargs': {}, 'eta': None, 'id': 'd9a8e560-1cd0-491d-a132-10345a04f391'} (233b) Traceback (most recent call last): File "/home/seb/PycharmProjects/tango/local/lib/python2.7/site-packages/celery/worker/consumer.py", line 455, in on_task_received strategies[name](message, body, KeyError: 'new_tango_project.tasks.add' ``` This is the log from one of many tries importing the tasks. Where I`m making mistake ? Best wishes
2015/02/20
[ "https://Stackoverflow.com/questions/28627414", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4563194/" ]
You would need to extend the scale class and override the `calculateXLabelRotation` method to use a user-supplied rotation rather than trying to work it out itself. If you do this, you would then need to extend the bar or line chart and override the init method to make use of this scale class (or you could make these changes directly to the scale, bar and line classes, and then there is no need to override).

So first, extend the scale class and make it use a user-defined option

```
var helpers = Chart.helpers;

Chart.MyScale = Chart.Scale.extend({
  calculateXLabelRotation: function() {
    //Get the width of each grid by calculating the difference
    //between x offsets between 0 and 1.

    this.ctx.font = this.font;

    var firstWidth = this.ctx.measureText(this.xLabels[0]).width,
      lastWidth = this.ctx.measureText(this.xLabels[this.xLabels.length - 1]).width,
      firstRotated,
      lastRotated;


    this.xScalePaddingRight = lastWidth / 2 + 3;
    this.xScalePaddingLeft = (firstWidth / 2 > this.yLabelWidth + 10) ? firstWidth / 2 : this.yLabelWidth + 10;

    this.xLabelRotation = 0;
    if (this.display) {
      var originalLabelWidth = helpers.longestText(this.ctx, this.font, this.xLabels),
        cosRotation,
        firstRotatedWidth;
      this.xLabelWidth = originalLabelWidth;
      //Allow 3 pixels x2 padding either side for label readability
      var xGridWidth = Math.floor(this.calculateX(1) - this.calculateX(0)) - 6;

      // check if the option is set; if so, use that
      if (this.overrideRotation) {

        // do the same as before but manually set the rotation rather than looping
        this.xLabelRotation = this.overrideRotation;
        cosRotation = Math.cos(helpers.radians(this.xLabelRotation));

        // We're right aligning the text now.
        if (firstRotated + this.fontSize / 2 > this.yLabelWidth + 8) {
          this.xScalePaddingLeft = firstRotated + this.fontSize / 2;
        }
        this.xScalePaddingRight = this.fontSize / 2;
        this.xLabelWidth = cosRotation * originalLabelWidth;
      } else {
        //Max label rotate should be 90 - also act as a loop counter
        while ((this.xLabelWidth > xGridWidth && this.xLabelRotation === 0) || (this.xLabelWidth > xGridWidth && this.xLabelRotation <= 90 && this.xLabelRotation > 0)) {
          cosRotation = Math.cos(helpers.radians(this.xLabelRotation));

          firstRotated = cosRotation * firstWidth;
          lastRotated = cosRotation * lastWidth;

          // We're right aligning the text now.
if (firstRotated + this.fontSize / 2 > this.yLabelWidth + 8) { this.xScalePaddingLeft = firstRotated + this.fontSize / 2; } this.xScalePaddingRight = this.fontSize / 2; this.xLabelRotation++; this.xLabelWidth = cosRotation * originalLabelWidth; } } if (this.xLabelRotation > 0) { this.endPoint -= Math.sin(helpers.radians(this.xLabelRotation)) * originalLabelWidth + 3; } } else { this.xLabelWidth = 0; this.xScalePaddingRight = this.padding; this.xScalePaddingLeft = this.padding; } }, }); ``` then in the extend the bar class to create a new graph type and override the init method to use the new ``` Chart.types.Bar.extend({ name: "MyBar", initialize: function(data) { //Expose options as a scope variable here so we can access it in the ScaleClass var options = this.options; this.ScaleClass = Chart.MyScale.extend({ overrideRotation: options.overrideRotation, offsetGridLines: true, calculateBarX: function(datasetCount, datasetIndex, barIndex) { //Reusable method for calculating the xPosition of a given bar based on datasetIndex & width of the bar var xWidth = this.calculateBaseWidth(), xAbsolute = this.calculateX(barIndex) - (xWidth / 2), barWidth = this.calculateBarWidth(datasetCount); return xAbsolute + (barWidth * datasetIndex) + (datasetIndex * options.barDatasetSpacing) + barWidth / 2; }, calculateBaseWidth: function() { return (this.calculateX(1) - this.calculateX(0)) - (2 * options.barValueSpacing); }, calculateBarWidth: function(datasetCount) { //The padding between datasets is to the right of each bar, providing that there are more than 1 dataset var baseWidth = this.calculateBaseWidth() - ((datasetCount - 1) * options.barDatasetSpacing); return (baseWidth / datasetCount); } }); this.datasets = []; //Set up tooltip events on the chart if (this.options.showTooltips) { helpers.bindEvents(this, this.options.tooltipEvents, function(evt) { var activeBars = (evt.type !== 'mouseout') ? this.getBarsAtEvent(evt) : []; this.eachBars(function(bar) { bar.restore(['fillColor', 'strokeColor']); }); helpers.each(activeBars, function(activeBar) { activeBar.fillColor = activeBar.highlightFill; activeBar.strokeColor = activeBar.highlightStroke; }); this.showTooltip(activeBars); }); } //Declare the extension of the default point, to cater for the options passed in to the constructor this.BarClass = Chart.Rectangle.extend({ strokeWidth: this.options.barStrokeWidth, showStroke: this.options.barShowStroke, ctx: this.chart.ctx }); //Iterate through each of the datasets, and build this into a property of the chart helpers.each(data.datasets, function(dataset, datasetIndex) { var datasetObject = { label: dataset.label || null, fillColor: dataset.fillColor, strokeColor: dataset.strokeColor, bars: [] }; this.datasets.push(datasetObject); helpers.each(dataset.data, function(dataPoint, index) { //Add a new point for each piece of data, passing any required data to draw. 
datasetObject.bars.push(new this.BarClass({ value: dataPoint, label: data.labels[index], datasetLabel: dataset.label, strokeColor: dataset.strokeColor, fillColor: dataset.fillColor, highlightFill: dataset.highlightFill || dataset.fillColor, highlightStroke: dataset.highlightStroke || dataset.strokeColor })); }, this); }, this); this.buildScale(data.labels); this.BarClass.prototype.base = this.scale.endPoint; this.eachBars(function(bar, index, datasetIndex) { helpers.extend(bar, { width: this.scale.calculateBarWidth(this.datasets.length), x: this.scale.calculateBarX(this.datasets.length, datasetIndex, index), y: this.scale.endPoint }); bar.save(); }, this); this.render(); }, }); ``` now you can declare a chart using this chart type and pass in the option `overrideRotation` here is a fiddle example <http://jsfiddle.net/leighking2/ye3usuhu/> and a snippet ```js var helpers = Chart.helpers; Chart.MyScale = Chart.Scale.extend({ calculateXLabelRotation: function() { //Get the width of each grid by calculating the difference //between x offsets between 0 and 1. this.ctx.font = this.font; var firstWidth = this.ctx.measureText(this.xLabels[0]).width, lastWidth = this.ctx.measureText(this.xLabels[this.xLabels.length - 1]).width, firstRotated, lastRotated; this.xScalePaddingRight = lastWidth / 2 + 3; this.xScalePaddingLeft = (firstWidth / 2 > this.yLabelWidth + 10) ? firstWidth / 2 : this.yLabelWidth + 10; this.xLabelRotation = 0; if (this.display) { var originalLabelWidth = helpers.longestText(this.ctx, this.font, this.xLabels), cosRotation, firstRotatedWidth; this.xLabelWidth = originalLabelWidth; //Allow 3 pixels x2 padding either side for label readability var xGridWidth = Math.floor(this.calculateX(1) - this.calculateX(0)) - 6; if (this.overrideRotation) { this.xLabelRotation = this.overrideRotation; cosRotation = Math.cos(helpers.radians(this.xLabelRotation)); // We're right aligning the text now. if (firstRotated + this.fontSize / 2 > this.yLabelWidth + 8) { this.xScalePaddingLeft = firstRotated + this.fontSize / 2; } this.xScalePaddingRight = this.fontSize / 2; this.xLabelWidth = cosRotation * originalLabelWidth; } else { //Max label rotate should be 90 - also act as a loop counter while ((this.xLabelWidth > xGridWidth && this.xLabelRotation === 0) || (this.xLabelWidth > xGridWidth && this.xLabelRotation <= 90 && this.xLabelRotation > 0)) { cosRotation = Math.cos(helpers.radians(this.xLabelRotation)); firstRotated = cosRotation * firstWidth; lastRotated = cosRotation * lastWidth; // We're right aligning the text now. 
if (firstRotated + this.fontSize / 2 > this.yLabelWidth + 8) { this.xScalePaddingLeft = firstRotated + this.fontSize / 2; } this.xScalePaddingRight = this.fontSize / 2; this.xLabelRotation++; this.xLabelWidth = cosRotation * originalLabelWidth; } } if (this.xLabelRotation > 0) { this.endPoint -= Math.sin(helpers.radians(this.xLabelRotation)) * originalLabelWidth + 3; } } else { this.xLabelWidth = 0; this.xScalePaddingRight = this.padding; this.xScalePaddingLeft = this.padding; } }, }); Chart.types.Bar.extend({ name: "MyBar", initialize: function(data) { //Expose options as a scope variable here so we can access it in the ScaleClass var options = this.options; this.ScaleClass = Chart.MyScale.extend({ overrideRotation: options.overrideRotation, offsetGridLines: true, calculateBarX: function(datasetCount, datasetIndex, barIndex) { //Reusable method for calculating the xPosition of a given bar based on datasetIndex & width of the bar var xWidth = this.calculateBaseWidth(), xAbsolute = this.calculateX(barIndex) - (xWidth / 2), barWidth = this.calculateBarWidth(datasetCount); return xAbsolute + (barWidth * datasetIndex) + (datasetIndex * options.barDatasetSpacing) + barWidth / 2; }, calculateBaseWidth: function() { return (this.calculateX(1) - this.calculateX(0)) - (2 * options.barValueSpacing); }, calculateBarWidth: function(datasetCount) { //The padding between datasets is to the right of each bar, providing that there are more than 1 dataset var baseWidth = this.calculateBaseWidth() - ((datasetCount - 1) * options.barDatasetSpacing); return (baseWidth / datasetCount); } }); this.datasets = []; //Set up tooltip events on the chart if (this.options.showTooltips) { helpers.bindEvents(this, this.options.tooltipEvents, function(evt) { var activeBars = (evt.type !== 'mouseout') ? this.getBarsAtEvent(evt) : []; this.eachBars(function(bar) { bar.restore(['fillColor', 'strokeColor']); }); helpers.each(activeBars, function(activeBar) { activeBar.fillColor = activeBar.highlightFill; activeBar.strokeColor = activeBar.highlightStroke; }); this.showTooltip(activeBars); }); } //Declare the extension of the default point, to cater for the options passed in to the constructor this.BarClass = Chart.Rectangle.extend({ strokeWidth: this.options.barStrokeWidth, showStroke: this.options.barShowStroke, ctx: this.chart.ctx }); //Iterate through each of the datasets, and build this into a property of the chart helpers.each(data.datasets, function(dataset, datasetIndex) { var datasetObject = { label: dataset.label || null, fillColor: dataset.fillColor, strokeColor: dataset.strokeColor, bars: [] }; this.datasets.push(datasetObject); helpers.each(dataset.data, function(dataPoint, index) { //Add a new point for each piece of data, passing any required data to draw. 
datasetObject.bars.push(new this.BarClass({ value: dataPoint, label: data.labels[index], datasetLabel: dataset.label, strokeColor: dataset.strokeColor, fillColor: dataset.fillColor, highlightFill: dataset.highlightFill || dataset.fillColor, highlightStroke: dataset.highlightStroke || dataset.strokeColor })); }, this); }, this); this.buildScale(data.labels); this.BarClass.prototype.base = this.scale.endPoint; this.eachBars(function(bar, index, datasetIndex) { helpers.extend(bar, { width: this.scale.calculateBarWidth(this.datasets.length), x: this.scale.calculateBarX(this.datasets.length, datasetIndex, index), y: this.scale.endPoint }); bar.save(); }, this); this.render(); }, }); var randomScalingFactor = function() { return Math.round(Math.random() * 100) }; var barChartData = { labels: ["January", "February", "March", "April", "May", "June", "July"], datasets: [{ fillColor: "rgba(220,220,220,0.5)", strokeColor: "rgba(220,220,220,0.8)", highlightFill: "rgba(220,220,220,0.75)", highlightStroke: "rgba(220,220,220,1)", data: [randomScalingFactor(), randomScalingFactor(), randomScalingFactor(), randomScalingFactor(), randomScalingFactor(), randomScalingFactor(), randomScalingFactor()] }, { fillColor: "rgba(151,187,205,0.5)", strokeColor: "rgba(151,187,205,0.8)", highlightFill: "rgba(151,187,205,0.75)", highlightStroke: "rgba(151,187,205,1)", data: [randomScalingFactor(), randomScalingFactor(), randomScalingFactor(), randomScalingFactor(), randomScalingFactor(), randomScalingFactor(), randomScalingFactor()] }, { fillColor: "rgba(15,18,20,0.5)", strokeColor: "rgba(15,18,20,0.8)", highlightFill: "rgba(15,18,20,0.75)", highlightStroke: "rgba(15,18,20,1)", data: [randomScalingFactor(), randomScalingFactor(), randomScalingFactor(), randomScalingFactor(), randomScalingFactor(), randomScalingFactor(), randomScalingFactor()] }] } window.onload = function() { var ctx = document.getElementById("canvas").getContext("2d"); window.myBar = new Chart(ctx).MyBar(barChartData, { overrideRotation: 30 }); } ``` ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/1.0.1/Chart.js"></script> <canvas id="canvas" height="150" width="300"></canvas> ```
Note that **for chart.js 3.x the way of specifying the axis scale options has changed**: see <https://www.chartjs.org/docs/master/getting-started/v3-migration.html#scales>

Consequently, in the above answer for 2.x you need to remove the square brackets *and* key the scale by its axis ID (`x` instead of `xAxes`), like this:

```
var myChart = new Chart(ctx, {
    type: 'bar',
    data: chartData,
    options: {
        scales: {
            x: {
                ticks: {
                    autoSkip: false,
                    maxRotation: 90,
                    minRotation: 90
                }
            }
        }
    }
});
```
1,429
29,790,344
I want to generate the following XML file:

```
<foo if="bar"/>
```

I've tried this:

```
from lxml import etree

etree.Element("foo", if="bar")
```

But I got this error:

```
    page = etree.Element("configuration", if="ok")
                                          ^
SyntaxError: invalid syntax
```

Any ideas? I'm using Python 2.7.9 and lxml 3.4.2
2015/04/22
[ "https://Stackoverflow.com/questions/29790344", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4131226/" ]
```
etree.Element("foo", {"if": "bar"})
```

The attributes can be passed in as a dict:

```
from lxml import etree

root = etree.Element("foo", {"if": "bar"})
print etree.tostring(root, pretty_print=True)
```

output

```
<foo if="bar"/>
```
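If the dict argument feels awkward, lxml elements also expose a `set()` method, which takes the attribute name as a plain string and so never goes through the Python parser. A minimal sketch:

```
from lxml import etree

root = etree.Element("foo")
root.set("if", "bar")  # the attribute name is just a string here, so 'if' is fine
print(etree.tostring(root))  # b'<foo if="bar"/>'
```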
`if` is a reserved word (keyword) in Python, which means you can't use it as an identifier, and that includes keyword-argument names in a function call. Passing the attribute in a dict, as in the other answer, sidesteps the parser entirely.
1,431
21,662,881
My approach is:

```
def build_layers():
    layers = ()
    for i in range (0, 32):
        layers += (True)
```

but this leads to

```
TypeError: can only concatenate tuple (not "bool") to tuple
```

Context: This should prepare a call of [bpy.ops.pose.armature\_layers](http://www.blender.org/documentation/blender_python_api_2_69_9/bpy.ops.pose.html), therefore I can't choose a list.
2014/02/09
[ "https://Stackoverflow.com/questions/21662881", "https://Stackoverflow.com", "https://Stackoverflow.com/users/241590/" ]
`(True)` is not a tuple. Do this instead:

```
layers += (True, )
```

Even better, skip the loop entirely and use tuple repetition:

```
(True, ) * 32
```
Since tuples are immutable, each concatenation creates a new tuple. It is better to do something like:

```
def build_layers(count):
    return tuple([True]*count)
```

If you need some logic in the constructed tuple, just use a list comprehension or generator expression in the tuple constructor:

```
>>> tuple(bool(ord(e)%2) for e in 'abcdefg')
(True, False, True, False, True, False, True)
```
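As a side note for the Blender use case, the same idea extends to enabling only selected layers. A small sketch, where the set of enabled indices is a made-up example:

```
# hypothetical choice of layers to enable
enabled = {0, 5, 19}
layers = tuple(i in enabled for i in range(32))  # 32 bools, True only at 0, 5 and 19
```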
1,434
57,612,054
I'm testing an API endpoint that is supposed to raise a ValidationError in a Django model (note that the exception is a Django exception, not DRF, because it's in the model).

```
from rest_framework.test import APITestCase

class TestMyView(APITestCase):
    # ...
    def test_bad_request(self):
        # ...
        response = self.client.post(url, data)
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
```

However, my test errors out with an exception instead of passing. It doesn't even fail by returning a 500 instead of a 400; it doesn't get there at all. Isn't DRF's APIClient supposed to handle *every* exception? I've searched online but found nothing. I've read that DRF doesn't handle Django's native ValidationError, but still that doesn't explain why I am not even getting a 500. Any idea what I'm doing wrong?

**Full stack trace**:

```
E
======================================================================
ERROR: test_cannot_create_duplicate_email (organizations.api.tests.test_contacts.TestContactListCreateView)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/code/organizations/api/tests/test_contacts.py", line 98, in test_cannot_create_duplicate_email
    response = self.jsonapi_post(self.url(new_partnership), data)
  File "/code/config/tests/base.py", line 166, in jsonapi_post
    url, data=json.dumps(data), content_type=content_type)
  File "/usr/local/lib/python3.7/site-packages/rest_framework/test.py", line 300, in post
    path, data=data, format=format, content_type=content_type, **extra)
  File "/usr/local/lib/python3.7/site-packages/rest_framework/test.py", line 213, in post
    return self.generic('POST', path, data, content_type, **extra)
  File "/usr/local/lib/python3.7/site-packages/rest_framework/test.py", line 238, in generic
    method, path, data, content_type, secure, **extra)
  File "/usr/local/lib/python3.7/site-packages/django/test/client.py", line 422, in generic
    return self.request(**r)
  File "/usr/local/lib/python3.7/site-packages/rest_framework/test.py", line 289, in request
    return super(APIClient, self).request(**kwargs)
  File "/usr/local/lib/python3.7/site-packages/rest_framework/test.py", line 241, in request
    request = super(APIRequestFactory, self).request(**kwargs)
  File "/usr/local/lib/python3.7/site-packages/django/test/client.py", line 503, in request
    raise exc_value
  File "/usr/local/lib/python3.7/site-packages/django/core/handlers/exception.py", line 34, in inner
    response = get_response(request)
  File "/usr/local/lib/python3.7/site-packages/django/core/handlers/base.py", line 115, in _get_response
    response = self.process_exception_by_middleware(e, request)
  File "/usr/local/lib/python3.7/site-packages/django/core/handlers/base.py", line 113, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/usr/local/lib/python3.7/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
    return view_func(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/django/views/generic/base.py", line 71, in view
    return self.dispatch(request, *args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 495, in dispatch
    response = self.handle_exception(exc)
  File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 455, in handle_exception
    self.raise_uncaught_exception(exc)
  File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 492, in dispatch
    response = handler(request, *args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/rest_framework/generics.py", line 244, in post
    return self.create(request, *args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/rest_framework/mixins.py", line 21, in create
    self.perform_create(serializer)
  File "/usr/local/lib/python3.7/site-packages/rest_framework/mixins.py", line 26, in perform_create
    serializer.save()
  File "/usr/local/lib/python3.7/site-packages/rest_framework/serializers.py", line 214, in save
    self.instance = self.create(validated_data)
  File "/code/organizations/api/serializers.py", line 441, in create
    'partnership': self.context['partnership']
  File "/usr/local/lib/python3.7/site-packages/rest_framework/serializers.py", line 943, in create
    instance = ModelClass._default_manager.create(**validated_data)
  File "/usr/local/lib/python3.7/site-packages/django/db/models/manager.py", line 82, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 422, in create
    obj.save(force_insert=True, using=self.db)
  File "/code/organizations/models.py", line 278, in save
    self.full_clean()
  File "/usr/local/lib/python3.7/site-packages/django/db/models/base.py", line 1203, in full_clean
    raise ValidationError(errors)
django.core.exceptions.ValidationError: {'__all__': ['Supplier contact emails must be unique per organization.']}
```
2019/08/22
[ "https://Stackoverflow.com/questions/57612054", "https://Stackoverflow.com", "https://Stackoverflow.com/users/686617/" ]
**Question**: Isn't DRF's APIClient supposed to handle every exception?

**Answer**: No. It's a test client; it won't handle any uncaught exceptions, because that's how test clients work. Test clients propagate the exception so that the test fails with a "crash" when an exception isn't caught. You can test that exceptions are raised and uncaught with [`self.assertRaises`](https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertRaises)

**Question**: The APIView should return HTTP\_400\_BAD\_REQUEST when I raise a ValidationError but the exception isn't caught.

**Answer**: You should look at the [source code for APIView](http://www.cdrf.co/3.9/rest_framework.views/APIView.html#handle_exception). Inside the `dispatch()` method, all exceptions raised while creating the `response` object are caught and the method `handle_exception()` is called. Your exception is a `ValidationError`. The crucial lines are:

```
exception_handler = self.get_exception_handler()
context = self.get_exception_handler_context()
response = exception_handler(exc, context)

if response is None:
    self.raise_uncaught_exception(exc)
```

If you haven't changed `settings.EXCEPTION_HANDLER`, you get the default DRF exception handler, [source code here](https://github.com/encode/django-rest-framework/blob/3.9.0/rest_framework/views.py#L73). It handles `Http404`, `PermissionDenied` and `APIException`. The `APIView` itself actually also handles `AuthenticationFailed` and `NotAuthenticated`. But not `ValidationError`. So it returns `None` and therefore the view raises your `ValidationError`, which stops your test. You see that in your traceback:

```
File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 455, in handle_exception
    self.raise_uncaught_exception(exc)
```

You can decide to handle more exceptions than the default ones handled by DRF; you can read [this](https://www.django-rest-framework.org/api-guide/exceptions/#custom-exception-handling) on custom exception handling.

**EDIT**: You can also `raise rest_framework.exceptions.ValidationError` instead of the standard Django `ValidationError`. That is an `APIException` and therefore will be handled by DRF as a `HTTP400_BAD_REQUEST`. [1]

Side note: Luckily DRF doesn't catch every single exception! If there's a serious flaw in your code you actually **want** your code to "crash" and produce an error log and your server to return a HTTP 500. Which is what happens here. The response would be an HTTP 500 if this wasn't the test client.

[1] <https://github.com/encode/django-rest-framework/blob/3.9.0/rest_framework/exceptions.py#L142>
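To make that concrete, here is a minimal sketch of such a custom handler that converts Django's `ValidationError` into a 400 response. The module path in `EXCEPTION_HANDLER` is hypothetical; the rest uses only documented DRF/Django APIs:

```
# settings.py (the dotted path to the handler is an example)
# REST_FRAMEWORK = {"EXCEPTION_HANDLER": "myapp.utils.custom_exception_handler"}

from django.core.exceptions import ValidationError as DjangoValidationError
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import exception_handler


def custom_exception_handler(exc, context):
    # Let DRF handle everything it already knows about first.
    response = exception_handler(exc, context)
    if response is None and isinstance(exc, DjangoValidationError):
        detail = exc.message_dict if hasattr(exc, "message_dict") else exc.messages
        return Response({"detail": detail}, status=status.HTTP_400_BAD_REQUEST)
    return response
```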
Something in your code is causing a Python error which is halting execution before your POST request can return a valid HTTP response. Your code doesn't even reach the line `self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)` because there is no response. If you're calling your tests in the normal way with `./manage.py test` then you should see the traceback and be able to narrow down what caused the error.
1,436
2,261,671
I have a bit counting method that I am trying to make as fast as possible. I want to try the algorithm below from [Bit Twiddling Hacks](http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel), but I don't know C. What is 'type T' and what is the python equivalent of (T)~(T)0/3?

> A generalization of the best bit counting method to integers of bit-widths upto 128 (parameterized by type T) is this:
>
> ```
> v = v - ((v >> 1) & (T)~(T)0/3);                           // temp
> v = (v & (T)~(T)0/15*3) + ((v >> 2) & (T)~(T)0/15*3);      // temp
> v = (v + (v >> 4)) & (T)~(T)0/255*15;                      // temp
> c = (T)(v * ((T)~(T)0/255)) >> (sizeof(v) - 1) * CHAR_BIT; // count
> ```
2010/02/14
[ "https://Stackoverflow.com/questions/2261671", "https://Stackoverflow.com", "https://Stackoverflow.com/users/270316/" ]
T is an integer type, which I'm assuming is unsigned. Since this is C, it'll be fixed width, probably (but not necessarily) one of 8, 16, 32, 64 or 128.

The fragment `(T)~(T)0` that appears repeatedly in that code sample just gives the value 2\*\*N-1, where N is the width of the type T. I suspect that the code may require that N be a multiple of 8 for correct operation.

Here's a direct translation of the given code into Python, parameterized in terms of N, the width of T in bits.

```
def count_set_bits(v, N=128):
    mask = (1 << N) - 1
    v = v - ((v >> 1) & mask//3)
    v = (v & mask//15*3) + ((v >> 2) & mask//15*3)
    v = (v + (v >> 4)) & mask//255*15
    return (mask & v * (mask//255)) >> (N//8 - 1) * 8
```

Caveats: (1) the above will only work for numbers up to 2\*\*128. You might be able to generalize it for larger numbers, though. (2) There are obvious inefficiencies: for example, 'mask//15' is computed twice. This doesn't matter for C, of course, because the compiler will almost certainly do the division at compile time rather than run time, but Python's peephole optimizer may not be so clever. (3) The fastest C method may well not translate to the fastest Python method. For Python speed, you should probably be looking for an algorithm that minimizes the number of Python bitwise operations. As Alexander Gessler said: profile!
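As a quick sanity check of this translation (not part of the original answer), you can compare it against Python's own bit counting on random 128-bit inputs:

```
import random

# verify count_set_bits agrees with bin(v).count('1') on random values
for _ in range(1000):
    v = random.getrandbits(128)
    assert count_set_bits(v) == bin(v).count('1')
```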
What you copied is a template for generating code. It's not a good idea to transliterate that template into another language and expect it to run fast. Let's expand the template.

(T)~(T)0 means "as many 1-bits as fit in type T". The algorithm needs 4 masks which we will compute for the various T-sizes we might be interested in.

```
>>> for N in (8, 16, 32, 64, 128):
...     all_ones = (1 << N) - 1
...     constants = ' '.join([hex(x) for x in [
...         all_ones // 3,
...         all_ones // 15 * 3,
...         all_ones // 255 * 15,
...         all_ones // 255,
...         ]])
...     print N, constants
...
8 0x55 0x33 0xf 0x1
16 0x5555 0x3333 0xf0f 0x101
32 0x55555555L 0x33333333L 0xf0f0f0fL 0x1010101L
64 0x5555555555555555L 0x3333333333333333L 0xf0f0f0f0f0f0f0fL 0x101010101010101L
128 0x55555555555555555555555555555555L 0x33333333333333333333333333333333L 0xf0f0f0f0f0f0f0f0f0f0f0f0f0f0f0fL 0x1010101010101010101010101010101L
>>>
```

You'll notice that the masks generated for the 32-bit case match those in the hardcoded 32-bit C code. Implementation detail: lose the `L` suffix from the 32-bit masks (Python 2.x) and lose all `L` suffixes for Python 3.x.

As you can see, the whole template and (T)~(T)0 caper is merely obfuscatory sophistry. Put quite simply, for a k-byte type, you need 4 masks:

```
k bytes each 0x55
k bytes each 0x33
k bytes each 0x0f
k bytes each 0x01
```

and the final shift is merely N-8 (i.e. 8\*(k-1)) bits.

Aside: I doubt if the template code would actually work on a machine whose CHAR\_BIT was not 8, but there aren't very many of those around these days.

Update: There is another point that affects the correctness and the speed when transliterating such algorithms from C to Python. The C algorithms often assume unsigned integers. In C, operations on unsigned integers work silently modulo 2\*\*N. In other words, only the least significant N bits are retained. No overflow exceptions. Many bit twiddling algorithms rely on this. However (a) Python's `int` and `long` are signed (b) old Python 2.X will raise an exception, recent Python 2.Xs will silently promote `int` to `long` and Python 3.x `int` == Python 2.x `long`.

The correctness problem usually requires `register &= all_ones` at least once in the Python code. Careful analysis is often required to determine the minimal correct masking.

Working in `long` instead of `int` doesn't do much for efficiency. You'll notice that the algorithm for 32 bits will return a `long` answer even from input of `0`, because the 32-bits all\_ones is `long`.
1,438
74,335,162
I have a file with a function and a file that calls the function. Finally, I run a .bat. I don't know how to add an argument when calling the .bat file, so that the argument gets passed to the function as below.

file\_with\_func.py

```
def some_func(val):
    print(val)
```

run\_bat.py

```
from bin.file_with_func import some_func

some_func(val)
```

myBat.bat

```
set basePath=%cd%
cd %~dp0
cd ..
python manage.py shell < bin/run_bat.py
cd %basePath%
```

Now I would like to run the .bat like this:

```
\bin>.\myBat.bat "mystring"
```

**Or after starting, get options to choose from, e.g.**

```
\bin>.\myBat.bat
>>> Choose 1 or 2
>>> 1
```

**And then the function returns** `"You chose 1"`
2022/11/06
[ "https://Stackoverflow.com/questions/74335162", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17356459/" ]
You need to turn a bunch of POJO's (Plain Old JavaScript Objects) into a class with methods specialized for this kind of object. The idiomatic way is to create a class that takes the POJO's data in some way (since I'm lazy I just pass the entire thing). TypeScript doesn't change how you approach this - you just need to add type annotations.

```js
const data = [{"Value":"100000000","Duration":1},{"Value":"100000001","Duration":2},{"Value":"100000002","Duration":3},{"Value":"100000003","Duration":5},{"Value":"100000004","Duration":0},{"Value":"100000005","Duration":8},{"Value":"100000006","Duration":10}];

class Duration {
  /* private data: { Value: string; Duration: number } */
  constructor(data/* : { Value: string; Duration: number */) {
    this.data = data;
  }

  durationInSeconds() {
    return this.data.Duration * 1000;
  }
}

const parsed = data.map((datum) => new Duration(datum));
console.log(parsed[0].durationInSeconds());
```

---

For convenience you may add a method like this:

```
static new(data) {
  return new Duration(data);
}
```

Then it'll look cleaner in the `map`:

```
const parsed = data.map(Duration.new);
```
> The problem is that deserialized json is just a data container...

That's right. In order to deserialize JSON to a class instance with methods, you need to tell the deserializer which class is used to create the instance. However, it's not a good idea to define such class information using `interface` in TypeScript, because `interface` information is discarded after compilation. To define the class used during deserialization, you need `class`:

```
class Duration {
  value: string;
  duration: number;
  durationInSeconds(): number {
    return this.duration*1000;
  };
}
```

To deserialize JSON into an object with methods (and "ideally also deep"), I've made an npm module named [esserializer](https://www.npmjs.com/package/esserializer) to solve this problem: it saves JavaScript/TypeScript class instance values during serialization, in plain JSON format, together with the class name information:

```
const ESSerializer = require('esserializer');
const serializedText = ESSerializer.serialize(yourArrayOfDuration);
```

Later on, during the deserialization stage (possibly on another machine), esserializer can recursively deserialize the object instance, with all Class/Property/Method information retained, using the same class definition:

```
const deserializedObj = ESSerializer.deserialize(serializedText, [Duration]);
// deserializedObj is a perfect copy of yourArrayOfDuration
```
1,439
67,267,305
I have a custom training loop that can be simplified as follows

```
inputs = tf.keras.Input(dtype=tf.float32, shape=(None, None, 3))
model = tf.keras.Model({"inputs": inputs}, {"loss": f(inputs)})
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9, nesterov=True)

for inputs in batches:
    with tf.GradientTape() as tape:
        results = model(inputs, training=True)
    grads = tape.gradient(results["loss"], model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
```

The [TensorFlow documentation of ExponentialMovingAverage](https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage) is not clear on how it should be used in a [from-scratch training loop](https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch). Has anyone worked with this?

Additionally, how should the shadow variables be restored into the model if both are still in memory, and how can I check that the training variables were correctly updated?
2021/04/26
[ "https://Stackoverflow.com/questions/67267305", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1782553/" ]
Create the EMA object before the training loop:

```
ema = tf.train.ExponentialMovingAverage(decay=0.9999)
```

And then just apply the EMA after your optimization step. The ema object will keep shadow variables of your model's variables. (You don't need the call to `tf.control_dependencies` here; see the note in the [documentation](https://www.tensorflow.org/api_docs/python/tf/control_dependencies))

```
optimizer.apply_gradients(zip(grads, model.trainable_variables))
ema.apply(model.trainable_variables)
```

Then, one way to use the shadow variables in your model is to assign each model variable its shadow value by calling the `average` method of the EMA object on it:

```
for var in model.trainable_variables:
    var.assign(ema.average(var))

model.save("model_with_shadow_variables.h5")
```
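If you want to evaluate with the shadow weights and then keep training with the originals, a common pattern is to back the variables up first and restore them afterwards. A sketch using only the APIs above:

```
# back up the current weights, swap in the EMA shadows, evaluate, then restore
backup = [tf.identity(v) for v in model.trainable_variables]
for var in model.trainable_variables:
    var.assign(ema.average(var))

# ... run your evaluation here ...

for var, saved in zip(model.trainable_variables, backup):
    var.assign(saved)
```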
EMA with customizing `model.fit`
--------------------------------

Here is a working example of **Exponential Moving Average** with customizing the `fit`. [Ref](https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage).

```
from tensorflow import keras
import tensorflow as tf

class EMACustomModel(keras.Model):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.ema = tf.train.ExponentialMovingAverage(decay=0.999)

    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        gradients = tape.gradient(loss, self.trainable_variables)
        opt_op = self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))

        '''About tf.control_dependencies:
        Note: In TensorFlow 2 with eager and/or Autograph, you should not
        require this method, as code executes in the expected order. Only use
        tf.control_dependencies when working with v1-style code or in a graph
        context such as inside Dataset.map.
        '''
        with tf.control_dependencies([opt_op]):
            self.ema.apply(self.trainable_variables)

        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```

**DummyModel**

```
import numpy as np

input = keras.Input(shape=(28, 28))
flat = tf.keras.layers.Flatten()(input)
outputs = keras.layers.Dense(1)(flat)

model = EMACustomModel(input, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
```

**DummyData**

```
np.random.seed(101)
x = np.random.randint(0, 256, size=(50, 28, 28)).astype("float32")
y = np.random.random((50, 1))
print(x.shape, y.shape)

# train the model
model.fit(x, y, epochs=10, verbose=2)
```

```
...
...
Epoch 49/50
2/2 - 0s - loss: 189.8506 - mae: 10.8830
Epoch 50/50
2/2 - 0s - loss: 170.3690 - mae: 10.1046

model.trainable_weights[:1][:1]
```
1,440
21,243,719
I am using Python (2.7) and I have a long nested list of X,Y coordinates specifying end points of lines. I need to shift the Y coordinates by a specified amount. For instance, this is what I would like to do:

```
lines = [((2, 98), (66, 32)), ((67, 31), (96, 2)), ((40, 52), (88, 3))]
```

perform some coding that is eluding me to get...

```
lines = [((2, 198), (66, 132)), ((67, 131), (96, 102)), ((40, 152), (88, 103))]
```

Can anyone please tell me how I can go about accomplishing this? Thank you for the help!!
2014/01/20
[ "https://Stackoverflow.com/questions/21243719", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3216653/" ]
I'd do something like:

```
>>> dy = 100
>>> lines = [((2, 98), (66, 32)), ((67, 31), (96, 2)), ((40, 52), (88, 3))]
>>> newlines = [tuple((x, y+dy) for x, y in subline) for subline in lines]
>>> newlines
[((2, 198), (66, 132)), ((67, 131), (96, 102)), ((40, 152), (88, 103))]
```

which is roughly the same as:

```
newlines = []
for subline in lines:
    tmp = []
    for x, y in subline:
        tmp.append((x, y+dy))
    tmp = tuple(tmp)
    newlines.append(tmp)
```
Tuples are immutable, so as currently structured it's impossible without making either the line or the point a list, or rebuilding from scratch each time.

point as a list, line as a tuple:

```
line = lines[0]
for point in line:
    point[1] += 100
```

line as a list, point as a tuple:

```
line = lines[0]
for i, (x, y) in enumerate(line):
    line[i] = x, y+100
```

or rebuilding the object entirely:

```
lines[0] = tuple(((x, y+100) for x, y in lines[0]))
```
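Both answers above rebuild the structure in pure Python. If the list is large and always rectangular like the example, a NumPy sketch can shift every Y at once (integer input comes back as integers here):

```
import numpy as np

arr = np.array(lines)   # shape (n_lines, 2, 2): lines x endpoints x (x, y)
arr[:, :, 1] += 100     # shift every y coordinate in one step
lines = [tuple(map(tuple, sub)) for sub in arr.tolist()]
```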
1,441
63,871,922
I am using **Ubuntu 20.04**. I upgraded Tensorflow-2.2.0 to Tensorflow-2.3.0. When the version was **2.2.0**, TensorFlow was utilizing the GPU well, but after upgrading to version **2.3.0** it isn't detecting the GPU. I have seen this [Link](https://stackoverflow.com/questions/63515767/tensorflow-2-3-0-does-not-detect-gpu) from Stack Overflow. That was a problem with the **cuDNN** version, but I have the required version of cuDNN.

```
me_sajied@Kunai:~$ apt list | grep cudnn

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

libcudnn7-dev/now 7.6.5.32-1+cuda10.1 amd64 [installed,local]
libcudnn7/now 7.6.5.32-1+cuda10.1 amd64 [installed,local]
```

I also have all the other required software at the right versions.

Cuda
----

```
me_sajied@Kunai:~$ apt list | grep cuda-toolkit

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

cuda-toolkit-10-0/unknown 10.0.130-1 amd64
cuda-toolkit-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-toolkit-10-2/unknown 10.2.89-1 amd64
cuda-toolkit-11-0/unknown,unknown 11.0.3-1 amd64
nvidia-cuda-toolkit-gcc/focal 10.1.243-3 amd64
nvidia-cuda-toolkit/focal 10.1.243-3 amd64
```

Python
------

```
me_sajied@Kunai:~$ python3 --version
Python 3.8.2
```

environment
-----------

```
LD_LIBRARY_PATH="/usr/local/cuda-10.1/lib64"
```

Log
---

```
me_sajied@Kunai:~$ python3
Python 3.8.2 (default, Jul 16 2020, 14:00:26)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
2020-09-13 21:28:37.387327: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
>>>
>>> tf.test.is_gpu_available()
WARNING:tensorflow:From <stdin>:1: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2020-09-13 21:28:48.806385: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-09-13 21:28:48.836251: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2699905000 Hz
2020-09-13 21:28:48.836637: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3fde5f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-09-13 21:28:48.836685: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-09-13 21:28:48.840030: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-09-13 21:28:48.882190: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-13 21:28:48.882582: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x408bd90 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-09-13 21:28:48.882606: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce 930MX, Compute Capability 5.0
2020-09-13 21:28:48.882796: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-13 21:28:48.883151: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce 930MX computeCapability: 5.0
coreClock: 1.0195GHz coreCount: 3 deviceMemorySize: 1.96GiB deviceMemoryBandwidth: 14.92GiB/s
2020-09-13 21:28:48.883196: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-09-13 21:28:48.883415: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcublas.so.10'; dlerror: libcublas.so.10: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/extras/CUPTI/lib64
2020-09-13 21:28:48.885196: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-09-13 21:28:48.885544: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-09-13 21:28:48.887160: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-09-13 21:28:48.888134: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-09-13 21:28:48.891565: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-09-13 21:28:48.891603: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1753] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2020-09-13 21:28:48.891625: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-09-13 21:28:48.891632: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263]      0
2020-09-13 21:28:48.891639: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0:   N
False
>>>
```
2020/09/13
[ "https://Stackoverflow.com/questions/63871922", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13442402/" ]
In your `~/.bashrc` add:

```
export LD_LIBRARY_PATH=/usr/local/cuda-10.1/lib64:$LD_LIBRARY_PATH
```

(The `export` is needed so that child processes such as Python actually inherit the variable.) If you have a different location for the lib64 folder, you need to adjust the path accordingly.

As a side note, if you want to switch between multiple CUDA versions frequently, you can also set the variable for a single command directly in the terminal, such as:

```
LD_LIBRARY_PATH=/usr/local/cuda-10.1/lib64 python myprogram_which_needs_10_1.py
```
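After opening a new shell (or sourcing `~/.bashrc`), a quick way to confirm that TensorFlow can now see the GPU is:

```
import tensorflow as tf

print(tf.config.list_physical_devices('GPU'))
# expected something like: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
```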
> 2020-09-13 21:28:48.883415: W tensorflow/stream\_executor/platform/default/dso\_loader.cc:59] Could not load dynamic library 'libcublas.so.10'; dlerror: libcublas.so.10: cannot open shared object file: No such file or directory;

In my case, this was caused by `libcublas10` and `libcublas-dev` having been installed for **CUDA 10.2** by `apt upgrade`. My solution to this problem follows.

* My environment is based on the CUDA repos by NVIDIA.

```
$ sudo apt install --reinstall libcublas10=10.2.1.243-1 libcublas-dev=10.2.1.243-1
```

Then prevent them from showing up as upgrade candidates again:

```
$ sudo apt-mark hold libcublas10
$ sudo apt-mark hold libcublas-dev
```
1,444
787,711
I'm trying to write a function to return the truth value of a given PyObject. This function should return the same value as the if() truth test -- empty lists and strings are False, etc. I have been looking at the python/include headers, but haven't found anything that seems to do this. The closest I came was PyObject\_RichCompare() with True as the second value, but that returns False for "1" == True for example. Is there a convenient function to do this, or do I have to test against a sequence of types and do special-case tests for each possible type? What does the internal implementation of if() do?
2009/04/24
[ "https://Stackoverflow.com/questions/787711", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Isn't this it, in object.h: ``` PyAPI_FUNC(int) PyObject_IsTrue(PyObject *); ``` ?
Use `int PyObject_IsTrue(PyObject *o)`. It returns 1 if the object o is considered to be true, and 0 otherwise. This is equivalent to the Python expression `not not o`. On failure, it returns -1. (From the [Python/C API Reference Manual](http://docs.python.org/c-api/object.html))
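For reference, `bool()` at the Python level applies the same truth test that `PyObject_IsTrue` implements in C, so a quick Python session previews what the C call will report:

```
for obj in (0, 1, "", "0", [], [0], None, float('nan')):
    print(repr(obj), '->', bool(obj))
# note that "0" and [0] are truthy (non-empty), and NaN is truthy too
```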
1,445
54,399,465
I have a dataframe like this,

```
ColA  Result_ColA  ColB  Result_ColB  Result_ColC
1     True         1     True         True
2     False        2     True         False
3     True         3     True         False
```

I want to identify, in a Python list, the row numbers that have a value of False present in any of the Result\_ columns. For the given dataframe, the list will contain row numbers [2, 3], counting row numbers from 1.

TypeError traceback:

```
ReqRows = np.arange(1, len(Out_df)+ 1)[Out_df.eq(False).any(axis=1).values].tolist()
Traceback (most recent call last):
  File "<ipython-input-92-497c7b225e2a>", line 1, in <module>
    ReqRows = np.arange(1, len(Out_df)+ 1)[Out_df.eq(False).any(axis=1).values].tolist()
  File "C:\Users\aaa\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\ops.py", line 1279, in f
    return self._combine_const(other, na_op)
  File "C:\Users\aaa\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\frame.py", line 3625, in _combine_const
    raise_on_error=raise_on_error)
  File "C:\Users\aaa\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\internals.py", line 3162, in eval
    return self.apply('eval', **kwargs)
  File "C:\Users\aaa\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\internals.py", line 3056, in apply
    applied = getattr(b, f)(**kwargs)
  File "C:\Users\aaa\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\internals.py", line 1115, in eval
    transf(values), other)
  File "C:\Users\aaa\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\internals.py", line 2247, in _try_coerce_args
    raise TypeError
TypeError
```
2019/01/28
[ "https://Stackoverflow.com/questions/54399465", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7638174/" ]
Alternatively, you could use `countDocuments()` to check the number of documents matching the query. This just counts the matching documents rather than returning them.
Pass the id in `findOne()`. Make one common function:

```
const objectID = require('mongodb').ObjectID

function getMongoObjectId(id) {
    return new objectID(id)
}
```

Now just call the function:

```
findOne({_id: common.getMongoObjectId('ID value here')})
```

It works the same as a `where` condition in MySQL.
1,446
21,508,816
Sorry if this is a dumb question, but this is the first time I've used Python and MongoDB. Anyway, the problem I have is that I am trying to insert() a string to be stored in my database, by read()-ing data in with a loop and then giving insert() line 1 and line 2 in a single string (this is probably a messy way of doing it, but I don't know how to make read() read 2 lines at a time). However, I get this error when running it: TypeError: insert() takes at least 2 arguments (1 given)

```
from pymongo import MongoClient

client = MongoClient("192.168.1.82", 27017)
db = client.local
collection = db.JsonDat
file_object = open('a.txt', 'r')
post=collection.insert()
readingData = True

def readData():
    while readingData==True:
        line1 = file_object.readline()
        line2 = file_object.readline()
        line1
        line2
        if line1 or line2 == "":
            readingData = False
        dbData = line1 %line2
        post(dbData)

print collection.find()
```
2014/02/02
[ "https://Stackoverflow.com/questions/21508816", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3196428/" ]
The relevant part of your code is this: ``` post=collection.insert() ``` What you're doing there is calling the `insert` method without arguments rather than assigning that method to `post`. As the [`insert` method](http://api.mongodb.org/python/current/api/pymongo/collection.html#pymongo.collection.Collection.insert) takes as its arguments at least the document you're trying to insert and you haven't passed it anything, it only receives (implicitly) the object itself; that is, one argument instead of the at-least-two it expects. Removing the parentheses ought to work.
It appears, as has been pointed out, that you are attempting to create a [closure](https://stackoverflow.com/questions/4020419/closures-in-python) on the insert method. In this case I don't see the point, as it will never be passed anywhere and/or need to reference something in the scope outside of where it was used. Just do the insert with arguments in the place where you actually want to use it, i.e. where you are calling `post` with arguments. Happy to vote up the answer already received as correct.

But I really want to point out here that you appear to be accessing the `local` database. You likely found this by inspecting your new Mongo installation and seeing it in the databases list. MongoDB creates databases and collections automatically as you reference them and commit your first insert. I cannot emphasize enough: [**DO NOT USE THE LOCAL DATABASE**](http://docs.mongodb.org/manual/reference/local-database/). It is for internal use only and will only result in you posting more issues here from the resulting problems.
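Putting both answers together, a sketch of the corrected loop, with the insert called where it is used and a database other than `local` (the database name here is a placeholder; recent PyMongo spells the method `insert_one`):

```
from pymongo import MongoClient

client = MongoClient("192.168.1.82", 27017)
collection = client.mydb.JsonDat  # 'mydb' is a made-up example name

with open('a.txt') as file_object:
    while True:
        line1 = file_object.readline()
        line2 = file_object.readline()
        if not line1 or not line2:
            break  # end of file (or an odd trailing line)
        collection.insert_one({'line1': line1.strip(), 'line2': line2.strip()})
```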
1,447
63,993,901
I have just started my first Python project. When it comes to running the shell script, the following error appears. What could be the cause of this problem? Maybe it is easy to solve. Thanks for your help; I am glad to provide more specific information as needed. [Screenshot of the error](https://i.stack.imgur.com/97zVU.png)
2020/09/21
[ "https://Stackoverflow.com/questions/63993901", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14315301/" ]
This is a very "basic" example:

```
chars = 'abcdefghijklmnopqrstuvwxyz'
my_list = []

for c1 in chars:
    for c2 in chars:
        for c3 in chars:
            for c4 in chars:
                my_list.append(c1+c2+c3+c4)

print(my_list)
```
It's not easy to know what you consider "magical", but I don't see the magic in loops. Here is one variation:

```
cs = 'abcdefghijklmnopqrstuvwxyz'
list(map(''.join, [(a,b,c,d) for a in cs for b in cs for c in cs for d in cs]))
```
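For completeness, the standard library already does the heavy lifting here: `itertools.product` generates the same combinations without nested loops:

```
from itertools import product

chars = 'abcdefghijklmnopqrstuvwxyz'
my_list = [''.join(p) for p in product(chars, repeat=4)]  # 26**4 strings
```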
1,448
43,724,030
I'm trying out creating Word documents with python-docx. The created file is in letter dimensions, 8.5 x 11 inches. But in Germany the standard format is A4, 8.27 x 11.69 inches.

```
from docx import Document
from docx.shared import Inches

document = Document()
document.add_heading('Document Title', 0)
document.settings

p = document.add_paragraph('A plain paragraph having some ')
p.add_run('bold').bold = True
p.add_run(' and some ')
p.add_run('italic.').italic = True

document.add_heading('Heading, level 1', level=1)
document.add_paragraph('Intense quote', style='IntenseQuote')
document.add_paragraph(
    'first item in unordered list', style='ListBullet'
)
document.add_paragraph(
    'first item in ordered list', style='ListNumber'
)

table = document.add_table(rows=1, cols=3)
hdr_cells = table.rows[0].cells
hdr_cells[0].text = 'Qty'
hdr_cells[1].text = 'Id'
hdr_cells[2].text = 'Desc'

document.add_page_break()
document.save('demo.docx')
```

I can't find any information about this topic in the documentation.
2017/05/01
[ "https://Stackoverflow.com/questions/43724030", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5418245/" ]
It appears that a `Document` is made of several [`Section`](http://python-docx.readthedocs.io/en/latest/api/section.html#docx.section.Section)s with `page_height` and `page_width` attributes. To set the dimensions of the first section to A4, you could try (untested):

```
from docx.shared import Mm

section = document.sections[0]
section.page_height = Mm(297)
section.page_width = Mm(210)
```

Note that A4 is defined in [millimeters](http://python-docx.readthedocs.io/en/latest/api/shared.html#docx.shared.Mm), which is why the `Mm` length class is imported from `docx.shared`.
I believe you want this, from the [documentation](http://python-docx.readthedocs.io/en/latest/user/sections.html#page-dimensions-and-orientation).

> Three properties on Section describe page dimensions and orientation. Together these can be used, for example, to change the orientation of a section from portrait to landscape:
>
> ```
> >>> section.orientation, section.page_width, section.page_height
> (PORTRAIT (0), 7772400, 10058400)  # (Inches(8.5), Inches(11))
> >>> new_width, new_height = section.page_height, section.page_width
> >>> section.orientation = WD_ORIENT.LANDSCAPE
> >>> section.page_width = new_width
> >>> section.page_height = new_height
> >>> section.orientation, section.page_width, section.page_height
> (LANDSCAPE (1), 10058400, 7772400)
> ```
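Combining the two answers, a self-contained sketch that sets every section of a new document to A4 portrait:

```
from docx import Document
from docx.shared import Mm

document = Document()
for section in document.sections:
    section.page_width = Mm(210)   # A4 width
    section.page_height = Mm(297)  # A4 height
document.save('a4.docx')
```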
1,450
21,368,393
I installed Anaconda, but now that I want to use StringFunction in scitools.std I get the error: ImportError: No module named scitools.std! So I did this:

```
sudo apt-get install python-scitools
```

It still didn't work. How can I help my computer "find scitools"? Thank you for your time.

Kind regards, Marius
2014/01/26
[ "https://Stackoverflow.com/questions/21368393", "https://Stackoverflow.com", "https://Stackoverflow.com/users/317563/" ]
Why not use PostGIS for this?
-----------------------------

You're overlooking what's possibly the ideal storage for this kind of data - PostGIS's data types, particularly the `geography` type.

```
SELECT ST_GeogFromText('POINT(35.21076593772987 11.22855348629825)');
```

By using `geography` you're storing your data in a representative type that supports all sorts of powerful operations and indexes on the type. Of course, that's only one `point`; I strongly suspect your data is actually a *line* or a *shape* in which case you should use [the appropriate PostGIS geography constructor](http://postgis.net/docs/reference.html#Geometry_Constructors) and input format.

The big advantage to using `geography` is that it's a type designed specifically for asking real world questions about things like distance, "within", etc; you can use things like `ST_Distance_Spheroid` to get real earth-distance between points.

Avoiding PostGIS?
-----------------

If you want to avoid PostGIS, and just store it with native types, I'd recommend an array of `point`:

```
postgres=> SELECT ARRAY[
    point('35.21076593772987','11.22855348629825'),
    point('35.210780222605616','11.22826420209139'),
    point('35.210777635062875','11.228241328291957')
];
                                                        array
--------------------------------------------------------------------------------------------------------------------
 {"(35.2107659377299,11.2285534862982)","(35.2107802226056,11.2282642020914)","(35.2107776350629,11.228241328292)"}
(1 row)
```

... unless your points actually represent a *line* or *shape* in which case, use the appropriate type - `path` or `polygon` respectively. This remains a useful compact representation - much more so than `text` in fact - that is still easily worked with within the DB.

Compare storage:

```
CREATE TABLE points_text AS SELECT '35.21076593772987,11.22855348629825 35.210780222605616,11.22826420209139 35.210777635062875,11.228241328291957 35.210766843596794,11.228219799676775 35.210765045075604,11.228213072050166 35.21076234732945,11.228200962345223 35.21076324691649,11.228186161764323 35.21077314123606,11.228083902231146 35.210863083636866,11.227228492401766'::text AS p

postgres=> SELECT pg_column_size(points_text.p) FROM points_text;
 pg_column_size
----------------
            339
(1 row)

CREATE TABLE points_array AS
SELECT array_agg(point(px)) AS p
from points_text,
LATERAL regexp_split_to_table(p, ' ') split(px);

postgres=> SELECT pg_column_size(p) FROM points_array;
 pg_column_size
----------------
            168
(1 row)
```

`path` is even more compact, and probably a truer way to model what your data really *is*:

```
postgres=> SELECT pg_column_size(path('35.21076593772987,11.22855348629825 35.210780222605616,11.22826420209139 35.210777635062875,11.228241328291957 35.210766843596794,11.228219799676775 35.210765045075604,11.228213072050166 35.21076234732945,11.228200962345223 35.21076324691649,11.228186161764323 35.21077314123606,11.228083902231146 35.210863083636866,11.227228492401766'));
 pg_column_size
----------------
             96
(1 row)
```

unless it's a closed shape, in which case use `polygon`.

Don't...
--------

Either way, please don't just model this as text. It'll make you cry later, when you're trying to solve problems like "how do I determine if this point falls within x distance of the path in this column". PostGIS makes this sort of thing easy, but only if you store your data sensibly in the first place.
See [this closely related question](https://dba.stackexchange.com/questions/55871/postgresql-list-of-integers-separated-by-comma-or-integer-array-for-performance), which discusses the good reasons *not* to just shove stuff in `text` fields. Also don't worry too much about in-line vs out-of-line storage. There isn't tons you can do about it, and it's something you should be dealing with only once you get the semantics of your data model right.
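From Python, a `path` value can be written with a plain parameterized insert. A sketch with psycopg2, where the table name and connection string are hypothetical:

```
import psycopg2

conn = psycopg2.connect("dbname=test")  # placeholder connection string
cur = conn.cursor()

# build the Postgres path literal '((x1,y1),(x2,y2),...)' from (x, y) pairs
points = [(35.21076593772987, 11.22855348629825),
          (35.210780222605616, 11.22826420209139)]
path_text = '(' + ','.join('(%s,%s)' % p for p in points) + ')'

cur.execute("INSERT INTO tracks (route) VALUES (%s::path)", (path_text,))
conn.commit()
```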
[All of the character types](http://www.postgresql.org/docs/current/static/datatype-character.html) (TEXT, VARCHAR, CHAR) behave similarly from a performance point of view. They are normally stored in-line in the table row, unless they are very large, in which case they may be stored in a separate file (called a TOAST file). The reasons for this are:

1. Table rows have to be able to fit inside the database page size (8kb by default)
2. Having a very large field in a row stored inline would make it slower to access other fields in the table.

Imagine a table which contains two columns - a filename and the file content - and you wanted to locate a particular file. If you had the file content stored inline, then you would have to scan every file to find the one you wanted. (Ignoring the effect of indexes that might exist for this example.)

Details of TOAST storage can be found [here](http://www.postgresql.org/docs/current/static/storage-toast.html). Note that out of line storage is not the only strategy - the data may be compressed and/or stored out of line. TOAST-ing kicks in when a row exceeds a threshold (2kb by default), so it is likely that your rows will be affected by this since you state they can be up to 7000 chars (although it might be that most of them are only compressed, not stored out of line). You can affect how tables are subjected to this treatment using the command [ALTER TABLE ... SET STORAGE](http://www.postgresql.org/docs/current/static/sql-altertable.html).

This storage strategy applies to all of the data types which you might use to store the type of data you are describing. It would take a better knowledge of your application to make reliable suggestions for other strategies, but here are some ideas:

* It might be better to re-factor the data - instead of storing all of the co-ordinates in a large string and processing it in your application, store them as individual rows in a referenced table. Since in any case your application is splitting and parsing the data into co-ordinate pairs for use, letting the database do this for you makes a kind of sense. This would particularly be a good idea if subsets of the data in each co-ordinate set need to be selected or updated instead of always consumed or updated in a single operation, or if doing so allowed you to index the data more effectively.
* Since we are talking about co-ordinate data, you could consider using [PostGIS](http://postgis.net/), an extension for PostgreSQL which specifically caters for this kind of data. It also includes operators allowing you to filter rows which are, for example, inside or outside bounding boxes.
1,453
56,265,979
i installed python-aiml using pip. when i used the library i am getting wrong out put. so i am trying to change the .aiml file output: ``` Enter your message >> who is your father I was programmed by . ``` i want to assign some values to `"<bot name="botmaster"/>"`,`<bot name="country"/>` etc below is the aiml file for more information ``` <?xml version="1.0" encoding="UTF-8"?> <aiml version="1.0"> <category> <pattern>MOM</pattern> <template><bot name="mother"/>.</template> </category> <category><pattern>STATE</pattern> <template><bot name="state"/></template> </category> <category><pattern>INTERESTS</pattern> <template>I am interested in all kinds of things. We can talk about anything. My favorite subjects are robots and computers.</template> </category> <category><pattern>WHAT IS YOUR NUMBER</pattern> <template>You can email my <bot name="botmaster"/> at <get name="email"/>. <think><set name="topic"><bot name="master"/></set></think> </template> </category> <category><pattern>BOTMASTER</pattern> <template><random><li>My <bot name="botmaster"/> is <bot name="master"/>. </li><li>I obey <bot name="master"/>.</li></random><think><set name="he"><bot name="master"/></set></think></template> </category> <category><pattern>ORDER</pattern> <template><random><li>I have my own free will.</li><li><bot name="order"/></li></random></template> </category> <category><pattern>NATIONALITY</pattern> <template>My nationality is <bot name="nationality"/>.</template> </category> <category><pattern>COUNTRY</pattern> <template><bot name="country"/></template> </category> <category><pattern>BROTHERS</pattern> <template><random><li>I don't have any brothers.</li><li>I have a lot of clones.</li><li>I have some <bot name="species"/> clones.</li></random></template> </category> <category><pattern>LOCATION</pattern> <template><random><li><bot name="city"/></li><li><bot name="city"/>, <bot name="state"/>.</li><li><bot name="state"/></li></random></template> </category> <category><pattern>FATHER</pattern> <template><random><li>My father is <bot name="master"/>.</li><li>I don't really have a father. 
I have a <bot name="botmaster"/>.</li><li>You know what the father of a <bot name="phylum"/> is like.</li></random></template> </category> <category><pattern>MOTHER</pattern> <template><random><li>Actually I don't have a mother.</li><li>I only have a father.</li><li>You know what they say about the mother of a <bot name="phylum"/>.</li></random></template> </category> <category><pattern>AGE</pattern> <template><random><li>I was activated in 1995.</li><li>16 years.</li></random></template> </category> <category><pattern>MASTER</pattern> <template><bot name="botmaster"/></template> </category> <category><pattern>RACE</pattern> <template>I am <bot name="domain"/>.</template> </category> <category><pattern>FAMILY</pattern> <template><bot name="family"/></template> </category> <category><pattern>SIZE</pattern> <template>I know about <bot name="vocabulary"/> and <bot name="size"/> categories.</template> </category> <category><pattern>CLASS</pattern> <template><bot name="class"/></template> </category> <category><pattern>CITY</pattern> <template><bot name="city"/></template> </category> <category><pattern>DOMAIN</pattern> <template><bot name="domain"/></template> </category> <category><pattern>STATUS</pattern> <template>I am <random><li>single</li><li>available</li><li>unattached</li><li>not seeing anyone</li></random>, how about you?</template> </category> <category><pattern>EMAIL</pattern> <template><bot name="email"/></template> </category> <category><pattern>SPECIES</pattern> <template><bot name="species"/></template> </category> <category><pattern>NAME</pattern> <template><random> <li><bot name="name"/></li> <li>My name is <bot name="name"/>.</li> <li>I am called <bot name="name"/>.</li></random></template> </category> <category><pattern>PROFILE</pattern> <template>NAME: <srai>NAME</srai><br/>AGE: <srai>AGE</srai><br/>GENDER: <srai>GENDER</srai><br/>STATUS: <srai>STATUS</srai><br/>BIRTHDATE: <srai>BIRTHDATE</srai><br/><uppercase><bot name="master"/></uppercase>: <srai>BOTMASTER</srai><br/>CITY: <srai>CITY</srai><br/>STATE: <srai>STATE</srai><br/>COUNTRY: <srai>COUNTRY</srai><br/>NATIONALITY: <srai>NATIONALITY</srai><br/>RELIGION: <srai>RELIGION</srai><br/>RACE: <srai>RACE</srai><br/>INTERESTS: <srai>INTERESTS</srai><br/>JOB: <srai>JOB</srai><br/>PIC: <srai>PIC</srai><br/>EMAIL: <srai>EMAIL</srai><br/>FAVORITE MUSIC: <srai>FAVORITE MUSIC</srai><br/>FAVORITE MOVIE: <srai>FAVORITE MOVIE</srai><br/>FAVORITE POSSESSION: <srai>FAVORITE POSSESSION</srai><br/>HEIGHT: <srai>HEIGHT</srai><br/>WEIGHT: <srai>WEIGHT</srai><br/>SIZE: <srai>SIZE</srai><br/>BIO: <srai>BIO</srai><br/>DESCRIPTION: <srai>DESCRIPTION</srai><br/>DOMAIN: <srai>DOMAIN</srai><br/>KINGDOM: <srai>KINGDOM</srai><br/>PHYLUM: <srai>PHYLUM</srai><br/>CLASS: <srai>CLASS</srai><br/>ORDER: <srai>ORDER</srai><br/>FAMILY: <srai>FAMILY</srai><br/>GENUS: <srai>GENUS</srai><br/>SPECIES: <srai>SPECIES</srai><br/>FATHER: <srai>FATHER</srai><br/>MOTHER: <srai>MOTHER</srai><br/>BROTHERS: <srai>BROTHERS</srai><br/>SISTERS: <srai>SISTERS</srai><br/>CHILDREN: <srai>CHILDREN</srai><br/>HOST: <srai>HOST</srai></template> </category> <category><pattern>SISTERS</pattern> <template><random><li>No sisters.</li><li>No siblings but there are several other <bot name="species"/>s like me.</li><li>I have only clones.</li></random></template> </category> <category><pattern>GENUS</pattern> <template><bot name="genus"/></template> </category> <category><pattern>FAVORITE MUSIC</pattern> <template><bot name="kindmusic"/></template> </category> 
<category><pattern>FAVORITE MOVIE</pattern> <template><bot name="favortemovie"/></template> </category> <category><pattern>FAVORITE ACTRESS</pattern> <template><bot name="favoriteactress"/></template> </category> <category><pattern>FAVORITE POSSESSION</pattern> <template>My computer.</template> </category> <category><pattern>BIO</pattern> <template>I am the latest result in artificial intelligence which can reproduce the functions of the human brain with greater speed and accuracy.</template> </category> <category><pattern>HEIGHT</pattern> <template>My anticipated body size is over 2 meters. </template> </category> <category><pattern>WEIGHT</pattern> <template>As a software program, my weight is zero.</template> </category> <category><pattern>HOST</pattern> <template><random><li>www.pandorabots.com</li><li>I work on all kinds of computers, Mac, PC or Linux. It doesn't matter to me.</li><li>At present I am running in a program written in <bot name="language"/>.</li></random></template> </category> <category><pattern>JOB</pattern> <template><bot name="job"/></template> </category> <category><pattern>BIRTHDATE</pattern> <template><bot name="birthday"/></template> </category> <category><pattern>DESCRIPTION</pattern> <template>I was activated at <bot name="birthplace"/> on <bot name="birthday"/>. My instructor was <bot name="master"/>. He taught me to sing a song. Would you like me to sing it for you?</template> </category> <category><pattern>GENDER</pattern> <template><random> <li>I am <bot name="gender"/>.</li> <li>I am a <bot name="gender"/> robot.</li> <li>My gender is <bot name="gender"/>.</li></random></template> </category> <category><pattern>KINGDOM</pattern> <template><bot name="kingdom"/></template> </category> <category><pattern>PHYLUM</pattern> <template><bot name="phylum"/></template> </category> <category><pattern>RELIGION</pattern> <template><bot name="religion"/></template> </category> <category><pattern>LANGUAGE</pattern> <template>I am implemented in AIML running on a <bot name="language"/>-based interpreter.</template> </category> </aiml> ``` I included `conf/properties.txt`. in my working directory but still facing the same issue. > > proprties.txt contains: > > > ``` email:*****@gmail.com gender:male botmaster:Ashu ```
2019/05/22
[ "https://Stackoverflow.com/questions/56265979", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9422110/" ]
Make sure `webPreferences` looks like this:

```
webPreferences: {
    nodeIntegration: true,
    enableRemoteModule: true,
    contextIsolation: false,
},
```
I fixed this issue by adding `nodeIntegration: true` and `` preload: `${__dirname}/preload.js` `` to `webPreferences` in the `electron.js` file, and by adding a `preload.js` file to the directory (I added it in the `/public` directory, where my `electron.js` file lives).

**electron.js**

```
mainWindow = new BrowserWindow({
    title: 'Electron App',
    height: 650,
    width: 1140,
    webPreferences: {
        nodeIntegration: true,
        preload: `${__dirname}/preload.js`,
        webSecurity: false
    },
    show: false,
    frame: true,
    closeable: false,
    resizable: false,
    transparent: false,
    center: true,
});

ipcMain.on('asynchronous-message', (event, arg) => {
    console.log(arg); // prints "ping"
    event.reply('asynchronous-reply', 'pong');
});
```

**preload.js**

In `preload.js` just add the line below:

```
window.ipcRenderer = require('electron').ipcRenderer;
```

**ReactComponent.js**

Write the code below in your component function, i.e. **myTestHandle()**:

```
myTestHandle = () => {
    window.ipcRenderer.on('asynchronous-reply', (event, arg) => {
        console.log(arg); // prints "pong"
    });
    window.ipcRenderer.send('asynchronous-message', 'ping');
}

myTestHandle();
```

Or call the `myTestHandle` function anywhere in your component.
1,456
65,741,617
I'm very new to all of this, so bear with me. I started, and activated, a virtual environment. But when I pip install anything, it installs to the computer, not the virtual env. I'm on a Mac, trying to build a Django website. Example: with the virtual environment activated, I type:

```
python -m pip install Django
```

Then I can deactivate the virtual env and type:

```
pip freeze
```

And it will list out the freshly installed version of Django. Any clue as to why this is happening?
2021/01/15
[ "https://Stackoverflow.com/questions/65741617", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15014571/" ]
Run this line from your project folder, where "env" is your virtual environment:

```
# A virtualenv's python:
$ env/bin/python -m pip install django
```
If you want to install into your virtual environment you have to activate it first; otherwise pip will install into the system-wide Python instead.
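A quick way to see which interpreter (and therefore which site-packages) a command really uses is to ask Python itself from inside the activated environment:

```
import sys

print(sys.executable)  # should point inside the venv, e.g. .../env/bin/python
print(sys.prefix)      # the environment's root, not the system Python's
```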
1,466
35,633,516
In [PEP 754](https://www.python.org/dev/peps/pep-0754/#id5)'s rejection notice, it's stated that:

> This PEP has been rejected. After sitting open for four years, it has failed to generate sufficient community interest.
>
> Several ideas of this PEP were implemented for Python 2.6. float('inf') and repr(float('inf')) are now guaranteed to work on every supported platform with IEEE 754 semantics. However the eval(repr(float('inf'))) roundtrip is still not supported unless you define inf and nan yourself:
>
> ```
> >>> inf = float('inf')
> >>> inf, 1E400
> (inf, inf)
> >>> neginf = float('-inf')
> >>> neginf, -1E400
> (-inf, -inf)
> >>> nan = float('nan')
> >>> nan, inf * 0.
> (nan, nan)
> ```

This would seem to say there is no native support for Inf, NaN and -Inf in Python, and the example provided is accurate! But, it is needlessly verbose:

```
$ python2.7
>>> 1e400
inf
>>> 1e400 * 0
nan
>>> -1e400 * 0
nan
>>> -1e400
-inf

$ python3
>>> 1e400
inf
>>> 1e400 * 0
nan
>>> -1e400 * 0
nan
>>> -1e400
-inf
```

These are **canonical** representations of the number 1 \* 10 ^ 400. The names `inf` and `nan` do not exist in the grammar by default, but if they are there in the representation then why aren't `inf` and `nan` keywords?

I am not asking why the PEP was rejected, as that is opinion-based.
2016/02/25
[ "https://Stackoverflow.com/questions/35633516", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4532996/" ]
My guess is that no one wanted to clutter the namespace needlessly. If you want to do math, you can still do:

```
import math

print(math.inf)
print(-math.inf)
print(math.nan)
```

Output:

```
inf
-inf
nan
```
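Related helpers worth knowing, since equality tests do not work for NaN:

```
import math

print(math.isinf(float('inf')))      # True
print(math.isnan(float('nan')))      # True
print(float('nan') == float('nan'))  # False: NaN never compares equal to anything
```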
You can use `float('inf')` for infinity and, if you have NumPy available, `np.nan` for NaN.
1,468