Blender 2.6 JSON exporter: texture wrong on only one side of cube I am making a simple exporter for Blender 2.6x for a custom JSON format (mainly for use with WebGL), because the existing ones I could find online do not work with Blender 2.6. I have almost got it working, but one bug remains that I cannot figure out: on a simple cube, the texture on one of its sides is in the wrong orientation. The rest of the cube is textured properly. You can see a picture of the problem here (the left face on the left side is in the wrong orientation, as compared to the correct cube on the right side): <img src="http://i.imgur.com/mJCUO.png" alt="the bug"> Are there some common misconceptions or errors that could cause this behaviour? This is the function that exports from Blender 2.65 to my custom JSON format (the bug must be in here, but I cannot find it): ````def get_json(objects, scene):
    """
    Currently only supports one scene.
    Exports with -Z forward, Y up.
    """
    object_number = -1
    scene_data = []
    # for every object in the scene
    for object in bpy.context.scene.objects:
        # if the object is a mesh
        if object.type == 'MESH':
            object_number += 1
            # convert all the mesh's faces to triangles
            bpy.ops.object.mode_set(mode='OBJECT')
            object.select = True
            bpy.context.scene.objects.active = object
            # triangulate using new Blender 2.65 Triangulate modifier
            bpy.ops.object.modifier_add(type='TRIANGULATE')
            object.modifiers["Triangulate"].use_beauty = False
            bpy.ops.object.modifier_apply(apply_as="DATA", modifier="Triangulate")
            bpy.ops.object.mode_set(mode='OBJECT')
            object.select = False
            # add data to scene_data structure
            scene_data.append({
                "name": object.name,
                "vertices": [],
                "indices": [],
                "normals": [],
                "tex_coords": []
            })
            vertex_number = -1
            # for each face in the object
            for face in object.data.polygons:
                vertices_in_face = face.vertices[:]
                # for each vertex in the face
                for vertex in vertices_in_face:
                    vertex_number += 1
                    # store vertices in scene_data structure
                    scene_data[object_number]["vertices"].append(
object data vertices[vertex] co x object location x ) scene_data[object_number]["vertices"] append( object data vertices[vertex] co z object location z ) scene_data[object_number]["vertices"] append( -(object data vertices[vertex] co y object location y) ) # store normals in scene_data structure scene_data[object_number]["normals"] append( object data vertices[vertex] normal x ) scene_data[object_number]["normals"] append( object data vertices[vertex] normal z ) scene_data[object_number]["normals"] append( -(object data vertices[vertex] normal y) ) # store indices in scene_data structure scene_data[object_number]["indices"] append(vertex_number) # texture coordinates # bug: for a simple cube one face's texture is warped mesh = object to_mesh(bpy context scene True 'PREVIEW') if len(mesh tessface_uv_textures) > 0: for data in mesh tessface_uv_textures active data: scene_data[object_number]["tex_coords"] append( data uv1 x ) scene_data[object_number]["tex_coords"] append( data uv1 y ) scene_data[object_number]["tex_coords"] append( data uv2 x ) scene_data[object_number]["tex_coords"] append( data uv2 y ) scene_data[object_number]["tex_coords"] append( data uv3 x ) scene_data[object_number]["tex_coords"] append( data uv3 y ) return json dumps(scene_data indent=4) ```` And in case this would help figure it out here is the exported JSON data that results from running my export script (the same data used to render the cube on the left in the image above): ````[ { "vertices": [ -1 0203653573989868 1 0320179611444473 0 669445663690567 -1 0203653573989868 1 0320179611444473 -1 330554336309433 -1 0203653573989868 -0 9679820388555527 0 669445663690567 -1 0203653573989868 1 0320179611444473 -1 330554336309433 0 9796346426010132 1 0320179611444473 -1 330554336309433 -1 0203653573989868 -0 9679820388555527 -1 330554336309433 0 9796346426010132 1 0320179611444473 -1 330554336309433 0 9796346426010132 1 0320179611444473 0 669445663690567 0 9796346426010132 -0 9679820388555527 -1 
330554336309433 0 9796346426010132 1 0320179611444473 0 669445663690567 -1 0203653573989868 1 0320179611444473 0 669445663690567 0 9796346426010132 -0 9679820388555527 0 669445663690567 -1 0203653573989868 -0 9679820388555527 0 669445663690567 -1 0203653573989868 -0 9679820388555527 -1 330554336309433 0 9796346426010132 -0 9679820388555527 0 669445663690567 0 9796346426010132 1 0320179611444473 0 669445663690567 0 9796346426010132 1 0320179611444473 -1 330554336309433 -1 0203653573989868 1 0320179611444473 0 669445663690567 -1 0203653573989868 1 0320179611444473 -1 330554336309433 -1 0203653573989868 -0 9679820388555527 -1 330554336309433 -1 0203653573989868 -0 9679820388555527 0 669445663690567 0 9796346426010132 1 0320179611444473 -1 330554336309433 0 9796346426010132 -0 9679820388555527 -1 330554336309433 -1 0203653573989868 -0 9679820388555527 -1 330554336309433 0 9796346426010132 1 0320179611444473 0 669445663690567 0 9796346426010132 -0 9679820388555527 0 669445663690567 0 9796346426010132 -0 9679820388555527 -1 330554336309433 -1 0203653573989868 1 0320179611444473 0 669445663690567 -1 0203653573989868 -0 9679820388555527 0 669445663690567 0 9796346426010132 -0 9679820388555527 0 669445663690567 -1 0203653573989868 -0 9679820388555527 -1 330554336309433 0 9796346426010132 -0 9679820388555527 -1 330554336309433 0 9796346426010132 -0 9679820388555527 0 669445663690567 0 9796346426010132 1 0320179611444473 -1 330554336309433 -1 0203653573989868 1 0320179611444473 -1 330554336309433 -1 0203653573989868 1 0320179611444473 0 669445663690567 ] "normals": [ -0 5773491859436035 0 5773491859436035 0 5773491859436035 -0 5773491859436035 0 5773491859436035 -0 5773491859436035 -0 5773491859436035 -0 5773491859436035 0 5773491859436035 -0 5773491859436035 0 5773491859436035 -0 5773491859436035 0 5773491859436035 0 5773491859436035 -0 5773491859436035 -0 5773491859436035 -0 5773491859436035 -0 5773491859436035 0 5773491859436035 0 5773491859436035 -0 5773491859436035 0 
5773491859436035 0 5773491859436035 0 5773491859436035 0 5773491859436035 -0 5773491859436035 -0 5773491859436035 0 5773491859436035 0 5773491859436035 0 5773491859436035 -0 5773491859436035 0 5773491859436035 0 5773491859436035 0 5773491859436035 -0 5773491859436035 0 5773491859436035 -0 5773491859436035 -0 5773491859436035 0 5773491859436035 -0 5773491859436035 -0 5773491859436035 -0 5773491859436035 0 5773491859436035 -0 5773491859436035 0 5773491859436035 0 5773491859436035 0 5773491859436035 0 5773491859436035 0 5773491859436035 0 5773491859436035 -0 5773491859436035 -0 5773491859436035 0 5773491859436035 0 5773491859436035 -0 5773491859436035 0 5773491859436035 -0 5773491859436035 -0 5773491859436035 -0 5773491859436035 -0 5773491859436035 -0 5773491859436035 -0 5773491859436035 0 5773491859436035 0 5773491859436035 0 5773491859436035 -0 5773491859436035 0 5773491859436035 -0 5773491859436035 -0 5773491859436035 -0 5773491859436035 -0 5773491859436035 -0 5773491859436035 0 5773491859436035 0 5773491859436035 0 5773491859436035 0 5773491859436035 -0 5773491859436035 0 5773491859436035 0 5773491859436035 -0 5773491859436035 -0 5773491859436035 -0 5773491859436035 0 5773491859436035 0 5773491859436035 -0 5773491859436035 -0 5773491859436035 0 5773491859436035 0 5773491859436035 -0 5773491859436035 0 5773491859436035 -0 5773491859436035 -0 5773491859436035 -0 5773491859436035 0 5773491859436035 -0 5773491859436035 -0 5773491859436035 0 5773491859436035 -0 5773491859436035 0 5773491859436035 0 5773491859436035 0 5773491859436035 -0 5773491859436035 -0 5773491859436035 0 5773491859436035 -0 5773491859436035 -0 5773491859436035 0 5773491859436035 0 5773491859436035 ] "indices": [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 ] "name": "Cube" "tex_coords": [ 0 008884529583156109 0 6587533354759216 0 3244488537311554 0 3412468135356903 0 32541996240615845 0 657782256603241 0 008884510956704617 0 32541996240615845 0 
007913422770798206 0 008884549140930176 0 32541996240615845 0 3244488537311554 0 9920865893363953 0 32444891333580017 0 675551176071167 0 32541996240615845 0 9911155700683594 0 00791349820792675 0 3412467837333679 0 008884538896381855 0 6577821969985962 0 007913422770798206 0 34221789240837097 0 32541996240615845 0 6587532758712769 0 6577821969985962 0 3422178626060486 0 6587533354759216 0 6577821373939514 0 3412468135356903 0 6745801568031311 0 34221789240837097 0 9911155700683594 0 3412468135356903 0 6755512356758118 0 6587533354759216 0 007913460955023766 0 34221789240837097 0 3244488537311554 0 3412468135356903 0 008884529583156109 0 6587533354759216 0 007913422770798206 0 008884549140930176 0 324448823928833 0 007913422770798206 0 32541996240615845 0 3244488537311554 0 675551176071167 0 32541996240615845 0 6745801568031311 0 008884529583156109 0 9911155700683594 0 00791349820792675 0 6577821969985962 0 007913422770798206 0 6587533354759216 0 3244488835334778 0 34221789240837097 0 32541996240615845 0 3422178626060486 0 6587533354759216 0 3412467837333679 0 34221789240837097 0 6577821373939514 0 3412468135356903 0 9911155700683594 0 3412468135356903 0 99208664894104 0 6577821969985962 0 6755512356758118 0 6587533354759216 ] } ] ```` I am not currently looking for ways to make a better more feature filled or more efficient exporter rather I would just like to finally squash this bug so I can get on to more interesting things like making a WebGL game and learning about collision detection and such Any help or advise in regards to this problem I am having would be greatly appreciated! 
<strong>Edit:</strong> In case it might be my rendering code and not the exporter that is the problem here is the part of my WebGL code related to initializing the buffers and drawing the scene (a modified version of some code found at <a href="http://learningwebgl com" rel="nofollow">http://learningwebgl com</a>): ````var gl; var current_shader_program; var per_vertex_shader_program; var per_fragment_shader_program; var modelview_matrix = mat4 create(); var modelview_matrix_stack = []; var projection_matrix = mat4 create(); var teapot_vertex_position_buffer = new Array(); var teapot_vertex_tex_coord_buffer = new Array(); var teapot_vertex_normal_buffer = new Array(); var teapot_vertex_index_buffer = new Array(); var earth_texture; var galvanized_texture; var teapot_angle = 180; var last_time = 0; function createProgram(vertex_shader_filename fragment_shader_filename) { var vertex_shader_text = readFromUrl(vertex_shader_filename); var fragment_shader_text = readFromUrl(fragment_shader_filename); var vertex_shader = gl createShader(gl VERTEX_SHADER); var fragment_shader = gl createShader(gl FRAGMENT_SHADER); gl shaderSource(vertex_shader vertex_shader_text); gl shaderSource(fragment_shader fragment_shader_text); gl compileShader(vertex_shader); if (!gl getShaderParameter(vertex_shader gl COMPILE_STATUS)) { alert(gl getShaderInfoLog(vertex_shader)); } gl compileShader(fragment_shader); if (!gl getShaderParameter(fragment_shader gl COMPILE_STATUS)) { alert(gl getShaderInfoLog(fragment_shader)); } var shader_program = gl createProgram(); gl attachShader(shader_program vertex_shader); gl attachShader(shader_program fragment_shader); gl linkProgram(shader_program); if (!gl getProgramParameter(shader_program gl LINK_STATUS)) { alert("Error: Unable to link shaders!"); } shader_program vertex_position_attribute = gl getAttribLocation(shader_program "a_vertex_position"); gl enableVertexAttribArray(shader_program vertex_position_attribute); shader_program 
vertex_normal_attribute = gl getAttribLocation(shader_program "a_vertex_normal"); gl enableVertexAttribArray(shader_program vertex_normal_attribute); shader_program tex_coord_attribute = gl getAttribLocation(shader_program "a_tex_coord"); gl enableVertexAttribArray(shader_program tex_coord_attribute); shader_program projection_matrix_uniform = gl getUniformLocation(shader_program "u_projection_matrix"); shader_program modelview_matrix_uniform = gl getUniformLocation(shader_program "u_modelview_matrix"); shader_program normal_matrix_uniform = gl getUniformLocation(shader_program "u_normal_matrix"); shader_program sampler_uniform = gl getUniformLocation(shader_program "u_sampler"); shader_program material_shininess_uniform = gl getUniformLocation(shader_program "u_material_shininess"); shader_program show_specular_highlights_uniform = gl getUniformLocation(shader_program "u_show_specular_highlights"); shader_program use_textures_uniform = gl getUniformLocation(shader_program "u_use_textures"); shader_program use_lighting_uniform = gl getUniformLocation(shader_program "u_use_lighting"); shader_program ambient_color_uniform = gl getUniformLocation(shader_program "u_ambient_color"); shader_program point_lighting_location_uniform = gl getUniformLocation(shader_program "u_point_lighting_location"); shader_program point_lighting_specular_color_uniform = gl getUniformLocation(shader_program "u_point_lighting_specular_color"); shader_program point_lighting_diffuse_color_uniform = gl getUniformLocation(shader_program "u_point_lighting_diffuse_color"); return shader_program; } function initShaders() { per_fragment_shader_program = createProgram("per_fragment_lighting vs" "per_fragment_lighting fs"); } function handleLoadedTexture(texture) { gl pixelStorei(gl UNPACK_FLIP_Y_WEBGL true); gl bindTexture(gl TEXTURE_2D texture); gl texImage2D(gl TEXTURE_2D 0 gl RGBA gl RGBA gl UNSIGNED_BYTE texture image); gl texParameteri(gl TEXTURE_2D gl TEXTURE_MAG_FILTER gl LINEAR); gl 
texParameteri(gl TEXTURE_2D gl TEXTURE_MIN_FILTER gl LINEAR_MIPMAP_NEAREST); gl generateMipmap(gl TEXTURE_2D); gl bindTexture(gl TEXTURE_2D null); } function initTextures() { earth_texture = gl createTexture(); earth_texture image = new Image(); earth_texture image onload = function() { handleLoadedTexture(earth_texture); } earth_texture image src = "earth jpg"; galvanized_texture = gl createTexture(); galvanized_texture image = new Image(); galvanized_texture image onload = function() { handleLoadedTexture(galvanized_texture); }; galvanized_texture image src = "galvanized jpg"; } function setMatrixUniforms() { gl uniformMatrix4fv(current_shader_program projection_matrix_uniform false projection_matrix); gl uniformMatrix4fv(current_shader_program modelview_matrix_uniform false modelview_matrix); var normal_matrix = mat3 create(); mat4 toInverseMat3(modelview_matrix normal_matrix); mat3 transpose(normal_matrix); gl uniformMatrix3fv(current_shader_program normal_matrix_uniform false normal_matrix); } function handleLoadedTeapot(teapot_data) { for (var i = 0; i < teapot_data length; i++) { teapot_vertex_normal_buffer[i] = gl createBuffer(); gl bindBuffer(gl ARRAY_BUFFER teapot_vertex_normal_buffer[i]); gl bufferData(gl ARRAY_BUFFER new Float32Array(teapot_data[i] normals) gl STATIC_DRAW); teapot_vertex_normal_buffer[i] item_size = 3; teapot_vertex_normal_buffer[i] num_items = teapot_data[i] normals length / teapot_vertex_normal_buffer[i] item_size; teapot_vertex_tex_coord_buffer[i] = gl createBuffer(); gl bindBuffer(gl ARRAY_BUFFER teapot_vertex_tex_coord_buffer[i]); gl bufferData(gl ARRAY_BUFFER new Float32Array(teapot_data[i] tex_coords) gl STATIC_DRAW); teapot_vertex_tex_coord_buffer[i] item_size = 2; teapot_vertex_tex_coord_buffer[i] num_items = teapot_data[i] tex_coords length / teapot_vertex_tex_coord_buffer[i] item_size; teapot_vertex_position_buffer[i] = gl createBuffer(); gl bindBuffer(gl ARRAY_BUFFER teapot_vertex_position_buffer[i]); gl bufferData(gl 
ARRAY_BUFFER new Float32Array(teapot_data[i] vertices) gl STATIC_DRAW); teapot_vertex_position_buffer[i] item_size = 3; teapot_vertex_position_buffer[i] num_items = teapot_data[i] vertices length / teapot_vertex_position_buffer[i] item_size; teapot_vertex_index_buffer[i] = gl createBuffer(); gl bindBuffer(gl ELEMENT_ARRAY_BUFFER teapot_vertex_index_buffer[i]); gl bufferData(gl ELEMENT_ARRAY_BUFFER new Uint16Array(teapot_data[i] indices) gl STATIC_DRAW) teapot_vertex_index_buffer[i] item_size = 1; teapot_vertex_index_buffer[i] num_items = teapot_data[i] indices length / teapot_vertex_index_buffer[i] item_size; } document getElementById("loading_text") textContent = ""; } function loadTeapot() { var request = new XMLHttpRequest(); request open("GET" "untitled json"); request onreadystatechange = function() { if (request readyState == 4) { handleLoadedTeapot(JSON parse(request responseText)); } }; request send(); } function drawScene() { gl viewport(0 0 gl viewportWidth gl viewportHeight); gl clear(gl COLOR_BUFFER_BIT | gl DEPTH_BUFFER_BIT); if (teapot_vertex_position_buffer[0] == null || teapot_vertex_normal_buffer[0] == null || teapot_vertex_tex_coord_buffer[0] == null || teapot_vertex_index_buffer[0] == null) { return; } current_shader_program = per_fragment_shader_program; gl useProgram(current_shader_program); var specular_highlights = document getElementById("specular") checked; gl uniform1i(current_shader_program show_specular_highlights_uniform specular_highlights); var lighting = document getElementById("lighting") checked; gl uniform1i(current_shader_program use_lighting_uniform lighting); if (lighting) { gl uniform3f(current_shader_program ambient_color_uniform parseFloat(document getElementById("ambient_r") value) parseFloat(document getElementById("ambient_g") value) parseFloat(document getElementById("ambient_b") value)); gl uniform3f(current_shader_program point_lighting_location_uniform parseFloat(document getElementById("light_pos_x") value) 
parseFloat(document getElementById("light_pos_y") value) parseFloat(document getElementById("light_pos_z") value)); gl uniform3f(current_shader_program point_lighting_specular_color_uniform parseFloat(document getElementById("specular_r") value) parseFloat(document getElementById("specular_g") value) parseFloat(document getElementById("specular_b") value)); gl uniform3f(current_shader_program point_lighting_diffuse_color_uniform parseFloat(document getElementById("diffuse_r") value) parseFloat(document getElementById("diffuse_g") value) parseFloat(document getElementById("diffuse_b") value)); } var texture = document getElementById("texture") value; gl uniform1i(current_shader_program use_textures_uniform texture != "none"); mat4 identity(modelview_matrix); mat4 translate(modelview_matrix [0 0 -10]); mat4 rotate(modelview_matrix degToRad(23 4) [1 0 0]); mat4 rotate(modelview_matrix degToRad(teapot_angle) [0 1 0]); gl activeTexture(gl TEXTURE0); if (texture == "earth") { gl bindTexture(gl TEXTURE_2D earth_texture); } else if (texture == "galvanized") { gl bindTexture(gl TEXTURE_2D galvanized_texture); } gl uniform1i(current_shader_program sampler_uniform 0); gl uniform1f(current_shader_program material_shininess_uniform parseFloat(document getElementById("shininess") value)); for (var i = 0; i < teapot_vertex_position_buffer length; i++) { gl bindBuffer(gl ARRAY_BUFFER teapot_vertex_position_buffer[i]); gl vertexAttribPointer(current_shader_program vertex_position_attribute teapot_vertex_position_buffer[i] item_size gl FLOAT false 0 0); gl bindBuffer(gl ARRAY_BUFFER teapot_vertex_tex_coord_buffer[i]); gl vertexAttribPointer(current_shader_program tex_coord_attribute teapot_vertex_tex_coord_buffer[i] item_size gl FLOAT false 0 0); gl bindBuffer(gl ARRAY_BUFFER teapot_vertex_normal_buffer[i]); gl vertexAttribPointer(current_shader_program vertex_normal_attribute teapot_vertex_normal_buffer[i] item_size gl FLOAT false 0 0); gl bindBuffer(gl ELEMENT_ARRAY_BUFFER 
teapot_vertex_index_buffer[i]); setMatrixUniforms(); gl.drawElements(gl.TRIANGLES, teapot_vertex_index_buffer[i].num_items, gl.UNSIGNED_SHORT, 0); } } ```` I know it is a lot to look through, which is why I did not post it before, but there has been a suggestion that the rendering code might be at fault. Note that the "teapot" mentioned in the code is really the exported model (the cube I am trying to render). | As ejrowley commented, the indexing of your vertices and texture coordinates is not aligned. Also, using `to_mesh` <em>after</em> exporting the vertices can lead to a lot of craziness. You can figure out the correct indexing using your mesh's `tessfaces`. |
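The core of the answer is that positions, UVs, and indices must be emitted from the same per-face loop, so element i of every array describes the same corner. Here is a minimal, Blender-independent sketch of that idea; the `Face` class and the sample data are hypothetical stand-ins for Blender's tessfaces, not the real `bpy` API.

```python
# Emit positions and UVs from the SAME per-face loop so index i in
# "vertices" always pairs with index i in "tex_coords".

class Face:
    def __init__(self, vertex_indices, uvs):
        self.vertex_indices = vertex_indices  # indices into the mesh vertex list
        self.uvs = uvs                        # one (u, v) per corner, same order

def export_faces(positions, faces):
    out = {"vertices": [], "tex_coords": [], "indices": []}
    next_index = 0
    for face in faces:
        # walk the corners once, keeping position and UV in lockstep
        for vi, (u, v) in zip(face.vertex_indices, face.uvs):
            out["vertices"].extend(positions[vi])
            out["tex_coords"].extend((u, v))
            out["indices"].append(next_index)
            next_index += 1
    return out

positions = [(0, 0, 0), (1, 0, 0), (1, 1, 0)]
tri = Face([0, 1, 2], [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)])
data = export_faces(positions, [tri])
# every 3 position floats line up with 2 UV floats for the same corner
assert len(data["vertices"]) // 3 == len(data["tex_coords"]) // 2
```

The exporter in the question breaks this invariant because it loops over `polygons` for positions but over `tessface_uv_textures` (from a separate `to_mesh` call) for UVs, so the two arrays can disagree in corner order.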
How to get the current item's info from QtGui.QListWidget? Created a QtGui.QListWidget list widget: ````myListWidget = QtGui.QListWidget() ```` Populated this ListWidget with QListWidgetItem list items: ````for word in ['cat', 'dog', 'bird']:
    list_item = QtGui.QListWidgetItem(word, myListWidget)
```` Now connect a function on a list item's left click: ````def print_info():
    print myListWidget.currentItem().text()

myListWidget.currentItemChanged.connect(print_info)
```` As you can see from my code, all I am getting on a left click is a list item's label name. But aside from the label name, I would like to get the list item's index number (the order number as it is displayed in the ListWidget); in fact I would like to get as much info on the left-clicked list item as possible. I looked at dir(my_list_item), but I cannot find anything useful there (other than the already used my_list_item.text() method, which returns the item's label name). Thanks in advance! | Well, I have listed some of the things you can display about the current item; if you want more than this, then you should look through the PyQt documentation: <a href="http://pyqt.sourceforge.net/Docs/PyQt4/classes.html" rel="nofollow">link</a> ````def print_info():
    print myListWidget.currentItem().text()
    print myListWidget.row(myListWidget.currentItem())
    print myListWidget.currentItem().checkState()  # if it is a checkable item
    print myListWidget.currentItem().toolTip()
    print myListWidget.currentItem().whatsThis()

myListWidget.currentItemChanged.connect(print_info)
```` |
Unable to write tables with "flowable" cells to PDF using PyFPDF I have the following code. How can I make it more dynamic, such that: - I will not have to hardcode/specify the cell width - HTML cells are "flowable", meaning the cell length automatically adjusts depending on the size of the data (should apply "word wrap" as well) - the number of columns can vary - I can fetch data from a database table, convert it to HTML, and write the table to PDF I tried searching, and it looks like PyFPDF has support for that (<a href="http://python.6.n6.nabble.com/PyFPDF-1-54b-HTML-rendering-templates-and-web2py-integration-td2120415.html" rel="nofollow">http://python.6.n6.nabble.com/PyFPDF-1-54b-HTML-rendering-templates-and-web2py-integration-td2120415.html</a>), but it looks like I have to use web2py. Is there a way to do it without having to use a web framework? ````import fpdf as pyfpdf
from fpdf import FPDF, HTMLMixin

class MyFPDF(FPDF, HTMLMixin):
    pass

pdf = MyFPDF()
# First page
# pdf = pyfpdf.FPDF(format='letter')
pdf.add_page()
# set the font
pdf.set_font("Arial", size=10)
# define the html text
html = """<H1 align="center">Summary of Transactions</H1>
<h3>Date:</h3>
<h3>Branch:</h3>"""
html += """
<table border="1" align="center" width="100%">
<thead><tr><th width="20%">Date of Transaction</th><th width="20%">Name of Customer</th><th width="20%">Address</th><th width="20%">Contact Number</th><th width="20%">Status</th></tr></thead>
<tbody>
<tr><td>cell 1hgjhh jhjhjk jhjfsafsafsafsaf</td><td>cell 2</td><td>cell 3</td><td>cell 4</td><td>cell 5</td></tr>
<tr><td>cell 1</td><td>cell 2</td><td>cell 3</td><td>cell 4</td><td>cell 5</td></tr>
"""
html += '<tr><td>' + 'data' + '</td><td>' + 'data' + '</td><td>' + 'data' + '</td><td>' + 'data' + '</td><td>' + 'data' + '</td></tr>'
html += """</tbody></table>"""
# write the html text to PDF
pdf.write_html(html)
pdf.output('html.pdf', 'F')
```` What I am trying to achieve is similar to the listings PDF demo here: <a href="https://code.google.com/p/pyfpdf/wiki/Web2Py" rel="nofollow">https://code.google.com/p/pyfpdf/wiki/Web2Py</a> | I am not sure if it will help, but you might want to check out <a href="https://code.google.com/p/pyfpdf/source/browse/Templates.wiki?repo=wiki" rel="nofollow">PyFPDF Templates</a>. |
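For the "number of columns can vary" part, the HTML itself can be built from the query results instead of being hardcoded. Below is a minimal sketch, independent of any PDF library, of generating the table markup from rows; the headers and rows are made-up sample data standing in for whatever your database query returns.

```python
# Build an HTML table dynamically so the column count and cell contents
# come from the data rather than hardcoded strings.

def rows_to_html_table(headers, rows):
    parts = ['<table border="1" align="center" width="100%">']
    parts.append('<thead><tr>' + ''.join('<th>%s</th>' % h for h in headers) + '</tr></thead>')
    parts.append('<tbody>')
    for row in rows:
        parts.append('<tr>' + ''.join('<td>%s</td>' % cell for cell in row) + '</tr>')
    parts.append('</tbody></table>')
    return ''.join(parts)

headers = ['Date', 'Customer', 'Status']
rows = [('2013-01-01', 'Alice', 'Paid'),
        ('2013-01-02', 'Bob', 'Pending')]
html = rows_to_html_table(headers, rows)
```

The resulting string could then be passed to `write_html()`; whether the cells actually wrap and flow still depends on the HTML renderer you feed it to.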
Django - The current URL did not match any of these ````from django.conf.urls import include, url, patterns
from django.contrib import admin

#urlpatterns = [
#    url(r'^mypage/', include('mypage.urls')),
#    url(r'^admin/', admin.site.urls),
#]

urlpatterns = patterns('',
    url(r'^$', 'mypage.views.home', name='home'),
    url(r'^admin/', admin.site.urls),
)
```` The uncommented code is working fine, but as per the tutorials, the commented code should also work. Instead it shows an error: "The current URL did not match any of these". The code path is /ownblog/ownblog/urls.py. ````urlpatterns = patterns('',
    url(r'^$', 'views.home', name='home'),
)
```` The above code is in ownblog/mypage/urls.py. ````def home(request):
    return HttpResponse("Hello, world. You are at the polls index.")
```` The above code is in ownblog/mypage/views.py. What am I missing? | The error message when you visit `http://localhost:8000/` is expected, because you have not defined a URL pattern for / in your commented code. The tutorial tells you to go to `http://localhost:8000/polls/`; in your case, change that to `http://localhost:8000/mypage/`, because you use `mypage` instead of `polls`. The second error, `No module named views`, is because you have used the string `'views.home'` in your URL patterns instead of the callable `views.home`. Make sure you include the import as well: ````from . import views

urlpatterns = [
    url(r'^$', views.home, name='home'),
]
```` I notice that you are not following the <a href="https://docs.djangoproject.com/en/1.9/intro/tutorial01/" rel="nofollow">1.9 tutorial</a> very closely; for example, you are using `patterns()` and strings like `'mypage.views.home'`, which are both outdated since Django 1.8. I think you would find it useful to follow the tutorial exactly before you begin changing lots of stuff. |
py2app application is working, but Mac OS X shows it as not responding I have a Python script (no GUI) that spawns a child thread, while the parent thread prevents the app from finishing using this method: <pre class="lang-python prettyprint-override">`try:
    while True:
        time.sleep(1)
except (KeyboardInterrupt, SystemExit):
    pass
finally:
    cleanup()
```` When I create an application from this script with py2app and run it, it stays in the Dock and works as expected, but when I right-click it, it shows "Application Not Responding" (the same in Activity Monitor), and to finish it I can only select "Force Quit", which results in a crash report dialog afterwards. Why is it not responding, and if the reason is `sleep()`, how can I keep the app open without it? | It is showing as "not responding" because it is not responding. An application on OS X (as opposed to just a plain "Unix executable"/script, agent, or daemon) has to respond to messages from the operating system. Normally you do this by using a <a href="https://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/Multithreading/RunLoopManagement/RunLoopManagement.html" rel="nofollow">Cocoa run loop</a>. PyObjC offers some <a href="https://pythonhosted.org/pyobjc/api/module-PyObjCTools.AppHelper.html" rel="nofollow">high-level helpers</a> that make it even simpler, or it just lets you access the same Cocoa methods that the Apple docs describe from Python. Another option is to use a script wrapper that runs your script while maintaining a run loop (with or without a GUI) for you. Finally, do you actually need to be an application in the first place? |
xml.sax parser and line numbers etc. The task is to parse a simple XML document and analyze the contents by line number. The right Python package seems to be `xml.sax`. But how do I use it? After some digging in the documentation I found: - The `xmlreader.Locator` interface has the information: `getLineNumber()` - The `handler.ContentHandler` interface has `setDocumentLocator()` The first thought would be to create a `Locator`, pass this to the `ContentHandler`, and read the information off the Locator during calls to its `characters()` methods etc. BUT `xmlreader.Locator` is only a skeleton interface and can only return -1 from any of its methods. So as a poor user, WHAT am I to do, short of writing a whole `Parser` and `Locator` of my own?? I will answer my own question presently. <hr> (Well, I would have, except for the arbitrary, annoying rule that says I cannot.) <hr> I was unable to figure this out using the existing documentation (or by web searches) and was forced to read the source code for `xml.sax` (under /usr/lib/python2.7/xml/sax/ on my system). The `xml.sax` function `make_parser()` by default creates a real `Parser`, but what kind of thing is that? In the source code one finds that it is an `ExpatParser`, defined in expatreader.py, and it has its own `Locator`, an `ExpatLocator`. But there is no access to this thing. Much head-scratching came between this and a solution: - write your own `ContentHandler` which knows about a `Locator` and uses it to determine line numbers - create an `ExpatParser` with `xml.sax.make_parser()` - create an `ExpatLocator`, passing it the `ExpatParser` instance - make the `ContentHandler`, giving it this `ExpatLocator` - pass the `ContentHandler` to the parser's `setContentHandler()` - call `parse()` on the `Parser` For example: ````import sys
import xml.sax

class EltHandler(xml.sax.handler.ContentHandler):
    def __init__(self, locator):
        xml.sax.handler.ContentHandler.__init__(self)
        self.loc = locator
        self.setDocumentLocator(self.loc)
    def startElement(self, name, attrs):
        pass
    def endElement(self, name):
        pass
    def characters(self, data):
        lineNo = self.loc.getLineNumber()
        print >> sys.stdout, "LINE", lineNo, data

def spit_lines(filepath):
    try:
        parser = xml.sax.make_parser()
        locator = xml.sax.expatreader.ExpatLocator(parser)
        handler = EltHandler(locator)
        parser.setContentHandler(handler)
        parser.parse(filepath)
    except IOError as e:
        print >> sys.stderr, e

if len(sys.argv) > 1:
    filepath = sys.argv[1]
    spit_lines(filepath)
else:
    print >> sys.stderr, "Try providing a path to an XML file."
```` Martijn Pieters points out below another approach with some advantages. If the superclass initializer of the `ContentHandler` is properly called, then it turns out a private-looking, undocumented member `_locator` is set, which ought to contain a proper `Locator`. Advantage: you do not have to create your own `Locator` (or find out how to create it). Disadvantage: it is nowhere documented, and using an undocumented private variable is sloppy. Thanks, Martijn! | The sax parser <em>itself</em> is supposed to provide your content handler with a locator. The locator has to implement certain methods, but it can be any object as long as it has the right methods. The <a href="http://docs.python.org/2/library/xml.sax.reader.html#locator-objects" rel="nofollow">`xml.sax.xmlreader.Locator` class</a> is the <em>interface</em> a locator is expected to implement; if the parser provided a locator object to your handler, then you can count on those 4 methods being present on the locator. The parser is only <em>encouraged</em> to set a locator; it is not required to do so. The expat XML parser does provide it. If you subclass <a href="http://docs.python.org/2/library/xml.sax.handler.html#contenthandler-objects" rel="nofollow">`xml.sax.handler.ContentHandler()`</a>, then it will provide a standard `setDocumentLocator()` method for you, and by the time `startDocument()` on the handler is called, your content handler instance will have `self._locator` set: ````from xml.sax.handler import ContentHandler

class MyContentHandler(ContentHandler):
    def __init__(self):
        ContentHandler.__init__(self)
        # initialize your handler

    def startElement(self, name, attrs):
        loc = self._locator
        if loc is not None:
            line, col = loc.getLineNumber(), loc.getColumnNumber()
        else:
            line, col = 'unknown', 'unknown'
        print 'start of {} element at line {}, column {}'.format(name, line, col)
```` |
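To see the answer's `self._locator` approach end to end, here is a self-contained variant (in Python 3 syntax, unlike the Python 2 code above) that parses a small in-memory document and records the line each element starts on; `LineNumberHandler` and the sample XML are illustrative, not from the original posts.

```python
# Subclass ContentHandler, call the base initializer, and read line
# numbers from self._locator, which the expat parser fills in via
# setDocumentLocator() before any element events are delivered.

import xml.sax
from xml.sax.handler import ContentHandler

class LineNumberHandler(ContentHandler):
    def __init__(self):
        ContentHandler.__init__(self)
        self.element_lines = {}  # element name -> line its start tag is on

    def startElement(self, name, attrs):
        loc = self._locator
        if loc is not None:
            self.element_lines[name] = loc.getLineNumber()

handler = LineNumberHandler()
xml.sax.parseString(b"<root>\n  <child/>\n</root>", handler)
# handler.element_lines now maps each element to its start line
```

This avoids constructing an `ExpatLocator` by hand, at the cost of relying on the undocumented `_locator` attribute the answer describes.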
Translating a declarative DSL into nested function calls I have a python library which builds special iterators (a behavior tree) out of <a href="https://github com/eykd/owyl/blob/master/examples/boids py#L281" rel="nofollow">nested function calls</a> While the API has a fairly nice and light-weight syntax (due to it being python) it could really use a declarative DSL Here is a rough sketch of what I am envisioning: The DSL (using YAML): ````tree: - sequence: - do_action1 - do_action2 - select: - do_action3 - sequence: - do_action4 - do_action5 - do_action6 ```` would result in the following nested function calls: ````visit( sequence( do_action1() do_action2() select( do_action3() sequence( do_action4() do_action5() ) do_action6() ) ) ) ```` I am having trouble visualizing exactly how to do this Because the DSL must represent a tree a simple depth-first traversal seems appropriate But in order to build the nested function calls I have to turn this inside out somehow It probably involves something clever with an intermediary stack or some-such but I cannot quite grasp it What is the correct way to perform this transformation? 
| I think you could let python keep track of the function calls and parameters instead of doing it yourself with a stack Suppose you have a YAML parse tree in which each node represents a function call and each child of this node is a parameter (which is also a function call so it could potentially have its own parameters) Then define the function `evaluate` which evaluates a node of this tree as follows (pseudocode): ````def evaluate(node): # evaluate parameters of the call params = [] for child in node: params append(evaluate(child)) # now make the call to whatever function this node represents # passing the parameters return node function call(*params) ```` Finally call `evaluate` passing the root of the YAML tree as the parameter and you should get the desired behaviour <hr> A slightly different eval-apply structure ````def evaluate(node): # evaluate parameters of the call params = [ evaluate(child) for child in node ] # apply whatever function this node represents return node function call(*params) ```` |
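A minimal runnable sketch of this eval-apply idea, assuming each parsed node is represented as a `(callable, [children])` tuple (a hypothetical stand-in for real YAML node objects) and the combinators simply record their arguments:

```python
# hypothetical node representation: (callable, [child nodes])
def evaluate(node):
    func, children = node
    # post-order: evaluate the children first, then apply this node's function
    params = [evaluate(child) for child in children]
    return func(*params)

# toy stand-ins for the behavior-tree combinators -- they just record arguments
def sequence(*children):
    return ('sequence',) + children

def select(*children):
    return ('select',) + children

def leaf(name):
    # a leaf action takes no parameters
    return (lambda: name, [])

tree = (sequence, [leaf('do_action1'),
                   (select, [leaf('do_action2'), leaf('do_action3')])])
print(evaluate(tree))  # ('sequence', 'do_action1', ('select', 'do_action2', 'do_action3'))
```

Swapping the recording combinators for the real owyl ones would then produce the nested function calls shown above.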
Python - defining global string variables My question is how to initialize global string variables in python For example when I do the following ````def global_paths(): global RUN_PATH global BASE_PATH global EXE_SUFIX global SPEC_PATH global cmd_list global RUN_PATH = "/run/run_base_ref_amd64-m64-gcc43-nn 0000/" global BASE_PATH = "/SPECcpu2006/1 1/cdrom" global EXE_SUFIX = "_base amd64-m64-gcc43-nn" global SPEC_PATH = BASE_PATH "/benchspec/CPU2006/" global cmd_list = {} ```` I get the error: ```` global RUN_PATH = "/run/run_base_ref_amd64-m64-gcc43-nn 0000/" ^ SyntaxError: invalid syntax ```` What is the mistake I am making? Question is similar to <a href="http://stackoverflow com/questions/370357/python-variable-scope-question">this</a> | You do not need to add the extra `global` when creating global variables You only need to globalise it before you create the variable (as you have done) and then you can create it normally: ````def global_paths(): global RUN_PATH global BASE_PATH global EXE_SUFIX global SPEC_PATH global cmd_list RUN_PATH = "/run/run_base_ref_amd64-m64-gcc43-nn 0000/" BASE_PATH = "/SPECcpu2006/1 1/cdrom" EXE_SUFIX = "_base amd64-m64-gcc43-nn" SPEC_PATH = BASE_PATH "/benchspec/CPU2006/" cmd_list = {} ````
How much money did Spectre gross on it's opening day in the UK? | $9.2 million |
Maintain Selection of QComboBox under a QStyledItemDelegate Below is an example of a custom delegated `QComboBox` When I make a selection click out (or otherwise lose focus with the `QComboBox`) and then TAB back in (gain focus) I lose my original selection For example using the below code if I choose `"Item 2"` click out then TAB back in the selection will go back to `"Item 1 "` <strong>How can I maintain the selection?</strong> I am assuming this issue occurs because I am using `addItem()` in `TheEditor` `QComboBox` class every time it is initialized except I am not too sure how I should be approaching this method Should I instead be initializing `TheEditor` in the `EditDelegate` `__init__` method so that it is only initialized once and not every time it is focused? How might I do that properly? ````import sys from PySide import QtCore QtGui QtSql class EditDelegate(QtGui QStyledItemDelegate): def __init__(self parent=None): super(EditDelegate self) __init__(parent) def createEditor(self parent option index): editor = TheEditor(parent) return editor class TheEditor(QtGui QComboBox): def __init__(self parent=None): super(TheEditor self) __init__(parent) self addItem("Item 1") self addItem("Item 2") self addItem("Item 3") self setEditable(True) class TheTable(QtGui QTableWidget): def __init__(self columns parent=None): super(TheTable self) __init__(parent) self setItemDelegate(EditDelegate()) self setEditTriggers(QtGui QAbstractItemView AllEditTriggers) self setColumnCount(1) self setRowCount(1) self setHorizontalHeaderLabels(["QCombo"]) class MainWindow(QtGui QMainWindow): def __init__(self parent=None): super(MainWindow self) __init__(parent) self setCentralWidget(TheTable(self)) if __name__ == '__main__': app = QtGui QApplication(sys argv) frame = MainWindow() frame show() app exec_() ```` Note: PySide v1 2 0 | As the Qt introduction to (<a href="https://qt-project org/doc/qt-4 7/model-view-programming html" rel="nofollow">Model/View Programming</a>) says
<blockquote> Note that we do not need to keep a pointer to the editor widget because the view takes responsibility for destroying it when it is no longer needed </blockquote> The editor is a temporary object but you can try catching the selection from the old editor and passing it to the new editor like this: ````class EditDelegate(QtGui QStyledItemDelegate): editorsLastIndex=None def __init__(self parent=None): super(EditDelegate self) __init__(parent) def createEditor(self parent option index): editor = TheEditor(parent) if self editorsLastIndex != None: editor setCurrentIndex(self editorsLastIndex) editor currentIndexChanged connect(self editorIndexChanged) return editor def editorIndexChanged(self index): self editorsLastIndex = index ```` |
What is the highest temperature recorded during a barsat? | null |
Map point to closest point on fibonacci lattice I use the following code to generate the <a href="http://arxiv org/pdf/0912 4540 pdf" rel="nofollow">fibonacci lattice see page 4</a> for the unit sphere I think the code is working correctly Next I have a list of points (specified by latitude and longitude in radians just as the generated fibonacci lattice points) For each of the points I want to find the index of the closest point on the fibonacci lattice I e I have `latitude` and `longitude` and want to get `i` How would I do this? I specifically <strong>do not want to iterate over all the points from the lattice</strong> and find the one with minimal distance as in practice I generate much more than just `50` points and I do not want the runtime to be `O(n*m)` if `O(m)` is possible <strong>FWIW when talking about distance I mean <a href="https://en wikipedia org/wiki/Haversine_formula" rel="nofollow">haversine distance</a> </strong> ````#!/usr/bin/env python2 import math import sys n = 50 phi = (math sqrt(5 0) 1 0) / 2 0 phi_inv = phi - 1 0 ga = 2 0 * phi_inv * math pi for i in xrange(-n n 1): longitude = ga * i longitude = (longitude % phi) - phi if longitude < 0 else longitude % phi latitude = math asin(2 0 * float(i) / (2 0 * n 1 0)) print("{}-th point: " format(i n 1)) print("\tLongitude is {}" format(longitude)) print("\tLatitude is {}" format(latitude)) // Given latitude and longitude of point A determine index i of point which is closest to A // ??? 
```` | What you are probably looking for is a spatial index: <a href="https://en wikipedia org/wiki/Spatial_database#Spatial_index" rel="nofollow">https://en wikipedia org/wiki/Spatial_database#Spatial_index</a> Since you only care about nearest neighbor search you might want to use something relatively simple like <a href="http://docs scipy org/doc/scipy-0 14 0/reference/generated/scipy spatial KDTree html" rel="nofollow">http://docs scipy org/doc/scipy-0 14 0/reference/generated/scipy spatial KDTree html</a> Note that spatial indexes usually consider points on a plane rather than a sphere To adapt it to your situation you will probably want to split up the sphere into several regions that can be approximated by rectangles You can then find several of the nearest neighbors according to the rectangular approximation and compute their actual haversine distances to identify the true nearest neighbor |
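One way to get below O(n*m), sketched under the assumption that the lattice is built exactly as in the question's code: the latitude formula is strictly monotone in `i`, so it can be inverted to get an approximate index and only a small window of candidates needs a haversine check (the window size of 25 is a made-up tunable, not a derived bound):

```python
import math

N = 50
PHI = (math.sqrt(5.0) + 1.0) / 2.0
GA = 2.0 * (PHI - 1.0) * math.pi

def lattice_point(i, n=N):
    # mirrors the construction in the question's code (including its mod-phi longitude)
    lon = GA * i
    lon = (lon % PHI) - PHI if lon < 0 else lon % PHI
    lat = math.asin(2.0 * i / (2.0 * n + 1.0))
    return lat, lon

def haversine(lat1, lon1, lat2, lon2):
    # central angle between two (lat, lon) points on the unit sphere
    a = (math.sin((lat2 - lat1) / 2.0) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2.0) ** 2)
    return 2.0 * math.asin(min(1.0, math.sqrt(a)))

def closest_index(lat, lon, n=N, window=25):
    # lat(i) = asin(2i / (2n + 1)) is strictly increasing in i, so invert it
    # to get an approximate index, then scan only a small window of candidates
    i_guess = int(round(math.sin(lat) * (2.0 * n + 1.0) / 2.0))
    lo, hi = max(-n, i_guess - window), min(n, i_guess + window)
    return min(range(lo, hi + 1),
               key=lambda i: haversine(lat, lon, *lattice_point(i, n)))
```

This is O(window) per query instead of O(n); a KD-tree over the 3D unit vectors, as suggested above, is the more general alternative when no closed-form inverse is available.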
If the page does not require Javascript what could be blocking it? For example this URL: <a href="http://websta me/n/victoria a3456" rel="nofollow">http://websta me/n/victoria a3456</a> In a request everything loads but the photos and everything within those divs like their comments etc But the footer and the header (down to the photos) loads like their bio profile pic etc So in the browser I have disabled javascript and set the user-agent to `python-requests/1 2 0` The page still loads fine in the browser I do not understand why it will not load by a programatic HTTP request | So you have some code like: ````import requests as req site = req get('http://websta me/n/victoria a3456') print(site text) ```` You can change your headers of the request like so ````headers = {'':''} site = req get('http://websta me/n/victoria a3456' headers=headers) ```` The html file is a document which refers to other documents It is not a zip file Those other files (images videos etc ) are not embedded in the html document The web server is instructed to give you the html document and let the browser figure out how to download the linked documents from that html file The browser is doing more work in the background I would suggest looking at <a href="http://scrapy org/" rel="nofollow">scrapy</a> to get the other elements of the site You can see that the images are in the site text it is just a matter of putting in a 2nd request to download it ````import string string rfind(site text "https://scontent cdninstagram com/hphotos-xfa1/t51 2885-15/s320x320/e35/10838359_831976973588137_408868997_n jpg") >>>10039 ```` |
In which period did North and South America become linked? | Pliocene |
Fastest way to create DataFrame from last available data I had no success looking for answers for this question in the forum since it is hard to put it in keywords Any keywords suggestions are appreciated so that I cane make this question more accessible so that others can benefit from it The closest <a href="http://stackoverflow com/questions/21231834/creating-a-pandas-dataframe-from-columns-of-other-dataframes-with-similar-indexe">question</a> I found does not really answer mine My problem is the following: I have one DataFrame that I called `ref` and a dates list called `pub` `ref` has dates for indexes but those dates are different (there will be a few matching values) from the dates in `pub` I want to create a new DataFrame that contains all the dates from `pub` but fill it with the "last available data" from `ref` Thus say `ref` is: ````Dat col1 col2 2015-01-01 5 4 2015-01-02 6 7 2015-01-05 8 9 ```` And `pub` ````2015-01-01 2015-01-04 2015-01-06 ```` I would like to create a DataFrame like: ````Dat col1 col2 2015-01-01 5 4 2015-01-04 6 7 2015-01-06 8 9 ```` For this matter performance is an issue So i am looking for the fastest / a fast way of doing that Thanks in advance | You can do an outer merge set the new index to `Dat` sort it forward fill and then reindex based on the dates in `pub` ````dates = ['2015-01-01' '2015-01-04' '2015-01-06'] pub = pd DataFrame([dt datetime strptime(ts '%Y-%m-%d') date() for ts in dates] columns=['Dat']) >>> (ref merge(pub on='Dat' how='outer') set_index('Dat') sort_index() ffill() reindex(pub Dat)) col1 col2 Dat 2015-01-01 5 4 2015-01-04 6 7 2015-01-06 8 9 ```` |
Python address allocation to variables Initially variables a b and c all have value 1 and same address When variable a is incremented by 1 then address gets altered while the address of variables b and c remains same Can someone elaborate on this address allotment? Also now when variable b is incremented by 1 and address of b now equals to address of a Can someone please elaborate on this as well? ````>>> a = 1 >>> b = a >>> c = b >>> a += 1 >>> print a b c 2 1 1 >>> id(a) 26976576 >>> id(b) 26976600 >>> id(c) 26976600 >>> b += 1 >>> print a b c 2 2 1 >>> id(c) 26976600 >>> id(b) 26976576 >>> id(a) 26976576 ```` | <a href="https://docs python org/2/c-api/int html#c PyInt_FromLong" rel="nofollow">https://docs python org/2/c-api/int html#c PyInt_FromLong</a> <blockquote> The current implementation keeps an array of integer objects for all integers between -5 and 256 when you create an int in that range you actually just get back a reference to the existing object </blockquote> Also in Python an integer is an immutable object: `PyIntObject` Once you create a `PyIntObject` its value never changes; the names are just references so incrementing rebinds the name to a different (possibly cached) object
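The observed behaviour can be reproduced directly; in CPython (an implementation detail, not a language guarantee) integers from -5 to 256 are cached, so rebinding a name to a value in that range reuses the cached object:

```python
a = 1
b = a
c = b
# one cached int object, three names pointing at it
assert a is b is c

a += 1
# += does not mutate the int -- it rebinds `a` to a different object,
# so only a's id changes while b and c keep referencing 1
assert (a, b, c) == (2, 1, 1)
assert a is not b

b += 1
# b is rebound to the cached int 2: the very object `a` already references,
# which is why id(b) becomes equal to id(a) in the question's session
assert b is a
assert (a, b, c) == (2, 2, 1)
```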
Django like button doesn't increment and redirect me to the same page I have a problem with the like button it does not work at all In the template I use `{{ post likes }}` to show likes count and for the button I use `<a href="{% url 'post_like' pk=post pk %}" class="btn btn-block hvr-bounce-in">` This is views py ````from django http response import HttpResponse HttpResponseRedirect from django shortcuts import render get_object_or_404 redirect render_to_response from django template import Context from django utils import timezone from django views generic import View from blog forms import CommentForm PostForm SearchForm from blog models import Post def post_list(request): posts = Post objects filter(published_date__lte=timezone now()) order_by('published_date')[0:50] return render(request 'blog/index html' {'posts': posts}) def post_detail(request pk): post = get_object_or_404(Post pk=pk) return render(request 'blog/post_detail html' {'post': post}) def add_comment_to_post(request pk): post = get_object_or_404(Post pk=pk) if request method == "POST": form = CommentForm(request POST) if form is_valid(): comment = form save(commit=False) comment post = post comment save() return redirect('blog views post_detail' pk=post pk) else: form = CommentForm() return render(request 'blog/add_comment_to_post html' {'form': form}) def post_new(request): if request method == "POST": form = PostForm(request POST request FILES) if form is_valid(): post = form save(commit=False) post author = request user post published_date = timezone now() post save() return redirect('post_detail' pk=post pk) else: form = PostForm() return render(request 'blog/add_post html' {'form': form}) def search_view(request): if request method == "POST": search_text = request POST['search-form'] else: search_text = '' posts = Post objects filter(title__contains=search_text) return render_to_response('blog/search html' {'post': posts}) def post_like(request pk): if pk: post = Post objects get(id=pk) count =
post likes count += 1 post likes = count post save() return HttpResponseRedirect('post/%s/like/' % pk) def dislike_post(request pk): if pk: post = Post objects get(id=pk) post dislikes += 1 post save() return HttpResponseRedirect('post/%s/like/' % pk) ```` and for urls py ````from django conf urls import url from blog import views urlpatterns = [ url(r'^$' views post_list name='post_list') url(r'^post/(?P<pk>[0-9]+)/$' views post_detail name='post_detail') url(r'^post/(?P<pk>[0-9]+)/comment/$' views add_comment_to_post name='add_comment_to_post') url(r'^post/new/$' views post_new name='post_new') url(r'^search/$' views search_view name='search') url(r'^post/(?P<pk>[0-9]+)/like/$' views post_like name='post_like') # url(r'^post/(?P<pk>[0-9]+)/dislike/$' views dislike_post name='post_dislike') ] ```` Edit : For the model py this is the code I use : ````class Post(models Model): author = models ForeignKey('auth User') title = models CharField(max_length=150) text = models TextField() created_date = models DateTimeField( default=timezone now) published_date = models DateTimeField( blank=True null=True) image = VersatileImageField(upload_to='images') tags = TaggableManager() likes = models IntegerField(default=0) dislikes = models ImageField(default=0) views = models IntegerField(default=0) def publish(self): self published_date = timezone now() self save() def __str__(self): return self title ```` | To change the url that your view redirects to you just need to change the value that you pass to `HttpResponseRedirect` ````return HttpResponseRedirect('/post/%s/' % pk) ```` Note that the url starts with a slash You can use `reverse` to prevent hardcoding the url ````from django core urlresolvers import reverse def post_like(request pk): return HttpResponseRedirect(reverse('post_detail' args=[pk])) ```` I am not sure why the `likes` count is not increasing In your view I would add some print statements to try and work out what is going on for example: ````from django
shortcuts import redirect def post_like(request pk): print("In post_like") post = Post objects get(id=pk) print("likes before: " post likes) post likes += 1 post save() print("likes after: " post likes) return redirect('post_detail' pk=pk) ```` Note that I have used the `redirect` shortcut to simplify the code You do not need the `if pk` statement because `pk` is a required argument |
What company did Comcast propose a merger with? | Time Warner Cable |
Get coords of an oval in Tkinter I cannot seem to figure out how to retrieve the `x y` position of an oval created on a Tkinter canvas using Python via ````c create_oval(x0 y0 x1 y1) ```` I understand that `Tkinter` creates the oval inside the box specified by `x0 y0 x1 y1` and if I can get those coordinates that would also work I need the coordinates to move the oval by an offset equal to the difference between the mouse coords and the actual oval | Assign the results of `c create_oval` to `x` -- that is the "object ID" of the oval Then ````c coords(x) ```` gives you the `(x1 y1 x2 y2)` tuple of the oval's coordinates (you call `coords` with new coordinates following the `x` to move the oval)
What was the mass of the wuzhu coin? | 3.2 g |
Connecting to MySQL database via SSH I am trying to connect my python program to a remote MySQL Database via SSH I am using Paramiko for SSH and SQLAlchemy Here is what I have so far: ````import paramiko from sqlalchemy import create_engine ssh = paramiko SSHClient() ssh set_missing_host_key_policy(paramiko AutoAddPolicy()) ssh connect('host' port=port username='user' password='pass') engine = create_engine('mysql+mysqldb://user:pass@host/db') ```` I am getting an error: ````sqlalchemy exc OperationalError: (_mysql_exceptions OperationalError) (2003 "Cannot connect to MySQL server on 'mcsdev croft-it com' (60)") ```` | Sorry I posted a duplicated answer before Here is a more elaborated answer tailored exactly to your question ;) If you still in need of connecting to a remote MySQL db via SSH I have used a library named sshtunnel that wraps ands simplifies the use of paramiko (a dependency of the sshtunnel) With this code I think you will be good to go: ````from sshtunnel import SSHTunnelForwarder from sqlalchemy import create_engine server = SSHTunnelForwarder( ('host' 22) ssh_password="password" ssh_username="username" remote_bind_address=('127 0 0 1' 3306)) engine = create_engine('mysql+mysqldb://user:pass@127 0 0 1:%s/db' % server local_bind_port) # DO YOUR THINGS server stop() ```` |
When to use python interpreter vs shell I have a very basic question: If we want to run a script called script py we go to the shell and type "python script py" However if we want to check for example if Django is installed or not we first go into the Python interpreter by typing "python" in the shell and when we get the >>> then we type import Django What is the conceptual difference? For example in the second case why does directly running "python import Django" in the shell not work? | `python import Django` tries to run a Python script named `import` with an argument `Django` `python -c 'import Django'` would attempt to execute the Python statement `import Django` as if you had typed it from the Python interpreter directly
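The difference can also be demonstrated from inside Python by invoking both forms with `subprocess` (the statement passed to `-c` is just an arbitrary example, with `json` standing in for Django):

```python
import subprocess
import sys

# `python -c "<statement>"` executes the statement directly
ok = subprocess.run(
    [sys.executable, "-c", "import json; print(json.dumps({'ok': True}))"],
    capture_output=True, text=True)

# `python import json` looks for a script *file* named "import" --
# it fails because no such file exists
bad = subprocess.run(
    [sys.executable, "import", "json"],
    capture_output=True, text=True)

print(ok.stdout.strip())        # {"ok": true}
print(bad.returncode != 0)      # True
```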
How can I insert millions of records into a mongo DB from a large zipped csv file efficiently? I am trying to insert about 8 million of records into Mongo and it seems to insert them with the rate of 1000 records per second which is extremely slow The code is written in python so it may be the problem of python but I doubt it Here is the code: ````def str2datetime(str): return None if (not str or str == r'\N') else datetime strptime(str '%Y-%m-%d %H:%M:%S') def str2bool(str): return None if (not str or str == r'\N') else (False if str == '0' else True) def str2int(str): return None if (not str or str == r'\N') else int(str) def str2float(str): return None if (not str or str == r'\N') else float(str) def str2float2int(str): return None if (not str or str == r'\N') else int(float(str) 0 5) def str2latin1(str): return unicode(str 'latin-1') _ = lambda x: x converters_map = { 'test_id': str2int 'android_device_id': str2int 'android_fingerprint': _ 'test_date': str2datetime 'client_ip_address': _ 'download_kbps': str2int 'upload_kbps': str2int 'latency': str2int 'server_name': _ 'server_country': _ 'server_country_code': _ 'server_latitude': str2float 'server_longitude': str2float 'client_country': _ 'client_country_code': _ 'client_region_name': str2latin1 'client_region_code': _ 'client_city': str2latin1 'client_latitude': str2float 'client_longitude': str2float 'miles_between': str2float2int 'connection_type': str2int 'isp_name': _ 'is_isp': str2bool 'network_operator_name': _ 'network_operator': _ 'brand': _ 'device': _ 'hardware': _ 'build_id': _ 'manufacturer': _ 'model': str2latin1 'product': _ 'cdma_cell_id': str2int 'gsm_cell_id': str2int 'client_ip_id': str2int 'user_agent': _ 'client_net_speed': str2int 'iphone_device_id': str2int 'carrier_name': _ 'iso_country_code': _ 'mobile_country_code': str2int 'mobile_network_code': str2int 'model': str2latin1 'version': _ 'server_sponsor_name': _ } def read_csv_zip(path): with ZipFile(path) as z: with z open(z 
namelist()[0]) as input: r = csv reader(input) header = r next() converters = tuple((title if title != 'test_id' else '_id' converters_map[title]) for title in header) for row in r: row = {converter[0]:converter[1](value) for converter value in zip(converters row)} yield row argv = [x for x in argv if not x == ''] if len(argv) == 1: print("Usage: " argv[0] " zip-file") exit(1) zip_file = argv[1] collection_name = zip_file[:zip_file index('_')] print("Populating " collection_name " with the data from " zip_file) with Connection() as connection: db = connection db collection = db __getattr__(collection_name) i = 0; try: start = time() for item in read_csv_zip(zip_file): i += 1 if (i % 1000) == 0: stdout write("\r%d " % i) stdout flush() try: collection insert(item) except Exception as exc: print("Failed at the record #{0} (id = {1})" format(i item['_id'])) print exc print("Elapsed time = {0} seconds {1} records " format(time() - start i)) raw_input("Press ENTER to exit") except Exception as exc: print("Failed at the record #{0} (id = {1})" format(i item['_id'])) print exc exit(1) ```` It takes 350 seconds to insert 262796 records (one csv file) The mongo server is running on the same machine and no one is using it So I could write directly to the database file if there was a way I am not interested in sharding because 8 million records are not supposed to require sharding are they? My question is what am I doing wrong? Maybe my choice of DB is wrong?
The typical flow is that once a month the records are refreshed and then only queries are made against the database Thanks <strong>EDIT</strong> It turns out that the bottleneck is not mongo but reading the zip file I have changed the code to read the zip file in chunks of 1000 rows and then feed them to mongo in one call to `Collection insert` It is the zip file which takes all the time Here is the modified code: ````def insert_documents(collection source i batch_size): count = 0; while True: items = list(itertools islice(source batch_size)) if len(items) == 0: break; old_i = i count += len(items) i += len(items) if (old_i / 1000) != (i / 1000): sys stdout write("\r%d " % i) sys stdout flush() try: collection insert(items) except Exception as exc: print("Failed at some record between #{0} (id = {1}) and #{2} (id = {3})" format(old_i items[0]['_id'] i items[-1]['_id'])) print exc return count def main(): argv = [x for x in sys argv if not x == ''] if len(argv) == 1: print("Usage: " argv[0] " zip-file") exit(1) zip_file = argv[1] collection_name = zip_file[:zip_file index('_')] print("Populating " collection_name " with the data from " zip_file) with Connection() as connection: ookla = connection ookla collection = ookla __getattr__(collection_name) i = 0; start = time() count = insert_documents(collection read_csv_zip(zip_file) i 1000) i += count print("Elapsed time = {0} seconds {1} records " format(time() - start count)) raw_input("Press ENTER to exit") if __name__ == "__main__": main() ```` It turns out that most of the time goes into `items = list(itertools islice(source batch_size))` Any ideas on how to improve it? | At slightly more than 1ms per record I would not really call it "extremely slow" but anyways here are some thoughts as to what to do next: - Use a profiler to see where the program spends its time It might not be where you think - Consider using the mongoimport utility linked from the comment by ChrisP as it is designed for this purpose
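The batching pattern from the edited code can be isolated and exercised without Mongo; in this sketch a plain list stands in for the collection and `collection insert(items)` becomes an `extend`:

```python
import itertools

def insert_in_batches(collection, source, batch_size):
    # drain the source iterator in fixed-size chunks and
    # hand each chunk to the collection in a single call
    count = 0
    source = iter(source)
    while True:
        batch = list(itertools.islice(source, batch_size))
        if not batch:
            break
        collection.extend(batch)  # stand-in for collection.insert(batch)
        count += len(batch)
    return count

fake_collection = []
rows = ({'_id': i} for i in range(2500))
inserted = insert_in_batches(fake_collection, rows, 1000)
print(inserted, len(fake_collection))  # 2500 2500
```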
Getting file input into Python script for praw script So I have a simple reddit bot set up which I wrote using the praw framework The code is as follows: ````import praw import time import numpy import pickle r = praw Reddit(user_agent = "Gets the Daily General Thread from subreddit ") print("Logging in ") r login() words_to_match = ['sdfghm'] cache = [] def run_bot(): print("Grabbing subreddit ") subreddit = r get_subreddit("test") print("Grabbing thread titles ") threads = subreddit get_hot(limit=10) for submission in threads: thread_title = submission title lower() isMatch = any(string in thread_title for string in words_to_match) if submission id not in cache and isMatch: print("Match found! Thread ID is " submission id) r send_message('FlameDraBot' 'DGT has been posted!' 'You are awesome!') print("Message sent!") cache append(submission id) print("Comment loop finished Restarting ") # Run the script while True: run_bot() time sleep(20) ```` I want to create a file (text file or xml or something else) using which the user can change the fields for the various information being queried For example I want a file with lines such as : ````Words to Search for = sdfghm Subreddit to Search in = text Send message to = FlameDraBot ```` I want the info to be input from fields so that it takes the value after Words to Search for = instead of the whole line After the information has been input into the file and it has been saved I want my script to pull the information from the file store it in a variable and use that variable in the appropriate functions such as: ````words_to_match = ['sdfghm'] subreddit = r get_subreddit("test") r send_message('FlameDraBot' ```` So basically like a config file for the script How do I go about making it so that my script can take input from a txt or another appropriate file and implement it into my code?
| Yes that is just a plain old Python config which you <a href="http://stackoverflow com/questions/19078170/python-how-would-you-save-a-simple-settings-config-file">can implement in an ASCII file or else YAML or JSON</a> Create a subdirectory ` /config` put your settings in ` /config/__init__ py` Then `import config` Using PEP 8 compliant names the file ` /config/__init__ py` would look like: ````search_string = ['sdfghm'] subreddit_to_search = 'text' notify = ['FlameDraBot'] ```` If you want more complicated config just read the many other posts on that
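If a plain-text file in the question's `Key = value` format is preferred over a Python module, parsing it takes only a few lines; this sketch splits each line on the first `=` and keeps the value after it (the file name is made up, the keys are taken from the question's example):

```python
def load_config(path):
    # parse "Key = value" lines into a dict, skipping blanks and #-comments
    settings = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            key, sep, value = line.partition('=')
            if sep:
                settings[key.strip()] = value.strip()
    return settings

# write a sample config in the question's format, then read it back
with open('bot.cfg', 'w') as f:
    f.write("Words to Search for = sdfghm\n"
            "Subreddit to Search in = test\n"
            "Send message to = FlameDraBot\n")

cfg = load_config('bot.cfg')
words_to_match = [w.strip() for w in cfg['Words to Search for'].split(',')]
print(words_to_match)  # ['sdfghm']
```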
What is an efficient way to crawl multiple pages on a website without yielding/creating a request/method for each page using scrapy? Just as an example I am using Yelp Yelp does not list emails so if you wanted to acquire Yelp emails you would need to scrape a listing and then yield a request to that listings website and crawl it for an email Currently I am crawling the homepage of the listings website and if the email phone number etc is not listed on that page then I load the contact page and check there The problem I am having is that the information I am looking for is not always on these pages It would be ideal to load all of the links on a website that contain certain keywords and then create a method that looks through all of these pages for the emails phone numbers etc and return them when found What would be a good way to go about doing this? Here is how I am currently crawling through the pages of a website: ```` rules = ( Rule(LinkExtractor(allow=r'biz' restrict_xpaths='//*[contains(@class "natural-search-result")]//a[@class="biz-name"]') callback='parse_item' follow=True) Rule(LinkExtractor(allow=r'start' restrict_xpaths='//a[contains(@class "prev-next")]') follow=True) ) def parse_item(self response): i = YelpscraperItem() i['phone'] = self beautify(response xpath('//*[@class="biz-phone"]/text()') extract()) i['state'] = self beautify(response xpath('//span[@itemprop="addressRegion"]/text()') extract()) i['company'] = self beautify(response xpath('//h1[contains(@class "biz-page-title")]/text()') extract()) website = i['website'] = self beautify(response xpath('//div[@class="biz-website"]/a/text()') extract()) if type(website) is list and website: website = self checkScheme(website[0]) request = Request(website callback=self parse_home_page dont_filter=True) request meta['item'] = i yield request else: yield i def parse_home_page(self response): try: i = response meta['item'] sel = Selector(response) rawEmail = sel 
xpath("substring-after(//a[starts-with(@href 'mailto:')]/@href 'mailto:')") extract() if (type(rawEmail) is list) and ('@' in rawEmail[0]): i = self format_email(rawEmail i "Home Page (Link)") yield i else: rawContactPage = response xpath("//a[contains(@href 'contact')]/@href") extract() if type(rawContactPage) is list and rawContactPage: contactPage = rawContactPage[0] contactPage = urlparse urljoin(response url contactPage strip()) request = Request(contactPage callback=self parse_contact_page dont_filter=True) request meta['item'] = i request meta['home-page-response'] = response yield request else: yield i except TypeError as er: print er def parse_contact_page(self response): try: i = response meta['item'] homePageResponse = response meta['home-page-response'] rawEmail = response xpath("substring-after(//a[starts-with(@href 'mailto:')]/@href 'mailto:')") extract() if (type(rawEmail) is list) and ('@' in rawEmail[0]): i = self format_email(rawEmail i "Contact Page (Link)") elif (type(rawEmail) is list) and (rawEmail[0] == ''): rawEmail = response xpath('//body') re(r'[a-zA-Z0-9\ \-+_]+@[a-zA-Z0-9\ \-+_]+\ [A-Za-z]{2 3}') if (type(rawEmail) is list) and rawEmail: i = self format_email(rawEmail i "Contact Page (Text)") else: rawEmail = homePageResponse xpath('//body') re(r'[a-zA-Z0-9\ \-+_]+@[a-zA-Z0-9\ \-+_]+\ [A-Za-z]{2 3}') if (type(rawEmail) is list) and rawEmail: i = self format_email(rawEmail i "Home Page (Text)") else: rawEmail = [self get_whois_email(i)] i = self format_email(rawEmail i "Whois Page") yield i except TypeError as er: print er def get_whois_email(self i): email = "" try: if 'website' in i keys(): website = i['website'] if type(website) is list: website = i['website'][0] lower() w = whois whois(website) for whoisEmail in w emails: whoisEmail = whoisEmail lower() if website in whoisEmail: email = whoisEmail else: for domain in self whiteListed: if domain in whoisEmail: email = whoisEmail except IndexError as er: log message("Whois Email 
IndexError:") return email ```` | That is the way Scrapy works as it is based on Twisted an asynchronous framework Each crawled page is handled by a single callback I think the only way to pass information from one callback to another is the way you are doing via the request's meta attribute |
pandas Timestamp components - some are properties and some are methods? I have been playing around with the pandas Time/Date components described in the docs and I have noticed that of the Time/Date components listed <a href="http://pandas pydata org/pandas-docs/stable/timeseries html#time-date-components" rel="nofollow">http://pandas pydata org/pandas-docs/stable/timeseries html#time-date-components</a> some of them are properties and some of them are methods Examples: <a href="http://i stack imgur com/wXsV7 png" rel="nofollow"><img src="http://i stack imgur com/wXsV7 png" alt="output"></a> Is there an explanation for this in the docs and/or an overall logic to which components are properties and which are methods? | The split mirrors the underlying standard library API `Timestamp` subclasses `datetime datetime` so everything inherited from `datetime` stays a method - `weekday()` `isoweekday()` `date()` `time()` `toordinal()` and so on - while the date/time components that pandas adds on top are implemented as properties - `dayofweek` `dayofyear` `days_in_month` `quarter` `is_month_start` etc From your example: ````t weekday() # inherited from datetime so it is a method t dayofweek # added by pandas so it is a property ```` So the rule of thumb is: if the name already exists on a plain `datetime` object it is a method and if it is a pandas-only component it is a property
django javascript not working. When the gray div is clicked I want the website to be opened in a new tab, but here this javascript code is not working. Why? How can I fix it? views.py: ````from django.shortcuts import render from django.http import HttpResponse def test(request): return render(request, 'test.html') ```` test.html: ````<html> <head> <script src="jquery.min.js"></script> <script> $(document).ready(function(){ $("#gray").click(function(){ window.open("http://www.w3schools.com"); }); }); </script> </head> <body> <div id='gray' style="width:100px;height:100px;background:gray;"></div> </body> ```` urls.py: ````from django.conf.urls import include, url from secondv.views import test urlpatterns = [ url(r'^test', test) ] ```` The `jquery.min.js` file is in the template directory, the same directory as the test.html file. | The template directory is only for templates. All static content, such as js files and images, should be placed in the static directory. It is well described in the docs: <a href="https://docs.djangoproject.com/en/1.9/intro/tutorial06/" rel="nofollow">https://docs.djangoproject.com/en/1.9/intro/tutorial06/</a> and <a href="https://docs.djangoproject.com/en/1.9/howto/static-files/deployment/" rel="nofollow">https://docs.djangoproject.com/en/1.9/howto/static-files/deployment/</a> - for production. |
Python multiprocessing: trying to request the GitHub API with two tokens in parallel. I am trying to use the function bucket() to request GitHub API users' info with two access tokens in parallel, then save the users' info into a csv file. The reason why I am doing this is to surpass the GitHub API rate limit. Please ignore whether GitHub will block me or not (I asked GitHub but got no reply yet). My approach is using the Python multiprocessing library to run the same function with different parameters in parallel. Actually the two processes I created run sequentially, not in parallel. Here is my code: ````import requests import csv import time from multiprocessing import Process # *************Construct url************ url1 = 'https://api.github.com/users' url2 = 'https://api.github.com/users?since=1000000' token1 = 'my_token1' token2 = 'my_token2' headers1 = {'Authorization': 'token %s' % token1} headers2 = {'Authorization': 'token %s' % token2} params = {'per_page': 100} def bucket(url, header, params, file_path): count = 0 cnt = 0 csv_file = open(file_path, 'a', buffering=0) writer = csv.writer(csv_file) while count < 1: # just run 100 users' profiles to see the result fast r = requests.get(url, headers=header, params=params) # get users' basic info, 100 users/request users = r.json() for user in users: user_profile = requests.get(user['url'], headers=header).json() # get user's detailed profile, 1 user/request field_names = user_profile.keys() line = [] for field in field_names: if (field in user_profile) and user_profile[field]: if isinstance(user_profile[field], basestring): line.append(user_profile[field].encode('utf-8')) else: line.append(user_profile[field]) else: line.append('NULL') writer.writerow(line) cnt += 1 print cnt time.sleep(0.75) try: url = r.links['next'].get('url') # get url for next page (100 users/page), 1 page is one request except: break print(r.headers['X-RateLimit-Remaining']) count += 1 if __name__ == '__main__': p1 = Process(target=bucket(url3, headers3, params, 'GitHub_users3.csv')) p1.start() p2 = Process(target=bucket(url4, headers4, params, 'GitHub_users4.csv')) p2.start() p1.join() p2.join() ```` Could someone help me accomplish the goal? If you would like to share any idea that can pass the GitHub API rate limit, I am happy to learn. Thanks! | <blockquote> I asked GitHub but got no reply yet </blockquote> As I mentioned in the reply to your message -- using two tokens from the same user will not grant you a better rate limit, because the rate limit for authenticated requests is per-user (not per-user, per-token). And I would strongly discourage you from creating multiple free accounts and using tokens from those as a way to work around the rate limit (since it is not allowed by the ToS). |
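Incidentally, a likely reason the two processes in the question run sequentially, independent of the rate-limit issue: `Process(target=bucket(...))` calls `bucket` immediately in the parent and passes its return value as the target. A minimal sketch of handing the callable plus its arguments to `Process` instead (the function body here is a stand-in, not the real scraper):

```python
from multiprocessing import Process, Queue

def bucket(name, out):
    # stand-in for the real scraping work
    out.put(name)

if __name__ == "__main__":
    q = Queue()
    # pass the function itself via target=, and its arguments via args=;
    # do NOT call the function at construction time
    p1 = Process(target=bucket, args=("worker1", q))
    p2 = Process(target=bucket, args=("worker2", q))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    print(sorted(q.get() for _ in range(2)))  # ['worker1', 'worker2']
```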
emacs Jedi: disable completion on # I have a working version of Jedi with Emacs 24.3 on OS X. The autocomplete feature is wonderful, except for when I want to comment things out! I frequently comment/uncomment portions of my code and was hoping there was a way to prevent Jedi's auto completion from coming up when I type #. Any advice or thoughts would be greatly appreciated! edit: When I go to comment multiple lines, I enter # typically followed by DownArrow and LeftArrow, but what usually happens with Jedi enabled is that this dialog pops up, preventing me from moving to the following line until I make a selection: <img src="http://i.stack.imgur.com/3Lp1a.png" alt="dialog that pops up"> | One way to get around this issue would be to select the lines (region) you would like to comment out and hit <kbd>M-;</kbd>. This runs the command `comment-dwim`, which comments out the selected region (or uncomments it if it is currently commented out). When used in conjunction with e.g. <a href="http://www.emacswiki.org/emacs/mark-lines.el" rel="nofollow">`mark-lines`</a>, which allows you to select the current line with a single key stroke, this makes for a really fast way of (un)commenting portions of your code, even if they span just one or two lines. |
PyODBC Pandas Parameterization. I am using PyODBC to connect to Oracle with a connection called cnxn. I have a list of unique identifiers: <em>list1 = [1234, 2345, 3456, 4567]</em> I also have a query: ````query1 = """ select * from tablename where unique_id = ? """ ```` What I would like to do is use this list of identifiers to create a pandas dataframe. As a test I did this: ````testid = "1234" (since Oracle wants a string as that id, not an integer) ```` However, when I do this: ````pd.read_sql(query1, cnxn, params = testid) ```` I get <em>"the sql contains 1 parameter marker yet 4 were supplied"</em>. Eventually I want to be able to do something like this: ````for i in list1: newdataframe.append(pd.read_sql(query1, cnxn, params = i)) ```` and have it spit out a dataframe. I have read the docs on PyODBC and it looks like it specifies ? as the parameter. I have also looked at <a href="http://stackoverflow.com/questions/9518148/pyodbc-how-to-perform-a-select-statement-using-a-variable-for-a-parameter">this question</a>, and it is similar, but I need to be able to feed the results into a Pandas dataframe for further manipulation. I think if I can get the testid working I will be on the right track. Thanks | The below is a full example with connection details, but it is SQL Server specific. Because you are using Oracle, you can steal the df_query part. The point I am trying to illustrate here is that you can use string formatting for parameter values instead of using params in your connection string: ````import os import sqlalchemy as sa import urllib import pandas as pd # Specify the databases and servers used for reading and writing data read_server = 'Server' read_database = 'Database' read_params = urllib.quote_plus("DRIVER={Server};SERVER={read_server};DATABASE={read_database};TRUSTED_CONNECTION=Yes".format(Server='SQL Server', read_server=read_server, read_database=read_database)) read_engine = sa.create_engine("mssql+pyodbc:///?odbc_connect=%s" % read_params) unique_id = 'id' single_quote = "'" df_query = """ SELECT * FROM TABLE WHERE UNIQUE_ID = {single_quote}{unique_id}{single_quote} """.format(single_quote=single_quote, unique_id=unique_id) DF = pd.read_sql_query(df_query, con=read_engine, index_col=None) ```` |
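For what it is worth, the "1 parameter marker yet 4 were supplied" error happens because a bare string is treated as a sequence of four characters; wrapping the single value in a list fixes it. A minimal sketch with the stdlib `sqlite3` driver standing in for the Oracle connection (the table name and data here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablename (unique_id TEXT, val INTEGER)")
conn.executemany("INSERT INTO tablename VALUES (?, ?)",
                 [("1234", 10), ("2345", 20)])

testid = "1234"
# conn.execute(query, testid) would fail: the 4-char string is seen as
# 4 separate parameters. Wrap the single value in a list instead:
rows = conn.execute(
    "SELECT val FROM tablename WHERE unique_id = ?", [testid]
).fetchall()
print(rows)  # [(10,)]
```

The same idea applies on the pandas side: `pd.read_sql(query1, cnxn, params=[testid])`.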
Multiplying dataframe values by another dataframe and returning a 'sumproduct' of columns for all rows. How do I summarise this data into one table using Pandas? I have two existing tables. df1 <em>Cost table</em> ````> Month A B C > Jan 10 5 4 > Feb 5 10 5 > Mar 20 10 8 > Apr 10 10 10 > May 5 20 10 ```` df2 <em>Weighting Option</em> ````> A B C > Option > x 1.00 0.90 0.80 > y 0.95 0.75 0.60 > z 0.90 0.85 0.65 ```` I want a new dataframe (df3) showing the totals for columns A+B+C when weighted by each of the options (x, y, z). It would look like this: df3 <em>total cost in each month by weighting option</em> ````> x y z > month > Jan > Feb > Mar > Apr > May ```` E.g. (Mar, y) above would be (0.95*20)+(0.75*10)+(0.60*8) = 31.3, and so on. | It looks like you just want a standard matrix multiplication, which can be done using `np.dot()`: ````pd.DataFrame(df1.values.dot(df2.T.values), columns=list('xyz'), index=df1.index) x y z Month Jan 17.7 15.65 15.85 Feb 18.0 15.25 16.25 Mar 35.4 31.30 31.70 Apr 27.0 23.00 24.00 May 31.0 25.75 28.00 ```` <h2>Edit:</h2> As suggested by @Mr F: ````pd.DataFrame(np.dot(df1, df2.T), columns=list('xyz'), index=df1.index) ```` |
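The weighted totals above are just row-by-row dot products; a dependency-free sketch of the same arithmetic with plain Python, using the cost and weight tables from the question, confirms the (Mar, y) figure:

```python
# cost per month for instruments A, B, C, and weights per option
costs = {"Jan": [10, 5, 4], "Feb": [5, 10, 5], "Mar": [20, 10, 8],
         "Apr": [10, 10, 10], "May": [5, 20, 10]}
weights = {"x": [1.00, 0.90, 0.80], "y": [0.95, 0.75, 0.60],
           "z": [0.90, 0.85, 0.65]}

# sumproduct of each month's costs with each option's weights
totals = {month: {opt: sum(c * w for c, w in zip(row, wrow))
                  for opt, wrow in weights.items()}
          for month, row in costs.items()}
print(round(totals["Mar"]["y"], 2))  # 31.3
```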
What type of philosophy does both Kasmir Shaivism and Advaita Vedanta share? | non-dual |
Bundling win32lfn extension with Mercurial. I am working on a Windows server application which uses Mercurial for folder synchronisation between servers (so that we only copy the changes each time). The installer includes a bundled hg folder including all the dlls etc. for running Mercurial. This currently works well; however, users have been reporting errors with files with long paths. There is a win32lfn extension, <a href="http://mercurial.selenic.com/wiki/Win32LongFileNamesExtension" rel="nofollow">http://mercurial.selenic.com/wiki/Win32LongFileNamesExtension</a>, which deals with this issue. I would like to add this to the Mercurial bundle in the installer. If I copy the win32lfn.py file into the hg folder, add the relevant line to Mercurial.ini and then run hg.exe, I get the message: <blockquote> win32lfn: This extension requires the pywin32 extension </blockquote> If I attempt to install Python for Windows Extensions (<a href="http://sourceforge.net/projects/pywin32/" rel="nofollow">http://sourceforge.net/projects/pywin32/</a>), the installer tells me I need to install Python 2.7 first. I can do this and get it working on my computer; however, I need to bundle this with my installer without requiring the user to install Python. Mercurial comes with a Python bundled - how can I add the Python for Windows Extensions to this bundle? I have tried copying the dlls into the hg folder without success. Can anyone direct me to a solution? | This is pretty fortuitous: I just made a commit to win32lfn removing the pywin32 dependency. |
Why does this dict give me a syntax error? ````# create a mapping of state to abbreviation states = [ 'Oregon': 'OR', 'Florida': 'FL', 'California': 'CA', 'New York': 'NY', 'Michigan': 'MI' ] ```` I am working through Learn Python the Hard Way, and this lesson has me stumped. I have looked everywhere, but I cannot figure out why it is giving me this error: ````oldlink:my-python HarveyMark$ python ex39.py File "ex39.py", line 3 'Oregon': 'OR', ^ SyntaxError: invalid syntax ```` | When you use `[` it indicates that you are defining a `list`, not a `dict`. For a dict, use `{` and `}`. Example: ````states = { 'Oregon': 'OR', 'Florida': 'FL', 'California': 'CA', 'New York': 'NY', 'Michigan': 'MI' } ```` |
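A quick check that the brace form parses and behaves as a mapping:

```python
states = {
    'Oregon': 'OR',
    'Florida': 'FL',
    'California': 'CA',
    'New York': 'NY',
    'Michigan': 'MI',
}

# key lookup and size work as expected for a dict
print(states['Oregon'])  # OR
print(len(states))       # 5
```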
I have installed pillow using pip3, but pyautogui fails with error code 1. Every forum I have looked at says that: ````pip install pillow ```` remedies issues with installing pyautogui; however, I have installed pillow and I am still receiving: ````python setup.py egg_info failed with error code 1 ```` Any suggestions? I also tried installing PIL, but that failed as well with the same error. | Have you tried upgrading your setuptools? What version of it do you use? Try this: ````pip install --upgrade setuptools ```` If you have not installed setuptools, use: ````pip install setuptools ```` |
Recursive file renaming in python. I am trying to write some code to recursively rename files in python. I have got a root folder which has another folder inside it, and yet another folder inside that folder. Each of these folders has a file named "Name.txt", and I would like to change them to "Test.txt" in order to understand how os.walk() and os.rename() work. I have written this code: ````# -*- coding: utf-8 -*- import os def renamefiles(path): rootstructure = os.walk(path, topdown=False) for root, dirs, files in os.walk(rootstructure): for filenames in files: fullfilename = os.path.abspath(filenames) os.rename(fullfilename, "Test.txt") renamefiles(".") ```` However, I get this error: ````File "/usr/lib/python2.7/os.py", line 278, in walk names = listdir(top) TypeError: coercing to Unicode: need string or buffer, generator found ```` What am I doing wrong? Thanks in advance | `os.rename` can be destructive. Use it carefully. You had `rootstructure` initialised to an os.walk generator for some reason; you need to initialise it to the path of the current directory instead, and build each file's full path from the `root` that os.walk yields: ````import os def renamefiles(path): rootstructure = os.path.abspath(path) for root, dirs, files in os.walk(rootstructure): for filename in files: fullfilename = os.path.join(root, filename) print(fullfilename) # Use this carefully, it can wipe off your entire system # if not used carefully os.rename(fullfilename, os.path.join(root, "Test.txt")) renamefiles(".") ```` |
How did humans cause the aridification of the Sahara? | null |
What means there is an increase in structural damage or absorption inhibition? | null |
Parsing HTML in python3: re, html.parser, or something else? I am trying to get a list of craigslist states and their associated urls. Do not worry, I have no intentions of spamming; if you are wondering what this is for, see the * below. What I am trying to extract begins the line after 'us states' and is the next 50 < li >'s. I read through html.parser's docs and it seemed too low level for this, more aimed at making a dom parser or syntax highlighting/formatting in an ide as opposed to searching, which makes me think my best bet is using re's. I would like to keep myself contained to what is in the standard library, just for the sake of learning. I am not asking for help writing a regular expression, I will figure that out on my own; just making sure there is not a better way to do this before spending the time on that. *This is my first program or anything beyond simple python scripts. I am making a c++ program to manage my posts and remind me when they have expired in case I want to repost them, and a python script to download a list of all of the US states and cities/areas in order to populate a combobox in the gui. I really do not need it, but I am aiming to make this 'production ready'/feature complete, both as a learning exercise and to create a portfolio to possibly get a job. I do not know if I will make the program publicly available or not; there is obvious potential for misuse and it is probably against their ToS anyway. | There is <a href="http://docs.python.org/dev/library/xml.etree.elementtree.html" rel="nofollow">xml.etree</a>, an XML parser available in the Python standard library itself. You should not be using regex for parsing XML or HTML. Go to the particular node where you find the information and extract the links from that. |
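For HTML that is not well-formed XML, the standard library also offers `html.parser`. A sketch of collecting link targets and texts from an `<li>`-heavy snippet (the markup below is invented for illustration, not craigslist's actual page):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect (href, text) pairs for every <a> tag fed to the parser."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._current = None  # href of the <a> we are currently inside

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current = dict(attrs).get("href")

    def handle_data(self, data):
        if self._current is not None and data.strip():
            self.links.append((self._current, data.strip()))
            self._current = None

p = LinkCollector()
p.feed('<ul><li><a href="http://example.org/al">alabama</a></li>'
       '<li><a href="http://example.org/ak">alaska</a></li></ul>')
print(p.links)
```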
When were proposals for a Chief Councillor put on hold? | 23 March 2013 |
PyPI: can I claim to be the new maintainer of an unmaintained package? I have done quite a lot of work on a python package named <strong>"django-hstore"</strong> recently (link to my repo: <a href="https://github.com/nemesisdesign/django-hstore">https://github.com/nemesisdesign/django-hstore</a>). The latest version published on PyPI was released more than a year ago (link to PyPI: <a href="https://pypi.python.org/pypi/django-hstore/1.1.1">https://pypi.python.org/pypi/django-hstore/1.1.1</a>), but the very curious thing is that the github repository linked on PyPI is not the one of the original author. Moreover, this github user does not seem to have any commit with his github account (<a href="https://github.com/aino/django-hstore">https://github.com/aino/django-hstore</a>). He has not answered my email and pull request so far; what happens if he does not answer at all? My questions are: - Can I claim to be the new maintainer of this package? Is there a way to do so? - Has a similar occurrence ever happened? <blockquote> Note: I am sorry if this question is not specific about programming, but I think it is still related to programming somehow; if there is any better stackexchange site to ask this question, please let me know </blockquote> | You could try filing a <a href="http://sourceforge.net/p/pypi/support-requests/" rel="nofollow">support request</a> to take over the package; I do not believe PyPI has native support for it, however. |
base64encode altchars in java. What is the java version of this python method? <a href="http://docs.python.org/library/base64.html" rel="nofollow">http://docs.python.org/library/base64.html</a> base64.b64encode(s[, altchars]): Encode a string using Base64. Optional altchars must be a string of at least length 2 (additional characters are ignored) which specifies an alternative alphabet for the + and / characters. This allows an application to e.g. generate URL or filesystem safe Base64 strings. The default is None, for which the standard Base64 alphabet is used. Does java support altchars? | Apache Commons `Base64` has a constructor (and various static methods) for URL-safe Base64 encoding: <a href="http://commons.apache.org/codec/apidocs/org/apache/commons/codec/binary/Base64.html#Base64%28int,%20byte%5b%5d,%20boolean%29" rel="nofollow">`Base64(int lineLength, byte[] lineSeparator, boolean urlSafe)`</a> <blockquote> `urlSafe` - Instead of emitting '`+`' and '`/`' we emit '`-`' and '`_`' respectively. `urlSafe` is only applied to encode operations. Decoding seamlessly handles both modes. </blockquote> |
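For reference, the Python side behaves like this (standard library only; the byte string is chosen so that both substituted characters appear in the output):

```python
import base64

data = b"\xfb\xff\xfe"
print(base64.b64encode(data))          # b'+//+'  standard alphabet
print(base64.b64encode(data, b"-_"))   # b'-__-'  altchars for + and /
print(base64.urlsafe_b64encode(data))  # b'-__-'  shorthand for the same
```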
Simple map function error. I am trying out the `map` function and it is giving me unexpected output: ````map(lambda x: x, range(3)) <builtins.map at 0x7fc3f6c0ab70> ```` When I try to call it with `map(lambda x: x, range(3))()` it says `map` is not callable. | I think what you are looking for is ````>>> list(map(lambda x: x, range(3))) [0, 1, 2] ```` `map` returns an iterator. The message you are seeing is simply the object type of the instance you have just created: ````>>> map(lambda x: x, range(3)) <map object at 0x02E11F10> ```` |
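To see the iterator nature directly, note that a map object is consumed as you read from it and is exhausted afterwards:

```python
m = map(lambda x: x * 2, range(3))
print(next(m))   # 0
print(list(m))   # [2, 4] -- the first item was already consumed
print(list(m))   # []     -- the iterator is now exhausted
```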
Python Selenium: return HTML location. I have a simple HTML table consisting only of tr/td. I need to return the exact table row and table column number of an exact record. The solution should be able to handle any number of rows/columns. To find that item I use: ````webdriver.find_element_by_xpath("//*[contains(text(), '8')]") ```` Now I need to return its location; so far I have been looking into attributes like `value_of_css_property` / `getCssValue` etc. I am new to selenium, any help would be appreciated. | If you need to locate a row containing a `td` element containing a specific text: ````element = webdriver.find_element_by_xpath("//tr[contains(td, '8')]") ```` Or if you need to locate a specific cell (`td` element): ````element = webdriver.find_element_by_xpath("//td[contains(., '8')]") ```` Then you can get the <a href="https://selenium-python.readthedocs.org/api.html#selenium.webdriver.remote.webelement.WebElement.location" rel="nofollow">`location`</a>: ````print(element.location) ```` This would give you the <em>coordinates</em> of the element on the page. <hr> If you need to get the row and column number of a cell containing the desired text: ````table = webdriver.find_element_by_id("mytable") rows = table.find_elements_by_tag_name("tr") for row_index, row in enumerate(rows, start=1): cells = row.find_elements_by_tag_name("td") for column_index, cell in enumerate(cells, start=1): if "8" in cell.text: print("Found a match!") print(row_index, column_index) break ```` |
python's re: return True if regex is contained in the string. I have a regular expression like this: ````regexp = u'ba[r|z|d]' ```` The function must return True if the word contains <strong>bar</strong>, <strong>baz</strong> or <strong>bad</strong>. In short, I need a regexp analog for Python's ````'any-string' in 'text' ```` How can I realize it? Thanks! | `Match` objects are always true, and `None` is returned if there is no match. Just test for trueness. (Note: inside `[]` the `|` is a literal character, so `ba[rzd]` is the character class you want.) Code: ````>>> st = 'bar' >>> m = re.match(r"ba[rzd]", st) >>> if m: m.group(0) 'bar' ```` Output = `bar`. If you want `search` functionality: ````>>> st = "bar" >>> m = re.search(r"ba[rzd]", st) >>> if m is not None: m.group(0) 'bar' ```` and if the `regexp` is not found, then ````>>> st = "hello" >>> m = re.search(r"ba[rzd]", st) >>> if m: m.group(0) else: print "no match" no match ```` <strong>As @bukzor mentioned, if st = `foo bar` then match will not work. So it is more appropriate to use `re.search`.</strong> |
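Wrapped up as the boolean helper the question asks for:

```python
import re

def contains_bar_baz_bad(text):
    # re.search scans the whole string; bool() turns the
    # match-object-or-None result into True/False
    return bool(re.search(r"ba[rzd]", text))

print(contains_bar_baz_bad("rebar"))  # True
print(contains_bar_baz_bad("hello"))  # False
```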
assign a value of dictionary's key into a variable. I have the following code in Python: ````def buildXmlUpdate(dfrom, roomId, ldays): start_date_sard.text = dfrom roomId = str(roomId) room_id_sard.text = roomId ldays = {'avail': str(), 'price': str()} availability_in_data.text = ldays['avail'] price_in_data.text = ldays['price'] for n in ldays: print (dfrom, roomId, ldays) ```` Now when running ````buildXmlUpdate('21/12/2015', 1, [{'avail': 1, 'price': 100}, {'avail': 3, 'price': 120}]) ```` I get the following output ````('21/12/2015', '1', {'avail': '', 'price': ''}) ('21/12/2015', '1', {'avail': '', 'price': ''}) ```` instead of what I want, which is: ````('21/12/2015', '1', {'avail': 1, 'price': 100}) ('21/12/2015', '1', {'avail': 3, 'price': 120}) ```` As you see here, the dictionary `avail` and `price` keys are set to an empty string, but I want to set them according to the `ldays` arguments in the method. What am I doing wrong? | Solved: ````def buildXmlUpdate(dfrom, roomId, ldays): start_date_sard.text = dfrom roomId = str(roomId) room_id_sard.text = roomId #ldays = {'avail': str(), 'price': str()} #availability_in_data.text = ldays['avail'] #price_in_data.text = ldays['price'] for n in ldays: print (dfrom, roomId, n) #availability_in_data.text = get ldays['avail'] #price_in_data.txt = get ldays['price'] ldays[-1]['avail'] = str(ldays[-1]['avail']) ldays[-1]['price'] = str(ldays[-1]['price']) availability_in_data.text = ldays[-1]['avail'] price_in_data.text = ldays[-1]['price'] ```` Thank you all! |
how to integrate spiders and scrapy-ctl.py. I am new to python and scrapy, and hence am getting some basic doubts (please spare my ignorance about some fundamentals, which I am willing to learn :D). Right now I am writing some spiders and implementing them using scrapy-ctl.py from the command line by typing: ````C:\Python26\dmoz>python scrapy-ctl.py crawl spider ```` But I do not want two separate python codes and a command line to implement this. I want to somehow define a spider and make it crawl urls by writing and running a single python code. I could notice that in the file scrapy-ctl.py, 'execute' of type <em>function</em> is imported, but I am clueless as to how this function can be defined in the code containing the spider. Could someone explain to me how to do this, if it is possible, because it greatly reduces the work. Thanks in advance!! | <blockquote> But I do not want two separate python codes and a command line to implement this. I want to somehow define a spider and make it crawl urls by writing and running a single python code </blockquote> I am not sure the effort pays out if you just want to scrape something. You have at least two options: - Dig into `scrapy/cmdline.py`. You will see that this is a kind of dispatch script, finally handing over the work to the `run` method of the specified command, here `crawl` (in `scrapy/commands/crawl.py`). Look at line 54; I think `scrapymanager.start()` will begin your actual command after some setup. - A little hacky method: use Python's <a href="http://docs.python.org/library/subprocess.html" rel="nofollow">`subprocess`</a> module to have your project and execution in one file (or project). |
Making an asynchronous call synchronous in Python. What is the best way to make an asynchronous call appear synchronous? E.g. something like this, but how do I coordinate the calling thread and the async reply thread? In Java I might use a CountDownLatch() with a timeout, but I cannot find a definite solution for Python: ````def getDataFromAsyncSource(): asyncService.subscribe(callback=functionToCallbackTo) # wait for data return dataReturned def functionToCallbackTo(data): dataReturned = data ```` | There is a module you can use: ````import concurrent.futures ```` Check this post for sample code and a module download link: <a href="http://gosteven.com/blog/2012/05/13/concurrent-tasks-execution-in-python/">Concurrent Tasks Execution in Python</a> You can put executor results in futures, then get them; here is the sample code from <a href="http://pypi.python.org">http://pypi.python.org</a>: ````import concurrent.futures import urllib.request URLS = ['http://www.foxnews.com/', 'http://www.cnn.com/', 'http://europe.wsj.com/', 'http://www.bbc.co.uk/', 'http://some-made-up-domain.com/'] def load_url(url, timeout): return urllib.request.urlopen(url, timeout=timeout).read() with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor: future_to_url = dict((executor.submit(load_url, url, 60), url) for url in URLS) for future in concurrent.futures.as_completed(future_to_url): url = future_to_url[future] if future.exception() is not None: print('%r generated an exception: %s' % (url, future.exception())) else: print('%r page is %d bytes' % (url, len(future.result()))) ```` |
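Closer to the Java CountDownLatch idiom, `threading.Event` with a timeout blocks the caller until the callback fires. A sketch with an invented stand-in for the async service (all names here are hypothetical):

```python
import threading

def get_data_sync(subscribe, timeout=5.0):
    done = threading.Event()
    result = {}

    def callback(data):
        result["data"] = data
        done.set()  # release the waiting caller

    subscribe(callback)
    if not done.wait(timeout):  # wait() returns False on timeout
        raise TimeoutError("no reply within %s s" % timeout)
    return result["data"]

# fake async source: replies from another thread after a short delay
def fake_subscribe(cb):
    threading.Timer(0.1, cb, args=("payload",)).start()

print(get_data_sync(fake_subscribe))  # payload
```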
Make a function to call while the program is running. For example, I have a program that can add letters to and remove letters from a list. Here is the code: ````my_list = ['a', 'b', 'c', 'd', 'e', 'f'] do = input("Press 'a' to append and 'r' to remove: ") if do == 'a': letter = input("Enter a letter to append: ") my_list.append(letter) print (my_list) elif do == 'r': letter = input("Enter a letter to remove: ") my_list.remove(letter) print (my_list) else: print ("Something gone wrong.") ```` To remove a letter from the list I have to tell the program what I am going to do, and then it asks me for a letter to remove. Is there any possible way to call my own function (just to make it easier to use the program), like this: ````def removing(letter): my_list.remove(letter) print (my_list) ```` To use the function in the console like this: ````What are you going to do? removing(b) ```` | Here is a somewhat restructured suggestion. It asks the user to either append something or remove something: ````my_list = ['a', 'b', 'c', 'd', 'e', 'f'] choices = {'remove': my_list.remove, 'append': my_list.append} print my_list while True: try: choice, item = raw_input('append <x> OR remove <x>\n').split() choices[choice](item) except (KeyError, ValueError): print('something went wrong.') print my_list ```` Demo: ````['a', 'b', 'c', 'd', 'e', 'f'] append <x> OR remove <x> append z ['a', 'b', 'c', 'd', 'e', 'f', 'z'] append <x> OR remove <x> remove d ['a', 'b', 'c', 'e', 'f', 'z'] append <x> OR remove <x> remove y something went wrong. ['a', 'b', 'c', 'e', 'f', 'z'] ```` This should give you an idea/get you started. The dictionary is easily extendable. |
Pandas multi-index: best way to slice for ranges of subordinate levels. I have two questions concerning pandas dataframe multi-indices. Assume I have a data-frame df as follows: ```` data port bm pf sector instrument date 1 A 2013-01-14 0 0 2013-01-15 5 5 2013-01-16 10 10 2013-01-17 15 15 2013-01-18 20 20 ```` which can be generated with the following code: ````import pandas as pd date = pd.bdate_range('2013-01-14', '2013-01-20').repeat(5) sector = [1, 1, 1, 2, 2] * 5 df = pd.DataFrame(dict(port=['pf']*25, sector=sector, instrument=list('ABCDE')*5, date=date, data=xrange(25))) df = pd.concat([df, pd.DataFrame(dict(port=['bm']*25, sector=sector, instrument=list('ABCDE')*5, date=date, data=xrange(25)))], axis=0) df = df.set_index(['port', 'sector', 'instrument', 'date']) df = df.unstack('port') ```` I want to get two sets of results: all the values on 2013-01-17, and all the values from 2013-01-17 to the end of the series. For the first, I know I can use one of the following approaches: ````idx = pd.IndexSlice targetdate = pd.Timestamp('2013-01-17') slicer = (slice(None), slice(None), targetdate) ```` 1) `df.loc[slicer, :]` 2) `df.xs(pd.Timestamp('2013-01-17'), level=2)` 3) `df.xs(slicer)` 4) `df[idx[:, :, targetdate], :]` all of which seem quite clunky. Is there a more obvious way I am missing? What other ways are there to achieve this? I guess I am hoping there is something like `df.loc(level=2)[targetdate]` (which does not work, of course). For the second, I have only come up with one solution: ````query = df.index.get_level_values(2) >= pd.Timestamp('2013-01-17') df[query] ```` Again, is there a more efficient way to do this? Final bonus question: what does `df.index.get_loc_level()` do? I feel like it should help with this, but I have no idea how to use it. Thanks | I think this masking like you are doing is going to be pretty good here: ````query = df.index.get_level_values(2) >= pd.Timestamp('2013-01-17') df[query] ```` If you have lots of repetition in dates, you may improve performance with something lower-level: ````query = (df.index.levels[2] >= pd.Timestamp("2013-01-17"))[df.index.labels[2]] df[query] ```` <em>I will probably get shouted at for that! But it will be significantly faster in some cases.</em> <hr> `get_loc_level` is similar to loc, i.e. label based rather than by position: <blockquote> Get integer location, slice for requested label or tuple </blockquote> ````In [21]: df.index.get_loc_level(2) Out[21]: (slice(15, 25, None), MultiIndex(levels=[[u'A', u'B', u'C', u'D', u'E'], [2013-01-14 00:00:00, 2013-01-15 00:00:00, 2013-01-16 00:00:00, 2013-01-17 00:00:00, 2013-01-18 00:00:00]], labels=[[3, 3, 3, 3, 3, 4, 4, 4, 4, 4], [0, 1, 2, 3, 4, 0, 1, 2, 3, 4]], names=[u'instrument', u'date'])) ```` By default it takes the first index, but you can pass in more: ````In [21]: df.index.get_loc_level((1, "A")) Out[21]: (slice(0, 5, None), <class 'pandas.tseries.index.DatetimeIndex'> [2013-01-14, 2013-01-18] Length: 5, Freq: None, Timezone: None) ```` |
django template question (accessing a list). I am writing a template for my first django website. I am passing a list of dictionaries to the template in a variable. I also need to pass a few other lists which hold boolean flags. (Note: all lists have the same length.) The template looks something like this: ````<html> <head><title>First page</title></head><body> {% for item in data_tables %} <table> <tbody> <tr><td colspan="15"> {% if level_one_flags[forloop.counter-1] %} <tr><td>Premier League {% endif %} <tr><td>Junior league <tr><td>Member count {% if level_two_flags[forloop.counter-1] %} <tr><td>Ashtano League {% endif %} </tbody> </table> {% endfor %} </body> </html> ```` I am getting the following error: <blockquote> Template error. In template /mytemplate.html, error at line 7: Could not parse the remainder: '[forloop.counter-1]' from 'level_one_flags[forloop.counter-1]' </blockquote> I am not surprised I am getting this error, since I was just trying to see if it would work. So far, from the documentation, I have not found out how to obtain the items in a list by index (i.e. other than by enumeration). Does anyone know how I may access a list by index in a template? | You use the <a href="http://docs.djangoproject.com/en/dev/topics/templates/" rel="nofollow">dot-operator</a> to index the array, or really to do anything. <blockquote> Technically, when the template system encounters a dot, it tries the following lookups in this order: ````* Dictionary lookup * Attribute lookup * Method call * List-index lookup ```` </blockquote> I do not believe you can do math on the index. You will have to pass in your array constructed in some other way so that you do not have to do this subtraction. |
Django - Best practice to join two models I am developing a project that allows users to take a test based on some random questions My models.py has these two classes: ````class Question(models.Model): content = models.CharField() class Answer(models.Model): content = models.CharField() isCorrect = models.BooleanField() question = models.ForeignKey(Question) ```` And in my views.py I get 20 random questions using this query: ````questions = Question.objects.order_by('?')[:20] ```` With this approach I have only the questions but I want also the answers related to every question I found some solutions but I would like to know what could be the best practice to get Question and related Answers? Can I add them to the Question constructor? Thanks! | You can do as @karthikr said but it will make an extra database call for each question I would do it maybe this way: ````questions = Question.objects.order_by('?')[:20] answers = Answer.objects.filter(question__in=questions) # some databases will not support this so use: # answers = Answer.objects.filter(question_id__in=[q.id for q in questions]) for question in questions: answers_for_question = filter(lambda answer: answer.question_id == question.id, answers) ```` Which is only 2 db calls instead of 21 (For really large sets of questions make use of <a href="http://docs.python.org/2/library/itertools.html" rel="nofollow">itertools</a> to get the answers for even better performance)
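The grouping step in the answer can be sketched without Django at all: fetch all answers once, then bucket them by `question_id` in a single pass. Plain dicts stand in for model instances here, and the ids are made up for illustration.

```python
from collections import defaultdict

# Stand-ins for Question/Answer model instances
questions = [{"id": 1}, {"id": 2}]
answers = [
    {"question_id": 1, "content": "a"},
    {"question_id": 2, "content": "b"},
    {"question_id": 1, "content": "c"},
]

# One pass over the answers builds the question_id -> answers mapping
by_question = defaultdict(list)
for answer in answers:
    by_question[answer["question_id"]].append(answer)

# Attach the related answers to each question
for q in questions:
    q["answers"] = by_question[q["id"]]
```

This is the same "two queries plus an in-memory join" idea the answer describes, just made explicit.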
Fix bootstrap field columns on mobiles I am creating a simple webpage to present on the pc web browser as well as on mobiles I am using python Flask to create it Here is the code of my template ````{% block content %} <div class="container"> <div class="jumbotron" > <div class="row"> <h1 class="text-center">Header</h1> <h2 class="text-center">Subtitle </h2> <p class="text-center">Comment </p> <br> </div> <form class="form form-inline" method="post" role="form"> <div class="row"> {{ form.hidden_tag() }} {{ wtf.form_errors(form, hiddens="only") }} {{ wtf.form_field(form.urlid, placeholder='Hint') }} {{ wtf.form_field(form.submit) }} <span class="help-block text-center">Help block </span> </div> <p>Comment </p> </form> <br><br><br><br><br><br><br> </div> </div> {%- endblock %} ```` The result in the pc browser is OK See it <a href="http://i.stack.imgur.com/GHv75.png" rel="nofollow">enter image description here</a> But the form in my mobile is not inline See it <a href="http://i.stack.imgur.com/IfT6S.jpg" rel="nofollow">enter image description here</a> How can I fix it? I tried this but it did not work either ````{{ wtf.form_field(form.urlid, placeholder='Hint', horizontal_columns=('sm', 2, 4)) }} ```` Thank you in advance | Mobile is below the `sm` breakpoint for bootstrap so the input is taking up the whole width If you want the input to take up the width you have specified use the `xs` breakpoint ` {{ wtf.form_field(form.urlid, placeholder='Hint', horizontal_columns=('xs', 2, 4)) }} `
Python String Comparison is not working So I have the following chunk of code in python which basically opens a text file makes an array out of the file's text contents (splitting it line by line) and then proceeds to take input from a user and compare that input to each of the terms in the array Despite the fact that a user-submitted string is in the array exactly as it was typed the program is unable to tell that the two strings are equal How do I compare the strings? And if this notation is correct what could possibly be the cause of inequality? I tried substituting "is" for "==" but that did not work either both the input for `userinput` is `frogeyedpeas` and `UserArray[0]` is `frogeyedpeas` The file Usernames.txt ````frogeyedpeas vsauce ```` (there is an additional line that is entirely blank after vsauce) Code posted below: ````Usernames = open('Usernames.txt', 'r+') UserArray = Usernames.read().split("\n") userinput = raw_input("Enter Username: ") passinput = raw_input("Enter Password: ") i = 0 q = len(UserArray) while(i < q): founduser = 0 print userinput, UserArray[i], UserArray # out if(UserArray[i] == userinput): founduser = 1 if(PassArray[i] == passinput): userstatus = i founduser = 2 i = i + 1 ```` | You are testing for object identity: ````UserArray[i] is userinput: ```` you really want to use <them>equality</them> instead: ````UserArray[i] == userinput: ```` `is` tests if two object references are pointing to the same object `==` tests if the values of two objects are equivalent Two strings that have the same contents (are equal) are not necessarily one and the same object: ````>>> spam = 'foo bar baz' >>> ham = 'foo bar baz' >>> spam is ham False >>> spam == ham True >>> eggs = ham >>> eggs is ham True ```` CPython does use some tricks to save memory and improve performance where small integers and some strings are <them>interned</them>; every time you try to create a new string or integer that is interned you instead get an already cached value In those cases `is` tests return `True` <them>only</them> because you indeed have the same object reused for you by the interpreter Note that the equality test can still be incorrect There could be extra whitespace around the values or other characters that are not immediately obvious when printed to the screen: ````>>> val1 = 'frogeyedpeas' >>> val2 = 'frogeyedpeas\t' >>> val1 == val2 False >>> print val1, val2 frogeyedpeas frogeyedpeas >>> print repr(val1), repr(val2) 'frogeyedpeas' 'frogeyedpeas\t' ````
What is a popular use for LED lighting? | traffic signals |
On which island in Tuvalu is the hospital? | Funafuti |
During what period did the African civil war last? | 1989 to 1996 |
What is wrong with the logic in this sequence? I am a Python beginner and I decided that I wanted to create a program over my summer holiday before I did this however I decided I would make a small program before to practice creating GUI's I am working on a joke machine that uses the random module to select a random integer that will decide which joke is displayed however it seems to only ever output joke 3 regardless of how many times I run it I cannot see any issues here any advice? ````joke_select = random.randint(1, 3) joke1 = "Why do you never see elephants hiding in trees?\n because they are really good at it!" joke2 = "What is grey and cannot climb a tree? \n A parking lot" joke3 = "What is red and bad for your teeth?\n A brick" if joke_select == '1': joke_label = tkinter.Label(main, text=joke1) elif joke_select == '2': joke_label = tkinter.Label(main, text=joke2) else: joke_label = tkinter.Label(main, text=joke3) def get_joke(): joke_label.pack() ```` | ````if joke_select == '1': ```` `joke_select` is only ever an integer but you are comparing it to a string here Try comparing to an integer instead ````if joke_select == 1: ```` Likewise for `elif joke_select == '2':`
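Beyond fixing the int/str comparison, the if/elif chain can disappear entirely by keeping the jokes in a list and indexing it with the random integer. A minimal sketch (no tkinter involved, just the selection logic):

```python
import random

jokes = [
    "Why do you never see elephants hiding in trees? Because they are really good at it!",
    "What is grey and cannot climb a tree? A parking lot",
    "What is red and bad for your teeth? A brick",
]

# randint returns an int, so compare (or index) with ints, not strings
joke_select = random.randint(1, 3)
assert isinstance(joke_select, int)

# list indices are 0-based, hence the -1
joke = jokes[joke_select - 1]
print(joke)
```

The selected string would then be passed to `tkinter.Label(..., text=joke)` as in the original code.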
python 2 7 word generator Algorithm: take input on how many letters to go back for loop to loop a-z lock the first character loop the second character lock the first two loop the third and so on and so forth The output will look like a b c d aa ab ac ad aaa aab aac and so on I am very new to python I have something that cycles through the alphabet but my problem is to lock the first and cycle the second and so on ````w = 'abcdefghijklmnopqrstuvwxyz' n = input("# of characters: ") for a in range(0, n): for i in w: print i ```` | ````alphabet = 'abcdefghijklmnopqrstuvwxyz' l = [''] for i in range(n): # n from the input() call above l = [letter + item for letter in alphabet for item in l] for item in l: print(item) ```` I think this is what you are looking for
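The cumulative output the question asks for (all words of length 1 up to n, in order) can also be produced with `itertools.product`, which handles the "lock the first character, cycle the rest" logic for you. A small sketch with a shortened alphabet so the output stays readable:

```python
from itertools import product

alphabet = "abc"  # shortened alphabet to keep the demo small
n = 2             # would come from user input in the real script

words = []
for length in range(1, n + 1):
    # product(alphabet, repeat=length) yields every length-letter tuple
    # in lexicographic order: ('a','a'), ('a','b'), ...
    for combo in product(alphabet, repeat=length):
        words.append("".join(combo))

print(words)
```

With the full 26-letter alphabet the same loop prints a, b, ..., z, aa, ab, and so on, exactly matching the requested sequence.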
Who aren't the CAF supposed to protect? | null |
save ImageField mongoengine I have the following class definition in mongoengine orm: ````import mongoengine as me class Description(me.Document): user = me.ReferenceField(User, required=True) name = me.StringField(required=True, max_length=50) caption = me.StringField(required=True, max_length=80) description = me.StringField(required=True, max_length=100) image = me.ImageField() ```` in my post method of my tornado web requesthandler: ````from PIL import Image def post(self, *args, **kwargs): merchant = self._merchant data = self._data obj_data = {} if merchant: params = self.serialize() # I am getting params dict NO Issues with this obj_data['name'] = params.get('title', None) obj_data['description'] = params.get('description', None) path = params.get('file_path', None) image = Image.open(path) print image # ** obj_data['image'] = image # this is also working fine obj_data['caption'] = params.get('caption', None) obj_data['user'] = user des = Description(**obj_data) des.save() print obj_data['image'] # ** print des.image # This is printing as <ImageGridFsProxy: None> ```` ** print obj_data['image'] and print image are printing the following: ````<PIL.PngImagePlugin.PngImageFile image mode=1 size=290x290 at 0x7F83AE0E91B8> ```` but <blockquote> des.image still remains None </blockquote> Please suggest me what is wrong here Thanks in advance to all | You cannot just put PIL objects into a field with `obj.image = image` that way You must do: ````des = Description() des.image.put(open(params.get('file_path', None))) des.save() ```` In other words `ImageField` should be filled with a file object after creating an instance by calling the `put` method
why does it appear "can't multiply sequence by non-int of type 'str'"? I think I already gave the width and height an integer value Thanks ````# This is a calculator shape = raw_input('what shape do you want to calculate? rectangle circle triangle square? ') if shape == 'rectangle': width = raw_input('Please tell me the width of rectangle: ') height = raw_input('Please tell me the height of rectangle: ') print 'the area of the rectangle is: ', width*height elif shape == 'circle': radius = raw_input('Please tell me the radius of the circle: ') print 'the area of the circle is: ', 3.14 * int(radius)**2 ```` | `raw_input` returns a string Use `width = int(raw_input( ))` and the same thing for `height` to convert them to actual integers Demo: ````>>> width = raw_input('Please tell me the width of rectangle: ') Please tell me the width of rectangle: 5 >>> width '5' >>> type(width) <type 'str'> >>> width * width Traceback (most recent call last): File "<input>", line 1, in <module> TypeError: can't multiply sequence by non-int of type 'str' >>> width_int = int(width) >>> width_int 5 >>> type(width_int) <type 'int'> >>> width_int * width_int 25 ```` Use `float` instead of `int` if you want numbers that have a fractional part
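Since user input arrives as a string, a common pattern is to wrap the conversion in a small helper that rejects non-numeric input instead of crashing at the multiplication. This is a sketch, not part of the original answer; the helper name is made up:

```python
def read_int(text):
    # raw_input/input return strings; convert explicitly and
    # signal bad input with None instead of raising later
    try:
        return int(text)
    except ValueError:
        return None

print(read_int("5"))
print(read_int("abc"))
```

In the calculator above you would call it as `width = read_int(raw_input('width: '))` and re-prompt while the result is `None`.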
Python: get the string between two capitals I would like your opinion as you might be more experienced in Python than I am I came from C++ and I am still not used to the Pythonic way to do things I want to loop over a string between 2 capital letters For example I could do that this way: ```` i = 0 str = "PythonIsFun" for i, z in enumerate(str): if(z.isupper()): small = '' x = i + 1 while(not str[x].isupper()): small += str[x] ```` I wrote this on my phone so I do not know if this even works but you caught the idea I presume I need you to help me get the best results on this not just in a non-forced way to the cpu but clean code too Thank you very much | You can use a list comprehension to do this easily ````>>> s = "PythonIsFun" >>> u = [i for i, x in enumerate(s) if x.isupper()] >>> s[u[0]+1:u[1]] 'ython' ```` If you cannot guarantee that there are two upper case characters you can check the length of `u` to make sure it is at least 2 This does iterate over the entire string which could be a problem if the two upper case characters occur at the start of a lengthy string
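A regex is another compact option the answer does not mention: capture the lowercase run between the first two capitals, which also stops scanning as soon as the match is found. A hedged sketch:

```python
import re

s = "PythonIsFun"
# [A-Z] anchors on the first capital, ([a-z]*) captures everything
# lowercase up to (but not including) the next capital
m = re.search(r"[A-Z]([a-z]*)[A-Z]", s)
between = m.group(1) if m else None
print(between)
```

Unlike the list-comprehension approach, `re.search` returns as soon as the second capital is seen, so it does not walk the rest of a long string.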
debug variable passed to python interpreter? In a script of mine I am using a Python module and I know I can turn on that module's <strong>DEBUG</strong> prints by just doing: ````LOGLEVEL=DEBUG python myscript py ```` I am new to (real) debugging so this is for sure a stupid question What is the name of this kind of <strong>variable passing</strong> in Python? | It is an <a href="http://en wikipedia org/wiki/Environment_variable" rel="nofollow">environment variable</a> The value is set and handled by the operating system and the Python script in question checks the value to determine the granularity of the logging |
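The consuming side can be sketched in a few lines: the module reads the environment variable (a plain string set by the shell) and maps it to a `logging` level. This is an illustrative pattern, not the specific module's code; the helper name is made up:

```python
import logging
import os

def level_from_env(name="LOGLEVEL", default="WARNING"):
    # Environment variables are plain strings; getattr maps the name
    # ("DEBUG", "INFO", ...) to the numeric logging level, falling
    # back to WARNING when the variable is unset or unrecognized
    return getattr(logging, os.environ.get(name, default).upper(), logging.WARNING)

os.environ["LOGLEVEL"] = "DEBUG"  # simulates `LOGLEVEL=DEBUG python myscript.py`
logging.basicConfig(level=level_from_env())
```

Running `LOGLEVEL=DEBUG python myscript.py` sets the variable only for that one process, which is why the module's debug prints appear without any code change.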
Where does the turtle (and its screen) appear in scipy/anaconda/spyder (2 7)? I am using SciPy/Anaconda/Spyder and when I put in: ````import turtle from turtle import Turtle turtle.getscreen() turtle.showturtle ```` and run it nothing happens Whereas in IDLE when the script is run a new screen appears with a "turtle" (the turtle being a right pointing arrow head) in the middle of it Where does the "turtle screen" appear in SciPy/Anaconda/Spyder? | To make this work you need to: - Select the <them>IPython</them> Console instead of the Python one - Enter this command there: `%gui tk` - Run your provided code (it works for me on Linux) <them>Note</them>: If you are on Windows unfortunately there is a bug in Anaconda that prevents people from using the `turtle` module This bug is not related to the module itself but to the graphical toolkit it uses to create the turtle animations
what is reflected and transmitted into the ground when a plane surface is struck? | electromagnetic wave |
Django - how to use user_passes_test with urls I am trying to use `user_passes_test` in my URL definitions for CBVs and views I want to use a similar syntax to this: ````url(r'^question_detail-(?P<pk>\w+)$', user_passes_test(not_in_group_chef, login_url='public_connexion')Question_detail.as_view(), name='detail_question') ```` I found: <a href="http://stackoverflow.com/questions/3139284/django-limiting-url-access-to-superusers">Django - limiting url access to superusers</a> and <a href="http://jonatkinson.co.uk/djangos-user_passes_test-and-generic-views/" rel="nofollow">http://jonatkinson.co.uk/djangos-user_passes_test-and-generic-views/</a> But it is not functional in my case Thank you | you are missing a pair of brackets in your code example does this work? ````url( r'^question_detail-(?P<pk>\w+)$', user_passes_test(not_in_group_chef, login_url='public_connexion')( Question_detail.as_view() ), name='detail_question' ) ````
Validate sqlalchemy session I am using sqlalchemy to store values in my database I want to write test cases that can validate a session The code for getting a session object: ````def get_session(): Base = declarative_base() engine = create_engine('postgresql+psycopg2://testdb:hello@localhost/mydatabase') Base.metadata.bind = engine DBSession = sessionmaker(bind=engine) session = DBSession() return session ```` The problem is that even if I put a wrong database name the above code still works When I try to commit to the database then it throws an error (OperationalError) How can I validate my session during its creation? | ````def validate(session): try: # Try to get the underlying session connection; if you can get it the database is up connection = session.connection() return True except: return False ````
Python Best way to "zip" char and list I want to "zip" a char and a list in Python: An example: <pre class="lang-py prettyprint-override">`char = '<' list = [3, 23, 67] "zip"(char, list) >>> [('<', 3), ('<', 23), ('<', 67)] ```` How I am using <strong>itertools.repeat()</strong>: <pre class="lang-py prettyprint-override">`itertools.izip(itertools.repeat(char, len(list)), list) >>> [('<', 3), ('<', 23), ('<', 67)] ```` It works but it would be interesting to find a more pythonic solution | ````[(char, i) for i in list] ```` Naming your list as "list" is probably not a good idea by the way as this shadows the constructor for the internal list type
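A small refinement of the question's own `repeat` approach: `zip` stops at the shortest iterable, so the infinite `repeat(char)` needs no explicit length at all. Shown here in Python 3 syntax (the question uses Python 2's `izip`, which behaves the same way):

```python
from itertools import repeat

char = "<"
values = [3, 23, 67]

# repeat(char) is infinite, but zip stops when `values` is exhausted,
# so no len() bookkeeping is needed
pairs = list(zip(repeat(char), values))
print(pairs)
```

The list comprehension in the answer is still the most idiomatic form; this variant is mainly useful when you are already composing iterators lazily.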
Understanding LDA implementation using gensim I am trying to understand how the gensim package in Python implements Latent Dirichlet Allocation I am doing the following: Define the dataset ````documents = ["Apple is releasing a new product", "Amazon sells many things", "Microsoft announces Nokia acquisition"] ```` After removing stopwords I create the dictionary and the corpus: ````texts = [[word for word in document.lower().split() if word not in stoplist] for document in documents] dictionary = corpora.Dictionary(texts) corpus = [dictionary.doc2bow(text) for text in texts] ```` Then I define the LDA model ````lda = gensim.models.ldamodel.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, update_every=1, chunksize=10000, passes=1) ```` Then I print the topics: ````>>> lda.print_topics(5) ['0.181*things + 0.181*amazon + 0.181*many + 0.181*sells + 0.031*nokia + 0.031*microsoft + 0.031*apple + 0.031*announces + 0.031*acquisition + 0.031*product', '0.077*nokia + 0.077*announces + 0.077*acquisition + 0.077*apple + 0.077*many + 0.077*amazon + 0.077*sells + 0.077*microsoft + 0.077*things + 0.077*new', '0.181*microsoft + 0.181*announces + 0.181*acquisition + 0.181*nokia + 0.031*many + 0.031*sells + 0.031*amazon + 0.031*apple + 0.031*new + 0.031*is', '0.077*acquisition + 0.077*announces + 0.077*sells + 0.077*amazon + 0.077*many + 0.077*nokia + 0.077*microsoft + 0.077*releasing + 0.077*apple + 0.077*new', '0.158*releasing + 0.158*is + 0.158*product + 0.158*new + 0.157*apple + 0.027*sells + 0.027*nokia + 0.027*announces + 0.027*acquisition + 0.027*microsoft'] 2013-12-03 13:26:21,878 : INFO : topic #0: 0.181*things + 0.181*amazon + 0.181*many + 0.181*sells + 0.031*nokia + 0.031*microsoft + 0.031*apple + 0.031*announces + 0.031*acquisition + 0.031*product 2013-12-03 13:26:21,880 : INFO : topic #1: 0.077*nokia + 0.077*announces + 0.077*acquisition + 0.077*apple + 0.077*many + 0.077*amazon + 0.077*sells + 0.077*microsoft + 0.077*things + 0.077*new 2013-12-03 13:26:21,880 : INFO : topic #2: 0.181*microsoft + 0.181*announces + 0.181*acquisition + 0.181*nokia + 0.031*many + 0.031*sells + 0.031*amazon + 0.031*apple + 0.031*new + 0.031*is 2013-12-03 13:26:21,881 : INFO : topic #3: 0.077*acquisition + 0.077*announces + 0.077*sells + 0.077*amazon + 0.077*many + 0.077*nokia + 0.077*microsoft + 0.077*releasing + 0.077*apple + 0.077*new 2013-12-03 13:26:21,881 : INFO : topic #4: 0.158*releasing + 0.158*is + 0.158*product + 0.158*new + 0.157*apple + 0.027*sells + 0.027*nokia + 0.027*announces + 0.027*acquisition + 0.027*microsoft >>> ```` I am not able to understand much out of this result Is it providing the probability of the occurrence of each word? Also what is the meaning of topic #1 topic #2 etc? I was expecting something more or less like the most important keywords I already checked the <a href="http://radimrehurek.com/gensim/wiki.html#latent-dirichlet-allocation">gensim tutorial</a> but it did not really help much Thanks | The answer you are looking for is in the <a href="http://radimrehurek.com/gensim/wiki.html#latent-dirichlet-allocation">gensim tutorial</a> `lda.print_topics(k)` prints the most contributing words for `k` randomly selected topics One can assume that this is (partially) the distribution of words over each of the given topics meaning the probability of those words appearing in the topic to the left Usually one would run this on a large corpus; running on a ridiculously small sample will not give the best results
which dictionary or hashtable's key can be a symbol expression? - I would like to store a list of ideals as values and use a polynomial or a function as a key in a dictionary or hashtable dict[x^2+x+1] = x+1 which function can do this like the table function in maple? show as this ```` dict[x^2+x+1] = [x+1, x+2, x+3, x+4, x+5] dict[x^3] = [x+1, x+2] ```` <old start="2"> - do sympy have a hilbert function? since i use hilbert function as key in dictionary | SymPy is just a Python library so you have all of Python at your disposal when you use it Python has a <a href="https://docs.python.org/2/tutorial/datastructures.html#dictionaries" rel="nofollow">dictionary</a> type built in ````>>> a = {x: 1} >>> a[x**2 + 1] = 2 >>> a {x: 1, x**2 + 1: 2} ````
finding the LCM using python def multiple(a b): """so I am trying to return the smallest number n that is a multiple of both a and b for example: <blockquote> <blockquote> <blockquote> multiple(3 4) 12 multiple(14 21) 42 """ </blockquote> </blockquote> </blockquote> ````def gcd(a, b): if a < b: a, b = b, a while b: a, b = b, a % b return a def lcm(a, b): n = (a*b) / gcd(a, b) return n ```` it keeps throwing errors about indentation and logic I do not understand why I have tried changing the variables around too | No need to find GCD we can directly find LCM Below code works ````def lcmof(x, y): res = 0 mx = max(x, y) mn = min(x, y) for i in range(1, mx+1, 1): temp = mx*i try: if(temp % mn == 0): res = temp break except ZeroDivisionError: res = 0 break return res ````
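The GCD-based approach the question attempts is actually the standard one, and on Python 3.5+ it collapses to two lines with `math.gcd`. A sketch matching the question's expected results:

```python
import math

def lcm(a, b):
    # gcd(a, b) always divides a*b evenly, and // keeps the result an int
    return a * b // math.gcd(a, b)

print(lcm(3, 4))    # expected: 12
print(lcm(14, 21))  # expected: 42
```

The iterate-multiples answer above works too, but it is O(max/min) trial divisions, while Euclid's algorithm in `math.gcd` is logarithmic.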
Colored output from fabric script I am trying to color my output from a fabric script so I do something like this: ````local("hg pull") print(blue(local("hg update"))) local("hg status") ```` I expected to have the 'hg update' response printed in blue but instead I get the result of the next line 'hg status' in blue In the Fabric documentation there are examples with coloring hard-coded strings which work as I expect Do you have any advice on how I can color the response to just a single local command? | This is what I use: ````local("hg pull") res = local("hg update", capture=True) print(blue(res)) local("hg status") ```` [Edit] you also need the `capture=True` to fetch the output
import error running picam on a raspberry pi I am trying to run <a href="https://github.com/ashtons/picam" rel="nofollow">picam</a> on my raspberry pi Unfortunately I am not getting around the following import error: ````--------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-1-ae50f21e1c18> in <module>() ---> 1 import picam 2 import time /usr/local/lib/python2.7/dist-packages/picam/__init__.py in <module>() 1 # Copyright (c) 2013 Sean Ashton 2 # Licensed under the terms of the MIT License (see LICENSE.txt) ---> 3 from _picam import * 4 import StringIO 5 from PIL import Image ImportError: /usr/local/lib/python2.7/dist-packages/picam/_picam.so: cannot open shared object file: No such file or directory ```` I have been looking all over the place (e g : <a href="http://stackoverflow.com/questions/2172621/cannot-open-shared-object-file-no-such-file-or-directory">cannot open shared object file: No such file or directory</a>) but so far without success Recompiling also did not work due to a whole bunch of missing libraries (mmal.h vcos.h etc ) <strong>update:</strong> ````pi@raspberrypi ~ $ ls -l /usr/local/lib/python2.7/dist-packages/picam total 48 -rw-r--r-- 1 root staff 1819 Nov 18 14:47 __init__.py -rw-r--r-- 1 root staff 2903 Nov 21 23:29 __init__.pyc -rw-r--r-- 1 root staff 39567 Nov 18 14:47 _picam.so ```` <strong>update2:</strong> ````pi@raspberrypi ~ $ ldd /usr/local/lib/python2.7/dist-packages/picam/_picam.so not a dynamic executable ```` <strong>update3:</strong> ````pi@raspberrypi ~ $ file /usr/local/lib/python2.7/dist-packages/picam/_picam.so ```` /usr/local/lib/python2.7/dist-packages/picam/_picam.so: ELF 32-bit LSB shared object ARM version 1 (SYSV) dynamically linked BuildID[sha1]=0xe403bf379f8c1dc2cb82df774ac3f11998661ff1 not stripped ````readelf -d /usr/local/lib/python2.7/dist-packages/picam/_picam.so Dynamic section at offset 0x700c contains 37 entries: Tag Type Name/Value 0x00000001 (NEEDED) Shared library: [libmmal_core.so] 0x00000001 (NEEDED) Shared library: [libmmal_util.so] 0x00000001 (NEEDED) Shared library: [libmmal_vc_client.so] 0x00000001 (NEEDED) Shared library: [libvcos.so] 0x00000001 (NEEDED) Shared library: [libbcm_host.so] 0x00000001 (NEEDED) Shared library: [libpython2.7.so.1.0] 0x00000001 (NEEDED) Shared library: [libpthread.so.0] 0x00000001 (NEEDED) Shared library: [libdl.so.2] 0x00000001 (NEEDED) Shared library: [librt.so.1] 0x00000001 (NEEDED) Shared library: [libvchiq_arm.so] 0x00000001 (NEEDED) Shared library: [libc.so.6] 0x0000000e (SONAME) Library soname: [_picam.so] 0x0000000f (RPATH) Library rpath: [/home/pi/SOURCE/userland/build/lib] 0x0000000c (INIT) 0x1d88 0x0000000d (FINI) 0x5840 0x00000019 (INIT_ARRAY) 0xf000 0x0000001b (INIT_ARRAYSZ) 4 (bytes) 0x0000001a (FINI_ARRAY) 0xf004 0x0000001c (FINI_ARRAYSZ) 4 (bytes) 0x00000004 (HASH) 0xf8 0x6ffffef5 (GNU_HASH) 0x444 0x00000005 (STRTAB) 0xcbc 0x00000006 (SYMTAB) 0x5bc 0x0000000a (STRSZ) 2330 (bytes) 0x0000000b (SYMENT) 16 (bytes) 0x00000003 (PLTGOT) 0xf154 0x00000002 (PLTRELSZ) 712 (bytes) 0x00000014 (PLTREL) REL 0x00000017 (JMPREL) 0x1ac0 0x00000011 (REL) 0x16f8 0x00000012 (RELSZ) 968 (bytes) 0x00000013 (RELENT) 8 (bytes) 0x6ffffffe (VERNEED) 0x16b8 0x6fffffff (VERNEEDNUM) 2 0x6ffffff0 (VERSYM) 0x15d6 0x6ffffffa (RELCOUNT) 114 0x00000000 (NULL) 0x0 ```` Any help would be greatly appreciated! | The package should have included the shared object file when installing via PIP It is not ideal but you can download it yourself from github and place it in the required folder Or you can try reinstalling from PIP <a href="https://github.com/ashtons/picam/tree/master/picam" rel="nofollow">https://github.com/ashtons/picam/tree/master/picam</a> What does the output of `ls -l /usr/local/lib/python2.7/dist-packages/picam` look like on your pi?
remove tick labels in Python but keep gridlines I have a Python script which is producing a plot consisting of 3 subplots all in 1 column In the middle subplot I currently have gridlines but I want to remove the x axis tick labels I have tried ````ax2.axes.get_xaxis().set_ticks([]) ```` but this seems to remove the gridlines also How can I remove the tick labels and keep the gridlines please? | Please try this: ````plt.grid(True) ax2.axes.get_xaxis().set_ticklabels([]) ```` Or maybe this: ````from matplotlib.ticker import NullFormatter ax2.axes.get_xaxis().set_major_formatter(NullFormatter()) ````
Norfolk Island was originally a what? | colony acquired by settlement |
compare values in a file python Here is my data sample in a txt file: ````1322484979.322313000 85.24.168.19 QQlb-j7itDQ 1322484981.070116000 83.233.56.133 Ne8Bb1d5oyc 1322484981.128791000 83.233.56.133 Ne8Bb1d5oyc 1322484981.431075000 83.233.56.133 Ne8Bb1d5oyc 1322484985.210652000 83.233.57.136 QWUiCAE4E7U ```` The first column is a timestamp the second column is an IP address the third one is some hash value I want to check if two or more successive rows have the same IP address and hash value I need to use the last timestamp of the duplicated rows to subtract the first timestamp of the duplicated rows which in this case is 1322484981.431075000 - 1322484981.070116000 If the result is less than 5 I will only keep the first row (the earliest) in the file If the result is more than 5 I will keep the first and the last duplicated row and delete the rows between them Since I am a pretty newbie of python this problem is a bit complicated for me I do not know what kind of function is needed can anyone help a little bit? | In a basic way it could look like this: ````data = open("data.txt", "r") last_time = 0.0 last_ip = None last_hash = None for line in data: timestamp, ip, hash_value = line.split() if ip == last_ip and hash_value == last_hash and float(timestamp) - float(last_time) < 5.0: print "Remove ", line else: print "Keep ", line last_time, last_ip, last_hash = timestamp, ip, hash_value ````
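The "first row always, last row only if the span is 5 seconds or more" rule maps naturally onto `itertools.groupby`, which collects consecutive rows sharing the same (ip, hash) key. A sketch with illustrative data adapted from the question (the last timestamp in the middle group is changed so both branches of the rule are exercised):

```python
from itertools import groupby

# (timestamp, ip, hash) tuples; in the real script these come from line.split()
rows = [
    (1322484979.322313, "85.24.168.19",  "QQlb-j7itDQ"),
    (1322484981.070116, "83.233.56.133", "Ne8Bb1d5oyc"),
    (1322484981.128791, "83.233.56.133", "Ne8Bb1d5oyc"),
    (1322484987.431075, "83.233.56.133", "Ne8Bb1d5oyc"),
    (1322484985.210652, "83.233.57.136", "QWUiCAE4E7U"),
]

kept = []
# groupby only merges *consecutive* rows with the same key, which matches
# the "successive rows" wording of the question
for (ip, h), group in groupby(rows, key=lambda r: (r[1], r[2])):
    group = list(group)
    kept.append(group[0])                    # always keep the earliest row
    if group[-1][0] - group[0][0] >= 5.0:    # span of 5 seconds or more:
        kept.append(group[-1])               # also keep the latest row
```

Single-row groups fall through naturally: their span is 0, so only the one row is kept.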
Who was the chief god of Eridu? | Enki |
Drawing cubic lattice in Python So I want to draw a simple cubic lattice in Python using the visual package I have a simple way of making a lattice with small spheres which all have the same color but I want the colors to alternate: to make a NaCl lattice I need to have a sphere of one color surrounded by 6 spheres of the other color So I did this: ````from __future__ import division from visual import sphere, color L = 5 R = 0.3 even = [] odd = [] for i in range(-L, L+1): if i % 2 == 0: even.append(i) else: odd.append(i) for i in even: for j in even: for k in even: sphere(pos=[i, j+1, k+1], radius=R, color=color.green) for i in odd: for j in odd: for k in odd: sphere(pos=[i, j, k], radius=R, color=color.yellow) ```` And I get spheres of one color next to spheres of a different color but they are in rows: <img src="http://i.stack.imgur.com/q3AWk.png" alt="lattice"> But I need them to alternate :\ The correct placement is only in the i direction How do I correct the others to make a simple cubic lattice? I tried fiddling with the positions of the spheres (i j k+-number) but that way I got a bcc lattice (one green sphere in the middle others around it) I am stuck | What you would need is this: ````from visual import sphere, color count = 3 R = 0.3 for x in range(-count, count+1): for y in range(-count, count+1): for z in range(-count, count+1): if ((x+y+z+3*count) % 2) == 0: sphere(pos=[x, y, z], radius=R, color=color.green) else: sphere(pos=[x, y, z], radius=R, color=color.yellow) ```` The point is you should switch colors depending on whether the sum of the (integral in this case) coordinates is divisible by 2 or not
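The parity rule from the answer can be checked without the visual package at all: the color depends only on whether x+y+z is even, which guarantees that every axis-neighbour of a site gets the opposite color (the NaCl checkerboard property). A small sketch:

```python
def lattice_color(x, y, z):
    # NaCl-style checkerboard: color depends only on coordinate parity
    return "green" if (x + y + z) % 2 == 0 else "yellow"

# each of the 6 axis-neighbours flips exactly one coordinate's parity,
# so it must get the opposite color
assert lattice_color(0, 0, 0) != lattice_color(1, 0, 0)
```

The `+3*count` offset in the answer's condition only shifts which parity class is green; it does not change the alternation.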
Python Multiprocessing - sending inputs to child processes I am using the multiprocessing module in python to launch few processes in parallel These processes are independent of each other They generate their own output and write out the results in different files Each process calls an external tool using the subprocess call method It was working fine until I discovered an issue in the external tool where due to some error condition it goes into a 'prompt' mode and waits for the user input Now in my python script I use the join method to wait till all the processes finish their tasks This is causing the whole thing to wait for this erroneous subprocess call I can put a timeout for each of the process but I do not know in advance how long each one is going to run and hence this option is ruled out How do I figure out if any child process is waiting for an user input and how do I send an 'exit' command to it? Any pointers or suggestions to relevant modules in python will be really appreciated My code here: ````import subprocess import sys import os import multiprocessing def write_script(fname e): f = open(fname 'w') f write("Some useful cammnd calling external tool") f close() subprocess call(['chmod' '+x' os path abspath(fname)]) return os path abspath(fname) def run_use(mname script): print "ssh "+mname+" "+script subprocess call(['ssh' mname script]) if __name__ == '__main__': dict1 = {} dict['mod1'] = ['pp1' 'ext2' 'les3' 'pw4'] dict['mod2'] = ['aaa' 'bbb' 'ccc' 'ddd'] machines = ['machine1' 'machine2' 'machine3' 'machine4'] log_file write(str(dict1 keys())) for key in dict1 keys(): arr = [] for mod in dict1[key]: d = {} arr append(mod) if ((mod == dict1[key][-1]) | (len(arr)%4 == 0)): for i in range(0 len(arr)): e = arr pop() script = write_script(e+"_temp sh" e) d[i] = multiprocessing Process(target=run_use args=(machines[i] script )) d[i] daemon = True for pp in d: d[pp] start() for pp in d: d[pp] join() ```` | Since you are writing a she will script to run 
your subcommands can you simply tell them to read input from `/dev/null`? ````#!/bin/bash # my_other_command -a -b arg1 arg2 < /dev/null # ```` This may stop them blocking on input and is a really simple solution If this does not work for you read on for some other options The `subprocess call()` function is simply shorthand for constructing a `subprocess Popen` instance and then calling the `wait()` method on it So your spare processes could instead create their own `subprocess Popen` instances and poll them with `poll()` method on the object instead of `wait()` (in a loop with a suitable delay) This leaves them free to remain in communication with the main process so you can for example allow the main process to tell the child process to terminate the `Popen` instance with the `terminate()` or `kill()` methods and then itself exit So the question is how does the child process tell whether the subprocess is awaiting user input and that is a trickier question I would say perhaps the easiest approach is to monitor the output of the subprocess and search for the user input prompt assuming that it always uses some string that you can look for Alternatively if the subprocess is expected to generate output continually then you could simply look for <them>any</them> output and if a configured amount of time goes past without any output then you declare that process dead and terminate it as detailed above Since you are reading the output actually you do not need `poll()` or `wait()` - the process closing its output file descriptor is good enough to know that it is terminated in this case Here is an example of a modified `run_use()` method which watches the output of the subprocess: ````def run_use(mname script): print "ssh "+mname+" "+script proc = subprocess Popen(['ssh' mname script] stdout=subprocess PIPE) for line in proc stdout: if "UserPrompt>>>" in line: proc terminate() break ```` In this example we assume that the process <them>either</them> gets hung on on 
`UserPrompt>>>` (replace with the appropriate string) <em>or</em> it terminates naturally. If it were to get stuck in an infinite loop, for example, then your script would still not terminate - you can only really address that with an overall timeout, but you did not seem keen to do that. Hopefully your subprocess will not misbehave in that way, however. Finally, if you do not know in advance the prompt that will be given by your process, then your job is rather harder. Effectively what you are asking to do is monitor an external process and know when it is blocked reading on a file descriptor, and I do not believe there is a particularly clean solution to this. You <em>could</em> consider running a process under `strace` or similar, but that is quite an awful hack and I really would not recommend it. Things like `strace` are great for manual diagnostics, but they really should not be part of a production setup.
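A minimal, runnable sketch of the watch-the-output idea described above. The command and the `PROMPT>` marker are stand-ins for your external tool and whatever prompt string it prints:

```python
import subprocess
import sys

def run_with_watchdog(cmd, prompt_marker):
    """Run cmd, capturing stdout; terminate it if the prompt marker appears."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            universal_newlines=True)
    captured = []
    for line in proc.stdout:
        if prompt_marker in line:
            # The tool has dropped into its interactive prompt: kill it
            proc.terminate()
            break
        captured.append(line)
    proc.wait()
    return captured

# Child prints one line, then "prompts" and hangs; the watchdog kills it.
out = run_with_watchdog(
    [sys.executable, "-u", "-c",
     "print('result 1'); print('PROMPT>'); import time; time.sleep(60)"],
    "PROMPT>")
```

Because each worker owns its own `Popen` object, the same loop can run inside every `multiprocessing.Process` without the parent's `join()` blocking forever.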
python regex dynamic string as input I am trying to get the string dynamically and use regex over it, but I get an error. Coding partial snippet: ````import re
def file_save():
    f = th1.asksaveasfile(mode='w', defaultextension=".txt")
    re.search(r"<[^>]*\bname='([^']*)", f).group(1)
    f.close()
```` Error: ````re.search(r"<[^>]*\bname='([^']*)", f).group(1)
  File "C:\Python34\lib\re.py", line 166, in search
    return _compile(pattern, flags).search(string)
TypeError: expected string or buffer
```` Here the input string has to be obtained from a 'filename to be saved', which produces the text 'f' as: ````<_io.TextIOWrapper name='C:/Python34/abcd.txt' mode='w' encoding='cp1252'>
```` The regex works perfectly, but I cannot get the string, and I want the output as: ````C:/Python34/abcd.txt
```` Please help me to fix my problem. Answers will be appreciated! | I think you are using the wrong tool. If you want to get the filename, just use `asksaveasfilename` instead of `asksaveasfile`: ````def file_save():
    filename = th1.asksaveasfilename(defaultextension=".txt")
    # do something with the filename
```` Or if you need to operate on the file, use `asksaveasfile` and use the `name` attribute of the file returned: ````def file_save():
    f = th1.asksaveasfile(mode='w', defaultextension=".txt")
    filename = f.name
    # do something with the filename
    f.close()
````
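For completeness: the `TypeError` comes from passing the file object itself to `re.search`, which expects a string. Passing `str(f)` would make the original regex work, though the `name` attribute is the cleaner route. A sketch using a hard-coded stand-in for the file object's repr:

```python
import re

# Stand-in for str(f) of a real Tkinter save-file handle
f_repr = "<_io.TextIOWrapper name='C:/Python34/abcd.txt' mode='w' encoding='cp1252'>"

# The question's own pattern works once it is given a string
match = re.search(r"<[^>]*\bname='([^']*)", f_repr)
filename = match.group(1)
```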
What term was used by the Basic Law of 1949? | null |
problems dealing with pandas read_csv I have got a problem with pandas read_csv. I have many txt files associated with the stock market. They look like this: ````SecCode SecName Tdate Ttime LastClose OP CP Tq Tm Tt Cq Cm Ct HiP LoP SYL1 SYL2 Rf1 Rf2 bs s5 s4 s3 s2 s1 b1 b2 b3 b4 b5 sv5 sv4 sv3 sv2 sv1 bv1 bv2 bv3 bv4 bv5 bsratio spd rpd depth1 depth2
600000 浦发银行 20120104 091501 8 490 000 000 0 000 0 0 000 0 000 000 000 000 000 000 000 000 000 000 8 600 8 600 000 000 000 000 0 0 0 0 1100 1100 38900 0 0 0 00 000 00 00 00
600000 浦发银行 20120104 091506 8 490 000 000 0 000 0 0 000 0 000 000 000 000 000 000 000 000 000 000 8 520 8 520 000 000 000 000 0 0 0 0 56795 56795 33605 0 0 0 00 000 00 00 00
600000 浦发银行 20120104 091511 8 490 000 000 0 000 0 0 000 0 000 000 000 000 000 000 000 000 000 000 8 520 8 520 000 000 000 000 0 0 0 0 56795 56795 34605 0 0 0 00 000 00 00 00
600000 浦发银行 20120104 091551 8 490 000 000 0 000 0 0 000 0 000 000 000 000 000 000 000 000 000 000 8 520 8 520 000 000 000 000 0 0 0 0 56795 56795 35205 0 0 0 00 000 00 00 00
600000 浦发银行 20120104 091621 8 490 000 000 0 000 0 0 000 0 000 000 000 000 000 000 000 000 000 000 8 520 8 520 000 000 000 000 0 0 0 0 57795 57795 34205 0 0 0 00 000 00 00 00
```` while I use this code to read it: ````fields = ['SecCode', 'Tdate', 'Ttime', 'LastClose', 'OP', 'CP', 'Rf1', 'Rf2']
df = pd.read_csv('SHL1_TAQ_600000_201201.txt', usecols=fields)
```` But I got a problem: ````Traceback (most recent call last):
  File "E:/workspace/Senti/highlevel/highlevel.py", line 8, in <module>
    df = pd.read_csv('SHL1_TAQ_600000_201201.txt', usecols=fields, header=1)
  File "D:\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 562, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "D:\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 315, in _read
    parser = TextFileReader(filepath_or_buffer, **kwds)
  File "D:\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 645, in __init__
    self._make_engine(self.engine)
  File "D:\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 799, in _make_engine
    self._engine = CParserWrapper(self.f, **self.options)
  File "D:\Anaconda2\lib\site-packages\pandas\io\parsers.py", line 1257, in __init__
    raise ValueError("Usecols do not match names.")
ValueError: Usecols do not match names.
```` I cannot find any problem similar to mine. It is also weird that when I copy the txt file into another one, the code runs well, but the original one raises the above problem. How can I solve it? | Use the `names` parameter instead of `usecols` when specifying the columns.
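A frequent cause of this exact `ValueError`, worth checking alongside the suggestion above, is the separator: the file is tab-delimited, but without `sep='\t'` pandas parses each line as a single comma-separated column, so none of the `usecols` names match the header. A sketch with invented, tab-separated data:

```python
import io
import pandas as pd

data = ("SecCode\tSecName\tTdate\tTtime\tLastClose\tOP\tCP\tRf1\tRf2\n"
        "600000\tSPDB\t20120104\t091501\t8.49\t0\t0\t0\t0\n")
fields = ['SecCode', 'Tdate', 'Ttime', 'LastClose', 'OP', 'CP', 'Rf1', 'Rf2']

# sep='\t' lets the header split into real columns, so usecols can match
df = pd.read_csv(io.StringIO(data), sep='\t', usecols=fields)
```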
Getting forbidden while downloading file in django view I have this code: ````class DownloadView(TemplateView):
    template_name = 'pdfform/create_form2.html'

    def serve_pdf(self, request):
        #pdf_data = magically_create_pdf()
        response = HttpResponse(mimetype='application/pdf')
        response['Content-Disposition'] = 'attachment; filename="http://localhost/static/pdfs/angular.pdf"'
        return response
```` When I go to that page I get the download dialog, but I am not able to download the file; it says: ````http 403 forbidden
```` Now I can directly access the file by putting `http://localhost/static/pdfs/angular.pdf` in the browser. I tried putting `static/pdfs/angular.pdf` instead, but got the same error. | The filename should be just a plain file name, not `http://...`. So change ````response['Content-Disposition'] = 'attachment; filename="http://localhost/static/pdfs/angular.pdf"'
```` to ````response['Content-Disposition'] = 'attachment; filename="angular.pdf"'
```` Also you need to serve the file content through the response so that the file contents are served, e.g.: ````def serve_pdf(self, request):
    from django.core.servers.basehttp import FileWrapper
    # your code
    wrapper = FileWrapper(open(your_pdf_file_path))
    response = HttpResponse(wrapper, 'application/pdf')
    response['Content-Length'] = os.path.getsize(your_pdf_file_path)
    response['Content-Disposition'] = 'attachment; filename="angular.pdf"'
    return response
````
Convert list elements into array I have a tsv file which I am parsing and want to convert into an array. Here is the file format - ````jobname1 queue maphours reducehours
jobname2 queue maphours reducehours
```` code ````with open(file.tsv) as tsv:
    line = [elem.strip().split('\t') for elem in tsv]
    vals = np.asarray(line)
    print vals[0]
    print vals[4]
```` Vals currently returns the following output - ````['job1', 'queue', '1.0', '0.0\n']
['job2', 'queue', '1.0', '0.0\n']
```` I want to convert each element in a row in the entire file to an array object - ````vals[0] = job1
vals[1] = queue
vals[2] = 1.0
vals[3] = 0.0
```` How do I achieve this? | From what I understand, you would like to create a 2D array in numpy where each row of the file corresponds to a row of the created array and a column in the file is a column in the array. If so, you could do this as follows: For example, if your data file is: ````jobname1 queue 1 3
jobname2 queue 2 4
jobname41 queue 1 1
jobname32 queue 2 2
jobname21 queue 3 4
jobname12 queue 1 6
```` The following code: ````with open(file) as tsv:
    line = [elem.strip().split('\t') for elem in tsv]
    vals = np.asarray(line)
```` will result in the following `vals` array: ````[['jobname1' 'queue' '1' '3']
 ['jobname2' 'queue' '2' '4']
 ['jobname41' 'queue' '1' '1']
 ['jobname32' 'queue' '2' '2']
 ['jobname21' 'queue' '3' '4']
 ['jobname12' 'queue' '1' '6']]
```` To get the job names you can do: ````print(vals[:, 0])  # gives ['jobname1' 'jobname2' 'jobname41' 'jobname32' 'jobname21' 'jobname12']
```` Or if you want rows containing some job you can do: ````print(vals[np.apply_along_axis(lambda row: row[0] == 'jobname1', 1, vals)])
````
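As an aside, plain boolean indexing does the same row filtering as the `apply_along_axis` call and is usually simpler:

```python
import numpy as np

vals = np.array([['jobname1', 'queue', '1', '3'],
                 ['jobname2', 'queue', '2', '4']])

rows = vals[vals[:, 0] == 'jobname1']   # rows whose first column matches
hours = vals[:, 2].astype(float)        # numeric column converted from strings
```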
Django calculation based on form and database values I started the project with django1 9 and python-3 4 4 I have set up an app to track my energy water and gas consumption I started this project to get to know django and python better I have created a form where I have two fields One for the Type of counter and one for the value Now I want to get the latest value from the database and do a simple calculation ````delta = value_db - value_form ```` <strong>models py</strong> ````class Verbraucher(models Model): Id = models AutoField(primary_key=True) Typ = models CharField(max_length=50) Nummer = models CharField(max_length=50 unique=True) def __str__(self): return format(self Typ) class Daten(models Model): Daten_id = models AutoField(primary_key=True) Verbraucher = models ForeignKey(Verbraucher) Stand = models DecimalField(max_digits=10 decimal_places=3) Verbrauch = models DecimalField(max_digits=10 decimal_places=3) Zeitstempel = models DateTimeField(auto_now=True auto_now_add=False) Updatestempel = models DateTimeField(auto_now=False auto_now_add=True) def __str__(self): return format(self Zeitstempel) ```` <strong>forms py</strong> ````class DatenForm(forms ModelForm): class Meta: model = Daten fields = ['Verbraucher' 'Stand'] ```` <strong>views py</strong> ````def dateneingabe(request): if request method == "GET": form = DatenForm() return render(request 'verbrauch/eingabe html' {'form': form}) elif request method == "POST": form = DatenForm(request POST) model = Daten() if form is_valid(): instance = form save(commit=False) Stand_db = Daten objects lastest(Verbraucher) instance Verbrauch = do_calc(Stand - Stand_db) instance save() return HttpResponseRedirect('/') ```` I have tried various ways but none seem to work I hope that someone has an idea to help me out! Thanks a lot up front! 
<strong>EDIT:</strong> The whole idea behind it is to have a local website with a small form of two fields: one for the type of the counter and one for the value it has today. Afterwards I want to get the latest value of this counter from the database and subtract both. The calculated value should then be inserted into the db. I hope that helps?! <strong>EDIT2:</strong> I have played around a little and found an error. I get it when I try to do the calculation: `unsupported operand type(s) for -: 'decimal.Decimal' and 'Daten'`. The weird thing is that I have defined both as decimals in my models file. This is the code (views.py, after form = DatenForm(request.POST)): ````if form.is_valid():
    instance = form.save(commit=False)
    Stand = form.cleaned_data['Stand']
    print(Stand)
    Stand_db = Daten.objects.latest('Stand')
    print(Stand_db)
    Verbrauch = (Stand - Stand_db)
    print(Verbrauch)
    instance.save()
```` | First of all, the way you arrange your view is not OK, although it will work. The validated form data will be in the `form.cleaned_data` dictionary. This data will have been nicely converted into Python types for you. Fields such as IntegerField, DecimalField and FloatField convert values to a Python int, Decimal and float respectively. ````from yourapp.model import Daten

def dateneingabe(request):
    if request.method == "POST":
        form = DatenForm(request.POST)
        if form.is_valid():
            instance = form.save(commit=False)
            Stand_db = Daten.objects.latest(Verbraucher)
            # you used Stand instead of instance.Stand; since instance.Stand
            # comes from a DecimalField it will be processed nicely
            instance.Verbrauch = do_calc(instance.Stand - float(Stand_db))
            instance.save()
            return HttpResponseRedirect('/')
    else:
        form = DatenForm()
    return render(request, 'verbrauch/eingabe.html', {'form': form})
```` I hope it helps out.
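Regarding the `TypeError` in EDIT2: `Daten.objects.latest(...)` returns a whole model instance, not the `Stand` value, so subtracting the instance from a `Decimal` fails. Access the field on the instance before subtracting. A Django-free sketch with a hypothetical stand-in class:

```python
from decimal import Decimal

class Daten:
    """Hypothetical stand-in for the Django model instance."""
    def __init__(self, stand):
        self.Stand = Decimal(stand)

latest = Daten("10.500")        # what objects.latest(...) hands back
stand_form = Decimal("12.750")  # the cleaned form value

# stand_form - latest would raise TypeError; use the field instead:
verbrauch = stand_form - latest.Stand
```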
SSL error using Python Requests to access Shibboleth authenticated server I am trying to access a journal article hosted by an academic service provider (SP) using a Python script The server authenticates using a Shibboleth login I read <a href="http://stackoverflow com/questions/16512965/logging-into-saml-shibboleth-authenticated-server-using-python">Logging into SAML/Shibboleth authenticated server using python</a> and tried to implement a login with Python Requests The script starts by querying the SP for the link leading to my IDP institution and is supposed then to authenticate automatically with the IDP The first part works but when following the link to the IDP it chokes on an SSL error Here is what I used: ````import requests import lxml html LOGINLINK = 'https://www jsave org/action/showLogin?redirectUri=%2F' USERAGENT = 'Mozilla/5 0 (X11; Linux x86_64; rv:28 0) Gecko/20100101 Firefox/28 0' s = requests session() s headers update({'User-Agent' : USERAGENT}) # getting the page where you can search for your IDP # need to get the cookies so we can continue response = s get(LOGINLINK) rtext = response text print('Don\'t see your school?' 
in rtext) # prints True # POSTing the name of my institution data = { 'institutionName' : 'tubingen' 'submitForm' : 'Search' 'currUrl' : '%2Faction%2FshowBasicSearch' 'redirectUri' : '%2F' 'activity' : 'isearch' } response = s post(BASEURL '/action/showLogin' data=data) rtext = response text print('university of tubingen' in rtext) # prints True # get the link that leads to the IDP tree = lxml html fromstring(rtext) loginlinks = tree cssselect('a extLogin') if (loginlinks): loginlink = loginlinks[0] get('href') else: exit(1) print('continuing to IDP') response = s get(loginlink) rtext = response text print('zentrale Anmeldeseite' in rtext) ```` This yields: ````continuing to IDP 2014-04-04 10:04:06 010 - INFO - Starting new HTTPS connection (1): idp uni-tuebingen de Traceback (most recent call last): File "/usr/lib/python3 4/site-packages/requests/packages/urllib3/connectionpool py" line 480 in urlopen body=body headers=headers) File "/usr/lib/python3 4/site-packages/requests/packages/urllib3/connectionpool py" line 285 in _make_request conn request(method url **httplib_request_kw) File "/usr/lib/python3 4/http/client py" line 1066 in request self _send_request(method url body headers) File "/usr/lib/python3 4/http/client py" line 1104 in _send_request self endheaders(body) File "/usr/lib/python3 4/http/client py" line 1062 in endheaders self _send_output(message_body) File "/usr/lib/python3 4/http/client py" line 907 in _send_output self send(message) File "/usr/lib/python3 4/http/client py" line 842 in send self connect() File "/usr/lib/python3 4/site-packages/requests/packages/urllib3/connection py" line 164 in connect ssl_version=resolved_ssl_version) File "/usr/lib/python3 4/site-packages/requests/packages/urllib3/util py" line 639 in ssl_wrap_socket return context wrap_socket(sock server_hostname=server_hostname) File "/usr/lib/python3 4/ssl py" line 344 in wrap_socket _context=self) File "/usr/lib/python3 4/ssl py" line 540 in __init__ self do_handshake() 
File "/usr/lib/python3 4/ssl py" line 767 in do_handshake self _sslobj do_handshake() ssl SSLError: [SSL: TLSV1_ALERT_INTERNAL_ERROR] tlsv1 alert internal error (_ssl c:598) During handling of the above exception another exception occurred: Traceback (most recent call last): File "/usr/lib/python3 4/site-packages/requests/adapters py" line 330 in send timeout=timeout File "/usr/lib/python3 4/site-packages/requests/packages/urllib3/connectionpool py" line 504 in urlopen raise SSLError(e) requests packages urllib3 exceptions SSLError: [SSL: TLSV1_ALERT_INTERNAL_ERROR] tlsv1 alert internal error (_ssl c:598) During handling of the above exception another exception occurred: Traceback (most recent call last): File " /try py" line 154 in <module> response = s get(loginlink) File "/usr/lib/python3 4/site-packages/requests/sessions py" line 395 in get return self request('GET' url **kwargs) File "/usr/lib/python3 4/site-packages/requests/sessions py" line 383 in request resp = self send(prep **send_kwargs) File "/usr/lib/python3 4/site-packages/requests/sessions py" line 486 in send are = adapter send(request **kwargs) File "/usr/lib/python3 4/site-packages/requests/adapters py" line 385 in send raise SSLError(e) requests exceptions SSLError: [SSL: TLSV1_ALERT_INTERNAL_ERROR] tlsv1 alert internal error (_ssl c:598) ```` Using s get(loginlink verify=False) yields exactly the same error Simply using urllib request urlopen(loginlink) does so too Printing and pasting the link into Firefox on the other hand works fine | After trying with `openssl s_client` it looks like the destination idp uni-tuebingen de:443 is only support SSLv3 and misbehaving on anything newer With forcing SSLv3 one gets: ````$ openssl s_client -connect idp uni-tuebingen de:443 -ssl3 CONNECTED(00000003) depth=3 C = DE O = Deutsche Telekom AG OU = T-TeleSec Trust Center CN = Deutsche Telekom Root CA 2 ```` But with default setup or forcing TLv1 (-tls1) it only returns an alert: ````openssl s_client 
-connect idp.uni-tuebingen.de:443
CONNECTED(00000003)
140493591938752:error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error:s23_clnt.c:741:
```` So you need to find a way to force SSLv3 for this connection. I am not familiar with the Python side at this point, but maybe <a href="http://docs.python-requests.org/en/latest/user/advanced/" rel="nofollow">http://docs.python-requests.org/en/latest/user/advanced/</a>, chapter "Example: Specific SSL Version", helps. And why it works with Firefox: browsers usually retry with a downgraded SSL version if connecting with the safer versions fails. E.g. everybody is trying to work around broken stuff, so that the owner of the broken stuff has no intention to fix it :(
Programmatic Form Submit I want to scrape the contents of a webpage. The contents are produced after a form on that site has been filled in and submitted. I have read up on how to scrape the resulting content/webpage - but how do I programmatically submit the form? I am using Python and have read that I might need to get the original webpage with the form, parse it, get the form parameters and then do X? Can anyone point me in the right direction? | You can do it with JavaScript. If the form is something like: ````<form name='myform' ...>
```` Then you can do this in JavaScript: ````<script language="JavaScript">
function submitform() {
    document.myform.submit();
}
</script>
```` You can use the "onClick" attribute of links or buttons to invoke this code. To invoke it automatically when a page is loaded, use the "onLoad" attribute of the body element: ````<body onLoad="submitform()" ...>
````
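Since the question asks about Python, here is a sketch of building a form POST with the standard library (Python 3; the URL and field names are made up - in practice you would take them from the parsed form):

```python
from urllib.parse import urlencode
from urllib.request import Request

form_fields = {"username": "alice", "query": "search term"}  # hypothetical
body = urlencode(form_fields).encode("ascii")

# Supplying data makes this a POST; urlopen(req) would actually send it
req = Request("http://example.com/submit", data=body)
```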
Printing output/stdout for the other commands in paramiko in python I am using Paramiko in Python 2 7 to connect to a linux server and the program works fine The problem is that when I run it I get this output from the IDE: ````Start This is a test program before the cd after the cd after the pwd after the ls /home/11506499 End ```` My code looks like this: ````import paramiko ssh = paramiko SSHClient() print('Start') ssh set_missing_host_key_policy(paramiko AutoAddPolicy()) ssh connect('XXX XX XXX XX' port = 22 username = 'tester' password = 'test') print("This is a test program") stdin stdout stderr = ssh exec_command('pwd') print('before the cd ') stdin write('cd ') stdin write('\n') stdin flush() print('after the cd ') stdin write('pwd') stdin write('\n') stdin flush() print('after the pwd') stdin write('ls') stdin write('\n') stdin flush() print('after the ls') output = stdout readlines() print '\n' join(output) ssh close() print('End') ```` As you can see on the prints the program runs through all the commands but stdout only shows the output from the first ssh exec_command (the 'pwd') and not from all the stdin write What I want to know is if there is a way or command to get the output from the other commands I sent through the terminal? I am thinking of the commands such as the second 'pwd' or the 'ls' command? Is there a way to show the output for the response for each action I take in the terminal as was I using cmd exe or a terminal in Linux? 
I have tried looking on the net but have not been able to find anything, since the examples only show the output from the first command. So I hope someone can help me with this problem. <hr> Edit: I went away from making a client connection and instead went with a shell, which keeps the connection open until I log out. I used recv to store the output from the terminal and print to print it out. This worked wonders. I did, though, import time so the script could take a little break to collect the rest of the output before printing it out. This way I was able to print out everything appearing in the terminal without anything missing. | You are only executing one command in your script. From my understanding, stdin in your case would be used to pass arguments into a running command. Meaning you would have to individually run `ssh.exec_command(<cmd>)` for pwd, cd and ls. After your initial execution the session closes and you are not able to issue more commands. This is just like issuing the command ````ssh user@hostname "pwd"
```` The session has completed and the connection is closed. It is not quite like telnet, where you simply type a command and add '\n' to execute it, nor like a bash prompt, because you are not starting a tty. Regards, Lisenby
Dot product of two sparse matrices affecting zero values only I am trying to compute a simple dot product but leave nonzero values from the original matrix unchanged A toy example: ````import numpy as np A = np array([[2 1 1 2] [0 2 1 0] [1 0 1 1] [2 2 1 0]]) B = np array([[ 0 54331039 0 41018682 0 1582158 0 3486124 ] [ 0 68804647 0 29520239 0 40654206 0 20473451] [ 0 69857579 0 38958572 0 30361365 0 32256483] [ 0 46195299 0 79863505 0 22431876 0 59054473]]) ```` Desired outcome: ````C = np array([[ 2 1 1 2 ] [ 2 07466874 2 1 0 73203386] [ 1 1 5984076 1 1 ] [ 2 2 1 1 42925865]]) ```` The actual matrices in question however are sparse and look more like this: ````A = sparse rand(250000 1700 density=0 001 format='csr') B = sparse rand(1700 1700 density=0 02 format='csr') ```` One simple way go would be just setting the values using mask index like that: ````mask = A != 0 C = A dot(B) C[mask] = A[mask] ```` However my original arrays are sparse and quite large so changing them via index assignment is painfully slow Conversion to lil matrix helps a bit but again conversion itself takes a lot of time The other obvious approach I guess would be just resort to iteration and skip masked values but I would like not to throw away the benefits of numpy/scipy-optimized array multiplication <strong>Some clarifications:</strong> I am actually interested in some kind of special case where `B` is always square and therefore `A` and `C` are of the same shape So if there is a solution that does not work on arbitrary arrays but fits in my case that is fine <strong>UPDATE:</strong> Some attempts: ````from scipy import sparse import numpy as np def naive(A B): mask = A != 0 out = A dot(B) tolil() out[mask] = A[mask] return out tocsr() def proposed(A B): Az = A == 0 ARE C = np where(Az) out = A copy() out[Az] = np einsum('ij ji>i' A[R] B[: C]) return out %timeit naive(A B) 1 loops best of 3: 4 04 s per loop %timeit proposed(A B) /usr/local/lib/python2 
7/dist-packages/scipy/sparse/compressed py:215: SparseEfficiencyWarning: Comparing a sparse matrix with 0 using == is inefficient try using != instead /usr/local/lib/python2 7/dist-packages/scipy/sparse/coo pyc in __init__(self arg1 shape dtype copy) 173 self shape = M shape 174 -> 175 self row self col = M nonzero() 176 self data = M[self row self col] 177 self has_canonical_format = True MemoryError: ```` <strong>ANOTHER UPDATE:</strong> Could not make anything more or less useful out of Cython at least without going too far away from Python The idea was to leave the dot product to scipy and just try to set those original values as fast as possible something like this: ````cimport cython @cython cdivision(True) @cython boundscheck(False) @cython wraparound(False) cpdef coo_replace(int [:] row1 int [:] col1 float [:] data1 int[:] row2 int[:] col2 float[:] data2): cdef int N = row1 shape[0] cdef int M = row2 shape[0] cdef int i j cdef dict d = {} for i in range(M): d[(row2[i] col2[i])] = data2[i] for j in range(N): if (row1[j] col1[j]) in d: data1[j] = d[(row1[j] col1[j])] ```` This was a bit better then my pre-first "naive" implementation (using ` tolil()`) but following hpaulj's approach lil can be thrown out Maybe replacing python dict with something like std::map would help | Cracked it! 
Well there is a lot of scipy stuffs specific to sparse matrices that I learnt along the way Here is the implementation that I could muster - ````# Find the indices in output array that are to be updated R C = ((A!=0) dot(B!=0)) nonzero() mask = np asarray(A[R C]==0) ravel() R C = R[mask] C[mask] # Make a copy of A and get the dot product through sliced rows and columns # off A and B using the definition of matrix-multiplication out = A copy() out[R C] = (A[R] multiply(B[: C] T) sum(1)) ravel() ```` The most expensive part seems to be element-wise multiplication and summing On some quick timing tests it seems that this would be good on a sparse matrices with a high degree of sparsity to beat the original dot-mask-based solution in terms of performance which I think comes from its focus on memory efficiency <strong>Runtime test</strong> Function definitions - ````def naive(A B): mask = A != 0 out = A dot(B) tolil() out[mask] = A[mask] return out tocsr() def proposed(A B): R C = ((A!=0) dot(B!=0)) nonzero() mask = np asarray(A[R C]==0) ravel() R C = R[mask] C[mask] out = A copy() out[R C] = (A[R] multiply(B[: C] T) sum(1)) ravel() return out ```` Timings - ````In [57]: # Input matrices : M N = 25000 170 : A = sparse rand(M N density=0 001 format='csr') : B = sparse rand(N N density=0 02 format='csr') : In [58]: %timeit naive(A B) 10 loops best of 3: 92 2 ms per loop In [59]: %timeit proposed(A B) 10 loops best of 3: 132 ms per loop In [60]: # Input matrices with increased sparse-ness : M N = 25000 170 : A = sparse rand(M N density=0 0001 format='csr') : B = sparse rand(N N density=0 002 format='csr') : In [61]: %timeit naive(A B) 10 loops best of 3: 78 1 ms per loop In [62]: %timeit proposed(A B) 100 loops best of 3: 8 03 ms per loop ```` |
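For intuition, the dense-array analogue of the whole masked-dot idea fits in a few lines (small invented matrices; the sparse versions above exist precisely because this mask assignment is slow on large sparse data):

```python
import numpy as np

A = np.array([[2., 1., 1., 2.],
              [0., 2., 1., 0.]])
B = np.ones((4, 4))

C = A.dot(B)
mask = A != 0
C[mask] = A[mask]     # original nonzeros win; zeros keep the dot product
```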
numpy how to find local minimum in neighborhood on 1darray I have got a list of sorted samples. They are sorted by their sample time, where each sample is taken one second after the previous one. I would like to find the minimum value in a neighborhood of a specified size. For example, given a neighborhood size of 2 and the following sample list: ````samples = [5, 12.3, 12.3, 7, 2, 6, 9, 10, 5, 9, 17, 2]
```` I would expect the following output: [5, 2, 5, 2] What would be the best way to achieve this in numpy / scipy? Edited: Explained the reasoning behind the min values: - 5 - the 2-number window next to it is [12.3, 12.3]; 5 is smaller - 2 - to the left [12.3, 7], to the right [6, 9]; 2 is the min - 5 - to the left [9, 10], to the right [9, 17]; 5 is the min; notice that 9 is not a min, as there is a 2-window to its left and right with a smaller value (2) | ````>>> import numpy as np
>>> a = np.array(samples)
>>> [a[max(i-2, 0):i+2].min() for i in xrange(1, a.size)]
[5.0, 2.0, 2.0, 2.0, 2.0, 5.0, 5.0, 5.0, 2.0]
```` As Divakar pointed out in the comments, this is what a sliding window yields. If you want to remove duplicates, that can be done separately.
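If NumPy ≥ 1.20 is available, `sliding_window_view` gives a vectorized version of the same rolling minimum for the interior points (the edges, where the window is truncated, still need the explicit slicing shown above):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

a = np.array([5., 1., 3., 2., 8., 4.])
w = 3                                  # full window: one neighbour each side
mins = sliding_window_view(a, w).min(axis=1)
```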
What was the geothermal capacity in 2008? | 10 GW |
What was the population of Kathmandu in 1991? | 427,045 |
How to return an array in ctyps when calling a function What I want to do is to convert a C function into python function For example I have a C function prototype which is pre-defined: ````function_test(ViInt32 Arraysize ViReal64 readingArray[] ViInt32 *Actual_rraysize) ```` The input of the above function is `Arraysize` which is the number of elements in the `readingArray` parameter The outputs of the above function are `readingArray[]` and `Arraysize` `readingArray[]` returns an array of the result and `Actual_arraysize` indicates the actual size of the returned array And the following is the Python function I wrote by using this C function prototype: ````from ctypes import * def Py_function_test(self arraysize readingarray = [] actualarraysize = [-1e-10]): while len(readingarray) < arraysize: readingarray append(0) _c_actualarraysize = c_int() ArrayOfDouble = c_double * arraysize _c_readingarray = ArrayOfDouble() self function_test(c_int(arraysize) byref(_c_readingarray) byref(_actualarraysize)) for n in range(arraysize): readingarray[n] = _c_readingarray[n] actualarraysize[0] = _c_actualarraysize value ```` In the end it turned out the value of `acutalarraysize` is successfully changed But the value of `readingarray` did not change after calling the `c function prototype` How to deal with this situation is it because I should not have used the `byref`? 
| Yes, you do not need byref when passing an array. And using a list as a default argument value will cause unexpected bugs. You can return the array and the actual array size from your Python function. Here is my code: ````from ctypes import *

test = cdll.LoadLibrary("test")
test.function_test.argtypes = [c_int, POINTER(c_double), POINTER(c_int)]

def Py_function_test(arraysize):
    _c_readingarray = (c_double * arraysize)()
    _c_actualarraysize = c_int()
    test.function_test(arraysize, _c_readingarray, byref(_c_actualarraysize))
    return _c_readingarray, _c_actualarraysize.value

array, size = Py_function_test(10)
print list(array), size
```` the C code: ````void function_test(int Arraysize, double readingArray[], int *Actual_rraysize)
{
    int i;
    *Actual_rraysize = Arraysize/2;
    for(i = 0; i < *Actual_rraysize; i++)
        readingArray[i] = i;
}
```` and the output of the Python code is: ````[0.0, 1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0] 5
````
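The key point - a `(c_double * n)()` array is already passed to C as a pointer, so `byref` is unnecessary and mutations made through that memory are visible afterwards - can be sketched without compiling the C library:

```python
import ctypes

arraysize = 10
buf = (ctypes.c_double * arraysize)()   # what the C side sees as double*
actual = ctypes.c_int(0)

# Simulate what function_test() would do through the same memory
actual.value = arraysize // 2
for i in range(actual.value):
    buf[i] = float(i)

result = list(buf)[:actual.value]
```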
pynfc-AttributeError (Rpi) After installing pynfc on our Raspberry pi running Raspbian we encountered the following error after running the command `python mifareauth py`: <blockquote> Traceback (most recent call last): File "/home/pi/uitgepakt/pynfc-0 0 4/pynfc py" line 22 in NDO_HANDLE_CRC = nfc NDO_HANDLE_CRC AttributeError: 'module' object has no attribute 'NDO_HANDLE_CRC' </blockquote> We could not find a solution to this problem I hope someone can help us out | pynfc 0 0 4 is outdated; you will likely have better luck using the latest version which can be obtained via Subversion as per the <a href="https://code google com/p/pynfc/source/checkout" rel="nofollow">instructions on the Google Code project pages</a> Note however that the project does not appear to have been maintained for almost two years and libnfc (the underlying API that pynfc uses) has moved on a lot during that time Significant code changes have required a lot of backward compatibility to be sacrificed causing pynfc to no longer work with recent versions of libnfc At the time of writing the latest version of pynfc is 0 0 7 which works perfectly with <a href="https://code google com/p/libnfc/source/browse/?name=libnfc-1 5 1" rel="nofollow">libnfc 1 5 1</a> A small tweak may be required to get the latter working on the Raspberry Pi <them>if</them> your NFC hardware is connected via UART which involves little more than adding Raspbian's default path to the device node (`/dev/ttyAMA0`) to `libnfc/buses/uart_posix c` |
Python: access objects from another module I am a very inexperienced programmer creating a game (using Python 3 3) as a learning exercise I currently have a main module and a combat module The people in the game are represented by instances of class "Person" and are created in the main module However the combat module obviously needs access to those objects Furthermore I am probably going to create more modules later that will also need access to those objects How do I allow other modules to access the Persons from main py? As things stand main py has ````import combat ```` at the top; adding ````import main ```` to combat py does not seem to help Should I instantiate my objects in a separate module (common py?) and import them to every module that needs to access them? | Yes you should definitely factor this out What you tried is circular imports between your modules and that can be very problematic If `combat` imports `main` and `main` imports `combat` then you may get an exception because `main` will not have finished executing when `combat` starts executing for the import Assuming `main` is your start up script it should probably do nothing more than instantiate a class or call a method from another module Avoid global variables too Even if it does not seem like they will be a problem now that can bite you in the behind later on That said you can reference members of a module like so: ````import common x = common some_method_in_common() y = common SomeClass() ```` or ````from common import SomeClass y = SomeClass() ```` Personally I generally avoid referencing a method from another module without qualifying it with the module name but this is also legal: ````from common import some_method_in_common x = some_method_in_common() ```` An alternate syntax of which is discouraged is ````from common import * y = SomeClass() ```` This will import every member of common into the current scope that does not start with an underscore (`_`) I believe the reason this is 
discouraged is because of what happens when you have two imports: ````from common import * from some_other_module import * y = SomeClass() ```` Which module does `SomeClass` come from? There is no way to tell other than to go look at the two modules. Worse, what if <strong>both</strong> modules define `SomeClass`, or `SomeClass` is later added to `some_other_module`? |
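The readability difference between the qualified and unqualified styles above can be illustrated with a standard-library module standing in for the hypothetical `common`:

```python
# Using stdlib json in place of the hypothetical "common" module:
# the qualified call makes the member's origin obvious at the call site.
import json
from json import dumps

x = json.dumps({"a": 1})  # qualified: clearly comes from json
y = dumps({"a": 1})       # unqualified: origin is less obvious

# Both forms call the same function and produce the same result
assert x == y
```

Either way, the members live in one place and every other module imports them from there, which is exactly the factoring that avoids the circular import.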
Can pickle handle multiple object references If I have objects `a` and `b` and both reference object `obj`, what happens when I pickle and then restore the objects? Will the pickled data 'know' that `a` and `b` both referenced the same object and restore everything accordingly, or will the two get two different (and initially equal) objects? | Yes, shared objects will only get serialized once (the pickle protocol can even handle circular references). From the <a href="http://docs.python.org/library/pickle.html#relationship-to-other-python-modules">documentation</a>: <blockquote> The `pickle` module keeps track of the objects it has already serialized, so that later references to the same object won't be serialized again. `marshal` doesn't do this. This has implications both for recursive objects and object sharing. Recursive objects are objects that contain references to themselves. These are not handled by `marshal`, and in fact attempting to marshal recursive objects will crash your Python interpreter. Object sharing happens when there are multiple references to the same object in different places in the object hierarchy being serialized. `pickle` stores such objects only once and ensures that all other references point to the master copy. Shared objects remain shared, which can be very important for mutable objects. </blockquote>
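A quick sketch demonstrating the behaviour (note that sharing is only preserved among objects serialized in the same `dumps` call):

```python
import pickle

obj = {"hp": 10}
a = {"ref": obj}
b = {"ref": obj}

# Pickle both containers together so the shared reference survives
restored_a, restored_b = pickle.loads(pickle.dumps((a, b)))

# Both restored containers point at a single restored copy of obj,
# so a mutation through one view is visible through the other
assert restored_a["ref"] is restored_b["ref"]
restored_a["ref"]["hp"] = 5
assert restored_b["ref"]["hp"] == 5
```

Had `a` and `b` been pickled in two separate `dumps` calls, each stream would carry its own copy of `obj` and the restored references would no longer be shared.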
What age did the Archbishop of Milan not have to be below to serve in office? | null |
How to execute Python and bat scripts from a web page on a local machine I want to create a web page with links to different Python scripts and bat files, which can call a bat file with parameters specified in a web form. The purpose of this web page is for use on a local machine only. I tried with JavaScript and PHP, but I am not able to open any local .bat or .py file on the machine. How can I do this? | Learn more about `exec` at <a href="http://php.net/manual/en/function.exec.php" rel="nofollow">http://php.net/manual/en/function.exec.php</a> ````<?php exec("batfilename.bat"); ?> ````
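If you would rather not run PHP, a minimal sketch using only Python's standard library can serve the page and run the script; the script name `task.bat` is a placeholder for your own, and binding to 127.0.0.1 keeps the page reachable from this machine only:

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

SCRIPT = ["task.bat"]  # hypothetical script name; replace with your own

class RunHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/run"):
            # Fixed command list, no shell: nothing from the URL
            # reaches the command line, avoiding injection
            result = subprocess.run(SCRIPT, capture_output=True)
            body = result.stdout
        else:
            body = b'<a href="/run">run script</a>'
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

def serve():
    # 127.0.0.1 keeps the server local-only
    HTTPServer(("127.0.0.1", 8000), RunHandler).serve_forever()
```

Call `serve()` and browse to http://127.0.0.1:8000 to get a link that triggers the script. Passing form parameters to the command would need explicit validation first, for the same injection reasons.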
Data changes while interpolating data frame using Pandas and numpy I am trying to calculate degree hours based on hourly temperature values. The data that I am using has some missing days and I am trying to interpolate that data. Below is some part of the data; ````2012-06-27 19:00:00 24 2012-06-27 20:00:00 23 2012-06-27 21:00:00 23 2012-06-27 22:00:00 16 2012-06-27 23:00:00 15 2012-06-29 00:00:00 15 2012-06-29 01:00:00 16 2012-06-29 02:00:00 16 2012-06-29 03:00:00 16 2012-06-29 04:00:00 17 2012-06-29 05:00:00 17 2012-06-29 06:00:00 18 2014-12-14 20:00:00 1 2014-12-14 21:00:00 0 2014-12-14 22:00:00 -1 2014-12-14 23:00:00 8 ```` The full code is; ````import pandas as pd import matplotlib.pyplot as plt import numpy as np filename = 'Temperature12.xls' df_temp = pd.read_excel(filename) df_temp = df_temp.set_index('datetime') ts_temp = df_temp['temp'] def inter_lin_nan(ts_temp, rule): ts_temp = ts_temp.resample(rule) mask = np.isnan(ts_temp) # interpolating missing values ts_temp[mask] = np.interp(np.flatnonzero(mask), np.flatnonzero(~mask), ts_temp[~mask]) return ts_temp ts_temp = inter_lin_nan(ts_temp, '1H') print ts_temp['2014-06-28':'2014-06-29'] def HDH(Tcurr, Tref=15.0): if Tref >= Tcurr: return (Tref-Tcurr)/24 else: return 0 df_temp['H-Degreehours'] = df_temp.apply(lambda row: HDH(row['temp']), axis=1) df_temp['CDD-CUMSUM'] = df_temp['C-Degreehours'].cumsum() df_temp['HDD-CUMSUM'] = df_temp['H-Degreehours'].cumsum() df_temp1 = df_temp['H-Degreehours'].resample('H', how=sum) print df_temp1 ```` Now I have two questions; while using the `inter_lin_nan` function it does interpolate the data, but it also changes the next day's data, and that data is totally different from the one available in the excel file. Is this common, or have I missed something?
Second question: at the end of the code I am trying to add hourly degree-day values, and that is why I have created another DataFrame, but when I print that DataFrame it still has NaN values as in the original data file. Could you please tell me why this is happening? I may be missing something very obvious, as I am new to Python | Do not use numpy when pandas has its own version: ````df = pd.read_csv(filepath) df = df.asfreq('1d') # get a timeseries with index timestamps each day df['somelabel'] = df['somelabel'].interpolate(method='linear') # interpolate nan values ```` Use `asfreq` to add the required frequency of timestamps to your time series, and use `interpolate()` to interpolate nan values only <a href="http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.Series.interpolate.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.Series.interpolate.html</a> <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.asfreq.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.asfreq.html</a>
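A self-contained sketch of that pattern on a tiny made-up hourly series: two readings three hours apart, where `asfreq` inserts the missing hours as NaN and `interpolate` fills only those, leaving the existing values untouched.

```python
import pandas as pd

# Two hourly temperature readings three hours apart (made-up data)
ts = pd.Series(
    [15.0, 18.0],
    index=pd.to_datetime(["2012-06-27 23:00", "2012-06-28 02:00"]),
)

# asfreq reindexes to a regular hourly grid, inserting NaN for the
# missing hours; interpolate then fills those gaps linearly
filled = ts.asfreq("1H").interpolate(method="linear")

assert filled["2012-06-28 00:00"] == 16.0  # filled by interpolation
assert filled["2012-06-27 23:00"] == 15.0  # original value untouched
```

Because only NaN slots are written, this avoids the symptom in the question where values on the following day appeared to change.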