Columns: id (stringlengths 5–27) · question (stringlengths 19–69.9k) · title (stringlengths 1–150) · tags (stringlengths 1–118) · accepted_answer (stringlengths 4–29.9k)
_cs.61038
I am trying to classify the data in a database column. The DB has about 90 million entries. The main goal is to find the different patterns in the column so I can leverage them to create look-alike data. The column has entries that readily suggest patterns, such as:

CUST1212, CUST1213, CUST1214... (sequential number after CUST)
CODE1213, CODE1242, CODE1289... (random numbers after CODE)
CUST1213, MUST8324, FIFA12313...
009-123-2123, 003-124-2314, 006-213-5322
INTER122, INTER222, INTER322... (number increasing in batches of 100)
OM|ON|TO, IO|OI|UH... (pipe-delimited)
some 6- and 7-digit numbers

The issue is that there are so many patterns that creating them manually from the data looks out of the question. I would really appreciate it if someone could point me to how I can build a collection of patterns, preferably ordered by usage frequency. What I have started with: 1. Segregate the data by length, since a pattern usually generates strings of the same length; sort them and store them in different files. 2. Classify them further by alphabetic or numeric prefix. 3. For the alpha-prefixed entries, segregate further on common leading letters. Now I seem to be lost, and I am sincerely hoping someone can point out what my options are and which is probably best to take. If at the end of the computation I get something like the patterns in the data and their frequencies, e.g. CUST[0-9]{4} -> 10k times, [a-zA-Z]{4} -> 50k times, [0-9]{3}-[0-9]{3}-[0-9]{4} -> 10k times, etc., it would be great. Thanks
Find string patterns preferably in regex for string streams
regular expressions;data mining;classification;pattern recognition
null
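The frequency-counting goal in the question above can be sketched cheaply (a hedged sketch, assuming Python; the `signature` helper and example values are invented for illustration): collapse each value into a regex-shaped signature by replacing runs of letters and digits with quantified character classes, then tally the signatures with a Counter.

```python
import re
from collections import Counter

def signature(value: str) -> str:
    """Collapse a value into a regex-shaped pattern: runs of letters or
    digits become quantified character classes; everything else is escaped."""
    out = []
    for match in re.finditer(r"[A-Za-z]+|[0-9]+|.", value):
        token = match.group()
        if token.isalpha():
            out.append(f"[A-Za-z]{{{len(token)}}}")
        elif token.isdigit():
            out.append(f"[0-9]{{{len(token)}}}")
        else:
            out.append(re.escape(token))  # delimiters like '-' or '|'
    return "".join(out)

values = ["CUST1212", "CODE1242", "009-123-2123", "003-124-2314"]
patterns = Counter(signature(v) for v in values)
# patterns now maps each signature to its frequency in the sample
```

A second pass could merge or specialize signatures (e.g. detecting that the letter run is always the literal prefix CUST), but even this one-pass version yields the pattern -> frequency table the question asks for.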
_webmaster.15113
I'm redesigning a website. For certain content areas the layout is fine at my text size but screws up if I set the text any bigger. I often resize pages with Firefox, but the whole page resizes so the layout still works. So, should I worry about users having larger text but the same CSS otherwise? I don't know how to test for this sort of thing. The site works fine with every browser I've looked at it with. I know some usability devices change layouts but don't they ignore normal styles altogether?
Do users resize text?
usability
It really depends on the target audience, I'd say. If it's a public-facing website, I think you should hedge your bets and try to make it look good (or at least usable) with increased text size; if it's an internal website, perhaps it doesn't matter so much. Sorry for the vague answer, but it's really something the customer (internal or external) can answer best.
_webapps.1318
Is there a web service that can receive faxes and store them in a PDF format? I would like to be able to keep my current fax number.
Is there a web service that can receive faxes and store them in a PDF format?
webapp rec;storage;pdf;fax
The answer given to this question can help you here. efax.com offers a service to keep your existing number; you have to ring them to discuss it. The number I see is a UK one - I'm guessing because the site detects your locale and offers a local service - so I won't post it here, as it won't apply to everyone.
_codereview.27144
I'm having trouble structuring the logic of OAuth integration in my web app. In the app, a user creates a Report that consists of data from their Google Analytics account.

User steps:

1. User clicks 'New Report'
2. Google presents the 'Allow Access' page for OAuth access
3. User is presented with a list of their GA web properties and selects one
4. A report is created using data from the selected web property

My issue is in structuring the code below. When the user clicks New Report, they are actually redirected to google_analytics#ga_session to begin the authorization process. The code to retrieve the user's web properties succeeds, but the code at the bottom needs to be refactored so it is reusable when retrieving web property data. The two main issues I can't figure out are how to make the GoogleAnalytics instance reusable and how to structure the OAuth redirects.

Retrieve web properties:

GoogleAnalyticsController

def ga_session
  client = OAuth2::Client.new(ENV['GA_CLIENT_ID'], ENV['GA_SECRET_KEY'], {
    :authorize_url => 'https://accounts.google.com/o/oauth2/auth',
    :token_url => 'https://accounts.google.com/o/oauth2/token'
  })
  redirect_to client.auth_code.authorize_url({
    :scope => 'https://www.googleapis.com/auth/analytics.readonly',
    :redirect_uri => ENV['GA_OAUTH_REDIRECT_URL'],
    :access_type => 'offline'
  })
end

def oauth_callback
  session[:oauth_code] = params[:code]
  redirect_to new_report_path
end

ReportsController

def new
  @report = Report.new
  ga_obj = GoogleAnalytics.new
  ga_obj.initialize_ga_session(session[:oauth_code])
  @ga_web_properties = ga_obj.fetch_web_properties
end

GoogleAnalytics model

def initialize_ga_session(oauth_code)
  client = OAuth2::Client.new(ENV['GA_CLIENT_ID'], ENV['GA_SECRET_KEY'], {
    :authorize_url => 'https://accounts.google.com/o/oauth2/auth',
    :token_url => 'https://accounts.google.com/o/oauth2/token'
  })
  access_token_obj = client.auth_code.get_token(oauth_code, :redirect_uri => ENV['GA_OAUTH_REDIRECT_URL'])
  self.user = Legato::User.new(access_token_obj)
end

def fetch_web_properties
  self.user.web_properties
end

Retrieve web property data (when creating the report):

ReportsController

def create
  @report = Report.new(params[:report])
  @report.get_top_traffic_keywords(session[:oauth_code])
  create!
end

Report model

def get_keywords(oauth_code)
  ga = GoogleAnalytics.new
  ga.initialize_ga_session(oauth_code) # this is a problem b/c the user will be redirected to new_report_path after the callback
  self.url = ga.fetch_url(self.web_property_id)
  self.keywords = # Get keywords for self.url from another service
  keyword_revenue_data(oauth_code)
end

def keyword_revenue_data(oauth_code)
  ga = GoogleAnalytics.new
  ga.initialize_ga_session(oauth_code)
  revenue_data = # Get revenue data
end
Rails service + OAuth
ruby;ruby on rails;oauth
null
_webmaster.58597
I have read that Google Analytics supports click conversions (Tracking click conversions with Google Analytics). But I think I'd rather have conversions tracked within AdWords, so I have a single place where I can monitor the performance of specific campaigns. So, is there a way for me to set up click conversions within AdWords? I could not find it here: https://support.google.com/adwords/answer/2375435 or here: https://support.google.com/adwords/answer/1722054

What I need specifically:

- be able to measure whether a link with a specific class was clicked, and of course assign that conversion to the campaign via which the user arrived
- only measure a conversion if the link was clicked by a user who came via one of my AdWords campaigns (so a user who navigated directly to my site and clicked that link would not be tracked as a conversion)

UPDATE: I do not have a landing page on which I can measure a conversion. Since I'm an affiliate and don't sell the products myself, but redirect users to 3rd-party publisher sites, I don't know whether a user actually buys the product. But I can get a pretty good indication of a conversion by measuring the click on a link that is directed to an external site. See how it works here: http://www.wonderweddings.com/weddingshop. From this page, and from any product detail page, the user can click through to an external site. THAT is the click I want to set as a conversion.
Use Google Adwords to track click conversions
google adwords;tracking;conversions
null
_codereview.154933
I solved this programming challenge:

Given a 2D board and a word, find if the word exists in the grid. The word can be constructed from letters of sequentially adjacent cells, where adjacent cells are those horizontally or vertically neighboring. The same letter cell may not be used more than once.

For example, given board =

[ ['A','B','C','E'],
  ['S','F','C','S'],
  ['A','D','E','E'] ]

word = ABCCED -> returns true
word = SEE -> returns true
word = ABCB -> returns false

class Solution {
public:
    bool DFS(vector<vector<char>> &board, string word, vector<vector<bool>> visited, int i, int j, int curr) {
        if (i < 0 || j < 0 || i >= board.size() || j >= board[0].size()) {
            return false;
        }
        if (visited[i][j] || board[i][j] != word[curr]) {
            return false;
        }
        visited[i][j] = true;
        ++curr;
        if (curr == word.size()) {
            return true;
        }
        return DFS(board, word, visited, i + 1, j, curr) || // Down
               DFS(board, word, visited, i, j + 1, curr) || // Right
               DFS(board, word, visited, i - 1, j, curr) || // Up
               DFS(board, word, visited, i, j - 1, curr);   // Left
    }

    bool exist(vector<vector<char>> &board, string word) {
        for (int i = 0; i < board.size(); ++i) {
            for (int j = 0; j < board[i].size(); ++j) {
                if (word[0] == board[i][j]) {
                    vector<vector<bool>> visited(board.size(), vector<bool>(board[0].size(), false));
                    if (DFS(board, word, visited, i, j, 0)) {
                        return true;
                    }
                }
            }
        }
        return false;
    }
};

My solution ranks around 2% faster compared to other solutions in the same language. What I mainly want from reviews is performance improvements.
Word Search in LeetCode
c++;performance;programming challenge;c++11
null
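No answer was accepted for this record, so as a hedged aside: the biggest cost in the posted code is that board, word, and especially visited are passed by value and copied on every recursive call. The standard fix is to share one visited structure and undo the mark when backtracking. A sketch of that idea in Python (the equivalent change in the C++ is to pass visited by reference):

```python
def exist(board, word):
    """Word search with one shared visited set and backtracking."""
    rows, cols = len(board), len(board[0])
    visited = set()

    def dfs(i, j, k):
        if k == len(word):                      # matched every character
            return True
        if not (0 <= i < rows and 0 <= j < cols):
            return False
        if (i, j) in visited or board[i][j] != word[k]:
            return False
        visited.add((i, j))                     # mark in place ...
        found = (dfs(i + 1, j, k + 1) or dfs(i - 1, j, k + 1) or
                 dfs(i, j + 1, k + 1) or dfs(i, j - 1, k + 1))
        visited.discard((i, j))                 # ... and undo on the way back
        return found

    return any(dfs(i, j, 0) for i in range(rows) for j in range(cols))

board = [['A', 'B', 'C', 'E'],
         ['S', 'F', 'C', 'S'],
         ['A', 'D', 'E', 'E']]
```

Because nothing is copied, each recursive call is O(1) bookkeeping instead of O(rows × cols) allocation, which is usually the difference between the slow and fast LeetCode submissions here.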
_unix.42357
At work, I would like to use KDE's Dolphin as a file manager. However, our home directories reside on an AFS share [1]. When starting Dolphin, it becomes unresponsive for dozens of minutes. Stracing it reveals that it tries to open all the nodes in our AFS tree:

openat(AT_FDCWD, "/afs/somewhereElse.tld", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC

I need to stop Dolphin from doing that; this behaviour makes the program completely unusable on AFS trees. Is there some setting that controls this?

[1] If you have never worked with AFS before, for the sake of this question, assume that there is a root directory that has subtrees from different universities, research institutes etc. mounted below it. The data in those subtrees really resides at the remote sites, so access is slow and resource-intensive.
How can I stop dolphin from reading my entire home directory tree in order to make it usable on AFS?
kde;afs;dolphin
While this doesn't completely solve the problem raised here, there is an option called -dynroot-sparse in the OpenAFS client these days, which tries to reduce the number of directories visible under /afs. That can help keep processes from trying to traverse all AFS cells in the world just by reading the directories in /afs (but it doesn't prevent anything from traversing everything in your local cell). See afsd(8). KDE really just needs to detect networked filesystems and default to not traversing the whole thing (many programs do similar things, simply by detecting certain filesystems like AFS, NFS, sshfs, etc.). Here is a bug about this general issue, if you want to raise it there: https://bugs.kde.org/show_bug.cgi?id=178678. It sounds like this is still a problem.
_cstheory.20401
Regex equivalence is a hard problem: it is PSPACE-complete, so all known algorithms take exponential time (and the classical DFA-based construction exponential space) in the worst case. Are there any approximation/sub-optimal algorithms with some theoretical guarantees on equivalence available?
Sub optimal regex equivalence
approximation algorithms;automata theory;regular expressions
null
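Since this record has no accepted answer, one cheap sub-optimal approach worth noting is Monte Carlo testing: sample random strings over a fixed alphabet and compare acceptance. The guarantee is one-sided only: a disagreement is a definite witness of inequivalence, while "no disagreement found" proves nothing. A hedged sketch using Python's re module (which accepts a superset of classical regular expressions, so restrict inputs to the classical syntax):

```python
import random
import re

def probably_equivalent(r1, r2, alphabet="ab", max_len=10, trials=10000, seed=0):
    """One-sided randomized equivalence test: return a counterexample string
    on which the two regexes disagree, or None if no disagreement was found."""
    p1, p2 = re.compile(r1), re.compile(r2)
    rng = random.Random(seed)  # fixed seed keeps runs reproducible
    for _ in range(trials):
        length = rng.randint(0, max_len)
        s = "".join(rng.choice(alphabet) for _ in range(length))
        if bool(p1.fullmatch(s)) != bool(p2.fullmatch(s)):
            return s      # definite witness of inequivalence
    return None           # "probably equivalent": no guarantee the other way
```

Sampling a length first and then a string of that length makes short witnesses (like the empty string separating a* from a+) far more likely to be drawn than under uniform sampling over all strings up to max_len.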
_codereview.63995
This is a sort of follow-up to my previous question: Counting the number of character occurrences.

This time I have written code that still counts the number of different characters in a string, but with the added ability to find categories of characters. For example, finding how many numbers are in the string, or how many punctuation characters are in the string. Is there anything I should be doing differently? Any improvements? Also, I'm trying to learn better OOP design, software design and best practice, so any advice on that would also be helpful.

A couple of notes: From what I've read, immutable objects are preferred, so I created the results class to have private setters and return a read-only dictionary to stop something else from changing it. Is the way I've created the CharacterCountResult object from CharacterCount a good thing to do? As in, am I doing it correctly?

static void Main(string[] args)
{
    var i = CharacterCount.Count(@"Hello, World! $% ^ powejdoiwr3u?!?!!/1;';'\\z\\]p[\][z]z\,.,/???");

    // Demonstrating some of the available properties
    Console.WriteLine("Alphanumeric: {0}\nLowercase: {1}\nUppercase: {2}\nPunctuation: {3}\nDigits: {4}\nSymbols: {5}",
        i.LetterAndDigitCount, i.LowercaseCount, i.UppercaseCount,
        i.PunctuationCount, i.DigitCount, i.SymbolCount);

    foreach (var character in i.GetCharacterDictionary())
    {
        Console.WriteLine("{0} - {1}", character.Key, character.Value);
    }
}

This is the class that counts the characters in the string:

class CharacterCount
{
    public static CharacterCountResult Count(string stringToCount)
    {
        var tempDictionary = new Dictionary<char, uint>();
        uint controlCount = 0;
        uint highSurrogatecount = 0;
        uint lowSurrogateCount = 0;
        uint whiteSpaceCount = 0;
        uint symbolCount = 0;
        uint punctuationCount = 0;
        uint separatorCount = 0;
        uint letterCount = 0;
        uint digitCount = 0;
        uint numberCount = 0;
        uint letterAndDigitCount = 0;
        uint lowercaseCount = 0;
        uint upperCaseCount = 0;

        // Build dictionary of characters and occurrence of characters.
        foreach (var character in stringToCount)
        {
            if (!tempDictionary.ContainsKey(character))
            {
                tempDictionary.Add(character, 1);
            }
            else
            {
                tempDictionary[character]++;
            }
        }

        // Iterate over string and count various types of characters.
        foreach (var character in stringToCount)
        {
            if (char.IsNumber(character)) { numberCount++; }
            if (char.IsPunctuation(character)) { punctuationCount++; }
            if (char.IsSeparator(character)) { separatorCount++; }
            if (char.IsSymbol(character)) { symbolCount++; }
            if (char.IsUpper(character)) { upperCaseCount++; }
            if (char.IsWhiteSpace(character)) { whiteSpaceCount++; }
        }

        var result = new CharacterCountResult(controlCount, highSurrogatecount, lowSurrogateCount,
            whiteSpaceCount, symbolCount, punctuationCount, separatorCount, letterCount,
            digitCount, numberCount, letterAndDigitCount, lowercaseCount, upperCaseCount,
            tempDictionary);
        return result;
    }
}

And this class is the result of the character counting. It has properties which can be used to find the number of different types of characters, as well as a method which returns a ReadOnlyDictionary<char, uint> that can be used to find the number of times each specific character occurs:

class CharacterCountResult
{
    // Unicode special characters.
    public uint ControlCount { get; private set; }
    public uint HighSurrogateCount { get; private set; }
    public uint LowSurrogateCount { get; private set; }

    // Textual special characters.
    public uint WhiteSpaceCount { get; private set; }
    public uint SymbolCount { get; private set; }
    public uint PunctuationCount { get; private set; }
    public uint SeparatorCount { get; private set; }

    // Letters, digits, numbers.
    public uint LetterCount { get; private set; }
    public uint DigitCount { get; private set; }
    public uint NumberCount { get; private set; }
    public uint LetterAndDigitCount { get; private set; }
    public uint LowercaseCount { get; private set; }
    public uint UppercaseCount { get; private set; }

    private Dictionary<char, uint> _characterDictionary = new Dictionary<char, uint>();

    public CharacterCountResult(uint controlCount, uint highSurrogateCount, uint lowSurrogateCount,
        uint whiteSpaceCount, uint symbolCount, uint punctuationCount, uint separatorCount,
        uint letterCount, uint digitCount, uint numberCount, uint letterAndDigitCount,
        uint lowercaseCount, uint uppercaseCount, Dictionary<char, uint> characterDictionary)
    {
        ControlCount = controlCount;
        HighSurrogateCount = highSurrogateCount;
        LowSurrogateCount = lowSurrogateCount;
        WhiteSpaceCount = whiteSpaceCount;
        SymbolCount = symbolCount;
        PunctuationCount = punctuationCount;
        SeparatorCount = separatorCount;
        LetterCount = letterCount;
        DigitCount = digitCount;
        NumberCount = numberCount;
        LetterAndDigitCount = letterAndDigitCount;
        LowercaseCount = lowercaseCount;
        UppercaseCount = uppercaseCount;
        _characterDictionary = characterDictionary;
    }

    public ReadOnlyDictionary<char, uint> GetCharacterDictionary()
    {
        var readOnly = new ReadOnlyDictionary<char, uint>(_characterDictionary);
        return readOnly;
    }
}
Counting occurrences of different categories of characters
c#;beginner;object oriented;design patterns
The good:

- Good naming, both in following conventions and in picking expressive names.
- Your approach is good procedurally, e.g. good use of the dictionary and the built-in char methods.
- Putting the results in their own class and making it immutable through the use of private setters and the ReadOnlyDictionary is smart.

In terms of areas for potential improvement, reviewing is made somewhat difficult by the nature of the made-up requirements this code is fulfilling. To demonstrate why, here's my train of thought. It's not very realistic that you always want to get all this information together, so the first thing might be to think about how to extract each piece of information individually. The first step in doing this would be to extract the individual counts into their own methods, like:

public int CountLetters(string stringToCount)
{
    var letterCount = 0;
    foreach (var character in stringToCount)
    {
        if (char.IsLetter(character))
        {
            letterCount++;
        }
    }
    return letterCount;
}

Unfortunately, as you can see, this is pretty quickly going to end up with a load of really similar methods, all of which are frustratingly large. The solution to this is two-fold. First, because the only difference between the methods is which char function we call, the use of Func can cut it down to just one method:

public int CountLetters(string stringToCount, Func<char, bool> predicate)
{
    var letterCount = 0;
    foreach (var character in stringToCount)
    {
        if (predicate(character))
        {
            letterCount++;
        }
    }
    return letterCount;
}

Second, we can use LINQ:

public int CountLetters(string stringToCount, Func<char, bool> predicate)
{
    return stringToCount.Count(predicate);
}

And now we find the method is so simple that it probably doesn't need to be its own method at all. For example, if in the middle of some other method I wanted to count how many characters in a string were letters, I could just do:

var letterCount = myString.Count(char.IsLetter);

This goes back to what I was saying about this being difficult code to review, because it essentially exists to fulfill a requirement which is already fulfilled so simply by the .NET framework. However, if we go with the fiction that exposing the counts for all those different char methods for a single string is something done commonly throughout your program, then your approach is sensible. Using the LINQ-style counting above, you can remove the second foreach statement and all the ifs inside, replacing them with one-liners. You can also remove all the variable declarations and feed the counts right into the result constructor, since they are so simple:

return new CharacterCountResult(
    stringToCount.Count(char.IsControl),
    //etc...

Going forward, my suggestion is that you'll learn more about design by tackling problems that are a little more realistic than this one, and they will probably garner more informative reviews, too.
_softwareengineering.348367
I'm fiddling around with OOP while building simple CRUD systems. I've decided to focus on using the Repository pattern for separating an object's business logic from its data persistence (actually saving the object in a persistent data store, i.e. a database).

Saving a simple object is straightforward:

class Customer {
  setData(data) {
    this.data = data;
  }
}

// PUT Customer
customer = new Customer();
customer.setData(data);
customerRepo.save(customer);

But saving a composite object becomes a bit complicated. What happens if my Customer class now includes other objects that also need to be persisted in the DB? In the following example, setting a customer's data also needs to create an AuditTrail, which is a set of differences between the previous data and the new data passed to customer.setData(). AuditTrail is a class, and in this example it's a classic has-a relationship between Customer and AuditTrail:

class Customer {
  setData(data) {
    // instantiate an auditTrailCalculator with some constants from the DB
    auditTrail = auditTrailRepo.create();
    // `calculate()` produces a diff between the previous data and the new data
    this.auditTrail = auditTrail.calculate(this.data, data);
  }
  getAuditTrail() {
    return this.auditTrail;
  }
}

// PUT customer
customer = Customer();
customer.setId(customerId);
customer.setData(data);
customerRepo.update(customer);
auditTrailRepo.insert(customer.getAuditTrail());

The above example looks clumsy. I'm instantiating the AuditTrail object using its repo from within the Customer object. For performing the whole update of a Customer I clumsily:

1. Instantiate a new Customer
2. Set its data
3. Get the auditTrail that was generated inside the object
4. Save the AuditTrail to the DB using the AuditTrailRepo
5. Save the Customer to the DB using the CustomerRepo

My questions:

- Is it correct to instantiate objects from their repos from within other objects?
- Should I create a factory function instead, which instantiates both objects, Customer and AuditTrail, and returns a composition of the two?
- How should I handle saving this composite object?
Handling composite objects in the Repository Pattern
object oriented
This is a clear example of an Aggregate as seen in Domain-Driven Design. In this case, the AuditTrail is part of the Customer's aggregate. And according to DDD, repositories are per aggregate, not per entity. So in your case, there would be only a CustomerRepository, which would write an audit trail every time a Customer is updated.
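To make the per-aggregate idea concrete, here is a hedged sketch in Python (all class and method names are invented for illustration, not taken from the question's codebase): the Customer aggregate accumulates its audit trails internally, and a single CustomerRepository persists both in one save call, so calling code never touches an AuditTrail repo.

```python
class AuditTrail:
    def __init__(self, old, new):
        # diff of changed fields: {field: (before, after)}
        self.changes = {k: (old.get(k), new.get(k))
                        for k in old.keys() | new.keys()
                        if old.get(k) != new.get(k)}

class Customer:
    def __init__(self, data=None):
        self.data = data or {}
        self.pending_trails = []   # part of the aggregate, never saved directly

    def set_data(self, data):
        self.pending_trails.append(AuditTrail(self.data, data))
        self.data = data

class CustomerRepository:
    """One repository for the whole aggregate: saving a Customer
    also persists the audit trails it accumulated."""
    def __init__(self, db):
        self.db = db   # stand-in for a real datastore: a plain dict of tables

    def save(self, customer):
        self.db.setdefault("customers", []).append(customer.data)
        self.db.setdefault("audit_trails", []).extend(
            t.changes for t in customer.pending_trails)
        customer.pending_trails.clear()
```

With this shape, the controller code from the question shrinks to set_data() followed by a single repo.save(customer); the clumsy two-repo dance disappears.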
_unix.379087
I have a Linux Mint Cinnamon desktop running in HiDPI mode. When I try to connect to it with a VNC client from my MacBook (Retina/HiDPI display), the desktop shows up about 4x too big. I tried with the Mac's built-in VNC viewer and also the RealVNC client. I tried different settings in RealVNC's expert-mode preferences (Scaling: AspectFit, 25%, etc.); nothing seems to change the scaling.
VNC to a linux desktop in HiDPI mode won't scale properly
linux;osx;vnc;display
null
_codereview.164194
I've made the following backward transformation from a transformed source x,y point to the corresponding index of the actual image that I have, so I can avoid black spots appearing in my barrel (fisheye) lens image. I've used the formula focal_length * arctan(radius / focal_length) for image undistortion. Here is the code I've used (it's part of another class, but the rest of the class is not important):

double BarrelAbberationClass::toDistortedRadius(const double r) const {
    return m_focal_length * atan2(r, m_focal_length);
}

cv::Vec2d BarrelAbberationClass::transformBackward(const cv::Point &source_xy, const cv::Mat &affine, const cv::Vec2d &center) {
    const double x_affined = source_xy.x * affine.at<double>(0, 0) + affine.at<double>(0, 2);
    const double y_affined = source_xy.y * affine.at<double>(1, 1) + affine.at<double>(1, 2);
    const double r = cv::norm(cv::Vec2d(x_affined, y_affined));
    const double theta = atan2(y_affined, x_affined);
    const double new_r = toUnDistortedRadius(r);
    const double new_i = (new_r * sin(theta) + center[1]);
    const double new_j = (new_r * cos(theta) + center[0]);
    return cv::Vec2d(new_j, new_i);
}

Basically, given a source x,y point, an affine matrix for the transformation (accounting for the center of my image actually being width/2 and height/2, and the visual bounds I would even be able to grab from in the non-transformed image), I warp the affined x and y points to the correct undistorted plane and return the calculated row and column i, j that correspond to points in the original image (these must be floating point for use later in bilinear interpolation of positions in between others). I should also note that the code with affine.at<double>(...) was originally matrix operations; I saw in the profiler that this was slowing down my code. Removing the matrix creation routines caused by those matrix operations increased my speed by 4x.

When profiling this code (I'm forced to use Windows + MinGW for this, so I don't have many convenient profiling options; I've been using gprof) it looks like tan, cos, sin, and atan2 take up the vast majority of the time at -O3, with atan2, cos and sin each taking about 20% of the time.

Here is the full implementation to give context:

double BarrelAbberationClass::toDistortedRadius(const double r) const {
    return m_focal_length * atan2(r, m_focal_length);
}

cv::Vec2d BarrelAbberationClass::transformBackward(const cv::Point &source_xy, const cv::Mat &affine, const cv::Vec2d &center) {
    const double x_affined = source_xy.x * affine.at<double>(0, 0) + affine.at<double>(0, 2);
    const double y_affined = source_xy.y * affine.at<double>(1, 1) + affine.at<double>(1, 2);
    const double r = std::hypot(x_affined, y_affined);
    const double new_r = toUnDistortedRadius(r);
    const double new_i = center[1] + (new_r / r) * y_affined;
    const double new_j = center[0] + (new_r / r) * x_affined;
    return cv::Vec2d(new_j, new_i);
}

double bilinearInterpolate(const cv::Mat &cv_source, const float x, const float y) {
    const int px = static_cast<int>(x);
    const int py = static_cast<int>(y);
    const double p1 = cv_source.at<double>(py, px);
    const double p2 = cv_source.at<double>(py, px + 1);
    const double p3 = cv_source.at<double>(py + 1, px);
    const double p4 = cv_source.at<double>(py + 1, px + 1);
    const float fx = x - px;
    const float fy = y - py;
    const float fx1 = 1.0f - fx;
    const float fy1 = 1.0f - fy;
    const float w1 = fx1 * fy1;
    const float w2 = fx * fy1;
    const float w3 = fx1 * fy;
    const float w4 = fx * fy;
    return p1 * w1 + p2 * w2 + p3 * w3 + p4 * w4;
}

double BarrelAbberationClass::toUnDistortedRadius(const double r) const {
    return m_focal_length * tan(r / m_focal_length);
}

cv::Point BarrelAbberationClass::transformForward(const cv::Point &source_xy, const cv::Vec2d &center) {
    const double x_translated = source_xy.x - center[0];
    const double y_translated = source_xy.y - center[1];
    const double r = std::hypot(x_translated, y_translated);
    const double new_r = toDistortedRadius(r);
    const uint64_t new_i = static_cast<uint64_t>(center[1] + (new_r / r) * y_translated);
    const uint64_t new_j = static_cast<uint64_t>(center[0] + (new_r / r) * x_translated);
    return cv::Point(new_j, new_i);
}

cv::Mat BarrelAbberationClass::getScaleMatrix(const cv::Rect &bounds, const cv::Vec2d &center) {
    const cv::Point new_tl = transformForward(bounds.tl(), center);
    const cv::Point new_br = transformForward(bounds.br(), center);
    const cv::Point diff = new_br - new_tl;
    const double scale_x = diff.x / static_cast<double>(bounds.width);
    const double scale_y = diff.y / static_cast<double>(bounds.height);
    const double scale = scale_x > scale_y ? scale_x : scale_y;
    cv::Mat scale_matrix = (cv::Mat_<double>(3, 3) << scale, 0, 0, 0, scale, 0, 0, 0, 1);
    return scale_matrix;
}

cv::Mat& BarrelAbberationClass::calculateAberation(cv::Mat& imageData) {
    const double center_x = imageData.size().width / 2;
    const double center_y = imageData.size().height / 2;
    const cv::Vec2d center(center_x, center_y);
    const cv::Mat translate = (cv::Mat_<double>(3, 3) << 1, 0, -center_x, 0, 1, -center_y, 0, 0, 1);
    const cv::Mat cpy = imageData.clone();
    imageData.setTo(cv::Scalar(0));
    const cv::Rect bounds(cv::Point(), cpy.size());
    const cv::Mat scale = getScaleMatrix(bounds, center);
    const cv::Mat affine = scale * translate;
    for (uint64_t i = 0; i < cpy.rows; ++i) {
        for (uint64_t j = 0; j < cpy.cols; ++j) {
            const cv::Vec2d new_dpoint = transformBackward(cv::Point(j, i), affine, center);
            const cv::Point new_point = {new_dpoint[0], new_dpoint[1]};
            if (bounds.contains(new_point)) {
                imageData.at<double>(i, j) = bilinearInterpolate(cpy, new_dpoint[0], new_dpoint[1]);
            }
        }
    }
    return imageData;
}

Here is the .h file:

class BarrelAbberationClass {
protected:
    const double m_focal_length;

public:
    BarrelAbberationClass(const double focal_length) : m_focal_length(focal_length) {}

    cv::Mat& calculateAberation(cv::Mat& imageData);
    double toDistortedRadius(const double r) const;
    double toUnDistortedRadius(const double r) const;
    cv::Point transformForward(const cv::Point &source_xy, const cv::Vec2d &center);
    cv::Mat getScaleMatrix(const cv::Rect &bounds, const cv::Vec2d &center);
    cv::Vec2d transformBackward(const cv::Point &source_xy, const cv::Mat &affine, const cv::Vec2d &center);
};
Backward transformation implementation for a barrel (fisheye) lens aberration in C++14 (using OpenCV)
c++;matrix;c++14
It might make no difference to performance, but it may be a little better to use std::hypot() rather than create a temporary cv::Vec2d.

You can avoid working with the angle theta simply by using the ratio new_r/r to scale the distance from the centre. Something like:

const double r = std::hypot(x_affined, y_affined);
const double new_r = toUnDistortedRadius(r);
const double new_i = center[1] + (new_r / r) * y_affined;
const double new_j = center[0] + (new_r / r) * x_affined;

I've parenthesized new_r / r just in case your compiler needs extra help to hoist this common expression.

I'm guessing that you're passing this function to a general-purpose transform function in OpenCV. If you have control over the order in which the output pixels are iterated, and if the affine transformation has no shear components (just translation and rotation), you might be able to reduce the calculation of r and new_r almost eightfold by observing that (relative to the centre position) r is the same for x,y and y,x. This will hit your locality of reference, though, so may not be as significant as it sounds!
_softwareengineering.342592
I read this in a book:

Most of the time, calls to third-party products are entangled throughout the code. But if you really abstracted the idea of a database out to the point where it simply provides persistence as a service, then you have the flexibility to change horses in midstream.

Could you please explain to me in plain English (or by example) what the idea written above in bold means?

EDIT: I quoted the paragraph above from a book called The Pragmatic Programmer, on page 60. The more appropriate tag for my question is reversibility, but it is not available.
persistence as a service: what does that mean?
scalability
<Something> as a service usually means that an application programmer can forget about an aspect or component of the system they are programming entirely. For instance, platform as a service means that you pay a cloud provider money, and they provide you with a ready-to-use running machine, no questions asked: you never again have to worry about security patches, server location, electricity cost or anything else that is usually necessary in order to maintain an array of machines usable for your purposes.

Analogously, persistence as a service would mean that storing things to a database and retrieving them is handled completely transparently. Ideally, you would create objects in your business domain and simply expect that they are still available and up to date the next time your code runs, without ever programming explicit calls to EntityManager.persist() or Transaction.commit() or any of the plumbing code that is usually necessary to achieve this. I'm uncertain how closely this ideal can actually be approached, but it would certainly be very nice to have.
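As a toy illustration of that ideal (a hedged sketch; PersistentSession and the dict-backed store are invented for this example, not a real library): business code only mutates plain objects, and a unit-of-work context writes them back when the block exits, so there is no explicit save or commit call in the domain logic.

```python
import json

class PersistentSession:
    """Toy unit of work: objects fetched through the session are written
    back to the store automatically when the with-block exits."""
    def __init__(self, store):
        self.store = store      # stand-in for a database: a plain dict
        self.tracked = {}

    def get(self, key, default):
        obj = json.loads(self.store.get(key, json.dumps(default)))
        self.tracked[key] = obj
        return obj

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        for key, obj in self.tracked.items():   # flush everything on exit
            self.store[key] = json.dumps(obj)

store = {}
with PersistentSession(store) as session:
    cart = session.get("cart:42", default={"items": []})
    cart["items"].append("book")   # plain domain logic, no commit() anywhere
# leaving the block persisted the change
```

Real ORMs approximate this with dirty tracking and transactions; the point of the sketch is only that persistence plumbing lives entirely outside the business code.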
_scicomp.14470
I was 7 years old when I learnt BASIC. Then I learnt C and Visual Basic until the age of 13. I then stopped programming for 4 years straight, and don't remember much about it now. I have lost the skill and need to learn it all over again. How can I relearn the basics and the terminology of programming using C? Can someone suggest an e-book for that?
Beginning Computer Programming
c
null
_unix.219258
I'm trying to install monit on an Ubuntu 12.04 server. I have it set up and configured (I think), but I'm not sure which user it's supposed to run as. My user on the server is called deploy, and my monitrc file looks like this:

$ ls -l /etc/monit/monitrc
-rwx------ 1 deploy deploy 10229 2015-07-30 12:38 /etc/monit/monitrc

i.e., it's owned by the user I log into the server with. I've started the monit daemon; I can see it running with ps and I can log into the web interface for it. What I'm unsure about is how to give it privileges to restart processes. For example, nginx: if I want to restart nginx myself I need to do

sudo /etc/init.d/nginx restart

Does this mean that monit needs to use sudo as well in order to restart it? Or should I configure monit with its own user, and set that user up so that it can restart nginx (and any other services monit needs to restart or access) without sudo?

Thanks, Max
User settings for monit? Should it run as root, or it's own user?
ubuntu;sudo;nginx;monit
Yes, monit either needs to use sudo or needs to run as the root user. Configuring monit as its own user with the correct permissions is also viable; however, it is probably the most involved of the potential solutions. Generally, running sudo from scripts is not viable as it will prompt for a password. It is possible to stop sudo prompting for a password in specific situations by editing /etc/sudoers. The answer to this question explains a suitable approach.
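If you go the sudo route, you will also need a sudoers rule so that monit can run the restart command non-interactively. As an illustration only (the dedicated "monit" user name and the init-script path are assumptions; adjust them to your setup), a drop-in file such as the following would allow passwordless restarts of exactly that one command:

```
# /etc/sudoers.d/monit -- edit with: visudo -f /etc/sudoers.d/monit
monit ALL=(root) NOPASSWD: /etc/init.d/nginx restart
```

With that in place, monit's restart program can be set to `sudo /etc/init.d/nginx restart` without triggering a password prompt.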
_unix.255226
I've spent a few hours trying to understand the differences between KVM and Xen without much success. Both are Type 1 hypervisors with comparable performance (source), and I don't understand what differentiates them.

The only specific needs I have are that the guest OS must not be able to interact with the host's files (which seems to be the default behaviour) and that both the host and the guest can use their own GPU for video rendering. That seems not to be a problem, as both Xen and KVM support some kind of GPU/VGA/PCI passthrough as long as there are two physical graphics cards.

So what are the differences between Xen and KVM? Is one or the other more suitable for graphics performance?

Thanks in advance for the help :)
Should I use KVM or Xen? What are the main differences?
kvm;graphics;xen;gpu
KVM is normally supported already via libvirt and the kernels in modern distros without much hassle. You just need a CPU that has the VT-d extensions (or AMD-V for AMD processors), and your BIOS has to have them enabled. After that, it's all about installing the necessary packages to get it going. Xen, yes, does support it. Xen is literally its own platform. A long time back, Xen was known for having the most documentation on IOMMU and VGA passthrough. Now you'll find multitudes of users using KVM in a similar manner with high rates of success (I am one of them, using F23 and a GTX 970 passed through to a Windows VM for gaming).

To answer your question, though: the primary difference is that KVM has had a lot of work put into it by Red Hat and the open source community, since Red Hat dropped Xen in 2009 in favor of KVM. This should in no way sway your decision. You'll find varying benchmarks that show KVM outperforming Xen on numerous occasions, with KVM being just 5% slower than bare metal. Xen does come with a bigger feature set than KVM out of the box, but a lot of it is for migrations and other things that you would consider enterprise. I personally believe you should try both and see what works better for you. There are many guides to choose from, based on your distribution of choice.
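Before choosing either, it's worth confirming that the CPU actually advertises the hardware virtualization flags mentioned above. A minimal sketch (the sample string below stands in for a real /proc/cpuinfo, which you would grep directly on your machine):

```shell
# Count virtualization flags (vmx = Intel, svm = AMD) in cpuinfo-style text.
# On a real system you would run:  grep -E -c 'vmx|svm' /proc/cpuinfo
sample='flags : fpu vme de pse vmx sse sse2'
printf '%s\n' "$sample" | grep -E -c 'vmx|svm'
```

A non-zero count means the extensions are present; they must still be enabled in the BIOS.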
_codereview.74775
I'm currently creating an AI for the online contest Vindinium. It also counts as an exam for my school, because I'm going to be graded on this work.

I created an AI based on ant pheromones, using a recursive function. The bot works, but only on small maps. I need to send my answer (which move my bot is going to make) turn after turn, within only one second, but one of my functions, which applies all the ant pheromones to the map, takes too long.

At each turn, if my bot has a new objective, it spreads pheromones on the map with a value on each square, and on each square the value can propagate to all neighbours, until the square of the final destination is reached.

public function appliCoeff($x, $y, $sx, $sy, $value)
{
    // Code optimisation
    $newval = $value + 1;

    if ( /* special condition */ )
        $this->setReccurence($x, $y - 1, $sx, $sy, $newval);
    if ( /* special condition */ )
        $this->setReccurence($x - 1, $y, $sx, $sy, $newval);
    if ( /* special condition */ )
        $this->setReccurence($x + 1, $y, $sx, $sy, $newval);
    if ( /* special condition */ )
        $this->setReccurence($x, $y + 1, $sx, $sy, $newval);

    if ( /* special condition */ )
        $this->appliCoeff($x, $y - 1, $sx, $sy, $newval);
    if ( /* special condition */ )
        $this->appliCoeff($x - 1, $y, $sx, $sy, $newval);
    if ( /* special condition */ )
        $this->appliCoeff($x + 1, $y, $sx, $sy, $newval);
    if ( /* special condition */ )
        $this->appliCoeff($x, $y + 1, $sx, $sy, $newval);
}

Do you have any idea how to optimize it? Do you think threads are a good idea? I have never used them, and I don't know whether parallelizing the processing is such a good idea.
AI for an online contest
php;optimization;performance;recursion
null
_unix.162900
What is this folder, /run/user/1000, on my Fedora system, and what does it do?

~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.2G   20K  1.2G   1% /run/user/1000
What is this folder /run/user/1000?
directory structure
/run/user/$uid is created by pam_systemd and used for storing files used by running processes for that user. These might be things such as your keyring daemon, pulseaudio, etc.Prior to systemd, these applications typically stored their files in /tmp. They couldn't use a location in /home/$user as home directories are often mounted over network filesystems, and these files should not be shared among hosts. /tmp was the only location specified by the FHS which is local, and writable by all users.However storing all these files in /tmp is problematic as /tmp is writable by everyone, and while you can change the ownership & mode on the files being created, it's more difficult to work with.So systemd came along and created /run/user/$uid. This directory is local to the system and only accessible by the target user. So applications looking to store their files locally no longer have to worry about access control.It also keeps things nice and organized. When a user logs out, and no active sessions remain, pam_systemd will wipe the /run/user/$uid directory out. With various files scattered around /tmp, you couldn't do this.
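Applications don't hard-code this path; pam_systemd exports it in the XDG_RUNTIME_DIR environment variable at login, and programs read the variable instead of building the path themselves. A small sketch of how the path is conventionally composed (the uid value 1000 is just an example):

```shell
# pam_systemd sets XDG_RUNTIME_DIR=/run/user/$UID at login;
# this just recomposes the conventional path for an example uid.
uid=1000
runtime_dir="/run/user/$uid"
printf '%s\n' "$runtime_dir"
```

On a live session you can simply run `echo "$XDG_RUNTIME_DIR"` to see the same value.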
_unix.105759
In my previous question I learned how to implement trash functionality in mutt:

unset confirmappend
folder-hook . set trash==trash
folder-hook trash$ unset trash

That works well, except that I have to confirm every deletion. I would like it to work without the delete confirmation, so that I only need to press d and then $ to sync, without being asked for confirmation.

Can somebody please suggest how to do it?
mutt: trash macro
mutt
null
_scicomp.27441
I need to integrate the following function on the line segment from $P_{1} = \begin{bmatrix}-2\\-1\end{bmatrix}$ to $P_{2} = \begin{bmatrix}1\\2\end{bmatrix}$:$$\int_{P_{1}}^{P_{2}} 4x + y \ ds$$This question is part of the implementation of a 2D finite element solver.

How I plan to do it

Suppose that the following transformation is used to transform a general triangular element K to the standard triangular element $T_{st}$:$$ x = P(\xi,\eta) $$$$y = Q(\xi,\eta)$$This corresponds to this physical mapping in fact:Then we have:$$dx = \frac{\delta x}{\delta \xi}d\xi + \frac{\delta x}{\delta \eta}d\eta = J_{11}d\xi + J_{21}d\eta $$$$dy = \frac{\delta y}{\delta \xi}d\xi + \frac{\delta y}{\delta \eta}d\eta = J_{12}d\xi + J_{22}d\eta $$Along the side $P_{1} - P_{2}$, the coordinate $\eta$ is in fact always fixed ($\eta = 0$), so we can write $d\eta = 0$. Using this, it comes:$$ dx = (\frac{\delta x}{\delta \xi})_{\eta = 0} \ d\xi = J_{11}(\xi,0) d\xi $$$$ dy = (\frac{\delta y}{\delta \xi})_{\eta = 0} \ d\xi = J_{12}(\xi,0) d\xi $$Therefore:$$ ds = \sqrt{J_{11}^{2}(\xi,0)+ J_{12}^{2}(\xi,0)} d\xi $$So we can rewrite the integral with the corresponding change of variable:$$\int_{P_{1}}^{P_{2}} 4x + y \ ds = \int_{0}^{1} B(P(\xi,0),Q(\xi,0)) \ \sqrt{J_{11}^{2}(\xi,0)+ J_{12}^{2}(\xi,0)} d\xi $$Using this isoparametric formulation, we can finally compute the integral numerically with a 4-node quadrature:$$ I = \sum_{i=1}^{n_{gauss}} w_{i} \ B(P(\xi_{i},0),Q(\xi_{i},0)) \ \sqrt{J_{11}^{2}(\xi_{i},0)+ J_{12}^{2}(\xi_{i},0)} $$This can be done with this really simple code in Python (at least that is what I was thinking).

Edit of the code, 20.07.2017: removed an error (dividing by two without reason).

# -*- coding: utf-8 -*-
from __future__ import division  # avoid integer problem of division

# Import zone
import numpy as np
import math

# Define shape function
def P1shapes(r, s):
    S = np.array([1-r-s, r, s])
    dSdr = np.array([-1, 1, 0])
    dSds = np.array([-1, 0, 1])
    return S, dSdr, dSds

# Jacobian function
def isopmap(x, y, r, s, shapefcn):
    # x = vector of x coordinates of the element's points
    # shapefcn = P1shapes
    S, dSdr, dSds = shapefcn(r, s)
    j11 = np.dot(dSdr, x)
    j12 = np.dot(dSdr, y)
    j21 = np.dot(dSds, x)
    j22 = np.dot(dSds, y)
    detJ = j11*j22 - j12*j21
    dSdx = ( j22*dSdr - j12*dSds)/detJ
    dSdy = (-j21*dSdr + j11*dSds)/detJ
    return S, dSdx, dSdy, detJ, j11, j12, j21, j22

# Gauss point quadrature
qwgts = np.array([-27/48, 25/48, 25/48, 25/48])
rspts = np.array([[1/3, 1/3], [0.2, 0.2], [0.6, 0.2], [0.2, 0.6]])

def B(x, y):
    z = 4*x + y
    return z

# Points of the 3-node triangle (test case)
x = [-2, 1, -1]
y = [-1, 2, 3]

# P1-P2
int_total = 0

# Begin integral on segment
for q in range(len(qwgts)):
    r = rspts[q, 0]  # r coordinate of the q-th quadrature point
    s = rspts[q, 1]  # s coordinate of the q-th quadrature point
    # Define ds and map x_physical, y_physical
    S, dSdx, dSdy, detJ, j11, j12, j21, j22 = isopmap(x, y, r, 0, P1shapes)
    ds = math.sqrt(math.pow(j11, 2) + math.pow(j12, 2))
    x_physical = np.dot(x, S)
    y_physical = np.dot(y, S)
    B_gausspoint = B(x_physical, y_physical)
    wxarea = B_gausspoint*qwgts[q]*ds
    int_total += wxarea

print int_total

The result gives:$$ I = -16.97 $$Of course if I change the third point of the triangle, it should change nothing. With this:

x = [-2, 1, -1]
y = [-1, 2, 7]

we again find, for the path $P_{1} - P_{2}$:$$ I = -16.97 $$

What we should expect

From an analytical point of view we have the parametrisation $x = -2 + 3t$ and $y = -1 + 3t$. This allows us to write:$$ds = \sqrt{ (\frac{dx}{dt})^{2} + (\frac{dy}{dt})^{2}} \ dt = 3\sqrt{2} \ dt$$So we have: $$\int_{P_{1}}^{P_{2}} 4 x + y \ ds = \int_{0}^{1} (4(-2+3t) + (-1+3t)) \ 3\sqrt{2} \ dt = -6.36$$Actually, we can see that the numerical method is not working...

My question

Did I make a mistake in the main concept?
Is it an implementation error? I think it could be an interesting question for all the people who try to implement a path integral on a FEM mesh. I really didn't find a literature example against which to check my implementation.

Extension of the question

Is there another, more elegant way to compute a path integral on the boundary of my FEM model? It seems that I have everything (nonlinear solver, quadrature integration for the assembly of the stiffness matrix, shape functions, isoparametric formulation), but I'm really stuck at this point. Without this I would never be able to compute, for example:$$ R_{ij} = \int \kappa(x,y) \ \phi_{i} \ \phi_{j} \ dS \ for \ i \ = \ 1,2 $$

EDIT 20.07.17: Additional test

Actually, if we take $B(x,y) = 1$ as the integrand, we are integrating the length of the path:

def B(x, y):
    z = 1
    return z

If we run the code, we end up with:$$ I = 4.24$$which is the correct value, since the length of the path is:$$\sqrt{(P_{2}^{x} -P_{1}^{x})^{2} + (P_{2}^{y} -P_{1}^{y})^{2} } = 4.24$$Thank you in advance.
Correct code (see explanation in the accepted answer)

# -*- coding: utf-8 -*-
from __future__ import division  # avoid integer problem of division

# Import zone
import numpy as np
import math

# Define shape function
def P1shapes(r, s):
    S = np.array([1-r-s, r, s])
    dSdr = np.array([-1, 1, 0])
    dSds = np.array([-1, 0, 1])
    return S, dSdr, dSds

# Jacobian function
def isopmap(x, y, r, s, shapefcn):
    # x = vector of x coordinates of the element's points
    # shapefcn = P1shapes
    S, dSdr, dSds = shapefcn(r, s)
    j11 = np.dot(dSdr, x)
    j12 = np.dot(dSdr, y)
    j21 = np.dot(dSds, x)
    j22 = np.dot(dSds, y)
    detJ = j11*j22 - j12*j21
    dSdx = ( j22*dSdr - j12*dSds)/detJ
    dSdy = (-j21*dSdr + j11*dSds)/detJ
    return S, dSdx, dSdy, detJ, j11, j12, j21, j22

# Gauss point quadrature
def transfo1D(a, b, ti):
    # Inverse mapping to find the r coordinate in the 2D [r,0] space
    # corresponding to the 1D space [-1,1] defined by t
    epsylon_i = ((b-a)/2.0)*ti + ((b+a)/2.0)
    return epsylon_i

def B(x, y):
    z = 4*math.pow(x, 3)
    return z

coordinate = np.array([-0.774596669, 0.000000000, 0.774596669])
weight = np.array([0.555555556, 0.888888889, 0.555555556])

# Points of the 3-node triangle (test case)
x = [-2, 1, -1]
y = [-1, 2, 3]

# P1-P2
int_total = 0

# Begin integral on segment
for q in range(len(coordinate)):
    ti = coordinate[q]  # t coordinate of the q-th quadrature point
    # Transform this to the r space [r,0]
    a = 0
    b = 1
    r = transfo1D(a, b, ti)
    dettransform = (b-a)/2.0
    # Define ds = fct(r) = fct(r(t))
    S, dSdx, dSdy, detJ, j11, j12, j21, j22 = isopmap(x, y, r, 0, P1shapes)
    ds = math.sqrt(math.pow(j11, 2) + math.pow(j12, 2))
    # Define B(P(r,0),Q(r,0)) = B(P(r(t),0),Q(r(t),0))
    x_physical = np.dot(x, S)
    y_physical = np.dot(y, S)
    B_gausspoint = B(x_physical, y_physical)
    wxarea = weight[q]*B_gausspoint*ds*dettransform
    int_total += wxarea

print int_total
Line integral along the edge of an isoparametrically mapped triangle
finite element;boundary conditions;quadrature
Interestingly enough, you are using quadrature rule for a master triangle in order to integrate over a segment. You should use quadrature rule for a master segment (e.g. $[-1, 1]$) instead. You should also provide proper transformations $\mathbf r_i : [-1, 1] \rightarrow \partial K_i$, $i = 1, 2, 3$ for each edge of a physical triangle.The best way of doing this is via introducing extra mappings $\mathbf T_i : [-1, 1] \rightarrow \partial \hat K_i$, $i = 1, 2, 3$ for each edge of the master triangle $\hat K$.If $\hat K$ is spanned by vertices $\{(0,0), (1, 0), (0, 1)\}$, then $$\mathbf T_1(t) = \langle \, .5(1-t), .5(1+t) \, \rangle, \\\mathbf T_2(t) = \langle \, 0, .5(1-t) \, \rangle, \\\mathbf T_3(t) = \langle \, .5(1+t), 0 \, \rangle.$$ (I assume that the $i$th edge is the edge against the $i$th vertex.) Then you can set $\mathbf r_i := \mathbf T \circ \mathbf T_i$, where $\mathbf T : \hat K \rightarrow K$ is a usual mapping from the master triangle to the physical one.So your element Robin matrix can be computed as$$\mathbf R^K_{ij} := \int_{\partial K_m} \kappa \, s^K_j \, s^K_i \, \text{d}s = \int_{-1}^{1} \left( \kappa \, s^K_j \, s^K_i \right) \circ \mathbf r_m \, || \mathbf r_m' || \, \text{d}t = \\\int_{-1}^{1} \left( \kappa \circ \mathbf T \circ \mathbf T_m \right) \, \left( \hat s_j \circ \mathbf T_m \right) \, \left( \hat s_i \circ \mathbf T_m \right) \, || (\nabla \mathbf T \circ \mathbf T_m) \, \mathbf T_m' || \, \text{d}t.$$ Note that this choice of transformations $\mathbf r_i$ allows you to use master shape functions (so you can compute their images of quadrature nodes only once and then use them for every physical edge). Note also that the length element is constant unless you are using curvilinear mapping $\mathbf T$ (defined e.g. via $P2$ or $P3$ Lagrange shape functions).Finally, note that you have to use quadrature rule for the master segment $[-1, 1]$.
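To make the recipe concrete, here is a minimal sketch (plain Python 3, standard library only; not the poster's code) that integrates $B(x,y)=4x+y$ over the segment from $P_1=(-2,-1)$ to $P_2=(1,2)$ using a 3-point Gauss-Legendre rule on the master segment $[-1,1]$, exactly as described above:

```python
import math

# 3-point Gauss-Legendre rule on the master segment [-1, 1]
nodes = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
weights = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

p1, p2 = (-2.0, -1.0), (1.0, 2.0)   # physical edge endpoints

def B(x, y):
    return 4.0 * x + y

# For a straight (P1) edge the map r(t) is affine, so ||r'(t)|| is constant:
# half the physical edge length (the 1/2 comes from mapping [-1,1] onto the edge).
half_length = 0.5 * math.hypot(p2[0] - p1[0], p2[1] - p1[1])

integral = 0.0
for t, w in zip(nodes, weights):
    s = 0.5 * (1.0 + t)                # edge parameter in [0, 1]
    x = (1.0 - s) * p1[0] + s * p2[0]  # r(t): the physical quadrature point
    y = (1.0 - s) * p1[1] + s * p2[1]
    integral += w * B(x, y) * half_length

print(integral)   # about -6.3640, matching the analytical value -4.5*sqrt(2)
```

Since the integrand is linear along the edge, even this low-order rule reproduces the exact analytical value of the question's test case.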
_unix.4325
I have a folder that has a number of files, all of a similar form, such as:

Dropkick Murphys - 01 - Walk Away.mp3
Dropkick Murphys - 02 - Workers Song.mp3

And so forth...

I want to convert them all so that they appear as:

01 - Walk Away.mp3
02 - Workers Song.mp3

How can I do this?
truncating file names
command line;bash
Under Ubuntu or Debian, it is simply:rename 's/Dropkick Murphys - //' *mp3
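If the Perl-based rename isn't available (some distros ship a different, util-linux rename instead), the same result can be had with plain shell parameter expansion. A small self-contained demonstration in a scratch directory (the directory and sample files are just for illustration):

```shell
demo=$(mktemp -d)
cd "$demo"
touch 'Dropkick Murphys - 01 - Walk Away.mp3' 'Dropkick Murphys - 02 - Workers Song.mp3'

# Strip the fixed prefix from every matching file name
for f in 'Dropkick Murphys - '*.mp3; do
    mv -- "$f" "${f#Dropkick Murphys - }"
done

ls
```

The `${f#pattern}` expansion removes the shortest leading match of the pattern, leaving `01 - Walk Away.mp3` and so on.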
_webapps.102442
I'm working on a demo for a business user to see if Google Forms can provide a quick solution to gathering some data. One of the requirements is a Combobox for one of the fields, where the user can either type into the text field or a dropdown will suggest options. I know there is a drop down option, but is there something more I can do with scripting or custom coding to add this feature?
Combobox in Google Forms
google forms
null
_webapps.42771
Four days ago, emails sent to our Gmail accounts via our ISP's mail services started being rejected for not being RFC 2822 compliant:

The following message to was undeliverable. The reason for the problem: 5.3.0 - Other mail system problem 550-'5.7.1 [2001:44b8:8060:ff02:300:1:6:6 11] Our system has detected that\n5.7.1 this message is not RFC 2822 compliant. To reduce the amount of spam\n5.7.1 sent to Gmail, this message has been blocked. Please review\n5.7.1 RFC 2822 specifications for more information. iw4si27447595pac.153 - gsmtp'

It's frustrating because these emails have been working fine for over a year. I'm assuming Google has upped their filters in the last week. The email address we are trying to send to belongs to our Google Apps for Business account. I'm wondering: is there a way to override the RFC 2822 compliance filter to allow the emails to come through?

So far, adding the ISP's domain name to the spam whitelist in Gmail settings (in the Apps control panel) hasn't worked.

The telnet log for the rejected message in question is:

220-ipmail06.adl6.xxxxx.net ESMTP
220 ESMTP; eth2958.xxx.adsl.OurISP.net [150.xxx.xxx.xx1] in MTA
HELO WINDOWS-xxxxx    (<- this is our server name)
250 ipmail06.adl6.OurISP.net
MAIL FROM: account@OurISP.net
250 sender ok
RCPT TO: admin@googleappsdomain.com
250 recipient ok
RCPT TO: admin@DifferentGoogleAppsDomain.com
250 recipient ok
DATA
354 go ahead
Subject: Test email from the Avid ISIS Notification Application

This message was generated by Avid ISIS Notification Application.
.
QUIT
250 ok: Message 716893804 accepted
Emails sent to Gmail domain suddenly not RFC 2822 compliant, Possible to bypass with Google Apps?
gmail;google apps;spam prevention
null
_webmaster.100031
I'm stuck on the last stage (going live) of Firebase Hosting. I have a GoDaddy domain:

As you can see, I put down all the CNAME and A records. I'm adding my firebase.json in case it's connected in some manner.

So, what am I doing wrong? How can I make it work? Notice that the site works, but I believe something is wrong, otherwise Firebase would finish the process.

Thanks
Firebase hosting not going live - stuck on Continue setup to direct traffic to your domain
web hosting;dns;godaddy;cname
null
_codereview.169740
I have the following code, and I'd like to refactor it to a more functional way:

public void processPersons(List<Person> personList) {
    for (Person person : personList) {
        Integer addressId = createAddress(person);
        if (addressId != null) {
            updateDbStatus(addressId, person);
        }
    }
}

How do I convert the above to a more functional style of programming?
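For illustration, one stream-based sketch of the same loop (the Person class and the createAddress/updateDbStatus stubs below are placeholders standing in for the question's real implementations):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

public class Main {
    static class Person {
        final String name;
        Person(String name) { this.name = name; }
    }

    static final List<String> updated = new ArrayList<>();

    // Stub: returns null when no address could be created (assumption for the demo)
    static Integer createAddress(Person p) {
        return p.name.isEmpty() ? null : p.name.length();
    }

    static void updateDbStatus(Integer addressId, Person p) {
        updated.add(p.name + ":" + addressId);
    }

    // Functional variant: wrap the possibly-null id in an Optional and
    // update the DB only when an id is present.
    static void processPersons(List<Person> personList) {
        personList.stream()
                  .forEach(p -> Optional.ofNullable(createAddress(p))
                                        .ifPresent(id -> updateDbStatus(id, p)));
    }

    public static void main(String[] args) {
        processPersons(List.of(new Person("alice"), new Person("")));
        System.out.println(updated);   // [alice:5]
    }
}
```

Note that this is only superficially more functional than the loop, since updateDbStatus is still a side effect; a purer variant would collect the (person, id) pairs first and perform the updates afterwards.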
Updating persons in database with newly created addresses
java;functional programming
null
_unix.351615
Given a file wrong_name.txt, how can I add it to a zip archive archive.zip under another name, right_name.txt, without modifying the file itself?
How to add a file to a zip with another filename?
rename;zip
One way to cheat, since zip adds the referenced file and not the symlink itself (barring the -y option):

ln -s wrong_name.txt right_name.txt
zip myzip.zip right_name.txt
rm right_name.txt wrong_name.txt
unzip myzip.zip
--> right_name.txt
_unix.33529
I was thinking it would be cool to be able to look things up in the man pages the same way one looks up words with the Dictionary app. Is there a way to add the man pages that OS X supplies to the Dictionary app, so that when you right-click on a word (or in this case, a Unix function/keyword/etc.) and click "Look Up in Dictionary", it searches for the word in the man pages too and integrates the results into the Dictionary window? So when the window pops up, the tabs across the top would be All, Dictionary, Thesaurus, Apple, Wikipedia, Man Pages. Or is this too wishful thinking?
Is there a way to integrate the unix man pages into OSX's Dictionary app?
osx
No. The Dictionary's support for Wikipedia is hard-coded; it's not pluggable. (There is a class internal to Dictionary.app called WikipediaDictionaryObj.)
_unix.255423
I want to change my shell from ksh to bash and source the .kshrc file, i.e. I want to execute the following commands sequentially:

bash
. ~/.kshrc
clear

Can anybody help me?
How to change shell from script
bash;scripting;ksh
null
_unix.279974
I have a Ruby 1.8.7 app which runs under Phusion Passenger and Nginx, for one of my clients, on an Ubuntu VPS. It's been ticking away happily for years, but yesterday it ran out of log space (sending me an error via monit, which I use to monitor it).

I cleared out the bloated log file by doing the following:

sudo cat /dev/null > log/production.log

then restarted, and it was back to normal. This morning I've got another error, which I've not seen before. I don't know if it's related to the log problem; it might just be a coincidence, but it's odd to get two problems so close together after literally years of nothing at all going wrong. I haven't made any changes to anything.

This is the stack trace I see:

Passenger encountered the following error:
The application spawner server exited unexpectedly: Connection closed

Exception class:
PhusionPassenger::Rack::ApplicationSpawner::Error

Backtrace:
#    File                                                                                                   Line  Location
0    /usr/local/lib/ruby/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/rack/application_spawner.rb    118   in `spawn_application'
1    /usr/local/lib/ruby/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/spawn_manager.rb               257   in `spawn_rack_application'
2    /usr/local/lib/ruby/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server_collection.rb  82    in `synchronize'
3    /usr/local/lib/ruby/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server_collection.rb  79    in `synchronize'
4    /usr/local/lib/ruby/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/spawn_manager.rb               244   in `spawn_rack_application'
5    /usr/local/lib/ruby/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/spawn_manager.rb               137   in `spawn_application'
6    /usr/local/lib/ruby/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/spawn_manager.rb               275   in `handle_spawn_application'
7    /usr/local/lib/ruby/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server.rb             357   in `__send__'
8    /usr/local/lib/ruby/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server.rb             357   in `server_main_loop'
9    /usr/local/lib/ruby/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server.rb             206   in `start_synchronously'
10   /usr/local/lib/ruby/gems/1.8/gems/passenger-3.0.2/helper-scripts/passenger-spawn-server                99

I've tried restarting it by doing

touch tmp/restart.txt

in the project folder, which is the normal restart procedure for the app, and also restarting nginx. I still get the same error.

Kind of out of ideas - has anyone seen this error before, or have any ideas on how to fix it?
Sudden problem with a rack/passenger ruby app: Connection closed
nginx;ruby
null
_unix.89089
In my first question, I asked how to get the Date created field in NTFS-3G. Now, that I know I can get the Date created, I have started adding files onto my NTFS-3G partition and would like to set the Date created of each file to its Date modified value.Since this needs to be done on a whole repository of files, I would like to recursively apply it to a single directory on down. If I know how to do this for a single file, I could probably do the recursion myself, but if you want to add that in I would be more than happy.
How do I recursively set the date created attribute to the date modified attribute on NTFS-3G?
files;date;file copy;ntfs;ntfs 3g
null
_webmaster.5474
I see that most domain names that contain nice words, like www.pixelmania.com, www.musicbox.com and so on, are registered but look auto-populated with random data. Why do they do this? Is it so that I will pay extra to get the domain? Is it for advertising purposes (all of them carry insane amounts of ads)? Or what?
What is the idea behind occupying domains?
domains
They're probably expired domains that were snapped up by companies that build huge networks of domains with the sole purpose of showing advertising. They make countless millions of dollars a year doing it.
_reverseengineering.11811
I am using IDA Pro to disassemble a C++ behemoth with 1600+ classes, many of them having pure virtual methods. Some classes are also made up of multiple base classes, in hierarchies 5+ levels deep.

IDA Pro supports making structures of pointers to handle vtables, but some of the final classes can have multiple different vtables in the same slot, due to heavy polymorphism. So how do you organize the vtables? How do you tell IDA which vtable is actually being referred to in this or that method?
How to organize vtables in IDA Pro?
ida;c++;hexrays
null
_webmaster.51011
What are the SEO risks, and what are the best courses of action via Google, when an industry competitor duplicates 95% of your home page content and passes it off as their own? Especially when your own website has ranked for years and has held this unique copy for years. Further to this: what are the implications when another industry competitor replicates both your entire site design template and your word-for-word content across your ENTIRE website?
Home page plagiarism risks from competitors
seo
null
_scicomp.21768
I was wondering if anyone could help with understanding the geometric conservation law (GCL) for moving domains. I came across Link1, and have tried to understand the paper by Farhat et al. (Link2). So far my understanding is that the two things to consider are: 1) the time step to which the cell areas used for the integration of the flux terms correspond; 2) the time step at which the mesh velocity ($\dot{x}$) is being evaluated. Finally, the ALE equations must also be satisfied for a uniform flow case. The paper states the GCL as:$$A_{i}(x^{n+1})-A_{i}(x^{n})=\int_{t^{n}}^{t^{n+1}}\int_{\partial C_{i}(x)}\dot{x}\cdot\overrightarrow{n}\,d\sigma\, dt$$where $A$ refers to the area of the cells, $C$ refers to the cell areas swept by the fluxes, $t$ is time, $x$ is the spatial coordinate in the current configuration, and I think $\sigma$ refers to the surface of integration corresponding to $C$. I have a few questions that I would really appreciate some help with:

1) What is $C$ in 1D? Is $C$ referring to the length of the cells, or to the area, which is set to 1 in 1D?

2) In the paper (just before equation 16), for a uniform flow, the variables being solved for at different time steps are set equal to each other. Is that not the definition of steady flow?

3) Finally, is there a way to determine whether a set of numerical results in 1D obeys the GCL without actually going through the equations? For instance, my understanding is that the results should stay independent of the moving domains, so could I compare the results obtained using $\dot{x} = 0$ and $\dot{x} \neq 0$ and show whether they are the same?

If there are any simpler papers or examples on the GCL, please let me know. Thank you in advance.
a few questions on understanding geometric conservation law
finite element;fluid dynamics;finite volume;discretization
null
_cs.20249
Language: $ L = a^{n+m}b^{n}c^{m} $

As per a recent test I took, this language is not context-free. However, I think it is.

Corresponding grammar:

$ X \rightarrow aXY \space |\space \epsilon $
$ Y \rightarrow b \space | \space c $

Pushdown automaton: keep pushing every $a$ onto the stack until a $b$ is scanned. Then keep popping an $a$ from the stack for each character scanned, until the end of the input. If, after the end of the input, the stack is empty, accept the string; else, go to a non-accepting state.

Please let me know if I'm thinking along the right lines or if I've missed something.
Is $a^{n+m}b^{n}c^{m}$ context free?
formal languages;context free;formal grammars
The corresponding grammar you gave would also accept strings like aaaabcbc. A better grammar for this problem would be

$X \rightarrow aXc$
$X \rightarrow aYb$
$X \rightarrow \lambda$
$Y \rightarrow aYb$
$Y \rightarrow \lambda$

where $\lambda$ is the empty string.
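For extra confidence, the claim can be machine-checked by brute force. The sketch below (plain Python, written for this answer, not part of the original) tests, for every string over {a,b,c} up to a small length, that derivability from the grammar above coincides with membership in $a^{n+m}b^{n}c^{m}$:

```python
from itertools import product

def derivable_X(s):
    # X -> lambda | a X c | a Y b
    if s == "":
        return True
    if len(s) >= 2 and s[0] == "a" and s[-1] == "c" and derivable_X(s[1:-1]):
        return True
    if len(s) >= 2 and s[0] == "a" and s[-1] == "b" and derivable_Y(s[1:-1]):
        return True
    return False

def derivable_Y(s):
    # Y -> lambda | a Y b
    if s == "":
        return True
    return len(s) >= 2 and s[0] == "a" and s[-1] == "b" and derivable_Y(s[1:-1])

def in_language(s):
    # a^(n+m) b^n c^m with n, m >= 0: the string must be a-block, b-block,
    # c-block in that order, with #a = #b + #c.
    na = len(s) - len(s.lstrip("a"))
    rest = s[na:]
    nb = len(rest) - len(rest.lstrip("b"))
    rest = rest[nb:]
    nc = len(rest) - len(rest.lstrip("c"))
    return rest[nc:] == "" and na == nb + nc

mismatches = [
    "".join(w)
    for length in range(8)
    for w in product("abc", repeat=length)
    if derivable_X("".join(w)) != in_language("".join(w))
]
print(mismatches)   # [] -- the grammar agrees with the language on all short strings
```

This is of course not a proof, but it catches mistakes like the original grammar, which derives aacb even though that string is not in the language.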
_cs.60114
I have several algorithms that map read-only input into write-only output utilizing only logarithmic space with pointer arithmetic. While the algorithms have a very small $O(\log^c{}n)$ critical path time complexity, fully parallelizing is too impractical with today's computers due to high degree polynomial growth in required resources.What are some general purpose techniques to transform logarithmic space algorithms to low sequential time complexity (preferably linear) by utilizing more space (preferably linear)?
Space-time tradeoffs for deterministic logarithmic space algorithms
time complexity;space complexity;program optimization
null
_hardwarecs.4054
I want to build a low-power, relatively cheap NAS that can hold at least two drives. I want to go the DIY route: since I'm blind, I would rather do everything from the command line instead of relying on a third-party GUI that may or may not be accessible. What would good hardware be for this? All the single-board computers I've found appear to have either one SATA port or none. I'd prefer a pre-built system that only requires hard drives, but I can build a computer from scratch if required.
Low power computer with at least two SATA ports?
linux;nas
null
_unix.175127
I installed xmobar and tried to launch it:

xmobar &

I got "Stopped". I don't know what's wrong. Here is my .xmobarrc:

Config { font = "-misc-fixed-*-*-*-*-13-*-*-*-*-*-*-*"
       , borderColor = "black"
       , border = TopB
       , allDesktops = True
       , overrideRedirect = True
       , persistent = False
       , hideOnStart = False
       , bgColor = "black"
       , fgColor = "grey"
       , position = TopW L 100
       , lowerOnStart = True
       , commands = [ Run Cpu ["-L","15","-H","50","--normal","green","--high","red"] 10
                    , Run Date "%a %b %_d %Y %H:%M:%S" "date" 10
                    , Run StdinReader
                    ]
       , sepChar = "%"
       , alignSep = "}{"
       , template = "%StdinReader% }{ %cpu% | %date% "
       }

By the way, I'm running the xmonad window manager. It works well.

Edit: My xmonad.hs file:

--
-- xmonad example config file.
--
-- A template showing all available configuration hooks,
-- and how to override the defaults in your own xmonad.hs conf file.
--
-- Normally, you'd only override those defaults you care about.
--

import XMonad
import System.Exit

import qualified XMonad.StackSet as W
import qualified Data.Map        as M

import XMonad.Hooks.DynamicLog
import XMonad.Hooks.ManageDocks
import XMonad.Util.EZConfig(additionalKeys)
import System.IO
import Graphics.X11.ExtraTypes.XF86

-- The preferred terminal program, which is used in a binding below and by
-- certain contrib modules.
--
myTerminal = "xterm"
-- myTerminal = "gnome-terminal"

-- Width of the window border in pixels.
--
myBorderWidth = 1

-- modMask lets you specify which modkey you want to use. The default
-- is mod1Mask (left alt). You may also consider using mod3Mask
-- (right alt), which does not conflict with emacs keybindings. The
-- windows key is usually mod4Mask.
--
-- myModMask = mod1Mask
myModMask = mod4Mask

-- The mask for the numlock key. Numlock status is masked from the
-- current modifier status, so the keybindings will work with numlock on or
-- off. You may need to change this on some systems.
--
-- You can find the numlock modifier by running xmodmap and looking for a
-- modifier with Num_Lock bound to it:
--
-- > $ xmodmap | grep Num
-- > mod2        Num_Lock (0x4d)
--
-- Set numlockMask = 0 if you don't have a numlock key, or want to treat
-- numlock status separately.
--
myNumlockMask = mod2Mask

-- The default number of workspaces (virtual screens) and their names.
-- By default we use numeric strings, but any string may be used as a
-- workspace name. The number of workspaces is determined by the length
-- of this list.
--
-- A tagging example:
--
-- > workspaces = ["web", "irc", "code"] ++ map show [4..9]
--
myWorkspaces = ["1","2","3","4","5","6","7","8","9"]

-- Border colors for unfocused and focused windows, respectively.
--
myNormalBorderColor  = "#dddddd"
myFocusedBorderColor = "#ff0000"

------------------------------------------------------------------------
-- Key bindings. Add, modify or remove key bindings here.
--
myKeys conf@(XConfig {XMonad.modMask = modm}) = M.fromList $

    -- launch a terminal
    [ ((modm .|. shiftMask, xK_Return), spawn $ XMonad.terminal conf)

    -- launch dmenu
    , ((modm, xK_p), spawn "exe=`dmenu_path | dmenu_run -fn 'DejaVu Sans Mono 12'` && eval \"exec $exe\"")

    -- launch Chrome browser
    , ((modm, xK_b), spawn "exe=`google-chrome`")

    -- launch Emacs editor
    , ((modm, xK_z), spawn "exe=`emacs`")

    -- launch gmrun
    , ((modm .|. shiftMask, xK_p), spawn "gmrun")

    -- close focused window
    , ((modm .|. shiftMask, xK_c), kill)

    -- Rotate through the available layout algorithms
    , ((modm, xK_space), sendMessage NextLayout)

    -- Reset the layouts on the current workspace to default
    , ((modm .|. shiftMask, xK_space), setLayout $ XMonad.layoutHook conf)

    -- Resize viewed windows to the correct size
    , ((modm, xK_n), refresh)

    -- Move focus to the next window
    , ((modm, xK_Tab), windows W.focusDown)

    -- Move focus to the next window
    , ((modm, xK_j), windows W.focusDown)

    -- Move focus to the previous window
    , ((modm, xK_k), windows W.focusUp)

    -- Move focus to the master window
    , ((modm, xK_m), windows W.focusMaster)

    -- Swap the focused window and the master window
    , ((modm, xK_Return), windows W.swapMaster)

    -- Swap the focused window with the next window
    , ((modm .|. shiftMask, xK_j), windows W.swapDown)

    -- Swap the focused window with the previous window
    , ((modm .|. shiftMask, xK_k), windows W.swapUp)

    -- Shrink the master area
    , ((modm, xK_h), sendMessage Shrink)

    -- Expand the master area
    , ((modm, xK_l), sendMessage Expand)

    -- Push window back into tiling
    , ((modm, xK_t), withFocused $ windows . W.sink)

    -- Increment the number of windows in the master area
    , ((modm, xK_comma), sendMessage (IncMasterN 1))

    -- Deincrement the number of windows in the master area
    , ((modm, xK_period), sendMessage (IncMasterN (-1)))

    -- toggle the status bar gap (used with avoidStruts from Hooks.ManageDocks)
    -- , ((modm, xK_b), sendMessage ToggleStruts)

    -- Quit xmonad
    , ((modm .|. shiftMask, xK_q), io (exitWith ExitSuccess))

    -- Restart xmonad
    , ((modm, xK_q), restart "xmonad" True)

    , ((mod4Mask .|. shiftMask, xK_z), spawn "xscreensaver-command -lock")
    , ((0, 0x1008FF11), spawn "amixer set Master 2-")
    , ((0, 0x1008FF13), spawn "amixer set Master 2+")
    , ((0, 0x1008FF12), spawn "amixer set Master toggle")
    -- ((0, xF86XK_AudioMute), spawn "amixer set Master toggle")
    ]
    ++

    --
    -- mod-[1..9], Switch to workspace N
    -- mod-shift-[1..9], Move client to workspace N
    --
    [((m .|. modm, k), windows $ f i)
        | (i, k) <- zip (XMonad.workspaces conf) [xK_1 .. xK_9]
        , (f, m) <- [(W.greedyView, 0), (W.shift, shiftMask)]]
    ++

    --
    -- mod-{w,e,r}, Switch to physical/Xinerama screens 1, 2, or 3
    -- mod-shift-{w,e,r}, Move client to screen 1, 2, or 3
    --
    [((m .|. modm, key), screenWorkspace sc >>= flip whenJust (windows . f))
        | (key, sc) <- zip [xK_w, xK_e, xK_r] [0..]
        , (f, m) <- [(W.view, 0), (W.shift, shiftMask)]]

------------------------------------------------------------------------
-- Mouse bindings: default actions bound to mouse events
--
myMouseBindings (XConfig {XMonad.modMask = modMask}) = M.fromList $

    -- mod-button1, Set the window to floating mode and move by dragging
    [ ((modMask, button1), (\w -> focus w >> mouseMoveWindow w))

    -- mod-button2, Raise the window to the top of the stack
    , ((modMask, button2), (\w -> focus w >> windows W.swapMaster))

    -- mod-button3, Set the window to floating mode and resize by dragging
    , ((modMask, button3), (\w -> focus w >> mouseResizeWindow w))

    -- you may also bind events to the mouse scroll wheel (button4 and button5)
    ]

------------------------------------------------------------------------
-- Layouts:

-- You can specify and transform your layouts by modifying these values.
-- If you change layout bindings be sure to use 'mod-shift-space' after
-- restarting (with 'mod-q') to reset your layout state to the new
-- defaults, as xmonad preserves your old layout settings by default.
--
-- The available layouts.
Note that each layout is separated by |||,-- which denotes layout choice.--myLayout = avoidStruts (tiled ||| Mirror tiled ||| Full) where -- default tiling algorithm partitions the screen into two panes tiled = Tall nmaster delta ratio -- The default number of windows in the master pane nmaster = 1 -- Default proportion of screen occupied by master pane ratio = 1/2 -- Percent of screen to increment by when resizing panes delta = 3/100-------------------------------------------------------------------------- Window rules:-- Execute arbitrary actions and WindowSet manipulations when managing-- a new window. You can use this to, for example, always float a-- particular program, or have a client always appear on a particular-- workspace.---- To find the property name associated with a program, use-- > xprop | grep WM_CLASS-- and click on the client you're interested in.---- To match on the WM_NAME, you can use 'title' in the same way that-- 'className' and 'resource' are used below.--myManageHook = composeAll [ className =? MPlayer --> doFloat , className =? Gimp --> doFloat , resource =? desktop_window --> doIgnore , resource =? kdesktop --> doIgnore ]-- Whether focus follows the mouse pointer.myFocusFollowsMouse :: BoolmyFocusFollowsMouse = True-------------------------------------------------------------------------- Status bars and logging-- Perform an arbitrary action on each internal state change or X event.-- See the 'DynamicLog' extension for examples.---- To emulate dwm's status bar---- > logHook = dynamicLogDzen--myLogHook = return ()-- myLogHook = dynamicLogDzen-------------------------------------------------------------------------- Startup hook-- Perform an arbitrary action each time xmonad starts or is restarted-- with mod-q. 
Used by, e.g., XMonad.Layout.PerWorkspace to initialize-- per-workspace layout choices.---- By default, do nothing.-- myStartupHook = return ()myStartupHook = do spawn python2 ~/apps/goagent-goagent-593bfa1/local/proxy.py&-------------------------------------------------------------------------- Now run xmonad with all the defaults we set up.-- Run xmonad with the settings you specify. No need to modify this.--main = xmonad defaults-- A structure containing your configuration settings, overriding-- fields in the default config. Any you don't override, will -- use the defaults defined in xmonad/XMonad/Config.hs-- -- No need to modify this.--defaults = defaultConfig { -- simple stuff terminal = myTerminal, focusFollowsMouse = myFocusFollowsMouse, borderWidth = myBorderWidth, modMask = myModMask,-- numlockMask = myNumlockMask, workspaces = myWorkspaces, normalBorderColor = myNormalBorderColor, focusedBorderColor = myFocusedBorderColor, -- key bindings keys = myKeys, mouseBindings = myMouseBindings, -- hooks, layouts layoutHook = myLayout, manageHook = myManageHook, logHook = myLogHook, startupHook = myStartupHook }
xmobar doesn't appear
xmonad
null
_softwareengineering.315295
Is an API always returning 200 OK an issue?
Code structure of third party framework
c#;mvc;rest;api;solid
null
_codereview.10611
I have a method which loads data from a remote app (send TCP request and parse response). Now, I have a simple class for sending a TCP request:public class PremieraTcpClient { public PremieraTcpClient() { QueryItems = new NameValueCollection(); int port; int.TryParse(ConfigurationManager.AppSettings[PremieraPort], out port); Port = port; ServerIp = ConfigurationManager.AppSettings[PremieraServerIp]; ServiceId = ConfigurationManager.AppSettings[PremieraServiceId]; } public NameValueCollection QueryItems { get; set; } private int Port { get; set; } private string ServerIp { get; set; } private string ServiceId { get; set; } private string ReadyQuery { get; set; } public string SendQuery() { StringBuilder parameters = new StringBuilder(); //... // build query for request //... ReadyQuery = parameters.ToString(); return Connect(); } private string Connect() { string responseData; try { TcpClient client = new TcpClient(ServerIp, Port); client.ReceiveBufferSize = Int32.MaxValue; Byte[] data = Encoding.GetEncoding(1251).GetBytes(ReadyQuery); NetworkStream stream = client.GetStream(); // send data stream.Write(data, 0, data.Length); var sizeBuffer = new byte[10]; stream.Read(sizeBuffer, 0, 10); var sizeMessage = int.Parse(Encoding.GetEncoding(1251).GetString(sizeBuffer, 0, 10)); data = new Byte[sizeMessage]; var readSoFar = 0; //read data while (readSoFar < sizeMessage) { readSoFar += stream.Read(data, readSoFar, data.Length - readSoFar); } responseData = Encoding.GetEncoding(1251).GetString(data, 0, data.Length); responseData = responseData.TrimStart('&'); stream.Close(); client.Close(); return responseData; } catch (ArgumentNullException e) { //return responseData = string.Format(ArgumentNullException: {0}, e); } catch (SocketException e) { //return responseData = string.Format(SocketException: {0}, e); } return string.Empty; } }This is method for load data: private static void GetUpdatesFromPremiera() { Debug.WriteLine(DateTime.Now + :GetUpdatesFromPremiera); 
PremieraTcpClient client = new PremieraTcpClient(); client.QueryItems.Add(QueryCode, QueryCode.GetUpdates.ToString()); client.QueryItems.Add(ListType, Movie;Hall;Session;Place;Delete); client.QueryItems.Add(Updates, _lastUpdateId); _lastUpdateId = String.Empty; var response = client.SendQuery(); // here parse response //... }This code works fine. But, now I have to load data from two remote app (tomorrow may be three).The simple solution is to iterate through all remote apps:private static void GetUpdatesFromPremiera(){ foreach(var remoteApp in listRemoteApp) { PremieraTcpClient client = new PremieraTcpClient(); // here assigned different properties var response = client.SendQuery(); }}Is there is a better way of doing it? Also, each time a connection is established, I think it impacts performance greatly.
Loading data from a remote app
c#;asp.net mvc 3;tcp
I have some suggestions about PremieraTcpClient. The way it is written may lead to unreleased resources. If you have an error then you will remain with a stream and client opened. The correct way to do it is by using try...catch...finally or by using using.

Below you can find the code using try catch finally:

    private string Connect()
    {
        string responseData = string.Empty;
        TcpClient client = null;
        NetworkStream stream = null;
        try
        {
            client = new TcpClient(ServerIp, Port);
            client.ReceiveBufferSize = Int32.MaxValue;
            Byte[] data = Encoding.GetEncoding(1251).GetBytes(ReadyQuery);
            stream = client.GetStream();
            // send data
            stream.Write(data, 0, data.Length);
            var sizeBuffer = new byte[10];
            stream.Read(sizeBuffer, 0, 10);
            var sizeMessage = int.Parse(Encoding.GetEncoding(1251).GetString(sizeBuffer, 0, 10));
            data = new Byte[sizeMessage];
            var readSoFar = 0;
            // read data
            while (readSoFar < sizeMessage)
            {
                readSoFar += stream.Read(data, readSoFar, data.Length - readSoFar);
            }
            responseData = Encoding.GetEncoding(1251).GetString(data, 0, data.Length);
            responseData = responseData.TrimStart('&');
            //stream.Close();
            //client.Close();
            //return responseData;
        }
        catch (ArgumentNullException e)
        {
            //return responseData = string.Format("ArgumentNullException: {0}", e);
        }
        catch (SocketException e)
        {
            //return responseData = string.Format("SocketException: {0}", e);
        }
        finally
        {
            if (stream != null)
                stream.Close();
            if (client != null)
                client.Close();
        }
        return responseData;
    }

And another small suggestion: I think it is misleading to have a method named Connect that in fact does more than connect. It would be better to break the Connect method into smaller methods, each one with specific actions (even if they are private).
_webmaster.6485
I'm currently looking to register the Taiwanese version of my company's domain. Dynadot doesn't register domains with that extension. I found a few places on the web: GoDaddy has them, and a few smaller, shadier places claim to have them, but they start at $39.99/year, which seems a bit outrageous. Has anyone found a more affordable, reliable registration company for .tw domains?
Where can I register .tw domain extensions?
domains;domain registration
null
_unix.357926
When I start Spacemacs I get a box created out of \u2502 sequences, which I assume is a box drawn with a particular character or colour that is not rendering properly. Below is the output from the locale command. What settings do I have to apply globally, or in my .bashrc etc., to fix this?

LANG=en_GB
LANGUAGE=:en_GB.utf8
LC_CTYPE=en_GB
LC_NUMERIC=en_GB
LC_TIME=en_GB
LC_COLLATE=en_GB
LC_MONETARY=en_GB
LC_MESSAGES=en_GB
LC_PAPER=en_GB
LC_NAME=en_GB
LC_ADDRESS=en_GB
LC_TELEPHONE=en_GB
LC_MEASUREMENT=en_GB
LC_IDENTIFICATION=en_GB
LC_ALL=
How can I configure my locale correctly for Spacemacs?
locale
I don't know anything specific to Spacemacs, but this looks like an encoding issue. Your character is a pretty good test already.

$ echo -e "\u2502"
│

To set up UTF-8 encoding (which is great for ASCII data), make sure all your language variables have UTF-8 in them. It should be enough to do:

export LC_ALL=en_GB.UTF-8
export LANG=en_GB.UTF-8
export LANGUAGE=en_GB.UTF-8

Afterwards run locale to confirm it.

$ export LC_ALL=en_GB.UTF-8
$ export LANG=en_GB.UTF-8
$ export LANGUAGE=en_GB.UTF-8
$ locale
LANG=en_GB.UTF-8
LC_CTYPE=en_GB.UTF-8
LC_NUMERIC=en_GB.UTF-8
LC_TIME=en_GB.UTF-8
LC_COLLATE=en_GB.UTF-8
LC_MONETARY=en_GB.UTF-8
LC_MESSAGES=en_GB.UTF-8
LC_PAPER=en_GB.UTF-8
LC_NAME=en_GB.UTF-8
LC_ADDRESS=en_GB.UTF-8
LC_TELEPHONE=en_GB.UTF-8
LC_MEASUREMENT=en_GB.UTF-8
LC_IDENTIFICATION=en_GB.UTF-8
LC_ALL=en_GB.UTF-8

Now testing it again:

$ echo -e "\u2502"
│

This, in your .bashrc, should solve it.

Make sure your terminal emulator (if any) actually uses the correct encoding too. It should properly read it from $LC_CTYPE, I believe, but some have settings to override this in their preferences.

If you also want to set up/test colors as well, make sure you have 256 colors set in your TERM variable:

export TERM=xterm-256color

The 256colors.pl script is a nice test for this: https://gist.github.com/hSATAC/1095100
_cs.33999
I am having trouble getting my head wrapped around epsilon transitions while creating an LALR(1) parse table.

Here's a grammar that recognizes any number of 'a' followed by a 'b'. 'S' is an artificial start state. '$' is an artificial 'EOS' token.

0. S -> A $
1. A -> B b
2. B -> B a
3. B -> epsilon

Itemsets:

i0: S -> . A $
    A -> .B b
    B -> .B a
    A -> B . b    ! because B could -> epsilon
    B -> B . a    !

i1: S -> A . $

i2: S -> A $ .

i3: A -> B . b    ! from i0
    B -> B . a

i4: A -> B b .    ! from i0 or i3; the LALR algorithm compresses identical states.

i5: B -> B a .    ! from i0 or i3: the LALR algorithm compresses identical states.

I previously had a description of how this would work to parse a simple string. I removed it because I know less now than I did before. I can't even figure out a parse tree for 'ab'.

If someone could show me how I have mis-constructed my itemsets and how I'd reduce the epsilon transition, I'd be grateful.
LALR(1) parsers and the epsilon transition
formal grammars;parsers
Your states and itemsets are not quite correct. The epsilon production must appear in the relevant itemsets, and you have combined two states into one, which would produce a shift-reduce conflict if the epsilon production were added to the itemset (which should be done).

The following was generated with bison (using the --report=all command-line option); it differs from the theoretic model because the grammar has been augmented with an extra start symbol and an explicit end-of-input marker ($end). Also, it has done some table compression, so in the action tables, you can think of $default as meaning either a or b.

It is worth explaining how State 0 comes about, since it shows how epsilon productions are handled (no differently from other productions).

We start with $accept: . S $end, by definition ($accept is the starting state). Then the closure rule is applied as long as possible. Remember that the closure rule is: if, for any item in the itemset, the . is immediately before a non-terminal, add all the productions for that non-terminal with an initial .. Hence we add:

    S: . A

continuing with A:

    A: . B 'b'

continuing with B:

    B: . B 'a'
    B: .

We can't apply closure any longer, so we're done. Since the state now has an item with the dot at the end (the epsilon production for B), a reduction is possible.

State 0

    0 $accept: . S $end
    1 S: . A
    2 A: . B 'b'
    3 B: . B 'a'
    4  | .

    $default  reduce using rule 4 (B)

    S  go to state 1
    A  go to state 2
    B  go to state 3

State 1

    0 $accept: S . $end

    $end  shift, and go to state 4

State 2

    1 S: A .

    $default  reduce using rule 1 (S)

State 3

    2 A: B . 'b'
    3 B: B . 'a'

    'b'  shift, and go to state 5
    'a'  shift, and go to state 6

State 4

    0 $accept: S $end .

    $default  accept

State 5

    2 A: B 'b' .

    $default  reduce using rule 2 (A)

State 6

    3 B: B 'a' .

    $default  reduce using rule 3 (B)

In State 0, the closure rule has added the epsilon production (line 4). Furthermore, no item in the state 0 itemset has the point before a terminal.
So with any lookahead, the parser is forced to reduce the epsilon production, after which it will use the goto function for state 0 to decide to move to state 3. (In your state machine, states 0 and 3 are conflated, but I do not believe this is correct.) State 3 will definitely shift a terminal; with the input ab$end, it will shift the a and move to state 6, which will then reduce a B. And so on.
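As a sanity check that the tables really do recognize any number of a's followed by a b, here is a small hand-rolled, table-driven LR driver (a sketch, not bison output; the state numbers, actions and gotos are transcribed from the report above):

```python
# Rules: (lhs nonterminal, rhs length), numbered as in the bison report.
RULES = {1: ('S', 1), 2: ('A', 2), 3: ('B', 2), 4: ('B', 0)}

# Shift/accept actions that depend on the lookahead (states 1 and 3).
ACTION = {
    (1, '$'): ('shift', 4),
    (3, 'b'): ('shift', 5),
    (3, 'a'): ('shift', 6),
}

# States whose action is the same for every lookahead ("$default").
DEFAULT = {
    0: ('reduce', 4),   # reduce the epsilon production B -> .
    2: ('reduce', 1),
    4: ('accept', None),
    5: ('reduce', 2),
    6: ('reduce', 3),
}

GOTO = {(0, 'S'): 1, (0, 'A'): 2, (0, 'B'): 3}

def parse(text):
    """Return True iff text matches a*b under the tables above."""
    tokens = list(text) + ['$']
    stack, pos = [0], 0
    while True:
        state = stack[-1]
        # A state with a $default entry never inspects the lookahead.
        act = DEFAULT.get(state) or ACTION.get((state, tokens[pos]))
        if act is None:
            return False                      # parse error
        kind, arg = act
        if kind == 'accept':
            return True
        if kind == 'shift':
            stack.append(arg)
            pos += 1
        else:                                 # reduce by rule `arg`
            lhs, rhs_len = RULES[arg]
            if rhs_len:
                del stack[-rhs_len:]          # pop one state per rhs symbol
            stack.append(GOTO[(stack[-1], lhs)])
```

For example, parse("ab"), parse("b") and parse("aab") accept, while parse("a") and parse("ba") hit a parse error — exactly the trace described above: state 0 immediately reduces B -> epsilon, gotos to state 3, and shifts from there.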
_codereview.1099
This is a simple linked list program which creates a list by appending an object at the tail. It compiles and runs perfectly.

Are the coding style, logic, etc. fine? How can I improve this program? Is there anything redundant, or did I miss out on some important things?

#include<iostream>
#include<string>
using namespace std;

class Link_list {
private:
    string name;
    Link_list *next_node;
public:
    void add_item(Link_list *);
    void add_item();
    friend void show(Link_list *sptr)
    {
        while(sptr)
        {
            cout << sptr->name << endl;
            sptr = sptr->next_node;
        }
    }
};

void Link_list::add_item()
{
    cin >> name;
    next_node = NULL;
}

void Link_list::add_item(Link_list *pptr)
{
    cin >> name;
    next_node = NULL;
    pptr->next_node = this;
}

int main()
{
    Link_list *str_ptr = NULL;
    Link_list *curr_ptr = str_ptr;
    Link_list *prev_ptr;
    char ch = 'y';

    str_ptr = new(Link_list);
    str_ptr->add_item();
    curr_ptr = str_ptr;
    do
    {
        prev_ptr = curr_ptr;
        curr_ptr = new(Link_list);
        curr_ptr->add_item(prev_ptr);
        cout << "Do you want to add the item" << endl;
        cin >> ch;
    } while(ch != 'n');

    show(str_ptr);
}
Linked List program
c++;linked list
null
_codereview.70457
Currently I try to create some unit tests for a project, which provides access to some webservice methods.It's interface is rather simple. I have a class WebService which offers methods like CreatePDF.Here's a little example:The Model:The model contains a number of properties, which each represent a URL parameter of the webservice. The models provide a method GetParameterDictionary, which delivers a dictionary with key == url parameter name and value == url parameter value.public class CreatePDFParameters{ public List<string> Names { get; set; } public List<string> Colors { get; set; } public Dictionary<string, string> GetParameterDictionary() { [...] //returns Key == url parameter name and Value == ; separated list of values }}So here's a example for the Interface method:public byte[] CreatePDF(CreatePDFParameters parameters){ return MakeByteRequest(GetQuery(WebServiceMethods.CreatePDF, parameters.GetParameterDictionary()));}/// <summary>/// Creates a query out of a dictionary. The dictionary must have the paramname as key and the value as value!/// </summary>private string GetQuery(string methodName, Dictionary<string, string> parameters){ string parameterString = parameters.Where(parameter => !String.IsNullOrEmpty(parameter.Value)) .Aggregate(String.Empty, (current, parameter) => String.Format(String.IsNullOrEmpty(current) ? {0}?{1}={2} : {0}&{1}={2}, current, parameter.Key, parameter.Value)); return methodName + parameterString;}As you can see, you only need to call the CreatePDF method with a parameters object. The method itself calls a methods, which creates the query and calls a method, which makes the actual request (It calls directly the ByteRequest method, there are methods like StringRequest (delivers a string as return value) too).I would like to write a UnitTest for the GetQuery method. But I don't want to make it public or internal in the actual state. 
I could make the method public if I removed the CreatePDF method and let users make direct calls to MakeByteRequest / GetQuery and so on, which would require users to have some knowledge about the webservice itself (knowledge about return types, web method names and so on).

Would you prefer a simpler interface over unit tests in this case?
Prefer simplicity over testability?
c#;unit testing
The WebService interface with a CreatePDF method is good like that.The private GetQuery method is an implementation detail that should not be exposed.But there's something else you can do to test GetQuery.You can move that logic outside to a different class,whose main responsibility will be to build query strings.Then the method will be public, naturally, and you can implement unit tests for it.The query string builder class can become a collaborator of WebService:implementations of WebService can call it from their GetQuery methods to build query strings correctly,with the comforting thought that the utility class is properly unit tested.
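To illustrate the extraction in a language-neutral way, here is a rough Python sketch of such a query-string builder as its own unit (the names and the skip-empty-values rule mirror GetQuery above; the C# version would have the same shape):

```python
def build_query(method_name, parameters):
    """Build "method?k1=v1&k2=v2", skipping parameters with empty values."""
    parts = ["{}={}".format(key, value)
             for key, value in parameters.items() if value]
    if not parts:
        return method_name
    return method_name + "?" + "&".join(parts)
```

Because it is a pure function, its tests need no web service at all: for instance build_query("CreatePDF", {"Names": "a;b", "Colors": ""}) returns "CreatePDF?Names=a;b". That is exactly the testability win of moving the logic to a collaborator.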
_cstheory.1574
There is an increasing number of scientific articles which I look through, and somehow I feel the lack of a tool to keep track of the ones I have already read and my summary notes on them. Another usage scenario would be to search by an author and see which articles of the given author I have already read.

The question is: do you use or know of any freeware tools for organizing articles?
Do you use any article organizers?
soft question
I used to store them in an SVN repository, and then I found out about Mendeley; quite satisfying so far except for a few bugs. Different features are available depending on whether or not you want to pay some fee, but the free version is enough for many people -- it seems it would be for you.
_unix.178903
I run the top command on my Linux machine and I see that a vim command takes about 99.5% CPU:

  PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
23320 gsachde  25   0 10720 3500 2860 R 99.5  0.2 30895:11  vim

How do I verify which script/program it is?
linux + top command
linux;command;cpu;process management
If you press c while in top then the command will be expanded to show the full command used to start the process.

You can also take the PID and run:

ps -ef | grep $PID

Or:

cat /proc/$PID/cmdline
_softwareengineering.122436
We're writing a requirements document for our client and need to include the use cases of the system. We're following this template:

ID
Description
Actors
Precondition
Basic Steps
Alternate Steps
Exceptions
Business validations/Rules
Postconditions

In the Basic Steps section, should we include steps that the system performs in the back end, or should we only include steps that the user directly interacts with?

Example:

Basic Steps for Search 1:

User goes to search page
User enters term
User presses search
System matches search term with database entries
System displays results

vs

Basic Steps for Search 2:

User goes to search page
User enters term
User presses search
System displays results
Should back end processes be included in use cases in requirements document?
requirements
null
_cs.71962
In the proof of TQBF-completeness, it says that if the input size is i, then the TM for the input has at most 2^i configurations. Can someone explain why?

The proof is from: http://zoo.cs.yale.edu/classes/cs468/fall12/TQBF-complete.pdf
Why are there only 2^i configurations?
turing machines
Let $s(n)$ be the space used by the Turing machine, $\Sigma$ the input alphabet, $\Gamma$ the tape alphabet ($\Sigma \subseteq \Gamma$), and $Q$ the finite set of states. Then the maximum number of different configurations is bounded by $|Q| \times |\Gamma|^{s(n)} \times s(n)$. You have $s(n)$ many places, in each place you have $|\Gamma|$ choices, the head can be at any of the $s(n)$ positions, and the machine can be in any of the $|Q|$ states; from this you can derive the expression very easily.
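To tie this back to the $2^i$ on the slides (a sketch; the slides' bound absorbs the constants into the exponent):

```latex
\#\mathrm{configs}
  \;\le\; |Q|\cdot s(n)\cdot|\Gamma|^{s(n)}
  \;=\; 2^{\log_2 |Q| \,+\, \log_2 s(n) \,+\, s(n)\log_2 |\Gamma|}
  \;=\; 2^{O(s(n))}
```

So for a machine using space $s(n) \le i$ on an input of size $i$, there are at most $2^{ci}$ configurations for some constant $c$; writing this loosely as $2^i$ is the usual abuse of notation in such proofs.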
_codereview.71713
I want to know if this is a good idea. It is definitely only a proof of concept at this point, but if it's a good idea I would pursue the development into a more mature product. There are three main parts of this concept (maybe they should be separate questions, but I am asking about the concept as a whole and whether it conforms to acceptable standards or is just a bad idea):Create a single file Python interpreter (sort of) which will be compiled with pyinstaller for various platforms and included with the distribution of the app. This allows a completely pluggable system of command line utilities written in Python.Create a library which provides a decorator which creates a command line interface dynamically based on the function signature.Provide a production-ready server based on Bottle and CherryPy which serves a Web GUI based on a very simple plugin system.To this end I created a project on GitHub, and I would recommend looking at the structure and source code there, but I am including the most relevant pieces of code here as per the recommendations of the moderators.This is the code in magic.py which executes the python scripts. Note that the main point of this is to compile this code with pyinstaller so there is a one-step build process and it provides a pluggable system of command line utilities (also note the ellipses):# I want to make __future__ available, but it must be at the beginning.import __future__from config import get_configimport loggingimport sysimport osif False: # The following modules (even though they are not actually imported) # are meant to be available. When this module is packaged with pyinstaller # It will make these modules available for import or in other words they # will be bundled in the executable. import re import xml ... 
import cherrypyconfig = get_config(logging.conf)log_level = config.getint('logging', log_level)logger = logging.getLogger('magic')logger.setLevel(log_level)formatter = logging.Formatter(%(asctime)s %(levelname)s %(name)s %(message)s)handler = logging.StreamHandler(sys.stdout)handler.setLevel(log_level)handler.setFormatter(formatter)logger.addHandler(handler)# Only one parameter is needed for functionality, but I would like# to add support for flags to simulate the python interpreter flags.sys.argv = sys.argv[1:]if not sys.argv: sys.exit(-1)_file = sys.argv[0]if not _file.endswith('.py'): # append .py to _file if not present _file = '{}.py'.format(_file)ran = Falseconfig = get_config(magic.conf)dirs = config.get(magic, path).split()logger.debug('Executing command {}'.format(' '.join(sys.argv)))for _dir in dirs: filename = os.path.join(_dir, _file) if not os.path.exists(filename) or not os.path.isfile(filename): continue try: execfile(filename) ran = True except Exception, e: msg = Failed to execute {}. Reason: {}.format(' '.join(sys.argv), e) if hasattr(e, 'read'): msg = '{}\n\t{}'.format(msg, e.read()) logger.error(msg) # Here it ran, but raised an exception raise breakif not ran: logger.debug( Failed to execute file: {0}. {0} does not exist or is not a file.format(_file))Now, for the dynamic creation of the command line interface. 
I use inspect to get at the function signature and argparse to implement the CLI.cli.pyimport sysimport inspectimport argparseclass Cli(object): def __init__(self, description=): self.parser = argparse.ArgumentParser(description=description, formatter_class=argparse.RawDescriptionHelpFormatter) self.subparsers = self.parser.add_subparsers() self.functions = {} def command(self): def inner(fn): collects information about decorated function, builds a subparser then returns the function unchanged name = fn.__name__ self.functions[name] = fn desc = fn.__doc__ args, _, __, defaults = inspect.getargspec(fn) if not args: args = [] defaults = [] if len(args) != len(defaults): print All cli.command function arguments must have a default. sys.exit(-1) _parser = self.subparsers.add_parser(name, description=desc, formatter_class=argparse.RawDescriptionHelpFormatter) _parser.set_defaults(func=self.functions[name]) for arg, default in zip(args, defaults): # Try the lower case first letter for the short option first if '-{}'.format(arg[0]) not in _parser._option_string_actions: flag = ('-{}'.format(arg[0]), '--{}'.format(arg)) # Then the upper-case first letter for the short option elif '-{}'.format(arg[0]).upper() not in _parser._option_string_actions: flag = ('-{}'.format(arg[0]).upper(), '--{}'.format(arg)) # otherwise no short option else: flag = ('--{}'.format(arg)) if isinstance(default, basestring): _parser.add_argument(*flag, type=str, default=default) elif isinstance(default, list): _parser.add_argument(*flag, nargs='+') elif isinstance(default, bool): if default: _parser.add_argument( *flag, action='store_false', default=default) else: _parser.add_argument( *flag, action='store_true', default=default) elif isinstance(default, int): _parser.add_argument(*flag, type=int, default=default) return fn return inner def run(self): Executes the function corresponding to the command line arguments provided by the user args = self.parser.parse_args() func = args.func _args, _, __, 
defaults = inspect.getargspec(func) kwargs = {} for arg in _args: kwargs[arg] = getattr(args, arg) func(**kwargs)Now for the web GUI. Here is the script web.py which dynamically loads anything in our plugin directory as a plugin:import osimport var.lib.bottle as bottledef _get_plugins(app): This function builds a list ret = {} dirs = [d for d in os.walk(app.PLUGIN_DIR).next()[1]] dirs.sort() for d in dirs: # import the main function into a temporary variable _main = __import__( 'var.www.plugins.{}.plugin'.format(d), globals(), locals(), ['main']) # add plugin directory to TEMPLATE_DIR so they can house their # own templates (this allows plugins to be self-contained) bottle.TEMPLATE_PATH.append(os.path.join(app.PLUGIN_DIR, d)) # Route GET and POST requests to the main() method of plugin.py app.route( '/{}'.format(d), ['GET', 'POST', 'DELETE', 'PUT', 'PATCH'], _main.main) ret[d] = bottle.template('link', name=d) # TODO: inspect function to allow for dynamic routing and # service discovery return retapp = bottle.Bottle()bottle.TEMPLATE_PATH = ['var/www/templates/']app.PLUGIN_DIR = 'var/www/plugins/'app.STATIC_DIR = 'var/www/static/'@app.route('/')@app.route('/index')@app.route('/index.html')@bottle.view('index')def index(): Returns the index template with a list of templates(actually a list of links to the plugins URIs). return {'plugins': app.PLUGINS}@app.route('/static/<filepath:path>')def static(filepath): Serves static files located in app.STATIC_DIR. return bottle.static_file(filepath, root=app.STATIC_DIR)if __name__ == '__main__': app.PLUGINS = _get_plugins(app) app.run(server='cherrypy')Is it a good idea to structure apps this way to provide a cross-platform application with as little boiler-plate code as possible?
Super simple way of generating dynamic interfaces in Python both CLI and Web GUI
python;console;user interface;bottle;cherrypy
Now I don't take answering my own question lightly, and perhaps someone is out there writing up the perfect answer right now, but I doubt it because my question is very broad and, as I've discovered, there is a lot of research to be done on the subject. I hope I can do the subject some justice, as there are a lot of very talented people implementing separately what I am implementing myself.

The subject of creating user interfaces has a long and storied history, perhaps the most famous incident of which is the well-known clash between Steve Jobs and Bill Gates. In fact there was a movie (I had a link to Wikipedia's entry on Pirates of Silicon Valley here, but I can only add two links) made about that one (there are a lot of criticisms of this film and its accuracy, but I reference only the fact that it was such a major event that there was a movie made about it). The fact is that how a user interacts with the programs that we as developers create is a very important topic.

Now good design is very important, and there are many factors in what determines good design. I will not list them here, but if you're interested in doing further research, here is a good place to start (pay particular attention to the references section).

With that being said, it would be hard to design a framework which could account for all of the design principles that a user interface engineer would be aware of and would care about.
There is one type of interface where good design doesn't have as many variables, and that is the command line. There are some standards which dictate how a program should behave, as stated in the answers to this question.

Now, getting something for free (I think) is something which everyone loves, but which can lead us into a trap like the old bait and switch (I had a link to Wikipedia's entry on bait and switch here, but I can only add two links). But there are times when technology advances to a point where a once hard-to-achieve end is made significantly easier, thus allowing us to get something without having to put any extra effort into it ourselves, and we essentially get it for free; think HTTP (although the real price is actually the years of trial and error that those who have gone before us have put into it). But that is the beauty of open source: we can give so that others can profit along with ourselves.

OK, enough rhetoric. The answer to my question as I see it is that when a standard has been developed and matured to a point where most people agree on it, then a framework makes sense; but when something hasn't been ironed out as much, then we need to make attempts at defining it more and more so that we can all benefit.

So for my project, I am going to try to capture the standards surrounding the command line while making it incredibly easy to create a web interface around Python programs. One of the most important things I will focus on is the deployment of web applications, because that is one of the hardest parts of creating a Python web application.

Please feel free to edit this answer if you feel you have a better idea or more sources.
_codereview.71114
I have written a function which takes the year in question and words, a dictionary that maps words to a list of year/count entries. Now I am wondering how I can improve the code that I have, how to make it simpler, or how to make it better performance-wise.

def avgWordLen(year, words):
    totLen = 0
    totword = 0
    for word in words:
        for nary in words[word]:
            if nary.year == year:
                totLen += len(word) * nary.count
                totword += nary.count
    if totword != 0:
        return totLen / totword
    else:
        return 0
Finding average word length in a given year
python;performance;python 3.x;hash table
totword and totLen are not so good names. And in any case PEP 8 suggests using snake_case for both variable and function names. So I recommend the following renames:

total_word_length instead of totLen
word_count instead of totword
average_word_length instead of avgWordLen

When you iterate over keys in a dictionary and then look up the values in every iteration step, then it's better to iterate over the dictionary items. That is, instead of:

    for word in words:
        for nary in words[word]:

Do like this:

    for word, nary_list in words.items():
        for nary in nary_list:

This way you avoid unnecessary dictionary lookups.

At the end of the method, the else is unnecessary, because the if part always returns. It's slightly simpler this way:

    if word_count != 0:
        return total_word_length / word_count
    return 0
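Putting the renames, the items() iteration, and the early-return simplification together gives something like the following sketch. The YearCount record type is my own stand-in for the question's nary objects, which are assumed to expose year and count attributes:

```python
from collections import namedtuple

# Hypothetical record type standing in for the question's year/count entries.
YearCount = namedtuple("YearCount", ["year", "count"])

def average_word_length(year, words):
    """Average length of the words used in the given year, weighted by count."""
    total_word_length = 0
    word_count = 0
    for word, entries in words.items():  # iterate over items, not keys
        for entry in entries:
            if entry.year == year:
                total_word_length += len(word) * entry.count
                word_count += entry.count
    if word_count != 0:
        return total_word_length / word_count
    return 0
```

With words = {"cat": [YearCount(2000, 3)], "horse": [YearCount(2000, 1)]}, average_word_length(2000, words) gives (3*3 + 5*1) / 4 = 3.5.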
_unix.120679
EDIT: I have since found that by using a folder in the root directory, things get a bit further - I can list the subfiles. So it really looks like permissions on the folder are the issue. I'm not sure what else to do besides chmod 777.

I'm trying to configure an anonymous rsync daemon on CentOS 5.9. If I allow chroot, the server reports that chroot fails. If I disable it, chdir fails.

    # rsyncd.conf
    max connections = 20
    log file = /var/log/rsync.log
    timeout = 300
    use chroot = false

    [builds]
        path = /home/fuzz/builds
        read only = yes
        list = yes
        uid = nobody
        gid = nobody

    # /etc/xinetd.d/rsync
    # default: off
    # description: The rsync server is a good addition to an ftp server, as it \
    #       allows crc checksumming etc.
    service rsync
    {
        disable         = no
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/bin/rsync
        server_args     = --daemon
        log_on_failure  += USERID
    }

I have set all files and folders under /home/fuzz/builds to 777. The folder is owned by the user fuzz.

On the client side, this works...

    $ rsync rsync://host
    builds

But when I try to view the contents of the builds directory, I get this error...

    $ rsync -vvvv rsync://host/builds
    opening tcp connection to host port 873
    Connected to host (10.186.5.90)
    note: iconv_open(UTF-8, UTF-8) succeeded.
    sending daemon args: --server --sender -vvvvde.Lsf . builds/
    @ERROR: chdir failed
    [Receiver] _exit_cleanup(code=5, file=main.c, line=1534): entered
    rsync error: error starting client-server protocol (code 5) at main.c(1534) [Receiver=3.0.9]
    [Receiver] _exit_cleanup(code=5, file=main.c, line=1534): about to call exit(5)
Configuring anonymous rsync daemon
rsync;daemon
Generally this would indicate some sort of permission problem. If you've already checked the permissions on /home/fuzz and /home/fuzz/builds, my next suspicion would be SELinux. You can check whether SELinux is enabled with getenforce. To temporarily disable it to determine if that's the issue, run setenforce 0.
_codereview.164045
I have some data in a data frame and would like to return a value based on specific conditions. It is highly time-consuming. I tried three methods:

Method 1: without a dataframe; this is the simple logic I have and it is super fast.

    @numba.vectorize(['float64(float64, float64)'])
    def Method1(a,b):
        x=0.0
        y=0.0
        z=0.0
        if (a <= 0.002):
            x=0.5
            y=2500
            z=20000
        elif (a <= 0.003):
            x=0.3
            y=2500
            z=15000
        elif (a <= 0.005):
            x=0.2
            y=1000
            z=10000
        else:
            return 0.0
        return min(max(x*b,y),z)

    %timeit Method1(0.001,200000)

Method 2 - input the condition data as a dataframe and run the function:

    dict = {'amin':[0.000,0.002,0.003],
            'amax':[0.002,0.003,0.005],
            'dfx':[0.5,0.3,0.2],
            'dfy':[2500,2500,1000],
            'dfz':[20000,15000,10000]}
    df=pd.DataFrame(dict)

    @numba.vectorize(['float64(float64, float64)'])
    def Method2(a,b):
        x=0.0
        y=0.0
        z=0.0
        x=df[(a<=df.amax) & (a>=df.amin)]['dfx'].values
        y=df[(a<=df.amax) & (a>=df.amin)]['dfz'].values
        z=df[(a<=df.amax) & (a>=df.amin)]['dfy'].values
        if (len(x)==0) or (len(y)==0) or (len(z)==0):
            return 0.0
        else:
            return min(max(x[0]*b,y[0]),z[0])

    %timeit Method2(0.001,200000)

Method 3 - looped over the rows of the df:

    def Method3(a,b):
        for index,row in df.iterrows():
            if (mPD >= row['amin']) & (mPD <= row['amax']):
                return min(max(row['dfx']*b,row['dfy']),row['dfz'])
        return 0.0

    %timeit Method3(0.001,200000)

Method 1 finishes in 1.2 microseconds.
Method 2 takes 2.47 milliseconds (1000 times slower than Method 1).
Method 3 takes ~80 microseconds.

Please help me improve the performance of Method 2 / 3. Also, please let me know why Method 3 is faster.

P.S. I plan to use Numba so I cannot use lambda in the functions.
Select value based on condition on dataframe
python;performance;python 2.7;lambda;numba
There is some overhead to numpy, and even more overhead to pandas. You won't be able to attain the performance of Method1 using pandas.

I'll comment on the methods one at a time:

Method1

There is no need to initialize x, y, and z.
You don't deal with the case where a is negative.
For me, Method1 is twice as fast when I leave off the @numba decorator.

Method2

Don't name a dictionary dict! This is the name of the class. I've renamed your dict as param_dict below.
Again, there is no need to initialize x, y, and z.
You're running the two inequalities three times each. Better would be to set valid_rows = (a<=df.amax) & (a>=df.amin) or similar.

    def Method2a(a,b):
        valid_rows = (a <= df.amax) & (a >= df.amin)
        if not any(valid_rows):
            return 0.0
        x=df[valid_rows]['dfx'].values
        y=df[valid_rows]['dfz'].values
        z=df[valid_rows]['dfy'].values
        return min(max(x[0]*b,y[0]),z[0])

With a setup where Method2 takes 2.25 ms, this takes 1.35 ms.

Accessing a dataframe at a boolean array is slower than at an index. It's much better to find the first True index of valid_rows first.

    def Method2b(a,b):
        valid_rows = (a <= df.amax) & (a >= df.amin)
        if not any(valid_rows):
            return 0.0
        idx = np.where(valid_rows)[0][0]
        x=df['dfx'].iat[idx]
        y=df['dfz'].iat[idx]
        z=df['dfy'].iat[idx]
        return min(max(x*b,y),z)

This takes 531 µs.

Drilling down into this, just the first line takes 472 µs (try it with the method truncated after the first line, not returning anything)! That's where we can improve.

We don't really need two sets of comparisons. There's enough information in df.amin together with the final value of df.amax:

    def Method2c(a,b):
        idx = np.searchsorted(df.amin.values, a) - 1
        # special case if a == df.amin.iat[0]
        if idx < 0:
            if a == df.amin.iat[0]:
                idx = 0
            else:
                return 0.0
        # special case if a is bigger than all values in df.amin
        elif idx == df.shape[0] - 1:
            if a > df.amax.iat[idx]:
                return 0.0
        x=df['dfx'].iat[idx]
        y=df['dfz'].iat[idx]
        z=df['dfy'].iat[idx]
        return min(max(x*b,y),z)

This takes 38.6 µs.
Note the idea is closer to your approach in Method1. This is about as well as we can hope to do with pandas, as the bottleneck is now actually the three accesses:

    x=df['dfx'].iat[idx]
    y=df['dfz'].iat[idx]
    z=df['dfy'].iat[idx]

which actually take 24.6 µs by themselves!

Edit: Actually, using .values instead of .iat saves a good amount here, cutting the whole run time down to 19.3 µs for:

    def Method2c_values(a,b):
        idx = np.searchsorted(df['amin'].values, a) - 1
        # special case if a == df['amin'].values[0]
        if idx < 0:
            if a == df['amin'].values[0]:
                idx = 0
            else:
                return 0.0
        # special case if a is bigger than all values in df.amin
        elif idx == df.shape[0] - 1:
            if a > df['amax'].values[idx]:
                return 0.0
        x=df['dfx'].values[idx]
        y=df['dfz'].values[idx]
        z=df['dfy'].values[idx]
        return min(max(x*b,y),z)

If we go back to the dictionary (which I'm calling param_dict) we can speed it up quite a bit:

    def Method2d(a,b):
        idx = np.searchsorted(param_dict['amin'], a) - 1
        # special case if a == df.amin.iat[0]
        if idx < 0:
            if a == param_dict['amin'][0]:
                idx = 0
            else:
                return 0.0
        # special case if a is bigger than all values in df.amin
        elif idx == len(param_dict['amin']) - 1:
            if a > param_dict['amax'][idx]:
                return 0.0
        x=param_dict['dfx'][idx]
        y=param_dict['dfz'][idx]
        z=param_dict['dfy'][idx]
        return min(max(x*b,y),z)

This takes 6.91 µs. Now the bottleneck is back to the first line, which is taking 5.66 µs by itself.

We can rewrite this to do np.searchsorted ourselves, with the extra logic for the special cases worked in:

    def Method2e(a,b):
        idx = 0
        for value in param_dict['amin']:
            if a < value:
                # if idx is 0, we're out of bounds
                if not idx:
                    return 0.0
                break
            elif a == value:
                # if idx is 0, we need to adjust by 1
                if not idx:
                    idx = 1
                break
            idx += 1
        else:
            # a is larger than every element of param_dict['amin']
            if a > param_dict['amax'][-1]:
                return 0.0
            idx -= 1
        x=param_dict['dfx'][idx]
        y=param_dict['dfz'][idx]
        z=param_dict['dfy'][idx]
        return min(max(x*b,y),z)

This is 823 ns.
We could tweak mildly to put the idx == 0 part out of the main loop and other such things, but I'll leave it as is.

Method3

This method is fine, except that

    def Method3(a,b):
        for index,row in df.iterrows():
            pass

takes 112 µs for me.
_unix.90819
I'm using CentOS 6.4 and I was following this tutorial in order to upgrade PHP from v5.3.3 to v5.4.19, but I got the following error:

    Error: php54w-common conflicts with php-common-5.3.3-23.el6_4.i686

How do I resolve this problem?

    [my_profile@localhost gplus-quickstart-php]$ sudo rpm -Uvh http://mirror.webtatic.com/yum/el6/latest.rpm
    [sudo] password for my_profile:
    Retrieving http://mirror.webtatic.com/yum/el6/latest.rpm
    warning: /var/tmp/rpm-tmp.S0yqSL: Header V4 DSA/SHA1 Signature, key ID cf4c4ff9: NOKEY
    Preparing...                ########################################### [100%]
       1:webtatic-release       ########################################### [100%]
    [my_profile@localhost gplus-quickstart-php]$ sudo yum install php54w
    Loaded plugins: fastestmirror, refresh-packagekit, security
    Loading mirror speeds from cached hostfile
     * base: mirror.netglobalis.net
     * extras: mirror.netglobalis.net
     * rpmforge: mirror.nexcess.net
     * updates: mirror.netglobalis.net
     * webtatic: us-east.repo.webtatic.com
    webtatic                 | 2.9 kB     00:00
    webtatic/primary_db      |  98 kB     00:00
    Setting up Install Process
    Resolving Dependencies
    --> Running transaction check
    ---> Package php54w.i386 0:5.4.19-1.w6 will be installed
    --> Processing Dependency: php54w-common = 5.4.19-1.w6 for package: php54w-5.4.19-1.w6.i386
    --> Processing Dependency: php54w-cli = 5.4.19-1.w6 for package: php54w-5.4.19-1.w6.i386
    --> Running transaction check
    ---> Package php54w-cli.i386 0:5.4.19-1.w6 will be installed
    ---> Package php54w-common.i386 0:5.4.19-1.w6 will be installed
    --> Processing Conflict: php54w-common-5.4.19-1.w6.i386 conflicts php-common < 5.4.0
    --> Finished Dependency Resolution
    Error: php54w-common conflicts with php-common-5.3.3-23.el6_4.i686
     You could try using --skip-broken to work around the problem
     You could try running: rpm -Va --nofiles --nodigest
    [my_profile@localhost gplus-quickstart-php]$ ^C
    [my_profile@localhost gplus-quickstart-php]$ ^C
    [my_profile@localhost gplus-quickstart-php]$ Error: php54w-common conflicts with php-common-5.3.3-23.el6_4.i686
    bash: Error:: command not found
    [my_profile@localhost gplus-quickstart-php]$
PHP Upgrade Error (PHP 5.3.3 to PHP 5.4.19 on CentOS 6.4)
centos;php;upgrade
The tutorial you cited does recommend using this Webtatic repo on a fresh system, where you can avoid conflicts with installed packages, but suggests that you can upgrade a currently-installed php using (as root or with sudo):yum install yum-plugin-replaceyum replace php-common --replace-with=php54w-commonThen try sudo yum install php54w again.
_unix.25822
Is there a security risk in running a web server like Unicorn as root?The Nginx master process runs as root, the Nginx worker runs as the limited www-data user, but I can't set another user like www-data to run the Unicorn master/workers without messing around with www-data's PATH.
Debian + Nginx/Unicorn permissions
security;nginx
Is there a security risk in running a web server like Unicorn as root?

As Thomas said in sec.se chat, running anything as root carries an implicit security risk. The thing to understand about root is that the kernel essentially trusts all its actions without complaint.

The issue occurs if there are any vulnerabilities in nginx or unicorn. If this happens, it may be possible to execute an exploit, misusing the process. However, it is important to understand how these servers work in order to understand what the exploit vector may be. In theory, there are two parts that must occur holding root permissions - the reading of configuration and the bind() operation, assuming your server has a port < 1023.

unicorn acts (assuming gunicorn is similar) as a prefork model - each client request is handled in a separate process. The job of the process running as root is to bind to the necessary port and then pass connections off to the workers. Worker models mix threads and processes. As I understand it, nginx operates in a very similar way, with the proviso that it has a greater bias toward asynchronous IO - I believe epoll/kqueue/accept. If you have a look at the strategies for solving the c10k problem, these are why the designs operate this way.

In theory, then, most worker processes can seteuid() and setegid() to drop their root permissions, and should do so. Problems arise when these processes do not and handle all their traffic as root; most processes do drop their root permissions. I should also make two fairly obvious statements:

You could configure your nginx daemon to run as something other than root if you do not bind to ports < 1024.
You can (and I do) configure unicorn (gunicorn in my case) to create socket files, meaning it does not need to be run as root.
nginx can proxy web requests onto unix sockets, meaning gunicorn never exposes a TCP connection.

The vulnerable section of code should therefore amount to parsing the config file and handing off connections; in theory the danger to root is therefore quite minimal, assuming this works and is heavily tested.

POSIX capabilities are a different way (other than setuid bits) to delegate portions of root's capabilities to other processes; CAP_NET_BIND_SERVICE, for example, allows a process to bind to a port less than 1024 without having to be root. They work via extended attributes, I believe. Fedora has recently (F16?) moved to ensuring all packages use capabilities rather than sticky bits.

Another point of note, and hopefully a positive one - unicorn, if I understand it correctly, is a Ruby process (gunicorn is definitely a Python process). The use of an interpreted language does reduce the risk that the developers have introduced bugs, as the string handling should definitely be safe and pointers are not available. However, bugs in the interpreter may pose a security risk to all interpreted programs, too.

The unfortunate reality, however, is that a compromise of your www-data process is still going to be a problem for you; an attacker can potentially dump your database, deface your website, etc. Knowing root is secure is great, but if, for example, your website is your main advertising point for your customers, having it defaced is still a threat to your business.

Summary

Yes, running unicorn as root is a risk. However, the attack surface is relatively small in terms of the code that will execute as root. Also, there may be options for minimising what you run as root. I have also not covered MAC systems such as SELinux, but these are viable options along with capabilities, assuming you're prepared to learn them. The important thing to understand is that risk is a balance - how sensitive/important this service is will determine how much effort you should put into securing it.
If you're running a banking website, you might want to think seriously about how you harden your system; if this is a website hosting lolcat pictures (ok, y'know what I mean) you may decide the current setup is just fine.
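To make the "bind while privileged, then drop root" pattern concrete, here is a minimal Python sketch. This is an illustration, not unicorn's or nginx's actual code; the www-data user/group names and the bind_then_drop helper are placeholders of my own:

```python
import grp
import os
import pwd
import socket

def bind_then_drop(port, user="www-data", group="www-data"):
    """Bind a listening socket first, then drop root privileges if we have them."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("127.0.0.1", port))  # binding needs root only for ports < 1024
    sock.listen(5)
    if os.getuid() == 0:
        try:
            # Drop the group first: after setuid() the process would no
            # longer have permission to call setgid().
            os.setgid(grp.getgrnam(group).gr_gid)
            os.setuid(pwd.getpwnam(user).pw_uid)
        except KeyError:
            pass  # placeholder user/group absent on this system; sketch only
    return sock

# Port 0 asks the OS for an ephemeral port, so the sketch also runs unprivileged.
listener = bind_then_drop(0)
listener.close()
```

Real servers do this between reading their configuration and entering the accept loop, so no request is ever handled with root privileges.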
_unix.339832
We have machines with both local and LDAP accounts. Every computer has an HDD mounted on which the local group users has read and write permissions. How can I add all the LDAP users to that users group?
How to assign LDAP user to local group users?
linux;ubuntu;ldap
null
_codereview.116075
I use this template to look up clients registered for data. The data is associated by name (key) and clients are shared pointers (value) to a class which will consume the data.

    ////////////////////////////////////////////////////////////////////
    // Registrar Template to help manage Key to Value Registrations
    //
    // T1 - Key Object
    // T2 - Value Object
    //
    // For Example: Register clients (T2) for Data (T1)
    ////////////////////////////////////////////////////////////////////
    #ifndef _RegistrarT_hpp_
    #define _RegistrarT_hpp_

    #include <map>
    #include <vector>
    #include <set>

    template <class T1, class T2, class CompareT1 = std::less<T1> >
    class RegistrarT
    {
    public:
        typedef std::multimap<T1, T2, CompareT1> RegistrationMultiMap;
        typedef std::vector<T2> RegistrationVector;
        typedef std::set<T1> KeySet;

    public:
        RegistrarT(){}
        ~RegistrarT(){}

        // Register a value; Do not allow duplicate registrations
        void Register(T1 const & key, T2 const & value)
        {
            Unregister(key, value); // Remove if it exists in the multimap
            registrations_.insert(std::make_pair(key, value));
        }

        // Lookup all Registered for Key then find and remove value
        void Unregister(T1 const & key, T2 const & value)
        {
            bool found = false;
            typename RegistrationMultiMap::iterator itr = registrations_.lower_bound(key);
            while (!found && itr != registrations_.upper_bound(key))
            {
                if (itr->second == value)
                    found = true;
                else
                    ++itr;
            }
            if (found)
                registrations_.erase(itr);
        }

        // Remove all values registered for key
        void UnregisterByKey(T1 const & key)
        {
            registrations_.erase(registrations_.lower_bound(key),
                                 registrations_.upper_bound(key));
        }

        // Find all values and remove registrations for all keys
        void UnregisterAll(T2 const & value)
        {
            typename RegistrationMultiMap::iterator itr = registrations_.begin();
            while (itr != registrations_.end())
            {
                if (itr->second == value)
                    registrations_.erase(itr++);
                else
                    ++itr;
            }
        }

        // Find all values and remove registrations for all keys
        // return all keys affected
        void UnregisterAll(T2 const & value, KeySet& ks)
        {
            typename RegistrationMultiMap::iterator itr = registrations_.begin();
            while (itr != registrations_.end())
            {
                if (itr->second == value)
                {
                    ks.insert(itr->first);
                    registrations_.erase(itr++);
                }
                else
                    ++itr;
            }
        }

        // Get all values registered for key
        bool GetRegistrations(T1 const & key, RegistrationVector& rv)
        {
            typename RegistrationMultiMap::iterator itr = registrations_.lower_bound(key);
            while (itr != registrations_.upper_bound(key))
            {
                rv.push_back(itr->second);
                ++itr;
            }
            return (rv.size() > 0);
        }

        // Get all keys; std::set will not allow duplicates
        void GetRegistrationKeys(KeySet& ks)
        {
            typename RegistrationMultiMap::iterator itr = registrations_.begin();
            while (itr != registrations_.end())
            {
                ks.insert(itr->first);
                ++itr;
            }
        }

        // Check if key is registered
        bool RegistrationsExist(T1 const & key)
        {
            typename RegistrationMultiMap::iterator itr = registrations_.lower_bound(key);
            return (itr != registrations_.upper_bound(key));
        }

        // Get count of registrations for key
        std::size_t RegistrationsCount(T1 const & key)
        {
            std::size_t cnt = 0;
            typename RegistrationMultiMap::iterator itr = registrations_.lower_bound(key);
            while (itr != registrations_.upper_bound(key))
            {
                cnt++;
                ++itr;
            }
            return (cnt);
        }

        // Is value registered for key?
        bool RegistrationsExist(T1 const & key, T2 const & value)
        {
            typedef typename RegistrationMultiMap::iterator ResIter;
            std::pair<ResIter, ResIter> range = registrations_.equal_range(key);
            ResIter it;
            for (it = range.first; it != range.second; ++it)
            {
                if (it->second == value)
                    return true;
            }
            return false;
        }

        // Is any value registered
        bool RegistrationsExist()
        {
            return !registrations_.empty();
        }

        // How many keys are in use
        std::size_t RegistrationCount()
        {
            return registrations_.size();
        }

        // Clean up
        void Clear()
        {
            registrations_.clear();
        }

    private:
        RegistrationMultiMap registrations_; // Holds all
    };

    #endif // _RegistrarT_hpp_

Sample usage:

    #include "RegistrarT.hpp"
    #include <string>

    typedef RegistrarT<std::string, std::string> NewsRegistrations;

    int main(int argc, char *argv[])
    {
        NewsRegistrations sportingNews_;
        NewsRegistrations::KeySet keyset;
        std::string moe("Moe");
        std::string curly("Curly");
        std::string larry("Larry");
        sportingNews_.Register(std::string("Football"), moe);
        sportingNews_.Register(std::string("Wrestling"), moe);
        sportingNews_.Register(std::string("Wrestling"), curly);
        sportingNews_.RegistrationsCount(std::string("Wrestling"));
        sportingNews_.Register(std::string("Rugby"), curly);
        sportingNews_.Register(std::string("BeachVolleyBall"), larry);
        sportingNews_.UnregisterAll(moe, keyset);
        sportingNews_.UnregisterByKey(std::string("Wrestling"));
        sportingNews_.RegistrationsExist(std::string("Bowling"));
    }

I use a test framework so for simplicity I did not post my tests. Basically I check registration counts and whether registrations exist. Looking to learn new stuff and gain reputation points so I can attempt to give back.
C++ Template for one to many Registration (pre Gang of Four)
c++;design patterns;template
null
_codereview.26449
I am teaching myself Ruby and Ruby on Rails. As part of this regimen I thought it would be a good idea to do some Ruby Quiz exercises, so I started with the solitaire cipher. The basic functionality of decoding a properly formatted message is there, but only just so. I've come to the realization that I've written this as if Ruby had no object-oriented functionality; instead it's a big imperative function full of clever expressions that make it hard to read.

At this point I would like to pad it out with a fuller feature set and un-clever some of the logic, and I wanted to do so via TDD. Is this program hardly unit-testable because of its imperative design? As someone who wants to be a great Ruby programmer, is it imperative that I refactor this code to utilize classes, methods and objects? If not, will unit-testing be of very limited value as a result? Am I overreacting, and this is fine for what it was designed for?

    input = String.new
    input = ARGV[0].dup

    def solitaire(input)

    def decode(msg)
      #Creates a hash with keys 1-54
      abc = ('A'..'Z').to_a
      alphaHash = Hash.new
      alphaHash.default = (1..54).to_a.each {|x| alphaHash[x] = (abc[x - 1])} #assigns 1-26 a letter in alphabetical order
      abc.each {|x| alphaHash[abc.index(x) + 27] = x} #assigns letters in order to 27-52
      #All non-joker card values 1-52 can be resolved to their letter

      #Creates array in which each letter from msg is added as a number to the array, A = 1, B = 2 etc.
      msg.delete! ' '
      convertedMessage = Array.new
      msg.each_char {|letter| convertedMessage << alphaHash.key(letter)}

      #Create deck array, for this example in ascending numerical order; clubs, diamonds, hearts, spades
      deck = (1..54).to_a

      #Set indexes of two jokers
      jkr_a_idx = deck.index 53
      jkr_b_idx = deck.index 54

      convertedKeys = Array.new

      #This uses the solitaire cipher to generate the keys the message was encrypted with
      while convertedKeys.length < convertedMessage.length

        #Joker A down one card
        jkr_a_idx = deck.index 53
        jkr_a_idx += 1
        #check if it returns to front of deck
        if jkr_a_idx >= 54
          jkr_a_idx -= 54 #Reset index to beginning of deck
          jkr_a_idx += 1  #Joker can never be first card so it skips index 0
        end
        #Remove and insert Joker A at new index
        deck.delete(53)
        deck.insert(jkr_a_idx, 53)

        #Joker B down two cards
        jkr_b_idx = deck.index 54
        jkr_b_idx += 2
        #check if Joker B must return to front of deck
        if jkr_b_idx >= 54
          jkr_b_idx -= 54 #Reset index to beginning of deck
          jkr_b_idx += 1  #Joker can never be first card so it skips index 0.
        end
        #Remove and insert Joker B at new index
        deck.delete(54)
        deck.insert(jkr_b_idx, 54)

        #Triple cut around jokers, exchange cards above first joker with cards below second joker.
        #determine top and bottom jokers
        topJoker = deck.detect {|e| e == 53 or e == 54}
        if topJoker == 53
          bottomJoker = 54
        end
        if topJoker == 54
          bottomJoker = 53
        end

        #Make the cuts
        topCut = deck.slice!(0...deck.index(topJoker))
        if bottomJoker != deck.last #if a joker is the last card, there is no bottom cut
          bottomCut = deck.slice!((deck.index(bottomJoker) + 1)..-1) #cuts cards after bottom joker to the last one
          deck.unshift bottomCut #Inserts the bottomCut at the front
          deck.flatten!
        end
        deck << topCut
        deck.flatten! #deck must be flattened as cuts are inserted as nested arrays

        #Count cut: take last card's value, cut this many cards from top and insert before last card
        if deck.last == 53 or deck.last == 54 #Either joker's value is always 53
          countCut = deck.slice!(0...53) #If either joker is the last card, we cut 53 cards
        else
          countCut = deck.slice!(0...deck.last)
        end
        deck.insert(deck.index(deck.last), countCut) #inserts the countCut before the last card
        deck.flatten!

        #Take first card's value, count this many cards, convert the facing card to a letter, this is the letter for the keystream
        if deck.first == 54 #All jokers get value 53
          if deck[53] != 53 and deck[53] != 54 #If a joker is the facing card, there is no output to the keystream for this iteration
            convertedKeys << alphaHash.key((alphaHash[deck[53]])) #Any other facing card is converted to a letter, then back to numeric
          end
        else
          if deck[deck.first] != 53 and deck[deck.first] != 54 #Step is skipped if the facing card is a joker
            convertedKeys << alphaHash.key((alphaHash[deck[deck.first]]))
          end
        end
      end #while loop

      decodedMessage = String.new

      #Decodes the message
      #Both convertedMessage and convertedKeys are numeric values 1-26
      convertedMessage.each { |value|
        #When decoding, subtract key from the encoded value for the decoded message
        if convertedKeys[decodedMessage.length] >= value #If this operation is 0 or negative, add 26 to value
          decodedMessage << alphaHash[((value + 26) - convertedKeys[decodedMessage.length])]
        else
          decodedMessage << alphaHash[(value - convertedKeys[decodedMessage.length])]
        end
      }
      decodedMessage
    end #decode

    puts decode(input)
    end

    puts solitaire(input)
Is my solitaire cipher too imperative for an Object Oriented language like Ruby?
ruby
Some notes on your code:

alphaHash = Hash.new, each, delete, +=, ...: This shows that you think in imperative terms (init, update, remove, insert, destroy, change, ...). In Ruby a functional style (see this) with immutable data structures is more idiomatic.
or/and are used for flow control; for logic you should use &&/||.

The real problem of your code is that it is not declarative. You have a bunch of code put together doing things, but it's difficult to relate each step to the specifications (if you have to insert comments for that, it's a signal something is wrong). The way to solve this is by using abstractions (functions/methods) that capture the specifications. I'll show the skeleton of my solution; I think it's more useful than going into full detail (ask if you want to see the complete code). Note how every step (in decode, encode and the deck re-arranging) has its own abstraction; the code is a composition of them:

    class Deck < Array
      def move(card, offset)
      end

      def triple_cut_around(card1, card2)
      end

      def count_cut_last
      end

      def get_output_letter
      end

      def self.value_from_card(card)
      end
    end

    class SolitaireCipher
      CharsToDigits = Hash[("A".."Z").map.with_index(1).to_a]
      DigitsToChars = CharsToDigits.invert

      def self.gen_keystream
        initial_cards = Deck.new((1..52).to_a + [:joker_a, :joker_b])
        ...
        shuffled_cards = cards.
          move(:joker_a, +1).
          move(:joker_b, +2).
          triple_cut_around(:joker_a, :joker_b).
          count_cut_last
        letter = shuffled_cards.get_output_letter
        [letter, shuffled_cards]
        ...
      end

      def self.chars_to_digits(chars)
      end

      def self.digits_to_chars(digits)
      end

      def self.encode(string)
        s0 = string.upcase.gsub(/[^A-Z]/, '')
        s = s0.ljust((s0.size / 5) * 5, "X")
        digits1 = chars_to_digits(s.chars)
        digits2 = chars_to_digits(gen_keystream.take(s.length))
        digits_encoded = digits1.zip(digits2).map { |d1, d2| (d2 + d1) % 26 }
        digits_to_chars(digits_encoded).each_slice(5).map(&:join).join(" ")
      end

      def self.decode(string)
      end
    end

    encoded = SolitaireCipher.encode("Code in Ruby, live longer!")
    puts encoded #=> GLNCQ MJAFF FVOMB JIYCB
    decoded = SolitaireCipher.decode(encoded)
    puts decoded #=> CODEI NRUBY LIVEL ONGER
_unix.214392
I have a strange issue on a MikroTik RB951-2HnD router. I built an image a few years ago using revision 39392, flashed it, and everything worked fine. Just a few months ago I decided to update the firmware. I did, and discovered that the physical network is completely broken: I have over 90% packet loss via the ethernet ports, though wifi works perfectly. I thought that I had messed up the build, so I flashed 2 different images from download.openwrt and a couple more that I built myself, but the symptoms are always the same. I wanted to go back to the svn revision that worked for me, but unfortunately that patch is unavailable, so I can't flash that image back.

The funny thing is that everything, including the physical ports, works while netbooting via vmlinux-initramfs (bootp) of the same build revision. Since vmlinux works fine I suspected that the flash is damaged, so I made sure that the files from rootfs.tar.gz and the flashed ones are the same. As the next step I compared the loaded modules on vmlinux-initramfs and the flashed system; the flashed one has these extras:

    nf_log_common.ko
    nf_log_ipv4.ko
    nf_log_ipv6.ko
    nf_nat_masquerade_ipv4.ko
    nf_reject_ipv4
    nf_reject_ipv4.ko
    nf_reject_ipv6
    nf_reject_ipv6.ko
    nls_base.ko

Preventing them from loading doesn't help. Furthermore, I get no errors from dmesg or logread. Here's my configuration:

    top
    iptables -L -n
    /etc/config/network - 1 wan, 2-5 lan (wifi in sta client mode)

Scenario for DHCP: after plugging a laptop (DHCP client) into a LAN port for 15 seconds (unplugging it afterwards) I see the following:

    Laptop sends: 3 DHCP requests and 9 ICMPv6 packets
    Laptop receives: 0 packets
    Router sends: none? (ifconfig displays 4 packets but tcpdump doesn't catch them)
    Router receives: 2 ICMP packets from the laptop's sent list (listed below)

I also checked tcpdump on the router, and it doesn't show lost packets. It seems like the problem is somewhere at the driver level. But wait, vmlinux works and the drivers (kernel modules) are the same.

    root@OpenWrt:/# tcpdump -vv -i eth0.3
    tcpdump: WARNING: eth0.3: no IPv4 address assigned
    tcpdump: listening on eth0.3, link-type EN10MB (Ethernet), capture size 65535 bytes
    [ 1042.060000] Atheros AR8216/AR8236/AR8316 ag71xx-mdio.0:00: Port 2 is up
    09:35:24.172637 IP6 (hlim 1, next-header Options (0) payload length: 36) :: > ff02::16: HBH (rtalert: 0x0000) (padn) [icmp6 sum ok] ICMP6, m]
    09:35:25.872843 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 16) fe80::b2e3:928a:66b2:ff43 > ff02::2: [icmp6 sum ok] ICMP6, router6
      source link-address option (1), length 8 (1): 5c:f9:dd:48:9e:89
        0x0000:  5cf9 dd48 9e89

5c:f9:dd:48:9e:89 / fe80::b2e3:928a:66b2:ff43 - laptop, d4:ca:6d:92:a4:7e / fe80::d6ca:6dff:fe92:a47e - router

Scenario for static IP:

openwrt (static 192.168.2.1):

    root@OpenWrt:/# ping 192.168.2.2
    PING 192.168.2.2 (192.168.2.2): 56 data bytes
    64 bytes from 192.168.2.2: seq=4 ttl=64 time=0.505 ms
    64 bytes from 192.168.2.2: seq=21 ttl=64 time=0.489 ms
    64 bytes from 192.168.2.2: seq=34 ttl=64 time=0.528 ms
    64 bytes from 192.168.2.2: seq=39 ttl=64 time=0.512 ms
    64 bytes from 192.168.2.2: seq=45 ttl=64 time=0.527 ms
    64 bytes from 192.168.2.2: seq=48 ttl=64 time=0.549 ms
    64 bytes from 192.168.2.2: seq=51 ttl=64 time=0.813 ms
    ^C
    --- 192.168.2.2 ping statistics ---
    56 packets transmitted, 7 packets received, 87% packet loss
    round-trip min/avg/max = 0.489/0.560/0.813 ms

laptop (static 192.168.2.2):

    14:50:08:andrew:/home/andrew:0: ping 192.168.2.1
    PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
    From 192.168.2.2 icmp_seq=13 Destination Host Unreachable
    From 192.168.2.2 icmp_seq=14 Destination Host Unreachable
    From 192.168.2.2 icmp_seq=15 Destination Host Unreachable
    ^C
    --- 192.168.2.1 ping statistics ---
    100 packets transmitted, 0 received, +3 errors, 100% packet loss, time 99022ms
    pipe 3
    14:51:53:andrew:/home/andrew:1: ping 192.168.2.1
    PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
    ^C
    --- 192.168.2.1 ping statistics ---
    29 packets transmitted, 0 received, 100% packet loss, time 28080ms

So here is my question: what steps should I take to dive deeper and find out what the problem is? I want to find at least the error. Should I turn on a debug level somewhere? What can cause such a huge packet loss?

EDIT: Netbooting vmlinux produces the same result. Netbooting with kernel debug works perfectly; unfortunately its size is 11 MB, and mtd1 is too small to hold it.
Huge packet loss on openwrt
networking;openwrt;tcpdump
OK, I finally found the reason. Initramfs wasn't the key to success after all. It turns out that the RouterBOARD works if it is booted with the reset button pressed. But it seems the flashed driver can't last for long: the router reboots with a kernel panic after a few hours of use, depending on the load.
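For anyone hitting similar symptoms, a hedged sketch of the low-level checks worth running before concluding the flash or driver is at fault. Device names such as switch0 and eth0 are assumptions taken from the dmesg line above; adjust them to your board.

```shell
# Driver messages for the Atheros switch/MAC (names assumed from the dmesg output above)
dmesg | grep -iE 'ag71xx|ar8216|eth0'

# Per-port link state and VLAN setup of the built-in switch
swconfig dev switch0 show

# Is the NIC raising interrupts at all while you ping through it?
grep eth /proc/interrupts
```

If the switch reports link but the interrupt counters barely move during a ping flood, that points at the MAC/DMA side rather than the cabling or VLAN setup.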
_cs.13181
I've developed the following backtracking algorithm, and I'm trying to find out its time complexity. A set of $K$ integers defines a set of modular distances between all pairs of them. In this algorithm, I considered the inverse problem of reconstructing all integer sets which realize a given distance multiset, i.e.:

Input: $D=\{(p_i-p_j) \bmod N : i \neq j \}$ and $K$

Output: $P=\{p_1,p_2,\dots,p_K\}$, where $p_i \in \{0,1,2,\dots,N-1\}$ and $p_i > p_j$ for $i>j$

Simply put, the algorithm lays out $K$ blanks to be filled. Initially, it puts 1 in the first blank. For the second blank it looks for the first integer that, if added to $P$, doesn't produce any difference exceeding the existing differences in $D$. It then does the same for the next blanks. While filling a blank, if it has checked all possible integers and found no suitable one for that blank, it goes back to the previous blank and looks for the next suitable integer there. If all blanks are filled, it has finished its job; otherwise it means that there weren't any possible $P$'s for this $D$.

Here's my analysis so far. Since the algorithm checks at most all members of $\{2,\dots,N\}$ for each blank (an upper bound), there are $N-1$ searches per blank. If each visited blank were filled at visiting time, the complexity would be $O((K-1)(N-1))$, since we have $K-1$ blanks (assuming the first one is filled with 1). But the algorithm is more complex, since for some blanks it goes backward, and some blanks may be visited more than once. I'm looking for the worst-case complexity, i.e. the case where all blanks are visited and no solution is found.
Time complexity of a backtrack algorithm
algorithms;algorithm analysis;combinatorics;search algorithms;greedy algorithms
null
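Since no answer was posted, here is a minimal Python sketch of the procedure described in the question (my own hypothetical implementation, not the asker's code): fix the first point at 1 as the question does, then for each further blank try candidates in increasing order, accepting one only if every new pairwise distance (in both directions mod N) is still available in D, and backtrack when no candidate fits.

```python
from collections import Counter

def reconstruct(D, K, N):
    """Try to find P = [p_1 < p_2 < ... < p_K] realizing the distance multiset D."""
    remaining = Counter(D)

    def needed_for(p, chosen):
        # Distances that adding p would consume, in both directions mod N.
        needed = Counter()
        for q in chosen:
            needed[(p - q) % N] += 1
            needed[(q - p) % N] += 1
        return needed

    def search(chosen, start):
        if len(chosen) == K:
            return list(chosen)
        for p in range(start, N):
            needed = needed_for(p, chosen)
            if all(remaining[d] >= c for d, c in needed.items()):
                remaining.subtract(needed)           # tentatively consume distances
                result = search(chosen + [p], p + 1)
                if result is not None:
                    return result
                remaining.update(needed)             # backtrack: restore them
        return None

    return search([1], 2)  # the question fixes 1 in the first blank
```

The worst case the question asks about corresponds to search exhausting every branch without ever reaching depth K.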
_softwareengineering.133506
I'm self-learning iOS development through the iTunes U CS193p course, and I often find myself stuck. I've been trying to get unstuck myself, but it might take me hours and hours to figure out what I'm doing wrong, be it missing a method or not really getting a whole concept like delegation. I'm worried that I might be wasting too much time, and I'd be better off going to Stack Overflow shortly after I get stuck so I can move on. In your experience, does quickly asking on Stack Overflow hamper the learning process or improve it?
When stuck, how quickly should one resort to Stack Overflow?
productivity;education
When I am working with new developers, I encourage them to come ask questions after five or ten minutes in which they are not making progress.

That has two benefits: the first is that they can get help without too much time spent staring at a problem, but they only ask when they are not getting somewhere. If they are learning - even on something that isn't ultimately the answer - they are much more likely to usefully retain that information.

The second is that after about that much time they have to explain the problem to someone else. That solves a huge proportion of problems, because going through it end-to-end in order means you can spot the thing that you missed in your earlier work.

Since it sounds like you are doing this alone, try turning to a stuffed toy, or the clock, or the wall, and asking that about the problem. Explain it as you would to a person, and see if that fixes things.

If it doesn't, and you are not making progress, ask someone. Spending more than five or ten minutes stuck is a waste of your time - unless you go on to do something else, then come back to the problem with a fresh mind.
_unix.268666
In the script below, the cases aserver and bserver work fine. But in the cserver case, after su - gsxuserp, I need to perform the following three operations as the same user:

cd ..
cd random_directory
tail -f file_in_random_directory

I am not able to do this using the -c option, since the connection just closes without executing anything. Can someone please suggest a basic way to do this?

echo "Please type one of the following: aserver, bserver, cserver: "
read input
echo "You entered: $input"
case $input in
  aserver)
    echo "Logging into a. Please enter the passwords when prompted"
    ssh -t user@something.com ssh -t aserver su - gsxp -c "sqlplus grep_ro/pwd"
    ;;
  bserver)
    echo "Logging into b. Please enter the passwords when prompted"
    ssh -t user@something.com ssh -t bserver su - gsxp -c "sqlplus grep_ro/pwd"
    ;;
  cserver)
    echo "Logging into c. Please enter the passwords when prompted"
    ssh -t user@something.com ssh -t cserver su - gsxuserp -c cd
    ;;
  *)
    echo "Incorrect option entered. Exiting the script"
    ;;
esac
How to use cd command in su command?
shell script;command line;su;cd command
null
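Since there is no accepted answer here, a sketch of the usual fix: pass one compound command to -c so the cd and the tail run in the same shell. The directory and file names below are placeholders for the real ones.

```shell
# For the cserver case (paths are hypothetical placeholders):
#   ssh -t user@something.com ssh -t cserver \
#     "su - gsxuserp -c 'cd ~/random_directory && tail -f file_in_random_directory'"
#
# The same chaining idea, demonstrated locally with sh -c so it can be run anywhere:
sh -c 'cd /tmp && pwd'
```

The key point is that each -c invocation starts a fresh shell, so a bare "cd" in one -c call has no effect on the next; chaining with && keeps everything in a single shell.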
_codereview.16426
I have created a simple array-based Stack class with some methods like the actual Stack in Java. I am testing this for mistakes, but since I am learning Java, my tests may not be as comprehensive as they should be.

import java.util.*;

public class SJUStack<E> {

    // Data Fields
    private E[] theData;
    private int topOfStack = -1;
    private static final int INITIAL_CAPACITY = 10;
    private int size = 0;
    private int capacity = 0;

    // Constructors
    public SJUStack(int initCapacity) {
        capacity = initCapacity;
        theData = (E[]) new Object[capacity];
    }

    public SJUStack() {
        this(INITIAL_CAPACITY);
    }

    // Methods
    public E push(E e) {
        if (size == capacity) {
            reallocate();
        }
        theData[size] = e;
        size++;
        topOfStack++;
        return e;
    } // End push(E e) method

    public E peek() {
        if (empty()) {
            throw new EmptyStackException();
        }
        return theData[topOfStack];
    } // End peek() method

    public E pop() {
        E result = peek();
        theData[topOfStack] = null;
        size--;
        topOfStack--;
        if (size <= (capacity / 4) && capacity >= INITIAL_CAPACITY) {
            shrink();
        }
        return result;
    } // End pop() method

    public boolean empty() {
        return size == 0;
    } // End empty() method

    private void reallocate() {
        capacity *= 2;
        theData = Arrays.copyOf(theData, capacity);
    } // End reallocate() method

    private void shrink() {
        capacity /= 2;
        theData = Arrays.copyOf(theData, capacity);
    } // End shrink() method

    public String toString() {
        return Arrays.toString(theData);
    } // End toString() method

    public int size() {
        return size;
    } // End size() method
}
Simple array-based Stack class in Java
java;array;homework;stack
I don't see any obvious logic mistakes; points for that. On the other hand, there are some redundant and non-standard ways of coding in Java:

- Drop either the size or the topOfStack member (topOfStack == size - 1).
- Drop capacity: capacity is the same as theData.length.
- Method name: isEmpty (more consistent with the Java standard collections).
- Use data instead of theData.
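To make those points concrete, here is a hypothetical trimmed rewrite (my sketch, not the poster's code): size alone replaces topOfStack, data.length replaces capacity, and the emptiness check is named isEmpty.

```java
import java.util.Arrays;
import java.util.EmptyStackException;

public class SimpleStack<E> {
    private Object[] data = new Object[10];
    private int size = 0;

    public E push(E e) {
        if (size == data.length) {
            data = Arrays.copyOf(data, data.length * 2); // grow; no separate capacity field
        }
        data[size++] = e;
        return e;
    }

    @SuppressWarnings("unchecked")
    public E peek() {
        if (isEmpty()) {
            throw new EmptyStackException();
        }
        return (E) data[size - 1]; // the top of the stack is always at size - 1
    }

    public E pop() {
        E result = peek();
        data[--size] = null; // let the GC reclaim the popped element
        return result;
    }

    public boolean isEmpty() {
        return size == 0;
    }

    public int size() {
        return size;
    }
}
```

Storing into an Object[] and casting only on the way out also sidesteps the unchecked (E[]) array cast in the original constructor.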
_unix.244557
The following is the text I want to parse with sed (Mac OS X 10.11.1, bash):

1
00:25:43,959 --> 00:25:46,502
Here you are, sir.
Main level, please.

I can delete the first line with sed -e 's/[0-9]//'. But with sed -e 's/^[0-9]//', the first line, i.e. 1, remains there. Since 1 is at the beginning of the first line, shouldn't it be deleted?

head -n1 2001.srt | od -c
0000000 357 273 277 1 \n
0000005

Just created a new text file starting with 1:

head -n1 2002.srt | od -c
0000000 1 \n
0000002

sed -e 's/^[0-9]//' works for this newly created file. Yes, there's something before the 1.
sed -e 's/^[0-9]//' does not work for the first line
text processing;sed;regular expression
Your file starts with a UTF-8 byte order mark. It is the Unicode symbol U+FEFF, which is encoded as three bytes in UTF-8. Those three bytes show up as 357 273 277 when you print them in base 8. To the sed command, those bytes at the start of the line mean that 1 is in fact not the first character on that line. Many other tools will treat it the same way.

You need to remove the BOM before doing other processing in order to get a useful result. For instance you could start your sed script with s/^\xef\xbb\xbf// to remove the BOM. Your full command would then become:

sed -e 's/^\xef\xbb\xbf//;s/^[0-9]//'
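The effect is easy to reproduce with a throwaway file. Note that the \xHH escapes in the sed expression are a GNU sed extension; on the stock OS X (BSD) sed you may need to type the literal bytes instead.

```shell
# Build a file whose first line starts with the UTF-8 BOM (bytes EF BB BF, octal 357 273 277):
printf '\357\273\2771\n00:25:43,959 --> 00:25:46,502\n' > /tmp/bom_demo.srt

# The anchor fails on line 1 because the BOM sits before the digit:
sed -e 's/^[0-9]//' /tmp/bom_demo.srt

# Stripping the BOM first lets ^[0-9] match and delete the leading digit:
sed -e 's/^\xef\xbb\xbf//' -e 's/^[0-9]//' /tmp/bom_demo.srt
```

After the second command the first line comes out empty, which is exactly the behavior the question expected from ^[0-9] alone.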
_cs.62464
Given a binary $n \times n$ matrix $A$, we'd like to cover the regions comprised of 1's with non-intersecting rectangles. A collection of disjoint rectangles that covers all 1's (and only 1's, i.e., it mustn't cover any 0's) is called a cover. (Notice that a problem instance may have many different covers.)

A cover is called a minimum cover if it uses the smallest number of rectangles possible.

The counting problem I'm interested in is: given an $n \times n$ binary matrix $A$, count the number of minimum covers of $A$.

What can you say about this problem? (This post was inspired by this SO question.)
Counting the number of minimal covers of a binary matrix
algorithms;complexity theory
null
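No answer was posted here, so as a concrete baseline, here is a brute-force Python sketch (exponential, only usable on tiny instances) that enumerates all all-ones rectangles and counts minimum disjoint covers. Branching always on the first uncovered 1-cell guarantees each cover is generated exactly once.

```python
from itertools import product

def count_min_covers(A):
    """Return (size of a minimum cover, number of minimum covers) of a binary matrix A."""
    n, m = len(A), len(A[0])
    ones = frozenset((i, j) for i in range(n) for j in range(m) if A[i][j])

    # Enumerate every axis-aligned rectangle containing only 1-cells.
    rects = []
    for r1, c1 in product(range(n), range(m)):
        for r2 in range(r1, n):
            for c2 in range(c1, m):
                cells = frozenset(
                    (i, j) for i in range(r1, r2 + 1) for j in range(c1, c2 + 1)
                )
                if cells <= ones:
                    rects.append(cells)

    best = {"size": float("inf"), "count": 0}

    def search(uncovered, used):
        if used > best["size"]:
            return  # cannot beat the best cover found so far
        if not uncovered:
            if used < best["size"]:
                best["size"], best["count"] = used, 1
            elif used == best["size"]:
                best["count"] += 1
            return
        cell = min(uncovered)  # every cover must handle this cell with exactly one rectangle
        for r in rects:
            if cell in r and r <= uncovered:
                search(uncovered - r, used + 1)

    search(ones, 0)
    return best["size"], best["count"]
```

For the L-shaped instance [[1,1],[1,0]] this reports two minimum covers of size two (top row plus a single cell, or left column plus a single cell), which matches the hand count.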
_unix.151012
I have:

- One Raspberry Pi with Raspbian (a distribution based on Debian), with one Ethernet interface
- One laptop with Windows, with one Ethernet interface
- One USB modem
- An Ethernet cable between the Pi and the laptop

I want to:

1. Share internet from the Raspberry Pi when the modem is connected to the Pi
2. Use the internet shared from the laptop when the modem is connected to the laptop

What I have done: My modem works on both machines; its interface is ppp0 on the Pi, and ppp0 has a dynamic IP. Sharing internet from the laptop to the Raspberry Pi works. Laptop IP: 192.168.137.1

Question: How can I share internet from the Raspberry Pi without ruining or reconfiguring too much of the network on both machines when I switch the modem from one machine to the other?

Extra question: I know that interfaces on both Windows and Linux can have multiple IP addresses. Can I have both configurations set, and just plug my modem into either machine and start the connection to have internet on the second machine?
Two way internet sharing configuration (swiching modem from one machine to another)
networking
null
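No answer was given; the standard approach on the Pi side is plain NAT with iptables. A hedged configuration sketch, assuming the modem comes up as ppp0 and the wired link to the laptop is eth0 (interface names may differ on your system):

```shell
# Enable IPv4 forwarding and masquerade LAN traffic out through the modem link
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
iptables -A FORWARD -i eth0 -o ppp0 -j ACCEPT
iptables -A FORWARD -i ppp0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
```

Because these rules only take effect while ppp0 exists, they can stay configured permanently; switching the modem between machines then mostly reduces to changing which box acts as the other one's default gateway.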
_webapps.10375
I have an Excel spreadsheet I made to track my own scores for an Xbox game I play (TrialsHD, if you are curious). I've made it available to other players so they can track their own scores, but it occurred to me that more people could use it if I figured out how to make it available online. I know I can import from Excel to Google Spreadsheets, but I'm not sure if sharing works in the way I need it to.

Specifically, I don't want people to share and edit the master; each person needs their own independent copy of the spreadsheet to enter their race scores and times. In essence, my copy is like a template.

Can I do Google Drive sharing like this? How exactly would it work? (Would each person need their own independent Google account?) If not, is there another tool out there that would let me make this spreadsheet available to other users? What about the newly announced MS Office online products?
How exactly does sharing work in Google Drive?
google spreadsheets;google drive
null
_unix.216020
Can someone please help me here? My cron job is not sending an email with output, while when I run the shell script manually it generates an email with output. Here is what the script looks like:

#!/bin/bash
MAILLIST=<email>
LogDirectory='/app/oracle/admin/monitor/'
DBUSER='rman'
DBUSERPASSWORD='rman01'
DB='pdcatdb'
SUBJECT="RMAN Backup Status Report"
ORACLE_HOME=/app/oracle/product/12.1.0.2_64
${ORACLE_HOME}/bin/sqlplus -s <<EOF > ${LogDirectory}/query.log
${DBUSER}/${DBUSERPASSWORD}@${DB}
set pagesize 20000
set linesize 2000
set wrap off
set trimspool on
set feedback off
set echo off
set termout off
set heading off
set underline off
set colsep ','
SELECT RTRIM(A.DB_NAME)||'---->'|| LTRIM(A.STATUS) BACKUP_STATUS
  FROM rman.RC_RMAN_STATUS A,
       (SELECT DB_NAME, OBJECT_TYPE, MAX (END_TIME) END_TIME
          FROM rman.RC_RMAN_STATUS
          --WHERE OBJECT_TYPE IN ('DB FULL', 'DB INCR')
          WHERE OBJECT_TYPE IN ('DB INCR')
            AND STATUS IN ('COMPLETED', 'COMPLETED WITH ERRORS', 'FAILED')
            AND OPERATION IN ('BACKUP', 'BACKUP COPYROLLFORWARD')
          GROUP BY DB_NAME, OBJECT_TYPE) B
 WHERE A.OBJECT_TYPE IN ('DB FULL', 'DB INCR', 'ARCHIVELOG')
   AND STATUS IN ('COMPLETED', 'COMPLETED WITH ERRORS', 'FAILED')
   AND OPERATION IN ('BACKUP', 'BACKUP COPYROLLFORWARD')
   AND A.DB_NAME = B.DB_NAME
   AND A.END_TIME = B.END_TIME
   AND A.OBJECT_TYPE = B.OBJECT_TYPE
   AND A.end_time > sysdate-7
 order by 1
/
EOF
mailx -s "Rman Backup Report" <email> < /app/oracle/admin/monitor/query.log
My cron job is not sending an email with any output, I see only a blank email
cron;email
null
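No answer was posted; the usual culprit is that cron runs the script with a minimal environment, so ORACLE_HOME, PATH and friends from your login shell are missing, and sqlplus or mailx fails before producing any output. A hedged crontab sketch (the script name and schedule are guesses; the Oracle paths are taken from the script above):

```shell
SHELL=/bin/bash
PATH=/usr/bin:/bin:/app/oracle/product/12.1.0.2_64/bin
ORACLE_HOME=/app/oracle/product/12.1.0.2_64
# Redirect stderr too, so any failure is captured in the log instead of vanishing:
30 6 * * * /app/oracle/admin/monitor/rman_report.sh >> /tmp/rman_report.log 2>&1
```

Checking /tmp/rman_report.log after the next run will show whether sqlplus even started, which is usually enough to pinpoint the missing variable.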
_unix.334428
I have PHP 7.1 and Apache 2.4 running in a Docker VM. I have mod_rewrite enabled. I have code that needs $_SERVER['SCRIPT_URL'] and $_SERVER['SCRIPT_URI']. These are not set. I created a minimal example that proves it. Do the following in bash:

git clone https://github.com/zippy1981/php7-mod_rewrite.git
cd php7-mod_rewrite
docker-compose stop && docker-compose rm -fv && docker-compose build --force-rm --no-cache && docker-compose up -d
curl localhost:8080/foo

That redirect is enabled by the .htaccess line RewriteRule ^foo$ app.php, proving mod_rewrite is installed, enabled, and is touching this particular request. That URL returns a JSON version of $_SERVER that looks like this:

{
  _SERVER: {
    REDIRECT_STATUS: 200,
    HTTP_HOST: localhost:8080,
    HTTP_USER_AGENT: curl\/7.47.0,
    HTTP_ACCEPT: *\/*,
    PATH: \/usr\/local\/sbin:\/usr\/local\/bin:\/usr\/sbin:\/usr\/bin:\/sbin:\/bin,
    SERVER_SIGNATURE: <address>Apache\/2.4.10 (Debian) Server at localhost Port 8080<\/address>\n,
    SERVER_SOFTWARE: Apache\/2.4.10 (Debian),
    SERVER_NAME: localhost,
    SERVER_ADDR: 172.24.0.2,
    SERVER_PORT: 8080,
    REMOTE_ADDR: 172.24.0.1,
    DOCUMENT_ROOT: \/var\/www\/html,
    REQUEST_SCHEME: http,
    CONTEXT_PREFIX: ,
    CONTEXT_DOCUMENT_ROOT: \/var\/www\/html,
    SERVER_ADMIN: webmaster@localhost,
    SCRIPT_FILENAME: \/var\/www\/html\/app.php,
    REMOTE_PORT: 49122,
    REDIRECT_URL: \/foo,
    GATEWAY_INTERFACE: CGI\/1.1,
    SERVER_PROTOCOL: HTTP\/1.1,
    REQUEST_METHOD: GET,
    QUERY_STRING: ,
    REQUEST_URI: \/foo,
    SCRIPT_NAME: \/app.php,
    PHP_SELF: \/app.php,
    REQUEST_TIME_FLOAT: 1483412792.892,
    REQUEST_TIME: 1483412792,
    argv: [],
    argc: 0
  }
}

This does not have the SCRIPT_URI or SCRIPT_URL environment variables in it. How do I get them to show up?
$_SERVER['SCRIPT_URL'] and $_SERVER['SCRIPT_URI'] not appearing when mod_rewrite enabled
apache httpd;docker;mod rewrite;php7
null
_unix.20510
I recently formatted an entire drive so I could install Linux on it. The partitions:

- 15 GB, primary, sda1, mount point: /
- 232.9 GB, logical, sda5, mount point: /home
- 3 GB, logical, sda6, swap

However, upon install completion (with the GRUB bootloader) and reboot, the BIOS reports that it cannot find a bootable device. I am thinking that I did not set sda1's bootable flag. If this is the case, is there some way I can do this from the Debian CD's rescue mode?

The exact error message from the BIOS is "No bootable device -- insert boot disk and press any key."

Attempted:

- Removed all other boot options (CD, USB) from the boot list
- Swapped cables
- Tried other SATA ports
- Swapped hard drives (with a new SSD)
Newly installed Debian install is not recognized
debian;boot
null
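No answer was posted; from the Debian installer's rescue mode, the usual sequence is to chroot into the installed system and reinstall GRUB to the disk's MBR. Device names below assume the layout described in the question.

```shell
# From the rescue shell, with /dev/sda1 as the root filesystem:
mount /dev/sda1 /mnt
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt grub-install /dev/sda
chroot /mnt update-grub

# If only the bootable flag is missing, setting it can be enough on its own:
parted /dev/sda set 1 boot on
```

The Debian rescue menu also offers "Reinstall GRUB boot loader" directly, which performs essentially the same grub-install step without a manual chroot.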
_unix.270907
I usually play SWTOR on Windows (10), but being able to play on linux would be so much better for too many reasons to list (and beyond the scope of this question). So I've set it up via playonlinux (using wine version 1.8-staging as recommended by a friend).Some of my keybinds have changed, however. On windows, I have one keybind as backslash ('\') which I use incessantly, and several more as AltGr+, for example AltGr+K, L, Y and so on. These work fine on Windows.However, when I load SWTOR on linux - having installed the EXACT same keybind/interface files (and I know it found them because my chatbox and interface are exactly as they should be, which they weren't before I transferred those configs) - the slot which should be bound to \ is now bound to #. Attempting to bind it back, I found that it won't even register \ in the binding dialog box. However, I can easily type backslashes into chat messages, and I can confirm from xkey that the keypress is sent to SWTOR.A similar story with AltGr: using AltGr+F actually invokes the keybind associated with F, and attempting to bind it back just binds it to F instead. I can't check by typing it in chat, but I verified with xkey that the keypress is sent to the window by X. The bindings are still listed with Ctrl+Alt+F, and indeed I can invoke it like that (it's just how it comes up on windows).The weirdest thing about this is that it automatically rebound the \ binding to #, with no editing of configs and no manual rebinding (and that I can still type \ into chat, so it's clearly receiving the keypresses). And yet it works fine on Windows.Can anyone shed any light on where these problems are occurring and what might help fix them?I'm running playonlinux 4.2.10 installed from the official website, using the SWTOR script listed when you search for it and the installer from the official swtor.com website. 
My system is:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 8.3 (jessie)
Release:        8.3
Codename:       jessie

$ uname -srviopm
Linux 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt20-1+deb8u4 (2016-02-29) x86_64 unknown unknown GNU/Linux

I have an NVIDIA 940M graphics card (Optimus) which is used by SWTOR via Bumblebee (installed from backports, as is the driver).
wine - Can I get SWTOR to use backslash and AltGr in keybinds?
keyboard shortcuts;wine;playonlinux
null
_unix.119132
My motherboard is a Gigabyte 990XA-UD3 (CPU 1); it's a UEFI dual-boot system, and when I try installing Linux Mint 16 Cinnamon or Ubuntu 13.10 it always brings up this error:

(initramfs) Unable to find a medium containing a live file system

I set all my BIOS options to legacy and disabled UEFI, but I still get the same error. Right now I am running Windows 8.1 64-bit. I used Universal USB Installer to make a live USB.
How can I install Linux on a UEFI system with Secure boot?
linux;security;boot;uefi;initramfs
null
_webapps.82892
I want to check if an email has been sent from my email account and then deleted from the sent folder.I had a draft with some personal information on it and it disappeared. I wasn't sure if I had deleted it (I couldn't find it in the Delete folder, or any other folder) or if someone secretly sent it.I'm using Hotmail by the way.
Can I check if an email has been secretly sent from my account?
outlook.com
null
_codereview.800
Is there a way to do this using parameters, so the value is automatically converted to whatever datatype the key field has in the DataTable? This code should be reusable for future bulk-update applications, hence the constant and the check on multiple datatypes.

private const string DBKEYFIELDNAME = "CustomerNr";
...
for (int i = 1; i <= csvLines.Length - 1; i++) // i=1 => skip header line
{
    string[] csvFieldsArray = csvLines[i].Split(';');
    int indexKeyField = csvHeaders.IndexOf(CSVKEYFIELDNAME.ToLower());
    object csvKeyValue = csvFieldsArray[indexKeyField];

    // ... some more code here that is not relevant

    // Find the matching row for our csv keyfield value
    Type keyType = parameters.DataTableOriginal.Columns[DBKEYFIELDNAME].DataType;
    DataRow[] rowsOriginal = null;
    if (keyType.IsAssignableFrom(typeof(string)))
        rowsOriginal = parameters.DataTableOriginal.Select(DBKEYFIELDNAME + " ='" + csvKeyValue.ToString() + "'");
    else if (keyType.IsAssignableFrom(typeof(Int16)) ||
             keyType.IsAssignableFrom(typeof(Int32)) ||
             keyType.IsAssignableFrom(typeof(Int64)) ||
             keyType.IsAssignableFrom(typeof(bool)))
        rowsOriginal = parameters.DataTableOriginal.Select(DBKEYFIELDNAME + " = " + csvKeyValue);

    if (rowsOriginal != null && rowsOriginal.Length == 1)
    {
        // Do some processing of the row here
    }
}
Strongly-typed reading values from CSV DataTable
c#;csv
A few comments:This method does way too much. It is difficult to understand and will be difficult to debug and maintain. Break it down into smaller methods that each have a single responsibility.Use curly braces after your if and else statements. This improves readability of the code and makes it less likely for other code to sneak in there in the future.Can't your else statement just be a plain else with no if after it? It seems like you want to put quotes around a string, and use the plain value for everything else. Are there other requirements here?The type of csvKeyValue could just be string, since it's pulling a value out of a string[]No need for the call to .ToString() in your if branchI would try to write the call to parameters.DataTableOriginal.Select only once. Consider using your if/else to set a delimeter variable to either string.Empty or ', then write your query once like so:DataRow[] rowsOriginal = parameters.DataTableOriginal.Select( DBSLEUTELVELDNAAM + = + delimeter + csvKeyValue + delimeter);
_softwareengineering.283294
There are two ways to do the same thing (pseudo code).

Define databaseHandle in the parent function, and use it as a global in this scope:

function API() {
    function openDatabase() {
        return databaseHandle;
    }

    databaseHandle = openDatabase()

    function getItem(i) {
        databaseHandle.get(i)
    }

    function addItem(name) {
        databaseHandle.add(name)
    }
}

Define a function for getting this handle, and then get it when we need it:

function API() {
    function openDatabase() {
        return databaseHandle;
    }

    function getItem(i) {
        databaseHandle = openDatabase()
        databaseHandle.get(i)
    }

    function addItem(name) {
        databaseHandle = openDatabase()
        databaseHandle.add(name)
    }
}

The first option seems simpler, and I see it in many examples. But the second one seems to me more reliable and obvious in what it does (if a bit redundant). What is the best practice here? If there's another, better way, I'd like to hear about it. Thanks.
Should all functions be fully self-contained (is it bad practice to share a variable between functions)?
design patterns;object oriented;programming practices;functional programming
I'd go with the second approach, with a change which releases the handle when done. If each method takes care of getting, operating and releasing its own handle, then your application should be better suited to scale up (assuming you have some sort of pooling underneath).With the first approach, it is hard to say what will happen should two different threads call each method separately at the same time.
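A runnable sketch of this advice (all names hypothetical; a Map stands in for the real database and a counter for the pool): each function acquires its own handle and releases it in a finally block, so a handle never outlives the operation that needed it.

```javascript
// Toy stand-in for a pooled database connection.
const store = new Map([[1, "alpha"], [2, "beta"]]);
let openHandles = 0;

function openDatabase() {
  openHandles++;
  return {
    get: (i) => store.get(i),
    add: (name) => store.set(store.size + 1, name),
    release: () => { openHandles--; },
  };
}

// Acquire, use, release: even if the operation throws, finally still runs.
function getItem(i) {
  const handle = openDatabase();
  try {
    return handle.get(i);
  } finally {
    handle.release();
  }
}

function addItem(name) {
  const handle = openDatabase();
  try {
    handle.add(name);
  } finally {
    handle.release();
  }
}
```

With a real pool underneath, this shape is what lets two concurrent calls each get their own connection instead of sharing one mutable handle.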
_unix.381236
I use LightDM with awesome-wm. To lock the screen I use the command dm-tool lock. Most of the time it works fine, but if after issuing the session lock command I switch to another tty and then go back, the session unlocks by itself. /etc/lightdm/lightdm.conf is set to all defaults. How can I fix this behavior?

Linux 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u2 (2017-06-26)
awesome v4.0
lightdm 1.18.3-1

EDIT: Output of the systemctl status lightdm.service command after a couple of locks:

CGroup: /system.slice/lightdm.service
         931 /usr/sbin/lightdm
         941 /usr/lib/xorg/Xorg :0 -seat seat0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
        1754 /usr/lib/xorg/Xorg :1 -seat seat0 -auth /var/run/lightdm/root/:1 -nolisten tcp vt8 -novtswitch
        1794 lightdm --session-child 15 24
        2137 lightdm --session-child 27 30
        2192 lightdm --session-child 31 34
        2224 lightdm --session-child 35 38
        2304 lightdm --session-child 15 20
LightDM screen lock in awesome-wm unlocks by itself
awesome;screen lock;lightdm
null
_webapps.13708
I have two email addresses that I use with GMail, let's call them personal@gmail.com and me@mycompany.com, the latter is integrated with SMTP and POP. When I want to send an email I can choose which email address I want to send from. Now let's say I receive an email to me@mycompany.com when I reply I want that reply to come from me@mycompany.com not personal@gmail.com, yet it always defaults to the @gmail.com address. Is there any way to make it default to be from the To address in a reply?
How can I get GMail to make the default address I'm sending an email from be the same as the To address in a reply?
gmail;email;email management
You can do that from Settings > Accounts and Imports, by selecting "Reply from the same address the message was sent to".
_softwareengineering.158118
I am a PHP developer. I am aware of almost all the basics of OOP, but I still cannot figure out how to apply these concepts in place of procedural programming. I code in a very orthodox way, but I don't know why I am coding it that way. I never have reasoning to justify my OOP designs, which has led me to conclude that I am weak at OOP. Please help me develop an understanding of how to apply OOP.
How can I develop an understanding of how to apply OOP?
object oriented;programming practices;functional programming
Firstly, it's pretty difficult to summarise something as complicated as OO in a few sentences. I would recommend that you find yourself some good books on the topic, and read them, while practicing what they preach.

That said, a short explanation would be: object orientation is simply one particular way to structure code. You are trying to create pieces of code which are loosely coupled and cohesive. Loosely coupled means that they don't unnecessarily rely on each other, and cohesive means that things that should be close together are. Encapsulation is the principle of making a class appear as simple as possible to the outside world, while containing whatever complexity is necessary inside itself. You are also trying to reduce repetition and duplication of code and logic (DRY, etc.).

These are the general principles, and you should spend a lot of time trying to understand why these are good principles and what they mean. They tend to be fairly universal, so you will be able to apply them outside of OO. Once you have these basics (and they are not simple) down, you will understand the whys, and can make your own judgement calls on design.

Unfortunately, I can't think of any way to go into any more detail without writing pages and pages, so I'll leave it at that. Last tips: try to make sure you understand why you are doing something; but be patient - I've almost never seen anyone who was more than barely competent in OO with less than 5 years of experience.
_codereview.121740
I am writing a parser to parse out the fields of an email message in the following format (note that I expect that the To: field could contain multiple lines, same with the Subject: field):

From: joebloggs@mail.net
To: jane@othermail.com, john@somemail.net, otherperson@hello.com, onemore@whatever.org
Subject: A subject goes here
X-FileName: joebloggs.pst

Hi, this is the information you were looking for...

Sincerely, Joe

I intend to construct an instance of this Mail class:

public class Mail {
    public List<String> to = new ArrayList<>();
    public String subject, from, body;
    public String xFileName;
}

and I am parsing it using the following method:

public class Parser {

    public enum CurrentState {
        NEW(""),
        FROM("From:"),
        TO("To:"),
        SUBJECT("Subject:"),
        X_FILENAME("X-FileName"),
        BODY("");

        private final String startsWith;

        CurrentState(String startsWith) {
            this.startsWith = startsWith;
        }

        public String getStartsWith() {
            return startsWith;
        }
    }

    public Mail parseEmail(List<String> lines) throws Exception {
        CurrentState currState = NEW;
        Mail sentMail = new Mail();
        for (String line : lines) {
            if (line.startsWith(FROM.getStartsWith())) {
                sentMail.from = parseFrom(line);
                currState = FROM;
            } else if (line.startsWith(TO.getStartsWith())) {
                sentMail.to.addAll(parseTo(line));
                currState = TO;
            } else if (line.startsWith(SUBJECT.getStartsWith())) {
                sentMail.subject += parseSubject(line);
                currState = SUBJECT;
            } else if (line.startsWith(X_FILENAME.getStartsWith())) {
                sentMail.xFileName = simpleParse(X_FILENAME, line);
                currState = X_FILENAME;
            } else if (currState == BODY) {
                sentMail.body += line;
            } else {
                if (currState == X_FILENAME && !line.isEmpty()) {
                    sentMail.body += line;
                    currState = BODY;
                } else if (currState == TO) {
                    sentMail.to.addAll(parseTo(line));
                } else if (currState == SUBJECT) {
                    sentMail.subject += parseSubject(line);
                } else {
                    throw new Exception("Could not parse line: " + line + " previous state was: " + currState.name());
                }
            }
        }
        return sentMail;
    }

I'm wondering how well this code tolerates badly formatted data, and whether there is a cleaner or more elegant way to implement this parsing functionality.
Email text parser
java;parsing;email
null
_cs.24652
I was given this function:$F(n)$ returns the smallest TM (measured in number of states) such that on input $\epsilon$, the TM makes at least $n$ steps before eventually halting ($n$ is a natural number). I was asked to prove that this function is uncomputable using a reduction from the Busy Beaver. I'm still new to reductions and after sitting on this problem for a while I've gotten nowhere. I'd appreciate any help/guidance.
Proving a language is not decideable using a reduction from Busy Beaver?
formal languages;turing machines;reductions
Hint: Why is the busy beaver function difficult (rather, impossible) to compute? Consider the following algorithm: given $n$, run all $n$-state Turing machines, and whenever one of them halts, update your estimate on the maximum number of steps. Eventually you will have found $BB(n)$, but you wouldn't know, since some of your machines are still running. Will any of them terminate, or have you discovered $BB(n)$? The function $F(n)$ given to you in the question could help in that respect.
_unix.220438
I installed Cinnamon on Arch. Everything works fine, but a little annoyance I am having is that there are still org.gnome.desktop settings present. I removed gnome-desktop, which was installed as a dependency by evince (also removed). Are these settings normally present under Cinnamon, or is there some way to get rid of them? I dislike having the duplicates, and I think this also led to a double-lock issue, where setting disable-lock-screen under org.gnome.desktop.lockdown fixed the problem. Is there a way to completely get rid of gnome-desktop?
GNOME Desktop Dconf Settings Are Present Even Though Using Cinnnamon
linux;arch linux;gnome;cinnamon;dconf
null
_codereview.113823
Currently I'm working on implementing a drag-and-drop GUI. I've discovered that there are not many resources (tutorials, etc.) available, so I wrote this:

function Cursor(cssSelector, rightLimit, bottomLimit) {
  var element = document.querySelector('#square');
  var styles = window.getComputedStyle(square);
  var x = 0;
  var y = 0;
  var fromLeft = 0;
  var fromTop = 0;
  var pushed = false;
  var limits = {
    top: 0,
    right: rightLimit,
    bottom: bottomLimit,
    left: 0
  }

  // Uses the offsetX and the offsetY of the
  // mousedown event.
  this.setCoordinates = function(left, top) {
    if (!fromLeft && !fromTop) {
      fromLeft = left;
      fromTop = top;
    }
  }

  this.togglePushed = function() {
    pushed ? pushed = false : pushed = true;
  }

  this.getPushed = function() {
    return pushed;
  }

  // Uses the offsetX and the offsetY of the
  // mousemove event.
  this.moveCursor = function(offsetX, offsetY) {
    // How much have the x and the y coordinate
    // changed since the mousedown event?
    var tmpX = offsetX - fromLeft;
    var tmpY = offsetY - fromTop;

    if ((x + tmpX) <= limits.right &&
        (x + tmpX) >= limits.left &&
        (y + tmpY) >= limits.top &&
        (y + tmpY) <= limits.bottom) {
      // If the values are valid then store them ...
      x += tmpX;
      y += tmpY;
      // ... and use them to move the element.
      element.style.left = x + 'px';
      element.style.top = y + 'px';
    }
  }
}

var cursor = new Cursor('#square', 550, 450);

square.addEventListener('mousedown', function(ev) {
  cursor.togglePushed();
  cursor.setCoordinates(ev.offsetX, ev.offsetY);
});

document.body.addEventListener('mouseup', function(ev) {
  cursor.togglePushed();
});

square.addEventListener('mousemove', function(ev) {
  if (cursor.getPushed()) {
    cursor.moveCursor(ev.offsetX, ev.offsetY);
  }
});

body {
  background-color: #eefafa;
}
#wrap {
  width: 600px;
  margin: 50px auto;
}
#panel {
  height: 500px;
  position: relative;
  background-color: rgba(150, 150, 150, 0.2);
  border-radius: 3px;
}
#square {
  width: 50px;
  height: 50px;
  background-color: orangered;
  border: 1px solid teal;
  border-radius: 3px;
  position: absolute;
  top: 0px;
  left: 0px;
}
.instruct {
  font-size: 125%;
  font-weight: bold;
  font-family: helvetica;
}

<div id="wrap">
  <p class="instruct">
    Click the square. Keep the mouse-button pushed and move the pointer slowly.
  </p>
  <div id="panel">
    <div id="square"></div>
  </div>
</div>

There's also a demo on CodePen.
Drag and drop GUI with native JavaScript
javascript;html;css
null
_cstheory.4161
Does it have anything to do with the heap data structure, for example the Buddy blocks implementation, or does it only take the literal English meaning of the word (a big pile)?

I know heap memory is more practical than theoretical, but there's no Stack Exchange for Practical Computer Science yet.
Why is the free store memory called the heap?
ds.data structures;ho.history overview
I don't think it has anything to do with the data structure. It's just the opposite of the stack, which carefully orders its elements and doesn't allow them to be read or written except at the top.
_unix.287626
I have a txt file in which the string ATOMIC_POSITIONS occurs 8 times, and I am trying to print each one of them with:

AtomicPos=$(grep -n ATOMIC_POSITIONS hw1_out_si_wire.txt)
echo $AtomicPos

It gives me just the last one:

4779:ATOMIC_POSITIONS (bohr)

(4779 is the line number of the last one.) In fact, after that I was going to take the last one so that I could take the next lines after the last ATOMIC_POSITIONS, but, since it seemed to give me the last one directly, I continued like this:

$NtL=262
i=1
until [ $i == $NtL ]
do
Pos=$(grep -A $i ATOMIC_POSITIONS hw1_out_si_wire.txt)
echo $Pos
i=$(expr $i + 1)
unset Pos
done

But when I run that, it starts from the first ATOMIC_POSITIONS and continues. Could someone explain why that is?
Why doesn't grep give me all the found strings?
bash
In order to read grep output into an array you have to change

AtomicPos=$(grep -n ATOMIC_POSITIONS hw1_out_si_wire.txt)

to

AtomicPos=( $(grep -n ATOMIC_POSITIONS hw1_out_si_wire.txt) )

This way you will have all the matched patterns in AtomicPos; then loop over the array and print each element.
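A short, hedged illustration of the difference. The sample file contents below are invented (the real file is the asker's hw1_out_si_wire.txt); note that setting IFS to a newline first keeps each matched line in one array element, since the default IFS would also split at the space inside each match.

```shell
# A quick demonstration with a small, made-up sample file.
printf 'ATOMIC_POSITIONS (bohr)\nSi 0.0 0.0 0.0\nATOMIC_POSITIONS (bohr)\nSi 1.0 1.0 1.0\n' > sample.txt

# Scalar assignment: one string holding everything grep printed
scalar=$(grep -n ATOMIC_POSITIONS sample.txt)

# Array assignment: set IFS to a newline so each matched *line*
# becomes one element (otherwise the space in "1:ATOMIC_POSITIONS (bohr)"
# would split a single match into two elements)
old_ifs=$IFS
IFS=$'\n'
matches=( $(grep -n ATOMIC_POSITIONS sample.txt) )
IFS=$old_ifs

echo "matches found: ${#matches[@]}"
for m in "${matches[@]}"; do
  echo "$m"
done
```

With the sample file above, the array holds two elements, one per matching line, each prefixed with its line number.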
_cs.41523
so I have this code:

for (int i=1; i < n; i=i*5)
    for (j=i; j < n; j++)
        sum = i+j;

And I'm wondering: what's the time complexity of this for loop?

To start off, I know the first line is log n base 5, with an additional check to exit out of the for loop. Then, for the second line, I have the following:

i = 1    j = 1, 2, 3, ..., n     (n-5^0)+1
i = 5    j = 5, 6, 7, ..., n     (n-5^1)+1
i = 25   j = 25, 26, 27, ..., n  (n-5^2)+1
i = n    j = n                   (n-5^k)+1

But now, I'm stuck. Any help is appreciated.
Algorithm analysis of nested loop
algorithms;time complexity;runtime analysis;loops
null
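For what it's worth, the tabulation in the question above can be checked empirically with a sketch that mirrors the two loops and counts the inner iterations (the concrete n is an arbitrary test size; the shell here is just for the tally):

```shell
# Count inner-loop iterations for: for i=1; i<n; i*=5 / for j=i; j<n; j++
n=1000
count=0
i=1
while [ "$i" -lt "$n" ]; do
  j=$i
  while [ "$j" -lt "$n" ]; do
    count=$((count + 1))
    j=$((j + 1))
  done
  i=$((i * 5))
done
# Sum over i in {1, 5, 25, 125, 625} of (n - i) = 4219 for n = 1000,
# which tracks n * log5(n) ~= 4292, i.e. Theta(n log n).
echo "total inner iterations for n=$n: $count"
```

The measured count grows like n times the number of i-values (log base 5 of n), supporting the Theta(n log n) bound.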
_unix.16620
Which directories should I expect to have in an install prefix when I'm writing makefiles? I've noticed that in the common prefix /usr, there is no /etc, yet there is an /include dir, which isn't in the root directory. Which paths are hard-coded (such as /etc, and maybe /var), and which directories lie in a prefix? As far as I can see, /bin and /lib are standard.
What do I install into a given install prefix
filesystems;software installation;directory structure;make;gnu make
See the FHS (Filesystem Hierarchy Standard) for details: http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard and http://www.pathname.com/fhs/
_cs.22030
Can anyone explain to me what a trampolined interpreter is? I am versed in the relevant concepts, viz. procedural languages, continuations, etc., but am finding it difficult to understand the definition of, and the need for, trampolining. Please help.
What is a trampolined interpreter?
programming languages;interpreters
null
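The core idea behind trampolining can be sketched in a few lines: a step never calls the next step directly (which would grow the call stack); it records the next step, and a flat driver loop, the trampoline, keeps bouncing until a step says to stop. A minimal, hedged sketch, with the function name and the countdown task invented for illustration:

```shell
# countdown records its continuation in NEXT instead of recursing;
# the while-loop below is the trampoline that drives the steps.
visited=""
countdown() {
  if [ "$1" -le 0 ]; then
    NEXT=""                          # signal the trampoline to stop
  else
    visited="${visited:+$visited }$1"
    NEXT="countdown $(( $1 - 1 ))"   # schedule the next step, don't recurse
  fi
}
NEXT="countdown 3"
while [ -n "$NEXT" ]; do
  eval "$NEXT"   # one bounce; the call stack never deepens
done
echo "steps executed: $visited"
```

An interpreter built this way returns to the driver loop after every step, which is how implementations get tail calls (or cooperative scheduling) on hosts without native tail-call elimination.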
_webapps.74457
Is it possible in Google Sheets to have one list which is somehow divided into 2 parts, where each of those parts consists of a different list? Something like iframes in HTML.
Is there any option to operate with 2 lists in one list in Google Sheets?
google spreadsheets
null
_codereview.36743
I'm creating a simple syslog server in Delphi XE2 using Indy's TIdSyslogServer. I've decided to make it dump into a TClientDataSet and display in a TDBGrid, but I'm skeptical about how well it would handle if the log got quite big, and what I should expect when it grows to millions of records. This application is for internal use and I don't intend to make any software from it, and just keep the code real simple.

The purpose of the application is for numerous IP surveillance cameras along with various other network based equipment to report their log to one place.

This is a simple application with just 1 form, all the code is directly in the form's unit. The actual application is a separate project (call this my SSCCE).

uMain.pas

unit uMain;

interface

uses
  Winapi.Windows, Winapi.Messages, System.SysUtils, System.Variants,
  System.Classes, Vcl.Graphics, Vcl.Controls, Vcl.Forms, Vcl.Dialogs,
  IdSocketHandle, IdBaseComponent, IdComponent, IdSysLogServer, IdSysLog,
  IdSysLogMessage, IdUDPBase, IdUDPServer, Vcl.StdCtrls, Vcl.Grids,
  Vcl.DBGrids, Data.DB, Datasnap.DBClient,
  MidasLib {to avoid requiring MIDAS.DLL};

type
  TForm1 = class(TForm)
    Server: TIdSyslogServer;
    DS: TClientDataSet;
    DSC: TDataSource;
    DBGrid1: TDBGrid;
    procedure FormCreate(Sender: TObject);
    procedure FormDestroy(Sender: TObject);
    procedure ServerSyslog(Sender: TObject; ASysLogMessage: TIdSysLogMessage;
      ABinding: TIdSocketHandle);
  private
    procedure PrepareDS;
  public
  end;

var
  Form1: TForm1;

implementation

{$R *.dfm}

procedure TForm1.FormCreate(Sender: TObject);
var
  H: TIdSocketHandle;
begin
  PrepareDS;
  Server.Bindings.Clear;
  H:= Server.Bindings.Add;
  H.IP:= '0.0.0.0'; //All IP's
  H.Port:= 514; //syslog standard port 514
  Server.Active:= True; //Activate server
end;

procedure TForm1.PrepareDS;
begin
  DS.DisableControls;
  try
    DS.Close;
    DS.FieldDefs.Clear;
    DS.FieldDefs.Add('timestamp', ftDateTime);
    DS.FieldDefs.Add('pri', ftInteger);
    //Need to convert the next 2 to string
    DS.FieldDefs.Add('facility', ftString, 15);
    DS.FieldDefs.Add('severity', ftString, 15);
    DS.FieldDefs.Add('hostname', ftString, 15);
    DS.FieldDefs.Add('message', ftString, 200);
    DS.CreateDataSet;
    DS.Open;
  finally
    DS.EnableControls;
  end;
end;

procedure TForm1.FormDestroy(Sender: TObject);
begin
  Server.Active:= False;
end;

procedure TForm1.ServerSyslog(Sender: TObject; ASysLogMessage: TIdSysLogMessage;
  ABinding: TIdSocketHandle);
begin
  DS.Append;
  DS['timestamp']:= ASysLogMessage.TimeStamp;
  DS['pri']:= ASysLogMessage.Pri;
  DS['facility']:= ASysLogMessage.Facility;
  DS['severity']:= ASysLogMessage.Severity;
  DS['hostname']:= ASysLogMessage.Hostname;
  DS['message']:= ASysLogMessage.Msg.Content;
  DS.Post;
end;

end.

uMain.dfm

object Form1: TForm1
  Left = 354
  Top = 124
  Caption = 'Form1'
  ClientHeight = 400
  ClientWidth = 597
  Color = clBtnFace
  Font.Charset = DEFAULT_CHARSET
  Font.Color = clWindowText
  Font.Height = -11
  Font.Name = 'Tahoma'
  Font.Style = []
  OldCreateOrder = False
  OnCreate = FormCreate
  OnDestroy = FormDestroy
  PixelsPerInch = 96
  TextHeight = 13
  object DBGrid1: TDBGrid
    Left = 0
    Top = 48
    Width = 597
    Height = 352
    Align = alBottom
    Anchors = [akLeft, akTop, akRight, akBottom]
    DataSource = DSC
    TabOrder = 0
    TitleFont.Charset = DEFAULT_CHARSET
    TitleFont.Color = clWindowText
    TitleFont.Height = -11
    TitleFont.Name = 'Tahoma'
    TitleFont.Style = []
  end
  object Server: TIdSyslogServer
    Bindings = <>
    OnSyslog = ServerSyslog
    Left = 120
    Top = 168
  end
  object DS: TClientDataSet
    Aggregates = <>
    Params = <>
    Left = 168
    Top = 168
  end
  object DSC: TDataSource
    DataSet = DS
    Left = 200
    Top = 168
  end
end

I'm assuming that at some point I should at least make it dump to a file, then start fresh. That's an obvious feature I will need to add. Along with that, of course, a way to recall the saved logs. That all comes later, but I'm only worried how the client dataset will handle it when it gets very large, and really how I should determine the maximum before I dump it.
Is it feasible to create a syslog server which writes to a client dataset?
performance;logging;delphi;server
null
_softwareengineering.195040
I saw a video about random numbers in which the programmer talked about computers generating pseudo-random numbers and how they are not really random. I knew about this. Then he showed the decay of a radioactive material being used to generate random numbers, which he claimed were truly random. Is there really such a thing? I mean, the process of the radioactive material shooting out electrons might seem random, but is it? Isn't it just a mysterious black box to us simply because we don't know how it really works?

Or does randomness just depend on the current level of scientific knowledge? If so, then how come quantum computers are often said to be capable of generating truly random numbers? Can they really do this?
Is there such a thing as truly random?
computer science;math;random
null
_unix.295207
So at my job I SSH from my CentOS machine to other local CentOS machines. We use an application that runs in both X11 and the terminal. Some features are available exclusively in the terminal and other features exclusively in X11. The program auto-detects whether there is an X display to connect to and will use it if available. It would be nice to be able to quickly toggle between the two versions of the application without having to put in an enhancement request. We have a large number of desktop icons/shortcuts without a -X or -Y flag. Is there any way to enable/disable X11 forwarding on a running SSH session that was started without the -X or -Y flag?
Enable/Disable X on an established SSH connection
ssh;x11
If you run with -X or -Y then this will set $DISPLAY on the remote end to point to the X tunnel. Unsetting $DISPLAY will prevent X applications from talking to the X server. e.g.

$ echo $DISPLAY
localhost:10.0
$ xdpyinfo | head -2
name of display:    localhost:10.0
version number:    11.0
$ DISPLAY= xdpyinfo | head -2
xdpyinfo:  unable to open display "".
$ DISPLAY= xterm
xterm: Xt error: Can't open display:
xterm:  DISPLAY is not set

So with X tunneling enabled you should be able to hide it by unsetting $DISPLAY.

Inside an SSH session you can type ~? to get a list of changes you can make. You can add/remove port forwarding via ~C, but you can't easily change X tunneling because that would require running xauth and similar. The sequence of events would be to forward a remote port back to localhost:6000 (or whatever port your local X server is on), set DISPLAY, and add xauth permissions - not so easy!
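The per-command unset trick can be seen without any X server at all. The DISPLAY value below is just what ssh -X typically assigns, used purely as an example:

```shell
# Simulate a session whose DISPLAY points at the SSH X tunnel.
DISPLAY="localhost:10.0"
export DISPLAY

# Prefixing a single command with DISPLAY= blanks the variable for that
# command only; the session's own value is untouched.
inside=$(DISPLAY= sh -c 'echo "${DISPLAY:-unset}"')

echo "session sees: ${DISPLAY:-unset}"
echo "command sees: $inside"
```

This gives a quick toggle per launch: run the program bare for the X11 version, or prefix it with DISPLAY= for the terminal version, all within the same SSH session.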
_unix.289326
I have a personal machine running Ubuntu 14.04.4 LTS. I use it to host a TeamSpeak and a Minecraft server and also a website.

I am trying to make subdomains point only to the right services. So for example:

using panel.domain.com would point only to https://localhost:8000 (CP panel)
(I managed to get the CP panel sorted out by using a DNS URL redirect rather than an A record.)
using mc.domain.com would point only to localhost:25565 (Minecraft server)
using ts.domain.com would point only to localhost:9987 (TeamSpeak server)
using domain.com would point only to the website (domain.com/forums/index.php)

I managed to do this, at least for connections that come through a web browser, using httpd.conf:

<VirtualHost *:80>
ServerName mc.domain.com
redirect / localhost:25565
</VirtualHost>

<VirtualHost *:80>
ServerName ts.domain.com
redirect / localhost:9987
</VirtualHost>

But this only applies to connections coming from a web browser, and if I try to connect in TeamSpeak using any subdomain or the domain name it still connects...

This is probably useless, and I should just use the domain name, but I would like to have some sorting going on. Is this even possible to do? From what I can figure out it would be something to do with iptables, but I honestly have no clue. Something like this?

iptables: coming from any ip:25565 to anything else than localhost:25565 -> Drop
iptables: coming from any ip:9987 to anything else than localhost:9987 -> Drop
iptables: coming from any ip:80 to anything else than localhost:80/8000 -> Drop

Am I correct?
How would I limit connections to certain services, to be only accesed via a connection coming from a sub-domains?
networking;firewall;apache httpd
null
_codereview.27813
Could it be written better?

package main

import (
	"code.google.com/p/go-tour/wc"
	"fmt"
)

func WordCount(s string) map[string]int {
	dict := make(map[string]int)
	splited := Split(s)
	for _, string := range splited {
		_, present := dict[string]
		if present {
			dict[string]++
		} else {
			dict[string] = 1
		}
	}
	return dict
}

func Split(s string) []string {
	arraySize := 1
	for i := 0; i < len(s); i++ {
		if s[i] == ' ' {
			arraySize++
		}
	}
	array := make([]string, arraySize)
	currentStrInd := 0
	currentStr := ""
	for i := 0; i < len(s); i++ {
		if s[i] == ' ' {
			array[currentStrInd] = currentStr
			currentStrInd++
			currentStr = ""
		} else {
			currentStr += string(s[i])
		}
	}
	array[arraySize-1] = currentStr
	return array
}

func main() {
	fmt.Println(Split("I am learning Go!"))
	wc.Test(WordCount)
}
Split and word count in go
strings;go
null
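As a cross-check for the program's expected counts, the same split-and-count task can be expressed with standard shell tools; the sample sentence below is arbitrary:

```shell
# Split on whitespace (one word per line), then count duplicates.
s="the quick brown the lazy the"
counts=$(printf '%s\n' $s | sort | uniq -c | sort -rn)
echo "$counts"

# Count occurrences of one specific word:
the_count=$(printf '%s\n' $s | grep -c '^the$')
echo "'the' occurs $the_count times"
```

The unquoted $s lets the shell do the word splitting, and sort | uniq -c plays the role of the map of word to count.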
_webmaster.34252
I have a question about redirecting a subdomain of a blog hosted on wordpress.com to an external URL. Given the following:

1) I own a domain name foobar.com purchased from another registrar (not from wordpress.com).
2) I have purchased the custom domain option on wordpress.com, and have completed the configuration to make foobar.com resolve to foobar.wordpress.com.
3) I will establish an external site for a store, such as store.yahoo.com/foobar.
4) I want to redirect the subdomain store.foobar.com to store.yahoo.com/foobar.

How do I set up the custom DNS records within wordpress.com to accomplish this subdomain redirection, while leaving foobar.com pointed to my WordPress blog? I suspect that the CNAME directive is involved, but I cannot figure out the required syntax.
Redirecting a subdomain from wordpress.com to an external web address
wordpress;redirects;url;subdomain
null
_webmaster.74288
I have been getting crawl errors in Google AdSense on a number of pages, all ending in a peculiar suffix, mozekcdn-a.akamaihd.net. For example:

http://my-website.com/mozekcdn-a.akamaihd.net
http://my-website.com/mozekcdn-a.akamaihd.net/gsd.html

Now the strange thing is that such pages do not exist at all on my website. And all of a sudden the number of such pages being created has increased in the last 24 hours, all leading to a 404 Not Found page.

Trying to get to the bottom of this problem, I searched online and came across some discussions on Stack Exchange (this link) and Google Groups (this link). It seems like some more websites are facing this problem, and the initial analysis is that this is some sort of adware/malware. A French website (this link) has given some more details, though I am not sure how authentic this is.

I am worried at the moment about the consequences of this issue. It would be great if any of you could check into it and suggest any possible solution.
Website pages being generated with mozekcdn-a.akamaihd.net; adware / malware?
google;malware
This does not look like a problem for you to solve. It appears that you do not have malware, adware, or a virus on your system. You may not like these entries in your log, of course. You can likely filter them in whatever software you use to analyze your web traffic.

This appears to be coming from your users as they access your site. It is thought to be an adware bug installed on the client computer that adds requests with various forms of mozekcdn-a.akamaihd.net in the URI to each page accessed. These always result in a 404.

I cannot figure out what the payoff would be and why this would be coded this way, except that it might introduce new or more adware or viruses. I suspect that some of this will fade away as people update their anti-virus and anti-adware software, but some will remain for those who do not use such software or update it regularly.

Do keep tabs on your site just in case.
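If filtering at analysis time is the route taken, here is one hedged way to exclude the bogus requests before processing; the log format and file name are invented, so adapt the pattern to the real access log:

```shell
# Build a tiny made-up access log containing two of the bogus requests.
printf '%s\n' \
  'GET /index.html 200' \
  'GET /mozekcdn-a.akamaihd.net 404' \
  'GET /mozekcdn-a.akamaihd.net/gsd.html 404' \
  'GET /about 200' > access_sample.log

# grep -v drops the matching lines; dots are escaped so they match literally.
kept=$(grep -vc 'mozekcdn-a\.akamaihd\.net' access_sample.log)
grep -v 'mozekcdn-a\.akamaihd\.net' access_sample.log
echo "lines kept: $kept"
```

The same pattern can usually be entered as an exclude filter in log-analysis software instead of pre-filtering the file.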