id               stringlengths  5 .. 27
question         stringlengths  19 .. 69.9k
title            stringlengths  1 .. 150
tags             stringlengths  1 .. 118
accepted_answer  stringlengths  4 .. 29.9k
_codereview.20418
I've just created a teeny little script to count the seconds until I get auto-logged off. I already solved my issue with my SSH client settings, but I'd still like any help making the bash script nicer to read, or just tips in general.

    #!/bin/bash
    count=0
    while ( [ : ] )
    do
        count=$(($count+1))
        n=${n:-0}
        for i in `seq 0 $n`; do echo -en '\b'; done
        n=${#count}
        echo -n $count
        sleep 1
    done
Counting seconds until auto-log-off
bash
    count=0
    while true; do
        printf "\r%d" $((++count))
        sleep 1
    done

Your while condition is doing a lot of work just to return true: you launch a subshell and evaluate a test on a 1-character string (which will always return true). Make it simpler and more readable by just executing the true program.

Note that variable names inside an arithmetic expression do not require the leading dollar sign in most cases. That's quietly documented here:

    Within an expression, shell variables may also be referenced by name without using the parameter expansion syntax.

printf is more portable than echo, if that's a concern for you. I also find it makes your intentions more obvious.

Instead of backing up the right number of characters, just carriage return to the first column and overwrite the previous contents.
_unix.182743
I've edited /etc/passwd by running usermod -s to change my shell. (chsh doesn't work, because it prompts for a password; we SSH in using keys.)When I disconnect, and reconnect, the change doesn't take effect. I've restarted sshd too, and still nothing.
Why do changes to /etc/passwd not take effect?
ssh;login
I use ControlMaster, and I wasn't actually disconnecting.

ControlMaster is an SSH configuration option that keeps connections open for a certain time and can multiplex SSH sessions over the same connection (which avoids key exchanges, which are slow). However, if you ^D from a shell and then re-run ssh, you've not killed the original connection.

Restarting sshd only restarts the listening process: any in-progress sessions remain alive, so that doesn't restart the connection either.

Apparently launching a new shell doesn't re-check /etc/passwd for changes.

The solution was just to kill the connection: ssh <hostname> -O exit, and log in again.
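For reference, here is the sort of client-side configuration that produces this behaviour, plus the commands for checking and killing the master connection (the Host pattern and timeout below are illustrative values, not taken from the question):

    # ~/.ssh/config
    Host *
        ControlMaster auto
        ControlPath ~/.ssh/sockets/%r@%h:%p
        ControlPersist 10m    # keep the master alive 10 minutes after the last session exits

    # is a master connection still running for this host?
    ssh -O check <hostname>
    # tear it down (as in the answer above)
    ssh -O exit <hostname>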
_webmaster.43705
I have a web site in two different languages (English and Spanish). It is indexed by Google but, when it comes up in search results, the sitelinks below my result mix Spanish and English subtitles. Each language is in a different folder (mydomain.com/en for pages in English and mydomain.com/es for pages in Spanish). Is there a way to help Google distinguish between them, so that the Spanish sitelinks show for Spanish speakers and the English ones for the rest of the users?
How can I prevent Google from mixing different languages in my sitelinks?
seo;google;search engines;sitelinks
I have a site localized into over forty languages and have no problem with my sitelinks.

You don't state what URL structure your two sites use. Google recommends that internationalized sites be on separate top-level domains (example.com vs example.es), different sub-domains (www.example.com vs es.example.com), or different folders (example.com/en/ vs example.com/es/). Any other layout is likely to confuse Google. Also, I don't recommend using the Accept-Language header to dynamically and automatically determine the language to use.

Google usually chooses sitelinks from links near the top of the page. They tend to be the links that users click on most. Make sure that you don't have Spanish links on your English pages or the other way round. The link from your English site to your Spanish site would be better placed in your footer than at the top of the page.

You should register your Spanish site and your English site separately in Google Webmaster Tools. You can do so even if they are sub-domains or in directories. Once you have done so, you can correct any sitelinks that Google has wrong. Under Configuration -> Sitelinks you can demote any link that Google has wrong for a specific page. Use this feature to demote the Spanish sitelinks that appear on your English site. Fix the ones for your Spanish site too.
_unix.280897
I want to extract the process with the highest utilization on each processor core, and then output its information (PID etc.) to a file. How can I do this using either the top or the ps command? Thanks.
Process Scheduling Information Extraction
ps;top
How about

    ps -k -pcpu -O pcpu,psr

The k flag is your sort key, which here is percent CPU. Capital O changes the output to add the percent CPU utilisation and the current processor/CPU the process last ran on. You get output like:

      PID %CPU PSR S TTY      TIME COMMAND
    15049  5.8   2 S tty2 00:00:28 chrome
    14808  4.3   1 S tty2 00:00:21 chrome
    14448  3.9   5 S tty2 00:00:21 gnome-shell
    15234  1.8   5 S tty2 00:00:08 chrome
    14896  1.5   2 S tty2 00:00:07 chrome
    14322  1.2   0 S tty2 00:00:06 Xorg

Percent CPU is the TIME column divided by the real (elapsed) time. You may get odd results if you have a busy process that then idles (its average overall may still be high or low, depending on what you were expecting).

To answer "what's keeping my CPU busy in the last few seconds", top is a better tool.

Also note that processes will bounce around between the CPUs, so nailing down why one CPU runs hot can sometimes be tricky to work out. You generally want this, to spread the load across them.
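If you want the single busiest process per core written straight to a file, one option (a sketch assuming GNU ps with --sort support) is to sort by %CPU and keep only the first line seen for each PSR value:

    ps -e -o psr,pcpu,pid,comm --sort=-pcpu |
        awk 'NR == 1 || !seen[$1]++' > top_per_core.txt

The awk condition prints the header line and then the first (i.e. highest-%CPU) entry for each processor number.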
_unix.345507
I have a Java application that runs on Windows and needs to be moved to Linux. The Windows executable takes a config file as input, which does a bunch of things like configuring logging, classpaths, the Java path and other dependencies. In the config file, logging properties are set in this way:

    wrapper.logfile.format=LPTM
    wrapper.logfile.loglevel=INFO
    wrapper.syslog.loglevel=STATUS

How can this be achieved in the shell script that is going to execute the application? Also, is it possible to have a config file like this for the shell script as well?
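One common pattern, shown here as a minimal sketch (the file names, variables and paths are made up for illustration), is to keep KEY=VALUE assignments in a separate file and source it from the launcher script:

    # --- app.conf: plain shell variable assignments ---
    JAVA_HOME=/opt/java
    CLASSPATH="/opt/myapp/lib/*"
    LOGFILE=/var/log/myapp/app.log
    LOGLEVEL=INFO

    # --- run.sh ---
    #!/bin/sh
    . /opt/myapp/app.conf    # source the config; its variables are now visible here
    exec "$JAVA_HOME/bin/java" -cp "$CLASSPATH" com.example.Main >> "$LOGFILE" 2>&1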
Log file and properties for shell script
shell script;logs
null
_codereview.111456
I've written a JavaScript dictionary sorting algorithm, which takes a .txt file (the dictionary) and loads it via Node's file system module. The purpose of this algorithm is to sort every word into its corresponding array based on its letter pattern. For example, the word little would have the letter pattern ABCCAD, and hello would have the letter pattern ABCCD. The purpose of this sorting algorithm is to use it to decode substitution ciphers; however, this is the only working code I've written so far for it. This code works, and I have a sorted dictionary file here.

However, I want to know how I might be able to improve this algorithm to make it as fast and efficient as possible. Even though it's only run once, I want to get into the habit of writing efficient code.

To break down exactly what's going on in the algorithm, I've added a few comments, but to elaborate: it loads each word from the dictionary and stores it in an array, and I then iterate through the array and through each letter of the word. The clpl variable starts at 'A', and it checks if the current letter exists in a temp object. If it does, it adds the assigned letter to the letter pattern; if it does not, it creates a new property on the object and assigns it the current letter of the alphabet as a value. It adds the assigned letter to lp, which is initialized as an empty string. The line clpl = String.fromCharCode(clpl.charCodeAt(0) + 1); gets the next letter. After it has iterated through each letter in the word, it checks if the current letter pattern exists in the sortedDictionary object; if it does, it pushes the word to an array containing all words with that letter pattern. If it does not, it creates a new array for that letter pattern and pushes the current word to that array. After it has gone through every word, it writes the sortedDictionary object to an external JSON file.

    //load file system
    var fs = require("fs");
    //load jsonfile
    var jsonfile = require("jsonfile");
    //create var for dictionary file
    var dictFile = "american-english",
        sortedDictFile = "sortedDictionary.json";
    //empty object to hold sorted dictionary according to letter patterns
    //empty dictionary array to hold words
    var sortedDictionary = {},
        dictionary = [];
    //declare variables for use
    var temp, clpl, lp, word;

    console.time("Dictionary Sort");
    fs.readFile(dictFile, "utf8", function(error, data){
        if(error) throw error;
        //push all words into an array
        dictionary = data.toString().split("\n");
        for(var i = 0; i < dictionary.length; i++){
            //set word to current word in dictionary
            word = dictionary[i];
            //set temp to empty object, clpl to A, and lp to an empty string
            //this is used to get the current letter pattern
            temp = {}, clpl = 'A', lp = '';
            for(var j = 0; j < word.length; j++){
                if(word[j] in temp){
                    lp += temp[word[j]];
                } else {
                    temp[word[j]] = clpl;
                    lp += clpl;
                    clpl = String.fromCharCode(clpl.charCodeAt(0) + 1);
                }
            }
            //if letter pattern of word exists in sorted dictionary
            if(lp in sortedDictionary){
                //add word to the array of words with same letter pattern
                sortedDictionary[lp].push(word);
            } else {
                //if letter pattern is new, create new array to store words
                sortedDictionary[lp] = [];
                //add word to the array of words with same letter pattern
                sortedDictionary[lp].push(word);
            }
        }
        //write the sortedDictionary object to the sortedDictionary.json file
        jsonfile.writeFile(sortedDictFile, sortedDictionary, {spaces: 2}, function(error){
            if(error) throw error;
        });
        //time to sort and write to json file - Dictionary Sort: 687ms (137602 lines)
        console.timeEnd("Dictionary Sort");
    });
Sorting dictionary according to letter patterns
javascript;node.js;cryptography
This can be improved and shortened by naming the concepts you are using:

- A function called encodeWord that takes a single word and returns its encoded value
- A function called groupedByCodes which takes an array of words and returns the object you are seeking: codes as keys, and arrays of words that get encoded to those keys

This will have the following benefits:

- What your code is doing will be crystal clear.
- You'll have less code.
- Most of your temporary variables will vanish.

I've also purposefully left out the writing and reading of the files, as that's really a separate concern from your main program. Rewriting it this way, you'll get something that looks like this:

    var sampleFile = "CAT\nDOG\nTOM\nBOB\nTOT",
        words = sampleFile.split("\n");

    console.log(groupedByCodes(words));
    // { ABC: [ 'CAT', 'DOG', 'TOM' ], ABA: [ 'BOB', 'TOT' ] }

    function groupedByCodes(words) {
        return words.reduce(function(dict, word) {
            var code = encodeWord(word);
            if (!dict[code]) dict[code] = [];
            dict[code].push(word);
            return dict;
        }, {});
    }

    function encodeWord(word) {
        var encodingDict = {},
            nextCode = 65; //'A'

        return word.split('').map(encodeLetter).join('');

        function encodeLetter(l) {
            return encodingDict[l] || (encodingDict[l] = String.fromCharCode(nextCode++));
        }
    }
_unix.128852
My home partition on a Debian wheezy install is an encrypted LVM volume. It is ext3. Earlier today, I had a weird message in a terminal window about an attempt to write to a file in my /home tree failing due to a read-only file system. I rebooted and ended up with an error message saying /dev/sda1 is reported as clean. fsck.ext3, which runs automatically, reports that there is no such device as /dev/mapper/sda1_crypt and reports exit code 8. I get dropped to a maintenance shell and told there was an attempt to write a log to /var/log/fsck/checkfs.

That log reads:

    [Timestamp]
    fsck from util-linux 2.20.1
    /dev/mapper/sda1_crypt: Superblock needs_recovery flag is clear, but journal has data.
    /dev/mapper/sda1_crypt: Run journal anyway
    /dev/mapper/sda1_crypt: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY (i.e., without -a or -p options)
    fsck died with exit status 4

I ran

    $ fsck -vnM /dev/mapper/sda1

A bunch of "illegal block #nnnn (mmmmmmmmm) in inode ppppppp IGNORED" messages blew past, followed by

    too many blocks in Inode somenumberhere

Then, running additional passes to resolve blocks claimed by more than one inode, it output

    Pass 1B: Rescanning for multiply claimed blocks

After a bit, I got a wall of

    Illegal block number passed to ext2fs_test_block_bitmap somenumberhere for multiply claimed block map

These were followed by

    2 Multiply claimed blocks in Inode anothernumber: [lists of 5 and 8 block numbers]

Then I got a number of stanzas like

    [ 3828.181915] ata1.01: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
    [ 3828.182462] ata1.01 BMDMA stat 0x64
    [ 3828.183810] ata1.01 failed command: READ DMA EXT
    [ 3828.185889] ata1.01 cmd 25/00:08:08:10:9c/00:00:29:00:00/f0 tag dma 4096 in
    [ 3828.185891]          res 51/40:00:09:10:9c/40:00:29:00:00/f0 Emask 0x9 (media error)
    [ 3828.190071] ata1.01 status: { DRDY ERR }
    [ 3828.192153] ata1.01 status: { UNC }

These were followed by

    [ 3830.509338] end_request: I/O error, dev sda, sector 698093577
    [ 3830.509841] Buffer I/O error on device dm-3, logical block 87261184
    Error reading block 87261184 (Attempt to read block from filesystem resulted in short read) while reading inode and block bitmaps.  Ignore error? no
    fsck.ext3: Can't read a block bitmap while retrying to read bitmaps for /dev/mapper/sda1_crypt
    /dev/mapper/sda1_crypt: ******* WARNING: Filesystem still has errors *******
    e2fsck: aborted

And then it aborted with a warning that the filesystem still had errors.

My questions are:

- Is my data toasted? (My rigorous backup policy hasn't been rigorously followed of late; I am being punished by the universe, I am sure.)
- What can/ought I to do now?
- Did I do the wrong thing already?
- Will someone hold me until the shaking stops?

EDIT

I also asked on my local LUG mailing list. The advice I got there was to take an image of the drive with ddrescue and run fsck on a copy of that image. That seems sound and unlikely to make things worse. So, that is the present plan of attack, pending any better suggestions.
encrypted ext3 damaged; how to proceed?
encryption;data recovery;fsck
null
_webapps.87199
I'm creating a form where organizations can report on a number of different training positions which they provide. These training positions are called 'posts'. On the first page of the form, users enter basic information about their posts, e.g. the 'post number'. I've set this up as a repeating section, as one organization may have multiple posts.On the second page of the form, users need to enter more specific information relating to each of the posts they'd previously listed. Because of this, I'd like some of the information entered in the repeating section to be automatically populated here. For example, if a user lists three different posts in the repeating section on the first page, then the second page should display the details they entered for all three posts, with extra fields where they can provide further information for each.Essentially, I'm trying to dynamically create content and fields on one page based on what was entered in the repeating section on the previous page. Is this possible?
Cognito Forms: dynamically displaying content entered into a repeating section on a subsequent page
cognito forms
null
_unix.370421
I have enabled two serial ports in VirtualBox and then ran the lspci command in Ubuntu. The serial ports do not appear in its output. Is it because the serial ports are not part of the PCI bus?
Why does the lspci command not list the serial ports?
linux;serial port;x86;pci
    is it because the serial ports are not part of the PCI bus?

Yes. Traditional PC serial ports on x86 hardware interface with applications via old-style ISA I/O ports and interrupts. Keep in mind that RS-232 data rates are down in the kHz range in the vast majority of cases; PCI holds no advantage for RS-232.

Add-on PCI serial port cards may appear in lspci output, but that's more about available slots than the appropriateness of PCI for RS-232.
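On Linux you can confirm where those ports live by asking the legacy serial driver rather than the PCI bus; two standard checks:

    # ports known to the 8250/16550 driver, with their ISA-style I/O addresses and IRQs
    cat /proc/tty/driver/serial

    # or via the kernel log, e.g. "ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A"
    dmesg | grep ttyS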
_webmaster.101400
We're currently rebuilding a client's application and are wondering if there are any benefits of serving specific pages to crawlers, i.e. semantic HTML, no stylesheets, additional meta tags? Would love to know what you think.
SEO benefits of serving simplified pages to crawlers
seo;html;googlebot
null
_unix.349471
We are using Linux logrotate for rotating our log files. Example:

    /location/tomcat/logs/* /location/jboss/log/* {
        copytruncate
        daily
        rotate 10
        compress
        size 20M
        olddir rotated
        create 0644 test test
    }

As per the copytruncate definition given in the man page:

    copytruncate
        Truncate the original log file in place after creating a copy, instead of
        moving the old log file and optionally creating a new one. It can be used
        when some program can not be told to close its logfile and thus might
        continue writing (appending) to the previous log file forever. Note that
        there is a very small time slice between copying the file and truncating
        it, so some logging data might be lost. When this option is used, the
        create option will have no effect, as the old log file stays in place.

So there can be data loss in the log file during the rotate process. I noticed around 5 to 20 seconds of log loss. Is there a way / configuration to do the same process without data loss?
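The usual way to avoid the copytruncate race entirely is to let logrotate rename the file and then tell the application to reopen its log in a postrotate script. A sketch of that shape follows; whether it applies depends on your application supporting a reopen-logs signal, and the pid file path here is made up:

    /location/tomcat/logs/*.log {
        daily
        rotate 10
        compress
        delaycompress
        size 20M
        olddir rotated
        create 0644 test test
        postrotate
            # ask the application to reopen its log file;
            # many daemons use SIGUSR1 or SIGHUP for this
            kill -USR1 "$(cat /var/run/myapp.pid)"
        endscript
    }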
Logrotate in linux to handle log data loss
logs;logrotate
null
_cs.49111
Since we started talking about relations on languages and so on, I keep struggling with this subject. Now I am faced with two questions where I really don't know how to start.

First of all, the natural relation of a language $L \subseteq \Sigma^*$ is the equivalence relation on $\Sigma^*$ with the equivalence classes $L$ and $\Sigma^* \setminus L$.

1. Let the natural relations of two arbitrary languages $L_1$ and $L_2$ be right congruent. Is the natural relation of the language $L_1 \cup L_2$ also right congruent?

A right congruence is an equivalence relation with the added property that
$$\forall a \in \Sigma : u \equiv v \Longrightarrow ua \equiv va.$$
Intuitively I would say that the union of these two relations remains a right congruence, but I don't know how to prove this idea.

2. For every equivalence relation $\equiv$, there are at least two languages which are saturated by $\equiv$: true or false? An equivalence relation saturates a language $L$ if
$$u \equiv v \Longrightarrow (u \in L \Leftrightarrow v \in L).$$
My guess would be that any equivalence relation saturates the languages $L$ and $\bar{L}$.

I would be pleased if someone could provide me with a useful hint on how to solve these questions.
Union of right congruence relations
formal languages
I'll walk you through the first question. You can ask the second question separately (see D.W.'s answer).

When faced with a prove-or-refute question like this, we need to try both directions. First, we can try proving the claim. If we seem to get stuck, we can look for a counterexample.

Let's try proving the claim. We are given that $L_1$ and $L_2$ are right congruent. This means the following, for $u,v \in \Sigma^*$ and $a \in \Sigma$:

- If $u,v \in L_1$ or $u,v \notin L_1$ then either $ua,va \in L_1$ or $ua,va \notin L_1$.
- If $u,v \in L_2$ or $u,v \notin L_2$ then either $ua,va \in L_2$ or $ua,va \notin L_2$.

Let's try to prove that $L_1 \cup L_2$ also satisfies a similar condition. We have to consider two cases: $u,v \in L_1 \cup L_2$ and $u,v \notin L_1 \cup L_2$. The second case looks simpler, so let's start with it. If $u,v \notin L_1 \cup L_2$ then $u,v \notin L_1$ and $u,v \notin L_2$. The assumptions then show that:

- Either $ua,va \in L_1$ or $ua,va \notin L_1$.
- Either $ua,va \in L_2$ or $ua,va \notin L_2$.

We have to prove that either $ua,va \in L_1 \cup L_2$ or $ua,va \notin L_1 \cup L_2$. We reason as follows. If $ua,va \in L_1$ then $ua,va \in L_1 \cup L_2$, so we're done. Same if $ua,va \in L_2$. The remaining case is $ua,va \notin L_1$ and $ua,va \notin L_2$. In this case, $ua,va \notin L_1 \cup L_2$, so we're again done.

Now let's get back to the other case: $u,v \in L_1 \cup L_2$. This case looks harder. There are many different subcases. Let's start with a simple one: $u,v \in L_1$. In this case we can say that $ua,va \in L_1$ (and then we're done, since $ua,va \in L_1 \cup L_2$) or $ua,va \notin L_1$. What do we do in the latter case? We have to consider membership of $u,v$ in $L_2$. If $u,v \in L_2$ or $u,v \notin L_2$ then either $ua,va \in L_2$ (and then we're done, since $ua,va \in L_1 \cup L_2$) or $ua,va \notin L_2$, in which case we know that $ua,va \notin L_1 \cup L_2$. But what happens if $u \in L_2$ and $v \notin L_2$? Then we seem to be stuck.

Now we switch gears, trying to refute the claim. We got stuck when we had $u,v$ such that $u,v \in L_1$, $ua,va \notin L_1$, $u \in L_2$, $v \notin L_2$. So we try to look for two right congruent languages for which two such words exist. In order to do that, we first need to think of examples of right congruent languages. One very simple example is the language of even-length words. We can enrich the repertoire by taking the language of all words having an even number of some letter $a$.

Here are two examples of this sort: take $L_1$ to be all words with an even number of $0$s, and take $L_2$ to be all words with an even number of $1$s. We can take $u = \epsilon \in L_1 \cap L_2$ (here $\epsilon$ is the empty word) and $v = 1 \in L_1 \setminus L_2$. Since we want $ua,va \notin L_1$, we take $a = 0$. Then $u0,v0 \notin L_1$ while $u0 \in L_2$, $v0 \notin L_2$. So $u0 \in L_1 \cup L_2$ but $v0 \notin L_1 \cup L_2$, and we have found our counterexample!

As you can see, finding a counterexample is often more creative than proving a statement, which can be quite mechanical. You just need to try a lot of things until something works.

The next step is to reflect on what we have shown. Is there any other operation on two languages which preserves the property of being right congruent? Is there an operation on one language? Indeed, there are: if $L$ is right congruent then so is $\overline{L}$; and if $L_1,L_2$ are right congruent then their symmetric difference $L_1 \triangle L_2$ is right congruent.
See if you can prove these claims. (In fact, right congruent languages are exactly the languages accepted by a DFA with at most two states. This leads to a complete classification of these languages, which I leave to the reader.)
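The counterexample is also easy to check mechanically; a short Python sketch whose membership predicates mirror the definitions above:

    # L1: even number of '0's; L2: even number of '1's
    in_L1 = lambda w: w.count('0') % 2 == 0
    in_L2 = lambda w: w.count('1') % 2 == 0
    in_union = lambda w: in_L1(w) or in_L2(w)

    u, v, a = '', '1', '0'
    assert in_union(u) and in_union(v)           # u and v are equivalent w.r.t. the union
    assert in_union(u + a) != in_union(v + a)    # but ua and va are separated: not right congruent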
_unix.353784
Probably a stupid question, but I want to be sure about this. Over an SSH connection, can you do everything that you can do on a console connection?

In other words, after launching a system and installing and configuring an SSH server on it, can you do all your further interaction with this system via SSH and not use the console (except in cases where the SSH server is not available for some reason)?
Is there anything that can be done via a console login, but not via an SSH login?
ssh;console
Yes. Here are just some of the things.

Superuser logon

On systems like FreeBSD and OpenBSD, the init program only permits logging on as user #0, in single user mode, on terminals marked as secure in the /etc/ttys file. And the login program (directly on OpenBSD, through a PAM module in FreeBSD) enforces the secure flag in multi-user mode. Debian and Ubuntu similarly have the /etc/securetty mechanism. In all of them, the console and the kernel virtual terminals (which are not necessarily the console, note) default to permitting superuser logon.

On OpenBSD, the out-of-the-box default for SSH is not to permit logon as the superuser when using password or keyboard-interactive authentication (but to permit it when using public key authentication). The out-of-the-box default for SSH on FreeBSD is to not permit logon as the superuser at all.

Framebuffer output, event/USB input programs

One can tunnel X over an SSH connection, and one can interact with programs that employ a terminal device. But programs that expect to perform I/O via Linux's event device system, via USB HID devices, or via a framebuffer device are not operable via SSH.

Further reading: How to use /dev/fb0 as a console from userspace, or output text to it

Sound

One generally doesn't get sound tunnelled over SSH, either. One can manually direct programs that make sounds to a PulseAudio server over the LAN, or manually tunnel from the remote sound clients to a local PulseAudio server. But not all sound is PulseAudio, for starters.

Further reading: https://superuser.com/questions/231920/

Braille

Programs such as BRLTTY require direct access to the character+attribute cell array of a virtual terminal device. Where a server has BRLTTY set up, it cannot access SSH login sessions and display them in Braille, whereas it can display kernel virtual terminal login sessions.

Further reading: Jonathan de Boyne Pollard (2014). Combining the nosh user-space virtual terminals with BRLTTY. Softwares. accessibility. Debian wiki.

Emergency mode and rescue mode work

Bootstrapping into emergency mode or rescue mode does not start an SSH server. Neither mode starts all of the hooplah that underpins networking, let alone network services like an SSH server.

Unpredictable PolicyKit stuff

PolicyKit has the notion of active and inactive login sessions. This tries to boil down what in the /etc/ttys system is a set of several attributes (on, secure, network, dialup) into a single true/false switch between active and inactive. This doesn't quite fit the world where SSH and its ilk exist. One can only have an active login session when one logs on at the console or on a kernel virtual terminal. Logging on via SSH is always considered to be an inactive login session. Surprising behavioural differences can result, because software authors and the system administrator can grant differing permissions for doing stuff to active versus inactive login sessions.

Further reading: Enabling system management privileges for non-local users; How the heck does `polkit` work, anyways?; https://askubuntu.com/questions/21586/; Jeff Lane (2014-03-28). plainbox-secure-policy doesn't work over ssh. Bug #1299201. Launchpad. Ubuntu.
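To make the superuser-logon point concrete, these are the two knobs involved on a typical Linux system (illustrative excerpts, not complete files):

    # /etc/securetty -- terminals on which login(1) will accept root
    console
    tty1
    tty2

    # /etc/ssh/sshd_config -- whether sshd accepts root at all
    PermitRootLogin prohibit-password    # public key only; "no" forbids root logon entirely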
_codereview.62059
Background

I'm using Lua with luaglut to do some OpenGL stuff. The luaglut API is almost identical to the gl/glut C APIs. Sometimes, gl functions want a pointer to some data, for example:

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, oh_crap_a_pointer)

In these cases, the luaglut bindings take a light userdata containing the pointer. In the luaglut demos, a small example library called memarray does the work of setting up a C array and returning a pointer to it as a light userdata.

You might use memarray something like this:

    require 'memarray'

    function loadTexture(filename)
        local file = assert(io.open(filename, 'rb'))
        local data = file:read('*a')
        file:close()
        local array = memarray('uchar', #data)
        array:from_str(data)
        return array
    end

    -- ... later
    local texture = loadTexture('whatever.rgba')
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0, GL_RGBA, GL_UNSIGNED_BYTE, texture:ptr())

That got me thinking: why do we need memarray at all? Lua strings should be binary safe; all we really need is to get a pointer to the string returned from file:read('*a') as a light userdata. It would look something like this:

    function loadTexture(filename)
        local file = assert(io.open(filename, 'rb'))
        local data = file:read('*a')
        file:close()
        return data
    end

    -- ... later
    local stringpointer = require 'stringpointer'
    local texture = loadTexture('whatever.rgba')
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0, GL_RGBA, GL_UNSIGNED_BYTE, stringpointer.get(texture))

Of course we pass the string around instead of the pointer, because the GC could free the memory the string was in if there are no more references to it, invalidating the pointer. Internally, Lua strings are immutable and passed by reference anyway, so this is no big deal.

Code

    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>

    int get_pointer(lua_State *L)
    {
        luaL_checktype(L, 1, LUA_TSTRING);
        lua_pushlightuserdata(L, (void *)lua_tostring(L, 1));
        return 1;
    }

    int luaopen_stringpointer(lua_State *L)
    {
        const luaL_Reg api[] = {
            { "get", get_pointer },
            { NULL, NULL }
        };
    #if LUA_VERSION_NUM == 501
        luaL_register(L, "stringpointer", api);
    #else
        luaL_newlib(L, api);
    #endif
        return 1;
    }

It's really tiny, but my C is rusty, so maybe I did something stupid. It seems to work fine so far. Any input is welcome.
Tiny Lua library to get char pointer from string
c;memory management;lua;opengl
null
_webmaster.12129
I've been optimizing my website for search engines, and for some reason I have a "page not found" crawl error at 'www.peach-designs.com/a', which is not a page, not a link, nor anything else on my site. I've checked my code, and I'm left to assume Google took something as a link to that page (an a href, etc.). Is this common for other websites?
SEO - Crawl Errors
google;seo;web crawlers
null
_unix.51944
Imagine something like this:

    $ curlsh http://www.example.org
    > GET /foo/bar/bam
    ...output here...
    > POST /thing/pool ...
    ... result here ....

Is there a tool that lets me do that?
Is there a way to use curl interactively? Or is there an interactive curl/wget shell?
wget;curl
Thanks for the answers. After googling around, I found resty, which is a shell script wrapper around the curl tool. This is really what I want. It's 155 lines of shell script, and when I run it, I get functions for GET, PUT, POST, DELETE, and OPTIONS. These functions are just wrappers around the curl program found on my path.

It works like this on MacOSX bash:

    $ . resty
    $ resty https://api.example.org
    https://api.myhost.com*
    $ GET /v1/o/orgname -u myusername:password
    {
      "createdAt" : 1347007133508,
      "createdBy" : "admin",
      "displayName" : "orgname",
      "environments" : [ "test", "prod" ],
      "lastModifiedAt" : 1347007133508,
      "lastModifiedBy" : "admin",
      "name" : "orgname",
      "properties" : {
        "propertyList" : [ ... ]
      },
    }
    $

The first line there just runs the commands in the current shell. The next line, the resty command, sets the URL base. Thereafter, any call to GET, PUT, POST... implicitly references that base. I showed an example that emits prettified JSON. I think if your server emits minified JSON, you could pretty-print it with an external script by piping the output.

There's support for host-based preferences. Suppose your target host is api.example.org. Create a file called ~/.resty/api.example.org, and insert in there lines which specify arguments that should be passed to every curl call to the host by that name. Each HTTP verb gets its own line. So, inserting this content in the file:

    GET -u myusername:mypassword --write-out "\nStatus = %{http_code}\n"

...means that every time I do a GET when api.example.org is the base hostname, the curl command will implicitly use the -u and --write-out args shown there (-u for basic auth). As another example, you could specify the Accept header in that file, so that you always request XML:

    GET --header "Accept: application/xml"

Any curl command line arg is supported in that preferences file. All the curl args for the host+verb tuple need to go on a single line in the preferences file. Handy.
_unix.117040
My input file has positions in the first column, written with a varying number of spaces (or no space at all):

    16504 16516
    1650811 16520
    1651 16524
    16516111 16528
    165204 16532

I need to get an output file where the first column has no spaces at all, while keeping the second column as it is:

    16504 16516
    16508 16520
    16512 16524
    16516 16528
    16520 16532
removing spaces from first column
text processing;sed;awk;columns
null
_unix.364096
I have a problem with PulseAudio on my Buildroot-based Raspberry Pi: it plays audio 2% slower than normal. After measurement, I get 100 BPM on the Raspberry Pi when I get 102 on my computer, and it scales with the BPM: I always have the RPi's BPM = 98% of the normal BPM. The same holds for frequencies, which come out 2% lower.

I'm using a Raspberry Pi Compute Module (CM1) with a Wolfson WM8731 audio codec.

I tried everything I found on the internet to play files in real time and to get the best scheduling priority, but the problem persists. It's very strange, because I don't hear the audio cracking or a 'vibrato' effect from the frequency changing periodically. So it really seems that PulseAudio is just consistently 2% slow compared to real time.

Do you have an idea of what could be the problem?
PulseAudio plays 2% slower
audio;raspberry pi;pulseaudio
null
_unix.219564
I downloaded debian-live-8.1.0-i386-standard.iso from http://cdimage.debian.org/debian-cd/8.1.0-live/i386/iso-hybrid/ and booted a system from it, logging in with username and password both "live". Now that I have a bash prompt, what do I type so that the Debian standard installation starts installing to my hard disk?

Background note: I downloaded all the other .iso images (for example debian-live-8.1.0-i386-mate-desktop.iso) from the same directory URL. With all of those .iso images, I could both run the system live and install it, i.e. they are hybrid CDs. The standard iso works fine as a live CD, and as it is a hybrid CD too, it should be able to start an installation as well.
install debian from hybrid standard CD
debian
null
_unix.293327
It's fairly straightforward to determine from within a script that the code being run is not running in an interactive shell, but you already know that as you write the script, so I'm not sure how it's useful except within a function (or if you source the script).

Within a script, how do you determine if the script is being run from an interactive shell, or being run by another script? In other words, how do you determine the interactivity of the caller of the script? Put yet another way, how do you know whether the parent shell is interactive or not?

Just to further clarify, I'm not trying to find the name of the nearest interactive ancestor; I'm just trying to figure out if the immediate parent is interactive or not. If this varies from shell to shell, I'm most interested in zsh, but also in bash.
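For context, these are the standard self-checks that the first sentence alludes to; they report on the script's own shell, not its caller's (bash/zsh):

    # an interactive shell has 'i' in $-
    case $- in
        *i*) echo "this shell is interactive" ;;
        *)   echo "this shell is non-interactive" ;;
    esac

    # a related but different test: is stdin attached to a terminal?
    [ -t 0 ] && echo "stdin is a tty"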
How to determine if script is called by interactive shell or another script?
shell script;interactive
null
_webapps.20113
We're using Trello to track our hiring process. It would be nice to use the vote feature to quickly gauge opinions without having to read comments. However, there doesn't seem to be a way to clear votes besides having each member unvote. So, votes currently must represent the card over the whole board. It seems reasonable for votes to represent the card only for its current list, but that's only possible if you can clear them somehow...
Is there a way I can clear votes in Trello?
trello
null
_softwareengineering.233498
Requirements:

- My application is latency sensitive. Millisecond-level responsiveness matters. Not all the time, but when it acts, it needs to be fast.
- My application needs to log information about what it's done in those times.
- The actual writing of the logs does not need to be that fast, just the actions.
- However, the log files need to be written at human speed; in other words, I cannot set immediateFlush to false.

So, obviously, certain kinds of logging are getting offloaded to another thread. I am using logback as a framework, but for certain sorts of log messages I still want to offload the work to another thread.

Here is how the system currently works:

- Each part of the latency-sensitive system gets a particular logger interface injected. Each of these log methods has a signature specific to the sorts of things I know the logger will need to log.
- A SimpleLogger implementation is written for each case. This writes to the log on the same thread.
- I have also written a ThreadedLogger which implements ALL the logging interfaces, and gets a backing logger implementation injected for each sort of logger.
- Whenever a log method is called in ThreadedLogger, it wraps the request in a SomeLogObject implements LogCommand, throws the object into a LinkedBlockingQueue, and returns. This is similar to the GoF Command pattern.
- There is a consumption thread that blocks on BlockingQueue.take() waiting for log objects to come in. When they do, it calls LogCommand.execute(), which calls the appropriate method in the backing Logger in ThreadedLogger.

Currently the LogCommand implementations are very stupid. They just call the appropriate method in the proper injected logger.

I am trying to decide if I should refactor this system. Currently, if I need to create a new place to do these offloaded logs, I have to create:

- A new interface
- A new implementation of this interface
- Two new DI bindings (one for the simple logger, one for the ThreadedLogger backer)
- A new LogCommand implementation
- A new method for creating this LogCommand object
- Code for injecting and storing as a field the appropriate logger in ThreadedLogger

It seems to me that it would be a lot simpler to offload the creation of the LogObject to the calling threads, but I am concerned that I'm exposing too much of the internal workings of the log system if I do that. Not that this is necessarily a problem, but it seems unclean. Another possibility would be to combine the functionality of the simple logger implementations with their respective log objects, so the objects change from being "I am an object that does logging when given log data" to "I am a log event that knows how to log myself", and these objects can be created by a log factory.
Logging in a latency sensitive system
java;design;logging
You have made your design overly complex by creating specific logger signatures for the various latency-sensitive parts. A better design would have been to use the Logger interface from logback for all parts of the application. If performance measurements have shown that the logging causes a bottleneck for some low-latency parts, then you can offload the logging in this way:

- You create a ThreadedLogger class that implements the Logger interface, and a ThreadedLogReader class. These two classes communicate using LogCommand objects. The LogCommand class contains all the required log information (which logger to use, the severity level, and the actual log message).
- The ThreadedLogReader class runs on a separate thread and injects the log messages in the received LogCommand objects into the logback framework.

This way, your logging framework remains free from knowledge about the latency-sensitive parts of the code. If you want to make it really fancy, you could even implement a custom version of the LoggerFactory class which decides if a certain logger should be a ThreadedLogger or a regular Logger.
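A condensed sketch of that shape in Java; to keep it short it uses a tiny Log interface rather than the full org.slf4j.Logger surface, and all the names here are illustrative:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Everything needed to emit the message later, captured on the hot thread.
    final class LogCommand {
        final String loggerName, level, message;
        LogCommand(String loggerName, String level, String message) {
            this.loggerName = loggerName;
            this.level = level;
            this.message = message;
        }
    }

    interface Log { void info(String message); }

    // Hot-path side: capturing a command is just a queue insert.
    final class ThreadedLogger implements Log {
        private final String name;
        private final BlockingQueue<LogCommand> queue;

        ThreadedLogger(String name, BlockingQueue<LogCommand> queue) {
            this.name = name;
            this.queue = queue;
        }

        @Override public void info(String message) {
            queue.offer(new LogCommand(name, "INFO", message)); // never blocks the caller
        }
    }

    // Background side: drains commands and hands them to the real framework.
    final class ThreadedLogReader implements Runnable {
        private final BlockingQueue<LogCommand> queue;

        ThreadedLogReader(BlockingQueue<LogCommand> queue) { this.queue = queue; }

        @Override public void run() {
            try {
                while (true) {
                    LogCommand cmd = queue.take();
                    // here you would look up the logback logger named cmd.loggerName
                    // and call the method matching cmd.level
                    System.out.printf("[%s] %s - %s%n", cmd.level, cmd.loggerName, cmd.message);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // allow clean shutdown
            }
        }
    }

Note that logback itself also ships an AsyncAppender, which implements essentially this producer/consumer hand-off inside the framework.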
_datascience.20264
The KDD 1999 dataset had 41 features, and the UNB ISCX Intrusion Detection Evaluation DataSet (iscx.ca/dataset) had 49. For an intrusion detection problem, I want to extract these features from a given/generated pcapng file. Is there any feature extraction tool available for this? [All the information may not be present in the input pcapng file, and some features may not be computable.]
Feature extraction from given pcapng file
feature extraction
null
_opensource.140
How can I keep people involved and motivated to work on a project that doesn't involve direct monetary benefit? What specific strategies do open source projects tend to use to keep core developers involved?

Partly this is a general "what strategies reliably motivate most open-source developers", but it's also about "how can you prevent people from forgetting they're involved at all? What specific tactics do project leaders use to remind people that they're part of something?"

At least in my own case, my largest problem with projects is that I have hundreds of barely started, and tens of half-finished, things that I just completely forget about / put off, but that I do really want to see completed. While this arguably isn't specific to open source, to me at least it's the largest impediment.
How can I keep a project from losing momentum?
project management;human resources
null
_softwareengineering.190876
In class I am learning about value iteration and Markov decision problems. We are working through the UC Berkeley Pac-Man project, so I am trying to write the value iterator for it. As I understand it, value iteration means that in each iteration you visit every state, and then track to a terminal state to get its value.

I have a feeling I am not right, because when I try that in Python I exceed the maximum recursion depth. So I return to the pseudo-code, and there is a Vk[s] and a Vk-1[s'], which I had taken to mean "value of state" and "value of next state", but I must be missing something.

So what is the significance of the k and k-1?

My code:

    def val(i, state):
        if mdp.isTerminal(state) or i == 0:
            return 0.0
        actionCost = {}
        for action in mdp.getPossibleActions(state):
            actionCost[action] = 0
            for (nextState, probability) in mdp.getTransitionStatesAndProbs(state, action):
                reward = mdp.getReward(state, action, nextState)
                actionCost[action] += probability * reward + discount * val(i - 1, nextState)
        return actionCost[max(actionCost, key=actionCost.get)]

    for i in range(iterations):
        for state in mdp.getStates():
            self.values[state] = val(i, state)

Pseudo code:

    k ← 0
    repeat
        k ← k + 1
        for each state s do
            V_k[s] = max_a Σ_{s'} P(s'|s,a) · (R(s,a,s') + γ·V_{k-1}[s'])
    until ∀s : |V_k[s] − V_{k-1}[s]| < θ
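For contrast, here is a minimal sketch of the same update done iteratively, which is what the subscripts express: V_k is computed from a stored copy of V_{k-1}, with no recursion (the method names follow the Berkeley-style MDP interface used in the question):

    def value_iteration(mdp, discount, iterations):
        V = {s: 0.0 for s in mdp.getStates()}            # V_0
        for _ in range(iterations):
            V_prev = V.copy()                            # freeze V_{k-1}
            for s in mdp.getStates():
                if mdp.isTerminal(s):
                    continue
                V[s] = max(                              # V_k[s] uses only V_{k-1}
                    sum(p * (mdp.getReward(s, a, s2) + discount * V_prev[s2])
                        for s2, p in mdp.getTransitionStatesAndProbs(s, a))
                    for a in mdp.getPossibleActions(s))
        return V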
I don't understand value iteration
python;artificial intelligence
null
_codereview.7725
I am writing a few Python functions to parse through an XML schema, for reuse later to check and create XML docs in this same pattern. Below are two functions I wrote to parse out data from simpleContent and simpleType objects. After writing this, it looked pretty messy to me and I'm sure there is a much better (more Pythonic) way to write these functions, so I am looking for any assistance. I use lxml for access to the etree library.

    def get_simple_type(element):
        simple_type = {}
        ename = element.get("name")
        simple_type[ename] = {}
        simple_type[ename]["restriction"] = element.getchildren()[0].attrib
        elements = element.getchildren()[0].getchildren()
        simple_type[ename]["elements"] = []
        for elem in elements:
            simple_type[ename]["elements"].append(elem.get("value"))
        return simple_type

    def get_simple_content(element):
        simple_content = {}
        simple_content["simpleContent"] = {}
        simple_content["simpleContent"]["extension"] = element.getchildren()[0].attrib
        simple_content["attributes"] = []
        attributes = element.getchildren()[0].getchildren()
        for attribute in attributes:
            simple_content["attributes"].append(attribute.attrib)
        return simple_content

Examples in the schema of simpleContent and simpleTypes (they will be consistently formatted, so no need to make the code more extensible for the variety of ways these elements could be represented in a schema):

    <xs:simpleContent>
        <xs:extension base="xs:integer">
            <xs:attribute name="sort_order" type="xs:integer" />
        </xs:extension>
    </xs:simpleContent>

    <xs:simpleType name="yesNoOption">
        <xs:restriction base="xs:string">
            <xs:enumeration value="yes"/>
            <xs:enumeration value="no"/>
            <xs:enumeration value="Yes"/>
            <xs:enumeration value="No"/>
        </xs:restriction>
    </xs:simpleType>

The code currently creates dictionaries like those shown below, and I would like to keep that consistent:

    {'attributes': [{'type': 'xs:integer', 'name': 'sort_order'}],
     'simpleContent': {'extension': {'base': 'xs:integer'}}}

    {'yesNoOption': {'restriction': {'base': 'xs:string'},
                     'elements': ['yes', 'no', 'Yes', 'No']}}
Python xml schema parsing for simpleContent and simpleTypes
python;parsing;xml
Do more with literals:

    def get_simple_type(element):
        return {
            element.get("name"): {
                "restriction": element.getchildren()[0].attrib,
                "elements": [
                    e.get("value")
                    for e in element.getchildren()[0].getchildren()
                ]
            }
        }

    def get_simple_content(element):
        return {
            "simpleContent": {
                "extension": element.getchildren()[0].attrib,
                "attributes": [
                    a.attrib
                    for a in element.getchildren()[0].getchildren()
                ]
            }
        }
_softwareengineering.308749
I am developing an e-commerce product. I have been able to implement all the functionality and am left with allowing users to create additional attributes for a product. Right now I have two options:

1. EAV. EAV is largely frowned upon but seems to work for Magento. After researching all the headaches it causes, though, I am a bit reluctant to use it.

2. Use JSON columns in MySQL 5.7. This is rather new, and I have not seen it implemented anywhere else, and I am dreading full table scans as a result of querying the JSON attributes. But after reading the MySQL 5.7 JSON documentation they seem to recommend using JSON, and it would be less tiresome than implementing something like the "Practical MySQL schema advice" approach.

My question is: although I am biased towards using the JSON column way of storing attributes, as NoSQL is not an option for me, are there any drawbacks that are more severe than using EAV tables?
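For what it's worth, MySQL 5.7 can avoid full table scans on frequently queried JSON attributes by indexing a generated column extracted from the JSON. A sketch with made-up table and attribute names:

    CREATE TABLE product (
        id    BIGINT UNSIGNED PRIMARY KEY,
        attrs JSON NOT NULL
    );

    -- materialize one hot attribute as a virtual column and index it
    ALTER TABLE product
        ADD COLUMN color VARCHAR(32)
            GENERATED ALWAYS AS (JSON_UNQUOTE(JSON_EXTRACT(attrs, '$.color'))) VIRTUAL,
        ADD INDEX idx_product_color (color);

    -- this predicate can now use idx_product_color instead of scanning
    SELECT id FROM product WHERE color = 'red';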
Using MySql 5.7 JSON columns for EAV
performance;sql;mysql
null
_codereview.18415
Here's some code that removes the specified character, ch, from the string passed in. Is there a better way to do this? Specifically, one that's more efficient and/or portable?

    // returns string without any 'ch' characters in it, if any.
    #include <string>
    using namespace std;

    string strip(string str, const char ch)
    {
        size_t p = 0; // position of any 'ch'
        while ((p = str.find(ch, p)) != string::npos)
            str.erase(p, 1);
        return str;
    }
Stripping specified character
c++
I'm not entirely sure how the performance will compare, but the standard way to accomplish this would be the erase-remove idiom:

    str.erase(std::remove(str.begin(), str.end(), ch), str.end());

Unless the performance proves to be a bottleneck, it's typically better to stick with the C++ style of doing things. I can't imagine that this would be significantly less efficient than the other method. (In fact, I wouldn't be surprised if this is a bit faster for long strings with a high amount of the removed character -- though my assumption of that depends on quite a few non-guaranteed implementation choices, and a rather rough estimation of the cost of different low level operations.)
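For completeness, a self-contained version of strip built on that idiom (same signature as the question's; note that std::remove lives in <algorithm>):

    #include <algorithm>
    #include <string>

    // Returns str with every occurrence of ch removed.
    // std::remove shifts the kept characters forward in a single pass,
    // then erase trims the leftover tail -- O(n) rather than O(n^2).
    std::string strip(std::string str, const char ch)
    {
        str.erase(std::remove(str.begin(), str.end(), ch), str.end());
        return str;
    }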
_codereview.167176
I have written code for a stopwatch that utilizes the abstract factory design pattern and would like to get some feedback on the code, so be as harsh as you can.

Note: I used ctime instead of chrono because this is not meant for benchmarking. The code will later be used in a console game, and the format of the std::tm struct is easy to work with.

Also, note that the way I wrote the tests in Source.cpp will result in a small visual bug from time to time. To resolve it, lower the waiting time inside of the do-while loop from std::chrono::seconds(1) to std::chrono::milliseconds(250) and increase the value of timer; this will increase the refresh rate.

EDIT: Note that this question is a follow-up to "Watch that uses the abstract factory design pattern". Both use the same Console, Constants and Digits libraries, but each accomplishes a different task.

StopWatch.h:

    #ifndef STOP_WATCH
    #define STOP_WATCH

    #include "Digits.h"
    #include <vector>
    #include <memory>
    #include <ctime>

    class StopWatch
    {
    public:
        virtual void printTime() = 0;
        void setStopWatchXY(int x, int y);
        bool countDownFrom(int seconds);
        void updateTime();
        void reset();
        void start();
        void stop();
        void lap();
        const std::vector<int>& getLapTimes() const;
        int getElapsed() const;
        virtual ~StopWatch() = default;

    protected:
        int m_watchXPos;
        int m_watchYPos;
        int m_seconds;
        int m_minutes;
        int m_hours;

    private:
        std::vector<int> m_lapTimes;
        bool m_running{ false };
        int m_elapsed{};
        int m_beg;
        std::time_t m_now;

        void converter(int seconds);
        void clearTime();
    };

    class DigitalStopWatch final : public StopWatch
    {
    public:
        virtual void printTime() override;
        explicit DigitalStopWatch(int x, int y) { setStopWatchXY(x, y); }
    };

    class SegmentedStopWatch final : public StopWatch
    {
    public:
        virtual void printTime() override;
        explicit SegmentedStopWatch(int x, int y) { setStopWatchXY(x, y); }

    private:
        Digit m_stopWatchDigits[6];

        void printDigitAtLoc(Digit digArr[], int index, int x, int y) const;
        void printColon(int x, int y);
        void printSeconds();
        void printMinutes();
        void printHours();
        void set(Digit digArr[], int startIndex, int unit);
        void setDigitsToCurrentTime();
        void setSeconds();
        void setMinutes();
        void setHours();
    };

    class Factory
    {
    public:
        virtual std::unique_ptr<StopWatch> createStopWatch(int stopWatchXPos = 0, int stopWatchYPos = 0) const = 0;
    };

    class DigitalStopWatchFactory final : public Factory
    {
        virtual std::unique_ptr<StopWatch> createStopWatch(int stopWatchXPos = 0, int stopWatchYPos = 0) const override
        {
            return std::make_unique<DigitalStopWatch>(stopWatchXPos, stopWatchYPos);
        }
    };

    class SegmentedStopWatchFactory final : public Factory
    {
        virtual std::unique_ptr<StopWatch> createStopWatch(int stopWatchXPos = 0, int stopWatchYPos = 0) const override
        {
            return std::make_unique<SegmentedStopWatch>(stopWatchXPos, stopWatchYPos);
        }
    };

    #endif

StopWatch.cpp:

    #include "StopWatch.h"
    #include "Console.h"
    #include <thread> // for this_thread::sleep_for

    namespace
    {
        constexpr int maxTime          { 356400 };
        constexpr int digitPadding     { 5 };
        constexpr int secondsIndexStart{ 5 };
        constexpr int minutesIndexStart{ 3 };
        constexpr int timePadding      { 2 };
        constexpr int hoursIndexStart  { 1 };

        enum { First, Second, Third, Fourth, Fifth, Sixth };
    }

    /*|---STOP_WATCH_FUNCTIONS_START---|*/
    /*|---PUBLIC_FUNCTIONS_START---|*/

    void StopWatch::setStopWatchXY(int x, int y)
    {
        Console::setXY(x, y, m_watchXPos, m_watchYPos);
    }

    bool StopWatch::countDownFrom(int seconds)
    {
        if (seconds > maxTime)
            seconds = maxTime;
        while (seconds >= 0)
        {
            converter(seconds);
            printTime();
            if (seconds > 0)
            {
                std::this_thread::sleep_for(std::chrono::seconds(1));
            } // end of if
            --seconds;
        } // end of while
        return true;
    }

    void StopWatch::updateTime()
    {
        long long curTimeInSec{ static_cast<long long>(std::time(&m_now)) - m_beg + m_elapsed };
        if (curTimeInSec > maxTime)
            curTimeInSec = 0;
        converter(curTimeInSec);
    }

    void StopWatch::reset()
    {
        m_running = false;
        m_lapTimes.clear();
        m_lapTimes.shrink_to_fit();
        clearTime();
    }

    void StopWatch::start()
    {
        if (!m_running)
        {
            m_beg = static_cast<long long>(std::time(&m_now));
            m_running = true;
        } // end of if
    }

    void StopWatch::stop()
    {
        if (m_running)
        {
            m_elapsed += static_cast<long long>(std::time(&m_now)) - m_beg;
            m_running = false;
        } // end of if
    }

    void StopWatch::lap()
    {
        if (m_running)
        {
            stop();
            m_lapTimes.emplace_back(m_elapsed);
            clearTime();
            start();
        } // end of if
    }

    const std::vector<int>& StopWatch::getLapTimes() const
    {
        return m_lapTimes;
    }

    int StopWatch::getElapsed() const
    {
        return m_elapsed;
    }

    /*|----PUBLIC_FUNCTIONS_END----|*/
    /*|---PRIVATE_FUNCTIONS_START---|*/

    void StopWatch::converter(int seconds)
    {
        m_hours = seconds / 3600;
        seconds = seconds % 3600;
        m_minutes = seconds / 60;
        m_seconds = seconds % 60;
    }

    void StopWatch::clearTime()
    {
        m_elapsed = 0;
        m_seconds = 0;
        m_minutes = 0;
        m_hours = 0;
    }

    /*|----PRIVATE_FUNCTIONS_END----|*/
    /*|----STOP_WATCH_FUNCTIONS_END----|*/

    /*|---DIGITAL_STOP_WATCH_FUNCTIONS_START---|*/
    /*|---PUBLIC_FUNCTIONS_START---|*/
    /*|---VIRTUAL_FUNCTIONS_START---|*/

    void DigitalStopWatch::printTime()
    {
        Console::gotoxy(m_watchXPos, m_watchYPos);
        if (m_hours < 10)
            std::cout << '0';
        std::cout << m_hours << ':';
        if (m_minutes < 10)
            std::cout << '0';
        std::cout << m_minutes << ':';
        if (m_seconds < 10)
            std::cout << '0';
        std::cout << m_seconds;
    }

    /*|----VIRTUAL_FUNCTIONS_END----|*/
    /*|----PUBLIC_FUNCTIONS_END----|*/
    /*|----DIGITAL_STOP_WATCH_FUNCTIONS_END----|*/

    /*|---SEGMENTED_STOP_WATCH_FUNCTIONS_START---|*/
    /*|---PUBLIC_FUNCTIONS_START---|*/
    /*|---VIRTUAL_FUNCTIONS_START---|*/

    void SegmentedStopWatch::printTime()
    {
        setDigitsToCurrentTime();
        printHours();
        printColon(m_watchXPos + 10, m_watchYPos);
        printMinutes();
        printColon(m_watchXPos + 22, m_watchYPos);
        printSeconds();
    }

    /*|----VIRTUAL_FUNCTIONS_END----|*/
    /*|----PUBLIC_FUNCTIONS_END----|*/
    /*|---PRIVATE_FUNCTIONS_START---|*/

    void SegmentedStopWatch::printDigitAtLoc(Digit digArr[], int index, int x, int y) const
    {
        digArr[index].setDigitXY(x + index * digitPadding, y);
        digArr[index].printDigit();
    }

    void SegmentedStopWatch::printColon(int x, int y)
    {
        Console::putSymbol(x, y + 1, '.');
        Console::putSymbol(x, y + 2, '.');
    }

    void SegmentedStopWatch::printSeconds()
    {
        printDigitAtLoc(m_stopWatchDigits, Fifth, m_watchXPos + timePadding * 2, m_watchYPos);
        printDigitAtLoc(m_stopWatchDigits, Sixth, m_watchXPos + timePadding * 2, m_watchYPos);
    }

    void SegmentedStopWatch::printMinutes()
    {
        printDigitAtLoc(m_stopWatchDigits, Third, m_watchXPos + timePadding, m_watchYPos);
        printDigitAtLoc(m_stopWatchDigits, Fourth, m_watchXPos + timePadding, m_watchYPos);
    }

    void SegmentedStopWatch::printHours()
    {
        printDigitAtLoc(m_stopWatchDigits, First, m_watchXPos, m_watchYPos);
        printDigitAtLoc(m_stopWatchDigits, Second, m_watchXPos, m_watchYPos);
    }

    void SegmentedStopWatch::set(Digit digArr[], int startIndex, int unit)
    {
        if (unit < 10)
            digArr[startIndex - 1] = 0;
        else
            digArr[startIndex - 1] = unit / 10;
        digArr[startIndex] = unit % 10;
    }

    void SegmentedStopWatch::setDigitsToCurrentTime()
    {
        setHours();
        setMinutes();
        setSeconds();
    }

    void SegmentedStopWatch::setSeconds()
    {
        set(m_stopWatchDigits, secondsIndexStart, m_seconds);
    }

    void SegmentedStopWatch::setMinutes()
    {
        set(m_stopWatchDigits, minutesIndexStart, m_minutes);
    }

    void SegmentedStopWatch::setHours()
    {
        set(m_stopWatchDigits, hoursIndexStart, m_hours);
    }

    /*|----PRIVATE_FUNCTIONS_END----|*/
    /*|----SEGMENTED_STOP_WATCH_FUNCTIONS_END----|*/

Source.cpp:

    #include "StopWatch.h"
    #include "Console.h" // for Console::gotoxy
    #include <thread>    // for this_thread::sleep_for

    int main()
    {
        std::unique_ptr<Factory> segFact{ std::make_unique<SegmentedStopWatchFactory>() };
        std::unique_ptr<Factory> digFact{ std::make_unique<DigitalStopWatchFactory>() };

        std::unique_ptr<StopWatch> stoppers[2];
        stoppers[0] = segFact->createStopWatch();
        stoppers[1] = digFact->createStopWatch();

        stoppers[0]->setStopWatchXY(5, 5);

        // to test the second stopper simply change stoppers[0] to stoppers[1]
        stoppers[0]->countDownFrom(12); // test countdown
        //stoppers[0]->countDownFrom(60 * 60 * 60 * 60); // overflow test

        /*
        stoppers[0]->start();
        while (1)
        {
            stoppers[0]->updateTime();
            stoppers[0]->printTime();
            std::this_thread::sleep_for(std::chrono::milliseconds(200)); // this is only responsible for the refresh rate, the lower the better
        } // note that no waiting at all will result in visible reprinting.
        */

        /*
        int timer = 9;
        stoppers[0]->start();
        do
        {
            stoppers[0]->updateTime();
            stoppers[0]->printTime();
            std::this_thread::sleep_for(std::chrono::seconds(1));
        } while (--timer);
        stoppers[0]->stop(); // test stop
        std::this_thread::sleep_for(std::chrono::seconds(3));
        timer = 9;
        stoppers[0]->start();
        do
        {
            stoppers[0]->updateTime();
            stoppers[0]->printTime();
            std::this_thread::sleep_for(std::chrono::seconds(1));
        } while (--timer);
        */

        /*
        int timer = 9;
        stoppers[0]->start();
        do
        {
            stoppers[0]->updateTime();
            stoppers[0]->printTime();
            std::this_thread::sleep_for(std::chrono::seconds(1));
        } while (--timer);
        stoppers[0]->reset(); // test reset
        std::this_thread::sleep_for(std::chrono::seconds(3));
        timer = 9;
        stoppers[0]->start();
        do
        {
            stoppers[0]->updateTime();
            stoppers[0]->printTime();
            std::this_thread::sleep_for(std::chrono::seconds(1));
        } while (--timer);
        */

        /*
        int timer = 9;
        stoppers[0]->start();
        do
        {
            stoppers[0]->updateTime();
            stoppers[0]->printTime();
            std::this_thread::sleep_for(std::chrono::seconds(1));
        } while (--timer);
        stoppers[0]->lap(); // lap reset
        stoppers[0]->stop();
        std::this_thread::sleep_for(std::chrono::seconds(3));
        timer = 9;
        stoppers[0]->start();
        do
        {
            stoppers[0]->updateTime();
            stoppers[0]->printTime();
            std::this_thread::sleep_for(std::chrono::seconds(1));
        } while (--timer);
        stoppers[0]->lap();
        stoppers[0]->stop();
        Console::gotoxy(0, 0);
        std::cout << "m_elapsed: " << stoppers[0]->getElapsed() << '\n';
        std::cout << "First lap: " << stoppers[0]->getLapTimes()[0] << '\n';
        std::cout << "Second lap: " << stoppers[0]->getLapTimes()[1] << '\n';
        */

        return 0;
    }
Stopwatch that uses the abstract factory design pattern
c++;object oriented;design patterns;datetime;polymorphism
Okay, after I worked my way through the original code, a few things have become clearer. Since I have never done programming with ncurses, I was eager to try my hand at a better design.

Here it comes. It's a sketch only in the sense that I didn't create separate translation units. That is basically a tedious exercise and left for the reader. However, it does implement stopwatch (including lap times and reset), countdown, and a bonus random timer task.

Big ideas

I noticed that Stopwatch was a bit of a God Object antipattern. It does too many things (you can't really usefully do a countdown and a lap time simultaneously). I reckoned it would be nice to simply have tasks (without any UI) that expose duration measurements, and views that can display them as they update. This is akin to a publish/subscribe pattern. To demonstrate this, I made not only the views generic, but also the tasks.

The UI will be an interactive terminal application that supports the following shortcut keys:

    Select (multiple) views:  [d]igital/[s]egmented
    Launch task:              [r]andom [c]ountdown s[t]opwatch [?]any
    Control tasks:            [l]ap [!]reset [k]ill
    Other:                    [z]ap all views [q]uit/Esc

I am aware of the usual implementation in hardware stopwatch devices where the operation modes form a state machine. I also realize that the implementation in code tried to mimic this. Unfortunately, not only did it fall short, it also conflated things with the UI side of things. Consider this answer a finger exercise on my part.

Code walkthrough

Includes

    #include <iostream>
    #include <sstream>
    #include <iomanip>

We'll be using stream formatting to display hh:mm:ss times.

    #include <chrono>
    using namespace std::chrono_literals;

    #include <memory>
    #include <random>
    #include <algorithm>
    #include <set>

We'll be using standard library containers and algorithms.

    #include <boost/signals2.hpp>

A little bit of Boost to aid in the publish/subscribe mechanism. It facilitates multi-cast subscription and automatic (RAII) disconnection.

    #include <cursesapp.h>
    #include <cursesp.h>

As in the last comment on the old answer, we'll be using ncurses to create an interactive terminal UI (TUI).

Preamble/general declarations

We start at the foundation, some general utilities:

    namespace {
        using Clock = std::chrono::steady_clock;
        using TimePoint = Clock::time_point;
        using Duration = Clock::duration;
        using Durations = std::vector<Duration>;

Just some convenience shorthands, so we keep our code legible and easy to change.

        struct HMS {
            int hours, minutes, seconds;

            std::string str() const {
                std::stringstream ss;
                ss << std::setfill('0')
                   << std::setw(2) << hours   << ':'
                   << std::setw(2) << minutes << ':'
                   << std::setw(2) << seconds;
                return ss.str();
            }
        };

        HMS to_hms(Duration duration) {
            auto count = std::chrono::duration_cast<std::chrono::seconds>(duration).count();
            int s = count % 60; count /= 60;
            int m = count % 60; count /= 60;
            int h = count;
            return {h, m, s};
        }

You had these conversions anyways, but here they are in functional style (specifically, side-effect free). You'll notice how much this uncomplicates the stopwatch tasks below.

        using boost::signals2::signal;
        using boost::signals2::scoped_connection;
    }

(More shorthands)

There's a subtle point to maintaining Duration at full resolution internally. This means that no rounding errors are introduced when starting a new lap halfway a second.

The Task hierarchy

    struct Task {
        virtual ~Task() = default;
        virtual void tick() {}
    };

The foundation of our task class hierarchy.
Note that the most generic base class doesn't even presuppose any published events, which makes the framework extensible to tasks other than time measurement. Our time-related tasks share the following abstract base:struct TimerTask : Task { signal<void(Duration, Durations const&)> updateEvent; virtual void tick() = 0;};As you can see, we promise to publish events carrying one or more durations. These can be subscribed to by any view capable of displaying one or more durations.Implementing the various timer operations on this is peanuts. Let's, for example, do a random-durations generator in 3 lines of code:struct RandomTimeTask : TimerTask { virtual void tick() { updateEvent(rand()%19000 * 1s, {}); }};That might seem a cheap example, but countdown is really not much different:struct CountdownTask : TimerTask { CountdownTask(Duration d) : _deadline(Clock::now() + d) { } virtual void tick() { updateEvent(std::max(Duration{0s}, _deadline - Clock::now()), {}); } private: TimePoint _deadline;};Even the stopwatch isn't harder, at least before we add lap times:struct StopwatchTask : TimerTask { StopwatchTask() : _startlap(Clock::now()) { } virtual void tick() { updateEvent(Clock::now() - _startlap, {}); } private: TimePoint _startlap;};Adding lap times and reset(), the full-featured stopwatch task becomes:struct StopwatchTask : TimerTask { StopwatchTask() : _startlap(Clock::now()) { } virtual void tick() { updateEvent(elapsed(), _laptimes); } void lap() { _laptimes.push_back(elapsed()); _startlap = Clock::now(); } void reset() { _startlap = Clock::now(); _laptimes.clear(); } private: Duration elapsed() const { return Clock::now() - _startlap; } std::vector<Duration> _laptimes; TimePoint _startlap;};View hierarchyWe introduce one more parameter object:struct Bounds { int x, y, lines, cols;};And then we get down to business: Views will be things that have a TUI element (a panel):struct View { View(Bounds const& bounds) : _window(bounds.lines, bounds.cols, bounds.x, bounds.y) { } virtual ~View() = default; protected: mutable NCursesPanel _window;};Without delay, let's present the abstract base TimerView that subscribes to any TimerTask:struct TimerView : View { TimerView(Bounds const& bounds) : View(bounds) { } void subscribe(TimerTask& task) { _conn = task.updateEvent.connect([this](Duration const& value, Durations const& extra) { //if (value == 0s) _window.setcolor(1); update(value, extra); }); } private: scoped_connection _conn; virtual void update(Duration const& value, Durations const& extra) const = 0;};I haven't figured out how to configure colors in Curses, but you can see how simple it would be to add generic behaviour to timer views there.A simple view to implement would be the digital timer view. We already have the to_hms() and HMS::str() utilities. Let's decide that the view should show the total elapsed time regardless of lap times, and it should indicate how many laps have been recorded:struct DigitalView final : TimerView { using TimerView::TimerView; private: void update(Duration const& value, Durations const& extra) const override { _window.erase(); auto total = std::accumulate(extra.begin(), extra.end(), value); auto hms = to_hms(total).str(); hms += " [#" + std::to_string(extra.size()) + "]"; int x = 0; for (auto ch : hms) _window.CUR_addch(0, x++, ch); _window.redraw(); }};Note that I have foregone a Console-like abstraction here. Instead I used _window methods directly, which does tie the implementation to Curses.
You may want to abstract this again, probably using a buffered approach like in my other answer.The SegmentView isn't actually much more complicated. I dropped the bitset<> in favour of more obviously readable code. It has the added benefit of making the code short (except for the constants, of course).Functionally, we show the total elapsed time in a frame caption, the current lap time in the 7-segment large display, and a running list of previous lap times on the right-hand side:struct SegmentView final : TimerView { using TimerView::TimerView; private: void update(Duration const& value, Durations const& extra) const override { _window.erase(); // total times in frame caption { auto total = std::accumulate(extra.begin(), extra.end(), value); _window.frame(to_hms(total).str().c_str()); } // big digits show current lap { auto hms = to_hms(value).str(); int digits[6] { hms[0]-'0', hms[1]-'0', hms[3]-'0', hms[4]-'0', hms[6]-'0', hms[7]-'0', }; auto xpos = [](int index) { return 4 + (index * 5); }; int index = 0; for (auto num : digits) printDigit(num, xpos(index++), 1); for (auto x : {xpos(2)-1, xpos(4)-1}) for (auto y : {2, 4}) _window.CUR_addch(y, x, '.'); } // previous laptimes to the right { // print lap times int y = 1; for (auto& lap : extra) { int x = 35; _window.CUR_addch(y, x++, '0'+y); _window.CUR_addch(y, x++, '.'); _window.CUR_addch(y, x++, ' '); for (auto ch : to_hms(lap).str()) _window.CUR_addch(y, x++, ch); ++y; } } _window.redraw(); } void printDigit(int num, int x, int y) const { static char const* const s_masks[10] = { " -- " "|  |" "    " "|  |" " -- ", "    " "   |" "    " "   |" "    ", " -- " "   |" " -- " "|   " " -- ", " -- " "   |" " -- " "   |" " -- ", "    " "|  |" " -- " "   |" "    ", " -- " "|   " " -- " "   |" " -- ", " -- " "|   " " -- " "|  |" " -- ", " -- " "   |" "    " "   |" "    ", " -- " "|  |" " -- " "|  |" " -- ", " -- " "|  |" " -- " "   |" " -- ", }; if (num < 0 || num > 9) throw std::runtime_error("Cannot assign: invalid digit, must be (0 <= digit <= 9)"); for (auto l = s_masks[num]; *l; l += 4) { for (auto c = l; c < l+4; ++c) _window.CUR_addch(y, x++, *c); ++y; x-=4; } }};Behold, a thing of beauty. One should never underestimate the value of self-explanatory code (if only the constants here). The Main ApplicationAll that remains to be done is the demo application itself. It sets up some factories, and starts an input event loop to receive keyboard shortcuts.
Handling them is pretty straightforward.The stopwatch-specific operations ([l]ap and [!]reset) are the only mildly complicated ones because they will have to filter for any running tasks that might be stopwatch tasks.Launching a task without selecting one or more views to connect up will result in a beep() and nothing happening.Note how clearing the _tasks or _views automatically destroys the right subscriptions (due to the use of scoped_connection).struct DemoApp : NCursesApplication { using TaskPtr = std::unique_ptr<TimerTask>; using ViewPtr = std::unique_ptr<TimerView>; using TaskFactory = std::function<TaskPtr()>; using ViewFactory = std::function<ViewPtr()>; using ViewFactoryRef = std::reference_wrapper<ViewFactory const>; int run() { _screen.useColors(); _screen.centertext(lines - 4, "Select (multiple) views: [d]igital/[s]egmented"); _screen.centertext(lines - 3, "Launch task: [r]andom [c]ountdown s[t]opwatch [?]any"); _screen.centertext(lines - 2, "Control tasks: [l]ap [!]reset [k]ill"); _screen.centertext(lines - 1, "Other: [z]ap all views [q]uit/Esc"); ::timeout(10); // replace with set<> to disallow repeats std::multiset<ViewFactoryRef, CompareAddresses> selected_views; while (true) { TaskPtr added; switch (auto key = ::CUR_getch()) { // waits max 10ms due to timeout case 'r': added = makeRandomTimeTask(); break; case 'c': added = makeCountdownTask(); break; case 't': added = makeStopwatchTask(); break; case 'l': for (auto& t : _tasks) if (auto sw = dynamic_cast<StopwatchTask*>(t.get())) sw->lap(); break; case '!': for (auto& t : _tasks) if (auto sw = dynamic_cast<StopwatchTask*>(t.get())) sw->reset(); break; case '\x1b': case 'q': return 0; case 'k': _tasks.clear(); break; case 'z': _views.clear(); break; case 'd': selected_views.insert(makeDigitalView); break; case 's': selected_views.insert(makeSegmentView); break; default: for (auto& d : _tasks) d->tick(); } if (added) { if (selected_views.empty()) ::beep(); else { for (ViewFactory const& maker : selected_views) { _views.push_back(maker()); _views.back()->subscribe(*added); } selected_views.clear(); _tasks.push_back(std::move(added)); } } ::refresh(); } }The remainder is the setup for the DemoApp, which includes rather boring stuff like generating random view positions. private: std::mt19937 prng{ std::random_device{}() }; NCursesPanel _screen; int const lines = _screen.lines(); int const cols = _screen.cols(); int ranline() { return std::uniform_int_distribution<>(0, lines - 9)(prng); }; int rancol() { return std::uniform_int_distribution<>(0, cols - 9)(prng); }; TaskFactory makeRandomTimeTask = [] { return TaskPtr(new RandomTimeTask); }, makeCountdownTask = [] { return TaskPtr(new CountdownTask(rand()%240 * 1s)); }, makeStopwatchTask = [] { return TaskPtr(new StopwatchTask()); }, taskFactories[2] { makeRandomTimeTask, makeCountdownTask, }; ViewFactory makeDigitalView = [&] { return std::make_unique<DigitalView>(Bounds { ranline(), rancol(), 1, 13 }); }, makeSegmentView = [&] { return std::make_unique<SegmentView>(Bounds { ranline(), rancol(), 7, 48 }); }, viewFactories[2] = { makeDigitalView, makeSegmentView }; std::vector<ViewPtr> _views; std::vector<TaskPtr> _tasks; struct CompareAddresses { // in case you wanted to make selected_views unique template <typename A, typename B> bool operator()(A const& a, B const& b) const { return std::addressof(a.get())<std::addressof(b.get()); }; };};static DemoApp app;You can see a live demo here:
_webmaster.108096
We have a WordPress HTTP site which we wish to convert to HTTPS (SSL). Do we have to find all mentions of hard-coded HTTP resources, or can we simply do a 301 redirect in .htaccess, and how do we do that?We are a not-for-profit website and I have limited knowledge of WordPress and webmastering (although I am a power user of browsers and can edit basic HTML).
When converting a WordPress site from HTTP to HTTPS, do all hard-coded HTTP references need to be updated?
redirects;wordpress;https;http;conversions
null
_softwareengineering.254016
When facing new programming jargon words, I first try to reason about them from a semantic and etymological standpoint when possible (that is, when they aren't obscure acronyms). For instance, you can get the beginning of a hint of what things like Polymorphism or even Monad are about with the help of a little Greek/Latin. At the very least, once you've learned the concept, the word itself appears to go along with it well. I guess that's part of why we name things names, to make mental representations and associations more fluent. I found Functor to be a tougher nut to crack. Not so much the C++ meaning -- an object that acts (-or) as a function (funct-), but the various functional meanings (in ML, Haskell) definitely left me puzzled.From the (mathematics) Functor Wikipedia article, it seems the word was borrowed from linguistics. I think I get what a function word or functor means in that context - a word that makes function as opposed to a word that makes sense. But I can't really relate that to the notion of Functor in category theory, let alone functional programming. I imagined a Functor to be something that creates functions, or behaves like a function, or is short for functional constructor, but none of those seems to fit...How do experienced functional programmers reason about this? Do they just need any label to put in front of a concept and be fine with it? Generally speaking, isn't it partly why advanced functional programming is hard to grasp for mere mortals compared to, say, OO -- very abstract in that you can't relate it to anything familiar?Note that I don't need a definition of Functor, only an explanation that would allow me to relate it to something more tangible, if there is any.
How can I make sense of the word Functor from a semantic standpoint?
functional programming;math;theory;semantics
In category theory, Functors describe relationships between categories by describing some, or all, mappings from one category to another. There are so many philosophical ideas wrapped up in this (all beautiful) that it's difficult to say much relating it to programming or other real-world things without robbing it of some of its intrinsic completeness, but here goes.Imagine a dataset, and a set of functions that you could perform on that dataset. Now imagine another dataset (maybe looking or feeling very different from the first) and a set of functions you could perform on that dataset. A functor between the two dataset+function 'things' would map dataset+function thing 1 to dataset+function thing 2 in a meaningful way, i.e., would relate functions to functions so as to preserve some properties of the functions between things. Functors therefore express relationships between 'meanings' expressed by the functions in each 'thing'.It's almost like patterns in programming - the first dataset and function 'thing' might be 'people registered to vote' with functions/mappings like 'voting history' and 'likelihood to vote for different parties', and the second dataset and function 'thing' might be 'quark groupings in atoms' with functions like 'spin interaction prediction under different gravitational influences' and 'total quark breakdown likelihood', or some such weird stuff. A functor would map between one 'world' (a dataset and the functions on it) and another in a way that allows you to talk about how you test/express truths about functional outcomes in one world and transfer the results to another, because you know how the functions and datasets are related in a specific way.Functors map relationships between semantics, i.e., they are higher-order mechanisms for expressing commonalities between the meanings of functional systems.Sorry, that's as clear as I can express category-theoretic functors in a 'real world' summary.
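To make the 'structure-preserving mapping between worlds' idea tangible, here is a minimal sketch in Python (my own illustration, not part of the original answer): a container with an fmap operation maps plain values and the functions on them into the 'world' of wrapped values, and it must preserve identity and composition.

# A minimal functor sketch: Box wraps a value; fmap lifts a function
# from the world of plain values into the world of boxed values.
class Box:
    def __init__(self, value):
        self.value = value

    def fmap(self, f):
        # Apply f to the contents and re-wrap: functions on values
        # become functions on Boxes, preserving their relationships.
        return Box(f(self.value))

    def __repr__(self):
        return f"Box({self.value!r})"

compose = lambda f, g: lambda x: f(g(x))
inc = lambda x: x + 1
dbl = lambda x: x * 2

b = Box(10)
# Functor laws, checked by example (not proved):
assert b.fmap(lambda x: x).value == b.value                             # identity
assert b.fmap(compose(inc, dbl)).value == b.fmap(dbl).fmap(inc).value   # composition
print(b.fmap(dbl))  # Box(20)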
_unix.287643
I'm trying to compile GCC 4.8.3. I have read the documentation carefully but I'm still unable to cross-compile it for a 64-bit system. I have also gone through this guide. But my requirement is to build GCC in the /tmp/xxx directory, and the --with-sysroot & --with-native-system-header-dir flags are messing up the compilation. According to the documentation of GCC, I need to use the --with-sysroot flag along with --with-native-system-header-dir.--with-native-system-header-dir accepts a directory name where the header files are installed. In my case it is ${TOOLS_DIR}, and this should be an absolute path, which it is. And --with-sysroot requires the root path of the folder where the compilation is being done. In my case, it is ${INSTALL_DIR}.INSTALL_DIR=/tmp/gcc-compileTOOLS_DIR=${INSTALL_DIR}/toolsAccording to the Linux From Scratch guide, I should create a folder on the root system of the host. But in order to do that I would need sudo permission, which I don't have. So I thought I would compile GCC in a subdirectory rather than what was mentioned in the book (because all users get read/write permission for the /tmp directory).Now, GCC stops with an error that it cannot find the system header directory, and it tries to search in /tmp/gcc-compile/tmp/gcc-compile/tools/include, which is the wrong path.The options that I have used are:sed -i s#/tools#${TOOLS_DIR}#g ../gcc-4.8.3-pure64_specs-1.patchpatch -Np1 -i ../gcc-4.8.3-branch_update-1.patchpatch -Np1 -i ../gcc-4.8.3-pure64_specs-1.patchprintf '\n#undef STANDARD_STARTFILE_PREFIX_1\n#define STANDARD_STARTFILE_PREFIX_1 %s/lib/\n' ${TOOLS_DIR} >> gcc/config/linux.hprintf '\n#undef STANDARD_STARTFILE_PREFIX_2\n#define STANDARD_STARTFILE_PREFIX_2 \n' >> gcc/config/linux.hmkdir ${BUILD_DIR} &&cd ${BUILD_DIR} &&AR=ar LDFLAGS=-Wl,-rpath,${CROSS_DIR}/lib \../configure --prefix=${CROSS_DIR} \ --build=${HOST} \ --target=${TARGET} \ --host=${HOST} \ --with-sysroot=${INSTALL_DIR} \ --with-local-prefix=${TOOLS_DIR} \ --with-native-system-header-dir=${TOOLS_DIR}/include \ --disable-nls \ --disable-static \ --enable-languages=c,c++ \ --enable-__cxa_atexit \ --enable-threads=posix \ --disable-multilib \ --with-mpc=${CROSS_DIR} \ --with-mpfr=${CROSS_DIR} \ --with-gmp=${CROSS_DIR} \ --with-cloog=${CROSS_DIR} \ --with-isl=${CROSS_DIR} \ --with-system-zlib \ --enable-checking=release \ --enable-libstdcxx-timeWhile setting the options, I have written ${TOOLS_DIR}/include, so why is GCC trying to look in ${INSTALL_DIR}/${TOOLS_DIR}/include? Can somebody point me in the right direction?OUTPUTThe directory that should contain system headers does not exist:/tmp/gcc-compile/tmp/gcc-compile/tools/includemake[2]: *** [stmp-fixinc] Error 1make[2]: *** Waiting for unfinished jobs....rm gcc.podmake[2]: Leaving directory `/tmp/gcc-compile/cross-compile-tools/gcc- final/gcc-4.8.3/gcc-build/gcc'make[1]: *** [all-gcc] Error 2make[1]: Leaving directory `/tmp/gcc-compile/cross-compile-tools/gcc-final/gcc-4.8.3/gcc-build'make: *** [all] Error 2
GCC 4.8 compilation error: cannot find the system header directory
compiling;gcc;cross compilation
null
_softwareengineering.267053
I am working on a large C++ project. It consists of a server that exposes a REST API, providing a simple and user-friendly interface for a very broad system comprising many other servers. The codebase is quite large and complex, and evolved through time without a proper design upfront. My task is to implement new features and refactor/fix the old code in order to make it more stable and reliable.At the moment, the server creates a number of long-living objects that are never terminated nor disposed when the process terminates. This makes Valgrind almost unusable for leak detection, as it is impossible to distinguish the thousands of (questionably) legitimate leaks from the dangerous ones.My idea is to ensure that all objects are disposed before termination, but when I made this proposal, my colleagues and my boss opposed me, pointing out that the OS is going to free that memory anyway (which is obvious to everybody) and disposing the objects will slow down the shutdown of the server (which, at the moment, is basically a call to std::exit). I replied that having a clean shutdown procedure does not necessarily imply that one must use it. We can always call std::quick_exit or just kill -9 the process if we feel impatient.They replied that most Linux daemons and processes don't bother freeing up memory at shutdown. While I can see that, it is also true that our project does need accurate memory debugging, as I have already found memory corruption, double frees and uninitialised variables.What are your thoughts? Am I pursuing a pointless endeavour? If not, how can I convince my colleagues and my boss? If so, why, and what should I do instead?
Correctly disposing objects upon server termination
c++;debugging;memory
null
_unix.264021
I was trying to dockerize (into Debian 8.2) an OpenVPN server (yes, I do know there already are such containers) but something went wrong inside the container and the server failed to start.I decided to inspect the logs, but /var/log/syslog (where OpenVPN logs on my host machine) was missing inside the container.I thought that rsyslog was not installed and added its installation before the OpenVPN installation in the Dockerfile. But this had no effect; the syslog was still missing.My Dockerfile is:FROM debian:8.2USER rootEXPOSE 53/udpEXPOSE 1194/udpEXPOSE 443/tcpRUN apt-get updateRUN apt-get install -y rsyslogRUN apt-get install -y openvpn# ...# Some configuration stuff# ...ENTRYPOINT service openvpn start && shThe questions are:Why does OpenVPN log to syslog after a default installation on my host Debian 8.2 but not inside a container? I didn't configure anything on my host machine to force OpenVPN to log to syslog. It was the default behavior.How do I configure logging of the OpenVPN server running inside a Docker container?
How to configure logging inside a Docker container?
debian;syslog;docker;rsyslog;containers
null
_webapps.20869
So, after gratuitous theme installations and customizations, my Tumblr site just isn't looking how I want it to. I've decided to just do the theme myself, but now I have one on there with all this extra stuff.How can I go back to the default layout so I have a clean slate to work on?
Go Back to Default Tumblr Layout
tumblr;tumblr themes
Go to http://www.tumblr.com/customize (you can get there by clicking the cog wheel in the backend and then Customize your blog). Click Themes in the upper left, then search for and click on Optica, which is the default theme. Click on Use to apply the theme.
_unix.190999
You already know my question. I don't have su authority, so I want to know the su password. I really (x100) don't know how to find it. I tried changing the configuration (/etc/pam.d/su) and tried to delete auth sufficient pam_wheel.so trust. But I could not do it, because I don't have su authority. :-( So... could you do me a favor?
How to know su password
security;root
null
_unix.136335
I had two CentOS 6.5 servers that I was running using the Plesk control panel. I have moved and decided not to use them any more, but just buy my hosting instead. My new ISP blocks port 80 and the cost to get it unblocked from them is insane. I took out the server HDD and am trying to use a Fedora 12 Live CD to just get the website files backed up. The issue I'm having is that the folders I need access to are all locked out. The error says I do not have permission to view the folder. When I go to the permissions tab I'm told I'm not the owner. I'm not good with command lines, so is there a way to make myself the owner from the interface?
Trying To Get Data From Server HDD
fedora;permissions;data recovery
null
_cs.60987
In the past I have thought a bit about how to register a NIR image and a thermal image and noticed that this is not trivial - one statement was that if I had the depth information for each pixel, the task would be much easier.Now consider having a 3D-NIR camera (e.g. Asus Xtion) and a thermal camera, where I want to map the thermal information to the depth map (the 3D cloud) or vice versa, and the NIR information to the thermal information (and vice versa). I thought I could do it simply like this:Conduct a stereo camera calibration for the two cameras (-> R1, R2, T)Use R1, R2 and T to transform the 3D points from the 3D-NIR to the thermal camera's coordinate systemUse the camera matrix of the thermal camera to project the 3D points to its image planeNow I know where the 3D points fall in the thermal image, which gives me a depth value (and intensity value, because the depth map and NIR image are aligned) for each pixel in the thermal image.This sounds similar to the procedure described in this answer by D.W. However, I can't find any article describing this method; there always seems to be some form of feature-matching step, e.g. here.Also, thinking about this, the above solution cannot really work, I believe. Consider the case where the two cameras look at an object from different sides. The 3D camera will calculate the depth for a point p1 in the world. When projecting p1 to the thermal camera's image plane it will fall in pixel x. However, since the thermal camera looked at another side of the object, another point in the world, p2, actually formed pixel x when the thermal camera took the image - and that's what the measured temperature in x is for (i.e., p1 and p2 roughly lie on a line viewed from the thermal camera).So, all in all:Is my statement that the outlined solution can't always work correct?Under which circumstances does the outlined solution work?What do I need to keep in mind when building the setup?How else to go about this problem? Thanks for reading this long post!
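The pipeline described above (calibrate, transform, project) can be sketched concretely; here is a minimal illustration in Python/NumPy, where the calibration values and function names are placeholders, not data from the question:

import numpy as np

# Assumed calibration results: R, t map depth-camera coordinates into
# the thermal camera's frame; K_t is the assumed thermal intrinsics.
R = np.eye(3)                     # placeholder rotation
t = np.array([0.05, 0.0, 0.0])    # placeholder 5 cm baseline
K_t = np.array([[500.0, 0, 320],
                [0, 500.0, 240],
                [0, 0, 1]])

def project_to_thermal(points_3d):
    """points_3d: (N, 3) points in the depth camera's frame."""
    p_thermal = points_3d @ R.T + t   # step 2: change of coordinate frame
    uvw = p_thermal @ K_t.T           # step 3: pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]     # normalize by depth
    # Caveat: several 3D points can project onto the same pixel; keeping
    # only the smallest depth per pixel (a z-buffer) addresses the p1/p2
    # occlusion case raised in the question.
    return uv, p_thermal[:, 2]        # pixel coords + depth in thermal frame

uv, z = project_to_thermal(np.array([[0.1, 0.0, 1.0]]))
print(uv, z)  # [[395. 240.]] [1.]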
Registering 3D-NIR image to thermal image and vice versa
computer vision
null
_datascience.11695
My problem has three categorical variables C1, C2, C3 and one continuous variable X, predicting a continuous outcome Y. I can visualize the problem with the following reproducible code (apologies for the badly written code):library(data.tree)library(plyr)library(ggplot2)i = expand.grid(c("A","B"),c("C","D"),c("E","F"))i = i[order(i[,1],i[,2],i[,3]),]i[,4] = c(1:8)t = expand.grid(c("A","B"),c("C","D"),c("E","F"),seq(0,1,0.1))t = t[order(t[,1],t[,2],t[,3]),]t = join(t,i, by = c("Var1","Var2","Var3"))t$Var5 = runif(nrow(t))t$pathString <- with(t, paste("Tree", Var1, Var2, Var3, V4, sep="/"))plot_tree <- as.Node(t)plot(plot_tree)ggplot(data = t, aes(x=Var4, y=Var5)) + geom_line() + facet_grid(~V4)The three categorical variables show 8 possible path combinations:And within each path there is a distribution of Y depending on a continuous variable X:I would like to bin both the continuous Y and X variables such that I am left with a more concise decision tree. There is method to this madness, as in my actual problem some categorical paths will become insignificant due to no movement in the predicted Y beyond the established volatility threshold of 5%.I can manually obtain bins by brute force, but is there a binning algorithm that supports binning of both predicted and predictor variables? I have looked at smbinning, and my knowledge in this area of algorithms is limited. My actual problem has many categorical and continuous variables resulting in a more complex structure, but I would like to convert it to a decision tree result which will help me understand the significance and movement of variables better.
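One standard technique for binning a continuous predictor against a continuous response is supervised binning: fit a shallow regression tree of Y on X and use its split points as bin edges. Below is a minimal sketch in Python/scikit-learn (synthetic data; an illustration only, not code from the question, which is in R):

# Response-driven binning: the tree places splits where Y actually moves,
# so flat regions collapse into a single bin.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)
y = np.where(x < 0.3, 0.2, np.where(x < 0.7, 0.5, 0.9)) + rng.normal(0, 0.05, 500)

tree = DecisionTreeRegressor(max_leaf_nodes=4).fit(x.reshape(-1, 1), y)
# Internal nodes hold real thresholds; leaves are marked with -2.
thresholds = np.sort(tree.tree_.threshold[tree.tree_.threshold > -2])
edges = np.concatenate(([x.min()], thresholds, [x.max()]))
print(edges)                        # bin edges chosen to explain movement in y
bins = np.digitize(x, thresholds)   # bin index per observation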
Binning of Continous Predictor and Predicted Variables
r;data mining;decision trees
null
_unix.248265
Related to my question about awk being ignored by cron, are there any alternatives to awk? This is the line in question:for dirlist in `ls -l $WEBFOLDER | awk '$1 ~ /d/ {print $10 }' `I don't know awk, so I don't understand the $1 ~ /d/ part, but I think it matches the lines whose first field contains a d (the directories) and prints the 10th column of the ls -l result. As I can't use awk at the moment, is there an alternative way of getting the directory names without using awk?EDIT: The line above only outputs the names. No lines or dots, just the names.
Alternative to script command getting directory names (using ls & awk)
shell script
Just a simple for loop:for dir in "$WEBFOLDER"/*/; do basename "$dir"; doneIf you also want directories starting with a dot:for dir in "$WEBFOLDER"/.*/ "$WEBFOLDER"/*/; do basename "$dir"; done
_softwareengineering.116541
Here's an interesting discussion of Tennent's Correspondence Principle, and a brief description from Neal Gafter:The principle dictates that an expression or statement, when wrapped in a closure and then immediately invoked, ought to have the same meaning as it did before being wrapped in a closure. Any change in semantics when wrapping code in a closure is likely a flaw in the language.Does the Groovy language follow this principle?
Does Groovy follow Tennent's Correspondence Principle?
language design;groovy;closures
null
_unix.137492
I'm trying to get a C application to load shared objects from a relative directory regardless of where I call it from. So far it only works if I'm in the same directory as the executable when I call it:~/prog$ ./my_programSuccess~/prog$ cd ..~$ ./prog/my_program./prog/my_program: error while loading shared libraries: libs/libmysharedobject.so: cannot open shared object file: No such file or directoryAs you can guess from the output above, the shared object is stored under the ~/prog/libs/ directory. Here's what the relevant gcc calls look like:gcc -std=c99 -ggdb -Wall -pedantic -Isrc -fPIC -shared -Wl,-soname,libs/libmysharedobject.so -o libs/libmysharedobject.so libs/mysharedobject.c[...]gcc [CFLAGS omitted] -o my_program main.c build/src/my_program.o build/src/common.o -lm -Llibs -lmysharedobjectHere's the top of the output from readelf -d my_program:Dynamic section at offset 0x6660 contains 26 entries: Tag Type Name/Value 0x0000000000000001 (NEEDED) Shared library: [libm.so.6] 0x0000000000000001 (NEEDED) Shared library: [libs/libmysharedobject.so] 0x0000000000000001 (NEEDED) Shared library: [libc.so.6]I have tried adding -Wl,-z,origin,-rpath='$ORIGIN', which causes the following lines to show up in readelf's output: 0x000000000000000f (RPATH) Library rpath: [$ORIGIN][...] 0x000000006ffffffb (FLAGS_1) Flags: ORIGINBut it doesn't seem to solve my problem. I've also tried setting rpath to $ORIGIN/libs, ., and ./libs, all to no avail. (UPDATE: $(CURDIR) doesn't have any effect either. This surprises me, since it's expanded to an absolute path.)Is there a way to get my executable to find its shared objects regardless of the directory from which it's invoked, preferably without having the end user set LD_LIBRARY_PATH every time? Or am I trying to do something that Linux doesn't support?
Load shared objects relative to executable path
compiling;dynamic linking
null
_unix.328882
I have an array containing some elements, but I want to push new items to the beginning of the array. How can I achieve this?
How to add a new value to the beginning of an array in bash?
bash;shell script;array
null
_codereview.15539
/*Create a random maze*/package mainimport ( fmt math/rand time)const ( mazewidth = 15 mazeheight = 15)type room struct { x, y int}func (r room) String() string { return fmt.Sprintf((%d,%d), r.x, r.y)}func (r room) id() int { return (r.y * mazewidth) + r.x}// whetwher walls are open or not.// There are (num_rooms * 2) walls. Some are on borders, but nevermind them ;)type wallregister [mazewidth * mazeheight * 2]boolvar wr = wallregister{}// rooms are visited or nottype roomregister [mazewidth * mazeheight]boolvar rr = roomregister{}func main() { rand.Seed(time.Now().Unix()) stack := make([]room, 0, mazewidth*mazeheight) start := room{0, 0} // mark start position visited rr[start.id()] = true // put start position on stack stack = append(stack, room{0, 0}) for len(stack) > 0 { // current node is in top of the stack current := stack[len(stack)-1] // Slice of neighbors we can move availneighbrs := current.nonvisitedneighbors() // cannot move. Remove this room from stack and continue if len(availneighbrs) < 1 { stack = stack[:len(stack)-1] continue } // pick a random room to move. next := availneighbrs[rand.Intn(len(availneighbrs))] // mark next visited rr[next.id()] = true // open wall between current and next: first, second := orderrooms(current, next) // second is either at the right or bottom of first. if second.x == first.x+1 { wr[first.id()*2] = true } else if second.y == first.y+1 { wr[first.id()*2+1] = true } else { // probably impossible or maybe not... panic(Wot?!?) } // push next to stack stack = append(stack, next) } // print maze // print upper border for x := 0; x < mazewidth; x++ { if x == 0 { fmt.Printf( ) } else { fmt.Printf(_ ) } } fmt.Println() for y := 0; y < mazeheight; y++ { fmt.Printf(|) // left border for x := 0; x < mazewidth; x++ { id := room{x, y}.id() right := | bottom := _ if wr[id*2] { right = } if wr[id*2+1] { bottom = } if x == mazewidth-1 && y == mazeheight-1 { right = } fmt.Printf(%s%s, bottom, right) } fmt.Println() }}// return slice of neighbor roomsfunc (r room) neighbors() []room { rslice := make([]room, 0, 4) if r.x < mazewidth-1 { rslice = append(rslice, room{r.x + 1, r.y}) } if r.x > 0 { rslice = append(rslice, room{r.x - 1, r.y}) } if r.y < mazeheight-1 { rslice = append(rslice, room{r.x, r.y + 1}) } if r.y > 0 { rslice = append(rslice, room{r.x, r.y - 1}) } return rslice}// return rooms that are not visited yetfunc (r room) nonvisitedneighbors() []room { rslice := make([]room, 0, 4) for _, r := range r.neighbors() { if rr[r.id()] == false { rslice = append(rslice, r) } } return rslice}// order to rooms by closeness to origin (upperleft)func orderrooms(room1, room2 room) (room, room) { dist1 := room1.x*room1.x + room1.y*room1.y dist2 := room2.x*room2.x + room2.y*room2.y if dist1 < dist2 { return room1, room2 } return room2, room1}http://play.golang.org/p/8W_FbBfUjb (You can run it here. But since time.Now() is fixed there, you will always get same maze.)In any aspect of it, how does it look?
Creating random maze in Go
random;go
null
_codereview.75799
This is my code, for calculating a GPA for 7 subjects. It works, but is there a better way? Any hints on making it more flexible?from __future__ import divisionimport stringprint This program will calculate a Semester GPA for a given set of courses. Enter 0 in all inputs, if you want to skip extra courses.\ncname1 = raw_input(First course name: )while True: cred1 = raw_input(First course credit: ) try: i = int(cred1) break except ValueError: print 'Invalid input, Should be an positive interger'grade1 = raw_input(First course grade: )choice1 = grade1while choice1 not in [A+, a+, A, a,A-, a-, B+, b+, B, b, B-, b-, C+, c+, C, c,C-, c-, D+, d+, D, d, D-, d-, FAIL, fail]: print 'Invalid choice' grade1 = raw_input(First course grade: ) users_turn = choice1 in [A+, a+, A, a,A-, a-, B+, b+, B, b, B-, b-, C+, c+, C, c,C-, c-, D+, d+, D, d, D-, d-, FAIL, fail] choice1 = grade1cname2 = raw_input(Second course name: )while True: cred2 = raw_input(Second course credit: ) try: i = int(cred2) break except ValueError: print 'Invalid input, Should be an positive interger'grade2 = raw_input(Second course grade: )choice2 = grade2while choice2 not in [A+, a+, A, a,A-, a-, B+, b+, B, b, B-, b-, C+, c+, C, c,C-, c-, D+, d+, D, d, D-, d-, FAIL, fail]: print 'Invalid choice' grade2 = raw_input(Second course grade: ) users_turn = choice2 in [A+, a+, A, a,A-, a-, B+, b+, B, b, B-, b-, C+, c+, C, c,C-, c-, D+, d+, D, d, D-, d-, FAIL, fail] choice2 = grade2cname3 = raw_input(Third course name: )while True: cred3 = raw_input(Third course credit: ) try: i = int(cred3) break except ValueError: print 'Invalid input, Should be an positive interger'grade3 = raw_input(Third course grade: )choice3 = grade3while choice3 not in [A+, a+, A, a,A-, a-, B+, b+, B, b, B-, b-, C+, c+, C, c,C-, c-, D+, d+, D, d, D-, d-, FAIL, fail]: print 'Invalid choice' grade3 = raw_input(Third course grade: ) users_turn = choice3 in [A+, a+, A, a,A-, a-, B+, b+, B, b, B-, b-, C+, c+, C, c,C-, c-, D+, d+, D, d, D-, d-, FAIL, fail] choice3 = grade3cname4 = raw_input(Fourth course name: )while True: cred4 = raw_input(Fourth course credit: ) try: i = int(cred4) break except ValueError: print 'Invalid input, Should be an positive interger'grade4 = raw_input(Fourth course grade: )choice4 = grade4while choice4 not in [A+, a+, A, a,A-, a-, B+, b+, B, b, B-, b-, C+, c+, C, c,C-, c-, D+, d+, D, d, D-, d-, FAIL, fail]: print 'Invalid choice' grade4 = raw_input(Fourth course grade: ) users_turn = choice4 in [A+, a+, A, a,A-, a-, B+, b+, B, b, B-, b-, C+, c+, C, c,C-, c-, D+, d+, D, d, D-, d-, FAIL, fail] choice4 = grade4cname5 = raw_input(Fifth course name: )while True: cred5 = raw_input(Fifth course credit: ) try: i = int(cred5) break except ValueError: print 'Invalid input, Should be an positive interger'grade5 = raw_input(Fifth course grade: )choice5 = grade5while choice5 not in [A+, a+, A, a,A-, a-, B+, b+, B, b, B-, b-, C+, c+, C, c,C-, c-, D+, d+, D, d, D-, d-, FAIL, fail]: print 'Invalid choice' grade5 = raw_input(Fifth course grade: ) users_turn = choice5 in [A+, a+, A, a,A-, a-, B+, b+, B, b, B-, b-, C+, c+, C, c,C-, c-, D+, d+, D, d, D-, d-, FAIL, fail] choice5 = grade5cname6 = raw_input(Sixth course name: )while True: cred6 = raw_input(Sixth course credit: ) try: i = int(cred6) break except ValueError: print 'Invalid input, Should be an positive interger'grade6 = raw_input(Sixth course grade: )choice6 = grade6while choice6 not in [A+, a+, A, a,A-, a-, B+, b+, B, b, B-, b-, C+, c+, C, c,C-, c-, D+, d+, D, d, D-, d-, FAIL, fail]: print 'Invalid 
choice' grade6 = raw_input(Sixth course grade: ) users_turn = choice6 in [A+, a+, A, a,A-, a-, B+, b+, B, b, B-, b-, C+, c+, C, c,C-, c-, D+, d+, D, d, D-, d-, FAIL, fail] choice6 = grade6cname7 = raw_input(Seventh course name: )while True: cred7 = raw_input(Seventh course credit: ) try: i = int(cred7) break except ValueError: print 'Invalid input, Should be an positive interger'grade7 = raw_input(Seventh course grade: )choice7 = grade7while choice7 not in [A+, a+, A, a,A-, a-, B+, b+, B, b, B-, b-, C+, c+, C, c,C-, c-, D+, d+, D, d, D-, d-, FAIL, fail]: print 'Invalid choice' grade7 = raw_input(Seventh course grade: ) users_turn = choice7 in [A+, a+, A, a,A-, a-, B+, b+, B, b, B-, b-, C+, c+, C, c,C-, c-, D+, d+, D, d, D-, d-, FAIL, fail] choice7 = grade7totalGPA = 0.0overallGPA = 0.0cred1i = string.atoi(cred1)cred2i = string.atoi(cred2)cred3i = string.atoi(cred3)cred4i = string.atoi(cred4)cred5i = string.atoi(cred5)cred6i = string.atoi(cred6)cred7i = string.atoi(cred7)if grade1 in {A+, a+, A, a}: c1points = (4.0*cred1i)elif grade1 in {A-, a-}: c1points = (3.67*cred1i)elif grade1 in {B+, b+}: c1points = (3.33*cred1i)elif grade1 in {B, b}: c1points = (3.0*cred1i)elif grade1 in {B-, b-}: c1points = (2.67*cred1i)elif grade1 in {C+, c+}: c1points = (2.33*cred1i)elif grade1 in {C, c}: c1points = (2.0*cred1i)elif grade1 in {C-, c-}: c1points = (1.67*cred1i)elif grade1 in {D+, d+}: c1points = (1.33*cred1i)elif grade1 in {D, d}: c1points = (1.0*cred1i)else: c1points = 0.0if grade2 in {A+, a+, A, a}: c2points = (4.0*cred2i)elif grade2 in {A-, a-}: c2points = (3.67*cred2i)elif grade2 in {B+, b+}: c2points = (3.33*cred2i)elif grade2 in {B, b}: c2points = (3.0*cred2i)elif grade2 in {B-, b-}: c2points = (2.67*cred2i)elif grade2 in {C+, c+}: c2points = (2.33*cred2i)elif grade2 in {C, c}: c2points = (2.0*cred2i)elif grade2 in {C-, c-}: c2points = (1.67*cred2i)elif grade2 in {D+, d+}: c2points = (1.33*cred2i)elif grade2 in {D, d}: c2points = (1.0*cred2i)else: c2points = 0.0if grade3 in {A+, a+, A, a}: c3points = (4.0*cred3i)elif grade3 in {A-, a-}: c3points = (3.67*cred3i)elif grade3 in {B+, b+}: c3points = (3.33*cred3i)elif grade3 in {B, b}: c3points = (3.0*cred3i)elif grade3 in {B-, b-}: c3points = (2.67*cred3i)elif grade3 in {C+, c+}: c3points = (2.33*cred3i)elif grade3 in {C, c}: c3points = (2.0*cred3i)elif grade3 in {C-, c-}: c3points = (1.67*cred3i)elif grade3 in {D+, d+}: c3points = (1.33*cred3i)elif grade3 in {D, d}: c3points = (1.0*cred3i)else: c3points = 0.0if grade4 in {A+, a+, A, a}: c4points = (4.0*cred4i)elif grade4 in {A-, a-}: c4points = (3.67*cred4i)elif grade4 in {B+, b+}: c4points = (3.33*cred4i)elif grade4 in {B, b}: c4points = (3.0*cred4i)elif grade4 in {B-, b-}: c1points = (2.67*cred4i)elif grade4 in {C+, c+}: c4points = (2.33*cred4i)elif grade4 in {C, c}: c4points = (2.0*cred4i)elif grade4 in {C-, c-}: c1points = (1.67*cred4i)elif grade4 in {D+, d+}: c4points = (1.33*cred4i)elif grade4 in {D, d}: c4points = (1.0*cred4i)else: c4points = 0.0if grade5 in {A+, a+, A, a}: c5points = (4.0*cred5i)elif grade5 in {A-, a-}: c5points = (3.67*cred5i)elif grade5 in {B+, b+}: c5points = (3.33*cred5i)elif grade5 in {B, b}: c5points = (3.0*cred5i)elif grade5 in {B-, b-}: c5points = (2.67*cred5i)elif grade5 in {C+, c+}: c5points = (2.33*cred5i)elif grade5 in {C, c}: c5points = (2.0*cred5i)elif grade5 in {C-, c-}: c5points = (1.67*cred5i)elif grade5 in {D+, d+}: c5points = (1.33*cred5i)elif grade5 in {D, d}: c5points = (1.0*cred5i)else: c5points = 0.0if grade6 in {A+, a+, A, a}: c6points = 
(4.0*cred6i)elif grade6 in {A-, a-}: c6points = (3.67*cred6i)elif grade6 in {B+, b+}: c6points = (3.33*cred6i)elif grade6 in {B, b}: c6points = (3.0*cred6i)elif grade6 in {B-, b-}: c6points = (2.67*cred6i)elif grade6 in {C+, c+}: c6points = (2.33*cred6i)elif grade6 in {C, c}: c6points = (2.0*cred6i)elif grade6 in {C-, c-}: c6points = (1.67*cred6i)elif grade6 in {D+, d+}: c6points = (1.33*cred6i)elif grade6 in {D, d}: c6points = (1.0*cred6i)else: c6points = 0.0if grade7 in {A+, a+, A, a}: c7points = (4.0*cred7i)elif grade7 in {A-, a-}: c7points = (3.67*cred7i)elif grade7 in {B+, b+}: c7points = (3.33*cred7i)elif grade7 in {B, b}: c7points = (3.0*cred7i)elif grade7 in {B-, b-}: c7points = (2.67*cred7i)elif grade1 in {C+, c+}: c7points = (2.33*cred7i)elif grade7 in {C, c}: c7points = (2.0*cred7i)elif grade7 in {C-, c-}: c7points = (1.67*cred7i)elif grade7 in {D+, d+}: c7points = (1.33*cred7i)elif grade7 in {D, d}: c7points = (1.0*cred7i)else: c7points = 0.0totalCredits = cred1i+cred2i+cred3i+cred4i+cred5i+cred6i+cred7ioverallGPA = (c1points + c2points + c3points + c4points + c5points + c7points + c7points)/totalCreditscname1 = cname1.ljust(15)cred1 = cred1.center(9)grade1 = grade1.center(6)cname2 = cname2.ljust(15)cred2 = cred2.center(9)grade2 = grade2.center(6)cname3 = cname3.ljust(15)cred3 = cred3.center(9)grade3 = grade3.center(6)cname4 = cname4.ljust(15)cred4 = cred4.center(9)grade4 = grade4.center(6)cname5 = cname5.ljust(15)cred5 = cred5.center(9)grade5 = grade5.center(6)cname6 = cname6.ljust(15)cred6 = cred6.center(9)grade6 = grade6.center(6)cname7 = cname7.ljust(15)cred7 = cred1.center(9)grade7 = grade7.center(6)print COURSE CREDITS GRADE \nprint ------ ------- ----- \nprint '%s%s%s' % (cname1, cred1, grade1)print '%s%s%s' % (cname2, cred2, grade2)print '%s%s%s' % (cname3, cred3, grade3)print '%s%s%s' % (cname4, cred4, grade4)print '%s%s%s' % (cname5, cred5, grade5)print '%s%s%s' % (cname6, cred6, grade6)print '%s%s%s' % (cname7, cred7, grade7)print SEMESTER GPA = %.2f % (overallGPA)
Getting a 4.0 GPA
python;python 2.7
null
_webapps.44567
I currently have a free Google Apps for your Domain account, with only one email address in there. I will soon be moving from Google Mail to another hosted email solution (such as a hosted Exchange server).As a result of this, I want to remove the Google Apps part from my domain, but retain all of the other services I use, such as Analytics and Webmaster Tools. This would need to move to a Google Account, with the same name as my current Google Apps account.How can I do this?
How to Migrate Google Apps account to regular Google Account
google apps;google analytics;google account
Unfortunately, I could not find a way to do this. It would seem such a feature has not been built by Google.I ended up moving my mail account, linking my Analytics profiles to a separate Gmail address, and then destroying my Google Apps domain. Not a great solution, but thankfully it worked, as it turns out I didn't use too many other Google services.
_datascience.16485
I'm trying to build a recommender based on user history from e-commerce. There are two (potentially more) types of events: purchase and view.Is it okay to sum up the number of purchases and views for a given item (with purchase and view having different weights)? Or will I just mix up user intent this way?
Recommender System: how to treat different events
recommender system
I assume you're using an implicit-feedback recommender approach like ALS. Otherwise, summing data points generally won't make sense, such as if you're feeding it to a recommender that expects ratings.The input to implicit ALS is, conceptually, weighted user-item pairs. Therefore it makes sense to perhaps use a sum of user-item clicks as the weight. Summing makes sense.However, do a purchase and a view carry the same weight? Obviously not. A purchase is a much stronger association and should be weighted accordingly.As to how much, I'd suggest weighting purchases simply by price, to start. Then weight clicks by price times the purchase-to-click ratio. A $10 item purchase is weight 10; if 1 in 200 clicks results in a purchase, then weight a click 10 * (1/200) = 0.05.This is crude, but probably about as close as anything for capturing this info in the context of ALS.
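As a concrete sketch of that weighting scheme in Python (my own illustration; the event log, price, and ratio are assumed values, not from the answer):

# Build (user, item, weight) triples for an implicit-feedback model.
# Purchases are weighted by price; clicks by price * purchase-to-click ratio.
from collections import defaultdict

events = [  # (user, item, event_type), hypothetical data
    ("u1", "book", "view"), ("u1", "book", "view"), ("u1", "book", "purchase"),
    ("u2", "book", "view"),
]
price = {"book": 10.0}
purchase_to_click = 1 / 200  # assumed ratio: 1 purchase per 200 clicks

weights = defaultdict(float)
for user, item, kind in events:
    if kind == "purchase":
        weights[(user, item)] += price[item]                       # 10.0
    else:  # view/click
        weights[(user, item)] += price[item] * purchase_to_click   # 0.05

print(dict(weights))
# {('u1', 'book'): 10.1, ('u2', 'book'): 0.05}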
_unix.154229
I have 2 computers, both with Gigabit controllers, connected with a regular Ethernet cable. I have a host machine, which is connected to the internet, and a client machine, which is only connected to the host machine. I would like to share the host's internet connection with the client machine. I have read online, and I found out the best way to do this is by opening Network Manager's GUI on my host machine, editing my Ethernet connection and setting the IPv4 settings to Shared to other computers. On my client machine, I have it set to automatically get an IP address using DHCP. My host machine does not detect the cable as being plugged in under these settings, and my client machine reports that it cannot connect to the network. How do I properly share my internet connection from the host machine to my client machine, preferably using Network Manager?
How to share internet connection over Ethernet using Network Manager?
networkmanager;internet
null
_softwareengineering.342772
List the specific properties code should have so that it can run in parallel. For example, if a block of code does not change shared state, it should be able to be run on another thread. What other such attributes exist? I would imagine there to be only a small number of such concerns by which to evaluate whether code is a good candidate for parallelization, so what are they? For example, when creating execution plans, SQL Server must use tests like these to decide, for each query section, whether to run it serially or allow it to run on another CPU. All I'm looking for is the list of theoretical rules, nothing implementation-specific.
What specific attributes make code able to be executed in parallel?
parallel programming
null
_codereview.18862
I have a lot of variables that I need to check for exceptions, outputting an empty field in the case of a null return value (I am reading a calendar list from SharePoint and my program is supposed to send email notifications if some of the conditions are met).I surrounded my variables with a try-catch with a generic output of variable not found. Here is an example:try{var name = item["Name"].ToString(); var dueDate = item["Due Date"].ToString();..//more variables. var Title = item["Title"].ToString(); }catch (Exception ex) { Console.WriteLine(); Console.WriteLine(ex.Message); //generic message for now Console.WriteLine(); }Is there a way to handle this better? I know that going to each individual line and adding an exception is an option, but it would save me a lot of time if I can just output something like the variable name.
Handling null exception when having multiple variables
c#;object oriented;exception
Refactor into a method:var name = this.GetVariable("Name"); var dueDate = this.GetVariable("Due Date");..//more variables. var Title = this.GetVariable("Title"); private string GetVariable(string name){ try { return item[name].ToString(); } catch (Exception ex) { Console.WriteLine(); Console.WriteLine(name + " not found."); Console.WriteLine(); return null; }}
_cs.67429
For example, for the sw instruction in MIPS, the control signal values areALUOp1: 0 ALUOp0: 0 RegWrite: 0 MemRead: x MemWrite: 1 Branch: 0 ALUsrc: 1 RegDest: x MemToReg: xWhy do MemRead, RegDest and MemToReg have the value x? Why not just 0?
For data-path cycles in MIPS, what determines whether a control signal gets the don't-care value?
cpu pipelines
null
_unix.331059
I have one bash script for installing WordPress and want to add a MySQL installer to it as well, so the database can be created easily.#!/bin/bashdr=$1db=$2zDir=latest.zipwpInstall=https://wordpress.org/$zDireval mkdir -p $dr && cd $dr && wget $wpInstall && unzip $zDir && cp -r $dr/wordpress/* $dr && sudo chmod -R 0777 $dr && sudo chmod -R 0777 $dr/* && rm -rf $dr/wordpress && rm -f -r $zDir && echo WordPress installatio$echo "CREATE DATABASE IF NOT EXISTS `$db` DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci" | mysql -u root -pHere I have one really weird problem:When I run the script like this:$ ./install-wp.sh /var/www/test project_somethingthe installer installs WordPress and places all files in the right place, but when the part with MySQL starts I get 2 errors.The first is:> ./install-wp.sh: line 13: project_something: command not foundand the second, after the MySQL password is entered:ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci' at line 1How to fix this?
How to create new MySQL database with encoding via BASH script
shell script;mysql
null
_unix.204501
An excerpt from the man man page:The default action is to search in all of the available sections following a pre-defined order (1 n l 8 3 2 3posix 3pm 3perl 5 4 9 6 7 by default)What are the n, l and 3pm sections of the manual for?
What are the 'n', 'l', '3pm' sections of the manual for?
perl;man;posix;standard
The 3pm section is not used anymore. It is defined as manual pages concerning module packages of Perl in an old version of the Debian Perl Policy, notably in version 1.2. Here is a site where you can read that old deprecated policy (see 3.1 and 1.4). In the latest Debian Perl Policy it is defined in 2.4 that module manual pages should be installed in section 3perl now.In the standards manual page you see that the POSIX.1-2001 (SUSv3) standard has defined manual page sections called 1p and 3p. It is also mentioned on Wikipedia that p is a subsection that describes POSIX specifications. 3p is also an abbreviation of 3posix, which means the same.The sections n and l seem to appear only in IBM's AIX. n specifies new and l specifies local.Local (l) manual pages describe usage policies, administrative contacts, special local software, and other information unique to this particular installation. New (n) manual pages are for newly installed software. They stay for an amount of time in n before they get moved to their permanent section. In Linux those two sections and also the o section (old) are deprecated.Note that n in Mac OS X can also be the section for Tcl/Tk functions.
_softwareengineering.194859
What's the most secure method of performing authentication in single-page apps? I'm not talking about any specific client-side or server-side frameworks, but just general guidelines or best practices. All the communication is transferred primarily through SockJS.Also, OAuth is out of the question.
Security in Authentication in single page apps
javascript;security;authentication
I think there are two starting points.Web services/REST services on the same server as your single web app.It's not uncommon to do so, nor is it necessarily bad, and it is also clearly the easiest way. Whatever authentication you choose (basic, digest, form, etc.), the API and the app are going to share the same user sessions on the server. That means several things:The API can easily detect whether the user is logged in or not, and obtain their permissions, roles or whatever you need to know to decide if the user is allowed to perform a given action.You can deploy both the API and the app at the same time.Scaling the app is simple: just deploy both the API and the app to another instance. Load balancing is also easier.Monitoring the app is simple.On the negative side, if you want another app to use this API you'll have to either host this new app on the same server, or negotiate the user login yourself (for example, transmitting the credentials to the server). It may be even more complicated if your new app is NOT a web app.Another point: even if it's a single web app, you will most likely load images or other resources. If the API usage is high it may impact the loading time of your app (which is generally a big HTML file) and other client resources. One of the ways to prevent this is to use a CDN for your JavaScript files, CSS stylesheets and pictures.I think this approach is perfectly suited for quickly launching simple websites and/or for APIs that will not need to be accessed by other services. And even if you start with that approach (we know everything is possible when it comes to programming, right?) you can still at one point switch to the split API/app model.Web services/REST services on one server and the app on another.The best practice today for authenticating users to a SPA linked to an external API, or an API hosted on a different server instance, is to imitate OAuth in its principles. The key is to use ... keys, or tokens obtained from your main server. The basic flow should be:Authenticate on the main server (where the app is hosted).If the authentication is successful, create a token and send it in the HTTP response, for example in a cookie.Whenever you need to access your API, send the token. It may be sent in a custom HTTP request header, for example.The API receives the request and checks:Whether the requested resource needs a token at all; if not, it's OK (example: returning information available to guests/unauthenticated users).If the token is there, whether it's still acceptable. Deciding how and when the token expires is up to you; there are various ways to design the token expiration strategy.Return the response data (optionally, with an updated token).This approach allows you to:Dissociate the API from the apps that are going to use it AND from the user system itself. Granted, you have to somehow interpret the token, but you don't need to know how and where the user data is stored. If you want a new web app or client/server app to access this API, all you need is a proper token.Deploy separately. For example, updating or fixing your API will not cause the single app server to go down.Monitor usage of your API only.To conclude, apply the KISS and YAGNI principles before deciding. And remember you can always go from one model to another.
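To pin down the token flow, here is a minimal sketch in Python of issuing and verifying a signed, expiring token (an illustration under stated assumptions, not a production design; in practice a vetted library, e.g. an existing JWT implementation, is preferable):

import hmac, hashlib, time, base64

SECRET = b"server-side-secret"  # assumed shared only by your servers

def issue_token(user_id, ttl_seconds=3600):
    # Token = payload + HMAC signature, so the API can verify it
    # without a shared session store.
    expires = int(time.time()) + ttl_seconds
    payload = f"{user_id}:{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token):
    # Returns the user id if the token is intact and unexpired, else None.
    try:
        payload_b64, sig = token.split(".")
        payload = base64.urlsafe_b64decode(payload_b64)
    except ValueError:
        return None
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered with, or signed with another key
    user_id, expires = payload.decode().rsplit(":", 1)
    if int(expires) < time.time():
        return None  # expired; the client must re-authenticate
    return user_id

t = issue_token("alice")
assert verify_token(t) == "alice"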
_unix.81614
What am I trying to back up?Users' home dirs, which include Desktop, Documents, Pictures, and .thunderbird, which I need to back up for every user.All users are in the /home partition under their respective user names.There are certain users AND files in /home which need to be excluded.What have I tried so far?$ tar cvf home01 -T include /home NOTE: -T says to take only what is mentioned in the file, but it does not work$ find . \( -name \*Desktop -o -name \*Documents\* -o -name \*Pictures\* -o -name \.thunderbird\* \) | xargs tar zcvf /opt/rnd/home-$(date +%d%m%y).tar.zip NOTE: this takes a backup of the mentioned dirs, but it puts every user's dirs into one folder.For example$ ls -l /home/home/user1/{Desktop,Documents,Pictures,.thunderbird}/home/user2/{Desktop,Documents,Pictures,.thunderbird}/home/user3/{Desktop,Documents,Pictures,.thunderbird}/home/user4/{Desktop,Documents,Pictures,.thunderbird}/home/user5/{Desktop,Documents,Pictures,.thunderbird}From these I need to back up only the user1, user2, user3 home dirs and exclude user4, user5.
backup script to exclude some parent dir and include some child dir
backup
null
_unix.163237
I have the following lines in a text file:1 Q0 /home/nikol123/Downloads/Ergasia_1/Ergasia_1/metadata/13/120411.xml 1 1 Q0 /home/nikol123/Downloads/Ergasia_1/Ergasia_1/metadata/11/105016.xml 2 1 Q0 /home/nikol123/Downloads/Ergasia_1/Ergasia_1/metadata/15/149972.xml 3 1 Q0 /home/nikol123/Downloads/Ergasia_1/Ergasia_1/metadata/12/110688.xml 4 and I want to keep only this data:1 Q0 120411 1 1 Q0 105016 2 1 Q0 149972 3 1 Q0 110688 4 namely, to keep from each line only the number from the path (for example, only 120411 from /home/nikol123/Downloads/Ergasia_1/Ergasia_1/metadata/13/120411.xml), and so on...
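For illustration, the transformation being asked for is: replace the path field with the base file name minus its .xml extension. A minimal sketch in Python (the question's tags suggest a shell solution; this just pins down the rule):

import re

lines = [
    "1 Q0 /home/nikol123/Downloads/Ergasia_1/Ergasia_1/metadata/13/120411.xml 1",
    "1 Q0 /home/nikol123/Downloads/Ergasia_1/Ergasia_1/metadata/11/105016.xml 2",
]
for line in lines:
    # Replace the whole path with the digits of its base name.
    print(re.sub(r"\S*/(\d+)\.xml", r"\1", line))
# 1 Q0 120411 1
# 1 Q0 105016 2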
How to delete specific characters in a text file
shell script;text processing
null
_cs.79638
It's fair to summarize classes in OOP as product types with functions. However, couldn't there be something like sum types with functions? How would inheritance work with them?I'm trying to scout if anybody has considered this question before in language design.
What would sum types with functions look like in OOP?
programming languages;type theory;object oriented
null
_unix.88838
How can we preserve or maintain the same history across multiple terminals?The same question, but for the bash shell, was discussed at the link below:Preserve bash history in multiple terminal windowsWhat are the corresponding settings for the tcsh shell?
Preserve tcsh history in multiple terminal windows
command history;tcsh
null
_softwareengineering.38368
The goal is to have an online documentation system, with these major requirements:will be mainly used as an intermediate stage for the final technical docs of all our applications (which will probably never get completed though :]). It would typically be used like so: someone has a problem, I fix it, and write down the fix immediately. What happens now is getting unmanageable: someone has a problem, I fix it, both me and someone are happy, but 2 months later somebody else has the same problem and nobody remembers what the fix was.accessible from everywhere, running behind our Apache serveruser/group management, allowing read-only/read-write/admin accessthe format is not too important: plain text would do, wiki-style would be nicer thoughcheap or freesome ideas of mine:just serve files on a file share or through ssh (cons: not too compatible with Windows; pros: simple, can be any file type)keep it in an SCM (svn/git; same as above but easier to access and control access)Confluence: we use Jira already, is Confluence worth it? How does it integrate with Jira?something else?Please don't hesitate to comment on these or share your experience with other systems.
what kind of online technical documentation system would you recommend?
documentation
I was going to suggest a wiki. As Confluence is a wiki, I think using it with your JIRA is an excellent idea. You'll have the advantage of being able to tie directly back into JIRA issues, and therefore the actual code/doc/whatever change made, etc.The key to any code doc repo like this is the navigation aspect. You don't want pages that are disconnected, hard to find, etc. Do put thought into a 'site layout', much like you would for a web site.
_softwareengineering.216915
I know that in agile, requirement changes should not only be planned for but also embraced. But I still don't know how to handle these changes in an agile process.
How to deal with requirement changes in agile development model?
agile
null
_webapps.88917
How can I add a second page in Google+? I have a personal Google+ account and my company page, but I want to create another page, about my hobby. How can I do this?
Two pages on Google+
google plus;google plus pages
null
_softwareengineering.60684
Are you conversant with the Zend Framework? How would you go about getting familiar with it in one week? What would be your suggested schedule?
Familiarizing with the Zend Framework in one week
learning;php;zend framework
If you have a few years of experience with PHP and are familiar with MVC frameworks, then you shouldn't have a problem becoming familiar with ZF in one week. You're not going to be a guru, but you'll be able to get things done. My plan (and what I actually did a few years ago) would be to start with the ZF QuickStart and build the base of your application. Don't just read; code and try. Then follow the rest of Learning ZF, skipping parts that you're not going to use (e.g. if you're not going to be using Lucene, just skip that). Then, as you code your app, whenever you need more information on some of the components, just go to the ZF Reference Manual. You will probably only need to use a small fraction of what's covered there.
_cs.62824
How do you know that a decision problem $X$ is NP-complete? Is it when all other NP problems polynomially transform to $X$, or when all other NP problems polynomially reduce to $X$ (i.e. every problem in NP can be solved by a polynomial-time algorithm with access to an oracle for $X$)? Definitions seem to differ all over the web. Thanks!
Is NP-complete complexity defined in terms of polynomial reductions or polynomial transformations?
complexity theory;np complete;reductions;np;oracle machines
null
_codereview.139954
This is a follow up to my previous question: Serializing objects to delimited filesI've added some feature enhancements, and based on suggestions from rolfl in chat, I've fixed up a couple inconsistencies with the serializer.First, if you don't mark any properties with DelimitedColumnAttribute, I added a DelimitedIgnoreAttribute which will blacklist columns instead. For objects with no properties marked with either, all properties are serialized instead, with the following exception:No collection properties (except System.String) are serialized, period.You can also replace invalid values in names/fields (values that have the RowDelimiter or ColumnDelimiter in them) with whatever you specify.You can choose whether or not to include the header row.You can choose to quote values/names. If you choose to, all values/names are quoted.You can choose how double-quotes are escaped (necessary for quoted values/names).DelimitedSerializer.cs:/// <summary>/// Represents a serializer that will serialize arbitrary objects to files with specific row and column separators./// </summary>public class DelimitedSerializer{ /// <summary> /// The string to be used to separate columns. /// </summary> public string ColumnDelimiter { get; set; } /// <summary> /// The string to be used to separate rows. /// </summary> public string RowDelimiter { get; set; } /// <summary> /// If not null, then sequences in values and names which are identical to the <see cref=ColumnDelimiter/> will be replaced with this value. /// </summary> public string InvalidColumnReplace { get; set; } /// <summary> /// If not null, then sequences in values and names which are identical to the <see cref=RowDelimiter/> will be replaced with this value. /// </summary> public string InvalidRowReplace { get; set; } /// <summary> /// If true, a trailing <see cref=ColumnDelimiter/> will be included on each line. (Some legacy systems require this.) /// </summary> public bool IncludeTrailingDelimiter { get; set; } /// <summary> /// If true, an empty row will be included at the end of the response. (Some legacy systems require this.) /// </summary> public bool IncludeEmptyRow { get; set; } /// <summary> /// If true, then all values and columns will be quoted in double-quotes. /// </summary> public bool QuoteValues { get; set; } /// <summary> /// If not null, then double quotes appearing inside a value will be escaped with this value. /// </summary> public string DoubleQuoteEscape { get; set; } /// <summary> /// If true, then a header row will be output. /// </summary> public bool IncludeHeader { get; set; } /// <summary> /// Serializes an object to a delimited file. Throws an exception if any of the property names, column names, or values contain either the <see cref=ColumnDelimiter/> or the <see cref=RowDelimiter/>. 
/// </summary> /// <typeparam name=T>The type of the object to serialize.</typeparam> /// <param name=items>A list of the items to serialize.</param> /// <returns>The serialized string.</returns> public string Serialize<T>(List<T> items) { if (string.IsNullOrEmpty(ColumnDelimiter)) { throw new ArgumentException($The property '{nameof(ColumnDelimiter)}' cannot be null or an empty string.); } if (string.IsNullOrEmpty(RowDelimiter)) { throw new ArgumentException($The property '{nameof(RowDelimiter)}' cannot be null or an empty string.); } var result = new ExtendedStringBuilder(); var properties = typeof(T).GetProperties() .Select(p => new { Attribute = p.GetCustomAttribute<DelimitedColumnAttribute>(), Info = p }) .Where(x => x.Attribute != null) .OrderBy(x => x.Attribute.Order) .ThenBy(x => x.Attribute.Name) .ThenBy(x => x.Info.Name) .ToList(); if (properties.Count == 0) { properties = typeof(T).GetProperties() .Where(x => x.GetCustomAttribute<DelimitedIgnoreAttribute>() == null) .Select(p => new { Attribute = new DelimitedColumnAttribute { Name = p.Name }, Info = p }) .Where(x => x.Attribute != null) .OrderBy(x => x.Attribute.Order) .ThenBy(x => x.Attribute.Name) .ThenBy(x => x.Info.Name) .ToList(); } Action<string, string, string> validateCharacters = (string name, string checkFor, string humanLocation) => { if (name.Contains(checkFor)) { throw new ArgumentException($The {humanLocation} string '{name}' contains an invalid character: '{checkFor}'.); } }; var columnLine = new ExtendedStringBuilder(); foreach (var property in properties) { if (property.Info.PropertyType.IsArray || (property.Info.PropertyType != typeof(string) && property.Info.PropertyType.GetInterface(typeof(IEnumerable<>).FullName) != null)) { continue; } var name = property.Attribute?.Name ?? 
property.Info.Name; if (InvalidColumnReplace != null) { name = name.Replace(ColumnDelimiter, InvalidColumnReplace); } if (InvalidRowReplace != null) { name = name.Replace(RowDelimiter, InvalidRowReplace); } if (DoubleQuoteEscape != null) { name = name.Replace(\, DoubleQuoteEscape); } validateCharacters(name, ColumnDelimiter, column name); validateCharacters(name, RowDelimiter, column name); if (columnLine.HasBeenAppended) { columnLine += ColumnDelimiter; } if (QuoteValues) { columnLine += \; } columnLine += name; if (QuoteValues) { columnLine += \; } } if (IncludeTrailingDelimiter) { columnLine += ColumnDelimiter; } if (IncludeHeader) { result += columnLine; } foreach (var item in items) { var row = new ExtendedStringBuilder(); foreach (var property in properties) { if (property.Info.PropertyType.IsArray || (property.Info.PropertyType != typeof(string) && property.Info.PropertyType.GetInterface(typeof(IEnumerable<>).FullName) != null)) { continue; } var value = property.Info.GetValue(item)?.ToString(); if (property.Info.PropertyType == typeof(DateTime) || property.Info.PropertyType == typeof(DateTime?)) { value = ((DateTime?)property.Info.GetValue(item))?.ToString(u); } if (value != null) { if (InvalidColumnReplace != null) { value = value.Replace(ColumnDelimiter, InvalidColumnReplace); } if (InvalidRowReplace != null) { value = value.Replace(RowDelimiter, InvalidRowReplace); } if (DoubleQuoteEscape != null) { value = value.Replace(\, DoubleQuoteEscape); } validateCharacters(value, ColumnDelimiter, property value); validateCharacters(value, RowDelimiter, property value); } if (row.HasBeenAppended) { row += ColumnDelimiter; } if (QuoteValues) { row += \; } row += value; if (QuoteValues) { row += \; } } if (IncludeTrailingDelimiter) { row += ColumnDelimiter; } if (result.HasBeenAppended) { result += RowDelimiter; } result += row; } return result; } /// <summary> /// Returns an instance of the <see cref=DelimitedSerializer/> setup for Tab-Separated Value files. /// </summary> public static DelimitedSerializer TsvSerializer => new DelimitedSerializer { ColumnDelimiter = \t, RowDelimiter = \r\n, InvalidColumnReplace = \\t, IncludeHeader = true }; /// <summary> /// Returns an instance of the <see cref=DelimitedSerializer/> setup for Comma-Separated Value files. /// </summary> public static DelimitedSerializer CsvSerializer => new DelimitedSerializer { ColumnDelimiter = ,, RowDelimiter = \r\n, InvalidColumnReplace = \\u002C, IncludeHeader = true }; /// <summary> /// Returns an instance of the <see cref=DelimitedSerializer/> setup for Pipe-Separated Value files. /// </summary> public static DelimitedSerializer PsvSerializer => new DelimitedSerializer { ColumnDelimiter = |, RowDelimiter = \r\n, InvalidColumnReplace = \\u007C, IncludeHeader = true }; /// <summary> /// Returns an instance of the <see cref=DelimitedSerializer/> from the RFC 4180 specification. See: https://tools.ietf.org/html/rfc4180 /// </summary> public static DelimitedSerializer Rfc4180Serializer => new DelimitedSerializer { ColumnDelimiter = ,, RowDelimiter = \r\n, IncludeHeader = true, IncludeTrailingDelimiter = true, QuoteValues = true, DoubleQuoteEscape = \\ };}DelimitedColumnAttribute.cs:/// <summary>/// Represents a column which can be used in a <see cref=DelimitedSerializer/>./// </summary>[AttributeUsage(AttributeTargets.Property)]public class DelimitedColumnAttribute : Attribute{ /// <summary> /// The name of the column. 
/// </summary> public string Name { get; set; } /// <summary> /// The order the column should appear in. /// </summary> public int Order { get; set; }}DelimitedIgnoreAttribute.cs:[AttributeUsage(AttributeTargets.Property)]public class DelimitedIgnoreAttribute : Attribute{}
Serializing Objects to Delimited Files Part II
c#;.net;serialization;reflection
To me the Serialize method is too big. I'd split it in three parts. The first part would be the reflection stuff where you read and sort the properties; this could be a new class like ElementReflector or maybe an extension. The second part (maybe a method) would be the first foreach, which I cannot figure out the purpose of. The third part (maybe a method too) would be the second foreach, which does something but I'm not sure what. Putting them in methods would give them a meaning without you having to explain each time what they are good for.

public string Serialize<T>(List<T> items)
{
    // ... prepare, sort etc. the properties
    var result = new StringBuilder()
        .Append(CreateHeader(properties))
        .Append(SerializeData(items, properties));
    return result.ToString();
}

ifs

if (property.Info.PropertyType.IsArray || (property.Info.PropertyType != typeof(string) && property.Info.PropertyType.GetInterface(typeof(IEnumerable<>).FullName) != null))

You could use a variable for that, to tell what this condition is for. Or even create an extension for it. The same applies to many other ifs where the intention is not clear. Besides, you use the same condition twice (in both loops), so an extension would definitely help.

ColumnDelimiter & RowDelimiter

If these two properties cannot be null you should require them via the constructor instead of checking them in the Serialize method. Also, if the user can change them later, then you should check the values in the property setter instead of throwing exceptions later and causing astonishment about why this isn't working.

DelimitedColumnAttribute

It is possible to create an invalid attribute because there is no constructor that enforces the required values. I guess the name is required if the attribute is specified.

properties

properties = typeof(T).GetProperties()
    .Where(x => x.GetCustomAttribute<DelimitedIgnoreAttribute>() == null)
    .Select(p => new { Attribute = new DelimitedColumnAttribute { Name = p.Name }, Info = p })
    .Where(x => x.Attribute != null)
    .OrderBy(x => x.Attribute.Order)
    .ThenBy(x => x.Attribute.Name)
    .ThenBy(x => x.Info.Name)
    .ToList();

You are always adding the attribute, so this .Where(x => x.Attribute != null) isn't necessary. As the Attribute.Name equals p.Name, either the .ThenBy(x => x.Attribute.Name) or the .ThenBy(x => x.Info.Name) can be removed.
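To make the extension suggestion concrete, here is a minimal sketch of such an extension method; the name IsNonStringEnumerable is my own invention, not something from the original code:

using System.Collections.Generic;
using System.Reflection;

public static class PropertyInfoExtensions
{
    // True for arrays and for IEnumerable<T> implementations other than string,
    // i.e. the collection properties the serializer wants to skip.
    public static bool IsNonStringEnumerable(this PropertyInfo property)
    {
        var type = property.PropertyType;
        return type.IsArray
            || (type != typeof(string)
                && type.GetInterface(typeof(IEnumerable<>).FullName) != null);
    }
}

Both loops could then read if (property.Info.IsNonStringEnumerable()) continue;, which states the intention directly.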
_softwareengineering.71825
The canonical idea is pervasive in software; patterns like Canonical Model, Canonical Schema, Canonical Data Model and so on, seem to come up again and again in development.Like many developers, I've often followed, uncritically, the conventional wisdom that you need a canonical model, otherwise you'll face a combinatorial explosion of mappers and translators. Or at least, I used to do that until a couple of years ago when I first read the somewhat-infamous EF Vote of No Confidence:The hypotheses that once supported the pursuit of canonical data models didnt and couldnt include factors that would be discovered once the idea was put into practice. We have found, through years of trial and error, that using separate models for each individual context in which a canonical data model might be used is the least complex approach, is the least costly approach, and the one that leads to greater maintainability and extensibility of the applications and endpoints using contextual models, and its an approach that doesnt encourage the software entropy that canonical models do.The essay presents no evidence of any kind to support its claims, but did make me question the CDM approach long enough to try the alternative, and the resulting software didn't explode, literally or figuratively. But that doesn't mean a whole lot in isolation; I could have just been lucky.So I'm wondering, has any serious research been done into the practical, long-term effects of having a canonical model vs. contextual models in a software system or architecture?Or, if it's too early to be asking that, then have any developers/architects written about personal experiences switching from a CDM to independent contextual models, or vice versa, and what the practical effects were on things like productivity, complexity, or reliability?What about the differences at different levels, i.e. using the same model across a single application vs. using it across a system of applications or an entire enterprise?(Facts only, please; war stories are welcome but no speculation.)
Does current evidence support the adoption of Contextual over Canonical Data Models?
design patterns;architecture;data;domain model
null
_unix.97940
I'm working in Unix. I'd like to fetch information from a log in a specified time range for the current date. For example, I want the data from a log file for today's date from 00:00 to 09:00. Sample log entry:
13/10/16 14:45:02 <batchspeedchange> <BELLBD.BD77350A.G6987V00> <> FAILED FILE FORMAT VALIDATION - ERROR:-213:rawData cannot contain tokens
How do I get the output from such a log file?
How do I filter a log to only the lines between two times?
bash
Assuming the times look like HH:MM as you've shown, and assuming the time appears in the 2nd field, you can use awk:

awk -v start="00:00" -v stop="09:00" 'start <= $2 && $2 < stop' file.log

[rant] I'm quite particular about date formatting, and this one is terrible: what date is 09/10/11? [/rant]
Anyway, assuming this is YY/MM/DD:

awk -v date="$(date +%y/%m/%d)" \
    -v start="00:00:00" \
    -v stop="09:00:00" \
    -v search="File format not found" \
'$1 == date && start <= $2 && $2 < stop && $0 ~ search' file.log
_unix.195728
So the idea is to create an alias that will search my aliases for me. I have quite a few.

dude@gnarleybox:~$ grep alg .bash_aliases.sh
alias alg='alias | grep '
dude@gnarleybox:~$ alias | grep alg
alias alg='alias | grep '
dude@gnarleybox:~$ alg gd
grep: invalid max count
dude@gnarleybox:~$

Huh? Like grep is getting too many parameters? How is that possible? Note that I've also tried it without the space on the end:

alias alg='alias | grep'

You should just be able to type: alg gd ...and get the alias I use to fuse-mount GoogleDocs.
Why am I getting invalid max count from grep in an alias?
bash;grep;alias
Remove the blank at the end of the alias definition (as suggested by rici) and your issue should be fixed. When an alias value ends in a blank, bash also checks the next command word for alias expansion; so in alg gd, if gd is itself an alias, it gets expanded in place and its contents are handed to grep as options and arguments, which would explain the strange error. But in cases such as yours, where you have not only synonyms or abbreviations in your alias but also functional code with pipes, it's better to define a function instead of an alias.
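For completeness, a minimal sketch of the function variant (the -- guards against patterns that look like options):

alg() {
    alias | grep -- "$1"
}

Unlike an alias ending in a blank, a function's argument is never subject to further alias expansion, so alg gd does exactly what you expect.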
_cs.14954
After skimming Multiplication by a Constant is Sublinear (PDF) (slides (PDF), slides with notes (PDF)), I was wondering if this could be extended to division by a constant in sublinear time? Additionally, what about division with a constant numerator, i.e. division of a constant?
Division by a constant
algorithms;reference request;integers
Division by a constant can always be recast as multiplication by a constant followed by a shift. The relevant papers are:

Robert Alverson, "Integer Division Using Reciprocals", IEEE Int'l Symp. Comp. Arithmetic (ISCA-10):186-190, 1991.
Torbjörn Granlund and Peter L. Montgomery, "Division by Invariant Integers using Multiplication", ACM Conf. on Prog. Lang. Dsgn. and Impl. (PLDI-1994):61-72.
Daniel J. Magenheimer, Liz Peters, Karl W. Peters, and Dan Zuras, "Integer Multiplication and Division on the HP Precision Architecture", IEEE T. Comp., 37(8):980-990, August 1988.
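As a small illustration of the recast (not taken from the papers themselves; the constant used is the standard one compilers emit for unsigned 32-bit division by 3):

#include <assert.h>
#include <stdint.h>

/* n / 3 as a multiply-and-shift: 0xAAAAAAAB = ceil(2^33 / 3),
 * so (n * 0xAAAAAAAB) >> 33 equals n / 3 for all 32-bit n.
 * The 64-bit intermediate holds the full product. */
static uint32_t div3(uint32_t n)
{
    return (uint32_t)(((uint64_t)n * 0xAAAAAAABu) >> 33);
}

int main(void)
{
    for (uint32_t n = 0; n < 1000000u; n++)
        assert(div3(n) == n / 3);  /* spot-check against real division */
    return 0;
}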
_unix.349347
I execute: sudo apt-get dist-upgrade and I get this:Reading Package Lists ... DoneBuilding the dependency treeReading status information ... DoneCalculation of the update ... Some packages can not be installed. This may meanThat you have asked for the impossible, or if youUnstable distribution, which some packages have not yetHave been created or have not been released from Incoming.The following information should help you resolve the situation:The following packages contain unsatisfied dependencies: Systemd: Break: rdnssd (<1.0.1-5) but 1.0.1-1 + b1 must be installedE: Error, pkgProblem :: Resolve generated breaks, which could be caused by the packages to be kept as is.OUTPUT apt-cache policy rdnssd Dmicaelandre @ ThinkPad: ~ $ apt-cache policy rdnssdrdnssd: Installed: 1.0.1-1+b1 Candidate: 1.0.3-3 Version table: 1.0.3-3 0 650 http://ftp2.fr.debian.org/debian/ stretch/main amd64 Packages *** 1.0.1-1+b1 0 100 /var/lib/dpkg/statusOUTPUT apt-cache policy100 /var/lib/dpkg/status release a=now 500 http://security.debian.org/ stretch/updates/non-free Translation-en 500 http://security.debian.org/ stretch/updates/main Translation-en 500 http://security.debian.org/ stretch/updates/contrib Translation-en 650 http://security.debian.org/ stretch/updates/non-free i386 Packages release o=Debian,a=testing,n=stretch,l=Debian-Security,c=non-free origin security.debian.org 650 http://security.debian.org/ stretch/updates/contrib i386 Packages release o=Debian,a=testing,n=stretch,l=Debian-Security,c=contrib origin security.debian.org 650 http://security.debian.org/ stretch/updates/main i386 Packages release o=Debian,a=testing,n=stretch,l=Debian-Security,c=main origin security.debian.org 650 http://security.debian.org/ stretch/updates/non-free amd64 Packages release o=Debian,a=testing,n=stretch,l=Debian-Security,c=non-free origin security.debian.org 650 http://security.debian.org/ stretch/updates/contrib amd64 Packages release o=Debian,a=testing,n=stretch,l=Debian-Security,c=contrib origin security.debian.org 650 http://security.debian.org/ stretch/updates/main amd64 Packages release o=Debian,a=testing,n=stretch,l=Debian-Security,c=main origin security.debian.org 500 http://ftp2.fr.debian.org/debian/ stretch-updates/non-free Translation-en 500 http://ftp2.fr.debian.org/debian/ stretch-updates/main Translation-en 500 http://ftp2.fr.debian.org/debian/ stretch-updates/contrib Translation-en 500 http://ftp2.fr.debian.org/debian/ stretch-updates/non-free i386 Packages release o=Debian,a=testing-updates,n=stretch-updates,l=Debian,c=non-free origin ftp2.fr.debian.org 500 http://ftp2.fr.debian.org/debian/ stretch-updates/contrib i386 Packages release o=Debian,a=testing-updates,n=stretch-updates,l=Debian,c=contrib origin ftp2.fr.debian.org 500 http://ftp2.fr.debian.org/debian/ stretch-updates/main i386 Packages release o=Debian,a=testing-updates,n=stretch-updates,l=Debian,c=main origin ftp2.fr.debian.org 500 http://ftp2.fr.debian.org/debian/ stretch-updates/non-free amd64 Packages release o=Debian,a=testing-updates,n=stretch-updates,l=Debian,c=non-free origin ftp2.fr.debian.org 500 http://ftp2.fr.debian.org/debian/ stretch-updates/contrib amd64 Packages release o=Debian,a=testing-updates,n=stretch-updates,l=Debian,c=contrib origin ftp2.fr.debian.org 500 http://ftp2.fr.debian.org/debian/ stretch-updates/main amd64 Packages release o=Debian,a=testing-updates,n=stretch-updates,l=Debian,c=main origin ftp2.fr.debian.org 500 http://ftp2.fr.debian.org/debian/ stretch/non-free Translation-en 500 
http://ftp2.fr.debian.org/debian/ stretch/main Translation-fr 500 http://ftp2.fr.debian.org/debian/ stretch/main Translation-en 500 http://ftp2.fr.debian.org/debian/ stretch/contrib Translation-en 650 http://ftp2.fr.debian.org/debian/ stretch/non-free i386 Packages release o=Debian,a=testing,n=stretch,l=Debian,c=non-free origin ftp2.fr.debian.org 650 http://ftp2.fr.debian.org/debian/ stretch/contrib i386 Packages release o=Debian,a=testing,n=stretch,l=Debian,c=contrib origin ftp2.fr.debian.org 650 http://ftp2.fr.debian.org/debian/ stretch/main i386 Packages release o=Debian,a=testing,n=stretch,l=Debian,c=main origin ftp2.fr.debian.org 650 http://ftp2.fr.debian.org/debian/ stretch/non-free amd64 Packages release o=Debian,a=testing,n=stretch,l=Debian,c=non-free origin ftp2.fr.debian.org 650 http://ftp2.fr.debian.org/debian/ stretch/contrib amd64 Packages release o=Debian,a=testing,n=stretch,l=Debian,c=contrib origin ftp2.fr.debian.org 650 http://ftp2.fr.debian.org/debian/ stretch/main amd64 Packages release o=Debian,a=testing,n=stretch,l=Debian,c=main origin ftp2.fr.debian.org
I can't upgrade Debian 9
debian;upgrade
null
_unix.293284
I would like to learn about and work in internet security, because this is a world that is in continuous change and I'd like to improve myself in it. Can anyone help me by linking a site or PDF where I can study this fantastic world?
Learn the base of security and defend myself from external attack
linux;networking;security;osx;windows
You could follow and watch a CEH (Certified Ethical Hacker) channel on YouTube.
Install Kali Linux/BackTrack/BlackBuntu and get started with the basic tools installed in these operating systems.
This document could be of some use.
_webmaster.38778
I'm renting server space from someone and, upon logging in to my control panel after quite some time, noticed an abnormal spike (~50MB) in the disk usage. Upon investigating, I found a lot of core.* files scattered around my public_html directory. Each one is more than 5MB in size but no more than 6MB. The * part is all numbers (in programming regex, that should be core\.\d+).
I downloaded one and checked the contents. There was a lot of balderdash characters (NUL mostly, but also a scattering of ETB, ETX, STX) but there's this block of readable text which says:
This text is part of the internal format of your mail folder, and is not a real message. It is created automatically by the mail system software. If deleted, important folder data will be lost, and it will be re-created with the data reset to initial values.
Pretty self-explanatory. A few blocks above the text are some more readable messages that look like logs but are sandwiched in between non-printable characters. I've extracted some below.
Scan not valid for mh mailboxes
Bogus character 0x%x in news state
Can't rewrite news state %.80s
Error closing backup news state %.80s
No state for newsgroup %.80s found
Now, a few concerns: am I under attack? The messages seem to be about my webmail, but I don't use my personal webmail that much, only for a vanity email address and an inbox for an outdated comments system. However, lately I seem to notice a spike in the spam for my vanity mail. (Note: the comments system is covered by a captcha but every now and then some get through. My vanity email has a spam filter but it isn't as good as I'd like.)
Next, if this is a feature, can I turn it off? Is it advisable to? I've only got 150MB, so you see why I'm fretting over a 50MB spike.
Some final details: my only server-side scripts are in PHP. The directory which accumulated the most of these core files is the one containing the Wordpress-managed subdomain of my site. I manage my server through CPanel. Lastly, I decided to delete these files, and after some checking nothing seems amiss on my websites or in my mail. They were indeed the ones responsible for the ~50MB spike, as my disk space usage is back to expected.
core.* files eating up server space (~50MB)
php;cpanel;mailing list;webmail
Core files contain the image of the process' memory at the time of its termination (i.e. when it crashed). They can be used to inspect the state of the program when it was terminated. If you're seeing lots of them and they are recreated fast, I would invest some time in debugging which specific program crashed and why. It's probably not a good thing that it crashed, so it might be worth looking into them. If you don't see many of them, it's safe to delete them.
Core Dump on Wikipedia
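A few commands that may help, assuming a fairly standard Linux host with shell access and the usual tools installed (the file name core.12345 is illustrative):

# which binary produced the dump?
file core.12345
# inspect the crash site, if you have gdb and the crashing binary
gdb /path/to/that/binary core.12345
# stop the current shell (and its children) from writing core files at all
ulimit -c 0

On a shared cPanel host you may not have permission for all of this; in that case, deleting the files and watching whether they come back is a reasonable fallback.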
_webapps.73196
I am looking for a way to add up cells under a name. For instance, I am trying to tally all of the assists the character Annie got in a few different games of League of Legends. I am looking for a way to search for a name (which shows up multiple times) and then count adjacent cells.
How to add adjacent cells in Google Sheets
google spreadsheets
null
_datascience.1053
I would like to summarize (as in R) the contents of a CSV (possibly after loading it, or storing it somewhere, that's not a problem). The summary should contain the quartiles, mean, median, min and max of the data in a CSV file for each numeric (integer or real numbers) dimension. The standard deviation would be cool as well.I would also like to generate some plots to visualize the data, for example 3 plots for the 3 pairs of variables that are more correlated (correlation coefficient) and 3 plots for the 3 pairs of variables that are least correlated.R requires only a few lines to implement this. Are there any libraries (or tools) that would allow a similarly simple (and efficient if possible) implementation in Java or Scala?PD: This is a specific use case for a previous (too broad) question.
Summarize and visualize a CSV in Java/Scala?
tools;visualization;scala;csv
null
_cs.68549
Let's say $L$ is a regular language, and there is an NFA with epsilon moves, $A$, in which for every accepting state $q$, $\delta(q, \varepsilon) = \emptyset$. How can I prove that there must be an automaton $A$ as defined for $L$?
NFA automata with epsilon moves proof
automata;finite automata;proof techniques
null
_unix.381739
There are two documented differences between start reboot.target and reboot. But start reboot.target is what is triggered by ctrl-alt-del.target. Does it matter that ctrl-alt-del.target will omit --job-mode=replace-irreversibly? In what situations will this cause different behaviour? Why is it included by systemctl reboot?

man systemctl:
reboot [arg]
Shut down and reboot the system. This is mostly equivalent to start reboot.target --job-mode=replace-irreversibly, but also prints a wall message to all users.

man systemd.special:
ctrl-alt-del.target: systemd starts this target whenever Control+Alt+Del is pressed on the console. Usually, this should be aliased (symlinked) to reboot.target.
What is the practical difference between `systemctl start reboot.target` and `systemctl reboot`?
systemd;reboot
When queuing a new job, this option controls how to deal with already queued jobs. It takes one of fail, replace, replace-irreversibly, isolate, ignore-dependencies, ignore-requirements or flush. Defaults to replace, except when the isolate command is used, which implies the isolate job mode.

If fail is specified and a requested operation conflicts with a pending job (more specifically: causes an already pending start job to be reversed into a stop job or vice versa), cause the operation to fail.

If replace (the default) is specified, any conflicting pending job will be replaced, as necessary.

If replace-irreversibly is specified, operate like replace, but also mark the new jobs as irreversible. This prevents future conflicting transactions from replacing these jobs (or even being enqueued while the irreversible jobs are still pending). Irreversible jobs can still be cancelled using the cancel command.

This suggests a practical effect. Suppose you hook units into the sleep state logic, using sleep.target to pull them in. Your hook units do not have DefaultDependencies=no, so they depend on sysinit.target... and Conflict with shutdown.target.

If you run systemctl start reboot.target and then immediately systemctl start suspend.target, it seems that your hook unit will stop shutdown.target. Now systemd-reboot.service has Requires=shutdown.target, so it should be stopped/cancelled as well. (umount.target should not be cancelled).

I have verified a difference in behaviour along these lines and reported it as a defect in the systemd issue tracker.
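A sketch of what such a hook unit might look like (illustrative, not from the man pages). Because DefaultDependencies=no is not set, systemd implicitly adds Requires=sysinit.target and Conflicts=shutdown.target to it:

[Unit]
Description=Example suspend hook
Before=sleep.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/pre-sleep-hook

[Install]
WantedBy=sleep.target

It is that implicit Conflicts=shutdown.target which lets starting this unit cancel a pending reboot job, unless the reboot was queued irreversibly.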
_unix.64848
Can someone explain why I get permission denied when running touch -m on this file even though it is group-writable and I can write to the file fine?

~/test1-> id
uid=1000(plyons) gid=1000(plyons) groups=1000(plyons),4(adm),20(dialout),24(cdrom),46(plugdev),109(lpadmin),110(sambashare),111(admin),1002(webadmin)
~/test1-> ls -ld .; ls -l
drwxrwxr-x 2 plyons plyons 4096 Feb 14 21:20 .
total 4
-r--rw---- 1 www-data webadmin 24 Feb 14 21:29 foo
~/test1-> echo "the file is writable" >> foo
~/test1-> touch -m foo
touch: setting times of `foo': Operation not permitted
~/test1-> lsattr foo
-------------e- foo
~/test1-> newgrp - webadmin
~/test1-> id
uid=1000(plyons) gid=1002(webadmin) groups=1000(plyons),4(adm),20(dialout),24(cdrom),46(plugdev),109(lpadmin),110(sambashare),111(admin),1002(webadmin)
~/test1-> touch -m foo
touch: setting times of `foo': Operation not permitted
~/test1-> echo "the file is writable" >> foo
~/test1->
cannot touch -m a writable file
filesystems;permissions
From man utime:

The utime() system call changes the access and modification times of the inode specified by filename to the actime and modtime fields of times respectively. If times is NULL, then the access and modification times of the file are set to the current time. Changing timestamps is permitted when: either the process has appropriate privileges, or the effective user ID equals the user ID of the file, or times is NULL and the process has write permission for the file.

So, to change only the modification time for the file (touch -m foo), you'd need to either be root, or the owner of the file. Because touch -m must leave the access time untouched, it cannot pass a NULL times argument; it has to set the modification time explicitly, and that falls under the ownership rule. Being able to write to the file only lets you update both the modified and access times to the current time; you cannot update either separately, nor set them to a different time.
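A quick way to see the rule in action as the group-writing, non-owning user (the session is illustrative and assumes GNU touch):

$ touch foo       # times is effectively NULL: both set to "now", write permission suffices
$ touch -m foo    # only mtime should change, so the NULL case doesn't apply; needs ownership
touch: setting times of 'foo': Operation not permitted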
_codereview.163043
In this question here on S.O., the accepted answer suggests using both an anonymous function and the factory pattern for dealing with the PDO connection. I believe the anonymous function is used so that if a connection to a different database needs to be established, a different function can be defined for that. In that case, would it be alright to move the anonymous function into the factory class itself? With this approach, just passing the PDO parameters to the constructor would achieve the same as the original answer. Something like:

class StructureFactory
{
    protected $provider = null;
    protected $connection = null;

    public function __construct( $PDO_Params )
    {
        // the closure must capture $PDO_Params with `use` to see it
        $this->provider = function() use ($PDO_Params) {
            $instance = new PDO($PDO_Params["dsn"], $PDO_Params["username"], $PDO_Params["password"]);
            $instance->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            $instance->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
            return $instance;
        };
    }

    public function create( $name )
    {
        if ( $this->connection === null ) {
            $this->connection = call_user_func( $this->provider );
        }
        return new $name( $this->connection );
    }
}
PDO anonymous function inside a Factory
php;object oriented;pdo;factory method
TL;DR: no.
The construction that is shown in the original example provided the following functionality:
decoupling from MySQL (you don't need to disable emulation of prepared statements if you are not using MySQL, and you usually don't need to provide a username and password with SQLite)
the option to use MySQLi or SOAP or any other DAL without rewriting the factory (see OCP)
replacing PDO with a mock object when running unit tests
a way to alter the anonymous function, so that you can share the same PDO instance among multiple factories (with appropriate changes)
establishing a connection to the data source only when it is necessary (connecting to SOAP can be really slow, and an RDBMS under heavy load can actually run out of connections that can be opened)
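To make the trade-off visible, here is a minimal sketch of the decoupled version with the provider injected from outside; DSN and credentials are placeholders, and PDOMock is a hypothetical test double:

class StructureFactory
{
    protected $provider;
    protected $connection;

    public function __construct(callable $provider)
    {
        $this->provider = $provider;
    }

    public function create($name)
    {
        if ($this->connection === null) {
            $this->connection = call_user_func($this->provider);
        }
        return new $name($this->connection);
    }
}

// production wiring
$factory = new StructureFactory(function () {
    $instance = new PDO('mysql:host=localhost;dbname=app;charset=utf8', 'user', 'secret');
    $instance->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    return $instance;
});

// a unit test can instead pass: function () { return new PDOMock(); }

The factory itself never mentions PDO, which is exactly the decoupling the points above rely on.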
_cs.77062
I'm trying to understand the paper Incentive Compatibility of Bitcoin Mining Pool Reward Functions (Schrijvers, Bonneau, Boneh and Roughgarden, in Financial Cryptography and Data Security – FC 2016 Workshops, BITCOIN, 2016; PDF).
On page 3, Section 2.1, they say the pool operator does not know the actual mining power $\alpha_i$ of player $i$; the estimate of $\alpha_i$ depends on the reported shares and solutions.
A reward function $R\colon H \mapsto [0,1]^n$ is a function from a history transcript to an allocation $\{a_i\}_{i=0}^{n}$ with $\sum_i a_i = 1$. I don't understand this. What allocation do the authors mean?
What I somewhat understand from the next paragraph is $H(k) = b = (y_1(k), \cdots, y_i(k))$, where $y_i(k)$ is the number of shares reported by player $i$ in round $k$. I am basing this on "$H$ contains for each miner $i$ the total number of shares $b_i$ reported in that round. A history transcript is given by a vector $b \in N^n$."
Also, what do they mean when they say "We use vector notation for $b$, so $b_1 + b_2$ means component-wise addition of these, and $\|b\|_1 = \sum_{i=1}^n b_i$ is the sum of the components of $b$"?
Could someone explain with examples or in a more concrete way? Also, any suggestions for understanding this paper would be much appreciated. Thanks.
Understanding Incentive Compatibility of pooled Bitcoin Mining paper
algorithms;cryptography;game theory;one way functions
null
_unix.147203
I'm very new to bash scripting and unix, so I will need some help with this. I have 7-10 hosts which I want to ping from one of the servers via cron jobs. What I want is: when a host is up, execute a command on it; when it is down, do nothing. I don't want logs or any messages. So far I have this, and unfortunately I don't have the ability to try it right now. If you can, just check it and point me in the right direction.

#!/bin/bash
servers=( "1.1.1.1" "2.2.2.2" "3.3.3.3" "4.4.4.4" "5.5.5.5" "6.6.6.6" "7.7.7.7" )
for i in ${servers[@]}
do
    ping -c 1 $i > /dev/null
done
ping -c 1 $i > /dev/null
if [ $? -ne 0 ]; then
    if [ $STATUS >= 2 ]; then
        echo
    fi
else
    while [ $STATUS <= 1 ]; do
        # command should be here where is status 1 ( i.e. Alive )
        /usr/bin/snmptrap -v 2c -c public ...
    done
fi

I'm not sure if this is right or not. I've used this from a tutorial, and there are some things in it that I'm not sure what they exactly do. Am I on the right track here, or am I totally wrong?
Ping multiple hosts and execute command
shell;ping;hosts
I've made some comments in line to explain what different parts of the script are doing. I've then made a concise version of the script below.

#!/bin/bash
servers=( "1.1.1.1" "2.2.2.2" "3.3.3.3" "4.4.4.4" "5.5.5.5" "6.6.6.6" "7.7.7.7" )

# As is, this bit doesn't do anything. It just pings each server one time
# but doesn't save the output
for i in ${servers[@]}
do
    ping -c 1 $i > /dev/null
    # done
    # "done" marks the end of the for-loop. You don't want it to end yet so I
    # comment it out

    # You've already done this above so I'm commenting it out
    #ping -c 1 $i > /dev/null

    # $? is the exit status of the previous job (in this case, ping). 0 means
    # the ping was successful, 1 means not successful.
    # so this statement reads "If the exit status ($?) does not equal (-ne) zero"
    if [ $? -ne 0 ]; then
        # I can't make sense of why this is here or what $STATUS is from
        # You say when the host is down you want it to do nothing so let's do
        # nothing
        #if [ $STATUS >= 2 ]; then
        #    echo
        #fi
        true
    else
        # I still don't know what $STATUS is
        #while [ $STATUS <= 1 ]; do
        # command should be here where is status 1 ( i.e. Alive )
        /usr/bin/snmptrap -v 2c -c public ...
        #done
    fi
# Now we end the for-loop from the top
done

If you need a parameter for each server, create an array of parameters and an index variable in the for-loop. Access the parameter via the index:

#!/bin/bash
servers=( "1.1.1.1" "2.2.2.2" "3.3.3.3" "4.4.4.4" "5.5.5.5" "6.6.6.6" "7.7.7.7" )
params=("PARAM1" "PARAM2" "PARAM3" "PARAM4" "PARAM5" "PARAM6" "PARAM7")
n=0
for i in ${servers[@]}; do
    ping -c 1 $i > /dev/null
    if [ $? -eq 0 ]; then
        /usr/bin/snmptrap -v 2c -c public ${params[$n]} ...
    fi
    let $((n+=1)) # increment n by one
done
_unix.301962
I'm writing a kind of virtual keyboard using uinput, and while looking into what I should put into

ioctl(fd, UI_SET_KEYBIT, ???);

I found input-event-codes.h. Some constants there are pretty self-explanatory (KEY_1, KEY_D, ...), but some are a bit more cryptic. Is there documentation anywhere where those keycodes are listed and explained? I tried Google, but BTN_TRIGGER_HAPPY didn't lead me anywhere useful :/ What is this keycode useful for?
PS: If there is a complete list somewhere, that would be nice; there are a few more quite interesting ones (KEY_HIRAGANA? KEY_102ND? ...).
What is BTN_TRIGGER_HAPPY?
input;header file
There is documentation for the input subsystem in the kernel tree (under Documentation/input/ in the kernel source), quite a lot of it too. "Happy" is close to "joy", and searching for the constant mostly turns up joystick and gamepad drivers, which supports that association. I wouldn't expect every event to have a strict definition. But there's a note in input-event-codes.h stating:

/*
 * Keys and buttons
 *
 * Most of the keys/buttons are modeled after USB HUT 1.12
 * (see http://www.usb.org/developers/hidpage).
 * Abbreviations in the comments:
 * AC - Application Control
 * AL - Application Launch Button
 * SC - System Control
 */
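For reference, the relevant block of input-event-codes.h looks roughly like this (quoted from memory, so verify against your kernel's copy):

#define BTN_TRIGGER_HAPPY    0x2c0
#define BTN_TRIGGER_HAPPY1   0x2c0
#define BTN_TRIGGER_HAPPY2   0x2c1
/* ... */
#define BTN_TRIGGER_HAPPY40  0x2e7

In practice these codes get handed out for the extra buttons on gamepads and joysticks that don't map onto any of the named BTN_* constants.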
_codereview.158574
I've written a function to convert an image into characters and colors for the windows console. At the moment the calculation takes about 13 seconds with a 700x700 pixel image but that time is undesirable especially when I plan on making the function more complex in order to account for character shapes.What are some methods to speed up heavy calculations and loops like below in C++? I've been recommended multiple threads, SIMD, and inline assembly but how would I go about improving a function like below with those methods?This is the current code I'm using.unsigned char characterValues[256] = { 0 };// This operation can be done ahead of time when the program is started up{ ResourceInputStream in = ResourceInputStream(); // This image is the font for the console. The background color is black while the foreground color is white in.open(BMP_FONT, 2); // 2 is for RT_BITMAP, BMP_FONT is a resource if (in.isOpen()) { auto bmp = readBitmap(&in, true); in.close(); for (int x = 0; x < bmp->size.x; x++) { for (int y = 0; y < bmp->size.y; y++) { int charIndex = (x / 8) + (y / 12) * 16; if (bmp->pixels[x][y].r == 255) characterValues[charIndex]++; } } }}// This operation is for asciifying the image{ FileInputStream in = FileInputStream(); in.open(R(image-path.bmp)); if (in.isOpen()) { auto bmp = readBitmap(&in, false); in.close(); // The size of the image in characters Point2I imageSize = (Point2I)GMath::ceil((Point2F)bmp->size / Point2F(8.0f, 12.0f)); int totalImageSize = imageSize.x * imageSize.y; auto palette = /* get palette of 16 colors here */ // Iterate through each (character area) for (int imgx = 0; imgx < imageSize.x; imgx++) { for (int imgy = 0; imgy < imageSize.y; imgy++) { // Read image color value int r = 0, g = 0, b = 0; int totalRead = 0; // Read each pixel inside the bounds of a single character // 8x12 is the size of a character for (int px = 0; px < 8; px++) { for (int py = 0; py < 12; py++) { Point2I p = Point2I(imgx * 8 + px, imgy * 12 + py); if (p < bmp->size) { r += bmp->pixels[p.x][p.y].r; g += bmp->pixels[p.x][p.y].g; b += bmp->pixels[p.x][p.y].b; totalRead++; } } } Color imageValue = Color(r / totalRead, g / totalRead, b / totalRead); // A combo of a character and foreground/background color Pixel closestPixel = Pixel(); float closestScore = std::numeric_limits<float>().max(); for (int col = 1; col < 255; col++) { unsigned char f = getFColor(col); unsigned char b = getBColor(col); for (int ch = 1; ch < 255; ch++) { // Calculate values Color value = Color( (palette[f].r * characterValues[ch] + palette[b].r * (TOTAL_CHARACTER_VALUE - characterValues[ch])) / TOTAL_CHARACTER_VALUE, (palette[f].g * characterValues[ch] + palette[b].g * (TOTAL_CHARACTER_VALUE - characterValues[ch])) / TOTAL_CHARACTER_VALUE, (palette[f].b * characterValues[ch] + palette[b].b * (TOTAL_CHARACTER_VALUE - characterValues[ch])) / TOTAL_CHARACTER_VALUE ); Color fvalue = Color( (palette[f].r * characterValues[ch]) / TOTAL_CHARACTER_VALUE, (palette[f].g * characterValues[ch]) / TOTAL_CHARACTER_VALUE, (palette[f].b * characterValues[ch]) / TOTAL_CHARACTER_VALUE ); Color bvalue = Color( (palette[b].r * (TOTAL_CHARACTER_VALUE - characterValues[ch])) / TOTAL_CHARACTER_VALUE, (palette[b].g * (TOTAL_CHARACTER_VALUE - characterValues[ch])) / TOTAL_CHARACTER_VALUE, (palette[b].b * (TOTAL_CHARACTER_VALUE - characterValues[ch])) / TOTAL_CHARACTER_VALUE ); // Add up score here float score = (float)((int)value.r - (int)imageValue.r) * (float)((int)value.r - (int)imageValue.r) + (float)((int)value.g - 
(int)imageValue.g) * (float)((int)value.g - (int)imageValue.g) + (float)((int)value.b - (int)imageValue.b) * (float)((int)value.b - (int)imageValue.b) + (float)((int)fvalue.r - (int)imageValue.r) * (float)((int)fvalue.r - (int)imageValue.r) + (float)((int)fvalue.g - (int)imageValue.g) * (float)((int)fvalue.g - (int)imageValue.g) + (float)((int)fvalue.b - (int)imageValue.b) * (float)((int)fvalue.b - (int)imageValue.b) + (float)((int)bvalue.r - (int)imageValue.r) * (float)((int)bvalue.r - (int)imageValue.r) + (float)((int)bvalue.g - (int)imageValue.g) * (float)((int)bvalue.g - (int)imageValue.g) + (float)((int)bvalue.b - (int)imageValue.b) * (float)((int)bvalue.b - (int)imageValue.b); // More if (score < closestScore) { closestPixel = Pixel((unsigned char)ch, (unsigned char)col); closestScore = score; } } } // Set the character/color combo here } } }}As a bonus, this is the result of my calculation. There's definitely room for improvement with the scoring but at least you can see the shape and colors.
Convert an image into characters and colors for the windows console
c++;time limit exceeded;image;console;windows
null
_unix.192884
What worked for other packages doesn't work for the kernel. Why?
First, sync:
[git@dioptase SRPMS]$ ssh root@localhost yum-builddep /home/git/rpmbuild/SRPMS/kernel-2.6.32-431.el6.src.rpm
Getting requirements for kernel-2.6.32-431.el6.src
 --> Already installed : module-init-tools-3.9-21.el6_4.x86_64
 --> Already installed : patch-2.6-6.el6.x86_64
 --> Already installed : bash-4.1.2-15.el6_4.x86_64
 --> Already installed : coreutils-8.4-31.el6.x86_64
 --> Already installed : 2:tar-1.23-11.el6.x86_64
 --> Already installed : bzip2-1.0.5-7.el6_0.x86_64
 --> Already installed : 1:findutils-4.4.2-6.el6.x86_64
 --> Already installed : gzip-1.3.12-19.el6_4.x86_64
 --> Already installed : m4-1.4.13-5.el6.x86_64
 --> Already installed : 4:perl-5.10.1-136.el6.x86_64
 --> Already installed : 1:make-3.81-20.el6.x86_64
 --> Already installed : diffutils-2.8.1-28.el6.x86_64
 --> Already installed : gawk-3.1.7-10.el6.x86_64
 --> Already installed : gcc-4.4.7-4.el6.x86_64
 --> Already installed : binutils-2.20.51.0.2-5.36.el6.x86_64
 --> Already installed : redhat-rpm-config-9.0.3-42.el6.noarch
 --> Already installed : net-tools-1.60-110.el6_2.x86_64
 --> Already installed : patchutils-0.3.1-3.1.el6.x86_64
 --> Already installed : rpm-build-4.8.0-37.el6.x86_64
 --> Already installed : xmlto-0.0.23-3.el6.x86_64
 --> Already installed : asciidoc-8.4.5-4.1.el6.noarch
 --> Already installed : gnupg2-2.0.14-6.el6_4.x86_64
 --> Already installed : python-2.6.6-51.el6.x86_64
 --> Already installed : hmaccalc-0.9.12-1.el6.x86_64
No uninstalled build requires
Then build:
[git@dioptase SRPMS]$ rpmbuild --rebuild kernel-2.6.32-431.el6.src.rpm
Installing kernel-2.6.32-431.el6.src.rpm
warning: user mockbuild does not exist - using root
warning: group mockbuild does not exist - using root
error: Failed build dependencies:
    elfutils-libelf-devel is needed by kernel-2.6.32-431.el6.x86_64
    elfutils-devel is needed by kernel-2.6.32-431.el6.x86_64
    binutils-devel is needed by kernel-2.6.32-431.el6.x86_64
    newt-devel is needed by kernel-2.6.32-431.el6.x86_64
    python-devel is needed by kernel-2.6.32-431.el6.x86_64
    audit-libs-devel is needed by kernel-2.6.32-431.el6.x86_64
Why doesn't yum-builddep install all dependencies?
rhel;yum;rpm
Because they are arch-dependent. Either rebuild the .src.rpm on the arch you care about (the one in the source repos is built on a random supported arch), or download and unpack the .src.rpm and run yum-builddep on the kernel.spec.
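A minimal sequence for the second route, run on the target arch; paths assume a default rpmbuild tree rather than the /home/git one from the question:

# fetch and unpack the source package
yumdownloader --source kernel
rpm -i kernel-*.src.rpm
# resolve the arch-specific BuildRequires from the spec file
yum-builddep ~/rpmbuild/SPECS/kernel.spec
# then rebuild
rpmbuild -ba ~/rpmbuild/SPECS/kernel.spec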
_softwareengineering.171024
It's an idea I've heard repeated in a handful of places. Some more or less acknowledging that once trying to solve a problem purely in SQL exceeds a certain level of complexity you should indeed be handling it in code.The logic behind the idea is that for the large majority of cases, the database engine will do a better job at finding the most efficient way of completing your task than you could in code. Especially when it comes to things like making the results conditional on operations performed on the data. Arguably with modern engines effectively JIT'ing + caching the compiled version of your query it'd make sense on the surface.The question is whether or not leveraging your database engine in this way is inherently bad design practice (and why). The lines become blurred further when all the logic exists inside the database and you're just hitting it via an ORM.
Never do in code what you can get the SQL server to do well for you - Is this a recipe for a bad design?
design patterns;sql
In layman's words: these are things that SQL is made to do and that, believe it or not, I've seen done in code:
joins - codewise it'd require complex array manipulation
filtering data (where) - codewise it'd require heavy inserting and deleting of items in lists
selecting columns - codewise it'd require heavy list or array manipulation
aggregate functions - codewise it'd require arrays to hold values and complex switch cases
foreign key integrity - codewise it'd require queries prior to insert and assumes nobody will use the data outside the app
primary key integrity - codewise it'd require queries prior to insert and assumes nobody will use the data outside the app
Doing these things instead of relying on SQL or the RDBMS leads to writing tons of code with no added value, meaning more code to debug and maintain. And it dangerously assumes the database will only be accessed via the application.
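To make it concrete, a single statement can do a join, a filter, a projection and two aggregates at once; replicating it in code means four of the hand-written mechanisms above. Table and column names here are made up for illustration:

SELECT   c.name,
         COUNT(o.id)  AS orders,
         SUM(o.total) AS revenue
FROM     customers c
JOIN     orders    o ON o.customer_id = c.id
WHERE    o.placed_at >= '2016-01-01'
GROUP BY c.name;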
_unix.349074
I have this code where the cmd usually works if I sprintf something to it, but when I try to run my Rscript, it does not work. Any hints? I get the error:

awk: cmd. line:9: cmd = Rscript ./date-script-r.r $1 3 2 1;
awk: cmd. line:9:       ^ syntax error
awk: cmd. line:9: cmd = Rscript ./date-script-r.r $1 3 2 1;
awk: cmd. line:9:       ^ unterminated regexp

Code:

awk="/usr/bin/awk"
awkcommand='
#d is the delimiter
BEGIN { OFS = FS = d }
$1 {
    #Expected args for the Rscript: (1, 2, 3, 4) = (dateString, yearPosition, monthPosition, dayPosition)
    cmd = Rscript ./date-script-r.r $1 3 2 1;
    cmd | getline $1;
    print;
    close(cmd);
}
'
awk -v d="," "$awkcommand" output-data/$filename > output-data/tmp.csv

Example of R-script output:

Rscript date-script-r.r 17-12-12 1 2 3
12-12-2017
Running R project script with arguments within AWK in a Bash Script (Ubuntu Linux)
bash;awk;r
replacecmd = Rscript ./date-script-r.r $1 3 2 1;bycmd = Rscript ./date-script-r.r $1 3 2 1 ;for complex awk script it might be better to put them in a awk-script, e.g. date-awk.awk$1 { #Expected args for the Rscript: (1, 2, 3, 4) = (dateString, yearPosition, monthPosition, dayPosition) cmd = Rscript ./date-script-r.r $1 3 2 1; cmd | getline $1; print; close(cmd);}that you would call withawk -F, -f date-awk.awk output-data/$filename > output-data/tmp.csvnote that-F, will set , as separator, there is no need for a relay variable.I expect this is part of a bigger scheme, or self tutorial. (there are easier way to compute date in shell or in awk).
_codereview.11181
How can I improve this code for counting the number of bits of a positive integer n in Python?

def bitcount(n):
    a = 1
    while 1 << a <= n:
        a <<= 1
    s = 0
    while a > 1:
        a >>= 1
        if n >= 1 << a:
            n >>= a
            s += a
    if n > 0:
        s += 1
    return s
Counting the number of bits of a positive integer
python;algorithm;bitwise
null
_softwareengineering.356153
I am refactoring an application that collects and displays measurement data that is stored in a database. Currently I have an interface called IMeasurementsDataService and an implementation MeasurementsDataService:

public interface IMeasurementsDataService
{
    IEnumerable<MeasurementResult> GetResults(DateTime rangeStart, DateTime rangeEnd);
    void AddResult(MeasurementResult result);
    void ModifyResult(MeasurementResult result);
    void DeleteMeasurement(MeasurementResult result);
}

Since the service is not used continuously, each method creates a DbContext instance and disposes it when finished. For example, the add method looks like this:

public void AddResult(MeasurementResult newResult)
{
    using (var context = new MeasurementsDbContext())
    {
        ...
        context.SaveChanges();
    }
}

I am injecting a singleton instance of MeasurementsDataService into some ViewModels and create mocks of IMeasurementsDataService for unit tests. But now I want to add a new method to IMeasurementsDataService and also want to add unit tests. Until now I have used a local SQL Server database that I always clear and prefill with some data before each unit test, and then use the real MeasurementsDbContext to perform the test operations. But this approach is very time consuming, and I would prefer to also mock the DbContext and provide some data directly from code. But when I change the code and inject the DbContext into the MeasurementsDataService, the DbContext lives through the whole lifetime of the application, which I consider bad practice. Creating a new instance of IMeasurementsDataService every time I need it also seems not to be a solution, because then I lose the ability to unit test the ViewModels properly. I am thinking of providing some kind of MeasurementDataServiceFactory to the ViewModels that creates instances of the data service when requested and can be mocked to provide a different data service for testing. Then the data service can have a DbContext per instance.
Do you think this is a good solution, or does anyone have a better and easier-to-handle solution to inject data services into view models and contexts into data services without having a DbContext living for the whole application runtime?
C# EntityFramework 6 DbContext and data service with dependency injection
c#;unit testing;dependency injection;class design;entity framework
null
_webapps.88129
Is it possible to prevent YouTube's (HTML5) fullscreen player controls from showing whenever it first goes into fullscreen mode? Like, don't show the control bar automatically at all (a screenshot of the bar was attached here). I want it to still show if I move the mouse, as it does now. I couldn't find anything in about:config on Firefox about this, only settings to disable the fullscreen button and the 'this page is now in fullscreen' warning.
Hide player controls while fullscreen in YouTube
youtube
null
_unix.2645
I read uses of the word "utilities" for commands/programs such as 'ls', 'chmod', 'mv', etc. Is "commands" in Linux referring to the same things, like top, ps, etc., or are those something different? What about "programs"? Are those the ones that don't come with the standard distribution and need to be installed, like irssi, emacs, kismet, etc.?
Difference between references of Linux utilities, commands and programs
command line;utilities;terminology
This question is hard to answer, as there are no formal definitions of those terms and different people will use them differently. Here I only give my use of them; others will have different points of view. For me, "tool" and "utility" are synonyms. I use those words for small programs which just do one small job. I'd call e.g. all the applications implemented as applets in busybox tools or utilities. Any application is a program for me, i.e. 'ls' is a tool, a utility and a program. Firefox is a program, but I wouldn't call it either a tool or a utility.
_unix.88281
I want to check connectivity between 2 servers (i.e. whether ssh will succeed). The main idea is to find the shortest way between server-a and server-b using a list of middle servers (for example, if I'm on a dev server and I want to connect to a prod server, a direct ssh will usually fail). Because this can take a while, I prefer not to use SSH itself; rather, I prefer to check first whether I can connect, and if so, then try to connect through SSH. Some possible routes, to give the idea:
server-a -> server-b
server-a -> middle-server-1 -> server-b
server-a -> middle-server-6 -> server-b
server-a -> middle-server-3 -> middle-server-2 -> server-b
Hope you understand what I'm looking for?
Method to check connectivity to other server
linux;networking;ssh
For checking server connectivity you have 4 tools at your disposal.

ping

This will check whether each of the servers you're attempting to connect through is reachable, but it won't be able to see whether, for example, middle-server-1 can reach server-b. You can gate how long ping will attempt to ping another server through the use of the count switch (-c). Limiting it to 1 should suffice.

$ ping -c 1 skinner
PING skinner (192.168.1.3) 56(84) bytes of data.
64 bytes from skinner (192.168.1.3): icmp_req=1 ttl=64 time=5.94 ms

--- skinner ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 5.946/5.946/5.946/0.000 ms

You can check the status of this command through the use of this variable, $?. If it has the value 0 then it was successful, anything else and a problem occurred.

$ echo $?
0

traceroute

Another command you can use to check connectivity is traceroute.

$ traceroute skinner
traceroute to skinner (192.168.1.3), 30 hops max, 60 byte packets
 1  skinner (192.168.1.3)  0.867 ms  0.859 ms  0.929 ms

Again this tool will not show connectivity through one server to another (same issue as ping), but it will show you the path through the network that you're taking to get to another server.

ssh

ssh can be used in BatchMode to test connectivity. With BatchMode=yes you'll attempt to connect to another server, bypassing the use of usernames/passwords and using only public/private keys. This typically speeds things up quite a bit.

$ ssh -o BatchMode=yes skinner

You can construct a rough one-liner that will check for connectivity to a server:

$ ssh -q -o BatchMode=yes skinner echo 2>&1 && echo $host SSH_OK || echo $host SSH_NOK
SSH_OK

If it works you'll get an SSH_OK message, if it fails you'll get an SSH_NOK message. An alternative to this method is to also include the ConnectTimeout option. This will guard the ssh client from taking a long time. Something like this typically is acceptable, ConnectTimeout=5. For example:

$ ssh -o BatchMode=yes -o ConnectTimeout=5 skinner echo ok 2>&1
ok

If it fails it will look something like this:

$ ssh -o BatchMode=yes -o ConnectTimeout=5 mungr echo ok 2>&1
ssh: connect to host 192.168.1.2 port 22: No route to host

It will also set the return status:

$ echo $?
255

telnet

You can use this test to see if an ssh server is accessible on another server using just a basic telnet:

$ echo "quit" | telnet skinner 22 2>/dev/null | grep Connected
Connected to skinner.
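Putting the ssh test to work on the original multi-hop problem, a rough sketch that tries the direct route first and then each middle server in turn (host names are placeholders, and key-based login to each hop is assumed):

#!/bin/bash
target=server-b
for hop in "" middle-server-1 middle-server-2; do
    if [ -z "$hop" ]; then
        # direct attempt
        ssh -q -o BatchMode=yes -o ConnectTimeout=5 "$target" true \
            && { echo "direct"; exit 0; }
    else
        # one-hop attempt: test from the middle server outward
        ssh -q -o BatchMode=yes -o ConnectTimeout=5 "$hop" \
            "ssh -q -o BatchMode=yes -o ConnectTimeout=5 $target true" \
            && { echo "via $hop"; exit 0; }
    fi
done
echo "no route found" >&2
exit 1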
_unix.382143
I understand why hard links on directories are dangerous (loops, problems for rmdir because of the parent-directory-link) and have read the other questions on that topic. And so I assumed that hard links on directories apart from . and .. are not used. And yet I see the following on CentOS 5 & 6:# ls -id /etc/init.d/459259 /etc/init.d/# ls -id /etc/rc.d/init.d/459259 /etc/rc.d/init.d/# ls -id /etc/init.d/../458798 /etc/init.d/../# ls -id /etc/rc.d/458798 /etc/rc.d/# ls -id /etc/425985 /etc/In other words 2 different paths to directories pointing to the same inode and the parent of /etc/init.d/ pointing to /etc/rc.d/ instead of /etc/. Is this really a case of hard-linked directories? If not, what is it? If yes, why does Red Hat do that?Edit: I'm sorry for asking a stupid question, I should have been able to see that it's a symlink. Not enough coffee today, it seems.
Is /etc/init.d hard-linked on CentOS?
centos;hard link;sysvinit
That's a soft link (symlink), not a hard link. Symbolic links point to other files: opening a symbolic link will open the file that the link points to, and removing a symbolic link with rm will remove the symbolic link itself, but not the actual file. This is indicated by the letter l at the beginning of the permissions:

lrwxrwxrwx. 1 root root 11 Aug 10  2016 init.d -> rc.d/init.d

Also, all the rc0.d to rc6.d directories are symlinks to rc.d/rc0.d through rc.d/rc6.d.
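If it helps to confirm this kind of thing from the shell, a few commands make the distinction visible (a sketch assuming GNU coreutils; the paths are the ones from the question):

# Type and inode: a symlink is reported as a 'symbolic link' with its own inode,
# and %N prints the link together with its target
stat -c '%i %h %F %N' /etc/init.d

# Follow the link chain to its final target
readlink -f /etc/init.d

# True hard links share an inode; find names pointing at the same file
find /etc -xdev -samefile /etc/rc.d/init.d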
_unix.14524
The list of uninstalled packages appearing in my aptitude is massive, but it is full of packages that I'm very sure I will never install. E.g., my laptop uses Intel graphics, so it makes no sense to install xserver-xorg-video-nouveau; therefore I want to hide it forever. This matters when listing with !~i!~v, which shows all packages available for installation (I still need to know the filter for packages with dependencies marked UNAVAILABLE): if I can hide the uninteresting ones, it's easier to find packages that I haven't tried out. How do I make the whole apt database ignore the uninteresting packages, if that's possible?
How do I hide packages from appearing in apt system?
apt;aptitude
null
_unix.84592
VirtualBox seems to break my SSH ProxyCommand... here are the details:

I want to open an SSH connection from my laptop to my desktop over an SSH (reverse) tunnel, in one step, using ProxyCommand (and netcat). It works when run from the OS installed on my laptop. It fails when run from a VirtualBox guest on my laptop.

My normal setup is that Kubuntu 12.04 runs in VirtualBox on the desktop and Kubuntu 12.04 is installed directly on my laptop. This works fine. However, if I try to do the exact same thing with a Kubuntu 12.04 VirtualBox instance on my laptop, it fails as I will detail below.

Here's what my SSH tunnel looks like:

laptop--->nat--->middleman<--nat<--desktop

The desktop & laptop run Kubuntu 12.04 regardless of whether I'm using VirtualBox; the middleman is Ubuntu 8.04. I'll describe my SSH tunnel in more detail. Regarding this leg:

middleman<--nat<--desktop

...here is how it is established:

autossh -M 5234 -N -f -R 1234:localhost:22 user@middleman.com

That part is automatic and trouble-free. From the laptop:

laptop--->nat--->middleman

I can connect to middleman and then from the middleman to the desktop in two steps under all conditions:

me@laptop:~$ ssh -i ~/.ssh/id_rsa admin@middleman
admin@middleman:~$ ssh -i ~/.ssh/id_rsa localhost -p 1234

To do this in one step I use netcat (nc) on middleman, and I edit my SSH config file on the laptop to use ProxyCommand and nc:

me@laptop:~/.ssh$ nano config

The contents are:

Host family_desktops
    ProxyCommand ssh middleman_fqdn nc localhost %p
    User admin
    PasswordAuthentication no
    IdentityFile ~/.ssh/my_id_rsa

Where middleman_fqdn is like middleman.com. Then I just connect to the desktop in one step:

me@laptop:~$ ssh family_desktops -p 1234

Here's the problem. This works when run directly from my laptop (where Kubuntu 12.04 is installed). But when I run it from the VirtualBox instance of Kubuntu 12.04 (guest) on my laptop, it does not work: SSH asks for the password for middleman. There is no password set up, so I cannot connect. The strange thing is that running the two separate commands allows me to connect to my desktop without asking for a password. The ssh -vvv option shows no errors and nothing helpful.

I have been troubleshooting all day and I cannot find any difference in settings between the VirtualBox instance and the host OS on my laptop. I use the exact same id_rsa keys (public & private), the same user, and auth.log on the middleman server sees the same IP address in both cases. So why would running VirtualBox on my laptop make this SSH tunnel stop working (at least in one step)?
SSH reverse tunnel works except when I use a VirtualBox instance at one end. Why does VB break SSH?
ssh;virtualbox;proxy;ssh tunneling
It appears that this was my problem:

Bug #201786 ssh "Agent admitted failure to sign using the key" : Bugs : gnome-keyring package : Ubuntu
https://bugs.launchpad.net/ubuntu/+source/gnome-keyring/+bug/201786

To resolve my problem all I did was run ssh-add on the VirtualBox instance running on my laptop:

ssh-add ~/.ssh/my_other_key

which is the solution mentioned in this comment:

https://bugs.launchpad.net/ubuntu/+source/gnome-keyring/+bug/201786/comments/58

I do not understand why the problem shows up only when running VirtualBox, but since some feel it is related to endian-ness, maybe that explains it in a very nebulous way. Anyway, that's resolved it for me.
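For anyone hitting the same symptom, a quick way to check whether the agent is the culprit (a sketch; the key path and host alias are the ones used above, so adjust them to your setup):

# Show what the agent currently holds; "The agent has no identities."
# means the key was never loaded into the agent
ssh-add -l

# Load the key your ssh config references, then retry the one-step connection
ssh-add ~/.ssh/my_id_rsa
ssh family_desktops -p 1234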
_codereview.90954
I recently came across the classic algorithm for detecting cycles in a directed graph using recursive DFS. This implementation makes use of a stack to track nodes currently being visited and an extra set of nodes which have already been explored. The second set is not strictly required, but it is an optimization to prevent iterating over path suffixes that have previously been determined not to be part of a cycle. I had no trouble implementing this in Python by simply creating a (mutable) set object and sharing it between all branches of the recursion. My attempt to reimplement this algorithm in Haskell turned out to be much worse (and still isn't as efficient as the original). I would like some pointers about how to restructure this so that it isn't a mess of recursive folds, but without giving up the set of visited nodes, which lets me avoid taking branches that have already been explored.

import Data.Maybe
import qualified Data.IntMap.Strict as M
import qualified Data.Char as C
import Debug.Trace

edges :: String
edges = "a b\n\
        \a c\n\
        \b c\n\
        \b d\n\
        \c d\n\
        \c e\n\
        \e f\n\
        \e g\n\
        \f g\n\
        \g c"

type Node = Int

parseGraph :: String -> M.IntMap [Node]
parseGraph = foldr go M.empty . lines
  where
    go line m = let [key, rule] = map ruleToKey (words line)
                in M.insertWith inserter key [rule] m
    inserter [rule] olds = rule:olds

ruleToKey :: String -> Node
ruleToKey rule = C.ord (head rule) - 97

keyToRule :: Node -> String
keyToRule key = return $ C.chr (key + 97)

hasCycle :: M.IntMap [Node] -> Maybe [Node]
hasCycle m = reverse <$> ret
  where
    dummyM = M.insert phantom (reverse $ M.keys m) m
    phantom = -1
    (_, _, ret) = hasCycleHelper dummyM phantom ([], [], Nothing)

hasCycleHelper :: M.IntMap [Node] -> Node -> ([Node], [Node], Maybe [Node]) -> ([Node], [Node], Maybe [Node])
hasCycleHelper rules rule (visited', visiting', cyc) = trace rendered $ case () of
  _ | isJust cyc || rule `elem` visited' -> (visited', visiting', cyc)
    | rule `elem` visiting' -> ([], [], Just (takeWhile (/= rule) visiting' ++ [rule]))
    | otherwise -> returned
  where
    children = M.findWithDefault [] rule rules
    (visited, _, ret) = foldr (hasCycleHelper rules) acc children
    returned = (rule:visited, visiting', ret)
    acc = (visited', rule:visiting', Nothing)
    rendered = "Current '" ++ keyToRule rule ++ "', " ++
               "Visiting '" ++ map (head . keyToRule) visiting' ++ "', " ++
               "Visited '" ++ map (head . keyToRule) visited' ++ "', " ++
               "Found " ++ show (map (head . keyToRule) <$> cyc)

main :: IO ()
main = do
  let rules = parseGraph edges
      cyc = map keyToRule <$> hasCycle rules
  print cyc

With output:

Current '`', Visiting '', Visited '', Found Nothing
Current 'a', Visiting '`', Visited '', Found Nothing
Current 'c', Visiting 'a`', Visited '', Found Nothing
Current 'e', Visiting 'ca`', Visited '', Found Nothing
Current 'g', Visiting 'eca`', Visited '', Found Nothing
Current 'c', Visiting 'geca`', Visited '', Found Nothing
Current 'f', Visiting 'eca`', Visited 'g', Found Just "gec"
Current 'd', Visiting 'ca`', Visited 'eg', Found Just "gec"
Current 'b', Visiting 'a`', Visited 'ceg', Found Just "gec"
Current 'b', Visiting '`', Visited 'aceg', Found Just "gec"
Current 'c', Visiting '`', Visited 'aceg', Found Just "gec"
Current 'e', Visiting '`', Visited 'aceg', Found Just "gec"
Current 'f', Visiting '`', Visited 'aceg', Found Just "gec"
Current 'g', Visiting '`', Visited 'aceg', Found Just "gec"
Just ["c","e","g"]

Because I'm using foldr, it can't abort the search after discovering a cycle. It has to iterate over all the nodes in the graph at the top level, carrying along the result.
I think that's pretty miserable, but I thought I'd ask about it before rewriting everything.
Detecting cycles in a directed graph without using mutable sets of nodes
haskell;functional programming;graph;depth first search;memoization
An easy way to detect a cycle is through the implementation of a disjoint-set structure, sometimes called a Union-Find structure. You start with a source node and attempt to add a node to that set. If your Find() call for the source node and the other node returns the same root node, then a cycle would result if the nodes were unioned.

UPDATE:

You could implement a topological sort, arranging the nodes with edges going from left to right. A topological sort can be completed if and only if the graph is a Directed Acyclic Graph, so if the algorithm detects any cycles, it will stop. The runtime is O(|V| + |E|), which is better than DFS in the case that repetition occurs during traversal.

Topological Sorting
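As a hedged illustration (note that union-find most naturally detects cycles in undirected graphs; for the directed graph in the question, the topological-sort route applies directly), here is a minimal sketch of Kahn's algorithm in Python, assuming the same node -> successors adjacency map the question builds:

from collections import deque

def has_cycle(graph):
    # graph: dict mapping node -> list of successor nodes
    indegree = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indegree[v] = indegree.get(v, 0) + 1
    # Start from every node with no incoming edges
    queue = deque(u for u, d in indegree.items() if d == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v in graph.get(u, []):
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    # A cycle exists iff the sort could not consume every node
    return seen < len(indegree)

print(has_cycle({0: [1], 1: [2], 2: [0]}))  # True: 0 -> 1 -> 2 -> 0
print(has_cycle({0: [1], 1: [2], 2: []}))   # False: acyclic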
_unix.272800
I want to replace 0/1 with hetero, but whenever I try the commands I know, I get a syntax error because the / character is already part of the command syntax. Can anyone suggest a command that solves this issue?
Find and replace in Linux
find;command;replace
null
_codereview.111449
I've attempted a functional solution to Conway's Game of Life in Python. The example code allows you to see the next generation of the universe by calling the step() function, passing the current generation of the universe. The universe is represented as a set of live cells. Live cells are represented as a tuple of x, y coordinates. All suggestions for improvement are welcome. I'd especially like feedback on the following:

- Approach: is it functional? If not, why not?
- Test cases: are there any test cases I've missed that highlight a bug? Can it be done with fewer test cases while maintaining the same code coverage?
- Python idioms and conventions
- Making use of built-in functions and datatypes

import unittest


def get_neighbours(x, y):
    '''
    Returns the set of the given cell's (x,y) 8 neighbours.

    :param x: x coordinate of cell
    :param y: y coordinate of cell
    '''
    return {(x + dx, y + dy) for dx, dy in [(-1, -1), (-1, 0), (-1, 1),
                                            (0, -1), (0, 1),
                                            (1, -1), (1, 0), (1, 1)]}


def is_survivor(universe, x, y):
    '''
    Returns True if given cell will survive to the next generation, False otherwise.

    :param universe: set of live cells in the universe. A live cell is the tuple (x,y)
    :param x: x coordinate of cell
    :param y: y coordinate of cell
    '''
    num_live_neighbours = len(get_neighbours(x, y) & universe)
    return num_live_neighbours == 2 or num_live_neighbours == 3


def is_born(universe, x, y):
    '''
    Returns True if given cell will be born in the next generation, False otherwise.

    :param universe: set of live cells in the universe. A live cell is the tuple (x,y)
    :param x: x coordinate of cell
    :param y: y coordinate of cell
    '''
    return len(get_neighbours(x, y) & universe) == 3


def step(universe):
    '''
    Returns the new universe after a single step in the game of life.

    :param universe: set of live cells in the universe. A live cell is the tuple (x,y)
    '''
    survivors = {(x, y) for x, y in universe if is_survivor(universe, x, y)}
    list_of_neighbour_sets = [get_neighbours(x, y) for x, y in universe]
    flattened_neighbour_set = {item for subset in list_of_neighbour_sets for item in subset}
    dead_neighbours = flattened_neighbour_set - universe
    births = {(x, y) for x, y in dead_neighbours if is_born(universe, x, y)}
    return survivors | births


class Test(unittest.TestCase):

    def test_get_neigbours(self):
        self.assertEqual({(-1, -1), (0, -1), (1, -1), (-1, 0), (1, 0),
                          (-1, 1), (0, 1), (1, 1)}, get_neighbours(0, 0))
        self.assertEqual({(4, 5), (4, 6), (4, 7), (5, 5), (5, 7),
                          (6, 5), (6, 6), (6, 7)}, get_neighbours(5, 6))

    def test_is_survivour_should_return_true_if_cell_has_2_live_neighbours(self):
        self.assertTrue(is_survivor({(0, 0), (1, 0), (2, 0)}, 1, 0))

    def test_is_survivour_should_return_true_if_cell_has_3_live_neighbours(self):
        self.assertTrue(is_survivor({(0, 0), (1, 0), (0, 1), (1, 1)}, 0, 0))
        self.assertTrue(is_survivor({(0, 0), (1, 0), (0, 1), (1, 1)}, 1, 0))
        self.assertTrue(is_survivor({(0, 0), (1, 0), (0, 1), (1, 1)}, 0, 1))
        self.assertTrue(is_survivor({(0, 0), (1, 0), (0, 1), (1, 1)}, 1, 1))

    def test_is_survivour_should_return_false_if_cell_is_underpopulated(self):
        self.assertFalse(is_survivor({(0, 0)}, 0, 0))

    def test_is_survivour_should_return_false_if_cell_is_overpopulated(self):
        self.assertFalse(is_survivor({(-1, -1), (0, -1), (1, -1), (-1, 0),
                                      (1, 0), (-1, 1), (0, 1), (1, 1)}, 0, 0))

    def test_is_born_should_return_false_if_dead_cell_doesnt_have_exactly_3_live_neighbours(self):
        self.assertFalse(is_born({(0, 0)}, 0, 0))

    def test_is_born_should_return_true_if_dead_cell_has_exactly_3_live_neighbours(self):
        self.assertTrue(is_born({(0, 0), (1, 0), (0, 1)}, 1, 1))

    def test_L_becomes_block_after_step(self):
        self.assertEqual({(0, 0), (1, 0), (0, 1), (1, 1)},
                         step({(0, 0), (0, 1), (1, 1)}))


if __name__ == '__main__':
    unittest.main()
Game of Life rules in 14 lines of Python
python;python 3.x;functional programming;game of life
For docstrings, if you're following the Sphinx documentation format (which it looks like you are), you can specify an explicit :return: field to document what exactly it is your function is returning. PEP-0257 has various other conventions for docstrings, like leaving a blank line between the summary line and the rest of the docstring, if you're really interested.

In terms of functional programming, you could use some of Python's functional programming functions, e.g. the builtins filter and map, and also any functions from the functools module. Also, since you are already not mutating any of the variables in your program (as you wouldn't do when programming in a functional manner), you can use frozenset()s in place of where you're currently using sets, as frozen sets are explicitly immutable.

Here is an attempt at functionifying the step function. Since Python 3, you can put type hints in the signature of functions, and since Python 3.5 you can also use the typing module to define types in a formal way, which can allow for better static type-checking if you're using a typing-aware IDE. Since you're wanting to program in a functional way, I'm guessing this would be of more interest than usual.

from functools import reduce
from typing import Tuple, Set

Cell = Tuple[int, int]
Universe = Set[Cell]  # or even FrozenSet[Cell]...


def step(universe: Universe) -> Universe:
    """Evaluate the next step in the Game of Life.

    :param universe: set of live cells in the universe. A live cell is the tuple (x,y)
    :return: the new universe after a single step in the game of life.
    """
    survivors = filter(lambda cell: is_survivor(universe, *cell), universe)
    neighbours = reduce(set.union, map(lambda cell: get_neighbours(*cell), universe))
    dead_neighbours = filter(lambda cell: cell not in universe, neighbours)
    births = filter(lambda cell: is_born(universe, *cell), dead_neighbours)
    return frozenset(survivors) | frozenset(births)

To be honest though, functional programming does not seem idiomatic in Python, even though it can be done; I would say your current code is pretty clear and idiomatic as is, using set comprehensions and such.
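If it helps to sanity-check that step, a classic blinker makes a quick usage example (this assumes the is_survivor, is_born and get_neighbours functions from the question are in scope; set element order in the printed output may vary):

# A blinker oscillates between a horizontal and a vertical bar of three cells
blinker = frozenset({(0, 1), (1, 1), (2, 1)})
print(step(blinker))        # frozenset({(1, 0), (1, 1), (1, 2)})
print(step(step(blinker)))  # back to the original horizontal blinker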
_softwareengineering.103375
I have a couple of Python modules that are meant to be run as scripts. How should I write the docstrings at the module and function level to make it clear how to run and use the module?
How should I document a Python script?
python;documentation
null
_webmaster.2938
I see many sites that have their heading as an image. What is the benefit of this? When is it best practice to use text as images, as opposed to just having text in my HTML? Is this just for supporting non-standard fonts?
When should I consider having text as an image in my website heading?
website design;images
Reasons to use an image header with text:

- The text isn't necessary for SEO.
- The font can't be reproduced on the web.
- The text is integrated too closely with a logo to separate the two.

Reasons to use text as your header:

- The text is necessary for SEO.
- The text needs to scale according to the browser window.
- The text needs to scale according to an em-based font size.
- The text needs to be dynamic in any way.

If you do use an image, consider using the <img> tag with an alt attribute rather than a CSS background image. This is important for accessibility purposes, but it could also contribute to SEO.
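For the image case, a minimal sketch of the accessible markup (the file name and alt text are made up for illustration):

<h1>
  <!-- The alt text carries the heading for screen readers and crawlers -->
  <img src="header-logo.png" alt="Acme Widgets">
</h1>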
_webapps.62940
I'd like to align the titles in my swimlanes to the left side, but the collapse/expand buttons are hiding the words. Is there a way I can make that kind of button invisible?I've looked through the code that's editable and haven't found it there either. When I use element inspector it shows the collapse/expand button as being separate from the container it's attached to.
Is there a way to hide the collapse/expand button in draw.io?
draw.io
Select the swimlane and on the menu invoke Arrange->Collapsible to toggle the collapse/expand button.
_datascience.22118
Why do we need shortcut connections to build residual networks, and how do they help to train neural networks for classification and detection?
Why do we need shortcut connections to build residual networks?
machine learning;neural network;deep learning;computer vision;caffe
null
_unix.332347
I have these results from find:

$ find subprojects -mindepth 1 -maxdepth 1
subprojects/install-globally-first
subprojects/installation-test-project-custom-config
subprojects/install-via-github
subprojects/init-from-nothing
subprojects/node-path-test
subprojects/install-globally-with-nvm
subprojects/installation-test-project
subprojects/parallel-installs-of-suman

I want to map these results to:

subprojects/install-globally-first/test.sh
subprojects/installation-test-project-custom-config/test.sh
subprojects/install-via-github/test.sh
subprojects/init-from-nothing/test.sh
subprojects/node-path-test/test.sh
subprojects/install-globally-with-nvm/test.sh
subprojects/installation-test-project/test.sh
subprojects/parallel-installs-of-suman/test.sh

(All I am doing in this case is appending /test.sh to the results.) I am sure a good solution is something like:

$ find subprojects -mindepth 1 -maxdepth 1 | something (?)

but I don't know what it would be! Pretty much a newbie here. There is probably more than one way to do it; I'm looking for the simplest, most robust solution. Note that since the test.sh files already exist in these paths, I could just do this:

find subprojects -mindepth 2 -maxdepth 2 -name test.sh

But I guess I am looking for a way to do the mapping assuming these test.sh files don't exist yet on the filesystem.
map the results from find
find;pipe;xargs
null