qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---
15,695,992 | I am trying to capitalise every word in a php string but the function isn't detecting a word that is being immediately followed by a bracket. How can I make it so that a word directly after a bracket is capitalised?
Example: amharic (ethiopian) .... Amharic (Ethiopian)
(currently using ucwords(), PHP displays Amharic (ethiopian) ) | 2013/03/29 | [
"https://Stackoverflow.com/questions/15695992",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/844438/"
] | Try this; I tried it:
```
$text= "amharic (ethiopian)";
echo mb_convert_case($text, MB_CASE_TITLE, "UTF-8");
```
> **Output**: Amharic (Ethiopian)
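For comparison, here is a rough JavaScript sketch of the same title-casing idea (the `titleCase` name is made up); `\b` matches between a non-word character such as a space, bracket or dash and a letter, so the letter after a bracket is uppercased too:

```javascript
// Lowercase everything, then uppercase each letter that starts a "word".
// \b treats '(', '-' and spaces all as word boundaries.
function titleCase(s) {
  return s.toLowerCase().replace(/\b[a-z]/g, (c) => c.toUpperCase());
}

console.log(titleCase("amharic (ethiopian)")); // Amharic (Ethiopian)
```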
Note: the mbstring extension must be enabled in PHP. | For anyone else needing to title-case around other characters, including dashes, brackets and parentheses: you can use a regex with preg\_replace\_callback to capture a word that is split by a dash or begins with a non-alphabetic character.
```
/** Capitalize first letter if string is only one word **/
$STR = ucwords(strtolower($STR));
/** Correct Title Case for words with non-alphabet characters, e.g. Screw-Washer **/
$STR = preg_replace_callback('/([A-Z]*)([^A-Z]+)([A-Z]+)/i',
function ($matches) {
$return = ucwords(strtolower($matches[1])).$matches[2].ucwords(strtolower($matches[3]));
return $return;
}, $STR);
``` |
15,695,992 | I am trying to capitalise every word in a php string but the function isn't detecting a word that is being immediately followed by a bracket. How can I make it so that a word directly after a bracket is capitalised?
Example: amharic (ethiopian) .... Amharic (Ethiopian)
(currently using ucwords(), PHP displays Amharic (ethiopian) ) | 2013/03/29 | [
"https://Stackoverflow.com/questions/15695992",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/844438/"
] | Try this; I tried it:
```
$text= "amharic (ethiopian)";
echo mb_convert_case($text, MB_CASE_TITLE, "UTF-8");
```
> **Output**: Amharic (Ethiopian)
Note: the mbstring extension must be enabled in PHP. | You should be able to solve it with preg\_replace (<http://php.net/manual/en/function.preg-replace.php>) and a regular expression like `/[A-Z][a-zA-Z]*/` or similar. |
15,695,992 | I am trying to capitalise every word in a php string but the function isn't detecting a word that is being immediately followed by a bracket. How can I make it so that a word directly after a bracket is capitalised?
Example: amharic (ethiopian) .... Amharic (Ethiopian)
(currently using ucwords(), PHP displays Amharic (ethiopian) ) | 2013/03/29 | [
"https://Stackoverflow.com/questions/15695992",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/844438/"
] | This is a [known bug](https://bugs.php.net/bug.php?id=21626): `ucwords()` only capitalises after whitespace, so it needs a space between the opening parenthesis and the first letter. Here's a workaround:
```
$var = "amharic (ethiopian)";
echo str_replace('( ', '(', ucwords(str_replace('(', '( ', $var)));
```
**Result**
```
Amharic (Ethiopian)
```
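The same space-insertion trick translates directly to JavaScript; here is a rough sketch (the helper name is made up):

```javascript
// Insert a space after each '(' so the following letter starts a word,
// uppercase word-initial letters, then strip the helper spaces again.
function ucwordsWithBrackets(s) {
  const spaced = s.replace(/\(/g, "( ");
  const cased = spaced.replace(/(^|\s)([a-z])/g, (m, pre, ch) => pre + ch.toUpperCase());
  return cased.replace(/\( /g, "(");
}

console.log(ucwordsWithBrackets("amharic (ethiopian)")); // Amharic (Ethiopian)
```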
[See the demo](http://codepad.org/dF4gmNAL) | For anyone else needing to title-case around other characters, including dashes, brackets and parentheses: you can use a regex with preg\_replace\_callback to capture a word that is split by a dash or begins with a non-alphabetic character.
```
/** Capitalize first letter if string is only one word **/
$STR = ucwords(strtolower($STR));
/** Correct Title Case for words with non-alphabet characters, e.g. Screw-Washer **/
$STR = preg_replace_callback('/([A-Z]*)([^A-Z]+)([A-Z]+)/i',
function ($matches) {
$return = ucwords(strtolower($matches[1])).$matches[2].ucwords(strtolower($matches[3]));
return $return;
}, $STR);
``` |
15,695,992 | I am trying to capitalise every word in a php string but the function isn't detecting a word that is being immediately followed by a bracket. How can I make it so that a word directly after a bracket is capitalised?
Example: amharic (ethiopian) .... Amharic (Ethiopian)
(currently using ucwords(), PHP displays Amharic (ethiopian) ) | 2013/03/29 | [
"https://Stackoverflow.com/questions/15695992",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/844438/"
] | This is a [known bug](https://bugs.php.net/bug.php?id=21626): `ucwords()` only capitalises after whitespace, so it needs a space between the opening parenthesis and the first letter. Here's a workaround:
```
$var = "amharic (ethiopian)";
echo str_replace('( ', '(', ucwords(str_replace('(', '( ', $var)));
```
**Result**
```
Amharic (Ethiopian)
```
[See the demo](http://codepad.org/dF4gmNAL) | The function below converts a string so that the word after a bracket is capitalised:
```
function ucWordWithBracket($edit){
$new_word = str_replace("(","( ",$edit);
$temp = ucwords($new_word);
$new_word = str_replace("( ","(",$temp);
return $new_word;
}
```
`ucWordWithBracket("amharic (ethiopian)")`
The output of the function is "Amharic (Ethiopian)". |
15,695,992 | I am trying to capitalise every word in a php string but the function isn't detecting a word that is being immediately followed by a bracket. How can I make it so that a word directly after a bracket is capitalised?
Example: amharic (ethiopian) .... Amharic (Ethiopian)
(currently using ucwords(), PHP displays Amharic (ethiopian) ) | 2013/03/29 | [
"https://Stackoverflow.com/questions/15695992",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/844438/"
] | The function below converts a string so that the word after a bracket is capitalised:
```
function ucWordWithBracket($edit){
$new_word = str_replace("(","( ",$edit);
$temp = ucwords($new_word);
$new_word = str_replace("( ","(",$temp);
return $new_word;
}
```
`ucWordWithBracket("amharic (ethiopian)")`
The output of the function is "Amharic (Ethiopian)". | For anyone else needing to title-case around other characters, including dashes, brackets and parentheses: you can use a regex with preg\_replace\_callback to capture a word that is split by a dash or begins with a non-alphabetic character.
```
/** Capitalize first letter if string is only one word **/
$STR = ucwords(strtolower($STR));
/** Correct Title Case for words with non-alphabet characters, e.g. Screw-Washer **/
$STR = preg_replace_callback('/([A-Z]*)([^A-Z]+)([A-Z]+)/i',
function ($matches) {
$return = ucwords(strtolower($matches[1])).$matches[2].ucwords(strtolower($matches[3]));
return $return;
}, $STR);
``` |
15,695,992 | I am trying to capitalise every word in a php string but the function isn't detecting a word that is being immediately followed by a bracket. How can I make it so that a word directly after a bracket is capitalised?
Example: amharic (ethiopian) .... Amharic (Ethiopian)
(currently using ucwords(), PHP displays Amharic (ethiopian) ) | 2013/03/29 | [
"https://Stackoverflow.com/questions/15695992",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/844438/"
] | Try this; I tried it:
```
$text= "amharic (ethiopian)";
echo mb_convert_case($text, MB_CASE_TITLE, "UTF-8");
```
> **Output**: Amharic (Ethiopian)
Note: the mbstring extension must be enabled in PHP. | I think the problem is that PHP capitalises the first character after a space; since the bracket is the character right after the space, it is treated as the first character of the word, and the letters that follow are treated as the rest of that word.
So to work around this, insert a space between the bracket and the letters so PHP treats the bracket as a single character rather than the beginning of a word. :) |
15,695,992 | I am trying to capitalise every word in a php string but the function isn't detecting a word that is being immediately followed by a bracket. How can I make it so that a word directly after a bracket is capitalised?
Example: amharic (ethiopian) .... Amharic (Ethiopian)
(currently using ucwords(), PHP displays Amharic (ethiopian) ) | 2013/03/29 | [
"https://Stackoverflow.com/questions/15695992",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/844438/"
] | This is a [known bug](https://bugs.php.net/bug.php?id=21626): `ucwords()` only capitalises after whitespace, so it needs a space between the opening parenthesis and the first letter. Here's a workaround:
```
$var = "amharic (ethiopian)";
echo str_replace('( ', '(', ucwords(str_replace('(', '( ', $var)));
```
**Result**
```
Amharic (Ethiopian)
```
[See the demo](http://codepad.org/dF4gmNAL) | I think the problem is that PHP capitalises the first character after a space; since the bracket is the character right after the space, it is treated as the first character of the word, and the letters that follow are treated as the rest of that word.
So to work around this, insert a space between the bracket and the letters so PHP treats the bracket as a single character rather than the beginning of a word. :) |
15,695,992 | I am trying to capitalise every word in a php string but the function isn't detecting a word that is being immediately followed by a bracket. How can I make it so that a word directly after a bracket is capitalised?
Example: amharic (ethiopian) .... Amharic (Ethiopian)
(currently using ucwords(), PHP displays Amharic (ethiopian) ) | 2013/03/29 | [
"https://Stackoverflow.com/questions/15695992",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/844438/"
] | This is a [known bug](https://bugs.php.net/bug.php?id=21626): `ucwords()` only capitalises after whitespace, so it needs a space between the opening parenthesis and the first letter. Here's a workaround:
```
$var = "amharic (ethiopian)";
echo str_replace('( ', '(', ucwords(str_replace('(', '( ', $var)));
```
**Result**
```
Amharic (Ethiopian)
```
[See the demo](http://codepad.org/dF4gmNAL) | You should be able to solve it with preg\_replace (<http://php.net/manual/en/function.preg-replace.php>) and a regular expression like `/[A-Z][a-zA-Z]*/` or similar. |
15,695,992 | I am trying to capitalise every word in a php string but the function isn't detecting a word that is being immediately followed by a bracket. How can I make it so that a word directly after a bracket is capitalised?
Example: amharic (ethiopian) .... Amharic (Ethiopian)
(currently using ucwords(), PHP displays Amharic (ethiopian) ) | 2013/03/29 | [
"https://Stackoverflow.com/questions/15695992",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/844438/"
] | Try this; I tried it:
```
$text= "amharic (ethiopian)";
echo mb_convert_case($text, MB_CASE_TITLE, "UTF-8");
```
> **Output**: Amharic (Ethiopian)
Note: the mbstring extension must be enabled in PHP. | The function below converts a string so that the word after a bracket is capitalised:
```
function ucWordWithBracket($edit){
$new_word = str_replace("(","( ",$edit);
$temp = ucwords($new_word);
$new_word = str_replace("( ","(",$temp);
return $new_word;
}
```
`ucWordWithBracket("amharic (ethiopian)")`
The output of the function is "Amharic (Ethiopian)". |
15,695,992 | I am trying to capitalise every word in a php string but the function isn't detecting a word that is being immediately followed by a bracket. How can I make it so that a word directly after a bracket is capitalised?
Example: amharic (ethiopian) .... Amharic (Ethiopian)
(currently using ucwords(), PHP displays Amharic (ethiopian) ) | 2013/03/29 | [
"https://Stackoverflow.com/questions/15695992",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/844438/"
] | The function below converts a string so that the word after a bracket is capitalised:
```
function ucWordWithBracket($edit){
$new_word = str_replace("(","( ",$edit);
$temp = ucwords($new_word);
$new_word = str_replace("( ","(",$temp);
return $new_word;
}
```
`ucWordWithBracket("amharic (ethiopian)")`
The output of the function is "Amharic (Ethiopian)". | You should be able to solve it with preg\_replace (<http://php.net/manual/en/function.preg-replace.php>) and a regular expression like `/[A-Z][a-zA-Z]*/` or similar. |
25,078 | As we've rolled into fruit season this year I've started canning up some jams using a traditional boiling process in my big stock pot (though I'm getting pretty close to investing in a pressure canner at this point) and I'm running into a problem: I wind up with too much jam to process in a single batch (which comfortably handles about 7 4-ounce jars or 4 8-ounce jars), so I'm forced to do one batch, then sterilize a new set of jars before I can process another batch – and in the meantime, my jam is continuing to cook down, pushing past 220F/105C and taking on a much thicker, taffy-like consistency. Is there any way of keeping my texture consistent across batches, or should I just be making smaller loads or accepting the difference in results? | 2012/07/17 | [
"https://cooking.stackexchange.com/questions/25078",
"https://cooking.stackexchange.com",
"https://cooking.stackexchange.com/users/2859/"
] | If it's getting cooked too much, well, stop cooking it. You can cover it to prevent water loss. If it's too hot sitting on the same burner (on an electric stove), move the pot. It sounds like your cast-iron pot might be part of the problem, too - if it's still staying too hot, you could pour it into something else. Finally, if you do still have significant water loss, I believe you should be able to add a little water (careful not to add too much) and stir and reheat.
P.S. you could also just get a lighter pot. (I know that's not ideal.) | You could avoid the problems by having more jars ready. Here's how I get my jars ready.
\*When you want to clean jars up for canning, the sterilizing process is between you and yours, not the health authorities.
Here is what I do to clean up my jars quickly. I save jars where the lid acts as a button indicator - if the button is popped up when you buy it in a shop, you don't use it.
First, you wash the jar in the normal way immediately after you empty it. Then you put about 2cm to 3cm of water into the jar and microwave it for around a minute, so that the water boils.
Take the jar out carefully - use a cloth or oven glove, it is HOT, and be aware that it may spontaneously "bump" - moving it could cause the boiling water to jump out of the jar, do not point it at your face. Prod it with something long, like a wooden spoon, before you pick it up.
Screw the cap back on. Turn it upside down so that the hot water contacts the inside of the lid. If the water leaks out at this point, don't use that jar - the seal would be OK in an industrial process, but it will not be useful for this trick.
Stand the jar on one side until it cools, add it to your collection.
When the time comes that you want to use the jars, pick out enough to do the job and repeat the boiling out process once again, being sure to only use jars that have properly sealed - the button pops up again to tell you the seal was effective, something will grow in the water if the job was not done right.
Then you make the jam or chutney to put into the jars.
This way you never become short of jars. If you do not pick out enough, it is the work of a couple of minutes to make another jar ready. If you pick out too many, you can put the surplus back into your stockpile.\*
If it makes you feel more secure, you can use the solution for sterilizing baby feed bottles to immerse the jar lids during the process, but you must rinse off that solution before you re-fit the lids on the jars.
Wet heat kills more bugs than dry heat ... this way, it is a matter of moments to have extra jars ready - or more than you really need, you don't waste any sterilized jars.
I've taken to using this trick so I can use portions from a jar of, say, pasta sauce. When I use half the jar, I microwave it for a minute and put the lid back, then stand it to cool before I put it in the fridge.
Not properly and fully tested as a method, but I've been doing this for a year now without ill effects. |
25,078 | As we've rolled into fruit season this year I've started canning up some jams using a traditional boiling process in my big stock pot (though I'm getting pretty close to investing in a pressure canner at this point) and I'm running into a problem: I wind up with too much jam to process in a single batch (which comfortably handles about 7 4-ounce jars or 4 8-ounce jars), so I'm forced to do one batch, then sterilize a new set of jars before I can process another batch – and in the meantime, my jam is continuing to cook down, pushing past 220F/105C and taking on a much thicker, taffy-like consistency. Is there any way of keeping my texture consistent across batches, or should I just be making smaller loads or accepting the difference in results? | 2012/07/17 | [
"https://cooking.stackexchange.com/questions/25078",
"https://cooking.stackexchange.com",
"https://cooking.stackexchange.com/users/2859/"
] | If it's getting cooked too much, well, stop cooking it. You can cover it to prevent water loss. If it's too hot sitting on the same burner (on an electric stove), move the pot. It sounds like your cast-iron pot might be part of the problem, too - if it's still staying too hot, you could pour it into something else. Finally, if you do still have significant water loss, I believe you should be able to add a little water (careful not to add too much) and stir and reheat.
P.S. you could also just get a lighter pot. (I know that's not ideal.) | Why are you boiling-filling-boiling again? Check out canning myths; trust me, it's there. I simply run my jars through the dishwasher, checking for any possible grit afterwards (we're on a farm, so we sometimes get silt/sand in the bottom of cups), and turn the jars upside down on a towel while making jam. Fill the jars with piping-hot jam, screw the lids on tight, and turn them upside down. Then go relax till fully cooled, or 24 hours. Turn right side up and push the center of the lid; it might click down (and stay down!) and you're good to go. Lol, no batch of homemade jam that fills 2 pint jars ever lasts long here, but I've been told by my grands it keeps at least 6 months, possibly years. Just push the center of the lid every so often. Once opened, as long as there's no mold and it smells like jam, I think you're good.
We do a haywire canning spree every once in a while. Never had a problem, and the ones that don't take are easily spotted |
25,078 | As we've rolled into fruit season this year I've started canning up some jams using a traditional boiling process in my big stock pot (though I'm getting pretty close to investing in a pressure canner at this point) and I'm running into a problem: I wind up with too much jam to process in a single batch (which comfortably handles about 7 4-ounce jars or 4 8-ounce jars), so I'm forced to do one batch, then sterilize a new set of jars before I can process another batch – and in the meantime, my jam is continuing to cook down, pushing past 220F/105C and taking on a much thicker, taffy-like consistency. Is there any way of keeping my texture consistent across batches, or should I just be making smaller loads or accepting the difference in results? | 2012/07/17 | [
"https://cooking.stackexchange.com/questions/25078",
"https://cooking.stackexchange.com",
"https://cooking.stackexchange.com/users/2859/"
] | You could avoid the problems by having more jars ready. Here's how I get my jars ready.
\*When you want to clean jars up for canning, the sterilizing process is between you and yours, not the health authorities.
Here is what I do to clean up my jars quickly. I save jars where the lid acts as a button indicator - if the button is popped up when you buy it in a shop, you don't use it.
First, you wash the jar in the normal way immediately after you empty it. Then you put about 2cm to 3cm of water into the jar and microwave it for around a minute, so that the water boils.
Take the jar out carefully - use a cloth or oven glove, it is HOT, and be aware that it may spontaneously "bump" - moving it could cause the boiling water to jump out of the jar, do not point it at your face. Prod it with something long, like a wooden spoon, before you pick it up.
Screw the cap back on. Turn it upside down so that the hot water contacts the inside of the lid. If the water leaks out at this point, don't use that jar - the seal would be OK in an industrial process, but it will not be useful for this trick.
Stand the jar on one side until it cools, add it to your collection.
When the time comes that you want to use the jars, pick out enough to do the job and repeat the boiling out process once again, being sure to only use jars that have properly sealed - the button pops up again to tell you the seal was effective, something will grow in the water if the job was not done right.
Then you make the jam or chutney to put into the jars.
This way you never become short of jars. If you do not pick out enough, it is the work of a couple of minutes to make another jar ready. If you pick out too many, you can put the surplus back into your stockpile.\*
If it makes you feel more secure, you can use the solution for sterilizing baby feed bottles to immerse the jar lids during the process, but you must rinse off that solution before you re-fit the lids on the jars.
Wet heat kills more bugs than dry heat ... this way, it is a matter of moments to have extra jars ready - or more than you really need, you don't waste any sterilized jars.
I've taken to using this trick so I can use portions from a jar of, say, pasta sauce. When I use half the jar, I microwave it for a minute and put the lid back, then stand it to cool before I put it in the fridge.
Not properly and fully tested as a method, but I've been doing this for a year now without ill effects. | Why are you boiling-filling-boiling again? Check out canning myths; trust me, it's there. I simply run my jars through the dishwasher, checking for any possible grit afterwards (we're on a farm, so we sometimes get silt/sand in the bottom of cups), and turn the jars upside down on a towel while making jam. Fill the jars with piping-hot jam, screw the lids on tight, and turn them upside down. Then go relax till fully cooled, or 24 hours. Turn right side up and push the center of the lid; it might click down (and stay down!) and you're good to go. Lol, no batch of homemade jam that fills 2 pint jars ever lasts long here, but I've been told by my grands it keeps at least 6 months, possibly years. Just push the center of the lid every so often. Once opened, as long as there's no mold and it smells like jam, I think you're good.
We do a haywire canning spree every once in a while. Never had a problem, and the ones that don't take are easily spotted |
15,772,626 | First off I'm not used to dealing with images, so if my wording is off please excuse me.
I am looking to take an image that is dropped onto an HTML5 canvas, sample it, reduce the sampling, then create a polygonal representation of the image using mostly triangles with a few other polygons, and draw that image to the canvas.
But I do not know where to start with an algorithm to do so. What sort of pseudocode do I need for this kind of algorithm?
This image may offer a better understanding of the end result:
[Example result image](https://tinypic.com/view.php?pic=1hcj2d&s=6) | 2013/04/02 | [
"https://Stackoverflow.com/questions/15772626",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/525374/"
] | I would do the following:
1. Create a field of randomly-placed dots.
2. Create a [Voronoi diagram](http://en.wikipedia.org/wiki/Voronoi_diagram) from the dots.
* Here's a good JavaScript library that I've used in the past for this: <https://github.com/gorhill/Javascript-Voronoi>
3. Color each cell based on sampling the colors.
* *Do you just pick the color at the dot? Sample all the colors within the cell and average them? Weight the average by the center of the cell? Each would produce different but possibly interesting results.*
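As a rough, library-free illustration of steps 1 and 2 (a sketch, practical only for small images since it is O(pixels × dots)), a brute-force nearest-dot assignment produces the same cells a Voronoi diagram would:

```javascript
// Assign every pixel to its nearest site (dot); the resulting index map
// describes the Voronoi cells. Colors can then be sampled per cell.
function voronoiCells(width, height, sites) {
  const cell = new Array(width * height);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let best = 0;
      let bestD = Infinity;
      for (let i = 0; i < sites.length; i++) {
        const dx = x - sites[i][0];
        const dy = y - sites[i][1];
        const d = dx * dx + dy * dy;
        if (d < bestD) { bestD = d; best = i; }
      }
      cell[y * width + x] = best;
    }
  }
  return cell;
}

// 4x1 image with sites at x=0 and x=3: left half maps to site 0, right to 1.
console.log(voronoiCells(4, 1, [[0, 0], [3, 0]])); // [ 0, 0, 1, 1 ]
```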
If the result needs to be triangles and not polygons, then instead of a Voronoi diagram create a [Delaunay triangulation](http://en.wikipedia.org/wiki/Delaunay_triangulation). GitHub has [15 JavaScript libraries for this](https://github.com/search?l=JavaScript&q=Delaunay&ref=commandbar&type=Repositories), but I've not tried any well enough to recommend one specifically. | OK, it's a bit indirect, but here goes!
This is a plugin for SVG that turns images into pointillist art: <https://github.com/gsmith85/SeuratJS/blob/master/seurat.js>
Here's the interesting part. Under the hood, it uses **canvas** to do the processing!
The examples show images composed of "dots" and "squares".
Perhaps you can modify the code to produce your triangles -- even just cut the squares diagonally to create triangles. |
19,518,086 | I have a CSS file with a bunch of background-image URLs:
```
.alert-01 {
background: url('img/alert-01.jpg') center no-repeat;
}
.alert-02 {
background: url('img/alert-02.jpg') center no-repeat;
}
.alert-03 {
background: url('img/alert-03.jpg') center no-repeat;
}
.alert-04 {
background: url('img/alert-04.jpg') center no-repeat;
}
```
And I'd like to write a regex that would strip out the URLs.
So initially I would get:
```
url('img/alert-01.jpg')
url('img/alert-02.jpg')
url('img/alert-03.jpg')
url('img/alert-04.jpg')
```
Which I could then replace bits of to turn them into HTML `<img>` blocks.
I'm fine with the last bit, but my regex skills are a bit rusty.
My latest try was this: `/^url\(.*?g\)$/ig`
Any help?
BTW I was doing this all in JavaScript, but the language shouldn't really affect the regex.
Thanks in advance!
Also, just to note, I can't guarantee that the files will always be jpg; they may be jpeg, png, etc. Nor that the arguments to background will always be the same. Hence why I want to extract just the URL part. | 2013/10/22 | [
"https://Stackoverflow.com/questions/19518086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2907019/"
] | Using a tool like [**Regexper**](http://www.regexper.com/), we can see what your regular expression is matching:
*(railroad diagram omitted)*
As you can see there are a few problems with this:
1. This assumes "url(" is at the start of the line.
2. This assumes the content ends with "g)", whereas all of your examples end with "g')".
3. This assumes after "g)" is the end of the line.
What we instead need to do is match just `"url(" [any character] ")"`. And we can do that using:
```
/url\(.*?\)/ig
```
*(railroad diagram omitted)*
Note that I've removed the requirement for the URL to end in "g", in case the image type is `.gif` or anything else that isn't jpg, jpeg or png.
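As a quick sanity check in JavaScript (using a one-rule sample of the CSS above):

```javascript
const css = ".alert-01 { background: url('img/alert-01.jpg') center no-repeat; }";
// Lazy .*? stops at the first closing parenthesis.
const matches = css.match(/url\(.*?\)/ig);
console.log(matches); // [ "url('img/alert-01.jpg')" ]
```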
---
Edit: As [Seçkin M](https://stackoverflow.com/users/824495/seckin-m) commented below: *what if we don't want any other url's; such as fonts which have .ttf or .eot? What is the easiest way to define an allowed extensions group?*
For this we can predefine what extensions we're looking for by putting our extensions within brackets, separated with `|`. For instance, if we wanted to match only `gif`, `png`, `jpg` or `jpeg`, we could use:
```
/url\(.*?(gif|png|jpg|jpeg)\'\)/ig
```
*(railroad diagram omitted)*
```
/url\((?!['"]?(?:data):)['"]?([^'"\)]*)['"]?\)/g
```
I hope this would help other people coming from search engines. |
19,518,086 | i have a css file with a bunch of background image urls:
```
.alert-01 {
background: url('img/alert-01.jpg') center no-repeat;
}
.alert-02 {
background: url('img/alert-02.jpg') center no-repeat;
}
.alert-03 {
background: url('img/alert-03.jpg') center no-repeat;
}
.alert-04 {
background: url('img/alert-04.jpg') center no-repeat;
}
```
And I'd like to write a regex that would strip out the urls.
So initially i would get:
```
url('img/alert-01.jpg')
url('img/alert-02.jpg')
url('img/alert-03.jpg')
url('img/alert-04.jpg')
```
Which i could then replace bits to turn them into html `<img>` blocks.
I'm fine with the last bit, but my regex skills are a bit rusty.
My latest try was this: `/^url\(.*?g\)$/ig`
Any help?
BTW I was doing this all in javascript, but the language shouldn't affect the regex really.
Thanks in advance!
Also, just to note, i can't guarantee that the files will always be jpg and may be jpeg, png etc. Also that the arguments into background will always be the same. Hence why i want to extract just the url part. | 2013/10/22 | [
"https://Stackoverflow.com/questions/19518086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2907019/"
] | Try:
```
/url\(.*?g'\)/ig
```
Your mistake was including `^` and `$` in the regexp, which anchors it to the beginning and end of the input string. And you forgot to allow for the quote. | ```
/^url\((['"]?)(.*)\1\)$/
```
with jQuery :
```
var bg = $('.my_element').css('background-image'); // or pure js like element.style.backgroundImage
bg = /^url\((['"]?)(.*)\1\)$/.exec(bg);
bg = bg ? bg[2] : false;
if(bg) {
// Do something
}
```
*Cross with Chrome / Firefox*
<http://jsfiddle.net/KFWWv/> |
19,518,086 | I have a CSS file with a bunch of background-image URLs:
```
.alert-01 {
background: url('img/alert-01.jpg') center no-repeat;
}
.alert-02 {
background: url('img/alert-02.jpg') center no-repeat;
}
.alert-03 {
background: url('img/alert-03.jpg') center no-repeat;
}
.alert-04 {
background: url('img/alert-04.jpg') center no-repeat;
}
```
And I'd like to write a regex that would strip out the URLs.
So initially I would get:
```
url('img/alert-01.jpg')
url('img/alert-02.jpg')
url('img/alert-03.jpg')
url('img/alert-04.jpg')
```
Which I could then replace bits of to turn them into HTML `<img>` blocks.
I'm fine with the last bit, but my regex skills are a bit rusty.
My latest try was this: `/^url\(.*?g\)$/ig`
Any help?
BTW I was doing this all in JavaScript, but the language shouldn't really affect the regex.
Thanks in advance!
Also, just to note, I can't guarantee that the files will always be jpg; they may be jpeg, png, etc. Nor that the arguments to background will always be the same. Hence why I want to extract just the URL part. | 2013/10/22 | [
"https://Stackoverflow.com/questions/19518086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2907019/"
] | Using a tool like [**Regexper**](http://www.regexper.com/), we can see what your regular expression is matching:
*(railroad diagram omitted)*
As you can see there are a few problems with this:
1. This assumes "url(" is at the start of the line.
2. This assumes the content ends with "g)", whereas all of your examples end with "g')".
3. This assumes after "g)" is the end of the line.
What we instead need to do is match just `"url(" [any character] ")"`. And we can do that using:
```
/url\(.*?\)/ig
```
*(railroad diagram omitted)* | Use the following regex:
Note that I've removed the requirement for the URL to end in "g", in case the image type is `.gif` or anything else that isn't jpg, jpeg or png.
---
Edit: As [Seçkin M](https://stackoverflow.com/users/824495/seckin-m) commented below: *what if we don't want any other url's; such as fonts which have .ttf or .eot? What is the easiest way to define an allowed extensions group?*
For this we can predefine what extensions we're looking for by putting our extensions within brackets, separated with `|`. For instance, if we wanted to match only `gif`, `png`, `jpg` or `jpeg`, we could use:
```
/url\(.*?(gif|png|jpg|jpeg)\'\)/ig
```
*(railroad diagram omitted)*
```
url\(.*g\'\)
```
and anchoring |
19,518,086 | i have a css file with a bunch of background image urls:
```
.alert-01 {
background: url('img/alert-01.jpg') center no-repeat;
}
.alert-02 {
background: url('img/alert-02.jpg') center no-repeat;
}
.alert-03 {
background: url('img/alert-03.jpg') center no-repeat;
}
.alert-04 {
background: url('img/alert-04.jpg') center no-repeat;
}
```
And I'd like to write a regex that would strip out the urls.
So initially i would get:
```
url('img/alert-01.jpg')
url('img/alert-02.jpg')
url('img/alert-03.jpg')
url('img/alert-04.jpg')
```
Which i could then replace bits to turn them into html `<img>` blocks.
I'm fine with the last bit, but my regex skills are a bit rusty.
My latest try was this: `/^url\(.*?g\)$/ig`
Any help?
BTW I was doing this all in javascript, but the language shouldn't affect the regex really.
Thanks in advance!
Also, just to note, i can't guarantee that the files will always be jpg and may be jpeg, png etc. Also that the arguments into background will always be the same. Hence why i want to extract just the url part. | 2013/10/22 | [
"https://Stackoverflow.com/questions/19518086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2907019/"
] | Using a tool like [**Regexper**](http://www.regexper.com/) we can see what your regular expression is matching:

As you can see there are a few problems with this:
1. This assumes "url(" is at the start of the line.
2. This assumes the content ends with "g)", whereas all of your examples end with "g')".
3. This assumes after "g)" is the end of the line.
What we instead need to do is match just `"url(" [any character] ")"`. And we can do that using:
```
/url\(.*?\)/ig
```

Note that I've removed the requirement for the URL ending in "g" in case the image type is `.gif` or any other non-jpg, jpeg or png.
---
Edit: As [Seçkin M](https://stackoverflow.com/users/824495/seckin-m) commented below: *what if we don't want any other url's; such as fonts which have .ttf or .eot? What is the easiest way to define an allowed extensions group?*
For this we can predefine what extensions we're looking for by putting our extensions within brackets, separated with `|`. For instance, if we wanted to match only `gif`, `png`, `jpg` or `jpeg`, we could use:
```
/url\(.*?(gif|png|jpg|jpeg)\'\)/ig
```
 | Use the following regex:
```
/\burl\([^)]+\)/gi
```
There's no need to use anchors (`^` and `$`), as they would force the match to cover the whole input string and hence fail. To capture the image path separately so that you can use it easily (and avoid substring calls) when constructing your `<img>` tags, use:
```
/\burl\('([^']+)'\)/gi
```
This captures `img/alert-xx.jpg` in group one. |
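A small JavaScript sketch (not from the original answer) showing how that captured group feeds straight into `<img>` tags:

```javascript
const css =
  ".alert-01 { background: url('img/alert-01.jpg') center no-repeat; }\n" +
  ".alert-02 { background: url('img/alert-02.jpg') center no-repeat; }";

// exec() in a loop exposes capture group 1 (the bare path) per match.
const re = /\burl\('([^']+)'\)/gi;
const tags = [];
let m;
while ((m = re.exec(css)) !== null) {
  tags.push('<img src="' + m[1] + '">');
}
console.log(tags);
// → ['<img src="img/alert-01.jpg">', '<img src="img/alert-02.jpg">']
```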
19,518,086 | i have a css file with a bunch of background image urls:
```
.alert-01 {
background: url('img/alert-01.jpg') center no-repeat;
}
.alert-02 {
background: url('img/alert-02.jpg') center no-repeat;
}
.alert-03 {
background: url('img/alert-03.jpg') center no-repeat;
}
.alert-04 {
background: url('img/alert-04.jpg') center no-repeat;
}
```
And I'd like to write a regex that would strip out the urls.
So initially i would get:
```
url('img/alert-01.jpg')
url('img/alert-02.jpg')
url('img/alert-03.jpg')
url('img/alert-04.jpg')
```
Which i could then replace bits to turn them into html `<img>` blocks.
I'm fine with the last bit, but my regex skills are a bit rusty.
My latest try was this: `/^url\(.*?g\)$/ig`
Any help?
BTW I was doing this all in javascript, but the language shouldn't affect the regex really.
Thanks in advance!
Also, just to note, i can't guarantee that the files will always be jpg and may be jpeg, png etc. Also that the arguments into background will always be the same. Hence why i want to extract just the url part. | 2013/10/22 | [
"https://Stackoverflow.com/questions/19518086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2907019/"
] | Try:
```
/url\(.*?g'\)/ig
```
Your mistake was including `^` and `$` in the regexp, which anchors it to the beginning and end of the input string. And you forgot to allow for the quote. | You were just not accounting for the closing quote
```
url\(.*g\'\)
```
and anchoring |
19,518,086 | i have a css file with a bunch of background image urls:
```
.alert-01 {
background: url('img/alert-01.jpg') center no-repeat;
}
.alert-02 {
background: url('img/alert-02.jpg') center no-repeat;
}
.alert-03 {
background: url('img/alert-03.jpg') center no-repeat;
}
.alert-04 {
background: url('img/alert-04.jpg') center no-repeat;
}
```
And I'd like to write a regex that would strip out the urls.
So initially i would get:
```
url('img/alert-01.jpg')
url('img/alert-02.jpg')
url('img/alert-03.jpg')
url('img/alert-04.jpg')
```
Which i could then replace bits to turn them into html `<img>` blocks.
I'm fine with the last bit, but my regex skills are a bit rusty.
My latest try was this: `/^url\(.*?g\)$/ig`
Any help?
BTW I was doing this all in javascript, but the language shouldn't affect the regex really.
Thanks in advance!
Also, just to note, i can't guarantee that the files will always be jpg and may be jpeg, png etc. Also that the arguments into background will always be the same. Hence why i want to extract just the url part. | 2013/10/22 | [
"https://Stackoverflow.com/questions/19518086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2907019/"
] | ```
/^url\((['"]?)(.*)\1\)$/
```
with jQuery :
```
var bg = $('.my_element').css('background-image'); // or pure js like element.style.backgroundImage
bg = /^url\((['"]?)(.*)\1\)$/.exec(bg);
bg = bg ? bg[2] : false;
if(bg) {
// Do something
}
```
*Cross with Chrome / Firefox*
<http://jsfiddle.net/KFWWv/> | You were just not accounting for the closing quote
```
url\(.*g\'\)
```
and anchoring |
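The `/^url\((['"]?)(.*)\1\)$/` pattern from the jQuery answer also works in plain JavaScript; a minimal sketch against a computed `background-image` string:

```javascript
// A hypothetical value as returned by element.style.backgroundImage.
const bg = 'url("img/alert-01.jpg")';

// Group 1 captures the (optional) quote; \1 requires the same quote to close.
const m = /^url\((['"]?)(.*)\1\)$/.exec(bg);
const path = m ? m[2] : false;
console.log(path); // → "img/alert-01.jpg"
```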
19,518,086 | i have a css file with a bunch of background image urls:
```
.alert-01 {
background: url('img/alert-01.jpg') center no-repeat;
}
.alert-02 {
background: url('img/alert-02.jpg') center no-repeat;
}
.alert-03 {
background: url('img/alert-03.jpg') center no-repeat;
}
.alert-04 {
background: url('img/alert-04.jpg') center no-repeat;
}
```
And I'd like to write a regex that would strip out the urls.
So initially i would get:
```
url('img/alert-01.jpg')
url('img/alert-02.jpg')
url('img/alert-03.jpg')
url('img/alert-04.jpg')
```
Which i could then replace bits to turn them into html `<img>` blocks.
I'm fine with the last bit, but my regex skills are a bit rusty.
My latest try was this: `/^url\(.*?g\)$/ig`
Any help?
BTW I was doing this all in javascript, but the language shouldn't affect the regex really.
Thanks in advance!
Also, just to note, i can't guarantee that the files will always be jpg and may be jpeg, png etc. Also that the arguments into background will always be the same. Hence why i want to extract just the url part. | 2013/10/22 | [
"https://Stackoverflow.com/questions/19518086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2907019/"
] | Using a tool like [**Regexper**](http://www.regexper.com/) we can see what your regular expression is matching:

As you can see there are a few problems with this:
1. This assumes "url(" is at the start of the line.
2. This assumes the content ends with "g)", whereas all of your examples end with "g')".
3. This assumes after "g)" is the end of the line.
What we instead need to do is match just `"url(" [any character] ")"`. And we can do that using:
```
/url\(.*?\)/ig
```

Note that I've removed the requirement for the URL ending in "g" in case the image type is `.gif` or any other non-jpg, jpeg or png.
---
Edit: As [Seçkin M](https://stackoverflow.com/users/824495/seckin-m) commented below: *what if we don't want any other url's; such as fonts which have .ttf or .eot? What is the easiest way to define an allowed extensions group?*
For this we can predefine what extensions we're looking for by putting our extensions within brackets, separated with `|`. For instance, if we wanted to match only `gif`, `png`, `jpg` or `jpeg`, we could use:
```
/url\(.*?(gif|png|jpg|jpeg)\'\)/ig
```
 | ```
/^url\((['"]?)(.*)\1\)$/
```
with jQuery :
```
var bg = $('.my_element').css('background-image'); // or pure js like element.style.backgroundImage
bg = /^url\((['"]?)(.*)\1\)$/.exec(bg);
bg = bg ? bg[2] : false;
if(bg) {
// Do something
}
```
*Cross with Chrome / Firefox*
<http://jsfiddle.net/KFWWv/> |
19,518,086 | i have a css file with a bunch of background image urls:
```
.alert-01 {
background: url('img/alert-01.jpg') center no-repeat;
}
.alert-02 {
background: url('img/alert-02.jpg') center no-repeat;
}
.alert-03 {
background: url('img/alert-03.jpg') center no-repeat;
}
.alert-04 {
background: url('img/alert-04.jpg') center no-repeat;
}
```
And I'd like to write a regex that would strip out the urls.
So initially i would get:
```
url('img/alert-01.jpg')
url('img/alert-02.jpg')
url('img/alert-03.jpg')
url('img/alert-04.jpg')
```
Which i could then replace bits to turn them into html `<img>` blocks.
I'm fine with the last bit, but my regex skills are a bit rusty.
My latest try was this: `/^url\(.*?g\)$/ig`
Any help?
BTW I was doing this all in javascript, but the language shouldn't affect the regex really.
Thanks in advance!
Also, just to note, i can't guarantee that the files will always be jpg and may be jpeg, png etc. Also that the arguments into background will always be the same. Hence why i want to extract just the url part. | 2013/10/22 | [
"https://Stackoverflow.com/questions/19518086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2907019/"
] | Use the following regex:
```
/\burl\([^)]+\)/gi
```
There's no need to use anchors (`^` and `$`), as they would force the match to cover the whole input string and hence fail. To capture the image path separately so that you can use it easily (and avoid substring calls) when constructing your `<img>` tags, use:
```
/\burl\('([^']+)'\)/gi
```
This captures `img/alert-xx.jpg` in group one. | You were just not accounting for the closing quote
```
url\(.*g\'\)
```
and anchoring |
19,518,086 | i have a css file with a bunch of background image urls:
```
.alert-01 {
background: url('img/alert-01.jpg') center no-repeat;
}
.alert-02 {
background: url('img/alert-02.jpg') center no-repeat;
}
.alert-03 {
background: url('img/alert-03.jpg') center no-repeat;
}
.alert-04 {
background: url('img/alert-04.jpg') center no-repeat;
}
```
And I'd like to write a regex that would strip out the urls.
So initially i would get:
```
url('img/alert-01.jpg')
url('img/alert-02.jpg')
url('img/alert-03.jpg')
url('img/alert-04.jpg')
```
Which i could then replace bits to turn them into html `<img>` blocks.
I'm fine with the last bit, but my regex skills are a bit rusty.
My latest try was this: `/^url\(.*?g\)$/ig`
Any help?
BTW I was doing this all in javascript, but the language shouldn't affect the regex really.
Thanks in advance!
Also, just to note, i can't guarantee that the files will always be jpg and may be jpeg, png etc. Also that the arguments into background will always be the same. Hence why i want to extract just the url part. | 2013/10/22 | [
"https://Stackoverflow.com/questions/19518086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2907019/"
] | I came to this question from Google. It's an old one, but none of the answers discusses **data:image/\*** URLs. Here is what I found in another Google result that will match only http or relative URLs.
```
/url\((?!['"]?(?:data):)['"]?([^'"\)]*)['"]?\)/g
```
I hope this would help other people coming from search engines. | You were just not accounting for the closing quote
```
url\(.*g\'\)
```
and anchoring |
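The negative-lookahead pattern `/url\((?!['"]?(?:data):)['"]?([^'"\)]*)['"]?\)/g` for skipping `data:` URLs can be exercised like this (my own sketch, not from either post):

```javascript
const css =
  ".a { background: url('img/alert-01.jpg'); }\n" +
  ".b { background: url(data:image/gif;base64,R0lGOD); }";

// The (?!['"]?(?:data):) lookahead rejects embedded data URIs.
const re = /url\((?!['"]?(?:data):)['"]?([^'"\)]*)['"]?\)/g;
const paths = [];
let m;
while ((m = re.exec(css)) !== null) {
  paths.push(m[1]);
}
console.log(paths); // → ["img/alert-01.jpg"]
```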
19,518,086 | i have a css file with a bunch of background image urls:
```
.alert-01 {
background: url('img/alert-01.jpg') center no-repeat;
}
.alert-02 {
background: url('img/alert-02.jpg') center no-repeat;
}
.alert-03 {
background: url('img/alert-03.jpg') center no-repeat;
}
.alert-04 {
background: url('img/alert-04.jpg') center no-repeat;
}
```
And I'd like to write a regex that would strip out the urls.
So initially i would get:
```
url('img/alert-01.jpg')
url('img/alert-02.jpg')
url('img/alert-03.jpg')
url('img/alert-04.jpg')
```
Which i could then replace bits to turn them into html `<img>` blocks.
I'm fine with the last bit, but my regex skills are a bit rusty.
My latest try was this: `/^url\(.*?g\)$/ig`
Any help?
BTW I was doing this all in javascript, but the language shouldn't affect the regex really.
Thanks in advance!
Also, just to note, i can't guarantee that the files will always be jpg and may be jpeg, png etc. Also that the arguments into background will always be the same. Hence why i want to extract just the url part. | 2013/10/22 | [
"https://Stackoverflow.com/questions/19518086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2907019/"
] | Try:
```
/url\(.*?g'\)/ig
```
Your mistake was including `^` and `$` in the regexp, which anchors it to the beginning and end of the input string. And you forgot to allow for the quote. | Use the following regex:
```
/\burl\([^)]+\)/gi
```
There's no need to use anchors (`^` and `$`), as they would force the match to cover the whole input string and hence fail. To capture the image path separately so that you can use it easily (and avoid substring calls) when constructing your `<img>` tags, use:
```
/\burl\('([^']+)'\)/gi
```
This captures `img/alert-xx.jpg` in group one. |
42,315,432 | We have a session logout script like:
```
<?php
//24 2 2015
session_start();
session_destroy();
header("location:login.php")
?>
```
now this script logouts and redirect it to login page where, username and password will be required to login again.
what if i wanted to have a temporary logout where after logging out it will direct us to a login page where it will only require password, cause session hasn't been destroyed and username is been passed to that page...
so, when you enter the password, it will check the input in database table where username = session username.
Hope i was clear.
---
The update::
templogout.php
```
<?php
//24 2 2015
session_start();
$_SESSION['temp_logout'] = true;
header("location:templogin.php")
?>
```
templogin.php
```
<?php
//24 2 2015
session_start();
?>
<form id="msform" action="templogincheck.php" method="post">
<fieldset>
<input type="password" name="password" placeholder="Enter password here" required />
<button type="submit" name="submit" class="submit action-button"> LogIn </button>
</form>
```
templogincheck.php
```
<?php
//15 2 2015
session_start();
$Cser =mysqli_connect("localhost","text","text","text") or die("Server connection failed : ".mysqli_error($Cser));
$password = md5($_REQUEST["password"]);
$mobile = $_SESSION['mobile'];
$s = "select * from users where password = '".$password."' and mobile = '".$mobile."'";
$result = mysqli_query($Cser,$s);
$count = mysqli_num_rows($result);
if($count>0)
{
$_SESSION["mobile"] = $mobile;
$_SESSION["login"]="1";
header("location:/index.php");
}
else
{
header("location:/templogin.php");
}
?>
```
index.php
```
<?php
//15 2 2015
session_start();
unset($_SESSION["temp_logout"]);
if(!isset($_SESSION["login"]))
header("location:login.php");
?>
```
I hope i did it right, but i have to presume i have something wrong cause it isn't working..
Am i passing the session mobile to the login check page?
user first login page:
```
<form id="msform" action="ulogincheck.php" method="post">
<fieldset>
<h2 class="fs-title">LogIn</h2>
<h3 class="fs-subtitle">Please Enter your details accordingly<br/><br/> <small>(case sensitive)</small></h3>
<input type="text" name="email" placeholder="Email" required />
<input type="text" name="mobile" placeholder="Mobile" required />
<input type="password" name="password" placeholder="Password" required />
<button type="submit" name="submit" class="submit action-button"> LogIn </button>
</form>
```
first logincheck page
```
session_start();
$email = $_REQUEST["email"];
$mobile = $_REQUEST["mobile"];
$password = md5($_REQUEST["password"]);
$s = "select * from users where email='".$email."' and password = '".$password."' and mobile = '".$mobile."'";
$result = mysqli_query($Cser,$s);
$count = mysqli_num_rows($result);
if($count>0)
{
$_SESSION["email"] = $email;
$_SESSION["mobile"] = $mobile;
$_SESSION["login"]="1";
header("location:/index2.php");
}
else
{
header("location:/usersignin.php");
``` | 2017/02/18 | [
"https://Stackoverflow.com/questions/42315432",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6833562/"
] | You could add a `"temp_logout"` field to the `$_SESSION` variable, and when you redirect the user to the login page, you can check for `$_SESSION["temp_logout"]`; if it is true, pre-fill the username in the input field.
logout script:
```
<?php
//24 2 2015
session_start();
$_SESSION['temp_logout'] = true;
header("location:login.php")
?>
```
login page:
```
session_start();
...
//where the "username" input is
<input name="username" <?php if(isset($_SESSION["temp_logout"])){
    echo 'value="'.$_SESSION["username"].'" ';
} ?> />
...
```
after a successful login:
```
<?php
session_start();
unset($_SESSION["temp_logout"]);
?>
```
Also, anywhere on the site, don't forget to check if the user is temporarily logged out; then immediately redirect him to the login page | It really depends on your platform.
You can simply unset something like `password` instead of destroying the session:
```
unset($_SESSION['password']);
```
Or set another key in the session:
```
$_SESSION['loggedIn'] = false;
```
and redirect to the login page.
You can also put the username in a cookie and destroy the session:
[setcookie](http://php.net/manual/en/function.setcookie.php)
If you want to store the username in a cookie, it is better to encrypt it for security reasons.
130,414 | I would like to have a virtual machine of ubuntu , that I can access everywhere , without having to carry my Virtual HD ,
could I put the VHD on something like dropbox or google drive and use it there ?
is this practical ? because as far as I know whenever I update the file , it will be uploaded to Dropbox or Google Drive or other cloud services , and if I run a OS it is gonna keep using updating the VHD file....
is this idea practical? | 2012/05/02 | [
"https://askubuntu.com/questions/130414",
"https://askubuntu.com",
"https://askubuntu.com/users/59270/"
] | You can also try Amazon Web Services EC2 (<http://aws.amazon.com/ec2/>). EC2 is a cloud computing facility. You can spawn a virtual machine there and then install Ubuntu (or whatever OS) you want. There are free tiers in AWS.
Essentially, here is what you should try:
1. Create an account on AWS.
2. Launch a Micro instance
3. Configure your OS | The minimum requirements for an Ubuntu install is 15GB of hard disk space. To back it up on Dropbox, you would need to go for their 'Pro50' plan which will cost around $10/month.
Further, you would need to ensure that the location you want to use the VHD from has a sufficiently speedy broadband connection, which may not always be the case.
I wouldn't call that practical, but as an alternative, I'd suggest carrying the VHD around with you on a USB drive and taking backups of the VHD on a regular basis, **or** creating a [Live USB with persistent storage](https://wiki.ubuntu.com/LiveUsbPendrivePersistent) so that you can plug and boot wherever you go. |
130,414 | I would like to have a virtual machine of ubuntu , that I can access everywhere , without having to carry my Virtual HD ,
could I put the VHD on something like dropbox or google drive and use it there ?
is this practical ? because as far as I know whenever I update the file , it will be uploaded to Dropbox or Google Drive or other cloud services , and if I run a OS it is gonna keep using updating the VHD file....
is this idea practical? | 2012/05/02 | [
"https://askubuntu.com/questions/130414",
"https://askubuntu.com",
"https://askubuntu.com/users/59270/"
] | I don't believe that Dropbox could be used that way efficiently. The VHD file is going to change all the time when the VM is running. Dropbox calculates hashes of files to check them with its online copy (it also helps them to de-duplicate the data), it will take ages just to compute the hash of a 15 Gb file each time that it changes. Performance will be horrible.
That said, I know some people that are doing exactly that. The trick is to pause the syncing (using the dropbox indicator) while the VM is running. When they power off the VM, they enable the sync and wait until it is complete (it takes a while). | The minimum requirements for an Ubuntu install is 15GB of hard disk space. To back it up on Dropbox, you would need to go for their 'Pro50' plan which will cost around $10/month.
Further, you would need to ensure that the location you want to use the VHD from has a sufficiently speedy broadband connection, which may not always be the case.
I wouldn't call that practical, but as an alternative, I'd suggest carrying the VHD around with you on a USB drive and taking backups of the VHD on a regular basis, **or** creating a [Live USB with persistent storage](https://wiki.ubuntu.com/LiveUsbPendrivePersistent) so that you can plug and boot wherever you go. |
130,414 | I would like to have a virtual machine of ubuntu , that I can access everywhere , without having to carry my Virtual HD ,
could I put the VHD on something like dropbox or google drive and use it there ?
is this practical ? because as far as I know whenever I update the file , it will be uploaded to Dropbox or Google Drive or other cloud services , and if I run a OS it is gonna keep using updating the VHD file....
is this idea practical? | 2012/05/02 | [
"https://askubuntu.com/questions/130414",
"https://askubuntu.com",
"https://askubuntu.com/users/59270/"
] | I don't believe that Dropbox could be used that way efficiently. The VHD file is going to change all the time when the VM is running. Dropbox calculates hashes of files to check them with its online copy (it also helps them to de-duplicate the data), it will take ages just to compute the hash of a 15 Gb file each time that it changes. Performance will be horrible.
That said, I know some people that are doing exactly that. The trick is to pause the syncing (using the dropbox indicator) while the VM is running. When they power off the VM, they enable the sync and wait until it is complete (it takes a while). | You can also try Amazon Web Services EC2 (<http://aws.amazon.com/ec2/>). EC2 is a cloud computing facility. You can spawn a virtual machine there and then install Ubuntu (or whatever OS) you want. There are free tiers in AWS.
Essentially, here is what you should try:
1. Create an account on AWS.
2. Launch a Micro instance
3. Configure your OS |
13,986,634 | I am working with a PHP mailer in which template will be select with `HTML` browse button to get data to send mail.
i wants to get data in a variable ..
Not able to get data..
```
Warning: fopen(filename.txt) [function.fopen]: failed to open stream: No such file or directory in PATH/php_mailer\sendmail.php on line 18
Cannot open file: filename.txt
```
**HTML**
```
<form name="addtemplate" id="addtemplate" method='POST'
action='' enctype="multipart/form-data">
<table style="padding-left:100px;width:100%" border="0" cellspacing="10" cellpadding="0" id='addtemplate'>
<span id='errfrmMsg'></span>
<tbody>
<tr>
<td class="field_text">
Select Template
</td>
<td>
<input name="template_file" type="file" class="template_file" id="template_file" required>
</td>
</tr>
<tr>
<td>
<input id="group_submit" value="Submit" type="submit" name='group_submit' />
</td>
</tr>
</tbody>
</table>
</form>
```
**PHP Code**
```
if(isset($_POST['group_submit'])){
if ($_FILES["template_file"]["error"] > 0) {
echo "Error: " . $_FILES["template_file"]["error"] . "<br>";
}
else {
echo $templFile = $_FILES["template_file"]["name"] ;
$templFileHandler = fopen($templFile, 'r') or die('Cannot open file: '.$templFile); //open file for writing ('w','r','a')...
echo $templFileData = fread($templFileHandler,filesize($templFile));
}
}
``` | 2012/12/21 | [
"https://Stackoverflow.com/questions/13986634",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1868277/"
] | It doesn't work because `$_FILES['template_file']['name']` is the local file name that the browser sent to the server; to read the uploaded file you need `$_FILES['template_file']['tmp_name']` instead:
```
echo $templFile = $_FILES["template_file"]["tmp_name"] ;
echo $templFileData = file_get_contents($templFile);
```
I'm also using `file_get_contents()`, which effectively replaces `fopen()`, `fread()` and `fclose()`. The above code doesn't check for failure on the part of `file_get_contents()`; this would:
```
if (false === ($templFileData = file_get_contents($_FILES["template_file"]["tmp_name"]))) {
die("Cannot read from uploaded file");
}
// rest of your code
echo $templFileData;
``` | Please replace `$_FILES["template_file"]["name"]` with `$_FILES["template_file"]["tmp_name"]`.
65,057,331 | I'm new to java. I'm trying to make my program output this [5,4] [3] [2,1] but instead I get [5,5] [4,4] [3,3] [2,2] [1,1].. what am I missing? I tried to answer it on my own but I just can't think of the answer.
Here's my full code:
```
public static void main(String[]args){
Scanner sc = new Scanner(System.in);
System.out.print("Enter Array Size:");
int arrSize = sc.nextInt();
System.out.println("The Size of your Array is "+ arrSize);
int arr[] = new int[arrSize];
System.out.println("Enter "+arrSize+" Elements of your Array: ");
for(int i=0;i<arr.length;i++){
arr[i] = sc.nextInt();
}
for(int i=0; i<arr.length;i++){
System.out.print(arr[i] + " ");
}
System.out.println(" ");
for(int i=arr.length-1; i>=0;i--){
System.out.print(Arrays.asList(arr[i]+","+arr[i]));
}
}
``` | 2020/11/29 | [
"https://Stackoverflow.com/questions/65057331",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14727854/"
] | I have a theory on what happened: since the `requestUserToken` function is called on the main thread, using a semaphore creates an infinite wait (`lock.wait()` and `lock.signal()` are called on the same thread). What eventually worked for me was using completion handlers instead of semaphores. So my `getUserToken` function looked like this:
```
func getUserToken(completion: @escaping (_ userToken: String) -> Void) {
    SKCloudServiceController().requestUserToken(forDeveloperToken: developerToken) { (userToken, error) in
        guard error == nil, let userToken = userToken else {
            return
        }
        completion(userToken)
    }
}
```
And in any subsequent functions that need the userToken, I passed it in as a parameter:
```
func fetchStorefrontID(userToken: String, completion: @escaping(String) -> Void){
var storefrontID: String!
let musicURL = URL(string: "https://api.music.apple.com/v1/me/storefront")!
var musicRequest = URLRequest(url: musicURL)
musicRequest.httpMethod = "GET"
musicRequest.addValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
musicRequest.addValue(userToken, forHTTPHeaderField: "Music-User-Token")
URLSession.shared.dataTask(with: musicRequest) { (data, response, error) in
guard error == nil else { return }
if let json = try? JSON(data: data!) {
let result = (json["data"]).array!
let id = (result[0].dictionaryValue)["id"]!
storefrontID = id.stringValue
completion(storefrontID)
}
}.resume()
}
```
Call `fetchStorefrontID` by first calling `getUserToken`, then invoking `fetchStorefrontID` inside its completion handler:
```
getUserToken { userToken in
    fetchStorefrontID(userToken: userToken) { storefrontID in
        print(storefrontID)
        // anything you want to do with storefrontID here
    }
}
```
This is just what eventually worked for me. | Cleaning up a little of what has already been posted.
```
func getUserToken(completion: @escaping(_ userToken: String?) -> Void) {
SKCloudServiceController().requestUserToken(forDeveloperToken: developerToken) { (receivedToken, error) in
guard error == nil else { return }
completion(receivedToken)
}
}
func fetchStorefrontID(userToken: String, completion: @escaping(String) -> Void) {
var storefrontID: String! = ""
let musicURL = URL(string: "https://api.music.apple.com/v1/me/storefront")!
var musicRequest = URLRequest(url: musicURL)
musicRequest.httpMethod = "GET"
musicRequest.addValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
musicRequest.addValue(userToken, forHTTPHeaderField: "Music-User-Token")
URLSession.shared.dataTask(with: musicRequest) { (data, response, error) in
guard error == nil else { return }
if let json = try? JSON(data: data!) {
let result = (json["data"]).array!
let id = (result[0].dictionaryValue)["id"]!
storefrontID = id.stringValue
completion(storefrontID)
}
}.resume()
}
```
And then to call that code:
```
SKCloudServiceController.requestAuthorization { status in
if status == .authorized {
let api = AppleMusicAPI()
api.getUserToken { userToken in
guard let userToken = userToken else {
return
}
api.fetchStorefrontID(userToken: userToken) { data in
print(data)
}
}
}
}
``` |
23,033 | Recently when looking at my memory card I noticed that there were multiple folders on the memory card. The file path looked like this:
```
DCIM/100D5100
DCIM/101D5100
```
The D5100 is the camera I am using but I have seen this behavior on multiple cameras. Why does the camera(s) automatically split the photos into multiple folders?
Note: The photos will go from 1 - 345 for folder 1 then 345 - 700 for folder 2 etc. but I have not seen any patterns that would dictate when a new folder is created. | 2012/05/02 | [
"https://photo.stackexchange.com/questions/23033",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/7438/"
] | This is really a case of *read your camera manual*. If your camera did not come with a full one, there will be one on a CD. The behavior greatly varies.
Usually cameras use filenames which gives them 4-digit numbering (some use 5 numbers), so you could in theory have 9999 photos in a single folder. However, cameras can break down these images in folders differently. Furthermore, some cameras let you control this using their Setup/Config menus:
* Some DSLRs like the Pentax K-5 aim to keep folders with a set number of images (500 in this case) but they can give you more since they won't break up a bracket.
* Some cameras won't break up bursts; some put them in separate folders.
* Some also have separate folders for images taken in panorama assist mode.
* A few models also break up folders by calendar days and name the folder according to the month and day.
* They can also break things up by the number of photos shot instead of counting the ones kept; in this case you will always have fewer than a certain number. Nikon entry-level cameras usually follow this approach with a limit of 499 or 999.
* Finally, some DSLRs, SLDs and a handful of ultra-zooms let folders be created by the user. In this case the folder name still contains a 3-digit sequence number, which increments should the folder itself grow beyond the capacity which the camera likes.
*Personally I find it annoying to have to copy from more than one folder, but I'm not the one designing these things! What I do is rename all files sequentially using a small Python script after copying the keepers to my computer.* | On most (all?) cameras I have seen, the directories are numbered starting at 100, and the files are numbered 0000-9999 in each folder for up to 10,000 images in each folder.
One logical reason I can think of to split files like this is to avoid running into filesystem limits with the maximum number of files per directory. For FAT32, which most modern cameras use, there is a limit of 65,534 files per directory. By limiting to 10,000 images per directory, that means a maximum of (in most cases) 20,000 files per directory (if shooting in RAW+JPEG mode), which is well under the 65k limit.
Of course most memory cards will fill up with far fewer than 65k images (unless perhaps shooting with a very low quality).
The other reason I know of is simply for convenience's sake. Numbering the directories and the files gives you an easy way to tell how many images you've taken.
Now, when you start swapping memory cards between cameras (or even in the same one) you often throw off the camera's internal counter, and it gets confused about where images should go.
Suppose you have two cards, one whose last image was stored as 101/5000, then you put in a new card which contains an image called 104/2000; the camera will (in my experience, although this is likely vendor/firmware specific) jump to 104/2001 for the next image. Then when you put the first card back in, the camera may not jump back to 101/5001, but may continue with 104/2002, thus leaving you with an apparent gap in your photo numbers.
23,033 | Recently when looking at my memory card I noticed that there were multiple folders on the memory card. The file path looked like this:
```
DCIM/100D5100
DCIM/101D5100
```
The D5100 is the camera I am using but I have seen this behavior on multiple cameras. Why does the camera(s) automatically split the photos into multiple folders?
Note: The photos will go from 1 - 345 for folder 1 then 345 - 700 for folder 2 etc. but I have not seen any patterns that would dictate when a new folder is created. | 2012/05/02 | [
"https://photo.stackexchange.com/questions/23033",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/7438/"
] | On most (all?) cameras I have seen, the directories are numbered starting at 100, and the files are numbered 0000-9999 in each folder for up to 10,000 images in each folder.
One logical reason I can think of to split files like this is to avoid running into filesystem limits with the maximum number of files per directory. For FAT32, which most modern cameras use, there is a limit of 65,534 files per directory. By limiting to 10,000 images per directory, that means a maximum of (in most cases) 20,000 files per directory (if shooting in RAW+JPEG mode), which is well under the 65k limit.
Of course most memory cards will fill up with far fewer than 65k images (unless perhaps shooting with a very low quality).
The other reason I know of is simply for convenience's sake. Numbering the directories and the files gives you an easy way to tell how many images you've taken.
Now, when you start swapping memory cards between cameras (or even in the same one) you often throw off the camera's internal counter, and it gets confused about where images should go.
Suppose you have two cards, one whose last image was stored as 101/5000, then you put in a new card which contains an image called 104/2000, the camera will (in my experience, although this is likely vendor/firmware specific) jump to 104/2001 for the next image. Then when you put the first card back in, the camera may not jump back to 101/5001, but may continue with 104/2002, thus leaving you with an apparent gap in your photo numbers.
23,033 | Recently when looking at my memory card I noticed that there were multiple folders on the memory card. The file path looked like this:
```
DCIM/100D5100
DCIM/101D5100
```
The D5100 is the camera I am using but I have seen this behavior on multiple cameras. Why does the camera(s) automatically split the photos into multiple folders?
Note: The photos will go from 1 - 345 for folder 1 then 345 - 700 for folder 2 etc. but I have not seen any patterns that would dictate when a new folder is created. | 2012/05/02 | [
"https://photo.stackexchange.com/questions/23033",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/7438/"
] | On most (all?) cameras I have seen, the directories are numbered starting at 100, and the files are numbered 0000-9999 in each folder for up to 10,000 images in each folder.
One logical reason I can think of to split files like this is to avoid running into filesystem limits with the maximum number of files per directory. For FAT32, which most modern cameras use, there is a limit of 65,534 files per directory. By limiting to 10,000 images per directory, that means a maximum of (in most cases) 20,000 files per directory (if shooting in RAW+JPEG mode), which is well under the 65k limit.
Of course most memory cards will fill up with far fewer than 65k images (unless perhaps shooting with a very low quality).
The other reason I know of is simply for convenience's sake. Numbering the directories and the files gives you an easy way to tell how many images you've taken.
Now, when you start swapping memory cards between cameras (or even in the same one) you often throw off the camera's internal counter, and it gets confused about where images should go.
Suppose you have two cards, one whose last image was stored as 101/5000, then you put in a new card which contains an image called 104/2000, the camera will (in my experience, although this is likely vendor/firmware specific) jump to 104/2001 for the next image. Then when you put the first card back in, the camera may not jump back to 101/5001, but may continue with 104/2002, thus leaving you with an apparent gap in your photo numbers. | Flimzy is on track with his theory about filesystem limits, but not quite. I don't recall it completely (it's been a couple of years since I learned it at the university) but the actual reason is performance. The more files you put in a directory, the more difficult it gets to handle them. A list with 1000 files consumes much less memory than a list with 10,000 files, so this immensely speeds up filesystem operations, especially in a small system like digital cameras. Filesystem operations being: listing files, deleting files, creating previews (especially multiple images in a grid).
One could argue here that modern cameras should have enough computing power to handle such tasks, maybe this is correct. I have no idea what kind of processors are used in digital cameras, maybe this behavior is a relic from older cameras and the engineers program them out of habit that way. |
23,033 | Recently when looking at my memory card I noticed that there were multiple folders on the memory card. The file path looked like this:
```
DCIM/100D5100
DCIM/101D5100
```
The D5100 is the camera I am using but I have seen this behavior on multiple cameras. Why does the camera(s) automatically split the photos into multiple folders?
Note: The photos will go from 1 - 345 for folder 1 then 345 - 700 for folder 2 etc. but I have not seen any patterns that would dictate when a new folder is created. | 2012/05/02 | [
"https://photo.stackexchange.com/questions/23033",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/7438/"
] | On most (all?) cameras I have seen, the directories are numbered starting at 100, and the files are numbered 0000-9999 in each folder for up to 10,000 images in each folder.
One logical reason I can think of to split files like this is to avoid running into filesystem limits with the maximum number of files per directory. For FAT32, which most modern cameras use, there is a limit of 65,534 files per directory. By limiting to 10,000 images per directory, that means a maximum of (in most cases) 20,000 files per directory (if shooting in RAW+JPEG mode), which is well under the 65k limit.
Of course most memory cards will fill up with far fewer than 65k images (unless perhaps shooting with a very low quality).
The other reason I know of is simply for convenience's sake. Numbering the directories and the files gives you an easy way to tell how many images you've taken.
Now, when you start swapping memory cards between cameras (or even in the same one) you often throw off the camera's internal counter, and it gets confused about where images should go.
Suppose you have two cards, one whose last image was stored as 101/5000, then you put in a new card which contains an image called 104/2000, the camera will (in my experience, although this is likely vendor/firmware specific) jump to 104/2001 for the next image. Then when you put the first card back in, the camera may not jump back to 101/5001, but may continue with 104/2002, thus leaving you with an apparent gap in your photo numbers. | Consider this: 1000, 5000, or 10000 files (photos) per folder, you're always at risk of losing some if they get consolidated into any SAME PC folder.
Once you see the message "File IMG 3333 (OR, any # from 1 - 9999) already exists in the folder you are moving [or copying] to -- do you wish to proceed"?
Which image do you wish to lose? Neither, I'll bet. Retaining the 3-digit prefix # of the camera's folder, along with the 4 digit file # (yyy-xxxx) provides a unique "Serial #" for each & every photo. No matter which, or however many, are directed to the same PC folder, no 'either / or' response will ever cause the loss of one or the other.
Thank you, camera designers, for the foresight. But few users catch on right away, indicating you just need to communicate more thoroughly in your manuals.
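The yyy-xxxx "serial number" idea described above is easy to reconstruct in code. A hypothetical sketch (the folder/file name patterns here are illustrative assumptions, not a vendor specification):

```javascript
// Hypothetical sketch: derive a unique "serial" from the DCIM folder's
// 3-digit prefix and the file's 4-digit counter, e.g. the pair
// ('101D5100', 'DSC_5000.NEF') becomes "101-5000".
function photoSerial(folderName, fileName) {
  const folderNum = /^(\d{3})/.exec(folderName);       // leading 3 digits
  const fileNum = /(\d{4})\.[^.]+$/.exec(fileName);    // 4 digits before extension
  if (!folderNum || !fileNum) {
    throw new Error(`unrecognized name: ${folderName}/${fileName}`);
  }
  return `${folderNum[1]}-${fileNum[1]}`;
}
```

Prefixing copied files with this serial is one way to guarantee no two photos collide in the same PC folder.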
23,033 | Recently when looking at my memory card I noticed that there were multiple folders on the memory card. The file path looked like this:
```
DCIM/100D5100
DCIM/101D5100
```
The D5100 is the camera I am using but I have seen this behavior on multiple cameras. Why does the camera(s) automatically split the photos into multiple folders?
Note: The photos will go from 1 - 345 for folder 1 then 345 - 700 for folder 2 etc. but I have not seen any patterns that would dictate when a new folder is created. | 2012/05/02 | [
"https://photo.stackexchange.com/questions/23033",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/7438/"
] | This is really a case of *read your camera manual*. If your camera did not come with a full one, there will be one on a CD. The behavior greatly varies.
Usually cameras use filenames which gives them 4-digit numbering (some use 5 numbers), so you could in theory have 9999 photos in a single folder. However, cameras can break down these images in folders differently. Furthermore, some cameras let you control this using their Setup/Config menus:
* Some DSLRs like the Pentax K-5 aim to keep folders with a set number of images (500 in this case) but they can give you more since they won't break up a bracket.
* Some cameras won't break up bursts; some put them in separate folders.
* Some also have separate folders for images taken in panorama assist mode.
* A few models also break up folders by calendar days and name the folder according to the month and day.
* They can also break things by number of photos shot instead of counting the ones kept. In this case you will always have fewer than a certain number. Nikon entry-level cameras usually follow this approach with a limit of 499 or 999.
* Finally, some DSLRs, SLDs and a handful of ultra-zooms let folders be created by the user. In this case the folder name still contains a 3-digit sequence number which increments should the folder itself go beyond the capacity the camera allows.
*Personally I find it annoying to have to copy from more than one folder but I'm not the one designing these things! What I do is rename all files sequentially using a small Python script after copying the keepers to my computer.* | On my Nikon D5100 it puts 1000 photos in each directory. The problem, as mentioned above, is that when connecting to the PC from the camera, Windows Explorer does not see the folders. I recently had over 2300 photos from one session. I only got the photos from the first one. I have a USB chip reader that supports a number of different chips. Using this, Explorer saw the three folders on the chip and I could copy them easily to my PC. This is easier than writing a script.
23,033 | Recently when looking at my memory card I noticed that there were multiple folders on the memory card. The file path looked like this:
```
DCIM/100D5100
DCIM/101D5100
```
The D5100 is the camera I am using but I have seen this behavior on multiple cameras. Why does the camera(s) automatically split the photos into multiple folders?
Note: The photos will go from 1 - 345 for folder 1 then 345 - 700 for folder 2 etc. but I have not seen any patterns that would dictate when a new folder is created. | 2012/05/02 | [
"https://photo.stackexchange.com/questions/23033",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/7438/"
] | This is really a case of *read your camera manual*. If your camera did not come with a full one, there will be one on a CD. The behavior greatly varies.
Usually cameras use filenames which gives them 4-digit numbering (some use 5 numbers), so you could in theory have 9999 photos in a single folder. However, cameras can break down these images in folders differently. Furthermore, some cameras let you control this using their Setup/Config menus:
* Some DSLRs like the Pentax K-5 aim to keep folders with a set number of images (500 in this case) but they can give you more since they won't break up a bracket.
* Some cameras won't break up bursts; some put them in separate folders.
* Some also have separate folders for images taken in panorama assist mode.
* A few models also break up folders by calendar days and name the folder according to the month and day.
* They can also break things by number of photos shot instead of counting the ones kept. In this case you will always have fewer than a certain number. Nikon entry-level cameras usually follow this approach with a limit of 499 or 999.
* Finally, some DSLRs, SLDs and a handful of ultra-zooms let folders be created by the user. In this case the folder name still contains a 3-digit sequence number which increments should the folder itself go beyond the capacity the camera allows.
*Personally I find it annoying to have to copy from more than one folder but I'm not the one designing these things! What I do is rename all files sequentially using a small Python script after copying the keepers to my computer.* | Flimzy is on track with his theory about filesystem limits, but not quite. I don't recall it completely (it's been a couple of years since I learned it at the university) but the actual reason is performance. The more files you put in a directory, the more difficult it gets to handle them. A list with 1000 files consumes much less memory than a list with 10,000 files, so this immensely speeds up filesystem operations, especially in a small system like digital cameras. Filesystem operations being: listing files, deleting files, creating previews (especially multiple images in a grid).
One could argue here that modern cameras should have enough computing power to handle such tasks, maybe this is correct. I have no idea what kind of processors are used in digital cameras, maybe this behavior is a relic from older cameras and the engineers program them out of habit that way. |
23,033 | Recently when looking at my memory card I noticed that there were multiple folders on the memory card. The file path looked like this:
```
DCIM/100D5100
DCIM/101D5100
```
The D5100 is the camera I am using but I have seen this behavior on multiple cameras. Why does the camera(s) automatically split the photos into multiple folders?
Note: The photos will go from 1 - 345 for folder 1 then 345 - 700 for folder 2 etc. but I have not seen any patterns that would dictate when a new folder is created. | 2012/05/02 | [
"https://photo.stackexchange.com/questions/23033",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/7438/"
] | This is really a case of *read your camera manual*. If your camera did not come with a full one, there will be one on a CD. The behavior greatly varies.
Usually cameras use filenames which gives them 4-digit numbering (some use 5 numbers), so you could in theory have 9999 photos in a single folder. However, cameras can break down these images in folders differently. Furthermore, some cameras let you control this using their Setup/Config menus:
* Some DSLRs like the Pentax K-5 aim to keep folders with a set number of images (500 in this case) but they can give you more since they won't break up a bracket.
* Some cameras won't break up bursts; some put them in separate folders.
* Some also have separate folders for images taken in panorama assist mode.
* A few models also break up folders by calendar days and name the folder according to the month and day.
* They can also break things by number of photos shot instead of counting the ones kept. In this case you will always have fewer than a certain number. Nikon entry-level cameras usually follow this approach with a limit of 499 or 999.
* Finally, some DSLRs, SLDs and a handful of ultra-zooms let folders be created by the user. In this case the folder name still contains a 3-digit sequence number which increments should the folder itself go beyond the capacity the camera allows.
*Personally I find it annoying to have to copy from more than one folder but I'm not the one designing these things! What I do is rename all files sequentially using a small Python script after copying the keepers to my computer.* | Consider this: 1000, 5000, or 10000 files (photos) per folder, you're always at risk of losing some if they get consolidated into any SAME PC folder.
Once you see the message "File IMG 3333 (OR, any # from 1 - 9999) already exists in the folder you are moving [or copying] to -- do you wish to proceed"?
Which image do you wish to lose? Neither, I'll bet. Retaining the 3-digit prefix # of the camera's folder, along with the 4 digit file # (yyy-xxxx) provides a unique "Serial #" for each & every photo. No matter which, or however many, are directed to the same PC folder, no 'either / or' response will ever cause the loss of one or the other.
Thank you, camera designers, for the foresight. But few users catch on right away, indicating you just need to communicate more thoroughly in your manuals.
23,033 | Recently when looking at my memory card I noticed that there were multiple folders on the memory card. The file path looked like this:
```
DCIM/100D5100
DCIM/101D5100
```
The D5100 is the camera I am using but I have seen this behavior on multiple cameras. Why does the camera(s) automatically split the photos into multiple folders?
Note: The photos will go from 1 - 345 for folder 1 then 345 - 700 for folder 2 etc. but I have not seen any patterns that would dictate when a new folder is created. | 2012/05/02 | [
"https://photo.stackexchange.com/questions/23033",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/7438/"
] | Flimzy is on track with his theory about filesystem limits, but not quite. I don't recall it completely (it's been a couple of years since I learned it at the university) but the actual reason is performance. The more files you put in a directory, the more difficult it gets to handle them. A list with 1000 files consumes much less memory than a list with 10,000 files, so this immensely speeds up filesystem operations, especially in a small system like digital cameras. Filesystem operations being: listing files, deleting files, creating previews (especially multiple images in a grid).
One could argue here that modern cameras should have enough computing power to handle such tasks, maybe this is correct. I have no idea what kind of processors are used in digital cameras, maybe this behavior is a relic from older cameras and the engineers program them out of habit that way. | On my Nikkon D5100 it puts 1000 photos in each directory. The problem, as mentioned above, is that when connecting to the PC from the camera, Windows Exploerer does not see the folders. I recently had over 2300 photos from one session. I only got the photos from the first one. I have a USB chip reader that supports a number of different chips. Using this, Explorer saw the three folders o the chip and I could copy them easily to my PC. This is easier than writing a script, |
23,033 | Recently when looking at my memory card I noticed that there were multiple folders on the memory card. The file path looked like this:
```
DCIM/100D5100
DCIM/101D5100
```
The D5100 is the camera I am using but I have seen this behavior on multiple cameras. Why does the camera(s) automatically split the photos into multiple folders?
Note: The photos will go from 1 - 345 for folder 1 then 345 - 700 for folder 2 etc. but I have not seen any patterns that would dictate when a new folder is created. | 2012/05/02 | [
"https://photo.stackexchange.com/questions/23033",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/7438/"
] | On my Nikon D5100 it puts 1000 photos in each directory. The problem, as mentioned above, is that when connecting to the PC from the camera, Windows Explorer does not see the folders. I recently had over 2300 photos from one session. I only got the photos from the first one. I have a USB chip reader that supports a number of different chips. Using this, Explorer saw the three folders on the chip and I could copy them easily to my PC. This is easier than writing a script. | Consider this: 1000, 5000, or 10000 files (photos) per folder, you're always at risk of losing some if they get consolidated into any SAME PC folder.
Once you see the message "File IMG 3333 (OR, any # from 1 - 9999) already exists in the folder you are moving [or copying] to -- do you wish to proceed"?
Which image do you wish to lose? Neither, I'll bet. Retaining the 3-digit prefix # of the camera's folder, along with the 4 digit file # (yyy-xxxx) provides a unique "Serial #" for each & every photo. No matter which, or however many, are directed to the same PC folder, no 'either / or' response will ever cause the loss of one or the other.
Thank you, camera designers, for the foresight. But few users catch on right away, indicating you just need to communicate more thoroughly in your manuals.
23,033 | Recently when looking at my memory card I noticed that there were multiple folders on the memory card. The file path looked like this:
```
DCIM/100D5100
DCIM/101D5100
```
The D5100 is the camera I am using but I have seen this behavior on multiple cameras. Why does the camera(s) automatically split the photos into multiple folders?
Note: The photos will go from 1 - 345 for folder 1 then 345 - 700 for folder 2 etc. but I have not seen any patterns that would dictate when a new folder is created. | 2012/05/02 | [
"https://photo.stackexchange.com/questions/23033",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/7438/"
] | Flimzy is on track with his theory about filesystem limits, but not quite. I don't recall it completely (it's been a couple of years since I learned it at the university) but the actual reason is performance. The more files you put in a directory, the more difficult it gets to handle them. A list with 1000 files consumes much less memory than a list with 10,000 files, so this immensely speeds up filesystem operations, especially in a small system like digital cameras. Filesystem operations being: listing files, deleting files, creating previews (especially multiple images in a grid).
One could argue here that modern cameras should have enough computing power to handle such tasks, maybe this is correct. I have no idea what kind of processors are used in digital cameras, maybe this behavior is a relic from older cameras and the engineers program them out of habit that way. | Consider this: 1000, 5000, or 10000 files (photos) per folder, you're always at risk of losing some if they get consolidated into any SAME PC folder.
Once you see the message "File IMG 3333 (OR, any # from 1 - 9999) already exists in the folder you are moving [or copying] to -- do you wish to proceed"?
Which image do you wish to lose? Neither, I'll bet. Retaining the 3-digit prefix # of the camera's folder, along with the 4 digit file # (yyy-xxxx) provides a unique "Serial #" for each & every photo. No matter which, or however many, are directed to the same PC folder, no 'either / or' response will ever cause the loss of one or the other.
Thank you, camera designers, for the foresight. But few users catch on right away, indicating you just need to communicate more thoroughly in your manuals.
8,287,597 | Are there any projects that used node.js and closure-compiler (CC for short) together?
The official CC recommendation is to compile all code for an application together, but when I compile some simple node.js code which contains a `require("./MyLib.js")`, that line is put directly into the output, but it doesn't make any sense in that context.
I see a few options:
1. Code the entire application as a single file. This solves the problem by avoiding it, but is bad for maintenance.
2. Assume that all files will be concatenated before execution. Again this avoids the problem, but makes it harder to implement a un-compiled debug mode.
3. I'd like to get CC to "understand" the node.js require() function, but that probably can't be done without editing the compiler itself, can it? | 2011/11/27 | [
"https://Stackoverflow.com/questions/8287597",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146821/"
] | I have been using the Closure Compiler with Node for a project I haven't released yet. It has taken a bit of tooling, but it has helped catch many errors and has a pretty short edit-restart-test cycle.
First, I use [plovr](http://plovr.com/) (which is a project that I created and maintain) in order to use the Closure Compiler, Library, and Templates together. I write my Node code in the style of the Closure Library, so each file defines its own class or collection of utilities (like `goog.array`).
The next step is to create a bunch of externs files for the Node functions you want to use. I published some of these publicly at:
<https://github.com/bolinfest/node-google-closure-latitude-experiment/tree/master/externs/node/v0.4.8>
Though ultimately, I think that this should be a more community driven thing because there are a lot of functions to document. (It's also annoying because some Node functions have optional middle arguments rather than last arguments, making the type annotations complicated.) I haven't started this movement myself because it's possible that we could do some work with the Closure Complier to make this less awkward (see below).
Say you have created the externs file for the Node namespace `http`. In my system, I have decided that anytime I need `http`, I will include it via:
```
var http = require('http');
```
Though I do not include that `require()` call in my code. Instead, I use the `output-wrapper` feature of the Closure Compiler to prepend all of the `require()`s at the start of the file, which when declared in plovr, in my current project looks like this:
```
"output-wrapper": [
// Because the server code depends on goog.net.Cookies, which references the
// global variable "document" when instantiating goog.net.cookies, we must
// supply a dummy global object for document.
"var document = {};\n",
"var bee = require('beeline');\n",
"var crypto = require('crypto');\n",
"var fs = require('fs');\n",
"var http = require('http');\n",
"var https = require('https');\n",
"var mongodb = require('mongodb');\n",
"var nodePath = require('path');\n",
"var nodeUrl = require('url');\n",
"var querystring = require('querystring');\n",
"var SocketIo = require('socket.io');\n",
"%output%"
],
```
In this way, my library code never calls Node's `require()`, but the Compiler tolerates the uses of things like `http` in my code because the Compiler recognizes them as externs. As they are not true externs, they have to be prepended as I described.
Ultimately, after talking about this [on the discussion list](https://groups.google.com/d/topic/closure-compiler-discuss/jws6kzCp4LI/discussion), I think the better solution is to have a new type annotation for namespaces that would look something like:
```
goog.scope(function() {
/** @type {~NodeHttpNamespace} */
var http = require('http');
// Use http throughout.
});
```
In this scenario, an externs file would define the `NodeHttpNamespace` such that the Closure Compiler would be able to typecheck properties on it using the externs file. The difference here is that you could name the return value of `require()` whatever you wanted because the type of `http` would be this special namespace type. (Identifying a "jQuery namespace" for `$` is a similar issue.) This approach would eliminate the need to name your local variables for Node namespaces consistently, and would eliminate the need for that giant `output-wrapper` in the plovr config.
But that was a digression...once I have things set up as described above, I have a shell script that:
1. Uses plovr to build everything in `RAW` mode.
2. Runs `node` on the file generated by plovr.
Using `RAW` mode results in a large concatenation of all the files (though it also takes care of translating Soy templates and even CoffeeScript to JavaScript). Admittedly, this makes debugging a pain because the line numbers are nonsense, but it has been working well enough for me so far. All of the checks performed by the Closure Compiler have made it worth it. | Option 4: Don't use closure compiler.
People in the node community don't tend to use it. You don't need to minify node.js source code, that's silly.
There's simply no good use for minification.
As for the performance benefits of closure, I personally doubt it actually makes your programs faster.
And of course there's a cost: debugging compiled JavaScript is a nightmare.
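The `output-wrapper` mechanism in the first answer boils down to simple string substitution: the wrapper lines are concatenated and the `%output%` token is replaced with the compiled code. A minimal sketch of the idea (an illustration, not plovr's actual implementation):

```javascript
// Minimal sketch of what an "output-wrapper" does: join the wrapper
// lines and substitute the compiled output for the %output% token, so
// the require() calls end up prepended exactly once.
function applyOutputWrapper(wrapperLines, compiledOutput) {
  return wrapperLines.join('').replace('%output%', compiledOutput);
}
```

This is why the library code itself never needs to call `require()`: the wrapper supplies the bindings at the very top of the single compiled file.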
8,287,597 | Are there any projects that used node.js and closure-compiler (CC for short) together?
The official CC recommendation is to compile all code for an application together, but when I compile some simple node.js code which contains a `require("./MyLib.js")`, that line is put directly into the output, but it doesn't make any sense in that context.
I see a few options:
1. Code the entire application as a single file. This solves the problem by avoiding it, but is bad for maintenance.
2. Assume that all files will be concatenated before execution. Again this avoids the problem, but makes it harder to implement a un-compiled debug mode.
3. I'd like to get CC to "understand" the node.js require() function, but that probably can't be done without editing the compiler itself, can it? | 2011/11/27 | [
"https://Stackoverflow.com/questions/8287597",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146821/"
] | The svn HEAD of closure compiler seems to have [support for AMD](http://www.nonblocking.io/2011/12/experimental-support-for-common-js-and.html) | Option 4: Don't use closure compiler.
People in the node community don't tend to use it. You don't need to minify node.js source code, that's silly.
There's simply no good use for minification.
As for the performance benefits of closure, I personally doubt it actually makes your programs faster.
And of course there's a cost: debugging compiled JavaScript is a nightmare.
8,287,597 | Are there any projects that used node.js and closure-compiler (CC for short) together?
The official CC recommendation is to compile all code for an application together, but when I compile some simple node.js code which contains a `require("./MyLib.js")`, that line is put directly into the output, but it doesn't make any sense in that context.
I see a few options:
1. Code the entire application as a single file. This solves the problem by avoiding it, but is bad for maintenance.
2. Assume that all files will be concatenated before execution. Again this avoids the problem, but makes it harder to implement a un-compiled debug mode.
3. I'd like to get CC to "understand" the node.js require() function, but that probably can't be done without editing the compiler itself, can it? | 2011/11/27 | [
"https://Stackoverflow.com/questions/8287597",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146821/"
] | I replaced my old approach with a way simpler approach:
**New approach**
* No require() calls for my own app code, only for Node modules
* I need to concatenate server code to a single file before I can run or compile it
* Concatenating and compiling is done using [a simple grunt script](https://github.com/blaise-io/xssnake/blob/master/build/server.js)
Funny thing is that I didn't even have to add an extern for the `require()` calls. The Google Closure compiler understands that automagically. I did have to [add externs for nodejs modules that I use.](https://github.com/blaise-io/xssnake/tree/master/build/externs)
**Old approach**
As requested by OP, I will elaborate on my way of compiling node.js code with Google Closure Compiler.
I was inspired by the way bolinfest solved the problem and my solution uses the same principle. The difference is that I made one node.js script that does everything, including inlining modules (bolinfest's solution lets GCC take care of that). This makes it more automated, but also more fragile.
I just added code comments to every step I take to compile server code. See this commit: <https://github.com/blaise-io/xssnake/commit/da52219567b3941f13b8d94e36f743b0cbef44a3>
To summarize:
1. I start with my main module, the JS file that I pass to Node when I want to run it.
In my case, this file is [start.js](https://github.com/blaise-io/xssnake/blob/master/server/start.js).
2. In this file, using a regular expression, I detect all `require()` calls, including the assignment part.
In start.js, this matches one require call: `var Server = require('./lib/server.js');`
3. I retrieve the path where the file exists based on the file name, fetch its contents as a string, and remove module.exports assignments within the contents.
4. Then I replace the require call from step 2 with the contents from step 3, unless it is a core Node.js module, in which case I add it to a list of core modules that I save for later.
5. Step 3 will probably contain more `require()` calls, so I repeat steps 3 and 4 recursively until all `require()` calls are gone and I'm left with one huge string containing all code.
6. If all recursion has completed, I compile the code using the REST API.
You could also use the offline compiler.
I have externs for every core node.js module. [This tool is useful for generating externs](http://www.dotnetwise.com/Code/Externs/).
7. I prepend the removed core Node.js module `require` calls to the compiled code.
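As a rough illustration of step 2, the regex-based detection described there could look like the sketch below. This is a toy under my own assumptions, not the actual script from the linked commit, and it only matches the simple `var Name = require('./path.js');` form — exactly the fragility the answer warns about:

```javascript
// Detect relative require() assignments with a regular expression.
// Core modules like require('http') are deliberately not matched,
// because their paths do not start with a dot.
function findRequires(source) {
  const requireRe = /var\s+(\w+)\s*=\s*require\('(\.[^']+)'\);/g;
  const calls = [];
  let match;
  while ((match = requireRe.exec(source)) !== null) {
    calls.push({ statement: match[0], variable: match[1], path: match[2] });
  }
  return calls;
}
```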
---
Pre-Compiled code.
All `require` calls are removed. All my code is flattened.
<http://pastebin.com/eC2rVMiN>
Post-Compiled code.
Node.js core `require` calls have been prepended manually.
<http://pastebin.com/uB8CaejN>
---
Why you should not do it this way:
1. It uses regular expressions (not a parser or tokenizer) for detecting `require` calls, inlining and removing `module.exports`. This is fragile, as it does not cover all syntax variations.
2. When inlining, all module code is added to the global namespace. This is against the principles of Node.js, where every file has its own namespace, and this will cause errors if you have two different modules with the same global variables.
3. It does not improve the speed of your code that much, since V8 also performs a lot of code optimizations like inlining and dead code removal.
Why you should:
1. Because it does work when you have consistent code.
2. It will detect errors in your server code when you enable verbose warnings. | Option 4: Don't use closure compiler.
People in the node community don't tend to use it. You don't need to minify node.js source code; that's silly.
There's simply no good use for minification.
As for the performance benefits of closure, I personally doubt it actually makes your programs faster.
And of course there's a cost: debugging compiled JavaScript is a nightmare. |
8,287,597 | Are there any projects that used node.js and closure-compiler (CC for short) together?
The official CC recommendation is to compile all code for an application together, but when I compile some simple node.js code which contains a `require("./MyLib.js")`, that line is put directly into the output, but it doesn't make any sense in that context.
I see a few options:
1. Code the entire application as a single file. This solves the problem by avoiding it, but is bad for maintenance.
2. Assume that all files will be concatenated before execution. Again, this avoids the problem, but makes it harder to implement an un-compiled debug mode.
3. I'd like to get CC to "understand" the node.js require() function, but that probably can't be done without editing the compiler itself, can it? | 2011/11/27 | [
"https://Stackoverflow.com/questions/8287597",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146821/"
] | Closure Library on Node.js in 60 seconds.
It's supported, check <https://code.google.com/p/closure-library/wiki/NodeJS>. | Option 4: Don't use closure compiler.
People in the node community don't tend to use it. You don't need to minify node.js source code; that's silly.
There's simply no good use for minification.
As for the performance benefits of closure, I personally doubt it actually makes your programs faster.
And of course there's a cost: debugging compiled JavaScript is a nightmare. |
8,287,597 | Are there any projects that used node.js and closure-compiler (CC for short) together?
The official CC recommendation is to compile all code for an application together, but when I compile some simple node.js code which contains a `require("./MyLib.js")`, that line is put directly into the output, but it doesn't make any sense in that context.
I see a few options:
1. Code the entire application as a single file. This solves the problem by avoiding it, but is bad for maintenance.
2. Assume that all files will be concatenated before execution. Again, this avoids the problem, but makes it harder to implement an un-compiled debug mode.
3. I'd like to get CC to "understand" the node.js require() function, but that probably can't be done without editing the compiler itself, can it? | 2011/11/27 | [
"https://Stackoverflow.com/questions/8287597",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146821/"
] | I have been using the Closure Compiler with Node for a project I haven't released yet. It has taken a bit of tooling, but it has helped catch many errors and has a pretty short edit-restart-test cycle.
First, I use [plovr](http://plovr.com/) (which is a project that I created and maintain) in order to use the Closure Compiler, Library, and Templates together. I write my Node code in the style of the Closure Library, so each file defines its own class or collection of utilities (like `goog.array`).
The next step is to create a bunch of externs files for the Node functions you want to use. I published some of these publicly at:
<https://github.com/bolinfest/node-google-closure-latitude-experiment/tree/master/externs/node/v0.4.8>
Though ultimately, I think that this should be a more community-driven thing because there are a lot of functions to document. (It's also annoying because some Node functions have optional middle arguments rather than last arguments, making the type annotations complicated.) I haven't started this movement myself because it's possible that we could do some work with the Closure Compiler to make this less awkward (see below).
Say you have created the externs file for the Node namespace `http`. In my system, I have decided that anytime I need `http`, I will include it via:
```
var http = require('http');
```
Though I do not include that `require()` call in my code. Instead, I use the `output-wrapper` feature of the Closure Compiler to prepend all of the `require()`s at the start of the file, which, when declared in plovr, looks like this in my current project:
```
"output-wrapper": [
// Because the server code depends on goog.net.Cookies, which references the
// global variable "document" when instantiating goog.net.cookies, we must
// supply a dummy global object for document.
"var document = {};\n",
"var bee = require('beeline');\n",
"var crypto = require('crypto');\n",
"var fs = require('fs');\n",
"var http = require('http');\n",
"var https = require('https');\n",
"var mongodb = require('mongodb');\n",
"var nodePath = require('path');\n",
"var nodeUrl = require('url');\n",
"var querystring = require('querystring');\n",
"var SocketIo = require('socket.io');\n",
"%output%"
],
```
In this way, my library code never calls Node's `require()`, but the Compiler tolerates the uses of things like `http` in my code because the Compiler recognizes them as externs. As they are not true externs, they have to be prepended as I described.
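The effect of that wrapper can be sketched as a plain string substitution: every wrapper fragment is concatenated and `%output%` is replaced by the compiled code. This mirrors the idea only; it is not plovr's actual implementation:

```javascript
// Join the wrapper fragments and splice the compiled output into the
// "%output%" placeholder, yielding the final file that node will run.
function applyOutputWrapper(wrapperLines, compiledCode) {
  return wrapperLines.join('').replace('%output%', compiledCode);
}
```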
Ultimately, after talking about this [on the discussion list](https://groups.google.com/d/topic/closure-compiler-discuss/jws6kzCp4LI/discussion), I think the better solution is to have a new type annotation for namespaces that would look something like:
```
goog.scope(function() {
/** @type {~NodeHttpNamespace} */
var http = require('http');
// Use http throughout.
});
```
In this scenario, an externs file would define the `NodeHttpNamespace` such that the Closure Compiler would be able to typecheck properties on it using the externs file. The difference here is that you could name the return value of `require()` whatever you wanted because the type of `http` would be this special namespace type. (Identifying a "jQuery namespace" for `$` is a similar issue.) This approach would eliminate the need to name your local variables for Node namespaces consistently, and would eliminate the need for that giant `output-wrapper` in the plovr config.
But that was a digression...once I have things set up as described above, I have a shell script that:
1. Uses plovr to build everything in `RAW` mode.
2. Runs `node` on the file generated by plovr.
Using `RAW` mode results in a large concatenation of all the files (though it also takes care of translating Soy templates and even CoffeeScript to JavaScript). Admittedly, this makes debugging a pain because the line numbers are nonsense, but has been working well enough for me so far. All of the checks performed by the Closure Compiler have made it worth it. | The svn HEAD of closure compiler seems to have [support for AMD](http://www.nonblocking.io/2011/12/experimental-support-for-common-js-and.html) |
8,287,597 | Are there any projects that used node.js and closure-compiler (CC for short) together?
The official CC recommendation is to compile all code for an application together, but when I compile some simple node.js code which contains a `require("./MyLib.js")`, that line is put directly into the output, but it doesn't make any sense in that context.
I see a few options:
1. Code the entire application as a single file. This solves the problem by avoiding it, but is bad for maintenance.
2. Assume that all files will be concatenated before execution. Again, this avoids the problem, but makes it harder to implement an un-compiled debug mode.
3. I'd like to get CC to "understand" the node.js require() function, but that probably can't be done without editing the compiler itself, can it? | 2011/11/27 | [
"https://Stackoverflow.com/questions/8287597",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146821/"
] | I have been using the Closure Compiler with Node for a project I haven't released yet. It has taken a bit of tooling, but it has helped catch many errors and has a pretty short edit-restart-test cycle.
First, I use [plovr](http://plovr.com/) (which is a project that I created and maintain) in order to use the Closure Compiler, Library, and Templates together. I write my Node code in the style of the Closure Library, so each file defines its own class or collection of utilities (like `goog.array`).
The next step is to create a bunch of externs files for the Node functions you want to use. I published some of these publicly at:
<https://github.com/bolinfest/node-google-closure-latitude-experiment/tree/master/externs/node/v0.4.8>
Though ultimately, I think that this should be a more community-driven thing because there are a lot of functions to document. (It's also annoying because some Node functions have optional middle arguments rather than last arguments, making the type annotations complicated.) I haven't started this movement myself because it's possible that we could do some work with the Closure Compiler to make this less awkward (see below).
Say you have created the externs file for the Node namespace `http`. In my system, I have decided that anytime I need `http`, I will include it via:
```
var http = require('http');
```
Though I do not include that `require()` call in my code. Instead, I use the `output-wrapper` feature of the Closure Compiler to prepend all of the `require()`s at the start of the file, which, when declared in plovr, looks like this in my current project:
```
"output-wrapper": [
// Because the server code depends on goog.net.Cookies, which references the
// global variable "document" when instantiating goog.net.cookies, we must
// supply a dummy global object for document.
"var document = {};\n",
"var bee = require('beeline');\n",
"var crypto = require('crypto');\n",
"var fs = require('fs');\n",
"var http = require('http');\n",
"var https = require('https');\n",
"var mongodb = require('mongodb');\n",
"var nodePath = require('path');\n",
"var nodeUrl = require('url');\n",
"var querystring = require('querystring');\n",
"var SocketIo = require('socket.io');\n",
"%output%"
],
```
In this way, my library code never calls Node's `require()`, but the Compiler tolerates the uses of things like `http` in my code because the Compiler recognizes them as externs. As they are not true externs, they have to be prepended as I described.
Ultimately, after talking about this [on the discussion list](https://groups.google.com/d/topic/closure-compiler-discuss/jws6kzCp4LI/discussion), I think the better solution is to have a new type annotation for namespaces that would look something like:
```
goog.scope(function() {
/** @type {~NodeHttpNamespace} */
var http = require('http');
// Use http throughout.
});
```
In this scenario, an externs file would define the `NodeHttpNamespace` such that the Closure Compiler would be able to typecheck properties on it using the externs file. The difference here is that you could name the return value of `require()` whatever you wanted because the type of `http` would be this special namespace type. (Identifying a "jQuery namespace" for `$` is a similar issue.) This approach would eliminate the need to name your local variables for Node namespaces consistently, and would eliminate the need for that giant `output-wrapper` in the plovr config.
But that was a digression...once I have things set up as described above, I have a shell script that:
1. Uses plovr to build everything in `RAW` mode.
2. Runs `node` on the file generated by plovr.
Using `RAW` mode results in a large concatenation of all the files (though it also takes care of translating Soy templates and even CoffeeScript to JavaScript). Admittedly, this makes debugging a pain because the line numbers are nonsense, but has been working well enough for me so far. All of the checks performed by the Closure Compiler have made it worth it. | I replaced my old approach with a way simpler approach:
**New approach**
* No require() calls for my own app code, only for Node modules
* I need to concatenate server code to a single file before I can run or compile it
* Concatenating and compiling is done using [a simple grunt script](https://github.com/blaise-io/xssnake/blob/master/build/server.js)
Funny thing is that I didn't even have to add an extern for the `require()` calls. The Google Closure compiler understands that automagically. I did have to [add externs for nodejs modules that I use.](https://github.com/blaise-io/xssnake/tree/master/build/externs)
**Old approach**
As requested by OP, I will elaborate on my way of compiling node.js code with Google Closure Compiler.
I was inspired by the way bolinfest solved the problem and my solution uses the same principle. The difference is that I made one node.js script that does everything, including inlining modules (bolinfest's solution lets GCC take care of that). This makes it more automated, but also more fragile.
I just added code comments to every step I take to compile server code. See this commit: <https://github.com/blaise-io/xssnake/commit/da52219567b3941f13b8d94e36f743b0cbef44a3>
To summarize:
1. I start with my main module, the JS file that I pass to Node when I want to run it.
In my case, this file is [start.js](https://github.com/blaise-io/xssnake/blob/master/server/start.js).
2. In this file, using a regular expression, I detect all `require()` calls, including the assignment part.
In start.js, this matches one require call: `var Server = require('./lib/server.js');`
3. I retrieve the path where the file exists based on the file name, fetch its contents as a string, and remove module.exports assignments within the contents.
4. Then I replace the require call from step 2 with the contents from step 3, unless it is a core Node.js module, in which case I add it to a list of core modules that I save for later.
5. Step 3 will probably contain more `require()` calls, so I repeat steps 3 and 4 recursively until all `require()` calls are gone and I'm left with one huge string containing all code.
6. If all recursion has completed, I compile the code using the REST API.
You could also use the offline compiler.
I have externs for every core node.js module. [This tool is useful for generating externs](http://www.dotnetwise.com/Code/Externs/).
7. I prepend the removed core Node.js module `require` calls to the compiled code.
---
Pre-Compiled code.
All `require` calls are removed. All my code is flattened.
<http://pastebin.com/eC2rVMiN>
Post-Compiled code.
Node.js core `require` calls have been prepended manually.
<http://pastebin.com/uB8CaejN>
---
Why you should not do it this way:
1. It uses regular expressions (not a parser or tokenizer) for detecting `require` calls, inlining and removing `module.exports`. This is fragile, as it does not cover all syntax variations.
2. When inlining, all module code is added to the global namespace. This is against the principles of Node.js, where every file has its own namespace, and this will cause errors if you have two different modules with the same global variables.
3. It does not improve the speed of your code that much, since V8 also performs a lot of code optimizations like inlining and dead code removal.
Why you should:
1. Because it does work when you have consistent code.
2. It will detect errors in your server code when you enable verbose warnings. |
8,287,597 | Are there any projects that used node.js and closure-compiler (CC for short) together?
The official CC recommendation is to compile all code for an application together, but when I compile some simple node.js code which contains a `require("./MyLib.js")`, that line is put directly into the output, but it doesn't make any sense in that context.
I see a few options:
1. Code the entire application as a single file. This solves the problem by avoiding it, but is bad for maintenance.
2. Assume that all files will be concatenated before execution. Again, this avoids the problem, but makes it harder to implement an un-compiled debug mode.
3. I'd like to get CC to "understand" the node.js require() function, but that probably can't be done without editing the compiler itself, can it? | 2011/11/27 | [
"https://Stackoverflow.com/questions/8287597",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146821/"
] | I have been using the Closure Compiler with Node for a project I haven't released yet. It has taken a bit of tooling, but it has helped catch many errors and has a pretty short edit-restart-test cycle.
First, I use [plovr](http://plovr.com/) (which is a project that I created and maintain) in order to use the Closure Compiler, Library, and Templates together. I write my Node code in the style of the Closure Library, so each file defines its own class or collection of utilities (like `goog.array`).
The next step is to create a bunch of externs files for the Node functions you want to use. I published some of these publicly at:
<https://github.com/bolinfest/node-google-closure-latitude-experiment/tree/master/externs/node/v0.4.8>
Though ultimately, I think that this should be a more community-driven thing because there are a lot of functions to document. (It's also annoying because some Node functions have optional middle arguments rather than last arguments, making the type annotations complicated.) I haven't started this movement myself because it's possible that we could do some work with the Closure Compiler to make this less awkward (see below).
Say you have created the externs file for the Node namespace `http`. In my system, I have decided that anytime I need `http`, I will include it via:
```
var http = require('http');
```
Though I do not include that `require()` call in my code. Instead, I use the `output-wrapper` feature of the Closure Compiler to prepend all of the `require()`s at the start of the file, which, when declared in plovr, looks like this in my current project:
```
"output-wrapper": [
// Because the server code depends on goog.net.Cookies, which references the
// global variable "document" when instantiating goog.net.cookies, we must
// supply a dummy global object for document.
"var document = {};\n",
"var bee = require('beeline');\n",
"var crypto = require('crypto');\n",
"var fs = require('fs');\n",
"var http = require('http');\n",
"var https = require('https');\n",
"var mongodb = require('mongodb');\n",
"var nodePath = require('path');\n",
"var nodeUrl = require('url');\n",
"var querystring = require('querystring');\n",
"var SocketIo = require('socket.io');\n",
"%output%"
],
```
In this way, my library code never calls Node's `require()`, but the Compiler tolerates the uses of things like `http` in my code because the Compiler recognizes them as externs. As they are not true externs, they have to be prepended as I described.
Ultimately, after talking about this [on the discussion list](https://groups.google.com/d/topic/closure-compiler-discuss/jws6kzCp4LI/discussion), I think the better solution is to have a new type annotation for namespaces that would look something like:
```
goog.scope(function() {
/** @type {~NodeHttpNamespace} */
var http = require('http');
// Use http throughout.
});
```
In this scenario, an externs file would define the `NodeHttpNamespace` such that the Closure Compiler would be able to typecheck properties on it using the externs file. The difference here is that you could name the return value of `require()` whatever you wanted because the type of `http` would be this special namespace type. (Identifying a "jQuery namespace" for `$` is a similar issue.) This approach would eliminate the need to name your local variables for Node namespaces consistently, and would eliminate the need for that giant `output-wrapper` in the plovr config.
But that was a digression...once I have things set up as described above, I have a shell script that:
1. Uses plovr to build everything in `RAW` mode.
2. Runs `node` on the file generated by plovr.
Using `RAW` mode results in a large concatenation of all the files (though it also takes care of translating Soy templates and even CoffeeScript to JavaScript). Admittedly, this makes debugging a pain because the line numbers are nonsense, but has been working well enough for me so far. All of the checks performed by the Closure Compiler have made it worth it. | Closure Library on Node.js in 60 seconds.
It's supported, check <https://code.google.com/p/closure-library/wiki/NodeJS>. |
8,287,597 | Are there any projects that used node.js and closure-compiler (CC for short) together?
The official CC recommendation is to compile all code for an application together, but when I compile some simple node.js code which contains a `require("./MyLib.js")`, that line is put directly into the output, but it doesn't make any sense in that context.
I see a few options:
1. Code the entire application as a single file. This solves the problem by avoiding it, but is bad for maintenance.
2. Assume that all files will be concatenated before execution. Again, this avoids the problem, but makes it harder to implement an un-compiled debug mode.
3. I'd like to get CC to "understand" the node.js require() function, but that probably can't be done without editing the compiler itself, can it? | 2011/11/27 | [
"https://Stackoverflow.com/questions/8287597",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146821/"
] | The svn HEAD of closure compiler seems to have [support for AMD](http://www.nonblocking.io/2011/12/experimental-support-for-common-js-and.html) | I replaced my old approach with a way simpler approach:
**New approach**
* No require() calls for my own app code, only for Node modules
* I need to concatenate server code to a single file before I can run or compile it
* Concatenating and compiling is done using [a simple grunt script](https://github.com/blaise-io/xssnake/blob/master/build/server.js)
Funny thing is that I didn't even have to add an extern for the `require()` calls. The Google Closure compiler understands that automagically. I did have to [add externs for nodejs modules that I use.](https://github.com/blaise-io/xssnake/tree/master/build/externs)
**Old approach**
As requested by OP, I will elaborate on my way of compiling node.js code with Google Closure Compiler.
I was inspired by the way bolinfest solved the problem and my solution uses the same principle. The difference is that I made one node.js script that does everything, including inlining modules (bolinfest's solution lets GCC take care of that). This makes it more automated, but also more fragile.
I just added code comments to every step I take to compile server code. See this commit: <https://github.com/blaise-io/xssnake/commit/da52219567b3941f13b8d94e36f743b0cbef44a3>
To summarize:
1. I start with my main module, the JS file that I pass to Node when I want to run it.
In my case, this file is [start.js](https://github.com/blaise-io/xssnake/blob/master/server/start.js).
2. In this file, using a regular expression, I detect all `require()` calls, including the assignment part.
In start.js, this matches one require call: `var Server = require('./lib/server.js');`
3. I retrieve the path where the file exists based on the file name, fetch its contents as a string, and remove module.exports assignments within the contents.
4. Then I replace the require call from step 2 with the contents from step 3, unless it is a core Node.js module, in which case I add it to a list of core modules that I save for later.
5. Step 3 will probably contain more `require()` calls, so I repeat steps 3 and 4 recursively until all `require()` calls are gone and I'm left with one huge string containing all code.
6. If all recursion has completed, I compile the code using the REST API.
You could also use the offline compiler.
I have externs for every core node.js module. [This tool is useful for generating externs](http://www.dotnetwise.com/Code/Externs/).
7. I prepend the removed core Node.js module `require` calls to the compiled code.
---
Pre-Compiled code.
All `require` calls are removed. All my code is flattened.
<http://pastebin.com/eC2rVMiN>
Post-Compiled code.
Node.js core `require` calls have been prepended manually.
<http://pastebin.com/uB8CaejN>
---
Why you should not do it this way:
1. It uses regular expressions (not a parser or tokenizer) for detecting `require` calls, inlining and removing `module.exports`. This is fragile, as it does not cover all syntax variations.
2. When inlining, all module code is added to the global namespace. This is against the principles of Node.js, where every file has its own namespace, and this will cause errors if you have two different modules with the same global variables.
3. It does not improve the speed of your code that much, since V8 also performs a lot of code optimizations like inlining and dead code removal.
Why you should:
1. Because it does work when you have consistent code.
2. It will detect errors in your server code when you enable verbose warnings. |
8,287,597 | Are there any projects that used node.js and closure-compiler (CC for short) together?
The official CC recommendation is to compile all code for an application together, but when I compile some simple node.js code which contains a `require("./MyLib.js")`, that line is put directly into the output, but it doesn't make any sense in that context.
I see a few options:
1. Code the entire application as a single file. This solves the problem by avoiding it, but is bad for maintenance.
2. Assume that all files will be concatenated before execution. Again, this avoids the problem, but makes it harder to implement an un-compiled debug mode.
3. I'd like to get CC to "understand" the node.js require() function, but that probably can't be done without editing the compiler itself, can it? | 2011/11/27 | [
"https://Stackoverflow.com/questions/8287597",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146821/"
] | The svn HEAD of closure compiler seems to have [support for AMD](http://www.nonblocking.io/2011/12/experimental-support-for-common-js-and.html) | Closure Library on Node.js in 60 seconds.
It's supported, check <https://code.google.com/p/closure-library/wiki/NodeJS>. |
66,122,035 | So I was working on an automated WhatsApp messenger and I tried to install the module known as "pywhatkit", but I am getting an error every time.
The error trace is the following:
```
ERROR: Exception: Traceback (most recent call last):
  File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 171, in _merge_into_criterion
    crit = self.state.criteria[name]
KeyError: 'pymsgbox'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\cli\base_command.py", line 189, in _main
status = self.run(options, args)
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\cli\req_command.py", line 178, in wrapper
return func(self, options, args)
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\commands\install.py", line 316, in run
requirement_set = resolver.resolve(
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 121, in resolve
self._result = resolver.resolve(
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 453, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 318, in resolve
name, crit = self._merge_into_criterion(r, parent=None)
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 173, in _merge_into_criterion
crit = Criterion.from_requirement(self._p, requirement, parent)
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 82, in from_requirement
if not cands:
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\resolvelib\structs.py", line 124, in __bool__
return bool(self._sequence)
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 143, in __bool__
return any(self)
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 38, in _iter_built
candidate = func()
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\factory.py", line 167, in _make_candidate_from_link
self._link_candidate_cache[link] = LinkCandidate(
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 300, in __init__
super().__init__(
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 144, in __init__
self.dist = self._prepare()
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 226, in _prepare
dist = self._prepare_distribution()
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 311, in _prepare_distribution
return self._factory.preparer.prepare_linked_requirement(
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\operations\prepare.py", line 457, in prepare_linked_requirement
return self._prepare_linked_requirement(req, parallel_builds)
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\operations\prepare.py", line 500, in _prepare_linked_requirement
dist = _get_prepared_distribution(
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\operations\prepare.py", line 66, in _get_prepared_distribution
abstract_dist.prepare_distribution_metadata(finder, build_isolation)
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\distributions\sdist.py", line 39, in prepare_distribution_metadata
self._setup_isolation(finder)
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\distributions\sdist.py", line 66, in _setup_isolation
self.req.build_env = BuildEnvironment()
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\build_env.py", line 83, in __init__
fp.write(textwrap.dedent(
File "c:\users\ΰ€ͺΰ€\appdata\local\programs\python\python38\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode characters in position 148-149: character maps to <undefined>
``` | 2021/02/09 | [
"https://Stackoverflow.com/questions/66122035",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14027695/"
] | `pip install pywhatkit`
If this doesn't work, use
`pip install pipwin`
And then
`pipwin install pywhatkit`
If it is installed and still giving an error, try
`pip install pywhatkit --upgrade` | I had the same problem and tried a lot of tricks before finding my solution: I hadn't added my Python scripts directory, `C:\Users\Computer\AppData\Roaming\Python\Python310\Scripts`, to the PATH variable. When I added this path to PATH, **pywhatkit** ran on my machine. Note: I had recently reset my computer and installed Windows 10 Pro! |
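A side note on the `UnicodeEncodeError` traceback in the question: the failure happens while pip writes a temporary build file with the Windows `cp1252` codec under a user path containing non-ASCII characters. One hedged workaround, assuming Python >= 3.7, is to enable Python's UTF-8 mode before retrying the install:

```shell
# Python >= 3.7: UTF-8 mode (PEP 540) makes file writes default to UTF-8
# instead of the cp1252 locale codec, so pip can handle paths containing
# non-ASCII characters.
export PYTHONUTF8=1            # on Windows cmd: set PYTHONUTF8=1
python3 -X utf8 -c 'import sys; print(sys.flags.utf8_mode)'   # prints 1
# then retry: pip install pywhatkit
```

The same mode can also be enabled per-invocation with `python -X utf8 -m pip install ...`.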
55,461,960 | Let's say I have an array 3x3 `a` and would like to upsample it to a 30x30 array `b` with nearest neighbor interpolation.
Is it possible to use a technique which does not actually store repeated values? Something similar on how `broadcasting` works in `numpy`.
e.g. I would like to have an object such that when I call `b[x, x]` with `0 < x < 10` I get `a[0, 0]`. | 2019/04/01 | [
"https://Stackoverflow.com/questions/55461960",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7812912/"
] | Which version of Laravel are you using? Since Laravel 5 you should define your routes in the file routes/web.php.
**web.php**
`Route::get('user/{id}', 'UserController@show');`
**app\Http\Controllers\UserController**
```
<?php
namespace App\Http\Controllers;
use App\User;
use App\Http\Controllers\Controller;
class UserController extends Controller
{
/**
* Show the profile for the given user.
*
* @param int $id
* @return View
*/
public function show($id)
{
return view('user.profile', ['user' => User::findOrFail($id)]);
}
}
```
Check out <https://laravel.com/docs/5.8/controllers> | You'll load the view from within the corresponding controller method. For instance:
```
public function index()
{
$employees = Employee::all();
return view('employees.index')->with('employees', $employees);
}
```
Laravel will translate `employees.index` to `resources/views/employees/index.blade.php`.
Next you will modify the `routes/web.php` file. You can define routes in a number of different ways, however for most use cases you'll probably want to define your controllers as [resourceful](https://laravel.com/docs/5.8/controllers#resource-controllers), and so the route definition would look like this:
```
Route::resource('employees', 'EmployeeController');
```
This means your `index` view would be accessible via an HTTP call to `/employees/`.
Hope this helps. |
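On the numpy question at the top of this entry: a nearest-neighbour "upsample" that never stores the repeated values can be sketched with a zero-stride broadcast view. The factor `k` and the helper `b` below are illustrative names, not from the original post:

```python
import numpy as np

a = np.arange(9).reshape(3, 3)
k = 10  # upsampling factor: 3x3 viewed as 30x30

# Read-only broadcast view of shape (3, k, 3, k); the repeated axes have
# stride 0, so only the original 9 values are ever stored in memory.
view = np.broadcast_to(a[:, None, :, None], (a.shape[0], k, a.shape[1], k))

def b(i, j):
    """Index as if this were the dense 30x30 nearest-neighbour array."""
    return view[i // k, i % k, j // k, j % k]

print(b(5, 5) == a[0, 0])    # True
print(b(25, 25) == a[2, 2])  # True
```

The same lookup can of course be written directly as `a[i // k, j // k]`; the broadcast view is mainly useful when downstream code expects an array-shaped object.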
48,798,846 | I'm trying to deploy a rails 5.1 & react app created with webpacker gem using AWS Elastic Beanstalk. The problem is I keep getting the following error:
```
Webpacker requires Node.js >= 6.0.0 and you are using 4.6.0
```
I'm using Node 9.5.0 on my computer. Any suggestions?? | 2018/02/15 | [
"https://Stackoverflow.com/questions/48798846",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9362734/"
] | To install nodejs using yum (assuming you're using the default Amazon Linux)
<https://nodejs.org/en/download/package-manager/#enterprise-linux-and-fedora>
```
curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
yum -y install nodejs
```
Now to execute this on your instances, you need to add the required commands to a config file inside the `.ebextensions` dir, something like: `.ebextensions/01_install_dependencies.config`
File contents:
```
commands:
01_download_nodejs:
command: curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
02_install_nodejs:
command: yum -y install nodejs
``` | For those of you who found this issue when upgrading to Rails 6, [I wrote a post](https://austingwalters.com/rails-6-on-elastic-beanstalk/) on how to fix it.
Basically, you have to:
* Create directory .ebextensions in the top level of your application
* Create a .config file which contains the commands to fix deployments (script here)
+ Add commands for upgrading nodejs to >=6.14.4
+ Add container commands to install webpack & precompile assets
* Disable asset building through elastic beanstalk
+ set RAILS\_SKIP\_ASSET\_COMPILATION to True
What that amounts to is:
```
commands:
01_install_yarn:
command: "sudo wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo && curl --silent --location https://rpm.nodesource.com/setup_6.x | sudo bash - && sudo yum install yarn -y"
02_download_nodejs:
command: curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
03_install_nodejs:
command: yum -y install nodejs
container_commands:
04_install_webpack:
command: npm install --save-dev webpack
05_precompile:
command: bundle exec rake assets:precompile
``` |
48,798,846 | I'm trying to deploy a rails 5.1 & react app created with webpacker gem using AWS Elastic Beanstalk. The problem is I keep getting the following error:
```
Webpacker requires Node.js >= 6.0.0 and you are using 4.6.0
```
I'm using Node 9.5.0 on my computer. Any suggestions?? | 2018/02/15 | [
"https://Stackoverflow.com/questions/48798846",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9362734/"
] | For those who also need to install Yarn, I found the below worked for me:
```
commands:
01_install_yarn:
command: "sudo wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo && curl --silent --location https://rpm.nodesource.com/setup_6.x | sudo bash - && sudo yum install yarn -y"
02_download_nodejs:
command: curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
03_install_nodejs:
command: yum -y install nodejs
``` | For those of you who found this issue when upgrading to Rails 6, [I wrote a post](https://austingwalters.com/rails-6-on-elastic-beanstalk/) on how to fix it.
Basically, you have to:
* Create directory .ebextensions in the top level of your application
* Create a .config file which contains the commands to fix deployments (script here)
+ Add commands for upgrading nodejs to >=6.14.4
+ Add container commands to install webpack & precompile assets
* Disable asset building through elastic beanstalk
+ set RAILS\_SKIP\_ASSET\_COMPILATION to True
What that amounts to is:
```
commands:
01_install_yarn:
command: "sudo wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo && curl --silent --location https://rpm.nodesource.com/setup_6.x | sudo bash - && sudo yum install yarn -y"
02_download_nodejs:
command: curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
03_install_nodejs:
command: yum -y install nodejs
container_commands:
04_install_webpack:
command: npm install --save-dev webpack
05_precompile:
command: bundle exec rake assets:precompile
``` |
30,800,046 | I want to extract a menu index with python. The menu index is a tree like this:
```
1.
1.1.
1.1.1.
2.
3.1.
3.2.
```
To find this I wrote the following code:
```
first = re.findall(r"[0-9]{1}[.]{1}(?:([0-9][.])?(?:([0-9]?[.]?)))" , menu)
```
This doesn't work, but when I put the regex in the online regex tool (<http://www.regexr.com/>) then it works.
How is this possible? | 2015/06/12 | [
"https://Stackoverflow.com/questions/30800046",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1104939/"
] | Here is a possible solution. I have made a traversal to the innermost text element and then converted it into a checkbox.
```
$('#checkbox').click(function() {
var $p = "";
$p = $('.outputBox');
wrapText($p);
});
function wrapText(domObj) {
var re = /\S/;
domObj.contents().filter(function() {
if (this.nodeType === 1) {
wrapText($(this));
} else {
return (this.nodeType === 3 && re.test(this.nodeValue));
}
})
.wrap('<label> <input type="checkbox" name="field_1" value="on" /></label>');
}
```
Here is the fiddle:
<http://jsfiddle.net/wLrfqvss/6/>
Hope that's what you are looking for.
If you want this to happen for only certain elements, you could always add a check like so:
```
if (this.nodeType === 1 && this.tagName === 'DIV')
```
in the above code. | How about using `*(space)` to generate `li`? I used a regex and string processing to generate it.
```
function update() {
var str = $('#inputBox').html();
$('#outputBox').html(str);
$('#showHTML').text(str);
str = str.replace(/(<br>$)/,'')
.replace(/(<br>)/g,'$1\n')
.replace(/\* (.*)/g, '<li>$1</li>')
.replace(/^(?!<li>)/gm,'<input type="checkbox" name="field_1" value="on">')
.replace(/^<li>(.*)<\/li>/gm,'<li><input type="checkbox" name="field_1" value="on">$1</li>');
if(str.indexOf('<li>')!=-1) {
var index = str.indexOf('<li>');
var lastIndex;
str = str.substring(0, index) + '<ul>' + str.substring(index);
lastIndex = str.lastIndexOf('</li>') + '</li>'.length+ 1;
str = str.substring(0, lastIndex) + '</ul>' + str.substring(lastIndex);
}
$('#translatedBox').html(str);
$('#translatedHTML').text(str);
}
```
Here is the [jsfiddle](http://jsfiddle.net/ssk7833/8f04wawy/6/) |
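The Python question this entry opens with has a self-contained explanation: `re.findall` returns the contents of any capturing groups (as tuples) rather than the full matches whenever the pattern contains groups, which is why an online tester showing full matches and `findall` disagree. A sketch using only non-capturing groups; the simplified pattern below is an illustration, not the original poster's:

```python
import re

menu = "1.\n1.1.\n1.1.1.\n2.\n3.1.\n3.2."

# With capturing groups, findall returns the group contents; making every
# group non-capturing with (?:...) makes it return the full matches instead.
first = re.findall(r"\d+\.(?:\d+\.)*", menu)
print(first)  # ['1.', '1.1.', '1.1.1.', '2.', '3.1.', '3.2.']
```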
30,800,046 | I want to extract a menu index with python. The menu index is a tree like this:
```
1.
1.1.
1.1.1.
2.
3.1.
3.2.
```
To find this I wrote the following code:
```
first = re.findall(r"[0-9]{1}[.]{1}(?:([0-9][.])?(?:([0-9]?[.]?)))" , menu)
```
This doesn't work, but when I put the regex in the online regex tool (<http://www.regexr.com/>) then it works.
How is this possible? | 2015/06/12 | [
"https://Stackoverflow.com/questions/30800046",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1104939/"
] | Here is a possible solution. I have made a traversal to the innermost text element and then converted it into a checkbox.
```
$('#checkbox').click(function() {
var $p = "";
$p = $('.outputBox');
wrapText($p);
});
function wrapText(domObj) {
var re = /\S/;
domObj.contents().filter(function() {
if (this.nodeType === 1) {
wrapText($(this));
} else {
return (this.nodeType === 3 && re.test(this.nodeValue));
}
})
.wrap('<label> <input type="checkbox" name="field_1" value="on" /></label>');
}
```
Here is the fiddle:
<http://jsfiddle.net/wLrfqvss/6/>
Hope that's what you are looking for.
If you want this to happen for only certain elements, you could always add a check like so:
```
if (this.nodeType === 1 && this.tagName === 'DIV')
```
in the above code. | Hello, I changed your js code as follows:
```
$('div[contenteditable=true]').blur(function() {
$('.outputBox').append('<br>').append("<input type='checkbox' value='"+$(this).html()+"' />"+$(this).html());
$(this).html("");
});
$('div[contenteditable=true]').keydown(function(e) {
//$('.outputBox').html($(this).html()); // I commented out this line
});
```
I hope it works something like what you were looking for. |
30,800,046 | I want to extract a menu index with python. The menu index is a tree like this:
```
1.
1.1.
1.1.1.
2.
3.1.
3.2.
```
To find this I wrote the following code:
```
first = re.findall(r"[0-9]{1}[.]{1}(?:([0-9][.])?(?:([0-9]?[.]?)))" , menu)
```
This doesn't work, but when I put the regex in the online regex tool (<http://www.regexr.com/>) then it works.
How is this possible? | 2015/06/12 | [
"https://Stackoverflow.com/questions/30800046",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1104939/"
] | Almost everybody has given an answer to the first case of the question, but I found an answer for the 2nd case.
I added these two lines in the click event of **SHOW CONTENT with CHECKBOX**.
I guess @arthi sreekumar's answer is also getting the `li`s into checkboxes; upvote for that.
```
$('.outputBox').find("ol").find("li").wrapInner(('<label style="display:block;"> <input type="checkbox" class="liolChk" name="field_1" value="on" />'));
$('.outputBox').find("ul").find("li").wrapInner(('<label style="display:block;"> <input type="checkbox" class="liolChk" name="field_1" value="on" />'));
```
**[Fiddle Demo](http://jsfiddle.net/vaibviad/wLrfqvss/10/)** | Here is a possible solution. I have made a traversal to the innermost text element and then converted it into a checkbox.
```
$('#checkbox').click(function() {
var $p = "";
$p = $('.outputBox');
wrapText($p);
});
function wrapText(domObj) {
var re = /\S/;
domObj.contents().filter(function() {
if (this.nodeType === 1) {
wrapText($(this));
} else {
return (this.nodeType === 3 && re.test(this.nodeValue));
}
})
.wrap('<label> <input type="checkbox" name="field_1" value="on" /></label>');
}
```
Here is the fiddle:
<http://jsfiddle.net/wLrfqvss/6/>
Hope that's what you are looking for.
If you want this to happen for only certain elements, you could always add a check like so:
```
if (this.nodeType === 1 && this.tagName === 'DIV')
```
in the above code. |
30,800,046 | I want to extract a menu index with python. The menu index is a tree like this:
```
1.
1.1.
1.1.1.
2.
3.1.
3.2.
```
To find this I wrote the following code:
```
first = re.findall(r"[0-9]{1}[.]{1}(?:([0-9][.])?(?:([0-9]?[.]?)))" , menu)
```
This doesn't work, but when I put the regex in the online regex tool (<http://www.regexr.com/>) then it works.
How is this possible? | 2015/06/12 | [
"https://Stackoverflow.com/questions/30800046",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1104939/"
] | You can use a template for the checkbox instead of hardcoding the HTML:
```
<div id="template" style="visibility:hidden">
<label> <input type="checkbox" name="field_1" value="on" /></label>
</div>
```
in code
```
$('#checkbox').click(function() {});
function createElement()
{
var targetElement=$("#template").clone(true);
targetElement.removeAttr("style");
targetElement.attr("name","new id");
var $p = "";
var re = /\S/;
$p = $('.outputBox');
$p.contents()
.filter(function() {
return this.nodeType == 3 && re.test(this.nodeValue);
})
.wrap(targetElement)
}
```
This will also give you the flexibility of using any HTML template, as the code is not control-specific. | How about using `*(space)` to generate `li`? I used a regex and string processing to generate it.
```
function update() {
var str = $('#inputBox').html();
$('#outputBox').html(str);
$('#showHTML').text(str);
str = str.replace(/(<br>$)/,'')
.replace(/(<br>)/g,'$1\n')
.replace(/\* (.*)/g, '<li>$1</li>')
.replace(/^(?!<li>)/gm,'<input type="checkbox" name="field_1" value="on">')
.replace(/^<li>(.*)<\/li>/gm,'<li><input type="checkbox" name="field_1" value="on">$1</li>');
if(str.indexOf('<li>')!=-1) {
var index = str.indexOf('<li>');
var lastIndex;
str = str.substring(0, index) + '<ul>' + str.substring(index);
lastIndex = str.lastIndexOf('</li>') + '</li>'.length+ 1;
str = str.substring(0, lastIndex) + '</ul>' + str.substring(lastIndex);
}
$('#translatedBox').html(str);
$('#translatedHTML').text(str);
}
```
Here is the [jsfiddle](http://jsfiddle.net/ssk7833/8f04wawy/6/) |
30,800,046 | I want to extract a menu index with python. The menu index is a tree like this:
```
1.
1.1.
1.1.1.
2.
3.1.
3.2.
```
To find this I wrote the following code:
```
first = re.findall(r"[0-9]{1}[.]{1}(?:([0-9][.])?(?:([0-9]?[.]?)))" , menu)
```
This doesn't work, but when I put the regex in the online regex tool (<http://www.regexr.com/>) then it works.
How is this possible? | 2015/06/12 | [
"https://Stackoverflow.com/questions/30800046",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1104939/"
] | Almost everybody has given an answer to the first case of the question, but I found an answer for the 2nd case.
I added these two lines in the click event of **SHOW CONTENT with CHECKBOX**.
I guess @arthi sreekumar's answer is also getting the `li`s into checkboxes; upvote for that.
```
$('.outputBox').find("ol").find("li").wrapInner(('<label style="display:block;"> <input type="checkbox" class="liolChk" name="field_1" value="on" />'));
$('.outputBox').find("ul").find("li").wrapInner(('<label style="display:block;"> <input type="checkbox" class="liolChk" name="field_1" value="on" />'));
```
**[Fiddle Demo](http://jsfiddle.net/vaibviad/wLrfqvss/10/)** | How about using `*(space)` to generate `li`? I used a regex and string processing to generate it.
```
function update() {
var str = $('#inputBox').html();
$('#outputBox').html(str);
$('#showHTML').text(str);
str = str.replace(/(<br>$)/,'')
.replace(/(<br>)/g,'$1\n')
.replace(/\* (.*)/g, '<li>$1</li>')
.replace(/^(?!<li>)/gm,'<input type="checkbox" name="field_1" value="on">')
.replace(/^<li>(.*)<\/li>/gm,'<li><input type="checkbox" name="field_1" value="on">$1</li>');
if(str.indexOf('<li>')!=-1) {
var index = str.indexOf('<li>');
var lastIndex;
str = str.substring(0, index) + '<ul>' + str.substring(index);
lastIndex = str.lastIndexOf('</li>') + '</li>'.length+ 1;
str = str.substring(0, lastIndex) + '</ul>' + str.substring(lastIndex);
}
$('#translatedBox').html(str);
$('#translatedHTML').text(str);
}
```
Here is the [jsfiddle](http://jsfiddle.net/ssk7833/8f04wawy/6/) |
30,800,046 | I want to extract a menu index with python. The menu index is a tree like this:
```
1.
1.1.
1.1.1.
2.
3.1.
3.2.
```
To find this I wrote the following code:
```
first = re.findall(r"[0-9]{1}[.]{1}(?:([0-9][.])?(?:([0-9]?[.]?)))" , menu)
```
This doesn't work, but when I put the regex in the online regex tool (<http://www.regexr.com/>) then it works.
How is this possible? | 2015/06/12 | [
"https://Stackoverflow.com/questions/30800046",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1104939/"
] | You can use a template for the checkbox instead of hardcoding the HTML:
```
<div id="template" style="visibility:hidden">
<label> <input type="checkbox" name="field_1" value="on" /></label>
</div>
```
in code
```
$('#checkbox').click(function() {});
function createElement()
{
var targetElement=$("#template").clone(true);
targetElement.removeAttr("style");
targetElement.attr("name","new id");
var $p = "";
var re = /\S/;
$p = $('.outputBox');
$p.contents()
.filter(function() {
return this.nodeType == 3 && re.test(this.nodeValue);
})
.wrap(targetElement)
}
```
This will also give you the flexibility of using any HTML template, as the code is not control-specific. | Hello, I changed your js code as follows:
```
$('div[contenteditable=true]').blur(function() {
$('.outputBox').append('<br>').append("<input type='checkbox' value='"+$(this).html()+"' />"+$(this).html());
$(this).html("");
});
$('div[contenteditable=true]').keydown(function(e) {
//$('.outputBox').html($(this).html()); // I commented out this line
});
```
I hope it works something like what you were looking for. |
30,800,046 | I want to extract a menu index with python. The menu index is a tree like this:
```
1.
1.1.
1.1.1.
2.
3.1.
3.2.
```
To find this I wrote the following code:
```
first = re.findall(r"[0-9]{1}[.]{1}(?:([0-9][.])?(?:([0-9]?[.]?)))" , menu)
```
This doesn't work, but when I put the regex in the online regex tool (<http://www.regexr.com/>) then it works.
How is this possible? | 2015/06/12 | [
"https://Stackoverflow.com/questions/30800046",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1104939/"
] | Almost everybody has given an answer to the first case of the question, but I found an answer for the 2nd case.
I added these two lines in the click event of **SHOW CONTENT with CHECKBOX**.
I guess @arthi sreekumar's answer is also getting the `li`s into checkboxes; upvote for that.
```
$('.outputBox').find("ol").find("li").wrapInner(('<label style="display:block;"> <input type="checkbox" class="liolChk" name="field_1" value="on" />'));
$('.outputBox').find("ul").find("li").wrapInner(('<label style="display:block;"> <input type="checkbox" class="liolChk" name="field_1" value="on" />'));
```
**[Fiddle Demo](http://jsfiddle.net/vaibviad/wLrfqvss/10/)** | You can use a template for the checkbox instead of hardcoding the HTML:
```
<div id="template" style="visibility:hidden">
<label> <input type="checkbox" name="field_1" value="on" /></label>
</div>
```
in code
```
$('#checkbox').click(function() {});
function createElement()
{
var targetElement=$("#template").clone(true);
targetElement.removeAttr("style");
targetElement.attr("name","new id");
var $p = "";
var re = /\S/;
$p = $('.outputBox');
$p.contents()
.filter(function() {
return this.nodeType == 3 && re.test(this.nodeValue);
})
.wrap(targetElement)
}
```
This will also give you the flexibility of using any HTML template, as the code is not control-specific. |
30,800,046 | I want to extract a menu index with python. The menu index is a tree like this:
```
1.
1.1.
1.1.1.
2.
3.1.
3.2.
```
To find this I wrote the following code:
```
first = re.findall(r"[0-9]{1}[.]{1}(?:([0-9][.])?(?:([0-9]?[.]?)))" , menu)
```
This doesn't work, but when I put the regex in the online regex tool (<http://www.regexr.com/>) then it works.
How is this possible? | 2015/06/12 | [
"https://Stackoverflow.com/questions/30800046",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1104939/"
] | Almost everybody has given an answer to the first case of the question, but I found an answer for the 2nd case.
I added these two lines in the click event of **SHOW CONTENT with CHECKBOX**.
I guess @arthi sreekumar's answer is also getting the `li`s into checkboxes; upvote for that.
```
$('.outputBox').find("ol").find("li").wrapInner(('<label style="display:block;"> <input type="checkbox" class="liolChk" name="field_1" value="on" />'));
$('.outputBox').find("ul").find("li").wrapInner(('<label style="display:block;"> <input type="checkbox" class="liolChk" name="field_1" value="on" />'));
```
**[Fiddle Demo](http://jsfiddle.net/vaibviad/wLrfqvss/10/)** | Hello, I changed your js code as follows:
```
$('div[contenteditable=true]').blur(function() {
$('.outputBox').append('<br>').append("<input type='checkbox' value='"+$(this).html()+"' />"+$(this).html());
$(this).html("");
});
$('div[contenteditable=true]').keydown(function(e) {
//$('.outputBox').html($(this).html()); // I commented out this line
});
```
I hope it works something like what you were looking for. |
2,998,384 | I'm trying to write a program that navigates the local file system using a config file containing relevant filepaths. My question is this: What are the best practices to use when performing file I/O (this will be from the desktop app to a server and back) and file system navigation in C#?
I know how to google, and I have found several solutions, but I would like to know which of the various functions is most robust and flexible. As well, if anyone has any tips regarding exception handling for C# file I/O that would also be very helpful. | 2010/06/08 | [
"https://Stackoverflow.com/questions/2998384",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/259648/"
] | You don't need a separate library, use the classes in the `System.IO` namespace like `File`, `FileInfo`, `Directory`, `DirectoryInfo`. A simple example:
```
var d = new DirectoryInfo(@"c:\");
foreach(FileInfo fi in d.GetFiles())
Console.WriteLine(fi.Name);
``` | What various libraries are you talking about?
I would pretty much stick to `System.IO.Directory` and such. It has everything you need.
Something like:
```
foreach (var file in System.IO.Directory.GetFiles(@"C:\Yourpath"))
{
// Do ya thang.
}
``` |
2,998,384 | I'm trying to write a program that navigates the local file system using a config file containing relevant filepaths. My question is this: What are the best practices to use when performing file I/O (this will be from the desktop app to a server and back) and file system navigation in C#?
I know how to google, and I have found several solutions, but I would like to know which of the various functions is most robust and flexible. As well, if anyone has any tips regarding exception handling for C# file I/O that would also be very helpful. | 2010/06/08 | [
"https://Stackoverflow.com/questions/2998384",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/259648/"
] | You don't need a separate library, use the classes in the `System.IO` namespace like `File`, `FileInfo`, `Directory`, `DirectoryInfo`. A simple example:
```
var d = new DirectoryInfo(@"c:\");
foreach(FileInfo fi in d.GetFiles())
Console.WriteLine(fi.Name);
``` | There are various classes you can use in the `System.IO` namespace, including `File`, `FileInfo`, `Directory`, and `DirectoryInfo`.
As for practices... as with any IO, make sure you close any streams you open. You may also need to work with `IDisposable` objects, so look into the `using` keyword. |
2,998,384 | I'm trying to write a program that navigates the local file system using a config file containing relevant filepaths. My question is this: What are the best practices to use when performing file I/O (this will be from the desktop app to a server and back) and file system navigation in C#?
I know how to google, and I have found several solutions, but I would like to know which of the various functions is most robust and flexible. As well, if anyone has any tips regarding exception handling for C# file I/O that would also be very helpful. | 2010/06/08 | [
"https://Stackoverflow.com/questions/2998384",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/259648/"
] | You don't need a separate library, use the classes in the `System.IO` namespace like `File`, `FileInfo`, `Directory`, `DirectoryInfo`. A simple example:
```
var d = new DirectoryInfo(@"c:\");
foreach(FileInfo fi in d.GetFiles())
Console.WriteLine(fi.Name);
``` | `System.IO` is all you need :)
As regards exception handling: unless we are *expecting* an exception, we should never catch it. Truly unexpected exceptions should go unhandled. [looks guilty] OK, the only exception is at the highest level, and *only* for reporting purposes, such as
```
// assuming console program, but every application Console, WinForm,
// Wpf, WindowsService, WebService, WcfService has a similar entry point
class Program
{
// assume log4net logging here, but could as easily be
// Console.WriteLine, or hand rolled logger
private static readonly ILog _log = LogManager.GetLogger (typeof (Program));
static void Main (string[] args)
{
AppDomain.CurrentDomain.UnhandledException +=
CurrentDomain_UnhandledException;
}
private static void CurrentDomain_UnhandledException (
object sender,
UnhandledExceptionEventArgs e)
{
_log.Fatal (
string.Format (
"Unhandled exception caught by " +
"'CurrentDomain_UnhandledException'. Terminating program. Exception: {0}",
e.ExceptionObject));
}
}
```
If you are expecting an exception then one of the following is acceptable
```
// example of first option. this applies ONLY when there is a
// well-defined negative path or recovery scenario
public void SomeFunction (string filePath)
{
try
{
string allText = System.IO.File.ReadAllText (filePath);
}
// catch ONLY those exceptions you expect
catch (System.ArgumentException e)
{
// ALWAYS log an error, expected or otherwise.
_log.Warn ("Argument exception in SomeFunction", e);
// if the use-case\control flow is recoverable, invoke
// recovery logic, preferably out of try-catch scope
}
}
```
or
```
// example of second option. this applies ONLY when there is no
// well defined negative path and we require additional information
// on failure
public void SomeFunction (string filePath)
{
try
{
string allText = System.IO.File.ReadAllText (filePath);
}
// catch ONLY those exceptions you expect
catch (System.ArgumentException innerException)
{
// do whatever you need to do to identify this
// problem area and provide additional context
// like parameters or what have you. ALWAYS
// provide inner exception
throw new SomeCustomException (
"some new message with extra info.",
maybeSomeAdditionalContext,
innerException);
// no requirement to log, assume caller will
// handle appropriately
}
}
``` |
817,325 | We're currently running on IIS6, but hoping to move to IIS 7 soon.
We're moving an existing web forms site over to ASP.Net MVC. We have quite a few legacy pages which we need to redirect to the new controllers. I came across this article which looked interesting:
<http://blog.eworldui.net/post/2008/04/ASPNET-MVC---Legacy-Url-Routing.aspx>
So I guess I could either write my own route handler, or do my redirect in the controller. The latter smells slightly.
However, I'm not quite sure how to handle the query string values from the legacy urls which ideally I need to pass to my controller's Show() method. For example:
**Legacy URL:**
>
> /Artists/ViewArtist.aspx?Id=4589
>
>
>
**I want this to map to:**
>
> ArtistsController Show action
>
>
>
Actually my Show action takes the artist name, so I do want the user to be redirected from the Legacy URL to /artists/Madonna
Thanks! | 2009/05/03 | [
"https://Stackoverflow.com/questions/817325",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2048091/"
] | depending on the article you mentioned, these are the steps to accomplish this:
1-Your LegacyHandler must extract the routes values from the query string(in this case it is the artist's id)
here is the code to do that:
```
public class LegacyHandler:MvcHandler
{
private RequestContext requestContext;
public LegacyHandler(RequestContext requestContext) : base(requestContext)
{
this.requestContext = requestContext;
}
protected override void ProcessRequest(HttpContextBase httpContext)
{
string redirectActionName = ((LegacyRoute) RequestContext.RouteData.Route).RedirectActionName;
var queryString = requestContext.HttpContext.Request.QueryString;
foreach (var key in queryString.AllKeys)
{
requestContext.RouteData.Values.Add(key, queryString[key]);
}
VirtualPathData path = RouteTable.Routes.GetVirtualPath(requestContext, redirectActionName,
requestContext.RouteData.Values);
httpContext.Response.Status = "301 Moved Permanently";
httpContext.Response.AppendHeader("Location", path.VirtualPath);
}
}
```
2- You have to add these two routes to the RouteTable, where you have an ArtistController with a ViewArtist action that accepts an `id` parameter of type int:
```
routes.Add("Legacy", new LegacyRoute("Artists/ViewArtist.aspx", "Artist", new LegacyRouteHandler()));
routes.MapRoute("Artist", "Artist/ViewArtist/{id}", new
{
controller = "Artist",
action = "ViewArtist",
});
```
Now you can navigate to a url like : /Artists/ViewArtist.aspx?id=123
and you will be redirected to: /Artist/ViewArtist/123 | I was struggling a bit with this until I got my head around it. It was a lot easier to do this in a Controller, like Perhentian did, than directly in the route config, at least in my situation since our new URLs don't have `id` in them. The reason is that in the Controller I had access to all my repositories and domain objects. To help others, this is what I did:
```
routes.MapRoute(null,
"product_list.aspx", // Matches legacy product_list.aspx
new { controller = "Products", action = "Legacy" }
);
public ActionResult Legacy(int catid)
{
MenuItem menuItem = menu.GetMenuItem(catid);
return RedirectPermanent(menuItem.Path);
}
```
`menu` is an object where I've stored information related to menu entries, like the Path which is the URL for the menu entry.
This redirects from for instance
`/product_list.aspx?catid=50`
to
`/pc-tillbehor/kylning-flaktar/flaktar/170-mm`
Note that `RedirectPermanent` is MVC3+. If you're using an older version you need to create the 301 manually. |
29,780,266 | In the process of decoupling some code and I extracted an interface for one of our classes, and mapped it using Unity like so
```
Container.RegisterType<IUserAuthorizationBC, UserAuthorizationBC>(
new Interceptor<InterfaceInterceptor>(), new InterceptionBehavior<PolicyInjectionBehavior>());
```
As you can see the class is UserAuthorizationBC and the interface is of course IUserAuthorizationBC. Once I did this, I began to get an error in a class that I thought would not have mattered. The ctor for the class that now gives an error is as follows:
```
public RoleAuthorizationService(IDataContractFactory factory,
UserAuthorizationBC businessProcessor)
: base(factory, businessProcessor)
{
_authBC = businessProcessor;
}
```
As you can see, I haven't refactored it yet - I haven't even touched it. It was set up to get a concrete instance of UserAuthorizationBC, and whoever created it is also injecting an IDataContractFactory, which I did find mapped in our code base, as you can see in the following code snippet.
```
Container.RegisterType<IDataContractFactory, DefaultDataContractFactory>(
new Interceptor<InterfaceInterceptor>(),
new InterceptionBehavior<PolicyInjectionBehavior>());
```
I get an error from unity as follows
>
> Microsoft.Practices.Unity.ResolutionFailedException: Resolution of the
> dependency failed, type =
> "HumanArc.Compass.Shared.Interfaces.Service.IRoleAuthorizationService",
> name = "(none)".
> Exception occurred while: Calling constructor Microsoft.Practices.Unity.InterceptionExtension.PolicyInjectionBehavior(Microsoft.Practices.Unity.InterceptionExtension.CurrentInterceptionRequest
> interceptionRequest,
> Microsoft.Practices.Unity.InterceptionExtension.InjectionPolicy[]
> policies, Microsoft.Practices.Unity.IUnityContainer container).
> Exception is: ArgumentException - Type passed must be an interface.
>
>
>
Now if I go comment out my mapping for IUserAuthorizationBC it will work fine again. I have no idea why.
Actually, I don't know why it ever worked, because if you look at the DefaultDataContractFactory it uses all generics, which I would assume would always fail to resolve since Unity doesn't know the type of T at that time - see the class below.
```
public class DefaultDataContractFactory : IDataContractFactory
{
public DefaultDataContractFactory();
public virtual T Create<T>();
public virtual object Create(Type type);
public virtual Example<T> CreateExample<T>();
protected Type CreateFactoryType(Type instanceType);
}
```
So to sum things up there are two questions.
1. How does it even work in the first place before I added an interface for the IUserAuthorizationBC class and then added the mapping for unity - I would think that the IDataContractFactory would always blow it up.
2. Since it does work if I comment out the container mapping for IUserAuthorizationBC, why does it stop working when I uncomment it? I would have thought it wouldn't make a bit of difference, as this particular class always has been injected with a concrete instance of the UserAuthorizationBC
I thought I understood dependency injection fairly well - especially with Unity - but I am clearly missing the boat here. | 2015/04/21 | [
"https://Stackoverflow.com/questions/29780266",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/491366/"
] | A .txt file contains plain text, one byte per character; a .doc file also includes all the Word document metadata, such as font style, size, page margins, etc. | Self exploration:
1. Save the document as a ".docx" file.
2. Rename `foo.docx` to `foo.docx.zip` (all Microsoft "X-document" files are zips).
3. Extract `foo.docx.zip`.
View the extracted XML files - most of the files relate to additional [*metadata*](http://en.wikipedia.org/wiki/Metadata) resources that are included in the saved Word document.
A `.doc` file is essentially just an older binary version of a `.docx` file and contains similar metadata information. This can be viewed with a binary/hex editor.
(There are also structural differences when saving the content itself.)
The text file only contains the literal content - and nothing else - which is why it is the same size as the content with a single-byte encoding. |
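The exploration steps above can also be scripted. Here is a minimal sketch using Python's standard-library `zipfile` module (the filename `foo.docx` is hypothetical); since a `.docx` is a ZIP container, it can be opened directly, without the rename step:

```python
import zipfile

def list_docx_parts(path):
    # A .docx file is a ZIP container, so zipfile can open it directly;
    # the member names are the XML parts (document body, styles, metadata).
    with zipfile.ZipFile(path) as z:
        return z.namelist()

# Typical members include word/document.xml and docProps/core.xml.
```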
19,554,835 | I have seen functions and variables beginning with underscores in various programming languages(PHP and Python) and am confused about the meaning behind it. | 2013/10/24 | [
"https://Stackoverflow.com/questions/19554835",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | Assuming normal conventions are used in PHP:
* single underscore indicates a protected member variable or method
* double underscore indicates a private member variable or method
This stems from when PHP had weak OOP support and did not have a concept of private and protected (everything was public). This convention allowed developers to indicate a member variable or method was private or protected as to better communicate this to users of the code.
Users could choose to ignore these semantics and call the "private" and "protected" member variables and methods if they so chose, though. | A single underscore has no special meaning for class/instance attributes in Python. By *convention* it indicates private variables/functions. `from module import *` will not import functions and variables beginning with a single underscore. (thanks Bi Rico).
A double underscore invokes [name mangling](http://en.wikipedia.org/wiki/Name_mangling#Name_mangling_in_Python). This lets classes have an attribute that is distinct from one with the same name in their subclasses. |
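Both Python conventions can be seen in a few lines (the class and attribute names here are made up for illustration):

```python
class Base:
    def __init__(self):
        self._hint = "convention only"   # single underscore: just a naming hint
        self.__secret = "mangled"        # double underscore: stored as _Base__secret

obj = Base()
print(obj._hint)           # accessible; nothing enforces privacy
print(obj._Base__secret)   # the mangled name Python actually uses
# obj.__secret would raise AttributeError outside the class
```

The mangling is what lets a subclass define its own `__secret` without clobbering the base class's attribute of the same name.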
34,470,085 | **Template Strings.**
This link might help a little bit:
[Does PHP have a feature like Python's template strings?](https://stackoverflow.com/questions/7683133/does-php-have-a-feature-like-pythons-template-strings1)
What **my main issue is**, is to know if there's a better way to **store** Text Strings.
Now, is this normally done with one **folder (DIR)**, and ***plenty of single standalone files with different strings***, and depending on what one might need, grab the contents of one file, process and replace the **{tags}** with *values*.
Or, is it better to define all of them inside **one single file array[]**?
**greetings.tpl.txt**
```
['welcome'] = 'Welcome {firstname} {lastname}'.
['good_morning'] = 'Good morning {firstname}'.
['good_afternoon'] = 'Good afternoon {firstname}'.
```
Here's another example, <https://github.com/oren/string-template-example/blob/master/template.txt>
Thx in advance!
Answers that include solutions, that state that one should use include("../file.php"); are NEVER ACCEPTED HERE. A solution that shows how to read a LIST of defined strings into an array. The definition is already array based. | 2015/12/26 | [
"https://Stackoverflow.com/questions/34470085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2051815/"
] | To add values to templates, you can use [`strtr`](http://php.net/manual/en/function.strtr.php). Example below:
```
$msg = strtr('Welcome {firstname} {lastname}', array(
    '{firstname}' => $user->getFirstName(),
'{lastname}' => $user->getLastName()
));
```
Regarding storing strings, you can save one array per language and then load only the relevant one. E.g. you'll have a directory with 2 files:
* language
+ en.php
+ de.php
Each file should contain the following:
```
<?php
return (object) array(
'WELCOME' => 'Welcome {firstname} {lastname}'
);
```
When you need translations, you can just do the following:
```
$dictionary = include('language/en.php');
```
And the dictionary will then have an object that you can address. Changing the example above, it will be something like this:
```
$dic = include('language/en.php');
$msg = strtr($dic->WELCOME, array(
    '{firstname}' => $user->getFirstName(),
'{lastname}' => $user->getLastName()
));
```
To avoid the situation where you don't have the template in the dictionary, you can use a ternary operator with a default text:
```
$dic = include('language/en.php');
$tpl = $dic->WELCOME ?: 'Welcome {firstname} {lastname}';
$msg = strtr($tpl, array(
    '{firstname}' => $user->getFirstName(),
'{lastname}' => $user->getLastName()
));
```
To be able to edit the texts in a db, what people usually do is have a simple export (e.g. [`var_export`](http://php.net/manual/en/function.var-export.php)) script to sync from the db to files.
Hope this helps. | First, take a look at [gettext](http://php.net/manual/en/book.gettext.php). It is widely used and there are plenty of tools to handle the translation process, like xgettext and [POEdit](https://poedit.net/). It is more comfortable to use real English strings in source code and then extract them using the xgettext tool. Gettext can handle plural forms of practically all languages, which is not possible when using simple arrays of strings.
A very useful function to combine with gettext is [sprintf()](http://php.net/manual/en/function.sprintf.php) (or printf(), if you want to output text directly).
Example:
```
printf(gettext('Welcome %s %s.'), $firstname, $lastname);
printf(ngettext('You have %d new message.', 'You have %d new messages.',
$number_of_new_messages), $number_of_new_messages);
```
Then, when you want to translate this into language where last name usually precedes first name, you can use this: `'Welcome %2$s, %1$s.'`
The second example, the plural form, can be translated using more than two strings, because part of the localization file specifies how plural forms are arranged. While for English it is `nplurals=2; plural=(n != 1);`, in Czech, for example, it is `nplurals=3; plural=(n==1) ? 0 : (n>=2 && n<=4) ? 1 : 2;` (three forms: the first for one item, the second for 2 to 4 items, and the third for the rest). The Irish language, for example, [has five plural forms](http://localization-guide.readthedocs.org/en/latest/l10n/pluralforms.html).
To extract strings from source code use `xgettext -L php ...`. I recommend writing a short script with the exact command fitting your project, something like:
```
# assuming this file is in locales directory
# and source code in src directory
find ../src -type f -iname '*.php' > "files.list"
xgettext -L php --from-code 'UTF-8' -f "files.list" -o messages.pot
```
You may want to add custom function names using the -k argument. |
34,470,085 | **Template Strings.**
This link might help a little bit:
[Does PHP have a feature like Python's template strings?](https://stackoverflow.com/questions/7683133/does-php-have-a-feature-like-pythons-template-strings1)
What **my main issue is**, is to know if there's a better way to **store** Text Strings.
Now, is this normally done with one **folder (DIR)**, and ***plenty of single standalone files with different strings***, and depending on what one might need, grab the contents of one file, process and replace the **{tags}** with *values*.
Or, is it better to define all of them inside **one single file array[]**?
**greetings.tpl.txt**
```
['welcome'] = 'Welcome {firstname} {lastname}'.
['good_morning'] = 'Good morning {firstname}'.
['good_afternoon'] = 'Good afternoon {firstname}'.
```
Here's another example, <https://github.com/oren/string-template-example/blob/master/template.txt>
Thx in advance!
Answers that include solutions, that state that one should use include("../file.php"); are NEVER ACCEPTED HERE. A solution that shows how to read a LIST of defined strings into an array. The definition is already array based. | 2015/12/26 | [
"https://Stackoverflow.com/questions/34470085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2051815/"
] | To add values to templates, you can use [`strtr`](http://php.net/manual/en/function.strtr.php). Example below:
```
$msg = strtr('Welcome {firstname} {lastname}', array(
    '{firstname}' => $user->getFirstName(),
'{lastname}' => $user->getLastName()
));
```
Regarding storing strings, you can save one array per language and then load only the relevant one. E.g. you'll have a directory with 2 files:
* language
+ en.php
+ de.php
Each file should contain the following:
```
<?php
return (object) array(
'WELCOME' => 'Welcome {firstname} {lastname}'
);
```
When you need translations, you can just do the following:
```
$dictionary = include('language/en.php');
```
And the dictionary will then have an object that you can address. Changing the example above, it will be something like this:
```
$dic = include('language/en.php');
$msg = strtr($dic->WELCOME, array(
    '{firstname}' => $user->getFirstName(),
'{lastname}' => $user->getLastName()
));
```
To avoid the situation where you don't have the template in the dictionary, you can use a ternary operator with a default text:
```
$dic = include('language/en.php');
$tpl = $dic->WELCOME ?: 'Welcome {firstname} {lastname}';
$msg = strtr($tpl, array(
    '{firstname}' => $user->getFirstName(),
'{lastname}' => $user->getLastName()
));
```
To be able to edit the texts in a db, what people usually do is have a simple export (e.g. [`var_export`](http://php.net/manual/en/function.var-export.php)) script to sync from the db to files.
Hope this helps. | You could store all the templates in one associative array and also the variables that are to replace the placeholders, like
```
$capt=array('welcome' => 'Welcome {firstname} {lastname}',
'good_morning' => 'Good morning {firstname}',
'good_afternoon' => 'Good afternoon {firstname}');
$vars=array('firstname'=>'Harry','lastname'=>'Potter', 'profession'=>'wizzard');
```
Then, you could transform the sentences through a simple `preg_replace_callback` call like
```
function repl($a){ global $vars;
return $vars[$a[1]];
}
function getcapt($type){ global $capt;
$str=$capt[$type];
$str=preg_replace_callback('/\{([^}]+)\}/','repl' ,$str);
echo "$str<br>";
}
getcapt('welcome');
getcapt('good_afternoon');
```
This example would produce
```
Welcome Harry Potter
Good afternoon Harry
``` |
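For comparison, Python's `string.Template` - the feature the question's first link refers to - does the same placeholder substitution natively. A short sketch (note that `Template` uses `$name` rather than `{name}`; the sample values mirror the hypothetical ones above):

```python
from string import Template

capt = {
    'welcome': Template('Welcome $firstname $lastname'),
    'good_afternoon': Template('Good afternoon $firstname'),
}
values = {'firstname': 'Harry', 'lastname': 'Potter'}

print(capt['welcome'].substitute(values))         # Welcome Harry Potter
print(capt['good_afternoon'].substitute(values))  # Good afternoon Harry
```

`substitute` raises `KeyError` when a value is missing; `safe_substitute` leaves unknown placeholders in place instead.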
34,470,085 | **Template Strings.**
This link might help a little bit:
[Does PHP have a feature like Python's template strings?](https://stackoverflow.com/questions/7683133/does-php-have-a-feature-like-pythons-template-strings1)
What **my main issue is**, is to know if there's a better way to **store** Text Strings.
Now, is this normally done with one **folder (DIR)**, and ***plenty of single standalone files with different strings***, and depending on what one might need, grab the contents of one file, process and replace the **{tags}** with *values*.
Or, is it better to define all of them inside **one single file array[]**?
**greetings.tpl.txt**
```
['welcome'] = 'Welcome {firstname} {lastname}'.
['good_morning'] = 'Good morning {firstname}'.
['good_afternoon'] = 'Good afternoon {firstname}'.
```
Here's another example, <https://github.com/oren/string-template-example/blob/master/template.txt>
Thx in advance!
Answers that include solutions, that state that one should use include("../file.php"); are NEVER ACCEPTED HERE. A solution that shows how to read a LIST of defined strings into an array. The definition is already array based. | 2015/12/26 | [
"https://Stackoverflow.com/questions/34470085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2051815/"
] | To add values to templates, you can use [`strtr`](http://php.net/manual/en/function.strtr.php). Example below:
```
$msg = strtr('Welcome {firstname} {lastname}', array(
    '{firstname}' => $user->getFirstName(),
'{lastname}' => $user->getLastName()
));
```
Regarding storing strings, you can save one array per language and then load only the relevant one. E.g. you'll have a directory with 2 files:
* language
+ en.php
+ de.php
Each file should contain the following:
```
<?php
return (object) array(
'WELCOME' => 'Welcome {firstname} {lastname}'
);
```
When you need translations, you can just do the following:
```
$dictionary = include('language/en.php');
```
And the dictionary will then have an object that you can address. Changing the example above, it will be something like this:
```
$dic = include('language/en.php');
$msg = strtr($dic->WELCOME, array(
    '{firstname}' => $user->getFirstName(),
'{lastname}' => $user->getLastName()
));
```
To avoid the situation where you don't have the template in the dictionary, you can use a ternary operator with a default text:
```
$dic = include('language/en.php');
$tpl = $dic->WELCOME ?: 'Welcome {firstname} {lastname}';
$msg = strtr($tpl, array(
    '{firstname}' => $user->getFirstName(),
'{lastname}' => $user->getLastName()
));
```
To be able to edit the texts in a db, what people usually do is have a simple export (e.g. [`var_export`](http://php.net/manual/en/function.var-export.php)) script to sync from the db to files.
Hope this helps. | OK John I will elaborate.
The best way is to create a PHP file for each language, containing the definition of an array of texts, using printf format for string substitution.
If the amount of text is very large, you might consider partitioning it further. (a few MB is usually fine)
This is efficient in production, assuming the OS has a well-tuned file cache. Slightly more so if you use numerical indexes into the array.
It is much more efficient to let PHP populate the array than to do it yourself by reading a text file. This is, after all, I assume, static text?
If production performance is not an issue, please disregard this post.
**greetings\_tpl\_en.php**
```
$text_tpl = array(
'welcome' => 'Welcome %s %s'
,'good_morning' => 'Good morning %s'
,'good_afternoon' => 'Good afternoon %s'
);
```
**your.php**
```
$language="en";
require('greetings_tpl_' . $language . '.php');
....
printf($text_tpl['welcome'],$first_name,$last_name);
```
printf is a nice legacy from the C language. sprintf returns a string instead of outputting it.
You can find the full description of the php printf format here: <http://php.net/manual/en/function.sprintf.php>
(Do read Josef Kufner's post again when this is solved. +1 :c)
Hope this helps? |
34,470,085 | **Template Strings.**
This link might help a little bit:
[Does PHP have a feature like Python's template strings?](https://stackoverflow.com/questions/7683133/does-php-have-a-feature-like-pythons-template-strings1)
What **my main issue is**, is to know if there's a better way to **store** Text Strings.
Now, is this normally done with one **folder (DIR)**, and ***plenty of single standalone files with different strings***, and depending on what one might need, grab the contents of one file, process and replace the **{tags}** with *values*.
Or, is it better to define all of them inside **one single file array[]**?
**greetings.tpl.txt**
```
['welcome'] = 'Welcome {firstname} {lastname}'.
['good_morning'] = 'Good morning {firstname}'.
['good_afternoon'] = 'Good afternoon {firstname}'.
```
Here's another example, <https://github.com/oren/string-template-example/blob/master/template.txt>
Thx in advance!
Answers that include solutions, that state that one should use include("../file.php"); are NEVER ACCEPTED HERE. A solution that shows how to read a LIST of defined strings into an array. The definition is already array based. | 2015/12/26 | [
"https://Stackoverflow.com/questions/34470085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2051815/"
] | First, take a look at [gettext](http://php.net/manual/en/book.gettext.php). It is widely used and there are plenty of tools to handle the translation process, like xgettext and [POEdit](https://poedit.net/). It is more comfortable to use real English strings in source code and then extract them using the xgettext tool. Gettext can handle plural forms of practically all languages, which is not possible when using simple arrays of strings.
A very useful function to combine with gettext is [sprintf()](http://php.net/manual/en/function.sprintf.php) (or printf(), if you want to output text directly).
Example:
```
printf(gettext('Welcome %s %s.'), $firstname, $lastname);
printf(ngettext('You have %d new message.', 'You have %d new messages.',
$number_of_new_messages), $number_of_new_messages);
```
Then, when you want to translate this into language where last name usually precedes first name, you can use this: `'Welcome %2$s, %1$s.'`
The second example, the plural form, can be translated using more than two strings, because part of the localization file specifies how plural forms are arranged. While for English it is `nplurals=2; plural=(n != 1);`, in Czech, for example, it is `nplurals=3; plural=(n==1) ? 0 : (n>=2 && n<=4) ? 1 : 2;` (three forms: the first for one item, the second for 2 to 4 items, and the third for the rest). The Irish language, for example, [has five plural forms](http://localization-guide.readthedocs.org/en/latest/l10n/pluralforms.html).
To extract strings from source code use `xgettext -L php ...`. I recommend writing a short script with the exact command fitting your project, something like:
```
# assuming this file is in locales directory
# and source code in src directory
find ../src -type f -iname '*.php' > "files.list"
xgettext -L php --from-code 'UTF-8' -f "files.list" -o messages.pot
```
You may want to add custom function names using the -k argument. | You could store all the templates in one associative array and also the variables that are to replace the placeholders, like
```
$capt=array('welcome' => 'Welcome {firstname} {lastname}',
'good_morning' => 'Good morning {firstname}',
'good_afternoon' => 'Good afternoon {firstname}');
$vars=array('firstname'=>'Harry','lastname'=>'Potter', 'profession'=>'wizzard');
```
Then, you could transform the sentences through a simple `preg_replace_callback` call like
```
function repl($a){ global $vars;
return $vars[$a[1]];
}
function getcapt($type){ global $capt;
$str=$capt[$type];
$str=preg_replace_callback('/\{([^}]+)\}/','repl' ,$str);
echo "$str<br>";
}
getcapt('welcome');
getcapt('good_afternoon');
```
This example would produce
```
Welcome Harry Potter
Good afternoon Harry
``` |
34,470,085 | **Template Strings.**
This link might help a little bit:
[Does PHP have a feature like Python's template strings?](https://stackoverflow.com/questions/7683133/does-php-have-a-feature-like-pythons-template-strings1)
What **my main issue is**, is to know if there's a better way to **store** Text Strings.
Now, is this normally done with one **folder (DIR)**, and ***plenty of single standalone files with different strings***, and depending on what one might need, grab the contents of one file, process and replace the **{tags}** with *values*.
Or, is it better to define all of them inside **one single file array[]**?
**greetings.tpl.txt**
```
['welcome'] = 'Welcome {firstname} {lastname}'.
['good_morning'] = 'Good morning {firstname}'.
['good_afternoon'] = 'Good afternoon {firstname}'.
```
Here's another example, <https://github.com/oren/string-template-example/blob/master/template.txt>
Thx in advance!
Answers that include solutions, that state that one should use include("../file.php"); are NEVER ACCEPTED HERE. A solution that shows how to read a LIST of defined strings into an array. The definition is already array based. | 2015/12/26 | [
"https://Stackoverflow.com/questions/34470085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2051815/"
] | OK John I will elaborate.
The best way is to create a PHP file for each language, containing the definition of an array of texts, using printf format for string substitution.
If the amount of text is very large, you might consider partitioning it further. (a few MB is usually fine)
This is efficient in production, assuming the OS has a well-tuned file cache. Slightly more so if you use numerical indexes into the array.
It is much more efficient to let PHP populate the array than to do it yourself by reading a text file. This is, after all, I assume, static text?
If production performance is not an issue, please disregard this post.
**greetings\_tpl\_en.php**
```
$text_tpl = array(
'welcome' => 'Welcome %s %s'
,'good_morning' => 'Good morning %s'
,'good_afternoon' => 'Good afternoon %s'
);
```
**your.php**
```
$language="en";
require('greetings_tpl_' . $language . '.php');
....
printf($text_tpl['welcome'],$first_name,$last_name);
```
printf is a nice legacy from the C language. sprintf returns a string instead of outputting it.
You can find the full description of the php printf format here: <http://php.net/manual/en/function.sprintf.php>
(Do read Josef Kufner's post again when this is solved. +1 :c)
Hope this helps? | First, take a look at [gettext](http://php.net/manual/en/book.gettext.php). It is widely used and there are plenty of tools to handle the translation process, like xgettext and [POEdit](https://poedit.net/). It is more comfortable to use real English strings in source code and then extract them using the xgettext tool. Gettext can handle plural forms of practically all languages, which is not possible when using simple arrays of strings.
A very useful function to combine with gettext is [sprintf()](http://php.net/manual/en/function.sprintf.php) (or printf(), if you want to output text directly).
Example:
```
printf(gettext('Welcome %s %s.'), $firstname, $lastname);
printf(ngettext('You have %d new message.', 'You have %d new messages.',
$number_of_new_messages), $number_of_new_messages);
```
Then, when you want to translate this into language where last name usually precedes first name, you can use this: `'Welcome %2$s, %1$s.'`
The second example, the plural form, can be translated using more than two strings, because part of the localization file specifies how plural forms are arranged. While for English it is `nplurals=2; plural=(n != 1);`, in Czech, for example, it is `nplurals=3; plural=(n==1) ? 0 : (n>=2 && n<=4) ? 1 : 2;` (three forms: the first for one item, the second for 2 to 4 items, and the third for the rest). The Irish language, for example, [has five plural forms](http://localization-guide.readthedocs.org/en/latest/l10n/pluralforms.html).
To extract strings from source code use `xgettext -L php ...`. I recommend writing a short script with the exact command fitting your project, something like:
```
# assuming this file is in locales directory
# and source code in src directory
find ../src -type f -iname '*.php' > "files.list"
xgettext -L php --from-code 'UTF-8' -f "files.list" -o messages.pot
```
You may want to add custom function names using the -k argument. |
34,470,085 | **Template Strings.**
This link might help a little bit:
[Does PHP have a feature like Python's template strings?](https://stackoverflow.com/questions/7683133/does-php-have-a-feature-like-pythons-template-strings1)
What **my main issue is**, is to know if there's a better way to **store** Text Strings.
Now, is this normally done with one **folder (DIR)**, and ***plenty of single standalone files with different strings***, and depending on what one might need, grab the contents of one file, process and replace the **{tags}** with *values*.
Or, is it better to define all of them inside **one single file array[]**?
**greetings.tpl.txt**
```
['welcome'] = 'Welcome {firstname} {lastname}'.
['good_morning'] = 'Good morning {firstname}'.
['good_afternoon'] = 'Good afternoon {firstname}'.
```
Here's another example, <https://github.com/oren/string-template-example/blob/master/template.txt>
Thx in advance!
Answers that include solutions, that state that one should use include("../file.php"); are NEVER ACCEPTED HERE. A solution that shows how to read a LIST of defined strings into an array. The definition is already array based. | 2015/12/26 | [
"https://Stackoverflow.com/questions/34470085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2051815/"
] | OK John I will elaborate.
The best way is to create a PHP file for each language, containing the definition of an array of texts, using printf format for string substitution.
If the amount of text is very large, you might consider partitioning it further. (a few MB is usually fine)
This is efficient in production, assuming the OS has a well-tuned file cache. Slightly more so if you use numerical indexes into the array.
It is much more efficient to let PHP populate the array than to do it yourself by reading a text file. This is, after all, I assume, static text?
If production performance is not an issue, please disregard this post.
**greetings\_tpl\_en.php**
```
$text_tpl = array(
'welcome' => 'Welcome %s %s'
,'good_morning' => 'Good morning %s'
,'good_afternoon' => 'Good afternoon %s'
);
```
**your.php**
```
$language="en";
require('greetings_tpl_' . $language . '.php');
....
printf($text_tpl['welcome'],$first_name,$last_name);
```
printf is a nice legacy from the C language. sprintf returns a string instead of outputting it.
You can find the full description of the php printf format here: <http://php.net/manual/en/function.sprintf.php>
(Do read Josef Kufner's post again when this is solved. +1 :c)
Hope this helps? | You could store all the templates in one associative array and also the variables that are to replace the placeholders, like
```
$capt=array('welcome' => 'Welcome {firstname} {lastname}',
'good_morning' => 'Good morning {firstname}',
'good_afternoon' => 'Good afternoon {firstname}');
$vars=array('firstname'=>'Harry','lastname'=>'Potter', 'profession'=>'wizzard');
```
Then, you could transform the sentences through a simple `preg_replace_callback` call like
```
function repl($a){ global $vars;
return $vars[$a[1]];
}
function getcapt($type){ global $capt;
$str=$capt[$type];
$str=preg_replace_callback('/\{([^}]+)\}/','repl' ,$str);
echo "$str<br>";
}
getcapt('welcome');
getcapt('good_afternoon');
```
This example would produce
```
Welcome Harry Potter
Good afternoon Harry
``` |
33,844,357 | I'm trying to solve Project Euler problem 10 (find the sum of all the primes below two million), but the code takes forever to finish. How do I make it go faster?
```
console.log("Starting...")
var primes = [1000];
var x = 0;
var n = 0;
var i = 2;
var b = 0;
var sum = 0;
for (i; i < 2000000; i++) {
x = 0;
if (i === 2) {
primes[b] = i
sum += primes[b];
console.log(primes[b]);
b++;
}
for (n = i - 1; n > 1; n--) {
if (i % n === 0) {
x++;
}
if (n === 2 && x === 0) {
primes[b] = i;
sum += primes[b];
console.log(primes[b]);
b++;
}
}
}
console.log(sum)
``` | 2015/11/21 | [
"https://Stackoverflow.com/questions/33844357",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5289392/"
] | The biggest super easy things you can do to make this a lot faster:
* Break out of the inner loop when you find a divisor!
* When you're checking for primality, start with the small divisors instead of the big ones. You'll find the composites a lot faster.
* You only have to check for divisors <= Math.sqrt(n)
* You only need to check prime divisors. You have a list of them.
* Process 2 outside the loop, and then only do odd numbers inside the loop: `for(i=3;i<2000000;i+=2)` | Since you keep an array of your primes anyway, you can split the process in two steps:
* Generating the primes up to your limit of 2 million
* and summing up.
As pointed out by others, you need only check whether a candidate number is divisible by another prime not larger than the square root of the candidate. If you can write a number as a product of primes, then one of those primes will always be lower than or equal to the number's square root.
This code can be optimized further but it is several orders of magnitude faster than your initial version:
```
function primesUpTo(limit) {
if (limit < 2) return [];
var sqrt = Math.floor(Math.sqrt(limit));
var testPrimes = primesUpTo(sqrt);
var result = [].concat(testPrimes);
for (var i=sqrt+1 ; i<=limit ; i++) {
if (testPrimes.every(function(x) { return (i % x) > 0 })) {
result.push(i);
}
}
return result;
}
var primes = primesUpTo(2000000);
var sum = primes.reduce(function(acc, e) { return acc + e });
``` |
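The optimizations listed in the first answer (break out on the first divisor, trial-divide only by already-found primes, stop at the square root, skip even numbers) can be sketched like this; an illustrative rewrite, not the asker's original code:

```javascript
// Sum the primes below `limit` by trial division, keeping a list of the
// primes found so far and testing each odd candidate only against primes
// p with p * p <= n.
function sumPrimesBelow(limit) {
  if (limit <= 2) return 0;
  const primes = [2];
  let sum = 2;
  for (let n = 3; n < limit; n += 2) { // even numbers > 2 are never prime
    let isPrime = true;
    for (const p of primes) {
      if (p * p > n) break;            // no divisor found up to sqrt(n)
      if (n % p === 0) {               // found a divisor: stop immediately
        isPrime = false;
        break;
      }
    }
    if (isPrime) {
      primes.push(n);
      sum += n;
    }
  }
  return sum;
}
```

With these changes the two-million run finishes orders of magnitude faster than the original quadratic inner loop, which tests every candidate against every smaller number.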
33,844,357 | I'm trying to solve Project Euler problem 10 (find the sum of all the primes below two million), but the code takes forever to finish. How do I make it go faster?
```
console.log("Starting...")
var primes = [1000];
var x = 0;
var n = 0;
var i = 2;
var b = 0;
var sum = 0;
for (i; i < 2000000; i++) {
x = 0;
if (i === 2) {
primes[b] = i
sum += primes[b];
console.log(primes[b]);
b++;
}
for (n = i - 1; n > 1; n--) {
if (i % n === 0) {
x++;
}
if (n === 2 && x === 0) {
primes[b] = i;
sum += primes[b];
console.log(primes[b]);
b++;
}
}
}
console.log(sum)
``` | 2015/11/21 | [
"https://Stackoverflow.com/questions/33844357",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5289392/"
] | The biggest super easy things you can do to make this a lot faster:
* Break out of the inner loop when you find a divisor!
* When you're checking for primality, start with the small divisors instead of the big ones. You'll find the composites a lot faster.
* You only have to check for divisors <= Math.sqrt(n)
* You only need to check prime divisors. You have a list of them.
* Process 2 outside the loop, and then only do odd numbers inside the loop: `for(i=3;i<2000000;i+=2)` | Here is another version based on the [Sieve of Eratosthenes](https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes). It requires much more memory but if this does not concern you it's also pretty fast.
```
// just a helper to create integer arrays
function range(from, to) {
var numbers = [];
for (var i=from ; i<=to ; i++) {
numbers.push(i);
}
return numbers;
}
function primesUpTo(limit) {
if (limit < 2) return [];
var sqrt = Math.floor(Math.sqrt(limit));
var testPrimes = primesUpTo(sqrt);
var numbers = range(sqrt+1, limit);
testPrimes.forEach(function(p) {
numbers = numbers.filter(function(x) { return x % p > 0 });
});
return testPrimes.concat(numbers);
}
var primes = primesUpTo(2000000);
var sum = primes.reduce(function(acc, e) { return acc + e });
``` |
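A different way to realize the sieve idea from the answer above is a flat boolean array, which avoids re-filtering the number list for every prime. A sketch (not the answerer's code):

```javascript
// Classic Sieve of Eratosthenes: mark multiples of each prime as
// composite in a flat boolean array, then collect the unmarked numbers.
function primesUpTo(limit) {
  const composite = new Array(limit + 1).fill(false);
  const primes = [];
  for (let i = 2; i <= limit; i++) {
    if (composite[i]) continue;        // i was marked by a smaller prime
    primes.push(i);
    for (let j = i * i; j <= limit; j += i) {
      composite[j] = true;             // first unmarked multiple is i*i
    }
  }
  return primes;
}

// Summing, as in the answer above:
const sum = primesUpTo(2000000).reduce(function (acc, e) { return acc + e; }, 0);
```

The memory cost is one boolean per number up to the limit, but the whole two-million run takes only a fraction of a second.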
22,077,178 | I'm trying to set a connection string dynamically to the MyDataContext inheriting DbContext and I got this error:
>
> No connection string named 'Data Source=myserver\mine;Initial Catalog=MyDb;User ID=user1;Password=mypassword;Connect Timeout=300' could be found in the application config file.
>
>
>
Here's what my code looks like:
```
public class DataContext : DbContext, IDataContext, IDataContextAsync
{
private readonly Guid _instanceId;
public DataContext(string nameOrConnectionString)
: base(nameOrConnectionString)
{
_instanceId = Guid.NewGuid();
Configuration.LazyLoadingEnabled = false;
Configuration.ProxyCreationEnabled = false;
}
//and so on...
}
public class MyDataContext : DataContext
{
public MyDataContext (string nameOrConnectionString)
:base(nameOrConnectionString)
{
}
}
```
I'm wondering why this is happening. As I understand it, the constructor can accept either a connection string name or a connection string itself. In this case Entity Framework seems to expect a name.
Please help.
Thanks :) | 2014/02/27 | [
"https://Stackoverflow.com/questions/22077178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2374722/"
] | You probably want [`DbContext(String)`](http://msdn.microsoft.com/en-us/library/gg679467%28v=vs.113%29.aspx) and not [`DataContext(String)`](http://msdn.microsoft.com/en-us/library/bb350721%28v=vs.110%29.aspx) given the EF tag (and you referencing `nameOrConnectionString` argument).
```
public class MyDataContext : DbContext
{
public MyDataContext(String nameOrConnectionString)
: base(nameOrConnectionString)
{
}
}
```
Otherwise the `DataContext` is looking for a file source, which could be the problem you're seeing. | Try copying the connection string to the `.config` file in the MVC project (i.e. the main project) and make sure that the MVC project is set as the startup project.
44,190 | I tried using iTunes, but it seems to have downloads for iPhone and iPad only. How can I get a version I can play on my iMac? | 2012/03/18 | [
"https://apple.stackexchange.com/questions/44190",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/18060/"
] | The App Store found in iTunes is only for iPhone/iPod/iPad apps. Mac apps can be found on the [Mac App Store](http://www.apple.com/mac/app-store/). Angry Birds is available for Mac for [$5](http://itunes.apple.com/us/app/angry-birds/id403961173?mt=12).
You can also play for free online at <http://chrome.angrybirds.com/>.
Note that iPhone and iPad apps are not linked to desktop Mac apps, so when you buy an app for an iPhone or iPad, it does not come with a Mac version and there is no guarantee that a Mac version is even available.
Sometimes apps are available on both the iOS (iPhone/iPad) App Store and the separate Mac App Store, as is the case for Angry Birds. | Unfortunately, it is not supported anymore.
<https://support.rovio.com/hc/en-us/articles/360042909613-I-can-t-find-Angry-Birds-Classic-in-my-app-store-anymore-what-s-up-with-that-> |
4,297,708 | I want to print the contents of a single file from a remote repository, at a specified revision. How can I do that? In svn, it'd be:
```
svn cat <path to remote file>
```
### update
The main thing I want to avoid is cloning the entire repository; some of the ones I'm working with are quite large, and I just need project metadata from a single file within. | 2010/11/28 | [
"https://Stackoverflow.com/questions/4297708",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23309/"
] | There's two ways to do what you want:
1. Clone the repository locally, and execute the appropriate `hg cat -r REV FILE` against it
2. Use a web interface that gives access to the remote repository
If you don't have the web interface, then you need to clone. | You may be able to download the file content with `wget` or your favorite browser. For example, for bitbucket repositories the URL looks like this:
```
http://bitbucket.org/<user>/<project>/raw/<revision>/<filename>
```
For the `hg serve` web interface, the URL looks like this:
```
http://<host>:<port>/raw-file/<revision>/<filename>
``` |
444,280 | >
> We took a dog in from a shelter. A kind dog, but too shaggy. ("Взяли собаку из приюта. Добрый пёс, но слишком лохматый.")
>
>
>
Is it possible here to interpret the sentence "Добрый пёс" ("A kind dog") as a nominative sentence? In my opinion, a twofold reading is possible here: on the one hand, "Добрый пёс" can be treated as an incomplete sentence (the subject is clear from the context, and "добрый пёс" is the predicate); on the other hand, the sentence can be considered two-part but with inversion (if the emphasis is on the adjective). So can this sentence be considered a nominative one? If so, why? | 2018/09/25 | [
"https://rus.stackexchange.com/questions/444280",
"https://rus.stackexchange.com",
"https://rus.stackexchange.com/users/189895/"
] | A suggested revision:
**The animals will appear before an expert committee, and it will determine whose owner will receive the grand prize of 9.5 million dollars.**
**Stylistic flaws**: the repetition of the pronouns "они" ("they"), "она" ("she/it"), "из них" ("of them"); the relative word КОТОРОГО ("of which") stands at a considerable distance from the "ship of the desert" it refers to. | In my view, all the agreement rules are formally observed. I just don't like the heavy "владелец которого из них" ("the owner of which of them"); I would simply write "владелец какого верблюда" ("the owner of which camel").
444,280 | >
> We took a dog in from a shelter. A kind dog, but too shaggy. ("Взяли собаку из приюта. Добрый пёс, но слишком лохматый.")
>
>
>
Is it possible here to interpret the sentence "Добрый пёс" ("A kind dog") as a nominative sentence? In my opinion, a twofold reading is possible here: on the one hand, "Добрый пёс" can be treated as an incomplete sentence (the subject is clear from the context, and "добрый пёс" is the predicate); on the other hand, the sentence can be considered two-part but with inversion (if the emphasis is on the adjective). So can this sentence be considered a nominative one? If so, why? | 2018/09/25 | [
"https://rus.stackexchange.com/questions/444280",
"https://rus.stackexchange.com",
"https://rus.stackexchange.com/users/189895/"
] | **The mistake** is a pile-up of pronouns: "**They** will appear before an expert committee, and **it** will determine the owner of **which** of **them** will receive the grand prize of 9.5 million dollars."
I suggest this revision:
**Change КОТОРОГО ("of which", …) to КАКОГО ("which one"), and replace the two other pronouns with nouns.**
***The camels will appear before an expert committee, which will determine the owner of which animal will receive the grand prize of 9.5 million dollars.*** | In my view, all the agreement rules are formally observed. I just don't like the heavy "владелец которого из них" ("the owner of which of them"); I would simply write "владелец какого верблюда" ("the owner of which camel").
15,022 | Let $F = F(x\_0,\ldots,x\_n)$ be the free group on $n+1$ generators. Define a function $M: F\rightarrow \mathbb{N}$ such that $M(w) = l$ if the smallest group in which $w$ is not an identity is of size $l$.
My question is what the function $M$ looks like. Are there nice bounds?
Here are some observations that have come from earlier discussions I have had about this question.
0) if there is a subset of the generators appearing in $w$ where the sum of the exponents is nonzero, then you can use as an example a cyclic group whose order does not divide this sum of exponents.
1) an upper bound for $M(w)$ is $|w|!$: you can construct by hand a permutation group in which the identity is not satisfied. (The fact that $M$ is well-defined is equivalent to the residual finiteness of the free group.)
2) the function $M$ is unbounded: every finite group $G$ on $n+1$ generators corresponds to a finite index subgroup of $F$ (a group $W\subseteq F$ for which $G = F/W$; for each $G$ there are finitely many such $W$), and the intersection of finitely many finite index subgroups is still of finite index. So take all groups of size less then $k$, every word in the intersection of their corresponding finite index subgroups requires a group of size greater than $k$ to not be satisfied. | 2010/02/11 | [
"https://mathoverflow.net/questions/15022",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3956/"
] | To make the question a little less open-ended while (I hope) retaining its spirit, let me interpret the question as asking for the rate of growth of the function $\mu(k)$ defined as the maximum of $M(w)$ over all nontrivial words $w$ of length up to $k$ in any number of symbols, where *length* is the number of symbols and their inverses multiplied together in $w$. (E.g., $x^5 y^{-3}$ has length $8$.) George Lowther observed that $M(x^{n!})>n$, so $\mu(n!)>n$. One can replace $n!$ by $\operatorname{LCM}(1,2,\ldots,n)$, which is $e^{(1+o(1))n}$, so this gives $\mu(k) > (1-o(1)) \log k$ as $k \to \infty$.
I will improve this by showing that $\mu(k)$ is at least of order $k^{1/4}$.
Let $C\_2(x,y):=[x,y]=xyx^{-1}y^{-1}$. If $C\_N$ has been defined, define $$C\_{2N}(x\_1,\ldots,x\_N,y\_1,\ldots,y\_N):=[C\_N(x\_1,\ldots,x\_N),C\_N(y\_1,\ldots,y\_N)].$$ By induction, if $N$ is a power of $2$, then $C\_N$ is a word of length $N^2$ that evaluates to $1$ whenever any of its arguments is $1$.
Given $m \ge 1$, let $N$ be the smallest power of $2$ such that $N \ge 2 \binom{m}{2}$. Construct $w$ by applying $C\_N$ to a sequence of arguments including $x\_i x\_j^{-1}$ for $1 \le i < j \le m$ and extra distinct interdeterminates inserted so that no two of the $x\_i x\_j^{-1}$ are adjacent arguments of $C\_N$. The extra indeterminates ensure that $w$ is not the trivial word. If $w$ is evaluated on elements of a group of size less than $m$, then by the pigeonhole principle two of the $x\_i$ have the same value, so some $x\_i x\_j^{-1}$ is $1$, so $w$ evaluates to $1$. Thus $M(w)>m$. The length of $w$ is at most $2N^2$, which is of order $m^4$. Thus $\mu(k)$ is at least of order $k^{1/4}$.
I have a feeling that this is not best possible$\ldots$ | The asymptotic version of this question raised by Bjorn Poonen has been studied by [Khalid Bou-Rabee](http://www.math.uchicago.edu/~khalid/) for general groups, not just free groups. That is, given a residually finite group $G$, for each $g$ we can ask: how large is the smallest finite group $F$ which detects $g$, meaning there exists $f: G \to F$ so that $f(g)$ is nontrivial? Now fix a word metric on $G$, and ask how the maximum of this "detection number" grows as you consider words of length at most $n$.
See "[Quantifying residual finiteness](http://www.math.uchicago.edu/~khalid/qrfiniteness.pdf)" and "[Asymptotic growth and least common multiples in groups](http://www.math.uchicago.edu/~khalid/lcm.pdf)" (with Ben McReynolds) for his results. For example, as long as $G$ is linear, the growth function is polylog, meaning asymptotically less than $(\log n)^k$ for some $k$, if and only if $G$ is virtually nilpotent.
To answer your question, by considering congruence quotients of $\mathrm{SL}_2(\mathbb{Z})$, Bou-Rabee concludes that for every word of length $n$ in the free group $F_2$, there is a finite group of order $O(n^3)$ where the word is not an identity.
The same bound can be obtained uniformly as follows. Ury Hadad gives a lower bound in "[On the shortest identity in finite simple groups of Lie type](http://arxiv.org/abs/0808.0622)" which implies that the shortest identity in $\mathrm{PSL}_2(\mathbb{F}_q)$ has length at least $(q-1)/3$. Since the size of $\mathrm{PSL}_2(\mathbb{F}_q)$ is of order $q^3$, this implies that every word of length at most $n$ fails to be an identity in *one single group* $\mathrm{PSL}_2(\mathbb{F}_q)$ of order $O(n^3)$!
I learned this argument from Martin Kassabov and Francesco Matucci's paper "[Bounding the residual finiteness of free groups](http://arxiv.org/abs/0912.2368)". In it, they use a nice analysis of finite groups with elements of "large order" to construct a word of length $n$ in the free group $F_2$ which is trivial in every finite group of order at most $O(n^{2/3})$. This improved on the lower bound of $n^{1/3}$ due to Bou-Rabee and McReynolds; I believe this is now the best lower bound known. |
69,231,149 | I'm outputting with json\_encode to send to JavaScript, with this code.
```
<?php
include "Connection.php";
$syntax_query = "select * from product";
$thisExecuter = mysqli_query($conn, $syntax_query);
$result = array();
while($row = mysqli_fetch_assoc($thisExecuter)){
array_push(
$result,
array(
"id" => $row["product_id"],
"name" => $row["product_name"]
)
);
}
echo json_encode($result);
?>
```
so the output looks like this:
[{"id":"121353568","name":"Baju Casual - Black"},{"id":"556903232","name":"Tas LV - Red"},{"id":"795953280","name":"Sword - Wood"},{"id":"834032960","name":"Scooter - Iron Plate"}]
and the JavaScript code is like this:
```
function showHint() {
const xmlhttp = new XMLHttpRequest();
xmlhttp.onload = function() {
var obj = this.responseText;
document.getElementById("txtHint").innerHTML = obj.id;
}
xmlhttp.open("GET", "Download.php");
xmlhttp.send();
}
```
so obj.id is not working; the output shows undefined. | 2021/09/18 | [
"https://Stackoverflow.com/questions/69231149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12891802/"
] | As far as I recall from 45 years ago, PIC ZZZ,ZZZ9(6) is a 12-digit number that has 6 zero-suppressed digits and 6 non-zero-suppressed digits, as well as a single curiously-placed comma separator.
Don't write 9(6) if you don't want leading zeroes. Use Z(6), or maybe Z(5)9.
Did you really need 12 digits for 'area' with a comma every 3 digits? If so, that would be PIC ZZZ,ZZZ,ZZZ,ZZ9.
---
Example: executed at <https://www.jdoodle.com/execute-cobol-online/>
```
IDENTIFICATION DIVISION.
PROGRAM-ID. ZERO.
DATA DIVISION.
WORKING-STORAGE SECTION.
77 X PIC Z(8)9.
77 Y PIC 9(9).
PROCEDURE DIVISION.
MOVE 1234 TO X.
MOVE 1234 TO Y.
DISPLAY 'X=', X, ' Y=', Y.
STOP RUN.
```
Displays
```
X= 1234 Y=000001234
``` | Your declarations define too large fields. e.g.
```
05 neOutputPop PIC ZZZ,ZZZ,ZZZ9(8).
```
This is equivalent to:
```
05 neOutputPop PIC ZZZ,ZZZ,ZZZ99999999.
```
Now only numbers larger than 99999999 will have any digits in the positions where you want zero suppression. In your sample data, the population numbers are 8 digit numbers, so no non-zero digit in the "Z" places.
Your declarations already do have the leading zero suppression, and in fact it does work, because you don't see zeros in the "Z" positions. Right?
What you need is
```
05 neOutputPop PIC ZZZ,ZZZ,ZZ9.
``` |
69,231,149 | I'm outputting with json\_encode to send to JavaScript, with this code.
```
<?php
include "Connection.php";
$syntax_query = "select * from product";
$thisExecuter = mysqli_query($conn, $syntax_query);
$result = array();
while($row = mysqli_fetch_assoc($thisExecuter)){
array_push(
$result,
array(
"id" => $row["product_id"],
"name" => $row["product_name"]
)
);
}
echo json_encode($result);
?>
```
so the output looks like this:
[{"id":"121353568","name":"Baju Casual - Black"},{"id":"556903232","name":"Tas LV - Red"},{"id":"795953280","name":"Sword - Wood"},{"id":"834032960","name":"Scooter - Iron Plate"}]
and the JavaScript code is like this:
```
function showHint() {
const xmlhttp = new XMLHttpRequest();
xmlhttp.onload = function() {
var obj = this.responseText;
document.getElementById("txtHint").innerHTML = obj.id;
}
xmlhttp.open("GET", "Download.php");
xmlhttp.send();
}
```
so obj.id is not working; the output shows undefined. | 2021/09/18 | [
"https://Stackoverflow.com/questions/69231149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12891802/"
] | As far as I recall from 45 years ago, PIC ZZZ,ZZZ9(6) is a 12-digit number that has 6 zero-suppressed digits and 6 non-zero-suppressed digits, as well as a single curiously-placed comma separator.
Don't write 9(6) if you don't want leading zeroes. Use Z(6), or maybe Z(5)9.
Did you really need 12 digits for 'area' with a comma every 3 digits? If so, that would be PIC ZZZ,ZZZ,ZZZ,ZZ9.
---
Example: executed at <https://www.jdoodle.com/execute-cobol-online/>
```
IDENTIFICATION DIVISION.
PROGRAM-ID. ZERO.
DATA DIVISION.
WORKING-STORAGE SECTION.
77 X PIC Z(8)9.
77 Y PIC 9(9).
PROCEDURE DIVISION.
MOVE 1234 TO X.
MOVE 1234 TO Y.
DISPLAY 'X=', X, ' Y=', Y.
STOP RUN.
```
Displays
```
X= 1234 Y=000001234
``` | I do not know how many digits you want to display at most, so I post a general solution:
```
05 9digits pic zzz,zzz,zz9.
```
This will show
```
1
12
123
1,234
12,345
123,456
1,234,567
12,345,678
123,456,789
``` |
69,231,149 | I'm outputting with json\_encode to send to JavaScript, with this code.
```
<?php
include "Connection.php";
$syntax_query = "select * from product";
$thisExecuter = mysqli_query($conn, $syntax_query);
$result = array();
while($row = mysqli_fetch_assoc($thisExecuter)){
array_push(
$result,
array(
"id" => $row["product_id"],
"name" => $row["product_name"]
)
);
}
echo json_encode($result);
?>
```
so the output looks like this:
[{"id":"121353568","name":"Baju Casual - Black"},{"id":"556903232","name":"Tas LV - Red"},{"id":"795953280","name":"Sword - Wood"},{"id":"834032960","name":"Scooter - Iron Plate"}]
and the JavaScript code is like this:
```
function showHint() {
const xmlhttp = new XMLHttpRequest();
xmlhttp.onload = function() {
var obj = this.responseText;
document.getElementById("txtHint").innerHTML = obj.id;
}
xmlhttp.open("GET", "Download.php");
xmlhttp.send();
}
```
so obj.id is not working; the output shows undefined. | 2021/09/18 | [
"https://Stackoverflow.com/questions/69231149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12891802/"
] | Your declarations define too large fields. e.g.
```
05 neOutputPop PIC ZZZ,ZZZ,ZZZ9(8).
```
This is equivalent to:
```
05 neOutputPop PIC ZZZ,ZZZ,ZZZ99999999.
```
Now only numbers larger than 99999999 will have any digits in the positions where you want zero suppression. In your sample data, the population numbers are 8 digit numbers, so no non-zero digit in the "Z" places.
Your declarations already do have the leading zero suppression, and in fact it does work, because you don't see zeros in the "Z" positions. Right?
What you need is
```
05 neOutputPop PIC ZZZ,ZZZ,ZZ9.
``` | I do not know how many digits you want to display at most, so I post a general solution:
```
05 9digits pic zzz,zzz,zz9.
```
This will show
```
1
12
123
1,234
12,345
123,456
1,234,567
12,345,678
123,456,789
``` |
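As for the JavaScript in the question itself: `this.responseText` is a plain string, so `obj.id` on it is `undefined`. It has to be parsed with `JSON.parse`, and the parsed value is an array of objects, so a single product is reached by index. A minimal sketch of just the parsing step (the XHR plumbing stays as in the question; the helper name is mine):

```javascript
// Turn the json_encode output (a string) back into objects and read
// the id of the first product, or null if the array is empty.
function firstProductId(responseText) {
  const products = JSON.parse(responseText); // string -> array of objects
  return products.length > 0 ? products[0].id : null;
}
```

Inside the onload handler this becomes `document.getElementById("txtHint").innerHTML = firstProductId(this.responseText);`.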
3,500,738 | Is it possible to embed comments in my .xhtml files that are only displayed in the source and not in the rendered result?
I want to include author, date, ... in the files, but they should not be visible to the end user in the generated output. If I use the standard comment tags `<!-- -->`, the browser displays them. | 2010/08/17 | [
"https://Stackoverflow.com/questions/3500738",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/417177/"
] | Add the following into your `web.xml`:
```xml
<context-param>
<param-name>javax.faces.FACELETS_SKIP_COMMENTS</param-name>
<param-value>true</param-value>
</context-param>
```
This way `Facelets` will skip the comments while parsing the view `xhtml` template. | Invisible comments in JSF are a drawback, especially for beginners. I agree with Mr Minchev's answer. Anyway, I provide an alternative way to comment out content in JSF, which consists of using **ui:remove**
```
<ui:remove> This is a comment </ui:remove>
```
The UI Remove tag is used to specify tags or blocks of content that should be removed from your page by the Facelets view handler at compile time. This tag has no attributes. You can use this tag to indicate that a particular tag should be removed from the rendered page.
It's useful to remove content which is required during [design time](https://stackoverflow.com/questions/17320914/practical-implications-of-facelets-uiremove-tag), but not during run time, such as comments, some stubbed content (e.g. "lorem ipsum") which aids in filling up the page content to fit the layout in visual designers such as Dreamweaver, etc.
See: [Practical implications of Facelets ui:remove tag](https://stackoverflow.com/questions/17320914/practical-implications-of-facelets-uiremove-tag)
Note that the Facelets ***compilation process is much faster than the JSP compilation process*** because no Java bytecode is actually generated and compiled behind the scenes when you first visit your page.
>
> [Examples of both comment options](http://www.mkyong.com/jsf2/how-to-use-comments-in-jsf-2-0/)
>
>
> |
3,500,738 | Is it possible to embed comments in my .xhtml files that are only displayed in the source and not in the rendered result?
I want to include author, date, ... in the files, but they should not be visible to the end user in the generated output. If I use the standard comment tags `<!-- -->`, the browser displays them. | 2010/08/17 | [
"https://Stackoverflow.com/questions/3500738",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/417177/"
] | Add the following into your `web.xml`:
```xml
<context-param>
<param-name>javax.faces.FACELETS_SKIP_COMMENTS</param-name>
<param-value>true</param-value>
</context-param>
```
This way `Facelets` will skip the comments while parsing the view `xhtml` template. | Wrong, the right way is:
```
<context-param>
<param-name>facelets.SKIP_COMMENTS</param-name>
<param-value>true</param-value>
</context-param>
```
This works for me; javax.faces.FACELETS\_SKIP\_COMMENTS does not!
3,500,738 | Is it possible to embed comments in my .xhtml files that are only displayed in the source and not in the rendered result?
I want to include author, date, ... in the files, but they should not be visible to the end user in the generated output. If I use the standard comment tags `<!-- -->`, the browser displays them. | 2010/08/17 | [
"https://Stackoverflow.com/questions/3500738",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/417177/"
] | Invisible comments in JSF are a drawback, especially for beginners. I agree with Mr Minchev's answer. Anyway, I provide an alternative way to comment out content in JSF, which consists of using **ui:remove**
```
<ui:remove> This is a comment </ui:remove>
```
The UI Remove tag is used to specify tags or blocks of content that should be removed from your page by the Facelets view handler at compile time. This tag has no attributes. You can use this tag to indicate that a particular tag should be removed from the rendered page.
It's useful to remove content which is required during [design time](https://stackoverflow.com/questions/17320914/practical-implications-of-facelets-uiremove-tag), but not during run time, such as comments, some stubbed content (e.g. "lorem ipsum") which aids in filling up the page content to fit the layout in visual designers such as Dreamweaver, etc.
See: [Practical implications of Facelets ui:remove tag](https://stackoverflow.com/questions/17320914/practical-implications-of-facelets-uiremove-tag)
Note that the Facelets ***compilation process is much faster than the JSP compilation process*** because no Java bytecode is actually generated and compiled behind the scenes when you first visit your page.
>
> [Examples of both comment options](http://www.mkyong.com/jsf2/how-to-use-comments-in-jsf-2-0/)
>
>
> | Wrong, the right way is:
```
<context-param>
<param-name>facelets.SKIP_COMMENTS</param-name>
<param-value>true</param-value>
</context-param>
```
This works for me; javax.faces.FACELETS\_SKIP\_COMMENTS does not!
2,155,745 | >
> Find the integer $n$ which has the following property: If the numbers from $1$ to $n$ are all written down in decimal notation, the total number of digits written down is $1998$. What is the next higher number (instead of $1998$) for which this problem has an answer?
>
>
>
My try:
This problem was given under the Inclusion-Exclusion Principle exercises. I think we should first find the $n$ for which the above holds, then find the number of digits in $n+1$ (in the same manner as in the problem).
Like digits from: $\begin{cases}1-9 &=9\\ 10-99 &=2\times 90\\ 100-999 &=3\times 900\\ & \vdots\\ \end{cases}$
We can easily see the pattern above. But I don't see whether we seriously need IEP.
How to take it from here? | 2017/02/22 | [
"https://math.stackexchange.com/questions/2155745",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/296971/"
] | Since $3\cdot900=2700>1998$, you need to solve the equation:
$1\cdot9+2\cdot90+3\cdot{k}=1998\iff3k=1998-189\iff{k}=603$.
This means that there are $603$ numbers of $3$ digits each, hence $n=100+603-1=702$.
The next number for which this problem has an answer is of course $1998+3=2001$. | The number of digits when we write down all numbers till 100 is 192.
Now adding 100 more will cause total number of digits to increase by 300, as every number added was a three digit number.
Then n = 700 gives 1992 total digits,
because the first 100 numbers give 192 digits and each of the six further hundreds gives 300 digits.
Then n=701 will have 3 more and n = 702 will have 6 more i.e will have 1998
Then n = 702 is required number
and next possible will be 1998+3 = 2001. |
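The digit counts used in both answers can be checked directly with a short brute-force count (an illustrative check, not part of the original answers):

```javascript
// Total number of digits written when listing 1..n in decimal.
function totalDigits(n) {
  let count = 0;
  for (let i = 1; i <= n; i++) {
    count += String(i).length;
  }
  return count;
}
```

totalDigits(702) is 1998, and the next attainable total is totalDigits(703) = 2001, confirming both parts of the answer.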
32,412 | Your headquarters has sent an encrypted message to your phone:
```
OGZOTZHETZKOBOZRESTOZTAZETHZLIARYAWZNISTATOZWORTROOMZ8ZMAZDENYTIFIZGINSAKZORFZA
774060025101
000511014005
110123431121
200120404006
```
It didn't take you long to figure out the letters, but the numbers were a bit tricky. You worked on them and got some clue, but you could have bet that you needed some word. Later on, you knew what to do. You followed your instructions and your counterpart answered:
*"I'm sorry Sir, I guess you won't find this here."*, pointing around you. *"May I offer you today's newspaper?"*
You felt somewhat snubbed, but agreed to take the newspaper. You found a seat and started reading. Suddenly, you noticed small, nearly invisible dots under some of the letters in four lines of a text. You noted them:
```
JHEBHBAHHFBJ
ABBECDCFFFBJ
BGCFCEGDDFCH
DFCCDIGEFGBJ
```
It looked somehow familiar and you had to think twice. You got stuck for a while and consulted your watch. But you knew that your upline literally counts on you. Finally, you left the place the right way.
>
> What were your initial instructions?
>
> What did the newspaper reveal?
>
>
> Hints are included in the text. Answers can be found independently.
>
> Constructive suggestions for improvement are welcome.
>
>
>
### Further hints
>
> You decrypted the missing word by initially regrouping the numbers. The only evidence you could make use of was the time they told you.
>
>
>
> As there were no 8s or 9s in the numbers, you combined that it could be some 3 bit encoding.
>
>
>
> Of course, you knew the first few decimals of pi by heart.
>
>
>
> You correctly assumed, that the newspaper's letters had to be transformed into numbers for further processing.
>
>
>
> Com*pair*ing the two number blocks helped you deciphering.
>
>
>
With many credits to LeppyR64 who gave the right answer to the second question and f'' for visualizing the first number block, I'm accepting Khale\_Kitha's answer, due to the fact, that I can only accept one answer. In my oppinion Khale\_Kitha contributed a lot to get both answers. Good job! | 2016/05/11 | [
"https://puzzling.stackexchange.com/questions/32412",
"https://puzzling.stackexchange.com",
"https://puzzling.stackexchange.com/users/20933/"
] | The first string of letters translates to
>
> GO TO THE BOOK STORE AT THE RAILWAY STATION TOMORROW 8 AM IDENTIFY ASKING FOR A
>
>
>
If you
>
> Remove the letter 'Z' separator and unscramble each word separately.
>
>
>
Using the new info from the hint, if the first series of numbers needs to be translated to 3-bit binary numbers, they would be:
>
> `111 111 100 000 110 000 000 010 101 001 000 001`
>
> `000 000 000 101 001 001 000 001 100 000 000 101`
>
> `001 001 000 001 010 011 100 011 001 001 010 001`
>
> `010 000 000 001 010 000 100 000 100 000 000 110`
>
>
>
f"'s Answer has a good representation of what this is, if you change the ones to a visible block and the 0's to a non-visible block.
>
> Taking f"s answer of pi, 6-9, you get the 6-9'th digits of pi, 2653. (Thanks LeppyR64) On a phone, this could easily translate to **"COKE"**, so I presume that you are supposed to ask for a coke. Sadly, you're not likely to find one in a bookstore.
>
>
>
This makes the original first phrase:
>
> GO TO THE BOOK STORE AT THE RAILWAY STATION TOMORROW 8 AM IDENTIFY ASKING FOR A COKE
>
>
>
The series of letters can be changed to 0 - 9 to coincide with the other grid, which gives:
>
> `9 7 4 1 7 1 0 7 7 5 1 9`
>
> `0 1 1 4 2 3 2 5 5 5 1 9`
>
> `1 6 2 5 2 4 6 3 3 5 2 7`
>
> `3 5 2 2 3 8 6 4 5 6 1 9`
>
>
>
If you set these numbers, and the other numbers, into pairs, and subtract them, as demonstrated in [LeppyR64's answer](https://puzzling.stackexchange.com/a/32792/19944), then you come up with a form of text separated by 'X's, similar to the first message. It decodes to:
>
> TAKE RAIL ONE AT FOUR PM
>
>
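The pair-and-subtract decoding described above can be reproduced with a short Python sketch (an illustration, not part of the original answer):

```python
# Convert the newspaper letters A-J to digits 0-9, read both grids as
# two-digit pairs, subtract, and map each difference to a letter (A=1, B=2,
# ..., with 24 = X acting as a word separator).
grid1 = ["774060025101", "000511014005", "110123431121", "200120404006"]
letters = ["JHEBHBAHHFBJ", "ABBECDCFFFBJ", "BGCFCEGDDFCH", "DFCCDIGEFGBJ"]

message = ""
for row1, row2 in zip(grid1, letters):
    digits2 = "".join(str(ord(c) - ord("A")) for c in row2)
    for i in range(0, 12, 2):
        diff = int(digits2[i:i + 2]) - int(row1[i:i + 2])
        message += chr(ord("A") + diff - 1)

print(message.replace("X", " "))  # TAKE RAIL ONE AT FOUR PM
```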
> | Ideas for the squares:
>
> I don't think the numbers and letters have to be combined as @Khale\_Kitha does, because the text says "knew what to do" before getting the letter rectangle.
>
>
>
You worked on them and got some clue, but you could have bet you needed some word:
>
> There is no 8 or 9 in the numbers, just 0-7, so they could be coordinates into an 8x8 square. But we need to fill the squares with letters.
>
>
>
Later on, you knew what to do:
>
> If you remove the blanks from the "Go to the book store ..." sentence, it has exactly 64 characters, so you can use it to fill the square:
>
> `01234567`
>
> `0 GOTOTHEB`
>
> `1 OOKSTORE`
>
> `2 ATTHERAI`
>
> `3 LWAYSTAT`
>
> `4 IONTOMOR`
>
> `5 ROW8AMID`
>
> `6 ENTIFYAS`
>
> `7 KINGFORA`
>
>
>
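The square fill itself is easy to reproduce with a quick Python sketch (an illustration of the idea above, not part of the original attempt):

```python
# Strip the spaces from the decoded sentence; the remaining 64 characters
# fill an 8x8 square row by row, addressable by 0-7 coordinates.
sentence = ("GO TO THE BOOK STORE AT THE RAILWAY STATION "
            "TOMORROW 8 AM IDENTIFY ASKING FOR A")
chars = sentence.replace(" ", "")
assert len(chars) == 64

grid = [chars[i:i + 8] for i in range(0, 64, 8)]
for row in grid:
    print(row)
```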
Unfortunately, my attempts at mapping numbers to letters didn't give any sensible results, yet.
The "Further hint" may have to do with
>
> There are 48 digits, or 24 digit pairs. This might refer to 24 hours, and "The time they told you" was 8 am.
>
>
> |
32,412 | Your headquarters has sent an encrypted message to your phone:
```
OGZOTZHETZKOBOZRESTOZTAZETHZLIARYAWZNISTATOZWORTROOMZ8ZMAZDENYTIFIZGINSAKZORFZA
774060025101
000511014005
110123431121
200120404006
```
It didn't take you long to figure out the letters, but the numbers were a bit tricky. You worked on them and got some clue, but you could have bet that you needed some word. Later on, you knew what to do. You followed your instructions and your counterpart answered:
*"I'm sorry Sir, I guess you won't find this here."*, pointing around you. *"May I offer you today's newspaper?"*
You felt somewhat snubbed, but agreed to take the newspaper. You found a seat and started reading. Suddenly, you noticed small, nearly invisible dots under some of the letters in four lines of a text. You noted them:
```
JHEBHBAHHFBJ
ABBECDCFFFBJ
BGCFCEGDDFCH
DFCCDIGEFGBJ
```
It looked somehow familiar and you had to think twice. You got stuck for a while and consulted your watch. But you knew that your upline literally counts on you. Finally, you left the place the right way.
>
> What were your initial instructions?
>
> What did the newspaper reveal?
>
>
> Hints are included in the text. Answers can be found independently.
>
> Constructive suggestions for improvement are welcome.
>
>
>
### Further hints
>
> You decrypted the missing word by initially regrouping the numbers. The only evidence you could make use of was the time they told you.
>
>
>
> As there were no 8s or 9s in the numbers, you deduced that it could be some 3-bit encoding.
>
>
>
> Of course, you knew the first few decimals of pi by heart.
>
>
>
> You correctly assumed that the newspaper's letters had to be transformed into numbers for further processing.
>
>
>
> Com*pair*ing the two number blocks helped you deciphering.
>
>
>
With much credit to LeppyR64, who gave the right answer to the second question, and to f'' for visualizing the first number block, I'm accepting Khale\_Kitha's answer, since I can only accept one answer. In my opinion, Khale\_Kitha contributed a lot toward both answers. Good job! | 2016/05/11 | [
"https://puzzling.stackexchange.com/questions/32412",
"https://puzzling.stackexchange.com",
"https://puzzling.stackexchange.com/users/20933/"
] | The first string of letters translates to
>
> GO TO THE BOOK STORE AT THE RAILWAY STATION TOMORROW 8 AM IDENTIFY ASKING FOR A
>
>
>
If you
>
> Remove the letter 'Z' separator and unscramble each word separately.
>
>
>
Using the new info from the hint, if the first series of numbers needs to be translated to 3-bit binary numbers, they would be:
>
> `111 111 100 000 110 000 000 010 101 001 000 001`
>
> `000 000 000 101 001 001 000 001 100 000 000 101`
>
> `001 001 000 001 010 011 100 011 001 001 010 001`
>
> `010 000 000 001 010 000 100 000 100 000 000 110`
>
>
>
f"'s Answer has a good representation of what this is, if you change the ones to a visible block and the 0's to a non-visible block.
>
> Taking f"s answer of pi, 6-9, you get the 6-9'th digits of pi, 2653. (Thanks LeppyR64) On a phone, this could easily translate to **"COKE"**, so I presume that you are supposed to ask for a coke. Sadly, you're not likely to find one in a bookstore.
>
>
>
This makes the original first phrase:
>
> GO TO THE BOOK STORE AT THE RAILWAY STATION TOMORROW 8 AM IDENTIFY ASKING FOR A COKE
>
>
>
The series of letters can be changed to 0 - 9 to coincide with the other grid, which gives:
>
> `9 7 4 1 7 1 0 7 7 5 1 9`
>
> `0 1 1 4 2 3 2 5 5 5 1 9`
>
> `1 6 2 5 2 4 6 3 3 5 2 7`
>
> `3 5 2 2 3 8 6 4 5 6 1 9`
>
>
>
If you set these numbers, and the other numbers, into pairs, and subtract them, as demonstrated in [LeppyR64's answer](https://puzzling.stackexchange.com/a/32792/19944), then you come up with a form of text separated by 'X's, similar to the first message. It decodes to:
>
> TAKE RAIL ONE AT FOUR PM
>
>
> | Khale\_Kitha's answer produces this sequence of 1's and 0's:
>
> `111 111 100 000 110 000 000 010 101 001 000 001`
>
> `000 000 000 101 001 001 000 001 100 000 000 101`
>
> `001 001 000 001 010 011 100 011 001 001 010 001`
>
> `010 000 000 001 010 000 100 000 100 000 000 110`
>
>
>
Now you can
>
> Rearrange them to have 24 digits on each line, then replace the 1's and 0's with black and white:
>
> `βββββββ ββ β`
>
> `β β β β β β`
>
> `β β ββ β β`
>
> `β β β β βββ ββ`
>
> `β β β β β β`
>
> `β β β ββ`
>
> This reads "Ο 6-9".
>
>
> |
32,412 | Your headquarters has sent an encrypted message to your phone:
```
OGZOTZHETZKOBOZRESTOZTAZETHZLIARYAWZNISTATOZWORTROOMZ8ZMAZDENYTIFIZGINSAKZORFZA
774060025101
000511014005
110123431121
200120404006
```
It didn't take you long to figure out the letters, but the numbers were a bit tricky. You worked on them and got some clue, but you could have bet that you needed some word. Later on, you knew what to do. You followed your instructions and your counterpart answered:
*"I'm sorry Sir, I guess you won't find this here."*, pointing around you. *"May I offer you today's newspaper?"*
You felt somewhat snubbed, but agreed to take the newspaper. You found a seat and started reading. Suddenly, you noticed small, nearly invisible dots under some of the letters in four lines of a text. You noted them:
```
JHEBHBAHHFBJ
ABBECDCFFFBJ
BGCFCEGDDFCH
DFCCDIGEFGBJ
```
It looked somehow familiar and you had to think twice. You got stuck for a while and consulted your watch. But you knew that your upline literally counts on you. Finally, you left the place the right way.
>
> What were your initial instructions?
>
> What did the newspaper reveal?
>
>
> Hints are included in the text. Answers can be found independently.
>
> Constructive suggestions for improvement are welcome.
>
>
>
### Further hints
>
> You decrypted the missing word by initially regrouping the numbers. The only evidence you could make use of was the time they told you.
>
>
>
> As there were no 8s or 9s in the numbers, you deduced that it could be some 3-bit encoding.
>
>
>
> Of course, you knew the first few decimals of pi by heart.
>
>
>
> You correctly assumed that the newspaper's letters had to be transformed into numbers for further processing.
>
>
>
> Com*pair*ing the two number blocks helped you deciphering.
>
>
>
With much credit to LeppyR64, who gave the right answer to the second question, and to f'' for visualizing the first number block, I'm accepting Khale\_Kitha's answer, since I can only accept one answer. In my opinion, Khale\_Kitha contributed a lot toward both answers. Good job! | 2016/05/11 | [
"https://puzzling.stackexchange.com/questions/32412",
"https://puzzling.stackexchange.com",
"https://puzzling.stackexchange.com/users/20933/"
] | The first string of letters translates to
>
> GO TO THE BOOK STORE AT THE RAILWAY STATION TOMORROW 8 AM IDENTIFY ASKING FOR A
>
>
>
If you
>
> Remove the letter 'Z' separator and unscramble each word separately.
>
>
>
Using the new info from the hint, if the first series of numbers needs to be translated to 3-bit binary numbers, they would be:
>
> `111 111 100 000 110 000 000 010 101 001 000 001`
>
> `000 000 000 101 001 001 000 001 100 000 000 101`
>
> `001 001 000 001 010 011 100 011 001 001 010 001`
>
> `010 000 000 001 010 000 100 000 100 000 000 110`
>
>
>
f"'s Answer has a good representation of what this is, if you change the ones to a visible block and the 0's to a non-visible block.
>
> Taking f"s answer of pi, 6-9, you get the 6-9'th digits of pi, 2653. (Thanks LeppyR64) On a phone, this could easily translate to **"COKE"**, so I presume that you are supposed to ask for a coke. Sadly, you're not likely to find one in a bookstore.
>
>
>
This makes the original first phrase:
>
> GO TO THE BOOK STORE AT THE RAILWAY STATION TOMORROW 8 AM IDENTIFY ASKING FOR A COKE
>
>
>
The series of letters can be changed to 0 - 9 to coincide with the other grid, which gives:
>
> `9 7 4 1 7 1 0 7 7 5 1 9`
>
> `0 1 1 4 2 3 2 5 5 5 1 9`
>
> `1 6 2 5 2 4 6 3 3 5 2 7`
>
> `3 5 2 2 3 8 6 4 5 6 1 9`
>
>
>
If you set these numbers, and the other numbers, into pairs, and subtract them, as demonstrated in [LeppyR64's answer](https://puzzling.stackexchange.com/a/32792/19944), then you come up with a form of text separated by 'X's, similar to the first message. It decodes to:
>
> TAKE RAIL ONE AT FOUR PM
>
>
> | Partial:
The newspaper revealed:
>
> TAKE RAIL ONE AT FOUR PM
>
>
>
By taking the newspaper letters and converting them to numbers yields:
>
> `9 7 4 1 7 1 0 7 7 5 1 9`
>
> `0 1 1 4 2 3 2 5 5 5 1 9`
>
> `1 6 2 5 2 4 6 3 3 5 2 7`
>
> `3 5 2 2 3 8 6 4 5 6 1 9`
>
>
>
Pairing the numbers in the two grids:
>
>
> ```
> 77 40 60 2 51 1
> 0 5 11 1 40 5
> 11 1 23 43 11 21
> 20 1 20 40 40 6
> ```
>
>
>
>
>
> ```
> 97 41 71 7 75 19
> 1 14 23 25 55 19
> 16 25 24 63 35 27
> 35 22 38 64 56 19
> ```
>
>
>
>
Compare the two grids:
>
> By subtracting grid 1 from grid 2
>
>
> ```
> 20 1 11 5 24 18
> 1 9 12 24 15 14
> 5 24 1 20 24 6
> 15 21 18 24 16 13
> ```
>
>
>
>
Convert the numbers to letters:
>
>
> ```
> T A K E X R
> A I L X O N
> E X A T X F
> O U R X P M
> ```
>
>
>
> |
32,412 | Your headquarters has sent an encrypted message to your phone:
```
OGZOTZHETZKOBOZRESTOZTAZETHZLIARYAWZNISTATOZWORTROOMZ8ZMAZDENYTIFIZGINSAKZORFZA
774060025101
000511014005
110123431121
200120404006
```
It didn't take you long to figure out the letters, but the numbers were a bit tricky. You worked on them and got some clue, but you could have bet that you needed some word. Later on, you knew what to do. You followed your instructions and your counterpart answered:
*"I'm sorry Sir, I guess you won't find this here."*, pointing around you. *"May I offer you today's newspaper?"*
You felt somewhat snubbed, but agreed to take the newspaper. You found a seat and started reading. Suddenly, you noticed small, nearly invisible dots under some of the letters in four lines of a text. You noted them:
```
JHEBHBAHHFBJ
ABBECDCFFFBJ
BGCFCEGDDFCH
DFCCDIGEFGBJ
```
It looked somehow familiar and you had to think twice. You got stuck for a while and consulted your watch. But you knew that your upline literally counts on you. Finally, you left the place the right way.
>
> What were your initial instructions?
>
> What did the newspaper reveal?
>
>
> Hints are included in the text. Answers can be found independently.
>
> Constructive suggestions for improvement are welcome.
>
>
>
### Further hints
>
> You decrypted the missing word by initially regrouping the numbers. The only evidence you could make use of was the time they told you.
>
>
>
> As there were no 8s or 9s in the numbers, you deduced that it could be some 3-bit encoding.
>
>
>
> Of course, you knew the first few decimals of pi by heart.
>
>
>
> You correctly assumed that the newspaper's letters had to be transformed into numbers for further processing.
>
>
>
> Com*pair*ing the two number blocks helped you deciphering.
>
>
>
With much credit to LeppyR64, who gave the right answer to the second question, and to f'' for visualizing the first number block, I'm accepting Khale\_Kitha's answer, since I can only accept one answer. In my opinion, Khale\_Kitha contributed a lot toward both answers. Good job! | 2016/05/11 | [
"https://puzzling.stackexchange.com/questions/32412",
"https://puzzling.stackexchange.com",
"https://puzzling.stackexchange.com/users/20933/"
] | Khale\_Kitha's answer produces this sequence of 1's and 0's:
>
> `111 111 100 000 110 000 000 010 101 001 000 001`
>
> `000 000 000 101 001 001 000 001 100 000 000 101`
>
> `001 001 000 001 010 011 100 011 001 001 010 001`
>
> `010 000 000 001 010 000 100 000 100 000 000 110`
>
>
>
Now you can
>
> Rearrange them to have 24 digits on each line, then replace the 1's and 0's with black and white:
>
> `βββββββ ββ β`
>
> `β β β β β β`
>
> `β β ββ β β`
>
> `β β β β βββ ββ`
>
> `β β β β β β`
>
> `β β β ββ`
>
> This reads "Ο 6-9".
>
>
> | Ideas for the squares:
>
> I don't think the numbers and letters have to be combined as @Khale\_Kitha does, because the text says "knew what to do" before getting the letter rectangle.
>
>
>
You worked on them and got some clue, but you could have bet you needed some word:
>
> There is no 8 or 9 in the numbers, just 0-7, so they could be coordinates into an 8x8 square. But we need to fill the squares with letters.
>
>
>
Later on, you knew what to do:
>
> If you remove the blanks from the "Go to the book store ..." sentence, it has exactly 64 characters, so you can use it to fill the square:
>
> `01234567`
>
> `0 GOTOTHEB`
>
> `1 OOKSTORE`
>
> `2 ATTHERAI`
>
> `3 LWAYSTAT`
>
> `4 IONTOMOR`
>
> `5 ROW8AMID`
>
> `6 ENTIFYAS`
>
> `7 KINGFORA`
>
>
>
Unfortunately, my attempts at mapping numbers to letters didn't give any sensible results, yet.
The "Further hint" may have to do with
>
> There are 48 digits, or 24 digit pairs. This might refer to 24 hours, and "The time they told you" was 8 am.
>
>
> |
32,412 | Your headquarters has sent an encrypted message to your phone:
```
OGZOTZHETZKOBOZRESTOZTAZETHZLIARYAWZNISTATOZWORTROOMZ8ZMAZDENYTIFIZGINSAKZORFZA
774060025101
000511014005
110123431121
200120404006
```
It didn't take you long to figure out the letters, but the numbers were a bit tricky. You worked on them and got some clue, but you could have bet that you needed some word. Later on, you knew what to do. You followed your instructions and your counterpart answered:
*"I'm sorry Sir, I guess you won't find this here."*, pointing around you. *"May I offer you today's newspaper?"*
You felt somewhat snubbed, but agreed to take the newspaper. You found a seat and started reading. Suddenly, you noticed small, nearly invisible dots under some of the letters in four lines of a text. You noted them:
```
JHEBHBAHHFBJ
ABBECDCFFFBJ
BGCFCEGDDFCH
DFCCDIGEFGBJ
```
It looked somehow familiar and you had to think twice. You got stuck for a while and consulted your watch. But you knew that your upline literally counts on you. Finally, you left the place the right way.
>
> What were your initial instructions?
>
> What did the newspaper reveal?
>
>
> Hints are included in the text. Answers can be found independently.
>
> Constructive suggestions for improvement are welcome.
>
>
>
### Further hints
>
> You decrypted the missing word by initially regrouping the numbers. The only evidence you could make use of was the time they told you.
>
>
>
> As there were no 8s or 9s in the numbers, you deduced that it could be some 3-bit encoding.
>
>
>
> Of course, you knew the first few decimals of pi by heart.
>
>
>
> You correctly assumed that the newspaper's letters had to be transformed into numbers for further processing.
>
>
>
> Com*pair*ing the two number blocks helped you deciphering.
>
>
>
With much credit to LeppyR64, who gave the right answer to the second question, and to f'' for visualizing the first number block, I'm accepting Khale\_Kitha's answer, since I can only accept one answer. In my opinion, Khale\_Kitha contributed a lot toward both answers. Good job! | 2016/05/11 | [
"https://puzzling.stackexchange.com/questions/32412",
"https://puzzling.stackexchange.com",
"https://puzzling.stackexchange.com/users/20933/"
] | Partial:
The newspaper revealed:
>
> TAKE RAIL ONE AT FOUR PM
>
>
>
By taking the newspaper letters and converting them to numbers yields:
>
> `9 7 4 1 7 1 0 7 7 5 1 9`
>
> `0 1 1 4 2 3 2 5 5 5 1 9`
>
> `1 6 2 5 2 4 6 3 3 5 2 7`
>
> `3 5 2 2 3 8 6 4 5 6 1 9`
>
>
>
Pairing the numbers in the two grids:
>
>
> ```
> 77 40 60 2 51 1
> 0 5 11 1 40 5
> 11 1 23 43 11 21
> 20 1 20 40 40 6
> ```
>
>
>
>
>
> ```
> 97 41 71 7 75 19
> 1 14 23 25 55 19
> 16 25 24 63 35 27
> 35 22 38 64 56 19
> ```
>
>
>
>
Compare the two grids:
>
> By subtracting grid 1 from grid 2
>
>
> ```
> 20 1 11 5 24 18
> 1 9 12 24 15 14
> 5 24 1 20 24 6
> 15 21 18 24 16 13
> ```
>
>
>
>
Convert the numbers to letters:
>
>
> ```
> T A K E X R
> A I L X O N
> E X A T X F
> O U R X P M
> ```
>
>
>
> | Ideas for the squares:
>
> I don't think the numbers and letters have to be combined as @Khale\_Kitha does, because the text says "knew what to do" before getting the letter rectangle.
>
>
>
You worked on them and got some clue, but you could have bet you needed some word:
>
> There is no 8 or 9 in the numbers, just 0-7, so they could be coordinates into an 8x8 square. But we need to fill the squares with letters.
>
>
>
Later on, you knew what to do:
>
> If you remove the blanks from the "Go to the book store ..." sentence, it has exactly 64 characters, so you can use it to fill the square:
>
> `01234567`
>
> `0 GOTOTHEB`
>
> `1 OOKSTORE`
>
> `2 ATTHERAI`
>
> `3 LWAYSTAT`
>
> `4 IONTOMOR`
>
> `5 ROW8AMID`
>
> `6 ENTIFYAS`
>
> `7 KINGFORA`
>
>
>
Unfortunately, my attempts at mapping numbers to letters didn't give any sensible results, yet.
The "Further hint" may have to do with
>
> There are 48 digits, or 24 digit pairs. This might refer to 24 hours, and "The time they told you" was 8 am.
>
>
> |
62,601,946 | So I was looking through PyPI and I found this: <https://pypi.org/project/pip/>
What is the reason for this? Why does this exist, and why would it be useful? Isn't it kinda recursive, in a way? | 2020/06/26 | [
"https://Stackoverflow.com/questions/62601946",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8849702/"
] | Anyone looking for a quick copy-paste solution:
```
$ pip install altair_saver
$ npm install vega-lite vega-cli canvas
``` | To save charts to PNG, you need more than just the [altair\_saver](https://github.com/altair-viz/altair_saver) package. You also need either *selenium* or *node* dependencies, as outlined in altair\_saver's [installation](https://github.com/altair-viz/altair_saver#installation) section.
Follow those instructions and install the required dependencies, and it will resolve your error. |
27,867,499 | I have two elements and I want to delegate click event to their children.
This code works:
```
list.on('click', '> *', function (e) {
clickHandler($(this), e);
});
listSelected.on('click', '> *', function (e) {
clickHandler($(this), e);
});
```
I tried to use a jQuery element array, but this code doesn't work:
```
$([list, listSelected]).on('click', '> *', function (e) {
clickHandler($(this), e);
});
``` | 2015/01/09 | [
"https://Stackoverflow.com/questions/27867499",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3841049/"
] | You can use `add()`
```
list.add(listSelected).on('click', '> *', function (e) {
clickHandler($(this), e);
});
``` | ```
$('#your_list').children().each(function(){
$(this).bind('click', clickHandler) ;
}) ;
``` |
15,462,523 | I am super confused and have been searching. But as the title suggests I am trying to enter an array.
My question is: how do I get this array to import into the database? As of now, the current script only imports the first record and not the rest. I am also able to import other values within the same array. This is a JSON call, by the way, and it's already been decoded.
```
foreach ($output as $key => $value) {
if (isset($output[$key]["stats"]["damage_given"]["vehicle"])) {
$damage_given[$key] = $output[$key]["stats"]["damage_given"]["vehicle"];
foreach ($damage_given[$key] as $vehicle_name) {
$vehicle_dmg_id = $vehicle_name['id'];
$vehicle_dmg_name = $vehicle_name['name'];
$vehicle_dmg_value = $vehicle_name['value'];
$vehicle_dmg_faction_nc = $vehicle_name['faction']['nc'];
$vehicle_dmg_faction_tr = $vehicle_name['faction']['tr'];
$vehicle_dmg_faction_vs = $vehicle_name['faction']['vs'];
}
}
}
$add_dmg_veh = "INSERT INTO damage_given(character_number, vehicle_id,
vehicle_name, total_value, vehicle_faction_nc, vehicle_faction_tr,
vehicle_faction_vs) VALUES ('$character_id[$key]', '$vehicle_dmg_id',
'$vehicle_dmg_name','$vehicle_dmg_value', '$vehicle_dmg_faction_nc',
'$vehicle_dmg_faction_tr','$vehicle_dmg_faction_vs')";
``` | 2013/03/17 | [
"https://Stackoverflow.com/questions/15462523",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2169854/"
] | Although it is not recommended to store an array in a database, you could `serialize()` your array to store it in a database. Basically, PHP will convert the array into a specially crafted string, which it can later interpret.
[Serialize](http://php.net/manual/en/function.serialize.php) to store it in the database, and [unserialize](http://php.net/manual/en/function.unserialize.php) it to work with it when you pull it out of the database
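As a quick sketch of that idea (illustrative only; the array and variable names are made up):

```
<?php
// Pack the array into a string before INSERT, unpack it after SELECT.
$stats = ['id' => 7, 'name' => 'Lightning', 'value' => 1234];

$packed = serialize($stats);      // store this string in a TEXT column
// ... INSERT $packed, later SELECT it back into $packed ...
$restored = unserialize($packed); // $restored['name'] === 'Lightning'
```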
**Note:** I say serialization is not recommended, because your database is then not in [First Normal Form](http://en.wikipedia.org/wiki/First_normal_form), specifically because you are storing non-atomic values inside of a particular entry in the database. For this case, I would recommend creating a separate table which can store these values individually, and link the two tables together with a foreign key. | You should look into PDO\_MySQL. Also, your insert string is built outside the loop; it should be executed inside it. |
15,462,523 | I am super confused and have been searching. But as the title suggests I am trying to enter an array.
My question is: how do I get this array to import into the database? As of now, the current script only imports the first record and not the rest. I am also able to import other values within the same array. This is a JSON call, by the way, and it's already been decoded.
```
foreach ($output as $key => $value) {
if (isset($output[$key]["stats"]["damage_given"]["vehicle"])) {
$damage_given[$key] = $output[$key]["stats"]["damage_given"]["vehicle"];
foreach ($damage_given[$key] as $vehicle_name) {
$vehicle_dmg_id = $vehicle_name['id'];
$vehicle_dmg_name = $vehicle_name['name'];
$vehicle_dmg_value = $vehicle_name['value'];
$vehicle_dmg_faction_nc = $vehicle_name['faction']['nc'];
$vehicle_dmg_faction_tr = $vehicle_name['faction']['tr'];
$vehicle_dmg_faction_vs = $vehicle_name['faction']['vs'];
}
}
}
$add_dmg_veh = "INSERT INTO damage_given(character_number, vehicle_id,
vehicle_name, total_value, vehicle_faction_nc, vehicle_faction_tr,
vehicle_faction_vs) VALUES ('$character_id[$key]', '$vehicle_dmg_id',
'$vehicle_dmg_name','$vehicle_dmg_value', '$vehicle_dmg_faction_nc',
'$vehicle_dmg_faction_tr','$vehicle_dmg_faction_vs')";
``` | 2013/03/17 | [
"https://Stackoverflow.com/questions/15462523",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2169854/"
] | Although it is not recommended to store an array in a database, you could `serialize()` your array to store it in a database. Basically, PHP will convert the array into a specially crafted string, which it can later interpret.
[Serialize](http://php.net/manual/en/function.serialize.php) to store it in the database, and [unserialize](http://php.net/manual/en/function.unserialize.php) it to work with it when you pull it out of the database
**Note:** I say serialization is not recommended, because your database is then not in [First Normal Form](http://en.wikipedia.org/wiki/First_normal_form), specifically because you are storing non-atomic values inside of a particular entry in the database. For this case, I would recommend creating a separate table which can store these values individually, and link the two tables together with a foreign key. | You have to iterate through the array and insert every field of the array by it's own.
```
foreach($array as $value) {
// execute your insert statement here with $value
}
``` |
15,462,523 | I am super confused and have been searching. But as the title suggests I am trying to enter an array.
My question is: how do I get this array to import into the database? As of now, the current script only imports the first record and not the rest. I am also able to import other values within the same array. This is a JSON call, by the way, and it's already been decoded.
```
foreach ($output as $key => $value) {
if (isset($output[$key]["stats"]["damage_given"]["vehicle"])) {
$damage_given[$key] = $output[$key]["stats"]["damage_given"]["vehicle"];
foreach ($damage_given[$key] as $vehicle_name) {
$vehicle_dmg_id = $vehicle_name['id'];
$vehicle_dmg_name = $vehicle_name['name'];
$vehicle_dmg_value = $vehicle_name['value'];
$vehicle_dmg_faction_nc = $vehicle_name['faction']['nc'];
$vehicle_dmg_faction_tr = $vehicle_name['faction']['tr'];
$vehicle_dmg_faction_vs = $vehicle_name['faction']['vs'];
}
}
}
$add_dmg_veh = "INSERT INTO damage_given(character_number, vehicle_id,
vehicle_name, total_value, vehicle_faction_nc, vehicle_faction_tr,
vehicle_faction_vs) VALUES ('$character_id[$key]', '$vehicle_dmg_id',
'$vehicle_dmg_name','$vehicle_dmg_value', '$vehicle_dmg_faction_nc',
'$vehicle_dmg_faction_tr','$vehicle_dmg_faction_vs')";
``` | 2013/03/17 | [
"https://Stackoverflow.com/questions/15462523",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2169854/"
] | Although it is not recommended to store an array in a database, you could `serialize()` your array to store it in a database. Basically, PHP will convert the array into a specially crafted string, which it can later interpret.
[Serialize](http://php.net/manual/en/function.serialize.php) to store it in the database, and [unserialize](http://php.net/manual/en/function.unserialize.php) it to work with it when you pull it out of the database
**Note:** I say serialization is not recommended, because your database is then not in [First Normal Form](http://en.wikipedia.org/wiki/First_normal_form), specifically because you are storing non-atomic values inside of a particular entry in the database. For this case, I would recommend creating a separate table which can store these values individually, and link the two tables together with a foreign key. | First of all you can't insert array in MySQL as you are doing .. Do as with iterating..
```
foreach ($output as $key => $value) {
if (isset($output[$key]["stats"]["damage_given"]["vehicle"])) {
$damage_given[$key] = $output[$key]["stats"]["damage_given"]["vehicle"];
foreach ($damage_given[$key] as $vehicle_name) {
$vehicle_dmg_id = $vehicle_name['id'];
$vehicle_dmg_name = $vehicle_name['name'];
$vehicle_dmg_value = $vehicle_name['value'];
$vehicle_dmg_faction_nc = $vehicle_name['faction']['nc'];
$vehicle_dmg_faction_tr = $vehicle_name['faction']['tr'];
$vehicle_dmg_faction_vs = $vehicle_name['faction']['vs'];
// if you wants to use insert query then do here.
$add_dmg_veh = "INSERT INTO damage_given(character_number, vehicle_id,
vehicle_name, total_value, vehicle_faction_nc, vehicle_faction_tr,
vehicle_faction_vs) VALUES ('$character_id[$key]', '$vehicle_dmg_id',
'$vehicle_dmg_name', '$vehicle_dmg_value', '$vehicle_dmg_faction_nc',
'$vehicle_dmg_faction_tr', '$vehicle_dmg_faction_vs')";
}
}
}
``` |
15,462,523 | I am super confused and have been searching. But as the title suggests I am trying to enter an array.
My question is: how do I get this array to import into the database? As of now, the current script only imports the first record and not the rest. I am also able to import other values within the same array. This is a JSON call, by the way, and it's already been decoded.
```
foreach ($output as $key => $value) {
if (isset($output[$key]["stats"]["damage_given"]["vehicle"])) {
$damage_given[$key] = $output[$key]["stats"]["damage_given"]["vehicle"];
foreach ($damage_given[$key] as $vehicle_name) {
$vehicle_dmg_id = $vehicle_name['id'];
$vehicle_dmg_name = $vehicle_name['name'];
$vehicle_dmg_value = $vehicle_name['value'];
$vehicle_dmg_faction_nc = $vehicle_name['faction']['nc'];
$vehicle_dmg_faction_tr = $vehicle_name['faction']['tr'];
$vehicle_dmg_faction_vs = $vehicle_name['faction']['vs'];
}
}
}
$add_dmg_veh = "INSERT INTO damage_given(character_number, vehicle_id,
vehicle_name, total_value, vehicle_faction_nc, vehicle_faction_tr,
vehicle_faction_vs) VALUES ('$character_id[$key]', '$vehicle_dmg_id',
'$vehicle_dmg_name','$vehicle_dmg_value', '$vehicle_dmg_faction_nc',
'$vehicle_dmg_faction_tr','$vehicle_dmg_faction_vs')";
``` | 2013/03/17 | [
"https://Stackoverflow.com/questions/15462523",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2169854/"
] | Although it is not recommended to store an array in a database, you could `serialize()` your array to store it in a database. Basically, PHP will convert the array into a specially crafted string, which it can later interpret.
[Serialize](http://php.net/manual/en/function.serialize.php) to store it in the database, and [unserialize](http://php.net/manual/en/function.unserialize.php) it to work with it when you pull it out of the database
**Note:** I say serialization is not recommended, because your database is then not in [First Normal Form](http://en.wikipedia.org/wiki/First_normal_form), specifically because you are storing non-atomic values inside of a particular entry in the database. For this case, I would recommend creating a separate table which can store these values individually, and link the two tables together with a foreign key. | try building your insert data in an array and then implode the results into a single query:
```
<?php
foreach ($output as $key => $value) {
if (isset($output[$key]["stats"]["damage_given"]["vehicle"])) {
$damage_given[$key] = $output[$key]["stats"]["damage_given"]["vehicle"];
foreach ($damage_given[$key] as $vehicle_name) {
$sql[] = "
(
".$vehicle_name['id'].",
".$vehicle_name['name'].",
".$vehicle_name['value'].",
".$vehicle_name['faction']['nc'].",
".$vehicle_name['faction']['tr'].",
".$vehicle_name['faction']['vs']."
)";
}
}
}
$query = "
INSERT INTO damage_given
(
character_number,
vehicle_id,
vehicle_name,
total_value,
vehicle_faction_nc,
vehicle_faction_tr,
vehicle_faction_vs
)
VALUES
".implode(",",$sql)."
";
?>
``` |
15,462,523 | I am super confused and have been searching. But as the title suggests I am trying to enter an array.
My question is: how do I get this array to import into the database? As of now, the current script only imports the first record and not the rest. I am also able to import other values within the same array. This is a JSON call, by the way, and it's already been decoded.
```
foreach ($output as $key => $value) {
if (isset($output[$key]["stats"]["damage_given"]["vehicle"])) {
$damage_given[$key] = $output[$key]["stats"]["damage_given"]["vehicle"];
foreach ($damage_given[$key] as $vehicle_name) {
$vehicle_dmg_id = $vehicle_name['id'];
$vehicle_dmg_name = $vehicle_name['name'];
$vehicle_dmg_value = $vehicle_name['value'];
$vehicle_dmg_faction_nc = $vehicle_name['faction']['nc'];
$vehicle_dmg_faction_tr = $vehicle_name['faction']['tr'];
$vehicle_dmg_faction_vs = $vehicle_name['faction']['vs'];
}
}
}
$add_dmg_veh = "INSERT INTO damage_given(character_number, vehicle_id,
vehicle_name, total_value, vehicle_faction_nc, vehicle_faction_tr,
vehicle_faction_vs) VALUES ('$character_id[$key]', '$vehicle_dmg_id',
'$vehicle_dmg_name','$vehicle_dmg_value', '$vehicle_dmg_faction_nc',
'$vehicle_dmg_faction_tr','$vehicle_dmg_faction_vs')";
``` | 2013/03/17 | [
"https://Stackoverflow.com/questions/15462523",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2169854/"
] | Although it is not recommended to store an array in a database, you could `serialize()` your array to store it in a database. Basically, PHP will convert the array into a specially crafted string, which it can later interpret.
[Serialize](http://php.net/manual/en/function.serialize.php) to store it in the database, and [unserialize](http://php.net/manual/en/function.unserialize.php) it to work with it when you pull it out of the database
**Note:** I say serialization is not recommended, because your database is then not in [First Normal Form](http://en.wikipedia.org/wiki/First_normal_form), specifically because you are storing non-atomic values inside of a particular entry in the database. For this case, I would recommend creating a separate table which can store these values individually, and link the two tables together with a foreign key. | here is what I got to fix the problem!
```
$stmt = $dbh->prepare(
    "INSERT INTO kills_vehicle (character_number, veh_id, veh_name, veh_total, veh_faction_nc, veh_faction_tr, veh_faction_vs)
     VALUES(:char_id, :id, :vehname, :total_value, :faction_nc, :faction_tr, :faction_vs)");

foreach ($output as $key => $value) {
if (isset($output[$key]["stats"]["play_time"]["vehicle"])) {
$character_id[$key] = $output[$key]["id"];
$score_hit_count[$key] = $output[$key]["stats"]["kills"]["vehicle"];
foreach ($score_hit_count[$key] as $row) {
$stmt->bindValue(':char_id', $character_id[$key]);
$stmt->bindValue(':id', $row['id']);
$stmt->bindValue(':vehname', $row['name']);
$stmt->bindValue(':total_value', $row['value']);
$stmt->bindValue(':faction_nc', $row['faction']['nc']);
$stmt->bindValue(':faction_tr', $row['faction']['tr']);
$stmt->bindValue(':faction_vs', $row['faction']['vs']);
$stmt->execute();
}
}
}
``` |